[19] shows that perfect privacy is decidable in Π^p_2 in the combined size
of the queries defining V and S. The result follows from a key lemma showing that
privacy holds for all domains if it holds for some domain of size polynomial
in the number of variables and constants appearing in the view and secret
queries. Essentially, to check the guarantee on such a domain Dom, one simply
needs to enumerate the databases over Dom. There are only finitely many of
them (though their number is exponential in the domain size). In a follow-up
paper, Machanavajjhala et al. [15] provide an alternative decision procedure
which reduces perfect privacy to checking a number of containments between
queries constructed from the views and secret definitions. This allows them
to leverage well-known results on the complexity of query containment to
identify restrictions leading to PTIME checkability of the perfect privacy
guarantee.
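To make the enumeration idea concrete, here is a small sketch. It is illustrative only: the relation Bids, the tiny domain, and the two boolean queries are invented for the example, and it is not the exact procedure of [19] or [15]. The sketch enumerates every database over a small domain and collects, for the view and for the secret, the tuples whose deletion changes the query's answer on some database; a tuple that matters to both queries is the kind of overlap that signals a possible disclosure to an independent-tuple attacker.

```python
from itertools import chain, combinations

# Toy sketch (illustrative only): one relation Bids(reviewer, paper) over a tiny
# domain; the view and the secret are boolean queries written as Python predicates.
DOMAIN = ["r1", "r2", "p1"]
ALL_TUPLES = [(r, p) for r in DOMAIN for p in DOMAIN]

def all_databases(tuples):
    """Enumerate every database over the domain, i.e. every subset of the tuples."""
    return chain.from_iterable(combinations(tuples, k) for k in range(len(tuples) + 1))

view   = lambda db: ("r1", "p1") in db   # V: did reviewer r1 bid on paper p1?
secret = lambda db: ("r2", "p1") in db   # S: did reviewer r2 bid on paper p1?

def critical_tuples(query):
    """Tuples whose deletion changes the query's answer on at least one database."""
    crit = set()
    for db in all_databases(ALL_TUPLES):
        db = set(db)
        for t in db:
            if query(db) != query(db - {t}):
                crit.add(t)
    return crit

# A tuple that matters to both the view and the secret signals a possible disclosure;
# here the two sets are disjoint, so this toy view says nothing about this toy secret.
print(critical_tuples(view) & critical_tuples(secret))   # -> set()
```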
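The containment checks themselves can be pictured with the classic canonical-database (homomorphism) test for containment of conjunctive queries. The sketch below illustrates that textbook test only, not the actual reduction of [15]; the relation Bids, the query encoding, and the helper names are made up for the example.

```python
from itertools import product

# Boolean conjunctive queries as sets of atoms (relation_name, arg1, arg2, ...);
# arguments starting with "?" are variables, everything else is a constant.

def canonical_db(query):
    """Freeze the body of a query: each variable is simply treated as a fresh constant."""
    return set(query)

def contained_in(q1, q2):
    """Chandra-Merlin test: q1 is contained in q2 iff q2 maps homomorphically
    into the canonical database of q1."""
    db = canonical_db(q1)
    variables = sorted({a for atom in q2 for a in atom[1:] if a.startswith("?")})
    constants = {a for atom in db for a in atom[1:]}
    # Try every assignment of q2's variables to constants of the canonical database.
    for values in product(constants, repeat=len(variables)):
        h = dict(zip(variables, values))
        image = {(atom[0],) + tuple(h.get(a, a) for a in atom[1:]) for atom in q2}
        if image <= db:   # homomorphism found: every mapped atom is in the database
            return True
    return False

q1 = {("Bids", "r1", "?p")}   # "reviewer r1 bid on some paper"
q2 = {("Bids", "?x", "?p")}   # "someone bid on some paper"
print(contained_in(q1, q2))   # True:  q1 is contained in q2
print(contained_in(q2, q1))   # False: q2 is not contained in q1
```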
In addition to a decision procedure for perfect privacy, [19] also introduces
a notion equivalent to the bounded belief revision guarantee BBR_{P,S} from
Section 2.2 (again considering only independent-tuple distributions). Further-
more, Miklau and Suciu consider a limited flavor of the “no further belief
revision” guarantee NFBR_{P,S}, in which the already published views are
defined by boolean queries.
As recognized in [19, 20], the fact that perfect privacy only defends against
attackers described by independent-tuple distributions is a limitation because
it ignores attackers whose background knowledge gives them correlations be-
tween tuples. For instance, the attacker's background knowledge that review-
ers r_1 and r_2 have similar research expertise and taste can be modeled by a
distribution in which the probability that r_1 bids for a paper is similar to the
probability that r_2 does. As another example, the attacker may know
that if a patient has a highly contagious disease, then her spouse likely has it,
too. Such background information cannot be modeled by independent-tuple
distributions.
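A minimal numerical sketch of this point, with the probabilities and the two-tuple setup invented for the example: the published view reveals whether the “patient” tuple is present, the secret asks about the “spouse” tuple, and only the correlated attacker revises its belief about the secret after seeing the view.

```python
# Illustrative only: two boolean "tuples" -- t_p = patient has the disease,
# t_s = spouse has the disease. The published view reveals t_p; the secret is t_s.

def prior(joint):
    """Pr[t_s = True] before seeing the view."""
    return sum(p for (_, ts), p in joint.items() if ts)

def posterior(joint, evidence_tp):
    """Pr[t_s = True | t_p = evidence_tp] under a joint distribution over (t_p, t_s)."""
    num = sum(p for (tp, ts), p in joint.items() if tp == evidence_tp and ts)
    den = sum(p for (tp, ts), p in joint.items() if tp == evidence_tp)
    return num / den

# Independent-tuple attacker: each tuple is true with probability 0.1, independently.
independent = {(tp, ts): (0.1 if tp else 0.9) * (0.1 if ts else 0.9)
               for tp in (True, False) for ts in (True, False)}

# Correlated attacker: if the patient is sick, the spouse is very likely sick too.
correlated = {(True, True): 0.09, (True, False): 0.01,
              (False, True): 0.01, (False, False): 0.89}

for name, joint in [("independent", independent), ("correlated", correlated)]:
    print(name, "prior:", round(prior(joint), 3),
          "posterior after seeing t_p = True:", round(posterior(joint, True), 3))
# independent: prior 0.1, posterior 0.1 -> no belief revision about the secret
# correlated:  prior 0.1, posterior 0.9 -> the same view now reveals the secret
```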
However, limiting attackers to those characterized by independent-tuple
distributions strikes a good balance in the trade-off between guarantee strength
and feasibility of checking the guarantee. This conclusion is reinforced by a
study (discussed next) of what happens if the limitation is removed.
3.2 More General Classes of Attackers
[8] explores an alternative way to balance the tension between the strength of
the guarantee and the feasibility of checking it.
The study starts from the thesis that data owners cannot presume that an at-
tacker's beliefs are induced exclusively by the independent-tuple distributions
of [19, 20]. However, strengthening the guarantees to consider more general
classes of attackers carries the potential danger of rendering them too rigid,
i.e., violated by too many desirable publishing scenarios. Therefore, [8] simul-
taneously considers a relaxation along a different dimension: data owners are
assumed willing to accept the privacy breach caused by an already published