set of views V(D), but want to ensure that a new view N(D) will cause no further breach. “Breach” is defined as a revision of belief from the a priori of having observed V(D) to the a posteriori of having also observed N(D). In the terminology of Section 2.2, [8] introduces and studies precisely the various flavors of the NFBR_{P,S} guarantee: extent-dependent (NFBR_{P,S}(D), for a given database D) and also extent-independent. Moreover, [8] argues that a privacy guarantee that holds for given D, P, S, V, and N may be violated if it is also known that D satisfies a set C of integrity constraints.
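To fix notation, one natural reading of the extent-dependent guarantee (a minimal sketch; the exact quantification used in [8] may differ) is that, for every attacker prior Pr in the class P and every candidate secret value s, observing the new extent n = N(D) on top of the already published extents v = V(D) leaves the belief about the secret unchanged:

    Pr[ S(D) = s | V(D) = v ]  =  Pr[ S(D) = s | V(D) = v, N(D) = n ]

The extent-independent flavor can then be read as requiring this equality for all databases consistent with the published views, rather than only for the actual D.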
Example 8. Assume a hospital database consisting of four tables:
- PW associates patients with the ward they are in;
- WD associates doctors with the wards they are responsible for (several doctors may share responsibility for the same ward, and the same doctor may share responsibility for several wards);
- DA associates doctors with the ailments they treat;
- PA associates patients with the ailments they suffer from.
Assume that PW, WD, DA are published and PA is the secret. If the owner
also discloses (or common sense leads the attacker to assume) the following
integrity constraints, the attacker's belief can be affected.
- Patients can be treated only by doctors responsible for their ward.
- If a patient p suffers from an ailment a, then some doctor treats p for a.
If these constraints do not hold, an attacker may consider a possible database associating a patient p with a doctor d who does not cover p's ward, and hold a non-zero belief that p suffers from some ailment a treated only by d. However, under the constraints, the secret patient-ailment association PA is a subset of Π_PA(PW ⋈ WD ⋈ DA), to which (p, a) does not belong. This forces the attacker to revise to 0 his belief about any possible database witnessing (p, a).
[8] takes such semantic integrity constraints into account when checking privacy.
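The inference in Example 8 can be replayed concretely. The following sketch is purely illustrative (the toy extents, the patient and doctor names, and the use of Python sets are assumptions, not data from [8]); it computes the upper bound Π_PA(PW ⋈ WD ⋈ DA) and checks which patient-ailment pairs can still carry non-zero belief.

# Toy extents for the published relations of Example 8 (illustrative data).
PW = {("alice", "ward1"), ("bob", "ward2")}        # patient, ward
WD = {("ward1", "dr_king"), ("ward2", "dr_lee")}   # ward, doctor
DA = {("dr_king", "flu"), ("dr_lee", "asthma")}    # doctor, ailment

def pa_upper_bound(pw, wd, da):
    """Pi_PA(PW ⋈ WD ⋈ DA): all patient-ailment pairs consistent with the
    published views and the two integrity constraints of Example 8."""
    return {(p, a)
            for (p, w) in pw
            for (w2, d) in wd if w == w2       # join PW ⋈ WD on the ward
            for (d2, a) in da if d == d2}      # join result ⋈ DA on the doctor

bound = pa_upper_bound(PW, WD, DA)

# Under the constraints, the secret PA must be a subset of `bound`, so every
# pair outside it is assigned belief 0.
print(("alice", "flu") in bound)     # True:  belief may remain non-zero
print(("alice", "asthma") in bound)  # False: asthma is treated only by dr_lee,
                                     # who does not cover alice's ward

Without the constraints, the pair ("alice", "asthma") could not be ruled out, which is precisely the belief revision described above.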
Maybe the most interesting dimension of the study in [8] stems from
proposing a natural way to classify attackers, yielding two groups.
First, we have the class of all attackers, described by the set P_a of unrestricted distributions. Ideally, this is whom the owner wishes to defend against. P_a captures attackers who exploit correlations between tuples, and strictly includes attackers who don't (the ones described by the independent-tuple distributions of [19, 20]).
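The gap between the two classes can be illustrated with a toy belief over candidate secret extents. The sketch below is an assumption-laden illustration (the tuple values, probabilities, and dictionary encoding are all made up; the formal models of [8] and [19, 20] are richer): an independent-tuple belief factorizes into per-tuple probabilities, whereas a distribution in the unrestricted class P_a can encode correlations between tuples that no such factorization captures.

from itertools import chain, combinations

# Candidate secret tuples the attacker reasons about.
tuples = [("alice", "flu"), ("bob", "flu")]

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Independent-tuple belief (in the spirit of [19, 20]): each tuple belongs to
# the secret independently, so the probability of an extent factorizes.
p_in = {("alice", "flu"): 0.5, ("bob", "flu"): 0.5}

def independent_prob(extent):
    prob = 1.0
    for t in tuples:
        prob *= p_in[t] if t in extent else 1.0 - p_in[t]
    return prob

# A correlated belief (allowed in the unrestricted class P_a): the attacker is
# certain that alice and bob have the same diagnosis, a constraint that no
# assignment of independent per-tuple probabilities can express.
correlated = {frozenset(): 0.5,
              frozenset(tuples): 0.5}   # all other extents get probability 0

for extent in map(frozenset, subsets(tuples)):
    print(sorted(extent),
          "independent:", independent_prob(extent),
          "correlated:", correlated.get(extent, 0.0))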
Second, [8] observes that the attacker is often unaware of (or uninterested in) the details of the possible database D witnessing a secret S(D), as D may also involve data that are tangential or irrelevant to the secret. For example, the attacker trying to link patients to their ailments does not care about the patient's insurance provider or the hospital's parking facilities, all of which could also be stored in the database.