P_δ[s_1 | V(D)] > 0. Now assume that the attacker were to observe the extents
of the new views, which are

V_PD = {(John, Dr. MacDonald), (Jane, Dr. Zhivago), (Jack, Dr. Zhivago)}

and

V_DA = {(Dr. MacDonald, flu), (Dr. Zhivago, pneumonia), (Dr. Zhivago, cold)}.
The attacker must now revise to 0 his a posteriori belief that s_1 is the
secret. Indeed, only Dr. Zhivago treats pneumonia, but John sees Dr. MacDonald,
therefore John cannot have pneumonia:

P_δ[s_1 | V(D) ∧ N(D)] = 0.
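The collapse of the posterior can be checked mechanically under the possible-databases semantics. The following is a minimal Python sketch, not the chapter's formalism: it assumes a uniform prior over all candidate assignments of one disease per patient, and treats a database as consistent with the new views when each patient's disease is among those their doctor is seen to treat. All names (`worlds`, `posterior`, `treats`, `n_obs`) are illustrative.

```python
from itertools import product

patients = ["John", "Jane", "Jack"]
doctors = {"John": "MacDonald", "Jane": "Zhivago", "Jack": "Zhivago"}  # V_PD
diseases = ["flu", "pneumonia", "cold"]

# Candidate databases consistent with V(D): any assignment of a disease
# to each patient (27 worlds in this toy setting).
worlds = [dict(zip(patients, combo)) for combo in product(diseases, repeat=3)]

def posterior(secret, consistent):
    """Uniform-prior posterior P[secret | observations], computed over the
    worlds that satisfy the `consistent` predicate."""
    kept = [w for w in worlds if consistent(w)]
    return sum(secret(w) for w in kept) / len(kept)

s1 = lambda w: w["John"] == "pneumonia"  # secret s_1: John has pneumonia

# Before the new views are observed, s_1 is still possible.
p_before = posterior(s1, lambda w: True)

# New view V_DA: each doctor treats only the listed diseases (an assumption
# made here to model consistency with N(D)).
treats = {"MacDonald": {"flu"}, "Zhivago": {"pneumonia", "cold"}}
n_obs = lambda w: all(w[p] in treats[doctors[p]] for p in patients)

# John sees Dr. MacDonald, who does not treat pneumonia: the posterior drops.
p_after = posterior(s1, n_obs)
```

Running the sketch gives `p_before > 0` but `p_after == 0`, mirroring the revision forced on the attacker above.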
An alternative intuition for the no-further-belief-revision guarantee is the
following. After observing V(D), the attacker reverse-engineers the possible
databases [D]_V and uses his background knowledge to assign a likelihood to
each of them. After subsequently observing N(D), the attacker rules out all
databases which are possible for V(D) but not for N(D), being left with only
those in [D]_V ∩ [D]_N. Ruling out even one database results in re-distributing
its probability over the remaining ones, thus potentially modifying the
attacker's a posteriori belief about the secret. For instance, in an extreme
case, the possible databases in [D]_V may witness two secrets s_1 and s_2.
If [D]_V ∩ [D]_N rules out all witnesses of s_2 (and maybe also some but not
all witnesses of s_1), then by (3) the attacker's belief about the secret
being s_2 drops to 0 and the belief of s_1 becomes 1, i.e. certainty.
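The extreme case can be illustrated with a tiny numeric sketch. The world labels (`w1` … `w4`) and the secret each world witnesses are hypothetical, and a uniform prior is assumed so that ruled-out probability mass is redistributed evenly over the survivors.

```python
from fractions import Fraction

# Hypothetical possible databases after V(D), each witnessing one secret.
D_V = {"w1": "s1", "w2": "s1", "w3": "s2", "w4": "s2"}  # [D]_V
D_N = {"w1", "w2"}                                       # [D]_N

def belief(secret, possible):
    """Uniform belief in `secret` after redistributing probability
    over the worlds that remain possible."""
    kept = [w for w in possible if w in D_V]
    hits = [w for w in kept if D_V[w] == secret]
    return Fraction(len(hits), len(kept))

before_s1 = belief("s1", D_V)           # both secrets equally likely: 1/2

# Observing N(D) leaves only the intersection [D]_V ∩ [D]_N, which here
# rules out every witness of s_2.
survivors = set(D_V) & D_N
after_s2 = belief("s2", survivors)       # drops to 0
after_s1 = belief("s1", survivors)       # becomes 1, i.e. certainty
```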
This intuition is formalized by the following result.
Theorem 1 ([8]). Let P contain all possible distributions, thus modeling all
attackers. Then for every database D and secret set S, no attacker's belief is
revised upon observing N(D) if and only if the possible databases do not change:

∀D ∀S: NFBR^D_{P,S}(N, V) ⇔ [D]_V = [D]_V ∩ [D]_N.
Note that despite being defined in probabilistic fashion, the no-further-belief-
revision guarantee remarkably reduces by Theorem 1 to a purely model-
theoretic problem involving reasoning solely about possible databases.
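Theorem 1's model-theoretic condition is a plain set comparison, which the following one-function Python sketch makes concrete; the function name and the example world sets are illustrative, not from the source.

```python
def nfbr_holds(possible_V, possible_N):
    """Theorem 1's criterion: no attacker revises their belief iff
    observing N(D) rules out no database that V(D) left possible,
    i.e. [D]_V = [D]_V ∩ [D]_N."""
    return possible_V == possible_V & possible_N

# N(D) leaves every V-possible database possible: the guarantee holds.
ok = nfbr_holds({"w1", "w2"}, {"w1", "w2", "w3"})

# N(D) rules out w2: some attacker's belief may be revised.
bad = nfbr_holds({"w1", "w2"}, {"w1"})
```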
Bounded belief revision (BBR^D_{P,S}). It is often useful to consider relaxing
privacy guarantees to allow desirable publishing functions. We next consider
a natural relaxation of the NBR^D_{P,S} guarantee of no belief revision, which
offers the owner more control over the trade-off between privacy and utility
of publishing functions. The idea is to allow revision, but only if bounded
by an owner-defined threshold. In this case, a breach is formally defined as
|P_δ[s | V(D)] − P_δ[s]| > ε, where ε ∈ [0,1] is the threshold. This definition
of breach induces a family of privacy guarantees, parameterized by the threshold:

BBR^D_{P,S}(V, ε) := ∀s ∀(δ ∈ P): |P_δ[s | V(D)] − P_δ[s]| ≤ ε.
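For a fixed attacker distribution δ, the BBR check is a comparison of prior and posterior per secret. The sketch below assumes the prior and posterior are given as dictionaries over secrets; the numbers are made up for illustration.

```python
def bbr_holds(prior, posterior, eps):
    """BBR for one attacker: every secret's belief revision
    |P[s | V(D)] - P[s]| stays within the owner's threshold eps."""
    return all(abs(posterior[s] - prior[s]) <= eps for s in prior)

# Hypothetical beliefs: publishing V revises each belief by 0.15.
prior = {"s1": 0.25, "s2": 0.75}
posterior = {"s1": 0.40, "s2": 0.60}

within_02 = bbr_holds(prior, posterior, eps=0.2)   # 0.15 <= 0.2
within_01 = bbr_holds(prior, posterior, eps=0.1)   # 0.15 > 0.1: breach
```

The full guarantee additionally quantifies over all δ ∈ P, i.e. this check would have to pass for every admissible attacker distribution.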
Bounded further belief revision (BFBR^D_{P,S}). The same idea of allowing
bounded belief revision yields a natural relaxation of guarantee NFBR^D_{P,S}:

BFBR^D_{P,S}(N, V, ε) := ∀s ∀(δ ∈ P): |P_δ[s | V(D)] − P_δ[s | V(D) ∧ N(D)]| ≤ ε.
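BFBR differs from BBR only in which two beliefs are compared: the posterior after V(D) versus the posterior after additionally observing N(D). The sketch below revisits the John/Dr. MacDonald example, assuming a uniform prior under which P_δ[s_1 | V(D)] works out to 1/3 before dropping to 0; those values are an assumption of the toy model, not stated in the text.

```python
def bfbr_holds(post_V, post_VN, eps):
    """BFBR for one attacker: the further revision
    |P[s | V(D)] - P[s | V(D) and N(D)]| is bounded by eps for every secret."""
    return all(abs(post_V[s] - post_VN[s]) <= eps for s in post_V)

# Posteriors for the running example under a uniform prior (assumed):
# P[s1 | V(D)] = 1/3, and P[s1 | V(D) ^ N(D)] = 0 once N(D) is observed.
post_V = {"s1": 1 / 3}
post_VN = {"s1": 0.0}

strict = bfbr_holds(post_V, post_VN, eps=0.1)   # revision of 1/3 exceeds 0.1
loose = bfbr_holds(post_V, post_VN, eps=0.5)    # but is tolerated at 0.5
```

The choice of ε thus directly trades privacy for utility: a publishing function rejected at ε = 0.1 becomes admissible at ε = 0.5.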