treating: V_DA(d, a) :- PDA(p, d, a). If for some database D the view extents are V_PD(D) = {(John, Dr. MacDonald)} and V_DA(D) = {(Dr. MacDonald, pneumonia)}, then D is exposed, since [D]_{V_PD, V_DA} contains a single database: the PDA table with the single tuple (John, Dr. MacDonald, pneumonia). If on the other hand the attacker observes V_PD(D) = {(John, Dr. MacDonald), (Jane, Dr. MacDonald)} and V_DA(D) = {(Dr. MacDonald, flu), (Dr. MacDonald, pneumonia)}, then D is not exposed, since there are several possible databases: one in which John has flu and Jane pneumonia, one in which John has both diseases and Jane has flu, etc.
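To make the possible-databases reasoning concrete, here is a minimal Python sketch (not from the original text; all function names are illustrative) that brute-forces [D]_V for the two observations above, assuming V_PD and V_DA are the projections of PDA used in the running example:

```python
from itertools import chain, combinations

# Views of the running example, modeled as projections of PDA(p, d, a).
def v_pd(db): return {(p, d) for (p, d, a) in db}
def v_da(db): return {(d, a) for (p, d, a) in db}

def possible_databases(obs_pd, obs_da):
    """Brute-force [D]_V: every PDA instance whose view images equal the observations."""
    # Any tuple of a consistent database must project into both observations.
    cands = [(p, d, a) for (p, d) in obs_pd for (d2, a) in obs_da if d == d2]
    subsets = chain.from_iterable(combinations(cands, k) for k in range(len(cands) + 1))
    return [set(s) for s in subsets
            if v_pd(set(s)) == obs_pd and v_da(set(s)) == obs_da]

# First observation: exactly one possible database, so D is exposed.
print(len(possible_databases({("John", "Dr. MacDonald")},
                             {("Dr. MacDonald", "pneumonia")})))  # 1
# Second observation: several possible databases, so D is not exposed.
print(len(possible_databases({("John", "Dr. MacDonald"), ("Jane", "Dr. MacDonald")},
                             {("Dr. MacDonald", "flu"), ("Dr. MacDonald", "pneumonia")})))  # 7
```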
No complete secret exposure (NSE_S^D). Even if the actual database is not exposed, it may be that all possible databases have the same image under the secret query S, thus completely exposing the secret. To guard against this case, we define the breach as having a single possible secret: S([D]_V) = {S(D)}. Non-exposure of the secret requires at least two possible secrets:

NSE_S^D(V) := |S([D]_V)| ≥ 2.
Example 5. For the schema of Example 1, assume that the hospital publishes the view V_P from Example 1 and the view V_DA from Example 4. If the attacker observes V_P(D) = {(John), (Jane)} and V_DA(D) = {(Dr. MacDonald, pneumonia), (Dr. Zhivago, pneumonia)}, then D is not exposed, since there are several possible databases: one in which John sees Dr. MacDonald and Jane Dr. Zhivago, one in which they swap doctors, one in which John sees both doctors and Jane only one of them, etc. And yet the secret is exposed: since both doctors treat the same disease, no matter whom they see, both John and Jane must suffer from pneumonia.
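The same brute-force idea, sketched below in Python, illustrates the NSE check on the data of Example 5 (again, all names are illustrative, and the secret query S is assumed to ask which patient suffers from which ailment):

```python
from itertools import chain, combinations

# Views of Example 5, modeled as projections of PDA(p, d, a).
def v_p(db):  return {(p,) for (p, d, a) in db}
def v_da(db): return {(d, a) for (p, d, a) in db}
def secret(db):  # assumed secret query S: which patient suffers from which ailment
    return frozenset((p, a) for (p, d, a) in db)

def possible_databases(obs_p, obs_da):
    """[D]_V: all PDA instances consistent with the observed view extents."""
    cands = [(p, d, a) for (p,) in obs_p for (d, a) in obs_da]
    subsets = chain.from_iterable(combinations(cands, k) for k in range(len(cands) + 1))
    return [set(s) for s in subsets if v_p(set(s)) == obs_p and v_da(set(s)) == obs_da]

dbs = possible_databases({("John",), ("Jane",)},
                         {("Dr. MacDonald", "pneumonia"), ("Dr. Zhivago", "pneumonia")})
print(len(dbs))                         # 7 possible databases: D is not exposed
print(len({secret(db) for db in dbs}))  # 1 possible secret: NSE_S^D is violated
```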
No belief revision (NBR_P^D). The non-exposure guarantees fulfill only the very basic owner expectations. They do not suffice to put her mind at ease, since attackers can "learn" something about some candidate secret, thus improving their odds of guessing the actual secret.
For a given attacker described by a probability distribution δ, we define "learning something about candidate secret s" in the strongest, information-theoretic sense, as a revision of the attacker's belief about the secret. The belief revision is the change between the δ-induced a priori and a posteriori beliefs that s is the secret. Formally, a belief revision occurs precisely when P_δ[s | V(D)] ≠ P_δ[s]. The guarantee that no attacker from a class P revises his belief amounts to

NBR_P^D(V) := ∀s ∈ S ∀δ ∈ P : P_δ[s | V(D)] = P_δ[s].
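To see the definition in action, the following sketch assumes (purely for concreteness; the universe, the uniform prior, the candidate secret, and all names are choices made here, not given in the text) a uniform attacker prior δ over a small universe of PDA instances, and compares the a priori and a posteriori beliefs in one candidate secret:

```python
from itertools import chain, combinations
from fractions import Fraction

# Illustrative universe: all PDA instances over two patients, one doctor, two ailments.
TUPLES = [(p, "Dr. MacDonald", a) for p in ("John", "Jane") for a in ("flu", "pneumonia")]
UNIVERSE = [frozenset(s) for s in
            chain.from_iterable(combinations(TUPLES, k) for k in range(len(TUPLES) + 1))]
delta = {db: Fraction(1, len(UNIVERSE)) for db in UNIVERSE}  # assumed uniform prior

def views(db):   # V = (V_PD, V_DA)
    return ({(p, d) for (p, d, a) in db}, {(d, a) for (p, d, a) in db})
def secret(db):  # assumed secret query S: which patient suffers from which ailment
    return frozenset((p, a) for (p, d, a) in db)

actual = frozenset(TUPLES)  # the actual database D: both patients have both diseases
s = frozenset({("John", "flu"), ("Jane", "pneumonia")})  # one candidate secret

prior = sum(pr for db, pr in delta.items() if secret(db) == s)
consistent = [db for db in UNIVERSE if views(db) == views(actual)]
posterior = (sum(delta[db] for db in consistent if secret(db) == s)
             / sum(delta[db] for db in consistent))
print(prior, posterior)  # 1/16 vs. 1/7: P_delta[s | V(D)] != P_delta[s], so NBR fails
```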
This guarantee is preferred by the owner because it makes no assumptions about the attacker's computational resources. When the guarantee holds, the owner can rest assured that nothing can be learned about the secret. The following example, however, shows that such a guarantee is often unreasonably strong and is violated by most publishing functions, which is why we need to set our sights on more relaxed guarantees.