three users, including the actual issuer. A non-trivial question is how to efficiently generalize a given request so that the generalized request is safe, while keeping the area in the request as small as possible. Interesting algorithms have appeared in the literature, e.g., [9, 8, 16].
Another interesting context consists of, in addition to the assumptions
in Example 2, the assumption that the attacker knows the generalization
function itself [11, 3]. We call it the “inversion assumption”:
inver. The attacker knows the generalization algorithm.
The context with assumptions 1, 2 (in Example 2) and inver is denoted by C ist.
The inversion assumption brings in an interesting twist on defense functions against C ist in comparison to those against C st. Indeed, a typical defense against C st for a request r is to start with the area in r and systematically expand this area until it includes k − 1 other users. This approach may not be safe against C ist [11]. This is because the attacker can use the knowledge of the algorithm to rule out candidate issuers from the anonymity set whose location would have been generalized differently by the algorithm.
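As an illustration, the expansion-based defense against C st described above might be sketched as follows. This is a minimal sketch under stated assumptions: the square-growth strategy, the `Point` representation, and all function names are illustrative, not the algorithms of [9, 8, 16].

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def expand_cloak(issuer: Point, others: list[Point], k: int, step: float = 1.0):
    """Grow a square centered on the issuer until it covers at least
    k - 1 other users, so the anonymity set has k users in total.
    Returns the square's half-width and the other users inside it."""
    h = step
    while True:
        inside = [p for p in others
                  if abs(p.x - issuer.x) <= h and abs(p.y - issuer.y) <= h]
        if len(inside) >= k - 1:
            return h, inside
        h += step  # expand the area and try again

# Example: issuer at the origin, other users spread along the x-axis.
issuer = Point(0, 0)
others = [Point(d, 0) for d in (0.5, 1.5, 2.5, 3.5)]
h, inside = expand_cloak(issuer, others, k=3)
print(h, len(inside))  # the square grows until it covers 2 other users
```

Note that the returned area is a function of the issuer's exact position, which is precisely what an attacker who knows the algorithm can exploit under C ist.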
An intuitive defense against attack UAtt under C ist is to divide the users into regions without consulting the location of the given request r. Once each region contains at least k users, the region containing r is returned as the generalized area. It can be shown that knowledge of this defense function is useless to an attacker. Kalnis et al. [11] and Bettini et al. [3] studied such defenses.
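A minimal sketch of such a request-independent partition, assuming a simplified 1-D bucketization of users by coordinate; the bucketing scheme and all names are illustrative, not the constructions of [11] or [3]:

```python
def partition_users(users, k):
    """Split users, ordered by x-coordinate, into consecutive buckets of
    size >= k. The partition depends only on the user population, never
    on any request, so knowing this code gives an attacker no extra leverage."""
    ordered = sorted(users, key=lambda p: p[0])
    buckets, i = [], 0
    while i < len(ordered):
        # Fold the remainder into the last bucket so every bucket has >= k users.
        if len(ordered) - i < 2 * k:
            buckets.append(ordered[i:])
            break
        buckets.append(ordered[i:i + k])
        i += k
    return buckets

def generalize(issuer, users, k):
    """Return the precomputed bucket (anonymity set) containing the issuer."""
    for bucket in partition_users(users, k):
        if issuer in bucket:
            return bucket

users = [(x, 0) for x in range(10)]   # ten users on a line
aset = generalize((4, 0), users, k=3)
print(len(aset) >= 3)  # every bucket, hence every anonymity set, has >= 3 users
```

Because the buckets are fixed before the request arrives, every user in the returned region would have received the same generalized area, so none can be ruled out of the anonymity set.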
4 Techniques to enforce anonymity
In this section we present the main techniques proposed so far to enforce
anonymity for LBS privacy preservation. We first address in detail techniques
that consider the static single-issuer case, the only one extensively studied up
to now, while we discuss open issues related to techniques for other cases in
Subsection 4.3.
The presentation of the anonymization algorithms proposed for the static single-issuer case distinguishes C I -safe from C I -unsafe algorithms, depending on whether or not knowledge of the specific generalization algorithm can be obtained by a potential attacker. More formally, we call C I -safe the algorithms that achieve anonymity even when the current context includes the inver assumption, and C I -unsafe those that assume a context without it.
4.1 C I -unsafe algorithms
The first generalization algorithm that appeared in the literature was named
IntervalCloaking [9]. The idea of the algorithm is to iteratively divide the