Fig. 2. Experimental area of evaluation on Produre
⟨u_id, K_Rpub(K_Sprv(Cont)), t_stamp_i⟩, where u_id is the identity of the sender (i.e., user), K_Rpub and K_Sprv are respectively the receiver's public key and the sender's private key, t_stamp_i is the sending timestamp, and Cont is the sending content.
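The nesting K_Rpub(K_Sprv(Cont)) is a sign-then-encrypt construction: the sender first applies its private key to the content, then encrypts the result under the receiver's public key. A minimal sketch of this idea, using insecure textbook RSA with toy key values (all numbers, identifiers, and the dictionary layout below are illustrative assumptions, not the paper's implementation):

```python
# Toy sign-then-encrypt sketch of <u_id, K_Rpub(K_Sprv(Cont)), t_stamp_i>.
# NOT secure: textbook RSA with tiny primes, for illustration only.

def rsa_keygen(p, q, e):
    """Return (public, private) key pairs as (exponent, modulus) tuples."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def apply_key(m, key):
    """Raw RSA operation: m^exp mod n."""
    exp, n = key
    return pow(m, exp, n)

# Assumed key pairs for sender S and receiver R (toy primes).
S_pub, S_prv = rsa_keygen(61, 53, 17)
R_pub, R_prv = rsa_keygen(89, 97, 5)

cont = 42                              # Cont: the sending content
inner = apply_key(cont, S_prv)         # K_Sprv(Cont): sender signs
outer = apply_key(inner, R_pub)        # K_Rpub(K_Sprv(Cont)): encrypt for R

tag = {"u_id": "user-1", "payload": outer, "t_stamp": 1700000000}

# Receiver side: decrypt with R's private key, verify with S's public key.
recovered = apply_key(apply_key(tag["payload"], R_prv), S_pub)
assert recovered == cont
```

This layering gives the receiver both confidentiality (only R can strip the outer layer) and origin authentication (the inner layer opens only under S's public key).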
3 Evaluation
In this section, we describe the performance evaluation of Produre. First, we introduce the experimental environment and the evaluation metric. Then, we present the evaluation results.
3.1 Experiment Setup
To evaluate our approach, we develop a prototype location tagging system in which the whole mechanism of Produre is implemented. We recruit several volunteers who act as users in the tagging system. They are required to generate location tags based on their handheld wireless devices (e.g., mobile phones, GPS, or RFID tags). In other words, users first check the position information given by their handheld devices and then
give their subjective location tags. The experimental area is shown in Fig. 2. It is a square whose length and width are both 1 kilometer, divided into many little 5 m × 5 m squares. Ideally, there should be a location tag receiver at each red point in Fig. 2 to assure high accuracy of transmission. However, lacking so many receivers, we only place receivers at several of the red points (one at each) in the area. We assume that the accuracy of transmission is affected only a little. The users randomly walk in the experimental area and repeatedly send location tags to the receivers, which forward them to the processing center. However, some devices are malicious and provide wrong location information.
To discover the proximity set of a user, it is necessary to define a distance threshold D_t: two users within D_t of each other can be viewed as each other's proximities. In the experiment, we set D_t to 5, 10, and 50 meters to see how the whole location tagging system performs. We use the successful detection ratio (SDR) as the evaluation metric, which is defined as the ratio of the number of successfully detected wrong tags to the number of all wrong tags.
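The two notions above, a proximity set under threshold D_t and the SDR metric, can be sketched as follows. The positions, tag identifiers, and function names are illustrative assumptions, not part of the prototype:

```python
# Sketch of a proximity set (users within distance D_t) and the
# successful detection ratio SDR = detected wrong tags / all wrong tags.
import math

def proximity_set(user, positions, d_t):
    """All users other than `user` within Euclidean distance d_t of it."""
    ux, uy = positions[user]
    return {v for v, (vx, vy) in positions.items()
            if v != user and math.hypot(vx - ux, vy - uy) <= d_t}

def sdr(detected_wrong, all_wrong):
    """Ratio of successfully detected wrong tags to all wrong tags."""
    return len(detected_wrong & all_wrong) / len(all_wrong)

# Hypothetical user positions (in meters) inside the experimental area.
positions = {"A": (0, 0), "B": (3, 4), "C": (60, 0)}
print(proximity_set("A", positions, 10))   # B is 5 m away, C is 60 m away

wrong = {"t1", "t2", "t3", "t4"}           # all wrong tags
found = {"t1", "t3", "t4"}                 # tags detected as wrong
print(sdr(found, wrong))                   # 0.75
```

With D_t = 10 m only B lies in A's proximity set; raising D_t to 50 m or more would admit C as well, which is why the experiment varies D_t across 5, 10, and 50 meters.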