approaching direction which sign is the most visible at an intersection. It is not
sufficient to determine this simply from the middle of the street; the algorithm
also needs to account for which side of the street a pedestrian is traveling on.
Accordingly, it considers the orientation of a sign relative to the orientation of
the street to ensure the sign is actually visible. The fallback solution of selecting
a building only accounts for the building's location, i.e., which corner of the
intersection it occupies and its distance to the intersection. Conceptually, Wither
et al.'s approach is similar to the approach of Raubal and Winter presented
in Chap. 5.
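The book does not spell out the visibility computation, but the idea of scoring a sign by how squarely it faces an approaching pedestrian, and by its distance, can be sketched as follows. This is a toy illustration, not the published algorithm; the function names, the cosine fall-off, and the distance weighting are all assumptions.

```python
import math

def sign_visibility(approach_bearing_deg, sign_facing_deg, distance_m):
    """Score how visible a sign is to a pedestrian approaching an
    intersection. A sign is most visible when it faces the pedestrian
    head-on, i.e. when its facing direction opposes the approach
    bearing. Both angles are compass bearings in degrees; the score
    falls off with the angular mismatch and with distance.
    (Illustrative heuristic, not the algorithm from the text.)"""
    # Direction the sign would face to look straight at the pedestrian.
    ideal_facing = (approach_bearing_deg + 180.0) % 360.0
    # Smallest angular difference between actual and ideal facing.
    diff = abs((sign_facing_deg - ideal_facing + 180.0) % 360.0 - 180.0)
    if diff >= 90.0:
        return 0.0  # sign is edge-on or turned away: not readable
    angular_score = math.cos(math.radians(diff))
    # Dampen the score with distance (10 m chosen arbitrarily here).
    return angular_score / (1.0 + distance_m / 10.0)

def best_sign(approach_bearing_deg, signs):
    """Pick the sign with the highest visibility score, or None if no
    sign is visible. Each sign is (name, facing_bearing, distance)."""
    scored = [(sign_visibility(approach_bearing_deg, facing, dist), name)
              for name, facing, dist in signs]
    score, name = max(scored)
    return name if score > 0.0 else None
```

For a pedestrian heading north (bearing 0), a sign facing south scores highest, while a sign facing east scores zero, which captures the text's point that a sign's orientation relative to the approach, not just its position, decides whether it is a usable reference.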
The system presents panoramic images at decision points, with an enlarged
display of the sign at the bottom of the screen to ensure it is readable on
screen. These panoramic images are canonical views of an approaching intersection,
selected for eight different directions, i.e., eight canonical views are calculated
for each building. Users can click back and forth between panoramas, but cannot
otherwise alter the view. Based on the user's location, the system automatically
selects the next relevant panorama.
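Selecting among eight precomputed views based on the user's location amounts to quantizing the approach bearing into 45-degree sectors. A minimal sketch of that selection step, assuming planar coordinates with y pointing north (the function name and coordinate convention are assumptions, not from the source):

```python
import math

def nearest_panorama_index(user_x, user_y, node_x, node_y):
    """Quantize the user's approach direction toward an intersection
    node into one of eight 45-degree sectors (0 = north, 1 = north-east,
    2 = east, ..., 7 = north-west) and return the sector index 0..7."""
    # Compass bearing from the user toward the node: atan2(east, north).
    bearing = math.degrees(
        math.atan2(node_x - user_x, node_y - user_y)) % 360.0
    # Shift by half a sector so each index is centered on its direction.
    return int((bearing + 22.5) // 45.0) % 8
```

A user standing south of the intersection (approaching northward) would be shown the view indexed 0; one approaching from the west (heading east) the view indexed 2.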
In addition to the common problem of huge data demands (in this case, Navteq
collecting the required city models and photographs), the Nokia system is mainly
useful for business districts and inner cities, as it heavily relies on (business)
signage on buildings. This renders the system, while technically promising,
unsuitable for more global use as a navigation service.
6.3 Understanding Landmarks
After we have discussed how computers can produce landmark references, we
now turn to how they may be enabled to understand references to landmarks. As
discussed in the introduction to this chapter, this is a much harder problem than
producing such references.
6.3.1 Understanding Verbal Landmarks
Understanding verbal references to landmarks is first and foremost a matter of
natural language processing (NLP). There has been a lot of progress in NLP over
the years. For an overview of the state of the art see, for example, the textbook by
Jurafsky et al. [ 39 ] . We can assume that the parsing of written and spoken natural
language input will soon be reliable. However, this does not yet provide a full
interpretation of what has been said. Individual words can be identified, as well as
how they relate to each other in the uttered sentence. For example, the utterance 'I
am at the bar in the cinema' places the speaker in something called 'bar', which is
located in something called 'cinema'. To get the full meaning, the computer needs
to understand what 'bar' and 'cinema' mean; here, ontologies come into play [ 2 ] .
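The step from a parsed relation to an interpretation can be illustrated with a toy ontology lookup. The ontology entries, the relation tuple, and the function below are hypothetical stand-ins for what a real knowledge base would provide; they only show the shape of the grounding step.

```python
# Toy ontology: concept name -> set of supertypes. The entries are
# invented for illustration; a real system would query a proper
# ontology or knowledge base.
ONTOLOGY = {
    "bar": {"venue", "place"},
    "cinema": {"venue", "building", "place"},
    "building": {"place"},
}

def interpret(relations, ontology):
    """Ground every word occurring in the parsed containment relations
    against the ontology; unknown words map to an empty set, i.e. they
    remain uninterpreted."""
    return {word: ontology.get(word, set())
            for relation in relations
            for word in relation}

# A parser's output for 'I am at the bar in the cinema' might be a
# single containment relation: the 'bar' is located in the 'cinema'.
relations = [("bar", "cinema")]
meanings = interpret(relations, ONTOLOGY)
```

Once 'cinema' is grounded as a building and 'bar' as a venue inside it, the system can combine the relation with spatial data, e.g. to locate the speaker; a word missing from the ontology stays an empty set, flagging that the reference could not be understood.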