of the PARKAGENT model that have been carried out so far are evidently insufficient to cover the
possible differences between drivers' behaviours and parking policies. In what follows, I thus limit
the presentation of drivers' behavioural rules to the Tel Aviv situation, where on-street parking is
free for residents. These rules are established on the basis of experimental trips with drivers and an
analysis of drivers' GPS records (Benenson et al., 2008; Levy et al., 2012).
The agents in the PARKAGENT model aim to imitate the behaviour of real-world drivers and
parking inspectors. As presented in Sections 9.2.1 and 9.2.2, each driver agent is assigned a
destination, and the rules of driving and parking behaviour distinguish between: (1) driving towards
the destination before the search for parking actually commences, (2) parking search and choice
close to the destination, before reaching it, (3) parking search and choice after the destination is
missed, (4) parking and (5) driving out.
9.4.4.1 Initialisation of Drivers and Driving towards the Destination
The initialisation of a driver agent in the PARKAGENT model begins with the assignment of a
destination and a desired parking duration. The driver agent then enters the model by landing at
random on one of the street segments within a driving distance of 500 m of the destination, with
probability proportional to the traffic intensity on those segments. From this initial position, the
driver agent drives towards the destination at a speed of 12 km/h and searches for a parking place.
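The traffic-weighted entry rule above can be sketched as follows; this is an illustrative fragment rather than the authors' code, and the function and parameter names (`choose_entry_segment`, `intensity`) are assumptions:

```python
import random

def choose_entry_segment(candidate_segments, intensity):
    """Pick the segment where a new driver agent 'lands'.

    candidate_segments: segment ids at ~500 m driving distance from the
    destination; intensity: dict mapping segment id -> observed traffic
    intensity, used as a sampling weight.
    """
    weights = [intensity[s] for s in candidate_segments]
    # Segments carrying more traffic are proportionally more likely to
    # receive the entering driver agent.
    return random.choices(candidate_segments, weights=weights, k=1)[0]
```

In a full model run this draw would be repeated for every driver agent entering the system, so the empirical entry distribution converges to the observed traffic intensities.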
The PARKAGENT model provides two algorithms for finding the way to a destination. According
to the first, heuristic algorithm, at each junction the driver agent chooses the street segment that
takes it to the junction closest to the destination (Benenson et al., 2008). The second is a standard
shortest-path algorithm between the driver agent's current location and the destination. A salient
feature of the PARKAGENT model is an agent's reaction to congestion. Before advancing, a driver
agent checks whether the street cell ahead is occupied by another car; if it is, the driver agent does
not advance during the time step.
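The first, heuristic routing rule can be sketched in a few lines; this is a minimal illustration under assumed data structures (junction coordinates in a dict, outgoing segments as pairs), not the model's actual implementation:

```python
import math

def next_segment(outgoing, destination, coords):
    """Greedy junction choice: follow the outgoing segment whose far-end
    junction is closest to the destination.

    outgoing: list of (segment_id, far_junction_id) pairs leaving the
    current junction; coords: dict junction_id -> (x, y);
    destination: (x, y).
    """
    def dist_to_destination(junction_id):
        x, y = coords[junction_id]
        return math.hypot(x - destination[0], y - destination[1])

    # Pick the segment minimising the remaining straight-line distance.
    return min(outgoing, key=lambda seg: dist_to_destination(seg[1]))[0]
```

Because the choice is purely local, this heuristic can occasionally take a longer route than the shortest-path alternative, which is presumably why the model offers both.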
9.4.4.2 To Park or to Continue Driving to the Destination?
While driving, the driver agent has to decide whether to park or to continue driving in order to park
closer to the destination. To decide, the agent constantly re-estimates the distance to the destination
and the chance of finding a parking place closer to it. To estimate this chance, the driver agent
registers the free and occupied parking places it passes and, depending on the distance to the
destination, estimates the expected number, H, of free parking places on the remaining route. Based
on H, when passing a free parking place, the driver agent decides whether to park there or to continue
driving towards the destination in order to park closer to it. The assumption in the PARKAGENT
model (Benenson et al., 2008) is that the decision depends on the value of H as follows:
- Continue driving towards the destination if H > H2.
- Park immediately if H < H1.
- Continue driving with probability p = (H − H1)/(H2 − H1) if H1 ≤ H ≤ H2.

Currently, the model employs the values of H1 = 1 and H2 = 3 in all applications.
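The three-branch rule translates directly into code. The sketch below uses the published thresholds H1 = 1 and H2 = 3; the function name and the injectable `rng` parameter (useful for testing) are my own conventions, not part of the model's description:

```python
import random

def decide_to_park(H, H1=1.0, H2=3.0, rng=random.random):
    """Return True if the driver parks at the free place just passed.

    H is the driver's estimate of the number of free parking places
    expected on the remaining route to the destination.
    """
    if H > H2:
        return False   # plenty of places expected ahead: keep driving
    if H < H1:
        return True    # places ahead unlikely: park immediately
    # Intermediate case: continue with probability p = (H - H1)/(H2 - H1),
    # i.e. park when the random draw does not fall below p.
    p_continue = (H - H1) / (H2 - H1)
    return rng() >= p_continue
```

Note that the rule is continuous at the boundaries: at H = H1 the continuation probability is 0 (always park), and at H = H2 it is 1 (always continue).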
9.4.4.3 Driving and Parking after the Destination Is Missed
A driver agent that has passed its destination abandons the decision rule employed at the stage
of driving towards the destination and is ready to park anywhere, as long as it is not too far from the
destination. The PARKAGENT model assumes that after passing the destination, the driver agent
aims to park within an appropriate parking area - a circle of a certain radius with the destination at
its centre. The initial radius of the appropriate area is 100 m and it is assumed to grow linearly at a
rate of 30 m/min up to a 400 m Euclidean distance from the destination, thus reaching its maximum
in 10 min. On reaching a junction, the driver agent chooses the street that takes it to a
random junction within the appropriate parking area (Benenson et al., 2008).
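The growth of the appropriate parking area reduces to a single capped linear formula. A one-line sketch (function name is illustrative):

```python
def appropriate_radius(minutes_since_missed):
    """Radius (m) of the circle around the destination within which the
    driver is willing to park, as a function of the time (min) elapsed
    since the destination was missed.

    Starts at 100 m, grows at 30 m/min, and is capped at 400 m,
    which is reached after 10 minutes.
    """
    return min(100.0 + 30.0 * minutes_since_missed, 400.0)
```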