Fig. 2. Generation of training examples
the feature values is generated. These values are passed to the learned (or manually specified) strategy, leading to the proposed behavior a ∈ A.
Depending on the set of actions, it might not be useful to continuously initiate the strategy decision, as it might take some time until an effect of the newly selected strategy can be seen. It might even be disadvantageous to switch behaviors too often, as a transition could lead to additional costs (e.g., changing the route of a vehicle), and oscillating behavior could occur in borderline situations. For our first investigations in the traffic simulation domain, we invoke the strategy decision at fixed intervals, e.g., every 60 s of simulated time.
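The fixed-interval invocation can be sketched as follows; this is a minimal illustration with hypothetical names (the paper does not specify an API), not the authors' implementation.

```python
# Sketch: re-deciding the strategy only every STRATEGY_INTERVAL simulated
# seconds, instead of at every simulation step (interval value from the text).

STRATEGY_INTERVAL = 60  # simulated seconds between strategy decisions
TIME_STEP = 1           # simulated seconds per simulation step (assumed)

def run_simulation(total_time, compute_features, strategy):
    """Advance simulated time, invoking the strategy decision
    at fixed intervals rather than continuously."""
    current_action = None
    decisions = []
    for t in range(0, total_time, TIME_STEP):
        if t % STRATEGY_INTERVAL == 0:
            # Pass the current feature values to the learned
            # (or manually specified) strategy.
            features = compute_features(t)
            current_action = strategy(features)
            decisions.append((t, current_action))
        # ... advance vehicles one step under current_action ...
    return decisions
```

Keeping the action fixed between decision points avoids the oscillation and transition costs discussed above.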
5 Evaluation
The evaluation scenarios for this work are located on a motorway. Simulated cars vary in driving behavior w.r.t. different acceleration potentials a⊕(v_t), maximum velocities v_max, dallying behavior prob_{v_t+3}, and car lengths. For
each simulation run, a uniformly distributed probability p_truck smaller than 0.1 is determined. A road user is generated as a truck with probability p_truck and as a car otherwise. Trucks can be seen as a special case of cars: they have a lower maximum velocity, accelerate more slowly, and have a greater length.
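The road-user generation above can be sketched as follows. Only the rule p_truck ~ U(0, 0.1) comes from the text; the concrete truck and car parameter values are illustrative assumptions.

```python
import random

def sample_road_users(n, rng=random):
    """Generate n road users for one simulation run."""
    # Per run, draw a uniformly distributed truck probability
    # p_truck smaller than 0.1 (as described in the text).
    p_truck = rng.uniform(0.0, 0.1)
    users = []
    for _ in range(n):
        if rng.random() < p_truck:
            # Truck: special case of a car with lower v_max and
            # greater length (numeric values are assumptions).
            users.append({"type": "truck", "v_max": 22.0, "length": 15.0})
        else:
            users.append({"type": "car", "v_max": 38.0, "length": 4.5})
    return p_truck, users
```

Because p_truck is redrawn per run, the truck share varies between runs while staying below 10% in expectation per vehicle.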
The evaluation is divided into two parts: At first, a static scenario with one traffic situation per simulation run is used to learn a classifier. Afterwards, a dynamic scenario with time-dependent heterogeneous traffic situations is used to check the coherence of the approach for more realistic problems.
5.1 Static Scenario: Imposing a Speed Limit
Our first evaluation scenario is placed on a rather simple road map representing a circle of 24 km length. The road has two lanes. In traffic theory, it is useful to homogenize the traffic (e.g., by speed limits) in order to prevent disturbances that could lead to a traffic jam. It is clear that at low traffic densities this