To estimate the user's proficiency with the gamepad buttons, a simple method was
implemented. Each sequence defined in the grammar has an associated difficulty
level (easy, medium, or hard). The difficulty of a sequence may be related to its
length and to the physical distance between the buttons on the gamepad. Since the
layout of a generic gamepad may change depending on the model, classifying a
sequence as easy, medium, or hard is left to the operator.
When the user completes the gamepad-sequence part of the training, an error rate is
calculated for each difficulty level. If a rate exceeds the maximum acceptable value,
which is configurable, the user's classification for this item is immediately defined.
This classification is then used to turn on the security feature, which is characterized
by a confirmation event performed by the navigation assistant. For a grammar with
5 sequences of easy difficulty, the maximum number of accepted errors would be 1.
If the user fails more than one sequence, the confirmation event is triggered for any
input sequence, of any difficulty type, and the gamepad training session is terminated.
If the error rate for the easy type does not exceed 20% (1/5), training with the sub-set
of medium-difficulty sequences is initiated and a similar rule is applied at the end: if
the error rate for the medium level is higher than 30%, the confirmation is triggered
for the medium and hard difficulty levels and the training session is terminated.
Finally, if the user reaches the last difficulty level, training with the sub-set of hard
sequences is started; if the error rate is higher than 50%, the confirmation event is
triggered only for sequences of hard difficulty. The best scenario occurs when the
user stays within the maximum accepted error rates for all difficulty levels. In this
case, the confirmation event is turned off, and an output request is immediately
triggered for any input sequence composed only of gamepad buttons.
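The staged evaluation described above can be summarized by the following minimal sketch. The function name, the data structure holding the training results, and the default thresholds are illustrative assumptions rather than the system's actual API; only the staged rule (a failed level enables confirmation for that level and every harder one, and stops the session) comes from the description above.

    # Sketch of the staged evaluation of the gamepad training session.
    DIFFICULTIES = ("easy", "medium", "hard")

    # Maximum acceptable error rate per difficulty level (configurable).
    MAX_ERROR_RATE = {"easy": 0.20, "medium": 0.30, "hard": 0.50}

    def evaluate_training(results):
        """results maps each difficulty to a list of booleans (True = failed sequence).
        Returns the set of difficulty levels for which the confirmation event stays on."""
        confirm = set()
        for i, level in enumerate(DIFFICULTIES):
            failures = results.get(level, [])
            error_rate = sum(failures) / len(failures) if failures else 0.0
            if error_rate > MAX_ERROR_RATE[level]:
                # Training stops here: confirmation remains enabled for this
                # level and every harder one.
                confirm.update(DIFFICULTIES[i:])
                break
        return confirm

    # Example: 1 failure in 5 easy sequences (20%) is still acceptable, but
    # 2 failures in 5 medium sequences (40% > 30%) enables confirmation for
    # the medium and hard levels.
    print(evaluate_training({"easy": [True, False, False, False, False],
                             "medium": [True, True, False, False, False]}))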
Defining the ideal maximum acceptable error rates is not easy. For this reason, these
values can also be configured in the XML grammar.
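One possible layout for such a grammar is sketched below, with a per-sequence difficulty attribute and configurable maximum error rates. The element and attribute names are assumptions made for illustration; the actual schema of the XML grammar is not specified here.

    # Hedged sketch: parsing a hypothetical XML grammar that carries the
    # per-sequence difficulty and the configurable maximum error rates.
    import xml.etree.ElementTree as ET

    EXAMPLE_GRAMMAR = """
    <grammar>
      <errorRates easy="0.20" medium="0.30" hard="0.50"/>
      <sequence difficulty="easy" buttons="A B"/>
      <sequence difficulty="medium" buttons="A X B Y"/>
    </grammar>
    """

    root = ET.fromstring(EXAMPLE_GRAMMAR.strip())
    rates = {name: float(value)
             for name, value in root.find("errorRates").attrib.items()}
    sequences = [(s.get("difficulty"), s.get("buttons").split())
                 for s in root.findall("sequence")]
    print(rates)       # {'easy': 0.2, 'medium': 0.3, 'hard': 0.5}
    print(sequences)   # [('easy', ['A', 'B']), ('medium', ['A', 'X', 'B', 'Y'])]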
The joystick phase of the training session can be used to measure the maximum
amplitude achieved by the user. This value can then be used to parameterize the
maximum speed. For example, for a user who can only push the joystick to 50% of
its full amplitude, the speed can be calculated by multiplying the axis value by two.
This feature was not implemented, but all the background preparation needed to
implement it is in place as future work.
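A minimal sketch of this proposed (not yet implemented) scaling is shown below; the function name and the normalization of the axis value to [-1, 1] are assumptions for illustration.

    # Sketch of the proposed speed scaling: the fraction of the joystick's
    # amplitude the user can reach is mapped to full speed.
    def scale_axis(raw_value, max_amplitude_reached):
        """raw_value in [-1, 1]; max_amplitude_reached in (0, 1]."""
        if max_amplitude_reached <= 0:
            return 0.0
        gain = 1.0 / max_amplitude_reached   # e.g. 50% amplitude -> gain of 2
        return max(-1.0, min(1.0, raw_value * gain))

    print(scale_axis(0.5, 0.5))   # 1.0: half deflection already yields full speed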
The speech component of the training session was used to define the recognition
trust level for each voice command. The trust level is a percentage value returned by
the speech recognition engine, and it is used to set the minimum recognition level for
the recognition module.
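The exact rule for deriving the minimum recognition level from the training data is not detailed here; the sketch below shows one plausible approach, in which the threshold is set slightly below the lowest confidence observed during training. The 0.9 margin and the command names are assumed values.

    # Hypothetical derivation of per-command recognition thresholds from the
    # confidence values observed in the speech training phase.
    def min_confidence(training_confidences, margin=0.9):
        """Threshold slightly below the lowest confidence seen in training."""
        return margin * min(training_confidences)

    thresholds = {
        "forward": min_confidence([0.82, 0.91, 0.88]),
        "stop":    min_confidence([0.95, 0.97, 0.93]),
    }
    print(thresholds)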
Finally, the head-movement phase of the training session has a purpose similar to the
joystick phase. Additionally, the maximum amplitude measured for each direction can
be used to determine the range that triggers each of the leaning inputs of the head-
gesture recognition.
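The sketch below shows one way of turning the per-direction maximum amplitudes into trigger thresholds for the leaning inputs; the 60% activation fraction and the units are assumptions, since the text does not specify them.

    # Hypothetical mapping from measured head-movement amplitudes to the
    # thresholds that trigger each leaning input.
    ACTIVATION_FRACTION = 0.6

    def leaning_thresholds(max_amplitude_per_direction):
        """max_amplitude_per_direction: e.g. {'left': 12.0, 'right': 10.5, ...}
        in whatever unit the head tracker reports."""
        return {direction: ACTIVATION_FRACTION * amplitude
                for direction, amplitude in max_amplitude_per_direction.items()}

    print(leaning_thresholds({"left": 12.0, "right": 10.5, "forward": 8.0}))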
An extension of this profiling concerns facial expressions and thoughts. A brain-
computer interface (BCI) capable of recognizing facial expressions and thoughts was
incorporated. However, several patients, for example those suffering from cerebral
palsy, are not able to produce all the facial expressions. For that reason, the profiling
also includes a component for testing the facial expressions (and even the thoughts),
during which all the brain activity is recorded by the 14 sensors of the BCI for
posterior analysis.
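A minimal sketch of such a recording component is given below. The read_bci_sample stub, the CSV output format, and the sampling rate are assumptions standing in for the actual device SDK and storage format.

    # Sketch of logging the 14-channel BCI stream during the facial-expression
    # and thought tests for posterior analysis.
    import csv, time

    N_CHANNELS = 14

    def read_bci_sample():
        # Placeholder: a real implementation would pull one sample per channel
        # from the BCI headset's SDK.
        return [0.0] * N_CHANNELS

    def record_session(path, duration_s=10.0, rate_hz=128):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp"] + [f"ch{i}" for i in range(N_CHANNELS)])
            end = time.time() + duration_s
            while time.time() < end:
                writer.writerow([time.time()] + read_bci_sample())
                time.sleep(1.0 / rate_hz)

    record_session("facial_expression_test.csv", duration_s=2.0)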