responsibility, and transparency (described below) connected with their use. This
gives pause to any effort to insert such products into national security applications,
where reliability is essential, and those involved in revolutionary S&T must respect
the operational tension between innovation and conservatism.
Although system design based on human reasoning (e.g., expert systems, artifi-
cial intelligence) is not a new field, and the science of human-machine interaction
based on real-time physiological states has shown great promise, the ontological
implications of human-machine relations remain largely unresolved and will
likely have an initially disruptive impact on the planning and practice of mili-
tary operations. Advances in CR-NR could, for example, enable dramatic improve-
ments in mission performance of both human operators and autonomous machines
(National Research Council 2009). Such capabilities will require a rethinking of
military operating doctrine at several levels. How much information will operators
need to reveal about their cognitive functioning (i.e., by allowing their physiologi-
cal status to be monitored) to obtain improved mission performance? How can the
decision processes of brain-based human-machine systems or autonomous systems
be evaluated when the underlying algorithms are dynamic and may differ from
mission to mission, from person to person, and at different times within a mission?
How will brain-based systems interact with non-brain-based systems in distributed
networks? Initial answers to such questions, and assessments of their impact on
military doctrine, must accompany any effort to introduce these technologies into
operational use. While government transition processes provide for graduated test-
ing of technologies (see Figure 2.1), the impact of the issues described here may
not be manifested until operational experience with and exposure to such systems
have accumulated.
Brain-based systems used in national security applications will almost certainly
reflect a combination of autonomous initiative and original problem solving by both
human and machine. This implies shared agency (who or what acts) and shared responsibil-
ity (who or what is accountable for the result) in military decisions. Although shared
agency between humans and computers lies at the core of many combat tasks, such
sharing is largely based on predetermined decision models that persist across opera-
tional conditions, and the machine's role is an instantiation of one or more rule sets.
The issue of agency and responsibility expands when machine intelligence is
more powerful and based on the real-time exercise of humanlike faculties, even if those
faculties are used to support human decisions. Recognizing the new status of such
advanced machine capabilities will require a large adjustment in military and soci-
etal thinking about what constitutes a legitimate “mind” in military operations. The
effective proliferation of these technologies into any arena of human activity will
depend on how much attention and debate are devoted to resolving such issues
sooner rather than later in the transition process.
System operation based on either autonomous or shared information processing
must be visible. Much of the difficulty encountered during early attempts to intro-
duce intelligent (e.g., expert) systems into organizational settings stemmed from a
lack of explanatory capabilities (Woods 1996), or transparency: systems could not
make their reasoning explicit and understandable to operators, and their output was
often not trusted. Because systems based on cognition and neuroscience principles