Animals and humans have evolved remarkably successful means of moving through
complex environments, perceiving objects, and acting in accord with their percep-
tions and goals. Neuroscientists investigate the mechanisms underlying these abilities, and many use robots to test hypotheses about them, so the application of neuroscience to robotics can be quite direct. Accordingly, research on animal locomotion and other functions of the nervous system is supported by military agencies
such as the U.S. military's Defense Advanced Research Projects Agency (DARPA)
(DARPA-BAA-11-65 2012; DARPA-Our Work 2012).
Animals move, perceive, make decisions, and act as autonomous agents. Much dis-
cussion surrounds the possibility of similarly autonomous robots. Currently, robots
such as drones are used for surveillance and for killing, but the decision to kill is
made by a human being. The long time delays involved in keeping a human being in the loop as a decision maker, and the vulnerability of the communication links to interference, have motivated proposals for autonomous robots. The U.S. military
has argued, for example, that its drones have the same right to defend themselves
from enemy radar that human pilots have (Singer 2012).
Robots and especially autonomous robots raise the legal and ethical issues of
accountability (Singer 2009, 2012). Who bears responsibility for what robots do?
They also raise the ethical issues of making war both easier to start and harder to
stop because they remove the possibility of human casualties for one side, perhaps the
most important traditional impediment to starting and continuing a war (Howlader
and Giordano 2013). Reliance on robots is part of a larger mind-set of overconfidence
in the superiority of one's technology that can also make war more likely.
Animals and humans have formidable perceptual and cognitive abilities that
cannot be easily matched at present by machines but are of critical importance for
the military and for intelligence gathering. Surveillance drones, for example, pro-
vide massive amounts of video images that require hundreds of human analysts to
monitor for useful information (Benjamin 2012). Other forms of surveillance such
as monitoring of phone calls or email messages require the same human skills and
employ thousands of analysts (Bamford 2012). Understanding the mechanisms of
animal and human cognition, and implementing that understanding in machines, is
of clear utility for “National Security.” Hybrid systems have been developed in which
brain responses are recorded from soldiers as they watch successive images (Bardin
2012). Images that evoke brain responses associated with detection of “objects of
interest” can be selected for further analysis.
The “dual-use” dilemma is always present in any discussion of neuroscience and
the military. Almost every application of neuroscience can be used for benign peace-
ful uses as well as for military purposes. As pointed out by Nagel (2010), the poten-
tial for beneficial uses is often used as a means to silence those who raise fears of
misuse. Critics are then asked, “Surely you are not against helping quadriplegics with BMI devices or finding earthquake victims with robots?” The pledge does not oppose such uses of neurotechnology; rather, it addresses the dual-use issue by asking neuroscientists to stay aware of the potential for misuse of neuroscience and by asking that they refuse to participate knowingly in such misuse. It is not sufficient
to view BMIs only from the perspective of helping quadriplegics, or autonomous
robots only from the perspective of finding earthquake victims.