human research subjects. Projects such as MK-ULTRA, which sought to use lysergic
acid diethylamide (LSD) as a drug for “mind control” in the 1950s and 1960s, never
achieved this goal, but did in fact permanently disable and even kill human research
subjects in the process (Streatfeild 2007).
Finally, scientists must be wary of the rush to apply basic science in operational
settings before it is ready (Giordano 2012). This legitimate concern was fully dis-
played during the fentanyl debacle in Moscow. It can also be seen in attempts to
utilize neuroscientific techniques and technologies to detect liars. The neuroscience
of lie detection is still in its infancy, but that has not prevented some from deploying
the technology in such pursuits (Greely and Illes 2007; see also Chapters 9 and 12).
Hence, Lesson Four from history is that there are legitimate concerns pertain-
ing to the relationship between neuroscience and the military. This is not to say or
imply that neuroscientists who assist the military are unethical, abusers of research
subjects, or promoters of bad science. But it is to say that history justifies legitimate
concerns associated with military-purposed science: concerns about killing, con-
cerns about conduct, concerns about research subjects, and concerns about poorly
implemented and operationally translated research.
OPTIONS FOR REGULATING NEUROSECURITY
Can and should we regulate neurosecurity? There are two trends in the literature that
attempt to answer this question. First, a number of authors have simply called for
more discourse, more questions, and more awareness. After asking a series of
hypothetical questions about government-supported neural monitoring, Nita Farahany (2008)
concluded, “These are just some of the questions we must ask as we balance scientific
advances and the promise of enhanced safety against a loss of liberty.” After review-
ing Moreno's Mind Wars , Hugh Gusterson (2007) concluded, “Time to start talking!”,
and Charles Jennings (2006) added, “[ Mind Wars ] should help bring these questions
into the open.” Such calls are a valuable starting point, but we must inevitably ask,
“Then what?” Calls for questions and talking don't actually provide solutions.
Second, and at the opposite extreme, others have suggested international accords,
or engagement of nongovernmental regimes to take responsibility for regulation
(Moreno 2006; Huang and Kosal 2008). A problem with this approach, as noted by Green
(2008), is the diversity of perspectives on science, the military, and
governance/regulation among scientists from different countries, cultures, and
political contexts. As a result, it is hard to imagine a consensus statement that
would bring such diverse parties into agreement on something directly related to
national security.
Here, I focus on options available for regulation at the intersection of academic
neuroscience and the military in the United States. There are several advantages to
focusing such energies here. First, by keeping discussion and emphasis confined to
U.S. applications, we may avoid problems associated with diversity at the interna-
tional level. Second, a number of models already exist for regulating academic
science in the United States; thus, rather than inventing an entirely new approach,
these models can be evaluated for their appropriate application to military-purposed
neuroscience.