NOT AT SEA IN A SIEVE: CHARACTER,
CONSENT, AND CONSEQUENCE
Fortunately, the same ethical and moral frameworks that inform the development
of traditional just war considerations—when is the use of force justified and in what
fashion can it be permissibly applied?—can also be used to help us answer moral
questions about the use of national security neuroscience technology across all
phases of conflict. Jus in bello (“justice in war”) concerns about how non-
combatants are treated, for instance, reflect praiseworthy considerations about both
the rights of people not to be used as a mere means and utilitarian concerns about
what practices will help minimize the harmful effects of war if it becomes morally
necessary (Johnson 1999).
The three grand traditions of ethical theory can thus be of use as we think about a
normative or moral framework for evaluating national security neurotechnology: these
are virtue theory, deontology, and utilitarianism. The first highlights the person
taking an action; the second focuses on the nature of the action being taken; and the
third highlights the consequences of the action. All but the most adamant partisans of
a particular approach to moral theory can agree that, at least for heuristic value, these three
traditions have thrived because they focus attention on ethical aspects of a situation we
might otherwise be prone to ignore. Here are thumbnail sketches of each approach.
Virtue theorists, such as the Greek philosophers Plato (427-347 BC) and Aristotle
(384-322 BC), make paramount the concept of “human flourishing”; to be maximally
moral is just to function as well as one can given one's nature. This involves the
cultivation of virtues (such as wisdom) and the avoidance of vices (such as
intemperance) and is a practical affair. Deontologists, exemplified by the Prussian
philosopher Immanuel Kant (1724-1804), do not place emphasis upon the consequences
of actions, as utilitarians would, nor on the character of people, as a virtue theorist
would. Instead, they focus on the maxim of the action—the intent-based principle that
plays itself out in an agent's mind. We must do our duty, as derived from the dictates
of pure reason and the “categorical imperative,” for duty's sake alone. Deontologists
are particularly concerned to highlight the duties that free and reasonable creatures
(paradigmatically, human beings) owe to one another. Maximizing happiness or
cultivating character is not the primary goal on this scheme; instead, ensuring that we
do not violate another's rights is paramount. The typical utilitarian, such as British
philosopher John Stuart Mill (1806-1873), holds that one ought to take the action (or
follow the “rule”) that, if taken (or followed), would produce the greatest amount of
happiness for the largest number of sentient beings (where by happiness, Mill means
the presence of pleasure or the absence of pain). The second flavor of utilitarianism
just described, “rule utilitarianism,” is probably the more popular of the two.
These three frameworks can be captured by remembering the “three C's”—
“Character, Consent, and Consequence.” A comprehensive evaluation of any
particular neuroscience and national security technology would ask whether its
development and use enables us to flourish as human beings and is conducive to the
development of traits allowing us to do so (“Character”), whether the technology is
being developed and used in a fashion consistent with the human right not to be used
as a mere means to someone else's end (“Consent”), and whether the development
and use of the technology produces, on balance, more good consequences than harm
(“Consequence”).