outcome, usually in terms of some number of valued entities lost, or some number of
valued entities that we fail to gain, or some number of disvalued entities gained. This
fourth sense of risk is the most common sense of “risk” in professional risk analysis.
In particular, this concept of “risk” can be defined as “a numerical representation of
severity, that is obtained by multiplying the probability of an unwanted event, or lack
of wanted event, with a measure of its disvalue” (Allhoff 2009).
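Spelled out, and assuming only that p stands for the probability of the unwanted event (or of
the failure to obtain the wanted event) and d for a numerical measure of its disvalue, the quoted
definition amounts to

risk = p × d,

that is, the expected disvalue of the outcome in question.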
In RBA, it is this fourth conception, risk as expected value, that is often of most
interest to decision makers (see, e.g., Sen 1987). That is, what people usually most want
to know is the expected value of the result, sometimes conflated with the “expected
utility.” This allows a quantitative assessment of both risk and benefit in a way that
gives a clear numerical answer for a course of action—a “decision algorithm” of
sorts. For example, we could decide that causing paralysis to 150 soldiers is unac-
ceptable and demand changes to the bioenhancements to make them safer before they
are used. But if the expected loss can be reduced to, say, 0.5%—that is, we expect
five soldiers out of 1000 to be paralyzed as a result—we may deem the enhancement
“safe enough” to use. Such judgments are routinely made for vaccines and other
public health interventions that bear some risk for the individual while enhancing the
welfare of the whole. Such judgments are also routine for commanders of troops in wartime,
assessing whether particular tactics in battle are too risky or not.
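To make the arithmetic behind such a judgment explicit, suppose (purely for illustration) a force
of 1,000 enhanced soldiers and count each case of paralysis as one unit of loss. Then

expected loss = 0.005 × 1,000 = 5 soldiers paralyzed,

whereas paralysis in 150 of those same 1,000 soldiers would correspond to a probability of 0.15,
thirty times higher.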
But of course, while this sense of risk as expected value may be desirable for
policy makers, it often greatly oversimplifies the intractable problem of ascribing
mathematically exact probabilities to all the undesired outcomes of our policies.
It often lends an aura of false precision to ethical theorizing. It also ignores a
common issue concerning risk assessment in bioethics: the distinction between
“statistical victims” and “identifiable victims.” RBA might well assert, as a statistical
certainty, that we would save more lives (or quality-adjusted life years, or whatever the
unit of assessment may be) by diverting money we would spend on “last-chance treatments”
instead to campaigns to, say, prevent smoking. But the “rule of rescue” (Jonsen 1986)
and related ethical rules of thumb rely on the idea that we actually value saving
identifiable lives more than statistical lives. That is, we tend to care more about using
every last measure to save grandma from her stage IV cancer than about saving many more
lives of future strangers. Or, in the military, I may unquestioningly risk the future
well-being of myself and even my entire unit in the mad dash to rescue a wounded
brother-in-arms, in a way that RBA would consider irrational but in fact may result
in a medal of valor, even if posthumously awarded. As long as this difference in
our moral attitudes toward statistical victims and identifiable victims is defensible,
attempts to use RBA, which by design weighs the two alike, are problematic at best.
What then can we say for certain about risk, especially with respect to military
neuroenhancement? How can we determine what counts as acceptable risk? We can begin
by seeing that risk and safety are two sides of the normal human
attempt to reduce the probability of harm to oneself and others, even as we are often
unsure of the exact probabilities involved. To make things even more difficult, war
is a strange human activity, not least because it reverses this tendency: in war, one
ordinarily wishes to increase the probability of harm to one's enemies. But the laws
of armed conflict and the typical rules of engagement make clear that not all ways of
increasing risk for one's enemy are morally legitimate, and some ways of increasing