us serious harm, just as an intoxicated human being can cause us harm by
performing a task on which other human beings depend for their safety
and well-being. The easiest way to protect our interests is to avoid such
cases arising, by making the robot totally incapable of self-modification,
ensuring that it does not disable itself in a way or at a time that would
harm others, just as we attempt to ensure that human pilots do not work
while under the influence of alcohol or mind-altering drugs.
Any discussion of the legal rights and responsibilities of robots should
include consideration of how errant robots might be punished, given that
they could be instantly reprogrammed and thereby become, in effect,
a different robot. A robot might commit a crime while running the
Aggressive Personality program, but then switch its software to the Mild-
mannered Personality program when the police arrive at its front door.
Would this be a case of false arrest? And if the robot is convicted, should
all existing copies of the Aggressive Personality program also be found
guilty? If so, should they all suffer the same punishment? If not, is it
double jeopardy to take another copy of that program to trial for the same
offence committed by a physically different robot? The offending robot
could be released with its aggressive program excised from its memory,
but this may offend our sense of justice, and the reprogramming of a
criminal robot might be considered a violation of its right to privacy
or any of its other rights. Denying a robot the running of its preferred
software would be like keeping a human in a permanent coma, which
seems like cruel and unusual punishment.
Such concerns lead us to ask: “If robots can do wrong, what (if any)
is the ethical role of punishment?” Humans who break accepted conven-
tions are punished in various ways, but how, from an ethical standpoint,
should we deal with the transgressions of robots? Luciano Floridi and
Jeff Sanders point out that preserving consistency between human and
artificial moral agents leads us “to contemplate the following analogous
steps for the censure of immoral artificial agents:
1. monitoring and modification (i.e., maintenance);
2. removal to a disconnected component of cyberspace;
3. deletion from cyberspace (without backup).” [24]
This is not so very different from the conventional approach to human
punishment in many countries: corrective training, incarceration and
even death.