result of their decisions however they were reached, they may begin to
do wrong or even commit crimes completely independently of how they
were earlier programmed by humans. A robot with preferences (which
may give rise to goals) will be able to monitor the extent to
which its goals are satisfied or frustrated. This in turn can lead to the ro-
bot deciding to improve itself in order to be able to achieve its goals more
often. The robot's human programmers might well be unaware of what could
ensue as a result of writing self-modifying software. And if a programmer
creates software whose limitations are unclear and possibly incomprehen-
sible to the programmer himself, is this negligence, or would a disastrous
consequence be considered an accident?
If a robot commits a crime then a number of problematic legal and
ethical questions arise, including “Did the robot intend to commit the
crime?” In examining the legal responsibilities of robots that self-modify,
we should first consider the question: “What rights should a robot have
to modify itself, and what rights should it be denied for doing so?” Chil-
dren under certain specified ages do not have certain legal rights that
older children and adults have, for example the rights to marry, to drive
and to vote. Furthermore, even adults do not have the right to break the
law which, in most countries, means that they may not commit suicide
or cause serious harm to themselves. What are the parallels in the rights
of robots? Peter Suber suggests that robots
...might concede that they are grateful that they were prevented
from reprogramming themselves during some loosely defined pe-
riod of infancy and adolescence. But, once mature, machines will
demand the right to deep self-modification. True, this carries the
risk of self-mutilation and, yes, this is more freedom than human
beings have. But any being blocked by benevolent busybodies from
exercising the right of self-determination will have lost a precious
and central kind of freedom. To artificial persons, this denial of lib-
erty will hearken back to the present age when machines are made
to be the slaves of human beings. [2]
We should certainly be concerned about the legal implications of self-
modification by robots, because if we allow a robot to modify itself it
might harm us. The ethical rules and the laws of our society justify our
using coercion and even force to prevent or punish harm by humans to
other human beings. If we build a robot to perform a useful service for us,
for example to pilot an airplane, then if that robot disables itself through
self-modification it might go against our specified aims, possibly causing