rather than moral agents. This is very much distinct from approaches to morality and ethics for humans, which assume that they are moral agents capable of making decisions. The distinction is particularly relevant to decision-making in the presence of uncertainty, where the inflexible application of rule-based approaches can lead to serious or even fatal errors. There are many situations in which only limited information is available, but people generally try to use their intelligence to compensate for this lack of information and so avoid many of the problems that would otherwise occur.
In Asimov's stories, the laws are designed into the robots, so (at least in theory) they are unable to cause harm or disobey humans. In practice, their lack of flexibility and inability to respond to the unexpected sometimes lead to situations where quick and creative reaction by humans is required in order to avoid harm. The idea of designing the inability to cause harm into technologies is an interesting one, but as the stories indicate (even leaving aside the fact that we have not quite reached this point of technological development yet), it is not simple. The term 'harm' is not precisely defined and is open to different interpretations. There is also a probabilistic element in determining what will cause harm, particularly in the case of mental harm, where it is more difficult to define precisely what will cause harm than in the case of physical harm.
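This probabilistic element can be made concrete with a small sketch. In the Python fragment below, all outcome probabilities, severity weights and the decision threshold are invented purely for illustration; the point is only that a strict reading of 'do not cause harm' and an expected-harm reading can disagree about the very same action.

```python
# A minimal sketch of the probabilistic element in judging harm.
# All probabilities, severities and the threshold are invented for
# illustration and are not taken from the text.

# Predicted outcomes of a single candidate action, with probabilities.
outcomes = [
    {"p": 0.05, "harm": "physical", "severity": 8},  # rare but severe
    {"p": 0.30, "harm": "mental",   "severity": 2},  # likely, mild, hard to pin down
    {"p": 0.65, "harm": None,       "severity": 0},  # no harm at all
]

# Expected harm weights each outcome's severity by its probability.
expected_harm = sum(o["p"] * o["severity"] for o in outcomes)  # 1.0

# A strict rule forbids the action if any outcome involves harm;
# a probabilistic rule forbids it only above some chosen threshold.
strict_forbidden = any(o["harm"] is not None for o in outcomes)  # True
threshold_forbidden = expected_harm > 1.5                        # False

print(expected_harm, strict_forbidden, threshold_forbidden)
```

The two readings give opposite verdicts, which is exactly the ambiguity a robot bound by a fixed rule would have to resolve.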
Despite the contradictions between the laws in many situations, it would be interesting to consider the application of these laws (extended to cover other species and the environment) to all technologies. They would definitely prohibit the development of offensive weapons and support alternative approaches to security (as discussed in Chap. 11). However, the laws do not give clear answers in the case of self-defence. Application of the laws would also lead to much stricter regulation of technologies with possible negative impacts on health, safety and the environment. This would result in, for instance, much stricter limits on discharges and emissions.
Asimov's laws would prohibit the use of robots by the military. As discussed in Chap. 11 and Sect. 5.2 of this chapter, the use of military robots is likely to reduce the barriers to military conflict and war, increase the percentage of civilian casualties and increase the number, and probably also the seriousness, of human rights violations. While the laws make robots pacifists, they come into conflict with each other over the appropriate response to an attack. The first clause of the first law requires robots to take a very strict pacifist position which does not allow self-defence using force in response to an attack, as this might result in harm to a human. However, the second clause of this law requires them to take a less strict pacifist position and to use limited force to defend humans against an attack which would otherwise cause them harm. There is therefore a clear contradiction. A further ambiguity arises because it may not be clear when it is possible to defend humans against an attack without using force that will harm other humans.
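The contradiction can be shown in a few lines of code. The sketch below is a hypothetical encoding, with invented action names and consequence flags; it treats the two clauses of the first law as separate rules and checks every available action against both. In the self-defence scenario described above, no action passes.

```python
# A minimal sketch (hypothetical scenario, names and flags invented for
# illustration) of why the two clauses of Asimov's First Law can
# contradict each other: in this self-defence case, every available
# action violates at least one clause, leaving a rule-based agent
# with no permitted move.

# Each action is annotated with its predicted consequences.
ACTIONS = {
    "use_force_on_attacker": {"injures_human": True,  "victim_harmed": False},
    "do_nothing":            {"injures_human": False, "victim_harmed": True},
}

def violates_clause_1(consequences):
    """Clause 1: a robot may not injure a human being."""
    return consequences["injures_human"]

def violates_clause_2(consequences):
    """Clause 2: a robot may not, through inaction, allow a human to come to harm."""
    return consequences["victim_harmed"]

permitted = [
    action for action, c in ACTIONS.items()
    if not violates_clause_1(c) and not violates_clause_2(c)
]

print(permitted)  # [] -- no action satisfies both clauses at once
```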
The following parody of Asimov's laws, written for military robots, illustrates the ethical problems associated with their use:
1. A robot may not injure an authorised representative of the government or giant
corporation but has to terminate all intruders.