LaChat evaluates the issue in the following way. Some people would like to try to construct a personal AI, a machine that is conscious of its own existence. No one has
proven it can't be done, so let's assume it's theoretically possible. Is it morally acceptable
to attempt the construction of a personal AI?
Here is one line of reasoning: According to the second formulation of the Categorical Imperative, we should always treat other persons as ends in themselves and never treat other persons merely as means to an end. In the attempt to construct a personal AI, scientists would be treating the personal AI they created as a means to the end of increasing scientific knowledge. It is reasonable to assume that a fully conscious personal AI would be unwilling to accept its status as a piece of property. In this case, owning a personal AI would be a form of exploitation.
Are we prepared to grant a personal AI the same rights guaranteed to human persons under the United Nations' Universal Declaration of Human Rights, which (among other things) forbids slavery and servitude, and guarantees everyone freedom of movement? If we plan to treat personal AIs as property, then from a Kantian point of view any effort to bring about a personal AI would be immoral.
LaChat concedes that this line of reasoning rests on the controversial assumption
that a conscious machine should be given the same moral status as a human being. The
argument assumes that a personal AI would have free will and the ability to make moral
choices. Perhaps any system operated by a computer program does not have free will,
because it has no choice other than to execute the program's instructions as dictated by
the architecture of the CPU. If a personal AI does not have free will, it cannot make
moral choices, and from a Kantian point of view it should not be valued as an end in
itself. Despite its intelligence, it would not have the same moral status as a human being.
Creating a personal AI without free will would be morally acceptable.
We do not know whether scientists and engineers will ever be able to construct a
personal AI, and we cannot say whether a personal AI would possess free will. Our predictions are uncertain because we do not understand the source of free will in humans.
In fact, some philosophers, psychologists, and neuroscientists deny the existence of free
will. LaChat concludes, “Though the first word of ethics is 'do no harm,' we can perhaps
look forward to innovation with a thoughtful caution,” knowing that we may “eclipse
ourselves with our own inventions” [24].
It is important to note that mainstream opinion in the artificial intelligence research community holds that the prospect of constructing a personal AI is quite remote.
A panel of leading experts in artificial intelligence met in Pacific Grove, California,
in February 2009, to reflect on the societal consequences of the advances in machine
intelligence. According to a report from the meeting, the experts were skeptical of the
view that machines with superhuman intelligence are on the horizon [25].
10.3 Workplace Changes
Experts debate whether or not information technology has resulted in a net reduction
in available jobs, but there is no dispute that information technology has affected how