At least some of these universal values to be considered are happiness, life, ability, security, knowledge,
freedom, opportunity, and resources. Notice that these are core goods that any sane human wants
regardless of which society the human is in.
In the ethical decision process, step one is to identify a set of policies for acting in the kind of situation
under consideration. Step two is to consider the relevant duties, rights, and consequences involved
with each policy. Step three is to decide whether the policy can be impartially advocated as a public
policy, that is, whether anyone should be allowed to act in a similar way in similar circumstances. Many policies
may be readily acceptable. Many may be easily rejected. And some may be in dispute, as people may
weigh the relevant values differently or disagree about the factual outcomes.
In general, rights and duties carry prima facie weight in ethical decision making and cannot
be overridden lightly. But if the consequences of following certain rights and duties are bad enough,
then overriding them may be acceptable as long as this kind of exception can be an acceptable public
policy. In controversial cases, there will be rational disagreements. Just consequentialism does not
require complete agreement on every issue. Note that we have disagreements in ordinary nonethical
decision making as well. But just consequentialism does guide us in determining where and why the
disagreements occur so that further discussion and resolution may be possible.
You have also studied the field of artificial intelligence from a philosophical point of view.
Do you believe it is possible to create a truly intelligent machine capable of ethical decision
making? If so, how far are we from making such a machine a reality?
Nobody has shown that it is impossible, but I think we are very far away from such a possibility. The
problem may have less to do with ethics than with epistemology. Computers (expert systems) sometimes
possess considerable knowledge about special topics, but they lack commonsense knowledge.
Without even the ability to understand simple things that any normal child can grasp, computers will
not be able to make considered ethical decisions in any robust sense.
Can an inanimate object have intrinsic moral worth, or is the value of an object strictly
determined by its utility to one or more humans?
I take value, or moral worth, to be a judgment based on standards. The standards that count for us
are human. We judge other objects using our standards. This may go beyond utility, however, as we
might judge a nonuseful object to be aesthetically pleasing. Our human standards might be challenged
sometime in the future if robots develop consciousness or if we become cyborgs with a different set
of standards. Stay tuned.
 