Without establishing trustworthy relationships, these new infrastructures and services, these new artificial agents, robots and pervasive technologies, cannot achieve sufficient impact and in fact do not really integrate into society.
One of the main features of these new artificial entities (and in fact of this new technological paradigm) is, in particular, autonomy: the capacity to carry out tasks without direct human control and monitoring, the capacity to allow the individual (human or artificial) entities to pursue their own goals, and the capacity to make decisions on the basis of their own attitudes, beliefs and evaluations (see Chapter 7 for a detailed analysis of this concept and of its relationship with trust). The new environments increase the autonomy and complexity of these agents, offering sophisticated forms of interaction and cooperation. In these environments no agent can know everything, and, given the very features of the environment, there is no central authority that controls all the agents.
At the same time, such complex autonomy (in open environments, in broad communities, and with indirect interaction) increases both risk and human diffidence.
Technology should not only be reliable, safe and secure; it should also be perceived as such: the user must believe that it is reliable, and must feel confident while using it and depending on it. The only real answer for coping with others' autonomy is to establish a genuine trust relationship. For these reasons, the ability to understand and model the concept of trust, and to transfer its utility into the technological cooperative framework, will in fact be the bottleneck in the development of autonomy-based technology, that is, the technology of the future.
12.1 Main Difference Between Security and Trust
One important thing to underline is the conceptual difference between the two notions of security and trust. In general, a secure system should provide mechanisms (Wong and Sycara, 2000) able to counter potential threats and to guarantee a set of features:
certainty of identification: in particular, authentication techniques should be able to identify the interacting agents; this identification grants access to defined rights and resources (Grandison and Sloman, 2000);
integrity: the messages and actions of the agents should not be corrupted by a third party;
confidentiality and non-intrusiveness: communication and interaction should remain private if the agents so decide;
nonrepudiation: in specific cases it should be possible to unambiguously identify the author of a message or action, and the author cannot deny this objective identification;
secure delegation: it should be clear who the delegator of each agent is.
Various research areas (encryption (Ellis and Speed, 2001), cryptography (Schneier, 1996; Stallings, 1999), authentication (Stallings, 2001), access control (Anderson, 2001)) develop techniques for achieving the security features specified above.
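To make these features concrete, here is a minimal sketch of how asymmetric digital signatures can support identification, integrity, nonrepudiation and delegation. It is not drawn from the works cited above: it assumes the third-party Python cryptography package, and the agent names and message formats are purely illustrative.

```python
# Minimal sketch: Ed25519 signatures supporting identification,
# integrity and nonrepudiation between agents.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each agent holds a private key; the corresponding public key
# identifies it to the other agents (certainty of identification).
alice_key = Ed25519PrivateKey.generate()
alice_pub = alice_key.public_key()

# Integrity and nonrepudiation: Alice signs a message; anyone holding
# her public key can check that the content was not corrupted and that
# only the holder of Alice's private key could have produced it.
message = b"transfer 10 credits to Bob"  # illustrative payload
signature = alice_key.sign(message)

alice_pub.verify(signature, message)  # succeeds: message is intact
try:
    alice_pub.verify(signature, message + b"!")  # corrupted by a third party
except InvalidSignature:
    print("tampered or forged message rejected")

# Secure delegation: the delegator signs an explicit statement naming
# the delegatee, so it is always clear who delegated whom.
delegation = b"Alice delegates task T to Bob"  # illustrative format
delegation_proof = alice_key.sign(delegation)
alice_pub.verify(delegation_proof, delegation)  # delegator is unambiguous
```

Note that such mechanisms guarantee the objective features listed above, but by themselves they say nothing about whether the signed content should be believed or the signer relied upon: that is precisely the gap between security and the more general concept of trust.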
The objective of automating the procedures of traditional security systems has attracted, and continues to attract, many studies and applications; some of them make explicit reference to trust, even if the concept is used in a very reductive and basic sense, oriented toward strict security rather than toward the more complex and general notion of trust. Examples of this use are the so-called Trusted Systems (Abrams, 1995) and the so-called Trusted Computing (mainly