12.3 Concluding Remarks
It is now clear - even in engineering - that when we build a new technology for direct
human use, we are in fact building a new 'socio-technical system' and even a new 'cognition'
that deals with and incorporates that mental, pragmatic, and social 'extension'. This is true
of mechanical engineering (factories, cars, tractors, etc.), but it is much more important for
cognitive and social technologies, like computers and the web, with their mediation and support
of the entire range of individual and social human activity: from study and learning to work, from
friendship and communities to political participation, from markets and business to smart
learning environments.
We have to design the cognitive, interactive, and collective dimensions hand in hand with the
technology. More precisely, we have to design technology with those dimensions incorporated.
But to do this one needs an appropriate understanding of those dimensions, some theoretical
abstraction of them, and some way of modeling them. Otherwise we proceed in a merely empirical,
haphazard (trial and error) way.
This is why we believe that a deep and complete model of trust (including its cognitive,
emotional, decisional, social, and institutional dimensions) is not just useful but necessary.
In particular, we believe that to support this kind of human-computer, human-ambient, and
human-robot interaction, and computer-mediated/supported interaction, organization, work,
etc., we must design a technology able to deal with typical human cognitive and social features
and phenomena (expectations, intentions, preferences; emotions and trust; norms, roles,
institutions, collectives, etc.): a technology endowed with autonomous learning, decentralization,
and the acquisition of local and timely information; able to reason and solve problems; and
endowed with some proactivity and a genuinely collaborative (not just executive) attitude.
We think that autonomous computational 'agents' will play a significant role here. But if this
is true, it will make the role of trust and delegation, and of their modeling, even more
central.
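To make the point about trust and delegation concrete, the following is a minimal sketch (not the formal model argued for in this book) of how an agent might represent trust beliefs about a partner and use them in a delegation decision. The class names, the attributes, the multiplicative combination rule, and the threshold are all illustrative assumptions.

```python
# Illustrative sketch: an agent holds beliefs about a partner's competence and
# willingness for a task, combines them into a degree of trust, and delegates
# only when that degree exceeds a threshold. All names and the combination
# rule are assumptions for the sake of the example.

from dataclasses import dataclass


@dataclass
class TrustBeliefs:
    """Beliefs one agent holds about another, relative to a given task."""
    competence: float   # strength in [0, 1] of the belief that the trustee can do the task
    willingness: float  # strength in [0, 1] of the belief that the trustee will do the task

    def degree_of_trust(self) -> float:
        # Example combination rule: trust requires both competence and
        # willingness, so a single weak belief drags the overall degree down.
        return self.competence * self.willingness


@dataclass
class Agent:
    name: str
    delegation_threshold: float = 0.6  # arbitrary example threshold

    def decide_delegation(self, trustee: str, beliefs: TrustBeliefs) -> bool:
        """Delegate the task to `trustee` only if the degree of trust is high enough."""
        return beliefs.degree_of_trust() >= self.delegation_threshold


if __name__ == "__main__":
    truster = Agent(name="x")
    beliefs_about_y = TrustBeliefs(competence=0.9, willingness=0.8)
    print(truster.decide_delegation("y", beliefs_about_y))  # True: 0.72 >= 0.6
```

Even this toy example shows why an explicit model matters: the delegation decision depends on which beliefs are represented and how they are combined, choices that an empirical, trial-and-error approach leaves implicit.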