8 Conclusion and Further Work
A model, in the form of a (state-based) specification, together with a link, is able to generate tests of a putative implementation. It has been seen that, roughly, the more detailed a test is, the more information the tester needs about the link. Thus laws concerning the operations may involve little information beyond the specification, but they might require knowledge of how the link represents the system state in the implementation. The trade-off between specification and implementation knowledge was already revealed by Stocks [22,23], so it is not surprising that it has reappeared here.
A specifier might ask: what extra functional information am I in a position to provide about the system that yields the kind of redundancy necessary for testing? Various answers have been discussed here, including system invariants (explicit or implicit), further confidence conditions, partiality of operators, boundary testing, and 'nonfunctional' properties like security.
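These sources of redundancy can be illustrated with a minimal sketch (all names and the bank-account example are hypothetical illustrations, not taken from the paper or from Webbo): an operation's precondition partitions candidate inputs into positive, negative, and boundary cases, while the system invariant serves as an oracle on the resulting state.

```python
def invariant(state):
    # System invariant (explicit): the balance must never go negative.
    return state["balance"] >= 0

def withdraw_pre(state, amount):
    # Partiality: the operation is only defined when funds suffice.
    return 0 < amount <= state["balance"]

def withdraw(state, amount):
    # The specified effect of the operation on the abstract state.
    return {"balance": state["balance"] - amount}

def classify(state, amounts):
    """Classify candidate inputs against the precondition: inside it
    (positive tests), on its edge (boundary tests), or outside it
    (negative tests)."""
    cases = []
    for a in amounts:
        if withdraw_pre(state, a):
            after = withdraw(state, a)
            # The invariant acts as a test oracle for the after-state.
            assert invariant(after)
            kind = "boundary" if a == state["balance"] else "positive"
        else:
            kind = "negative"
        cases.append((a, kind))
    return cases

print(classify({"balance": 10}, [5, 10, 11]))
# → [(5, 'positive'), (10, 'boundary'), (11, 'negative')]
```

Note that only the first two kinds of case can be executed against an implementation without further information about how the link exposes undefined behaviour at the testing interface.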
The specifier is well placed to augment the specification, along those lines, with various pieces of information from which to generate tests. But the more detailed the test, the more information is required beyond the specification alone; otherwise the result may be a test that cannot be executed because of restrictions on the testing interface. That is particularly true of web-based systems of the kind represented by Webbo.
This paper represents work very much in progress. We hope to continue it by formalising the action-word approach to linking specification with implementation (using the concepts of data simulation and process algebra), automating the ideas contained here, applying them to Webbo to determine the nature of the tests generated, and identifying conditions sufficient for a test interface to execute a given family of tests.
References
1. Aichernig, B.K.: Test-design through abstraction — A systematic approach based on the
refinement calculus. Journal of Universal Computer Science 7(8), 710-735 (2001)
2. Binder, R.V.: Design for testability in object-oriented systems. Commun. ACM 37(9), 87-101 (1994)
3. Boyapati, C., Khurshid, S., Marinov, D.: Korat: automated testing based on Java predicates.
In: Proceedings of the International Symposium on Software Testing and Analysis (ISSTA
2002), Rome, Italy, 22-24, IEEE, Los Alamitos (2002)
4. Broy, M., Jonsson, B., Katoen, J.-P., Leucker, M., Pretschner, A. (eds.): Model-Based Testing
of Reactive Systems. LNCS, vol. 3472. Springer, Heidelberg (2005)
5. Buwalda, H.: Action Figures. Software Testing and Quality Engineering, 42-47 (2003)
6. Buwalda, H., Kasdorp, M.: Getting automated testing under control. Software Testing and Quality Engineering, 39-44 (November/December, 1999)
7. Dick, J., Faivre, A.: Automating the generation and sequencing of test cases from model-
based specifications. In: Larsen, P.G., Woodcock, J.C.P. (eds.) FME 1993. LNCS, vol. 670,
pp. 268-284. Springer, Heidelberg (1993)
8. Duke, R., Rose, G.: Formal Object-Oriented Specification Using Object-Z. Macmillan Press,
NYC (2000)
9. El-Far, I.K., Whittaker, J.A.: Model-based software testing. In: Marciniak, J.J. (ed.) Encyclo-
pedia of Software Engineering, vol. 1, pp. 825-837. Wiley-Interscience, Chichester (2002)