software testing. Data from industry tend to be collected and processed in an informal and even biased way, so they cannot be trusted as an accurate view of reality; academia usually has great difficulty accessing software professionals and organizations to obtain information. The studies presented above are a contribution toward overcoming this lack of information.
In general, besides the need to improve organizational and individual practices, many of the 23 explanatory factors have been confirmed by a varied and significant sample of 127 software professionals, so there is now a guideline for improving software testing conditions. One of the most evident barriers is the lack of training and expertise, a finding consistent with other surveys [14], although market maturity and career issues are also considered very important factors. The traditional divorce between development deliverables and test case design methods is remarkable, something also detected in the data on individual practices (Section 2).
We are now working to launch this survey across Europe with the help of CEPIS to check whether the factors are common or local differences arise. To support this effort, we intend to use the results of a factorial analysis of the questionnaires to group items as well as to establish the final model of factors to be applied. In any case, the model would also be useful to other researchers who may collect data in this area.
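As a rough illustration of the kind of item grouping a factorial analysis yields, the sketch below runs an exploratory factor analysis over questionnaire responses and lists the items loading most strongly on each factor. The file name, column layout, and the choice of five factors are illustrative assumptions, not details of our actual analysis.

import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical input: one row per respondent, one column per questionnaire item.
responses = pd.read_csv("survey_responses.csv")

# Fit a five-factor model; in practice the number of factors would be chosen
# from the eigenvalues or a scree plot of the item correlation matrix.
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(responses.values)

# Loadings: one row per factor, one column per item. Items that load strongly
# on the same factor are candidates for grouping in the final model.
loadings = pd.DataFrame(fa.components_, columns=responses.columns)
for i, row in loadings.iterrows():
    top_items = row.abs().sort_values(ascending=False).head(4)
    print(f"Factor {i + 1}: {list(top_items.index)}")

Items with high loadings on the same factor would then be merged into a single construct in the final model of factors applied across countries.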
Acknowledgments
This study was supported by the projects TIN2007-67843-C06-01 and TIN2007-30391-E, partially funded by the Spanish Ministry of Science and Innovation.
References
1. Jones, C.: Estimating Software Costs. McGraw-Hill, New York (1998)
2. Grindal, M., Offutt, J., Mellin, J.: On the Testing Maturity of Software Producing Organizations: Detailed Data. Technical Report ISE-TR-06-03, Department of Information and Software Engineering, George Mason University (2006)
3. McGarry, F., Pajerski, R., Page, G., Waligora, S., Basili, V., Zelkowitz, M.: Software Process Improvement in the NASA Software Engineering Laboratory. Technical Report CMU/SEI-94-TR-22, SEI, Carnegie Mellon University (1994)
4. Martin, D., Rooksby, J., Rouncefield, M., Sommerville, I.: 'Good' Organisational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in a Small Software Company. In: Proceedings of the 29th International Conference on Software Engineering, pp. 602-611 (2007)
5. Fernandez-Sanz, L.: Un sondeo sobre la práctica actual de pruebas de software en España [A survey on the current practice of software testing in Spain]. REICIS 2, 43-54 (2005)
6. SEI: CMMI® for Development. SEI-Carnegie Mellon University (2006)
7. Paulk, M., Weber, C., Curtis, B., Chrissis, M.: The Capability Maturity Model. Addison-Wesley, Reading (1995)
8. van Veenendaal, E.: Test Maturity Model Integration (TMMi), Version 1.0. TMMi Foundation (2008), http://www.tmmifoundation.org
9. Burnstein, I.: Practical Software Testing. Springer, Heidelberg (2002)
10. van Veenendaal, E.: Guidelines for Testing Maturity. STEN IV, 1-10 (2006)