Evaluating IS Quality as a Measure of IS Effectiveness

INTRODUCTION

An enduring question in information systems research and practice concerns evaluation of the impact of information systems (IS). It endures because, to date, there is no ready solution. Focusing on one aspect, the measurement of IS success or effectiveness, a range of measures is available. At one end of the scale are perceptual measures such as use and user satisfaction; somewhere along the scale are more objective measures such as quality; at the other end are objective measures such as increased market share, price recovery and increased product quality.
Measurement of IS success or effectiveness has been shaped by DeLone and McLean (1992), who proposed a taxonomy and an interactive model that conceptualized and operationalized IS success. However, this was based on theoretical and empirical work from the 1970s and 1980s, published in the period 1981-1988. Information systems are not a static phenomenon; they have progressed and changed. DeLone and McLean (2002, 2003) themselves acknowledged this in their recent revisitation, reexamination and reformulation of their IS success model. Their view correctly affirms that we cannot leave people out of this equation, meaning that objective measures alone are not appropriate. At the same time, the subjectivity of perceptual measures means that they are of questionable usefulness. Taking the middle ground, where quality is the measure, the question then becomes how best to measure the quality of a delivered IS.
In an equation that seeks to define our understanding of the value of information technology (IT) to the business process, the system as a stand-alone object is worthless. The worth of the system lies in its role in the business process, and it is people who make it work in these processes. What is therefore required is a measure that takes account of human reactions to delivered systems. This can be evaluated by considering a variety of end-user stakeholder expectations and/or perceptions as measures of quality. In fact, much insight can be gained by measuring the disconfirmation between expectations of ideal service and perceptions of reality (Wilkin, 2001), particularly if this is assessed at various levels of seniority.


MEASURING QUALITY

Debate has surrounded measuring quality from a disconfirmation perspective (Carr, 2002; Peter, Churchill & Brown, 1993; Van Dyke, Prybutok & Kappelman, 1999). Justification for including expectations (Cronin & Taylor, 1992, 1994; Teas, 1993, 1994; Van Dyke, Kappelman & Prybutok, 1997) centred on the insight it provided about how users formulated perceptions, and about how significant users considered each dimension or statement to be (Carman, 1990; Kettinger & Lee, 1997; Parasuraman, Zeithaml & Berry, 1986; Pitt, Watson & Kavan, 1995). Moreover, expectations are seen as essential to both understanding and achieving IS effectiveness, particularly given the different opinions held by different user stakeholders, where a low or high perception rating alone could be misleading. A measure that includes expectations also provides insight regarding changes in the system environment (Watson, Pitt & Kavan, 1998; Wilkin, 2001).
The perceptions-only measure, another approach to defining and evaluating quality, was proposed in the belief that a measurement of service quality derived from the difference score captured only factors related to service quality and did not measure customers' view of the concept itself (Cronin & Taylor, 1992). However, support can be found for the view that a single measure of performance provides little information about a user's thoughts in relation to product features, or about the process by which performance is converted into understanding by the consumer (Oliver, 1989; Spreng, MacKenzie & Olshavsky, 1996).
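To make the contrast between the two approaches concrete, consider the following minimal sketch in Python. It is illustrative only: the sample ratings are invented, and the scale is oriented here so that higher values indicate stronger agreement.

    # Illustrative sketch: perceptions-only versus disconfirmation scoring
    # for a single quality statement. Ratings use a 1-7 scale (higher =
    # stronger agreement); the sample values are hypothetical.

    expectations = [6, 7, 6, 5, 7]   # ideal-service expectations from five users
    perceptions = [4, 5, 6, 3, 5]    # perceived performance from the same users

    # Perceptions-only score: mean perceived performance (cf. Cronin & Taylor, 1992).
    performance_score = sum(perceptions) / len(perceptions)

    # Disconfirmation score: mean gap between perception and expectation;
    # a negative value signals that the system falls short of expectations.
    gap_score = sum(p - e for p, e in zip(perceptions, expectations)) / len(perceptions)

    print(f"perceptions-only score: {performance_score:.2f}")   # 4.60
    print(f"disconfirmation (gap) score: {gap_score:.2f}")      # -1.60

The perceptions-only score suggests middling performance, while the gap score additionally shows how far performance falls short of what users expected, which is the extra insight the disconfirmation approach claims to offer.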
A definition of quality can serve many contradictory functions: sometimes implicit, sometimes explicit; at times mechanistic, at times humanistic; and sometimes understood conceptually, sometimes operationally. In an IT context, there is no single understanding of the term. Quality, being concerned with the totality of features, is best evaluated as a multi-dimensional construct, using multiple statements to capture the quality of each dimension.
Applying a measure of quality to evaluate something as complex as a delivered IS requires consideration and understanding of the mechanisms that underpin an IS. The DeLone and McLean model conceptualized system quality (not the system) and information quality (not the information). Despite the complexity and technical nature of some IT products, achieving success requires looking beyond the process and delivery of the product to the system as a whole, and asking whether benefits can be gained by focusing on customer views of the quality of the product, product delivery and associated concerns (Wilkin, 2001).
Quality has many elements. If we put this human evaluation of a delivered system into context, then what matters is not just measurement of the system itself (system quality), nor of the information so generated (information quality), but a balanced evaluation that also takes account of service (service quality) and of the IS unit's role in contributing to the effectiveness of the delivered IS (Wilkin, 2001). Support for the argument to include service quality in this evaluation can be found in the work of other researchers too (DeLone & McLean, 2002, 2003; Kettinger & Lee, 1994; Li, 1997; Pitt, Watson & Kavan, 1995; Wilkin & Hewett, 1999).
Assuming a multi-dimensional approach to evaluating the quality of a delivered IS, encompassing the system, information and service aspects, the issue then is which dimensions are important for each aspect (component). Table 1 summarizes the important dimensions (Wilkin, 2001) in measuring each component (system quality, information quality and service quality). What are then required are indicators capable of measuring aspects of each component. These are many, ranging from "responds quickly to all commands" (system quality) to "quickly interpreted" (information quality) and "delivers support in a timely manner" (service quality).
Under this multi-dimensional approach, ratings for the various quality statements (1, 2 and so on), captured on a Likert scale of 1 to 7 (strongly agree to strongly disagree), highlight problematic areas which, when viewed in conjunction with organizational goals and objectives, can facilitate the establishment of priorities.
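As an illustration of how such ratings might be aggregated to expose problematic areas, the sketch below follows the component/dimension/indicator structure described above. The dimension names are taken from Table 1, but the gap values and the flagging threshold are assumptions made purely for the example.

    # Illustrative aggregation of a multi-dimensional quality instrument.
    # Components contain dimensions (per Table 1); each dimension holds mean
    # perception-expectation gaps for its indicator statements (hypothetical data).

    ratings = {
        "system quality": {
            "functionality": [-0.5, -1.2],   # e.g., "responds quickly to all commands"
            "reliability": [-2.1, -1.8],
        },
        "information quality": {
            "accuracy": [-0.3, 0.1],
            "presentation": [-1.9, -2.4],    # e.g., "quickly interpreted"
        },
        "service quality": {
            "responsiveness": [-2.2, -1.5],  # e.g., "delivers support in a timely manner"
        },
    }

    THRESHOLD = -1.0  # assumed cut-off: mean gaps below this are flagged

    for component, dimensions in ratings.items():
        for dimension, gaps in dimensions.items():
            mean_gap = sum(gaps) / len(gaps)
            flag = "problematic" if mean_gap < THRESHOLD else "acceptable"
            print(f"{component} / {dimension}: {mean_gap:+.2f} ({flag})")

Viewed alongside organizational goals and objectives, such a per-dimension breakdown indicates where remedial effort should be prioritized, which a single overall score cannot do.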
At a strategic level, the merits of this approach, where multiple dimensions and statements are used to evaluate the quality/effectiveness of an information system, relate to the ease and simplicity with which insight into the system in question is provided. Predecessors captured quality, or surrogates of quality, in a single statement, thereby limiting the insight provided to interested parties about which aspects of the business system/application stakeholders perceive as problematic. Thinking beyond the impact on the individual and organization, the value provided by such an approach is significant in light of the advancement of organizations to what Drucker (1988) forecast as the third period of change in organizational structure, namely the information-based organization. Herein, "information is data endowed with relevance and purpose and knowledge, by definition, is specialized" (Drucker, 1988, p. 58). It is accordingly vital that the IS delivers information of the required quality.
In line with Drucker (1988), this multi-dimensional approach allows the evaluator to directly target and compile the views of a broad cross-section of stakeholders regarding the quality of the IS with respect to the performance of their duties.
At an operational level, the merits of the approach include:
• the flexibility to add and subtract dimensions for each component according to users' requirements;
• the use of different dimensions to measure the different components of quality;
• the capability for benchmarking, where expectations, measured at intermittent intervals, are balanced with more timely assessments and reassessments of perceptions (see the sketch after this list);
• the opportunity, because of the use of dimensionality, to discover specific problematic areas, and then “drill down” into those areas; and
• improvement in the "usefulness" of the results through the addition of statements specific to the situation, something that is offset to a degree by the resulting increase in length.
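To illustrate the benchmarking point noted in the list above, the following sketch compares a baseline expectations survey against successive reassessments of perceptions, so that movement in the gap can be tracked over time. All figures and survey periods are hypothetical.

    # Illustrative benchmarking: expectations captured at an intermittent interval,
    # perceptions reassessed more frequently. All values are hypothetical.

    baseline_expectation = 6.2   # mean ideal-service expectation for one dimension

    perception_surveys = {       # mean perception per quarterly reassessment
        "Q1": 4.1,
        "Q2": 4.6,
        "Q3": 5.3,
    }

    for period, perception in perception_surveys.items():
        gap = perception - baseline_expectation
        print(f"{period}: gap {gap:+.2f}")   # a shrinking negative gap shows improvement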

FUTURE TRENDS

Despite much work having been done on evaluating the impact of IS, further investigation is warranted to balance subjective and objective measures of the quality of these systems. Answers will probably flow from the debate concerning the relative merits, desirability and relevance of monetary evaluations balanced against subjective end-user stakeholder judgments of IS performance and productivity.

Table 1. Important dimensions in measuring system quality, information quality and service quality

System Quality   Information Quality   Service Quality
Functionality    Accuracy              Expertise
Integration      Availability          Credibility
Usability        Relevance             Availability
Reliability      Presentation          Responsiveness
Security         Promptness            Supportiveness

CONCLUSION

Iacocca’s (1998) words, quality “doesn’t have a beginning or a middle. And it better not have an end” (p. 257), are as valid today as ever, since the realization of high quality/effectiveness is only achievable when it becomes an intrinsic part of business operations through every stakeholder’s mindset.
The quality-based, multi-dimensional approach to the evaluation of a delivered IS outlined here (comprising components, dimensions and indicators) enables problematic areas to be pinpointed more accurately. The magnitude of organizations' investment in and commitment to IT, compounded by the increasingly complex and interwoven nature of IS, makes evaluating the quality of a delivered IS a significant issue. In this regard, this article has discussed a number of critical issues, which offer implications and challenges to business and researchers alike. Hence, despite persistent difficulties in measuring the quality of these delivered systems (Davis, 1989), we should pursue work on balancing subjective and objective measures of quality in a timely manner.

KEY TERMS

Component: A term used to describe an information system and its composition for the purposes of this work. Specifically, the components in this work are system quality, information quality and service quality.
Dimension: Refers to the determinants of quality of each of the three components, namely, system quality, information quality and service quality.
Expectations: These have a future time perspective and a degree of uncertainty. Expectations are a set of beliefs, held by targeted users of an information system, associated with certain attributes or outcomes. They are associated with the eventual perception of a system and with the performance of the system.
Indicator: A term used to refer to something that would point to quality or a lack thereof.
Information Quality: A global judgment of the degree to which these stakeholders are provided with information of excellent quality with regard to their defined needs, excluding user manuals and help screens (features of system quality).
IS Success: A global judgment of the degree to which these stakeholders believe they are better off. The term is sometimes used interchangeably with IS effectiveness.
Perceptions: Contingent upon prior expectations, perceptions have been used by some as a reality check of expectations, where an assessment of quality is derived by the disconfirmation of the two. Moreover, they have also been proposed as a measure of adequacy (perceptions)/importance.
Quality: An elusive and indistinct construct defined in terms of customer perceptions and expectations. In arriving at a definition, one must take account of both audience and circumstance. There has been some attempt to define it as a global judgment about a product’s (or service’s) overall excellence. Quality can be measured on the basis of customer expectations and perceptions, along the lines of Brown’s (1992, p. 255) definition: “[t]he totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs”.
Service Quality: A global judgment or attitude relating to an assessment of the level of superiority or excellence of service provided by the IS department and support personnel.
System Quality: A global judgment of the degree to which the technical components of delivered IS provide the quality of information and service as required by stakeholders, including hardware, software, help screens and user manuals.
