Supporting Assurance and Compliance Monitoring

INTRODUCTION

Governments and commercial organizations typically use monitoring facilities that depend on data identifying source agents and their relationships to detect and draw attention to possible anomalies and potential non-compliance.
Assuring compliance monitoring requires decision support and domain knowledge, appropriate to the level of the user, to manage the results of the surveillance and to gather the necessary and sufficient evidence that verifies or refutes this output.

BACKGROUND

This article discusses methods to support the assurance of surveillance monitoring and output verification through compliance verification knowledge management (CV-KM), including a brief discussion of primary monitoring systems; the different environments in which they operate; the verification problem-solving and decision-making tasks; the problem structure; and the coordination of the review process to facilitate truth maintenance. The surveillance operation is considered the primary monitoring function, with the analysis of the resulting output the secondary monitoring function: the assurance component.
Examples of monitoring systems range from standard data processing routines that ensure internal control, such as data input, processing and output compliance (Weber, 1999, provides a comprehensive discussion of these processes), to the monitoring of events transacted in more complex environments, such as fraud detection, intrusion detection and data mining systems, via sophisticated statistical, artificial intelligence and neural computing techniques, or hybrid combinations. These devices are termed primary surveillance systems (PSS).
Assuring, verifying and managing PSS information quality and integrity is fundamental to the success of modern information-dependent organizations. Concurrent with the need for surveillance is a need to maintain personal privacy, due diligence, and accountability (Cillufo, 2000).
Clarke (1988) highlights the inherent dangers of drawing conclusions from the electronic monitoring of data related to individuals and groups of individuals, and points out that a major problem in “dataveillance” is the high noise-to-signal ratio, which may be misleading. Davis and Ord (1990) acknowledge the problem of setting threshold levels in an ever-changing environment. With any set of tolerance levels, deviant (even fraudulently motivated) behaviour may escape detection. Tightening tolerance limits increases the likelihood that exception conditions will trigger an alert, but also increases false positive alerts, since the number of instances that fall outside the tolerance increases. The cost for the analyst (the decision-maker) to review the additional non-exception alerts must be assessed against the imputed value of identifying the additional true exceptions detected by the more stringent limits (Davis & Ord, 1990).
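This trade-off can be illustrated with a short sketch. The following Python fragment is purely illustrative (the scores, labels, review cost and detection value are invented, not drawn from Davis and Ord) and shows how the cost of reviewing additional alerts can be weighed against the imputed value of the additional true exceptions that a tighter tolerance would surface.

# Hypothetical sketch of the tolerance-limit trade-off: lowering the alert
# threshold (tightening the tolerance band) raises both the detections and the
# analyst's review load. All figures are invented for illustration.

def alert_trade_off(scores, labels, threshold, review_cost, value_per_detection):
    """Return (alerts, true_exceptions, net_value) for a given alert threshold.

    scores -- anomaly scores produced by a primary monitoring system
    labels -- True where the event is genuinely non-compliant (known in hindsight)
    """
    alerts = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
    true_exceptions = sum(1 for _, y in alerts if y)
    net_value = true_exceptions * value_per_detection - len(alerts) * review_cost
    return len(alerts), true_exceptions, net_value


if __name__ == "__main__":
    scores = [0.2, 0.4, 0.55, 0.6, 0.7, 0.8, 0.95]
    labels = [False, False, False, True, False, True, True]
    for t in (0.5, 0.7, 0.9):
        # Each line shows how many alerts the analyst must review at this
        # threshold, how many are true exceptions, and the resulting net value.
        print(t, alert_trade_off(scores, labels, t, review_cost=10, value_per_detection=100))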
Advances have, in general, reduced the problem of misleading results produced from “noisy data,” including improvements in data processing and the increased use of sophisticated computational techniques such as statistical, knowledge-based and artificial neural methods. These systems are centered on the events being monitored and the events’ source agents. Their results, however, may still require human judgment to determine their validity (Goldschmidt, 2001). CV-KM systems act as a secondary monitoring facility supporting, verifying and assuring data and information compliance by assisting in analyzing and categorizing the exceptions, or results, generated by PSS. CV-KMs assist in assuring the fulfillment of the necessary and sufficient evidence supporting (true positive/negative) or refuting (false positive) hypotheses of non-compliance. The input to a CV-KM is the output of the organization’s domain-specific PSS plus related information. Operationally, CV-KMs are a bolt-on addition to the PSS.


WHAT ARE PRIMARY SYSTEMS?

Typically, these systems examine the integrity of transaction data as well as the entire transaction, or event, to ensure compliance with predetermined conditions. An exception report identifies any variances. This identification either fulfills the conditions of necessary and sufficient evidence and determines an instance of non-compliance, or indicates possible non-compliance. In the latter case further evidence may be sought to substantiate the hypothesis of non-compliance.
The function of PSS is twofold: identifying a variance, and producing and accumulating supporting evidence. When both these conditions are met, the evidence points to the detective, corrective or preventative actions required.
The detective function is fulfilled by recognition of the variance; correction can then be made to the data or the event, which is then reprocessed. The preventative function is fulfilled by the recognition of the variance resulting in the rejection of the event. Decision-makers must interpret ambiguous evidence to determine what action is required, or if the non-compliant indicator is a true or a false positive directive.
Examples of PSS range from standard data processing routines that ensure internal control, such as data input, processing and output compliance, to the use of sophisticated statistical (procedural) techniques, artificial intelligence (declarative) techniques and neural (associative) techniques, or hybrid combinations. In general, computational techniques are either demons or objects (O’Leary, 1991; Vasarhelyi & Halper, 1991). Demons are computerized routines that are instantiated by the data or events received, as opposed to being requested by some program. “Demons add knowledge to a system without specification of where they will be used … like competent assistants they do not need to be told when to act” (Winston, 1977, p. 380). They are data or event dependent, rather than program dependent, and provide intelligent self-activation for monitoring data triggered by compliance threshold levels. O’Leary points out that demons have been developed to monitor patterns for the purpose of auditing activities conducted on computer-based systems. Vasarhelyi and Halper describe an alternative, the Continuous Process Audit System (CPAS), which allows for the continuous audit of on-line systems by monitoring transactions to determine the variance between monitored information and expected information.
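The demon idea can be sketched as follows. The code is an illustration only, not O’Leary’s or Vasarhelyi and Halper’s implementations; the event fields and the 50,000 limit are assumptions made for the example. The routine is attached to the data stream and activates itself whenever a compliance threshold level is crossed, posting an exception rather than waiting to be invoked by an application program.

# Illustrative demon-style monitor: it observes every event posted to the
# stream and decides for itself whether to raise an exception, rather than
# being called explicitly by an application program. Field names and the
# threshold are hypothetical.

class ThresholdDemon:
    def __init__(self, field, limit, on_exception):
        self.field = field
        self.limit = limit
        self.on_exception = on_exception   # callback that records the exception

    def observe(self, event):
        """Called for every event; the demon self-activates on a breach."""
        value = event.get(self.field, 0)
        if value > self.limit:
            self.on_exception({"event": event, "rule": f"{self.field} > {self.limit}"})


exceptions = []
demon = ThresholdDemon("amount", 50_000, exceptions.append)
for ev in [{"id": 1, "amount": 12_000}, {"id": 2, "amount": 75_000}]:
    demon.observe(ev)   # only event 2 generates an exception report entry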

THE PSS AND CV-KM ENVIRONMENT

PSS and CV-KM can be classified by level of complexity, characterized by their place on the simple-to-complex environmental continuum in which they operate and by the decisions required to determine instances of non-compliance. Constraints may take the form of an organization’s predetermined policies and procedures needed to ensure data and event integrity, contractual agreements, and statutory requirements. These constraints are not mutually exclusive and can be seen as bounds or threshold levels. The parameters used to construct these levels may change with modifications to threshold requirements, such as evolutionary changes in constraints and changes in data and event requirements. A simple environment is so called because: 1) the threshold levels either seldom change or change only over the longer term; 2) the identification of the variance fulfils the conditions of necessary and sufficient evidence to determine an instance of non-compliance; and 3) the decisions needed to determine whether events comply lie on the structured to highly structured portion of the decision-making continuum. The degree to which the bounds of the threshold levels are set, from very narrow to very broad, determines the type of decision required. In a simple environment the bounds, or threshold limits, are narrow, characteristic of structured decisions such as data input integrity and customer credit checks. Decision-making in this environment is ex-ante, made in a single step, and the constraints are all predetermined.
In a complex environment, decision-making is ex-post, complex and may require multiple steps. Initial monitoring uses a priori thresholds that are broader, that is, more granular, than in a simple environment, and produces exceptions that identify suspected non-compliant events (SNCEs). Once these exceptions have been produced, the decision-maker must substantiate the true positive exceptions. This task must be broken down into smaller components and sub-goals must be developed (Simon, 1973) to identify, categorise and discard any false positive exceptions. False negatives do not generate an exception, and allow possible suspect events to slip through the surveillance sieve. If the threshold limits are stringent enough, marginal false negatives could be subsumed and considered later. Nevertheless, this would not necessarily reduce the occurrence of true false negatives, as their characteristics may not be known. True positives are those exceptions that the decision-maker has determined are indeed anomalous. Evidence for this decision uses the results of the initial monitoring as well as important information related to the event, characterized by a need for judgmental expertise. Examples of these approaches to complex environments include Byrnes et al. (1990), Major and Riedinger (1992), Senator et al. (1995), and Kirkland et al. (1999).
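The two-step structure described above can be sketched as follows. The scores, identifiers and the legitimate-reason lookup are hypothetical; the lookup stands in for the analyst's judgmental review of information related to each suspected event.

# Hypothetical sketch: a broad primary filter produces SNCEs, and a secondary,
# judgmental step classifies each SNCE as a true or false positive once related
# information has been reviewed.

def primary_filter(events, threshold):
    # Broad a priori threshold: events scoring at or above it become SNCEs.
    return [e for e in events if e["score"] >= threshold]

def secondary_review(snces, related_info):
    # An SNCE with a legitimate explanation is discarded as a false positive;
    # otherwise the non-compliance hypothesis stands and it is a true positive.
    true_positives, false_positives = [], []
    for e in snces:
        if related_info.get(e["id"], {}).get("legitimate_reason"):
            false_positives.append(e)
        else:
            true_positives.append(e)
    return true_positives, false_positives


events = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.8}, {"id": 3, "score": 0.3}]
related = {1: {"legitimate_reason": "authorised year-end adjustment"}, 2: {}}
snces = primary_filter(events, threshold=0.7)          # events 1 and 2 become SNCEs
tp, fp = secondary_review(snces, related)              # event 2 remains a true positive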

CV PROBLEM SOLVING AND DECISION-MAKING TASKS

Secondary monitoring problem solving, the human evaluation of the exceptions produced by the primary monitoring system, determines whether a generated exception is feasible. This is similar to an analytical review (AR) conducted by auditors, characterised by Libby (1985) as a diagnostic-inference process. Koonce (1993) defines AR as the diagnostic process of identifying and determining the cause of unexpected fluctuations in account balances and other financial relationships. Similarly, secondary monitoring problem solving identifies and determines the causes of unexpected variances resulting from the primary monitoring facility. Blocher and Cooper (1988) found that AR typically follows four distinct diagnostic inference components: accumulation and evaluation of relevant information; initial recognition of unusual fluctuations; subsequent hypothesis generation; and information search and hypothesis evaluation.
With CV-KM, accumulation and evaluation is guided by the results of the PSS. Subsequently, a hypothesis of the potential causes of the observed variance is generated. The diagnostic approach takes the form of defeasible logic, which means that any inference made is only tentative, as it may require revision if new information is presented. The decision-maker must evaluate all possible legitimate reasons for the occurrence of the variance. If none is found, the hypothesis of non-compliance is strengthened.
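The defeasible character of this inference can be sketched as below; the example evidence is invented and the three-way status is an illustrative simplification of the revision process.

# Minimal sketch of defeasible inference: the conclusion of non-compliance is
# tentative and is withdrawn as soon as a legitimate explanation for the
# variance is found, but it can be reinstated if later information undermines
# that explanation. The evidence items are hypothetical.

def evaluate_hypothesis(variance_detected, legitimate_explanations):
    """Return the current, revisable status of the non-compliance hypothesis."""
    if not variance_detected:
        return "no exception"
    if legitimate_explanations:
        return "defeated (false positive, pending new information)"
    return "strengthened (non-compliance suspected)"


status = evaluate_hypothesis(True, [])
# New information arrives: the variance is explained by a documented contract
# variation, so the earlier tentative inference is revised.
status = evaluate_hypothesis(True, ["documented contract variation"])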

CV PROBLEM STRUCTURE

Following Sol (1982), the structuredness of the complex problem is twofold: the variance identification is the structured component, and the accumulation of evidence supporting or refuting the non-compliant event (NCE) hypothesis is the ill-structured component. The variance is typically the product of some algorithm indicating a possible occurrence of an NCE, but substantiating a true NCE requires an accumulation of evidence that depends on judgment of agent behaviour. This judgment considers the source of the event, the source agent’s possible motivations, the environment in which the source agent is operating, and the impact the event may have on that environment.

COORDINATION: THE REVIEW PROCESS TO FACILITATE TRUTH MAINTENANCE

Coordination refers to the managing of interactions between multiple agents cooperating in some collective task. Pete et al. (1993) show that optimal organizational design depends on the task environment and, as with an audit team or group, is hierarchical. The objective is to reduce the problems discussed by Freedman (1991), to reduce any potentially redundant activities conducted by the evaluating agents, and to increase efficiency and effectiveness. The agents may be human or machine based. Machine-based, or independent software, agents function as repositories of human opinions related to the event under scrutiny.
The process of review when evaluating judgments made on accounting data and information is well established in the auditing literature (Libby & Trotman, 1993). To facilitate coordination, evaluating agents should communicate their findings via a communication protocol, which establishes the means and modes of communication between agents. Information exchange can be either via an implicit communication mechanism, such as a common memory or blackboard (Hayes-Roth et al., 1983), or via explicit communication mechanisms, such as message sending. Using the blackboard approach, the SNCE’s details plus the evaluating agents’ assumptions and results are posted. This allows the more senior agents to impose their criteria on junior agents’ results, as well as to use their task-specific criteria to further refine the classifications.
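A minimal sketch of the blackboard form of implicit communication follows. It is not the cited Hayes-Roth et al. architecture itself; the agent names, SNCE identifier and findings are assumptions made for illustration.

# Illustrative blackboard: evaluating agents post their assumptions and findings
# against an SNCE to a shared memory, and a more senior agent reads the postings
# and imposes its own criteria to refine the classification. The posting history
# remains available as an audit trail.

class Blackboard:
    def __init__(self):
        self.postings = []          # shared memory visible to all agents

    def post(self, agent, snce_id, finding, assumptions):
        self.postings.append({"agent": agent, "snce": snce_id,
                              "finding": finding, "assumptions": assumptions})

    def findings_for(self, snce_id):
        return [p for p in self.postings if p["snce"] == snce_id]


board = Blackboard()
board.post("junior_analyst", snce_id=17, finding="possible false positive",
           assumptions=["seasonal adjustment explains variance"])
board.post("senior_analyst", snce_id=17, finding="true positive",
           assumptions=["seasonal adjustment already reversed in prior period"])
# The senior agent's posting refines (here overrides) the junior classification,
# while both postings remain on the blackboard for truth maintenance and review.
final = board.findings_for(17)[-1]["finding"]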
Computerised decision support systems have been proposed and built to address some of the previously mentioned problems. A limited framework for a CV-KM intelligent decision support system using multi-agent technology is presented in Chang et al. (1993) and Goldschmidt (1996, 2001).

FUTURE TRENDS

With the increasing reliance on electronic communications in business, industry, medicine, defense and government, assuring, verifying and managing the integrity of transactions, and managing the results of these monitoring systems for information quality and integrity, is fundamental to the success of modern information-dependent organizations. Concurrent with the need for surveillance is a need to maintain personal privacy, due diligence, and accountability (Cillufo, 2000).

Advantages of CV-KM

The company ALERT-KM Pty Ltd holds the CV-KM IP in 24 countries and is currently commercializing this technology.

• Adds functionality to the primary monitoring infrastructure without modifying the primary system.
• Proposes a framework for compliance verification knowledge management.
• Provides for the decomposition of surveillance tasks.
• Provides a consistent evidence evaluation and combination structure.
• Provides records of evidence from each stage.
• Adds value to surveillance operations by reducing the cost of surveillance monitoring, assisting in surveillance accountability and providing transparency, when required, thereby contributing to surveillance governance and due diligence.
• Adds value to a generated exception by encapsulating and associating the event’s attributes, its source agent’s characteristics, the evaluating agent’s analysis and the recommended remedial action, plus the substantiating evidence (a sketch of such a record follows this list).
• Exploits an infrastructure support construct and secondary filter, allowing for collaboration, truth maintenance, audit trails and decision support, thereby facilitating decision consistency and greater processing volume.
• Used as a decision aid and secondary filter, analysis of the results can be fed back to review the analyst’s decision-making processes and to refine the primary filter tolerance levels.
• Supports a structured, flexible and inclusive approach to surveillance analysis.
• Adding a cost function to the surveillance-monitoring infrastructure captures the cost-benefit trade-off.
• Provides insight through the knowledge acquisition component when setting up parameters and heuristics.
• Assists in the development of an effective accountability structure.
• Reduces distrust of surveillance monitoring systems by reinforcing accountability and transparency.
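As a hypothetical sketch of the encapsulation mentioned in the list above, the following record type associates an exception with its event attributes, its source agent’s characteristics, the evaluating agent’s analysis, the recommended remedial action and the substantiating evidence. All field names and values are illustrative assumptions rather than a prescribed schema.

# Illustrative record wrapping a verified exception together with its context,
# analysis, recommended action and evidence trail.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VerifiedException:
    event_attributes: dict                 # the transaction or event as monitored
    source_agent_profile: dict             # characteristics of the event's source agent
    analysis: str                          # the evaluating agent's judgment
    recommended_action: str                # detective, corrective or preventative step
    evidence: List[str] = field(default_factory=list)   # substantiating evidence trail


record = VerifiedException(
    event_attributes={"id": 17, "amount": 75_000},
    source_agent_profile={"history": "two prior exceptions"},
    analysis="true positive: no legitimate explanation found",
    recommended_action="refer for corrective review",
    evidence=["primary filter alert", "senior analyst blackboard posting"],
)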

Data mining technology, distributed heterogeneous database access and information distribution have moved from a silo approach to a more pooled approach within organizations. This has created a further need for information assurance that relies on data monitoring. Managing the accuracy and validity of the monitoring output therefore necessitates assuring the decisions made on the basis of this information.

CONCLUSION

CV-KM operates in highly complex environments; domains where the threshold granularity is high and the decision-making time is short may benefit from the decision support discussed. It is essential for accountability that organizations in these domains ensure that transactions identified as suspected NCEs are scrutinized and substantiated. This assists in minimizing the false positive conclusions that may result from the speed, volume and increased complexity of transactions and of the information used to analyze them. CV-KM also addresses some of the problems highlighted by Clarke (1988), namely that the electronic monitoring of data related to individuals and groups of individuals carries inherent dangers of drawing misleading conclusions from those data. Assurance and compliance monitoring team infrastructure support draws on aspects of information systems, cognitive science, decision support and auditing judgment. Fuzzy set theory is advocated in decision environments where there may be a high degree of uncertainty and ambiguity, catering for qualitative and quantitative evidence validating and assuring the assertion of non-compliance.
Current research efforts in monitoring and assurance systems (Roohani, 2003; Schneier, 2001; SRI, 1999; UCD, 1996) still concentrate on improving the efficiency and accuracy of primary monitoring systems. Whilst this is necessary, further research opportunities exist in addressing and improving the utility and effectiveness of support for the analysts responsible for evaluating the results of these primary systems and for ensuring their accountability.
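As an illustration of how fuzzy set theory might grade such evidence (the membership values and the standard min/max combination rules are assumptions for the example, not a prescribed method):

# Illustrative fuzzy grading: membership values express the degree to which
# qualitative and quantitative evidence supports the assertion of non-compliance,
# rather than forcing a hard yes/no at each step.

def fuzzy_and(*memberships):
    return min(memberships)     # conjunction: all supporting evidence must hold

def fuzzy_or(*memberships):
    return max(memberships)     # disjunction: any legitimate explanation suffices

support_for_non_compliance = fuzzy_and(0.8,   # variance well outside tolerance
                                       0.6)   # source agent's pattern unusual
support_for_explanation = fuzzy_or(0.3,       # weak evidence of data-entry error
                                   0.2)       # weak evidence of timing difference
# Strength of the non-compliance assertion after discounting the explanation.
assertion_strength = min(support_for_non_compliance, 1 - support_for_explanation)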

KEY TERMS

Complex Environments: Complexity increases as the granularity increases, the frequency of changes increases, the time available decreases, and the degree of judgment required increases. Decision-making is ex-post, complex and may require multiple steps. Initial monitoring uses a priori thresholds that are broader, that is, more granular, than in a simple environment, and produces exceptions that identify suspected non-compliant events (SNCEs). Evidence for decision-making uses the results of the initial monitoring as well as important information related to the event, characterized by a need for judgmental expertise. Examples of these approaches to complex environments include Byrnes et al. (1990), Major and Riedinger (1992), Senator et al. (1995), and Kirkland et al. (1999).
Compliance Verification: Ensuring the necessary and sufficient evidence supports the assertion of non-compliance.
Dataveillance: Surveillance of data using automated data analysis to identify variances. These analyses typically depend on data that identify source agents and their relationships, and are used to draw a compliance analyst’s attention to a particular event or group of events indicating possible anomalies.
Primary Surveillance Systems (PSS): Processes, methods or devices that monitor data identifying events, their source agents and their relationships, to draw attention to possible anomalies and potential non-compliance.
Secondary Monitoring: Secondary monitoring supports agents in verifying and assuring data and information compliance by assisting in analyzing and categorizing the exceptions, or results, generated by PSS. This assists in assuring the fulfillment of the necessary and sufficient evidence supporting (true positive/negative) or refuting (false positive) hypotheses of non-compliance. The input is the results of the organization’s domain-specific PSS plus related information and judgment supplied by either human agents or machine agents encoded with heuristics.
Simple Environments: Monitoring environments in which the threshold levels either seldom change or change only over the longer term; the identification of the variance fulfils the conditions of necessary and sufficient evidence to determine an instance of non-compliance; and the decisions needed to determine whether events comply lie on the structured to highly structured portion of the decision-making continuum. Decision-making in this environment is ex-ante, made in a single step, and the constraints are all predetermined.
Supporting Compliance Monitoring Assurance: A process or method supporting human or machine agents in verifying and assuring the validity of a generated suspected non-compliant event by assuring the necessary and sufficient evidence supporting the hypothesis of non-compliance.
Suspected Non-Compliant Event (SNCE): An event that triggers an exception by the primary monitoring facility, indicating an instance of possible non-compliance. The exception still requires verification.
