analysis process (such as false positives) and, for instance, the level of esoteric
correlations which would be found acceptable for future usage.
Intuitively, however, when addressing transparency, public opinion focuses on
stage (c): the actual strategies and practices the government applies in using
the data. In other words, these are the predictive models formulated through the
data mining process; for instance, the actual "profiles" according to
which DHS or other entities single out individuals or events. Governments are
reluctant to provide transparency at this juncture, and expose the relevant
information to public scrutiny. Such reluctance is mirrored in the existing legal
rules. For instance, in the US, the Internal Revenue Service does not share the
details of the audit profiling algorithm it applies (Schauer, 2003).
Formulating a theoretical framework to achieve transparency at this juncture is
challenging. Accounting for the way predictive modeling truly transpires quickly
leads to a conclusion that simple solutions previously contemplated are outdated.
Regulation cannot merely call for disclosing the factors used in a profiling
scheme. With advanced prediction, there is no static “profile” to reveal. There is
merely a dynamic learning process. Rather than a set profile, the government uses
an algorithm that singles out higher risk events. But such an algorithm cannot be
disclosed in a simplified format. The algorithm might rely on a complex
association rule that includes a multitude of factors, as well as the
interactions among them. In other instances, the algorithm might rely on
clusters of factors and attributes with blurry and constantly shifting borders,
which are used
to identify higher risks. Conveying information about these practices to the public
in an understandable way calls for setting new regulatory paradigms in place.
Obviously, whether the process is interpretable or non-interpretable will impact
the ability to achieve transparency at this juncture.
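To illustrate why such a rule resists disclosure as a static "profile," consider a minimal association-rule sketch in the support/confidence style common in data mining. All attribute names, records, and thresholds below are hypothetical, and the rule's strength shifts whenever new records arrive:

```python
# Toy "event" records: each is a set of observed attributes.
# Every attribute name here is a hypothetical illustration.
events = [
    {"cash_ticket", "one_way", "no_luggage"},
    {"cash_ticket", "one_way"},
    {"cash_ticket", "no_luggage"},
    {"round_trip", "no_luggage"},
    {"cash_ticket", "one_way", "no_luggage"},
]

def support(itemset, records):
    """Fraction of records containing every attribute in itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent, records):
    """P(consequent | antecedent), estimated from the records."""
    return support(antecedent | consequent, records) / support(antecedent, records)

# A multi-factor rule: the interaction of two attributes predicts a third.
rule_conf = confidence({"cash_ticket", "one_way"}, {"no_luggage"}, events)
print(round(rule_conf, 2))  # prints: 0.67
```

Because the confidence value is recomputed as records accumulate, disclosing today's rule says little about tomorrow's; this is the dynamic learning process the text describes.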
Moreover, achieving meaningful transparency at this stage calls for an
additional set of disclosure rules. The government might not only be required to
present the factors correlated with the events it strives to predict, but also to establish
a causation theory that stands behind the selection of these factors. Furthermore,
the government might be required to ensure that the prediction schemes do not
involve the use of factors (either directly or by proxy) that society finds
discriminatory and unethical. To achieve this objective, the government would be
required to conduct studies examining the impact of the prediction scheme. Only
with such information could the process be considered transparent. In other
words, these measures will call upon government to produce new information,
rather than provide access to information it already has (Weil et al., 2011).
Finally, unique transparency requirements relate to the last segment of
predictive analysis, (d): the feedback process following the use of the model.
Examining the use of predictive models can lead to important insights. It reveals
how many of those indicated as a higher risk turn out to be of no risk at all (false
positives). It could further indicate how many of those considered lower risk
should have been flagged as high risk yet were "missed" by the analysis (false
negatives). In addition, analysis of the ongoing process will provide
information as to whether the practices facilitated de facto illegal or unethical