possess such programs. Stoecker (2007) notes that
not-for-profits collect a great deal of data but use
little of what they collect. One step toward using
these data effectively is to think systematically about
the programs an agency delivers, asking what each
program is intended to achieve and with what results
for the client group. This approach is known as the
logic model: agencies specify inputs, activities,
outputs, and outcomes.
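The four components of a logic model can be represented as a simple record. The sketch below is illustrative only; the program name, field values, and structure are hypothetical, not drawn from the source.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One program's logic model: resources in, work done,
    products delivered, and intended changes for clients."""
    program: str
    inputs: list = field(default_factory=list)      # resources: staff, funding, facilities
    activities: list = field(default_factory=list)  # what the program does
    outputs: list = field(default_factory=list)     # countable products of the activities
    outcomes: list = field(default_factory=list)    # intended results for the client group

# Hypothetical example of a completed logic model.
tutoring = LogicModel(
    program="After-school tutoring",
    inputs=["two paid tutors", "volunteer coordinator", "grant funding"],
    activities=["weekly one-on-one tutoring sessions"],
    outputs=["number of sessions delivered", "number of students served"],
    outcomes=["improved reading scores for participating students"],
)
print(tutoring.program, "->", tutoring.outcomes[0])
```

Writing the model down in this structured form makes it straightforward to check that every activity is linked to an output and every output to an intended outcome.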
When interpretative approaches are used, accountability
and the use of information systems
pose special challenges. Funders may dislike
these approaches because outcomes are not clearly
quantifiable and are therefore hard to evaluate and
compare with similarly funded programs. Nevertheless,
it is possible to meet funder accountability
requirements using information systems. Emergent
interpretative models rely on the description
and documentation of multiple views, relationship
building, understanding of complexity, consensus-building
processes, options and possibilities, and
the learning process. These can be documented in
reflective journals, narratives of critical incidents,
day journals, contact logs, methods journals,
notes of meetings, and transcripts of significant
meetings, supplemented with various
forms of quantitative data. Case management
software can store these materials and support
making and editing notes; the records can then be
retrieved for analysis by exporting them to
qualitative analysis programs such as NVivo.
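One common export path is a delimited text file that a qualitative analysis package can read. The sketch below uses entirely hypothetical case notes and file names; it is not the source's method, just one plausible way to get notes out of a case management store.

```python
import csv

# Hypothetical case notes as a case management system might store them.
notes = [
    {"client_id": "C014", "date": "2023-05-02", "type": "contact log",
     "text": "Phone call with client about housing application."},
    {"client_id": "C014", "date": "2023-05-09", "type": "critical incident",
     "text": "Client reported eviction notice; referred to legal aid."},
]

# Write one row per note; an analysis package can then import the
# file and code the free-text column alongside the structured fields.
with open("case_notes_export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["client_id", "date", "type", "text"])
    writer.writeheader()
    writer.writerows(notes)
```

Keeping structured fields (client, date, note type) next to the narrative text preserves the quantitative supplements the paragraph mentions.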
Interpretative Approaches
to Outcomes
The program logic model is quantitative in orientation.
There are also qualitative evaluative
approaches that use interpretative
orientations guided by social constructivist
theories. Here participants create their own
understanding based on past and current experience,
alone and in collaboration with others. This
interpretative model allows practitioners to learn
as they proceed with the program, modifying and
adapting to the clients and issues faced
along the way, with outcomes developed through
the process rather than planned before the program
commences. It is particularly relevant in
service delivery when the services and problems
are complex.
Although interpretive models are qualitative,
there is a logic to interpretative evaluation, with
stages that include engagement, discovery, sense
making, and unfolding (Netting, O'Connor & Fauri
2008, p. 136). These stages are not necessarily
sequential; they are more of a spiral progression.
Engagement and relationship building
with diverse groups, discovery, and sense making
happen simultaneously, with understanding
deepening as the evaluation develops. Information
collected in this process comes from multiple
sources, including qualitative and quantitative data,
narratives from logs or case studies, observations,
the views of all participants, and even the discovery
of tensions and ambiguities.
How Do We Evaluate the
Effectiveness of Our Work?
There are three ways to demonstrate effectiveness:
experimental designs, quasi-experimental designs,
and evaluations that examine impact. Generally
these evaluations are done by large-scale organizations
rather than small not-for-profits. The gold
standard for effectiveness studies is the randomized
experimental study. These require that participants
be randomly assigned to either a treatment group
or a non-treatment group, called the control group.
Generally, these studies are difficult to implement.
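The mechanics of random assignment and the subsequent group comparison can be sketched briefly. Everything below is illustrative: the participant IDs, outcome scores, and the simulated treatment effect are invented for the example, not data from any study.

```python
import random

def assign_groups(participants, seed=0):
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(xs):
    return sum(xs) / len(xs)

participants = ["P%02d" % i for i in range(10)]
treatment, control = assign_groups(participants)

# Hypothetical outcome scores measured after the program: a baseline of 50,
# a simulated +10 effect for the treatment group, plus per-person noise.
scores = {p: 50 + (10 if p in treatment else 0) + random.Random(p).randint(-5, 5)
          for p in participants}

# The estimated impact is the difference in mean outcomes between groups.
effect = mean([scores[p] for p in treatment]) - mean([scores[p] for p in control])
print("estimated effect:", round(effect, 1))
```

Because assignment is random, any systematic difference in mean outcomes can be attributed to the intervention rather than to pre-existing differences between the groups.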
The outcomes of the two groups are then compared
to determine the impact of the service or intervention.
Quasi-experimental designs are similar
to experimental designs except that the allocation
of subjects to groups is not random; they are more
common because random allocation is often impractical.
There are several variations of quasi-experimental
designs, including time series based on