Evaluation of an Open Learning Environment

INTRODUCTION

Educational goals have generally shifted from knowing everything in a specific domain to knowing how to deal with complex problems. Reasoning and information-processing skills have become more important than the sheer amount of information memorized. The same evolution has occurred in medical education: diagnostic reasoning processes are now more strongly emphasized. Whereas previously knowing all symptoms and diseases was stressed, reasoning skills have become educationally more important. They must enable professionals to distinguish between differential diagnoses and to recognize patterns of illness (e.g., Myers & Dorsey, 1994).

BACKGROUND

Authentic or realistic tasks have been advocated to foster the acquisition of complex problem-solving processes (Jacobson & Spiro, 1995; Jonassen, 1997). In medical education, this has led to the use of expert systems in education. Such systems were initially developed to assist practitioners in their practice (NEOMYCIN, in Cromie, 1988; PATHMASTER, in Frohlich, Miller, & Morrow, 1990; LIED, in Console, Molino, Ripa di Meana, & Torasso, 1992). These systems simulate a real situation and were expected to provoke or develop students’ diagnostic reasoning processes. However, the implementation of such expert systems in regular educational settings has not been successful. Instead of developing reasoning processes, these systems assume them to be available. They focus on quickly getting to a solution rather than on reflecting on possible alternatives. Consequently, it was concluded that students need more guidance in the development of diagnostic reasoning skills (Console et al., 1992; Cromie, 1988; Friedman, France, & Drossman, 1991); instructional support was lacking.
KABISA is one of the computer programs that, among other things, aims at helping students to develop their diagnostic reasoning skills (Van den Ende, Blot, Kestens, Van Gompel, & Van den Enden, 1997). It is a dedicated computer-based training program for acquiring diagnostic reasoning skills in tropical medicine.


DESCRIPTION OF THE PROGRAM

KABISA confronts the user with cases or “virtual patients”. The virtual patient is initially presented by three “characteristics”, randomly selected by the computer. After the presentation of the patient (three characteristics), students can ask for additional characteristics, gathered through anamnesis, physical examination, laboratory tests, and imaging.
If students click on a particular characteristic, such as a physical examination test, they receive feedback: they are informed about the presence of a certain symptom, or whether a test is positive or negative. If students ask for a “non-considered” characteristic, that is, a characteristic that is not relevant or useful in relation to the virtual patient, they are informed about this and asked whether they want to reveal the diagnosis they were thinking of. If they do so, students receive an overview of which characteristics are explained by that diagnosis and which are not. Additionally, they see the position of the selected diagnosis on a list that ranks diagnoses according to their probability given the characteristics at hand. If students do not want to reveal the diagnosis they were thinking of, they can simply continue asking for characteristics.
A session ends with the student giving a final diagnosis. KABISA informs them whether it is correct. If it is correct, students are congratulated. If it is not, students are either informed that it is a very plausible diagnosis for which they do not yet have enough evidence, or they receive the ranking of their diagnosis and an overview of the disease characteristics that can and cannot be explained by their answer.
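To make this session flow concrete, the following minimal Python sketch models one consultation. It is an illustration under strong simplifying assumptions, not KABISA’s implementation: the disease profiles, the function names, and the count-of-explained-characteristics scoring rule are invented here, whereas the real program ranks diagnoses by probability given the characteristics at hand.

    import random

    # Toy disease profiles; purely illustrative, not medically complete.
    DISEASE_PROFILES = {
        "malaria": {"fever", "chills", "headache", "splenomegaly"},
        "typhoid fever": {"fever", "abdominal pain", "headache", "rose spots"},
        "dengue": {"fever", "rash", "headache", "muscle pain"},
    }

    def present_virtual_patient(disease, n=3):
        """Randomly select the n characteristics that present the patient."""
        return set(random.sample(sorted(DISEASE_PROFILES[disease]), n))

    def ask_characteristic(disease, characteristic):
        """Feedback: is the symptom present / the test positive?"""
        return characteristic in DISEASE_PROFILES[disease]

    def rank_diagnoses(observed):
        """Rank diagnoses by how many observed characteristics they explain
        (a crude stand-in for KABISA's probability-based ranking)."""
        scores = {d: len(observed & p) for d, p in DISEASE_PROFILES.items()}
        return sorted(scores, key=scores.get, reverse=True)

    # One toy consultation: presentation, extra questions, final diagnosis.
    patient = random.choice(list(DISEASE_PROFILES))
    observed = present_virtual_patient(patient)
    observed |= {c for c in ("rash", "chills") if ask_characteristic(patient, c)}
    final = rank_diagnoses(observed)[0]
    print("Congratulations!" if final == patient else rank_diagnoses(observed))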
Additionally, different non-embedded support devices, that is, tools, are made available to support learners. These tools allow students to look up information about certain symptoms or diseases, to compare different diagnoses, or to see how much a certain characteristic contributes to the certainty of a specific diagnosis. Students decide for themselves when and how to use these devices (for a more detailed description, see Clarebout, Elen, Lowyck, Van den Ende, & Van den Enden, 2004).

FUTURE TRENDS

In this section, some critical issues are put forward that raise discussion points for the future design and development of open learning environments.

A Learning Environment vs. a Performance Environment

KABISA is designed as an open learning environment, that is, students are confronted with a realistic and authentic problem, there is a large amount of learner control, and tools are provided to guide learners’ learning (Hannafin, Land, & Oliver, 1999). However, the evaluation study revealed some interesting issues. A first finding was that students do not follow a criterion path when working with KABISA. Prior to the evaluation, two domain experts in collaboration with three instructional designers constructed a criterion path. This path represented the ideal path students should follow to benefit optimally from KABISA (following the “normative approach” of Elstein & Rabinowitz, 1993), including when to use a specific tool. Only five out of 44 students followed this path.
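As an illustration only, the sketch below shows how logged student sessions might be checked against such a criterion path. The action vocabulary and the path itself are invented here; the study’s actual path was constructed by the domain experts and instructional designers.

    # Hypothetical criterion path; step names are illustrative assumptions.
    CRITERION_PATH = [
        "review_presenting_characteristics",
        "consult_tool:compare_diagnoses",
        "ask:physical_examination",
        "ask:laboratory_test",
        "state_final_diagnosis",
    ]

    def follows_criterion_path(logged_actions, criterion=CRITERION_PATH):
        """True if the criterion steps occur in order in the student's log;
        extra actions in between are tolerated."""
        remaining = iter(logged_actions)
        return all(step in remaining for step in criterion)

    # A student who skips the tool consultation does not follow the path:
    log = ["review_presenting_characteristics", "ask:laboratory_test",
           "ask:physical_examination", "state_final_diagnosis"]
    print(follows_criterion_path(log))  # False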
A second issue relates to tool use. KABISA offers different tools that can help students in their problem-solving process. Results suggest that students consult some help functions more than others, but overall they do not consult them frequently, and when they do use them, they do not use them adequately. Students also tend not to use the feedback that they can obtain when asking for a “non-considered” characteristic.
Although this environment can be described as an open learning environment, it seems that students do not perceive it as a learning environment, but rather as a performance environment. Think-aloud protocols reveal that students feel they are cheating or failing when consulting a tool. Given the limited use of these tools, it becomes difficult to gain insight into the effect of tool use on the learning process.
However, despite the observation that the criterion path was followed in only a small number of consultations, students found the right diagnosis in 80% of the consultations. It seems that students can also reach the right diagnosis by trial and error, without following the criterion path.
The results of this evaluation suggest that students do not use KABISA to foster their diagnostic reasoning skills. Rather, KABISA enables them to train readily available skills.

The Use of Design Models for Designing Open Learning Environments

This evaluation shows the importance of an evaluation phase in the design and development of computer-based training programs. It reveals the valuable contribution of (linear) design models, such as the so-called ADDIE model (Analysis-Design-Development-Implementation-Evaluation). Although it is argued that a linear design process can no longer be applied to open learning environments, this evaluation shows that it can still contribute to the design. For instance, a more thorough analysis (the first phase) of student characteristics could have provided a means to adapt the difficulty level to the level of the students, or to identify what guidance students actually need. Apparently, the feedback given to students does not encourage them to adapt their problem-solving process. Being product- rather than process-oriented, the feedback may not be adapted to students’ actual needs. Students’ instructional conceptions about computer-based learning environments, or their perceptions of KABISA (as a game versus an educational application), may also influence their use of the program. These instructional conceptions should be taken into account throughout the design process. One possible way to influence them is the introduction to the program, in which the aims of the program, its different functionalities, and its relationship with the different courses should be clearly defined (see Kennedy, Petrovic, & Keppell, 1998, for the importance of introductory lessons). This relates to the implementation phase.
Given the difficulty of anticipating the problems students might encounter in open learning environments, one might consider breaking the linearity of such design models and introducing a formative evaluation after each phase. This would make it possible to redirect the program during development, rather than after its implementation. Rather than evaluating only a final product, the development process should be taken into consideration as well. Rapid prototyping, in which the program is tested at different phases of development, might be indicated. This leads to a spiral rather than a linear design process.

Amount of Learner Control in Computer-Based Programs

In the design and development of KABISA, as in other computer-based programs, a lot of time and effort was spent on the development of tools. However, results show that students do not use these tools, or do not use them adequately. Other authors have found similar results (see, for instance, Crooks, Klein, Jones, & Dwyer, 1996; Land, 2000). This raises questions about the amount of learner control in open learning environments. Should the environment be made less open and provide embedded support devices instead of tools, so that students cannot but use these devices? Or should students receive additional advice on the use of these tools? In the first case, support might not be adapted to the learner’s needs. This might cause problems, given that both too much and too little support can be detrimental (Clark, 1991). The second option leaves the environment open, but here too it can be questioned whether this advice should not also be adapted to the learners’ needs. A possible solution might come from research on animated pedagogical agents. These agents are animated figures that aim at helping learners in their learning process and that adapt their support to the paths learners follow (Moreno, 2004; Shaw, Johnson, & Ganeshan, 1999).
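What such path-based, non-binding advice could look like is sketched below. This is a hypothetical illustration of the idea, not an existing agent implementation; the thresholds, counters, and messages are all assumptions.

    # Hypothetical adaptive advice rule in the spirit of animated
    # pedagogical agents: hints are triggered by the path the learner has
    # followed, and the environment stays open because the learner remains
    # free to ignore them.
    def agent_advice(n_characteristics_asked, tools_consulted, wrong_hypotheses):
        """Return non-binding hints based on the learner's path so far."""
        hints = []
        if n_characteristics_asked > 8 and not tools_consulted:
            hints.append("Consider comparing your differential diagnoses "
                         "with the comparison tool before asking for more "
                         "characteristics.")
        if wrong_hypotheses >= 2:
            hints.append("Look up which characteristics your last hypothesis "
                         "could not explain before choosing a new one.")
        return hints

    print(agent_advice(n_characteristics_asked=10, tools_consulted=[],
                       wrong_hypotheses=2))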

CONCLUSION

The evaluation of KABISA addressed some general issues that are important to consider in the design, development, and implementation of open learning environments. Although these environments are advocated to foster the acquisition of complex problem-solving skills, there still seems to be a gap between the intention of the designers and the use by the learners. This relates to the issue of calibration addressed by Winne and Marx (1982): for an instructional intervention to be effective, calibration is needed between the conceptions of the different people involved. The introduction of a pedagogical agent might help to calibrate the conceptions of students to those of the designers. Moreover, these agents might help to encourage students to use tools adequately without reducing the openness of the learning environment.

KEY TERMS

Animated Pedagogical Agents: Animated figures operating in a learning environment and aiming at supporting learners in their learning process and capable of adapting their support to the learners’ paths.
Criterion Path: A representation of an “ideal path” to go through a specific learning environment. It specifies for each possible step in the program what the most ideal subsequent steps are.
Embedded Support Devices: Support devices integrated in the learning environment. Learners cannot but use these devices (e.g., structure in a text).
Instructional Conceptions: Conceptions about the functionalities of (elements of) a learning environment. These conceptions can relate to the effectiveness or efficiency of specific features in a learning environment (e.g., tools) or to the environment as a whole (e.g., KABISA as a learning environment).
Non-Embedded Support Devices (synonym: Tools): Support devices that are put at the disposal of learners. Learners decide for themselves when and how to use them.
Open-Ended Learning Environments: Learning environments that aim at fostering complex problem-solving skills by confronting learners with a realistic or authentic problem, combined with a large amount of learner control and different tools.
Perceptions: Students’ perceptions relate to how they perceive a specific environment (e.g., KABISA); they are the result of an interaction between students’ instructional conceptions and a specific learning environment.
