of PD through evaluating learning outcomes,
focusing on evaluating solutions to the problem
of learning rather than questioning assumptions
about learning' (Webster-Wright, 2009, p. 711).
We consider that one of the most challenging
tasks facing trainers, developers and researchers
is cultural competence in the evaluation of online
FPL programs (e.g., understanding the cultural
context in which evaluation takes place, which
frames the 'what' and 'how' of any evaluation,
and which uses faculty members as the means to
arrive at results and implications). In addition,
other issues emerge regarding online course
evaluation: 'Response rates; anonymity,
confidentiality, and authentication' (Ballantyne,
2003, p. 106). We would add inter-rater
reliability, which requires acknowledging cultural
differences and worldviews and admitting the
evaluator's own biases and assumptions. Given
this challenge, we recommend the use of online
self-assessment questionnaires, which are
relatively easy to design.
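To illustrate how simply such an instrument can be put together, the following is a minimal sketch of scoring an online self-assessment questionnaire in Python. The item wording, scale labels and data are hypothetical, not taken from any instrument discussed in the text.

```python
# Minimal sketch of scoring a Likert-style online self-assessment
# questionnaire. Items and scale labels are hypothetical examples.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

ITEMS = [
    "I can state clear learning outcomes for my course.",
    "I align assessment criteria with those outcomes.",
    "I adapt materials to my students' cultural context.",
]

def score_response(answers):
    """Map one respondent's answers (one label per item) to numeric scores."""
    return [LIKERT[a] for a in answers]

def summarise(all_answers):
    """Per-item means across respondents, for formative feedback."""
    scored = [score_response(a) for a in all_answers]
    n = len(scored)
    return [sum(resp[i] for resp in scored) / n for i in range(len(ITEMS))]

responses = [
    ["agree", "neutral", "strongly agree"],
    ["strongly agree", "agree", "agree"],
]
print(summarise(responses))  # per-item means: [4.5, 3.5, 4.5]
```

A real instrument would add item banking, branching and secure storage, but the scoring logic itself stays this small, which is what makes such questionnaires relatively easy to design.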
There are three facets that we consider impor-
tant for designing competent portfolio evaluation.
First, we acknowledge that faculty self-assessments
serve formative purposes in online courses;
therefore, they are under constant construction
and revision. Second, we construct test items and
record scores online to account for the cultural
and disciplinary heterogeneity of the faculty group
(i.e., faculty members belong to departments
of pharmacy, management, law, mathematics,
drawing, computer science, and so on). Third,
faculty group culture and discipline structure are
intertwined, and each reinforces the other. Our
studies are based on aggregated self-assessments
of participants in the group; therefore, we analyse
change within the faculty group (Villar & Alegre,
2006b, 2007a, 2007b), in agreement with other
authors' thoughts: “Grouped self-assessments
might legitimately be used to evaluate workshop
effectiveness” (D'Eon et al., 2008, p. 93).
(1) Evaluation of online course delivery systems.
In recent times, scholars have emphasized
the importance of literacy in online program
assessment. Program evaluation answers
the question of “how good is FPL?” This
is an important question for university and
evaluation agency stakeholders. It requires
universities to perform annual assessments
of their students to ensure accountability.
Therefore, it is important to validate a good
online assessment system. Most web assessment
systems have a number of features
which have already been used for teacher
assessment: “Able to be connected through
common Internet Explorer software, able to
identify users by secret codes, able to grade
automatically, and able to collect and record
the information related to student scores”
(Wang, Wang & Huang, 2008, p. 451).
Some research reports tend to focus on
comparing the features and course operating
systems needed to run the applications,
and hence on making decisions on the basis
of technical information (Hayes, 2000).
We assume that an increase in the amount of
time faculty spend on online assessment techniques
will increase their attention to CTC learning
or assessment criteria learning, because we expect
that academic staff address complex intellectual
capabilities that are important for teachable
processes. We also measure satisfaction or
dissatisfaction using general Likert-type scales.
Villar and Alegre (2006b, p. 606) compared two
junior online FPL programs given at the University
of Jaén with ten CTC program factors (Table 3).
The results by participants' gender, age range
and scientific area were significant for a number
of factors concerning course quality. This result
underlines the importance of individual attributes
(gender and age). For instance, Davidovitch and
Soen (2006, p. 370) found “A significant inverse
correlation between all age groups and assessment
measures in course structure and organization
and clarity of lectures”. Thus participants' age
is a demographic independent variable that must
be taken into account.