9.1 Survey Design
The Ethics and Assistive Technology (AT) survey addressed key ethical issues
in assistive technologies and employed the N-Reasons experimental online
survey platform developed by the Norms Evolving in Response to Dilemmas
(NERD) research team led by Peter Danielson at the University of British
Columbia's Centre for Applied Ethics. This novel platform provides a means
of engaging both the general public and experts in various ethically challeng-
ing issues and debates in two formats: (1) reason-based responses (described
in greater detail below) and (2) the more conventional survey question formats (e.g., multiple choice, ranking) [1, 8, 2, 9, 6]. To date, the NERD research
group has launched N-Reasons surveys on a wide variety of topics including
research ethics, stem cell research, and robot ethics [9, 7, 5, 4].
The AT survey consists of five scenarios accompanied by one or more
questions related to the various issues that each scenario involves. A total of
14 questions are posed, each with the option to answer “Yes,” “Neutral” or
“No.” Participants must select one of these responses and provide a reason,
explanation or elaboration to move forward through the survey. The innova-
tive feature of the N-Reasons platform is the opportunity participants have
to vote for other participants' reasons instead of (or in addition to) provid-
ing their own (see Fig. 9.1 ). The goal is to generate richer and more varied
alternatives based on user-supplied contributions. The number of reasons the
user chooses from (the “choice problem”) is kept to a manageable number by limiting content in three ways. First, by encouraging participants to
use existing reasons rather than generating their own, the number of over-
all reasons is minimized and therefore more likely to result in identifiable
trends or patterns. Second, running vote tallies for each reason are provided,
which allows participants to factor in the valuation of the available reasons by
other participants. (No tallies for the “Yes”/“Neutral”/“No” decisions themselves are displayed, in order to make the reasons, rather than the decisions, salient.) The
display ranking method used in the survey gives some weight to recent con-
tributions in order to mitigate the primacy effect; this method is discussed in
more detail in [4] and shown in Fig. 9.1 below, where the third reason from
the top (with a vote of 1.0) is displayed above one with 2.0 votes. Finally,
each participant can vote for multiple reasons, so there is no need to compose conjoint reason responses such as “I agree with R#101 and R#111.”
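The recency-weighted display ranking described above can be sketched roughly as follows. This is an illustrative assumption rather than the actual N-Reasons algorithm (which is detailed in [4]): the names `Reason` and `display_rank` and the decay parameters are hypothetical. The idea is simply to add a vote bonus that decays with a reason's age, so a recent reason with 1.0 votes can be displayed above an older one with 2.0 votes, as observed in Fig. 9.1.

```python
import time
from dataclasses import dataclass


@dataclass
class Reason:
    rid: int            # reason identifier (e.g., R#101)
    text: str           # participant-supplied reason text
    votes: float        # running vote tally for this reason
    created_at: float   # Unix timestamp of submission


def display_rank(reasons, now=None, recency_weight=1.5, half_life=3600.0):
    """Order reasons for display.

    Each reason's score is its raw vote tally plus a bonus that
    halves every `half_life` seconds of age, so fresh contributions
    are not buried under long-standing high-vote entries (one way
    to mitigate the primacy effect).
    """
    now = time.time() if now is None else now

    def score(r):
        age = max(now - r.created_at, 0.0)
        return r.votes + recency_weight * 0.5 ** (age / half_life)

    return sorted(reasons, key=score, reverse=True)
```

Under these (assumed) parameters, a brand-new reason with 1.0 votes scores 1.0 + 1.5 = 2.5, outranking a several-hours-old reason with 2.0 votes, whose bonus has decayed to near zero.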
The NERD research group generally designs each new survey with a back-
ground empirical investigation. For this survey, we explored the effect of
identifying reasons by either their author's pseudonym or merely by a gener-
ated number that represents the reason anonymously. The participants were
divided into two groups, with cohort 0 viewing only numbers (N = 45) and cohort 1 viewing pseudonyms (N = 50); see Fig. 9.1. All participants viewed
the same reasons; only the author's identifier (appended to each reason, as
shown in Fig. 9.1) was varied.