Information Technology Reference
testing is hard to control, and its efficiency often falls short of the company's
expectations. At present there is no effective solution to this problem.
Quality testing for crowdsourcing has two notable features: (1) the scale of the
task is large, and (2) it requires humans to perform the judgments. These two
features closely match the characteristics of crowdsourcing itself. With
appropriate restructuring, we can therefore turn the quality testing of
crowdsourcing task A into a new crowdsourcing task B, and accomplish task A's
quality testing by completing task B.
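The transformation above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the field names (`id`, `input`, `result`) and the label set are assumptions chosen for the translation example discussed below.

```python
def build_judgment_subtasks(task_a_submissions):
    """Wrap each result submitted for crowdsourcing task A into a
    'multilevel label judgment' subtask of crowdsourcing task B.

    task_a_submissions: list of dicts with (hypothetical) keys
    'id', 'input', and 'result'.
    """
    subtasks = []
    for sub in task_a_submissions:
        subtasks.append({
            "source_task": "A",
            "subtask_id": sub["id"],
            "question": sub["input"],     # e.g. the term to be translated
            "candidate": sub["result"],   # the translation to be judged
            # graded (multilevel) labels rather than a binary yes/no
            "labels": ["accurate", "acceptable", "wrong"],
        })
    return subtasks
```

Each subtask of B is thus a small, self-contained judgment that any attendant of task B can answer without knowing how task A was organized.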
The key to this method is how to turn the quality testing of crowdsourcing task
A into the subtasks of crowdsourcing task B. At the same time, task B itself
must have an existing method for its own quality testing. Designing task B as a
“multilevel label judgment” type of crowdsourcing task is a good choice.
Whether task A is an independent or a collaborative crowdsourcing task, the
transformation described above is easy to carry out. For example, if task A is
the translation of professional terminology, then task B is the judgment of
translation quality: the need to review the results submitted for task A forms
task B. Since these judgments are simple for humans, we can apply the Gold
Standard Test or the Expectation-Maximization algorithm with separation of bias
and errors, and thus conveniently obtain the quality testing result of task A.
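The EM-based aggregation of the judgments collected in task B can be sketched as below. This is a simplified, binary-label variant in the style of Dawid-Skene, not the exact algorithm the paper applies: it alternates between estimating each attendant's accuracy and re-estimating the posterior that each judged result is correct. The data layout is an assumption for illustration.

```python
def em_aggregate(labels, n_iter=20):
    """Estimate which task-A results are correct from several attendants'
    binary judgments (1 = correct, 0 = wrong), via a simplified EM.

    labels: dict mapping item -> {attendant: 0/1 judgment}
    Returns: dict mapping item -> posterior probability the result is correct.
    """
    # Initialize posteriors with the majority vote on each item.
    post = {i: sum(v.values()) / len(v) for i, v in labels.items()}
    workers = {w for v in labels.values() for w in v}
    for _ in range(n_iter):
        # M-step: each attendant's accuracy = expected rate of agreement
        # with the current estimate of the true labels.
        acc = {}
        for w in workers:
            num = den = 0.0
            for i, v in labels.items():
                if w not in v:
                    continue
                p = post[i]
                num += p if v[w] == 1 else (1 - p)
                den += 1
            acc[w] = num / den
        # E-step: recompute posteriors from attendant accuracies
        # (uniform prior over the two true labels).
        for i, v in labels.items():
            p1 = p0 = 1.0
            for w, y in v.items():
                a = min(max(acc[w], 1e-6), 1 - 1e-6)  # avoid degenerate 0/1
                p1 *= a if y == 1 else (1 - a)
                p0 *= a if y == 0 else (1 - a)
            post[i] = p1 / (p1 + p0)
    return post
```

Attendants who systematically disagree with the consensus are down-weighted automatically, which is the practical benefit of EM aggregation over a plain majority vote; a Gold Standard Test would instead seed `acc` from items whose true labels are known.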
In Fig. 2, the abbreviations are: TASA (Task Attendants Set of crowdsourcing
task A), TASB (Task Attendants Set of crowdsourcing task B), SSA (Subtasks Set
of crowdsourcing task A), and SSB (Subtasks Set of crowdsourcing task B).
Fig. 2. Process of Quality Testing Method