The University Carlos III of Madrid at TREC 2011 Crowdsourcing Track: Notebook Paper

J. Urbano, M. Marrero, D. Martín, J. Morato, K. Robles and J. Lloréns
Text REtrieval Conference Notebook, 2011.

The extended final version of this paper can be found in the TREC proceedings.
Best overall results in the TREC 2011 Crowdsourcing Track (as per NIST ground truth).

Abstract

This notebook paper describes our participation in both tasks of the TREC 2011 Crowdsourcing Track. For the first task we submitted three runs that used Amazon Mechanical Turk: one where workers made relevance judgments on a 3-point scale, and two similar runs where workers provided an explicit ranking of documents. All three runs implemented a quality control mechanism at the task level, based on a simple reading comprehension test. For the second task we submitted another three runs: one with a stepwise execution of the GetAnotherLabel algorithm by Ipeirotis et al., and two others with a rule-based and an SVM-based model. We also comment on several aspects of the Track design and evaluation methods.
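For readers unfamiliar with the aggregation approach named in the abstract, the following is a minimal Python sketch (not the implementation used in the paper) of Dawid-Skene-style EM label aggregation, the kind of worker-quality estimation that the GetAnotherLabel algorithm of Ipeirotis et al. builds on. The aggregate function, the votes dictionary, and all parameter values are hypothetical placeholders for illustration only.

import numpy as np

def aggregate(votes, n_labels=2, n_iter=50):
    """Infer a consensus label per document from noisy worker votes via EM."""
    workers = sorted({w for w, _ in votes})
    docs = sorted({d for _, d in votes})
    w_idx = {w: i for i, w in enumerate(workers)}
    d_idx = {d: i for i, d in enumerate(docs)}

    # Initialize per-document label posteriors with simple vote counts.
    post = np.zeros((len(docs), n_labels))
    for (w, d), label in votes.items():
        post[d_idx[d], label] += 1
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: estimate class priors and each worker's confusion matrix.
        priors = post.mean(axis=0)
        conf = np.full((len(workers), n_labels, n_labels), 1e-6)
        for (w, d), label in votes.items():
            conf[w_idx[w], :, label] += post[d_idx[d]]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute label posteriors given the worker models.
        post = np.tile(priors, (len(docs), 1))
        for (w, d), label in votes.items():
            post[d_idx[d]] *= conf[w_idx[w], :, label]
        post /= post.sum(axis=1, keepdims=True)

    return {d: int(post[d_idx[d]].argmax()) for d in docs}

# Toy usage: three hypothetical workers judging two documents (1 = relevant).
votes = {("w1", "d1"): 1, ("w2", "d1"): 1, ("w3", "d1"): 0,
         ("w1", "d2"): 0, ("w2", "d2"): 0, ("w3", "d2"): 1}
print(aggregate(votes))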

Files