The University Carlos III of Madrid at TREC 2011 Crowdsourcing Track
J. Urbano, M. Marrero, D. Martín, J. Morato, K. Robles and J. Lloréns
Text REtrieval Conference, 2011.
Best overall results in the TREC 2011 Crowdsourcing Track (as per the NIST ground truth).
Abstract
This paper describes the participation of the uc3m team in both tasks of the TREC 2011 Crowdsourcing Track. For the first task we submitted three runs that used Amazon Mechanical Turk: one where workers made relevance judgments on a 3-point scale, and two similar runs where workers provided an explicit ranking of documents. All three runs implemented a quality control mechanism at the task level based on a simple reading comprehension test. For the second task we also submitted three runs: one with a stepwise execution of the GetAnotherLabel algorithm, and two others with a rule-based and an SVM-based model. According to the NIST gold labels, our runs performed very well in both tasks, ranking at the top for most measures.