Personalized feedback versus money: The effect on reliability of subjective data in online experimental platforms

Teng Ye, Katharina Reinecke, Lionel P. Robert

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We compared the reliability of data from a subjective task on two online platforms: Amazon's Mechanical Turk (MTurk) and LabintheWild. MTurk incentivizes participants with financial compensation, while LabintheWild provides participants with personalized feedback. LabintheWild produced higher data reliability than MTurk. Our findings suggest that online experimental platforms that offer feedback in exchange for study participation can yield more reliable data on subjective preference tasks than platforms offering financial compensation.

Original language: English (US)
Title of host publication: CSCW 2017 - Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing
Publisher: Association for Computing Machinery, Inc
Pages: 343-346
Number of pages: 4
ISBN (Electronic): 9781450346887
DOIs
State: Published - Feb 25, 2017
Externally published: Yes
Event: 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW 2017 - Portland, United States
Duration: Feb 25, 2017 - Mar 1, 2017

Publication series

Name: CSCW 2017 - Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing

Other

Other: 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW 2017
Country/Territory: United States
City: Portland
Period: 2/25/17 - 3/1/17

Keywords

  • Compensation
  • Crowdsourcing
  • Data Quality
  • Incentives
  • Mechanical Turk
  • Motivation
  • Online Experimentation
