Abstract
Recommender systems face several challenges, e.g., recommending novel and diverse items and generating helpful explanations. Where algorithms struggle, people may excel. We therefore designed CrowdLens to explore different workflows for incorporating people into the recommendation process. We conducted an online experiment and found that: compared to a state-of-the-art algorithm, crowdsourcing workflows produced more diverse and novel recommendations that human judges favored; some crowdworkers produced high-quality explanations for their recommendations, and we created an accurate model for identifying high-quality explanations; volunteers from an online community generally performed better than paid crowdworkers, but appropriate algorithmic support erased this gap. We conclude by reflecting on the lessons of our work for those considering a crowdsourcing approach and by identifying several fundamental issues for future work.
Original language | English (US)
---|---
Title of host publication | Proceedings of the 10th International Conference on Web and Social Media, ICWSM 2016
Publisher | AAAI Press
Pages | 52-61
Number of pages | 10
ISBN (Electronic) | 9781577357582
State | Published - 2016
Event | 10th International Conference on Web and Social Media, ICWSM 2016 - Cologne, Germany
Duration | May 17, 2016 → May 20, 2016
Publication series

Name | Proceedings of the 10th International Conference on Web and Social Media, ICWSM 2016
---|---
Other

Other | 10th International Conference on Web and Social Media, ICWSM 2016
---|---
Country/Territory | Germany
City | Cologne
Period | 5/17/16 → 5/20/16
Bibliographical note
Funding Information: We thank the anonymous reviewers for their feedback and the NSF for funding our research with awards IIS-0968483, IIS-0964695, IIS-1218826, IIS-1210863, and IIS-1017697.
Publisher Copyright:
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.