
How many bits per rating?

  • Daniel Kluver
  • Tien T. Nguyen
  • Michael Ekstrand
  • Shilad Sen
  • John Riedl

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Most recommender systems assume user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy. Moreover, this noise limits the measurable predictive power of any recommender system. We propose an information theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating scales for real world datasets. We then estimate how the amount of information predictions give to users is related to the scale ratings are collected on. Our findings suggest a tradeoff in rating scale granularity: while previous research indicates that coarse scales (such as thumbs up / thumbs down) take less time, we find that ratings with these scales provide less predictive value to users. We introduce a new measure, preference bits per second, to quantitatively reconcile this tradeoff.
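The core quantity behind the abstract's "bits per rating" is Shannon entropy: the coarser the scale, the fewer bits a single rating can carry, and "preference bits per second" divides that information by the time a rating takes. A minimal sketch of both ideas follows; the rating distributions and timing figures below are made-up placeholders for illustration, not values from the paper's datasets.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: an upper bound on the preference
    information a single rating on this scale can carry."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical rating distributions (not taken from the paper).
binary = [0.7, 0.3]                          # thumbs up / thumbs down
five_star = [0.05, 0.10, 0.25, 0.35, 0.25]   # 1-5 stars

print(f"binary scale:    {entropy_bits(binary):.2f} bits per rating")
print(f"five-star scale: {entropy_bits(five_star):.2f} bits per rating")

# Preference bits per second: information rate given rating time.
# The per-rating times here are assumptions for the sake of the example.
for name, dist, seconds in [("binary", binary, 1.5),
                            ("five-star", five_star, 4.0)]:
    rate = entropy_bits(dist) / seconds
    print(f"{name}: {rate:.2f} preference bits per second")
```

Even in this toy version, the tradeoff from the abstract is visible: the binary scale carries less than one bit per rating, but its shorter rating time can still give it a competitive information rate per second.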

Original language: English (US)
Title of host publication: RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems
Pages: 99-106
Number of pages: 8
DOIs
State: Published - 2012
Externally published: Yes
Event: 6th ACM Conference on Recommender Systems, RecSys 2012 - Dublin, Ireland
Duration: Sep 9 2012 - Sep 13 2012

Publication series

Name: RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems

Other

Other: 6th ACM Conference on Recommender Systems, RecSys 2012
Country/Territory: Ireland
City: Dublin
Period: 9/9/12 - 9/13/12

Keywords

  • Evaluation
  • Information theory
  • Metrics
  • Ratings
  • Recommender systems

