Recommender system algorithms are typically evaluated on machine learning criteria such as recommendation accuracy or top-n precision. In this work, we evaluate six recommendation algorithms from a user-centric perspective, collecting both objective user activity data and subjective user perceptions. In a field experiment involving 1508 users who participated for at least a month, we compare six algorithms built with machine learning techniques ranging from supervised matrix factorization and contextual bandit learning to Q-learning. We find that the choice of optimization objective in machine-learning-based recommender systems significantly affects user experience. Specifically, with classical matrix factorization algorithms, a recommender optimizing for implicit action prediction error engages users more than one optimizing for explicit rating prediction error, which empirically explains the historical shift of recommender system research from modeling explicit feedback data to modeling implicit feedback data. However, the action-based recommender is less precise than the rating-based recommender: it increases not only positive engagement but also negative engagement, e.g., the negative action rate and user browsing effort, both of which are negatively correlated with user satisfaction. We show that blending both explicit and implicit feedback from users through an online learning algorithm can gain the engagement benefits while mitigating one of the possible costs (i.e., the increased browsing effort).
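The abstract contrasts matrix factorization trained to minimize explicit rating prediction error with the same model trained on implicit action signals. As an illustration only (not the paper's implementation), the sketch below fits both objectives with one SGD routine; the function name, hyperparameters, and toy data are hypothetical.

```python
import numpy as np

def train_mf(feedback, n_users, n_items, k=8, lr=0.05, reg=0.01,
             epochs=50, seed=0):
    """Fit a simple matrix-factorization model by SGD on (user, item, target)
    triples. The same routine serves either objective: pass explicit star
    ratings as targets to minimize rating prediction error, or binary action
    indicators (acted / did not act) to minimize action prediction error."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
    for _ in range(epochs):
        for u, i, y in feedback:
            err = y - P[u] @ Q[i]                # squared-error residual
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Explicit objective: targets are 1-5 star ratings (toy data).
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0)]
P, Q = train_mf(ratings, n_users=2, n_items=3)

# Implicit objective: targets are 1 for observed actions, 0 for
# sampled non-actions (toy data).
actions = [(0, 0, 1.0), (0, 1, 0.0), (1, 0, 1.0), (1, 2, 0.0)]
Pa, Qa = train_mf(actions, n_users=2, n_items=3)
```

Either way the model ranks items by the dot product `P[u] @ Q[i]`; only the meaning of the target changes, which is the design choice the experiment compares.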
Original language: English (US)
Title of host publication: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, SAC 2018
Series: Proceedings of the ACM Symposium on Applied Computing
Publisher: Association for Computing Machinery
Number of pages: 10
State: Published - Apr 9 2018
Event: 33rd Annual ACM Symposium on Applied Computing, SAC 2018, Pau, France, Apr 9 2018 - Apr 13 2018
Bibliographical note (funding information):
This work was supported by the National Science Foundation under grant IIS-1319382. The first author was also supported by the Doctoral Dissertation Fellowship, 2016-17, by the Graduate School at the University of Minnesota. We thank Liangjie Hong (Etsy Inc., previously at Yahoo Research) and Yue Shi (Facebook, previously at Yahoo Research) for their helpful discussions on reinforcement-learning-based recommender systems. We also thank all the MovieLens users who participated in our study.
© 2018 ACM.
- Contextual bandit
- Machine learning
- Q learning
- Recommender systems
- User experiment
- User-centric evaluation