Abstract
Collaborative filtering algorithms find useful patterns in rating and consumption data and exploit these patterns to guide users to good items. Many of these patterns reflect important real-world phenomena driving interactions between users and items; other patterns may be irrelevant or may reflect undesired discrimination, such as discrimination in publishing or purchasing against authors who are women or ethnic minorities. In this work, we examine how collaborative filtering recommender algorithms respond to the distribution of their input data with respect to one dimension of social concern, namely content creator gender. Using publicly available book ratings data, we measure the distribution of the genders of the authors of books in user rating profiles and in the recommendation lists produced from this data. We find that common collaborative filtering algorithms tend to propagate at least some of each user’s tendency to rate or read male or female authors into their resulting recommendations, although they differ in both the strength of this propagation and the variance in the gender balance of the recommendation lists they produce. The data, experimental design, and statistical methods are intended to be reusable for studying potentially discriminatory social dimensions of recommendations in other domains and settings as well.
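The abstract's core measurement, the author-gender balance of a set of books, can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' actual code: the DataFrame layout, the column names (`user`, `author_gender`), and the `male`/`female`/`unknown` coding are all illustrative assumptions rather than the paper's data schema.

```python
import pandas as pd


def gender_balance(items: pd.DataFrame) -> float:
    """Proportion of female-authored books among those with a known author gender.

    Assumes an `author_gender` column coded 'male'/'female'/'unknown';
    this coding is an illustrative assumption, not the paper's schema.
    """
    known = items[items["author_gender"].isin(["male", "female"])]
    if len(known) == 0:
        return float("nan")  # no labeled books, balance undefined
    return (known["author_gender"] == "female").mean()


# Hypothetical usage with a `ratings` frame of (user, item, author_gender) rows:
#   profile_balance = ratings.groupby("user").apply(gender_balance)
# Applying the same measure to each user's top-N recommendation list lets one
# compare input tendency against recommender output, e.g. by regressing
# list balance on profile balance to estimate propagation strength.
```

Comparing this statistic on inputs (rating profiles) and outputs (recommendation lists) is one simple way to operationalize the "propagation" the abstract describes.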
Original language | English (US)
---|---
Pages (from-to) | 377-420
Number of pages | 44
Journal | User Modeling and User-Adapted Interaction
Volume | 31
Issue number | 3
DOIs |
State | Published - Jul 2021
Externally published | Yes
Bibliographical note
Publisher Copyright: © 2021, Springer Nature B.V.
Keywords
- Discrimination
- Gender bias
- Recommender systems
- Research methods