TY - JOUR
T1 - “Or They Could Just Not Use It?”
T2 - The Dilemma of AI Disclosure for Audience Trust in News
AU - Toff, Benjamin
AU - Simon, Felix M.
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2025
Y1 - 2025
AB - The adoption of artificial intelligence (AI) technologies in the production and distribution of news has generated theoretical, normative, and practical concerns around the erosion of journalistic authority and autonomy and the spread of misinformation. With trust in news already low in many places worldwide, both scholars and practitioners are wary of how the public will respond to news generated through automated methods, prompting calls for labeling of AI-generated content. In this study, we present results from a novel survey-experiment conducted using actual AI-generated journalistic content. We test whether audiences in the United States, where trust is particularly polarized along partisan lines, perceive news labeled as AI-generated as more or less trustworthy. We find on average that audiences perceive news labeled as AI-generated as less trustworthy, not more, even when articles themselves are not evaluated as any less accurate or unfair. Furthermore, we find that these effects are largely concentrated among those whose preexisting levels of trust in news are higher to begin with and among those who exhibit higher levels of knowledge about journalism. We also find that negative effects associated with perceived trustworthiness are largely counteracted when articles disclose the list of sources used to generate the content. As news organizations increasingly look toward adopting AI technologies in their newsrooms, our results hold implications for how disclosure about these techniques may contribute to or further undermine audience confidence in the institution of journalism at a time in which its standing with the public is especially tenuous.
KW - artificial intelligence
KW - journalism
KW - LLMs
KW - news
KW - trust
KW - trust in news
UR - http://www.scopus.com/inward/record.url?scp=85213841243&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213841243&partnerID=8YFLogxK
U2 - 10.1177/19401612241308697
DO - 10.1177/19401612241308697
M3 - Article
AN - SCOPUS:85213841243
SN - 1940-1612
JO - International Journal of Press/Politics
JF - International Journal of Press/Politics
ER -