That uncanny valley of mind: when anthropomorphic AI agents disrupt personalized advertising

Woo Jin Kim, Yuhosua Ryoo, Yung Kyun Choi

Research output: Contribution to journal › Article › peer-review


Abstract

This research, grounded in privacy calculus theory, examines how the anthropomorphization of AI agents affects consumers' perceptions of the privacy risks associated with personalized ads, and it explores strategies for reducing the potential negative impacts. In Study 1, participants expressed concerns that highly anthropomorphized chatbots might possess human-like autonomous intentions to misuse personal data, a phenomenon referred to as the 'uncanny valley of mind'. In contrast, participants felt more secure, more in control, and less concerned about privacy when interacting with a mechanized, less human-like chatbot. To address this backfiring effect, Study 2 explored the role of algorithmic disclosure, in which companies provide transparent information about their underlying algorithms, data-handling procedures, and personalization criteria. This strategy effectively mitigated privacy concerns, thereby preventing the negative effects associated with highly anthropomorphized AI chatbots. These findings offer valuable insights for marketers using AI chatbots to craft effective, personalized messages based on social media data.

Original language: English (US)
Journal: International Journal of Advertising
DOIs
State: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
© 2024 Advertising Association.

Keywords

  • AI chatbot advertising
  • algorithmic disclosure
  • anthropomorphism
  • personalized advertising
  • privacy calculus model
  • privacy concerns
  • uncanny valley of mind
