Auditing the AI Auditors: A Framework for Evaluating Fairness and Bias in High Stakes AI Predictive Models

Richard N. Landers, Tara S. Behrend

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology's more-than-a-century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes, (b) legality, ethicality, and morality, and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components to audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs, (b) components related to how information about models and their applications are presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers, and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of individual research designs used to support all model developer claims. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

Original language: English (US)
Journal: American Psychologist
DOIs
State: Accepted/In press - 2022

Bibliographical note

Publisher Copyright:
© 2022. American Psychological Association

Keywords

  • Artificial intelligence
  • Audit
  • Bias
  • Machine learning
  • Psychology

PubMed: MeSH publication types

  • Journal Article
