The Prevalence and Severity of Underreporting Bias in Machine- and Human-Coded Data

Benjamin E. Bagozzi, Patrick T. Brandt, John R. Freeman, Jennifer S. Holmes, Alisha Kim, Agustin Palao Mendizabal, Carly Potz-Nielsen

Research output: Contribution to journal › Article › peer-review

Abstract

Textual data are plagued by underreporting bias. For example, news sources often fail to report human rights violations. Cook et al. propose a multi-source estimator to gauge, and to account for, the underreporting of state repression events within human codings of news texts produced by the Agence France-Presse and Associated Press. We evaluate this estimator with Monte Carlo experiments, and then use it to compare the prevalence and seriousness of underreporting when comparable texts are machine coded and recorded in the World-Integrated Crisis Early Warning System dataset. We replicate Cook et al.'s investigation of human-coded state repression events with our machine-coded events, and validate both models against an external measure of human rights protections in Africa. We then use the Cook et al. estimator to gauge the seriousness and prevalence of underreporting in machine and human-coded event data on human rights violations in Colombia. We find in both applications that machine-coded data are as valid as human-coded data.

Original language: English (US)
Pages (from-to): 641-649
Number of pages: 9
Journal: Political Science Research and Methods
Volume: 7
Issue number: 3
DOIs
State: Published - Jul 1 2019

Bibliographical note

Publisher Copyright:
© 2018 The European Political Science Association.
