Measurement and design heterogeneity in perceived message effectiveness studies: A call for research

Seth M. Noar, Joshua Barker, Marco Yzer

Research output: Contribution to journal › Review article

Abstract

Ratings of perceived message effectiveness (PME) are commonly used during message testing and selection, operating under the assumption that messages scoring higher on PME are more likely to affect actual message effectiveness (AME), such as intentions and behaviors. Such a practice has clear utility, particularly when selecting from a large pool of messages. Recently, O’Keefe (2018) argued against the validity of PME as a basis for message selection. He conducted a meta-analysis of mean ratings of PME and AME, testing how often two messages that differ on PME also differ on AME when each is assessed in separate samples. Comparing 151 message pairs derived from 35 studies, he found that use of PME would result in choosing the more effective message only 58% of the time, which is little better than chance. On that basis, O’Keefe concluded that “message designers might dispense with questions about expected or perceived persuasiveness (PME), and instead pretest messages for actual effectiveness” (p. 135). We do not believe that the meta-analysis supports this conclusion, given the measurement and design issues in the set of studies O’Keefe analyzed.
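As a back-of-envelope illustration of the “58% of the time” figure (not part of the original article), the sketch below compares that selection-accuracy rate over the 151 message pairs against the 50% chance baseline, using an exact binomial test. The hit count of 88 is our assumption (0.58 × 151, rounded); the meta-analysis itself reports only the percentage.

    # Illustrative check of O'Keefe's (2018) headline figure: if PME picks
    # the more effective message in ~58% of 151 pairs, how does that compare
    # with the 50% chance baseline?
    from scipy.stats import binomtest

    n_pairs = 151                  # message pairs in O'Keefe's meta-analysis
    hits = round(0.58 * n_pairs)   # assumed count (~88) of pairs where PME
                                   # picked the more effective message

    result = binomtest(hits, n=n_pairs, p=0.5, alternative="greater")
    print(f"hits: {hits}/{n_pairs} ({hits / n_pairs:.1%})")
    print(f"one-sided exact binomial p-value vs. chance: {result.pvalue:.3f}")
    # Even where such a rate is statistically distinguishable from 50%, an
    # 8-point edge over coin-flipping is a modest practical advantage, which
    # is the sense in which the abstract calls it "little better than chance."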

Original language: English (US)
Pages (from-to): 990-993
Number of pages: 4
Journal: Journal of Communication
Volume: 68
Issue number: 5
State: Published - Jan 1 2018

Keywords

  • Message
  • Meta-analysis
  • Perceived effectiveness
