Common scientific and statistical errors in obesity research

Brandon J. George, T. Mark Beasley, Andrew W. Brown, John Dawson, Rositsa Dimova, Jasmin Divers, Tashauna U. Goldsby, Moonseong Heo, Kathryn A. Kaiser, Scott W. Keith, Mimi Y. Kim, Peng Li, Tapan Mehta, J. Michael Oakes, Asheley Skinner, Elizabeth Stuart, David B. Allison

Research output: Contribution to journal › Review article › peer-review



This review identifies 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discusses how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and "P-value hacking," 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. It is hoped that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician.
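As a brief illustration of the third topic (excessive and undisclosed multiple testing), one simple remedy not detailed in this abstract is a Bonferroni adjustment, which divides the significance threshold by the number of tests performed. The sketch below is a generic example, not code from the review; the function name and sample P-values are hypothetical.

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Return, for each raw P-value, whether the null hypothesis is
    rejected after a Bonferroni adjustment: each P-value is compared
    against alpha divided by the number of tests."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]

# Four hypothetical raw P-values from four outcome comparisons:
raw_p = [0.001, 0.02, 0.04, 0.30]
# With alpha = 0.05 and m = 4, the adjusted threshold is 0.0125,
# so only the first test remains significant.
print(bonferroni_reject(raw_p))  # [True, False, False, False]
```

Reporting all tests performed, rather than only the significant ones, is the complementary disclosure step the review's topic list implies.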

Original language: English (US)
Pages (from-to): 781-790
Number of pages: 10
Issue number: 4
State: Published - Apr 1 2016

Bibliographical note

Publisher Copyright:
© 2016 The Obesity Society.


