Evaluating reliability of assessments in nursing documentation

Karen A Monsen, Amy B. Lytton, Starr Ferrari, Katie M. Halder, David M. Radosevich, Madeleine J. Kerr, Susan M. Mitchell, Joan K. Brandt

Research output: Contribution to journal › Article › peer-review



Clinical-documentation data are increasingly being used for program evaluation and research, and methods for verifying inter-rater reliability are needed. The purpose of this study was to test a panel-of-experts approach for verifying public health nurse (PHN) knowledge, behavior, and status scores for Income, Mental health, and Family planning problems within a convenience sample of 100 PHN client files. The number of instances of agreement between raters across all problems and outcomes averaged 42.0 (2 experts), 21.3 (3 experts), and 7.8 (3 experts and agency). Intra-class correlation coefficients ranged from 0.35 to 0.63, indicating that inter-rater reliability was not acceptable, even among the experts. Post-processing analysis suggested that insufficient information was available in the files to substantiate scores. This method of verifying data reliability might succeed if implemented with procedures specifying that assessments must be substantiated by free text or structured data. There remains a need for efficient and effective methods of documenting clinical-data reliability.
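To illustrate the kind of statistic the abstract reports, the following is a minimal sketch of one common intra-class correlation formulation, ICC(2,1) (two-way random effects, absolute agreement, single rater). The study does not state which ICC model was used, and the rating matrix below is entirely hypothetical; it only mimics the shape of the data (clients rated on a 1-5 scale by multiple raters).

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_subjects, k_raters) matrix of ratings.
    Illustrative sketch only -- the study does not specify its ICC model.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects
    msc = ss_cols / (k - 1)                 # between-raters
    mse = ss_err / ((n - 1) * (k - 1))      # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 1-5 scores for five clients from three raters
ratings = np.array([
    [3, 3, 2],
    [4, 4, 4],
    [2, 1, 2],
    [5, 4, 3],
    [1, 2, 1],
], dtype=float)
print(round(icc_2_1(ratings), 2))
```

Under conventional benchmarks, values below roughly 0.75 are considered poor to moderate agreement, which is why the reported range of 0.35 to 0.63 was judged unacceptable.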

Original language: English (US)
Journal: Online Journal of Nursing Informatics
Issue number: 3
State: Published - Oct 2011


