TY - JOUR
T1 - Evaluating reliability of assessments in nursing documentation
AU - Monsen, Karen A.
AU - Lytton, Amy B.
AU - Ferrari, Starr
AU - Halder, Katie M.
AU - Radosevich, David M.
AU - Kerr, Madeleine J.
AU - Mitchell, Susan M.
AU - Brandt, Joan K.
PY - 2011/10
Y1 - 2011/10
N2 - Clinical-documentation data are increasingly being used for program evaluation and research, and methods for verifying inter-rater reliability are needed. The purpose of this study was to test a panel-of-experts approach for verifying public health nurse (PHN) knowledge, behavior, and status scores for Income, Mental health, and Family planning problems within a convenience sample of 100 PHN client files. The number of instances of agreement between raters across all problems and outcomes averaged 42.0 (2 experts), 21.3 (3 experts), and 7.8 (3 experts and agency). Intra-class correlation coefficients ranged from 0.35 to 0.63, indicating that inter-rater reliability was not acceptable, even among the experts. Post-processing analysis suggested that insufficient information was available in the files to substantiate scores. It is possible that this method of verifying data reliability could be successful if implemented with procedures specifying that assessments must be substantiated by free text or structured data. There is a continued need for efficient and effective methods to document clinical-data reliability.
AB - Clinical-documentation data are increasingly being used for program evaluation and research, and methods for verifying inter-rater reliability are needed. The purpose of this study was to test a panel-of-experts approach for verifying public health nurse (PHN) knowledge, behavior, and status scores for Income, Mental health, and Family planning problems within a convenience sample of 100 PHN client files. The number of instances of agreement between raters across all problems and outcomes averaged 42.0 (2 experts), 21.3 (3 experts), and 7.8 (3 experts and agency). Intra-class correlation coefficients ranged from 0.35 to 0.63, indicating that inter-rater reliability was not acceptable, even among the experts. Post-processing analysis suggested that insufficient information was available in the files to substantiate scores. It is possible that this method of verifying data reliability could be successful if implemented with procedures specifying that assessments must be substantiated by free text or structured data. There is a continued need for efficient and effective methods to document clinical-data reliability.
UR - http://www.scopus.com/inward/record.url?scp=84865094421&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84865094421&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:84865094421
SN - 1089-9758
VL - 15
JO - Online Journal of Nursing Informatics
JF - Online Journal of Nursing Informatics
IS - 3
ER -