Clinical-documentation data are increasingly used for program evaluation and research, so methods for verifying inter-rater reliability are needed. The purpose of this study was to test a panel-of-experts approach for verifying public health nurse (PHN) knowledge, behavior, and status scores for Income, Mental health, and Family planning problems in a convenience sample of 100 PHN client files. The number of instances of agreement between raters across all problems and outcomes averaged 42.0 (2 experts), 21.3 (3 experts), and 7.8 (3 experts and the agency). Intra-class correlation coefficients ranged from 0.35 to 0.63, indicating that inter-rater reliability was not acceptable, even among the experts. Post-processing analysis suggested that the files contained insufficient information to substantiate the scores. This method of verifying data reliability might succeed if implemented with procedures specifying that assessments must be substantiated by free text or structured data. Efficient and effective methods for documenting clinical-data reliability remain needed.
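The abstract reports intra-class correlation coefficients but does not state which ICC form was computed. As a hedged illustration only (not the authors' analysis), the sketch below implements ICC(2,1) — two-way random effects, absolute agreement, single rater — a form commonly used for inter-rater reliability, assuming a complete subjects-by-raters score matrix with no missing ratings:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n subjects) x (k raters) array of scores with no
    missing values. Returns the estimated intra-class correlation.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    # Shrout-Fleiss ICC(2,1) formula
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

For example, `icc2_1([[1, 1], [2, 2], [3, 3]])` returns 1.0, since the two raters agree perfectly; disagreement between raters drives the coefficient down toward the 0.35-0.63 range reported in the study.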
Original language: English (US)
Journal: Online Journal of Nursing Informatics
State: Published - Oct 2011