Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed?

Elmer V. Bernstam, Smitha Sagaram, Muhammad Walji, Craig W. Johnson, Funda Meric-Bernstam

Research output: Contribution to journal › Article

42 Citations (Scopus)

Abstract

Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites. Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and prevalence- and bias-adjusted kappa (PABAK). Results: Our uncalibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. Therefore, we created operational definitions for each of the criteria, decreased the number of assessment choices, and defined where to look for the information. As a result, 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6). Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.
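
The three statistics named above (percentage agreement, Cohen's kappa, and PABAK) are standard agreement measures for two raters. The short Python sketch below is not from the paper; it only illustrates, with hypothetical ratings, how these statistics are typically computed when two raters score a binary quality criterion. For two categories, PABAK fixes chance agreement at 0.5 and reduces to twice the observed agreement minus one, which is why it is robust to skewed prevalence.

# Illustrative sketch only (not from the paper): how percentage agreement,
# Cohen's kappa, and PABAK can be computed for two raters judging a binary
# quality criterion. The ratings below are hypothetical.
from collections import Counter

def agreement_stats(rater_a, rater_b):
    """Return (observed agreement, Cohen's kappa, PABAK) for two raters."""
    n = len(rater_a)
    # Observed (percentage) agreement: proportion of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    # PABAK for two categories fixes chance agreement at 0.5,
    # so it reduces to 2 * p_o - 1.
    pabak = 2 * p_o - 1
    return p_o, kappa, pabak

# Hypothetical ratings of ten sites on one criterion (1 = criterion met).
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 0, 0, 1, 1, 0, 1, 1, 1, 1]
print(agreement_stats(rater_a, rater_b))  # roughly (0.8, 0.52, 0.6)

In this toy example the PABAK of 0.6 happens to sit exactly at the reliability threshold (inter-rater agreement ≥ 0.6) that the abstract uses to count a criterion as reliably assessed.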

Original language: English (US)
Pages (from-to): 675-683
Number of pages: 9
Journal: International Journal of Medical Informatics
Volume: 74
Issue number: 7-8
DOIs: 10.1016/j.ijmedinf.2005.02.002
State: Published - Aug 1 2005

Keywords

  • MeSH: Internet
  • Medical informatics
  • Patient education

Cite this

Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed? / Bernstam, Elmer V.; Sagaram, Smitha; Walji, Muhammad; Johnson, Craig W.; Meric-Bernstam, Funda.

In: International Journal of Medical Informatics, Vol. 74, No. 7-8, 01.08.2005, p. 675-683.

@article{50042ecb05f345bb94915e89d50f5fab,
title = "Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed?",
abstract = "Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites. Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and prevalence- and bias-adjusted kappa (PABAK). Results: Our un-calibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. Therefore, we created operational definitions for each of the criteria, decreased the number of assessment choices and defined where to look for the information. As a result 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6). Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.",
keywords = "MeSH: Internet, Medical informatics, Patient education",
author = "Bernstam, {Elmer V.} and Smitha Sagaram and Muhammad Walji and Johnson, {Craig W.} and Funda Meric-Bernstam",
year = "2005",
month = "8",
day = "1",
doi = "10.1016/j.ijmedinf.2005.02.002",
language = "English (US)",
volume = "74",
pages = "675--683",
journal = "International Journal of Medical Informatics",
issn = "1386-5056",
publisher = "Elsevier Ireland Ltd",
number = "7-8",
}

TY - JOUR
T1 - Usability of quality measures for online health information
T2 - Can commonly used technical quality criteria be reliably assessed?
AU - Bernstam, Elmer V.
AU - Sagaram, Smitha
AU - Walji, Muhammad
AU - Johnson, Craig W.
AU - Meric-Bernstam, Funda
PY - 2005/8/1
Y1 - 2005/8/1
AB - Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites. Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and prevalence- and bias-adjusted kappa (PABAK). Results: Our un-calibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. Therefore, we created operational definitions for each of the criteria, decreased the number of assessment choices and defined where to look for the information. As a result 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6). Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.
KW - MeSH: Internet
KW - Medical informatics
KW - Patient education
UR - http://www.scopus.com/inward/record.url?scp=22544446452&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=22544446452&partnerID=8YFLogxK
DO - 10.1016/j.ijmedinf.2005.02.002
M3 - Article
C2 - 16043090
AN - SCOPUS:22544446452
VL - 74
SP - 675
EP - 683
JO - International Journal of Medical Informatics
JF - International Journal of Medical Informatics
SN - 1386-5056
IS - 7-8
ER -