Accuracy of self-monitoring during learning of radiograph interpretation

Martin V. Pusic, Robert Chiaramonte, Sophia P Gladding, John S. Andrews, Martin R. Pecaric, Kathy Boutis

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Despite calls for the improvement of self-assessment as a basis for self-directed learning, instructional designs that include reflection in practice are uncommon. Using data from a screen-based simulation for learning radiograph interpretation, we present validity evidence for a simple self-monitoring measure and examine how it can complement skill assessment. Methods: Medical students learning ankle radiograph interpretation were given an online learning set of 50 cases which they were asked to classify as 'abnormal' (fractured) or 'normal' and to indicate the degree to which they felt certain about their response (Definitely or Probably). They received immediate feedback on each case. All students subsequently completed two 20-case post-tests: an immediate post-test (IPT), and a delayed post-test (DPT) administered 2 weeks later. We determined the degree to which certainty (Definitely versus Probably) correlated with accuracy of interpretation and how this relationship changed between the tests. Results: Of 988 students approached, 115 completed both tests. Mean ± SD accuracy scores decreased from 59 ± 17% at the IPT to 53 ± 16% at the DPT (95% confidence interval [CI] for the difference: -2% to -10%). Mean self-assessed certainty did not decrease (rates of Definitely: IPT, 17.6%; DPT, 19.5%; 95% CI for difference: +7.2% to -3.4%). Regression modelling showed that accuracy was positively associated with choosing Definitely over Probably (odds ratio [OR] 1.63, 95% CI 1.27-2.09) and indicated a statistically significant interaction between test timing and certainty (OR 0.72, 95% CI 0.52-0.99); thus, the accuracy of self-monitoring decayed over the retention interval, leaving students relatively overconfident in their abilities. Conclusions: This study shows that, in medical students learning radiograph interpretation, the development of self-monitoring skills can be measured and should not be assumed to necessarily vary in the same way as the underlying clinical skill.
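
To make the reported analysis concrete, the sketch below shows, in Python with statsmodels, how a logistic regression with a certainty-by-test-timing interaction of the kind described above could be set up. This is not the authors' code: the data are simulated, the variable names and effect sizes are illustrative assumptions, and the single-level model ignores the clustering of cases within students that the published analysis may have handled differently.

```python
# Minimal illustrative sketch (not the study's analysis code).
# Outcome: whether a case was interpreted correctly (0/1).
# Predictors: certainty ("Definitely" vs. "Probably") and test timing
# (immediate vs. delayed post-test), plus their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # simulated case-level responses (hypothetical sample size)

df = pd.DataFrame({
    "definitely": rng.integers(0, 2, n),  # 1 = "Definitely", 0 = "Probably"
    "delayed": rng.integers(0, 2, n),     # 1 = delayed post-test, 0 = immediate
})

# Simulate accuracy so that certainty helps, but less so on the delayed test
# (coefficients chosen arbitrarily for illustration).
logit_p = (-0.1 + 0.5 * df["definitely"]
           - 0.2 * df["delayed"]
           - 0.3 * df["definitely"] * df["delayed"])
df["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Logistic regression with a certainty-by-timing interaction term.
model = smf.logit("correct ~ definitely * delayed", data=df).fit(disp=False)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```

In a multiplicative model of this form, an interaction odds ratio below 1 means the certainty-accuracy association is weaker on the delayed test; combining the paper's reported estimates (1.63 × 0.72 ≈ 1.17, assuming a standard multiplicative interaction) is consistent with its conclusion that self-monitoring accuracy decayed over the retention interval.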

Original language: English (US)
Pages (from-to): 838-846
Number of pages: 9
Journal: Medical Education
Volume: 49
Issue number: 8
DOI: 10.1111/medu.12774
State: Published - Aug 1, 2015

Cite this

Pusic, M. V., Chiaramonte, R., Gladding, S. P., Andrews, J. S., Pecaric, M. R., & Boutis, K. (2015). Accuracy of self-monitoring during learning of radiograph interpretation. Medical education, 49(8), 838-846. https://doi.org/10.1111/medu.12774

@article{563d1f00a3c64d3885d5ea9fd6f72d54,
title = "Accuracy of self-monitoring during learning of radiograph interpretation",
abstract = "Despite calls for the improvement of self-assessment as a basis for self-directed learning, instructional designs that include reflection in practice are uncommon. Using data from a screen-based simulation for learning radiograph interpretation, we present validity evidence for a simple self-monitoring measure and examine how it can complement skill assessment. Methods: Medical students learning ankle radiograph interpretation were given an online learning set of 50 cases which they were asked to classify as 'abnormal' (fractured) or 'normal' and to indicate the degree to which they felt certain about their response (Definitely or Probably). They received immediate feedback on each case. All students subsequently completed two 20-case post-tests: an immediate post-test (IPT), and a delayed post-test (DPT) administered 2 weeks later. We determined the degree to which certainty (Definitely versus Probably) correlated with accuracy of interpretation and how this relationship changed between the tests. Results: Of 988 students approached, 115 completed both tests. Mean ± SD accuracy scores decreased from 59 ± 17{\%} at the IPT to 53 ± 16{\%} at the DPT (95{\%} confidence interval [CI] for the difference: -2{\%} to -10{\%}). Mean self-assessed certainty did not decrease (rates of Definitely: IPT, 17.6{\%}; DPT, 19.5{\%}; 95{\%} CI for difference: +7.2{\%} to -3.4{\%}). Regression modelling showed that accuracy was positively associated with choosing Definitely over Probably (odds ratio [OR] 1.63, 95{\%} CI 1.27-2.09) and indicated a statistically significant interaction between test timing and certainty (OR 0.72, 95{\%} CI 0.52-0.99); thus, the accuracy of self-monitoring decayed over the retention interval, leaving students relatively overconfident in their abilities. Conclusions: This study shows that, in medical students learning radiograph interpretation, the development of self-monitoring skills can be measured and should not be assumed to necessarily vary in the same way as the underlying clinical skill.",
author = "Pusic, {Martin V.} and Robert Chiaramonte and Gladding, {Sophia P} and Andrews, {John S.} and Pecaric, {Martin R.} and Kathy Boutis",
year = "2015",
month = "8",
day = "1",
doi = "10.1111/medu.12774",
language = "English (US)",
volume = "49",
pages = "838--846",
journal = "Medical Education",
issn = "0308-0110",
publisher = "Wiley-Blackwell",
number = "8",

}

TY - JOUR
T1 - Accuracy of self-monitoring during learning of radiograph interpretation
AU - Pusic, Martin V.
AU - Chiaramonte, Robert
AU - Gladding, Sophia P
AU - Andrews, John S.
AU - Pecaric, Martin R.
AU - Boutis, Kathy
PY - 2015/8/1
Y1 - 2015/8/1
N2 - Despite calls for the improvement of self-assessment as a basis for self-directed learning, instructional designs that include reflection in practice are uncommon. Using data from a screen-based simulation for learning radiograph interpretation, we present validity evidence for a simple self-monitoring measure and examine how it can complement skill assessment. Methods: Medical students learning ankle radiograph interpretation were given an online learning set of 50 cases which they were asked to classify as 'abnormal' (fractured) or 'normal' and to indicate the degree to which they felt certain about their response (Definitely or Probably). They received immediate feedback on each case. All students subsequently completed two 20-case post-tests: an immediate post-test (IPT), and a delayed post-test (DPT) administered 2 weeks later. We determined the degree to which certainty (Definitely versus Probably) correlated with accuracy of interpretation and how this relationship changed between the tests. Results: Of 988 students approached, 115 completed both tests. Mean ± SD accuracy scores decreased from 59 ± 17% at the IPT to 53 ± 16% at the DPT (95% confidence interval [CI] for the difference: -2% to -10%). Mean self-assessed certainty did not decrease (rates of Definitely: IPT, 17.6%; DPT, 19.5%; 95% CI for difference: +7.2% to -3.4%). Regression modelling showed that accuracy was positively associated with choosing Definitely over Probably (odds ratio [OR] 1.63, 95% CI 1.27-2.09) and indicated a statistically significant interaction between test timing and certainty (OR 0.72, 95% CI 0.52-0.99); thus, the accuracy of self-monitoring decayed over the retention interval, leaving students relatively overconfident in their abilities. Conclusions: This study shows that, in medical students learning radiograph interpretation, the development of self-monitoring skills can be measured and should not be assumed to necessarily vary in the same way as the underlying clinical skill.
AB - Despite calls for the improvement of self-assessment as a basis for self-directed learning, instructional designs that include reflection in practice are uncommon. Using data from a screen-based simulation for learning radiograph interpretation, we present validity evidence for a simple self-monitoring measure and examine how it can complement skill assessment. Methods: Medical students learning ankle radiograph interpretation were given an online learning set of 50 cases which they were asked to classify as 'abnormal' (fractured) or 'normal' and to indicate the degree to which they felt certain about their response (Definitely or Probably). They received immediate feedback on each case. All students subsequently completed two 20-case post-tests: an immediate post-test (IPT), and a delayed post-test (DPT) administered 2 weeks later. We determined the degree to which certainty (Definitely versus Probably) correlated with accuracy of interpretation and how this relationship changed between the tests. Results: Of 988 students approached, 115 completed both tests. Mean ± SD accuracy scores decreased from 59 ± 17% at the IPT to 53 ± 16% at the DPT (95% confidence interval [CI] for the difference: -2% to -10%). Mean self-assessed certainty did not decrease (rates of Definitely: IPT, 17.6%; DPT, 19.5%; 95% CI for difference: +7.2% to -3.4%). Regression modelling showed that accuracy was positively associated with choosing Definitely over Probably (odds ratio [OR] 1.63, 95% CI 1.27-2.09) and indicated a statistically significant interaction between test timing and certainty (OR 0.72, 95% CI 0.52-0.99); thus, the accuracy of self-monitoring decayed over the retention interval, leaving students relatively overconfident in their abilities. Conclusions: This study shows that, in medical students learning radiograph interpretation, the development of self-monitoring skills can be measured and should not be assumed to necessarily vary in the same way as the underlying clinical skill.
UR - http://www.scopus.com/inward/record.url?scp=84935516099&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84935516099&partnerID=8YFLogxK
U2 - 10.1111/medu.12774
DO - 10.1111/medu.12774
M3 - Article
C2 - 26152495
AN - SCOPUS:84935516099
VL - 49
SP - 838
EP - 846
JO - Medical Education
JF - Medical Education
SN - 0308-0110
IS - 8
ER -