TY - JOUR
T1 - The Effects of Subject-Defined Categories on Judgmental Accuracy in Confidence Assessment Tasks
AU - Browne, Glenn J.
AU - Curley, Shawn P.
AU - Benson, P. George
PY - 1999/11/1
Y1 - 1999/11/1
AB - The accuracy of confidence judgments can be determined using measures of discrimination and calibration. The present paper utilizes a new assessment methodology that decomposes the confidence assessment task, allowing us to investigate discrimination and calibration skills in greater depth than has been done in previous studies. Researchers investigating the goodness of confidence judgments have typically grouped forecasters' assessments into experimenter-defined categories, generally in equal widths of .10. In the present research, subjects created their own categories and later assigned confidence judgments to the categories, separating the tasks of discriminating categories (discrimination) and assigning numbers to categories (calibration). Further, the typical assessment procedure assumes that subjects are able to discriminate equally across the confidence scale. Since subjects in the present study defined their own assessment categories, they could locate those categories at any point on the scale. A final issue of interest was whether subjects were able to determine accurately the number of categories into which they could discriminate. Sixty subjects performed 1 of 2 tasks, general knowledge or forecasting, in both relatively easy and relatively hard conditions. Results showed a trade-off in performance: Calibration generally became worse as the number of categories increased, while discrimination generally improved. Overall accuracy was not affected by the number of categories used. Further, subjects partitioned categories more at the high end of the scale. Finally, measures showed that subjects were not accurate in their beliefs about their own discrimination ability.
UR - http://www.scopus.com/inward/record.url?scp=0042221302&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0042221302&partnerID=8YFLogxK
U2 - 10.1006/obhd.1999.2849
DO - 10.1006/obhd.1999.2849
M3 - Article
AN - SCOPUS:0042221302
SN - 0749-5978
VL - 80
SP - 134
EP - 154
JO - Organizational Behavior and Human Decision Processes
JF - Organizational Behavior and Human Decision Processes
IS - 2
ER -