The accuracy of confidence judgments can be assessed using measures of discrimination and calibration. The present paper uses a new assessment methodology that decomposes the confidence assessment task, allowing us to investigate discrimination and calibration skills in greater depth than in previous studies. Researchers investigating the quality of confidence judgments have typically grouped forecasters' assessments into experimenter-defined categories, generally of equal width (.10). In the present research, subjects created their own categories and later assigned confidence judgments to those categories, separating the task of discriminating among categories (discrimination) from that of assigning numbers to categories (calibration). Further, the typical assessment procedure assumes that subjects can discriminate equally well across the entire confidence scale. Because subjects in the present study defined their own assessment categories, they could locate those categories at any point on the scale. A final issue of interest was whether subjects could accurately determine the number of categories into which they could discriminate. Sixty subjects performed one of two tasks, general knowledge or forecasting, in both relatively easy and relatively hard conditions. Results showed a trade-off in performance: calibration generally became worse as the number of categories increased, while discrimination generally improved. Overall accuracy was not affected by the number of categories used. Further, subjects partitioned categories more finely at the high end of the scale. Finally, the measures showed that subjects were not accurate in their beliefs about their own discrimination ability.
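The calibration and discrimination components discussed above are commonly quantified via Murphy's decomposition of the Brier score, under which Brier = reliability − resolution + uncertainty (reliability measures miscalibration; resolution measures discrimination). The paper's exact scoring procedure is not reproduced here, so the following Python sketch only illustrates that standard decomposition; the function name and the example forecast data are illustrative, not taken from the study.

```python
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes):
    """Decompose binned probability forecasts into reliability (calibration),
    resolution (discrimination), and uncertainty, where the mean squared
    error (Brier score) equals reliability - resolution + uncertainty."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    # Group outcomes by distinct forecast value (the forecaster's "categories").
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    rel = res = 0.0
    for f, obs in bins.items():
        obs_rate = sum(obs) / len(obs)
        rel += len(obs) * (f - obs_rate) ** 2          # calibration penalty
        res += len(obs) * (obs_rate - base_rate) ** 2  # discrimination credit
    return rel / n, res / n, base_rate * (1 - base_rate)

# Illustrative data: two self-defined categories (0.8 and 0.3) used 4 times each.
forecasts = [0.8, 0.8, 0.8, 0.8, 0.3, 0.3, 0.3, 0.3]
outcomes = [1, 1, 1, 0, 0, 0, 1, 0]
rel, res, unc = murphy_decomposition(forecasts, outcomes)
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
# The decomposition identity: brier == rel - res + unc
```

Note how the decomposition exposes the trade-off the abstract reports: using more categories can raise resolution (finer discrimination) while also raising reliability (worse calibration), leaving the overall Brier score roughly unchanged.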
|Original language|English (US)|
|Number of pages|21|
|Journal|Organizational Behavior and Human Decision Processes|
|State|Published - Nov 1 1999|