### Abstract

The following conclusions were drawn from this study, which investigated potential bias in the grading of reinforced concrete exams:

1. Of the 35 problems analyzed, only 12 (34%) showed statistically different results. The majority of the problems did not have statistically different grades among the three professors.
2. Of the 12 questions with statistically different results, half were short answer and half were computational problems. The type of problem was not an indicator of statistical difference.
3. In the 12 questions with statistically different grades, grader A gave the highest average score on 11 of the problems, and grader C gave the lowest score on eight of the 12. Grader B frequently assigned grades between those of graders A and C.
4. Although the majority of problems showed no statistical difference in grades, the overall score computed by summing the average results of all 35 problems differed: grader A averaged 82%, grader B averaged 77% (5 percentage points less than grader A), and grader C averaged 75% (7 percentage points less than grader A).
5. The question scores did not reveal a bias toward the professor's own students. The grading patterns were the same regardless of the student set graded (student sets varied by university).
6. External factors were held constant to eliminate as much bias as possible when grading the exams, including the removal of student identifiers.

The results indicated that individual graders did exhibit bias during grading, likely manifested in how the grading rubric valued particular errors on each problem. While these differences in valuation showed clear patterns in the grades, the overall scores varied by 7 percentage points or less, which is less than one letter grade on a traditional A through F assessment scale.
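The overall comparison in conclusion 4 amounts to averaging each grader's per-problem scores and measuring the spread against the top grader. A minimal sketch of that arithmetic, using hypothetical per-problem scores (the study's actual 35-problem data is not reproduced here; these illustrative values happen to average to the reported 82%, 77%, and 75%):

```python
# Hypothetical per-problem average scores (percent) for three graders.
# The real study used 35 problems; these five values are illustrative only.
scores = {
    "A": [90, 85, 78, 80, 77],
    "B": [85, 80, 74, 76, 70],
    "C": [83, 78, 72, 74, 68],
}

# Overall score per grader: mean of that grader's per-problem averages.
overall = {g: sum(v) / len(v) for g, v in scores.items()}

# Spread: percentage-point gap between each grader and the top grader.
top = max(overall.values())
gaps = {g: top - avg for g, avg in overall.items()}

# A spread under 10 percentage points stays within one letter grade
# on a traditional 10-point A-through-F scale.
within_letter_grade = max(gaps.values()) < 10
```

With these illustrative inputs, `overall` is 82.0, 77.0, and 75.0 for graders A, B, and C, and the largest gap (7 points) stays within a single letter grade, mirroring the study's reported result.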

| | |
| --- | --- |
| Original language | English (US) |
| Journal | ASEE Annual Conference and Exposition, Conference Proceedings |
| Volume | 2018-June |
| State | Published - Jun 23 2018 |
| Event | 125th ASEE Annual Conference and Exposition - Salt Lake City, United States. Duration: Jun 23 2018 → Jun 27 2018 |

### Cite this

Dymond, Ben; Swenty, Matthew; Carroll, Chris. **The influence of grading bias on reinforced concrete exam scores at three different universities.** *ASEE Annual Conference and Exposition, Conference Proceedings*, vol. *2018-June*.

Research output: Contribution to journal › Conference article

TY - JOUR

T1 - The influence of grading bias on reinforced concrete exam scores at three different universities

AU - Dymond, Ben

AU - Swenty, Matthew

AU - Carroll, Chris

PY - 2018/6/23

Y1 - 2018/6/23

AB - The following conclusions were drawn from this study, which investigated potential bias present during grading of reinforced concrete exams: 1. Out of the 35 problems analyzed, there were statistically different results in only 12 problems (34%). The majority of the problems did not have statistically different grades among the three professors. 2. Of the 12 questions that had statistically different results, half were short answer and half were computational problems. The type of problem was not an indicator of statistical difference. 3. In the 12 questions with statistically different grades, grader A gave the highest average score on 11 of the problems. Grader C gave the lowest score on eight of the 12 problems. Grader B frequently assigned grade values between graders A and C. 4. While the majority of problems showed no statistical difference in grades, the overall score computed by summing the average results of all 35 problems indicated the following differences: Grader A gave an average of 82%, grader B gave an average of 77% (5 percentage points less than grader A), and grader C gave an average of 75% (7 percentage points less than grader A). 5. The question scores did not reveal a bias toward the professor's own students. The grading patterns were the same regardless of the student set graded (student sets varied by university). 6. External factors were held constant to eliminate as much bias as possible when grading the exams. This included eliminating student identifiers. The results indicated that an individual grader did have bias that may have been present when grading. This bias was likely manifested in the grading rubric when valuation was placed on a particular error for each problem. While these differences in valuation did show clear patterns in the grades, the overall scores varied by 7 percentage points or less. This is less than one letter grade using a traditional A through F assessment scale.

UR - http://www.scopus.com/inward/record.url?scp=85051212549&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85051212549&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85051212549

VL - 2018-June

JO - ASEE Annual Conference and Exposition, Conference Proceedings

JF - ASEE Annual Conference and Exposition, Conference Proceedings

SN - 2153-5965

ER -