Publication | Closed Access
Understanding Scoring Reliability: Experiments in Calibrating Essay Readers
Citations: 80 | References: 17 | Year: 1988
Keywords: Engineering, Measurement, Accuracy and Precision, Education, Quasi-experiment, Program Evaluation, Natural Language Processing, Statistical Calibration, Simpler Calibration Strategies, Bias, Automated Assessment, Content Analysis, Reliability Analysis, Statistics, Reliability, Test Development, Rehabilitation, Experiment Design, Educational Assessment, Calibrating Essay Readers, National Administrations, Survey Methodology
Scoring reliability of essays and other free-response questions is of considerable concern, especially in large, national administrations. This report describes a statistically designed experiment that was carried out in an operational setting to determine the contributions of different sources of variation to the unreliability of scoring. The experiment made novel use of partially balanced incomplete block designs, which facilitated unbiased estimation of certain main effects without requiring readers to assess the same paper several times. In addition, estimates were obtained of the improvement in reliability that results from removing the variability attributable to systematic sources through an appropriate adjustment of the raw scores. This statistical calibration appears to be a cost-effective approach to enhancing scoring reliability when compared with simply increasing the number of readings per paper. The results of the experiment also provide a framework for examining other, simpler calibration strategies. One such strategy is briefly considered.
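The statistical calibration the abstract describes can be illustrated, in a much simplified form, by an additive adjustment for reader severity: estimate each reader's systematic leniency as the difference between that reader's mean score and the grand mean, then subtract it from the raw scores. This is a hypothetical sketch for illustration only; the report's actual procedure rests on partially balanced incomplete block designs and a fuller variance-component analysis, neither of which is reproduced here.

```python
from collections import defaultdict

def calibrate(scores):
    """Adjust raw scores for reader severity under a simple additive model.

    scores: list of (reader_id, raw_score) pairs.
    Returns adjusted scores in the same order, with each reader's
    estimated effect (reader mean minus grand mean) subtracted.
    """
    grand_mean = sum(s for _, s in scores) / len(scores)
    by_reader = defaultdict(list)
    for reader, s in scores:
        by_reader[reader].append(s)
    # Estimated severity/leniency effect per reader.
    effect = {r: sum(v) / len(v) - grand_mean for r, v in by_reader.items()}
    return [s - effect[r] for r, s in scores]

# Example: reader B averages one point more lenient than reader A;
# calibration removes that systematic difference.
print(calibrate([("A", 3), ("A", 4), ("B", 4), ("B", 5)]))
```

In a real setting the reader effects would be estimated from a designed allocation of papers to readers (as in the report's incomplete block designs) rather than from naive per-reader means, which are confounded with paper quality when assignment is not balanced.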