Concepedia

Publication | Closed Access

Comparative analysis of the reliability of job performance ratings.

Year: 1996
Citations: 763
References: 210

TLDR

The study used meta-analytic methods to compare inter- and intra-rater reliabilities of ratings across ten job-performance dimensions and overall job performance. Results show mixed support for the notion that certain dimensions are rated more reliably than others; supervisory ratings exhibit higher inter-rater reliability (mean .52 for overall performance) than peer ratings; and inter-rater reliability is consistently lower than intra-rater reliability, so using intra-rater estimates to correct for measurement error biases research results. These findings have important implications for both research and practice.

Abstract

This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of job performance used in the literature; ratings of overall job performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), mean interrater reliability of supervisory ratings of overall job performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice.
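The abstract's point about biased corrections can be illustrated with the standard correction for attenuation, ρ = r_obs / √r_yy: dividing by the square root of a too-high reliability estimate undercorrects the observed correlation. The sketch below uses the mean inter-rater reliability of .52 reported in the abstract; the observed validity (.30) and the intra-rater reliability (.80) are hypothetical illustration values, not figures from the paper.

```python
import math

def correct_for_attenuation(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity coefficient for criterion unreliability."""
    return r_obs / math.sqrt(r_yy)

r_obs = 0.30       # hypothetical observed validity coefficient
interrater = 0.52  # mean interrater reliability reported in the abstract
intrarater = 0.80  # hypothetical (higher) intrarater reliability

print(round(correct_for_attenuation(r_obs, interrater), 3))  # 0.416
print(round(correct_for_attenuation(r_obs, intrarater), 3))  # 0.335
```

Because the intra-rater estimate is larger, the correction it yields (.335) is smaller than the one based on inter-rater reliability (.416), which is the direction of bias the abstract warns about.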

References

Year | Citations
1951 | 42.3K
1978 | 25.7K
1977 | 12.1K
1991 | 6K
1978 | 5.3K
1994 | 2.9K
1971 | 2.4K
1993 | 2.2K
1993 | 1.6K
1990 | 1.5K