Publication | Closed Access
Two Quantitative Approaches for Estimating Content Validity
Citations: 773
References: 25
Year: 2003
Instrument content validity is often established through qualitative expert reviews, yet quantitative analysis of reviewer agreements is also advocated in the literature. Two quantitative approaches to content validity estimation were compared and contrasted using a newly developed instrument, the Osteoporosis Risk Assessment Tool (ORAT). Data obtained from a panel of eight expert judges were analyzed. A Content Validity Index (CVI) initially determined that only one item lacked interrater proportion agreement about its relevance to the instrument as a whole (CVI = 0.57). Concern that higher proportion agreement ratings might be due to random chance stimulated further analysis using a multirater kappa coefficient of agreement. An additional seven items had low kappas, ranging from 0.29 to 0.48 and indicating poor agreement among the experts. The findings supported the elimination or revision of eight items. Pros and cons of using both proportion agreement and kappa coefficient analysis are examined.
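The two statistics discussed above can be sketched in code. This is a minimal illustration, not the study's exact computation: the item-level CVI is taken as the proportion of judges rating an item relevant (3 or 4 on a 4-point relevance scale, an assumed convention), and the chance correction shown is the Polit-Beck style modified kappa for a single item, which may differ from the multirater kappa used in the study. The ratings are hypothetical.

```python
from math import comb

def item_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: proportion of judges rating the item relevant
    (3 or 4 on a 4-point relevance scale -- an assumed convention)."""
    agree = sum(r in relevant for r in ratings)
    return agree / len(ratings)

def item_kappa(ratings, relevant=(3, 4)):
    """Chance-corrected agreement for one item (Polit-Beck style
    modified kappa); the study's multirater kappa may differ."""
    n = len(ratings)
    a = sum(r in relevant for r in ratings)
    cvi = a / n
    # Probability that exactly `a` of `n` judges call the item
    # relevant by chance, assuming each judge flips a fair coin.
    pc = comb(n, a) * 0.5 ** n
    return (cvi - pc) / (1 - pc)

# Hypothetical ratings from a panel of eight judges.
ratings = [4, 3, 4, 2, 3, 4, 1, 3]
print(round(item_cvi(ratings), 2))    # 0.75
print(round(item_kappa(ratings), 2))  # 0.72
```

The gap between the two numbers illustrates the abstract's point: an item can show moderate proportion agreement while its chance-corrected kappa is noticeably lower.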