
Publication | Closed Access

Validating Measurement of Knowledge Integration in Science Using Multiple-Choice and Explanation Items

Citations: 130
References: 45
Year: 2011

Abstract

This study explores the measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the role multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item properties such as alignment, discrimination, and target range on the knowledge integration scale using a Rasch Partial Credit Model analysis. For instructional validity, we test the sensitivity of multiple-choice and explanation items to knowledge integration instruction using a cohort comparison design. Results show that (1) one third of correct multiple-choice responses are aligned with higher levels of knowledge integration, while three quarters of incorrect multiple-choice responses are aligned with lower levels of knowledge integration; (2) explanation items discriminate between students of high and low knowledge integration ability much more effectively than multiple-choice items; (3) explanation items measure a wider range of knowledge integration levels than multiple-choice items; and (4) explanation items are more sensitive to knowledge integration instruction than multiple-choice items.
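The Rasch Partial Credit Model mentioned in the abstract assigns each polytomous item its own step difficulties, so the probability of a student scoring in a given category depends on their ability and the cumulative steps cleared. A minimal sketch of the category probability formula is below; the ability value and step difficulties are illustrative placeholders, not parameters estimated in this study.

```python
import math

def pcm_probs(theta, deltas):
    """Category response probabilities under the Rasch Partial Credit Model.

    theta  : person ability on the latent scale, in logits
    deltas : step difficulties delta_1..delta_K for an item with K+1
             ordered score categories (illustrative values only)

    Returns a list of K+1 probabilities, one per score category 0..K.
    """
    # Cumulative log-odds: psi_k = sum_{j<=k} (theta - delta_j), psi_0 = 0
    psis = [0.0]
    for d in deltas:
        psis.append(psis[-1] + (theta - d))
    denom = sum(math.exp(p) for p in psis)
    return [math.exp(p) / denom for p in psis]

# Example: a hypothetical 4-category explanation item
probs = pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.5])
```

A wider spread of step difficulties corresponds to an item that measures across a broader range of the scale, which is one way the target-range property described in the abstract can be examined.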

