Multiple-choice questions as a tool for summative assessment in medical schools

Document Type: Review Article

Author

Taibah University

Abstract

Objectives: To evaluate the quality of multiple-choice questions (MCQs) used in a summative assessment of a Central Nervous System (CNS) module at the Faculty of Medicine, Jazan University.
Methods: Item analysis was conducted on a 70-item MCQ exam administered to 57 medical students after they completed the CNS module, which is taught by several departments within a systems-based curriculum. Item difficulty, discrimination, reliability, and standard error of measurement were analyzed.
Results: Difficulty indices for most items fell between 0.3 and 0.9 (moderate difficulty). Most items (62/70) discriminated appropriately between high- and low-scoring students. Reliability was very high (Kuder-Richardson 20 = 0.91), and the standard error of measurement was 3.7. Validity evidence included content validity, evaluated by aligning exam items with the module learning objectives using a test blueprint, and internal-structure validity, supported by the item difficulty and discrimination statistics; discrimination indices above 0.2 indicate that items distinguished well between students in the upper and lower score ranges. The feasibility of MCQs was evidenced by the modest resources required: minimal training, no specialized equipment, and no longer administration or scoring times than other assessment methods. MCQs were well accepted by the students and by the faculty involved in test development and implementation.
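The statistics reported above follow standard classical test theory formulas. As a minimal sketch (assuming the conventional definitions: difficulty index as proportion correct, discrimination index as the upper-group minus lower-group proportion correct using 27% groups, KR-20 for reliability, and SEM = SD × √(1 − reliability); the response matrix below is illustrative, not the study data), they can be computed as:

```python
import math

def item_analysis(responses):
    """responses: list of per-student lists of 0/1 item scores."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]

    # Difficulty index p_i: proportion of students answering item i correctly.
    p = [sum(r[i] for r in responses) / n_students for i in range(n_items)]

    # Discrimination index D_i: proportion correct in the top 27% of
    # scorers minus that in the bottom 27% (classical upper/lower-group method).
    k = max(1, round(0.27 * n_students))
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    d = [sum(r[i] for r in upper) / k - sum(r[i] for r in lower) / k
         for i in range(n_items)]

    # Kuder-Richardson 20: internal-consistency reliability for
    # dichotomously scored items (population variance used here).
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    kr20 = (n_items / (n_items - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / var_total)

    # Standard error of measurement: SD * sqrt(1 - reliability).
    sem = math.sqrt(var_total) * math.sqrt(1 - kr20)
    return p, d, kr20, sem
```

For a real 70-item, 57-student exam, `responses` would be the scored answer matrix; an item with p between 0.3 and 0.9 and D above 0.2 would meet the acceptability thresholds cited in this study.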
Conclusion: Psychometric analysis of item and exam characteristics provides validity evidence that scores from this MCQ exam reasonably represent achievement in the CNS module. Although MCQs do not capture higher-order skills, they proved a feasible and effective summative assessment for this pre-clinical module when used within an integrated evaluation program.

Keywords