Abstract
Background: In many Middle Eastern universities, English is the medium of instruction and testing. As nurse educators construct multiple choice questions (MCQs), it is essential that items are developed to be valid and reliable measures of student learning. Method: This study examined the structure of 98 MCQs included in nursing examinations at three Middle Eastern universities using a checklist composed of 22 literature-based principles. Results: Ninety MCQs (91.8%) contained one or more item-writing flaws, including linguistic errors and various problems with the stem and answer options. Of importance, most faculty did not use item analysis to assess the integrity of the examinations. Conclusion: Results confirm concerns about the standards faculty use for test construction and item analysis. Universities must ensure that the faculty they hire are fluent in English. Faculty would also benefit from workshops that focus on test construction and the use of item analysis.
| Original language | English |
|---|---|
| Pages (from-to) | 490-496 |
| Number of pages | 7 |
| Journal | Journal of Nursing Education |
| Volume | 56 |
| Issue number | 8 |
| DOIs | |
| State | Published - 2017 |
| Externally published | Yes |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 4 Quality Education
Title: Flaws of multiple choice questions in teacher-constructed nursing examinations: A pilot descriptive study