Title
Multiple choice questions as a tool for assessing cognitive skills for undergraduate students at Alexandria Faculty of Medicine
Author
Ismail, Mennatallah Hassan Rizk.
Thesis committee
Researcher / Mennatallah Hassan Rizk Ismail
Examiner / Hend Mamdouh Hanafy
Supervisor / Soha Rashed Aref Moustafa
Supervisor / Thanaa Hassan Rady
Supervisor / Eman Hassan Diab
Subject
Medical Education.
Publication date
2015.
Number of pages
90 p. :
Language
English
Degree
Master's
Specialty
Medicine (miscellaneous)
Approval date
15/6/2015
Place of approval
Alexandria University - Faculty of Medicine - Medical Education
Index
Only 14 pages are available for public view

Abstract

Background: At Alexandria Faculty of Medicine, multiple choice questions (MCQs) have for many years been among the most frequently used written assessments of undergraduate medical students. This tool is particularly efficient when there is a large body of material to be tested and a large number of students to be assessed. MCQ examinations usually have manageable logistics, are easy to administer, and can be scored rapidly using computers. Difficulty indices and the discriminatory value of each item can be easily calculated, facilitating standard application. However, designing valid questions and responses is a demanding skill that can be time consuming, and MCQs have been criticized for not assessing higher-order learning and analytic skills. Methods of ensuring quality assurance are essential when assessment is used for certification. Hence, information that increases our understanding of multiple-choice items and tests will develop our ability to improve item writing and test design, and to better measure achievement and skill levels, which will ultimately lead to more appropriate score interpretation and decision making.
Aim of the study: The purpose of this study was to critically appraise the MCQ test papers with regard to: editing of the test papers and students' directions; construct validity (levels of cognitive skills tested); conformity of test items with standard guidelines for MCQ construction in terms of item format and structure (stem, lead-in question, and responses); identification of item-writing technical flaws related to testwiseness and irrelevant difficulty; and, finally, analysis of MCQ quality in terms of difficulty and discrimination indices, distractor efficiency, and internal consistency reliability (MCQ item analysis).
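The item-analysis quantities named above follow standard classical-test-theory definitions: the difficulty index is the proportion of examinees answering an item correctly, the discrimination index contrasts the top and bottom scoring groups (commonly 27% each), and internal consistency for dichotomous items is often estimated with Kuder-Richardson 20 (KR-20). The following is a minimal illustrative sketch of those formulas, not the software used in the study (the thesis used Remark Classic OMR®); the function names and sample data are hypothetical.

```python
def item_analysis(responses):
    """Classical difficulty (p) and discrimination (D) per item.

    responses: list of per-student lists of 0/1 item scores.
    Returns a list of (p, D) tuples, one per item.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]

    # Rank students by total score; take the upper and lower 27% groups.
    order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
    k = max(1, round(0.27 * n_students))
    upper, lower = order[:k], order[-k:]

    results = []
    for j in range(n_items):
        p = sum(r[j] for r in responses) / n_students           # difficulty index
        d = (sum(responses[i][j] for i in upper)
             - sum(responses[i][j] for i in lower)) / k         # discrimination index
        results.append((round(p, 2), round(d, 2)))
    return results


def kr20(responses):
    """Kuder-Richardson 20 internal consistency for dichotomous items."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n_students
    variance = sum((t - mean) ** 2 for t in totals) / n_students  # population variance
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(r[j] for r in responses) / n_students
        sum_pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - sum_pq / variance)
```

For example, a 3-item test taken by four students with total scores 3, 2, 1, and 0 yields item difficulties of 0.75, 0.50, and 0.25, each with perfect discrimination (D = 1.0) between the top and bottom scorers.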
Methods: All third-year, first-semester MCQ examination papers (end of module and end of semester) of the three integrated/multidisciplinary modules (Endocrine, Reproductive, and Urinary Systems) for the two academic years 2011/2012 and 2013/2014, together with their corresponding outputs from Remark Classic OMR® (MCQ test item analysis software), were surveyed. Records of the Medical Education Department were also reviewed to retrieve information about staff members who attended the training workshops on 'Constructing Objective Written Test Questions'.
Results: All test papers were completely and accurately labeled. Students’ directions were clear and comprehensive.
A total of 12 exams (450 MCQs) were evaluated. Each MCQ was rated individually and then collectively by four assessors (the researcher and the three research supervisors). The effectiveness of questions was defined by their ability to measure higher cognitive skills (application and/or problem solving), as determined by a modification of Bloom's taxonomy, and their quality was determined by the presence of item-writing flaws. The overall performance of the MCQs in all exams showed that 56.4% of questions tested recall of knowledge, 32.9% tested understanding, and just 10.7% addressed application and/or problem solving (the highest-order cognitive skills).