# Item Indices for Multiple Choice Questions

Many teachers use multiple choice questions to assess students' knowledge of a subject. This is especially true when the class is large and marking essays would prove impractical.

Even if best practices are followed when writing multiple choice exams, it can still be difficult to know whether the questions are doing the work they are supposed to. Fortunately, there are several quantitative measures that can be used to assess the quality of a multiple choice question.

This post will look at three ways you can determine the quality of your multiple choice questions using quantitative means. These three measures are

• Item facility
• Item discrimination
• Distractor efficiency

## Item Facility

Item facility measures the difficulty of a particular question. This is determined by the following formula

Item facility = (number of students who answered the item correctly) ÷ (total number of students who answered the item)

This formula simply calculates the proportion of students who answered the question correctly. There is no fixed cutoff for a good or bad item facility score. Your goal should be to separate the high-ability from the low-ability students in your class with challenging items that have a low item facility score. In addition, there should be several easier items with a high item facility score to support the weaker students as well as serve as warmups for the stronger ones.
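The calculation above can be sketched in a few lines of Python. The response data and answer key here are made-up examples for illustration, not real exam data.

```python
def item_facility(responses, correct_answer):
    """Fraction of students who answered the item correctly."""
    return sum(1 for r in responses if r == correct_answer) / len(responses)

# Hypothetical responses to one item from ten students; the key is "B".
responses = ["B", "B", "C", "B", "A", "B", "B", "D", "B", "B"]
print(item_facility(responses, "B"))  # 7 of 10 correct -> 0.7
```

An item facility of 0.7 would make this a moderately easy item; running the same function over every item on the exam gives you the mix of easy and challenging questions described above.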

## Item Discrimination

Item discrimination measures a question's ability to separate the strong students from the weak ones.

Item discrimination = (# correct in strong group − # correct in weak group) ÷ (½ × total number of students in the two groups)

The first step in calculating item discrimination is to divide the class into three groups by rank. The top third is the strong group, the middle third is the average group, and the bottom third is the weak group. The middle group is set aside, and the data from the strong and weak groups are used to determine the item discrimination.

The item discrimination ranges from −1 to 1, with 0 indicating no discrimination and 1 indicating perfect discrimination; a negative value means the weak group outperformed the strong group on that item. There are no hard cutoff points for item discrimination. However, items with values near zero or below are generally removed, while a range of positive values is expected on an exam.
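The formula can be sketched as follows, again with made-up data. Each list holds a 1 or 0 per student in the strong or weak group, indicating whether that student got the item right.

```python
def item_discrimination(scores_strong, scores_weak):
    """Item discrimination for one item.

    scores_strong / scores_weak: lists of 1 (correct) or 0 (incorrect)
    for the students in the top and bottom thirds of the class.
    """
    total = len(scores_strong) + len(scores_weak)
    return (sum(scores_strong) - sum(scores_weak)) / (0.5 * total)

# Hypothetical item: 8 of 10 strong students correct, 3 of 10 weak students.
strong = [1] * 8 + [0] * 2
weak = [1] * 3 + [0] * 7
print(item_discrimination(strong, weak))  # (8 - 3) / 10 = 0.5
```

The ½ in the denominator converts the total of the two groups into the size of one group, so the result is the difference between the two groups' proportions correct, which is what keeps the index between −1 and 1.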

## Distractor Efficiency

Distractor efficiency looks at the individual responses that students select on a multiple choice question. For example, if a question has four options, there should be a reasonable distribution of students across the various options.

Distractor efficiency is tabulated by simply counting which answer students select for each question. Again, there are no hard rules for removal. However, if nobody selected a distractor, it may not be a good one.
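The tally can be sketched like this, assuming a four-option item labeled A–D and a made-up set of responses:

```python
from collections import Counter

def distractor_counts(responses, options="ABCD"):
    """Tally how many students chose each option, including unchosen ones."""
    counts = Counter(responses)
    return {opt: counts.get(opt, 0) for opt in options}

# Hypothetical responses to one item; note that nobody chose "D",
# which suggests it may be a weak distractor worth rewriting.
responses = ["B", "A", "B", "C", "B", "B", "A", "B", "C", "B"]
print(distractor_counts(responses))  # {'A': 2, 'B': 6, 'C': 2, 'D': 0}
```

A distractor with a count of zero is doing no work; one that attracts more students than the correct answer may signal an ambiguous or miskeyed item.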

## Conclusion

Assessing multiple choice questions becomes much more important as the class grows larger or the test needs to be reused in various contexts. The information covered here is only an introduction to the much broader subject of item response theory.

## 2 thoughts on “Item Indices for Multiple Choice Questions”

1. Cita Nøgård

Hi! thank you for an interesting blog!
I have a couple of questions
I do not fully understand this formula: Item discrimination = (# items correct of strong group – # items correct of weak group) / (½ × total of two groups).
Why the .5 × total…? Could you explain that a bit? Or have I got caught in semantics or layout?
Secondly, what is a “reasonable” distribution of students picking the answers? Don’t you mean an even distribution between the three distractors? We do expect the students to pick the right answer, and hopefully students know the right answer. Therefore I would say the interesting thing is the plausibility of the distractors. What is the scientific research that you use for your opinion in this matter?