Using assessment results effectively

Providing meaningful feedback to students is an essential part of the assessment process, but instructors should also use assessment results to improve student learning and success. One benefit of online assessment, particularly objective online tests, is that it yields quick results that can be analysed with relative ease. These analyses can provide useful information about class performance, an individual student’s performance, and the quality of the items in the item bank.

Using overall results of an assessment

An advantage of online assessment, if it is graded online, is that overall results can normally be extracted easily. The type of assessment and the functionality of the online environment (LMS) in which an assessment was delivered will, however, determine the level of detail that is available. Some basic calculations and interpretations based on the overall results can nonetheless give instructors a general overview of class performance and help identify possible gaps that need to be addressed.

Basic calculations and interpretations of overall results

Class Average

The class average for an assessment is a standard metric of overall performance in the assessment. What counts as an acceptable class average depends on many factors (class size, assessment type, course content, etc.), but a class average around 60% is often considered desirable.

How to calculate class average

Class averages can be calculated easily in Microsoft Excel. Export the results of your assessment to Excel if this option is available; if not, capture your results as in the example below. The formula for calculating an average is also shown in screenshot 1.
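If you prefer a script to a spreadsheet, the same calculation can be sketched in Python. The scores below are hypothetical, not the values from the screenshot:

```python
# Hypothetical percentage scores for a class of 10 students (assumed data).
scores = [72, 55, 48, 81, 60, 45, 67, 90, 52, 38]

# Class average: sum of scores divided by number of students,
# equivalent to Excel's =AVERAGE(range).
class_average = sum(scores) / len(scores)
print(f"Class average: {class_average:.1f}%")
```

For a real class, replace the list with scores exported from your LMS.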

Screenshot 1

Percentage of students who failed / percentage of students who passed with distinction

In addition to the class average, calculating the percentage of students who performed at the extreme ends of the distribution tells you something about the assessment. If too many students failed the assessment, for instance, it is possible that your students did not understand the content covered, that the time limit was unreasonable, or that the questions were poorly or ambiguously constructed. If this is the case, it is worth doing further analyses, such as interpreting your item analysis report. On the other hand, if too large a proportion of your class obtains distinctions, the test may have been too easy or the time limit too lenient.

How to calculate percentage of students who failed the assessment

Refer to screenshot 2

1. Select the column with the percentage scores for the assessment
2. Sort the percentages from Smallest to Largest

Screenshot 2

Refer to screenshot 3

3. Select all the cells with a ‘fail’ percentage. If a ‘pass’ mark is 50% for instance, the cells shown in screenshot 3 will be selected.
4. Refer to the bottom right corner to view a count of the cells that are selected (7 in the example)
5. Divide the total number of students who failed by the total number of students who completed the test. In the example, 25 students completed the test and 7 students failed which means that 0.28 (28%) of the class failed the test.
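The same count-and-divide logic can be expressed in Python. The scores and the 50% pass mark below are assumptions for illustration:

```python
# Hypothetical percentage scores (assumed data, not the screenshot values).
scores = [72, 55, 48, 81, 60, 45, 67, 90, 52, 38]
pass_mark = 50  # assumed pass mark of 50%

# Count scores below the pass mark, then divide by class size,
# mirroring steps 3-5 above.
failed = sum(1 for s in scores if s < pass_mark)
fail_rate = failed / len(scores)
print(f"{failed} of {len(scores)} students failed ({fail_rate:.0%})")
```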

Screenshot 3

How to calculate the percentage of students who passed with distinction

Follow the same steps as when calculating the percentage of students who failed the test, but instead of selecting the cells of all students who failed, select the cells of all students who obtained a distinction percentage in the test. If 75% constitutes a pass with distinction, this is a total of 2 students, as shown in screenshot 4 below.

Screenshot 4

1. Select all the cells with a ‘pass with distinction’ percentage. If a distinction is 75% for instance, the cells shown in screenshot 4 will be selected.
2. Refer to the bottom right corner to view a count of the cells that are selected (2 in the example)
3. Divide the total number of students who passed with distinction by the total number of students who completed the test. In the example, 25 students completed the test and 2 students passed with distinction, which means that 0.08 (8%) of the class passed the test with distinction.
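In script form this is the same counting pattern with the comparison reversed. The scores and the 75% distinction threshold are assumed values:

```python
# Hypothetical percentage scores (assumed data).
scores = [72, 55, 48, 81, 60, 45, 67, 90, 52, 38]
distinction_mark = 75  # assumed distinction threshold of 75%

# Count scores at or above the distinction mark and divide by class size.
distinctions = sum(1 for s in scores if s >= distinction_mark)
distinction_rate = distinctions / len(scores)
print(f"{distinctions} of {len(scores)} students obtained a distinction "
      f"({distinction_rate:.0%})")
```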

Identifying sections that need extra attention

It is useful to look at each question or section separately to gain insight into which questions or sections students did well in and which they struggled with. This will tell you which concepts you may need to re-explain or spend more time on in the course.

How to calculate averages of different sections/ questions in a test

As with the class average, calculating the average for each section or question is simple in Excel. Follow the same steps as for the class average, applied to each question or section in your results, as shown in screenshot 5.

Screenshot 5

Based on the example shown in screenshot 5, an instructor may want to do further analyses to understand why students performed poorly in Question 6 (average of 44%), Question 7 (average of 40%), and Question 9 (average of 36%). Further analyses can be done with an item analysis report.
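Per-question averages can also be computed in a short script. The score table below is hypothetical (rows are students, columns are questions):

```python
# Hypothetical per-question percentage scores (assumed data);
# each row is one student, each column one question.
results = [
    [80, 40, 30],
    [60, 50, 40],
    [100, 30, 50],
    [70, 40, 20],
]

# Average each column, as =AVERAGE would over a column in Excel.
n_questions = len(results[0])
for q in range(n_questions):
    avg = sum(row[q] for row in results) / len(results)
    print(f"Question {q + 1} average: {avg:.1f}%")
```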

Item analysis reports

Item analysis is a set of statistical techniques that examines student responses to individual multiple-choice questions in order to assess the quality of the items, as well as the quality of the test as a whole.

Why item analysis reports are useful:

  • They provide an overview of how your students performed in a test
  • They help instructors determine which questions are of good quality and which may need to be revised
  • They aid in the quality assurance of your item (question) bank
  • They help instructors identify areas in which students are struggling or performing poorly

Basic item analysis calculations

Most online test platforms allow instructors to generate item analysis reports automatically. The basic item analysis calculations include item difficulty and discrimination values.

Item difficulty (p-value)

Item difficulty (p-value) is the proportion of students who answered the item correctly. It ranges from 0 to 1.

Interpreting item difficulty (p-value):

The higher the value, the easier the question.

Ideal difficulty values:

  • Five-option MCQ: 0.60
  • Four-option MCQ: 0.62
  • Three-option MCQ: 0.66
  • True/false (two-option MCQ): 0.75

Poor difficulty values:

  • A p-value above 0.9 indicates an item that may be too easy. Solutions include revising the options (alternatives), revising the question (stem), or removing the question from the question bank.
  • A p-value below 0.2 indicates an item that may be too difficult. Possible responses include revising the language of the question if it is too difficult, removing the question from the question bank, or targeting the concept covered in the question for re-instruction.
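These thresholds are easy to apply programmatically. The p-values below are hypothetical:

```python
# Hypothetical p-values for a set of items (assumed data).
p_values = {"Q1": 0.95, "Q2": 0.62, "Q3": 0.15, "Q4": 0.70}

def flag(p):
    """Classify an item by the difficulty thresholds described above."""
    if p > 0.9:
        return "may be too easy"
    if p < 0.2:
        return "may be too difficult"
    return "acceptable difficulty"

for item, p in p_values.items():
    print(f"{item}: p = {p:.2f} -> {flag(p)}")
```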

Discrimination index

The discrimination index is the percentage of well-performing students who answered the question correctly minus the percentage of poorly performing students who answered it correctly. It is a value between -1 and 1.

Interpreting the discrimination index:

The closer to 1, the more discriminating the item.

  • Very good item: 0.40 or higher
  • Good item: 0.30 – 0.39
  • Fair item: 0.20 – 0.29
  • Poor item: 0.19 or less

  • Items with a discrimination value close to or below 0 should be removed or revised.
  • A discrimination value below 0 (a negative number) means that more students who performed poorly in the test overall got the question right than students who performed well overall.

If the online test platform that you are using does not automatically generate item analysis reports, you can manually calculate item difficulty and the discrimination index in Excel.

Manually calculating item difficulty:
  1. Indicate correct and incorrect responses for each question with 1 and 0
  2. Calculate the average for each question; this average is the item difficulty (p-value)
  3. Excel formula: =AVERAGE(range)
  4. Example Excel formula: =AVERAGE(F4:F30)
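The same steps can be sketched in Python: the p-value is just the mean of a 0/1 column. The responses below are assumed data:

```python
# Hypothetical 0/1 responses for one question: 1 = correct, 0 = incorrect
# (equivalent to one column of the Excel sheet).
responses = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

# Item difficulty (p-value) is the mean of the 0/1 column,
# i.e. the proportion of students who answered correctly.
p_value = sum(responses) / len(responses)
print(f"Item difficulty (p-value): {p_value:.2f}")
```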

Manual calculation steps to determine the discrimination index:

  • Step 1: Sort students from poorest performing to best performing.

    Excel: Sort function (Home tab)

  • Step 2: Highlight the top 30% and the bottom 30% of students. (No formula needed.)

  • Step 3: Count the number of students in the upper group who got the question right.

    Excel formula (version 2013 and up): =COUNTIF(number:number;value)
    Excel formula (versions older than 2013): =COUNTIF(number:number,"=value")

  • Step 4: Count the number of students in the bottom group who got the question right.

    Excel formula (version 2013 and up): =COUNTIF(number:number;value)
    Excel formula (versions older than 2013): =COUNTIF(number:number,"=value")

  • Step 5: Calculate the percentage of the upper group who got the question right.

    Excel: =cell/total number of students in upper group

  • Step 6: Calculate the percentage of the lower group who got the question right.

    Excel: =cell/total number of students in lower group

  • Step 7: Discrimination index = percentage of the upper group who got the question right minus percentage of the lower group who got the question right.

    Excel: =cell-cell
