Making Changes

If you determine that too few students have met your benchmark on a particular learning objective, your next task is to figure out why that might have happened. This stage provides an opportunity for program faculty to get together to discuss the evidence collected and offer ideas about the changes that might make a difference in student learning. You will most likely find the answer within one of the dimensions of the teaching/learning/assessment cycle (Suskie, 2012).

Thus, you may determine that the program objective was not written well, or perhaps isn’t really important after all. In this case, your strategy would be to rewrite, replace, or delete the objective. Or, you may hypothesize that the objective isn’t being adequately addressed in the classroom: perhaps students aren’t getting enough practice, or the concept needs to be addressed at a deeper level. Finally, it may be that the assessment itself is not constructed in a way that best addresses the objective. Perhaps the assignment directions need to be revised, or the scoring rubric needs revision. For multiple-choice tests, it isn’t uncommon for questions to be worded in a way that leads many students to miss them even though they know the material. Test questions can be analyzed using a statistical procedure called item analysis.
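
As a rough illustration, a basic item analysis looks at each question’s difficulty index (the proportion of students who answered correctly) and a discrimination index (how well the item separates stronger from weaker students, for example the point-biserial correlation between the item score and the score on the rest of the test). The sketch below is a minimal, hypothetical example in Python using made-up response data, not a prescribed tool or the only way to run an item analysis:

```python
# Minimal item-analysis sketch (hypothetical data): for each question, compute
# the difficulty index (proportion correct) and a point-biserial discrimination
# index (correlation between the item score and the rest-of-test score).
# Items with very low difficulty or near-zero/negative discrimination are
# candidates for rewording, since they may trip up students who know the material.

import statistics

# Rows = students, columns = questions; 1 = correct, 0 = incorrect (made-up data).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
]

num_items = len(responses[0])
totals = [sum(row) for row in responses]

for item in range(num_items):
    item_scores = [row[item] for row in responses]
    difficulty = sum(item_scores) / len(item_scores)  # proportion answering correctly

    # Discrimination: correlate the item score with the total score on the
    # remaining items, so the item is not correlated with itself.
    rest_scores = [total - score for total, score in zip(totals, item_scores)]
    if len(set(item_scores)) > 1 and len(set(rest_scores)) > 1:
        discrimination = statistics.correlation(item_scores, rest_scores)
    else:
        discrimination = float("nan")  # undefined when either column is constant

    print(f"Q{item + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```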

When you have determined the likely cause of the low scores, you can develop changes aimed at improving student performance. A great next step is to implement those changes and then collect the same evidence again, so that you know whether the changes improved student learning, which *is* the goal of the whole process!

See an example below.

Questions? Contact the Schreyer Institute for Teaching Excellence at assess@psu.edu.


Assessing a psychology program: interpreting direct evidence and making changes

Students met our benchmark in their ability to evaluate the rationale (75%) and methods (76%) of a psychological research article. However, they did not meet our benchmark when evaluating the results (23%) or conclusions (44%). Faculty reviewers indicated that the difficulty students had with conclusions seemed to stem from their difficulty evaluating the results section. The faculty hypothesize that the difficulty students had evaluating research results may be associated with an inability to translate statistical results.

Looking at the curriculum map, the faculty noticed that statistics are introduced early but not addressed to any great extent in later courses. Thus, the faculty have determined that they need to focus more heavily on translating statistics in courses later in the curriculum.