Peer Evaluation of Group Work Options

Qualtrics Peer Evaluation

Qualtrics is a great option for delivering lightweight peer evaluations of group work performance.

At Penn State, faculty and staff can request a license to use Qualtrics, which is required for this approach. Please contact your IT department for more details.

Once you have a license, make a copy of this project into your list of projects. This is also required because you will manage the setup, delivery, and analysis of the data yourself.

Click to view or take the sample evaluation

To set up your project for use in your course, please follow these directions:

Step 1. Make a copy of the peer evaluation into your Qualtrics account

Step 2. Export your Canvas Grades

Step 3. Download this CSV template

Step 4. Copy the first column of the Canvas export, all of your students’ names, and paste it into the first column of the CSV template, replacing the sample data. You can ignore the empty cells under the other columns. Save your changes.

Step 5. Import your CSV roster to your Qualtrics project (see “Importing Reusable Choices” section)
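If you have many students, Steps 2–4 can be scripted instead of done by hand. The sketch below, in Python, pulls the first column (student names) out of a Canvas grades export and produces a one-column roster CSV. The exact column layout of a Canvas export and the template’s header row are assumptions here; check them against your own files before relying on this.

```python
import csv
import io

def extract_roster(canvas_csv_text):
    """Return the student names from the first column of a Canvas
    grades export, skipping the header row and the metadata row
    Canvas inserts (assumed here to be labeled 'Points Possible')."""
    reader = csv.reader(io.StringIO(canvas_csv_text))
    next(reader)  # skip the header row, e.g. Student, ID, ...
    names = []
    for row in reader:
        if not row or not row[0].strip():
            continue  # skip blank rows
        if row[0].strip() == "Points Possible":
            continue  # skip Canvas's metadata row
        names.append(row[0])
    return names

def roster_csv(names):
    """Write the names into a one-column CSV suitable for pasting
    into the template (header name 'Name' is an assumption)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Name"])
    for name in names:
        writer.writerow([name])
    return buf.getvalue()

# Hypothetical sample of a Canvas grades export
sample = (
    "Student,ID,Section,Assignment 1\n"
    "    Points Possible,,,100\n"
    '"Doe, Jane",1001,001,95\n'
    '"Smith, John",1002,001,88\n'
)
```

You would then save the output of `roster_csv(extract_roster(...))` over the template’s first column and import it in Step 5 as usual.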

To deliver the peer evaluation in your course, please follow these directions:

  1. Copy the distribution link
  2. Paste the link into your Canvas Course or share via a Canvas Announcement or a class email

To view the results after the evaluation has closed, please follow these directions:

  1. Open the evaluation in Qualtrics
  2. Switch to view the reports

Other Evaluation Approaches Tested

The following tests all fail for the same basic reason: each approach stores different students’ evaluations in the same response field, so separating the results for each student being evaluated would require a manual process. Each test does, however, explore a different way of entering data, piping text in from previous questions, and using display logic.

  • Test 1: Depending on how many teammates an evaluator chooses, they are provided just enough fields to enter the names of their teammates. Based on the names they have entered, the evaluator is provided with a matrix of selections for each teammate.
  • Test 2: The owner of the survey copies-and-pastes the names of the students into a list of potential choices for the evaluator to choose from to indicate who is on their team. Based on the names selected, the evaluator is provided with a separate slider-matrix of evaluation rankings.
  • Test 3: Evaluators are provided form entry fields to enter the names of each of their teammates, and those names are automatically populated on separate rows of a slider-matrix where the evaluator ranks their teammates’ contributions to the group work.

Earlier Efforts

  • I tested a similar evaluation process using a different approach intended for end-of-semester evaluations of IA/LA/TAs. I did not feel it would realistically work because of the amount of back-end data manipulation required to make sense of the results. That approach asks evaluators to group selections together to indicate who was in the evaluator’s class sections; evaluators would then provide rankings for each assistant based on their performance.
  • In 2018, I worked with Christian Vinten-Johansen <v23@psu.edu> with the Penn State Accessibility Team to create an accessible alternative to a CATME Peer Evaluation. At the time, CATME was not accessible.
  • I found earlier attempts as well that I’ll eventually document here.
