Category Archives: Assessment

How valuable are teaching centers? Stepping Up for Outcomes Assessment.

I just returned from the 2012 Annual Meeting of the Association of American Colleges & Universities (AAC&U) in Washington, DC.  I was honored to be a panelist in a session focused on the role of teaching centers in institutional transformation.  The panelists provided examples of how teaching centers are collaborating with other units to advance institutional change. My fellow panelists included Phyllis Worthy Dawkins, Provost and Senior Vice President of Dillard University; Peter Felton, Assistant Provost at Elon University; and Virginia Lee, a consultant with her own company. All of us are very involved in the professional society for faculty developers in higher ed (podnetwork.org).

My contribution was to briefly talk about the role of the Schreyer Institute for Teaching Excellence in Penn State’s program and student learning outcomes assessment initiatives.  This led me to ponder why I think it is so important for us to take on both leadership and collaborative roles at Penn State.  My short answers:

  1. Teaching centers and faculty developers have valuable knowledge and skills to offer the teaching community;
  2. If we don’t collaborate and lead, the university is at risk of losing a valuable resource because no one will know how valuable we are!

If no one knows how valuable you are, how valuable are you really?

One way for the institution to recognize the value of teaching centers is for us to step up to the plate and take on areas, tasks, and projects that no other units want or that are likely to be difficult (another panelist talked about Gen Ed Revision!).

Assessment is a case in point.  Course, Program, and Institutional Assessment offer a great opportunity to further establish our value. 

So, what did Stepping Up for Outcomes Assessment entail?  First, it did not involve us acting as assessment enforcers, nor did it involve us gathering and interpreting evidence.  Instead, we helped faculty and administrators responsible for student learning outcomes assessment meet their obligations.

A colleague at the University of Washington (J. Turns) coined the phrase “Assessment?  I hate it.  What is it?”, which captures what we’ve done quite well!

We decided to step up and provide:

  • Information.  When first entering the assessment arena, faculty and administrators have lots of questions (Why do we have to do this?  Why is it important?  Why don’t course grades count?  What exactly am I supposed to do?!?  Is our disciplinary accreditation evidence sufficient?)
  • Opportunities for intra- and interdisciplinary discussions via workshops, conferences, and meetings
  • Guidance about the process of assessment
  • Examples (goals, outcomes, plans)
  • Templates (curricular mapping, identifying and developing goals and outcomes, reports)
  • Feedback on assessment plans
  • Success stories

As we became more involved, we took on maintenance and further development of the University’s assessment website (assess.psu.edu), which has become the “go-to” place for information and updates on the Penn State approach to assessment.  (We also regularly hear from colleagues at other institutions about how they have used these resources.)

Stay tuned!  The Schreyer Institute and Penn State’s assessment story continues to evolve and mature.  And never forget, we are always looking for new opportunities to become even more valuable to our community of teachers and learners. Visit or contact us.

The Red Pen: Grading Reconsidered

Even as we continue to process the distressing events at Penn State, we are aware that some of the normal aspects of academic life continue. Take grading, for example.
We are fast approaching the end of the semester, a time when those of us teaching take up the “red pen” to grade student work.
It’s not a task most of us look forward to, because frankly grading wears us out. After all, it takes time, thought, and energy to give feedback on all those student papers, exams, projects, reports, and bluebooks.  
I invite you to put down the red pen for a moment and consider the following questions: 1) Are grading and feedback the same thing? 2) If not, what is feedback for? 3) How much do students need feedback on their performance at the end of the semester?
These are questions David Brooks asked himself. The answers he came up with might surprise you:
http://chronicle.com/article/Wielding-the-Red-Pen/126200/

Assessing Student Blog Activity

As more faculty continue to leverage the University’s blog platform for teaching and learning, we are continually asked:

“How do I assess what my students are doing on the blog?”

This question is particularly challenging for a variety of reasons.  In some instances, students are writing in their own personal blog space.  With a roster of 50 students, this represents 50 different blogs the instructor must visit for each assignment (although an RSS reader can help instructors be more efficient using this method). The model that we see more often now involves instructors creating a blog, then adding all of their students as authors to that blog.  This alleviates the need to visit each blog separately and, because all entries are authored in a single blog, makes it simple for students to interact with one another.

In terms of the actual assessment of student work, we typically see two different methods.

  1. Assess each individual entry.  This typically involves some sort of rubric to guide the student’s writing, and each individual entry receives a specific grade.  Mark Sample offers a good example rubric in the Chronicle.
  2. Assess the students’ blogging activity as a whole.  This method of assessment provides a single grade for the entirety of a student’s blogging activity throughout the semester.  Chris Long, Associate Professor of Philosophy, assesses student blog work in this manner and also shares the rubric he uses on his website.

Do you have a rubric for assessing student blogging activity?  If you do, and you don’t mind sharing, please feel free to send it to me (bkp10[at]psu.edu).  I’m working on a collection of blog rubrics to share on our website for new faculty looking to experiment with blogs.

Midsemester Feedback – 10 Tips for a Better Class

It is now midway through the semester.  How is your course going?  How do you know?

Now is the perfect time to start soliciting formative feedback from your students.  Collecting feedback from students can serve many purposes.  You can ascertain what students are and are not learning as well as how they are learning it, get formative feedback on your teaching, tailor your course to student needs, increase student motivation, improve student learning, and give students an avenue to communicate openly with you about the course.  These tips will help you collect, analyze, and act on student responses and foster formative teaching and learning excellence in your classroom.

1. Tell your students that their feedback is important, why you are collecting it, and what you plan to do with their input.  If you let them know how they are going to benefit from their efforts, you will get much more thorough and thoughtful responses.

2. Give your students precise instructions and examples of how to present constructive feedback.  Often students do not have experience giving formative (midsemester) responses and may never have been asked their opinions about their own learning experiences.  One of the best ways to solicit good feedback is to make feedback a routine part of your course.

3. Let your students know that you are looking for constructive feedback (keep reinforcing this) that you can respond to during the current semester.  You are much more likely to be able to respond to concerns about the pace of your course or the difficulty/style of exams than to predetermined situational factors such as the location, the time the class meets, the textbook, etc.

4. Make sure that you only collect data that you can and will respond to.  One of students’ greatest complaints is assignments and tasks that take (or waste) time and aren’t useful to learning outcomes; asking for feedback you can’t or won’t use wastes both your time and your students’.

5. If you are teaching a large class, you may want to use an online polling system to collect your feedback.  Angel, SurveyMonkey, and Google Forms all offer anonymous submission options that let you more easily collect, organize, and analyze data.

6. Focus your feedback questions around the following ideas:

a. What helps you learn in this course?  Examples?

b. What changes would make the course more helpful? Suggestions?

7. Assess your positive feedback.  Look at what you’re doing well, what the students are responding well to, and what is aiding in student learning.  Keep it up!

8. Carefully look at your feedback and make sure not to focus on a few negative comments.  Compare the responses to your goals and objectives for the course and assess what changes you can make to facilitate student learning.  You may want to review the data with a colleague or make an appointment with a consultant at the Schreyer Institute for Teaching Excellence.  To look more deeply into comments and concerns, you may find it helpful to watch yourself lecture or to borrow students’ lecture notes and compare what you’re teaching with what students are writing down.

9. It is vitally important that you promptly share your students’ feedback with the class and let them know your plans.  You most likely will not be able to attend to all of the concerns and comments, but your students will appreciate knowing what you plan to do, what you cannot do, and why.

10. Follow up!

Here’s to formative excellence in teaching and learning!

We have a wide variety of resources available at SITE, which you can explore in more depth at the link below, or contact us at site@psu.edu!

http://www.schreyerinstitute.psu.edu/Tools/MidsemesterFeedback

Other resources:

University of Sydney’s Quick and Easy Feedback Strategies:

http://www.itl.usyd.edu.au/feedback/gatherstufeed.htm

Cornell’s Teaching Evaluation Handbook:

http://www.cte.cornell.edu/resources/teh/teh.html

Stop, Go, Change…

If I had a nickel for every time I’ve recommended the mid-semester evaluation to my faculty friends, I’d be spending my days sitting on a beach, with an umbrella drink, reading trashy novels.  (And now you know my plan for retirement.) A “well conducted” mid-semester evaluation can gather a plethora of interesting and useful information for the faculty member. We here at SITE recommend a mid-semester evaluation as one of the tools a faculty member should use in almost every consultation we do. I sometimes feel like a doctor explaining why exercise is the key to good health. Don’t you get tired of hearing that? Is it only the fact that it is undeniably true that keeps you from banging your head on the nearest desk? Such is the case with mid-semester evaluations. They are sometimes scary, sometimes a PIA (you can look that up in the Urban Dictionary), but they yield fascinating information when done well.

My prescription for a well done mid-semester evaluation is pretty simple (some of my colleagues will disagree with my contentions, but that’s OK too).

  • It should be short – really short. The average student should not need more than 5 minutes to complete it, and it will be easy for you to evaluate.
  • It should appear random and not systematic. Like anything else, a student who becomes inured to an activity will begin to feel bored with it.
  • It should solicit information that you actually may be able to use.
  • Debrief, debrief, debrief. No matter what kind of feedback you get debrief it with the class. Of my four elements for a valuable mid-semester evaluation this is the most important.

Keeping these simple rules in mind, I am “presenting” a mid-semester evaluation that is new to me and is both elegant and easy.

It’s called Stop, Go, Change and it goes like this:

Distribute file cards to the class and ask them to make three comments as follows:

1. Stop: something you don’t like – can be about the professor, the class, the material, your fellow students, yourself, anything at all.

2. Go: something you do like – ditto above.

3. Change: something about your own learning – what do you need to do [more of or better] to succeed in this class?

That’s it. Give your class about 5 minutes (maximum) to complete the evaluation and collect the cards as they leave. Now sit down with your favorite comforting beverage and read through them. You will get lots of information, and most of it will be interesting, if not useful. If there are suggestions that you can implement immediately, do it! If there are unreasonable suggestions (e.g., stop assigning homework), explain why your homework assignments are an integral part of your teaching and perhaps ask what might help the class complete homework assignments.

Debriefing is the key to any good mid-semester evaluation. It says you heard and responded, even if you can’t do what they ask.

If you need help interpreting your evaluations, feel free to come in and speak to any one of us. We’ll be delighted to give you a hand.

Try it, you’ll see – you won’t be sorry.



Rhetoric and Civic Life (LA101H): A Teaching & Learning Exemplar

I recently heard about some of the outcomes of one of our Schreyer Institute Teaching Support Grants (TSG).  Veena Raman (Communication Arts & Sciences) and Debra Hawhee (English, Rhetoric) received a grant to conduct an assessment of the innovative interdisciplinary course that integrates elements of Effective Speech (CAS 100) and Rhetoric and Composition (ENGL 15/30) to “develop students skills in composing and delivering purposeful and effective messages, orally, verbally, and digitally” (cf. proposal).  The purpose of the TSG project was to assess the effectiveness of the course material and engagement strategies.

Based on feedback gathered from students, instructors, and the Faculty Senate, the College of the Liberal Arts is now pursuing two linked courses for first-year students, which they propose be required for aspiring Paterno Fellows and Schreyer Honors Scholars.

The Faculty Advisory Committee of the Schreyer Honors College got just a taste of the course content and activities, but it is impressive.  A couple of things stood out for me, including that this project serves as a great example of:
1) cross-disciplinarity–the project deliberately crosses disciplinary boundaries that do not support the desired student learning and skills development;
2) good pedagogy–instructional practices are regularly reviewed and tweaked to benefit students and instructors;
3) integration of technology–the course requires students to explore rhetorical writing and thinking using a variety of technological media, including blogs, formal writing, videos, podcasts, and speeches;
4) humanities assessment–the project has explicit objectives, and the assessments are solidly founded within the ethos of the humanities.  Good examples of assessment practices in the humanities are relatively rare, primarily because Student Learning Outcomes Assessment is new to these disciplines.  Models from disciplines with longer histories with SLOs (engineering, health professions) do not tend to translate well to the humanities.  This project will no doubt help other faculty move from the possible to the actual in humanities learning assessment.

Kudos to Veena and Debra and to SITE for supporting their efforts.  And I cannot help but note the fluency with which both of these professors use the language of assessment–this is a remarkable accomplishment in just a few years’ time. We look forward to hearing more in the future!
 

Assessing Teamwork

Many of us incorporate some level of teamwork in our courses.  I typically teach in the College of Information Sciences and Technology, where nearly every course has some level of teaming.  In my course in particular, IST 446, over half of the points are based on team assignments.  With this much emphasis on teaming, it is often difficult to assess teamwork fairly.  I tried something a bit different this semester, and want to share what I felt were the pros and cons of my method.

First, I used the Comprehensive Assessment for Team-Member Effectiveness (CATME) system that originated from Purdue University. This is a web-based tool that allows faculty (regardless of university) to upload a course list with team data, and create assessments that allow students to rate one another based on 13 different categories related to teaming (all from teaming research and literature).  This year, I used the following categories:

  • Contribution to work
  • Interacting with teammates
  • Keeping Team on track
  • Expecting quality
  • Team satisfaction

The system will generate emails to all students who need to complete team evaluations.  This is actually a 40-point assignment in my course, requiring each student to log in to the system and evaluate their teammates.  I tell them that if they complete the surveys for each team member, they will always start out with a 40/40.  Then, based on their peers’ evaluations, I make adjustments to the grade for the assignment.  Here are the steps I used (a rough code sketch follows the list):

  1. I created a ‘team average’ based on the scores of each team member.  For instance, if we had 4 people per team, I simply added up their total scores and divided by 4 for the average.
  2. Next, I compared each individual’s average to the team average.  Individuals whose average was more than a whole point below the team average (on a 5-point scale) received a letter-grade decrease for the assignment.  Assuming the team average was 4.2 and someone received an average of 3.1, that student just went from an “A” to an “A-”.
  3. I then went in .50 intervals.  So if a student had, for example, a 2.5 against that 4.2 team average, I would drop the student another step, to a “B+”.
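
For illustration, here is a minimal sketch of that adjustment logic in Python.  It is only a reconstruction of the three steps above: the 1.0 and .50 thresholds come from the description, but the grade ladder and the function names are assumptions made for the example, not anything CATME provides.

```python
# Rough sketch of the peer-rating grade adjustment described above.
# Assumes each student's peer ratings are already averaged on CATME's
# 5-point scale; the grade ladder below is an assumption, not policy.

GRADE_LADDER = ["A", "A-", "B+", "B", "B-", "C+", "C", "D", "F"]

def team_average(member_averages):
    """Step 1: mean of each member's average peer rating."""
    return sum(member_averages) / len(member_averages)

def adjusted_grade(student_avg, team_avg, step=0.50, start="A"):
    """Steps 2-3: drop one grade once a student is a full point below the
    team average, then one more grade per additional `step` below that."""
    deficit = team_avg - student_avg
    if deficit < 1.0:
        return start
    drops = 1 + int((deficit - 1.0) // step)
    index = min(GRADE_LADDER.index(start) + drops, len(GRADE_LADDER) - 1)
    return GRADE_LADDER[index]

# Example from the text: team average 4.2, one member averaging 3.1 -> "A-"
print(adjusted_grade(3.1, team_average([4.5, 4.4, 4.8, 3.1])))
# The 2.5 example against the same 4.2 team average -> "B+"
print(adjusted_grade(2.5, 4.2))
```

Changing `step` to 0.25 in this sketch models the .25-interval adjustment I consider below.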

Overall, this was good in concept, but I implemented it poorly.  Students who were rated poorly by their peers brought the team average down very low, in essence making it very difficult for anyone to fall a whole point below the team average.  If I did this again, I would probably use .25 intervals to adjust grades.

Also, I did not find a way to incorporate the qualitative comments the system collects.  On one team, for example, 3 of the 4 members commented about the lack of participation of a single group member. My quantitative method did drop that student from an “A” to an “A-” for the assignment, but it was clear to me that I should have dropped the student further.  I did not, however, have any standardized way to let written feedback affect the grade.

Lastly, the CATME system does a lot of interesting analysis for you, and highlights specific students who meet certain criteria.  For instance, one student was flagged as “Overconfident – The team’s average rating for this student is less than 3, and the student has rated themselves more than one point higher on average than this rating”.  Another student was flagged “Personality Conflict – This student has rated one or more team members a 2 or less, but the median rating of the student(s) by the other team members gives a score of at least 3. Perhaps this student just didn’t get along with the student(s) that got poor ratings?”

Finally, several students were marked as “High Performer – This condition indicates that the average rating for this student by the other members of the team is more than half a point higher than the overall average rating of the team. The students average rating must be higher than a 3.5 to qualify”.
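
To make those criteria concrete, here is a rough Python sketch of the flag rules as quoted above.  It is only a paraphrase of that wording, not CATME’s actual code or API, and the `ratings` data structure is hypothetical.

```python
# Approximate re-creation of the quoted CATME flag rules.
# ratings: dict mapping (rater, ratee) -> 1-5 score, including self-ratings.
from statistics import mean, median

def peer_average(ratings, student, team):
    """Average rating of `student` by the other team members."""
    return mean(ratings[(r, student)] for r in team if r != student)

def flag_students(ratings, team):
    team_avg = mean(peer_average(ratings, s, team) for s in team)
    flags = {s: [] for s in team}
    for s in team:
        avg = peer_average(ratings, s, team)
        # Overconfident: peers' average below 3, self-rating > 1 point higher.
        if avg < 3 and ratings[(s, s)] > avg + 1:
            flags[s].append("Overconfident")
        # High Performer: > 0.5 above the team average and above 3.5.
        if avg > team_avg + 0.5 and avg > 3.5:
            flags[s].append("High Performer")
        # Personality Conflict: s rates someone 2 or less whom the rest rate
        # at a median of at least 3.
        for t in team:
            if t != s and ratings[(s, t)] <= 2:
                others = [ratings[(r, t)] for r in team if r not in (s, t)]
                if others and median(others) >= 3:
                    flags[s].append("Personality Conflict")
                    break
    return flags
```

CATME computes and reports these flags itself; the sketch is only meant to make the thresholds explicit.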

My method this year didn’t do a good enough job of penalizing poor performers or rewarding high performers.  I think moving to .25 increments from the team mean will help me better penalize poor performers, but I’m still not sure how best to reward high performers within the current structure of the assignment. Ideas?

Intriguing article about the benefits of tests

The New York Times has an interesting story about a study reported in the journal Science. Students were first asked to read a text. Some of the students then took a test (an essay/recall task) about the reading; those students had better recall a week later than students who crammed or students who drew concept maps about the text.

So it’s possible we should be giving low-stakes tests more often, at least when recall is the goal. Any thoughts?

The article is here:

http://www.nytimes.com/2011/01/21/science/21memory.html

Grad schools weigh in on program assessment

One of the Chronicle blogs had an interesting conversation about program assessment in graduate schools. It’s worth reading, both for the administrators’ viewpoints and for the anonymous comments by readers (ranging from thoughtful to petulant): 

http://chronicle.com/blogs/measuring/student-learning-outcomes-come-to-grad-school/27552

 

Recent Releases: Student Learning Outcomes Assessment

The recent AIR Newsletter (Association for Institutional Research), http://eair.airweb.org/November2010/NILOANews.aspx, describes two new publications from NILOA, the National Institute for Learning Outcomes Assessment.

Anyone interested in how the whole regional accreditation process works in the US should take a look at the first of these publications.  The link to the Newsletter includes a brief description.

Provezis, S. (2010, October). Regional Accreditation and Student Learning Outcomes: Mapping the Territory. (NILOA Occasional Paper No. 6). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.  View the paper

Kinzie, J. (2010). Perspectives from Campus Leaders on the Current State of Student Learning Outcomes Assessment: NILOA Focus Group Summary 2009-2010. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). View the paper