Online teaching and promotion and tenure

Yesterday the Institute hosted a panel discussion, “How can the teaching of online courses be evaluated for P & T?”, featuring Keith Bailey, David DiBiase, Diane Parente, and Angela Linse.
Many PSU folks joined us, both face-to-face in 315 Rider Building and over Polycom from Brandywine, Fayette, and Erie.  Dave and his colleagues at the Dutton e-Education Institute created a nice peer review guide for online courses that the College of EMS uses. One thing Dave mentioned is that, using this guide, faculty who have never taught online can still provide a quality review of an online course.  This sparked some interesting debate among both panelists and attendees.  Keith Bailey and Diane Parente described their methods of online course peer review as well.  Both Dave and Diane encourage peer reviewers to be ‘in’ the online environment the instructor is using for the course for at least a month, sometimes for the entire semester. Keith noted this could be a tough sell for faculty who already have a great many commitments vying for their time.  Another interesting point was raised from Brandywine, where a guideline they established keeps most tenure-track instructors away from online instruction.  Some of this is due to SRTE ratings typically being six-tenths of a point lower for online instructors than for resident instructors.

All in all, it was a great discussion, and you could tell the panelists and some of the participants were passionate about the dialog.  Hopefully we can build on this enthusiasm and continue working toward a consistent, quality-driven method for evaluating the growing number of online courses offered around PSU.

4 thoughts on “Online teaching and promotion and tenure”


    Nope, I agree, I don’t think we can say something like that based on quantitative numbers alone. I also agree with the quote, but I wonder: how many tenure committees see it this way? My guess is that some decision makers treat SRTEs as a comparison mechanism between instructors, rather than as a measuring stick against a number that a unit or college feels represents ‘good teaching’.

    Just a guess; I really don’t know what goes on behind closed doors when administrators and peers sit down with a faculty member to discuss SRTEs.

  2. Angela R. Linse

    I just came across this interesting quote:
    “The spirit of the SRTE is best served by regarding SRTE results as the students’ view of the candidate’s teaching effectiveness in absolute terms – that Professor X (whose evaluation mean is 6.25) is a “very good teacher,” without necessarily saying that Professor X is a better “very good teacher” than Professor Y (whose evaluation mean is 6.10).”

    This is from page 3 of a report written by Jim Smith, Senator from the Ogontz [Abington] campus and member of the Senate Committee on Faculty Affairs.

    It is the same argument that I was trying to make in response to the 0.6 point difference between online and f2f courses. Can we really say that f2f teachers at Behrend are “better very good teachers” than their colleagues teaching online?


    “a faculty member’s tenure or reappointment should not hinge on a 0.6 point difference.” – I couldn’t agree more. I understand your point about over-interpretation of the ‘meaning’ of this data. But I get the impression we might be in the minority, and that the majority puts too much meaning into these numbers. I’m somewhat curious about the methodology behind Diane’s data and whether we should think about replicating this type of study across other campuses. This might be useful to the PS Online committee, but I think some of the other questions we devised are much higher priority.

  4. Angela R. Linse

    Actually, the 0.6 point difference (on the 1-7 SRTE scale) between online and f2f courses was documented by Penn State Erie-Behrend over the course of 8 years. The faculty member at Penn State Brandywine was concerned about this difference. Her question was whether pre-tenure faculty should be discouraged from teaching online, based on that data.

    The question assumes a number of things:
    1) that Behrend data can be extrapolated to all online vs. f2f comparisons; and
    2) that a .6 point difference is a meaningful difference in terms of teaching quality.

    First, while I am sure that Behrend’s data are sound, they should not be viewed as representative of all Penn State courses. Nor should it be assumed that they found a 0.6 difference between the online and f2f courses every year; this is an average difference. The variation between campuses might very well be greater than 0.6 points!

    Second, a faculty member’s tenure or reappointment should not hinge on a 0.6 point difference. That would be an over-interpretation of what is basically a half-point on the 1-7 scale. This is so even if Behrend’s 0.6 difference is statistically significant. Statistical significance is not a measure of meaning.

    If you had an individual or group of instructors with a mean score of 2.3 and a comparable group with a statistically significantly different mean score of 2.9, would you be comfortable inferring that the 2.9 scorer(s) are meaningfully better instructors than the 2.3 scorers?

    Is the difference more meaningful if the scores are 6.0 and 6.6? Remember that the scores come from most students rating the former low and the latter high. It does not represent students saying that Dr. X is fractionally more excellent than Dr. Y.
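    The distinction between statistical significance and meaningful difference can be illustrated numerically. The sketch below uses simulated ratings (not real SRTE data; the sample size and standard deviation are assumptions for illustration): with enough accumulated ratings, even a modest mean difference yields an enormous t statistic, yet the practical gap stays well under one scale point.

    ```python
    import math
    import random

    # Illustrative only: simulated ratings, not real SRTE data.
    # Two groups whose underlying means differ by 0.6 points, as in
    # the Behrend comparison discussed above. For simplicity the
    # simulated scores are normal and not clipped to the 1-7 scale.
    random.seed(0)
    n = 2000  # many ratings accumulated over several years (assumed)
    f2f = [random.gauss(6.3, 1.0) for _ in range(n)]
    online = [random.gauss(5.7, 1.0) for _ in range(n)]

    def welch_t(a, b):
        """Welch's t statistic for two independent samples."""
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

    diff = sum(f2f) / n - sum(online) / n
    t = welch_t(f2f, online)
    print(f"mean difference: {diff:.2f}, t = {t:.1f}")
    # With n this large, t lands far beyond any conventional cutoff,
    # so the difference is "statistically significant" -- but that
    # fact alone says nothing about whether 0.6 points reflects a
    # meaningful difference in teaching quality.
    ```

    The point of the sketch is exactly Angela’s: significance grows with sample size, so a significant result over eight years of data is compatible with a practically small difference.
    
    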

    The University Faculty Senate discussed the problems of interpreting SRTE averages (see Senate Agenda 2/21/1989 and page 7 of the Senate Record for the same date). In fact the original legislation referenced the student ratings research literature and cautioned against assigning SRTE scores “a precision that they do not possess” (see original 4/30/85 Senate Agenda and the Corrected Copy published in the 2/25/86 Senate Record).
