Monthly Archives: December 2011

The final mystery

After…

  • 1,000 entries and 2,358 comments on the class blog,
  • 114,000 page views by 52,000 unique visitors from 170 countries,
  • a Nobel Prize winner, a Dean, a President, four Professors and a journalist,
  • 140 multiple choice questions, 
  • 28 polls, 1,800 votes and 502 texts to the comment wall, 
  • 37.5 hours of class room time and 20 hours of revision tutorials,
  • c.300 PowerPoint slides, 30 YouTube videos and hundreds of websites,
  • a fire alarm, an earthquake, a child molestation scandal,
  • a car crash and a withdrawal

…SC200 2011 is done.  Did it make a difference? 

[Image: mystery mollusc. Photo credit: MBARI/MBNMS]
I’ll go back to normal teaching when it stops making a difference to me.

Harder?

The only persistent complaint about my course is that my tests are too hard.  I wonder.
The point of university is to make students better at solving problems, better at understanding new material, more questioning of authority, better at thinking for themselves, more creative and better at expressing themselves. It's not easy for students to develop those critical thinking skills and, left alone, most won't. So society (and the students' current and future families) enlists our help. We are in effect paid to stand between the student and the television.

[Image: high jump]

But how high should we set the bar? Surely this is one of the most important — and least discussed — questions in Higher Education. What level of excellence should we demand?

Here’s my contention. We should repeatedly stretch the students as far as they can be stretched – without them snapping.

The hard part is to know where that sweet spot is. After a lot of thought, I still don’t have a clear answer. But the guiding principles must go something like this.
First, we need to set critical thinking targets (expectations) as high as we can. How do we know when the target is high enough? It is surely when most students are not getting most things right most of the time. If they step into the classroom and get an A from the get-go, we are not asking enough of them. The real danger is not that we overdo it, but that we underdo it.
Second, things should be set up so that students who strive and steadily improve get rewarded. Equally, no achievement, no reward. Period. Reinforcing mediocrity with an A (or a B) is a disaster.
Third, students should get an A not for effort but rather for achievement. It is tragic when a student says they deserve an A because they put in the hours. The real world will eat that attitude. Hours don't count; it's what you do with them. We must ensure we are assessing only the quality of output.
What do these principles mean for SC200?
It means I need to make the most of my grading algorithm, which takes the best performance from several blog periods and tests. Poor performance early in the course has no grade legacy unless the poor performance continues. I should use the early part of the semester to force folks out of their comfort zone. Indeed, the ideal would be a bunch of shocking marks initially, with everyone climbing steadily to a final A grade based on outstanding performance in the last blog periods, tests and the final exam. A number won't make that climb, and there will be complaints, but do we serve our students well if we hold them to lesser standards than we hold ourselves?
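For what it's worth, the mechanics of that algorithm are simple to sketch. Here is a minimal illustration in Python; the scores are invented, and it only shows the "best performance counts" part of the scheme (best blog period, best two of four class tests), not the full weighting of the course.

# A minimal sketch (hypothetical scores) of the "best performance counts"
# idea: only the best blog period and the best two of the four class tests
# contribute, so poor early marks leave no grade legacy.

def best_of(scores, keep=1):
    """Average of the top `keep` scores."""
    top = sorted(scores, reverse=True)[:keep]
    return sum(top) / keep

blog_periods = [45, 62, 88]       # three blog periods; only the best counts
class_tests  = [51, 70, 85, 92]   # four class tests; the best two count

print(best_of(blog_periods, keep=1))   # 88.0
print(best_of(class_tests, keep=2))    # 88.5

The point of the structure is that a shocking first test or blog period simply vanishes from the record once something better replaces it.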
I particularly need to make the first test set the standard. I made it too easy this year. It was nice to have few complaints, but I inadvertently created a lot of complacency which morphed into more complaints after subsequent tests. Since I take the best two scores from four class tests, I should use the first test to lay down a marker and wake everyone up. That will focus minds on revision sessions. I should also do more pop quizzes on test questions. The tests are the only opportunity I have to force students to think critically.
The class blog is my place to develop student creativity and self-expression skills. I have to be much tougher on students who are not participating fully. This year I found myself giving pass marks for Facebook-like efforts. I must stop that. Moreover, I really must (MUST) keep the A's for frequent participation at the high performance levels described in the rubric (e.g. "Entries are conceptually sophisticated, engaged in a substantive way with the material"). And I need to mark the first blog period very firmly. I put a lot of time into individual feedback. I need to insist they lift their game in return. I need to do class sessions off the back of the first blog period, talking about what makes a good post, and what makes good comments. I've lots of examples of good and bad practice now. I also need to encourage the students to tackle difficult topics in their blog posts, or to be creative, or lateral, or especially lucid. Few are shocking me in a good way.
In all of this, the guiding principle should be to set the students clear and high goals and help them to get there. I need to tell them all this and manage expectations. I need to make clear, as Karin Foley memorably did in a different context, that I will do everything to get the students over the bar – except lower the bar.

All of this is well and good.  The main obstacle is the students. They do not like to get less than 100% at any point in the course. They view even a B as a failure. Many think they should get an A just by turning up (and some even if they don’t turn up). How did this come to pass?  What warped US education so badly? Anyone ever wonder why transcripts are packed with A’s and the US is way down the international league table in education?

The entitlement generation is not entitled to A’s. Instead, it has earned the right to stretch for an A, and to have us help them do as well as they can.  Getting things wrong is the only way to find your limits. Defining your limits is the first step to pushing them back. The right degree of failure is a spur, a motivator and an educator.  Students learn a lot from failure. We all do (this is why scientists know a lot). So long as the environment is right, having students fall over is a great way to teach. And an important lesson in life.
http://www.youtube.com/watch?v=Y6hz_s2XIAU
However, I have to tread carefully here. Deliberately setting out to make students uncomfortable is a delicate business in a Gen-Ed course like this. We want students choosing to do the course, and we do not want to reinforce the loathing many of them developed for K-12 science. I can keep the numbers up and the enthusiasm high by setting a low bar. But is it naive to think that the right form of stretching will also lead to student happiness – and better-rounded citizens?

Intentional grade inflation

I bribed the students with 2.5% extra credit to do the SRTE questionnaire. Last year, after a lot of verbal cajoling, I got the return rate to 58%. This year, with the grade-inflating incentive, I got the return rate to 86%. That's probably an important difference (though how would we know?): I imagine that extra quarter of the class contains the students without strong feelings.


But the shameless bribery had the following consequences:
  • One student who failed, passed.
  • Two students on a D got a C.
  • One student on a C got a C+.
  • Three on a C+ got a B-.
  • Nine B-'s got a B.
  • Fourteen B's got an A-.
  • Eight A-'s got an A.
I do worry about grade inflation, which is a nationwide (and indeed international) problem. In this particular case, there are some advantages to the 2.5% freebie. First, I do get full feedback (an 86% return rate is rare for large classes). Second, this freebie encourages me to keep my tests hard. And third, there are always a few students on boundaries who want their mark to be higher. It is much easier to say no if their true score was actually 2.5% lower.
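The arithmetic behind those shifts is trivial, but it is worth seeing why a flat bump only moves students who were already sitting just under a boundary. A minimal sketch in Python, with hypothetical cutoffs and raw scores (the real grade boundaries are not stated here):

# Hypothetical letter-grade cutoffs and raw scores, purely to illustrate how
# a flat 2.5% bump moves only the students sitting just below a boundary.
CUTOFFS = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"),
           (80, "B-"), (77, "C+"), (70, "C"), (60, "D"), (0, "F")]

def letter(score):
    for cutoff, grade in CUTOFFS:
        if score >= cutoff:
            return grade

for raw in (58.0, 88.1, 90.2):
    print(raw, letter(raw), "->", letter(raw + 2.5))
# 58.0: F  -> D   (a fail becomes a pass)
# 88.1: B+ -> A-  (a boundary case moves up a grade)
# 90.2: A- -> A-  (not close enough to a boundary to move)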
On balance, I think I'll do the same again next year – unless I get rapped on the knuckles by the University when word gets around.

Class attendance

It is clear from the SRTEs (student questionnaires) that many students are immensely annoyed that so many of their classmates skip class. I find it irritating too, though I guess for different reasons: we are not allowed more students than there are seats in the classroom, so non-attenders exclude potentially more motivated students. At the beginning of semester, I did my best to get rid of the half-hearted. In response, seven students left, allowing seven more to join. But there was still a lot of non-attendance:
[Chart: attendance at the eight unannounced pop quizzes]
I do not know how this compares with other classes on campus (I think the university should publish this sort of data – it would be useful for Faculty, and for fee-paying parents…). On purely financial grounds, I remain staggered that students choose not to come to class. If students don't want to interact with Professors they should go somewhere cheaper.
Should I do more about forcing the issue? As things stand now, I give 16% for attendance, made up of presence at four or more pop quizzes (I gave eight quizzes, without warning; that's where the data points above come from). Next year I might raise that requirement to six or more out of eight and see what happens. But I need to drop the 16% to 10%. That 16% enabled two students to pass who participated miserably on the blog. I don't think I need to give too many marks for attendance: poor attendance really shows in the tests and exams (worth 40%). And why should we reward people with marks for just showing up?
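The rule itself is easy enough to write down. A minimal Python sketch of one reading of it (full credit at the threshold, nothing below it; the exact scaling is not spelled out above, and the attendance records are invented), comparing this year's rule with the stricter one I'm considering:

# One reading of the pop-quiz attendance credit: full credit for presence at
# `threshold` or more of the eight unannounced quizzes, nothing below that.
# The attendance figures below are hypothetical.

def attendance_credit(quizzes_present, threshold, weight):
    return weight if quizzes_present >= threshold else 0.0

for present in (3, 4, 6, 8):
    this_year = attendance_credit(present, threshold=4, weight=16.0)  # current rule
    possible  = attendance_credit(present, threshold=6, weight=10.0)  # rule I'm considering
    print(present, this_year, possible)
# present at 3: current rule gives 0,  stricter rule gives 0
# present at 4: current rule gives 16, stricter rule gives 0
# present at 6: current rule gives 16, stricter rule gives 10
# present at 8: current rule gives 16, stricter rule gives 10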
Ultimately, it is not even clear to me that we should be forcing students to come to class. Sure it’s best for them. But they are now adults. Life will not be looking out for slackers. Why should we?

Student rating of teaching effectiveness

Bribed by the offer of extra credit, 85 of my 99 students filled in the course questionnaire. The students are asked to rate things on a scale from 1 (staggeringly bad) to 7. The emerging frequency histograms (with mean score) look like this:

[Figure: frequency histograms of SRTE scores, with means]

Having studied more than my fair share of these sorts of scores while on Faculty Promotion and Tenure committees, I am moderately pleased with these, especially given that my audience is neutral to hostile at the start. It is gratifying that most of the students consider that I know stuff and want to get it across. I am also pleased that 2/3 of the students rated the overall quality of the course as 6 or 7.
But two questions are actually important. ‘Rate the extent to which interest in the subject matter was generated by this course’, and ‘Rate the importance of the knowledge learned in this course’. Scores for both are a bit disappointing, but at least only five of the students reported less than average interest-generation. More worrying is that barely half the students rated the ‘importance of the knowledge learned’ as 6 or 7. 
But I’d like to ask them that question again after ten years. I suspect it’s too early for students to know.
Students had various other things to say too…


Predictably, the most common complaint was about the difficulty of my tests.  
Otherwise, several minority views surprised me in the SRTE comments.   
1. Force us to be more self-disciplined. Some students want more frequent assignments, weekly deadlines, marked written homework, etc. My reaction: the real world is way less structured. Time to get a grip, folks.
2. Abandon the blog and go for class projects and essays. My reaction: Yawn.
3.  Why should I have to spend a semester trying to figure out the professor’s way of thinking?  My reaction: that’s the point of University.
There were also a couple of excellent suggestions:
  • Give extra credit for full attendance.
  • Throw in some pop quizzes that focus on previous material discussed in class.

The final grade

Class average, 87% (B+). Breakdown: A, 19 (including 4 students with >100% and 11 with >98%); A-, 35; B+, 12; B, 13; B-, 5; C+, 7; C, 5; D, 2; Fail, 1.

Overall class averages for the sections: Blog, 57%; Class test, 72%; Attendance, 73%; and Final Exam, 76%. How can those low scores give an overall average of 87%? That just shows that there are lots of ways to get a good grade in this class. Very high performance in one section can offset mediocre performance in another. Some folk were stunningly good in the exams and didn't blog much. Others blogged extraordinarily well but were pretty average on the tests… and some were, well, spectacular across the board.
The average extra credit was 2.8%, largely consisting of the 2.5% I gave for the return rate on the SRTE (in the end, a very impressive 85.9%).  I guess those dozen free-loading students who couldn’t be bothered doing the SRTE won’t be bothered reading this, but they should be grateful. Eleven students got additional extra credit, some for unforgettable blog posts, some for exam questions I used in the final, some for being first to find the (extremely rare) mistakes in my tests.
OK, I really do have a plane to catch. When I get back, I’ll post reflections on the 2011 course and what I am thinking of doing differently next year.
Meantime, to most of you: thanks so much for a fun time. If America's future is in the hands of those of you who got involved in this course, all will be well. Have a good break, and best of luck for 2012.
Don’t stop thinking.

The final exam

Class average: 76%. Twelve students got an A, including 6 with 100%. No one got everything right; three students got 26/28 [recall I ask 28 questions and mark out of 25]. A further 13 students got an A-. The rest: B+, 8; B, 15; B-, 3; C+, 7; C, 8; D, 18; Fail, 14 (including 2 no-shows).

Class: In a few hours, I get on a plane to Norway to do a PhD exam so I do not have time to post fully on the exam.  I’ll do it over the Christmas Break. But for now, the questions on the media report were generally well handled (Andrew pleased). The first half of the test was brutal on those absent from class over the last few weeks (Andrew also pleased).  Incredibly, 43% of the class still think data can be made to fit any hypothesis (Andrew near suicidal). 
More later.

Blogging: the final grade

The final blog grade comes from the best mark from the three blog periods.

For SC200 Class of 2011,  the overall grade distribution for what we professionals call Digital Expression is:

A, 3; A-, 28; B+, 29; B, 15; B-, 7; C+, 9; C, 4; D, 1; Fails, 3.
Going back to earlier blog periods, I also gave extra credit for the post I talked most about in class (twins with a different Dad) and my favorite post of the year: Hilary interviewing a computer.

Blog Period 3

Maybe it's end-of-semester-itis, but I was pretty disappointed with the final blog period. Perhaps I shouldn't expect it to be any other way. A few students were going all out to boost their marks, but since I take the best mark from the three periods, the well-organized stuck with their marks from earlier blog periods. This means the final blog period was dominated by students who left things to the last minute [112 entries posted in the final 24 hours!], a recipe which does not make for good reading.

Still, there was some quality hidden in the deluge (e.g. cold, bipolar, neuro-marketing), not least among those trying to deal with the Sandusky scandal (Mob mentality, you?). I also wish I'd got to know Caroline, who wins the prize for consistently wacky posts (notables: 1, 2).
I gave extra credit for Rachel's perfection and Jordan's valid vitamin critique and her subsequent attempt to figure out what went wrong.
The bottom line:
A, 1; A-, 15; B+, 8; B, 9; B-, 7; C+, 7; C, 4; D, 2; Fails, 9; plus 37 non-participants.

Class Test 4. Sigh.

Sometimes you really wonder.

Results: four students got 100%, another four got an A, and five an A-. There were 1 B+, 7 Bs, 12 C+s, 4 Cs, 18 Ds and 23 Fails, with 27 no-shows. Overall average, 71%. So some exceptional performances, but overall not so hot.
What gives? Clearly there is an end-of-term exhaustion issue, and among the no-shows were a fair number of students who already had high marks for the class tests and so did not need to do this one. Their absence dragged the average down, for sure. And a lot of the bad marks came from students who attend erratically. This test was brutal on those who skip class. And most of those with a bad score on this test have never been to any revision session. In fact, I think I have seen fewer than a quarter of the class in revision sessions. Odd.
Still, this course has many ways to get a good grade, and doesn’t penalize the odd bad test. Even for the tests,  83 of the 100 students have a B or better for overall test score.
Was the test too hard…?
I don't think so. If it was, it would have generated more teachable moments. But the class as a whole came unstuck on only five questions. Two of those involved last week's guest instructor on energy futures. Only half the class were even present, and a bunch of those who were there played on their phones and computers. I am pleased I set questions that rewarded those who were present and listening.
Three other questions show I have to go back over randomized controlled trials (RCTs) and why they can be so powerful when done well (they dispense with reverse causation and third-variable problems).
No, the odd thing was that, those five questions aside, the class overall did well on each question. What that shows is that many students are having trouble doing consistently well across the board. Maybe that comes back to: go to every class; revise tests; ask Andrew about past questions you did not understand.

I did take immense heart from the way the students handled the risk questions.  Serious progress there.
A question on the scientific method was upsetting: 48% of respondents chose an option consistent with “Data can be made to fit any hypothesis”. If that was true, most of us would be dead by now. Worryingly, this question was a slightly re-worded version of a question given in Class Test 1.  Way back then, <10% of the class thought “data could be made to fit any hypothesis”.  It could be that I am unteaching the students. But a student told me that in my tests ‘all of the above’ is always the right answer, so for this test I liberally sprinkled ‘all of the above’ about the place…
There was, of course, Q#22. I screwed up the grammar, which confused things, so I dropped the question from the test mark (roughly a third of the students chose each of the three options; never a good sign). Oddly, only four students raised the mistake with me, even though there is extra credit in being first. Brittany Musaffi won that prize at 5:47 pm, three quarters of the way through the exam period.