That's it.

Well, time to sign off. I’m not sure what the future holds. I am on sabbatical 2017-2018, so if SC200 runs next Fall, it won’t be me running it. Whether that hiatus becomes permanent depends on what’s happening when I get back from sabbatical and what the College wants to do with SC200. There are opportunities for new types of Gen Ed course now. I could do something different. I can imagine interesting things to be done now that trans-domain courses are possible; I quite fancy joint teaching a course with someone in history or philosophy or literature or economics. I could also focus SC200 a bit more (perhaps entirely on medicine and health care?) or keep the same general themes and just find different material to try to stay fresh. All things to ponder on sabbatical. I can’t help thinking that the themes, objectives and subtext of SC200 will become even more important in the coming months and years. Science is going to remain the best system for knowledge generation and problem solving that humanity has, but it is also a hugely civilizing process. Looking forward from this tumultuous year, it looks like we will need that side of things more and more.

But for now, a big thanks to people who made the course happen this year. Thirteen others contributed. Thanks to:

  • The class TAs, Brian, Eric and Sarah. Huge efforts, well above and beyond. The students and I really appreciate all you did to support their learning and blogging.
  • The guest speakers who volunteered their time and their very different perspectives: Eberly College of Science Dean Doug Cavener, Mike Mann (Meteorology) and Jason Wright (Astronomy). Thanks for making us all think, me and the students alike.
  • Monica, who kept me sane while dealing with endless emails, getting handouts ready in the nick of time, and handling the pieces of paper for attendance grades and extra credit.
  • The graders, five hard-working grad students who got things done under immense time pressure and who, most impressively and despite the best efforts of sites@psu and Campus Press, found most of the students’ work. Thanks too for taking the time to give the students such detailed feedback.
  • Chris Stubbs, TLT tech guru who got the site up and running and then put up with my frustrations at what he no longer had control over.

Jason Wright in class, Nov 8, election day.

And to the Class of 2016: Thanks for the challenges. Keep thinking. Have a nice life.

Self-reflection

OK Andrew, what worked and what didn’t work this year?

Pluses:

  • Classroom discipline MUCH better. There were still some complaints about distracting whispering, but nothing like the last few years and none of the shameful s*** that’s happened before. None of it got me worked up, unlike previous years. I think the difference this year was that I talked more about it at the start of semester and pitched it to the students as a deal (the deal works like this: you do x and not y, and I will do a and not b). I also took a very light-touch approach in class when it happened, politely asking people not to. That seemed to go better than trying to shame them into silence (and way better than getting pissed). I think walking up and down the aisles really helps too.
  • Plagiarism. Just one case this year. Broadly speaking, the countermeasures worked. Unlike the College Academic Integrity Committee, which continues to underperform in this important and delicate arena.
  • I did a lot more work with the students on soft skills. That felt good. I hope it set at least some of them up for a better College experience. No way to know of course, so like a good physician, I’ll roll with confirmation bias and assume I did good.
  • Again, no problems with the laptop ban, and progress on the phone issue.
  • Explaining to students more about why we are doing things was good.
  • Attendance algorithm worked much the same as last year. I think I’ll stick with that.

Room for improvement:

  • More active learning. Things like this (interestingly, that video is pitched as being about clickers, but in fact they are pretty irrelevant to that sort of teaching practice).
  • More debates. For example, is it worth testing homeopathic vaccines?
  • More challenge questions to stoke curiosity. Mini research projects?
  • Build a capstone class around John Oliver’s great show?
  • The students continue to find their blog grades disappointing but, by and large, don’t produce excellent work. Need to think more about why that is. Show more examples of best practice? Or am I fighting the Facebook/Twitter drivel? Perhaps students don’t read much good writing these days?
  • Talk more about the cognitive failings of humans. Use the analogy with optical illusions more: our eyes/brains can deceive us, and we have cognitive defects too. Monty Hall is good for this. Bring it out even more, e.g. with the Linda problem. Emphasize that scientists can use science to both study and (somewhat) overcome these problems, even though everyone has them.
  • Talk more about note-taking earlier on. Do some examples. It worked really well this year when I reviewed (all across the blackboard) what Mike Mann and Doug Cavener had said, partly because I (hopefully) reinforced and clarified their messages and partly because the students could see what I meant about reviewing after class what had gone on.
  • The blog software. Groan.
  • Is there some way to motivate students to solve the procrastination problem? Do a class on procrastination science? That might be interesting to think about, especially since what I read does not make too much sense. It’s like a huge human blinder.

The big unknowns remain big unknowns. (1) Am I pushing the students hard enough or too hard? (2) What impact are any of my efforts having? I can imagine ways to investigate #2, though not easily. I still can’t even imagine how to investigate #1. That’s not just my problem; it’s a College universal. If I didn’t find my science so interesting, I might turn my scholarship activities to ruminating on that. I’ve made no progress on it since I first started pondering it in 2011. I haven’t seen or heard of anyone else even wondering about it, even though it goes to the heart of this huge, expensive industry we call Higher Education. Most industries reflect vigorously on what they are doing (or they are forced to by outsiders); why don’t we?

Extra credit

My use of extra credit has grown over the years, despite my concerns about grade inflation. I use it to reward outstanding work, to nudge student behavior, and to buffer against complaints and begging.

This year I really went crazy with it, in the end offering nine different routes to extra credit. I capped it at 10% so that extra credit does not dominate the grade, but within that constraint, a student could go for it as they wished. Much to my amazement, almost no students made anything close to full use of it. It is much easier to get 10% through extra credit than it is to get an extra 10% by doing better on the tests or blog.

I offered extra credit for:

  1. Particularly lucid, stimulating, artistic or lateral blog posts (max 5%/post). This is to encourage/reward outstanding work. I thought few students did enough to deserve this, but it’s very good to have the option to reward those who go above and beyond.
  2. Suggesting exam questions. You have to really know your stuff to write exam questions. Just 5 students offered any up, even though I’d give them 2.5% for every question I used.
  3. Finding a mistake in a class test or an exam that causes me to regrade (max 5%/mistake). A couple of students suggested mistakes, but they were typos and so did not need regrading. Nonetheless, I think this extra credit is good because it emphasizes the possibility that Professors can be wrong, it gets hypervigilance going, and students who argue with me learn, even if they are wrong (especially if they are wrong?).
  4. Partaking properly in the first blog period (1%). This is an anti-procrastination (get-off-your-butt) carrot which I was trying for the first time. It did not work at all: only about 100 students participated properly, fewer than last year when no extra credit was available.
  5. Blogging ahead of deadline (2%/deadline). This was a time-management carrot. It too did not work.
  6. Surrendering phones in class (1%/time). This did work.
  7. Writing an extra blog (2%). I asked the students to write about something they learned in class and how it had or might change their life. Just nine students took advantage of this. Maybe that means the course had no impact on the 310 other students. But I thought all of the nine were really interesting, especially this and this and this and this.
  8. Opt in to names in the hat (1%). A little under a third of the class did this, which says something about students, but nonetheless, I liked this solution to the problem of cold-calling students in large classrooms.
  9. A bribe to get the SRTE return rate up (1%). I wasn’t going to use this bribe this year, but with just a few days to go, only 30% of the students had given feedback through the Student Rating of Teaching Effectiveness system. Since that 30% was for sure not going to be a random sample [as is clear from what appears on Rate My Professor], I offered the 1% extra credit to everyone in the class if the class return rate got above 80%. It hit 82.5%… I’ve agonized before about this shameless bribe, but I think we have to do it if we are going to take anything meaningful from the SRTEs.

On average, the class got 4.7% extra credit. That’s pretty amazing, given that 4% would happen more or less automatically (3 x 1% for the phone-ins + 1% for the SRTE bribe). Just 13 students got the maximum extra credit and only 34 got 8% or more. I am sure more students simply asked me for more grade than made real use of the extra credit.
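For concreteness, the cap arithmetic is trivial in code. A minimal sketch; the example awards are the three phone hand-ins, the SRTE bribe and one outstanding post from the list above:

```python
def capped_extra_credit(awards, cap=10.0):
    """Sum a student's extra-credit awards (each in percentage points)
    and apply the cap so extra credit cannot dominate the final grade."""
    return min(sum(awards), cap)

# Three phone hand-ins (1% each) + SRTE bribe (1%) + one outstanding post (5%):
print(capped_extra_credit([1, 1, 1, 1, 5]))  # 9
```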

Bottom line? There is an administrative cost to all this extra credit, and I was able to keep on top of it only because I have Monica supporting the course. Without that, I am not sure I would keep anything except #1-3 and #9. But otherwise, I think it worth persisting, for the reasons I give in the numbered list above. For professorial peace of mind, buffers against student complaints and begging are not to be underestimated. More positively, carrots are at least in principle a good way to nudge student behavior, even if there is not much sign they actually worked on my students. Perhaps the time-management/anti-procrastination carrots need to be bigger (#4, #5). Just how much do I need to bribe students to do what’s good for them?

Software de-grades

My biggest headaches this semester came from the software platform we used for the class blog. ‘Up’-grades happened this year which, incredibly, degraded functionality. As currently configured, sites@psu is not fit for large-class teaching. The software now creates unnecessary work for instructors and frustrations for students, while simultaneously creating novel ways for students to cheat, with no way to catch them.

  1. Most irritating was the degraded ability to find students’ work. We used to have an alphabetically-arranged Contributions page visible to the world. It enabled students (and us) to see at a glance, with hot links, what work they had done within a fixed time window, and to find their classmates’ work. That made it easy for them and, most important, it made it very easy for us to grade. The 2016 ‘improvements’ hid all that. Now, no one on the outside can find anything, and the students themselves have to log in to the system and run a report on themselves. And the graders? We ran endless reports. Click click click. Tick. Tick. Tick.
  2. During the grading of the third blog period, someone changed the method for running reports on students. You are in the middle of grading hundreds of blogs and someone replaces one lousy search algorithm with another lousy search algorithm, all for no obvious gain?
  3. The search widget does not search by author. Wtf?
  4. We had to rely on students (!) to go into their profiles and correct their names. As administrators, we couldn’t do that. Students appeared by default under their user ID (afr3). They could then call themselves Drew, Andy, Andrew, or leave the afr3. We had to ask them to call themselves what our class lists call them. Otherwise we had to play detective to figure it out. My favorite: Alexander called herself Xander. When you are searching a drop-down list of 300 students arranged alphabetically by first name…
  5. Yup, that’s right. For much of the semester, you could not run reports on a student’s surname or user ID. There was just a drop-down menu in alphabetical order of first (!) name. At one point we had a list of students arranged by first name, followed by the remaining students arranged by ID number. I did so much scrolling up and down that list.
  6. The default time zone for the blog? Central Russia (no kidding). We figured that out after the first deadline cut off a lot of students’ last-minute work.
  7. Some moron set up a clone site. This might have been in response to my complaints about losing the Contributions page. I like that they tried. I did not like that they failed. But worse, they made it so the students could post to the clone site. You can see it here (check out the URL!). Once we figured out that there was a live mirror site, I disabled student access to it. But too late. You can still see on the clone site the students who posted to it. That’s the work we did not grade until student complaints unearthed it.
  8. The ability of the grading team to get into the site and find students completely stopped for many hours during a grading period (10/22/16). We had a team of five graders trying to get it all done in less than a week, and we lost the better part of a day, without explanation or apology.
  9. My instructor blog vanished completely for six hours (10/17/16). Again, no explanation or apology.
  10. Despite my endless exhortations, many students post at the last minute. Some of this last-minute work took more than 12 hours to become findable, because the blog does not post straight away when under load. We know this because some work appeared after the graders had graded a student… Oh, the complaints (from students and graders).
  11. There is no way to tell if the site is about to exceed its storage limit. Right now, my dashboard tells me that with something in the order of 2,000 posts this year, similar numbers for the classes of 2012, 2013, 2014 and 2015, as well as this Reflections blog, I have 0.00% of 2.93GB used.
  12. There is no log. That’s the thing that would tell you who had done what on the site when. That’s what you need to check whether students are cheating or misleading you. And that matters because:
  13. Unbelievably, the site lets the student determine the publication date of a post. They can do work after a deadline and make it look like they did it before the deadline. I discovered that early in semester and could not believe it. If you make it possible for students to cheat, some will. Maybe it is good there is no log. I cannot tell how often we were taken for a ride.

Juggling 300+ students is hard work, especially on top of a busy research and administrative life. Time is everything. Brain space is everything. I struggle to put into words my feelings about the hours and energy I wasted dealing with software-induced student complaints and concerns. I dare say the College is also unimpressed with the cost of the extra hours the graders had to spend tracking down students’ work. Writing this post has taken even more time I will never get back. I hope it leads to constructive action on someone’s part. Whose, I have no idea. These days, you never get a person to deal with.

In 2010, tech guru Chris and others persuaded me that we could make blogging work for a large class. And indeed, Chris made it work, year after year. For the first five years, the blog software never got in the way of teaching and was never a pointless time-suck. Those were the good old days. In those good old days, Chris had control and was able to build the site himself to aid my pedagogy and grading efficiency. No more: sites@psu got outsourced to folk who don’t believe in local control. Last year, the change of platform was all mildly irritating. This year, I’d have given anything for the old functionality.

Indeed, if this year’s performance had happened in year 1, I would have given up blogging and returned to conventional term papers. And I will give up unless we get back the functionality we once had. I continue to think blogging is an exceptionally good teaching tool. But this year, the pedagogical gains didn’t justify the hassle. Not even close.

The only good thing I can say about this year was the speed with which the folks at Texas-based Campus Press (to whom things have been outsourced) got back to me. I learned that if you put URGENT or EMERGENCY in the email, you got a rapid response. As to the responses themselves? Well, here’s one: “The reports did change and unfortunately at this point I don’t have any way to change them back to the previous version.”

Of course, a real software ‘up’grade would involve a gain of function. Two new things I would like: (1) A text editor for comments. If you are an administrator editing an existing comment, a text editor appears. But not if you are a student. They have to use HTML, if you can believe it. That has caused so many utterly pointless headaches for students and TAs over the years. Fixing that would not be an innovation. It would just be making existing tools accessible to actual users. (2) Automatic plagiarism detection. This would be an innovation. It would be great to have something we could turn on after deadlines that compares the material on the blog with the rest of the internet (and not least SC200 blogs from previous years). The process doesn’t need to be instant (it could chug away for a week). If that’s too computationally intense, something simpler could still be very useful. For example, just taking well-formed sentences from every blog post (or even a random sample) and doing a Google search for each text string would be good. If that’s too much to ask, then how about something that checks the current semester’s posts against SC200 blogs from previous years? Plagiarism is a big issue for teaching via a blog. It would be great to have a blog that worked with the instructor to make things better.
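To make the simplest version concrete, here is a minimal sketch of that last idea: checking sampled sentences from the current semester’s posts against an archive of previous years’ posts. The directory names and the plain-text-file format are assumptions for illustration only; the real posts live inside sites@psu and would first need exporting.

```python
import random
import re
from pathlib import Path

def sentences(text, min_words=8):
    """Split text into sentences, keeping only the longer, well-formed ones."""
    candidates = re.split(r'(?<=[.!?])\s+', text)
    return [s.strip() for s in candidates if len(s.split()) >= min_words]

def normalize(s):
    """Lowercase and squash punctuation/whitespace runs to single spaces,
    so trivial reformatting does not hide a match."""
    return re.sub(r'[^a-z0-9]+', ' ', s.lower()).strip()

def check_posts(current_dir, archive_dir, sample_size=5):
    """Flag current posts whose sampled sentences appear verbatim
    (after normalization) somewhere in the archived posts."""
    archive = ' '.join(normalize(p.read_text(encoding='utf-8'))
                       for p in Path(archive_dir).glob('*.txt'))
    flagged = []
    for post in sorted(Path(current_dir).glob('*.txt')):
        sents = sentences(post.read_text(encoding='utf-8'))
        sample = random.sample(sents, min(sample_size, len(sents)))
        hits = [s for s in sample if normalize(s) in archive]
        if hits:
            flagged.append((post.name, hits))
    return flagged

if __name__ == '__main__':
    # Hypothetical folder names: one folder per semester's exported posts.
    for name, hits in check_posts('posts_2016', 'posts_2012_to_2015'):
        print(f'{name}: {len(hits)} sampled sentence(s) found in the archive')
```

Something like this could chug away overnight after each deadline; swapping the archive check for a web-search API call would give the fuller version.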

Mind you, I’d settle for one that didn’t make things harder.

Phones cont’d.

There is evidence that phones are toxic for learning (e.g. 1, 2, 3). My students agree (2015, 2016). So what to do? I tried several things this semester, all for extra credit (1% each time).

(1) Collecting phones. That worked, but it’s a scene and a half. It could be improved on by collecting the phones when the students are in their seats. That would cut down on the time it takes to collect them. Returning 300+ phones would still be, well, a scene.

(2) Honesty. This was Julia’s idea: get the students to swop phones and sign a paper form to certify that they had their neighbor’s phone for the entire class. This worked very well the first time I tried it. The second time, when the students knew how the system worked, we had at least two cases that were most easily interpreted as outright cheating. But for that, I would have tried it a third time. Several students were mighty pissed that a couple of cheaters meant the whole class missed an opportunity for extra credit. Me too.

(3) Flipd. This is an app that students download. The download is free, but there is a one-off charge of $3 to actually use it for classroom credit. The instructor sets up the class times, the app notifies the students when to flip their phone off, and the system lets the instructor see who has not used their phone during class. I like it because the phone still works, so people who need to be contactable (those with offspring in childcare, for instance) don’t get excluded. But I trialed this app with the TAs; the four of us did not find it reliable enough to roll out to 350 students. It’s simple enough to use, but if you push the wrong button at the wrong time, the wheels fall off. I could just imagine endless emails of the ‘but I was there’ sort. Many professors are making Flipd work, and I am sure that as the software matures, it will be great. The $3 is a bit of a downer, though maybe extra credit for the price of a cup of coffee is something students would go for. Instructors can negotiate a class rate (I got it down to $2/student), but then the instructor has to pay ($2 x 350 students = $$$).

I discussed the various issues with Cristian Villamarin, the Canada-based guy who wrote the app and runs the company. He sent me a flier and a presentation on the system (I enjoyed that one of his slides came direct from the PSU discussions I blogged about). He’s been pretty responsive since we talked, putting me in touch with another PSU professor who has been using it successfully. Flipd is probably going to be the solution once the reliability kinks are sorted (Cristian says they are). I also like Cristian’s slogan summing up the aim of all this: Life is Like a Camera: focus on what’s important and you’ll capture it perfectly.

(4) Pocket Points. This app is 100% free, and students gain points they can use for discounts on food around town, so they are motivated to use it. But the problem is that it gives a list of ALL the PSU students using it on campus at any time (which can be many hundreds): you can’t get a list of just your own students. Moreover, it shows the list in real time, not who was there for the full class period. So while it is a great way for students to impose discipline on themselves, without a major overhaul it is not going to work as a way to use extra credit to incentivize self-discipline.


I did not try a solution Bill Goffe pointed me to. Yondr is a hardware solution which even got a mention in the NY Times. Yondr told Bill they have a subscription model: $1.50/pouch/month plus 4 undocking stations. Doug Paris at Yondr is the contact. This could be an interesting way to go, but the hardware aspect means one more thing for students to forget/moan about.


In all of this, there is a dilemma for me: phones can be good for active learning. I use PollEverywhere to poll students and to run a comment wall so they can text questions if they are too nervous to put their hand up. I think that is a good option in large classes (not everyone likes to speak in front of 300+) and I hate clickers (and so do students). So how to balance those upsides of the ubiquitous phones with their toxic downsides?

Here’s a possible answer. After each of our three phone hand-ins/swops, I noticed fewer phones out during subsequent classes. Could it be that encouraging students to disengage from their phones just a few times in semester is enough to show them how much better off they are when they focus on the classroom…? Could just a few sessions be enough to show them that it is possible to leave texting and social media for a whole hour without the world ending? If so, Julia’s honesty system on a few well-chosen occasions might be enough. Is that too much to hope for?

I feel like there ought to be education specialists or teaching, learning and technology specialists trying to sort all this out. Surely none of this is rocket science.

Study Smarter Not Harder

I’ve learned that most students have learned little about how to learn. This leads to the annual tragedy of students asking for a higher grade because they worked hard. That argument doesn’t cut it in the real world. It doesn’t even work in my world (please give me a grant or publish my paper — I worked ever so hard).

No one cares how hard you work. They care about what you did. Outputs count — inputs don’t.

As far as learning goes, the best bet is to learn how to learn efficiently. Early in semester, I talked about this in class a fair bit, and posted various things to the Angel site to encourage students to learn better. Angel is about to vanish, so for posterity, here they are.

  1. Study Smarter Not Harder handout from John and Jackie’s class. Their 2-hour class is voluntary. You’d think their small classroom would be packed. It’s not. I guess students are too busy studying inefficiently.*
  2. John Water’s MOST excellent guide on how to study for exams. Acting on the information in this handout could transform the lives (or at least the transcripts) of so many students.
  3. Study Tips
  4. Top 5 Ways to Accelerate Learning
  5. Make It Stick. An awesome book. Should be compulsory reading and the focus of all Freshman Seminars.
  6. Good generic wisdom from SC200 2015.

The Class of 2016 offered up remarkably similar advice:

  • Take more and better notes
  • Ask the professor and TAs more questions
  • Pay more attention in class
    • Stay off the phone
    • Sit closer to the front
    • Sit away from friends
  • Don’t skip class
  • Go to exam review sessions
  • Review notes regularly after class

*My son did an earlier incarnation of John and Jackie’s class. He said it was the most useful two hours he’d spent at PSU.

What to make of this?

[Figure: grade distributions for the four class tests and the final exam, 2016]

The class tests and final exam are all identical in format. In an ideal world, we should see grades improve steadily through the semester. We did last year. That would look like a steady increase left to right for the A’s and a steady decrease left to right for the lower grades. We don’t really see that this year. Well, certainly not for the A’s. Maybe the B’s and C’s are doing sort of the right thing. A simpler interpretation is that not much happened over the four class tests and then there was a huge jump in performance for the final exam.

When I first noticed how much better the class did in the final exam than in the class tests, I was pleased (they had learned something! there’ll be fewer complaints! etc etc). Then it started to gnaw at me. The final exam is open, on-line, for five whole days (120 hours) and, like the class tests, the students get a second go at the exam, having learned what their score was first time around (but not which questions they got wrong). Could there be widespread cheating? Now, this is not something nice to think about, far less discover (just think of the time-suck it would be to run large chunks of the class through academic integrity proceedings). But I decided I should take a look anyway.

The set-up is vulnerable to a class that gets really organized and uses its first exam attempts to work out the correct answers. That would take a serious amount of class-wide coordination to pull off. But let’s imagine what it would look like if it happened. Most obviously, test performance should improve over the five days the test is open. There is no sign of that. In fact, if anything, overall performance gets worse through time (I believe that’s because procrastinators do worse on average). A simple way to check is to fit a line to score against submission time; see the sketch below the figure.

[Figure: exam scores plotted against submission time over the five-day window]

Each point is a test score; students get two goes, so there are about 600 scores here. I ask 28 questions and grade out of 25; that’s the plotted score. More than 100% is therefore possible (the two times that happened, it was me: once to test the test and once to test what Angel does when you get something wrong).
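Here is the sort of check I mean, as a minimal sketch. It assumes attempt scores and timestamps can be exported to a CSV with columns submitted_at and score_pct (hypothetical names; a real Angel export would differ):

```python
import csv
from datetime import datetime

def slope_score_vs_time(csv_path):
    """Least-squares slope of test score (%) against hours since the test
    opened. A clearly positive slope would suggest answers leak as the
    window stays open; a flat or negative slope is reassuring."""
    times, scores = [], []
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            times.append(datetime.fromisoformat(row['submitted_at']))
            scores.append(float(row['score_pct']))
    t0 = min(times)
    hours = [(t - t0).total_seconds() / 3600 for t in times]
    n = len(hours)
    mx, my = sum(hours) / n, sum(scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(hours, scores))
    var = sum((x - mx) ** 2 for x in hours)
    return cov / var  # percentage points per hour

print(slope_score_vs_time('final_exam_attempts.csv'))
```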

This picture doesn’t rule out some types of cheating (e.g. the highly illegal business of getting someone else to do the test for you), but I think it does rule out most plausible scenarios of large-scale, class-wide fraud. So I guess the simplest explanation for the performance jump in the final exam is some combination of (a) me setting an easier exam, and (b) students having more time and motivation to do well.

The way to be 100% sure about this would be to run the exam proctored in the exam center. What a performance that would be for a class on this scale; plus, some students would miss it and need a re-take. It would all be so tedious.

2016: the bottom line

I calculated the final grades almost a week ago and then let them sit on Angel until now. This gives the students a chance to complain. That generates a bit of e-traffic but very effectively crowd-sources the search for errors in my grade book. With 300+ pairs of eagle eyes on it, I am now confident there weren’t any. So the grades are officially posted today. They look like this:

[Figure: final grade distribution, 2016]

The class average is 87.6% (B+), or 89.6% (B+) for those who passed. We started with 358 students; we ended with 317. Among the finishers, 50% got some type of A, 66% got a B+ or better and 80% got a B or better. With extra credit, 11 students got >100%. Altogether rather similar to last year.

I say it every year, so I guess I’ll say it again: what to make of this grade distribution? Is it about right, too high, or too low? We had a Biology faculty meeting a while back, which I sadly missed (not often I say that), where the proportion of A’s was discussed. For 2013 biology classes with more than 20 students, the numbers looked like this:

% A and A-    100-200 level Bio courses    400-level Bio courses
Mean          24%                          42%
Median        27%                          36%
Range         13-39%                       13-99%

Everybody except the person awarding 100% A’s thought 100% was too generous. The minutes from the meeting helpfully say: “Faculty Senate policy allows faculty to grade according to their best judgement. Although programs can provide guidelines, ultimately grades are at the discretion of the individual faculty member. Several faculty shared their experience of figuring out their grading criteria with little to no guidance. It was widely agreed that some departmental guidelines for grading would be helpful.” No such guidance has been forthcoming because I don’t think any such guidance is possible. It’s a fundamentally challenging problem. The problem is even more difficult for Gen Ed courses where there are no professional discipline-specific views on relevant standards (and how can there be?).

Is 24% about right? My grade distribution, with its 50% of A’s, is clearly out of line with the 100-200 level Bio courses. Does that matter? People get excessively steamed up about grade inflation, but if we worry about it from data on the proportion of A’s, it implies that the only thing that matters is relative success. And if that’s what is important, our job is not what I think it is; instead, it is to identify and anoint the top x% of students. Which is CRAZY.

Actually, thinking about this too hard might drive me crazy. Previous ruminations are here and here. I am making no mental progress on this problem at all. Worse, I don’t see anyone else even engaged with it. In the shower this morning, I had a thought: isn’t the search for an ideal grade distribution fundamentally silly? What I should care about is the impact I am making on the way students think about the world. The grades might say something about that. But probably not much. So, Andrew, think about what’s important, not what is easily measured. Ruminate on that.

The line in the sand

It’s that time of year when I get inundated with emails from students asking for a better grade. These requests fall into two categories.

  1. They’d just like some more. My 2015 response to that is here.
  2. They’d like to be rounded up. My 2015 explanation of my rounding algorithm is here.

To both those 2015 posts, I would add that this year there was up to 10% extra credit available. Students who want a higher grade might think about why they did not make full use of that.

Moreover, all students got at least 1% extra credit (the bribe to get the class SRTE return rate above 80%, which it duly did), and many students got more as carrots for time management, phone hand-ins, names in the hat…. That means any student just below a grade boundary got that close only because of extra credit, not academic performance. If I took away the non-academic extra credit, they would not be close. Grades are earned, not requested.


Final Exam

I am always pleasantly surprised by how well students do on the final exam. This time, the average was 88% (B+), or 89% (B+) when the fails are excluded. That’s a full 15% better than Class Test 4. There were 7 fails and 5 no-shows. No student got everything right, but on my ask-28-questions-grade-out-of-25 algorithm, 64 students got 100%, and six students got 26/28. There were 119 A’s, 75 A-, 38 B+, 25 B, 16 B-, 18 C+, 4 C, and 9 D’s.
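Mechanically, that scoring algorithm is just this (a trivial sketch):

```python
def exam_percent(n_correct, n_questions=28, graded_out_of=25):
    """Ask 28 questions, grade out of 25: effectively three free misses,
    with scores above 100% possible for students who get 26+ right."""
    if not 0 <= n_correct <= n_questions:
        raise ValueError('impossible number of correct answers')
    return 100 * n_correct / graded_out_of

# exam_percent(25) -> 100.0; exam_percent(26) -> 104.0; exam_percent(28) -> 112.0
```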

Once I have dealt with the final grades, and the e-correspondence they generate (“please sir, can I have some more grade”), I might come back and muse on why the final exam performance is so much better than on the class test just a few days earlier (a 15% jump in average performance, from a C to a B+, in just five days?)