Monthly Archives: November 2011

Certainty

We had science writer Faye Flam from the Philadelphia Inquirer in class today. She does investigative science journalism and writes a column on evolution and an extremely stimulating blog. Faye said explicitly many things that I feel the need to dance around. Fabulous.
Since we had a professional blogger in the room, I asked the class what they’ve found hardest about the blogging they’ve had to do for this course. Rachel said she hated the uncertainty in things: when she is researching a blog post, she just wants certainty. Faye laughed. Tough.
I think no interesting problem in science has certainty. It’s a failure of K-12 education that students expect otherwise. If we had certainty about things, there wouldn’t be scientific problems. 


But this desire for certainty reminded me of a post on Faye’s blog.  A reader argued that certainty is precisely what is good about creation stories, and more generally about religion.  You can really get certainty.  
My late mother-in-law always said she felt sorry for me. She got the answer to everything on Sunday. I was stuck with trying to make sense of it all myself.

Class Test 3

I worried Class Test 3 was going to be too easy. I think I got it spot-on. 


My aim with tests is to force students to think hard, generate teachable moments, tell the students and me who is not understanding what (i.e. kick them and me into action) and then, least important, return a grade. This test hit the important buttons really well. I guess there will be a bunch of pissy students in class this afternoon because students think the grade is most important. But I am at peace. I and many of the students have some work to do, but we're in the right place now. This course has an algorithm that really rewards improvement, and there is lots to play for.
No one got everything right, but under my marking algorithm, five students got 100%. The average score was 80% among the 90 students who did the test. The breakdown was A, 9; A-, 4; B+, 17; B, 9; B-, 11; C+, 14; C, 8; D, 14; Fail, 4; No-show, 10.
That’s not all that different from Class Test 2, but this test pushed very hard on risk, as well as the main theme of previous tests, the assessment of scientific inferences. Like everyone, students have serious trouble evaluating risk. One more time around the aerodrome and I think the majority of the students will be fine.  
Most important to me, the students who improved dramatically from last test were among the minority of students who took the time to come to a revision tutorial. One student blew me away, going from 24% in the first test, to 80% in the second, and 100% this time… Hopefully more students will come to the revision tutorials I’ll offer over the next few weeks.  It is amazing to me that students think passive learning works. Apparently it does elsewhere on campus. Weird.
I did have a problem with one question…  

It was about Twitter science, based on a colleague’s recent paper. I asked what the main challenges for that sort of science were. I did give my views on that in class, strongly I thought, but given that most of the class did not agree with my answer, and that said colleague does not necessarily agree with the strong view I did put to the class, I decided to give everyone full marks for that question. The fact that my view was not written down on any of the material I posted also weighed on my mind, though I really do think that’s a lousy argument. If it’s not on a PowerPoint bullet, do the students not take it in? We should expect better.