Today on the POD Listserv, Francine Glazer, VP at the New York Institute of Technology and longtime faculty developer, posted a link to an article written by Jim Fairweather of Michigan State for the National Research Council. The article examines promising practices in STEM and asks whether providing evidence that alternative teaching strategies improve students’ learning matters for the adoption of effective strategies by STEM educators (http://www7.nationalacademies.org/bose/Fairweather_CommissionedPaper.pdf).
See pp. 7-9 for a discussion of strategies used to try to improve student learning in STEM. Fran points us to the strategy Fairweather describes as “Improving student learning productivity” (p. 8). Basically, this strategy involves getting the faculty responsible for “poor instructional outcomes in STEM” (p. 9) to engage in “any form of pedagogy that increases student engagement.” Focusing on the large number of faculty who rely solely on traditional lecturing, and getting them to include even a small amount of active learning, could have a bigger impact than getting the faculty who already use active learning strategies to use them more often or more effectively.
Fairweather suggests that this population is more motivated by rewards than by evidence. I’d have to agree, after successfully using a “baby steps” approach for many years: focus on helping all faculty consider changing one thing, in one class, on one day, to involve students in some form of active learning. If we can help faculty do that one thing well, they see and experience the improvement in students’ learning, which is a reward of sorts.
While I don’t doubt that Fairweather is correct that evidence won’t convert the true skeptic, being able to provide evidence from STEM teaching to STEM faculty does allow us to make the case credibly, which allows us to move on. If we can’t provide evidence when it’s requested, that failure can serve as justification to dismiss us and our suggestions. That is, it is faculty developers’ mastery of and access to the evidence that matters, not merely that evidence exists (although I realize we can’t have one without the other).
However, the real reason this idea caught my eye is that I wonder whether any teaching center has ever explicitly approached its programming this way: focusing on reducing the use of the worst techniques among the largest group of faculty using them. I wonder if one could engage STEM faculty by appealing to their empirical, experimental sides. Could one convince a large number of all-lecture-all-the-time faculty to make an across-the-board shift? Could we offer a biggest-bang-for-your-buck or “make one small change, reap big rewards” series?