At first glance, the idea of learning progressions makes a lot of sense to me. By taking into consideration the more unifying concepts of science, and developing cognitive progressions for students across defined time frames (3-year chunks), we can begin to address the depth of students' understanding rather than the "mile wide and inch deep" focus of today's schools. I think this type of instruction could allow for more experiential learning opportunities (perhaps this is where more situated learning could be incorporated into the classroom), or at least more room to connect core concepts across multiple domains. These progressions are a work in progress for sure, and don't come without some room for improvement (noted by Duncan). But given the abundance of complaints about our current model of instruction and assessment, are LPs any worse? While certainly not a simple fix, I think they at least present a step in a new direction for instruction.
I thought the idea of giving a multiple-choice exam to assess where a student falls on the progression could be an effective grouping tool, but not an end-all, be-all for assessing someone's knowledge. From the classes I have taken, I feel I have been programmed to think of multiple assessment types as the way to go, with multiple choice being perhaps the weakest method. But ordering the multiple-choice responses as a means to diagnose a student's position along a learning progression seems like a pretty effective and relatively easy diagnostic tool (Steedle and Shavelson paper). While these are not foolproof (students falling into different progressions with their responses, perhaps an indication of knowledge in fragments like diSessa wrote about), they can provide a good first approximation of the content knowledge students possess.
In the Wilson article, using construct maps was presented as the first building block for developing appropriate learning progressions. These construct map levels seemed very similar to state anchors of standards, as noted by Wilson. In this case, if the proper construct map is not used, it seems the rest of the LP process is shot. Having a clear understanding of the key concepts and the levels at which they typically unfold seems to be the driving force. How we arrive at these key concepts is the tough part that needs to be agreed upon. The idea of developing assessments from construct maps, and then using assessment results to refine the learning progressions, is rational, but could require a few iterations before we narrow down the correct levels of sophistication. I will consent to this testing process if we can get rid of the PSSAs!
Finally, I read the Songer article on LPs and their development goals. Trying to develop an assessment that measures both content knowledge and inquiry reasoning echoed the purpose that Duschl described in developing LPs. Not surprisingly, using embedded assessment activities throughout the progression of a unit allowed for greater expansion of student reasoning and more complexity in answers compared to traditional multiple-choice assessments. This information supports the belief that most people probably hold regarding multiple-choice summative exams. However, this approach will require a lot more effort and thought from teachers as well as students, and instruction methods need to mirror this type of assessment; otherwise, I don't see the value in giving these types of exams and expecting "better" results and thinking from your students.
Tags: Team MACK
Your blog made me wonder if the "mile wide and an inch deep" is such a bad model. Do we want our students to be able to answer a few of the "$250,000 questions" or all of the $32,000 questions? (Who Wants to Be a Millionaire? reference.) I think if we as a society truly want students to be experts in math/science/reading, we just have to devote more time to it… of course, that means we have to eliminate classes like band, shop class, and home economics. So what is our goal with these kids: experts in a few topics, or proficient in many topics?
Christian, I like the point that you make about multiple-choice assessments. I, too, believe that this form of assessment has its place within a classroom. However, as you have mentioned, I don't think that it should be the only method for assessing a student's knowledge. Using the information from a multiple-choice test to determine where a student lies along the spectrum of understanding can be a very effective tool for a teacher. But I don't believe that a national multiple-choice exam can be used to measure LPs. I believe that teachers need to make exams that are reflective of their own teaching, because there are too many variables that come into play when students learn. I strongly believe that assessments need to mirror instruction, as opposed to the current schooling system, in which instruction is often built to fit national assessments. LPs may eventually prove to be a useful tool in instruction, but based on the readings done this week, I don't think that researchers have yet developed a way to utilize them effectively to improve student learning and understanding.