I don’t know what I think about learning progressions, and these readings, to be honest, didn’t clear much up. The idea, as I understand it, involves looking at how students learn a concept over an extended period of time (several months to several years). The analogy to a spiral was frequently brought up in the readings to suggest that ideas are revisited several times, adding greater detail with each successive revolution. As a theory of learning, I think this idea is pretty sound. I am willing to buy the notion that any given scientific topic has aspects that are too complex for a young mind to grasp. It makes sense to start out with a vague, incomplete description of a concept or idea and make it increasingly complex as the student progresses and their understanding develops. Further, it makes sense to study what students can understand and figure out how best to present certain topics; in other words, to figure out how to design and implement a learning progression.
Monthly Archives: November 2010
30 Nov 10
Learning progressions, good idea or another mechanism for standardization?
30 Nov 10
Misconceptions, Coherence, and oh yeah, Learning Progressions
The readings this week around Learning Progressions brought back, for me, some discussions we had earlier in the semester, and I’d like to use this post to try to sort out some of these disconnected thoughts. The specific concepts that came up for me were misconceptions, coherence, and the idea from Vygotsky that we shouldn’t focus only on the outcome, but on the process of getting to the outcome. I’ll talk through these one at a time:
Misconceptions
This one is probably the hardest to tie to a specific reading, but there was a sense of deja vu as I read through the Stevens et al article (developing an LP for the structure of matter) and the Steedle et al article: it seemed like there was a motivation that if we could just map out all the learning progressions and figure out the paths from one place to another (the learning trajectories mentioned in Steedle et al), we’d be done with educational research. In the margins of the conclusion to the Stevens et al article, I wrote: “catalog the full collection of LPs, determine appropriate LTs, build robot teachers, and take over the world!”
That takes the whole thing too far, but this sense of collection reminded me of some of the earlier research we heard about around misconceptions, where the goal was to catalog the full set of misconceptions a student could have around a topic, along with ways to undo those misconceptions, and bingo: you’re done. Learning progressions seem to offer more than misconceptions did as a concept, however, since they offer a specific structure to student concepts and a notion of movement from one set of concepts to another. That seems more useful as a notion of learning than the idea that you either hold one of a ton of possible misconceptions or you have the right answer, which was the sense I had from the misconceptions discussions we had in class.
Coherence
diSessa brought up the whole idea of coherence when looking at different conceptual change theories of learning, and the Steedle et al reading made me think about this as well. Part of my concern around developing hypothetical learning progressions is that they will be biased toward coherent sets of beliefs; for example, the whole set of physics built around the assumption that a constant force is required for an object to move with a constant velocity. From Steedle et al: “…exploratory model results provided no evidence of systematic reasoning beyond that which was identified by the confirmatory model.” I interpret this to say that learners aren’t building an entire physics around a misconception; they can pretty effectively separate that notion from the rest of their ideas about how things move if it gets in the way. This gets to their next point, which is that students don’t always fit within one of the available learning progressions.
Studying the Learning Process
I don’t remember if there was a nifty single word for this, but the last idea this brought up for me was Vygotsky’s comment that it seems pointless to study the way someone reacts after they have already learned a behavior; instead we should be looking at how they are learning the behavior. What I like about Learning Progressions after these readings is that they dive into that place and try to open up discussions about exactly what students are thinking when they’re having trouble with a concept. This was certainly available in the misconception research as well, but here it seems like we’re thinking of these models as cumulatively moving toward the upper anchor of the learning progression, rather than as stumbling blocks on the way to the concept. That way of looking at it seems likely to be more fruitful, since teachers are usually working with students who don’t already know the concept they’re teaching, so ignoring anything outside of mastery seems counterproductive.
30 Nov 10
Learning Progressions
I didn’t know a whole lot about learning progressions before reading these articles, just what I had heard during conversations in some of my classes. From the little that I have read, it seems like LPs try to present content in a unified and progressive manner and to focus on a few core ideas in greater depth. I think this might be a step in the right direction toward making some changes to the educational system, but there are some things that I question.
30 Nov 10
LP
So I’m a little confused by the Steedle et al article. I guess I’m just confused about how they went about getting their data and where the facet classes came from. It also would have been helpful if they had included the test questions they used. Oh well. What I got from their article was that their proposed “novice to expert learning progression” did not predict all of the students’ conceptions about force and motion. Their conclusion that a “novice to expert learning progression” cannot describe every student’s understanding of constant speed probably carries over to other subjects. Even though this type of learning progression might not capture every student’s level of understanding, I think it is still helpful for teachers to have an idea of the kinds of conceptions their students may hold.
I found Wilson’s paper to be quite helpful. It showed different types of construct maps that you can create for a learning progression, types I didn’t know existed. I had always thought construct maps were like those presented by Project 2061, with arrows pointing to each related concept.
Our previous readings would suggest that learning progressions fall into the conceptual change and constructing knowledge categories: you are identifying students’ misconceptions and trying to change them while at the same time building on previous knowledge.
30 Nov 10
LPs
In continuing my research into learning progressions, I find that I want to believe they will make an impact in science education and in education generally. That being said, in reading these articles (and others) and recalling some formal and informal conversations, I think we are seeing a familiar phenomenon in education: rushing to implement a new concept or idea in the hope that it will help our struggling school system. I will save that discussion for later.
I think the editorial article by Duncan (2009) on learning progressions was a great lead-in to the other articles: it was clear, and I was able to wrap my brain around what was said. The statement on page 607, “LPs hold the promise of transforming science education by providing better alignment between curriculum, instruction, and assessment,” is an example of the pressure placed on implementing LPs without more research and development. I do think that LPs offer a better approach to science instruction than our current “mile wide, inch deep” approach. As the article progresses, it discusses the development of LPs as well as their validation. I think the most helpful part of the article is the section on unresolved issues. Once again we are seeing many researchers bringing their own ideas to the LP table, each with different “grain sizes,” methods, assessments, and conclusions. I guess this hints at Dr. McDonald’s “messiness” of education. Nonetheless, the author seems optimistic about the use of LPs in science education.
A few questions regarding LPs in general: in looking at Wilson’s 2009 article (p. 720) and the construct map, I see that there is a skip from grade 5 to grade 8. My questions here are: are the progressions going to include every grade, and what is the consequence of skipping grades in the LP? I am assuming the skip reflects the fact that a particular science is not taught every year. I did appreciate the illustrations in the article, which showed the various approaches to using construct maps in assembling an LP. More messiness?
In general, I support the idea of LPs and their use in education. I also like the idea of using them for formative assessment, with specific levels that may be used to assess a student’s understanding. I agree with Steedle and Shavelson’s (2009) last statement on page 714: “Thus, learning progression researchers must proceed with caution when attempting to report student understanding in the form of learning progression level diagnoses.” Much like other articles I have read regarding LPs, it seems that many researchers are warning the educational community about the danger of attempting to implement LPs before a significant body of research has developed around them. This mirrors the sentiments of Sikorski and Hammer’s (2010) article, which warns that using LPs prematurely could set back progress in LP research.
Oh, and every time I see “LP” or “LPs” I think of the original use of those letters: long-playing record! Guess I am showing my age.
29 Nov 10
Week 14 – Learning Progressions
At first glance, the idea of learning progressions seems to make a lot of sense to me. By taking into consideration the more unifying concepts of science and developing cognitive progressions for students across defined time frames (three-year chunks), we can begin to attack the depth of students’ understanding rather than the “mile wide and inch deep” focus of today’s schools. I think this type of instruction could allow for more experiential learning opportunities (perhaps this is where more situated learning could be incorporated into the classroom), or at least more room to connect core concepts across multiple domains. These progressions are certainly a work in progress and leave some room for improvement (as Duncan notes). But given the abundance of complaints about our current model of instruction and assessment, are LPs any worse? While certainly not a simple fix, I think they at least present a step in a new direction for instruction.
I thought the idea of giving a multiple-choice exam to assess where a student falls on the progression could be an effective grouping tool, but not an end-all-be-all for assessing someone’s knowledge. From the classes I have taken, I feel I have been programmed to think of multiple assessment types as the way to go, with multiple choice perhaps the weakest method. But ordering the multiple-choice responses as a means of diagnosing a student’s position along a learning progression seems like a pretty effective and relatively easy diagnostic tool (the Steedle and Shavelson paper). While these are not foolproof (students fall into different progressions with their responses, perhaps an indication of knowledge in fragments like diSessa wrote about), they can provide a good first approximation of the content knowledge students possess.
In the Wilson article, using construct maps is presented as the first building block of effective LPs. These construct map levels seemed very similar to state standards anchors, as Wilson notes. In this case, if the proper construct map is not used, it seems the rest of the LP process is shot. Having a clear understanding of the key concepts and the levels at which they typically unfold seems to be the driving force; how we arrive at these key concepts is the tough part that needs to be agreed upon. The idea of developing assessments from construct maps, and then using assessment results to develop appropriate learning progressions, is rational, but it could take a few iterations before we begin to narrow down the correct levels of sophistication. I will consent to this testing process if we can get rid of the PSSAs!
Finally, I read the Songer article on LPs and their development goals. Trying to develop an assessment that measures both content knowledge and inquiry reasoning echoed the purpose Duschl described in developing LPs. Not surprisingly, using embedded assessment activities throughout the progression of a unit allowed for greater expansion of student reasoning and more complexity in answers compared to traditional multiple-choice assessments. This supports the belief most of us probably hold about multiple-choice summative exams. However, this approach will require a lot more effort and thought from teachers as well as students, and instruction methods need to mirror this type of assessment; otherwise I don’t see the value in giving these types of exams and expecting “better” results and thinking from your students.
29 Nov 10
Week 14- LPs
This week’s readings are related to learning progressions. As far as I understand from the articles, learning progression research is based on a cognitive learning perspective, concerns individual students, and attempts to classify students’ understandings into specific LP levels.
That being said, I am really glad that I first read the Learning Progressions chapter from Taking Science to School and Duncan and Hmelo-Silver’s editorial piece from the Journal of Research in Science Teaching. Because these pieces trace the historical development of learning progressions, they helped me understand where LPs are coming from. Before reading these articles, I had been hearing about learning progressions but hadn’t really read anything about them. It was interesting to learn that the roots of learning progressions come from Bruner’s notion of a spiral curriculum and also from Gagne’s hierarchical concepts. The Duncan piece provided an overview of this week’s articles; since it is an editorial, it gives the gist of all of them. According to the articles, LPs provide better alignment between standards, curriculum, and assessment, and the purpose of LPs emerged from the desire to develop assessment systems that track student progress. Duncan and Hmelo-Silver stated that LPs must be informed by empirical research on student thinking and learning in the domain, and they mentioned three approaches to validating LPs: 1) an initial progression is developed solely on the basis of existing research and analyses of the domain; 2) LPs are based on carefully designed cross-sectional studies that document the development of students’ knowledge and reasoning on a particular topic across multiple grades; and 3) a progression is developed through the careful sequencing of teaching experiments across multiple grades. This third, bottom-up approach provides evidence of what students are capable of given carefully designed instructional contexts.
Steedle and Shavelson’s (2009) study tried to determine whether students’ observed responses reflect the systematic application of ideas associated with a single learning progression level, and whether they support a valid interpretation of learning progression level diagnoses. They found that students cannot always be placed at a single LP level, and concluded that interpretations of level diagnoses on a proposed learning progression may therefore be invalid. They provided evidence that students do not always express ideas consistent with a single learning progression level, which raises questions about the validity of diagnosing students’ level of performance based on a given LP. They demonstrated that it is not feasible to develop learning progressions that can adequately describe every student’s understanding of problems dealing with a topic.
Wilson (2009) provides a particular approach, construct maps, for measuring LPs. In this article, the manner in which the measurement approach supports the learning progression is referred to as the assessment structure for the learning progression. I think Wilson’s article demonstrates how complex learning progressions and their assessment are. In particular, his figures with the “thought clouds” as LPs, and the relationships between these LPs and construct maps, show how complex these concepts are.
The Schwarz et al (2009) article was interesting in that it provides an LP for a scientific practice rather than a scientific topic. Their learning progression for scientific modeling has two dimensions that combine metaknowledge and elements of practice: scientific models as tools for predicting and explaining, and models changing as understanding improves. They found that students moved from illustrative to explanatory models and developed increasingly sophisticated views of the explanatory nature of models, shifting from seeing models as correct or incorrect to seeing them as encompassing explanations for multiple aspects of a target phenomenon.
We can conclude from these articles that LPs are new to the field and that there is not yet enough evidence to judge whether or not they are useful. Research shows that it is difficult to develop LPs, but at the same time, since they are evidence-based, they are promising. I think that as more research provides evidence, we will have a better sense of them.
29 Nov 10
PDE’s Science Learning Progression
http://www.pdesas.org/main/fileview/Science_Learning_Progressions_August_2010.pdf
The link is from the Pennsylvania Department of Education’s “Standards Aligned System” website. It is their version of the learning progression for science.