30
Nov 10

Learning progressions, good idea or another mechanism for standardization?

I don’t know what I think about learning progressions, and these readings, to be honest, didn’t really clear much up. The idea, as I understand it, involves looking at how students learn a concept over an extended period of time (several months to several years). The analogy to a spiral was frequently brought up in the readings to suggest that ideas are revisited several times, adding greater detail with each successive revolution. As a theory of learning, I think this idea is pretty sound. I am willing to buy the notion that any given scientific topic has aspects which are too complex for a young mind to grasp. It makes sense to start out with a vague, incomplete description of a concept or idea and make it increasingly complex as the student progresses and their understanding develops. Further, it makes sense to study what students can understand and figure out how best to present certain topics. In other words, figuring out how to design and implement a learning progression. 

What concerns me, however, is how learning progressions might be incorporated into a public education system like the one in the United States. How closely should they be tied to instruction and pedagogy? This seems to be the central question surrounding the idea at this time. How do you feasibly do it (connect the notion of learning progressions to formal teaching) without a standardized national curriculum? Say school A decides that they will teach concept X as a learning progression that spans three years. What if Billy starts going to that school two years into that learning progression? I suppose it would be logical to assess his abilities and place him somewhere in the progression that makes sense, but what about the learning progressions of which he was a part prior to moving to school A? What if he leaves and goes to another school? I don’t really see how this could work in a system where individual schools act independently of one another, and I certainly wouldn’t advocate for a national curriculum either. 
On the other hand, I guess one could argue that the theory of learning progressions could be applied to each individual. An instructor could measure each student’s progress along the progression in terms of some element of content and then design their instruction accordingly. While this makes more sense to me, I am not clear on how different this is from what we are doing already. The NSES and other standards define where they feel a student should be at a certain point in their educational career, and those standards are used (in theory) to inform teaching. How is that different? I kind of get the feeling that this could morph into another justification for standardization and testing. I’m not sure how comfortable I am with that. 

30
Nov 10

Misconceptions, Coherence, and oh yeah, Learning Progressions

The readings this week around Learning Progressions brought up some discussions we had earlier in the semester for me, and I’d like to use this post to try and sort out some of these disconnected thoughts.  The specific concepts that came up for me were misconceptions, coherence, and the idea from Vygotsky that we shouldn’t just focus on the outcome, but on the process toward getting to the outcome.  I’ll talk through these one at a time:

Misconceptions

This one is probably the hardest to tie to a specific reading, but there was a sense of deja vu as I read through the Stevens et al article (developing an LP for the structure of matter) and the Steedle et al article — it seemed like there was a motivation that if we can just map out all the learning progressions, and figure out the paths from one place to the other (the learning trajectories that are mentioned in Steedle et al), we’d be done with educational research.  In the margins of the conclusion to the Stevens et al article, I wrote: “catalog the full collection of LP’s, determine appropriate LT’s, build robot teachers, and take over the world!”.

That takes the whole thing too far, but this sense of collection reminded me of some of the earlier research we heard about around misconceptions, where there were goals of cataloging the full set of misconceptions a student could have around a topic and ways to undo those misconceptions, and bingo — you’re done.  Learning progressions seem to offer more than misconceptions as a concept did, however, since they offer a specific structure to student concepts, and the notion of movement from one set of concepts to a different set of concepts.  That seems more useful as a notion of learning than the idea that you either have one of a ton of possible misconceptions, or you have the right answer, which was the sense I had from the misconceptions discussions we had in class.

 Coherence

diSessa brought up the whole idea of coherence when looking at different conceptual change theories of learning, and the Steedle et al reading made me think about this as well.  Part of my concern around developing hypothetical learning progressions is that they will be biased toward coherent sets of beliefs — for example, the whole set of physics around the assumption that a constant force is required for an object to move with a constant velocity.  From Steedle et al: “…exploratory model results provided no evidence of systematic reasoning beyond that which was identified by the confirmatory model.”  I’m interpreting this to say that learners aren’t building an entire physics around a misconception — they can pretty effectively separate that notion from the rest of their ideas about how things move if it gets in the way.  This gets to their next point, which is that students don’t always fit within one of the available learning progressions.

Studying the Learning Process

I don’t remember if there was a nifty single word for this, but the last idea that this brought up for me was Vygotsky’s comment that it seems pointless to study the way that someone reacts after they have already learned the behavior — instead we should be looking at how they are learning the behavior.    What I like about Learning Progressions after these readings is that they dive into that place and try to open up discussions about exactly what it is that students are thinking when they’re having trouble with a concept.  This was certainly available in the misconception research as well, but here it seems like we’re thinking of these models cumulatively moving toward the upper anchor of the learning progression, rather than being a stumbling block on the way to the concept.  That way of looking at it seems like it is likely to be more fruitful, since teachers are usually working with students who don’t already know the concept they’re teaching — so ignoring anything outside of mastery seems counterproductive.


30
Nov 10

Learning Progressions

I didn’t know a whole lot about learning progressions before reading these articles, just what I had heard during conversations in some of my classes.  From the little that I have read, it seems like LPs are trying to present content in a unified and progressive manner, focusing on a few core ideas at greater depth.  I think that this might be a step in the right direction in making some changes to the educational system, but there are some things that I question.  

First, when I was reading the Duncan paper I kept thinking about how we group students in the classroom.  Students are largely grouped by age, not ability.  I think it should be the other way around.  I don’t see anything wrong with a “tracked” system where students are placed in classes with peers of the same/similar ability levels.  If a student is excelling in science or art they should be put into an environment that promotes that ability.  I then started to think about the current trend in mainstreaming students into “regular” classrooms.  Is there a benefit to this?  I honestly don’t know much about this, and I can only think that it hurts the class as a whole by giving the teacher a greater range of students to teach to.  Teaching to the middle in this case now means that fewer students are within that range.  I would like to hear some thoughts on this…
Second, the Steedle paper seemed to argue that OMC questions are better at assessing students than open questions.  I think the accuracy of any type of assessment is related to the motivation of the students to do well on the test.  Student motivation is not something that is discussed often in these papers, but I think that it has a huge impact on learning and the performance of the students.  And how can you account for the fact that if a student doesn’t know which answer to choose, they may just guess? 
Lastly, I have more of a general question about the Wilson paper. I’m confused about the “within” & “between” assessments.  Does “between” only assess for one LP or construct map? 


30
Nov 10

LP

So I’m a little confused by the Steedle et al article. I guess I’m just confused about how they went about getting their data and where this facet class came from. It also would have been helpful if they had included the test questions that they used. Oh well. What I got from their article was that their proposed “novice to expert learning progression” did not predict all of the students’ conceptions about force and motion.  Their conclusion that a “novice to expert learning progression” cannot describe every student’s understanding of constant speed probably can be carried over to other subjects. Even though this type of learning progression might not capture every student’s level of understanding, I think they are still helpful in giving the teacher an idea of what kinds of conceptions their students may have.

I found Wilson’s paper to be quite helpful. It showed the different types of construct maps that you can create for a learning progression, which I didn’t know existed. I always thought construct maps were like those presented by Project 2061 that have arrows pointing to each related concept.

Our previous readings would suggest that learning progressions fall into the conceptual change and constructing knowledge category. You are identifying students’ misconceptions and trying to change them while at the same time building on previous knowledge.


30
Nov 10

LPs

In continuing my research into learning progressions, I find that I want to believe that they will make an impact in science and all education. That being said, in reading these articles (and others) and recalling some formal and informal conversations, I think that we are seeing a familiar phenomenon in education: rushing to implement a new concept or idea in the hopes that it will help our struggling school system. I will save that discussion for later.

I think the editorial article by Duncan (2009) on learning progressions was a great lead-in to the other articles: it was clear and I was able to wrap my brain around what was said. The statement on page 607, “LPs hold the promise of transforming science education by providing better alignment between curriculum, instruction, and assessment,” is an example of the pressure placed on implementing LPs without more research and development of them. I do think that LPs offer a better approach to science instruction than our current “mile wide, inch deep” approach. As the article progresses, it mentions the development of LPs as well as the validation of them. I think the most helpful part of the article is the section on unresolved issues. Once again we are seeing many researchers bringing their own ideas to the LP table, each showing different “grain sizes,” methods, assessments, and conclusions. Guess this hints at Dr. McDonald’s “messiness” of education. Nonetheless, the author seems optimistic about the use of LPs in science education.

A few questions regarding LPs in general: in looking at Wilson’s 2009 article (p. 720) and the construct map, I see that there is a skip from grades 5 to 8. My questions regarding this are: are the progressions going to include every grade? What is the consequence of skipping grades in the LP? I am assuming this is due to the fact that a particular science is not taught every year. I did appreciate the use of illustrations in the article. These illustrated the various approaches to using construct maps in assembling an LP…more messiness?

In general, I support the idea of LPs and their use in education. I also support (like) the idea of using them for formative assessments, as well as the specific levels that may be used to assess a student’s understanding. I agree with Steedle and Shavelson’s (2009) last statement on page 714, “Thus, learning progression researchers must proceed with caution when attempting to report student understanding in the form of learning progression level diagnoses.” Much like other articles I have read regarding LPs, it seems that many researchers are warning the educational community about the danger of attempting to implement LPs before a significant body of research is developed on them. This mirrors the sentiments of Sikorski and Hammer’s (2010) article in which they warn of the fact that using LPs prematurely could potentially set back progress in LP research.

Oh, and every time I see “LP” or “LPs” I think of the original use of those letters…long-playing record! Guess I am showing my age.


29
Nov 10

Week 14 – Learning Progressions

At first glance, the idea of learning progressions seems to make a lot of sense to me.  By taking into consideration the more unifying concepts of science, and developing cognitive progressions for students across defined time-frames (3-year chunks), we can begin to attack the depth of understanding students have rather than the “mile wide and inch deep” focus of today’s schools.  I think this type of instruction could allow for more experiential learning opportunities (perhaps this is where more situated learning could be incorporated into the classroom), or at least more room to connect the core concepts across multiple domains.  These progressions are a work in progress for sure, and don’t come without some room for improvement (noted by Duncan).  But given the abundance of complaints about our current model of instruction and assessment, are LPs any worse?  While certainly not a simple fix, I think they at least begin to present a step in a new direction for instruction.

I thought the idea of giving a multiple choice exam to assess where a student would fall on the progression could be an effective grouping tool, but not an end-all-be-all for assessing someone’s knowledge.  From the classes I have taken, I feel I have been programmed to think of multiple assessment types as the way to go; multiple choice being perhaps the weakest method.  But ordering the multiple choice responses as a means to diagnose the student’s position along a learning progression seems like a pretty effective and relatively easy diagnostic tool (Steedle and Shavelson paper).  While these are not foolproof (students falling into different progressions with their responses, perhaps an indication of knowledge in fragments like diSessa wrote about), they can provide a good first approximation of the content knowledge students possess.  

In the Wilson article, using construct maps to begin to build appropriate learning progressions was presented as the first building block of effective LPs.  These construct map levels seemed very similar to state anchors of standards, as noted by Wilson.  In this case, if the proper construct map is not used, it seems the rest of the LP process is shot.  Having a clear understanding of the key concepts and the levels in which they typically unfold seems to be the driving force.  How we get to these key concepts is the tough part that needs to be agreed upon.  The idea of developing assessments from construct maps, and then using assessment results to develop appropriate learning progressions, is rational, but could require a few iterations before we begin to narrow down the correct levels of sophistication.  I will consent to this testing process if we can get rid of the PSSAs!

Finally, I read the Songer article on LPs and their development goals.  Trying to develop an assessment that measures both content knowledge and inquiry reasoning echoed the purpose that Duschl described in developing LPs.  Not surprisingly, using more embedded assessment activities along with the progression of a unit allowed for greater expansion of student reasoning and more complexity in answers compared to multiple choice, traditional assessment types.  This information supports the belief that most probably have regarding multiple choice summative exams; however, this will require a lot more effort and thought from the teachers as well as the students, and instruction methods need to mirror this type of assessment; otherwise I don’t see the value in giving these types of exams and expecting “better” results and thinking from your students.


29
Nov 10

Learning Progressions – Newer than I Thought

So I started my readings for this week with the chapter from Taking Science to School, which made me think, “These are great! Why aren’t learning progressions more widely and effectively used?” Then I got to reading the journal articles and the editorial and I figured out why…learning progressions are a pretty new idea and consequently don’t have very much published research yet. Also, there isn’t much agreement yet about what a learning progression can encompass (just what levels students are at, or including curriculum, assessment, and instruction). The Wilson article definitely indicated that assessment and maybe instruction can be closely linked with learning progressions. He indicated in principle 2 that “the framework for the assessments and the framework for the curriculum and instruction must be one and the same” (page 721). This is part of his BEAR assessment system for evaluating learning progressions. One question that I had after reading this article was whether inquiry can work with learning progressions. Wilson clearly stated that in order for assessments to be useful for teachers, “open-ended tasks, if used, must be quickly, readily, and reliably scorable” (page 721). From my understanding, much of good inquiry is specific to the situation, and a lot of the assessments involved are open-ended. To me, it seems then that inquiry doesn’t fit with Wilson’s description of learning progressions.
The other articles from the week were helpful in seeing the ways that learning progressions are developed and evaluated. The Steedle article pointed out some of the issues that remain to be worked out regarding the alignment between a student’s level and the level that assessments indicate. Their assessment was only accurate at certain levels, so it seems that the tools for evaluating learning progressions have a ways to go until they are reliable.
From the optional articles, I read the Stevens et al. piece about developing a learning progression for the nature of matter. It was interesting to see how they set their lower and upper anchors, then filled in from there. They seemed to have success developing a learning progression, although I would be interested to see how their LP does when evaluated with some of the current tools for LPs. I suspect the LP presented in this paper will probably undergo many more modifications before it’s acceptable as a viable explanation of the levels of understanding for the nature of matter.
Overall, I felt like I learned a lot about learning progressions and I’m kind of excited to see what progress is made with them over the next couple of years. They’re definitely something that I think states should be looking to incorporate into their standards, and I would definitely agree to teach within a learning progression curriculum. I do still have some questions about them, but these articles were a good start.


29
Nov 10

Week 14- LPs

This week’s readings are related to learning progressions. As far as I understand from the articles, learning progression research is based on a cognitive learning perspective, concerns individual students, and attempts to classify students’ understandings at specific LP levels.
   
    That being said, I am really glad that I first read the Learning Progressions chapter from Taking Science to School and the Duncan and Hmelo-Silver editorial piece from the Journal of Research in Science Teaching. Because these pieces provide the historical development of learning progressions, they helped me to understand where LPs are coming from. Before reading these articles, I had been hearing about learning progressions but hadn’t really read articles about them. It was interesting to learn that the roots of learning progressions come from Bruner’s notion of a spiral curriculum and also Gagne’s hierarchical concepts. The Duncan piece provided an overview of this week’s articles. Since it is the editorial piece, it provides the gist of all the articles. According to the articles, LPs provide better alignment between standards, curriculum, and assessment. The purpose of LPs emerged from the desire to develop assessment systems to track student progress. Duncan and Hmelo-Silver stated that LPs must be informed by empirical research on student thinking and learning in the domain. They mentioned three approaches to validating LPs. 1) An initial progression is developed solely based on existing research and analyses of the domain. 2) LPs could be based on carefully designed cross-sectional studies that document the development of students’ knowledge and reasoning on a particular topic across multiple grades. 3) The third approach involves the development of a progression based on the careful sequencing of teaching experiments across multiple grades. This bottom-up approach provides evidence of what students are capable of given carefully designed instructional contexts.
    Steedle and Shavelson’s (2009) study tried to determine whether students’ observed responses reflect the systematic application of ideas associated with a single learning progression level and whether they provide a valid interpretation of learning progression level diagnoses. They found that students cannot always be placed at a single LP level, and concluded that interpretations of level diagnoses on the proposed learning progression would therefore be invalid. They provided evidence that students do not always express ideas that are consistent with a single learning progression level, raising questions about the validity of diagnosing students’ levels of performance based on a given LP. They demonstrated that it is not feasible to develop learning progressions that can adequately describe all students’ understanding of problems dealing with a topic.
    Wilson (2009) provides a particular approach, construct maps, to measuring LPs. In this article, the manner in which the measurement approach supports the learning progression is referred to as the assessment structure for the learning progression.  I think the Wilson article demonstrates how complex learning progressions and the assessment of them are. Especially his figures with the “thought clouds” as LPs and the relationships between these LPs and construct maps demonstrated how complex these concepts are.
    The Schwarz et al. (2009) article was interesting in terms of providing an LP for a scientific practice rather than a scientific topic.  Their learning progression for scientific modeling has two dimensions that combine metaknowledge and elements of practice: scientific models as tools for predicting and explaining, and models changing as understanding improves. They found that students moved from illustrative to explanatory models and developed increasingly sophisticated views of the explanatory nature of models, shifting from models as correct or incorrect to models as encompassing explanations for multiple aspects of a target phenomenon.

We can conclude from these articles that LPs are new to the field and there is not enough evidence to judge whether or not they are useful. Research shows that it is difficult to develop LPs, but at the same time, since they are evidence-based, they are promising. I think as more research provides evidence, we can have a better sense about them. 


29
Nov 10

PDE’s Science Learning Progression

http://www.pdesas.org/main/fileview/Science_Learning_Progressions_August_2010.pdf

The link is from the Pennsylvania Department of Education’s “Standards Aligned System” website.  It is their version of the learning progression for science.


28
Nov 10

Learning Progressions

            I had not realized that the field of learning progression development is at such a beginning stage.  As the editorial stated, this is not really a new idea.  However, the way that researchers today want to define each level within the progressions seems to be new.  It sounds like few, if any, learning progressions have been developed to the point of being usable in the classroom.  It is surprising to me that, given the state of learning progression development, there is still so much discussion and conjecture about how they will be used and how they will be validated. 

            While reading articles this week, I was trying to think about whether the idea of learning progressions is a “cognitive” or a “situative” idea.  From what we’ve read, it sounds like these articles were approached more from a cognitive perspective.  Developers are trying to assess the change that is happening in an individual’s thinking; they are trying to quantify learning.  Also, the descriptions of different conceptual understandings at the different levels in the progressions are about an individual’s understanding.  However, I wondered: if a situativist were to develop a learning progression, what would it look like?  Would there be different levels of understanding within the group/social context?  They would probably look for evidence in transcripts of group conversations to draw conclusions about the level of sophistication in understanding a particular concept that had been reached.   

            There are certainly many challenges to developing a true learning progression.  Steedle and Shavelson found that it is difficult to classify students into different steps within a progression.  This goes back to an idea I find particularly interesting: that students do not necessarily reason consistently according to any one belief or conceptual understanding (p. 704).  In other words, it is very hard to predict how a student will think about similar concepts in different scenarios.  The authors link this to diSessa’s “knowledge in pieces” approach, where students piece together explanations in a disjointed way.  It seems that the more complex the learning progression, the more difficult it will become to truly pinpoint where in that progression a student’s understanding is located.

