Tag Archives: Perception

Top-down Processing and Dyslexia

While going through bottom-up and top-down processing in this week’s lesson, I found myself relating these different perception approaches to my dyslexia. These topics gave me an idea of why I may read or perceive words differently than someone without dyslexia. While considering these different approaches, I have realized that my perception of words and sentences relies on top-down processing much more than on bottom-up processing. For this blog post, I will provide examples of how I often misread or misperceive words due to top-down processing, and I will explain why bottom-up processing, or a combination of the two, may be more efficient for reading.

Bottom-up processing deals with sensing raw data from the environment through sight, smell, sound, taste, or touch and forming a perception of that data based on the senses. Top-down processing may cause one to perceive that same raw data differently by making assumptions based on what is expected or by drawing on past experiences while processing the data. Top-down processing is probably not the sole reason for my dyslexia, but it does help me better understand certain symptoms of my disorder. For example, my family and I recently went on vacation and spent a lot of time driving. While on the road, I noticed that I would often misread signs. If a sign read, “Seasonal Lodging,” my family would see the words and perceive them correctly. I might see the same sign but perceive it as reading, “Logical Reasoning,” due to the somewhat similar ordering of letters and because I am exposed to the term “logical reasoning” more often than “seasonal lodging,” so that is what I would expect. This is an example of making assumptions based on expectations before the data is fully processed.

Another common mistake that I make deals with seeing numbers among words. If a sign were to read, “80 South,” I may perceive the sign as reading “86 South.” This is because I do not simply form a visual perception based on what I sense through sight, as one would with bottom-up processing. Instead, I see the number “80,” so I assume that I am now dealing with numbers. I then see the letter “S” and read it as “6” because of the “S” sound at the start of “six.” Instead of making only a visual perception based on my sense of sight, I combine it with an auditory association driven by my expectations.

Bottom-up processing may be the more efficient method while reading because it involves visually sensing the letters, decoding the letters to form words, and allowing the words to form complete sentences. This method prevents assumptions and allows for comprehension. Top-down processing may lead one to misread a sentence due to what he or she expects it to say. However, combining the two types of processing may be helpful as well. While top-down processing may cause one to make assumptions, it can also be helpful for understanding the true meaning of a strangely worded sentence or a sentence with errors. For example, English professors often use the sentence, “Let’s eat, Grandma,” to point out the importance of proper punctuation. Without the comma, the sentence has an entirely different meaning, but top-down processing allows one to use previously acquired knowledge and expectations to realize the intended meaning (Gjessing & Karlsen, 1989, p. 71).

Reading has always been a relatively difficult task for me due to dyslexia, but this week’s lesson provided me with an understanding of how my perception process may be involved. I have become aware that my reading and comprehension habits deal primarily with the top-down process, which may explain why I add and remove words from sentences and often misread words. With this knowledge, I can attempt to apply the bottom-up process while reading, which may help with the disorder.

 

References

Gjessing, H. J., & Karlsen, B. (1989). A longitudinal study of dyslexia: Bergen’s multivariate study of children’s learning disabilities. New York, NY: Springer Science & Business Media.

Perception and the Tiny Snake

It is absolutely amazing how quickly we perceive the elements of our environments. About a month ago, I went for a run near my home in southern York County, Pennsylvania, which I do on a (fairly) regular basis. We have the absolute best area for running/walking right near my home, as part of the road we live on is currently closed to traffic due to a bridge that is in desperate need of repair. I take full advantage of this free “track”, where I can run or walk with little worry of being run over by a car. It is a dirt road that goes through a forest, so there are plenty of sticks, twigs and leaves on the road and I frequently see deer, squirrels and chipmunks during my runs. This particular day, as I was running, I noticed something that caused me to stop. Amongst the twigs and gravel, there was a tiny snake. I had been moving at a pretty swift speed and this snake was only about 3-4 inches long and mostly black. If I had not spotted this snake, I would have most likely stepped on it.

Luckily, it was a very easily identifiable, non-venomous species of snake: the northern ringneck snake, Diadophis punctatus edwardsi (Vigil & Willson, n.d.). The part that I find most intriguing is that I was able to identify this creature as a snake, in spite of the fact that I was moving and it was small and well camouflaged amongst the twigs and leaves. This week’s lesson on perception brought this particular story to mind, causing me to evaluate how exactly I perceived this tiny snake and determined that it was a snake and not a twig.

Seeing the light reflected from the snake began the series of events that led to my perception and identification of it. This reflection set off a chain of electrical signals sent from my eyes into my brain, activating specific neurons that are tuned to fire in response to particular orientations of the things we see (Goldstein, 2011). The shape of the snake consisted of geons, the basic parts or shapes that can be observed and that help us to identify the object being visually perceived (Goldstein, 2011). Even though I may not have seen the entire snake, seeing the majority of its geons allowed me to perceive and identify the object as a snake (Goldstein, 2011).

One aspect of perception that helped in identifying this snake is semantic regularities: knowledge of the objects that would normally occur in a given type of setting (Goldstein, 2011). In this way, I used my previous experiences and knowledge to recognize that a snake is a normal thing to find in a forest setting, so the object was quite possibly a snake. Inside my brain, I was using the “what” pathway from my striate cortex to the temporal lobe to identify what this object was in my environment, and I used the “where” pathway from the striate cortex to the parietal lobe to determine where the snake was in the environment, allowing me to react appropriately (in this case, to step around the snake instead of on it) (Goldstein, 2011).

The most fascinating part of all of this is that all of these processes were engaged so quickly. In a matter of seconds, I was able to identify this tiny snake as a living creature and react. Our brains are incredible processing organs and we aren’t even consciously aware of what they are doing, even during such a mundane event as finding a snake while running through the woods.

Resources

Goldstein, E. B. (2011). Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed.). Belmont, CA: Wadsworth, Cengage Learning.

Vigil, S., & Willson, J. D. (n.d.). Species profile: Ringneck snake (Diadophis punctatus). Retrieved from http://srelherp.uga.edu/snakes/diapun.htm

Perception and Learning

There’s so much going on “up there.” So much swirling around at night before we fall asleep sometimes, and so much that we are supposed to remember to do, in a specific order, at a specific time.

“Crap! (or expletive) I have an appointment 30 minutes after work tomorrow.  I told Ethel we’d have coffee. Dang it, that project is due in four days.  Did I lock the door? Oh wow, my car payment is two days past due.”

Ok, so maybe this isn’t all of us, but sometimes, it’s me. When I began reading about Gestalt psychology and perception, I began to wonder: are we taught in a certain pattern when we’re young in an attempt to keep our brains more organized and clear when we’re older? The beauty of psychology is that not all brains, genetic backgrounds, or environments are alike. But I also wondered whether the laws of proximity, similarity, good continuation, and connectedness are natural or learned. Ah, yes, the everlasting debate: nature vs. nurture. We’re well aware now that it’s both, but when we’re talking about this topic, perception, is perception learned or innate? Hmm. If it’s a combination of the two, how do we know to what extent?

Reading about the laws of perception sparked these questions for me. It’s easy to observe and agree that we tend to group things, categorize things, and connect things, all for useful purposes. Still, it is striking to watch what our brains often do with a picture of colored dots grouped together, a cluster of parallel lines, or a curved line on top of a straight line that we separate into two shapes even though they touch.

Back to my original thought: I believe that, by some force, whether it be learned, innate, evolutionary, or something else, we often think this way for a reason, and that these laws are pretty universal. Even my scatter-brained self can agree with these laws of perception. They seem like common sense, but when we analyze them, they can really get us thinking.

Halfway There: The Interaction of Perception and Action

“But as everything else about perception, this ease and apparent simplicity are achieved with the aid of complex underlying mechanisms.”

-E. Bruce Goldstein

I was planning on writing a blog entry about the interaction between perceiving and taking action involved in writing this very blog post. I sat brainstorming about how many trade-offs between action pathways and perception pathways I would encounter. I considered delving deeper into the process and including information about feature detectors and the electrical signals that traveled through my retina. I became distracted by my hunger, so I procured a snack, came back to my laptop, and reread the “Welcome to Penn State’s Cognitive Psychology Blog” post to get back on track. I have a personal blog outside of class, so when I saw that adding graphics was permitted, I loved the idea. It is a bit of a lengthy process to explain how this happened, but here goes.

Light reflects off of my cell phone and focuses an image of the phone onto my retina, where specific neurons fire (Goldstein, 2011, pp. 38-39). The precise neural code this creates is carried through my optic nerve into the primary visual receiving area of my brain, also known as the occipital lobe, and I can finally perceive a representation of my phone (Goldstein, 2011, pp. 30, 38-39). From there, this representation of my phone travels to my temporal lobe, which is responsible for identifying my cell phone as such through what is called the “what” or “perception” pathway (Goldstein, 2011, pp. 72, 74). Additionally, it is transmitted to my parietal lobe through what is called the “where” or “action” pathway, which allows me to locate the exact place my cell phone is sitting in relation to the twenty-something other objects around me (Goldstein, 2011, pp. 72, 74). With all of this information sorted out, I can reach for my cell phone.

As I pull my arm back to grasp my cell phone, my “perception” pathway is again at work, sending my temporal lobe information about the dimensions, estimated weight, and other perceptual details of my phone. This allows me to pick up my phone with my left hand without whipping it across the room, thank goodness, as my “action” pathway shares neural information with my parietal lobe. My brain perceives a representation of the round home button on the lower end of my cell phone through the “perception” pathway and receives information about its location through the “action” pathway so that I can press it down to illuminate the screen on my cell phone. That, of course, could not successfully happen without first perceiving how delicate the glass button mechanism on my phone is, so I do not shatter it and endure glass shards in my thumb.

I am sorry, everyone. I was sure I would be able to explain this in time! I wanted to share how the coordination between the areas of my brain responsible for perceiving and taking action enabled me to take the picture for the featured image of my blog post.

It is hard to believe I was only able to describe the process up to illuminating the screen on my phone. I did not even get to selecting and pressing the camera icon, looking at my keyboard to position my hand, looking through my phone at the image to select the perfect angle, or pressing the circle to capture the photo itself. I am guessing that would require a lot more writing space, so maybe another time. Really though, I am feeling a little “mind-blown” that this happens every time I take a selfie…

#SelfiesForPsychology

-Lia Marie

Goldstein, E. B. (2011). Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed.). Belmont, CA: Wadsworth, Cengage Learning.