“But as everything else about perception, this ease and apparent simplicity are achieved with the aid of complex underlying mechanisms.”
-E. Bruce Goldstein
I was planning to write a blog entry about the interaction between perceiving and taking action as it applied to writing this very blog post. I sat brainstorming about how many trade-offs between action pathways and perception pathways I would encounter. I considered delving deeper into the process and including information about feature detectors and the electrical signals traveling through my retina. I became distracted by my hunger, so I procured a snack, came back to my laptop, and reread the “Welcome to Penn State’s Cognitive Psychology Blog” post to get back on track. I have a personal blog outside of class, so when I saw that adding graphics was permitted, I loved the idea. It is a bit of a lengthy process to explain how this happened, but here it goes.
Light reflects off of my cell phone and is focused into an image of my cell phone on my retina, where specific neurons fire (Goldstein, 2011, pp. 38-39). The precise neural code this creates is carried through my optic nerve into the primary visual receiving area of my brain, also known as the occipital lobe, and I can finally perceive a representation of my phone (Goldstein, 2011, pp. 30, 38-39). From there, this representation of my phone departs to my temporal lobe, which is responsible for identifying my cell phone as such through what is called the “what” or “perception” pathway (Goldstein, 2011, pp. 72, 74). Additionally, it is transmitted to my parietal lobe through what is called the “where” or “action” pathway, which allows me to locate the exact place my cell phone is sitting in relation to the twenty-something other objects around me (Goldstein, 2011, pp. 72, 74). With all of this information sorted out, I can reach for my cell phone.
As I reach out to grasp my cell phone, my “perception” pathway is again at work, sending my temporal lobe information about the dimensions, estimated weight, and other perceptual details of my phone. This allows me to pick up my phone with my left hand without whipping it across the room, thank goodness, while my “action” pathway feeds neural information to my parietal lobe. My brain perceives a representation of the round home button on the lower end of my cell phone through the “perception” pathway and receives information about its location through the “action” pathway so I can press it down to illuminate the screen. That, of course, could not happen successfully without first perceiving how delicate the glass button mechanism on my phone is, so that I do not shatter it and endure glass shards in my thumb.
I am sorry everyone. I was sure I would be able to explain this in time! I wanted to share how the coordination between the areas of my brain responsible for perceiving and taking action enabled me to take the picture for the featured image of my blog post.
It is hard to believe I was only able to describe the process up to illuminating the screen on my phone. I did not even get to selecting and pressing the camera icon, looking at my keyboard to position my hand, looking through my phone at the image to select the perfect angle, or pressing the circle to capture the photo itself. I am guessing that would require a lot more writing space, so maybe another time. Really though, I am feeling a little “mind-blown” that this happens every time I take a selfie…
Goldstein, E. B. (2011). Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed.). Belmont, CA: Wadsworth.