Perception: Adding Complexity to the Complex

Brian Brennan

Dr. Jonathan Hakun

Psych 256 (002)


Through years of evolution, natural selection has shaped human beings into the most advanced and intelligent form of life known to exist on this planet. With each passing day, we continue to adapt in ways that go largely unnoticed: our brains incessantly take in, filter, and apply data from the surrounding environment to help us cope with ever-changing conditions. A conscious awareness of the myriad processes that occur while our brains take in this data, however, would overwhelm us with a tedious, daunting task that would consume most, if not all, of our daily lives. Ironically enough, our brains have adaptively solved this problem by allowing data intake to proceed in a seemingly automatic and effortless manner, so that our ability to use our perceptions to recognize, reason, and react to environmental stimuli maximizes efficiency while minimizing effort. To that end, our most reliable and valid data-gathering devices are the two complex machines resting on either side of the nose: the human eyes. Vision is arguably our most important sense, given that "more than 50 percent of the cortex, the surface of the brain, is devoted to processing visual information," according to David Williams, the William G. Allyn Professor of Medical Optics (Hagen, 2012). But what exactly does vision entail, and why is it so important? To answer that question, one need only consider the concept of visual perception.

Over the past several decades, researchers around the world have investigated human perception with great vigor and have answered many previously unresolved questions about it. Two methods in particular have driven these answers: brain ablation and the broader neuropsychological approach. Through these two methodological approaches, we have discovered that perception largely proceeds through two neural pathways, known as the ventral and dorsal streams, which are responsible for determining what an object is and where/how to act on it, respectively (Goldstein, 2014). Nevertheless, given how much of the physiology of perception remains unexplored, there will always be questions left unanswered and discoveries yet to be made. Recently, a team of neuroscientists at the Massachusetts Institute of Technology, led by Michael Goard (now of the University of California, Santa Barbara), made yet another momentous discovery in the field of neuropsychology, one that further contributes to our understanding of how perception guides action.

A recent article posted by ScienceDaily illustrates just how significant this discovery was and, more importantly, what it can reveal about how human beings perceive the world around us. The research was specifically geared toward understanding the neural circuitry responsible for transforming perceptual signals into coordinated motor responses. Before one can appreciate the significance of the findings, however, it is important first to explain what we already know about perception and how it relates to this research, especially the neural circuitry thought to play a pivotal role in using our perceptions to guide how we interact with objects in our environment. Goard explains that "mapping perception to a future action seems simple. We do it all the time when we see a traffic light and use that information to guide our later motor action" (University of California – Santa Barbara, 2016). However, after familiarizing oneself with the complexities involved, as exemplified in lesson three, it becomes clear that perception is anything but simple. Consider, for example, the following sentence:


The quick brown fox jumps over the lazy dog.


In the mere second it took you to read it, photons of light reflected from your computer screen (or the page) entered your eyes and were projected as a coherent two-dimensional image onto your retinas; there, photoreceptors converted the light into electrical signals, which were propagated through the optic nerve to the occipital lobe and then on to the regions of the brain responsible for understanding what you are looking at. This is only half of the story, and it occurs by means of bottom-up processing.

The second half, known as top-down processing, also occurs while you perceive the sentence. This knowledge-based processing allows you to use your prior knowledge of the English alphabet (every letter of which appears in the sentence above), word formation, sentence structure, and pronunciation to make sense of the perception itself. It is also worth noting that, while reading and making sense of the sentence, your brain was using the same two processes to scroll through (or turn) the page. In my case, I was using both processes to locate the keys with which to type the sentence, draw on my prior knowledge of perception to formulate a clear interpretation of this complex perceptual process, and thus compose the text you are reading at this very moment. To call perception, including the two ways in which we process environmental data, a complex physiological process almost seems like a drastic understatement. To that end, the interaction between perceiving (all that goes into your experience of reading or writing a sentence) and acting on whatever your attention is turned to (using your eye muscles to navigate across the page, or your fingers to locate and press keys on a keyboard) has caught the attention of researchers around the world, as the scientific community continues to ask how our brains accomplish these tasks, a question of primary importance in the research led by Michael Goard and colleagues.

This question was first addressed in the 1980s, after considerable theoretical support emerged for a close connection between perceiving an object and using that perception to interact with it appropriately. To investigate problems of this nature (i.e., those involving unobservable and complex neurological mechanisms), researchers frequently employ brain ablation, the process of nullifying a brain region's activity via surgical removal or chemical injection. By subjecting test subjects (i.e., primates) to an object discrimination task after careful ablation of temporal regions of their brains, researchers showed that a specific pathway is responsible for determining the identity of a given object, a task that proved difficult for subjects with ablated temporal lobes. This experiment led to the formal discovery of the what pathway (also referred to as the ventral stream), which extends from the occipital lobe to the temporal lobe (Goldstein, 2014). Similar research revealed another pathway involved in perception, known as the where (or, more appropriately, the how) pathway. Using a landmark discrimination task, in which the subject must remember an object's location and then choose that location after a delay period, together with ablation of regions of the parietal lobe, researchers showed that the neural pathway responsible for determining an object's location in space extends from the occipital lobe to the parietal lobe; it is known as the dorsal stream (Goldstein, 2014). The existence of these two pathways was given further experimental support through a neuropsychological approach.

Much like test subjects whose neurological functioning was nullified through ablation, individuals with brain damage can also serve as useful test subjects, depending on where the damage occurred. Studying such individuals is the hallmark of the neuropsychological approach, and this approach has provided further support for the existence of a perception (ventral) stream and an action (dorsal) stream (Goldstein, 2014). Nevertheless, the notion that these two streams are solely responsible for perception and its associated actions has come under scrutiny, and for good reason: although we can use our neurological machinery to study itself, the evidence gathered so far falls short of a complete and accurate understanding of it. Goard's full quote regarding perception reads, "Mapping perception to a future action seems simple… However, how these associations are mapped across the brain is not well understood" (Goard et al., 2016).

In their research article, published in the journal eLife, Goard and colleagues introduce their work by explaining that sophisticated sensorimotor decisions (e.g., using a traffic signal to guide a later driving maneuver) often require mapping specific sensory features to motor actions at a later time. They also note that the connection between perception and action may involve neural circuits beyond the ventral and dorsal streams alone, a logical possibility when attempting to understand why we do what we do (Goard, Pho, Woodson, & Sur, 2016). The article identifies several unresolved issues in turning a perception into an action, including a lack of clarity about which regions carry out the sensorimotor transformation and which regions maintain task-relevant information between receiving a stimulus and responding to it. The authors also point out that "although measurement of neural activity is an important first step toward defining task-related regions, the presence of neural activity does not prove that a given region plays a causal role in mediating behavior" (Goard et al., 2016). To that end, the researchers asked whether the differences in sustained neural activity that previous studies observed in parietal and prefrontal cortical regions during perceptual tasks are consistent with existing theoretical models, or whether those differences could be attributed to some other aspect of the task (Goard et al., 2016).

To help resolve these issues, the researchers took a more comprehensive and technologically advanced approach: they measured and perturbed activity across sensory, parietal association, and distributed motor cortical regions during a visual delayed-response task. Rather than relying on ablation or a purely neuropsychological approach, they used recently developed optical inactivation techniques, which were deemed most apt for the experiment because recent work had shown that the behavioral effect of cortical inactivation depends on its timing and on whether it is bilateral or unilateral (Goard et al., 2016). Using mice as their test subjects, the researchers employed an optogenetic approach, inactivating bilateral cortical regions that exhibited task-related responses. In doing so, they were able to determine whether sensory, association, and frontal motor cortical regions are necessary during the stimulus, delay, and response epochs of a memory-guided task. Simply put, the significance of their research rests in the conclusion that the visual and parietal areas are involved in perceiving the stimulus and transforming it into a motor plan (as explained in lesson 3), but only the frontal motor cortex is necessary for maintaining that motor plan over the delay period.

This research is particularly relevant to our understanding of human perception because it reveals a bit more about top-down processing. By using a more advanced technique, optogenetics, which inactivates neurons in a temporally precise manner by manipulating light-sensitive nerve cells with photons of light, the researchers obtained a far more precise and accurate portrayal of what happens in the brain when perceptual information (e.g., seeing a traffic light) guides later motor action (e.g., hitting the brakes). Beyond the established roles of the parietal and temporal lobes in our perceptually based decisions, this study provides evidence that perception is even more complex than originally thought.


References

Goard, M. J., Pho, G. N., Woodson, J., & Sur, M. (2016). Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions. eLife, 5, e13764. doi:10.7554/eLife.13764

Goldstein, E. B. (2014). Cognitive psychology: Connecting mind, research and everyday experience (4th ed.) [VitalSource Bookshelf version]. Retrieved from https://bookshelf.vitalsource.com/books/9781305176997


Hagen, S. (2012). The mind's eye. Rochester Review, 74(4). Retrieved from http://www.rochester.edu/pr/Review/V74N4/0402_brainscience.html


University of California – Santa Barbara. (2016, September 8). Neuroscience: Linking perception to action. ScienceDaily. Retrieved from http://www.sciencedaily.com/releases/2016/09/160908131001.htm
