Visual Imagery & Memory: The Esotericism of Synesthesia

Brian Brennan
Dr. Jonathan Hakun
Psych 256 (002)
November 20, 2016

Visual Imagery & Memory: The Esotericism of Synesthesia

Mental imagery can be loosely defined as experiencing a sensory impression of something in the absence of any actual sensory input (Goldstein, 2011, p. 289). We experience this phenomenon every day, and it helps us complete many day-to-day tasks. Visual imagery has also been found to be associated with improvements in memory, as demonstrated in tasks such as visualizing interacting images, organizing material, and associating items with words. With respect to the latter task, Alan Paivio's (1963) work on memory and visual imagery led him to formulate the conceptual peg hypothesis, an explanation of how we can use visual imagery to associate items with words. According to this hypothesis, memory for pairs of concrete nouns is much better than memory for pairs of abstract nouns, because concrete nouns create images that other words can hang onto, which enhances memory for those words (Goldstein, 2011).
A similar but considerably more esoteric phenomenon than visual imagery is synesthesia. In this condition, stimulation in a given sensory modality (e.g., touch) or cognitive process (e.g., computing) automatically triggers additional experiences in one or several other unstimulated domains (e.g., vision, emotion; Safran & Sanda, 2015, p. 36). The similarity to visual imagery lies in the fact that a sensory experience is produced without any directly related sensory input. When I was a child, my preschool teachers became greatly concerned with my learning ability. I vividly remember overhearing their conversation with my parents: "Bri is clearly a very intelligent student, but he is having a great deal of trouble with his colors, and he keeps yelling out colors during our morning music hour." As a result, my parents sent me to several doctors who tested my vision, and I was subsequently diagnosed with color vision deficiency (color blindness) and synesthesia.
To this day, certain pitches of sound create very specific, complex colors in my visual field, many of which I cannot correctly identify due to my color vision deficiency. Nevertheless, my condition has helped me immensely, especially when it comes to memory. According to an article in Scientific American, "Research has documented that…synesthetic color differences can facilitate performance on tasks in which real color differences facilitate performance for nonsynesthetes" (Palmeri & Blake, 2006, p. 2). Personally, my memory for songs is impeccable, and thus I am very musically oriented. For example, the other week, my friend, who is fascinated with my condition, played a song on his guitar and then prompted me to replicate it on the piano. As he was playing, I fixated my vision on a single point on the wall and listened carefully to the cascade of colors that presented themselves there. I then used visual imagery to recreate the color pattern and was able to play the song, which I had never heard before, nearly note for note. Though my rendition was rhythmically sloppy, I hit every note correctly, primarily because of my ability to utilize visual imagery as it relates to memory.
Synesthesia isn't as useful as it sounds in many respects, however, because I tend to get extremely overwhelmed and anxious in situations with a lot of noise. For example, I cannot and refuse to go to concerts strictly for this reason. Additionally, high-pitched, piercing tones create colors that can only be described as frightening (usually a dark red), while low-pitched tones create colors that are more depressing (usually a deep blue). Nevertheless, this is considered a normal occurrence, as "persons presenting with synesthesia commonly avoid mentioning their unusual percepts and even tend to close on themselves in psychological distress" (Safran & Sanda, 2015, p. 2). Although my condition catches the attention of almost everyone who finds out I have it, I avoid telling others about it, because the revelation is usually followed by individuals yelling out different tones and thereby producing rather aversive colors in my visual field. As can be deduced from the above, sounds and their respective synesthetic colors also evoke an emotional response in me that I am unfortunately unable to control. However, this emotional response can also be quite enthralling, as many sounds and songs that please me have the capacity to send chills down my spine and bring me to tears of joy. For example, after hearing Sia's popular song "Chandelier" (piano version), I cried hysterically and was left with chills for about an hour. The colors were remarkable, and I can still remember the repetition of yellow, blue, and a soft green. To that end, I recently (two weeks ago) began learning an instrument, the piano, for the first time, in the hope of creating music of my own that will evoke similar emotional responses.

References

Goldstein, E. B. (2011). Long-term memory: Encoding and retrieval. In L. Schreiber-Ganster (Ed.), Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed., pp. 268-291). Belmont, CA: Wadsworth Cengage Learning.

Palmeri, T. J., & Blake, R. B. (2006). What is synesthesia? Scientific American. Retrieved from https://www.scientificamerican.com/article/what-is-synesthesia/

Safran, A. B., & Sanda, N. (2015). Color synesthesia: Insight into perception, emotion, and consciousness. Current Opinion in Neurology, 28(1), 36-44. doi:10.1097/WCO.00000000000169

New Insight into the Mechanisms Involved with Memory Consolidation

Brian Brennan
Dr. Jonathan Hakun
Psych 256 (002)
October 16, 2016

New Insight into the Mechanisms Involved with Memory Consolidation

Whenever an individual recalls a contextual memory, the neurological mechanisms responsible for carrying out that retrieval largely involve cortical connections previously established by hippocampal activity. This is made possible by the process of memory consolidation, defined as the process by which information that has entered the memory system becomes strengthened to such an extent that it is resistant to interference caused by trauma or other events (Goldstein, 2011). Traumatic events, such as a concussion, sometimes result in a form of amnesia in which the individual loses memory for events that occurred prior to the trauma, known as retrograde amnesia. Based primarily on the graded property of retrograde amnesia (i.e., amnesia that is worse for experiences that occurred just before the brain injury), researchers developed the standard model of consolidation. This model, which delineates how we are able to retrieve such vivid contextual memories, implies a complex and dynamic interplay between the hippocampus and various cortical regions, and it has been widely accepted as the most accurate account of the neurological mechanisms involved in consolidation. Nevertheless, the model leaves a great deal of ambiguity regarding whether other brain regions and neurological components might also be responsible for memory consolidation. To that end, a recent study published in the journal Science investigated a question aimed at furthering our understanding of the mechanisms involved in long-term memory consolidation, specifically with respect to the involvement of what are known as engram (memory) cells.
Before delving into the complexity of engram cells, it is important to begin with an overview of the standard model of consolidation. According to this model, memory retrieval depends on the hippocampus during consolidation; once consolidation is complete, however, retrieval no longer depends on the hippocampus and instead operates by means of intracortical connections. The model explains consolidation as occurring through a sequence of three neurological events. The first step involves the hippocampal coordination of memory information into the respective cortical regions. Because memories involve many different cognitive and sensory areas in the cortex, incoming information from a new experience is distributed across these cortical regions, a process coordinated by the hippocampus. The second step, which is the core feature of consolidation, is known as reactivation. In this stage, the hippocampus replays the neural activity associated with the given memory, which was previously established in the network connecting this structure and the aforementioned cortical regions. This activity results in the formation of connections between and within the cortical regions themselves (as opposed to connections between the hippocampus and the cortex), which are strengthened upon each reactivation. The third and final step can be deemed the consolidation stage, in which the cortical connections have been reactivated and strengthened to such an extent that the hippocampus is no longer needed to retrieve the given memory (Goldstein, 2011). Memory consolidation itself is made possible through a phenomenon known as long-term potentiation (LTP), an increased rate of neuronal firing that occurs as a result of prior activity at the synapse, producing structural changes and enhanced responding.
The most noteworthy point of this model is the notion that the synaptic strength resulting from LTP and cellular consolidation is pivotal to the reactivation process, and hence to the ability to store a memory. However, the likely involvement of other neural components in memory consolidation, namely engram cells, raises the question of whether these components rely on the same mechanisms implicated in the standard model of consolidation.
According to Susumu Tonegawa and colleagues of the Massachusetts Institute of Technology's Department of Brain and Cognitive Sciences, an engram can be defined as the enduring physical and/or chemical changes, elicited by learning, that underlie newly formed memory associations (Tonegawa, Liu, Ramirez, & Redondo, 2015). Moreover, engram cells are populations of neurons that are activated by learning, undergo enduring cellular changes as a consequence of learning, and whose reactivation by part of the original stimuli delivered during learning results in memory recall (Tonegawa et al., 2015). But how exactly are these cells implicated in memory consolidation? A team of researchers at the Massachusetts Institute of Technology tried to answer just that; that is, they were interested in determining whether, and to what extent, the mechanisms involved in the standard model of memory consolidation are applicable to engram cells.
Taking into consideration the structural changes that must occur for new memories to form, the researchers blocked these changes in lab animals, thereby inducing retrograde amnesia. Using injections of anisomycin, an antibiotic that inhibits protein synthesis, they were able to prevent these synaptic changes from occurring, hindering the ability to form new memories (Ryan, Roy, Pignatelli, Arons, & Tonegawa, 2015). According to Goldstein (2011, pp. 168-201), the key to the experimental use of this antibiotic is the timing of the injection, and the researchers state that the injection was delivered "immediately after contextual fear conditioning" (Ryan et al., 2015). Prior to experimentation, the research team proposed that a specific pattern of connectivity among engram cells is crucial for storing memory information and that strengthened synapses in these cells would contribute to memory retrieval (Ryan et al., 2015). However, they found that the latter assertion was in fact false.
Based on their data, the researchers were surprised to find that the increased synaptic strength resulting from cellular consolidation is not a crucial requisite for storing a memory (Ryan et al., 2015). Because the lab animals were in a lab-induced state of retrograde amnesia, it was assumed that they would be unable to retrieve the memory associated with the initial contextual fear conditioning. However, when the team used a light-based technique to directly reactivate the relevant cells, the memory was in fact retrievable, indicating that these mechanisms are independent of the synaptic strengthening involved in long-term potentiation (Ryan et al., 2015). These findings collectively illustrate the elusive nature of the neurological mechanisms involved in complex cognitive processes such as memory. To that end, this research has opened the door for others to answer how exactly memory engram cells operate with respect to memory consolidation.

References

Goldstein, E. B. (2011). Long-term memory: Encoding and retrieval. In L. Schreiber-Ganster (Ed.), Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed., pp. 168-201). Belmont, CA: Wadsworth Cengage Learning Inc.

Ryan, T. J., Roy, D. S., Pignatelli, M., Arons, A., & Tonegawa, S. (2015). Engram cells retain memory under retrograde amnesia. Science, 348(6238), 1007-1013. doi:10.1126/science.aaa5542

Tonegawa, S., Liu, X., Ramirez, S., & Redondo, R. (2015). Memory engram cells have come of age. Neuron, 87(5), 918-931. doi:10.1016/j.neuron.2015.08.00

Perception: Adding Complexity to a Complex Process

Brian Brennan
Dr. Jonathan Hakun
Psych 256 (002)

Perception: Adding Complexity to a Complex Process

Through years of evolution, natural selection has molded human beings into the most advanced and intelligent form of life known to exist on this planet. With each passing day, we continuously adapt and improve ourselves in seemingly unnoticeable ways. Our brains, in particular, incessantly take in, filter, and apply data from the surrounding environment to help us adapt to ever-changing conditions. A conscious awareness of the myriad processes that occur while our brains take in this data, however, would overwhelm us with a tedious and daunting task that would consume most, if not all, of our daily lives. Ironically enough, our brains have adaptively solved this problem by allowing data intake to proceed in a seemingly automatic and effortless manner. Ergo, our ability to use our perceptions to recognize, reason, and react to environmental stimuli operates so as to maximize efficiency while minimizing effort. To that end, our most reliable and valid data-taking devices are the two-part complex machinery resting on either side of the nose: the human eyes. Indubitably, vision is our most important sense, seeing that "more than 50 percent of the cortex, the surface of the brain, is devoted to processing visual information," according to David Williams, the William G. Allyn Professor of Medical Optics (University of Rochester, 2012). But what exactly does vision entail, and why is it so important? To answer that question, one need only consider the concept of visual perception.

Over the past several decades, researchers around the world have investigated human perception with great vigor and have fortunately answered many of the previously unresolved questions associated with it. These researchers have relied on two methods in particular: brain ablation and a general neuropsychological approach. By means of these two methodological approaches, we have discovered that perception largely proceeds through two neural pathways, known as the ventral and dorsal streams, which are responsible for determining what an object is and where it is/how to act on it, respectively (Goldstein, 2014). Nevertheless, because of the scientific nature of the physiology of perception, there will always be questions left unanswered and discoveries yet to be made. Fortunately, a team of neuroscientists at the Massachusetts Institute of Technology, headed by the University of California, Santa Barbara's Michael Goard, has made yet another momentous discovery in the field of neuropsychology, one that further contributes to our understanding of how perception guides action.

A recent article posted by ScienceDaily illustrates just how significant this discovery was and, more importantly, what it can reveal about how human beings perceive the world around us. The research was specifically geared toward further understanding the neural circuitry responsible for transforming perceptual signals into coordinated motor responses. Before one can appreciate the significance of the findings, however, it is important to explain what we already know about perception and how it relates to this research, especially with regard to the neural circuitry thought to play a pivotal role in using our perceptions to guide how we interact with objects in our environment. Goard explains that "mapping perception to a future action seems simple. We do it all the time when we see a traffic light and use that information to guide our later motor action" (University of California – Santa Barbara, 2016). However, after familiarizing oneself with the complexities involved with perception, as exemplified in lesson three, it becomes clear that perception is anything but simple. Consider, for example, the following sentence:

The quick brown fox jumps over the lazy dog.

In the mere second it took you to read it, photons of light reflected from your computer screen (or piece of paper) entered your eye and were projected as a coherent two-dimensional image onto your retina; the resulting electrical signals were then propagated through the optic nerve to the occipital lobe and on to the respective regions of the brain responsible for understanding what you are looking at. This is only half of the story, and it occurs by means of bottom-up processing. The second half, known as top-down processing, also occurs while perceiving the sentence. This knowledge-based processing allows one to use prior knowledge of the English alphabet (including knowledge of every letter, seeing that all are used in the sentence above), word formation, sentence structure, and pronunciation to make sense of the perception itself. It is also worth noting that, while reading and making sense of the sentence, your brain was using the same two processes to scroll through (or turn) the page. In my case, I was using the two aforementioned processes (i.e., bottom-up and top-down) to locate the keys with which to type the sentence, to apply my prior knowledge of perception in formulating a detailed and understandable interpretation of this complex perceptual process, and thus to compose the text you are reading at this very moment. To deem perception, including the two ways in which we process environmental data, a complex physiological process almost seems like a drastic understatement.
To that end, the interaction between perceiving (i.e., all that goes in to your experience of reading and/or writing a sentence) and taking action towards whatever it is your attention is turned to (e.g., using the muscles of your eyes to navigate across the page, using your fingers to locate and press the keys on a keyboard) has caught the attention of researchers around the world, as the scientific community continuously attempts to answer the question of how our brains accomplish these tasks; a question that was of primary importance in the research headed by Michael Goard and colleagues.

This question was first addressed in the 1980s, after a myriad of theoretical support emerged for a close connection between perceiving an object and using that perception to interact with it appropriately. To investigate problems of this nature (i.e., those involving unobservable and complex neurological mechanisms), researchers frequently employ brain ablation, the process of nullifying the actions of a brain region via surgical removal or chemical injection. By subjecting test subjects (i.e., primates) to an object discrimination task after careful ablation of temporal regions of the brain, researchers found that a specific pathway is responsible for determining the identity of a given object, a task that proved difficult for subjects with ablated temporal lobes. This experiment led to the formal discovery of the what pathway (also referred to as the ventral stream), which extends from the occipital lobe to the temporal lobe (Goldstein, 2014). Similar research revealed another pathway involved in perception, known as the where (or, more appropriately, the how) pathway. Using a landmark discrimination task, the goal of which was to remember an object's location and then to choose that location after a delay period, along with ablation of regions of the parietal lobe, researchers showed that the neural pathway responsible for determining an object's location in space and time extends from the occipital lobe to the parietal lobe; it is known as the dorsal stream (Goldstein, 2014). The existence of these two pathways was given further experimental support by means of a neuropsychological approach.

Similar to test subjects whose neurological functioning was nullified by ablating regions of the brain, individuals with brain damage can also serve as useful test subjects, depending on where the damage occurred. The experimental use of such subjects is characteristic of a neuropsychological approach, and this approach has provided further support for the existence of a perception (ventral) stream and an action (dorsal) stream (Goldstein, 2014). Nevertheless, the notion that these two streams are solely responsible for perception and its associated actions has come under scrutiny, and for good reason. The paradox lies in the fact that, although we are able to use our neurological machinery to understand our neurological machinery, the evidence brought forth thus far does not satisfy a complete and accurate understanding of it. The full quote by Goard regarding perception is as follows: "Mapping perception to a future action seems simple…However, how these associations are mapped across the brain is not well understood" (Goard et al., 2016).

In their research article, published in the journal eLife, Goard and colleagues introduce their work by explaining that sophisticated sensorimotor decisions (e.g., using a traffic signal to guide future driving maneuvers) often require mapping specific sensory features to motor actions at a later time. They also note the possibility that the connection between perception and action may involve neural circuits beyond the ventral and dorsal streams, a logical conclusion when attempting to understand why we do what we do (Goard, Pho, Woodson, & Sur, 2016). The article also describes several unresolved issues in turning a perception into an action, including a lack of clarity regarding the regions responsible for sensorimotor transformation, as well as difficulty determining which region or regions maintain task-relevant information between stimulus reception and the evoked response. The article further elucidates the fact that "although measurement of neural activity is an important first step toward defining task-related regions, the presence of neural activity does not prove that a given region plays a causal role in mediating behavior" (Goard et al., 2016). To that end, the researchers were curious whether the differences in sustained neurological activity observed in previous studies, implicated in the parietal and prefrontal cortical regions during perceptual tasks, are consistent with previous theoretical models, or whether these differences could be attributed to another aspect of the task (Goard et al., 2016).
To help clarify these unresolved issues, the researchers utilized a more comprehensive and technologically advanced approach: they theorized that clarification could be accomplished by measuring and perturbing activity across sensory, parietal association, and distributed motor cortical regions during a visual delayed-response task. Instead of relying only on ablation or only on a neuropsychological approach to yield information about brain activity involved in perception, the researchers were able to use more advanced techniques based on recent optical inactivation approaches. These approaches were deemed the most apt for the experiment largely because of recent revelations that the effect of cortical inactivation on behavior depends on timing and on whether the inactivation is bilateral or unilateral (Goard et al., 2016). Using mice as their test subjects, the researchers utilized an optogenetic approach, which involved inactivating bilateral cortical regions exhibiting task-related responses. In doing so, they were able to determine the necessity of sensory, association, and frontal motor cortical regions during the stimulus, delay, and response epochs of a memory-guided task. Simply put, the significance of the research rests in the conclusive determination that the visual and parietal areas are involved in perceiving the stimulus and transforming it into a motor plan (as explained in lesson 3), but only the frontal motor cortex is necessary for maintaining the motor plan over the delay period.

This research is particularly relevant to our understanding of human perception because it reveals a little more about top-down processing. By using more advanced techniques (i.e., optogenetics), in which nerve cells are manipulated with photons of light so that neurons can be inactivated in a temporally precise manner, the researchers obtained a much more precise and accurate portrayal of what goes on in the brain when perceptual information (e.g., seeing a traffic light) is used to guide later motor action (e.g., hitting the brakes). In addition to the roles of the parietal and temporal lobes in our perceptually based decisions, this study provides evidence that perception is even more complex than originally thought.

References

Goard, M. J., Pho, G. N., Woodson, J., & Sur, M. (2016). Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions. eLife, 5, e13764. doi:10.7554/eLife.13764

Goldstein, E. B. (2014). Cognitive psychology: Connecting mind, research, and everyday experience (4th ed.) [VitalSource Bookshelf version]. Retrieved from https://bookshelf.vitalsource.com/books/9781305176997

Hagen, S. (2012). The mind's eye. Rochester Review, 74(4). Retrieved from http://www.rochester.edu/pr/Review/V74N4/0402_brainscience.html

University of California – Santa Barbara. (2016). Neuroscience: Linking perception to action. ScienceDaily. Retrieved from www.sciencedaily.com/releases/2016/09/160908131001.htm