Author Archives: Emma Schadler

Could our society survive a zombie apocalypse?

Although I know we're going to talk about this in class soon enough, I find the topic of a zombie apocalypse being a conceivable event fascinating. We have all seen, or at least heard of, the hit TV show The Walking Dead, and I'm sure many of us have watched other shows and movies about zombies. Zombies are entirely fictional, an invention dating back nearly two hundred years. So what makes the idea so intriguing that people take the threat seriously, or at least consider it a possibility that may occur in the future? I think people find it relevant because it comes from the genre of dystopian literature. Just as people are paranoid about Big Brother from 1984 becoming a reality, they see a similar relevance in a zombie outbreak. Both are set in our world, cast with normal people, and include a single factor that changes everything. Whether or not a zombie apocalypse will occur won't be known until the moment Patient Zero emerges, but until then it is left for us to speculate on whether we could survive if it does. It is my belief that thirty, maybe even twenty years ago, we would not have been prepared in the least. However, in the present I think we would have a chance.

Why would we have a chance to survive a zombie apocalypse now, compared to two decades ago? The media. Zombie culture has grown exponentially in recent years as an overwhelming number of zombie-themed movies, television shows, video games, books, comics, and more have been released. Most of these stories take place in the present or the near future, allowing people to feel connected to them, following the paths of the characters and comparing the characters' actions to what they imagine their own would be if a similar event happened to them. Zombies themselves have also adapted over the years through their media sources; the movie Night of the Living Dead, released in 1968, was the first time zombies were depicted as undead cannibals. Before the 21st century, most zombies were depicted as eerie and slow-moving, incapable of anything faster than an awkward shuffle. Now, to make movies and shows more thrilling, zombies are given the ability to sprint after their prey and are often capable of feats that require great strength.

The changes to the original design of the creatures made them even more frightening to consumers. And what do people do when they're frightened of something? When a child is afraid of monsters hiding in the dark, their parents tell them that the monsters are afraid of the light and won't bother the child while their nightlight is on. So if society is the child and zombies are the monsters, who is the parent, and what is the nightlight?

Society asked, and in 2011 the CDC answered. The Centers for Disease Control and Prevention posted a Zombie Apocalypse survival guide, including a list of supplies (water bottles, food, medication, sanitation items, etc.) for an emergency kit, instructions for deciding on an emergency plan with your family, and information about joining their Zombie Task Force. Also in 2011, an article was published in the Journal of Clinical Nursing with instructions for nurses on how to care for patients with the zombie virus (referred to as Solanum), listing details of the different stages of the disease (fever, coma, death, re-animation, etc.) and discussing the possibility of euthanasia for patients already infected with the virus. While the CDC post has some levity to it, the nursing journal article takes the threat very seriously, effectively going through all types of symptoms, the reasoning behind the illness, and even how to handle the psychological problems of a patient's loved ones after the person has re-animated or been euthanized.

In 2009 a rather popular mathematical analysis was published, analyzing the possible outcomes of a zombie virus outbreak. In the report's conclusion, the authors state that unless immediate action is taken to prevent the spread of the disease, such as treatment or complete eradication of the virus carriers or the virus source, a zombie virus has the capability to destroy civilization, leading to a world not far from media depictions. The report is referenced in several of the other sources I looked at, and seems to be acknowledged as the standard basis for predicting the outcome of a possible zombie apocalypse.
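For the curious, the analysis builds its doomsday conclusion out of a small set of coupled equations tracking susceptible humans (S), zombies (Z), and removed bodies that can rise again (R). Below is a minimal simulation sketch of that kind of "SZR" compartment model; the structure follows the basic idea, but every parameter value is a made-up number of mine for illustration, not a figure from the paper.

# Minimal Euler-method sketch of an SZR ("susceptible-zombie-removed")
# outbreak model in the spirit of the 2009 analysis. Parameter values
# are illustrative guesses, not the paper's numbers.

def simulate_szr(S=500.0, Z=1.0, R=0.0, days=30.0, dt=0.01,
                 beta=0.0095,    # transmission: zombie bites turn S into Z
                 alpha=0.005,    # defeat: humans permanently destroy zombies
                 zeta=0.02,      # resurrection: removed bodies rise as zombies
                 delta=0.0001):  # ordinary (non-zombie) background deaths
    for _ in range(int(days / dt)):
        dS = -beta * S * Z - delta * S
        dZ = beta * S * Z + zeta * R - alpha * S * Z
        dR = delta * S + alpha * S * Z - zeta * R
        S = max(S + dS * dt, 0.0)
        Z = max(Z + dZ * dt, 0.0)
        R = max(R + dR * dt, 0.0)
    return S, Z, R

S, Z, R = simulate_szr()
print(f"Day 30: {S:.0f} humans, {Z:.0f} zombies, {R:.0f} removed")

With these assumed values the zombies overwhelm the human population within the month, which echoes the paper's conclusion that only quick, aggressive intervention changes the outcome.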

As of yet, only hoaxes of zombie viruses have been reported, and the closest nature has come to something similar is in cases such as this parasite, which infests the bodies of honeybees and causes the bees to act unnaturally before their death. There has yet to be, to my knowledge, a virus in humans, animals, insects, or any other kind of life that reanimates the victim after the initial death.

In conclusion, with the fear of zombies from recent media inciting a desire to be prepared in the potential event of a zombie apocalypse, I think people would be able to react quickly enough to prevent a drastic spread of the virus. The mathematical analysis claimed that a cure or total eradication is necessary to completely stop the disease, which I believe our society may be capable of, given that the CDC has posted an official guide for surviving an outbreak and people already have general knowledge of zombies from the media. It seems silly to write all of this in a serious manner, but the fact that people besides myself debate it earnestly as well, such as the authors of the nursing journal article, only proves my point: while such an outbreak seems implausible, we as a society would be prepared nonetheless.

Sources:

Wiki Zombie Films

CDC Blog Post

Zombie Task Force

Nursing Journal Article

Mathematical Analysis

Honeybee Parasite

Pictures:

Night of the Living Dead

Zombie Girl

Latent Infection Graph

The Walking Dead

Does the stigma around mental illness lead to under-diagnosis and further health risks?

Here's a disclaimer to start: I don't have much personal experience with people with mental illnesses or disabilities, so this post will focus more on what I have heard second-hand from friends and family, on studies of the topic, and on how mental illnesses are portrayed in the media. The reason I chose the topic of mental illness is actually how little I ever hear it spoken about or discussed, even though around 44 million adults – nearly 14% of the U.S. population – experience a mental illness in a given year. As many as half of that total may go undiagnosed or receive limited or improper treatment. Despite this, in my opinion, there is a severe lack of common knowledge about mental illnesses, and there remain several negative stigmas that can lead people to ignore their illness and potentially harm their health.

A stigma is a belief associated with a person or quality that carries a negative connotation. There are many stigmas surrounding mental illnesses, though the most common beliefs are that people with mental illnesses are violent, that being diagnosed makes a person appear weak or undesirable to significant others and employers, and that people with mental illnesses have childlike tendencies. But how can stigmas affect a person's mental wellbeing? Based on an article in Schizophrenia Bulletin, while stigmas are beliefs developed by a public that does not fully understand the group the stigma targets, they are also shaped by the reactions of that group. When those the stigma crudely represents hear of the misconceptions, they often react in ways they normally would not. For example, if someone with a mental illness were to hear a stigma that people with their disorder are violent and dangerous, a natural reaction would be to become angry at the person they hear it from. In most cases, however, this reaction does nothing to stave off the stigma, but rather unfairly reinforces it. The most common emotion upon hearing a stigma about something one suffers from is shame, as people tend to be embarrassed when they find out what others think of them but have no way or motivation to disprove it. Stigmas only serve to further separate the general public from the minority group, leading to a sense of alienation and loss of social status, and thus to more stigmas, such as the claim that people with mental illnesses are socially inept or unable to function properly.

The impact these stigmas have on the public and on those with mental illnesses is immense; some members of the public will refuse to help those with an illness, others avoid them entirely, and friends or family can push their loved one into an unwanted treatment that may damage the relationships between them. Not only is there a public stigma on mental illness, but because of the beliefs of the people around them, sufferers may begin to internalize a self-stigma. They may start believing that they are less valued than a person without a mental illness, lowering their self-confidence and self-esteem. Depressive disorders especially heighten these feelings, as the person may already have seen themselves in a negative light to begin with.

While stigma is mostly conceptual rather than scientifically measurable – a matter of soft science rather than hard science – there are some areas of science that bear on this argument. For example, a 1998 study in the Journal of Affective Disorders investigated whether bipolar disorder was under-diagnosed and anti-depressants over-utilized as a treatment. Studying hospital records, the report found that most patients had not been correctly diagnosed – or diagnosed at all – in the past, and most were given anti-depressants as treatment rather than mood-stabilizers. For patients who had been incorrectly diagnosed with disorders other than bipolar disorder, an average of 7.5 years passed until they were correctly diagnosed and treated. This study helps show that the modern understanding of mental illnesses, even when an attempt is made at diagnosis and treatment, is severely lacking. Although the study does not explicitly state that the health of under-diagnosed people worsened, it can be assumed that misdiagnosis and the wrong treatment method were not beneficial.

Some people might argue that it is the 21st century and there is always room for improvement, which I readily agree with. While many representations of mental illness in the media are unsavory or inaccurate, there have been some recent positive portrayals that shine a light on what people with these disorders go through, while also showing that they can live normal lives like everyone else, just with different hurdles to jump over. The internet and social media allow people to share their experiences, letting people see the extent of mental illness and find a community for the first time. However, improvement, especially in this area, is not always natural; it needs a catalyst or starting place, which in this case I believe is middle and high schools. Curriculum on disorders should be added to health classes, and people of all ages should be encouraged to integrate mental checkups with annual physicals. It is my belief that stigma should be replaced with the truth, so that people can begin receiving treatment without feeling ashamed or wanting to hide their illness, as that can only lead to further deterioration of their health.

Sources:

US Stats: http://www.nami.org/Learn-More/Mental-Health-By-the-Numbers

World Stats: http://www.iccd.org/keyfacts.html#young

About Stigmas: http://www.waisman.wisc.edu/EVENTS/ethics/Corrigan_Stigma_WP_2002.pdf

Measuring Stigma: http://schizophreniabulletin.oxfordjournals.org/content/30/3/511.full.pdf+html

Bipolar Study: http://www.sciencedirect.com/science/article/pii/S0165032798000767

Negative TV: http://www.vulture.com/2016/09/what-tv-gets-wrong-about-mental-illness.html

Positive TV: http://www.much.com/5-tv-shows-that-got-mental-health-right/

Pictures:

Disorders: https://www.exposingtruth.com/wp-content/uploads/2016/06/mentalhealth.jpg

Silhouette: http://assets.rappler.com/612F469A6EA84F6BAE882D2B94A4B421/img/B4B996E452134695B7A2636932B88EDC/depressed-man-20130323-rappler_8fc9bc92b2a347b4a03c5234cda1f9fb.jpg

Antidepressants: http://www.newhealthadvisor.com/images/1HT18076/antidepressants.jpg

Why do people believe in the paranormal?

Earlier this semester in class, Andrew showed us the 'ghost in the cupboard' video and then went on to explain how the paranormal cannot be used as an excuse or explanation in scientific research or experiments. However, even as our world continues to stretch the known limits of science and create new, amazing innovations, there are still many people whose belief goes beyond science, to the supernatural. According to the most recent Gallup poll on the subject, from 2005, three quarters of Americans believe in at least one aspect of the paranormal, with the majority claiming to believe in ESP (a sort of sixth sense that can involve interacting with ghosts, clairvoyance, or telepathy) or that houses can be haunted by dead people. I can actually count myself among some of those categories, but since this is a science class and I only have my own unsupported anecdotes, I thought that instead of talking about the possibility of the supernatural being fact rather than fiction, I would ask why so many people choose to believe in what has not been, and may never be, proven or disproven.

It is my opinion that as the world becomes more modernized and digitally dominated, and as globalization allows all the problems in the world to be known at once, people fall back on routine and belief to stabilize themselves, whether they are old or young. So the factors that incline a person toward a paranormal explanation rather than a scientific one may be fear of the future or the unknown, religion, or feeling detached from or overwhelmed by the natural world. To find out whether any of my theories are shared, I decided to first inspect the elements of a personality that might suggest a tendency to believe in the supernatural. An article published in The Journal of Parapsychology in 2005 by J.E. Kennedy discusses how someone's personality can drive their belief or skepticism in the paranormal. Kennedy states that pragmatic, analytic personalities tend to be more skeptical than personalities that desire stimulus to feel in control of their lives. The article also notes that while there have been scientific studies of the supernatural, most scientists have more analytic personalities and are thus innately cynical of their own experiments, possibly introducing an experimenter's bias that affects the outcomes. Because Kennedy claims that the personalities that usually believe in the paranormal crave control, or crave extraordinary experiences for the euphoria they bring, I believe people with those personalities are more likely to seek out the paranormal in natural occurrences. For example, the ghost hunters on TV seek out venues where there have been prior claims of supernatural activity; if they happen to catch a random knock or other unexplainable sound on their recorders, their first conclusion will be that it was caused by something paranormal. Unless they find a direct, physical source for the noise, they will continue to believe that what they found is evidence of the supernatural. Skeptics, however, will claim that there were probably dozens of other explanations for the sound, but that the paranormal investigators either did not look hard enough, fell back on a kind of experimenter's bias, or were too inexperienced to find the correct source.

Another influence that leads people to believe in, or at least be more open to, the supernatural is its depiction in the media. Claims of ghost sightings have been recorded since the first century and have only grown more common since. Ghosts appeared in media rather early as well, in the works of the second-century Greek writer Lucian, and more 'recently' in Shakespearean plays, notably Hamlet, Julius Caesar, and Macbeth. With entire genres dedicated to the paranormal, it is likely most Americans have seen or read at least one piece of supernatural media. The genre has been exceptionally popular in recent decades, with the 1970s-1980s releases of movies like The Exorcist and Poltergeist, and more recently The Blair Witch Project and the Paranormal Activity series. That isn't even mentioning the sharp increase in extraterrestrial sightings in media, which I'll leave out of this post since it's really a topic of its own. But viewers know that movies are fictional, or at least skewed versions of real accounts. Television is a different story. In an article by Glenn G. Sparks, who has written several other articles on the effects of mass media on the viewing public, Sparks describes an experiment on students investigating the influence of a disclaimer tag at the beginning of a paranormal television show. Sparks had four groups of students watch the same show with different disclaimers, ranging from no tag at all to a claim that the show's story violates the laws of nature. There was also a control group tasked with simply watching a sitcom without any paranormal elements. The results showed that the groups that did not receive the disclaimers were more likely to support the paranormal claims in the episode, whereas those who did had more doubts. The downside to this interesting study is that it was performed nearly two decades ago; supernatural media has since become so commonplace that shows likely either forgo a disclaimer or falsely claim the story is based on true events, or people are so used to the disclaimer that they no longer pay it any attention.

The third and final article I found, written in the British Journal of Psychology by Susan Blackmore and Tom Troscianko, discusses the likelihood that accounts of paranormal activity are simply misinterpretations of events. This explanation for the number of reported supernatural occurrences is very plausible, since people usually fall back on the easiest or safest explanation available when they are afraid to learn the real answer. An example of this might be religion: there is no concrete proof of the existence of God, but millions upon millions of people continue to believe. As Andrew mentioned in class, some people will simply deny anything that goes against their personal belief, even when supported evidence points to the contrary. Another example is the cupboard ghost. The man in the video could not discern a natural cause for his cupboard door opening on its own, so his mind provided him with ghosts as an easy answer. Perhaps the door handle mechanism was not installed correctly, or his house had a rodent infestation, but the answer the man came up with was: the supernatural is behind it.

It is difficult to find more recent polls or statistics on the paranormal, because most surveys cover either a very specific or an extremely general range of paranormal beliefs, so the overall results tend to vary. There also aren't many studies that ask people why they believe in the paranormal, just whether or not they do. From the studies I have found, however, I believe most people take to the paranormal easily because they want to explore something outside the realm of the scientifically explained, because the media makes it an intriguing and sometimes frightening, and thus thrilling, subject, and because people fear the repercussions of the real answer.

Image sources:

Gallup Poll

Ghost Hunters

Caesar’s Ghost

Haunted Mansion Ghosts

Sources:

http://search.proquest.com/docview/195003579?pq-origsite=gscholar

http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8295.1985.tb01969.x/epdf

http://www.ravenndragon.net/montgomery/sparks.html

http://www.gallup.com/poll/16915/Three-Four-Americans-Believe-Paranormal.aspx?g_source=paranormal&g_medium=search&g_campaign=tiles

http://www.history.com/topics/halloween/historical-ghost-stories

Is there actually an age barrier for learning languages?

I have been learning Chinese, irregularly, for roughly the past six years, in high school and now college. I would not call myself fluent in the least, even though I am at a level where I probably know at least half of the standard vocabulary. I began learning when I was 12 and continued taking the language for about eight months out of each year for the rest of my time in grade school. So why am I barely fluent in Chinese after so long learning it? Is there actually some sort of metaphysical barrier based on age for learning a second language? Or is it just a slower process, due to the time spent learning, the effort put in, or the difficulty of the language?

Before I get into second language learning, I thought I'd look into what it takes to learn a language in the first place. In 1967, linguist Eric Heinz Lenneberg proposed his 'critical period hypothesis,' theorizing that children must acquire their first language before puberty in order to develop proper understanding and usage of the language. At the time, Lenneberg had only indirect evidence to support his hypothesis (the general knowledge that humans tend to begin learning their first language as toddlers or young children), but from that evidence he was able to suggest a mechanism for his theory: he believed that after maturing, an adult's brain loses the plasticity it utilized as an adolescent to learn exceptional amounts of information in days or hours at a time. Thus, if language is not learned within that period, the brain loses the physical ability to adapt to acquiring language.

An anecdotal example of Lenneberg's hypothesis and mechanism at work, which I learned of in my Psych 100 class, became available to scientists when they discovered Genie the Feral Child in 1970. Genie had been locked away by her abusive father from the age of 20 months, during which time she was malnourished, never shown how to use the bathroom, and not allowed to communicate with anyone besides her father, who would occasionally use negative words like 'no' and 'stop.' Aged 13 at her rescue, Genie could barely communicate verbally at all, but after six months of hospitalization she was able to communicate at around a four or five-year level – that is, as much as a four or five-year-old can speak. While attempting to develop language abilities, Genie showed some progress with recognizing words, but had more difficulty comprehending them, such as the difference between 'or' and 'and.' Because of this study, linguists now believe that acquiring language skills early in life is crucial for being able to form grammatically correct sentences. Unfortunately, funding for the study of Genie's language development was dropped, and not much is known after that.

Critical Period Theory Diagram


Now we know that while it is not impossible to learn language after puberty, it is extremely difficult to practice grammar or comprehend certain words and phrases, at least in Genie's case. To be more relatable to those of us who have already learned our first language, though, let's look into age barriers for learning second languages. Lenneberg's critical period hypothesis can be expanded to encompass second language learning: while a human mind has a large capacity to learn and comprehend languages, that ability is negatively correlated with age. As age increases, the ability to grasp the workings of a second language decreases. Most studies researching the difference between child and adult learning conclude that there is a definite advantage for children over adults, especially in acquiring the correct tone and accent of the second language.

This rings true with my own anecdotal experience in learning languages. I had spoken only English for over half of my life, and began studying Chinese after I went through puberty. However, I also find that when I study other languages, like Korean and Japanese, the words, sounds, and grammar come to me more easily than Chinese.

While a majority of studies, like those that use Lenneberg's hypothesis as their base, claim there is a difference between child and adult second language learning, I did find one study that attempts to refute or rule out the conclusions of other studies. Partway into the article, the authors reference research comparing Asian immigrants to the United States who arrived at ages ranging from 3 to 26. The original study found a consistent decline in the ability to learn as the subjects' age increased. However, the critiquing article noticed that in the plotted data, the learning ability of younger subjects typically stayed near the trend line, while older subjects' learning ability showed much greater variance, with some performing worse than the younger subjects and others performing the same or even better.

Based on the evidence I found in these studies, and especially in the last, I believe that learning a second language proficiently can depend on age, but is also heavily influenced by an individual's desire to learn, natural competency with grammar, and the general difficulty of the language being learned. I think that in the right environment, such as a rigorous classroom or travelling in the country itself, anyone can learn a second language if they try hard enough.

In my own case, I believe in the idea that minds are more malleable at a young age, so since I started later I had a disadvantage to begin with. However, I think some people are able to overcome that disadvantage, either through a natural aptitude for language learning or because their ability to learn is enhanced by their desire to learn the language. I personally enjoy learning other languages more than Chinese, which, paired with the fact that it is one of the more difficult languages to learn, could explain why it can take a very long time for people to become fluent.

Learning Chinese Image

Critical Period Theory Diagram

Comparing Languages Image

Sources:

http://www.jstor.org/stable/412222?seq=18#page_scan_tab_contents

http://www.sciencedirect.com/science/article/pii/0010028589900030

http://slr.sagepub.com/content/13/2/116.full.pdf

Is there a link between sports and aggression in males?

I will begin with a forewarning: I do not know much about sports beyond the few I played in grade school, and this blog post is in no way stating that all sports negatively influence all people. However, after living at Penn State for nearly two months, I could not help but notice two quite prominent aspects of our college society: football and reports of sexual assault. When the first "Your Right to Know" report came into my mailbox, I was somewhat shaken, as it arrived within the first week of school. We have now had 17 reports of sexual assault and other violent acts such as robbery and physical assault. Many of the reports were filed as occurring either on a game weekend or in the early morning on weekends in general. Although I would like to compare the frequency of reports during game weeks against other weeks, or determine whether the males who committed the attacks played or watched sports, it is simply too difficult to gather that kind of data at the moment. The number of games is too small to show convincing data for Penn State alone, and none of the assailants are actually identified in the 'Right to Know' reports. Instead, I will do my best to state the findings of other researchers' studies on the topic, and also give my own opinion based on my personal experience and the influence the studies had on me.


First, I want to clearly state what I am looking for: does watching or playing sports directly correlate with an increase in aggression in males? There could, of course, be reverse causation, in which an increase in aggression leads males to take up watching or playing sports in order to relieve some of that pent-up anger. There may also be third variables at play, such as alcohol consumption, gender, and age, among others. Finally, there is always the possibility that any correlation that may occur is due only to chance.

My first avenue of research was a study by a student at NYU Steinhardt on whether men's participation in sports increases violent behavior. The student, Nina Passero, first provides some examples of famous sports players from two different leagues (four from football, one from basketball) who have all been charged with crimes ranging from sexual assault to domestic abuse. She goes on to consider the effect of social norms and ingrained masculinity on the presence of an aggressive mentality in sports. To expand on her research, I looked up how many National Football League (NFL) arrests there have been since 2000; I found a database from USA Today consisting of 833 arrests at the time I accessed it, with about 112 counts of varying assault and battery charges, 99 counts of domestic violence, 8 and 3 charges of animal and child abuse respectively, 5 homicides, and a large number of charges for DUIs and drugs. Unfortunately, I wasn't able to find substantial evidence for the other sports leagues to contrast with football, and thus was unable to rule out the type of sport as a possible third variable influencing aggression. However, the number of arrests (which I interpret as a representation of aggression and deviant behavior) seems like a convincing argument for a correlation between sports and aggressiveness.

Another study I found, based in Romania, focused its preface on the understanding of aggression and its history in sports, and on how sports themselves are designed to be confrontational, whether athletes are playing directly against each other or competing against another athlete's performance. The authors also quoted other studies in support of theirs, which addressed the correlation between sports and aggression, the effect of a coach's wishes in increasing aggression on the field, and how women have a lower aggression threshold while playing sports than men. At the conclusion of their study, which surveyed 106 football players of varying ages, they found the correlation partially supported. Aggression seems to increase throughout adolescence, when boys are malleable to peer pressure and to social norms of masculinity suggesting that violence is a key factor in achieving victory.

In an environment where confrontation is an expectation, it is natural to assume that aggression will be cultivated and even encouraged. While aggression can be beneficial in the world of sports, that cultured behavior can also bleed into daily life through acts such as the ones committed by the NFL athletes arrested over the past sixteen years. Although there may not be hard evidence proving a correlation between sports and aggression, I am convinced there is a strong relationship between the two, based on what I have learned and my experiences here at Penn State. I personally believe there should not be such strong pressure on young children, boys especially, to fit into the mold of social norms and to feel the need to prove they are men through sports. This century's society has the capability to accept all types of people, regardless of belief, appearance, or behavior. The standard of rigid, binary gender roles is antiquated and oppressive.

Picture Sources:

Hockey

Graph on NFL Arrests from Bloomberg

Coach

Sources:

http://steinhardt.nyu.edu/appsych/opus/issues/2015/fall/passero

http://www.usatoday.com/sports/nfl/arrests/

https://www.degruyter.com/downloadpdf/j/ssr.2013.22.issue-1-2/ssr-2013-0003/ssr-2013-0003.xml

Superpowers in the Real World

superheroes

Click image for source.

The superhero trend of our generation began, debatably, in 2008 with the release of Iron Man. Since then, dozens of major motion pictures have been released in the 'superhero' genre, as well as television shows on The CW and Netflix. Of course, the original superhero trend began in the 1930s with Superman – when people were mystified by the fantastical abilities and appearances of the characters – and the genre has managed to keep producing similar media for the past eighty years. Why are we continually entranced by the idea of superhuman beings with supernatural powers? My guess is that because the genre tends toward magical realism, usually based on scientific theories or alterations of real-life practices, people see superheroes as something not completely out of reach. And perhaps some aspects are not that far-fetched, as technology continues to evolve. Already, there are several scientific advances in our world today that are similar to technology and powers possessed by three Marvel superheroes.

‘Iron Man’ suit

talos

Click image for source.

Tony Stark: billionaire, playboy, philanthropist. Once a developer of nuclear weaponry, Stark is more than just a very, very rich man. With a fictional intelligence rivalling that of the real-life Elon Musk, Stark has the technological ability to create a suit of armor flexible enough for moderate movement, generally invincible against bullets and strong energy blasts, and light enough for him to walk comfortably when he isn't flying by way of repulsors. While the exact schematic I described has not been invented, and may well be completely beyond engineering possibility, a similar suit is in the works. Last year, the U.S. military released the details behind plans for a new tactical assault suit for soldiers in combat situations, designed as an exoskeleton around the soldier's body. The exoskeleton has various purposes, such as supporting the weight of the suit along with additional equipment and robotically enhancing the soldier's strength and stamina. The military group behind the project completed a study comparing one group of people wearing the suit while carrying a load of equipment weighing about 60-80 pounds against another group carrying the same load without the suit. The results showed that the first group was 7% more efficient than the second, a positive but rather low boost to performance, with a possibility of confounding variables since the surveyed population was only seven people. Thus the project, named TALOS for Tactical Assault Light Operator Suit, has faced more failures since its inception than successes. Just as one would struggle to explain the science behind the fictional Iron Man suit, achieving the complex goals of the TALOS project is extremely difficult. With enough funding and effort, there may be a breakthrough in the future, but the only Iron Man we'll be seeing for a while will be on the big screen. (Cornwall 2015)

‘Daredevil’ ability

daredevil

Click image for source.

Matt Murdock is a blind lawyer in Hell's Kitchen, New York by day and the masked vigilante Daredevil by night. His power is the ability to use his remaining senses to the extreme, so much so that he can sense his environment and distant sounds better than a dolphin or bat using echolocation. In relation to Daredevil's powers, less than two months ago researchers claimed to have identified the area of the brain that makes people sensitive to room size through how different sounds reverberate in a closed space. In the study, conducted by MIT researchers, three different sounds were recorded and then synthesized within three different virtual rooms. The way the sounds reverberated in each room helped the subjects of the experiment determine the room's size, even though they couldn't see it. The results of the study were quite satisfying, with a high success rate between 75 and 100 percent, providing convincing evidence that people can gauge the size of a room from reverberations alone. So Daredevil's ability isn't as unbelievable as previously thought: even everyday folk possess a similar ability, although to a lesser extent. Perhaps later research might add more to the scope or possible enhancement of our sense of perception. (Price 2016)
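The study itself doesn't give a formula, but a classic acoustics result, Sabine's reverberation equation, offers a feel for why echoes carry information about room size: reverberation time grows with room volume and shrinks with sound absorption. The sketch below is purely illustrative and is not from the MIT study; the absorption value is an assumed, typical-ish number.

# Illustrative only (not from the MIT study): Sabine's formula relates
# reverberation time RT60 to room volume and total sound absorption,
# which is one physical reason echoes encode room size.

def rt60_sabine(length_m, width_m, height_m, avg_absorption=0.2):
    """Estimate RT60 in seconds for a box-shaped room via Sabine's formula."""
    volume = length_m * width_m * height_m
    surface = 2 * (length_m * width_m + length_m * height_m + width_m * height_m)
    total_absorption = surface * avg_absorption  # in metric sabins (m^2)
    return 0.161 * volume / total_absorption

for dims in [(3, 4, 2.5), (10, 20, 8)]:  # a small bedroom vs. a hall
    print(dims, f"RT60 ~ {rt60_sabine(*dims):.2f} s")

The bigger room rings noticeably longer, which is exactly the kind of cue the study's subjects seem to have been picking up on.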

‘Spider-Man’ silk

spider-man

Click image for source.

I don't know how many people are Spider-Man fans, but this is probably the most exciting story for me. In the 'Ultimate Spider-Man' comics, unlike the Tobey Maguire movies, Peter Parker learns how to build his own spider web-shooters rather than producing webs himself. However, it turns out that the synthesized webs might actually be less durable than the 'real' webs Maguire's Spider-Man utilizes. Spider silk has long been known to be remarkably durable, but only recently has a biotech firm been able to create a type of spider silk stronger than both steel and Kevlar. The article itself references Spider-Man, this time the second Tobey Maguire movie, in which Spider-Man saves a train full of people by stopping the vehicle completely with his webs. In the past, this event seemed possible only in a superhero film, where physics and science in general are questionable. However, biology professor Randy Lewis claims to have gone over the requirements needed to stop the train using spider silk and concluded that, had such an occurrence been real, the rescue would indeed have been a success. The article explains that spider silk has an extremely high capacity for absorbing kinetic energy without snapping; there is even consideration of using spider silk to create artificial tendons in humans, as it is an organic material neither harmful to nor incompatible with human bodies. While a real-life Spider-Man can only be actualized if radioactive spiders ever endow their victims with super-strength and agility instead of immediate death, it is endlessly interesting that spider silk may well be one of the toughest fabrics on Earth. (Scott 2014)
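Out of curiosity, here is a rough back-of-the-envelope version of that train check. Every input below (train mass, speed, silk toughness) is an assumption of mine for illustration, not a figure from Lewis or the article.

# Back-of-the-envelope sketch of the train rescue; all numbers are
# assumed for illustration, not taken from the article.

train_mass_kg = 200_000          # assume a ~200-tonne subway train
speed_m_s = 80 / 3.6             # assume 80 km/h, converted to m/s
silk_toughness_J_per_m3 = 1.6e8  # dragline silk is often cited near 160 MJ/m^3

kinetic_energy = 0.5 * train_mass_kg * speed_m_s ** 2  # KE = 1/2 m v^2
silk_volume_m3 = kinetic_energy / silk_toughness_J_per_m3

print(f"Kinetic energy to absorb: {kinetic_energy / 1e6:.0f} MJ")
print(f"Silk required (if it absorbs all of it): {silk_volume_m3 * 1000:.0f} liters")

Under those assumptions, stopping the train means soaking up on the order of 50 MJ with a few hundred liters of silk: implausible to source from one superhero's wrists, but not physically absurd for the material itself, which is roughly the point Lewis was making.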

For more real-world science regarding superheroes, watch this video about the Avengers!

Why Dogs Are the Real MVP

BlueMerle

Click image for source.

Approximately 78 million dogs are owned as pets in the United States – in about 54% of all American homes – according to the American Pet Products Association (APPA), so I think this post is something many of us can relate to. Why do we keep dogs around, though? They can cost upwards of $1,000 on the initial purchase if they aren't rescues, they often get sick as puppies and require expensive treatment, and the continual upkeep of food, grooming, and miscellaneous supplies can add up to an average annual expense of around $1,500, give or take a few hundred. So what makes having a dog worth all of that extra spending? The simple answer, true nearly 100% of the time, is that we love them, so the money we spend on them doesn't matter. They're worth it. I personally share this belief and would give you the same answer if you asked why my family chose to spend thousands of dollars treating my dog's cancer instead of putting him down. But rather than leave it at that, I'd also like to give a more in-depth explanation of why human society would be a much more depressing place without dogs by our side. (APPA 2015)

Canines Throughout History

Ancient Dog Statue

Click image for source.

Dogs have been companions to humans longer than any other animal; they became the first animal humans fed purposefully, even before livestock, around 13,000 BC in Central Asia. The ancient breeds would scavenge around human campsites looking for scraps. At first they were shooed away, before humans realized that these animals helped reduce the far less welcome population of rats. Over time, people realized that dogs were actually multi-talented animals, able to bark in warning when predators or unfriendly humans were near, track down small game for hunting, and eventually even help transport people by sled. The relationship between human and canine continued to grow as society became civilized, and the capabilities of dogs are still being discovered to this day. (Carr 2016)

The perseverance, dedication, and loyalty of dogs can be seen through several examples in history. Since the origin of their domestication, dogs have played surprisingly significant roles in the development of human culture. Alexander the Great's dog, Peritas, saved him from being trampled by an elephant so he could go on to begin the journey toward Western civilization. A nameless Newfoundland saved Napoleon Bonaparte from drowning, so that he lived to face his downfall at Waterloo later that year. Oddly enough, even Robert the Bruce's dog, Donnchadh, who saved his master's life in 1306, was instrumental on the road to the American Revolution, as the actions of Robert the Bruce's direct descendant would create the dissonance between the American colonists and England that incited the revolution. Without these dogs, two of whom protected their masters and one of whom saved a stranger, history as we know it today would be completely different. (Bougerol 2007)

Continued Relevance Today
As long-standing as the relationship between humans and dogs has been, the future is just as promising and extensive, if not more so. As science paves the way for new advances in every aspect of our world, it isn't surprising to find that dogs will play an involved role in several areas, such as the "test tube" puppies that could help rekindle life for endangered species, dogs outfitted for military espionage, and drug trials on dogs that could extend their lifespans and, potentially, human lifespans as well. Although there are many studies I could talk about, I'd like to focus on one in particular that I feel is extremely beneficial to our society: therapy dogs in nursing homes, hospitals, and schools.

The first dog connected to therapy was Jofi, Sigmund Freud's Chow, whom he would bring to therapy sessions with his research subjects because he believed Jofi helped create a calmer environment for them. However, the person to whom the practice of using therapy dogs is attributed is Elaine Smith. In 1976 she created a program in which she trained dogs to socialize with patients in hospitals and other institutions. Smith's program, Therapy Dogs International (TDI), performed a study from 1996 to 1998 on the effect of therapy dogs on staff, residents, and patients in facilities that had the TDI program. They received 200 responses to their international survey, which involved various questions about the program's effectiveness, the moods of the participants before and after, and the opinions of the staff. One question asked about the benefits of the program: ninety-two percent of responses recorded positive changes in mood as one of the benefits, followed closely by 86.5% reporting an increase in socialization and 86% reporting an increase in verbalization. Even when staff were questioned about the effect of the therapy dog program on themselves, the responses were all positive, with the most common being that the dogs helped raise the staff's morale during the visit. When asked if they would recommend the program to other facilities, the response was overwhelmingly affirmative, with most reasons for the recommendation being the increase in patient socialization and the elevated staff mood. (TDI 1998)

Question7

Click image and go to pg. 13 for this chart.

In the future, I hope more grade schools consider a therapy dog program for their students, as young children are more vulnerable to negative emotions and issues with socialization. As a college student, I know that having the chance to play with a dog before taking an exam would relieve my stress immensely. Even being around untrained dogs can feel therapeutic, which is one of the many reasons people love their pets and consider them family. For people interested in helping dogs without homes, consider visiting the ASPCA website to donate or find more information, or check out Penn State's very own therapy dog training group, Roar for More.

Divorce in America and How Children Are Affected

The Facts

family

Click image for source.

Remember that statistic going around a while ago? The one that said half of American marriages end in divorce? Well, luckily that claim is both unfounded and, as of two years ago, proven incorrect. Marriages seem to be healthier than the public assumes: the divorce rate peaked in the 1970s and 80s and has been steadily declining since. According to the New York Times, assuming the current decline remains steady, the divorce rate will actually be closer to one in three marriages than one in two. (Miller 2014)

This is all great news, considering that many couples will likely stay together, but what about the couples that do divorce? Two families out of three might stay married 'til death do them part, but that third family probably won't get past those difficult early years of marriage. Maybe they fight; maybe they just lose their initial passion for each other or fall in love with other people. Eventually, they decide to divorce. They go through the paperwork: who keeps the house, the dog, the car. Who keeps the kids. In America, 1.5 million children deal with the fallout of divorce every year (Arkowitz 2013). Although every divorce is different, there tend to be similar areas of children's lives that are affected by the split.

The Studies

Professor Paul Amato, a sociologist at Penn State University, described divorce as leading to conflicts in the relationships between those involved, a lack of both monetary and emotional support, and various other detrimental effects on everyday home life. Amato outlined this in his 2000 study, 'The Consequences of Divorce for Adults and Children.' He was credited in a 2012 study by Patrick F. Fagan and Aaron Churchill of the Marriage and Religion Research Institute, in which they expanded his summation into five categories of life that divorce detrimentally impacts. As is evident in these five categories, divorce can affect children physically, emotionally, spiritually, socially, and mentally. With religion (1), they noted that children had lower attendance rates at religious gatherings after their parents divorced. They claimed that divorce can lower children's ability to learn and comprehend in school (2), that the financial instability of single parents creates a shaky household (3), and that children can become more prone to committing crimes and getting involved in illicit activities (4). Finally, children's well-being can be compromised by the event, leading to negative expressions of emotion or, in extreme cases, even physical self-harm (5). (Churchill, Fagan 2012)

Effects

Click image for source.

However, the positive outlook on divorce is that the majority of children suffer only short-term effects after the initial announcement, as seen in the image. Usually these symptoms occur because of the abruptness of the parents' decision, if conflict was not obvious, and they can persist until the separated family system becomes stable again, typically around two years after the divorce. In contrast, in families where conflict was vocal or physical, children were shown to be more adaptable to the change and may even have seen it as a relief. (Arkowitz 2013)

My Experience

To demonstrate how not all children are affected by divorce in the same way, I’ll provide myself as an example. When I was five and my sister seven, our parents ended their relationship and divorced amicably. For the next thirteen years, we went between houses weekly; our dad kept the house, since he owned it and hasn’t moved out since, whereas our mom, who had low income, lived in various apartments and rented homes until she remarried.

Divorce

Click image for source.

My sister and I don't fit into many of the categories I mentioned earlier. The only one that really applies to us is the marketplace, because our mom and dad had very different levels of income but tried to keep support equal between them. What none of the articles or studies I read mentioned explicitly, however, was the stress of being the middleman: having to tell one parent what the other said because they refuse to contact each other directly if they can help it, or having to decide which one to ask to pay for something you want or need. In my case, not only was there stress from the initial divorce, but also the strain of always having to be the mediator between our parents afterward.

For anyone interested in recent news about divorce, check out this article about a law in Missouri that establishes equal child custody time between parents.

I love science — fiction.

Hello SC200! My name is Emma Schadler; I come from a small town in Pennsylvania called Coopersburg, about three hours from State College, and I'm a freshman in the College of Communications. I took this class because my advisor suggested it and told me Andrew only counts your two best test scores! Now that I've actually seen what this class is about, I'm excited to see science from a different perspective than the one I've been taught in school. I know a bit about science in general, and have even enjoyed my biology, chemistry, and forensic science courses – but you couldn't force me to retake my junior year physics class for anything.

Also, I'm a huge Harry Potter fan, and now every time I read the books or watch the movies all I can think about is how the wizards must have learned to create black holes, or some sort of negative space, or rifts between space and time through magic in order to Vanish items, hide the entire Wizarding World, Apparate, etc.

Philosoraptor

I am much more inclined toward the arts than toward science. That isn't to say I can't enjoy or be proficient at the sciences, just that I can't see myself making a living out of something I'm not completely enthusiastic about. Writing, reading, editing, technology, and languages: those are areas I can be endlessly passionate about, whereas science is something I might find momentarily intriguing or thought-provoking, but not a long-lasting interest. Anyway, in this class I'm excited to learn and hypothesize about SPACE! One of my favorite topics in science is space, because there could be so many things out there we don't know of and may never know about. It opens up so much opportunity for literature, film, and, of course, scientific discoveries!