Author Archives: Samantha Elizabeth Schmitt

8 Hours or 7 Hours of Sleep, What’s the Difference?

We have all heard that 8 hours is the “recommended” amount for a good night’s sleep, but that’s hard to come by in college. So I wondered, is that recommendation accurate? Is there really a difference between getting 7 hours of sleep and 8? Apparently there is, because new research supports the idea that slightly less than 8 hours of sleep may actually be optimal.

Who is “in charge” of telling us how much sleep we really need? The National Sleep Foundation is obviously a big, and respected, group when it comes to how much sleep people need. Recently, the National Sleep Foundation released the results of a world-class two-year sleep study, an update to its guidelines on how much sleep each age group really needs. The panel consisted of 18 leading scientists and researchers, as well as 6 sleep specialists. This panel reviewed over 300 current scientific publications (including meta-analyses) and voted on the appropriate amount of sleep required. Basically, this study was the big kahuna of sleep studies. The National Sleep Foundation does qualify its findings, saying that an exact amount of sleep for each age group cannot be pinpointed, but there are recommended windows. They also point out that it is important to pay attention to the individual: health issues, caffeine dependence, constant tiredness, and other factors could change the recommended sleep for a specific person. The panel released the results in the chart above.

The panel revised the sleep ranges for all six children and teen age groups. They also included a new category (younger adults 18-25). See this website for the exact ranges that changed.

While this may be a credible source, it is only one source. The National Heart, Lung, and Blood Institute also published its own findings on how much sleep the average person should get, and its recommendations were very consistent with the National Sleep Foundation’s. With two very credible sources recommending very similar amounts of sleep, we can treat these windows as a good approximation of how much time each age group should spend sleeping.

The range for people aged 18-64 is 7-9 hours of sleep each night. While 7 is included in the range, should it be the target? Several sleep studies have found that seven hours, not eight, is the optimal sleep time.


There might be other benefits to sleeping 7 hours and not more. Shawn Youngstedt, a professor in the College of Nursing and Health Innovation at ASU Phoenix, claims that the lowest mortality and morbidity occurs with 7 hours of sleep each night. He also claims that 8 or more hours has even been shown to be hazardous. While he is a professor of health, I couldn’t find the evidence behind his claim, just the claim itself. It could be considered somewhat credible because health is his profession, but without his data and experimental design (if he even ran a study) it can’t be treated as credible. To check whether Youngstedt’s claim had any validity, I searched for other studies.

Daniel F. Kripke, a professor of psychiatry at UCSD, spent 6 years tracking 1.1 million people participating in a large cancer study. These 1.1 million men and women ranged in age from 30 to 102 years old. This is important to note because the study covered a very wide age range, but did not include any children’s categories or even the young adult category (from the National Sleep Foundation). This 2002 observational study controlled for 32 health factors, which matters because it keeps other health factors from skewing the results. The study reported that people who slept 6.5 to 7.4 hours had a lower mortality rate than those with shorter or longer sleep. To conclude: this study found a strong correlation between sleeping 6.5 to 7.4 hours and the lowest mortality rate. (Sidenote: this is an amazingly detailed study, and I would really recommend checking out the site attached because there is so much information and it is all super interesting.)

So does sleeping less than the “golden” 8 hours actually help us perform better on cognitive testing and overall mental performance? A 2013 study tested participants through the cognitive-training website Lumosity. Researchers obtained survey data from Lumosity users between March 2011 and January 2012. Users participated in three types of tests available through Lumosity: Speed Match (a 45-second matching task where users respond whether the current object matches the previous object shown), Memory Matrix (users see a pattern of squares on a grid and are asked to recall the pattern after a delay), and Raindrops (a rapid arithmetic test where problems appear at the top of the screen and users must answer each one before it reaches the bottom). See the graphic to the left for the results of the study (I don’t know why it’s blurry, but the top row of graphs peaks at 7 hours, I promise. The bottom set of graphs is irrelevant to this blog post, but check out the study link above to learn about more of their findings). In each of the three tests, participants’ peak performance occurred when the user reported 7 hours of sleep. This consistency across not one but all three tests indicates a very strong correlation between 7 hours of sleep and peak cognitive function. Because peak performance occurred at 7 hours, it also suggests that additional sleep was not any more beneficial for cognitive testing.

Now let’s relate this to all of you sleep-deprived college students. College sleeping is all about inconsistency. Whether you’re studying super late in the library, out late at a party, waking up early to tailgate, or even pulling an all-nighter to study, one night of 7 hours isn’t magic. Seven hours of sleep isn’t going to miraculously solve all of your problems, clear all exhaustion, and increase your lifespan. While college is all about inconsistency, you need 7 hours of sleep every night to reap the benefits. So party hard, study hard, sleep in late, but to get the benefits of the 7 hours, it needs to be done consistently.

To conclude: These two studies both very strongly support the idea that sleeping 7 hours, instead of the much more commonly heard 8 hours, might prove beneficial in the long run through lower mortality and better cognitive functioning. But like I said before to all you crazy, sleep-deprived college students, the 7 hours needs to be consistent for it to be beneficial. While there are certainly going to be some nights when 7 hours of sleep is truly impossible, aim for it. If it’s between that Netflix marathon and getting 7 hours of sleep, choose the sleep, because it will be a step in the right direction toward consistency, which can lead to lower mortality and improved cognitive function. So SC200 students, aim for a consistent 7 hours every night (and maybe when you’re scheduling classes, take your sleep schedule into account and avoid those 8 AMs for an extra hour or two).

Hola…Bonjour…Guten Tag…

Hola fellow SC200 amigos. Me llamo Sam. As a current Spanish minor, I struggle with learning all of the different vocab words, grammar structures, and phonetics of the Spanish language. Unless you are a native speaker of another language, I am sure you can relate to the struggle bus I am currently riding. But I think it is worth the struggle. One thing I do wish is that my parents had taught me Spanish when I was younger so I didn’t have to learn a new language on top of being an extremely busy college student. Why do kids learn languages faster than other age groups? What I have heard is that teaching a child a language is much easier than teaching an adult or even a high school student.

One article I found to start things off claimed that the best time to teach a child another language is in the first 3-4 years of life. This is supported by the fact that learning a language is a natural process, and children typically speak at least 2,000 basic words by the age of four. During their first six months alone, babies “babble” using only about 70 sounds (also known as phonemes), but these sounds make up all of the languages of the world. They will eventually learn language through these sounds, mainly via words they pick up from their environment. One of the reasons adults struggle to learn a new language more than children do is that a number of these 70 sounds are lost because they aren’t used in that adult’s native language. Adults, and even older children, just don’t know how to pronounce, hear, and differentiate the unique phonemes of other languages that don’t exist in their native language. A child’s most natural ability to learn is during the first three years of life (in addition to the phoneme explanation specific to language learning), and that is why it is easiest to learn a language then. Over 50% of a child’s ability to learn is developed in the first years of life, with another 30% developed by age 8. This is when it is easiest for children to learn things, including a second language. Therese Sullivan Caccavale, president of the National Network for Early Language Learning (NNELL), notes research done in Canada with young children showing that bilingual children develop object identification at a younger age. This is the concept of learning that an object stays the same but has multiple names in multiple languages. For example: a carrot is still a carrot, and looks and tastes like a carrot, whether it is called a carrot in English or una zanahoria in Spanish.

Another study, done by Ann Fathman of Stanford University, tested the relationship between age and second language productive ability. The study consisted of 200 children (ages 6-15) from diverse language backgrounds who were learning English as a second language in public schools in the DC area. To qualify for the study, these students had to speak their native language at home, have had no previous English training before entering American schools, and have lived in the US for less than three years. This was a good group of students because there was variety in their native languages, in how long they had been in America, and in the English instruction methods they received across the different DC public schools. To test the students, they were given an oral production test measuring their ability to produce English morphology and syntax. The test consisted of 20 subtests, with three items in each subtest, and each subtest focused on a particular syntax pattern. Students were given a 1 (pass) or a 0 (fail) for each answer. In addition to the sixty-item oral production test, each child was asked to describe a composite picture. Their answers were recorded and evaluated by linguists, who ranked them 0-5.

The results of this study indicated a relationship between age and rate of learning. The results were divided along two groupings: age group (6-10 or 11-15) and the amount of time the students had been in the US (1 year, 2 years, or 3 years). The results showed that the older group performed better on the syntax questions (p-value < .001), regardless of how long they had been in the US. This very low p-value indicates the result was most likely not due to chance. No differences were found in the rate of learning across the different programs used to teach English. The pronunciation results from the composite picture description indicated that younger children did significantly better than the older children (p-value < .05). This suggests that the younger children could be learning English phonetics at a much faster rate than the older children, which makes sense when you remember that older students “lose” their phonemes, while younger children are more likely to pick them up. To conclude: older children performed better with syntax, yet the younger children were much better with correct English pronunciation. I personally find this very interesting because it is consistent with what previous studies suggested. I also think these results are similar for native English speakers, with younger children mastering pronunciation first and learning proper syntax when older. One possible confounding variable in this study is how long students had lived in the US: students who had lived in the US longer naturally had more time to practice English.

Another reason why younger kids generally have an easier time learning other languages is that they are not as cognizant of grammar structures as adults, nor are they expected to be. Martha G. Abbott, the Director of Education for the American Council on the Teaching of Foreign Languages, claims that one of the largest benefits of teaching a child a second language early is that younger learners are able to develop near-native pronunciation. This is because children can’t yet properly speak English (or their native language), so they don’t let the diction and speech of their first language conflict with their pronunciation of the second. Younger students get a “pass” when they can’t pronounce a word, yet if a high school or college student made the same mistake it would not be as easily overlooked. The older one gets, the more grammar and sentence structure start to matter, which seems to “unintentionally” penalize adults and older students for mistakes made while learning a language. This can ultimately be discouraging. Abbott also points out that younger learners are naturally more curious about learning, allowing easier engagement with the language. Younger children are also more accepting of people from other cultures and other language speakers, before they pick up certain stigmas about particular cultures and races.

One researcher who disagrees with the strength of the argument for learning a second language young is Stefka Marinova-Todd of the University of British Columbia. Marinova-Todd and her colleagues argue that there is no “critical” period for learning a second language. They claim researchers have fallen victim to three fallacies. The first fallacy is a misinterpretation of the observation that young children tend to pick up another language, when they are actually unsophisticated and immature, and lack certain cognitive skills that could significantly help in acquiring a second language. One study supporting this was conducted by Harvard professor Catherine Snow, who studied English speakers of different age groups learning Dutch (the sample size wasn’t reported in the article I found). The study concluded that 12-15 year olds did better than younger children. The second fallacy is that some researchers report differences in brain organization between early and late second-language learners when there are none; in actuality, a second language is acquired through the same neurological configurations responsible for acquiring the first. The last fallacy concerns the rate of failure. Older people admittedly “fail” at learning a language, but this is due to other factors such as energy, input, motivation, language-learning time, etc. This can also be logically supported because younger children are more likely to be “pushed” to learn a language through parents, classes, tutoring, and rewards from parents or teachers. Adults have less external motivation and must motivate themselves, which can make it seem “harder” for them to learn another language. This could be a confounding variable for adult learners, because it may make it seem like they aren’t as capable of learning, when really they are just less motivated.
The article concludes by reminding people that the majority of bilingual children start out monolingual, and while learning as a child hasn’t been proven easier, it is also very possible, and can even be simple, for older people (of any age, really) to learn a language. Basically, don’t let not being an infant or a toddler deter you from learning a language.

To conclude: countless studies suggest that learning a language young is more beneficial, and it does seem to truly be so. That being said, Marinova-Todd still offers very interesting (and frankly under-discussed) data and opinions supporting the view that childhood is not the only way and time to learn a language. Overall, no matter your age, don’t let fear of societal embarrassment (for example, not knowing all the grammatical structures or struggling with pronunciation) sap your motivation. YOU CAN DO IT!

CrossFit: Good or Bad?

There are hundreds of workout trends and ways to work out. Ranging from yoga to powerlifting, everyone likes to work out in their own way. Before college, I was a competitive swimmer, training all year round, including doing CrossFit. Too many times to count, I have mentioned my love of CrossFit to people, only to be lectured about how bad it is for my body. In this blog post, I want to look into whether CrossFit is “good” or “bad”, who it is best for, and what kinds are the best.

CrossFit is a strength and conditioning system promoting physical fitness. The system combines a variety of exercises, mixing weight lifting, sprinting/distance running, and gymnastics. CrossFit challenges its members to be proficient in 10 fitness domains: “cardiovascular/respiratory endurance, stamina, strength, flexibility, power, speed, agility, balance, coordination, and accuracy”. Typical CrossFit gyms create a “Workout Of the Day” (WOD) for their members, designed to target specific muscle groups or promote an area of fitness. The emphasis of a WOD is on speed and total weight lifted, not technique. CrossFit is designed to build muscle mass through intense yet quick workouts.

Critics of CrossFit state that CrossFit athletes are more prone to injuries. They also claim there is a lack of guidance for beginners. Without knowing proper technique, beginners can dive right into a WOD without knowing the proper movements, causing injury. Critics say this is because CrossFit focuses on speed and weight rather than form and effectiveness. Regular athletes can also be in danger, because anyone who exercises strenuously at a very fast pace without a proper warm-up or proper form can be prone to injury and health issues. Right now, there is no definitive research to support or refute these critics (which I believe is because CrossFit is a very new way of working out). One of the biggest health concerns raised about CrossFit is rhabdomyolysis (also known as “rhabdo”), a serious and potentially deadly illness that results from the catastrophic breakdown of muscle cells. It can lead to kidney failure and, ultimately, death. It is said to show up in CrossFit because athletes are constantly encouraged to push themselves past their physical limits, causing muscle cells to break down and die. In addition to rhabdo, there are studies testing “body part specific” injuries.

To see how common injuries really are, I found a study that observed injury rates and patterns among current CrossFit athletes. The study consisted of 486 CrossFit participants, with 386 meeting the inclusion criteria. The overall injury rate was found to be 19.4%, with males more likely to be injured than females (p = .03, meaning that if there were truly no difference between the sexes, a gap this large would arise by chance only about 3% of the time). I believe that males were more likely to be injured than females because men seem more influenced by peer pressure: when told to do more reps or add more weight, females may be more rational about what is good or bad for their bodies, while men want to impress each other. This study also reported that among the 84 people reporting injuries there were 21 shoulder injuries, 12 lower back injuries, and 11 knee injuries. Lastly, the study found the injury rate decreased with trainer involvement (p = .028). This suggests that trainer intervention can help athletes work out effectively while safely performing movements and adding weight. While this study observed several different key factors, could it all have been by chance? Yes. This study doesn’t settle whether or not CrossFit is bad; these numbers could be the result of chance or of false positives, concluding that something is going on when it really isn’t. What the study does show is that bodily injury can be reduced with the assistance of a trainer, and that males were more likely to be injured than females. I think these are important statistics because simple precautionary measures (like having a trainer assist you) can be taken to ensure safety. To follow up, I looked at another study. The second study I found also looked at proportions of injuries in CrossFit. This study surveyed 132 people, of whom 97 claimed to have sustained an injury during CrossFit training (that’s 73.5%!).
This study was done in the UK, while the first study was done in New York. This could be significant because New York (or even just American) gyms could be “safer” places to do CrossFit. A trainer who focuses not on teaching and correcting but on adding speed, weight, and reps is probably the kind of trainer found at the gyms in study two.
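Just to get a feel for how far apart these two surveys are, here is a minimal, stdlib-only sketch of a pooled two-proportion z-test using the figures reported above. Note the assumption: study one’s injured count is back-calculated from its 19.4% rate (about 75 of 386), so treat the exact counts as approximate.

```python
import math

# Approximate counts from the two surveys discussed above:
# study 1 reported a 19.4% injury rate among 386 included athletes,
# study 2 reported 97 injured of 132 respondents.
n1, x1 = 386, round(0.194 * 386)   # ~75 injured (back-calculated, approximate)
n2, x2 = 132, 97

p1, p2 = x1 / n1, x2 / n2

# Pooled two-proportion z-test: could both samples plausibly come
# from one underlying injury rate?
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(f"study 1: {p1:.1%}, study 2: {p2:.1%}, z = {z:.1f}")
```

A z this far from zero (around 11) says the gap between 19.4% and 73.5% is far too large to be sampling noise alone, which supports the point that something else, like gym culture, trainer quality, or self-reporting habits, must differ between the two studies.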

Clearly, this data does not line up. So which one is “right”? Technically, both (assuming the studies were done properly). These widely differing answers could result from any number of things: trainer skill level, workouts done, health of the athletes… the list goes on and on. These studies may also just not be very reliable (but they were what was available on this issue). They are less reliable because the injuries were self-reported, and people may over-report an injury (say they have one when they really don’t) or under-report one (say they don’t when they do). While my third study was also self-reported, it compared CrossFit injuries to injuries in other sports. It concludes that the rate of injury per 1,000 hours is actually much lower for CrossFit than for a multitude of other sports (see graph). In two studies (within this one), CrossFit’s rates were calculated at 2.4 and 3.1 per 1,000 hours of exposure. There are so many factors that can cause injury, and some of these injuries may not come from CrossFit training at all; we would have to look at many different studies. All three of my studies admit there is little to no research in this field, but I still found it worth analyzing and blogging about (even if that made it a lot harder and less concrete). While many more studies are needed to determine whether injuries actually occur more in CrossFit than in other workouts, all three studies clearly show that studying the same thing can produce dramatically different results.
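The “per 1,000 hours of exposure” figure is just a normalization so that sports with very different training volumes can be compared fairly. A quick sketch of the arithmetic (the 12-injury example is invented purely for illustration):

```python
def injuries_per_1000_hours(n_injuries: int, exposure_hours: float) -> float:
    """Standard sports-epidemiology rate: injuries per 1,000 athlete-hours."""
    return 1000 * n_injuries / exposure_hours

# Hypothetical example: 12 injuries over 5,000 athlete-hours of training
print(injuries_per_1000_hours(12, 5000))  # 2.4, in line with the CrossFit rates cited above
```

Dividing by exposure is what lets the third study claim CrossFit compares favorably: a sport with many injuries but enormous training hours can still have a low rate.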

CrossFit has also been applauded for the results of its powerful and intense conditioning. This form of exercise has been found to deliver results without requiring significant time in the gym, thanks to the typical WOD-style workout. Another positive aspect of the CrossFit “fad” is the sense of community it has created. The official website offers workouts, affiliate gym locations, free resources, frequently asked questions, and a community discussion on exercise and nutrition. The aim of qualifying for the CrossFit Games also creates a sense of community and motivation for top athletes. While there is no research on this yet, scientists could potentially run a psychological experiment to test this theory.

To summarize: Some people love CrossFit because it offers a muscle-building workout in a very short amount of time. Others hate it because it doesn’t focus on form, and they fear the injury rumors they have heard. Even with my studies, it is hard to really know whether CrossFit causes injuries more than another workout would. I wish there were more concrete evidence, but hopefully as the CrossFit trend grows, more research will be done to give a more definitive answer on whether CrossFit is “good” or “bad”.

In conclusion, the call is up to you. Yes, from the studies analyzed, there is a risk in doing CrossFit (though maybe not as much as in other sports). But do the benefits outweigh the costs for you? Your chance of injury clearly depends on more than just doing CrossFit: the intensity you train at, the weight you use, and whether you use proper form (a trainer’s help will decrease the chance of injury). Preexisting health conditions can also play a role. In my opinion, CrossFit can benefit its athletes, but only if done properly, with a trainer assisting and monitoring.

To leave you with a final note: check out this video of the 2014 CrossFit Games. (The CrossFit Games are similar to the Olympics: athletes from around the world compete, and the top people in specific age groups qualify to attend.)

That Crab With One Big Claw

I spend a lot of time at the beach in the summer. One of the beaches I frequent is a “bay beach” in Brigantine, NJ (South Jersey, near Atlantic City). Because of the bay and the lack of a large crowd, there are a lot of things at this beach that I usually don’t see at other beaches. The craziest thing I have seen there is a crab with one huge claw: the fiddler crab. I have always wondered why they have one giant claw and what the benefit of this abnormally large claw is. And does the larger claw’s size matter? So SC200, you’re about to learn!

The pink shows where fiddler crabs are found throughout the world


Fiddler crabs are small, semi-terrestrial crabs (meaning they live mostly on land but require nearby water, especially for breeding) characterized, in males, by their extremely asymmetrical claws. They are found along the eastern and southern coasts of the United States (where I see them), as well as many other coasts throughout the world (see map above). Fiddler crabs hatch from eggs as larvae and live as plankton through several molt stages before molting into immature crabs. Once the crabs move to land to continue growing, the males start to develop a large asymmetrical claw. Females keep their two small claws, used for feeding, while males have only one small claw for feeding.

So what is going on with these large, odd-looking claws? The claw usually accounts for about 1/3 of the crab’s overall body weight. It needs to be light enough to wave around, but heavy enough to inflict injury on other crabs. Fiddler crabs are often found waving their large claw in order to attract females into their burrow to mate. So the male’s large claw is used for mating purposes, as well as for combat.


A male fiddler crab raising his large claw in hopes of attracting a female fiddler crab

Some evidence suggests that females choose the male crab with the largest claw. To check this, I found some studies that support it. This experiment (an experiment, not an observational study, because the researchers added “dummy crabs”) tested female fiddler crab preference based on both the size and elevation of the male claw and on male handedness. Dead, resin-coated males were used as the test (dummy) objects. Females were observed to approach males with larger claws and males who raised their claw higher in the air, while the handedness of the crab had no effect on female responses. These results are consistent with evolutionary theory: females prefer to mate with crabs with bigger claws because larger claw size correlates with better fighting ability, which leads to better chances of survival. Check out this video of a male fiddler crab waving his claw to attract a female. Once a female chooses a mate (based on his claw size), she enters the male’s burrow and remains there for about two weeks while her newly inseminated eggs develop into embryos. She leaves the burrow once she is ready for her eggs to hatch in the water.


Two male fiddler crabs fighting

The male’s giant claw is also used as a weapon to fight other male crabs. These “fights” usually ensue from an unknown male hanging around the outside of another male’s burrow. It is believed that because the winners of the fights are usually the ones with the bigger claws, the successors of the fights pass along their genes, therefore the evolution of the big claw. Check out this (start watching at around 1:30) video of male fiddler crabs fighting.

To conclude, fellow friends of SC200: the male fiddler crab has a large claw that is used to attract females. Females are attracted to larger claws, as well as to the male who waves his claw the highest. This is consistent with evolution and wanting to mate with a male crab that has a larger claw, because it increases his chances of survival. Males also use this large claw to fight other males, mainly to protect the burrow where they hope to breed with a female. The size of the claw also matters for fighting, because the crab with the larger claw usually wins.

Dress for Success

We have all heard the term “dress for success.” While it will probably allow you to be more successful if you dress up for an interview or a formal event, does it apply to everything? In high school, I had a friend who believed she had to dress up in order to do well on any test. Yet when I took my SATs or any other tests in high school, I made sure I would be comfortable, which usually ended up with me wearing a sweatshirt. But did not dressing up affect my grades? Was my friend right? Did her dressy appearance help her succeed in school?

Students dressed for success


I started my research with a “College Magazine” article, an informal piece written by a college student. To summarize, the author agreed with the idea of “dress well, test well”. The author cited Carly Heltinger, author of “The Freshman 50”, who praises the “dress well, test well” theory but lacks scientific evidence to support it. While I did find several other articles by college students claiming the “dress well, test well” method worked, I couldn’t find a true scientific study (I looked on Google Scholar and several library databases and got nothing). I still thought this was an interesting, and testable, theory.

One article I did find seemed to have some actual validity beyond a campus opinion piece. It covered the perspectives of two Ohio State University (boo OSU, go PSU) psychologists, professors Jennifer Crocker and Richard Petty. They claim that what you wear affects your cognition: how you think. The psychology term for this is “priming”: when someone says one thing, you automatically think of something else, like beach and sand, or associating glasses with being smart. Petty states that priming is a two-step process: “Clothes can be one example of something that can activate thoughts in our head but then once those thoughts in our head are activated or primed, then those primes can effect how we interpret the world around us and ultimately our behavior”. Priming is found in education all the time; we have all personally experienced the different feelings and reactions that come from hearing SAT rather than critical thinking test (even though they are the same thing). Crocker also argues that self-objectification consumes mental resources and creates a negative attitude. Self-objectification is when we evaluate ourselves based on appearance because we believe this is how everyone else perceives us. Dressing down can reduce self-objectification: others won’t objectify you, you won’t objectify or be self-conscious of yourself, and you can therefore be more focused. Dressing promiscuously can have the opposite effect. On the other hand, self-objectification can also provide a huge self-esteem boost when you are dressed up.

One of the reasons there may not be testing or experiments specifically on dressing for success and testing is that when people hear “dress for success,” they commonly think of the business side of it: interviews and presentations. Most people probably assume this correlation carries over to testing, but it can’t be confirmed without a study. While the Ohio State article was helpful in explaining the psychology behind appearance, I still couldn’t find any studies to support my claim. But I was still really interested, and have attended every one of Andrew’s classes, so I figured that qualified me to design my own study.

A good study for this would be to randomly select certain high schools in diverse areas (so you are testing different genders, races, and socioeconomic statuses—this ensures the trend isn’t confined to one group of students). This study couldn’t be double blind, because the students would know whether they are “dressed for success”. Students would randomly be assigned to two groups: dressed up and not dressed up. The experimenters would provide the clothes to ensure students know they are dressed appropriately or not. All students would be given the same test, and then the tests would be graded. The graders would only need to be blinded if it wasn’t a multiple-choice test (because multiple-choice tests have clear answer keys and are objective). The results could then be compared and analyzed to see whether they were likely to have happened by chance. The null hypothesis would be that what people wear doesn’t affect their test scores. The alternate hypothesis would be that what people wear does affect their test scores. I believe we would end up rejecting the null hypothesis, because I assume the data would be consistent with the professionalism of dressing for success in the business world.
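For a rough sense of how that comparison could be analyzed, here is a minimal sketch of a permutation test in Python. Everything here is a made-up illustration: the scores, group sizes, and variable names are all my own assumptions, not data from any real study.

```python
import random

# Hypothetical exam scores for the two groups (invented for illustration)
dressed_up = [82, 90, 78, 88, 85, 91, 79, 84]
dressed_down = [80, 76, 83, 79, 81, 77, 85, 78]

def mean(xs):
    return sum(xs) / len(xs)

# Observed difference in mean scores between the groups
observed = mean(dressed_up) - mean(dressed_down)

# Permutation test: reshuffle the group labels many times and count how
# often a difference at least as large appears by chance alone.
random.seed(0)
pooled = dressed_up + dressed_down
n = len(dressed_up)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n]) - mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"Observed difference: {observed:.2f} points, p = {p_value:.3f}")
```

A small p-value would mean the observed gap is unlikely under the null hypothesis that clothing makes no difference; a real study would of course still need the sample and design described above.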

So right now, anxious SC200 peers, I can’t give you an answer. From my personal experience, I say do what feels right. Personally, I like the sweatshirt-and-leggings look and feel for exams, but if you think you do better dressed up, then go for it. But what would it cost you? If it wouldn’t bother you to dress a little nicer, then maybe it could really help. If I am not comfortable, I stress out and know I would not perform my best, so I find it more beneficial to dress down. It is different for every person. From what I know (even with no study), I think it depends on the person. If you don’t pick out your outfit specifically for an exam, try dressing up a bit more and maybe you’ll see your grades increase as well. Maybe I can convince Andrew to give me some of his research money to conduct my own actual study 😉

What Causes People to Have Different Spicy Food Tolerances?

While at my tailgate this weekend, I was eating buffalo chicken dip and commented how spicy I thought it was. My sister replied saying it wasn’t spicy enough. This made me wonder, why do we have different tolerances?

Diagram of the tongue and where flavors are tasted


To start off, let’s get science-y. What people commonly call taste is actually flavor, according to Dr. Bruce Bryant of the Monell Chemical Senses Center. Flavor is made up of three components: taste, olfactory sense, and trigeminal sense. The tastes people sense are sweet, sour, salty, bitter, umami (see the paragraph below to learn about umami), and even fattiness in some cases. Your olfactory senses work with tastes to produce the sensations people think of as one-dimensional taste. For example, fruit will be either sweet or sour, but the “fruitiness” comes from your nose smelling the fruit; your nose, not your tongue, is what allows you to tell the difference between a peach and a pear. Your trigeminal system is what perceives spiciness. It detects pain through nerve endings sensitive to pain, temperature, and touch.

***Side note: I didn’t know what umami was, so I checked it out. It is a Japanese word for a pleasant savory taste. Umami is subtle and blends with other foods to “round out” the taste. Because of this, most people don’t recognize it, but it enhances the taste of many foods. It is considered the fifth primary taste, beyond the four tastes people once thought were the only ones (sweet, sour, salty, bitter).

With your trigeminal system, scientists think that some people are simply born with less sensitive pain receptors, but they lack the research to back this up. Researchers DO know that exposing children to more spicy foods at a young age can desensitize nerve endings, making them more tolerant of spicy foods. For example, some Mexican parents give children packets of sugar with red chili powder, which builds up their spice tolerance. This causes some nerve endings in the mouth to die off, and the resulting lack of nerve endings allows a higher tolerance for spicy foods.

But what if you weren’t fed spicy foods as a child? How do these people have different tolerances? PENN STATERS (woohoo) did research to find a link between personality traits and a passion for spicy food. Scientists have found that personality is a factor in whether people like spicy foods and how often they eat them. Researchers Nadia Byrnes and John Hayes from Penn State’s College of Agricultural Sciences found data that “suggests chili liking is not merely a case of increased tolerance with repeated exposure, but rather that there is an affective shift towards a preference for oral burn that is not found in chili dislikers”. This means some people simply prefer an oral burn more than others do. UPenn researchers had earlier linked chili liking to thrill seeking, and SUNY Stony Brook researchers found a relationship between chili liking and sensation seeking, a more formal measure of personality. Both the UPenn and SUNY Stony Brook studies were found to be very weak, with neither looking at intake.

In the PSU study, the researchers used an updated set of sensation-seeking questions that avoided gender- and age-based items. They also used a four-point scale rather than a simple yes/no scale to be more precise. The study consisted of 97 males and females, ranging in age from 18 to 45, who filled out a food-liking questionnaire. On the questionnaire, participants rated the intensity of different taste sensations after sampling six things, including capsaicin (the active component of chili peppers, and the spice used in the study) in water. Later, they took a survey including personality questions and were asked how often they consume chili peppers. Overall, this study confirmed that liking or disliking spicy foods is not determined only by one’s sensitivity to capsaicin; personality factors also exist and influence the affective response to the initial “burning” of capsaicin. The study also concluded that there were significant positive associations between sensation seeking and the liking of spicy meals, though some relationships varied in strength.


Arnett Inventory of Sensation Seeking vs. Liking of Spicy Meal


Yearly Chili Intake vs. Arnett Inventory of Sensation Seeking

While I am only a college student taking a science course for non-science majors, and I realize the people who conducted this study are much more advanced in the science field, I wonder if it could have been done better. The study only used 97 males and females; a larger sample would have supported stronger findings and made it less likely the results were due to chance (although Andrew taught us there is always a chance of chance!). I also wonder if there could be reverse causation, such as liking spicy foods causing different personality traits. There could also be other potential confounding variables. To my knowledge, reverse causation and confounding variables were not discussed in the study, and I am curious if either affected the results.

Overall, the study and other articles claim that personality factors do influence the affective response to the initial “burning” sensation from capsaicin. I still have some doubt about this because of the possibility of reverse causation and confounding variables, and the lack of other similar studies.

Ear Infections and Effective Surgeries to Get Rid of Them


Did you ever have a lot of ear infections as a child? I did. A lot. One year, I had 14. I was little at the time, so it was a lot of crying, a lot of trips to the doctor’s office, and a lot of medicine. Finally, my pediatrician recommended “tubes” for my ears. Being little, I didn’t even know what caused an ear infection, let alone why they wanted to put tubes in my ears. So in this blog post, you will get to learn what an ear infection even is, what tubes are, and whether they actually prove effective.

The top picture is a diagram of the inside of an ear. The bottom diagram shows a swollen ear.


First things first: ear infections. We have all had them, and if you haven’t, you are lucky. Ear infections are much more common in younger children. The middle ear is where the infection occurs; it is located between the eardrum and the inner ear. Three tiny bones (the malleus, incus, and stapes) located within the middle ear transfer sound vibrations from the eardrum to the inner ear. Ear infections are often bacterial or viral and develop when fluid builds up behind the eardrum. Otitis media with effusion (OME) is the thick fluid that builds up behind the eardrum in an ear infection. The eustachian tube (see diagram) connects the ear to the back of the throat. The eustachian tube’s job is to drain fluid from the ear, which is then swallowed. Ear infections occur when the eustachian tube is partially blocked, causing the fluid to build up rather than drain properly. The eustachian tube can become blocked from drinking while lying down or from a sudden increase in air pressure, and swelling of the tube can happen from allergies, irritants, and respiratory infections. Long-term problems, such as persistent fluid in the middle ear or frequent infections (me!), can lead to hearing problems, spread of infection, or tearing of the eardrum. That’s why it is important to consider options for fixing the problem, with the most popular being ear/ventilation tubes!


This diagram illustrates a fluid-filled middle ear and where the small incision is made in the eardrum to help drain the fluid

Ear tubes are usually inserted in children’s ears, but they can be placed in adults’ ears too. The scientific term for the surgery is myringotomy (this is just the process of making an incision), but ventilation tubes are usually inserted in the incision to keep it open longer, allowing better drainage and longer relief. The surgery is done while the patient is under general anesthesia. The procedure consists of making a small surgical incision (the myringotomy) in the eardrum of the infected ear(s). Any fluid is removed with suction through this cut before the tube is inserted. A small cylinder (the tube) is then placed through the eardrum; it is wider at the edges so it stays in place. The tube is hollow and allows air to flow so pressure remains the same on both sides of the eardrum. When air is able to get behind the eardrum (through the tube), trapped fluid can flow out or dry up. Overall, the surgery is an easy procedure, and an overnight stay is not required. Doctors may prescribe eardrops or antibiotics for after the surgery. After a few years, the tubes are supposed to be “pushed out” of the eardrum on their own. If they do not fall out on their own, they can be surgically removed (this is what happened with mine). The hope is that ear infections will stop once one round of tubes is done, but if ear infections return, another set can be inserted. This is the most common surgery to relieve recurring ear infections in young children.

This diagram is another view of the small incision (myringotomy) being made in the eardrum. This diagram also shows the placement of a ventilation tube placed in the eardrum.


So is getting tubes effective? Is there a better option? While a myringotomy is the standard surgery, a study was done to test the effectiveness of two procedures commonly performed on people with multiple ear infections. While inserting ventilation tubes is the most common surgery, laser myringotomy is becoming more popular. “Laser myringotomy has proven to be a safe method to ventilate the middle ear, and results of up to 70% efficacy have been reported. However, the indication for laser myringotomy is not yet known, and evidence is lacking that laser myringotomy is an alternative for ventilation tubes.” The study was conducted in seven Dutch hospitals from July 1999 to September 2001. Children included in the study were less than 11 years old, and their parents had noticed impaired hearing during at least 3 consecutive months. Children who previously had laser myringotomy or ventilation tubes were excluded. After exclusions and eliminating children who did not show up for surgery, 208 children were enrolled. The children were randomized to determine which procedure they received. Once the surgeries were done, results were calculated. The mean closure time of the laser myringotomy was 2.38 weeks, calculated from the 84 patients (90 percent of the group) who appeared at their weekly appointments until closure. In 94 patients, the tube was gone after an average of 3.88 months. For follow-ups after the surgery, “An effusion-free middle ear at the laser side was observed in 46.6% of patients after 1 month and in 35.5%, 37.1%, 38.6%, 41.6% and 39.1% after 2, 3, 4, 5, and 6 months, and for the tube side, this was noticed in 87.4%, 81.9%, 81.5%, 75.5%, 68.5% and 70.7% of patients, respectively.”

The percentages are consistently higher for the ears that received ventilation tubes. Overall, the study confirmed that laser myringotomy is a safe method to treat chronic ear infections, but it proved to be a less effective treatment than ventilation tubes. “The laser myringotomy success rate in this trial of approximately 40% (range 46.6 –35.5%) was reached in the first month after the procedure and remained fairly constant over the rest of the follow-up period, whereas the success rate of the ventilation tube showed a significant decrease from 87.4% after 1 month to 70.7% at 6 months (range 87.4 – 68.5%). The effect of the laser myringotomy can therefore be determined 1 month after the procedure.”
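As a quick sanity check on the quoted figures, averaging the monthly effusion-free percentages reproduces the “approximately 40%” laser success rate the authors describe (the list and variable names below are my own way of organizing the study’s numbers):

```python
# Effusion-free percentages at months 1-6, copied from the Dutch trial quoted above
laser = [46.6, 35.5, 37.1, 38.6, 41.6, 39.1]   # laser myringotomy side
tube = [87.4, 81.9, 81.5, 75.5, 68.5, 70.7]    # ventilation tube side

laser_avg = sum(laser) / len(laser)
tube_avg = sum(tube) / len(tube)
print(f"Laser side average over 6 months: {laser_avg:.1f}%")
print(f"Tube side average over 6 months:  {tube_avg:.1f}%")
```

The laser side averages out to roughly 40%, while the tube side stays near 78%, which matches the study’s conclusion that tubes remain more effective over the follow-up period.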

Overall, the laser myringotomy is not a bad surgery. It is helpful in relieving ear infections, but the reason ventilation tubes are more successful is the prolonged opening in the eardrum. If laser myringotomy could be altered to keep the eardrum open longer, it could prove to be a better and more effective surgery, given its shorter surgery time, lack of general anesthesia, and lower cost.




What is the Punching Bag in the Back of Our Throat and Is It What Makes Us Gag?

Everyone has taken a look in the back of someone else’s throat (or even your own with a mirror) and seen a “punching bag” or “dangly thing”. This “dangly thing” is actually called a uvula. The uvula is made of mucous membranes, connective tissue, and muscle. The roof of the mouth is divided into two sections, the hard palate and soft palate (refer to the diagram). The uvula hangs from the soft palate and is located above the back of the tongue.

A diagram of the inside of the mouth


So what exactly does the uvula do? For a while no one knew the answer. There were several different hypotheses floating around, with the most extreme being that it was a hazardous organ causing sudden infant death syndrome. This was disproved (and we now know how to disprove a hypothesis thanks to Andrew). A study was conducted in 1992 to find the purpose of the uvula. One hypothesis was that the uvula assists with drinking while bending over: the assumption was that the uvula was a “leftover” organ from evolution, developed for when mammals needed to bend over to drink. This was shown not to be true, because after the palates of eight different mammals were studied, even an underdeveloped uvula was found in only two baboons. Two things that truly separate us humans from animals are speech and having a uvula. The conclusion from the study was that the uvula could be an organ that helps aid speech. This is now confirmed. Another purpose of the uvula has to do with swallowing, but not upside down as originally thought. While swallowing, food is intended to go down your throat (obviously). The uvula blocks the passage into the nasal cavity and makes sure your food stays out of your nose (yuck, thank you uvula).

Many people attribute their gag reflex to the uvula. This is also true. A gag reflex is the contraction of the back of the throat, triggered by an object touching the back of your tongue, your tonsils, the back of your throat, or your uvula. The gag reflex’s purpose is to prevent choking. The gag reflex is first and mainly used during infancy, where it is supposed to help moderate the transition from liquid to solid foods. When an object deemed too “large” or “chunky” for a baby’s stomach to digest comes in contact with the uvula, the gag reflex expels the substance the brain flags as harmful. Once babies are about 6 months old, this reflex fades, and they are able to start eating solid foods. The gag reflex is only triggered in children and adults when an unusually large object is placed in the mouth. To confirm that the uvula is related to gagging, and not just the other areas in the back of the mouth, another study was done. A controlled, double-blind experimental study on volunteers (we all know what this means from class) evaluated the gag reflex by using nitrous oxide to suppress experimentally induced gagging. The study found that subjects who received nitrous oxide inhalations tolerated much “more intrusive oropharyngeal stimulation than under control conditions”.

A diagram highlighting the gag reflex "trigger points"


To conclude, the uvula does in fact have a purpose. The uvula helps direct food down the throat, assists with speech, and helps to trigger the gag reflex. Hopefully from reading this you will always remember that the punching bag in the back of your throat does have a name and a purpose.

P.S.: While the following was not part of my main research topic, I found it very interesting and wanted to include it. Although it is not an intended function of the uvula, it plays a part in snoring: it vibrates and is part of what you hear when you listen to someone snore. Side note: in very rare cases, there is actually a surgery (uvulopalatopharyngoplasty) to remove the uvula and surrounding tissues. This surgery is only done in VERY extreme cases of snoring problems.

Are There Actual Benefits to Recycling?

Recycling is great. Or so it seems. I always try to throw my plastic water bottles in the blue recycling bins, recycle soda cans, and even use a reusable water bottle to cut down my plastic bottle impact completely. But is recycling worth it?

Recycling took off when a barge, the Mobro 4000, traveled thousands of miles along the waters of the east coast, trying to get rid of trash from its home, Long Island. People started to recycle when they realized their landfills were running out of room. This, and much more, was covered in a famous article written in 1996. The author, John Tierney, basically called recycling garbage (the title of the article is literally “Recycling is Garbage”). Was Tierney right?

One thing Tierney was right about was that America would NOT run out of room in its landfills. A commonly cited statistic is that the United States’ trash for the next 1,000 years could fit into a landfill only 100 yards deep and 35 miles on each side (which is not that big when you consider it holds 1,000 years of trash).


A landfill: a place where trash is dumped and later buried and covered with soil

We know we don’t need to recycle for saving space in landfills, but overall do we need to? Consider the following quote: “Take the much- maligned plastic water bottle. It’s almost always made from petroleum, a resource that certainly seems worth conserving, and if you chuck it in the trash, the container will live on in a landfill for centuries. But how much diesel fuel does the truck that collects these bottles burn? How much energy does the recycling plant consume; what fumes does it emit into the atmosphere? And what does it all cost, anyway?”

People might think that throwing their bottle or can in the magic blue bin automatically makes the world a better place, but does it? Do the benefits outweigh the costs? You’re about to find out.

For some products, it does. Aluminum cans are the best to recycle: they require 96% less energy to recycle than to produce brand new. Plastic bottles use 76% less energy and newsprint uses 45% less. You must be thinking: this is great! Everything uses less energy, so recycling is what we need to keep doing! Not 100% true. Another product we commonly recycle is glass. Glass uses only about 21% less energy, and you might think that is still good. But recycling glass basically returns it to its “virgin material,” which is sand. We are nowhere near a sand shortage, so we don’t really need to recycle glass, although it doesn’t hurt to. Recycling glass is more of a hassle than it needs to be, especially because we don’t need to conserve its raw material the way we do with other products.

Carolina Recycle Loop explains their recycling process for cans

A big (and often overlooked) issue with recycling is how it is done. A recycling plant is SUPER expensive and needs a huge single, up-front capital investment, which has to be turned into a state-of-the-art single-stream recycling plant and program. Once that is in place, recycling is super efficient; without it, it cannot be. Take PVC pipe, for example. PVC is recyclable, but most plants don’t have the means to recycle it. So it takes up space traveling to the recycling plant, only to be sorted out as trash and end up in a landfill.

There is a very large initial cost to building a recycling plant, but once that is in place, products (especially aluminum cans) can be recycled many times, saving plenty of energy and resources. This article cites that a ton of recyclables takes 10.4 million Btu to recycle. The additional collecting, hauling, and processing of those recyclables adds only 0.9 million Btu, for a total of 11.3 million Btu to manufacture one ton of recycled products. Creating the same products from virgin materials takes 23.3 million Btu. And for all you people who do like science and are curious, a Btu is a measurement of energy equivalent to around 1,055 joules. Not only would recycling save energy, it would also reduce greenhouse gases by the same amount as taking 38 million cars off the road (AKA a lot of greenhouse gases).
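The arithmetic in that paragraph is easy to check. Here is a tiny Python sketch; the figures are the ones cited above, and the variable names are my own:

```python
# Energy figures cited above, in millions of Btu per ton (1 Btu ≈ 1,055 joules)
recycle_process = 10.4   # recycling a ton of recyclables
collection = 0.9         # extra collecting, hauling, and processing
virgin = 23.3            # manufacturing the same ton from virgin materials

recycled_total = recycle_process + collection
savings = virgin - recycled_total
print(f"Recycled route: {recycled_total:.1f} million Btu/ton")
print(f"Savings: {savings:.1f} million Btu/ton, "
      f"about {savings / virgin:.0%} less energy than virgin manufacture")
```

So the recycled route uses roughly half the energy of manufacturing from virgin materials, which is where the cited savings come from.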

So overall, is recycling worth it? My answer would be yes it is, especially if recycling plants and programs are already in place. The main downfall of recycling is setting up the plants, but once they are set up, recycling is beneficial to society.

Hand Sanitizer vs Soap…What is Cleaner?

We have all grown up being told to wash our hands. We also know that hand sanitizer is the quicker and better “on-the-go” option. Why would you stand at a sink and sing “Happy Birthday” TWICE when you could get a quick pump of hand sanitizer and be on your way? Personally, I always assumed hand sanitizer was better because it “kills 99.9% of germs”, and soap doesn’t advertise that. I’ve always been curious about which one was better. I figured these blog posts would give me a reason to do some research.


Which one is better?

The CDC has a whole website devoted to teaching people how to wash their hands (I’m sorry but if you don’t know how to wash your hands by now, you’re in trouble. If you need this help, check it out). They claim that washing hands with soap and water is the best way to reduce germs in most situations, but if soap and water is not available, then a hand sanitizer (with at least 60% alcohol) should be used. This is what most people would have assumed. Don’t worry, there is more!

If you have no soap and water, the CDC recommends to use alcohol based hand sanitizers or wipes. Non-alcohol based hand sanitizers don’t work well for all classes of germs, cause germs to develop resistance, reduce germs instead of killing them, and can even irritate the skin.

While the CDC and the overall consensus say that soap and water work better than hand sanitizer, many studies show that hand sanitizer does work well (and is time efficient) in hospital settings. This article confirms that in a hospital/traditional office setting, alcohol wipes or hand sanitizer can be just as effective. These are settings where hands come in contact with germs but are not heavily soiled or greasy. Once again, they clarify that “dirty hands” from community settings (handling food, playing sports, working outside, etc.) must be effectively cleaned with soap and water.


A man showing his dirty hands after gardening work

A randomized, controlled trial wanted to find out if alcohol-based hand sanitizers, which are known for killing the germs behind common respiratory and GI illnesses, could reduce illness transmission in the home. The study was done in the homes of 292 families with children enrolled in out-of-home child care across 26 child care centers. Overall, the study found that using alcohol-based hand sanitizers was effective in reducing transmission of GI illnesses among families with children in child care. Another study found that regular use of hand sanitizer could not prevent influenza, but it did reduce total school absences and laboratory-confirmed influenza A infections in children who had also received an influenza vaccination.


Other diseases need hand washing over hand sanitizer to prevent the illness. In a study to test the removal of Clostridium difficile, hand washing was found to be more effective.

To conclude: overall, wash your hands rather than using hand sanitizer. During flu season (or what I have heard called the PSU Plague), it wouldn’t hurt to use hand sanitizer for some extra backup and protection. If your hands are clearly soiled and dirty, definitely wash them. If you just want extra cleanliness after touching a possibly germ-covered object, you can use hand sanitizer (as long as it has alcohol). Basically, you’ll never know what you will come in contact with. If it happens to be the flu (and you already got your flu shot) or a GI illness, then hand sanitizer will help. But if it’s Clostridium difficile, then you’ll need soap. So play it safe: always wash your hands, and use hand sanitizer occasionally or when you’re on the run.

Vegetarians vs Meat Eaters

I would like to start off by saying I respect all food choices. I know people who are vegetarians, vegans, meat lovers, and everything else. Personally, I eat meat, but I have always had a love for my fruits and veggies. Note: vegans rock too, but for this blog post I am focusing on vegetarians versus meat eaters; if you want to know the difference between a vegan and a vegetarian, check it out here. I know people become vegetarians to save animals, but I was wondering, from a science point of view: which diet is healthier (or are they equivalent)?


To start off, I realized I needed a baseline, so I checked out this article. One measurement of health is BMI, or Body Mass Index (for fun: if you’re interested in calculating your BMI, you can here). In the June 2003 “International Journal of Obesity and Related Metabolic Disorders”, a University of Oxford EPIC study reported that after dividing 37,875 subjects into dietary categories, all three vegetarian groups had a lower BMI than the meat-eating groups. They also found that high protein and low fiber intake correlated with the highest BMI (and high protein intake is characteristic of meat eaters).
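BMI itself is just weight divided by height squared (kg/m²). A quick sketch of the calculation; the example weight and height below are made up for illustration:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

# Example: a hypothetical 70 kg person who is 1.75 m tall
print(f"BMI: {bmi(70, 1.75):.1f}")  # prints "BMI: 22.9"
```

A BMI in the 18.5 to 25 range is generally labeled “normal,” which is the sense in which the EPIC study used lower BMI as a marker of health.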

There are other ways to measure health. One way is by having lower cholesterol. Harvard Health published a list of 11 foods to help lower cholesterol, with 10 of the 11 foods being vegetarian foods (with the last one being fish, which is included in a pescatarian diet).

Another measure of a healthy person is low saturated fat intake. The Harvard School of Public Health claims that we can’t eliminate saturated fats from our diets completely, because foods with healthy fats (olive oil and peanuts are good examples) also contain a little saturated fat. The top sources of saturated fat are red meat and full-fat dairy products. By eliminating red meat (i.e., with a vegetarian diet), one would eliminate the majority of unnecessary saturated fat, leading to better health over time.

People who choose plants over meat may worry about protein, but plant protein can actually exceed recommended requirements when a variety of plants is consumed. Whole grains, along with beans, peas, lentils, nuts, and seeds, can provide complete proteins (check these out, they have recipes too!). As long as a vegetarian consumes enough plant protein to make up for the lack of meat protein, their diet is healthy, and possibly even healthier than a meat eater’s.

Fresh baby greens salad and tomatoes close up


From a cancer standpoint, The British Medical Journal found a possible link between breast cancer and meat intake. According to the BMJ: “Based on diet in 1991, substituting one serving/day of legumes for one serving/day of total red meat was associated with a lower risk of breast cancer among all women”, showing that reducing meat in a diet can lead to lower risk of breast cancer. The National Institutes of Health also did a study and found “Higher levels of meat, especially red meat (beef, pork, lamb) and processed meat (bacon, hotdogs, luncheon meat, chicken nuggets, and other salted or cured meats) have been linked to a variety of cancers in a number of studies.” Both of these studies support the idea that red meat, though not necessarily all types of meat, can contribute to different types of cancer.

While I remain a non-vegetarian, researching the different food benefits did open my eyes to the conclusion that a vegetarian diet can be healthier, as long as one is not abstaining from protein (found in nuts and eggs). I do not expect a blog post to cause anyone to stop eating meat and switch to an all-plant diet, but hopefully it shows that people could always use some more fruits and veggies, and be more conscientious when eating a lot of meat.

Yay for Science! (Or Not)


I’m Sam Schmitt. I chose SC 200 because it seemed pretty easy and said it was for non-science majors. TBH, I am taking this course because we are required to take Gen Ed science classes, and it seemed like a relatively easy one (compared to others, especially) based on PSU and Rate My Professor.

I am not planning to be a science major because I dislike science. Freshman year of high school, I took environmental science and loved it. But every science class I took after that, including bio, chem, and physics, I absolutely hated and really struggled through. Plus, I am very interested in business (hence why I am in Smeal).

Please enjoy this video of Robin Williams explaining golf while using a Scottish accent (note: it’s very funny)