Author Archives: Mary M. Brown

Are egg yolks unhealthy?

My staple breakfast meal at a diner consists of bacon, rye toast, home fries, and two poached eggs with runny yolks. It may sound insanely weird, but my favorite part of breakfast is the moment I dip my rye toast into my runny egg yolks. However, I’ve always heard rumors that egg yolks are actually extremely unhealthy. Although eggs are a great source of iron, vitamins, and other nutrients, they contain a large amount of dietary cholesterol, and excessive cholesterol can negatively affect heart health (Heid). So my question is, are egg yolks actually THAT unhealthy for me?

In 2012, a Canadian study was conducted to research this question. This observational study used ultrasound to measure the buildup of fat in the arteries of 1,231 adults. Every participant was a regular at a vascular prevention clinic due to pre-existing signs of heart disease. The alternative hypothesis was that heavy egg yolk consumption increases the risk of heart disease. Although this is an observational study, reverse causation can be ruled out on timing grounds, since heart disease cannot cause food consumption that already occurred. As always, chance is an option, and many third variables could affect the correlation, such as junk food consumption, lack of exercise, and hereditary risk of heart disease.

diagram of carotid artery found here!

Besides observing each participant’s carotid arteries via ultrasound, the researchers also asked every person to do the following:

  • Explain his/her personal history of smoking
  • List how many egg yolks he/she consumes in a week
  • Recall how long he/she had been eating this weekly amount of egg yolks

With this data, researchers concluded that both heavier smoking and heavier egg yolk consumption increased a person’s risk for heart disease. However, quite a few limitations hold this study back. For starters, this is a single, small study with a highly selective sample: every participant was already at risk for heart disease, and all were Canadian. Second, just as our intuition is lousy, so is our memory. Many participants could simply have been generalizing their egg yolk consumption and cigarette smoking over the years. This study definitely highlights a correlation between smoking, egg yolk consumption, and increased risk for heart disease. However, it is not large or rigorous enough to establish causation. Personally, I think the only way to establish a strong correlation would be to conduct a meta-analysis of similar studies.

On the other end of the spectrum, this U.S. News article by Toby Amidor argues in favor of egg yolks. An important point in Amidor’s article is that other foods in our daily diets, such as meats and dairy, contain just as much fat as the average egg yolk. So, if we are told to avoid egg yolks for this reason, shouldn’t we be avoiding chicken, beef, pork, yogurt, cheese, and milk as well? At that point, we would all basically be vegans! Amidor’s article also highlights the American Heart Association’s approval of the entire egg, not just the whites, and according to Blue Cross Blue Shield of Mississippi, the other breakfast foods that accompany morning eggs are the actual sources of bad cholesterol.

image found here!

What I’ve taken away from my time researching egg yolks is that they probably are somewhat unhealthy, but not enough to cut them out of my diet completely, especially if I’m not at risk for heart disease. In reality, if my morning eggs are the fattiest foods I consume during the day, am I really doing that much harm to myself? People who are predisposed to heart disease may want to watch excessive egg yolk consumption, but minding a healthy diet as a whole instead of cheating oneself out of one specific food is a more well-rounded way of eating, and a happier one too!

Are we scared into obeying authority?

I’ve often wondered what leads us as humans to obey authority. Is it because we feel obligated to help someone whenever they ask? Is it just a moral sense of correctness? Or is it the looming fear of what will happen if we disobey authority that lures us into obeying it? Personally, I’ve always questioned the correlation between fear and authority. It seems a little ironic, since authority figures are supposed to serve as role models and sounding boards, not people we should be afraid of. However, a little fear now and then is good, because a total lack of fear can lead to a nonchalant attitude and a lack of respect. Nevertheless, I’m still curious to know how much, if at all, fear drives our obedience to authority.

To start off, we have to look at both the null hypothesis and alternative hypothesis of the situation.

NULL HYPOTHESIS: Fear does not cause us to obey authority.

ALTERNATIVE HYPOTHESIS: Fear causes us to obey authority.

If we observe a correlation (whether it is real or a false positive), then we have two causal directions to consider, while still noting confounding variables and the effect of chance.

Direct: Fear causes obedience

Reverse: Obedience causes fear

Since this is a relatively vague question, I wondered if I would struggle with finding scientific evidence to use as a base for my blog. Thankfully, I thought back to my junior year of high school, where we spent a week discussing the Milgram Experiment in my theology class. This experiment was conducted by social psychologist Stanley Milgram at Yale University.

Milgram’s first study included 40 local men from New Haven between the ages of 20 and 50, from all different socioeconomic backgrounds. Milgram recruited his participants through a newspaper advertisement, paying each man $4.50 for showing up to the testing (McLeod). These guinea pigs were told that they were involved in an experiment studying the effects that consequences had on a person’s learning ability. In reality, they were being deceived, as the study was actually testing the participants’ obedience to authority. When each man arrived, he was introduced to his partner in the study, and the two drew straws to decide who would administer the test and who would take it (McLeod). One participant was actually a hired actor, fixed to always draw the learner straw (Encina). In other words, the hired actor was a confederate, guaranteeing that the real participant always played the teacher. We also have to take note of a few confounding variables that could affect the outcome of the experiment, like the differences in the participants’ ages, jobs, and education levels. The $4.50 incentive for participating is also a third variable.

In the experiment, the blind participant would ask the actor simple questions and was required to give him increasingly dangerous electric shocks each time he answered incorrectly. Beforehand, the researcher would give the real participant a sample 45-volt shock, alerting him to what his counterpart would supposedly be experiencing. The actor was never actually hooked up to the shock machine, but was given a list of cues to follow as the shocks supposedly increased, in order to worry the innocent subject (Encina).

shock range image found here

Naturally, the participants became increasingly uneasy as time went on, for fear of hurting the man on the other side of the experiment. Whenever a participant hesitated, the experimenter used intimidating commands to insist that he go on with the experiment, no matter what (Encina). Personally, I don’t believe I could continually increase the shocks, especially after experiencing the 45-volt shock and understanding how it felt.

experiment layout found here

As a hopeful human being, I assumed that a large number of participants would refuse to continue the experiment; however, to my surprise, a shocking 65% of participants administered the final 450-volt shock, and 100% went to at least the 300-volt mark! How insane! If this were the only study conducted, we would not be able to conclude anything, due to insufficient evidence. However, over time, Milgram conducted 18 variations of his experiment using 636 different people, producing startlingly similar results each time (McLeod). Could these different experiments be treated as a meta-analysis?

graph found here

Critics called the experiment biased, since it included only males. They also had a bone to pick with its ethics, because it relied heavily on deception (McLeod). Although the experiment seems ethically questionable, I can’t think of any other effective way to test the alternative hypothesis. Besides that, we learned in class that science is anti-authoritarian, so who are critics to tell Milgram his experiment was conducted incorrectly? He was simply practicing science; there is not only one right way to test something.

I see enough evidence to reject the null hypothesis. When two-thirds (65%) of participants obey authority all the way to the end, it is extraordinarily unlikely that the result is due to chance.
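To put a number on that, here is a minimal sketch (in Python) of the kind of tail-probability calculation behind the chance argument. The 26-of-40 figure is just 65% of Milgram’s 40 participants; the 10% “null” obedience rate is purely my assumption for illustration.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    fully obedient participants out of n if each person independently
    obeys with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 65% of Milgram's 40 participants -- 26 men -- delivered the final 450-volt
# shock. As a hypothetical null, suppose only 10% of people would ever go
# that far on their own.
print(binom_tail(n=40, k=26, p=0.10))  # ~6e-17: chance alone can't explain it
```

Even with a much more generous assumed baseline rate, a result this lopsided would remain wildly improbable under the null.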

When we think about it, the situation is actually frightening. These participants allowed authority to cloud their moral judgement out of fear. It may sound extremely corny, but researching my original question, “Are we scared into obeying authority,” has proven to me that a strong moral compass is an incredible necessity. If humans were more assured in standing up against authority when authority was immoral, then would we have been able to avoid disasters such as genocides, wars, and dictatorships? What do you think SC200…would you be pressured into giving that 450-volt shock?

P.S. If you have time, I highly recommend watching this modern version of Milgram’s experiment, conducted by Derren Brown on the British show The Heist. It’s 10 minutes long, but it gives a great visual of what Milgram’s original experiment looked like.

Do Celebrity Endorsements Promote Unhealthy Eating?

Whether we like it or not, every time we turn on the TV, click on a YouTube video, or wait for a news article to load, we come face to face with advertisements featuring celebrities. As we know, celebrities endorse all types of products, from clothing brands to cleaning products to bath and beauty, and everywhere in between. One of the largest markets with celebrity endorsements is the food market. Beckham at Burger King. The Manning Brothers selling Oreos. Sofia Vergara donning red lipstick and sipping on a Diet Pepsi. As entertaining as all of this celebrity screen time is, could it be having a negative effect? Consumers are enticed by the sight of their favorite celebrities, making them more willing to buy a product. But could all of this celebrity endorsement actually be a cause for unhealthier eating habits of consumers?

The direct causation would be celebrity endorsements causing unhealthy eating habits. Since a consumer’s unhealthy eating habits cannot cause celebrity endorsements, we can rule out reverse causation. However, some confounding variables could have an effect. For example, people who watch more TV are probably more inclined to snack on junk food in the first place.

I’ve thought out two hypotheses that could be the outcomes of any experiment conducted to answer my question.

  • Null hypothesis—Celebrity-endorsed advertisements do not cause unhealthier eating habits.
  • Alternative hypothesis—Celebrity-endorsed advertisements cause unhealthier eating habits.

A study performed by the NYU Langone Medical Center last June speaks directly to my question. Researchers at NYU discovered a correlation between the frightening rise in childhood obesity and junk food advertised by well-known musicians and pop stars. Their study found that very few celebrities endorsed natural foods like fruits, vegetables, and unprocessed grains, while far more endorsed snacks, junk food, and chain restaurants.

The researchers began by gathering data on musicians. To do so, they looked through Billboard’s Top 100 from 2013-2014, then analyzed each artist’s popularity with young audiences. They discovered that 65 of the 165 celebrities picked accounted for 57 different food product endorsements. The scientists then examined the food brands endorsed by celebrities, finding that 21 of 26 products were considered unhealthy. However, this study was purely observational; no experiment was conducted. It also cannot be taken as gospel, because “unhealthy” in an observational sense is subjective.

Luckily, in 2006 the University of Liverpool conducted an experimental study with a similar hypothesis. In this study, 181 children ages 8 to 11 viewed a 20-minute cartoon with commercial breaks containing one of three advertisements or regular footage of the famous British TV sports anchor Gary Lineker. The three commercials advertised Walkers Crisps (endorsed at the time by Lineker), another unnamed snack, or a children’s toy. While they watched the show, the children were given two different bowls of chips, one marked “Walkers Crisps” and the other labeled “Supermarket Crisps.” Although both bowls were actually filled with Walkers Crisps, the children who viewed footage of Lineker or the commercials for Walkers Crisps ate more from the bowl labeled “Walkers” than the children who watched other commercials.

Gary Lineker with his Walkers Crisps

This study is interesting because it provides evidence that children are affected by celebrity endorsements even when the commercialized product isn’t being broadcast to them directly. Like we discussed in class, there is still room for chance, especially considering no p-value is reported for this experiment. While this one experiment cannot prove for sure that celebrity advertisements cause unhealthier eating habits, the evidence of a correlation gives us reason to lean toward rejecting the null hypothesis.
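Since the article reports no p-value, here is a minimal sketch of how one could be estimated if the per-child data were available. The 90/91 split roughly matches the study’s 181 children, but every gram count below is invented purely to demonstrate the mechanics of a permutation test.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical grams of "Walkers"-labeled crisps eaten per child; these
# numbers are invented, since the study's raw data isn't reported.
lineker_group = rng.normal(60, 15, 90)  # saw Lineker footage or Walkers ads
other_group = rng.normal(50, 15, 91)    # saw the toy or other snack ads

observed = lineker_group.mean() - other_group.mean()
pooled = np.concatenate([lineker_group, other_group])

# Shuffle the group labels many times and count how often chance alone
# produces a difference at least as large as the one observed.
extreme = 0
for _ in range(10_000):
    rng.shuffle(pooled)
    if pooled[:90].mean() - pooled[90:].mean() >= observed:
        extreme += 1
print(f"one-sided permutation p ~= {extreme / 10_000:.4f}")
```

A small p-value from real data would justify that lean against the null; without the raw numbers, the most we can say is that a correlation exists.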

What I’ve taken away from researching this question is that I should be more careful about how much I allow celebrities and the media in general to influence my purchases, especially food products. I take pride in the healthy lifestyle I try to live, and holding myself back by falling for advertising tricks would be foolish. Even though chance is always an option, the two studies I’ve looked at lead me to believe there’s a strong correlation between unhealthy food consumption and celebrity backing. However, before we could be more confident in the alternative hypothesis, we would need to test it in more depth. An interesting way to take this research further would be to have participants track their consumption of celebrity-endorsed, unhealthy products and record changes in their health over an extended period. Researchers would then compile the data to see whether that consumption was associated with declining health.

Living Vicariously Produces No Real Winners

Growing up in typical suburbia, I played three sports: soccer, basketball, and (unfortunately) softball. While many amazing bonds were built and numerous championships were won, there were some negatives that came with my township sports teams. The biggest nuisance: overbearing sports parents.

Image found here

Everyone knows what I’m talking about: the dad standing five feet from his son in goal, coaching completely differently from the ACTUAL coaches. The mom scowling and muttering in her fold-up chair at the softball field because her daughter was benched after three consecutive strikeouts at bat. The parents at the pizzeria after the basketball game, bragging about their children’s personal stats instead of celebrating a TEAM win. It’s insane. When did youth sports stop focusing on the growth and development of children and start homing in on the competitive nature of parents?

This, SC200, is a psychological phenomenon known as narcissistic parenting, and I was a victim of it. Luckily, I was never remotely good enough at any of the three sports I played to be deeply affected by it, but plenty of young children grow up to be insecure adults as a result of their parents’ self-absorption, and many childhood athletes give up their sport because they’re tired of performing for their parents instead of for themselves. In fact, according to Jay Atkinson of the Boston Globe, of the 45 million kids who play youth sports in America, about 80 percent have given up on their “passion” by age 15.

This fact stunned me. Thirty-six million kids quit sports before they even hit high school, and a huge factor in their decision is their own parents’ narcissism? What could the mechanism be? Kelly Wallace of CNN has made an interesting, reasonable case. She infers something I have often thought about myself: that narcissistic parents live vicariously through their children because they feel unsuccessful in their own lives. Wallace cites Joseph Burgo, author of The Narcissist You Know: Defending Yourself Against Extreme Narcissists in an All-About-Me Age, in support of her theory. Burgo believes that narcissistic parents spend their child’s formative years grooming her into the person they wish they had been. He also hypothesizes that a distressing, chaotic childhood could lead a parent to narcissism as a way to mask lingering insecurities.

Burgo’s book cover found here

Since I couldn’t find any concrete studies that delved into the science behind narcissistic parenting, I’ve decided to talk about how I think a study could be conducted. A possible explanation could lie in the way narcissistic parents were treated when they were children. If I were in charge of the study, I would treat supposedly “unsuccessful” lives as the independent variable and narcissism in parents as the dependent variable. This would give us the following options.

Direct causation: “unsuccessful” lives cause narcissism in parents.

Reverse causation: narcissism in parents causes “unsuccessful” lives.

Confounding variables that could have an effect include competition between parents, social norms, or depression affecting self-image. As always, chance is an option. An experimental study wouldn’t really be possible, because the causal variable could not be easily manipulated. Instead, the study would have to be observational. One way to conduct it would be through two random surveys. The first could be advertised across different platforms, with preliminary questions used to weed out non-narcissistic parents. These questions could focus on how parents treat their children in competitive situations, how they feel when their child “fails,” and their thoughts on their own needs versus their children’s. The second could be used to gather baseline statistics on the causes of parental narcissism, focusing its questions on the parents’ histories rather than their relationships with their children.

This study would be difficult to conduct, especially considering parents may feel inclined to answer questions in a way that hides their narcissism. However, I do think it would be extremely interesting to have a little more science backing up the topic of parental narcissism. While we wait on that, we can avoid becoming narcissistic people ourselves by taking the advice of Burgo, who believes that paying more attention to a child’s well-being than to one’s own ego is the first and greatest step toward eliminating narcissism.

Cold medicine: relief or placebo?

We’re one month into college and the plague has already hit. In every lecture hall, coughs echo off the walls, and I find myself cringing at the symphony of sickness that surrounds me. Although I may not live in East, the plague has still hit my room. The other day I walked down to McLanahan’s for some vitamin C and came face to face with empty shelves where cold relief should have been. Keyword there: relief. I’m a person who would prefer to ride out a cold instead of pumping myself full of drugs that do little to no good for me. But other people, my dad for example, love to purchase every cold and flu remedy found in a local pharmacy. While I feel as if cold medicine is just a placebo, my dad swears by it. Which leads me to my question: Is over-the-counter cold medicine effective or just a head game?

In this situation, I am treating over-the-counter cold medicine as the putative causal variable and a noticeable improvement in health as the putative response variable. Since an improvement in health could not really cause a person to take more cold medicine, we can rule out reverse causality. However, the placebo effect is a possible third variable here. In my opinion, we have two possibilities:

  • Taking over-the-counter cold medicine improves a sick person’s health.
  • Taking over-the-counter cold medicine improves a sick person’s health only because of the placebo effect.

Image found here

Luckily, back in 2011 a group of researchers at the NIH was just as interested in cold medicine and its possible placebo effect as I was. For their randomized controlled trial, the scientists picked 714 study participants who were currently fighting some form of a cold. These 714 infected people were split into four categories, which I have creatively named below in order to keep them all straight (for my own sake).

  • The Sufferers: These people were given no relief for their symptoms, simply waiting for their illness to pass.
  • The Blind Sheep: This group received Echinacea but was blinded to it, not knowing whether the pills were the real remedy or a placebo.
  • The Extra Blind Sheep: These participants received a placebo without knowing it, which should not improve their health any more than taking no medicine at all.
  • The Holy Grail: The luckiest group was given open-label Echinacea, meaning they were fully aware of what they were taking.

At the end of the study, researchers recorded the statistics for each sub-category, the most interesting being the average duration of illness in each group. While the Sufferers endured their illnesses for a mean of 7.03 days, the Blind Sheep (6.34 days), the Extra Blind Sheep (6.87 days), and the Holy Grail (6.76 days) all overcame illness in a relatively similar time frame. The most notable observation from this data is that the Blind Sheep (those blinded to Echinacea) had the quickest recovery time. Does this mean that patients blinded to actual cold remedies always recover faster than those who know exactly what’s going on? From this study, I honestly think it’s hard to tell. Only 714 people were involved, and although the recovery times for participants who took pills (placebo or Echinacea) were in fact shorter than for those who simply endured, the differences didn’t seem substantial enough to favor either of my scenarios over the other. In fact, the shorter durations for both placebo and Echinacea point to the possibility that both scenarios could be plausible.
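To get a feel for whether the biggest gap in those numbers (7.03 vs. 6.34 days) could plausibly be chance, here is a minimal sketch using SciPy. The means come from the study; the 3.5-day standard deviation and the roughly-178-per-group sizes (714 split four ways) are my assumptions, since they aren’t reported here.

```python
from scipy import stats

# Group means are from the study; the standard deviation and group sizes
# below are assumptions made purely for illustration.
t, p = stats.ttest_ind_from_stats(
    mean1=7.03, std1=3.5, nobs1=178,  # The Sufferers (no treatment)
    mean2=6.34, std2=3.5, nobs2=178,  # The Blind Sheep (blinded Echinacea)
    equal_var=False,                  # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.3f}")    # roughly p ~ 0.06 under these assumptions
```

Under these made-up spreads, even the largest difference in the table comes out only borderline, which matches my sense that the numbers aren’t substantial enough to settle either scenario.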

A little confused, I turned back to Google and stumbled across this Washington Post article by Carolyn Y. Johnson that condemns the use of over-the-counter cold drugs, citing three different studies as evidence. Two of these studies focused on the decongestant phenylephrine, an ingredient found in many cold and allergy medicines sold in drugstores. The more recent of the two was completed in 2015 and tested the effectiveness of phenylephrine hydrochloride against a placebo. In this trial, 539 participants were randomized into two sub-groups: one took increasing doses of 10, 20, 30, and 40 mg over the experimental period, while the other took a placebo over the same seven days. The primary measurement was the average change from baseline (the first day) over the whole treatment period, calculated from a daily nasal congestion “score.”

This trial, much like the first I looked at, saw no startling differences between the placebo treatment and the over-the-counter treatment. In fact, placebo seems frighteningly similar to over-the-counter drugs in its effectiveness. If this is the case, why are ineffective decongestants still on the shelves? Johnson believes it comes down to the revenue brought in by upper respiratory medicines. In 2015 alone, sales of these remedies came out to around 1.1 billion dollars (Consumer Healthcare Products Association).

Circling back to my original question, I feel as if the second scenario is the more plausible answer. While others could argue that the over-the-counter medicine used in both studies I’ve cited did in fact alleviate the cold symptoms of sick people, I can point out that those who took placebo improved at almost the same rate as those who took actual drugs. Neither study analyzed whether the participants given actual medicine believed in the placebo effect, so how can we be sure they didn’t improve simply because they have faith in the power of medicine?

So, SC200, my question to you is this: Are you one to take over-the-counter cold medicine? Do you find it to be effective? If so, have your feelings changed after reading this? In all honesty, mine haven’t. I still won’t be wasting my money on medicine that doesn’t help, but then again, I’ve always gotten by as one of The Sufferers.

I got a rock.

Last Thursday in class we took our first pop quiz on an article that summarized an extremely thought-provoking study. As we can all probably remember, the TIME Magazine article described a depression study done on hamsters. When the hamsters were exposed to light at night, they suffered changes to their hippocampus. The decrease in melatonin resulting from this could be a cause for depression (Kluger).

Our last question on the quiz asked if rational people should shut the blinds based on the experiment. As we went over the quiz in class, we deduced that a rational person should follow this suggestion, because as Andrew said, making the small sacrifice of shutting off the lights every night is better than the horrible depression that could eventually arise from keeping them on.

But what if there were a light that, when kept on at night, actually improved our health? SC200, it exists. On Monday night, I received an unexpected package in the mail from my mom’s best friend, Karen. Karen is famous for sending me gifts that some would consider “Zen” while others label “crunchy.” This gift was no exception. I tore open the package and came face to face with…drum roll please…this:

A chunky, heavy, light-up rock. The proper name: a Himalayan salt lamp. Basically, these lamps are oversized slabs of Himalayan salt surrounding a light bulb (WellnessMama). I know; it didn’t really seem like anything special to me either. But these lamps supposedly do much more than decorate your most frequented room! In fact, here is a list of the top ten reasons to purchase one.

The selling point of the Himalayan salt lamp is its claimed ability to produce negative ions. Negative ions and positive ions both exist in the natural world, but they come from opposite sources. Electronic devices and machinery give off positive ions, while natural occurrences like rainstorms, lightning, and sunlight produce negative ions. Negative ions cancel out positive ions, supposedly purifying the air of the toxins that electronic devices tend to give off (WellnessMama).

Not only are Himalayan salt lamps said to purge the air of electronic toxins, but they are also supposed to clear mold, dust, and bacteria (all of which are claimed to carry positive charges) from the air. All of these negative ions at work are said to naturally increase the flow of blood and oxygen to our brains, aiding our sleep patterns and concentration (Taylor).

So this brings us back to the debate of whether or not to sleep with this light on. Personally, my roommate and I have kept our salt lamp on every night since Monday; consequently, we’ve both noted slight improvements in our sleep patterns and in our overall ability to relax and breathe in the room. While it is too soon to tell whether this will have a depressing effect on us in the long run (or whether we will actually benefit from the lamp at all), we both agree that the dim orange glow is more soothing than sad. Sorry to say, Charlie Brown, but getting a rock isn’t always a bad thing.

Additional Sources:

Kluger, Jeffrey. “Depressed? Kill All the Lights.” TIME 29 Nov. 2010: n. pag. Print.

A Left-Handed Death Sentence?

As a lefty, I have often been teased by friends and strangers alike about my shortcomings with school desks, scissors, and of course binders and spiral notebooks. I’ve proudly displayed my ink smudges after an in-class essay or a fervent session of note-taking. It’s honestly gotten to the point where a person’s discovery of my left-handedness usually goes as follows:

Stranger/New Friend: “Wait, you’re left-handed?”

Me: “Yes, I am in fact spawned by the devil!”

But all joking aside, my least favorite lefty “fact” that people regurgitate is the left-handed-people-die-earlier claim. Come on, do you really think I want to hear that? I’m an eighteen-year-old girl with goals and the drive to accomplish them…I don’t need you to constantly remind me that I’m expected to die earlier.

But then we discussed the association of wormy kids with bad grades in class, and we learned that correlation doesn’t always equal causation. This situation has three primary possible scenarios:

  1. People who are left-handed are more likely to die earlier.
  2. People who die earlier are more likely to be left-handed.
  3. Left-handedness and early death are both affected by a third variable, but share no relation to each other.

This article from the BBC delves into the possible reasons behind the lefty/early-death correlation. Maybe, it says, early death can be attributed to the ongoing difficulty of using everyday objects, like utensils and office supplies. But Hannah Barnes, the article’s author, doesn’t think something this insignificant, however annoying, could subtract almost ten years from a life. I have to agree. My daily struggles with my knife and fork irk me incessantly; nevertheless, I’ve adapted, and I don’t see something I rarely think about anymore as the cause of my premature death.

But Barnes couldn’t understand why this “myth” persisted, since the evidence behind it didn’t seem sufficient. Chris McManus, a professor of psychology and medical education at University College London, answered Barnes’ skepticism in his book Right Hand, Left Hand. McManus sees right through the speculation, finding a key flaw in the research behind it. Researchers studied the death records of Southern Californians but paid no attention to people still living at the time of the study. By omitting the living, the researchers failed to account for the rising prevalence of left-handedness. At the time of the study, left-handed people skewed younger, so those who died young naturally had a higher chance of being left-handed, feeding the myth (Barnes).
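McManus’s point is subtle, so here is a toy simulation of it. In the sketch below, handedness has zero effect on lifespan, but left-handedness becomes more common in later birth cohorts; among deaths recorded in a fixed window, lefties still come out younger. Every number is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
birth_year = rng.integers(1880, 1990, n)
# Invented prevalence: left-handedness rises from ~3% (born 1880) to ~12%
# (born 1989), mimicking its growing acceptance over the 20th century.
p_left = 0.03 + 0.09 * (birth_year - 1880) / 110
lefty = rng.random(n) < p_left
lifespan = rng.normal(75, 10, n)  # identical distribution for both groups
death_year = birth_year + lifespan

# Examine only deaths in a ten-year window, like the death-records study.
recent = (death_year >= 1985) & (death_year <= 1995)
print("lefties :", lifespan[recent & lefty].mean())   # comes out younger...
print("righties:", lifespan[recent & ~lefty].mean())  # ...despite equal lifespans
```

The lefties in the window die younger on average even though the simulation gave both groups the same lifespan distribution, which is exactly the artifact McManus describes.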

Image found here

Another factor behind the myth’s inaccuracy is the forced right-handedness common during the 1800s and 1900s. Many lefties were forced to use machinery intended for righties, eventually adapting and becoming fully right-handed. So people whose deaths fed the study’s data may have been born left-handed but lived as righties, further muddying the results (Barnes).

After careful thought, I’ve decided that this myth comes down to scenario three—left-handedness and early death share no real relation to each other. The confounding factors here are flaws in the data: an insufficient data pool (death records only) and inaccurate handedness labels (left-handed people raised as righties). From now on, I’ll have an educated comeback for anyone who tries to tell me I’ll die prematurely…and I have science to thank for that!

Eat your greens…eventually

As I sat down to lunch in the Redifer Dining Commons yesterday, I noticed something extremely interesting about my plate. It was piled high with hearty helpings of cauliflower and mixed steamed vegetables, including peppers, lima beans, and broccoli. I was pretty impressed, considering 15-year-old me would have opted for mashed potatoes and called it the daily veggie serving. This led me to a question I’ve always wondered about: Why do children resist eating vegetables? Is there scientific reasoning behind it, or does the fault lie with parents constantly shoving vegetable baby food down their children’s throats? Following the motto of my initial post, I concluded that there has to be a scientific explanation for this stereotype.

And it turns out that I was correct in my assumption. There is a scientific explanation for children’s resistance to vegetables, and it is known as food neophobia. Simply put, it’s a child’s disinclination to eat new or unfamiliar foods. But food neophobia isn’t the only reason young kids avoid their vegetables. According to this research review from Appetite, food neophobia and picky eating together make for the two biggest culprits in the childhood boycott of vegetables. A child’s age, gender, social surroundings, and personality can all affect her level of pickiness; furthermore, variables like pressure, parenting styles, and feeding styles have an effect on how long a child experiences food neophobia or fussy eating. Even visual appeal plays a role: the more unfamiliar a food looks, the more likely a child is to reject it.

image found here

The same excerpt from Appetite also presented an explanation for our growing tolerance of healthy food. As toddlers, our decreasing dependency on our parents forces us to rely on survival instincts, one of which is the rejection of bitter foods. Young taste buds are not developed enough to enjoy bitterness and instead react negatively to it as a defense mechanism against what the body interprets as poison. In short, young children NATURALLY spurn bitter foods like vegetables. However, as we grow and our senses develop, we adapt to the bitterness in most vegetables, which explains why I can now stack my plate with all different shades of green, orange, yellow, and red. It seems as if picky eating and food neophobia are just two more things we can add to the list of “phases” our parents love to discuss.

What I found most fascinating about this particular topic was its relevance to our first class discussion with Andrew: that science is anti-authoritarian. No one forced me to research why kids dislike eating vegetables; I went about it on my own terms, out of simple curiosity. The same goes for the scientists who researched food neophobia of their own volition. Like Andrew said, many believe science to be de-humanizing, when in fact it is one of the subjects humans participate in and relate to the most.

The semi-disappointing piece of this puzzle is the fact that there really is no particular way to ensure kids will willingly consume vegetables from their first go at eating solids. Luckily, the Washington Post has provided us all with a short list of ways to get kids to eat their vegetables, so if/when food neophobia strikes our households in the distant future, we have something small to fall back on!

General Appreciation

Hey SC200, my name is Mary, but I go by Molly. I chose to take this science course because I see myself as a person who enjoys critical thinking and deep reflection as opposed to facts and memorization. In high school, all of my science courses consisted of formulas I found confusing, labs that bored me to tears, or dissections that left me feeling queasy. Walking out of class, I usually felt like this:

Science meme

However, I still have a genuine appreciation for science, because it surrounds me in all aspects of my life. Scientists created the car I drive. They invented the technology that tells me how to dress appropriately for the weather. They are even responsible for the reconstructed ACL in my right knee.

But like I said…while I have a real appreciation for science, I’m not too excited by the nitty-gritty details of biology, chemistry, and physics. I’ve always taken an interest in math and languages, and I consider my ability to organize and lead a team efficiently my best skill. For this reason, I plan on pursuing a career in business and tackling fluency in Spanish. However, I do see the importance of understanding (even a little) a subject that plays such a primary role in my life. Here is a Canadian article I found that explains the importance of being scientifically literate. The content is consistent with my views on the importance of science…hopefully at least one of you agrees with it too!