Scientific Studies: The Good and the Bunk


It’s easy to be misled by the sheer range of studies out there. Dressed up in flowery language and an air of pretension (what Andy Kessler dubs “sciencey”), even debunked studies still shape our perceptions of the world. It’s worth asking: what are the merits of these studies, and what conclusions can actually be drawn from them?

Kessler believes a failed study begins with a biased sample. In particular, he takes issue with psych labs that try to prove assumptions about all of mankind through the eyes of “hungover grad students.” I couldn’t agree more. One of the central steps in a statistical study is choosing a sample that is representative of the population to which the conclusions will be applied. For convenience, many studies take shortcuts in sampling, and those shortcuts show up in the results. Researchers then extrapolate to the general public, and somehow this is acceptable? Another flaw with these ‘pop behavioral science’ studies is that their results can’t be reproduced, a red flag for biases in the original work.
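
To make the sampling point concrete, here is a minimal Python sketch (my own illustration, not from Kessler’s piece) of how a convenience sample drawn only from young adults can misestimate a trait that varies with age. The population, the trait, and every number in it are made up purely for demonstration.

```python
# Illustration of convenience-sample bias; all values are invented.
import random

random.seed(0)

# Hypothetical population: risk tolerance declines with age.
population = [{"age": random.randint(18, 80)} for _ in range(100_000)]
for person in population:
    person["risk_tolerance"] = max(
        0.0, 1.0 - person["age"] / 100 + random.gauss(0, 0.05)
    )

def mean_risk(people):
    return sum(p["risk_tolerance"] for p in people) / len(people)

# "Hungover grad students": a convenience sample of 18-30 year olds only.
convenience_sample = random.sample([p for p in population if p["age"] <= 30], 200)

# A representative alternative: a simple random sample of the whole population.
random_sample = random.sample(population, 200)

print(f"True population mean:    {mean_risk(population):.2f}")
print(f"Convenience sample mean: {mean_risk(convenience_sample):.2f}")  # noticeably too high
print(f"Random sample mean:      {mean_risk(random_sample):.2f}")       # close to the truth
```

The convenience sample overstates the population’s risk tolerance no matter how many participants you add, because the error comes from who was sampled, not how many.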


I believe we should all be skeptics when presented with a study – a somewhat ‘guilty until proven innocent’ approach. While it’s not fair to discredit studies as a whole, it’s hard to deny that many are misleading. The results of debunked studies make good headlines, so the initial conclusion is hard to shake even as new research disproves it. Information asymmetry is dangerous, but information in the hands of all is priceless, which is why it’s worth taking a second look. What organization is sponsoring the study? Could it have a stake in the results? Are the researchers credible? The participants? Last but certainly not least, do other studies reach the same conclusion? Here’s John Oliver tackling some of these questions and more.

For my paper on the changing landscape of automation, I will be pulling from existing data on the ‘4th Industrial Revolution’ and historical statistics on the number of jobs displaced by automation in earlier Industrial Revolutions. We are still in the midst of this transformation, so we have to rely on economic forecasts to predict what the future holds. Many of these projections vary wildly, so I’ll need to make sure my research is consistent and verifiable. I plan on using consumer behavior studies to show trends in automation in the retail industry. These studies are especially prone to bias, so I know to look out for it when digging through my sources. Citing varied studies is just as important as citing reliable ones. I shouldn’t use studies to drive my paper, and I also shouldn’t cherry-pick studies to prove a point, or, as psychologists would say, confirm a bias. Studies show…..ah, never mind!
