STEM Biography: Jennifer Doudna and Emmanuelle Charpentier

We’ve all heard of gene editing in one context or another. For most of the term’s existence, gene editing remained more science fiction than reality, until Jennifer Doudna and Emmanuelle Charpentier developed a mechanism by which it can feasibly be done in a real-world setting: CRISPR technology.

Jennifer Doudna

Jennifer Doudna and Emmanuelle Charpentier won the Nobel Prize in Chemistry in 2020 for their work on CRISPR, which has been called the most revolutionary development in biology in the past few decades. Their early work in the 2010s was critical in characterizing how the CRISPR-Cas9 system works. In the past eight years or so, CRISPR has become a common term to hear in laboratories across the world.

Emmanuelle Charpentier

CRISPR is short for “clustered regularly interspaced short palindromic repeats” and is a microbial ‘immune system’ that prokaryotes (bacteria and archaea) use to prevent infection by viruses known as phages. CRISPR gives prokaryotes a way to recognize the precise genetic sequences that match a phage and to target those sequences for destruction using specialized enzymes.

While previous work had identified the CRISPR-associated (Cas) proteins, Charpentier discovered a key component of the CRISPR system: an RNA molecule required to direct the enzymes to the correct target sequences. In 2012, Doudna and Charpentier showed that the system could be programmed to cut specific sites in isolated DNA. This programmable gene-editing system has inspired a boom in medicine, agriculture, and basic science.

Their work has opened an entirely new arena of biology to explore, and we have yet to fully grasp the magnitude of what may arise from it. However, the increasing reality of gene editing does raise many ethical concerns, especially as it may make it easier for parents and medical professionals to “select for” or “modify” embryos to possess certain traits and characteristics. With that in mind, it is entirely possible that we will be faced with a new wave of eugenics as people once again use biology and “science” to reinforce oppressive and racist ideology.

In fact, this editing of the human genome has already begun: Chinese scientists have already created the first “CRISPR” babies. While it remains unclear how the future will unfold, I believe science will play an increasing role in policy and legislation as people hold competing views on how these technologies should be used and regulated.

STEM Biography: Edith Clarke – First Female Electrical Engineer

Close your eyes and picture an inventor. You probably imagine an eccentric, skinny genius in a lab coat with frizzy hair. What you’ve conjured up in your brain is reflective of the majority of people inducted into the Inventors Hall of Fame. One of the most influential members of that Hall of Fame, however, was Edith Clarke, the first woman to earn an electrical engineering degree from MIT.

Her story is one of resilience and determination. Upon graduating from MIT, a world-renowned institution, Clarke initially struggled to find a job, as many firms and corporations didn’t view engineering as a woman’s field of work. Eventually she took a job at General Electric as a supervisor of computers (at that time the term referred to people, typically women, who were tasked with performing intensive calculations for engineers), a position for which she was vastly overqualified.

""

Edith Clarke, a pioneer for women in the field of engineering.

Instead of resting on her laurels, Clarke took advantage of her spare time and invented the Clarke calculator, a graphical device used to solve electric power transmission line problems. She applied for a patent in 1921, and it was granted in 1925.

Recognizing her hard work, GE gave Clarke an electrical engineering role in 1923, making her the first woman to hold a professional position as an electrical engineer in the US. During her long career at GE, she developed mathematical methods that simplified and improved the work of electrical engineers, and she published 18 technical papers. Additionally, her textbook, Circuit Analysis of A-C Power Systems, became an industry standard in her time.

She retired from GE in 1945 and became the first female professor of electrical engineering at the University of Texas at Austin. She spent the next decade teaching there before passing away in 1959.

In his paper “From Computer to Electrical Engineer — the Remarkable Career of Edith Clarke,” Dr. James E. Brittain writes: “As a woman who worked in an environment traditionally dominated by men, she demonstrated effectively that women could perform at least as well as men if given the opportunity. Her outstanding achievements provided an inspiring example for the next generation of women with aspirations to become career engineers.”

Line graph showing percentages of women by discipline

Data from the Bureau of Labor Statistics.

Today women make up 55% of biological scientists, 36% of chemists and materials scientists, 26.9% of computer and mathematical occupations, and only 16.7% of architecture and engineering occupations. Due in part to the pushback against women in engineering fields following WWII, the percentage of women in engineering has struggled the most to improve in the century since Clarke’s time.

STEM Biography: Eunice Foote – Pioneer Climate Scientist

Most people our age have grown up hearing about greenhouse gases and their effects on the environment, how they contribute to the changes in climate that we’re seeing around us. While greenhouse gases may seem like a new development, the reality is that we’ve known about their presence for a while. This leads to the question: “who discovered greenhouse gases?”

A quick Google search will tell you that John Tyndall, an Irish physicist, discovered greenhouse gases and established our foundational understanding of them, and throughout much of history this achievement was credited to him. In actuality, however, it was Eunice Foote, an American scientist, who performed the experiments that demonstrated the existence of greenhouse gases.

Eunice Foote was a pioneer in her field. She performed experiments in the 1850s that demonstrated the ability of atmospheric water vapor and carbon dioxide to have an effect on solar heating. Her experiments foreshadowed John Tyndall’s later experiments that illustrated the workings of Earth’s greenhouse effect — she beat him by at least three years!

Drawing of Eunice Foote by Carlyn Iverson, NOAA Climate.gov.

Despite her remarkable contributions and insight into the influence of raised CO2 levels on the Earth’s temperature, she went forgotten in the history of climate science until very recently.

Foote used glass cylinders, each encasing a mercury thermometer, and found that the heating effect of the sun was greater in moist air than in dry air, and that it was highest in a cylinder containing CO2. While her experimental design was simple and didn’t give much insight into the exact mechanism at work, Foote was still able to draw remarkable conclusions from her studies, writing in 1856:

“An atmosphere of that gas would give to our earth a high temperature; and if as some suppose, at one period of its history the air had mixed with it a larger proportion than at present, an increased temperature…must have necessarily resulted.”

Her findings were presented in August 1856 at the annual meeting of the American Association for the Advancement of Science (AAAS) by a male colleague, Joseph Henry. However, neither her paper nor Henry’s presentation of it was included in the proceedings of the conference.

Foote’s paper, “Circumstances affecting the heat of the Sun’s rays.”

A summary of her work was published in the 1857 volume of Annual of Scientific Discovery by David A. Wells. When reporting on the meeting, Wells wrote: “Prof. Henry then read a paper by Mrs. Eunice Foote, prefacing it with a few words, to the effect that science was of no country and of no sex. The sphere of a woman embraces not only the beautiful and the useful, but the true.”

Her work was later cited in a column in the September 1856 issue of Scientific American to counter the widely held sentiment that “women do not possess the strength of mind necessary for scientific investigation.” The writer highlighted her experiments as direct evidence to the contrary: “the experiments of Mrs. Foote afford abundant evidence of the ability of woman to investigate any subject with originality and precision.”

While much has changed since the time of Eunice Foote, there is still much work to be done to acknowledge the past contributions of women scientists and ensure that future efforts do not go unseen.

STEM Biography: Katherine Johnson

In 2016, the movie Hidden Figures came out, detailing the incredible contributions Katherine Johnson made to the Apollo missions to the moon. Without her doing the math by hand, we would never have achieved that incredible feat. Unfortunately, most people still do not know her name.

Katherine Johnson, sitting at her desk at NASA.

As the United States faced World War II, the push for aeronautical advancement led to an increasing demand for mathematicians: people who could crunch the numbers and allow engineers to focus on other aspects of their work. In other words, there was a growing need for “human computers” to carry out complex mathematics before digital computers were available to handle that burden. Women were the obvious choice for these highly computational roles. Katherine Johnson was one of the “West Computers,” so named after the building in which they worked.

Katherine was gifted with numbers from an early age: she started attending high school at 10 years old and graduated with honors from West Virginia State College with bachelor’s degrees in mathematics and French. In 1953 she began work at the National Advisory Committee for Aeronautics’ West Area Computing unit, where she analyzed test data and provided mathematical computations vital to the success of the U.S. space program at the time.

Her contributions extend far beyond her work on the Apollo missions: she became the first woman in her division to receive credit as an author of a research report when, as a member of the Space Task Group, she coauthored a 1960 paper on how to place spacecraft in orbit. She went on to author or coauthor 26 research reports over her career.

Mercury Seven Astronauts in Spacesuits

Astronauts of the Mercury Program: the first U.S. space explorers.

She played a crucial role in NASA’s Mercury program (1961-1963), which developed the first U.S. crewed spaceflights. In 1961, she calculated the trajectory for Freedom 7, the spacecraft that carried the first U.S. astronaut, Alan B. Shepard Jr., into space. In 1962, at the request of astronaut John Glenn, Katherine verified by hand that the electronic computer had planned his flight correctly. Glenn went on to make history aboard Friendship 7, becoming the first U.S. astronaut to orbit the Earth. Most notably, she was part of the team that calculated where and when to launch the rocket for the Apollo 11 mission, which carried the first astronauts to land on the moon.

Clearly, without Katherine Johnson, we would not have the NASA we know today. Perhaps even the Space Race and Cold War would have ended up differently if not for her incredible accomplishments and contributions to NASA’s space program. Given how impactful her efforts are on our everyday lives, we should all know the name Katherine Johnson.

STEM Biography: Marie Curie

It is unlikely that anyone has gotten through a high school chemistry class without hearing the name Marie Curie uttered at some point. Known for her work on radioactivity, Marie Curie was a two-time winner of the Nobel Prize, the first woman to win a Nobel Prize, and the only woman ever to win the award in two different fields.

Marie Curie in her laboratory.

Marie Curie was born in Warsaw, Poland, the daughter of a secondary-school teacher from whom she learned mathematics and physics. As a young student she became involved in a students’ revolutionary organization and soon realized it was necessary for her to leave Warsaw. She continued her education in Paris, receiving licentiateships in physics and the mathematical sciences from the Sorbonne (a university degree between a bachelor’s and a doctorate). She earned her Doctor of Science degree in 1903 and was eventually appointed Director of the Curie Laboratory in the Radium Institute of the University of Paris, founded in recognition of her work.

Marie Curie and her husband carried out revolutionary work on radioactivity, isolating two new elements (polonium and radium), which is no easy feat. Marie Curie developed methods for separating radium from radioactive residues, allowing it to be characterized and its properties carefully studied. She took a special interest in its potential therapeutic properties.

Before the end of the 19th century, surgery was the only treatment option available to cancer patients, and that only worked if the cancer was localized to a specific region of the body and had yet to spread. However, it is very difficult to detect cancer before it spreads to other regions of the body. The discovery of X-rays and radium changed this as treatment with ionizing radiation was developed. The goal of radiotherapy is to kill the cancer cells via targeted radiation.

Radiotherapy machine, 1967

Radiotherapy machine in the 1960s.

Marie Curie’s discovery gave cancer patients an alternative treatment method, one that is still used today in conjunction with oncology drugs specialized for targeting cancer cells.

She is an incredible role model for all women in STEM, especially considering that women could not yet vote in America while Marie Curie was busy discovering radioactivity and encouraging the use of radium to alleviate suffering during World War I. She was a brilliant scientist who sought to improve the world around her, something we should all strive to do.

STEM Biography: Rosalind Franklin

Anyone who has taken a biology course that covered DNA has likely heard the names Watson and Crick. However, many are likely unfamiliar with the name Rosalind Franklin, despite the fact that it was her X-ray diffraction images of DNA, and her insights, that made deciphering the structure of DNA possible.

British scientist, Rosalind Franklin

Rosalind Franklin was one of the most pivotal scientists of the 20th century.

Rosalind Franklin is one of the most unfairly sidelined scientific figures in history. She was born on July 25th, 1920 and unfortunately died from cancer on April 16, 1958. She pursued physical chemistry at the University of Cambridge before turning down a fellowship in 1941 to assist in the war effort.

She later used that wartime research for her doctorate, which she received in 1945. From 1947 to 1950 she worked at the State Chemical Laboratory in Paris, where she studied X-ray diffraction techniques.

X-ray diffraction is “the only laboratory technique that non-destructively and accurately obtains information such as chemical composition, crystal structure, crystal orientation, crystallite size, lattice strain, preferred orientation and layer thickness.” It is an incredibly powerful tool for obtaining a detailed view of a sample’s internal structure. X-rays from a source illuminate the sample and scatter off it, and the specific diffraction pattern that results allows scientists to determine the sample’s structure. While X-ray diffraction technology has come a long way since Rosalind Franklin’s time, it is still a commonly used technique today.
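
To make the idea of a diffraction pattern a bit more concrete, here is a small illustrative Python snippet built around Bragg’s law, n·λ = 2·d·sin(θ), which relates the spacing between repeating planes in a sample to the angles at which X-rays constructively interfere. The 1.54 Å wavelength is just an example value (roughly that of a copper Kα X-ray source), and the 3.4 Å spacing is the well-known distance between stacked bases in B-form DNA; this is a back-of-the-envelope sketch, not how modern diffraction software works.

```python
import math

def bragg_angle(d_spacing_angstrom, wavelength_angstrom=1.54, order=1):
    """Return the diffraction angle theta (degrees) from Bragg's law:
    n * lambda = 2 * d * sin(theta).
    Default wavelength ~1.54 Å, typical of a copper K-alpha X-ray source."""
    sin_theta = order * wavelength_angstrom / (2 * d_spacing_angstrom)
    if sin_theta > 1:
        return None  # no diffraction possible for this order and spacing
    return math.degrees(math.asin(sin_theta))

# Example: the ~3.4 Å spacing between stacked bases in B-form DNA
print(bragg_angle(3.4))  # ≈ 13.1 degrees
```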

Diagram showing x-ray passing through a lead screen, crystalline solid, photographic plate showing spots of diffracted x-rays.


Later in her career, Rosalind Franklin worked in the biophysics unit at King’s College London, where she applied what she had learned about X-ray diffraction to study the structure of deoxyribonucleic acid (DNA). When she started working there, little was known about the chemical makeup or structure of DNA. She soon measured DNA’s density and established its helical conformation.

The determination of the molecular structure of DNA had a huge impact on our understanding of biology. An often repeated phrase in biology is that “structure determines function.” For example, the structure of DNA suggested a potential mechanism for the inheritance of the genetic information it contains, a mechanism that had eluded biologists since Darwin coined the term “descent with modification.”

Rosalind Franklin’s work producing clearer X-ray patterns of DNA molecules made it possible for James Watson and Francis Crick to later propose that DNA is a double-helical polymer. They would not have been able to make that inference without her clear images of the DNA molecule. Despite this, Franklin did not stand with them to receive the Nobel Prize: she died of cancer in 1958, four years before the prize was awarded, and the Nobel is not given posthumously.

Emergent Technologies

As one of the final blog posts of the semester, I felt it would be prudent to do one last overview of some developing technologies. There are five emergent technologies that I believe are likely to become increasingly influential in the coming years: CRISPR gene-editing technology, XR hardware and software, artificial intelligence, cloud computing, and specialized medicine.

CRISPR

Emmanuelle Charpentier (left) and Jennifer A. Doudna (right) are the 2020 recipients of the Nobel Prize in Chemistry for their work on CRISPR gene-editing technology.

Emmanuelle Charpentier and Jennifer A. Doudna received the 2020 Nobel Prize in Chemistry, and the technology they characterized has already begun to revolutionize science and medicine. The premise of CRISPR is that it takes advantage of a pre-existing pathway in bacteria that evolved as an immune defense against viruses (bacteriophages). In essence, the bacterium keeps a record of viral DNA it has encountered in the past. When it runs into that same DNA again, it deploys a protein complex that seeks out and cuts the viral DNA in half, destroying it and protecting the cell. Now, the applications of CRISPR may not be readily apparent from that description alone; the real magic is that it lets scientists deliberately cut DNA at a chosen sequence and insert the DNA they want there instead, genetically modifying the host cell.
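
To give a rough feel for what “programmable” means here, below is a toy Python sketch (not real bioinformatics tooling): given a 20-letter guide sequence, it scans a DNA string for an exact match followed by an “NGG” PAM motif, the short signal Cas9 needs next to its target, and reports where the cut would land, about three bases upstream of the PAM. The sequences are made up purely for illustration, and real tools also handle the reverse strand, mismatches, and much more.

```python
import re

def find_cas9_cut_sites(dna: str, guide: str) -> list[int]:
    """Toy illustration: find positions where a Cas9-style cut would occur.

    Cas9 binds where the 20-nt guide matches the DNA and is immediately
    followed by an 'NGG' PAM; the cut lands ~3 bases upstream of the PAM.
    """
    dna, guide = dna.upper(), guide.upper()
    cut_sites = []
    for match in re.finditer(re.escape(guide) + r"[ACGT]GG", dna):
        cut_sites.append(match.start() + len(guide) - 3)
    return cut_sites

# Hypothetical 20-nt guide and target; real guides are designed against a genome.
guide = "GATTACAGATTACAGATTAC"
dna = "TTT" + guide + "TGG" + "AAAA"
print(find_cas9_cut_sites(dna, guide))  # [20]
```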

Targeted gene editing of this kind was far more difficult before CRISPR, and it will only get better as time goes on. While there are many positive use cases for this technology, such as gene therapeutics for genetic diseases, it can just as easily be used to carry out oppressive practices in society (watch the movie Gattaca if you’re curious).

Related to these low-level modifications of the genome is the emergence of increasingly specialized medicine and healthcare.

Specialized Medicine

The human genome has been fully sequenced and annotated, and every day we get closer to understanding how each part of it contributes to variation in expression. DNA sequencing has become cheaper and more practical: an entire person’s genome can now be sequenced in about a day.

Once again, DNA sequencing technology is a dual use tool. While it can be used for more specialized healthcare, improving the quality of life of many, it could also be used as a determinant in one’s freedom and mobility in society. Once again, I’ll call upon the example of Gattaca.

Poster for the film Gattaca. A film about a dystopian future in which your genome determines your entire life.

In the film, people are able to determine, with near 98% certainty, when someone is likely to die, based solely on the content of their genome. Because this information is freely accessible to anyone who can obtain a DNA sample, employers use it as a key determinant in hiring: are they healthy enough? How expensive will their insurance be? Are they worth the investment?

While it may be exaggerated in the film, these are becoming increasingly pressing issues when one takes into account the rise of machine learning technologies, tools specialized in the interpretation of large amounts of data. What are you going to do when an ML model says that you aren’t fit to live somewhere based solely on the contents of your bloodstream and a claimed 98% accuracy?

Artificial Intelligence and Machine Learning

As I’ve discussed a multitude of times throughout this blog series, machine learning and artificial intelligence are very powerful emergent technologies that will be key determinants in our future. Such powerful tools should not be left unchecked and in the hands of the few. I believe that regulation is necessary — immediately — before large “Big Data” companies turn us into their product, harvesting our information, feeding it into an algorithm, and selling the product back to us as if it’s something we didn’t own in the first place.

XR

Speaking of “Big Data” companies, let’s delve into the Metaverse for a second. While grossly overhyped when Meta initially announced its name change, I believe that XR (a term covering virtual reality and augmented reality) has its place among the technologies that will shape our future. Currently, most of our devices and screens are confined to two dimensions; we interact with flat screens as a way of communicating. Imagine how dramatic a change to three dimensions will be.

While there are many dystopian aspects to the Metaverse that Meta is trying to sell to people, I believe that the creative aspects of Virtual and Augmented Reality are not to be undersold. The emergence of two dimensional images led to the eventual rise of 2D movies and the film industry.

Think about how much the film industry has shaped entertainment over the past century, now imagine how crazy it’ll get when studios are able to produce three dimensional movies and stories. Perhaps instead of being a simple observer, you get to shape the story yourself — who knows.

The two-dimensional film industry is so well established that there is a clear methodology for how to create a movie; it’s so ingrained in the culture that it seems obvious now. However, few people realize how experimental the early years of film production were, as people struggled to figure out what worked. I believe we are going to see something very similar with VR and AR in the coming decade.

Cloud Computing

“Save your data in the cloud” is a phrase we’ve all heard before. It’s more protected, as shown by how Ukrainian cybersecurity has been bolstered by cloud-based servers, and it’s more powerful, since you can take advantage of more advanced machinery. The revenue Amazon has generated through server hosting highlights our growing dependence on cloud-based technologies. This will only be amplified in the future as more people flock to AI development and compete for the scarce GPU resources available from retailers.

These are just some of the fields that I find interesting and believe will be increasingly important in the future. But then again, some of the most revolutionary technologies have been those that people weren’t even aware of a year before, creating an entirely new field for innovation.

Rights to Creation: The Creator and WGA

I’m sure some of you saw the trailer for The Creator, a movie that hit theaters sometime in September. The trailer appeared to promise an action-packed intellectual movie that grappled with the question of what AI will mean for the future and how our relationship with it will develop over time. While the movie began filming in January of 2022, its timing could not have been better, nor could it be more ironic.

Poster image for The Creator movie

According to a review by Christy Lemire, The Creator is not as innovative as it appears to be. She notes that its central trope has been reused excessively throughout film history: the main kid (the Creator) is an all-powerful being who could be humanity’s savior or destroyer. Sound familiar?

What’s ironic about this blatant reuse of tropes is that it’s almost as if an AI was used to create the plot of the movie: regurgitating previous work instead of creating something new and unique. The irony turns problematic when one remembers the Writers Guild strike that went on for the past several months.

That strike ended on September 24th when the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers announced that an agreement had been reached. This was Hollywood’s second longest strike, and it appears to have been a successful one as the “deal is exceptional” according to WGA leaders.

The agreement doesn’t prevent writers or productions from using generative AI but lays out rules prohibiting the use of the software to reduce or eliminate writers and their pay.

“A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services” (Statement from the agreement).

Writers Guild of America on Strike

This agreement seems to be a good first step towards regulating AI in terms of putting the power into the hands of the individual, rather than that of large corporations.

Additionally, the agreement states that the WGA has the “right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.”

Besides the simple use of AI tools to replace writers and lower their pay, another major concern has been the exploitation of writers’ work and ideas: using their creations to train the very AI models that could potentially replace them.

Regardless of the outcome of this strike (once the final details are released), I believe it will serve as precedent for future discussions regarding the regulation of AI tools and the protection of individual data and rights.

The question that needs answering, as soon as possible, is who owns what. For example, if data is scraped from a website and used to train an AI model, the company that owns the website will likely cry out that it wasn’t compensated for the use of that information. But that same company isn’t compensating the people who actually created that content and posted it online.

It becomes a tangled web of ownership and rights, one that I am unsure will be untangled properly without damaging something or someone.

Learning Machine Learning

“Knowledge is not power, it is only potential. Applying that knowledge is power” — Takeda Shingen (1521-1573)

Alright, so we’ve talked quite a bit now about AI, its ramifications and ethical questions, the state of the internet, etc. Now I think it would be quite pertinent to discuss how one might go about learning machine learning. I think it’s important to have a base understanding of how ML models operate, as misunderstanding their base functionalities can lead to poor regulatory decisions down the line and can influence the way people view the applications of machine learning.

If AI is applied in a regulated manner, it has the potential to lead to a world in which humans and machines can coexist.

To preface this blog, I’d like first to say that I am a self-taught machine learning engineer; my knowledge comes from hands-on experiences with developing my own projects and researching online.

If you are looking to understand machine learning, there are a few categories that you may belong to.

  1. You may be a programmer looking to understand the way machine learning works and want to learn enough about it so you can implement it into your own projects.
  2. You may be interested in pursuing a career in machine learning and want to understand the low-level math behind the models.
  3. You may be an entrepreneur looking to understand machine learning from a business perspective so that you can take advantage of its benefits to create an attractive business model.
  4. You may simply be interested in gaining a high-level understanding of a piece of technology that is very likely to become increasingly prevalent in our lives in the coming years.

All of these positions are equally valid, and there are plenty of resources online for how best to learn machine learning (even if you don’t fall into any of those categories). I can relate to all four of these categories myself and will do my best to offer advice on how to pursue each of these goals online.

For all four of these perspectives, I recommend watching <> video on YouTube. They do a good job of explaining the base methodologies behind machine learning and data science — as well as its limitations and use cases. If you don’t have time to watch it, stick around, I hope to do a general overview in a future blog post (which may be helpful for those that fall in category #4).

Groups 1 and 2: Programmers

If you are in groups 1 and 2, I recommend starting out with Google’s ML Bootcamp online course. They explain some of the core concepts plainly and provide hands-on exercises to apply those concepts. After going through it, try creating a simple model on your own, applying what you learned without simply copying code over.

  • Note: One downside to Google’s ML Bootcamp is that it is based on TensorFlow. While it’s a good starting place for the core concepts of machine learning, I recommend switching to PyTorch as soon as possible; it makes debugging much easier in exchange for more low-level coding (see the sketch below for what that looks like).
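
To show what that trade-off toward more low-level coding looks like, here is a minimal PyTorch sketch that fits a straight line to noisy data. It only illustrates the standard pattern you’ll see everywhere (forward pass, loss, backward pass, optimizer step); the data and hyperparameters are arbitrary example values, not a recommendation.

```python
import torch

# Fake data: y = 2x + 1 plus a little noise (arbitrary example values)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.1 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)            # a single-weight linear model
loss_fn = torch.nn.MSELoss()             # mean squared error
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()                # clear old gradients
    prediction = model(x)                # forward pass
    loss = loss_fn(prediction, y)        # how wrong are we?
    loss.backward()                      # backpropagate
    optimizer.step()                     # update the weights

print(model.weight.item(), model.bias.item())  # should be close to 2 and 1
```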

Once you’ve done that, I recommend making it a habit to play around on Kaggle.com: they have ML competitions, hands-on exercises, and a ton of other resources. It’s a great place to learn from others and get a better idea of the strategies behind ML. If you can, I also recommend getting your hands on a copy of “Approaching (Almost) Any Machine Learning Problem” by Abhishek Thakur.

I recognize that I just threw a lot of resources at you, so I’ll end this portion with this: the single best thing I can recommend is creating a machine learning project from scratch. Collect the data yourself, interpret and explore it on your own, determine what kind of model fits the problem you’re trying to solve, and build an MLOps pipeline so you can iterate on your application.
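
As a very rough miniature of those steps, here is a hedged scikit-learn sketch: a built-in dataset stands in for “collect the data yourself,” followed by a train/test split, a simple baseline model, and a held-out evaluation. A real from-scratch project replaces each of these pieces with your own data and decisions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# "Collect" the data (a built-in dataset stands in for your own collection)
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set so the evaluation is honest
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A simple, interpretable baseline: scale the features, then logistic regression
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate on data the model has never seen
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```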

Group 2: Low-Level Understanding

For those in group 2 who are interested in understanding the low-level details, I recommend StatQuest on YouTube (they explain the mathematical concepts very well) and the d2l.ai online book.

To stay up to date with all of the current research developments going on with AI, I recommend the YouTube channel AI Explained; the creator does a great job of summarizing recent developments and diving into what they actually mean for the future.

Group 3: Business Lens

For those with a business lens, I recommend getting involved in the Nittany AI Student Society here on campus. They do a lot of things with machine learning but are definitely more business-focused. Their Nittany AI Challenge is a good place to get real hands-on experience designing an ML application that can be applied to a real-world problem.

Nittany AI Alliance preps students for development and leadership roles ...

Nittany AI Student Society hosts a yearly competition for teams to create “start-up” esque machine learning applications and compete for real money to further their project.

Additionally, I recommend looking into the limitations of AI. Knowing what kinds of problems won’t be solved with AI is likely more beneficial than knowing which ones can be. Knowing when to say no is often more important than saying yes.

Group 4: Users of this Technology

The single best advice I can give here is to be critical of AI applications: what kind of data did they train the model on? Can you think of any potential biases that the model may have? If a person said/did it, would you believe/trust that person at face value?

Being critical and asking questions, looking things up when you don’t know the answer, and reading from multiple sources will provide you with all the information you need to make informed decisions about a topic that is still on the precipice of being implemented everywhere.

While it may feel like these developments are out of our hands, we determine how they are going to shape the future. It will be up to us to decide what regulations to put in place, up to us to choose how to use the AI applications that will be thrown our way, and up to us to handle the ethical and moral conundrums that will inevitably arise from AI.

No pressure.

The Metaverse is Already Here, and it Has Been For A While

Zuckerberg announces Facebook’s rebranding to Meta (totally not a mid-life crisis).

“To reflect who we are and what we hope to build…our company is now Meta.” – Mark Zuckerberg.

Almost two years ago, Facebook changed its name to Meta. They said the change was to better reflect their new ambition of bringing the “Metaverse” into reality; they hoped to claim it as their invention.

The truth is, like many things Meta has claimed to pioneer, they acquired (or rather stole) the term from pre-existing media. The word “Metaverse” was first coined in Neal Stephenson’s 1992 novel, “Snow Crash.” Its story is about a dystopian future where people live their entire lives in virtual reality.

Snow Crash book cover; the novel in which the word “metaverse” was first coined.

What if I told you that the Metaverse is already here, and has been here for decades now?

First, let’s define what I mean when I say Metaverse. For those who have seen the film “Ready Player One,” I am not referring to the software and hardware that make a smooth transition between virtual-reality worlds possible. Instead, I’m focusing on the idea of the Metaverse. To me, the Metaverse is a place not directly attached to reality in which people can create, self-identify, meet other people, and live their lives.

With this definition, it becomes clear how the Metaverse has been around for decades, ever since the creation of online chatrooms. Its magnitude and influence have only grown since then.

So, Meta “claiming” the Metaverse as its invention is akin to someone claiming to be the sole inventor of trains. Meta isn’t even pioneering the virtual reality space: their headsets and platform were created by Oculus and acquired by Meta (some of the branding still says Oculus). Then again, considering the number of FTC (Federal Trade Commission) investigations into Meta’s practices, this isn’t exactly surprising behavior from Zuckerberg.

As technology has progressed, the line between the Metaverse and reality has become so blurred that it can be hard to tell the difference between the two, especially as each influences the other.

I remember reading about a TikTok trend where people with doorbell cameras would leave notes asking Amazon delivery workers to do something, then post the footage online. These workers, who depend on reviews for their jobs, have to do a little dance for some nameless crowd online. The dystopia laid out in Snow Crash is already here. We already live in the Metaverse.

I’m not saying that technology is inherently bad, but when it becomes the first thing you look at in the morning and the last thing you engage with before bed, that’s kind of sad. And I’m not criticizing from a distance; this is something I do myself, and it’s depressing.

What’s the solution?

There’s no perfect solution that I can see. Technology will keep developing, algorithms will get better and better at keeping us hooked, and corporations will keep looking for ways to make money off us.

In fact, I would argue that technology has become a drug, but that’s a topic to explore another day.

On the other hand, we can’t just unplug; the benefits for society far outweigh the negatives brought about by technology. Innovations in medicine, connection, and information sharing are all enhanced by technology.

The only thing we can do is work on ourselves. Take the time to remember the world that we actually live in: the real world, not the Metaverse. Turn off your music sometimes while you’re walking; look at the trees, the sky, the people. Grounding yourself in the present can help combat anxiety, or at the very least give you a break from the overstimulation provided by today’s technologies.

Which is ironic, considering you’re reading this online. Oh well.