Emergent Technologies

As one of the final blog posts of the semester, I felt it would be prudent to do one last overview of some developing technologies. There are five emergent technologies that I believe are likely to become increasingly influential in the coming years: CRISPR gene editing, XR hardware and software, Artificial Intelligence, Cloud Computing, and Specialized Medicine.

CRISPR

Emmanuelle Charpentier (left) and Jennifer A. Doudna (right) are the 2020 recipients of the Nobel Prize in Chemistry for their work on CRISPR gene editing technology

Emmanuelle Charpentier and Jennifer A. Doudna received the 2020 Nobel Prize in Chemistry for their work on CRISPR, and the technology has already begun to revolutionize science and medicine. CRISPR takes advantage of a pre-existing pathway in bacteria that evolved as an immune defense against invading viruses (bacteriophages). In essence, the bacterium keeps a record of viral DNA it has encountered in the past. When it runs into that same DNA again, it deploys a protein complex that seeks out the matching sequence and cuts it in two, destroying the invader and protecting the cell. The applications of CRISPR may not be apparent from that description alone; the real magic is that it lets scientists cut a DNA sequence at a chosen site and insert the DNA they want there instead, genetically modifying the host cell.
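To make the "find the sequence, cut, and splice in new DNA" idea concrete, here is a toy string-manipulation sketch in Python. It is purely an analogy for the editing concept, not real bioinformatics and not how the molecular machinery actually works; the sequences and function name are made up for illustration.

```python
# Toy analogy only: CRISPR-style editing as "find the guide sequence,
# cut there, and splice in new DNA". Not real bioinformatics.
def crispr_edit(genome: str, guide: str, insert: str) -> str:
    """Cut the genome after the first match of `guide` and splice in `insert`."""
    cut_site = genome.find(guide)
    if cut_site == -1:
        return genome  # no matching sequence: nothing is cut
    cut_point = cut_site + len(guide)
    return genome[:cut_point] + insert + genome[cut_point:]

genome = "ATGCCGTACGGATCCTTAGC"
edited = crispr_edit(genome, guide="TACGGA", insert="AAATTT")
print(edited)  # ATGCCGTACGGAAAATTTTCCTTAGC
```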

Targeted gene editing of this precision wasn't practical before CRISPR, and it will only get better as time goes on. While there are many positive use cases for this technology, such as gene therapies for genetic diseases, it can just as easily be used to carry out oppressive practices in society (watch the movie Gattaca if you're curious).

Related to this kind of low-level modification of the genome is the emergence of increasingly specialized medicine and healthcare.

Specialized Medicine

The human genome has been fully sequenced and annotated, and every day we get closer to understanding how each part of it contributes to each variation in expression. DNA sequencing has become cheaper and more practical; an entire person's genome can now be sequenced in about a day.

DNA sequencing, too, is a dual-use tool. While it can enable more specialized healthcare, improving the quality of life of many, it could also be used to determine one's freedom and mobility in society. Once again, I'll call upon the example of Gattaca.

Poster for the film Gattaca. A film about a dystopian future in which your genome determines your entire life.

In the film, people are able to determine, with nearly 98% certainty, when someone is likely to die, based solely on the contents of their genome. Since this information is accessible to anyone who can obtain a DNA sample, employers use it as a key factor in hiring: are they healthy enough? How expensive will their insurance be? Are they worth the investment?

While the film exaggerates, these issues become increasingly pressing when one takes into account the rise of machine learning, a set of tools specialized in interpreting large amounts of data. What are you going to do when an ML model says, with 98% claimed accuracy, that you aren't fit to live somewhere based solely on the contents of your blood?

Artificial Intelligence and Machine Learning

As I’ve discussed a multitude of times throughout this blog series, machine learning and artificial intelligence are very powerful emergent technologies that will be key determinants in our future. Such powerful tools should not be left unchecked and in the hands of the few. I believe that regulation is necessary — immediately — before large “Big Data” companies turn us into their product, harvesting our information, feeding it into an algorithm, and selling the product back to us as if it’s something we didn’t own in the first place.

XR

Speaking of "Big Data" companies, let's delve into the Metaverse for a second. While grossly overhyped when Meta initially announced its name change, I believe that XR (an umbrella term covering virtual reality and augmented reality) has its place among the technologies that will shape our future. Currently, most available devices confine us to two dimensions; we interact with flat screens as our way of communicating. Imagine how dramatic the change to three dimensions will be.

While there are many dystopian aspects to the Metaverse that Meta is trying to sell to people, I believe the creative potential of virtual and augmented reality is not to be undersold. The emergence of two-dimensional images eventually led to the rise of 2D movies and the film industry.

Think about how much the film industry has shaped entertainment over the past century; now imagine what will happen when studios are able to produce three-dimensional movies and stories. Perhaps instead of being a simple observer, you will get to shape the story yourself. Who knows.

The two-dimensional film industry is so well established that there is a clear methodology for how to make a movie; it's so ingrained in the culture that it seems obvious now. However, few people realize how experimental the early years of film production were, as people struggled to figure out what worked. I believe we are going to see something very similar with VR and AR in the coming decade.

Cloud Computing

"Save your data in the cloud" is a phrase we've all heard before. Cloud storage is more protected, as shown by how Ukrainian cybersecurity has been bolstered by cloud-based servers, and it's more powerful, since you can take advantage of more advanced hardware. The revenue Amazon has generated through server hosting highlights our growing dependence on cloud-based technologies. That dependence will only be amplified as more people flock to AI development and compete for scarce GPU resources.

These are just some of the fields I find interesting and believe will be increasingly important in the future. Then again, some of the most revolutionary technologies have been ones that people weren't even aware of a year before, creating an entirely new field for innovation.

Rights to Creation: The Creator and WGA

I’m sure some of you saw the trailer for The Creator, a movie that hit theaters sometime in September. The trailer appeared to promise an action-packed intellectual movie that grappled with the question of what AI will mean for the future and how our relationship with it will develop over time. While the movie began filming in January of 2022, its timing could not have been better, nor could it be more ironic.

Poster image for The Creator movie

According to a review by Christy Lemire, The Creator is not as innovative as it appears to be. She comments that its central trope has been reused excessively throughout film history: the main kid is an all-powerful being who could be humanity's savior or destroyer. Sound familiar?

What's ironic about this blatant reuse of tropes is that it's almost as if an AI were used to create the plot for the movie: regurgitating previous innovations instead of creating something new and unique. This irony turns problematic when one remembers the Writers Guild strike that went on for the past several months.

That strike ended on September 24th, when the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers announced that an agreement had been reached. This was Hollywood's second-longest strike, and it appears to have been a successful one, with WGA leaders calling the deal "exceptional."

The agreement doesn’t prevent writers or productions from using generative AI but lays out rules prohibiting the use of the software to reduce or eliminate writers and their pay.

“A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services” (Statement from the agreement).

Writers Guild of America on Strike

This agreement seems to be a good first step towards regulating AI in terms of putting the power into the hands of the individual, rather than that of large corporations.

Additionally, the agreement states that the WGA has the “right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.”

Besides the use of AI as a tool to replace writers and lower their pay, another major concern has been the exploitation of writers' work and ideas: using their creative output to train the very AI models that could potentially replace them.

Regardless of the outcome of this strike (once the final details are released), I believe it will serve as a precedent for future discussions regarding the regulation of AI tools and the protection of individual data and rights.

The question that needs answering, as soon as possible, is who owns what. For example, if data is scraped from a website and used to train an AI model, the company that owns the website will likely cry out that it wasn't compensated for the use of that information. Yet that same company isn't compensating the people who created that content and posted it online.

It becomes a tangled web of ownership and rights, one that I am unsure will be untangled properly without damaging something or someone.

Learning Machine Learning

"Knowledge is not power, it is only potential. Applying that knowledge is power." — Takeda Shingen (1521-1573)

Alright, so we’ve talked quite a bit now about AI, its ramifications and ethical questions, the state of the internet, etc. Now I think it would be quite pertinent to discuss how one might go about learning machine learning. I think it’s important to have a base understanding of how ML models operate, as misunderstanding their base functionalities can lead to poor regulatory decisions down the line and can influence the way people view the applications of machine learning.

If AI is applied in a regulated manner, it has the potential to lead to a world in which humans and machines can coexist.

To preface this blog, I’d like first to say that I am a self-taught machine learning engineer; my knowledge comes from hands-on experiences with developing my own projects and researching online.

If you are looking to understand machine learning, there are a few categories that you may belong to.

  1. You may be a programmer looking to understand the way machine learning works and want to learn enough about it so you can implement it into your own projects.
  2. You may be interested in pursuing a career in machine learning and want to understand the low-level math behind the models.
  3. You may be an entrepreneur looking to understand machine learning from a business perspective so that you can take advantage of its benefits to create an attractive business model.
  4. You may simply be interested in gaining a high-level understanding of a piece of technology that is very likely to become increasingly prevalent in our lives in the coming years.

All of these positions are equally valid, and there are plenty of resources online for learning machine learning (even if you don't fall into any of those categories). I can relate to all four of these categories myself and will do my best to offer advice for each of these goals.

For all four of these perspectives, I recommend watching the <> video on YouTube. They do a good job of explaining the core methodology behind machine learning and data science, as well as its limitations and use cases. If you don't have time to watch it, stick around; I hope to do a general overview in a future blog post (which may be especially helpful for those in category #4).

Groups 1 and 2: Programmers

If you are in group 1 or 2, I recommend starting with Google's ML Bootcamp online course. It explains the core concepts plainly and provides hands-on exercises to apply them. After going through it, try creating a simple model on your own, applying what you learned without simply copying code over.

  • Note: One downside to Google's ML Bootcamp is that it is based on TensorFlow. While it's a good place to pick up the core concepts of machine learning, I recommend switching to PyTorch as soon as possible, as it allows for much easier debugging in exchange for more low-level coding (a minimal sketch of what a first PyTorch exercise might look like follows below).
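Here is a minimal sketch of a PyTorch training loop on a made-up toy regression problem. It isn't taken from any particular course; the data and model are hypothetical, but it shows the forward pass, loss, backpropagation, and weight update that every PyTorch project revolves around.

```python
# Minimal PyTorch training loop on hypothetical toy data (y = 3x + 2 + noise).
import torch
import torch.nn as nn

x = torch.linspace(-1, 1, 100).unsqueeze(1)        # 100 input points, shape (100, 1)
y = 3 * x + 2 + 0.1 * torch.randn_like(x)          # noisy linear target

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()        # clear gradients from the previous step
    pred = model(x)              # forward pass
    loss = loss_fn(pred, y)      # mean squared error
    loss.backward()              # backpropagate
    optimizer.step()             # update the weights

print(f"final loss: {loss.item():.4f}")
```

Being able to write, step through, and debug a loop like this by hand is exactly the kind of understanding the debugging trade-off mentioned above buys you.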

Once you've done that, I recommend making it a habit to play around with Kaggle.com: they have ML competitions, hands-on exercises, and a ton of other resources. It's a great place to learn from others and get a better idea of the strategies behind ML. If you are able, I also recommend getting your hands on a copy of "Approaching (Almost) Any Machine Learning Problem" by Abhishek Thakur.

I recognize that I just threw a lot of resources at you, so I'll end this portion with this: the single best thing I can recommend is creating a machine learning project from scratch. Collect the data yourself, explore and interpret it on your own, determine what kind of model works best for the problem you are trying to solve, and build an MLOps pipeline so you can iterate on your application.

Group 2: Low-Level Understanding

For those in group 2 who are interested in understanding the low-level details, I recommend StatQuest on YouTube (they explain the mathematical concepts very well) and the d2l.ai online book.

To stay up to date with all of the current research developments going on with AI, I recommend the YouTube channel AI Explained; the creator does a great job of summarizing recent developments and diving into what they actually mean for the future.

Group 3: Business Lens

For those with a business lens, I recommend getting involved in the Nittany AI Student Society here on campus. They do a lot of things with machine learning but are definitely more business-focused. Their Nittany AI Challenge is a good place to get real hands-on experience designing an ML application that can be applied to a real-world problem.


The Nittany AI Student Society hosts a yearly competition in which teams create start-up-esque machine learning applications and compete for real money to further their projects.

Additionally, I recommend looking more into the limitations of AI. Knowing what kinds of problems won't be solved with AI is likely more beneficial than knowing which ones can. Knowing when to say no is often more important than saying yes.

Group 4: Users of this Technology

The best advice I can give here is to be critical of AI applications: What kind of data was the model trained on? Can you think of any potential biases the model may have? If a person said or did the same thing, would you trust them at face value?

Being critical and asking questions, looking things up when you don’t know the answer, and reading from multiple sources will provide you with all the information you need to make informed decisions about a topic that is still on the precipice of being implemented everywhere.

While it may feel like these developments are out of our hands, we determine how they are going to shape the future. It will be up to us to determine what regulations to put in place, up to us to decide how to use the AI applications that will be thrown our way, and up to us to handle the ethical and moral conundrums that will inevitably arise due to AI.

No pressure.

The Metaverse is Already Here, and it Has Been For A While

Zuckerberg announces Facebook’s rebranding to Meta (totally not a mid-life crisis).

“To reflect who we are and what we hope to build…our company is now Meta.” – Mark Zuckerberg.

Almost two years ago, Facebook changed its name to Meta. It said the change was to better reflect its new ambition of bringing the "Metaverse" into reality; it hoped to claim the concept as its own invention.

Truth is, like many things Meta has claimed to pioneer, it acquired (or rather stole) the term from pre-existing media. The term "Metaverse" was first coined in Neal Stephenson's 1992 novel "Snow Crash," a story about a dystopian future where people live their entire lives in virtual reality.

Snow Crash book cover; first time the word “metaverse” was coined.

What if I told you that the Metaverse is already here, and has been here for decades now?

First, let's define what I mean when I say Metaverse. For those who have seen the film "Ready Player One," I am not referring to the software and hardware that make the smooth transition between virtual reality worlds possible. Instead, I'm focusing more on the idea of the Metaverse. To me, the Metaverse is a place not directly attached to reality in which people can create, self-identify, and meet other people — live their lives.

With this definition, it becomes clear that the Metaverse has been around for decades, ever since the creation of online chatrooms. Its magnitude and influence have only grown since then.

So, Meta "claiming" the Metaverse as its invention is akin to someone claiming to be the sole inventor of trains. Meta isn't even pioneering the virtual reality space: its headsets and platform were created by Oculus, which Meta acquired (some of the branding still says Oculus). Although, considering the number of FTC (Federal Trade Commission) investigations into Meta's practices, this isn't exactly surprising behavior from Zuckerberg.

As technology has progressed, the line between the Metaverse and reality has become so blurred that it can be hard to tell the two apart, especially as each influences the other.

I remember reading about a TikTok trend where people with door cameras would leave notes asking Amazon delivery workers to do something, and then post the footage online. These workers, who depend on reviews for their job, have to do a little dance for some nameless crowd online. The dystopia laid out in Snow Crash is already here. We already live in the Metaverse.

I'm not saying that technology is inherently bad, but when it becomes the first thing you look at in the morning and the last thing you engage with before bed, that's kind of sad. I'm not criticizing; this is something I personally do, and it's depressing.

What’s the solution?

There’s no perfect solution that I can see. Technology will keep developing, algorithms will get better and better at keeping us hooked, and corporations will keep looking for ways to make money off us.

In fact, I would argue that technology has become a drug, but that’s a topic to explore another day.

On the other hand, we can't just unplug; the benefits to society far outweigh the negatives. Medicine, human connection, and information sharing are all enhanced by technology.

The only thing we can do is work on ourselves. Take the time to remember the world we actually live in: the real world, not the Metaverse. Turn off your music sometimes while you're walking; look at the trees, the sky, the people. Grounding yourself in the present can help combat the anxieties, or at the very least give you a break from the overstimulation provided by today's technologies.

Which is ironic, considering you’re reading this online. Oh well.

Science and Religion Aren’t That Removed From Each Other

Do we seek out our own Creator, or does Creation seek us out?

The separation of Church and State doesn’t seem to apply to the head-on collision of Church and Science that has occurred over the past 200 years.

Why is it that whenever machine-learning engineers talk about AI, they often seem nonchalant about the possibility of their creations bringing about the end of humanity? They've definitely seen the future foretold in almost every sci-fi movie involving artificial intelligence. When a religious lens is applied to the situation, it appears that their motivations for creating these machines have a sort of religious zeal to them.

It is something they must do. They are part of a greater crusade to reinstate Adam and Eve to the perfection of Eden. All part of a centuries-old quest to achieve perfection, and thus eternal life, through objectivity. As stated in a Vox article, “A lot of excitement about the building of a superintelligent machine comes down to recycled religious ideas.”

In Christianity, Adam and Eve fell from grace as they gained knowledge of good and evil — the ability to do either, to be imperfect. In his book The Religion of Technology, historian David Noble explains how, in the Middle Ages, Christian thinkers began to wonder: “What if tech could help us restore humanity to the perfection of Adam before the fall?” This is evident in the rise of the motto “ora et labora” — prayer and work — in Monasteries as they became “hotbeds of engineering.”

If the pursuit of Artificial General Intelligence is viewed with the same lens, does the fervor with which machine learning architects pursue it stem from the same anxieties around mortality that religion addresses?

Considering that people are imagining ways to immortalize themselves through artificial intelligence, I would argue that the answer is yes.

It’s not just AI architects who are interested in pursuing digital immortality; Tom Hanks, a famous actor, recently stated that he would be interested in using AI so that he could continue to perform in movies long after his death.

Tom Hanks acting in the Polar Express film. Recently he has stated that his work here would make it easier to create an AI version of him using the existing models of his face for that movie.

If the pursuit of Artificial Intelligence is founded on the same anxieties as religion, which is often dismissed for not being objective enough, does that undermine the scientific nature of the endeavor?

I argue that isn’t the case.

Religion (often) seeks to provide comfort in an uncertain world to help us grapple with the “meaning” of our existence — valuable to the health and well-being of many.

Science accepts the randomness and explainability of our circumstances and seeks to find solutions to the things that ail us through a concrete understanding of the laws of nature — the pursuit of knowledge.

However, science doesn’t need to be carried out in an objective void — separate from all emotions and passions.

There’s this idea that percolates throughout the scientific community, and it’s that in order to get something right, you have to be objective about it. While this may seem like sound logic, it can actually be more detrimental to the scientific process, especially when people start denying the existence of their own emotions and those of others.

Confirmation bias is a perfect example of this. It occurs when someone develops a hypothesis about something and then looks at data; because they have their own hypothesis, they observe and interpret the data through that lens. If you are in an environment where people are aware of confirmation bias and call you out on it, it becomes less of a problem. However, if you are in a space where people disregard the imperfections and biases of others, you're going to have a fun time explaining why your product turned out to be faulty even though the data seemed to confirm your hypothesis.

In short, I believe that science is a human endeavor. A human experience. It should be carried out in a way that doesn't dismiss our imperfections, our curiosities, or our desire to understand. It shouldn't ignore our fears, our hopes, or our dreams either. Science is, and should be, a holistic endeavor that encompasses all of us.

Duality of Generative AI: Part 2

"Throughout human history, we have been dependent on machines to survive. Fate, it seems, is not without a sense of irony." — Morpheus, The Matrix

In my last blog post, I touched upon some of the interesting questions surrounding generative AI and the contradictory perspectives on its impact on the creative industry and online information sharing. In this blog post, I aim to dive a bit deeper into perspectives on AI's potential to bring about an apocalypse or the genesis of a new age.

To preface, there are three main perspectives on this topic:

  • Humanists: those who believe that AI will be the end of us all fall into this category; they generally believe that the end times are coming.
  • Trans-humanists: believe that AI will empower humanity to move beyond its physical limitations and on to the next stage of evolution.
  • Machine-learning engineers: believe AI has the potential to bring about both outcomes. They continue to develop AI models regardless, whether or not they deem the risk negligible. A 2022 survey found that over half of machine learning experts think the chance that AI leads to "human extinction or similarly permanent and severe disempowerment of the human species" is greater than 10%.

These categories aren't static; over the decades, the percentage of people belonging to each category has shifted as technology develops and our sense of what is normal changes.

A great reflection of this development is the difference between the original Matrix trilogy and the recent addition of Matrix Resurrections.

In the original trilogy, the robots have taken over and humanity has been reduced to livestock, sources of electrical energy to power the robots. It is a clear war between humanity and machines, good versus evil — no exceptions.

Final scene from The Matrix: Neo (left) fighting Agent Smith (right), representing the fight between individuality and conformity, humanity and machine.

This clear distinction is no longer reflective of our current perspectives on technology.

Two years ago, Matrix Resurrections was released (18 years after the finale of the original trilogy). In those 18 years, much has changed in reality: people have numerous devices, iPads and Chromebooks have become a "standard" in education, much of our work is done online, and social media has become the main way we interact with people.

All of these changes have contributed to a gradual shift in the way we perceive technology and our future with it.

This development is apparent in Matrix Resurrections through the presence of good AI individuals assisting and living in harmony with humans — cooperation and cohabitation are possible.

Scene from Matrix Resurrections. On the right is a physical representation of an AI individual working in a garden alongside humans.

Instead of an oppressive dystopia that only allows for one to come out on top, we’ve readjusted our vision of the future as a cooperative one, based on the integration of man and machine in the present.

In a sense, life as we knew it in 2003, when the last matrix movie came out, has faced an apocalyptic event. Life as we knew it drastically changed into the technological world we have today. For better and for worse.

Technology enabled us to adapt to Covid-19 and continue to function (somewhat) as a society. However, it has also alienated and isolated us from each other, leading to increased levels of anger and frustration.

Society has developed into a blend of cautious trans-humanists: the original Matrix's message is still in the backs of people's minds, but we set aside those worries and fears for the benefits that technology brings — a future of coexistence is feasible.

I believe that the numerous benefits of artificial intelligence and generative AI far outweigh the downsides: healthcare, drug development, gene therapies, more efficient energy solutions, space exploration, etc. That said, not every problem or field is one where an "AI solution" is necessary.

I believe the most likely outcome of these developments is that generative AI will change the very foundations of life as we know it; we will face another "apocalyptic" event, just as we did during the gradual shift of the past 20 years.

The development and commoditization of technology (personal and work devices) has already brought about the "apocalypse" the original Matrix envisioned; people and technology have become so integrated that undoing it would be a near-impossible feat without irreparable harm.

Duality of Generative AI

As an AI Language Model, I can’t write your blog for you.

Fascinating how obscure phrases rise to become members of the everyday lexicon in response to unprecedented, widespread events: "flatten the curve," "screen time," and most recently, "generative AI models."

Coming up on the one-year birthday celebration of ChatGPT, it feels fitting to reflect on some conflicting interpretations of what generative AI will mean for everyday life and the future of humanity. More specifically, I will discuss three points of contention: creation vs. curation, promotion of innovation vs. destruction of incentives, and apocalypse vs. genesis.

One of the most significant controversies with the emergence of commercial generative AI has been the debate surrounding ownership and monetization: Who owns the product of an AI model? A fascinating case study of this question is the "Say No to AI Art" campaign. Many of the most popular generative models are image generation models (who doesn't like to see their imagination spring to life with a click of a button), such as Midjourney, DALL-E, and Stable Diffusion XL.


Generative AI models can't seem to draw hands, despite being trained on millions of images that include hands.

The controversial nature of these models is rooted in their very definition: “generative model” implies that they are creating something from scratch. However, these models are trained on scraped art — mass collection from the internet via an algorithm — without the artists’ consent.

Therefore, are these models creating something entirely new, or are they simply curating pre-existing works, in which case artists must be paid their dues before the curation is monetized? Either answer opens a new can of worms regarding its consequences and implementation.

Artists have unified to brand these generative models — and their creators — as thieves rather than innovators. Supporters of the models have responded by arguing that the models learn from existing art just as beginner artists do — an argument with some merit. Therefore, the fundamental issue of the matter is monetization rather than the artificial nature of the entity mimicking the art (humans have imitated each other's styles for centuries; it's where plagiarism comes from).

However, there are consequences for stealing someone's work and passing it off as your own — we have copyright laws for a reason. If we equate these models to human thieves, do they face the same consequences? Do they then obtain the same legal status and the same rights? Do you treat it as a tool or an accomplice? Is the very nature of its existence a crime, or does it matter how it is implemented on a case-by-case basis?

Coming up with a domain-specific answer for the art industry will require redefining what art is — contradicting the very freeform nature of the field.

The complex issue of ownership and monetization isn't confined to the domain of art; it spans the entirety of the creative industry, including the more technical innovation spheres.

The internet is unequivocally the greatest invention for the collective sharing of knowledge, experience, and perspectives; never before could you interact with 500 million people within a few clicks of a button, and that’s just on Twitter alone.

However, with the growing importance of big data and emphasis on data curation for machine learning models, incentives for sharing information may vanish before our eyes, potentially ending humanity’s tradition of information sharing.

StackOverflow, a question-and-answer website for programmers, exemplifies this consequence of generative AI models in real time, as its web activity has nosedived in recent months.

GitHub's Copilot, another large language model, is a less commonly known but incredibly powerful generative model. Copilot targets programmers: it draws on the largest reservoir of programming information (GitHub's public repositories, essentially large libraries of code) and assists programmers in real time by suggesting code as they type.

Consequently, StackOverflow has seen a 35% drop in internet traffic and a 60% decrease in the number of people asking questions monthly. There is less incentive for people to ask questions (Copilot answers within seconds) or to answer them (an AI will be trained on the answer, and no credit will be given).

StackOverflow foreshadows what is likely to happen to most question-based forum websites. It's happening to StackOverflow first because programmers are most ingrained in this technology and are most likely to use it daily. According to GitHub, Copilot is behind 46% of a programmer's code on average — almost half of all code is written by an AI.

These generative AI models have shut off the tap of knowledge and information they drink from; they discourage individuals from sharing data, data that these models rely on to generate their output. Unless something changes, they are akin to a snake eating its tail.

However, it’s not all bad; Copilot does make programming and debugging more efficient. It used to be said that programmers spend 90% of their time on StackOverflow; now, they can have their questions answered immediately, allowing more time to be spent on innovating new solutions.

Pie chart meme; the majority of it is StackOverflow.

This meme was created before the rise of generative models; if "asking an AI" were part of this pie chart, it would be the majority.

Do the ways in which generative models enable innovation outweigh their detrimental effect on knowledge sharing — the basis for our species’ success since the dawn of man?

At the core of all these questions surrounding the impacts of AI is the debate on whether AI is inherently good or bad: will AI enslave us all or enable us to achieve prosperity?

These ponderings on apocalyptic outcomes or the genesis of a better future will have to wait for part 2 of the blog post; for now, enjoy coming up with your own answers to these questions.

Imperfect Machines

Robots will take people’s jobs. 

Most of us have heard this statement or even spoken it. There's undoubtedly some truth to it, and in some cases, it's happened already: the proliferation of self-checkout machines, the impending arrival of semi-automated carriers and delivery drones, the employment of assembly robots in factories, and now the siege of the creative industry by generative artificial intelligence models.

The reasoning behind this statement is rooted in the conviction that machines do everything better than people, making machines the perfect replacement for imperfect people.

Science fiction movies routinely portray robots as flawless, objective, and infallible, furthering the narrative that computers best humans in every conceivable way.

Shown here is the android “Data” from the TV Show Star Trek. Throughout the show, it is shown that the android is both physically stronger and smarter than any of the biological crew mates.

Upon investigating the concept of the inevitable robot takeover, I have found that the true capabilities of intelligent machines have been blown entirely out of proportion by the rise of commercial generative AI, and that the realistic applications and strengths of these machines are frequently mischaracterized in the haze of recent unbridled hype and self-perpetuating misconceptions.

Before diving into the deep end, I believe it’s worth defining what sort of “intelligent machines” I refer to in this analysis. 

With the term intelligent machines, I aim to cover a wide breadth of novel technologies. This scope includes androids (humanoid robots), machine-learning models, automated vehicles, and advanced assembly robots. For clarity, I exclude the following from my analysis: software with predefined functionality (e.g., browsers, video games), regular cars, automatic doors, and the standard toaster.

In short, any robot/algorithm labeled as “going to take over the world” falls into the category of intelligent machines. 

As this is an extensive scope embodying a variety of tasks and disciplines, this initial analysis is very generic as I aim to apply it to the entire spectrum.

First, let's tackle the misconception that "machines triumph over people because they don't make mistakes." Anyone who has ever opened a pack of Skittles and discovered a malformed Skittle hiding inside recognizes this falsehood. Machines make mistakes, and assuming otherwise means placing blind trust in an imperfect machine.

I will concede that machines make fewer mistakes than human equivalents for certain tasks. But propagating the misconceived notion that they never make mistakes is incredibly dangerous — especially with the emergence of technologies like autonomous vehicles and generative AI models. There’s a real danger in not investigating a model’s outputs. 

For example, Amazon rolled out a new hiring AI bot a few years ago to assist HR with the resume intake process. Based on Amazon's previous hiring data, the model determined which candidates would move on to the next stage of the hiring process. However, it was found that the model had developed a bias against female candidates. Further investigation illuminated that Amazon's previous hiring practices had disproportionately favored male candidates over female ones. In short, the model had inherited a flaw from its training data.
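To see how a model can inherit a bias like this, here is a hypothetical toy sketch. It is not Amazon's actual system or data; the features, numbers, and setup are invented purely to show that a model fit to skewed historical decisions will learn to penalize a feature correlated with the disadvantaged group.

```python
# Hypothetical toy illustration of inherited bias (not Amazon's system or data):
# a logistic-regression model trained on skewed historical hiring decisions
# learns to penalize a gender-correlated feature.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

experience = rng.normal(0, 1, n)      # standardized years of experience
is_female = rng.integers(0, 2, n)     # 1 = female, 0 = male

# Skewed history: equally qualified women were hired far less often.
p_hire = 1 / (1 + np.exp(-experience)) * np.where(is_female == 1, 0.4, 1.0)
hired = (rng.random(n) < p_hire).astype(float)

X = np.column_stack([experience, is_female, np.ones(n)])
w = np.zeros(3)

# Plain gradient descent on the logistic (cross-entropy) loss
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

print("learned weights [experience, is_female, intercept]:", w.round(2))
# The weight on is_female comes out clearly negative: the model has
# "learned" the bias that was baked into the historical data.
```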

Now, there is the argument that it’s not the machine itself that was flawed but rather the people who engineered it. In that case, the question becomes: How could any machine achieve perfection if it inherits the imperfect qualities of its creators?

To further challenge the notion of technological superiority: are the flaws expressed by these supposedly superintelligent machines even fixable, or are they inherent to their design? Is a "perfect machine" even a realistic goal? For example, generative AI models are fundamentally plagued by hallucinations that stem from their design. For context, chatbots hallucinate when they output blatantly wrong statements with the absolute confidence of a politician.

To preach the inherent perfection of machines is to perpetuate a misconception. Instead of trusting in the perfection of silicon and binary, we should criticize these machines and their creators — especially when their actions impact the well-being and lives of others.

I’m not arguing against innovation, nor against the benefits of using machine tools in industry. I’m arguing against the zealotry of believing in the infallibility of synthetic intelligence. 

Checks and balances are required to prevent these fallible systems from invoking irrevocable harm to humanity and the world. We should take the same stance as we do with people, the original imperfect machines. 

Art generated by the text-to-image model from OpenAI: An imperfect machine robot sitting on a cliff over the ocean, digital art.