Duality of Generative AI: Part 2

“Throughout human history, we have been dependent on machines to survive. Fate, it seems, is not without a sense of irony.” — Morpheus, The Matrix

In my last blog post, I touched upon some of the interesting questions surrounding generative AI and the contradictory perspectives on its impacts on the creative industry and online information sharing. In this post, I aim to dive a bit deeper into perspectives on AI’s potential to bring about the apocalypse or the genesis of a new age.

To preface, there are three main perspectives on this topic:

  • Humanists: believe that AI will be the end of us all; they generally hold that the end of times is coming.
  • Trans-humanists: believe that AI will empower humanity to move beyond its physical limitations and on to the next stage of evolution.
  • Machine-learning engineers: believe that AI has the potential to bring about both outcomes. They continue to develop AI models regardless, whether or not they deem the risk negligible. A 2022 survey found that over half of machine-learning experts think the chance that AI leads to “human extinction or similarly permanent and severe disempowerment of the human species” is greater than 10%.

These categories aren’t static; over the decades, the percentage of people belonging to each category has shifted as technology develops and our sense of what is normal adjusts with it.

A great reflection of this development is the difference between the original Matrix trilogy and its recent addition, The Matrix Resurrections.

In the original trilogy, the robots have taken over and humanity has been reduced to livestock, sources of electrical energy to power the robots. It is a clear war between humanity and machines, good versus evil — no exceptions.

Final scene from The Matrix: Neo (left) fighting Agent Smith (right), representing the fight between individuality and conformity, humanity and machine.

This clear distinction is no longer reflective of our current perspectives on technology.

Two years ago, The Matrix Resurrections was released, 18 years after the finale of the original trilogy. In those 18 years, much has changed in reality: people own numerous devices, iPads and Chromebooks have become a “standard” in education, much of our work is done online, and social media has become the main way we interact with one another.

All of these changes have contributed to a gradual shift in the way we perceive technology and our future with it.

This development is apparent in The Matrix Resurrections through the presence of good AI individuals assisting and living in harmony with humans: cooperation and cohabitation are possible.

Scene from The Matrix Resurrections. On the right is a physical representation of an AI individual working in a garden alongside humans.

Instead of an oppressive dystopia that allows only one side to come out on top, we’ve readjusted our vision of the future as a cooperative one, based on the present-day integration of man and machine.

In a sense, life as we knew it in 2003, when the last Matrix movie of the original trilogy came out, has already faced an apocalyptic event: it drastically changed into the technological world we have today, for better and for worse.

Technology enabled us to adapt to COVID-19 and continue functioning (somewhat) as a society. However, it has also alienated and isolated us from one another, leading to increased levels of anger and frustration.

Society has developed into a blend of cautious trans-humanists: the original Matrix’s warnings are still in the backs of people’s minds, but we set those worries and fears aside for the benefits that technology reaps; a future of coexistence is feasible.

I believe that the numerous benefits of artificial intelligence and generative AI far outweigh the downsides: healthcare, drug development, gene therapies, more efficient energy solutions, space exploration, and more. That said, not every problem or field is one where an “AI solution” is necessary.

I believe the most likely outcome of these developments is that generative AI will change the very foundations of life as we know it: we shall face another apocalyptic event, just as we did through the gradual shift of the past 20 years.

The development and commoditization of technologies (personal and work devices) has served as the “apocalypse” for the original Matrix’s worldview; people and technology have become so integrated that undoing it would be a near-impossible feat, and not one achievable without irreparable harm.

Duality of Generative AI

As an AI Language Model, I can’t write your blog for you.

It’s fascinating how obscure phrases rise into the everyday lexicon in response to unprecedented, widespread events: “flatten the curve,” “screen time,” and most recently, “generative AI models.”

Coming up on the one-year anniversary of ChatGPT, it feels fitting to reflect on some conflicting interpretations of what generative AI will mean for everyday life and the future of humanity. More specifically, I will discuss three points of contention: creation vs. curation, promotion of innovation vs. destruction of incentives, and apocalypse vs. genesis.

One of the most significant controversies accompanying the emergence of commercial generative AI has been the debate surrounding ownership and monetization: who owns the product of an AI model? A fascinating case study of this question is the “Say No to AI Art” campaign. Many of the most popular generative models are image-generation models (who doesn’t like to see their imagination spring to life with the click of a button?), such as Midjourney, DALL-E, and Stable Diffusion XL.

AI can't draw hands

Generative AI models can’t seem to draw hands, despite being trained on millions of data points that include hands.

The controversial nature of these models is rooted in their very definition: “generative model” implies that they are creating something from scratch. However, these models are trained on scraped art — mass collection from the internet via an algorithm — without the artists’ consent.
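To make “scraped” concrete, here is a minimal sketch of what such a collection algorithm might look like, assuming a placeholder gallery URL (the requests and BeautifulSoup libraries are real; the page and its images are hypothetical):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Hypothetical starting page; a real scraper would crawl millions
# of pages discovered by following links.
url = "https://example.com/gallery"
page = requests.get(url, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

for img in soup.find_all("img"):
    if not img.get("src"):
        continue
    # Resolve relative image paths against the page URL.
    src = urljoin(url, img["src"])
    data = requests.get(src, timeout=10).content
    # Save under the file's own name. Note that nothing in this
    # loop checks the artist's consent or the image's license.
    with open(src.rsplit("/", 1)[-1], "wb") as f:
        f.write(data)
```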

Are these models, then, creating something entirely new, or are they simply curating pre-existing works and owing artists their dues before monetizing that curation? Either answer opens a new can of worms regarding its consequences and implementation.

Artists have unified to denounce these generative models, and their creators, as thieves instead of innovators. Supporters of the models have responded by arguing that the models learn from existing art just as beginner artists do, an argument with some merit. The fundamental issue, therefore, is monetization rather than the artificial nature of the entity mimicking the art (humans have imitated each other’s styles for centuries; it’s where the notion of plagiarism comes from).

However, there are consequences for stealing someone’s work and passing it off as your own; we have copyright laws for a reason. If we equate these models to human thieves, do they face the same consequences? Do they then obtain the same legal status and the same rights? Do we treat a model as a tool or an accomplice? Is the very nature of its existence a crime, or does it matter how it is implemented on a case-by-case basis?

Coming up with a domain-specific answer for the art industry will require redefining what art is — contradicting the very freeform nature of the field.

The complex issue of ownership and monetization isn’t confined to the domain of art; it spans the entirety of the creative industry, including the more technical innovation spheres.

The internet is unequivocally the greatest invention for the collective sharing of knowledge, experience, and perspectives; never before could you interact with 500 million people in a few clicks, and that’s on Twitter alone.

However, with the growing importance of big data and the emphasis on data curation for machine-learning models, the incentives for sharing information may vanish before our eyes, potentially ending humanity’s long tradition of open information sharing.

Stack Overflow, a question-and-answer website for programmers, exemplifies this consequence of generative AI models in real time: its web activity has nosedived in recent months.

GitHub’s Copilot, another large language model, is less commonly known but incredibly powerful. It targets programmers: drawing on the largest reservoir of programming information (GitHub’s public repositories, essentially large libraries of code), it assists them in real time by suggesting code as they type.
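To illustrate the workflow, here is a minimal sketch of a Copilot-style completion; the function and the suggested body below are hypothetical examples of mine, not actual Copilot output. The programmer types only a signature and a docstring, and the assistant proposes the rest.

```python
# The programmer types only the signature and the docstring...
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    # ...and a Copilot-style assistant suggests the body:
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```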

Consequently, Stack Overflow has seen a 35% drop in internet traffic and a 60% decrease in the number of people asking questions monthly. There is no longer an incentive to ask questions (Copilot answers within seconds) or to answer them (an AI will be trained on the answer, and no credit will be given to its author).

Stack Overflow foreshadows what is likely to happen to most question-and-answer forum websites. It’s happening to Stack Overflow first because programmers are the most ingrained in this technology and the most likely to use it daily. According to GitHub, Copilot is behind 46% of a programmer’s code on average: almost half of all code its users write is now generated by an AI.

These generative AI models have shut off the tap of knowledge and information they drink from; they discourage individuals from sharing data, data that these models rely on to generate their output. Unless something changes, they are akin to a snake eating its tail.

However, it’s not all bad; Copilot does make programming and debugging more efficient. It used to be said that programmers spend 90% of their time on Stack Overflow; now, they can have their questions answered immediately, leaving more time for innovating new solutions.

Pie chart: the majority of it is Stack Overflow.

This meme was created before the rise of generative models; if “asking an AI” were part of this pie chart, it would be the majority.

Do the ways in which generative models enable innovation outweigh their detrimental effect on knowledge sharing — the basis for our species’ success since the dawn of man?

At the core of all these questions surrounding the impacts of AI is the debate on whether AI is inherently good or bad: will AI enslave us all or enable us to achieve prosperity?

These ponderings on apocalyptic outcomes versus the genesis of a better future will have to wait for part 2 of this blog; for now, enjoy coming up with your own answers to these questions.