Science and Religion Aren’t That Removed From Each Other

Do we seek out our own Creator, or does Creation seek us out?

The separation of Church and State doesn’t seem to apply to the head-on collision of Church and Science that has occurred over the past 200 years.

Why is it that machine-learning engineers, whenever they talk about AI, so often seem nonchalant about the possibility of their creations bringing about the end of humanity? They’ve surely seen the future foretold in almost every sci-fi movie involving artificial intelligence. When a religious lens is applied to the situation, it appears that their motivation for creating these machines carries a sort of religious zeal.

It is something they must do. They are part of a greater crusade to reinstate Adam and Eve to the perfection of Eden. All part of a centuries-old quest to achieve perfection, and thus eternal life, through objectivity. As stated in a Vox article, “A lot of excitement about the building of a superintelligent machine comes down to recycled religious ideas.”

In Christianity, Adam and Eve fell from grace as they gained knowledge of good and evil — the ability to do either, to be imperfect. In his book The Religion of Technology, historian David Noble explains how, in the Middle Ages, Christian thinkers began to wonder: “What if tech could help us restore humanity to the perfection of Adam before the fall?” This is evident in the rise of the motto “ora et labora” — prayer and work — in monasteries as they became “hotbeds of engineering.”

If the pursuit of Artificial General Intelligence is viewed through the same lens, does the fervor with which machine-learning architects pursue it stem from the same anxieties around mortality that religion addresses?

Considering that people are already imagining ways to immortalize themselves through artificial intelligence, I would argue that it does.

It’s not just AI architects who are interested in pursuing digital immortality; the actor Tom Hanks recently stated that he would be interested in using AI to continue performing in movies long after his death.

Tom Hanks in The Polar Express. He has said that the existing digital models of his face from that film would make it easier to create an AI version of him.

If the pursuit of Artificial Intelligence is founded on the same anxieties as religion, which is often dismissed for not being objective enough, does that undermine the scientific nature of the endeavor?

I argue that it doesn’t.

Religion (often) seeks to provide comfort in an uncertain world to help us grapple with the “meaning” of our existence — valuable to the health and well-being of many.

Science accepts the randomness and explainability of our circumstances and seeks to find solutions to the things that ail us through a concrete understanding of the laws of nature — the pursuit of knowledge.

However, science doesn’t need to be carried out in an objective void — separate from all emotions and passions.

There’s an idea that percolates through the scientific community: in order to get something right, you have to be objective about it. While this may seem like sound logic, it can actually be detrimental to the scientific process, especially when people start denying the existence of their own emotions and those of others.

Confirmation bias is a perfect example of this. It occurs when someone forms a hypothesis and then observes and interprets the data through that lens. If you are in an environment where people are aware of confirmation bias and call you out on it, it becomes less of a problem. However, if you are in a space where people disregard the imperfections and biases of others, you’re going to have a fun time explaining why your product turned out to be faulty even though your hypothesis seemed correct.

In short, I believe that science is a human endeavor. A human experience. It should be carried out in a way that embraces our imperfections, our curiosities, and our desire to understand. It shouldn’t ignore our fears, our hopes, or our dreams, either. Science is, and should be, a holistic endeavor that encompasses all of us.

Duality of Generative AI: Part 2

Throughout human history, we have been dependent on machines to survive. Fate, it seems, is not without a sense of irony. — Morpheus, The Matrix

In my last blog post, I touched upon some of the interesting questions surrounding generative AI and the contradictory perspectives on its impact on the creative industry and online information sharing. In this blog post, I aim to dive a bit deeper into perspectives on AI’s potential to bring about the apocalypse or the genesis of a new age.

To preface, there are three main perspectives on this topic:

  • Humanists: believe that AI will be the end of us all; they generally hold that the end times are coming.
  • Trans-humanists: believe that AI will empower humanity to transcend its physical limitations and move on to the next stage of evolution.
  • Machine-learning engineers: believe AI has the potential to bring about either outcome. They continue to develop AI models, whether or not they deem the risk negligible. A 2022 survey found that over half of machine-learning experts think the chance that AI leads to “human extinction or similarly permanent and severe disempowerment of the human species” is greater than 10%.

These categories aren’t static; over the decades, the share of people belonging to each category has shifted as technology develops and our sense of what is normal changes.

A great reflection of this development is the difference between the original Matrix trilogy and its recent addition, The Matrix Resurrections.

In the original trilogy, the machines have taken over and humanity has been reduced to livestock, a source of electrical energy to power them. It is a clear war between humanity and machines, good versus evil — no exceptions.

Final scene from The Matrix trilogy: Neo (left) fighting Agent Smith (right), representing the fight between individuality and conformity, humanity and machine.

This clear distinction is no longer reflective of our current perspectives on technology.

Two years ago, The Matrix Resurrections was released, 18 years after the finale of the original trilogy. In those 18 years, much has changed in reality: people own numerous devices, iPads and Chromebooks have become a “standard” in education, much of our work is done online, and social media has become a primary way we interact with each other.

All of these changes have contributed to a gradual shift in the way we perceive technology and our future with it.

This development is apparent in The Matrix Resurrections through the presence of benevolent AI individuals assisting and living in harmony with humans — cooperation and cohabitation are possible.

Scene from The Matrix Resurrections. On the right is a physical representation of an AI individual working in a garden alongside humans.

Instead of an oppressive dystopia that allows only one side to come out on top, we’ve readjusted our vision of the future as a cooperative one, based on the present-day integration of man and machine.

In a sense, life as we knew it in 2003, when the last movie of the original trilogy came out, has since faced an apocalyptic event: it drastically changed into the technological world we have today. For better and for worse.

Technology enabled us to adapt to Covid-19 and continue to function (somewhat) as a society. However, it has also alienated and isolated us from each other, leading to increased levels of anger and frustration.

Society has settled into a kind of cautious trans-humanism: the original Matrix’s warnings still sit in the backs of people’s minds, but we shed those worries and fears for the benefits that technology reaps — a future of coexistence is feasible.

I believe that the numerous benefits of artificial intelligence and generative AI far outweigh the downsides: healthcare, drug development, gene therapies, more efficient energy solutions, space exploration, and so on. That said, not every problem or field is one where an “AI solution” is necessary.

I believe the most likely outcome of these developments is that generative AI will change the very foundations of life as we know it; we shall face another apocalyptic event, just as we did over the gradual shift of the past 20 years.

The development and commoditization of technologies (personal and work devices) has already brought about the “apocalypse” the original Matrix warned of; people and technology have become so integrated that undoing it would be a near-impossible feat without irreparable harm.

Duality of Generative AI

As an AI Language Model, I can’t write your blog for you.

Fascinating how obscure phrases rise into the everyday lexicon in response to unprecedented widespread events: “flatten the curve,” “screen time,” and most recently, “generative AI models.”

Coming up on the one-year anniversary of ChatGPT, it feels fitting to reflect on some conflicting interpretations of what generative AI will mean for everyday life and the future of humanity. More specifically, I will discuss three points of contention: creation vs. curation, promotion of innovation vs. destruction of incentives, and apocalypse vs. genesis.

One of the most significant controversies with the emergence of commercial generative AI has been the debate surrounding ownership and monetization: Who owns the product of an AI model? A fascinating case study of this question is the “Say No to AI Art” campaign. Many of the most popular generative models are image-generation models (who doesn’t like to see their imagination spring to life with the click of a button?), such as Midjourney, DALL-E, and Stable Diffusion XL.

Generative AI models can’t seem to draw hands, despite being trained on millions of images that include hands.

The controversial nature of these models is rooted in their very definition: “generative model” implies that they are creating something from scratch. However, these models are trained on scraped art — mass collection from the internet via an algorithm — without the artists’ consent.

Therefore, are these models creating something entirely new, or are they merely curating pre-existing works, such that they must pay artists their dues before monetizing that curation? Either answer opens a new can of worms regarding its consequences and implementation.

Artists have united to denounce these generative models — and their creators — as thieves rather than innovators. Supporters of the models have responded by arguing that the models learn from existing art just as beginner artists do — an argument with some merit. The fundamental issue, then, is monetization rather than the artificial nature of the entity mimicking the art (humans have imitated each other’s styles for centuries; it’s why the concept of plagiarism exists).

However, there are consequences for stealing one’s work and writing it off as your own — we have copyright laws for a reason. If we equate these models to human thieves, do they face the same consequences? Do they then obtain the same legal status and the same rights? Do you treat it as a tool or an accomplice? Is the very nature of its existence a crime, or does it matter how it is implemented on a case-by-case basis?

Coming up with a domain-specific answer for the art industry will require redefining what art is — contradicting the very freeform nature of the field.

The complex issue of ownership and monetization isn’t confined to the domain of art; it spans the entirety of the creative industry — including the more technical innovation spheres.

The internet is unequivocally the greatest invention for the collective sharing of knowledge, experience, and perspectives; never before could you interact with 500 million people within a few clicks, and that’s on Twitter alone.

However, with the growing importance of big data and emphasis on data curation for machine learning models, incentives for sharing information may vanish before our eyes, potentially ending humanity’s tradition of information sharing.

StackOverflow, a question-and-answer website for programmers, exemplifies this consequence of generative AI models in real time: its web activity has nosedived in recent months.

GitHub’s Copilot, a tool powered by a large language model, is a less commonly known but incredibly powerful generative product. Copilot targets programmers: it draws on the largest reservoir of programming information (GitHub’s public repositories, essentially vast libraries of code) and assists programmers in real time by suggesting code as they type.

Consequently, StackOverflow has seen a 35% drop in internet traffic and a 60% decrease in the number of people asking questions monthly. There is no longer an incentive for people to ask questions — Copilot answers within seconds — or to answer them — an AI will be trained on the answer, with no credit given.

StackOverflow foreshadows what is likely to happen to most forum-style question-and-answer websites. It’s happening to StackOverflow first because programmers are most ingrained in this technology and are most likely to utilize it daily. According to GitHub, Copilot is behind 46% of a programmer’s code on average — almost half of all code is written by an AI.

These generative AI models have shut off the tap of knowledge and information they drink from; they discourage individuals from sharing data, data that these models rely on to generate their output. Unless something changes, they are akin to a snake eating its tail.

However, it’s not all bad; Copilot does make programming and debugging more efficient. It used to be joked that programmers spend 90% of their time on StackOverflow; now, their questions are answered immediately, leaving more time for innovating new solutions.

This meme predates the rise of generative models; if “asking an AI” were a slice of the pie chart, it would be the majority.

Do the ways in which generative models enable innovation outweigh their detrimental effect on knowledge sharing — the basis for our species’ success since the dawn of man?

At the core of all these questions surrounding the impacts of AI is the debate on whether AI is inherently good or bad: will AI enslave us all or enable us to achieve prosperity?

These ponderings of apocalyptic outcomes or the genesis of a better future will have to wait for part 2 of this blog post; for now, enjoy coming up with your own answers to these questions.

Imperfect Machines

Robots will take people’s jobs. 

Most of us have heard this statement or even spoken it. There’s undoubtedly some truth to it, and in some cases, it’s happened already: the proliferation of self-checkout machines, the impending arrival of semi-automated carriers and delivery drones, the employment of assembly robots in factories, and now the creative industry under siege from generative artificial intelligence models.

The reasoning behind this statement is rooted in the conviction that machines do everything better than people, making machines the perfect replacement for imperfect people.

Nearly every science fiction movie portrays robots as flawless, objective, and infallible, furthering the narrative that computers best humans in every conceivable way.

Shown here is the android Data from the TV show Star Trek. Throughout the show, the android is depicted as both physically stronger and smarter than any of his biological crewmates.

Upon investigating the concept of the inevitable robot takeover, I have found that the true capabilities of intelligent machines have been blown entirely out of proportion by the rise of commercial generative AI, and that their realistic applications and strengths are frequently mischaracterized in the haze of recent unbridled hype and self-perpetuating misconceptions.

Before diving into the deep end, I believe it’s worth defining what sort of “intelligent machines” I refer to in this analysis. 

With the term intelligent machines, I aim to cover a wide breadth of novel technologies. This scope includes androids (humanoid robots), machine-learning models, automated vehicles, and advanced assembly robots. For clarity, I exclude the following from my analysis: software with predefined functionality (e.g., browsers, video games), regular cars, automatic doors, and the standard toaster.

In short, any robot/algorithm labeled as “going to take over the world” falls into the category of intelligent machines. 

As this is an extensive scope embodying a variety of tasks and disciplines, this initial analysis is fairly generic, as I aim to apply it across the entire spectrum.

First, let’s tackle the misconception that “machines triumph over people because they don’t make mistakes.” Anyone who has ever opened a pack of Skittles and discovered a malformed Skittle hiding inside recognizes this falsehood. Machines make mistakes, and assuming otherwise is placing blind trust in an imperfect machine.

I will concede that machines make fewer mistakes than human equivalents for certain tasks. But propagating the misconceived notion that they never make mistakes is incredibly dangerous — especially with the emergence of technologies like autonomous vehicles and generative AI models. There’s a real danger in not investigating a model’s outputs. 

For example, a few years ago Amazon rolled out an AI hiring bot to assist HR with the résumé intake process. Based on Amazon’s previous hiring data, the model determined which candidates moved on to the next stage of the hiring process. However, it was found that the model had developed a bias against female candidates. Further investigation revealed that Amazon’s previous hiring practices had disproportionately favored male candidates over female ones. In short, the model had a flaw.

Now, there is the argument that it’s not the machine itself that was flawed but rather the people who engineered it. In that case, the question becomes: How could any machine achieve perfection if it inherits the imperfect qualities of its creators?
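The mechanism behind the Amazon story is easy to reproduce in miniature: any model scored against skewed historical decisions will faithfully learn the skew rather than the underlying merit. Here is a minimal Python sketch using entirely synthetic, hypothetical data (the group labels, hire rates, and the naive group-rate “model” are all illustrative inventions, not Amazon’s actual system):

```python
import random

random.seed(0)

# Synthetic "historical" hiring data (hypothetical, for illustration only).
# Each record is (skill, group, hired). Skill is distributed identically
# across groups, but the past process rarely hired qualified group-B people.
history = []
for _ in range(1000):
    skill = random.random()
    group = random.choice(["A", "B"])
    qualified = skill > 0.5
    # Biased past process: qualified group-B candidates hired far less often.
    hired = qualified and (group == "A" or random.random() < 0.3)
    history.append((skill, group, hired))

def group_hire_rate(group):
    """A naive 'model' that scores candidates by their group's historical
    hire rate — it learns the bias in the decisions, not the skill."""
    outcomes = [hired for (_, g, hired) in history if g == group]
    return sum(outcomes) / len(outcomes)

print(f"learned score for group A: {group_hire_rate('A'):.2f}")
print(f"learned score for group B: {group_hire_rate('B'):.2f}")
```

Even though both groups are equally skilled by construction, the learned score for group B comes out far lower, because the only signal the model ever saw was the biased outcome of past decisions.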

To further challenge the notion of technological superiority: are the flaws expressed by these superintelligent machines even fixable, or are they inherent to the design? Is a “perfect machine” even a realistic goal? For example, generative AI models are fundamentally plagued by hallucinations that stem from their very design. For context, a chatbot hallucinates when it outputs blatantly wrong statements with the absolute confidence of a politician.

To preach the inherent perfection of machines is to perpetuate a misconception. Instead of trusting in the perfection of silicon and binary, we should scrutinize these machines and their creators — especially when their actions impact the well-being and lives of others.

I’m not arguing against innovation, nor against the benefits of using machine tools in industry. I’m arguing against the zealotry of believing in the infallibility of synthetic intelligence. 

Checks and balances are required to prevent these fallible systems from inflicting irrevocable harm on humanity and the world. We should take the same stance as we do with people, the original imperfect machines.

Art generated by the text-to-image model from OpenAI: An imperfect machine robot sitting on a cliff over the ocean, digital art.