Imperfect Machines

Robots will take people’s jobs. 

Most of us have heard this statement or even spoken it. There’s undoubtedly some truth to it, and in some cases, it’s already happened: the proliferation of self-checkout machines, the impending arrival of semi-automated carriers and delivery drones, the employment of assembly robots in factories, and now the creative industry under siege from generative artificial intelligence models.

The reasoning behind this statement is rooted in the conviction that machines do everything better than people, making them the perfect replacement for imperfect people.

Science fiction routinely portrays robots as flawless, objective, and infallible, furthering the narrative that computers best humans in every conceivable way.

Shown here is the android “Data” from Star Trek: The Next Generation. Throughout the series, Data is depicted as both physically stronger and smarter than any of his biological crewmates.

Upon investigating the concept of the inevitable robot takeover, I have found that the true capabilities of intelligent machines have been blown entirely out of proportion by the rise of commercial generative AI, and that the realistic applications and strengths of these machines are frequently mischaracterized in a haze of unbridled hype and self-perpetuating misconceptions.

Before diving into the deep end, I believe it’s worth defining what sort of “intelligent machines” I refer to in this analysis. 

With the term intelligent machines, I aim to cover a broad range of novel technologies. This scope includes androids (humanoid robots), machine-learning models, automated vehicles, and advanced assembly robots. For clarity, I exclude the following from my analysis: software with predefined functionality (e.g., browsers, video games), regular cars, automatic doors, and the standard toaster.

In short, any robot/algorithm labeled as “going to take over the world” falls into the category of intelligent machines. 

As this is an extensive scope spanning a variety of tasks and disciplines, this initial analysis stays deliberately general, as I aim to apply it across the entire spectrum.

First, let’s tackle the misconception that “machines triumph over people because they don’t make mistakes.” Anyone who has ever opened a pack of Skittles and discovered a malformed Skittle hiding inside recognizes this falsehood. Machines make mistakes, and assuming otherwise means placing blind trust in an imperfect machine.

I will concede that machines make fewer mistakes than human equivalents for certain tasks. But propagating the misconceived notion that they never make mistakes is incredibly dangerous — especially with the emergence of technologies like autonomous vehicles and generative AI models. There’s a real danger in not investigating a model’s outputs. 

For example, a few years ago, Amazon built an AI hiring tool to assist HR with the resume intake process. Trained on Amazon’s previous hiring data, the model determined which candidates to move on to the next stage of the hiring process. However, the model was found to have developed a bias against female candidates. Further investigation revealed that Amazon’s previous hiring practices had disproportionately favored male candidates over female ones. In short, the model had a flaw: it had learned the bias baked into its own training data.
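To make the mechanism concrete, here is a minimal, entirely synthetic sketch in Python (using scikit-learn). This is not Amazon’s actual system; the features, data, and weights are invented purely to illustrate how a model trained on skewed historical decisions reproduces that skew:

```python
# A toy illustration (not Amazon's system) of a model inheriting bias
# from its training data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features per candidate: a hypothetical skill score and a gender flag.
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical labels: past decisions favored male candidates regardless of skill.
hired = ((skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gender flag is large and positive: the model
# has faithfully reproduced the bias present in its training data.
print(dict(zip(["skill", "is_male"], model.coef_[0])))

# Two candidates identical in skill, differing only in the gender flag,
# receive very different predicted hiring probabilities.
same_skill = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_skill)[:, 1])
```

Notice that nobody hand-coded a rule to prefer men; the model simply inherited the pattern from the historical decisions it was trained on.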

Now, there is the argument that it wasn’t the machine itself that was flawed but rather the people who engineered it. In that case, the question becomes: How could any machine achieve perfection if it inherits the imperfect qualities of its creators?

To further challenge the notion of technological superiority: are the flaws expressed by these supposedly superintelligent machines even fixable? Or are they inherent to the design? Is a “perfect machine” even a realistic goal? For example, generative AI models are fundamentally plagued by hallucinations, a consequence of how they are designed. For context, a chatbot hallucinates when it outputs blatantly wrong statements with the absolute confidence of a politician.

To preach the inherent perfection of machines is to perpetuate a misconception. Instead of trusting in the perfection of silicon and binary, we should criticize these machines and their creators — especially when their actions impact the well-being and lives of others.

I’m not arguing against innovation, nor against the benefits of using machines as tools in industry. I’m arguing against the zealotry of believing in the infallibility of synthetic intelligence.

Checks and balances are required to prevent these fallible systems from inflicting irrevocable harm on humanity and the world. We should take the same stance as we do with people, the original imperfect machines.

Art generated by OpenAI’s text-to-image model from the prompt: “An imperfect machine robot sitting on a cliff over the ocean, digital art.”