Rights to Creation: The Creator and WGA

I’m sure some of you saw the trailer for The Creator, a movie that hit theaters in September. The trailer appeared to promise an action-packed, intellectual movie that grapples with what AI will mean for the future and how our relationship with it will develop over time. While the movie began filming in January of 2022, its timing could not have been better, nor could it have been more ironic.

Poster image for The Creator movie

According to a review by critic Christy Lemire, The Creator is not as innovative as it appears to be. She comments that its central trope has been reused excessively throughout film history: the main kid is an all-powerful being who could be humanity’s savior or its destroyer — sound familiar?

What’s ironic about this blatant reuse of tropes is that it’s almost as if an AI had been used to create the plot for the movie: regurgitating previous work instead of creating something new and unique. This irony turns problematic when one remembers the Writers Guild strike that went on for the past several months.

That strike ended on September 24th, when the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers announced that an agreement had been reached. This was Hollywood’s second-longest strike, and it appears to have been a successful one: WGA leaders called the deal “exceptional.”

The agreement doesn’t prevent writers or productions from using generative AI but lays out rules prohibiting the use of the software to reduce or eliminate writers and their pay.

“A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services” (Statement from the agreement).

Writers Guild of America on Strike

This agreement seems to be a good first step toward regulating AI in a way that puts power into the hands of individuals rather than large corporations.

Additionally, the agreement states that the WGA has the “right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.”

Besides the simple use of AI technologies as a tool to replace writers and lower their pay, another major concern has been the exploitation of writers’ work and ideas: using their creations to train the very AI models that could one day replace them.

Regardless of the outcome of this strike (once the final details are released), I believe it will serve as a precedent for future discussions regarding the regulation of AI tools and the protection of individual data and rights.

The question that needs answering, as soon as possible, is who owns what. For example, if data is scraped from a website and used to train an AI model, the company that owns the website will likely cry out that it was not compensated for the use of that information. Yet that same company is not compensating the people who created that content and posted it online.

It becomes a tangled web of ownership and rights, one that I am unsure will be untangled properly without damaging something or someone.

Learning Machine Learning

“Knowledge is not power, it is only potential. Applying that knowledge is power.” — Takeda Shingen (1521-1573)

Alright, so we’ve talked quite a bit now about AI, its ramifications and ethical questions, the state of the internet, etc. Now I think it would be quite pertinent to discuss how one might go about learning machine learning. I think it’s important to have a base understanding of how ML models operate, as misunderstanding their base functionalities can lead to poor regulatory decisions down the line and can influence the way people view the applications of machine learning.

If AI is applied in a regulated manner, it has the potential to lead to a world in which humans and machines can coexist.

To preface this blog, I’d like first to say that I am a self-taught machine learning engineer; my knowledge comes from hands-on experiences with developing my own projects and researching online.

If you are looking to understand machine learning, there are a few categories that you may belong to.

  1. You may be a programmer looking to understand the way machine learning works and want to learn enough about it so you can implement it into your own projects.
  2. You may be interested in pursuing a career in machine learning and want to understand the low-level math behind the models.
  3. You may be an entrepreneur looking to understand machine learning from a business perspective so that you can take advantage of its benefits to create an attractive business model.
  4. You may simply be interested in gaining a high-level understanding of a piece of technology that is very likely to become increasingly prevalent in our lives in the coming years.

All of these positions are equally valid, and there are plenty of resources online for learning machine learning (even if you don’t fall into any of those categories). I can relate to all four of these categories myself and will do my best to offer advice on how to pursue each of these goals.

For all four of these perspectives, I recommend watching <> video on YouTube. They do a good job of explaining the base methodologies behind machine learning and data science, as well as their limitations and use cases. If you don’t have time to watch it, stick around: I hope to do a general overview in a future blog post (which may be helpful for those who fall into category #4).

Groups 1 and 2: Programmers

If you are in groups 1 and 2, I recommend starting out with Google’s ML Bootcamp online course. They explain some of the core concepts plainly and provide hands-on exercises to apply those concepts. After going through it, try creating a simple model on your own, applying what you learned without simply copying code over.

  • Note: One downside to Google’s ML Bootcamp is that it is based on TensorFlow. While it’s a good starting place for learning the core concepts of machine learning, I recommend switching to PyTorch as soon as possible, as it allows for much easier debugging as a trade-off for more low-level coding (see the sketch below for a sense of what that low-level style looks like).
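To give a concrete feel for that trade-off, here is a minimal sketch of a PyTorch training loop on synthetic data. It’s only an illustration of the moving parts (forward pass, loss, backward pass, optimizer step); the data, model shape, and hyperparameters are placeholders I chose for the example, not anything from the course.

```python
# Minimal PyTorch training loop on synthetic data (illustrative only).
import torch
from torch import nn

# Fake regression data: y = 3x + 0.5 plus a little noise.
X = torch.randn(256, 1)
y = 3 * X + 0.5 + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()      # clear gradients from the previous step
    pred = model(X)            # forward pass
    loss = loss_fn(pred, y)    # mean-squared error
    loss.backward()            # backpropagate
    optimizer.step()           # update the weights
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss {loss.item():.4f}")
```

Every step is explicit, which is exactly why debugging tends to be easier: you can drop a print or a breakpoint anywhere in the loop.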

Once you’ve done that, I recommend making it a habit to play around with Kaggle.com: they have ML competitions, hands-on exercises, and a ton of other resources. It’s a great place to learn from others and get a better idea of the strategies behind ML. If you are able, I also recommend getting your hands on a copy of “Approaching (Almost) Any Machine Learning Problem” by Abhishek Thakur.

I recognize that I just threw a lot of resources at you, so I’ll end this portion with this: the single best thing I can recommend is creating a machine learning project from scratch. Collect the data yourself, interpret and explore it on your own, determine what kind of model best fits the problem you are trying to solve, and build an MLOps pipeline so you can iterate on your application.
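To make the shape of such a project concrete, here is a minimal end-to-end sketch using scikit-learn. The toy dataset and the random-forest model are placeholders I picked for illustration; in a real project you would substitute your own data, your own exploration, and whatever model fits the problem.

```python
# Skeleton of an end-to-end ML project: load data, split, train, evaluate.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# 1. Collect / load data (a built-in toy dataset stands in for your own).
X, y = load_diabetes(return_X_y=True)

# 2. Explore the data, then split it so evaluation stays honest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 3. Pick a model suited to the problem (a regression model here).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# 4. Evaluate: this metric is what an MLOps pipeline would track across iterations.
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

A real pipeline would wrap these steps so you can rerun them automatically as the data and model evolve; that iteration loop is what the MLOps tooling exists to support.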

Group 2: Low-Level Understanding

For those in group 2 who are interested in understanding the low-level details, I recommend StatQuest on YouTube (they explain the mathematical concepts very well) and the d2l.ai online book.

To stay up to date with all of the current research developments going on with AI, I recommend the YouTube channel AI Explained; the creator does a great job of summarizing recent developments and diving into what they actually mean for the future.

Group 3: Business Lens

For those with a business lens, I recommend getting involved in the Nittany AI Student Society here on campus. They do a lot of things with machine learning but are definitely more business-focused. Their Nittany AI Challenge is a good place to get real hands-on experience designing an ML application that can be applied to a real-world problem.

The Nittany AI Student Society hosts a yearly competition in which teams create start-up-esque machine learning applications and compete for real money to further their projects.

Additionally, I recommend looking into the limitations of AI. Knowing which kinds of problems won’t be solved with AI is likely more beneficial than knowing which ones can be. Knowing when to say no is often more important than knowing when to say yes.

Group 4: Users of this Technology

The single best advice I can give here is to be critical of AI applications: what kind of data did they train the model on? Can you think of any potential biases that the model may have? If a person said/did it, would you believe/trust that person at face value?

Being critical and asking questions, looking things up when you don’t know the answer, and reading from multiple sources will provide you with all the information you need to make informed decisions about a topic that is still on the precipice of being implemented everywhere.

While it may feel like these developments are out of our hands, we determine how they are going to shape the future. It will be up to us to decide what regulations to put in place, up to us to decide how we use the AI applications that will be thrown our way, and up to us to handle the ethical and moral conundrums that will inevitably arise because of AI.

No pressure.

The Metaverse is Already Here, and it Has Been For A While

Zuckerberg announces Facebook’s rebranding to Meta (totally not a mid-life crisis).

“To reflect who we are and what we hope to build…our company is now Meta.” – Mark Zuckerberg.

Almost two years ago, Facebook changed its name to Meta. They said the change was meant to better reflect their new ambition of bringing the “Metaverse” into reality — they hoped to claim it as their invention.

Truth is, like many things Meta has claimed to pioneer, they’ve acquired (or rather stolen) the term from pre-existing media. The term “Metaverse” was coined by Neal Stephenson in his 1992 novel “Snow Crash,” whose story is set in a dystopian future where people live their lives in virtual reality.

Snow Crash book cover; first time the word “metaverse” was coined.

What if I told you that the Metaverse is already here, and has been here for decades now?

First, let’s define what I mean when I say Metaverse. For those who have seen the film “Ready Player One,” I am not referring to the software and hardware that make a smooth transition between virtual reality worlds possible. Instead, I’m focusing more on the idea of the Metaverse. To me, the Metaverse is a place not directly attached to reality in which people can create, self-identify, and meet other people — live their lives.

With this definition, it becomes clear how the Metaverse has been around for decades, ever since the creation of online chatrooms. Its magnitude and influence have only grown since then.

So, Meta “claiming” the Metaverse as their invention is akin to someone claiming to be the sole inventor of trains. Meta isn’t even pioneering the virtual reality space; their headsets and platform were created by Oculus and acquired by Meta (some of the branding still says Oculus). Although, considering the number of FTC (Federal Trade Commission) investigations into Meta’s practices, this isn’t exactly surprising behavior from Zuckerberg.

As technology has progressed, the line between the Metaverse and reality has become so blurred that it can be hard to tell the difference between the two, especially as each influences the other.

I remember reading about a TikTok trend in which people with door cameras would leave notes asking Amazon delivery workers to do something for the camera, then post the footage online. These workers, who depend on reviews for their jobs, have to do a little dance for some nameless crowd online. The dystopia laid out in Snow Crash is already here. We already live in the Metaverse.

I’m not saying that technology is inherently bad, but when it becomes the first thing you look at in the morning and the last thing you engage with before bed, that’s kind of sad. I’m not criticizing; this is something I personally do, and it’s depressing.

What’s the solution?

There’s no perfect solution that I can see. Technology will keep developing, algorithms will get better and better at keeping us hooked, and corporations will keep looking for ways to make money off us.

In fact, I would argue that technology has become a drug, but that’s a topic to explore another day.

On the other hand, we can’t just unplug; the benefits to society far outweigh the negatives. Innovations in medicine, human connection, and information sharing are all enhanced by technology.

The only thing we can do is work on ourselves. Take the time to remember the world we actually live in: the real world, not the Metaverse. Turn off your music sometimes while you are walking; take a look at the trees, the sky, the people. Grounding yourself in the present can help combat these anxieties, or at the very least give you a break from the overstimulation of today’s technologies.

Which is ironic, considering you’re reading this online. Oh well.

Science and Religion Aren’t That Removed From Each Other

Do we seek out our own Creator, or does Creation seek us out?

The separation of Church and State doesn’t seem to apply to the head-on collision of Church and Science that has occurred over the past 200 years.

Why is it that whenever machine-learning engineers talk about AI, they often seem nonchalant about the possibility of their creations bringing about the end of humanity? They’ve certainly seen the future foretold in almost every sci-fi movie involving artificial intelligence. When a religious lens is applied to the situation, it appears that their motivation for creating these machines carries a sort of religious zeal.

It is something they must do. They are part of a greater crusade to reinstate Adam and Eve to the perfection of Eden. All part of a centuries-old quest to achieve perfection, and thus eternal life, through objectivity. As stated in a Vox article, “A lot of excitement about the building of a superintelligent machine comes down to recycled religious ideas.”

In Christianity, Adam and Eve fell from grace as they gained knowledge of good and evil — the ability to do either, to be imperfect. In his book The Religion of Technology, historian David Noble explains how, in the Middle Ages, Christian thinkers began to wonder: “What if tech could help us restore humanity to the perfection of Adam before the fall?” This is evident in the rise of the motto “ora et labora” — prayer and work — in monasteries as they became “hotbeds of engineering.”

If the pursuit of Artificial General Intelligence is viewed through the same lens, does the fervor with which machine-learning architects pursue it stem from the same anxieties around mortality that religion addresses?

Considering that people are already imagining ways to immortalize themselves through artificial intelligence, I would argue that it does.

It’s not just AI architects who are interested in pursuing digital immortality; Tom Hanks, a famous actor, recently stated that he would be interested in using AI so that he could continue to perform in movies long after his death.

Tom Hanks in The Polar Express. He has stated that the existing digital models of his face from that film would make it easier to create an AI version of him.

If the pursuit of Artificial Intelligence is founded on the same anxieties as religion, which is often dismissed for not being objective enough, does that undermine the scientific nature of the endeavor?

I argue that isn’t the case.

Religion (often) seeks to provide comfort in an uncertain world to help us grapple with the “meaning” of our existence — valuable to the health and well-being of many.

Science accepts the randomness and explainability of our circumstances and seeks to find solutions to the things that ail us through a concrete understanding of the laws of nature — the pursuit of knowledge.

However, science doesn’t need to be carried out in an objective void — separate from all emotions and passions.

There’s an idea that percolates throughout the scientific community: that in order to get something right, you have to be objective about it. While this may seem like sound logic, it can actually be detrimental to the scientific process, especially when people start denying the existence of their own emotions and those of others.

Confirmation bias is a perfect example of this. It occurs when someone develops a hypothesis and then looks at the data through that lens, observing and interpreting it in whatever way supports the hypothesis. If you are in an environment where people are aware of confirmation bias and call you out on it, it becomes less of a problem. However, if you are in a space where people disregard their own imperfections and biases, you’re going to have a fun time explaining why your product turned out to be faulty even though the data seemed to confirm your hypothesis.

In short, I believe that science is a human endeavor. A human experience. It should be carried out in a way that embraces our imperfections, our curiosities, and our desire to understand. It shouldn’t ignore our fears, our hopes, or our dreams either. Science is, and should be, a holistic endeavor that encompasses all of us.