As an AI Language Model, I can’t write your blog for you.
Fascinating how obscure phrases rise to join the everyday lexicon in response to unprecedented, widespread events: “flatten the curve,” “screen time,” and most recently, “generative AI models.”
Coming up on the one-year anniversary of ChatGPT, it feels fitting to reflect on some conflicting interpretations of what generative AI will mean for everyday life and the future of humanity. More specifically, I will discuss three points of contention: creation vs. curation, promotion of innovation vs. destruction of incentives, and apocalypse vs. genesis.
One of the most significant controversies accompanying the emergence of commercial generative AI has been the debate surrounding ownership and monetization: Who owns the product of an AI model? A fascinating case study built around this question is the “Say No to AI Art Campaign.” Many of the most popular generative models are image generation models (who doesn’t like to see their imagination spring to life with a click of a button?), such as Midjourney, DALL·E, and Stable Diffusion XL.

Generative AI models still struggle to draw hands, despite being trained on millions of images that include them.
The controversial nature of these models is rooted in their very definition: “generative model” implies that they are creating something from scratch. However, these models are trained on scraped art — mass collection from the internet via an algorithm — without the artists’ consent.
So are these models creating something entirely new, or are they simply curating pre-existing works, in which case they owe artists their dues before monetizing that curation? Either answer opens a new can of worms regarding its consequences and implementation.
Artists have unified to denounce these generative models — and their creators — as thieves rather than innovators. Supporters of the models have responded by arguing that the models learn from existing art just as beginner artists do — an argument with some merit. The fundamental issue, then, is monetization rather than the artificial nature of the entity mimicking the art (humans have imitated each other’s styles for centuries; imitation is where the very concept of plagiarism comes from).
However, there are consequences for stealing someone’s work and passing it off as your own — we have copyright laws for a reason. If we equate these models to human thieves, do they face the same consequences? Do they then obtain the same legal status and the same rights? Do you treat a model as a tool or an accomplice? Is the very nature of its existence a crime, or does it matter how it is implemented on a case-by-case basis?
Coming up with a domain-specific answer for the art industry will require redefining what art is — contradicting the very freeform nature of the field.
The complex issue of ownership and monetization isn’t confined to the domain of art; it spans the entirety of the creation industry — including the more technical innovation spheres.
The internet is unequivocally the greatest invention for the collective sharing of knowledge, experience, and perspectives; never before could you interact with 500 million people within a few clicks of a button — and that’s on Twitter alone.
However, with the growing importance of big data and emphasis on data curation for machine learning models, incentives for sharing information may vanish before our eyes, potentially ending humanity’s tradition of information sharing.
StackOverflow, a question-and-answer website for programmers, exemplifies this consequence of generative AI models in real time, as its web activity has nosedived in recent months.
GitHub’s Copilot, a less commonly known but incredibly powerful generative tool built on a large language model, targets programmers: it draws on the largest reservoir of programming information available — GitHub’s public repositories — and assists programmers in real time by suggesting code as they type.
Consequently, StackOverflow has seen a 35% drop in internet traffic and a 60% decrease in the number of people asking questions monthly. There is no longer an incentive for people to ask questions — Copilot answers within seconds — or to answer them — an AI will be trained on their answer, and no credit will be given to them.
StackOverflow foreshadows what is likely to happen to most question-based forum websites. It’s happening to StackOverflow first because programmers are the most immersed in this technology and the most likely to use it daily. According to GitHub, Copilot is behind 46% of a programmer’s code on average — almost half of all code is written by an AI.
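To make the interaction concrete, here is the kind of completion a Copilot-style assistant produces: the programmer types only a comment and a function signature, and the tool suggests the body. This is a hypothetical illustration of the workflow, not actual Copilot output — real suggestions vary with context.

```python
# A programmer types only the comment and signature below;
# a Copilot-style assistant fills in the body in real time.

# Return the n-th Fibonacci number (0-indexed).
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```

A question that once meant searching StackOverflow — “how do I compute Fibonacci numbers iteratively?” — is now answered inline, without anyone asking or answering it publicly.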
These generative AI models have shut off the tap of knowledge and information they drink from; they discourage individuals from sharing data, data that these models rely on to generate their output. Unless something changes, they are akin to a snake eating its tail.
However, it’s not all bad; Copilot does make programming and debugging more efficient. It used to be said that programmers spend 90% of their time on StackOverflow; now, they can have their questions answered immediately, allowing more time to be spent on innovating new solutions.

This meme predates the rise of generative models; if “asking an AI” were a slice of this pie chart, it would be the majority.
Do the ways in which generative models enable innovation outweigh their detrimental effect on knowledge sharing — the basis for our species’ success since the dawn of man?
At the core of all these questions surrounding the impacts of AI is the debate on whether AI is inherently good or bad: will AI enslave us all or enable us to achieve prosperity?
These ponderings of apocalyptic outcomes or the genesis of a better future will have to wait for part 2 of the blog post; for now, enjoy coming up with your own answers to these questions.
I appreciate how you presented both sides of the argument regarding whether or not AI generated art should be considered original, and therefore monetized. This leaves me curious as to whether you think people who use AI to make art should have their work monetized. In my opinion, I think the art is technically an original work, but monetization is difficult to rationalize when the art can be made by essentially anyone with a few clicks of a button.
AI is our new technology, it is our “When I was a kid I didn’t have any of that” when we’re adults. It’s both exciting and terrifying, and I like how you allude to both of these things, and the legal questions are some of the most important parts about this whole thing to me. As someone who used to program a lot, seeing StackOverflow go is nostalgic and sad. But it raises the question, is new bad or are we scared of letting go of what we know?
AI has always terrified me of its capabilities and power. It is capable of taking over any job industry, and I think the world is starting to seriously realize the dangers of AI. Personally, I don’t feel too strongly that AI-generated art should be monetized. Monetized, man-made art should remain as appreciated art. How do you feel about this? Do you think AI art deserves to be monetized?
AI art is an interesting case where the AI does not open up an art software and then ‘make’ art in the same way as a human. The actual way in which generative AI models function makes it incredibly hard to crack down on them without accidentally catching important protections like satire in the crossfire. With regard to the Stack Overflow example, it’s somewhat scary that since the source blog is effectively set in stone once the AI model is trained off it, if an incorrect response was present, then all the programmers using the new AI will get incorrect solutions, creating the potential for cascading effects. I’m sure it’s fine.