All of this raises the question of what the best use case for something like Gemini is in the first place. Are we really lacking in sufficient historically accurate depictions of Nazis? Not yet, although these generative-AI products are positioned more and more as gatekeepers to knowledge; we might soon see a world where a service like Gemini both constrains access to information and pollutes it. And the definition of AI is expansive; it can in many ways be understood as a mechanism of extraction and surveillance.
We should expect Google—and any generative-AI company—to do better. Yet resolving issues with an image generator that creates oddly diverse Nazis would rely on temporary solutions to a deeper problem: Algorithms inevitably perpetuate one kind of bias or another. When we look to these systems for accurate representation, we are ultimately asking for a pleasing illusion, an excuse to ignore the machinery that crushes our reality into small parts and reconstitutes it into strange shapes.
Read more:
Gilliard, C. (2024, February 26). The Deeper Problem With Google’s Racially Diverse Nazis. The Atlantic. https://www.theatlantic.com/technology/archive/2024/02/google-gemini-diverse-nazis/677575/