We have constant surveillance, especially in our cars. The smarter our cars get, the more they’re surveilling us. We are seeing increasing use of those in-car systems in police cases, to see if you were paying attention. Were you talking on your phone? Were you texting and driving? Things like that. There is automation in cars that’s designed to identify people and to stop, right, to avoid hitting you. And as we know, a lot of those systems misidentify Black people as trash cans, and will instead hit them.

There are so many instances where AI is part of our lives, and I don’t think people realize the depth to which it really does drive our lives. And I think that’s the thing that scares me the most for people of color: we don’t understand just how much AI is part of our everyday life. And I wish people would stop and sort of think about, yes, I get easy access to this thing, but what am I trading off to get that easy access? What does that mean for me? And what does that mean for my community?

We have programs like Project Blue Light and Project Green Light, where communities are heavily surveilled in order to “protect” them. But are those created to protect white communities at the expense of Black and brown communities? Right? That’s what we have to think about when we say that these technologies, especially surveillance technologies, are being used to protect people: who are they protecting? And who are they protecting people from? And is the idea that they’re protecting people from a certain group of people realistic? Or is that grounded in some cultural bias that we have?
Read more:
Yohannes, A. (2024, March 19). How AI is unfairly targeting and discriminating against Black people. Mozilla Distilled. https://blog.mozilla.org/en/mozilla/ai/artificial-intelligence-dangers-black-people-african-americans/