Artificial intelligence (AI) is a rapidly evolving field with many practical uses in today's world. The general goal of AI is to mimic cognitive functions displayed by natural intelligence: artificially learning to perceive an environment and achieve planned goals. AI is already implemented in automated decision making (video games), understanding human speech (Siri and Alexa), self-driving cars (Tesla), and advanced web search engines (Google) that recognize patterns to improve recommendations for a specific user. With AI's prevalence in our daily lives increasing, it is important not only to improve its functionality but also to focus on the ethics governing the way AI is created. I am not talking about potential doomsday events brought about by AI that suddenly becomes smarter than its creators, but about more pressing issues we are seeing right now: gender bias and discrimination, conversations that tend to be sidelined when any product is in its development stage. With the exciting prospects that AI brings, people often forget that the same technology can carry powerful consequences as well. I think it is important to confront these impacts sooner rather than later, to prevent the unconscious prejudices of developers from being written into programs or built into the datasets used to train the software.
One of the first steps in implementing machine learning and artificial intelligence programs is to collect data samples that provide an accurate representation of real-life scenarios. These datasets can take the form of numbers, photos, text, transaction records, or anything else the program is required to analyze. As with any experiment, the more training data a machine has, the better. Once training data is obtained, programmers choose a machine learning model and train it to recognize patterns and even make predictions based on those patterns. While it would be nice to assume that the training data provided always represents society as a whole, recent results from companies have proven that this is not the case. For example, in 2018, Amazon revealed that it had to scrap its AI-based recruiting program after realizing that the system had taught itself to favor male candidates and to penalize resumes that included the word "women's." The AI that Amazon used was trained on the applications submitted to the company over the previous ten years, most of which came from men, reflecting male dominance across the tech industry. So the AI mistakenly recognized this pattern as a favorable trend and applied it while screening new applications.
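To make that concrete, here is a deliberately tiny sketch of how such a pattern gets learned. The resumes and labels below are fabricated for illustration, and the model is a simple scikit-learn text classifier, not anything Amazon actually used; the point is only that a word correlated with rejections in the historical data ends up with a negative weight.

# Toy sketch (assumed data, not Amazon's system): skewed training
# data teaches a text classifier a gendered signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated "historical" screening records: resumes mentioning a
# women's organization happen to fall in the screened-out group,
# mirroring a male-dominated applicant pool.
resumes = [
    "captain of chess club, software engineering intern",
    "led robotics team, backend developer",
    "software engineering intern, hackathon winner",
    "captain of women's chess club, software engineering intern",
    "women's coding club president, backend developer",
]
labels = [1, 1, 1, 0, 0]  # 1 = advanced to interview, 0 = screened out

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" comes out negative,
# because the training data links it to rejection.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])

Notice that nothing in the code mentions gender explicitly; the bias comes entirely from the skewed examples the model was given.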
Although many people within the tech industry think it is important not to place heavy restrictions on AI while it is in its early developmental stages, I think it is important for developers and leaders to understand their responsibility to ensure that the training data they use to create their programs does not in any manner reflect personal or societal biases. The implementation of AI, when calibrated correctly, could be the key to permanently removing barriers faced by minorities in the tech industry. However, if we are not thoughtful and conscientious about taking a few extra steps to ensure inclusivity, such as the simple audit sketched below, it could also solidify existing barriers and create new ones.
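One of those extra steps could be as basic as checking a training set's group balance and outcome rates before fitting anything. This is a minimal sketch with made-up data and hypothetical column names, not a full fairness audit, but it shows the kind of question worth asking up front.

# Minimal audit sketch (assumed column names, fabricated data):
# inspect representation and outcome rates before training.
import pandas as pd

df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F"],
    "hired":  [1,   1,   0,   1,   0,   0],
})

# How balanced is the sample?
print(df["gender"].value_counts(normalize=True))

# Do historical outcomes differ sharply by group? If so, a model
# trained on this data will likely reproduce that gap.
print(df.groupby("gender")["hired"].mean())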
tvc5559 says
Great post. I find this topic particularly fascinating, especially as a computer science major and someone very interested in the field of AI. Before I learned anything about AI, I think I always saw it as what it is meant to become: a perfect decision-maker without any bias. However, as I learn more about how models are created and trained, I've realized that, like you said, it can suffer from humanity's biases even more than humans do. This is a big problem, considering that many people, especially those who are not technically inclined, may not realize that the AI they use has bias at all. It is definitely going to be a very thin line between perfecting the decision-making of AI models and sending them down the same problematic paths that humanity is already struggling with, but I am excited to see how it all turns out!