
Are we programming computers to be biased?

By In Pun, Berfin Evrim, Yanan Cheng

A robotic arm in the construction industry. Source: https://flic.kr/p/25TfZSq

Computers have been gaining skills on what seems like a daily basis. Behind the tasks computers can do, however, are humans who develop new systems and new ways to encode algorithms. Yet even though humans control computers, popular myths continue to portray computers as taking over the world. Who, then, is ultimately responsible for what computers “decide” and do?

One of the most important questions for designers – of products or buildings – concerns the role of computers in the design process. Machines can serve as designers, constructors, and assistants. Given the right algorithm, computers can do many of the things that humans can do. They cannot, however, make their own ethical decisions, and so they reflect the moral values of their designers. Consequently, computer systems can embed bias and end up making unethical decisions. In an article on computer ethics, computer specialists Batya Friedman and Helen Nissenbaum note that “a system discriminates unfairly if it denies an opportunity or a good or if it assigns an undesirable outcome to an individual or group of individuals on grounds that are unreasonable or inappropriate.” Because computers perform according to the data and code that humans supply, the result can be designs that are biased or sexist, as a few examples will show.

Facebook and its billions of users. Image: https://flic.kr/p/84654V

Fighting Over Data Sources

To perform a task, computers need enough data to drive an algorithm – a step-by-step process for performing an assigned task. Larger data collections generally yield more accurate results, which is why data monopolies like Google and Facebook collect so much information about their users. The question is how they obtain all this data and how they use it.
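A minimal sketch in Python suggests why scale matters so much to these companies. The “true” rate and the sample sizes below are invented purely for illustration; the point is simply that the more observations an algorithm has, the more closely its estimates tend to track the quantity it is trying to learn.

```python
# Illustrative sketch only: estimating a fixed "true" rate from random
# samples of increasing size. TRUE_RATE and the sample sizes are invented;
# larger samples typically estimate the hidden quantity more accurately.
import random

random.seed(0)
TRUE_RATE = 0.3  # the hidden quantity the "system" is trying to learn

for n in (10, 100, 1_000, 10_000, 100_000):
    hits = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    estimate = hits / n
    print(f"n={n:>6}: estimate={estimate:.3f}, error={abs(estimate - TRUE_RATE):.3f}")
```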

Facebook uses a myriad of ways to gather data on its users; Google gathers information through Google Drive, Gmail, Google Search, Google Maps, and YouTube. Every time a person uses these applications, each step is recorded in the information pool. There is no permanent deletion from these data sources, and the data collection itself has become contentious. Facebook recently faced a federal investigation into its data sharing. As a 2018 Wall Street Journal article noted: “Facebook Inc. disclosed it gave dozens of companies special access to user data, detailing for the first time a spate of deals that contrasted with the social network’s previous public statements that it restricted personal information to outsiders in 2015.” The tech giant not only ignored privacy, a fundamental right recognized by the United Nations, it also misled its users about its activities. It is this kind of scandal that makes people question the trustworthiness of technology companies and sometimes the technology itself.

Compounding this issue, when third parties receive information about us, we do not know what they do with it. Even if the information is collected anonymously, we cannot know whether companies are using the data to encode neutral systems – or biased ones.

Machines That Make Decisions: The Self-driving Car

MIT’s “Moral Machine”, faced with the choice of who will survive a crash, asks: “What should the car do?” Image: https://flic.kr/p/2bo7NxA

When designing a decision-making machine, people must first give it a moral sense. In 2016, researchers at the MIT Media Lab launched Moral Machine, a massive survey to gather human perspectives on moral decisions made by “intelligent” machines such as driverless cars. They presented participants with various moral dilemmas, such as whether a self-driving car should continue straight ahead, killing three elderly pedestrians, or swerve into a barricade, killing three youthful passengers. As outside observers, the participants were asked to judge which outcome they thought was more acceptable. More than two million online participants from over 200 countries took the survey.

Edmond Awad, a postdoc at the MIT Media Lab, noted that the researchers aimed to understand the kinds of moral decisions that driverless cars might have to make. The study found three elements that people seem to consider most frequently: they tend to prioritize saving human lives over those of other animals, saving the lives of many over the lives of a few, and saving the lives of the young over those of older people. What is more, participants from poorer countries with weaker institutions were more tolerant of jaywalkers relative to pedestrians who cross legally. And participants from countries with a high level of economic inequality showed greater gaps between the treatment of individuals with high and low social status.

The study further found that countries in close proximity to one another often show similar moral preferences, which fall into three dominant geographic clusters: West, East, and South. For instance, respondents in the southern cluster had a relatively stronger tendency to favor sparing the young over the elderly, especially compared with the eastern cluster. Participants from individualistic cultures, such as the UK and the United States, placed a stronger emphasis on sparing more lives across the choices, reflecting the greater value placed on each individual.

The preferences recorded can help shape the design and regulation of such vehicles, although this creates its own ethical problems. For example, carmakers may find that Chinese consumers prefer a car that protects its occupants over pedestrians. Knowing such preferences can inform the way people program autonomous vehicles. Although the study found that some preferences were broadly shared, the degree to which participants agreed with them varied among groups and countries. Given this lack of moral consensus about how to program cars, researchers still need to think more deeply about the ethics of self-driving vehicles. They must analyze who takes on more or less risk and, more importantly, identify where and how bias enters when machines are programmed to spare certain lives over others. While it is often claimed that autonomous technology will make roads safer and more efficient, programmed decision-making will always incorporate a degree of bias. If we program a machine to make a biased decision, we are ultimately responsible for the consequences.
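To make this concrete, the sketch below imagines such a crash decision reduced to a scoring rule. Nothing in it comes from the MIT study or from any real vehicle; the outcome class, the weights, and the scenario are all invented. What it shows is that the machine’s “decision” is nothing more than arithmetic over constants a programmer chose, so whatever preference those constants express – for passengers over pedestrians, or for the young over the old – is a human bias written into code.

```python
# Hypothetical sketch: a crash decision reduced to a scoring rule.
# The weights below are invented for illustration; choosing them is a
# moral judgment made by a programmer, not by the machine.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    passengers_harmed: int
    pedestrians_harmed: int
    average_age: float  # average age of those harmed

# Programmer-chosen constants: these numbers ARE the encoded bias.
WEIGHT_PASSENGER = 1.0   # value placed on the car's own occupants
WEIGHT_PEDESTRIAN = 1.0  # value placed on people outside the car
YOUTH_WEIGHT = 0.2       # extra penalty for harming younger people

def harm_score(o: Outcome) -> float:
    """Lower is 'better' according to the programmed preferences."""
    base = (WEIGHT_PASSENGER * o.passengers_harmed
            + WEIGHT_PEDESTRIAN * o.pedestrians_harmed)
    youth_penalty = YOUTH_WEIGHT * max(0.0, 60.0 - o.average_age) / 60.0
    return base * (1.0 + youth_penalty)

def choose(outcomes):
    # The "decision" is just a minimum over human-authored scores.
    return min(outcomes, key=harm_score)

swerve = Outcome("swerve into barrier", passengers_harmed=3,
                 pedestrians_harmed=0, average_age=25.0)
straight = Outcome("continue straight", passengers_harmed=0,
                   pedestrians_harmed=3, average_age=70.0)
print(choose([swerve, straight]).label)  # prints "continue straight"
```

With these particular constants the car spares its young passengers and harms the elderly pedestrians; change one number and the outcome flips, which is exactly why the choice of numbers is an ethical act.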

Machine Learning

Stereotypical gender roles. Image: AdobeStock/rodjulian, https://stock.adobe.com/images/stereotypical-gender-roles/96748691

Computation is not limited to developing cars; it has become integrated into all sorts of design work, including architectural design. This raises basic questions about machine computation and about obtaining designs, aesthetic criticism, and judgments from machines. There are quite a few methods for producing designs with computers, including programming and geometric modeling. Computing is seen as a boon to design; however, designers are rarely taught to question the algorithms being used for design computing.
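As a hedged illustration of what there is to question, the sketch below imagines a tiny “generative” facade script. The function, its parameters, and its default values are invented rather than taken from any real design tool, but the pattern is common: the defaults quietly decide the proportions of every design the script produces, and a designer who never opens the code never sees the assumptions being made on their behalf.

```python
# Hypothetical sketch of a tiny generative facade script. The defaults are
# invented; the point is that they quietly shape every design produced.
DEFAULT_BAY_WIDTH_M = 3.0    # author's assumption, invisible to most users
DEFAULT_WINDOW_RATIO = 0.4   # author's assumption, invisible to most users

def facade_bays(total_width_m, bay_width_m=DEFAULT_BAY_WIDTH_M,
                window_ratio=DEFAULT_WINDOW_RATIO):
    """Divide a facade into equal bays, each with a window sized by window_ratio."""
    n_bays = max(1, int(total_width_m // bay_width_m))
    bay = total_width_m / n_bays
    return [{"bay": i, "width_m": round(bay, 2), "window_m": round(window_ratio * bay, 2)}
            for i in range(n_bays)]

# A designer calling facade_bays(21.0) accepts both defaults without ever
# seeing them: the script, not the designer, has set the proportions.
print(facade_bays(21.0))
```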

In a recent study, Vicente Ordonez, a computer science professor at the University of Virginia, was developing image-recognition software when he noticed a pattern: the program associated images of a kitchen more often with women than with men. He began to question whether he had unconsciously programmed biases into the software. Two research studies of image collections supported by Microsoft and Facebook confirmed how common such biases are. The studies showed a gender bias in the depiction of activities: cooking and shopping were associated with women, while sports were tied to men.

People like to assume that computers and machines produce neutral results. In reality, machine-learning software that uses datasets to “train” itself often amplifies existing social biases. Companies currently rely heavily on software that learns by sorting through piles of data, and this has led to computers taking on unsavory biases from both the programmer and society in general. To counteract this phenomenon, researchers must be sensitive to bias in the first place and then specify what they want to correct. When people blindly apply algorithms to solve problems without evaluating the consequences, ethical problems can arise. People tend not to question computational results because computers are seen as “neutral”, and this assumption leads us to overlook the biases our designs generate or perpetuate.
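A minimal sketch, using invented counts that stand in for an annotated image collection, shows the amplification mechanism: if roughly two-thirds of the “cooking” images in a training set depict women, a model that simply predicts the most likely label will say “woman” for every cooking image it sees, turning a skew in the data into an absolute rule.

```python
# Minimal sketch with invented counts: a most-likely-label predictor
# trained on skewed co-occurrence data does not just reproduce the skew,
# it amplifies it into an absolute rule.
from collections import Counter, defaultdict

# Invented training pairs standing in for an annotated image collection.
training_data = (
    [("cooking", "woman")] * 66 + [("cooking", "man")] * 34 +
    [("sports", "man")] * 70 + [("sports", "woman")] * 30
)

# "Training": count which gender label co-occurs with each activity.
counts = defaultdict(Counter)
for activity, gender in training_data:
    counts[activity][gender] += 1

def predict(activity):
    # Maximum-likelihood prediction: always return the majority label.
    return counts[activity].most_common(1)[0][0]

for activity in ("cooking", "sports"):
    share = counts[activity]["woman"] / sum(counts[activity].values())
    print(f"{activity}: {share:.0%} of training images show women, "
          f"but the model predicts '{predict(activity)}' every time")
```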

Conclusion

Whether we are programming cars or designing buildings, computers are not neutral decision-makers. Rather, we program them in ways that carry potential bias. As architects and designers, we need to carefully evaluate what software and algorithms we use in our designs. Humans are arguably biased, and we program software to learn from society. Even when the datasets we use reflect actual statistics, we can no longer blindly apply design technology without finding better ways to evaluate the data being used and the results being generated.

In spite of the advantages computers have brought to the design professions, we must be aware of the ethical problems that can arise from the uncritical use of algorithms and software. As designers, we often use whatever software is available on the market, especially when it is promoted as enhancing productivity or providing convenience. We have largely adopted computing and machines without first analyzing the sources of the data being used and how computers are utilizing that data. Built-in algorithms can thus potentially influence or even dictate our design decisions. Computers are extremely useful, as they can complete certain tasks far faster than humans. We must, however, try to control the values inherent in computational design, and not let computers end up dictating our values.

 

References

  • Aouf, Rima Sabina. “MIT Surveys Two Million People to Set out Ethical Framework for Driverless Cars.” Dezeen. October 26, 2018. Accessed December 12, 2018. https://www.dezeen.com/2018/10/26/mit-moral-machine-survey-driverless-cars-technology/.
  • Benes, B., D. J. Kasik, W. Li, and H. Zhang. “Computational Design and Fabrication.” IEEE Computer Graphics and Applications 37, no. 3 (May 2017): 32–33. https://doi.org/10.1109/MCG.2017.50.
  • Colvile, Robert. “Is a robot about to take your job?” The Telegraph, June 6, 2016. https://www.telegraph.co.uk/men/thinking-man/is-a-robot-about-to-take-your-job/
  • Curran, Dylan. “Are You Ready? Here Is All the Data Facebook and Google Have on You.” The Guardian, March 30, 2018. https://www.theguardian.com/commentisfree/2018/mar/28/all-the-data-facebook-google-has-on-you-privacy
  • Dizikes, Peter, and MIT News Office. “How Should Autonomous Vehicles Be Programmed?” MIT News. October 24, 2018. Accessed December 02, 2018. http://news.mit.edu/2018/how-autonomous-vehicles-programmed-1024.
  • Friedman, Batya and Helen Nissenbaum. “Bias in Computer Systems.” ACM Transactions on Information Systems (TOIS) 14, no. 3 (1996): 330-347. https://www.vsdesign.org/publications/pdf/64_friedman.pdf
  • Hao, Karen. “Should a Self-driving Car Kill the Baby or the Grandma? Depends on Where You’re From.” MIT Technology Review. October 29, 2018. Accessed December 02, 2018. https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/.
  • Simonite, Tom. “Machines Taught by Photos Learn a Sexist View of Women.” Wired, August 21, 2017. https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/.
  • Singer, Natasha. “What you Don’t Know About How Facebook Uses Your Data.” The New York Times, April 11, 2018. https://www.nytimes.com/2018/04/11/technology/facebook-privacy-hearings.html.
  • Sydell, Laura. “FTC Confirms It’s Investigating Facebook For Possible Privacy Violations.” NPR (National Public Radio), March 26, 2018. https://www.npr.org/sections/thetwo-way/2018/03/26/597135373/ftc-confirms-its-investigating-facebook-for-possible-privacy-violations
  • Wells, Georgia. “Facebook Reveals Apps, Others That Got Special Access to User Data.” The Wall Street Journal, July 1, 2018. https://www.wsj.com/articles/facebook-reveals-apps-others-that-got-special-access-to-user-data-1530454712.

 
