Ethical case analysis is a common exercise for identifying and reasoning about ethical challenges in complex situations. Analyzing ethical case studies with your mentors, colleagues, and fellow students also gives each participant an opportunity to articulate her own ethical values and to seek ethical consensus within the group. The Rock Ethics Institute provides a 12-step approach to analyzing ethical case studies. This step-by-step framework includes:
1. State the nature of the ethical issue you’ve initially spotted
2. List the relevant facts
3. Identify stakeholders
4. Clarify the underlying values
5. Consider consequences
6. Identify relevant rights/duties
7. Reflect on which virtues apply
8. Consider relevant relationships
9. Develop a list of potential responses
10. Use moral imagination to consider each option based on the above considerations
11. Choose the best option
12. Consider what could be done in the future to prevent the problem
Application of the 12 steps to an ethics scenario is illustrated in a series of instructional videos.
For those of you who are familiar with engineering design: can you identify the parallels between this ethical reasoning framework and the engineering design process? When we rephrase the 12 steps in the language of design, we see that both emphasize an iterative process for identifying and solving open-ended challenges (see Figure 4).
As big data technologies become widely adopted by business and governmental sectors, we often find ourselves confronted by the following question: To what extent can we trust computer algorithms to make ethical decisions for us? Another way to ask this question is: Do algorithms have ethical agency? Ethical agency is the ability to act responsibly according to one’s ethical judgment of right and wrong (MacIntyre, 1999; van der Velden, 2009). For example, adult human beings have ethical agency because they have a sense of what is ethically right. That is, we accept that adults make intentional choices to act ethically or not, and they can be held accountable for their actions.

Admittedly, machines (and computers) can be programmed to do things we consider ethically right. For example, we can program an electrical system to turn off the lights when the sensors detect no people in a room. In this case, it is the programmer, not the electrical system, that decides avoiding energy waste is ethically right. The human programmer is able to fully grasp the meanings of and connections between “no people in a room,” “turning off the lights,” and “avoiding energy waste.”

However, in the case of big data analysis, the human actors (e.g., the authors of the algorithms) may have a weaker grasp of the entire situation because 1) they do not interact directly with the data (the algorithms do so), and 2) they might be working on a tiny proportion of a vast network of interrelated algorithms (Ananny, 2016). Faced with enormously complex systems and incomplete information, the human actors (e.g., researchers and programmers) involved in big data analysis sometimes have to delegate the power of making ethical decisions to algorithms. Yet algorithms are not fully capable of making sense of the patterns they recognize or the impact of their recommendations.
Or we can say that algorithms have, at best, “partial ethical agency.” The following case study highlights the challenges of letting algorithms with partial ethical agency make important decisions on behalf of humans.
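The lights example above can be sketched in a few lines of code (a hypothetical illustration; the function and variable names are invented for this sketch). Notice that the ethical judgment lives entirely in the programmer's head, not in the program:

```python
def lights_should_be_on(people_detected: int) -> bool:
    """Decide whether the room lights should be on.

    The ethical decision -- that avoiding energy waste is right --
    was made by the human programmer. This code merely encodes
    that decision as a mechanical rule: lights on only when the
    occupancy sensor reports at least one person.
    """
    return people_detected > 0
```

The function has no understanding of "energy waste" or "occupancy"; it applies a rule whose meaning only its human author grasps. In big data systems, by contrast, even the human authors may not fully grasp the rules their interlinked algorithms end up applying.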
Identifying potential terrorists with algorithms?
In early 2016, counter-terrorism officials from the federal government met with leaders of giant tech companies in Silicon Valley to discuss strategies for identifying and preventing terrorism on social networks. Among the proposals was a suggestion that the tech companies develop a security algorithm that would detect, measure, and flag “radicalization” in social network posts. The federal officials who proposed this algorithm also cited the example of Facebook’s suicide prevention mechanism, which allows Facebook users to report suicidal content to the company.
You could also listen to a discussion about this proposal at WNYC.
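To see why such an algorithm would exercise only partial ethical agency, consider a deliberately naive sketch (hypothetical; the term list and function name are invented for illustration, and real systems would be far more sophisticated):

```python
# A toy keyword flagger. It matches surface patterns in text but has
# no grasp of context, intent, or the consequences of a false flag --
# those judgments remain with the humans who deploy it.
WATCHLIST = {"attack", "bomb"}

def flag_post(text: str) -> bool:
    """Flag a post if any word matches the watchlist (case-insensitive)."""
    words = set(text.lower().split())
    return bool(words & WATCHLIST)
```

A post like "our trivia team will attack round two" would be flagged just as readily as a genuine threat: the algorithm recognizes a pattern without making sense of it, which is exactly the gap between pattern recognition and ethical judgment discussed above.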
Questions for Case Analysis
- What are ethically sound responses to the federal officials’ proposal? Use the 12-step approach or the Design-Based Framework to analyze this case.
- Which of the four ethical concepts (integrity, rights, impact, and epistemic norms) introduced in the above section are applicable to this case?