© 2025 Keren Wang — Licensed under a Creative Commons Attribution–NonCommercial–NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
Educational use permitted with attribution; all other rights reserved.
This post contains excerpts and draft materials from a work-in-progress scholarly manuscript.
It is shared here for educational and research purposes only.
For permission requests, please contact the author directly.
On December 4, 2024, news broke that a lone gunman had assassinated UnitedHealthcare’s chief executive officer, Brian Thompson, outside a Manhattan hotel ahead of the company’s investor conference.1 The killing itself was shocking, but what unsettled many observers was the wave of sympathy that quickly coalesced around the perpetrator—donations, online tributes, and statements of support that revealed a raw seam in America’s collective experience of health care.2 This dramatic act of killing is entangled with a darker trajectory in the devolution of America’s marketized healthcare industry: the normalization of traumatic acts of taking, as increasingly unsustainable industry practices justify the suspension of pre-existing taboos concerning the sanctity of life and the boundaries of wealth transfer.3
That same disillusionment had already surfaced a year earlier, when a class action lawsuit accused UnitedHealthcare of “systematically deploy[ing] an AI algorithm to prematurely and in bad faith discontinue payment for healthcare services for elderly individuals with serious diseases and injuries.”4 Specifically, the plaintiffs allege that:
“Defendants [UnitedHealthcare]’ AI Model, known as “nH Predict,” determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery. Relying on the nH Predict AI Model, Defendants purport to predict how much care an elderly patient ‘should’ require, but overrides real doctors’ determinations as to the amount of care a patient in fact requires to recover. As such, Defendants make coverage determinations not based on individual patient’s needs, but based on the outputs of the nH Predict AI Model, resulting in the inappropriate denial of necessary care prescribed by the patients’ doctors.”
(Estate of Lokken et al. v. UnitedHealth Group, Inc. et al., No. 0:23-cv-03514, Doc. 1, p. 3)
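To make the mechanism alleged in the complaint concrete for readers unfamiliar with utilization-management software, the following is a minimal sketch of what such decision logic could look like. It is purely illustrative: the field names, function, and numbers are hypothetical, and nothing here reproduces the proprietary nH Predict system, whose internals are not public.

```python
# Hypothetical sketch of the decision logic *alleged* in the complaint: an algorithmic
# length-of-stay prediction caps authorized care regardless of the treating physician's
# determination. All names, fields, and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class PostAcuteClaim:
    patient_id: str
    physician_recommended_days: int   # care the treating doctor determined is needed
    predicted_recovery_days: float    # model's prediction of how long recovery "should" take

def coverage_determination(claim: PostAcuteClaim) -> dict:
    """Authorize only up to the model's prediction, overriding the physician where they differ."""
    authorized = min(claim.physician_recommended_days, round(claim.predicted_recovery_days))
    denied = claim.physician_recommended_days - authorized
    return {
        "patient_id": claim.patient_id,
        "authorized_days": authorized,
        "denied_days": denied,
        "basis": "algorithmic prediction" if denied > 0 else "physician recommendation",
    }

# Example: the physician prescribes 40 days of post-acute care, the model predicts 14;
# under this logic, 26 of the prescribed days are denied.
print(coverage_determination(PostAcuteClaim("A-001", 40, 14.2)))
```

The point of the sketch is structural rather than technical: once the prediction is treated as the binding term in the comparison, the denial follows mechanically, and the deliberative question of how much care a particular patient needs is displaced into the model's training and configuration, upstream of any individual case.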

These concerns are not limited to UnitedHealthcare. According to a 2024 survey by the American Medical Association, 61 percent of physicians reported that they were worried AI would accelerate prior-authorization denials.5 Scholars have since warned that the insertion of algorithmic judgment into life-and-death decisions risks eroding public trust and further stratifying access to care.6 At the same time, research also points to potential blessings of AI-augmented healthcare—improving diagnostic accuracy, reducing medical errors, and enhancing coordination of care.7
We find ourselves thrown into a liminal rhetorical space wherein algorithms and automated scripts are ascending to speak with authority over human vulnerability. Why do we voluntarily surrender our human agency to machines in life-and-death decisions, even when that transfer of agency brings demonstrable human harm? What we are seeing here is not simply a novel technological dilemma, but a continuity in the human impulse to ritualize uncertainty, disparity, and the avoidance of an unacceptable reality by inscribing meaning into extra-human systems that purport to be larger than ourselves.8
Even in countries so often praised as paragons of rights protection and social welfare, such as Denmark, the present moment reads like an algorithmic Faustian bargain: Amnesty International’s 2024 report documents how the Danish government’s experimentation with AI-augmented fraud-control models has sacrificed basic due-process safeguards and the social safety net for the most vulnerable in the name of administrative efficiency and in pursuit of what Danish policymakers describe as “deep reductions in the overall welfare budget.”14
The figure below makes this bargain legible, visualizing how the “Really Single” algorithm, one of over 60 artificial intelligence and machine-learning models used by Danish welfare agencies to identify alleged benefit fraud, weights residency records, household size, and property data to automatically assign higher “risk scores” to atypical households, flagging them for fraud investigations. That is the exigence: automated scripts now speak with authority over human need and reshape the grammar of rights. This calls for a rhetorical intervention that can dissect, historicize, and interrogate the emerging sacrificial logic through transdisciplinary heuristics, an opening movement toward a rhetorical atlas that traces how a global algorithmic-governance assemblage takes shape in divergent local moments.15

A figure in Amnesty International’s 2024 report illustrates the Danish government’s use of fraud-control algorithms in distributing social benefits. In this case, Udbetaling Danmark’s “Really Single” model weights inputs such as residency records, household size, and property data from public registers to infer whether a person is genuinely single, potentially flagging atypical living arrangements as suspected fraud. Source: Amnesty International, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State. Index: EUR 18/8709/2024.
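For readers who want a sense of how such register-based scoring works mechanically, here is a minimal, hypothetical sketch. The input categories follow Amnesty International’s description of the “Really Single” model (residency records, household size, property data), but the weights, threshold, and feature names are invented for illustration and make no claim about Udbetaling Danmark’s actual system.

```python
# Hypothetical register-based risk scoring of the kind described in the Amnesty report.
# The weights and threshold below are invented; they do not reproduce the actual model.
HYPOTHETICAL_WEIGHTS = {
    "address_changes_last_year": 0.30,       # frequent moves in the residency register
    "household_size_above_one": 0.25,        # additional adults registered at the address
    "shared_property_ownership": 0.35,       # co-owned property with another adult
    "proximity_to_former_partner": 0.10,     # registered address near an ex-partner
}

def risk_score(features: dict[str, float]) -> float:
    """Weighted sum of normalized register features; higher means more 'suspicious'."""
    return sum(HYPOTHETICAL_WEIGHTS[name] * value for name, value in features.items())

def flag_for_investigation(features: dict[str, float], threshold: float = 0.5) -> bool:
    """Households whose score crosses the (invented) threshold are queued for fraud review."""
    return risk_score(features) >= threshold

# A multigenerational or co-housing household can cross the threshold without any fraud:
example = {
    "address_changes_last_year": 0.2,
    "household_size_above_one": 0.9,
    "shared_property_ownership": 1.0,
    "proximity_to_former_partner": 0.1,
}
print(round(risk_score(example), 3), flag_for_investigation(example))  # 0.645 True
```

What matters rhetorically is where the judgment sits: the decisive choices about which registers count as evidence, and about how far a household may deviate from the statistical norm of being “genuinely single,” are fixed in the weights and threshold long before any individual case is examined, which is precisely where deliberation recedes from public view.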
The ontological structure of this exigence does not emerge from breakthroughs in artificial intelligence or machine learning; it is embedded in something much older: human sacrifice. To grapple with this key point, let us briefly consider a canonical case of sacrificial governance technology from early sedentary civilization — oracle bone pyromancy.
During the height of the Chinese Bronze Age (c. 1600–1046 BCE), Shang dynasty rulers inscribed queries—“Will it rain tomorrow?” or “Will my military expedition be successful?”—onto meticulously prepared ox scapulae or turtle plastrons, better known as oracle bones. The oracle-king pressed a red-hot bronze rod to the bone until it cracked; diviners read the fissures as answers from the almighty Shang-Di (上帝, “lord from above”) to the king’s queries.9 This pyromancy ritual transformed rhetorical uncertainty into actionable mandates that purported to speak with higher-than-human authority.10 While most oracle bone scripts uncovered in Shang archaeological contexts record matters of ordinary state affairs (such as queries about the king’s daily activities), a sizable portion bear inscriptions concerning ritual human sacrifice, especially during periods of war and food shortage.11
Central to this Bronze Age sacrificial governance technology was the oracle bone script, the progenitor of the modern Chinese writing system. Developed as a secretive logographic medium, it encoded semantic values in terse symbolic notations and was organized around a self-referential formal syntax. The Shang rulers’ exclusive literacy in the oracle bone script concentrated interpretive authority over the divine will, and with it the power to decide life and death for their subjects, slaves, and enemies.12

Examples of oracle bone scripts on human sacrifice from late Shang period sites (c. 1250–1046 BCE). Source: Wang, Keren. Legal and Rhetorical Foundations of Economic Globalization (Routledge, 2019).
From oracle bone pyromancy to the opaque processing layers of AI-augmented prior-authorization tools, the structural continuity lies not in form but in function. Both rely on technical scripts that use formal syntax and layers of abstraction to expand into large-scale operations. Both require a highly specialized gatekeeping community with the knowledge and training to properly develop, read, and interpret the script. Both establish ritualized sacrificial procedures that conceal deliberation and decision-making over life and death within a manufactured technological space beyond ordinary human scrutiny.
This recognition forms the departure point for this upcoming research project, Artificial Intelligence and Human Sacrifice. Rather than treating AI as a merely technical innovation, I situate its rise within a much longer genealogy of sacrificial legitimation. Across domains such as personalized law, labor automation, drone warfare, AI-assisted advocacy, and polycriminal scam economies, I ask how contemporary societies reconfigure the calculus of who or what is to be offered up at the altar of efficiency, security, or growth. The wager is that by tracing these sacrificial redeployments, we might better see how algorithmic authority becomes palatable—how trade-offs are sanctified, and how injuries are reframed as inevitable.
Indeed, the history of ritual human sacrifice is arguably as long as the history of human civilization itself. Human sacrifice (both symbolic and real) is much more than a neutral representation of psychological conditions or superstitions; rather, it is a form of rhetorical intervention, involving carefully scripted memory performances that define and reinforce the “proper and necessary” price to be paid for the maintenance of a sacred order. As I argued in my previous book, even after World War II the potential destructiveness of ritual human sacrifice only intensified. The nuclear deterrence architecture that emerged during the Cold War, for instance, was not only a strategic doctrine but also a ritological structure—maintained through repeated, scripted performances: missile parades, war games, air-raid drills, and the choreography of the “nuclear football.”13 Nuclear deterrence rhetoric thus revolved around a self-referential logos of mutual sacrifice, wherein each side’s credible willingness to annihilate not only the enemy but human civilization itself was justified as the “necessary price” for safeguarding the sanctity of political order. The question before us now is how artificial intelligence may be rewriting that ancient script—rendering the exploitative structures of our prevailing political-economic system not only palatable but seemingly inescapable.
This blog series will share work-in-progress manuscript drafts as I develop the project into a full-length monograph. In the months ahead, I will trace how sacrificial rationalities persist, adapt, and become reconfigured in our algorithmic age. From oracle bone pyromancy in ancient China to AI-augmented prior-authorization denials in contemporary healthcare, the rituals may differ in form, but the underlying logic remains hauntingly familiar. Both conceal human choice within a technical script. Both sanctify injury as necessity. And both remind us that every society must wrestle with how it authorizes sacrifice.
Endnotes
- Reuters, “Luigi Mangione Was Charged with Murder. Then Donations Started Pouring In,” December 12, 2024, link. ↩
- Usman W. Chohan, “Propaganda by the Deed: Luigi Mangione and UnitedHealthcare,” SSRN, January 29, 2025, link. ↩
- Y. Mishra, “Artificial Intelligence in the Health Insurance Sector,” in The Impact of Climate Change and Sustainability Standards on the Insurance Market, ed. (Hoboken, NJ: Wiley, 2023), chapter 4, accessed November 11, 2024, link. ↩
- Estate of Lokken et al. v. UnitedHealth Group, Inc., UnitedHealthcare, Inc., naviHealth, Inc., and Does 1–50, Class Action Complaint, No. 0:23-cv-03514, Doc. 1 (D. Minn. filed Nov. 14, 2023), link. ↩
- American Medical Association, “AMA Prior Authorization Physician Survey,” 2024, link. ↩
- Michelle M. Mello and Sherri Rose, “Denial—Artificial Intelligence Tools and Health Insurance Coverage Decisions,” JAMA Health Forum 5, no. 3 (2024), link. ↩
- Junaid Bajwa, Usman Munir, Aditya Nori, and Bryan Williams, “Artificial Intelligence in Healthcare: Transforming the Practice of Medicine,” Future Healthcare Journal 8, no. 2 (2021): e188–e194, link. ↩
- René Girard et al., Violent Origins: On Ritual Killing and Cultural Formation (Stanford, CA: Stanford University Press, 1987), 6. ↩
- David N. Keightley, Sources of Shang History: The Oracle-Bone Inscriptions of Bronze Age China (Berkeley: University of California Press, 1978). ↩
- Keren Wang, “Oracle Bones and Ritual Authority,” Keren Wang Blog, accessed 2025, link. ↩
- Keren Wang, “An Interdisciplinary Historical Overview,” in Atlas of Sacrifice (London: Routledge, 2019), 31–52, link. ↩
- Ibid. ↩
- Keren Wang, “Conclusions and Looking Forward,” in Atlas of Sacrifice (London: Routledge, 2019), link. ↩
- Amnesty International, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State (London: Amnesty International Ltd, 2024), Index: EUR 18/8709/2024, link. ↩
- Keren Wang, “Introduction,” in Legal and Rhetorical Foundations of Economic Globalization: An Atlas of Ritual Sacrifice in Late-Capitalism (New York: Routledge, 2019), link. ↩