One of the absurdities of the current situation is that when AI systems produce harm, it falls to researchers, investigative journalists, and the public to document those harms and push for change. That leaves society shouldering the burden and scrambling to catch up after the fact.
So the report’s top recommendation is to create policies that place the burden on the companies themselves to demonstrate that they’re not doing harm. Just as a drugmaker has to prove to the FDA that a new medication is safe enough to go to market, tech companies should have to prove that their AI systems are safe before they’re released.
“There is nothing about artificial intelligence that is inevitable,” the report says. “Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies.”

Samuel, S. (2023, April 12). Finally, a realistic roadmap for getting AI companies in check. Vox. https://www.vox.com/future-perfect/2023/4/12/23677917/ai-artificial-intelligence-regulation-big-tech-microsoft-openai