OpenAI Supports Illinois Bill to Limit AI Companies’ Liability for Mass Casualty Incidents, Financial Disasters

OpenAI is backing an Illinois state bill that would protect AI companies from legal responsibility when their technology contributes to severe societal harms, including mass deaths or catastrophic financial losses.

Wired reports that the ChatGPT maker has testified in favor of Illinois Senate Bill 3444, legislation that would shield frontier AI developers from liability for critical harms caused by their models under certain conditions. The bill represents what several AI policy experts describe as a notable evolution in OpenAI’s legislative approach, which until now had focused primarily on opposing measures that would increase liability for AI companies.

SB 3444 would define critical harms as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. Under the proposed law, AI labs would be protected from liability as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. The bill defines frontier models as those trained using more than $100 million in computational costs, a threshold that would likely apply to major American AI companies including OpenAI, Google, xAI, Anthropic, and Meta.

The legislation specifically identifies several scenarios of concern to the AI industry, including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human, provided such actions lead to the extreme outcomes defined in the bill.

Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”

Author: HP McLovincraft

