OpenAI Backs Bill To Shield AI Firms From 'Critical Harm' Lawsuits

The proposed SB 3444 limits liability for catastrophic incidents involving mass casualties or $1 billion in property damage if safety reports are made public.

Quick Read
  • OpenAI supports Illinois bill SB 3444 to limit AI companies' legal liability in harm cases
  • The bill shields AI developers from responsibility for harms involving mass casualties or $1 billion damage
  • Protection extends to misuse of AI in creating dangerous weapons, including chemical and nuclear threats

Silicon Valley darling OpenAI is backing proposed legislation in the US that could shield artificial intelligence companies from legal responsibility in cases where AI systems are linked to large-scale harm, as well as from scrutiny and lawsuits tied to the real-world consequences of using AI tools.

The bill, referred to as SB 3444, has been introduced in the state of Illinois and would limit liability for AI developers in cases of 'critical harms'. These cases can involve mass casualties, injuries affecting over 100 people, or property damage exceeding $1 billion.


If passed, the bill would shield companies from direct liability when their AI systems are misused or contribute to such outcomes. The protection would extend to scenarios where malicious actors use AI to develop dangerous weapons, including chemical and nuclear threats.

The move comes as OpenAI faces multiple legal challenges, including lawsuits alleging that interactions with its chatbot contributed to extreme outcomes such as violence and mental health crises.


In one such instance, authorities in Florida have launched an investigation into whether chatbot interactions may have influenced a school shooting. While direct causation remains contested, such cases have intensified calls for clearer accountability in AI deployment.

OpenAI has argued that the bill represents a balanced approach, as it focuses on reducing risks from advanced AI systems while avoiding fragmented, state-by-state regulation.


However, critics argue that the bill could weaken companies' incentives to prioritise safety and risk mitigation, and that its passage could set a precedent eroding accountability across the AI sector.

