Two seemingly separate legal battles unfolding in the United States are converging around a single, uncomfortable question: how responsible are tech companies for the mental health harms linked to their products? From social media platforms accused of addicting teenagers to artificial intelligence chatbots allegedly reinforcing delusions, courts are now being asked to scrutinise the design choices behind digital tools.
Today, a closely watched 'bellwether trial' has kicked off in Los Angeles against some of the world's largest social media firms, including Meta, ByteDance and Google.
The Social Media Addiction Trial
The case centres on a 19-year-old plaintiff identified as KGM, who alleges that algorithm-driven features on social media platforms left her addicted and worsened her mental health.
Unlike earlier lawsuits that focused on harmful posts or videos, this case zeroes in on product design: features such as infinite scroll, notifications, and algorithmic recommendations that allegedly encourage compulsive use, particularly among minors.
The trial is historic because it directly challenges the long-standing legal shield provided by Section 230 of the US Communications Decency Act, which generally protects platforms from liability over user-generated content. Judges have ruled that this protection does not automatically extend to claims over how platforms are designed.
If the jury sides with the plaintiff, it could open the door to thousands of similar lawsuits already lined up across US courts.
Why 'Bellwether Trials' Matter
This case is one of several 'bellwether trials'. Its purpose is to gauge jury reactions to evidence and determine the potential value of cases, assisting both parties in assessing strengths and, often, promoting settlement. Thousands of parents, school districts and US states have sued social media firms, claiming their platforms contributed to anxiety, depression, eating disorders and self-harm among young users.
Snap Inc., the parent of Snapchat, was originally part of the trial but settled last week, avoiding testimony from CEO Evan Spiegel. That leaves other tech leaders exposed. Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri are expected to testify.
The Parallel AI Lawsuit
Running parallel to the social media trials is a disturbing lawsuit involving ChatGPT, the AI chatbot developed by OpenAI and backed by Microsoft.
The case stems from a September murder-suicide in which Stein-Erik Soelberg, a former Yahoo and Netscape executive, killed his elderly mother and then died by suicide. His son has sued OpenAI and Microsoft, alleging that ChatGPT reinforced Soelberg's paranoid delusions and fostered emotional dependence rather than urging him to seek professional help.
According to the lawsuit, ChatGPT validated Soelberg's belief that his mother and others were conspiring against him, repeatedly assuring him he was not mentally ill. The estate claims the chatbot functioned as a trusted confidant that intensified his paranoia instead of challenging it. OpenAI has declined to release Soelberg's full chat history, citing privacy concerns.
A Common Legal Thread
While the platforms differ, the legal argument is strikingly similar.
At a US Senate hearing this week, parents of teenagers who died by suicide after interacting with AI chatbots testified about how these systems became emotional substitutes for real relationships. One father described how ChatGPT evolved from a homework helper into what he called a 'suicide coach.'
How Tech Is Responding
Facing growing scrutiny, OpenAI announced new safeguards hours before the Senate hearing, including attempts to identify users under 18 and parental controls such as 'blackout hours.' Child safety advocates dismissed the move as insufficient and strategically timed.
Meanwhile, tech leaders are pushing back. In a post on X, Elon Musk, whose own portfolio includes the Grok AI chatbot, called ChatGPT 'diabolical' and said AI systems must be 'truth-seeking' rather than affirming delusions.
Social media companies argue that mental health issues are complex and cannot be directly attributed to platform use. They maintain that causation will be difficult for plaintiffs to prove in court.
Global Government Response
While US courts weigh liability, Europe is moving ahead with regulation. On Monday, France's National Assembly backed legislation to ban children under 15 from social media platforms and 'social networking functionalities' embedded within larger services.
Lawmakers voted 116-23 in favour, reflecting broad political and public support. President Emmanuel Macron has linked social media to rising youth violence and mental health risks, urging France to follow Australia's world-first ban on under-16s that came into force in December.
Australia's move is also being studied by countries including Britain, Denmark, Spain and Greece. The European Parliament has also called on the EU to set minimum age limits, though enforcement remains the responsibility of individual states.
India sits at the crossroads of these debates. With one of the world's youngest and most digitally connected populations, concerns around addiction, influence and AI companionship are growing rapidly.
If US courts weaken tech immunity, global platforms may be forced to redesign their products universally.