Hallucinations & More: How Journalists Are Taking On AI Majors Sparking A Global Copyright War

The reporter who exposed Theranos and authored Bad Blood, along with five other journalists, is suing xAI, Anthropic, Google, OpenAI, Meta Platforms, and Perplexity.

Even as lawsuits multiply, AI deals continue. Many publishers are signing licensing agreements with AI platforms for commercial leverage. (Image source: Envato)

For decades, the legal fault lines in media ran between publishers and platforms — newspapers versus search engines, journalists versus aggregators. Generative AI has scrambled those alignments. Now, newsrooms, authors, and creators are lawyering up to defend something more fragile: attribution, accuracy, and brand trust in an age of AI-generated confidence.

The latest flashpoint involves two cases, one of them brought by The New York Times.

The first is John Carreyrou — the reporter who exposed Theranos and authored Bad Blood — and five other journalists suing xAI, Anthropic, Google, OpenAI, Meta Platforms, and Perplexity, alleging that copyrighted books were used without permission to train large language models.

The second is The New York Times suing Perplexity AI earlier this month, alleging mass copyright infringement through scraping, summarisation, and, crucially, misattribution of journalism.

Taken together, the cases show the issue is no longer only about publishers protecting archives. It is about creators asserting individual control over how their work feeds systems that now speak for them — often inaccurately, and sometimes deceptively.

From Copyright to Brand Damage

The New York Times’ complaint does something legally ambitious. Beyond copyright, it invokes trademark law, arguing that AI hallucinations have produced false answers confidently attributed to the Times.

Hallucinations are usually framed as a technical limitation of generative models. The Times reframes them as a commercial injury: a failure of attribution that confuses consumers and damages brand equity.

Carreyrou’s lawsuit sharpens that logic further. Filed this week in federal court in California alongside five other writers, it deliberately avoids class-action status. The complaint argues that class settlements allow technology firms to "buy peace" at "cheap rates," insulating themselves from meaningful accountability.

This comes soon after Anthropic’s $1.5 billion settlement with a class of authors — a deal that, reports say, amounts to roughly 2% of maximum statutory damages per work. For plaintiffs like Carreyrou, individualised claims are the only way to reflect the true value of creative labour. The filing is also the first AI training lawsuit to name xAI as a defendant, widening the circle of exposure.

Also Read: How Journalism Will Adapt in the Age of AI

Carreyrou Is Not Alone

The legal pushback is global and accelerating.

In September, Penske Media, the publisher of Rolling Stone, Billboard, and Variety, sued Google, alleging that AI Overviews republish journalism without consent and divert traffic that underpins advertising and subscription revenue.

In Japan, Asahi Shimbun and Nikkei have sued Perplexity in Tokyo District Court, accusing it of scraping paywalled content, ignoring robots.txt directives, and presenting false answers credited to their brands. The case invokes unfair-competition law, echoing the trademark arguments raised in the US.

The BBC has also threatened legal action unless Perplexity stops scraping, deletes stored content, and proposes compensation. Perplexity has dismissed the claims as “manipulative,” but the pattern is unmistakable: disputes are moving from quiet negotiations to open court.

Also Read: ChatGPT, Gemini, Copilot, Others Generating Research Papers, Journals That Don't Exist: Red Cross

Why Hallucinations Are No Longer Funny

Everyone who has used generative AI has seen hallucinations in action: confident falsehoods delivered with perfect grammar and no hesitation. In casual contexts, they are inconvenient. In media, finance, and law, they are dangerous.

What the Times’ complaint does explicitly is connect hallucinations to improper attribution. The technical failure then becomes a legal one. If an AI system produces false information and stamps it with a publisher’s brand, the argument goes, that amounts to consumer deception under trademark law.

"When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation," European Broadcasting Union (EBU) Media Director Jean Philip De Tender said in a statement reported by Reuters. In research conducted by the EBU and the BBC, a third of all AI assistants' responses showed missing, misleading, or incorrect attribution.

This is where the US Lanham Act enters the conversation. Traditionally used to police false advertising and brand misuse, it could become a new frontier for AI liability.

The impact would be immediate: product-level safeguards such as clearer citations, stricter source controls, and attribution interfaces that default to uncertainty rather than authority.

Also Read: Google’s AI Keeps Hallucinating. Does Anyone Care?

India's Approach

The Indian government, meanwhile, is planning a multi-year overhaul of copyright law for the AI era, with a three-year horizon for a final framework.

On December 8, 2025, the commerce ministry’s Department for Promotion of Industry and Internal Trade published Part I of an expert committee’s working paper proposing a hybrid mandatory blanket licence. Under the model, AI developers could train on all lawfully accessible copyrighted content without individual permissions.

A proposed Copyright Royalties Collective for AI Training would collect and distribute payments through copyright societies. Paywalled content would remain off-limits, but everything else — text, images, audio, video — could be used by default.

The framework, however, avoids EU-style dataset transparency requirements, with DPIIT warning that such rules could slow innovation, particularly for MSMEs. Part II of the paper, due next, will tackle an even trickier question: whether AI-generated works can be copyrighted at all, and who counts as the author.

The Deals Must Go On

Even as lawsuits multiply, deals continue. Many publishers are signing licensing agreements with AI platforms, most prominently OpenAI, and increasingly Amazon, for commercial leverage. For AI platforms, content compliance is fast becoming a budget line item, not an afterthought.

Bloomberg recently reported that OpenAI has sold more than 700,000 ChatGPT licences to around 35 public universities in the US, embedding the tool into classrooms despite administrative skepticism.

What these cases ultimately test is not whether AI will use human knowledge — we all know the answer to that — but on whose terms, with what safeguards, and at whose expense. The question courts must now answer is whether confidence, without accountability, is merely a technical flaw or a legal liability waiting to be priced in.

Also Read: Microsoft, Google Among 24 Firms Joining US AI ‘Genesis Mission’

WRITTEN BY
Yukta Baid