- Ministry of Electronics and IT notified rules for labelling AI-generated synthetic content
- Intermediaries must disclose AI-generated text, audio, video, and images clearly to users
- No fixed size or placement for AI labels; disclosures must be clear and prominent
The Ministry of Electronics and Information Technology has formally notified amendments to the Information Technology Rules, introducing a regulatory framework for the labelling and oversight of deepfakes and other forms of synthetically generated content.
Under the notified rules, intermediaries are required to ensure that content generated or manipulated using artificial intelligence is clearly disclosed to users. The obligation covers AI-generated text, audio, video and images, with a specific focus on preventing the misuse of deepfakes and deceptive synthetic media.
Notably, the final rules do not prescribe any numerical or percentage-based requirement for the visibility of AI labels. Earlier draft versions of the framework had discussed a 10% prominence threshold for labelling synthetic content. That requirement has been dropped in the final notification. Instead, MeitY has adopted a principle-based standard, requiring disclosures to be "clear, prominent and visible" without specifying size, placement or format.
The rules place responsibility squarely on online platforms to ensure that disclosures are not removed, hidden or altered in a manner that misleads users. Intermediaries must also exercise due diligence to prevent the dissemination of prohibited content that impersonates individuals or misrepresents facts using synthetic means.
MeitY has clarified that the framework is technology-agnostic and applies across use cases, without carving out exemptions for specific AI tools or categories of content. The obligation to label rests on the intermediary, regardless of whether the content is user-generated or produced through platform-integrated AI systems.
The notified rules are enforced through existing compliance mechanisms under the IT Rules, including grievance redressal and government takedown directions. Non-compliance could attract penalties under the Information Technology Act.
By removing prescriptive design mandates such as the proposed 10% label rule, MeitY has opted for flexibility in implementation while retaining accountability. The government has said the objective is to curb misinformation and deceptive content without stifling innovation in artificial intelligence.
The amendments take effect from the date of publication in the official gazette.
Notably, industry bodies such as IAMAI had pushed back against the proposed rules, calling them arbitrary and overly broad. IAMAI members include tech majors such as Google, Meta, Snapchat, WhatsApp, Amazon, Apple, Jio, Airtel, and Netflix, among others.
As per the revised rules, MeitY has also reduced the time allowed for social media platforms to remove unlawful content to three hours, down from the earlier 36 hours.
“The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes. By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns - while still signalling a clear intent to tighten platform accountability. That said, the significantly compressed grievance timelines - such as the two- to three-hour takedown windows - will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections,” said Rohit Kumar, founding partner at public policy firm The Quantum Hub.