Deepfake Rules: Social Media Platforms To Lose 'Safe Harbour' If AI Labeling Not Followed

Deepfake Rules: Facebook, YouTube, Instagram, and X would need to ensure that labels on AI-generated visuals cover at least 10% of the screen area, or the initial portion of an audio clip's duration, signalling to users that the content is synthetic.


The government's strict obligations requiring major social media platforms such as Facebook and Instagram to label AI-generated and synthetic content will kick in on Nov. 1.

Union Minister for Electronics and IT Ashwini Vaishnaw, explaining the rationale for the move, said that in Parliament, as well as in many other fora, people have demanded that something be done about deepfakes, which are harming society.

"People using others' or some prominent person’s image and creating deepfakes affects personal lives and privacy. It (causes) various misconceptions in society, so the step we’ve taken is making sure that users get to know whether something is synthetic or real. Once users know, they can take a call in a democracy. But it’s important that users know what is real. That distinction will be led through mandatory data labelling," he said.


Safe Harbour for online platforms

Essentially, platforms like Facebook, YouTube, Instagram, and X would need to ensure that labels on AI-generated visuals cover at least 10% of the screen area, or the initial portion of an audio clip's duration, clearly signalling to users that the content is synthetic.

They must also deploy automated tools to detect undeclared AI content, failing which they risk losing "safe harbour" protection under the IT Act for due diligence lapses.

Safe harbour is a legal provision that shields online intermediaries, such as social media platforms and ISPs, from liability for third-party content they host or transmit. This protection is enshrined in laws like India's IT Act. Further, in cases where AI content is deemed unlawful, the power to issue takedown orders has also been restricted to senior officers.

Such orders can now only be issued by an officer not below the rank of Joint Secretary, or equivalent. In the case of police authorities, only a specially authorised officer not below the rank of Deputy Inspector General of Police (DIG) can issue such an intimation.


"Earlier, we had many instances with state govts where assistant sub-inspectors and sub-inspectors were passing orders, and we thought that it is important to have a very senior level of accountability because these are matters which impact the entire society, so we’re very careful with these," Vaishnaw told reporters.

Further, a provision has also been made for a monthly review of all actions taken, at the Secretary level of the IT and Home Ministries, to ensure proper oversight and to allow reversals if an error has occurred.

Sachin Dhawan, Deputy Director at the tech policy think tank The Dialogue, said platforms that offer AI video and photo tools, such as ChatGPT or Google's Gemini, will also have to label content that users create, modify, or alter using these tools.

"Platforms have to verify user declarations that content being uploaded is synthetically generated content. If intermediaries fail to comply with these rules, they run the risk of losing safe harbour protection," he said.


WRITTEN BY
Rishabh Bhatnagar