Government's AI Move: All You Need to Know

The government has not only asked platforms to obtain explicit approval before deploying untested AI; these models must also carry a disclaimer warning users about the unreliability of their results.

Representational Image (Source: Unsplash)

The government has issued a new advisory under the Information Technology Rules asking platforms to obtain explicit approval before deploying under-testing AI models to the public.

These untested models must also carry a disclaimer warning users about the potential unreliability of their results.

Clarifying the advisory, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar posted on his X account: "Seeking permission from MeitY is exclusive to large platforms and startups are exempted from it."

"The focus is on preventing untested AI platforms from deploying on the Indian internet."
Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology

The outlined process of obtaining permission, labelling, and ensuring consent-based disclosure to users regarding untested platforms serves as an insurance policy for platforms against potential consumer lawsuits, Chandrasekhar said.


Effect On Corporates vs. Startups

"Any form of pre-approval would be the death knell of AI in our country," according to Rahul Matthan, partner at Trilegal.

If the advisory is targeted at bigger platforms, more clarification would be required, as these platforms drive a lot of innovation and have the financial capacity to invest in substantial AI models, Matthan said.

The recent clarification excluding start-ups from the advisory offers relief. But for various other platforms, the requirement of government permission may pose an excessive burden, and the practical process of obtaining approval remains unclear, said Avisha Gupta, partner at Luthra and Luthra.

Additionally, even though Chandrasekhar's post on X says the advisory does not apply to startups, many of these ventures depend on bigger platforms for various applications.

Based on this, Matthan predicts the following for the affected models:

  • This could lead cautious social media intermediaries to withdraw their models from India to avoid the potential risks associated with operating without explicit permission.

  • If major platforms decide to limit the data they feed into their models and make available to Indian consumers, it would pose challenges for inclusivity and the development of AI models.

Further, the advisory asks platforms using AI technologies to act against data that could be used to spread misinformation or deepfake content. They must also mark their output with a unique label identifying the user of the software or computer resource, the intermediary itself, and the person who originally created or modified such content.

"When you introduce such conditions that involve excessive monitoring or intervention by intermediaries, we are moving away from the intended purpose of safe harbour protection," Ranjana Adhikari, partner at Induslaw, told NDTV Profit.

Safe harbour provisions are legal protections given to online intermediaries, such as internet service providers, social media platforms, and e-commerce websites. The main law governing this aspect is the IT Act.

These provisions shield intermediaries from being held responsible for content created by users on their platforms. In other words, if users post something problematic, the platform itself is not automatically liable.

Since the parent Act provides these safe harbour protections, an advisory cannot override them, as per Adhikari.

Legality Of The Advisory

While Chandrasekhar has clarified that the advisory will apply to large platforms and not startups, there should be an amendment to the circular to this effect, Adhikari said.

Posting it on social media is not the right way to go about it, she said.

The advisory states that before AI models are deployed, they should carry clear labels indicating the potential for mistakes or unreliability in the output they produce.

The circular suggests using a 'consent popup' mechanism, a notification that explicitly informs users about possible errors or unreliability in the results generated by these AI technologies.

However, soft disclaimers conveying the risk of unreliable outputs are already present on AI models; mandating them in the advisory could go beyond what is already followed as industry practice, as per Adhikari.

Apart from this, the circular mentions that artificial intelligence models, software, or algorithms still in the testing phase will also come under the advisory.

On this point, Adhikari opined that the language seems broad enough to apply to platforms as well as intermediaries, which could be excessive.

Lastly, the advisory speaks in the context of unreliable and undertested AI technologies.

This brings in additional conditionality and ambiguity as to how it would be determined whether a model is undertested or unreliable, leaving considerable room to question the legality of the advisory, as per Adhikari.
