Artificial intelligence governance frameworks have repeatedly emphasised "human-in-the-loop" (HITL) oversight as a necessity for AI applications considered relatively high-risk. Speaking at the AI Summit held in New Delhi in February 2026, Prime Minister Narendra Modi set out the importance of MANAV, five core principles for AI: a Moral and Ethical System; Accountable Governance; National Sovereignty, particularly the right to data; Accessible and Inclusive technology; and Valid and Legitimate systems.
Globally, the most advanced regulatory response to AI risks is the European Union's AI Act, 2024. The EU AI Act is the first statute to embed fundamental rights within a risk-tiered regulatory model, much as the General Data Protection Regulation reshaped privacy laws. The Act encourages safer AI development and deployment while also building public trust by demanding rigorous assessments and monitoring, reducing fears related to deepfakes, manipulation and systemic surveillance. The EU's proactive stance underscores the importance of balancing innovation with responsibility, positioning it as a model for emerging economies and major AI markets alike. High-risk AI systems under the EU AI Act are subject to strict, mandatory and legally enforceable compliance obligations designed to protect health, safety and fundamental rights. These obligations apply to any AI system placed on the market or put into service within the EU, with full compliance for most high-risk systems required by August 2026 (August 2027 for systems integrated into regulated products).
Despite rapid strides in AI adoption, especially with its booming tech, fintech, healthcare and governance applications, India does not yet have a dedicated AI statute. Governance currently relies on a regulatory regime comprising the Information Technology Act, the Digital Personal Data Protection Act, sectoral regulations and judicial interventions.
The AI Governance Guidelines, released by the Government of India in February 2026, advocate building a safe, trusted and innovation-led AI ecosystem. However, unlike a statute, the guidelines cannot prescribe enforcement mechanisms, penalties or fines. Anchored in seven core principles, i.e., trust, human-centric design, responsible innovation, fairness, accountability, transparency and safety, the framework positions AI as a driver of inclusive growth and national competitiveness. The guidelines propose an approach supported by new institutions such as the AI Governance Group, the Technology & Policy Expert Committee and the AI Safety Institute. Alongside expanding national AI infrastructure, the framework outlines reforms in regulation, risk mitigation and capacity building to ensure responsible deployment at scale.
In addition, a recent Ministry of Electronics and Information Technology (MeitY) notification has formally brought synthetically generated information, including deepfakes and AI-altered audio, images and videos, under the same regulatory framework that governs other unlawful online content. It mandates that platforms remove flagged deepfakes or unlawful AI-generated content within three hours of a government or court order, down from the earlier 36-hour requirement. Even so, governing AI through piecemeal notifications rather than a dedicated statute can leave legal gaps, accountability issues and deficits in public trust. Several recent AI-related scandals involving celebrity deepfakes and other instances of data misuse have already sown doubt and mistrust among the general public about emerging technologies.
However, until the country has robust regulation that fills the critical gaps, especially around classification and liability across the AI value chain, proactive oversight, testing requirements and standardised redressal mechanisms, corporates cannot simply wait and risk being left behind in the AI adoption race.
ISO 42001, published in December 2023, was the first international standard for AI management systems and, in some ways, helps fill this gap. It provides a structured framework for organisations to establish, implement, maintain and continually improve AI governance processes. In the absence of a dedicated AI law in India, adopting ISO 42001 can help stakeholders proactively manage risks associated with AI systems, including bias, transparency, accountability and security. The standard emphasises risk assessment, lifecycle management and stakeholder engagement, ensuring that AI deployments align with ethical principles and organisational objectives. By integrating ISO 42001, companies can demonstrate global best practices, build trust with users and prepare for future regulatory compliance. This voluntary adoption mitigates operational and reputational risks and positions Indian businesses competitively in international markets where adherence to recognised standards is increasingly valued. Yet only a few organisations deploying AI today are compliant with ISO 42001, and most are not well-versed in its requirements.
In a nutshell, India's AI ecosystem is growing and maturing rapidly, but without a dedicated, rights-based AI Act, critical vulnerabilities are widening. An Indian law that provides coherent risk-based oversight, clear liability, robust due diligence and greater public resilience, while also unlocking global competitiveness for Indian tech firms, is now the need of the hour. Technology knows no boundaries, and the growing global incidence of AI misuse only underscores the urgency. A structured AI regulatory framework can protect citizens and promote trusted innovation, putting India on a comparable footing with global AI powerhouses. In the interim, ISO 42001 certification could serve as a guiding force for corporates, helping strengthen trust, transparency and responsible AI governance. It would allow companies to differentiate themselves as responsible AI adopters, improving their standing in procurement processes and enterprise partnerships, especially in domains touching healthcare, human safety and constitutional rights. Another sector that could benefit tremendously from strong AI governance and transparency is Global Capability Centres.
With the Prime Minister of India and the EU AI Act both advocating HITL, human oversight is undoubtedly imperative to ensure safety, fairness and adherence to rights. While MANAV articulates responsible and ethical AI governance from the top down, HITL strengthens controls on the ground, ensuring appropriate oversight as part of AI lifecycle controls and compliance.
Saurabh Khosla is a Partner with Deloitte India's Forensic & Financial Crime practice and a certified Lead Auditor for ISO 42001.
Disclaimer: The views expressed in this article are solely those of the author and do not necessarily reflect the opinion of NDTV Profit or its affiliates. Readers are advised to conduct their own research or consult a qualified professional before making any investment or business decisions. NDTV Profit does not guarantee the accuracy, completeness, or reliability of the information presented in this article.