
India's AI Governance Guidelines 2025: A Framework For Safe And Inclusive Innovation

The guidelines reflect India's ambition to harness artificial intelligence as an engine for inclusive development and long-term economic strength.


Artificial Intelligence is reshaping how societies work, communicate, and govern. Recognising both its transformative potential and its inherent risks, the Ministry of Electronics and Information Technology (MeitY) has released the India AI Governance Guidelines 2025.

These guidelines aim to position India as a responsible global leader in the development and use of AI. They combine legal and technical safeguards with a strong focus on innovation, accountability and public trust. India’s approach emphasises governing the applications of AI rather than the underlying technology so that progress can continue while ethical and societal concerns remain central.

The guidelines reflect India’s ambition to harness AI as an engine for inclusive development and long-term economic strength. They are aligned with the national vision of AI for All, which seeks to ensure that the benefits of AI reach every citizen. The government views AI as a crucial enabler of Viksit Bharat 2047, the country’s aspiration to become a developed nation.

AI is now embedded within India’s Digital Public Infrastructure. By connecting AI systems to platforms such as Aadhaar, UPI and DigiLocker, the government intends to create public services that are scalable, affordable and efficient. At the same time, the guidelines acknowledge serious risks associated with AI, including misinformation, deepfakes, algorithmic bias and national security challenges. The framework, therefore, seeks to balance opportunity with responsibility.

A central feature of the guidelines is the articulation of seven sutras or foundational principles. These sutras together form the normative core of India’s AI governance philosophy.

The first sutra places trust at the centre of AI adoption and emphasises that trust must be integrated into technological design, organisational culture and deployment practices. Without trust, innovation cannot scale.

The second sutra, People First, ensures that AI systems strengthen human agency. It requires AI to enhance capability rather than replace or undermine it, and it demands meaningful human supervision in consequential decisions.

The third sutra, Innovation over Restraint, promotes a regulatory environment that encourages experimentation while ensuring that appropriate safeguards remain in place. India seeks to avoid premature restrictions that could stifle development. The principle of Fairness and Equity forms the fourth sutra and requires AI systems to promote inclusion, prevent discriminatory outcomes, and broaden access to opportunity. This reflects India’s diversity and the need for technology that supports all communities.

The fifth sutra, Accountability, calls for clarity regarding the roles and responsibilities of actors throughout the AI value chain. Obligations must correspond to the level of risk that each actor introduces into the system. The sixth sutra, Understandable by Design, highlights the importance of explainability. Users, regulators, and affected persons must be able to understand how AI systems operate.

The seventh sutra, Safety, Resilience, and Sustainability, requires AI systems to be secure, robust, and environmentally responsible. Taken together, these principles reflect global best practices while adapting them to India’s specific developmental context.

To operationalise these principles, the guidelines set out six governance pillars. These pillars together describe how India intends to build a comprehensive and future-ready AI ecosystem.

The first pillar concerns national infrastructure. India plans to expand access to high-quality datasets and computing resources, including through GPU clusters and the AIKosh platform. These resources are intended to support both public and private innovation. The integration of AI with Digital Public Infrastructure ensures that solutions can scale across sectors.

The second pillar focuses on capacity building. The guidelines call for training regulators, law enforcement agencies, civil servants, and the judiciary so that they are equipped to manage AI-driven transformations. They also encourage stronger AI education in universities and research institutions. Public awareness programmes aim to help citizens understand AI’s benefits and risks.

Policy and regulation form the third pillar. MeitY concludes that most AI risks can be managed through existing laws, strengthened by targeted amendments. These include issues relating to content authenticity, copyright, due diligence obligations, and the legal classification of AI developers and deployers. Regulatory sandboxes will give innovators an opportunity to test emerging technologies in controlled conditions.

The fourth pillar relates to risk mitigation. The guidelines identify several categories of risk, including malicious use, opacity, bias, safety failures and security threats. MeitY therefore proposes an India-specific risk assessment framework supported by a national incident database that records real-world harms and failures. Voluntary measures, such as audits, transparency reports and bias assessments, are encouraged as part of a proactive risk management culture.

The fifth pillar concerns accountability. The guidelines adopt a graded liability model, where responsibility is proportionate to the risk associated with the function performed. Organisations must implement grievance mechanisms and publish reports on risk management practices. Regulators are expected to ensure predictable enforcement. Mechanisms such as self-certification, independent audits and technical compliance tools reinforce this framework.

The final pillar focuses on institutional mechanisms. The guidelines propose a coordinated oversight structure involving three national bodies. The AI Governance Group serves as the principal inter-ministerial body responsible for aligning national strategy and ensuring policy coherence.

The Technology and Policy Expert Committee provides specialised guidance on legal, ethical, and technical matters. The AI Safety Institute conducts model evaluations for safety, fairness, and robustness and represents India in global safety networks. Together, these institutions support a whole-of-government approach to AI regulation.

The action plan within the guidelines sets out short-term, medium-term, and long-term priorities. In the short term, the focus lies on establishing institutions, releasing voluntary codes, enhancing compute accessibility, and launching public awareness programmes. Medium-term goals include developing national standards for authenticity, fairness, transparency, and cybersecurity, operationalising the incident database, and updating legal frameworks. Long-term objectives include continuous policy review and strengthening India's role in shaping global AI governance.

A defining characteristic of India’s approach is its techno-legal philosophy. Compliance requirements are built directly into system architecture so that governance becomes a matter of design rather than after-the-fact enforcement. The Data Empowerment and Protection Architecture provides a foundation for consent-driven data use. Extending this model to AI training enhances transparency and auditability. Tools such as watermarking, provenance tracking, and human-in-the-loop oversight reduce administrative burdens and help ensure responsible development.

The guidelines situate AI governance within India’s broader international strategy. India intends to take an active role in global discussions through the G20, OECD, and United Nations. Hosting the AI Impact Summit 2026 reflects the country’s ambition to shape international norms and champion the interests of the Global South. India’s model offers a practical route for emerging economies that wish to encourage innovation while maintaining safeguards.

The guidelines represent a major step in India’s governance of advanced technologies. They bring together trust, innovation, and accountability to create a coherent framework for responsible AI development. If implemented effectively, India’s model can serve as a global benchmark and demonstrate how developing nations can adopt advanced technologies in ways that reinforce public trust and promote inclusive growth.

The article has been authored by Pranav Khatavkar, founder at Lexentra.

The views expressed in this article are solely those of the author and do not necessarily reflect the opinion of NDTV Profit or its affiliates. Readers are advised to conduct their own research or consult a qualified professional before making any investment or business decisions. NDTV Profit does not guarantee the accuracy, completeness, or reliability of the information presented in this article.
