GS Paper III | Economy

India’s New AI Governance Guidelines

LearnPro Editorial
6 Nov 2025
Updated 3 Mar 2026
8 min read

The Seven Principles of AI Governance: Can India’s New Guidelines Deliver?

On November 6, 2025, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines, outlining a regulatory roadmap with seven core principles and recommendations spanning six critical pillars. This marks India's most comprehensive attempt yet to enforce accountability, regulate risks, and enable innovation in the rapidly evolving field of artificial intelligence (AI).

Why This Feels Like New Terrain

The guidelines break decisively from India’s recent history of unregulated AI innovation and piecemeal interventions by different ministries. Until now, AI governance has been scattered—examples include the use of AI-based facial recognition announced under the Ministry of Home Affairs for policing and the deployment of AI models by RBI's fintech committee. These efforts lacked cohesion, often running ahead of accompanying regulations.

What stands out in MeitY’s framework is its multi-layered institutional approach: from a high-level AI Governance Group to advisory bodies such as NITI Aayog, and sectoral regulators like RBI, SEBI, and TRAI. For the first time, India proposes a "whole of government" model for AI governance. This sets a precedent for unified regulatory oversight, rather than siloed initiatives. Moreover, the explicit mapping of outcomes into short, medium, and long-term action plans signals a deliberate and measured strategy, in sharp contrast to the fragmented implementation of previous tech-related policies.

The timing matters too. India ranks 68th in the Oxford Insights AI Readiness Index (2024), far behind countries like Singapore and Israel, despite having the world's second-largest workforce in AI development. Clearly, India has been more of a supply-side participant—exporting talent—than a regulatory force on the global stage. The announcement of these guidelines suggests a pivot toward proactive governance.

The Machinery of Governance: Ambitions Meet Institutional Realities

The guidelines envision a multi-tiered institutional structure to oversee AI governance, tailored to India's socio-economic context:

  • High-level AI Governance Group: Serves as the apex body for setting priorities, coordinating across agencies, and addressing inter-sectoral risks.
  • Sectoral Regulators: Includes specialized actors such as RBI, SEBI, and TRAI, expected to embed AI-specific mandates into existing frameworks.
  • Advisory Bodies: NITI Aayog, working alongside the Office of Principal Scientific Advisor (PSA), will provide research and evidence-based recommendations.

Legally, these guidelines fall under the umbrella of the Information Technology Act, 2000, and rely heavily on amendments to its sections to address AI-specific concerns, such as differential liability and transparency standards. The institutional framework is ambitious but undeniably complex. How effectively can MeitY manage cross-sectoral collaboration when competing mandates—from protecting privacy to promoting innovation—frequently clash? Past examples, such as the multiple rounds of consultation over the Personal Data Protection Bill, suggest that consensus-building across ministries can take years—time AI doesn’t afford.

Risk Mitigation: The Data Doesn’t Quite Match the Optimism

The guidelines promise an India-specific risk framework to assess AI harms, define accountability, and allocate liability based on function and risk level. But the emphasis on "real-world evidence of harm" raises concerns. The draft sidesteps critical data gaps: India lacks comprehensive statistics on algorithm-induced harm, discriminatory profiling, or examples of AI malpractice across sectors. For instance, MeitY’s 2024 white paper estimated that only 30% of AI models deployed in governmental schemes underwent risk assessment audits. That number is misleading; it reflects not efficiency but glaring gaps in regulatory oversight.

Infrastructure challenges further complicate risk mitigation. India’s computing power—a key enabler for AI innovation—lags behind global leaders. A 2024 NASSCOM report revealed India accounts for just 3% of global GPU infrastructure, compared to China’s 35%. Without addressing this foundational deficit, the guidelines’ call for “expanding access to foundational resources such as data and compute” may remain aspirational.

The Uncomfortable Questions: Implementation and Equity

The guidelines’ emphasis on skilling and capacity building—via calls for new educational programs and vocational training—unfortunately ignores uneven state-level readiness. As of 2024, fewer than 20% of states had adopted AI-training modules across their public education systems, according to NIEPA data. MeitY’s guidelines outline no enforceable way to bridge this divide, nor do they address the fact that state governments have vastly differing capacities to implement complex regulatory frameworks.

Another critical blind spot is funding. While infrastructure expansion is high on the agenda, there is no mention of a concrete budget allocation for the guidelines. For comparison, in 2019 South Korea’s National AI Strategy committed $1 billion over five years to infrastructure and skill-building programs. India’s guidelines, despite their broader scope, remain short on fiscal specifics—a risky omission.

Then there are questions of equity. The guidelines aim to “advance technical progress while mitigating risks to society.” But whose society? Similar policy frameworks in the US and Europe have demonstrated how AI legislation, without targeted safeguards, can disproportionately impact marginalized groups. India’s guidelines feature no explicit regulations to ensure algorithmic equity or protections against AI-driven exclusion in welfare schemes. This omission is especially glaring, given India’s history of systemic bias in algorithmic decision-making (e.g., Aadhaar-based biometric mismatches in rural entitlements).

Lessons from Singapore: A Focused, Scalable Approach

South Korea could have been a logical comparative anchor, given its robust governmental investments, but Singapore offers more immediate relevance. Its AI governance model avoids sprawling complexity by focusing on three principles: explainability, accountability, and fairness. Instead of overloading multiple agencies, Singapore's Personal Data Protection Commission oversees compliance across sectoral mandates. India’s guidelines are broader but risk unwieldiness. A leaner institutional design, with fewer overlapping roles and clearer mandates, may yield better results in India’s federal system.

📝 Prelims Practice
  • Question: Under India’s AI Governance Guidelines proposed in 2025, which body is tasked with inter-sectoral coordination and priority setting for AI policy?
    A: NITI Aayog
    B: AI Governance Group
    C: TRAI
    D: Office of Principal Scientific Advisor
    Answer: B
  • Question: What percentage of global GPU infrastructure does India currently hold, as per the 2024 NASSCOM report?
    A: 3%
    B: 20%
    C: 35%
    D: 50%
    Answer: A
✍ Mains Practice Question
How far has the Ministry of Electronics and Information Technology succeeded in creating an actionable AI governance framework for India’s context through its 2025 guidelines? Assess the structural limitations of the proposed institutional approach.
250 Words | 15 Marks

Practice Questions for UPSC

📝 Prelims Practice
Consider the following statements about the proposed institutional model for AI governance in India:
  1. The model seeks to reduce siloed AI oversight by combining an apex coordination body with sectoral regulators and advisory institutions.
  2. Sectoral regulators are expected to create entirely new standalone AI laws separate from their existing regulatory frameworks.
  3. The framework anticipates that competing policy mandates (such as privacy protection and innovation promotion) may create coordination challenges.

Which of the above statements is/are correct?

  • (a) 1 and 2 only
  • (b) 1 and 3 only
  • (c) 2 and 3 only
  • (d) 1, 2 and 3
Answer: (b)
📝 Prelims Practice
Consider the following statements about evidence and capacity constraints affecting AI risk governance in India:
  1. Basing AI risk action primarily on real-world evidence of harm may be difficult when statistics on algorithm-induced harms and profiling are limited.
  2. Audit coverage of AI models in government schemes has been portrayed as adequate, indicating that the regulatory oversight gap is largely resolved.
  3. Calls to expand access to foundational resources like compute may face constraints because India’s share in global GPU infrastructure is relatively low.

Which of the above statements is/are correct?

  • (a) 1 and 2 only
  • (b) 1 and 3 only
  • (c) 2 and 3 only
  • (d) 1, 2 and 3
Answer: (b)
✍ Mains Practice Question
Critically examine India’s new AI Governance Guidelines as an attempt to move from fragmented, ministry-led interventions to a “whole of government” model. Analyze the adequacy of the proposed institutional design, legal anchoring under the IT Act, risk-mitigation approach, and capacity constraints (compute infrastructure and uneven state readiness) in achieving accountable and innovation-friendly AI governance. (250 words)
250 Words | 15 Marks

Frequently Asked Questions

How do the new AI Governance Guidelines differ from India’s earlier approach to AI regulation?

Earlier AI-related actions were fragmented across ministries and often preceded clear regulation, leading to piecemeal governance. The new guidelines propose a unified “whole of government” model with an apex group, advisory bodies, and sectoral regulators to reduce siloed decision-making and improve accountability.

What institutional architecture is envisaged for implementing the AI Governance Guidelines, and why is it challenging?

The framework includes a high-level AI Governance Group for coordination, sectoral regulators like RBI/SEBI/TRAI for domain-specific mandates, and advisory bodies such as NITI Aayog with the Office of the PSA for evidence-based inputs. The complexity lies in managing cross-sectoral coordination where goals like privacy protection and innovation promotion can conflict, slowing consensus and implementation.

What is the significance of placing the guidelines under the Information Technology Act, 2000?

Anchoring the guidelines within the IT Act, 2000 indicates reliance on statutory backing and section-level amendments to address AI-specific issues. The article highlights proposed concerns such as differential liabilities and transparency standards, signaling an attempt to translate governance principles into enforceable legal obligations.

Why does the article argue that the risk-mitigation approach may be overly optimistic given India’s current evidence base?

The guidelines emphasize “real-world evidence of harm,” but the article notes India lacks comprehensive statistics on algorithm-induced harm, discriminatory profiling, and AI malpractice. It also cites that only 30% of AI models in governmental schemes underwent risk assessment audits, pointing to oversight gaps rather than mature compliance.

How do infrastructure and state capacity constraints affect the feasibility of the guidelines’ goals?

The guidelines call for expanding access to foundational resources like data and compute, but India’s compute base is limited, with only 3% of global GPU infrastructure noted in the article. Capacity building is also uneven: fewer than 20% of states had adopted AI-training modules in public education systems, and the guidelines provide no enforceable mechanism to bridge these state-level disparities.

