GS Paper II | Polity

Govt. Notifies Amendments to IT Rules, 2021 To Regulate AI-generated Content

LearnPro Editorial
23 Oct 2025
Updated 3 Mar 2026

The Regulation of AI-Generated Content: Amendments to IT Rules, 2021 Signal Lofty Goals, but Uncertain Outcomes

On October 23, 2025, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Among the most notable changes, the amendments mandate that AI-manipulated media carry visible labels to alert users and introduce stringent accountability mechanisms to curb synthetic and deepfake content. According to MeitY, platforms failing to meet these requirements risk losing “safe harbour” protection under Section 79 of the Information Technology Act, 2000, exposing them to legal liability for user-generated content.

Targeting AI: A Break from Past Patterns of Content Regulation

These amendments diverge sharply from prior regulatory approaches, which focused largely on intermediaries’ due diligence and compliance tools. The explicit mention of “synthetically generated information”, defined as algorithmically generated media resembling authentic content, lays a new foundation for Indian digital governance. Until now, India’s digital framework lacked statutory clarity on issues arising from AI manipulation, deepfake content, or synthetic media, leaving platforms to rely on general takedown procedures under the IT Rules, 2021.

The stipulation that platforms visibly label and tag AI-generated media, with the label occupying 10% of the frame or duration according to the Rules, sets a global benchmark in assigning liability directly to platforms. This represents both an innovation and a reinforcement of regulatory intent, as similar moves have been slow to emerge even in digital policy-heavy jurisdictions such as the European Union. While the EU AI Act prioritises accountability for developers and corporations involved in AI tools, India appears intent on tasking intermediaries directly with public-facing obligations to manage synthetic content’s societal impact.
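To give a rough sense of what the labelling threshold could mean in practice, the following is a minimal, purely illustrative Python sketch. The function names, the full-width banner layout, and the choice to read the 10% figure as frame area and clip duration are assumptions made for illustration; the Rules do not prescribe any particular computation.

```python
# Illustrative sketch only: how a platform might size a "synthetically
# generated" disclosure so that it covers roughly 10% of a frame, or is
# shown for roughly 10% of a clip's duration. The layout and function
# names are hypothetical, not drawn from the text of the Rules.

def label_banner_height(frame_width: int, frame_height: int,
                        coverage: float = 0.10) -> int:
    """Height (in pixels) of a full-width banner covering `coverage` of the frame area."""
    banner_area = coverage * frame_width * frame_height
    return max(1, round(banner_area / frame_width))

def label_display_seconds(clip_duration_s: float,
                          coverage: float = 0.10) -> float:
    """Seconds for which a disclosure must run to span `coverage` of the clip."""
    return coverage * clip_duration_s

if __name__ == "__main__":
    print(label_banner_height(1920, 1080))   # a 1080p frame -> 108 px banner
    print(label_display_seconds(90.0))       # a 90-second clip -> 9.0 seconds
```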

The Machinery of Oversight: Fragmentation or Precision?

At the heart of the amendments is tighter bureaucratic control over takedown requests. Only officers of Joint Secretary rank (for the central government) or Director General of Police rank (for states) are authorised to issue orders. This provision responds to recurring criticism of arbitrary censorship, where orders often originated from mid-level bureaucrats acting on ambiguous grounds. The amendments further mandate stringent criteria for takedown requests: each must cite its statutory basis, specify the URL or content identifier, and provide written reasoning.
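Read together, these criteria amount to a small metadata checklist that every takedown order must satisfy. The sketch below is a hypothetical illustration of such a checklist; the field names, rank strings, and validation logic are assumptions, not the Rules' actual schema.

```python
# Hypothetical sketch of the metadata a compliant takedown order would carry:
# a statutory basis, a specific URL or content identifier, written reasoning,
# and an issuing officer of the required seniority. Field names and checks
# are illustrative assumptions only.

from dataclasses import dataclass

AUTHORISED_RANKS = {"Joint Secretary", "Director General of Police"}

@dataclass
class TakedownOrder:
    statutory_basis: str       # the legal provision invoked for removal
    content_identifier: str    # URL or platform-specific content ID
    written_reasoning: str     # recorded justification for the order
    issuing_officer_rank: str  # must be of sufficient seniority

    def is_valid(self) -> bool:
        """True only if every mandated field is present and the issuer is senior enough."""
        return (bool(self.statutory_basis.strip())
                and bool(self.content_identifier.strip())
                and bool(self.written_reasoning.strip())
                and self.issuing_officer_rank in AUTHORISED_RANKS)

# Example: an order citing its legal basis, target URL, and reasons passes the check.
order = TakedownOrder(
    statutory_basis="Section 69A, IT Act, 2000",
    content_identifier="https://example.com/post/123",
    written_reasoning="Recorded reasons specifying the alleged harm.",
    issuing_officer_rank="Joint Secretary",
)
print(order.is_valid())  # -> True
```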

Another notable move is the introduction of a monthly Secretary-level review mechanism. This procedural safeguard aims to minimise arbitrary action and ensure that takedown decisions adhere to the principles of legality, necessity, and proportionality. Yet oversight mechanisms of this kind often run into capacity bottlenecks, particularly when volume overwhelms them. If platforms escalate disputes over flagged content too frequently, backlogs and delays could snowball into institutional dysfunction.

Reconciling Claims and Ground Reality

The government argues that these amendments enhance transparency while safeguarding user rights, but whether this assertion holds is an open question. Consider the provision that calls for automated detection tools to identify and tag deepfake or synthetic content. While technologically feasible for Significant Social Media Intermediaries (SSMIs) like Instagram or YouTube, smaller platforms may lack the resources or AI-trained tools required to comply. Importantly, the bar for platform accountability remains steep: failure to comply results in forfeiting safe harbour protection under Section 79, threatening smaller intermediaries disproportionately.
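For illustration only, the detect-and-tag workflow that this provision implies might look like the sketch below. The scoring function, threshold, and output fields are assumptions: a real deepfake detector is a large machine-learning model, and the amendments do not prescribe any particular pipeline.

```python
# Hypothetical detect-and-tag pipeline: score an upload with an automated
# classifier and attach a visible disclosure tag when the score crosses a
# threshold. `score_fn` stands in for a real deepfake-detection model;
# every name and value here is an illustrative assumption.

from typing import Callable, Optional

def tag_if_synthetic(media_id: str,
                     score_fn: Callable[[str], float],
                     threshold: float = 0.8) -> dict:
    """Return upload metadata, adding a disclosure label above `threshold`."""
    score = score_fn(media_id)
    label: Optional[str] = "synthetically generated" if score >= threshold else None
    return {"media_id": media_id, "synthetic_score": round(score, 3), "label": label}

if __name__ == "__main__":
    # Stub scorer for demonstration; an SSMI would call a trained model here.
    demo_scorer = lambda _media_id: 0.92
    print(tag_if_synthetic("video_123", demo_scorer))
    # -> {'media_id': 'video_123', 'synthetic_score': 0.92, 'label': 'synthetically generated'}
```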

The data also reveals underlying tensions. According to an October 2024 MeitY report, only 12% of platforms in India deployed AI tools to track potentially misleading content, with most relying on human moderators instead. The amendments may incentivise AI integration, but mandating declarations from users who upload AI-altered content presumes an unrealistic level of self-regulation in a country where digital literacy remains patchy.

Uncomfortable Questions: The Real Battle Lies Ahead

The broader concern goes beyond technological feasibility and focuses squarely on regulatory design. First, will Joint Secretary-level and DGP-level authorisations truly curb misuse? Much depends on the independence of these roles and their willingness to resist political pressure. Second, the monthly reviews sound like a promising layer of accountability, but do Secretary-level officers have the bandwidth and expertise to vet AI-centric takedown cases at the pace digital platforms demand?

Third, and perhaps most troubling, is the question of censorship. Labelling deepfakes and ensuring algorithmic accountability help viewers discern reality from manipulated visuals, but the definition of “misleading or synthetic media” remains legally subjective. Without unequivocal criteria for what constitutes “harmful manipulation”, these amendments may inadvertently expand censorship discretion rather than curb it.

Lessons From South Korea's Precision

South Korea offers a telling comparison. In 2018, when deepfake scandals began surfacing, Seoul implemented the Digital Information Ethics Act, which targeted not intermediaries but developers of AI tools used for manipulation. This distinction avoided forcing platforms into infeasible technological investments while holding algorithm creators responsible for synthetic misinformation. Unlike India’s expansive labeling norms, South Korea devised clear, accessible criteria for content removal, emphasising falsified media intended to harm national security or personal reputations. India’s moves may appear more comprehensive, but they sacrifice precision by placing parallel burdens on platforms and users.

📝 Prelims Practice
  • Which legislative framework gives authority for the IT Rules, 2021 amendments?
    Answer: Information Technology Act, 2000.
  • What is the consequence for platforms failing to comply with labeling norms for synthetic media under the IT Rules amendments?
    Answer: Loss of safe harbour protection under Section 79 of the IT Act.
✍ Mains Practice Question
Critically evaluate whether the recent amendments to the Information Technology Rules, 2021 strike a balance between regulating synthetic media and preserving digital rights in India.
250 Words | 15 Marks

Practice Questions for UPSC

Prelims Practice Questions

📝 Prelims Practice
Consider the following statements about the recent amendments to the IT Rules, 2021:
  1. The amendments focus only on large social media intermediaries.
  2. Platforms are required to visibly label AI-generated content.
  3. Takedown requests can be issued by any mid-level bureaucrat.

Which of the above statements is/are correct?

  • (a) 1 and 2 only
  • (b) 2 only
  • (c) 2 and 3 only
  • (d) 1 and 3 only
Answer: (b)
📝 Prelims Practice
Which of the following is/are among the stated purposes of labelling AI-generated media under the amendments?
  1. To increase the advertising revenue of platforms.
  2. To ensure user awareness of manipulated content.
  3. To reduce the operational costs of content moderation.

Select the correct answer using the code given below:

  • (a) 1 and 2 only
  • (b) 2 only
  • (c) 1 and 3 only
  • (d) 1, 2 and 3
Answer: (b)
✍ Mains Practice Question
Critically examine the role of the amendments to the IT Rules, 2021 in managing AI-generated content and their implications for digital governance in India.
250 Words | 15 Marks

Frequently Asked Questions

What are the key objectives of the amendments to the IT Rules, 2021 regarding AI-generated content?

The key objectives of the amendments include the mandating of visible labels for AI-manipulated media and the introduction of stringent accountability measures for platforms to curb synthetic and deepfake content. These changes aim to enhance transparency and protect users while establishing a legal framework for addressing AI-generated information.

How do the amendments differ from previous content regulation approaches in India?

The amendments differ significantly as they explicitly address 'synthetically generated information,' establishing new responsibilities for intermediaries rather than solely relying on due diligence. This shift aims to provide statutory clarity regarding AI manipulation and places accountability for AI-generated content directly on the platforms.

What mechanisms have been introduced in the amendments to prevent arbitrary censorship?

To prevent arbitrary censorship, the amendments stipulate that only senior officials, such as Joint Secretaries or Directors General of Police, can issue takedown orders. Furthermore, they require detailed justifications for each takedown request and introduce a monthly review mechanism to ensure oversight in the decision-making process.

What challenges do smaller platforms face in complying with the new regulations on AI content?

Smaller platforms may struggle with compliance due to resource limitations, as they might not possess the AI capabilities or automated detection tools required to identify and tag deepfake content. The heavy accountability burden could disproportionately affect them, particularly as they lack the manpower of larger platforms.

Why is there skepticism about the effectiveness of the new regulatory measures?

Skepticism arises from concerns over whether the designated senior officials can effectively handle the volume of disputes arising from flagged content, and over the potential for misuse of takedown powers. Additionally, reliance on automated detection tools and on user self-declaration raises doubts in a country where digital literacy remains uneven.

Source: LearnPro Editorial | Polity | Published: 23 October 2025 | Last updated: 3 March 2026

