The Regulation of AI-Generated Content: Amendments to IT Rules, 2021 Signal Lofty Goals, but Uncertain Outcomes
On October 23, 2025, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Among the most notable changes, the amendments mandate that AI-manipulated media carry visible labels to alert users and introduce stricter accountability mechanisms to curb synthetic and deepfake content. According to MeitY, platforms that fail to meet these requirements risk losing “safe harbour” protection under Section 79 of the Information Technology Act, 2000, exposing them to legal liability for user content.
Targeting AI: Break from Past Patterns of Content Regulation
This amendment diverges sharply from prior regulatory approaches, which focused mainly on intermediaries’ due diligence obligations and compliance tools. The explicit mention of “synthetically generated information”, defined as algorithmically generated media resembling authentic content, lays a new foundation for Indian digital governance. Until now, India’s digital framework lacked statutory clarity on AI manipulation, deepfake content, and synthetic media, leaving platforms to rely on general takedown procedures under the IT Rules, 2021.
The stipulation that platforms visibly label and tag AI-generated media, with the label covering at least 10% of the visual frame or duration according to the Rules, sets a global benchmark by assigning liability directly to platforms. This both innovates and reinforces regulatory intent: similar moves have been slow to emerge even in digital-policy-heavy jurisdictions such as the European Union. While the EU AI Act prioritises accountability for the developers and corporations behind AI tools, India appears intent on tasking intermediaries directly with public-facing obligations to manage synthetic content’s societal impact.
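To make the coverage requirement concrete, here is a minimal Python sketch of how a platform might stamp a visible banner occupying 10% of an image's area, using the Pillow library. The label wording, colour, and bottom-banner placement are illustrative assumptions; only the 10% coverage figure comes from the Rules as described above.

```python
# A minimal sketch, assuming an image-only workflow and the Pillow (PIL)
# library. The label text and banner placement are hypothetical; the
# Rules, as described above, fix only the minimum coverage, not the
# presentation.
from PIL import Image, ImageDraw, ImageFont

LABEL_TEXT = "AI-GENERATED CONTENT"  # hypothetical wording
MIN_COVERAGE = 0.10                  # label must cover at least 10% of the frame

def apply_synthetic_label(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # A full-width banner whose height is 10% of the image height
    # covers exactly 10% of the total frame area.
    banner_h = max(1, int(h * MIN_COVERAGE))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill="black")
    draw.text((10, h - banner_h + banner_h // 4), LABEL_TEXT,
              fill="white", font=ImageFont.load_default())
    img.save(path_out)

apply_synthetic_label("upload.jpg", "upload_labelled.jpg")
```

Audio and video would need analogous treatment (an audible disclosure, or an overlay persisting for the required share of the duration), which this image-only sketch does not attempt.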
The Machinery of Oversight: Fragmentation or Precision?
At the heart of the amendments is tighter bureaucratic control over takedown requests. Only officers of Joint Secretary rank (for the central government) or Director General of Police rank (for states) are authorised to issue orders. This provision responds to recurring criticism of arbitrary censorship, where orders often originated from mid-level bureaucrats acting on ambiguous grounds. The amendments further mandate stringent criteria for takedown requests: each must cite its statutory basis, specify the URL or content identifier, and provide written reasoning.
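To illustrate how an intermediary might operationalise these criteria, the following Python sketch validates an incoming takedown order against the mandated elements. The field names, rank strings, and rejection logic are assumptions made for illustration; they are not drawn from the text of the Rules.

```python
# A minimal sketch, assuming takedown orders arrive as structured
# records. Ranks and field names are hypothetical placeholders, not
# text from the Rules.
from dataclasses import dataclass

AUTHORISED_RANKS = {"Joint Secretary", "Director General of Police"}

@dataclass
class TakedownOrder:
    issuing_officer_rank: str
    statutory_basis: str      # the legal provision the order cites
    content_identifier: str   # URL or other content identifier
    written_reasoning: str    # the mandated written justification

def is_valid(order: TakedownOrder) -> bool:
    """Reject orders missing any element the amendments mandate."""
    return (order.issuing_officer_rank in AUTHORISED_RANKS
            and bool(order.statutory_basis.strip())
            and bool(order.content_identifier.strip())
            and bool(order.written_reasoning.strip()))
```

A real compliance system would also log each order for the monthly Secretary-level review described below, but the core check is this simple: no statutory basis, identifier, or reasoning, no takedown.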
Another notable move is the introduction of a monthly Secretary-level review mechanism. This procedural safeguard aims to minimise arbitrary action and ensure that takedown decisions adhere to the principles of legality, necessity, and proportionality. Yet oversight mechanisms of this kind often run into capacity bottlenecks, particularly when volume overwhelms them. If platforms escalate disputes over flagged content too frequently, backlogs and delays could snowball into institutional dysfunction.
Reconciling Claims and Ground Reality
The government argues that these amendments enhance transparency while safeguarding user rights, but whether this assertion holds is an open question. Consider the provision that calls for automated detection tools to identify and tag deepfake or synthetic content. While technologically feasible for Significant Social Media Intermediaries (SSMIs) like Instagram or YouTube, smaller platforms may lack the resources or AI tooling required to comply. Importantly, the cost of non-compliance is steep: failure to comply forfeits safe harbour protection under Section 79, threatening smaller intermediaries disproportionately.
The data also reveals underlying tensions. According to an October 2024 MeitY report, only 12% of platforms in India deployed AI tools to track potentially misleading content, and most relied on human moderators instead. The amendment may incentivise AI integration, but mandating declarations from users uploading AI-altered content presumes an unrealistic level of self-regulation in a country where digital literacy remains patchy.
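As a sketch of how the two signals discussed above, automated detection and user self-declaration, might feed a platform's tagging decision, consider the following. The detector, its score, and the threshold are hypothetical placeholders; the amendments do not prescribe any particular tool or cut-off.

```python
# A minimal sketch: tag an upload as synthetic if the uploader declares
# it as AI-altered, or if a (hypothetical) detector is confident enough.
# The 0.8 threshold is an illustrative assumption, not from the Rules.
def should_tag_as_synthetic(user_declared: bool,
                            detector_score: float,
                            threshold: float = 0.8) -> bool:
    # Either signal alone suffices; declarations are honoured even when
    # the detector is unsure.
    return user_declared or detector_score >= threshold

# Example: an undeclared upload that the detector scores at 0.92
print(should_tag_as_synthetic(user_declared=False, detector_score=0.92))  # True
```

The design choice worth noting is that the rule is a logical OR: mandatory user declarations are meant to compensate for detector blind spots, which is precisely why patchy digital literacy undermines the scheme.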
Uncomfortable Questions: The Real Battle Lies Ahead
The broader concern goes beyond technological feasibility and focuses squarely on regulatory design. First, will Joint Secretary-level and DGP-level authorisations truly curb misuse? Much depends on the independence of these roles and their willingness to resist political pressure. Second, the monthly reviews sound like a promising layer of accountability, but do Secretary-level officers have the bandwidth and expertise to vet AI-centric takedown cases at the pace digital platforms demand?
Third, and perhaps most troubling, is the question of censorship. Labeling deepfakes and ensuring algorithmic accountability help viewers distinguish reality from manipulated visuals, but the definition of “misleading or synthetic media” remains legally subjective. Without unequivocal criteria for what constitutes “harmful manipulation,” these amendments may inadvertently expand censorship discretion rather than curb it.
Lessons From South Korea's Precision
South Korea offers a telling comparison. In 2018, when deepfake scandals began surfacing, Seoul implemented the Digital Information Ethics Act, which targeted not intermediaries but developers of AI tools used for manipulation. This distinction avoided forcing platforms into infeasible technological investments while holding algorithm creators responsible for synthetic misinformation. Unlike India’s expansive labeling norms, South Korea devised clear, accessible criteria for content removal, emphasising falsified media intended to harm national security or personal reputations. India’s moves may appear more comprehensive, but they sacrifice precision by placing parallel burdens on platforms and users.
Practice Questions for UPSC
Quick Recall
- Which legislative framework gives authority for the IT Rules, 2021 amendments?
Answer: The Information Technology Act, 2000.
- What is the consequence for platforms failing to comply with labeling norms for synthetic media under the IT Rules amendments?
Answer: Loss of safe harbour protection under Section 79 of the IT Act.
Prelims Practice Questions
Question 1. With reference to the 2025 amendments to the IT Rules, 2021, consider the following statements:
- Statement 1: The amendments focus only on large social media intermediaries.
- Statement 2: Platforms are required to visibly label AI-generated content.
- Statement 3: Takedown requests can be issued by any mid-level bureaucrat.
Which of the above statements is/are correct?
Question 2. With reference to the purpose of mandating visible labels for AI-generated content, consider the following statements:
- Statement 1: To increase the advertising revenue of platforms.
- Statement 2: To ensure user awareness of manipulated content.
- Statement 3: To reduce the operational costs of content moderation.
Which of the above statements is/are correct?
Frequently Asked Questions
What are the key objectives of the amendments to the IT Rules, 2021 regarding AI-generated content?
The key objectives include mandatory visible labels for AI-manipulated media and stringent accountability measures requiring platforms to curb synthetic and deepfake content. These changes aim to enhance transparency and protect users while establishing a legal framework for addressing AI-generated information.
How do the amendments differ from previous content regulation approaches in India?
The amendments differ significantly as they explicitly address 'synthetically generated information,' establishing new responsibilities for intermediaries rather than solely relying on due diligence. This shift aims to provide statutory clarity regarding AI manipulation and places accountability for AI-generated content directly on the platforms.
What mechanisms have been introduced in the amendments to prevent arbitrary censorship?
To prevent arbitrary censorship, the amendments stipulate that only senior officials, such as Joint Secretaries or Directors General of Police, can issue takedown orders. Furthermore, they require detailed justifications for each takedown request and introduce a monthly review mechanism to ensure oversight in the decision-making process.
What challenges do smaller platforms face in complying with the new regulations on AI content?
Smaller platforms may struggle with compliance due to resource limitations, as they may not possess the AI capabilities or automated detection tools required to identify and tag deepfake content. The heavy accountability burden could affect them disproportionately, particularly as they lack the manpower of larger platforms.
Why is there skepticism about the effectiveness of the new regulatory measures?
Skepticism arises from concerns over whether the designated bureaucratic roles can handle the volume of disputes arising from flagged content, and over the potential for misuse of takedown powers. Additionally, the feasibility of automated detection tools remains uncertain, as does the scheme's reliance on user self-declarations in a population where digital literacy is uneven.
About LearnPro Editorial Standards
LearnPro editorial content is researched and reviewed by subject matter experts with backgrounds in civil services preparation. Our articles draw from official government sources, NCERT textbooks, standard reference materials, and reputed publications including The Hindu, Indian Express, and PIB.
Content is regularly updated to reflect the latest syllabus changes, exam patterns, and current developments. For corrections or feedback, contact us at admin@learnpro.in.