Introduction: Government and IT Sector Engagement with Anthropic AI Model

In 2024, the Government of India, alongside the IT industry, initiated a comprehensive study of the cybersecurity implications of Anthropic's AI model Claude, a large language model designed with enhanced safety features. The collaboration involves government agencies such as MeitY and CERT-In and industry bodies such as NASSCOM, and assesses emerging AI-driven cyber threats. The study aims to identify vulnerabilities and regulatory gaps preemptively, given India's rapid digital expansion and increasing AI adoption. Its significance lies in addressing cybersecurity risks before large-scale exploitation occurs, particularly in the context of India's growing AI market and rising volume of cyberattacks.

UPSC Relevance

  • GS Paper 3: Science and Technology – Cybersecurity challenges, AI governance, IT Act provisions
  • GS Paper 2: Polity – Data protection laws, constitutional right to privacy
  • Essay: Digital India and Emerging Technologies – Balancing innovation and security

India’s cybersecurity governance is anchored by the Information Technology Act, 2000, particularly Sections 43A (data protection liability) and 66 (computer-related offences). The Personal Data Protection Bill, 2019, still pending parliamentary approval, seeks to regulate data privacy comprehensively, a critical gap in AI data governance. The Supreme Court’s Justice K.S. Puttaswamy (2017) verdict constitutionally enshrined the right to privacy under Article 21, providing a legal basis for protecting personal data against AI misuse. Furthermore, the National Cyber Security Policy, 2013 outlines strategic objectives but lacks AI-specific directives. The Shreya Singhal v. Union of India (2015) judgment clarified intermediary liability, relevant for AI platforms hosting user-generated content.

  • IT Act 2000: Sections 43A and 66 address compensation for data breaches and cyber offences.
  • Personal Data Protection Bill 2019: Pending law aimed at data privacy and AI data handling.
  • Article 21: Right to privacy underpins data protection in AI context.
  • National Cyber Security Policy 2013: Framework for cybersecurity, lacks AI-specific provisions.
  • Shreya Singhal Case: Defines intermediary liability relevant to AI content moderation.

Economic Dimensions: Cybersecurity and AI Market Dynamics in India

India’s cybersecurity market is projected to expand from USD 3.05 billion in 2020 to USD 13.6 billion by 2025, growing at a CAGR of 32.5% (NASSCOM 2021). The Union Budget 2023-24 allocated INR 5,000 crore (~USD 670 million) under Digital India for cybersecurity infrastructure. Concurrently, the AI market is expected to reach USD 7.8 billion by 2025 (NITI Aayog 2022), intensifying cybersecurity risks linked to AI models like Anthropic’s Claude. Cybercrime costs India an estimated INR 1.15 lakh crore (~USD 15 billion) annually (CERT-In 2023). The IT sector employs over 4.5 million professionals, with a rising demand for cybersecurity experts to counter AI-enabled threats.

  • Cybersecurity market CAGR: 32.5% growth to USD 13.6 billion by 2025.
  • Government funding: INR 5,000 crore for cybersecurity under Digital India.
  • AI market growth: USD 7.8 billion expected by 2025.
  • Annual cybercrime losses: INR 1.15 lakh crore (~USD 15 billion).
  • IT workforce: 4.5 million+, increasing cybersecurity skill demand.
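The growth figures above can be sanity-checked with basic compound-annual-growth-rate (CAGR) arithmetic: CAGR = (end/start)^(1/years) − 1. A minimal sketch in Python, using the 2020 and 2025 market values from the NASSCOM projection cited above (the small gap versus the cited 32.5% plausibly reflects rounding or differing base years in the source):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# NASSCOM projection cited above: USD 3.05 bn (2020) -> USD 13.6 bn (2025)
implied = cagr(3.05, 13.6, 5)
print(f"Implied CAGR: {implied:.1%}")  # about 35%, in the ballpark of the cited 32.5%
```

The same helper can be reused for the AI-market figure once a base-year value is fixed; the document cites only the 2025 endpoint for that series.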

Key Institutions and Their Roles in AI Cybersecurity

CERT-In functions as the national agency for cybersecurity incident response and coordination. The National Critical Information Infrastructure Protection Centre (NCIIPC) safeguards critical infrastructure from cyber threats. MeitY formulates and implements IT and cybersecurity policies, including AI governance. NASSCOM represents IT industry interests and conducts cybersecurity surveys. Anthropic is an AI research company developing large language models with a focus on safety and interpretability. NITI Aayog advises the government on AI policy and emerging technology strategies, including cybersecurity implications.

  • CERT-In: Cyber incident response and threat coordination.
  • NCIIPC: Critical infrastructure cybersecurity protection.
  • MeitY: IT policy and cybersecurity regulation.
  • NASSCOM: Industry representation and cybersecurity research.
  • Anthropic: Developer of AI models with safety features.
  • NITI Aayog: Policy advisory on AI and cybersecurity.

India ranked 3rd globally, with over 1.16 billion cyberattacks detected in 2023 (CERT-In Annual Report 2023). Anthropic's Claude AI model claims improved interpretability and safety compared to GPT-4 (Anthropic Whitepaper 2023), yet over 70% of Indian IT firms reported increased cybersecurity threats linked to AI tools in 2023 (NASSCOM Cybersecurity Survey 2023). Only 15% of Indian enterprises have AI-specific cybersecurity protocols (KPMG India AI Risk Report 2023). The pending Personal Data Protection Bill creates regulatory uncertainty in AI data governance (PRS Legislative Research 2024). Globally, AI-related cybercrime is projected to cause USD 15 billion in damages by 2025, with India a major target owing to its large digital footprint (Interpol Cybercrime Report 2023).

  • 1.16 billion cyberattacks in India in 2023; 3rd highest globally.
  • Anthropic Claude model: enhanced safety and interpretability.
  • 70%+ Indian IT firms report increased AI-linked threats.
  • Only 15% enterprises have AI-specific cybersecurity measures.
  • Personal Data Protection Bill pending, causing regulatory gaps.
  • Global AI cybercrime damage projected at USD 15 billion by 2025.

Comparative Analysis: India versus European Union AI Cybersecurity Regulation

  • Regulatory Framework: India relies on the IT Act 2000 and the pending Personal Data Protection Bill, with no AI-specific cybersecurity law; the EU's proposed AI Act (2021) mandates risk assessments and cybersecurity standards for high-risk AI.
  • AI Model Governance: India has limited guidelines, focused on data protection and cybercrime prevention; the EU requires strict transparency, human oversight, and cybersecurity compliance for AI systems.
  • Enforcement and Compliance: India's enforcement is fragmented across MeitY, CERT-In, and NCIIPC; the EU has centralised enforcement with penalties for non-compliance, and pilot states reported a 30% reduction in AI-driven cyber incidents.
  • Data Privacy: In India, the right to privacy is upheld by the Supreme Court while the Personal Data Protection Bill remains pending; in the EU, the GDPR operates alongside the AI Act, ensuring strong data privacy and user rights.

Critical Gaps in India’s AI Cybersecurity Governance

India lacks a comprehensive, AI-specific cybersecurity regulatory framework integrating data protection, AI ethics, and real-time threat mitigation. The pending status of the Personal Data Protection Bill creates ambiguity in AI data governance. The current legal framework does not address AI model transparency, accountability, or mandatory risk assessments. Institutional coordination among MeitY, CERT-In, and NCIIPC is fragmented, limiting rapid response to AI-driven cyber threats. This gap contrasts with the EU’s proactive AI Act, which mandates strict cybersecurity and ethical standards, reducing AI-related cyber incidents.

  • No AI-specific cybersecurity legislation or mandatory risk assessment protocols.
  • Pending Personal Data Protection Bill causes regulatory uncertainty.
  • Fragmented institutional coordination reduces threat mitigation efficiency.
  • Absence of AI transparency and accountability mandates.
  • Vulnerability to AI-driven cyberattacks due to governance gaps.

Way Forward: Strengthening India’s AI Cybersecurity Posture

  • Enact the Personal Data Protection Bill with AI-specific provisions for data governance and privacy.
  • Develop a dedicated AI cybersecurity framework mandating risk assessments, transparency, and human oversight, inspired by the EU AI Act.
  • Enhance coordination between MeitY, CERT-In, and NCIIPC for integrated threat detection and response.
  • Promote industry adoption of AI-specific cybersecurity protocols and capacity building for cybersecurity professionals.
  • Encourage public-private partnerships to develop AI safety standards and ethical guidelines.
📝 Prelims Practice
Consider the following statements about India's cybersecurity regulatory framework:
  1. The Information Technology Act, 2000 includes provisions for compensation in case of data protection failure.
  2. The Personal Data Protection Bill, 2019 is currently an active law governing data privacy in India.
  3. The National Cyber Security Policy, 2013 includes specific guidelines for AI cybersecurity risk mitigation.

Which of the above statements is/are correct?

  • (a) 1 only
  • (b) 2 and 3 only
  • (c) 1 and 3 only
  • (d) 1, 2 and 3
Answer: (a)
Statement 1 is correct because Section 43A of the IT Act 2000 provides for compensation for failure to protect data. Statement 2 is incorrect as the Personal Data Protection Bill, 2019 is pending and not yet enacted. Statement 3 is incorrect because the National Cyber Security Policy, 2013 does not contain AI-specific guidelines.
📝 Prelims Practice
Consider the following about CERT-In and MeitY:
  1. CERT-In is responsible for cybersecurity incident response and coordination in India.
  2. MeitY is the nodal agency for formulating and implementing IT and cybersecurity policies.
  3. CERT-In directly enforces data protection laws under the Personal Data Protection Bill.

Which of the above statements is/are correct?

  • (a) 1 and 2 only
  • (b) 2 and 3 only
  • (c) 1 and 3 only
  • (d) 1, 2 and 3
Answer: (a)
Statements 1 and 2 are correct. CERT-In handles cybersecurity incident response, while MeitY formulates IT and cybersecurity policies. Statement 3 is incorrect because CERT-In does not enforce data protection laws; enforcement is under MeitY and other agencies.
✍ Mains Practice Question
Critically examine the cybersecurity challenges posed by emerging AI models like Anthropic’s Claude in India. Discuss the adequacy of India’s current legal and institutional framework to address these challenges and suggest measures to strengthen AI cybersecurity governance.
(250 Words, 15 Marks)

Jharkhand & JPSC Relevance

  • JPSC Paper: Paper 3 (Science & Technology) – Cybersecurity and AI governance
  • Jharkhand Angle: Jharkhand’s growing IT hubs and digital infrastructure increase exposure to AI-driven cyber threats.
  • Mains Pointer: Frame answers highlighting state-level cybersecurity capacity, need for AI-specific policies, and alignment with national frameworks.
What is the role of Section 43A of the IT Act, 2000 in cybersecurity?

Section 43A mandates compensation by companies for failure to protect sensitive personal data, establishing liability for negligence in cybersecurity.

Why is the Personal Data Protection Bill, 2019 critical for AI cybersecurity?

The bill aims to regulate data privacy comprehensively, including AI data processing, but its pending status creates regulatory uncertainty for AI governance.

How does Anthropic’s Claude AI model differ in safety features?

Claude claims enhanced interpretability and safety mechanisms compared to GPT-4, aiming to reduce misuse and improve transparency in AI outputs.

What are the main cybersecurity challenges linked to AI in India?

Challenges include increased AI-driven cyberattacks, lack of AI-specific protocols (only 15% enterprises have them), and fragmented regulatory frameworks.

How does the EU AI Act serve as a model for India?

The EU AI Act mandates risk assessments, transparency, and human oversight for high-risk AI, reducing AI-related cyber incidents by 30% in pilot states, offering a regulatory blueprint.
