Introduction: Mythos AI Model and Cybersecurity Concerns

The Mythos AI model, deployed across India’s tech ecosystem in early 2024, has triggered cybersecurity alarms owing to its architecture and operational vulnerabilities. Developed by a consortium of private firms and research institutions, Mythos aims to enhance AI-driven decision-making across sectors. However, its deployment raises concerns about data privacy breaches, lack of algorithmic transparency, and susceptibility to adversarial attacks, any of which could compromise critical infrastructure and personal data.

These risks intersect with India’s evolving legal framework, including the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, whose implementing rules are yet to be notified. The model’s vulnerabilities necessitate urgent regulatory scrutiny and institutional readiness to mitigate potential cyber threats.

UPSC Relevance

  • GS Paper 3: Cyber Security (AI vulnerabilities, IT Act provisions, data privacy)
  • GS Paper 3: Science and Technology (AI advancements and risks)
  • GS Paper 2: Polity and Governance (Privacy rights under Article 21, data protection laws)
  • Essay: Technology and National Security

The Information Technology Act, 2000 provides the primary legal structure to address cybersecurity incidents involving AI models like Mythos. Section 43A mandates compensation for failure to protect sensitive personal data, while Section 66F criminalizes cyber terrorism, including attacks on AI systems critical to national security. Section 72A penalizes breaches of confidentiality and privacy, directly relevant to AI’s data handling.

The Digital Personal Data Protection Act, 2023, which replaced the withdrawn Personal Data Protection Bill, 2019, imposes fiduciary duties on entities controlling AI training and inference data, with penalties of up to INR 250 crore for failure to maintain reasonable security safeguards. The Supreme Court’s ruling in Justice K.S. Puttaswamy v. Union of India (2017) affirmed the right to privacy under Article 21, reinforcing the legal foundation for protecting personal data processed by AI systems.

  • Section 43A IT Act: Compensation for negligence in data protection
  • Section 66F IT Act: Cyber terrorism including AI system attacks
  • Section 72A IT Act: Punishment for breach of confidentiality
  • Digital Personal Data Protection Act, 2023: Data fiduciary obligations and penalties
  • Article 21 Constitution: Right to privacy upheld in Puttaswamy case

Economic Stakes and Cybersecurity Investments

India’s AI market is projected to reach USD 7.8 billion by 2025 with a CAGR of 20.2% (NASSCOM, 2023), underscoring the economic imperative of securing AI infrastructure. The Union Budget 2023-24 increased cybersecurity allocation by 15% to INR 3,000 crore, dedicating INR 500 crore specifically to AI cybersecurity research and development.

Globally, cybercrime costs are expected to hit USD 10.5 trillion annually by 2025 (Cybersecurity Ventures, 2023), amplifying the economic risks of deploying insecure AI models like Mythos. India’s 35% rise in AI-related cyber incidents in 2023 (CERT-In Annual Report, 2023) illustrates the growing threat landscape.

  • India’s AI market size: USD 7.8 billion by 2025 (NASSCOM, 2023)
  • Cybersecurity budget: INR 3,000 crore in 2023-24, 15% increase
  • Dedicated AI cybersecurity R&D fund: INR 500 crore
  • Global cybercrime cost: USD 10.5 trillion annually by 2025
  • 35% increase in AI cyber incidents in India in 2023

Institutional Roles in AI Cybersecurity

CERT-In functions as the national nodal agency for cyber incident response and AI threat monitoring. It has issued guidelines on AI security vulnerabilities, emphasizing adversarial attack mitigation and data privacy compliance. The National Critical Information Infrastructure Protection Centre (NCIIPC) safeguards critical infrastructure, including AI systems like Mythos, from cyber threats.

The Ministry of Electronics and Information Technology (MeitY) formulates AI and cybersecurity policies, while NASSCOM promotes industry standards and compliance. Despite these institutions, India lacks AI-specific regulatory guidelines and standardized certification, creating gaps in addressing sophisticated AI-driven cyber risks.

  • CERT-In: Cyber incident response and AI threat monitoring
  • NCIIPC: Protection of critical infrastructure including AI
  • MeitY: Policy formulation and regulation of AI cybersecurity
  • NASSCOM: Industry standards and compliance promotion
  • Gap: Absence of AI-specific regulatory guidelines and certification

Technical Vulnerabilities of Mythos AI Model

More than 70% of AI models worldwide are reported to be vulnerable to adversarial attacks (MIT Technology Review, 2023). Mythos’s architecture, which relies on deep learning with limited transparency, heightens the risk of data poisoning, model inversion, and evasion attacks. These vulnerabilities can lead to unauthorized data extraction, manipulation of AI outputs, and systemic failures.

India’s AI startups show only 40% full compliance with data privacy norms under the IT Act (NASSCOM AI Survey, 2023), reflecting broader ecosystem weaknesses. The Mythos model’s opaque algorithms hinder auditability, complicating accountability and breach detection.

  • 70% of AI models vulnerable to adversarial attacks globally
  • Mythos susceptible to data poisoning, model inversion, evasion attacks
  • Only 40% Indian AI startups fully comply with IT Act data privacy norms
  • Lack of algorithmic transparency impedes accountability
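The evasion attack listed above can be illustrated with a minimal, self-contained sketch: a fast-gradient-sign-style (FGSM) perturbation against a toy logistic classifier. All weights, inputs, and the epsilon value are illustrative assumptions for exposition; nothing here is drawn from the Mythos model itself.

```python
import math

# Toy logistic "model": w.x + b -> probability of class 1.
# Weights and inputs are illustrative, not from any real system.
w = [1.5, -2.0, 0.5]
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))  # sigmoid

# FGSM-style evasion: nudge each input feature in the direction that
# increases the model's loss. For logistic loss with true label y,
# d(loss)/dx_i = (p - y) * w_i, so we step by eps * sign((p - y) * w_i).
def fgsm(x, y, eps=0.3):
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

x = [1.0, 0.2, -0.5]        # clean input: model predicts class 1
x_adv = fgsm(x, y=1.0)      # small perturbation crafted to flip it
print(round(predict(x), 3), round(predict(x_adv), 3))
```

The point of the sketch is that a tiny, bounded perturbation of the input is enough to push the prediction across the decision boundary, which is exactly why opaque, unaudited models are hard to defend.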

Comparative Analysis: India and European Union AI Regulations

The European Union’s AI Act (proposed 2021) mandates risk-based classification of AI systems, with strict transparency, security, and accountability requirements. This approach has led to a 25% reduction in AI-related data breaches in member states within two years (European Commission Report, 2023).

India’s current regulatory framework lacks such AI-specific mandates, resulting in inconsistent security practices and underpreparedness against AI-driven cyber threats like those posed by Mythos. Emulating the EU’s risk-based classification and certification could strengthen India’s AI cybersecurity posture.

| Aspect | European Union AI Act | India (Current Framework) |
| --- | --- | --- |
| Legal status | Proposed binding regulation | Digital Personal Data Protection Act, 2023; IT Act provisions |
| Risk classification | Mandatory risk-based AI system categorization | No formal risk classification for AI |
| Transparency requirements | Strict algorithmic transparency and documentation | No mandatory transparency norms |
| Security standards | Mandatory security and robustness standards | No AI-specific security certification |
| Impact on data breaches | 25% reduction in AI data breaches (2021-23) | 35% increase in AI cyber incidents in 2023 |
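The EU’s risk-based approach can be sketched as a simple tier lookup. The four tier names mirror the EU AI Act’s categories (unacceptable, high, limited, minimal); the example systems and obligation strings are simplified illustrations, not legal text.

```python
from enum import Enum

# The four risk tiers of the EU AI Act; obligation strings are a
# simplified paraphrase for illustration only.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no extra obligations"

# Illustrative tier assignments, not a legal classification.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI in critical infrastructure": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    return EXAMPLES[system].value

print(obligations("AI in critical infrastructure"))
```

A comparable Indian framework would attach certification and audit duties to the HIGH tier, which is where systems like Mythos, if used in critical infrastructure, would sit.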

Way Forward: Strengthening India’s AI Cybersecurity Framework

  • Notify and enforce rules under the Digital Personal Data Protection Act, 2023, with AI-specific provisions for data fiduciaries and breach penalties.
  • Develop AI-specific cybersecurity guidelines and certification standards under MeitY and CERT-In supervision.
  • Implement mandatory algorithmic transparency and audit mechanisms for AI models like Mythos.
  • Increase investment in AI cybersecurity research and capacity building to address the projected shortage of 3.5 million cybersecurity professionals globally by 2025.
  • Adopt a risk-based classification framework inspired by the EU AI Act to prioritize regulatory oversight.
  • Enhance inter-agency coordination between CERT-In, NCIIPC, and MeitY for real-time AI threat monitoring.
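The audit-mechanism recommendation above can be made concrete with a minimal tamper-evident decision log: each entry’s SHA-256 hash covers the previous entry’s hash, so any later edit to the history breaks the chain. The record fields ("input", "output") are an illustrative schema, not a prescribed standard.

```python
import hashlib
import json

# Minimal hash-chained audit log for AI decisions: a retroactive edit
# to any entry invalidates every subsequent hash.
def append_entry(log, record):
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "hash": digest})

def verify(log):
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"input": "loan application 42", "output": "approved"})
append_entry(log, {"input": "loan application 43", "output": "rejected"})
print(verify(log))                          # chain is intact
log[0]["record"]["output"] = "rejected"     # tamper with history
print(verify(log))                          # chain is now broken
```

Such append-only logs give auditors and CERT-In a way to detect post-hoc manipulation of model decisions even when the model itself remains a black box.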
📝 Prelims Practice
Consider the following statements about the Information Technology Act, 2000 and AI cybersecurity:
  1. Section 43A of the IT Act mandates compensation for failure to protect sensitive personal data.
  2. Section 66F deals with punishment for breach of confidentiality and privacy.
  3. Section 72A prescribes penalties for cyber terrorism involving AI systems.

Which of the above statements is/are correct?

  • (a) 1 only
  • (b) 2 and 3 only
  • (c) 1 and 2 only
  • (d) 1, 2 and 3
Answer: (a)
Statement 1 is correct as Section 43A mandates compensation for failure to protect sensitive personal data. Statement 2 is incorrect because Section 66F deals with cyber terrorism, not breach of confidentiality. Statement 3 is incorrect as Section 72A prescribes penalties for breach of confidentiality and privacy, not cyber terrorism.
📝 Prelims Practice
Consider the following about the EU AI Act and India’s AI regulatory framework:
  1. The EU AI Act mandates risk-based classification of AI systems.
  2. India currently has a formal AI-specific certification process.
  3. The EU AI Act has contributed to a reduction in AI-related data breaches.

Which of the above statements is/are correct?

  • (a) 1 and 3 only
  • (b) 2 only
  • (c) 1 and 2 only
  • (d) 1, 2 and 3
Answer: (a)
Statements 1 and 3 are correct: The EU AI Act mandates risk-based classification and has led to a 25% reduction in AI data breaches. Statement 2 is incorrect as India currently lacks a formal AI-specific certification process.
✍ Mains Practice Question
Critically analyse the cybersecurity challenges posed by AI models like Mythos in India. Discuss the adequacy of the existing legal framework and suggest measures to strengthen AI cybersecurity governance. (250 words, 15 marks)

Jharkhand & JPSC Relevance

  • JPSC Paper: Paper 3 - Science and Technology; Paper 4 - Governance and Cyber Security
  • Jharkhand Angle: Increasing adoption of AI in state governance and mining sectors necessitates robust cybersecurity measures to protect sensitive data.
  • Mains Pointer: Frame answers highlighting local AI adoption, data privacy concerns, and the need for state-level cybersecurity infrastructure aligned with national laws.
Frequently Asked Questions

What are the main cybersecurity vulnerabilities of AI models like Mythos?

Mythos AI is vulnerable to adversarial attacks such as data poisoning, model inversion, and evasion attacks. These can lead to unauthorized data access, manipulation of AI outputs, and breach of user privacy, as reported by MIT Technology Review (2023).

How does the Information Technology Act, 2000 address AI cybersecurity risks?

The IT Act addresses AI cybersecurity risks primarily through Section 43A (compensation for data protection failures), Section 66F (cyber terrorism including AI system attacks), and Section 72A (punishment for breach of confidentiality and privacy).

What role does CERT-In play in AI cybersecurity?

CERT-In is the national agency responsible for cyber incident response and AI threat monitoring. It issues guidelines on AI security vulnerabilities and coordinates responses to AI-related cyber incidents.

What economic impact do AI cybersecurity threats pose to India?

AI cybersecurity threats risk undermining India's projected USD 7.8 billion AI market by 2025 and increase potential losses from cybercrime, which globally is expected to reach USD 10.5 trillion annually by 2025, impacting economic growth and investor confidence.

How can India improve its AI cybersecurity governance?

India can improve AI cybersecurity by operationalising the Digital Personal Data Protection Act, 2023 with AI-specific provisions, adopting risk-based AI classification along the lines of the EU AI Act, establishing certification standards, increasing R&D funding, and enhancing inter-agency coordination among CERT-In, NCIIPC, and MeitY.
