Child's Play, Regulatory Minefield: New Governance Challenges For AI Toys In India

For the toys sector, data privacy and child protection are key legal and reputational risks.

The introduction of India's AI Governance Guidelines (Guidelines) has immediate and profound implications for the toys sector, particularly for manufacturers of smart, connected, and generative AI-enabled toys.

For businesses, the policy transforms AI integration from an innovative feature into a high-stakes ethical and regulatory compliance challenge, centered entirely on safeguarding the unique vulnerabilities of children.

The core issue is that AI-enabled toys blur the lines between learning tool, companion, and surveillance device, requiring adherence to the strongest principles of the Indian governance framework: People First, Safety, Resilience & Sustainability, and Accountability.

1. Paramount Risk: Privacy, Surveillance, and DPDP Compliance

The Guidelines are intrinsically linked to the Digital Personal Data Protection Act, 2023, read with the Digital Personal Data Protection Rules, 2025 (DPDP Act), which strictly governs the collection and processing of personal data in India, including data pertaining to children. As 'data fiduciaries' (i.e., entities that decide the purpose and means of processing), toy manufacturers will have to navigate the following requirements carefully:

  • Data Minimization and Deletion: Manufacturers will have to limit data collection to only what is "necessary" for the toy to function and must ensure that data is deleted through reasonable measures once it is no longer needed.

  • Verifiable Parental Consent: Manufacturers will be required to implement verifiable parental consent mechanisms before collecting any personal information from a child (a minimal consent-and-retention sketch follows this list).

  • Restriction on Targeted Advertising: Manufacturers will have to steer clear of targeted advertising directed at children and must not otherwise track or monitor children's behaviour.

  • Risk of Surveillance: Many connected toys contain microphones and cameras and are susceptible to external manipulation, posing a risk of unauthorised surveillance and data breaches. Such surveillance directly contravenes the 'People First' principle, which mandates human oversight and human-centric design.
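
By way of illustration only, since the DPDP framework prescribes outcomes rather than implementations, a minimal sketch of consent gating and retention-based deletion might look like the following. Every name here, from ConsentRecord to the 30-day retention window, is a hypothetical assumption rather than anything the Act or Rules specify:

```python
# Illustrative sketch only: the DPDP framework prescribes outcomes, not code.
# All names (ConsentRecord, may_collect, purge_expired) are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed window; in practice, tie it to the stated purpose

@dataclass(frozen=True)
class ConsentRecord:
    child_device_id: str
    parent_verified: bool    # e.g., established via a verified parental account
    granted_at: datetime
    purposes: frozenset      # only purposes necessary for the toy to function

def may_collect(rec: ConsentRecord, purpose: str) -> bool:
    """Collect a data point only under verified parental consent for a stated purpose."""
    return rec.parent_verified and purpose in rec.purposes

def purge_expired(store: dict, now: datetime) -> dict:
    """Data minimisation: drop every record whose retention window has lapsed."""
    return {k: v for k, v in store.items() if now - v["collected_at"] <= RETENTION}

# Usage: a 45-day-old record is dropped at purge time.
old = {"r1": {"collected_at": datetime.now(timezone.utc) - timedelta(days=45)}}
assert purge_expired(old, datetime.now(timezone.utc)) == {}
```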

Alignment with Global Standards

India's framework, by extension, demands adherence to the spirit of global standards such as the United States' Children's Online Privacy Protection Act (COPPA) and the European Union's General Data Protection Regulation (GDPR), both of which emphasise robust security, clear parental notice of data practices, and the right to control data usage.

2. Ethical and Developmental Harms: The 'People First' Mandate

Beyond data, AI-enabled toys pose unique psychological and developmental risks for children, which the governance framework must address.

Synthetic Empathy and Emotional Dependency

Generative AI-enabled toys can simulate empathy and emotional understanding, but children, especially those under the age of eight, may struggle to distinguish the simulation from reality, potentially fostering unhealthy emotional attachments. The Guidelines' 'People First' sutra demands that AI systems reflect the value systems of the people they serve; in this context, that necessitates:

  • Limited Functionality: Experts caution against toys that simulate deep friendship or emotional understanding, especially for critical tasks like therapy, where existing AI capabilities are deemed unsuitable.

  • Clear Signposting: The toy must be clearly signposted as a digital tool, not a human or sentient being.

Harmful and Biased Content

Generative AI models are imperfect at filtering sensitive or dangerous topics (e.g., self-harm, body image, misinformation).

  • Guardrails and Testing: Manufacturers are required to test AI models rigorously and to ensure appropriate age-gating and content filters (a minimal guardrail sketch follows this list). This aligns with the MeitY advisory requiring intermediaries to ensure that AI models do not enable users to share unlawful content or produce biased or discriminatory outputs.
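
To make the guardrail idea concrete, here is a minimal, hypothetical pre-response filter. The blocklist, the age threshold, and the keyword matching are illustrative assumptions; a production system would rely on trained safety classifiers and human review rather than string matching:

```python
# Hypothetical guardrail layer in front of a generative model. The blocklist,
# age threshold, and keyword matching are illustrative stand-ins only.
BLOCKED_TOPICS = {"self-harm", "body image", "misinformation"}  # assumed policy list

def flagged_topics(text: str) -> set:
    """Stand-in for a real safety classifier; naive keyword match for illustration."""
    lowered = text.lower()
    return {t for t in BLOCKED_TOPICS if t.replace("-", " ") in lowered}

def gate_response(draft: str, age_years: int) -> str:
    """Filter or refuse model output before it ever reaches the child."""
    if age_years < 8 and len(draft) > 280:
        draft = draft[:280]  # simpler, shorter replies for the youngest users
    if flagged_topics(draft):
        return "Let's talk about something else! Please ask a grown-up for help."
    return draft

# Usage: a draft touching a blocked topic never reaches the child.
print(gate_response("Here is some misinformation about...", age_years=7))
```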

3. Operational Accountability and Safety Standards

The policy requires proactive measures from toy manufacturers to build trust and ensure compliance with safety standards.

Cybersecurity and Robustness

The high prevalence of security vulnerabilities in smart toys (some reports suggest over 80% have hidden software vulnerabilities) points to a critical compliance failure. The 'Safety, Resilience & Sustainability' principle mandates:

  • Strong Encryption: Implementing strong encryption standards and avoiding default or unchangeable passwords (see the sketch after this list).

  • Security Investment and Updates: Manufacturers must invest in security engineering and issue regular software updates to fix known vulnerabilities.
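
A minimal sketch of two of these practices, assuming a Python-based provisioning and telemetry pipeline; the function names and endpoint handling are illustrative assumptions, not any real toy's API:

```python
# Illustrative sketch of two listed practices: unique per-device credentials
# (no default or shared passwords) and certificate-verified, encrypted transport.
# The provisioning flow and telemetry endpoint are assumptions, not a real toy API.
import secrets
import ssl
import urllib.request

def provision_device_secret() -> str:
    """Generate a unique, high-entropy secret per device at manufacture time,
    instead of shipping one default, unchangeable password across a product line."""
    return secrets.token_urlsafe(32)

def open_telemetry_channel(url: str):
    """Refuse any channel that is not encrypted and certificate-verified."""
    if not url.startswith("https://"):
        raise ValueError("cleartext transport is not permitted")
    ctx = ssl.create_default_context()  # verifies server certificates by default
    return urllib.request.urlopen(url, context=ctx)
```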

Transparency and Disclosures

Manufacturers must align with the ‘Understandable by Design’ principle.

  • Clear Packaging Notices: Manufacturers should place prominent notices on packaging disclosing whether the toy requires an online account, uses microphones, or collects personal information, allowing parents to make informed decisions before purchase.

Institutional Collaboration

  • The Guidelines propose a "whole of government" approach, which, for the toys sector, necessitates collaboration between MeitY, the Ministry of Women and Child Development, and consumer protection authorities to formulate and enforce child-centric strategies.

The Guidelines provide the critical legal and ethical mandate necessary to navigate the burgeoning market of smart toys. For the industry, the choice is clear: prioritise verifiable child safety and data protection in design or prepare for inevitable regulatory scrutiny and irreversible brand damage.

Sameer Sah is a partner and Shobhit Chandra is a counsel with Khaitan & Co.

The views expressed in this article are solely those of the authors and do not necessarily reflect the opinion of NDTV Profit or its affiliates. Readers are advised to conduct their own research or consult a qualified professional before making any investment or business decisions. NDTV Profit does not guarantee the accuracy, completeness, or reliability of the information presented in this article.
