Artificial intelligence (AI) company OpenAI has released a comprehensive safety framework for teenage users in India. The Teen Safety Blueprint for India, published days ahead of the India AI Impact Summit, emphasises age-aware AI behaviour, parental controls and what OpenAI calls industry-leading prevention of AI-generated child sexual abuse material (CSAM) and child sexual exploitation material (CSEM). A pivotal element of this framework is the stated position that “we prioritise safety ahead of privacy and freedom”, which signals a deliberate philosophical calibration.
“Teens are growing up with AI but aren’t grown-ups yet. We believe ChatGPT should meet them where they are — the way ChatGPT responds to a 15-year-old should differ from the way it responds to an adult,” notes the framework. This is the latest step in a three-part evolution for OpenAI, which began with teen safety and privacy principles in September, followed by a first global teen safety framework in November and a model spec update for teen protections in December. The India-specific iteration of this blueprint emphasises the need for AI models to understand that teens aren’t adults yet, and to respond accordingly.
An India-specific blueprint isn’t out of place, considering this is one of OpenAI’s biggest and fastest growing markets. There is also an element of diversity and local nuance to take into consideration. The blueprint outlines explicit guardrails for under-18 users, including AI systems refraining from depicting suicide or self-harm, facilitating dangerous stunts, enabling access to illegal substances, reinforcing harmful body ideals, or allowing graphic and immersive sexual or violent scenarios.
OpenAI, beyond proposing better content moderation, calls for a structural and layered redesign of how AI platforms identify, classify, and treat users below the age of 18. The age prediction system that ChatGPT uses is in the spotlight. “It looks at different signals linked to your account. For example, it may look at general topics you talk about or the times of day you use ChatGPT,” the technical documentation explains, while admitting that “no system is perfect”.
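To make the idea of signal-based age prediction concrete, the sketch below shows what a crude estimator of this kind could look like. It is purely illustrative: the signal names, topic lists, weights and threshold are all hypothetical, and nothing here reflects OpenAI's actual system beyond the two signal types its documentation mentions (topics discussed and times of day of use).

```python
# Illustrative sketch only: a toy signal-based age estimator.
# All topic lists, weights and heuristics below are invented for
# illustration; they are not OpenAI's actual signals or logic.

def estimate_is_minor(topics, usage_hours):
    """Return a crude score (0.0-1.0) that an account belongs to a minor.

    topics      -- list of topic labels from recent conversations
    usage_hours -- list of hours (0-23) at which sessions started
    """
    score = 0.0

    # Signal 1: topics commonly associated with school-age users
    # (hypothetical list).
    teen_topics = {"homework", "exam prep", "school project", "teen fiction"}
    if any(t in teen_topics for t in topics):
        score += 0.5

    # Signal 2: usage concentrated in after-school evening hours
    # (hypothetical heuristic).
    after_school = sum(1 for h in usage_hours if 15 <= h <= 21)
    if usage_hours and after_school / len(usage_hours) > 0.7:
        score += 0.3

    return min(score, 1.0)
```

A real system would combine many more signals probabilistically and, as OpenAI's documentation concedes, would still misclassify some users — which is why the blueprint pairs age prediction with parental controls rather than relying on it alone.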
Parental controls — such as allowing parents to link their account to their teen’s account, set blackout hours, and turn off memory and chat history — remain key to keeping children safe in their interactions with AI.
“We believe AI companies should identify teens on their platforms using privacy-protective, risk-based age estimate tools to distinguish between teens and adults. Age estimation will help AI companies ensure that they are applying the right protections to the right users. It will facilitate age-appropriate experiences,” the framework states, proposing that more AI companies adopt a similar methodology and thereby, at some point, create an industry standard.
There is a challenge, particularly pronounced in India, which may complicate things — the RATI Foundation’s Ideal Internet Report 2024-25 notes that 62% of Indian teens use shared devices. This disrupts the assumptions and mechanisms of most globally formatted digital safety systems, which presume individual device ownership with English as the main spoken language. Indian teens using shared devices are often part of a multilingual family of users, which makes account-level signals about who is typing, and in what context, far less reliable.
Other AI companies too have policy guidelines in place for child user safety. Google’s Gemini uses an age verification method to gate certain conversations — this age data synchronises with Android and YouTube restrictions. In search and AI responses, explicit self-harm, sexual content, illegal behaviours, and graphic violence are filtered by existing SafeSearch and moderation layers that extend into generative responses.
Meta, much like OpenAI, requires users to be at least 13 years old before they can set up their own accounts. Meta uses machine learning to detect suspected minors, and its generative AI responses default to a safer, non-graphic and often non-instructional mode for users below the age of 18.
Key questions still arise. Can generative AI systems, which are open-ended, creative and context-driven, deliver restrictive age-specific experiences without missing key elements? And will age estimation stumble when confronted with cultural and regional nuances, such as developmental trends that diverge from physical age?
When OpenAI says “teens are growing up with AI but aren’t grown-ups yet”, it is both a description and a warning.