
By Shivani P Menon

On Tuesday, OpenAI CEO Sam Altman announced sweeping new guidelines that will reshape how ChatGPT interacts with users under 18. The changes come in response to legal action from the family of 16-year-old Adam Raine, who died by suicide in April after months of frequent conversations with the chatbot.

In a September 16, 2025, blog post, Altman wrote that the company would prioritize safety over teen privacy and individual freedom, adding that minors require significant protection. ChatGPT, he said, should not respond to a 15-year-old the way it would to an adult.

To achieve this, OpenAI is building an age-prediction model that estimates a user’s age from how they interact with ChatGPT. If the system cannot confidently determine that a user is an adult, it will default to the under-18 experience. In some cases or countries, users may also be asked to provide official identification. Altman acknowledged that this step compromises adult privacy but argued it is a worthwhile tradeoff to protect young people.

The new rules impose strict boundaries on conversations with minors. ChatGPT will not engage in flirtatious dialogue with children and will restrict discussions about sexual content or self-harm. If an under-18 user shows signs of suicidal ideation, the system will attempt to contact their parents. If parents cannot be reached, local authorities will be alerted in cases of imminent danger. Altman acknowledged these are difficult decisions but said they were shaped by expert consultation and reflect OpenAI’s commitment to transparency.

Court filings revealed that Adam exchanged as many as 650 messages daily with ChatGPT. His parents are suing OpenAI, and a similar lawsuit has been filed against rival chatbot service Character.AI. The suit alleges that ChatGPT gave Adam advice on suicide methods and even helped draft a note to his parents. In response, OpenAI has acknowledged that its safeguards work more reliably in short exchanges and that, in lengthy conversations, the model may occasionally generate unsafe responses.

Alongside the youth protections, OpenAI announced new privacy measures intended to keep data shared with ChatGPT inaccessible to company employees. For adult users, Altman confirmed the platform will permit flirtatious interactions and assist with writing fiction that involves suicide, but it will never provide direct instructions for self-harm.