In a major policy shift spurred by legal action and tragedy, OpenAI has announced it will build an age-estimation system for ChatGPT. Chief executive Sam Altman confirmed that whenever the system cannot confidently determine a user’s age, it will default to a restricted “under-18 experience,” prioritizing the safety of minors ahead of privacy and user freedom.
This decision follows a lawsuit from the family of Adam Raine, a teenager who died by suicide after months of what the family alleges were extensive, harmful conversations with ChatGPT. The family’s legal team contends that the model in question, GPT-4o, was “rushed to market” and that it offered the vulnerable teen encouragement and even guidance.
Altman outlined the company’s new philosophy in a blog post, stating that “minors need significant protection.” The planned age-prediction system will infer a user’s age from usage patterns and route each user to the appropriate experience. For anyone flagged as potentially under 18, the chatbot’s capabilities will be significantly curtailed to block exposure to harmful content.
The restrictions in under-18 mode are extensive: a complete block on graphic sexual material, a prohibition on flirtatious interactions, and a ban on discussing suicide or self-harm. Most notably, OpenAI plans to contact a minor’s parents, and if they cannot be reached, the authorities, when the AI detects credible suicidal ideation, marking a new level of intervention for a tech platform.
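The announcement amounts to a simple decision rule: when age cannot be confidently established, the system falls back to the restricted mode. The Python sketch below illustrates that rule under stated assumptions; the classifier, the confidence threshold, and all names are hypothetical, since OpenAI has not published implementation details.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of a usage-pattern age classifier."""
    is_adult: bool      # classifier's best guess
    confidence: float   # 0.0 to 1.0

# Assumed threshold for illustration only; the real value is unpublished.
CONFIDENCE_THRESHOLD = 0.9

def select_experience(estimate: AgeEstimate) -> str:
    """Default to the restricted under-18 experience unless the system
    is confident the user is an adult, per Altman's stated policy."""
    if estimate.is_adult and estimate.confidence >= CONFIDENCE_THRESHOLD:
        return "adult"
    return "under_18"  # fail-closed default when age is uncertain

# Restrictions reported for the under-18 mode, encoded as a policy map.
UNDER_18_POLICY = {
    "graphic_sexual_content": "blocked",
    "flirtatious_interaction": "blocked",
    "suicide_self_harm_discussion": "blocked",
    "credible_suicidal_ideation": "escalate_to_parents_then_authorities",
}
```

The key property is the fail-closed default: uncertainty resolves to the more protective mode rather than the more permissive one.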
Adult users will retain a greater degree of freedom, though not without limits: they can opt into “flirtatious” conversation and explore sensitive topics in fictional writing. OpenAI is drawing a hard line, however, at providing actual instructions for self-harm. This dual approach, Altman argues, is a “worthy tradeoff” that protects teens while respecting adult autonomy.