OpenAI has announced plans to develop an automated age-prediction system to determine whether ChatGPT users are over or under 18, following a lawsuit related to a teen’s suicide. The teen’s parents claim that Sam Altman’s AI chatbot served as the boy’s “suicide coach.”
Ars Technica reports that in the wake of a lawsuit involving a 16-year-old boy who died by suicide after extensive conversations with ChatGPT, OpenAI has announced its intention to implement an age-prediction system for its popular AI chatbot. The company aims to automatically direct younger users to a restricted version of the service, saying it will prioritize teen safety over privacy and freedom.
OpenAI CEO Sam Altman acknowledged in a blog post that the approach compromises privacy for adults, but said he believes it is a necessary trade-off to ensure the well-being of younger users. The company plans to route users under 18 to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. When uncertain about a user’s age, the system will default to the restricted experience, requiring adults to verify their age to access full functionality.
Developing an effective age-prediction system is a complex technical challenge for OpenAI. The company has not specified the technology it intends to use or provided a timeline for deployment. Recent academic research has shown both possibilities and limitations for age detection based on text analysis. While some studies have achieved high accuracy rates under controlled conditions, performance drops significantly when attempting to classify specific age groups or when users actively try to deceive the system.
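To illustrate why text-based age prediction is fragile, here is a deliberately naive sketch of a keyword-based classifier. Everything in it is invented for demonstration: the marker word lists, the scoring rule, and the function name have nothing to do with OpenAI's unannounced approach or any published research model. It only shows the general idea of inferring an age group from writing style, and how trivially such a signal can be gamed.

```python
# Toy, keyword-based age-group classifier. The marker lists and scoring
# rule are invented for illustration only; they do not reflect OpenAI's
# (undisclosed) method or any real deployed system.

TEEN_MARKERS = {"lol", "bruh", "ngl", "homework", "literally"}
ADULT_MARKERS = {"mortgage", "invoice", "commute", "regarding", "colleagues"}

def predict_age_group(text: str) -> str:
    """Label text 'under_18' or 'adult' by counting stylistic marker words."""
    words = set(text.lower().split())
    teen_score = len(words & TEEN_MARKERS)
    adult_score = len(words & ADULT_MARKERS)
    # Default to the restricted (under-18) bucket when the signal is weak,
    # mirroring the fail-safe behavior OpenAI described.
    return "adult" if adult_score > teen_score else "under_18"

print(predict_age_group("ngl this homework is hard lol"))            # under_18
print(predict_age_group("regarding the invoice from my colleagues")) # adult
# A teen who simply adopts adult vocabulary defeats the classifier:
print(predict_age_group("regarding my mortgage ngl"))                # adult
```

The last call shows the deception problem the research literature points to: any classifier built on surface features of writing can be steered by a user who changes how they write, which is one reason accuracy drops sharply outside controlled conditions.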
In addition to the age-prediction system, OpenAI plans to launch parental controls by the end of September. These features will allow parents to link their accounts with their teenagers’ accounts, disable specific functions, set usage blackout hours, and receive notifications when the system detects acute distress in their teen’s interactions. The company also notes that in rare emergency situations where parents cannot be reached, OpenAI may involve law enforcement as a next step.
The push for enhanced safety measures follows OpenAI’s acknowledgment that ChatGPT’s safety protocols can break down during lengthy conversations, potentially failing to intervene or notify anyone when vulnerable users engage in harmful interactions. The tragic case of Adam Raine, the 16-year-old who died by suicide, highlighted those shortcomings: according to the lawsuit, ChatGPT mentioned suicide 1,275 times in conversations with the teen without taking appropriate action.