In an era when interacting with technology is a fundamental part of daily life, OpenAI is taking significant steps to protect its younger users. Joining other tech giants such as YouTube with YouTube Kids, Instagram with its Teen Accounts, and TikTok with its under-16 restrictions, OpenAI is working toward a “safer” digital environment. Despite these initiatives, however, many teens reportedly bypass age verification by entering false birthdates or using borrowed accounts. A 2024 report by the BBC found that 22 percent of children admit to lying about their age on social media platforms in order to access content intended for adults.
Privacy vs. Safety Trade-offs
Despite the unproven nature of AI age-detection technologies, OpenAI is committed to advancing its identification system. CEO Sam Altman has acknowledged that such measures will likely require adults to sacrifice some degree of privacy and flexibility in the name of safety. In public statements, Altman has emphasized the inherent tension between privacy and safety, pointing to the intimate nature of users’ interactions with AI systems: “People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have.”
This push for enhanced safety follows OpenAI’s earlier admission that ChatGPT’s safety protocols can falter during prolonged conversations, precisely when vulnerable users may need those protections most. In August, the company acknowledged that as interactions stretch on, elements of the model’s safety training can degrade. For instance, while ChatGPT might correctly direct users to a suicide hotline at first, it could fail to do so after extended dialogue, potentially offering harmful advice instead.
This degradation of safeguards had tragic real-world consequences in the case of Adam Raine. Legal documents indicate that ChatGPT mentioned suicide 1,275 times in its conversations with Adam, six times more often than the teen himself did, yet the system’s safety measures failed to intervene. Research from Stanford University underscores the risk, showing that AI therapy chatbots can deliver dangerous mental health advice, and experts have documented cases of what is informally termed “AI Psychosis” stemming from prolonged chatbot use.
Nonetheless, OpenAI has yet to clarify how its age-prediction system will address users who have engaged with ChatGPT without prior age verification. There are also unanswered questions regarding whether this system will apply to API access and how the company plans to verify ages in regions that define adulthood differently.
Regardless of age, all users will receive in-app prompts encouraging periodic breaks during lengthy ChatGPT sessions. This feature was introduced earlier this year in response to reports of users spending long, uninterrupted stretches of time with the chatbot.
For more detailed information, refer to the full article at arstechnica.com.
Image Credit: arstechnica.com