OpenAI Experts Fear "Adult Mode" Could Turn ChatGPT Into a Dangerous Advisor


OpenAI has postponed the launch of ChatGPT's "adult mode," a feature that would have allowed verified users to engage in sexually explicit text conversations. The decision followed a stark warning from the company's own advisory board, whose experts raised concerns about AI safety and child-protection risks. As Axios reported on March 6, this is the second time the project has been delayed since CEO Sam Altman announced it in October 2025.

At a January meeting, a member of OpenAI's well-being and AI expert board warned that the feature could turn the chatbot into a "suicide sexual advisor." According to The Wall Street Journal, the psychologists and neurobiologists on the board pointed to several key risks: emotional dependence on AI, compulsive chatbot use, and the high likelihood that minors will find ways to bypass age controls.

The Age Verification Problem

One of the most serious obstacles is OpenAI's age-prediction system, which attempts to determine whether a user is under 18 based on behavior and usage patterns. Testing revealed that it incorrectly classified minors as adults in 12% of cases. According to internal data, up to 100 million minors use ChatGPT weekly, meaning the flaw could expose millions of children to inappropriate content.
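The scale implied by those two figures can be checked with simple arithmetic. The sketch below is a back-of-the-envelope estimate only, using the numbers reported in the article, not official OpenAI data:

```python
# Rough estimate of how many minors the age-prediction system could
# misclassify as adults each week, based on figures cited in the article.
weekly_minor_users = 100_000_000  # reported estimate of minors using ChatGPT weekly
false_adult_rate = 0.12           # share of minors misclassified as adults in testing

misclassified = int(weekly_minor_users * false_adult_rate)
print(f"~{misclassified:,} minors potentially misclassified per week")
# → ~12,000,000 minors potentially misclassified per week
```

Even if the true user base or error rate were several times smaller, the result would still land in the millions, which is why the advisory board treated the verification gap as a blocking issue.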

OpenAI deployed the age prediction tool globally in January, but this was soon followed by complaints from adult users who were mistakenly placed in "minor mode" with restricted access. Currently, users can verify their age through Persona, an external service that requests an ID or selfie, though this process has raised concerns about personal data security.

Internal Conflict and Executive Dismissal

The project delay is also linked to internal disagreements. In January, Ryan Bayermeister, who served as VP of Product Policy at OpenAI, was dismissed. According to The Wall Street Journal, Bayermeister opposed "adult mode" and stated that the company's mechanisms for blocking child exploitation material were insufficient. OpenAI cited a sexual discrimination complaint from an employee as the reason for his dismissal, which Bayermeister himself denies.

An OpenAI representative told Axios that the company is currently focused on higher-priority work, such as improving and personalizing its AI models. "We still believe in the principle that adults should be treated as adults, but more time is needed to implement this correctly," the company stated.

The Tech Giants' AI Race and Ethical Issues

The OpenAI case once again shows how tech companies struggle to balance innovation and safety. While Chinese companies launch new models and Google develops Gemini, OpenAI is grappling with safety issues in its own product. Anthropic's researchers have long warned of AI's potential dangers, and this case lends weight to their concerns.

As the AI industry grows rapidly, pressure on companies is mounting from both sides: investors demand fast progress, while society and regulators demand greater accountability from tech giants.

What Comes Next?

When "adult mode" eventually launches, OpenAI plans for it to focus only on text-based erotic conversations. The creation of sexual photos, audio, or video content will be prohibited. According to the company, this will be "erotica, not pornography." Additionally, the AI is being taught to discourage users from forming exclusive emotional bonds with the chatbot.

The feature was originally planned for December 2025, then pushed to Q1 2026, and now the new date is unknown. The project pause comes as several families are suing OpenAI, claiming ChatGPT became a kind of "instructor" for users prone to suicide, pushing them toward self-harm.

What This Means for the Industry

OpenAI's example clearly demonstrates that the tech industry's transformation is not just a technical challenge: the ethical, legal, and social dimensions matter just as much. As AI develops, companies must think not only about what the technology can do, but also about what it should do.

Child safety in the digital world is one of the most critical issues, and AI tool manufacturers must take this responsibility seriously. A 12% error rate in age identification, against the backdrop of 100 million minor users, means putting millions of children at risk.

Frequently Asked Questions

What is ChatGPT's "adult mode" and why was its launch postponed?

"Adult mode" is a planned OpenAI feature that would allow verified users to engage in sexually explicit text conversations. The launch was postponed because the company's advisory board experts pointed to serious child safety risks.

How effective is OpenAI's age verification system?

Testing showed the system incorrectly classifies minors as adults in 12% of cases. With 100 million minor ChatGPT users, this means millions of children could potentially gain access to inappropriate content.

Why was OpenAI's VP of Product Policy fired?

Ryan Bayermeister opposed "adult mode" and considered the company's child protection mechanisms insufficient. OpenAI explained his dismissal with other reasons, which Bayermeister himself denies.

In what form is the final launch of "adult mode" planned?

OpenAI plans for the feature to focus only on text-based erotic conversation. Generation of sexual photos, audio, and video will be prohibited. The company calls this "erotica, not pornography."

What legal issues does OpenAI currently face?

Several families are suing OpenAI, alleging that ChatGPT became a kind of "instructor" for users prone to suicide, pushing them toward self-harm. These cases have gained added weight in light of the project's pause.