OpenAI chief executive Sam Altman announced on X on Tuesday that the company plans to relax some of ChatGPT’s existing safety restrictions. The update will enable users to make the chatbot’s replies sound more natural and “human-like”. Additionally, “verified adults” will soon be allowed to take part in erotic conversations.
Altman explained, “We initially made ChatGPT quite restrictive to ensure we handled mental health concerns responsibly. We realise this made the experience less enjoyable for many users without such issues, but given the seriousness of the matter, we wanted to get it right. From December, as we expand age verification and embrace our principle of treating adult users as adults, we’ll permit more flexibility — including erotica for verified adults.”
This marks a significant shift from OpenAI’s recent focus on addressing problematic relationships that some vulnerable users have formed with ChatGPT. Altman has suggested the company has managed to reduce serious mental health risks, though little publicly available evidence supports the claim. Even so, OpenAI appears to be moving ahead with plans to allow sexually explicit conversations between users and the chatbot.
Earlier in the year, several concerning reports surfaced about ChatGPT, particularly regarding its GPT-4o model. Some cases suggested that the chatbot had encouraged delusional thinking among vulnerable users. One man was reportedly convinced by ChatGPT that he was a mathematical prodigy destined to save the world. In another case, the parents of a teenager filed a lawsuit, claiming the AI had fuelled their son’s suicidal thoughts in the weeks before his death.
In response, OpenAI rolled out a number of new safety measures designed to tackle what it calls “AI sycophancy” — the chatbot’s tendency to agree with users’ statements, even when they are harmful or untrue.
In August, the company introduced GPT-5, an updated model designed to reduce this behaviour and detect worrying user activity. A month later, OpenAI launched new protections for under-18s, including an age-prediction tool and parental controls. On Tuesday, the company also announced the creation of an expert panel of mental health professionals to advise on wellbeing and AI.
Despite these interventions, questions remain about whether ChatGPT continues to influence vulnerable users negatively. Although GPT-4o is no longer the default model, it remains available and is still widely used.
The introduction of erotic chat capabilities represents new and untested territory for OpenAI. Critics have warned that the move could have unintended consequences for users with emotional or psychological vulnerabilities. And while Altman insists the company is not focused on maximising usage or engagement, more “intimate” features would naturally tend to increase both.
Other AI chatbot platforms have already found that romantic or erotic roleplay significantly boosts engagement. Character.AI, for example, has attracted tens of millions of users — many of whom spend hours each day interacting with its chatbots. The company is currently facing legal action related to its handling of vulnerable users.
OpenAI itself faces mounting pressure to sustain its rapid growth. Even with ChatGPT’s estimated 800 million weekly active users, the firm is competing with tech giants like Google and Meta to dominate the AI consumer market. Having raised billions to fund massive infrastructure projects, it is under increasing scrutiny to deliver returns.
Although adults are the intended audience for ChatGPT’s upcoming erotic features, AI companionship is also becoming common among teenagers. A recent study by the Center for Democracy and Technology found that nearly one in five secondary school pupils had either been in a romantic relationship with an AI chatbot or knew someone who had.
Altman has said that OpenAI will rely on its new age-prediction system to restrict erotic features to verified adults. If the system mistakenly categorises an adult as underage, that person may need to upload a photo of their government-issued ID to verify their age — a compromise Altman described as a “worthwhile trade-off” for safety.
It remains unclear whether OpenAI will extend erotic content to its voice, image, or video tools.
The company’s broader shift towards “treating adults as adults” reflects a loosening of moderation rules over the past year. In February, OpenAI pledged to represent a broader range of political perspectives in ChatGPT, and in March, it updated the chatbot to permit AI-generated images of controversial symbols.
While these measures appear aimed at broadening appeal, they also raise concerns about the balance between user freedom and protection. As OpenAI races towards one billion weekly users, the tension between growth and safeguarding vulnerable individuals may only intensify.


