OpenAI has introduced Advanced Account Security, a new opt-in protection setting for ChatGPT accounts designed to provide stronger safeguards against account takeover and other digital attacks.
The feature is aimed at users at heightened risk of cyber threats, including journalists, elected officials, political dissidents, researchers and other security-conscious individuals, while also being available to any user seeking stronger account protections.
The new security layer also extends to Codex accounts linked through the same login credentials.
The launch comes as AI platforms increasingly store highly sensitive personal and professional information; with users relying on AI tools for high-stakes tasks and connected workflows, account security has become a growing concern.
OpenAI said the initiative forms part of its broader cybersecurity strategy to expand access to technologies that help protect communities, critical systems and national security.
Advanced Account Security consolidates multiple enhanced protections into a single setting available through the Security section of ChatGPT accounts on the web.
Among the most significant changes is the introduction of stronger sign-in requirements. Users enrolled in the programme must use passkeys or physical security keys, while traditional password-based login is disabled entirely. This phishing-resistant authentication model is designed to significantly reduce the risk of credential theft.
The feature also introduces stricter account recovery protocols. Email- and SMS-based recovery options are disabled, as these methods can be vulnerable if a user's phone number or email account is compromised.
Instead, account recovery is limited to backup passkeys, physical security keys and recovery keys. OpenAI noted that because of these stricter controls, its support team will not be able to assist with account recovery for users enrolled in the programme.
To further limit exposure, enrolled accounts will have shorter sign-in sessions, reducing the time window in which compromised devices or sessions could be exploited.
Users will also receive login alerts and gain improved visibility into active account sessions across devices, allowing them to review and manage access more effectively.
Another notable feature is automatic exclusion from model training. Conversations from accounts enrolled in Advanced Account Security will not be used to train OpenAI’s models, a safeguard intended for users handling particularly sensitive information.
OpenAI said the feature is designed to give users greater control over security and privacy, while emphasising that stronger protections also come with greater responsibility, particularly around safeguarding recovery credentials.
The rollout reflects increasing industry focus on strengthening identity protections as AI systems become more deeply embedded in personal and professional digital ecosystems.