OpenAI may have violated California’s new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.

https://fortune.com/2026/02/10/openai-violated-californias-ai-safety-law-gpt-5-3-codex-ai-model-watchdog-claims/


  1. “A violation would potentially expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law’s provisions.

    The controversy centers on GPT-5.3-Codex, OpenAI’s newest coding model, which was released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier model versions from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns.

    CEO Sam Altman said the model was the first to hit the “high” risk category for cybersecurity on the company’s Preparedness Framework, an internal risk classification system OpenAI uses for model releases. This means OpenAI is essentially classifying the model as capable enough at coding to potentially facilitate significant cyber harm, especially if automated or used at scale.

    AI watchdog group the Midas Project is claiming OpenAI failed to stick to its own safety commitments—which are now legally binding under California law—with the launch of the new high-risk model.

    California’s SB 53, which went into effect in January, requires major AI companies to publish and stick to their own safety frameworks, detailing how they’ll prevent catastrophic risks—defined as incidents causing more than 50 deaths or $1 billion in property damage—from their models. It also prohibits these companies from making misleading statements about compliance.”

  2. They don’t give a shit about fines and they don’t give a shit about safety. If you haven’t noticed recently, they are just dropping new models as soon as their competitors do. Speed is all that is going to matter to them, damn the consequences.