It’s ‘kind of jarring’: AI labs like Meta, DeepSeek, and xAI earned some of the worst grades possible on an existential safety index

https://fortune.com/2025/12/05/ai-labs-meta-deepseek-xai-bad-grades-existential-safety-index/


  1. “The Future of Life Institute’s [latest AI safety index](https://archive.ph/o/7Q7ik/https://futureoflife.org/ai-safety-index-winter-2025/) found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The org graded eight companies across categories like safety frameworks, risk assessment, and current harms.

    Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to MIT professor Max Tegmark.

    The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.

    Anthropic, OpenAI, and Google DeepMind took the top three spots with an overall grade of C+ or C. Then came, in order, Elon Musk’s xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which got Ds or a D-.

    Tegmark blames a lack of regulation, which has meant that the cutthroat competition of the AI race trumps safety precautions. California recently passed the first law requiring frontier AI companies to disclose safety information around catastrophic risks, and New York is currently within spitting distance as well. Hopes for federal legislation are dim, however.”

  2. Zuckerberg will not comprehend this to be an issue because he’s not human enough to understand.