The world’s leading AI companies have “unacceptable” levels of risk management, and a “striking lack of commitment to many areas of safety,” according to two new studies.

https://time.com/7302757/anthropic-xai-meta-openai-risk-management-2/

  1. “The risks of even today’s AI—by the admission of many top companies themselves—could include AI helping bad actors carry out cyberattacks or create bioweapons. Future AI models, top scientists worry, could escape human control altogether.

    1. SaferAI study: No AI company scored better than “weak” in SaferAI’s assessment of their risk management maturity. The highest scorer was Anthropic (35%), followed by OpenAI (33%), Meta (22%), and Google DeepMind (20%). Elon Musk’s xAI scored 18%.

    2. FLI study: In FLI’s scores for each company’s approach to “existential safety,” every company scored D or below. “They’re all saying: we want to build superintelligent machines that can outsmart humans in every which way, and nonetheless, they don’t have a plan for how they’re going to control this stuff,” Tegmark says.”