
AI Companies Talk Safety. Headcount of Safety Teams Tells a Different Story – The number of people focused on making sure it’s safe fits on a single airplane.
https://www.bloomberg.com/opinion/articles/2026-03-18/ai-companies-talk-safety-headcount-of-safety-teams-tells-a-different-story?leadSource=reddit_wall

4 Comments
“Artificial intelligence presents a transformative moment for society, but it appears that **the number of people focused on making sure it’s safe might fit on a single transatlantic flight.**
Perhaps that shouldn’t surprise given the global arms race that has propelled generative AI companies to stratospheric valuations, but it should cause some alarm. The technology makes errors, is largely untested in the wild and has shown toxic side effects on mental health. Yet a rough estimate of how these companies are staffed suggests a disturbing imbalance, with **investment into safety-oriented roles looking like a rounding error compared with the money going into making their systems more powerful.**”
They outsourced the safety teams to AI, it’s perfectly safe now /s.
There was never a safety team in the first place.
“Safety experts” would quit, saying the AI can’t be stopped or some bullshit, driving the stocks higher.
People honestly need to be reminded that these tools aren’t AI in the sci-fi sense we’ve seen in movies.
They just generate text based on previous inputs. Calling it AI is disingenuous.
They inherently can’t be safe, and never will be because there is no thought.
Stop buying into these companies.
If these tools were good at anything, we’d have seen an explosion of good software written with them, but we’re still left with poorly written tools.
AI “safety” is essentially a misnomer.
We have not observed any “unsafe” AI so far, so it’s not quite clear what “AI safety” actually means, if we disregard fearmongering.