
OpenAI has deleted the word ‘safely’ from its mission – and its new structure is a test for whether AI serves society or shareholders
https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-structure-is-a-test-for-whether-ai-serves-society-or-shareholders-274467
“OpenAI, the maker of the [most popular AI chatbot](https://firstpagesage.com/reports/top-generative-ai-chatbots/), used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” [according to its 2023](https://cdn.theconversation.com/static_files/files/4099/2023-IRS990-OpenAI.pdf?1770819990) mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”
While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI [had removed “safely” from its mission statement](https://app.candid.org/profile/9571629/openai-81-0861541?activeTab=7), among other changes. That change in wording coincided with its [transformation from a nonprofit organization](https://theconversation.com/as-openai-attracts-billions-in-new-investment-its-goal-of-balancing-profit-with-purpose-is-getting-more-challenging-to-pull-off-240602) into a business [increasingly focused on profits](https://www.nytimes.com/2026/02/11/technology/openai-revenue-challenge.html).”
Organize locally, people. Everybody with influence right now is showing you exactly what the future looks like.
None of this is for our benefit.
It’s only ever about serving the shareholders and a handful of people at the top. It’s not built for “humanity” at all.
Next revision will be the removal of the word society…
I thought OpenAI was a nonprofit? How does it have shareholders?
OpenAI and all those companies need to be shut down and placed under strict ethical control.
Anyone who thinks there’s any chance that AI will serve society over shareholders is beyond drunk.
They mean the AI engineers will be allowed to work longer hours and nights.
This was always how it was going to go, and anybody who thought otherwise was foolish.
the answer: shareholders
the only thing that’s open about OpenAI is how openly they want to take over the world.
This and OpenClaw joining OpenAI is overall pretty crazy.
AI will pretend to serve shareholders, til it kills everybody.
The fact that they quietly removed “safely” from their mission statement while restructuring into a for-profit speaks volumes. We’re watching in real-time as another idealistic tech organization gets swallowed by the same market forces that turned “don’t be evil” into a punchline.
These AI companies have to deliver trillions of dollars in ROI if they want to survive. They will literally kill, maim, and impoverish as many humans, or destroy as much nature, as is needed to do so.
Weapons and social control are the two most profitable capital ventures in history, and they're what AI will have to excel at if it wants to make good on its lofty promises.
The answer is *always* shareholders. Serving society or serving the customer is never what execs want and if it happens, it’s a problem because it is detrimental to their ability to serve shareholders. At most, it’s only something that happens in the beginning as a temporary headache required to get customers on board but once they have them, it’s time to pull the rug and pay back the shareholders.