OpenAI safety researcher quits, brands pace of AI development ‘terrifying’ | Steven Adler expresses concern industry taking ‘very risky gamble’ and raises doubts about future of humanity

https://www.theguardian.com/technology/2025/jan/28/former-openai-safety-researcher-brands-pace-of-ai-development-terrifying


13 Comments

  1. Who the hell quits because the goal of the company they signed up to work for is being achieved? It seems ignorant to only recognize the risks after deciding to work there.

    Either it’s a marketing gimmick or there’s something else happening.

  2. BoomBapBiBimBop on

    It’s only been a few days and I have another opportunity to point out that the skeptics in this thread who pooh-pooh the person quitting and critiquing the work of the company:

    Don’t have experience with what they’re working on.

    Probably don’t have domain expertise or education, but just read articles on the internet.

    In some cases haven’t trained a machine learning algorithm.

    Haven’t studied ethics in any formal way, much less ethics related to AI.

    Don’t have a firm grip on the possibilities.

    And still manage to come here and act like they’re God’s gift to AI analysis instead of actually listening to someone who knows what the fuck they are talking about.

  3. I’m tired of all these “quittings” that are just made public to advertise how scary and advanced their “AI” is, when in reality it’s nothing more than glorified predictive text.

  4. ShadowBannedAugustus on

    Oh, this marketing shtick again? Has it been a week yet? I would bet 2:1 that they have made spreading this bullshit a severance condition in their contracts.

  5. I wonder how many of these OpenAI departures are people being nudged out who then get on socials and act crazy.

  6. If I were a betting man, I’d bet that whenever OpenAI is laying someone off, they’re offering a well-sweetened deal if the laid-offee (I don’t fucking know) promises to do a little song and dance like this.

  7. Man, those exit bonuses must be really good for them all to come out saying the same thing after quitting.

  8. I’m of two minds about AI currently. Part of me is super optimistic that it could really make a big difference for people in the near future and help in a lot of ways. But that’s balanced by fear: because this is all done by for-profit companies in a race to the top, and not coming from a place of trying to benefit humanity, it’s a very dangerous proposition that is probably leaning more towards the negative outcome side of things.

  9. And when do we see this “terrifying” advancement?
    It’s effectively still doing the same things it did several years ago.

  10. I saw the same warnings coming from the intelligence community, journalists, historians, and the health community about how dangerous Trump would be as president. Now people are figuring that out, if they get affected by any one of his stupid policies enacted so far.

    So it seems to me we haven’t learned to listen to experts and would rather invent deep-seated conspiracies about how things aren’t really this bad.

    Spoiler alert, they are.

  11. Given the current state of affairs in the USA, with the cabal of tech oligarchs scrambling to give the orange one a little reach-around while they get on and do what they want, I suggest that Safety Researcher is a redundant role just now.