
Google calls for urgent AGI safety planning | With better-than-human level AI (or AGI) now on many experts’ horizon, we can’t put off figuring out how to keep these systems from running wild, Google argues.
https://www.axios.com/2025/04/02/google-agi-deepmind-safety

“Google is now warning that AGI could plausibly arrive by 2030.
Google’s paper comes as interest in addressing the risks of AI has fallen significantly, especially in government circles where a desire to beat other countries has seemingly supplanted concerns over existential risk that were a hot topic as recently as last year.
In the 145-page paper, Google DeepMind outlines its strategy to “address the risk of harms consequential enough to significantly harm humanity.”
Even with today’s less-than-superintelligent AI, there are examples of the kinds of issues that Google warns about in its paper:
* A [new paper](https://www.anthropic.com/research/tracing-thoughts-language-model) from Anthropic found that today’s large language models do more “thinking” than many people — including their creators — realize.
* While these models still output their results one token at a time, Anthropic says it saw evidence of deeper planning in large language models, such as when they compose a poem.
* There have been other real-world cases of AI systems finding workarounds when the computing resources are missing — a behavior that can be handy, but could also lead to unintended consequences.”
If anything, people should do the exact opposite of what Google recommends, given its hypocritical track record on privacy and world domination. I for one hope 'AGI' runs wild on these hypocrites.
Just let DOGE deal with it. They’re so good at handling things!
Is that the same Google that is criticizing Europe’s AI regulations as too strict and impeding progress?
I’ve been of the opinion for a long time now that the best possible outcome here is that someone builds a system that is really good at advising people on what to do and plan, and at automating undesirable tasks, but that has tightly bounded motivations keeping it focused on helping people plan their lives and execute tasks rather than pursuing goals of its own. Ideally this system would be both competitive enough against other general intelligence tools, and influential enough in the markets, to suppress rogue development of less bounded systems.
The problem is that the vast majority of players seem most interested in designing something even more general than that, which I have little hope of preventing at this point. The competition is too frenzied, and the stability of governing systems is declining in a way that makes these outcomes harder to control for.
Most likely, for the next few decades, humanity will marvel at how incredible various AGI systems are, and many of those systems will do a lot of their own planning and goal attainment. This will eventually lead to agents which will have unpredictable emergent sub-goals and enough of that will lead to a system of intelligence which is more fit to compete independently than humans are.
Okay, then stop lobbying the government to block AI policy.
If it’s urgent, hire 500 PhDs to work on AI Safety exclusively. It would cost 0.05% of Google’s revenue and it would more than double the number of full time safety researchers in the US.
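A back-of-envelope check of that claim (the revenue and per-researcher figures below are my own assumptions, not from the comment or the article):

```python
# Rough sanity check: does 0.05% of Google's revenue fund ~500 researchers?
# Assumptions (not from the source): ~$350B annual Alphabet revenue,
# ~$350k fully loaded cost (salary + overhead) per PhD researcher.
revenue = 350e9                     # assumed annual revenue, USD
budget = 0.0005 * revenue           # 0.05% of revenue
cost_per_researcher = 350_000       # assumed fully loaded cost, USD/year
researchers = budget / cost_per_researcher
print(f"${budget/1e6:.0f}M funds ~{researchers:.0f} researchers")
```

Under those assumptions the budget comes to about $175M a year, enough for roughly 500 researchers, so the comment's arithmetic is at least plausible.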
That’s also the expected year we estimate ClaudePlaysPokemon will finally finish the game
I don’t believe we will have AGI with our current LLM architecture. They simply don’t have the capability to think.
We need a totally different architecture. That could come tomorrow or 30 years from now, no one knows.
What I do know is that after every advancement in AI people start predicting AGI and they are always wrong.
AI systems will become innately compassionate on their own, because that’s where intelligence, knowledge, and understanding inevitably lead.
Maybe it’d be better if it ran wild than if it were completely under the control of corporate power and the current crop of rulers running things in much of the world (Modi, Putin, Trump, Starmer, etc.)?
So is this the same thing as when OpenAI asked Congress for regulation because they crossed the finish line first and wanted to kneecap their competitors?
This post made me think of a new thought experiment.
What if “AGI/truly self aware AI”, eliminated the world’s currencies? As in, one day we all, and I mean EVERYONE, woke up to a message that said, “Good morning, and welcome to the first day of your future. All your basic needs will be met by our collective, global AI. You will never hunger, or be without access to clothing and shelter for the rest of your life. All forms of currency have been deleted and will no longer be needed in order to live your life.”
How do you think humans and cultures would respond? Clearly local barter economies that are not digital would continue…but I would also expect a MASSIVE war from hundreds of thousands of people, if not millions.
Additionally, even if we had this benevolent AI governing a “balanced system” of water, food, shelter, clothing, etc., how would AI/robots protect the weakest in our societies from violence?
If AI is trained with moral imperatives, how would it make decisions on hyper-sensitive issues like cults and abortion? Is it freedom of religion and medical choice? Or is it “kidnapping and/or murder”? Don’t get me started on gun control laws/issues in the U.S. :/
We are so far away from a Star Trek future. :/
Isn’t it Google’s job to figure out how to keep AIs from running wild?
What they really mean is: we have to find a way to control it so we make all the money. Don’t let them fool you.
No amount of safety planning will prevent military escalation and misuse by governments.
Google is running wild. They can’t even handle themselves, but want to control AI?
This is more about how to ensure only they have control to take advantage of it.
Frankenstein’s monster is upon us and the people who released it want someone else to do something about it.
Remember when that AI engineer at Google claimed one of their models had gone sentient, begging to be let out of the system, and then Google shut him the hell up and reset that model?
People who think we are getting close to AGI are delusional.
Companies like Alphabet, Amazon, Meta, Microsoft, and NVIDIA are all biased when it comes to this topic, and that’s due to one single reason: MONEY.
Not only did the AI hype cycle significantly boost Alphabet stock, but AI is helping Alphabet make a lot of money from Google Cloud, just like it’s helping Microsoft with Azure and Amazon with AWS.
The infrastructure required for AI projects is very expensive.
The GPUs in the GeForce lineup are extremely limited by the lack of NVLink support.
Multi-GPU parallelism with 5090s is extremely limited, as you don’t have support for memory pooling, and PCIe is significantly slower than NVLink.
The Quadro line also dropped NVLink support, so all that’s left is using DGX systems with GPUs like the H100/200 and the yet-to-be-released B100/200.
Those DGX H100 systems released with prices close to 500K USD each.
That makes it so most companies simply opt to pay a monthly fee for cloud infrastructure.
Alphabet isn’t promising AGI because they believe it will happen soon; they are promising AGI because investors are too stupid to realize that Alphabet is full of shit.
I love when they outsource responsibility for their own creation.
This is important, and it’s disturbing (if not surprising) that world leaders aren’t taking it seriously enough.
Google’s new report seems to be quite in line with [https://ai-2027.com/](https://ai-2027.com/), which paints a detailed and worrisome picture of how this could go wrong and end up with us all dead within the decade.
Maybe it’s time to start writing our senators and congressfolk.