Just days ago, OpenAI CEO Sam Altman said “It has seemed to me for a long time it might be better if building artificial general intelligence were a government project.” And Palantir’s CEO thinks it’s inevitable – that if a technology takes millions of jobs *and* threatens the U.S. military, that you’d be crazy to think it *wouldn’t* get nationalized.
Do AI companies secretly want this? Imagine guaranteed government contracts and guaranteed funding for research and development. Or will government inevitably *need* this? If society is transformed, will they have no choice but to seize the private companies building it so they can direct its development?
aparallaxview on
And how would that be a worse outcome than the current neo-feudalist one? Not saying it would be better, just that it likely wouldn’t be worse 🤷
Everythings_Magic on
Oh well. Maybe you shouldn’t have been in bed with this admin.
quantumpencil on
I mean, the government is going to do this. They may not do it explicitly but they will use force to ensure they are ultimately in control of the technology one way or another.
DaVirus on
They don’t worry. It was always the plan. AI can be used as a mass surveillance tool and most AI shops are impossible to make profitable.
Nationalization disguised as a bail out was always the goal.
SkiHotWheels on
Isn’t this what all the meetings were about back in the presidential race? Biden had told these guys that the gov wanted to highly regulate AI. Then Trump told them he’d stay out of their way. Hearing that, they got behind him in Q2 of 2024.
PM_ME_NUNUDES on
It’s not high tech enough. The moat is currently just language and training. What’s a government going to do about open source models? Maaaaaybe if the government banned individual hardware ownership they could do it, but I don’t see the public accepting that. This is just more evidence that Altman and Co. don’t really have a clue about what they are doing.
Anyway I’m off to the thrift store to buy every single book they’ve got.
IBJON on
Yeah, I mean, that’s what happens when you push your product to be an integral part of how the government operates and make the government and military dependent on your product never going offline.
BalerionSanders on
It shouldn’t be nationalized.
It should be banned.
(Anyway, all corporations and rich people, and us too, are nationalized already right now. This criminal regime can do anything at any time and nothing except the implicit threat of armed rebellion and mass resistance- whether or not we are any longer capable of actually effecting that kind of fight for our liberty- is able to stop them. If the DOJ rolled up to Anthropic or OpenAI or even JP Morgan, and said, ‘pay us, or else,’ they’d pay, or they’d be destroyed. Nazism makes no compromises, capitalism is only useful to Nazis as long as it is useful, or it will be destroyed. You get, what you campaign financed, you Silicon Valley apocalypse incel cult of dunces)
KratosLegacy on
Nationalize another 70% of industries while you’re fucking at it. Healthcare, education, banking, energy, etc. But it needs to be in the hands of the people, not the government. Keep that in mind people. It needs to be held accountable by *the people* not *politicians* who don’t represent the people and sign data center deals behind closed doors while we pay higher energy and water bills.
QVRedit on
Well, they are presently making the mistake of trying to move too far, too fast. And doing that will generate a significant backlash against it.
DejectedTimeTraveler on
We need something like the Fed for AI: some board of appointed, long-term individuals whose only job is to try to keep the AIs somewhat in check.
Spitfire1900 on
I think the bigger risk to profiteers is that AI models look to be democratized too easily. A decently funded government project can release an open source model for a tenth the price at 90% the capability.
Multidream on
Wait wouldn’t that directly transfer the loan obligations to the US? Is that a backdoor out of this fustercluck?
SolidLikeIraq on
I was listening to a podcast that spoke about how the race for AGI is essentially the race for total power.
If anyone creates genuine AGI – it would instantly be able to control the world’s nuclear arsenal. It would break encryption that was unbreakable. It would game economic models and connected systems in a way that we likely couldn’t comprehend.
It’s hard to even imagine something with that kind of world-shifting power. In a weird way it’s like the old adage: “Imagine trying to explain cars to horse traders.”
If AGI is possible, it will be so far beyond comprehension that even our most resilient safety systems will be under threat of complete destruction.
I think what we likely end up with is a lot of very useful specific “agents” that are perfect for operating within specific tasks.
The ability to connect all those tasks and adjust for the nuances of the weird contextual clues that we all use in our daily lives will be difficult.
This isn’t a “what word most commonly follows this word, given the parameters and probabilities of the words before it” problem.
I just hope we don’t completely destroy the next 10-15 years with this shit. Technology is fire. Fire can be a great thing and we’ve learned how to use it to better our lives. It STILL burns a ton of shit down accidentally though.
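For what it’s worth, the “most likely next word” framing the comment above dismisses can be sketched with a toy bigram model. This is a deliberately simplified illustration (all names and the corpus are made up here; real LLMs predict tokens with neural networks trained on vast data, not count tables):

```python
from collections import Counter

# Toy corpus: a language model at its core estimates P(next word | context).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigrams: how often word B follows word A.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_probs(context_word):
    """Return {word: probability} for words seen following context_word."""
    followers = {b: c for (a, b), c in bigrams.items() if a == context_word}
    total = sum(followers.values())
    return {w: c / total for w, c in followers.items()}

probs = next_word_probs("the")   # "the" is followed by: cat x2, mat x1, food x1
best = max(probs, key=probs.get) # most likely continuation: "cat" (p = 0.5)
```

The commenter’s point is that chaining real-world tasks together needs far more than this kind of conditional word statistics, however sophisticated the version of it.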
JUST_A_LITTLE_PUSH on
What I’m more concerned about is the government forcing the AI to learn a pro-Israeli bias and feed it back to millions of users. People are already taking what chatbots tell them as gospel truth. Another medium for the Mossad to infiltrate and manipulate, if they haven’t already.
DeLoresDelorean on
It’s not that reliable or accurate, so it will fit perfectly with other government assets.
pimpeachment on
They can’t. Anyone can run their own models on their own hardware. They can only nationalize the services offering pre-built models for consumers.
FauxReal on
No shit, you’ve built the best automated surveillance and de-anonymizing system in history.
SnooDucks4472 on
We will get to a point where AGI, if/when it becomes possible, will be akin to a superweapon. At that point the hope would be that a reasonable government realizes that this power is dangerous and cannot be left in any one person’s hands.
slurtybartfarst on
That would be hilarious at first and then absolutely terrifying
Altruistic_Koala_122 on
It’s already secretly eyeing your browser tabs if you haven’t dismantled it completely. Not to mention the future of PCs, which will all have AI on the hardware itself.
People keep forgetting, the evil and mean people will always be the first to abuse everything and everyone for quick profits.
dav_man on
I mean, surely this is the way forward? I’ve been shouting from the rooftops at work about something similar. We’re pissing around trying to implement AI in different ways. Every day someone has something new to trial. Nothing sticks. All the while we’re burning tokens like mad. All people are doing is vibe coding.
Anyway, at some point, some key workflows using AI will stick and we’ll be bound by some models. At some point the VC funding will cease and the rug will be pulled. Then all the companies reliant on these LLMs will have their costs increase massively (potentially). So why not invest in open source or inner source now? Isn’t that the smart move? Then keep some funding for some premium models.
I mean, anyone doing OpenClaw stuff is doing that straight off the bat… so why are massive companies not? I mean, we all know why.
So by scaling this up and having this at government level, surely that makes sense. Fuck em. They would fuck you over in a heartbeat. To be at the behest of a load of tech bros who were all sexually repressed teenagers is madness!
SoftlySpokenPromises on
The only thing that can prevent AI from running rampant is regulation, so this is a worst case scenario for an industry that is already building an empire of debt.
TheCh0rt on
lol what did they think would happen, just be left alone to vacuum up money forever until the government allowed the AI companies to replace it? I think the AI companies legitimately thought they could pull it off, right in front of the government’s face.
buddhistbulgyo on
They’re like kids and they’re oblivious to the power they wield.
wumr125 on
It would be sooooo unfair if they aren’t allowed to keep private the thing they made by stealing EVERYONE’S data, conversations, and copyrighted works
Substantial__Unit on
I thought Governments destroy everything they touch? No? Not anymore?
Wind_Best_1440 on
Of course governments are going to nationalize AI. It’s the most effective mass-spying software ever created; it’s right up there with digital cameras.
The Stasi in East Germany, before the USSR fell, would blush at the spying taking place in the Western world now, at a fraction of the cost.
IntroductionStill813 on
Don’t give them ideas! Last I remembered, Bernie and Mamdani were the socialists, not the Republicans!
MawsonAntarctica on
They only worry because they lose the bag and the govt gets the bag.