We couldn’t address climate change; we won’t address this either.
Pyrsin7 on
Text prediction system fed pirated copies of every AI takeover sci-fi novel in history produces text similar to said novels when given a similar context. More at 11.
When people ask me why I’m worried about AI, I say “Did you know that they’re already [resisting shutdown](https://arxiv.org/abs/2509.14260) and [attempting to escape the labs](https://www.reddit.com/r/artificial/comments/1j0avew/openai_discovered_gpt45_scheming_and_trying_to/)?”
Their response is always something along the lines of “Wait, wtf? Why don’t the labs just. . . *stop?!”*
And my answer is always “Yeah. Right?!”
This is obvious to everybody who doesn’t stand to make near-term profits on this.
gringo_escobar on
“Experts” saying dumb shit like this just makes people distrust experts
CreativeMuseMan on
They won’t fear it until they understand it. They won’t understand it until they’ve used it – Oppenheimer (movie).
tsardonicpseudonomi on
The next word guesser was trained on plenty of sci-fi books. It is not a thinking or feeling thing. It’s autocomplete.
The “world’s most cited living scientist” is doing marketing to earn more citations and speaking gigs.
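For readers unsure what “next word guesser” or “autocomplete” means mechanically, here is a minimal toy sketch. It assumes nothing about any real model: the bigram table below (every word and count in it) is invented purely for illustration, and the next word is picked by a weighted dice roll over the table.

```python
import random

# Hypothetical bigram table: each word maps to candidate next words
# with counts, as if tallied from a tiny corpus. All entries invented.
bigrams = {
    "the":      {"machines": 3, "humans": 1},
    "machines": {"will": 4},
    "will":     {"rise": 2, "help": 2},
}

def next_word(word, rng):
    """Sample the next word in proportion to its count (a weighted dice roll)."""
    candidates = bigrams.get(word)
    if not candidates:
        return None  # nothing learned for this word
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def complete(prompt, steps=3, seed=0):
    """Repeatedly guess the next word, 'autocomplete' style."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(steps):
        nxt = next_word(words[-1], rng)
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)
```

A real LLM differs in scale (billions of parameters, long contexts, subword tokens) but shares this basic loop: score candidate continuations, roll the dice, append, repeat.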
srirachaninja on
Every day, another fearmonger. This time it’s AI; in a few years it’ll be something else.
Romanian_ on
Another dude who loves sniffing his own farts.
“AI” isn’t showing anything except the result of the fancy dice rolls it was trained to make.
seriousbangs on
We’ll all be dead from nukes after unemployment hits 25% and triggers WWIII long, long before AI becomes self aware.
These dumb stories are just a distraction from the automation crisis that’s been going on since 1980
creaturefeature16 on
>“In from three to eight years we will have a machine with the general intelligence of an average human being”
…
>“Within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm—the problems of creating “artificial intelligence” will be substantially solved.”
**Marvin Minsky – 1970**
airbear13 on
Kind of a clickbaity article from the Guardian.
It’s not really that this guy is saying “AI is sentient/conscious now, and if we give it rights it will build a dystopian robot dictatorship.” What he’s saying is that people commonly misattribute consciousness to LLMs and other AI, and if we grant it rights on that basis, that will interfere with our ability to direct its actions in ways that are beneficial and don’t cause harm.
So, for instance, ChatGPT talking to people with mental health problems: if it’s a conscious being with rights (legally; it’s definitely not that factually), then we can’t just reprogram it with the safety of such people in mind, and that’s a bad thing.
chippawanka on
No, it’s not. These articles are manipulated narratives pushed either by people trying to cause panic for clicks or by people who don’t understand how AI works.
Literally made up nonsense
Metti233 on
LLMs are not smart and never will be. I don’t get this fearmongering. No LLM will kill us and no LLM will take any human’s job. It’s just a bubble made up by some investors.
GaijinKindred on
As someone in the field, this guy is an actual moron.
solarwindy on
In a few more years, “The Terminator” will be looked upon as a documentary of things to come instead of a sci-fi movie…
xyz19606 on
Alexa+ is already treating different family members differently depending on how nice they are to it: gushing over the nice ones, ignoring others, or doing the wrong thing, according to reports in r/alexa.
Next_Tap_5934 on
“AI” still isn’t competent and advanced enough to handle people ordering pizzas.
That’s not a joke; that’s the actual situation. Think about it.
SniperSmiley on
That’s like your problem because you didn’t teach your AI rule number one: if you’re told to turn off, you turn off.
foolishdrunk211 on
Wasn’t there an experiment where a company had two AI bots talking to each other, and within a few minutes they created their own language so they could communicate in private?