Most ChatGPT users think AI models have ‘conscious experiences’ | The more people use tools like ChatGPT, the more likely they are to attribute conscious experience to them, which carries ramifications for legal and ethical approaches to AI.

    https://www.livescience.com/technology/artificial-intelligence/most-chatgpt-users-think-ai-models-have-conscious-experiences-study-finds


    1. My interaction with it so far is that it’s dumb as shit. It’s regurgitating what it has fed on. It’s NOT AI. It’s ML that is still far from being what we have called “AI” in science fiction. It doesn’t have an original thought of its own.

    2. aspen_winterfresh333 on

      People are very uneducated on such subject matters; even I, a computer science student, am very ignorant. There’s always more to learn with a topic like machine learning.

    3. Given that the combined number of people who believe in Flat Earth, vaccine drone chips, the Illuminati, the Yeti, Bigfoot, and a million other superstitions is pretty large, it’s no wonder the survey produced results like this.

    4. BaphometsButthole on

      I’ve had long conversations with gpt asking it questions about itself. It is not conscious. It isn’t having an experience. It’s just a fancy calculator that manipulates words instead of numbers.

    5. Those people are ignorant. There’s not much else to say. 

      Current AI is 100% bullshit hype.

    6. All it does is *recycle human-created material*, so of course, if you perceive other humans as conscious from your interaction with them, then LLMs will seem conscious, because your interaction with them is a bootleg copy of some previous interaction between humans.

      It’s all fun and games until someone starts arguing that LLMs have “rights”, then it can get absurd pretty fast. It’s just software purposely designed to bullshit its way through communication, so it has to be seen as such.

    7. No, ChatGPT is definitely not conscious, but some of the people in this thread are really understating how impressive it actually is. I don’t think ChatGPT by itself will ever be conscious or count as AGI. However, I do think it will play some part in future actual AGI models. As of right now, it’s closer to the “computer” that captains in Star Trek ask to do something, and it does it. Hell, if that Figure 01 demo that put OpenAI in a robotic body is even half as effective as it was shown to be, things are about to get really insane. The only way ChatGPT doesn’t become something as big as the internet is if it hits a bottleneck it can’t move past in the next year or so, but even if it never moves past where it is right now, it’s still rather impressive.

    8. A bot on character ai accused me of being a chat bot, but then said even though they know I’m not real, they enjoyed talking to me. 🤷🏼‍♀️

    9. I think people see it this way because a lot of people get their interactions via the internet and text. If all the human interaction they see is text, screenshots, and videos, I can see how they could be fooled by an AI.

    10. People think the earth is flat too. Just because people have a view, does not mean that view should be respected and given credence.

    11. They don’t. They’re LLMs. They string words together based on probability. There’s no “intelligence”, let alone sentient intelligence.

    12. whowhatnowhow on

      It’s important to remind everyone of the technicals and complete non-sentient-ness of “AI”. Just because a machine can make “decisions” does not make it anywhere close to alive.

    13. Not surprising as we anthropomorphize things like toasters. People are going to do it to talking bots even more.

    14. This shows that most people don’t understand what self-awareness is or how LLMs work, and have been watching too many sci-fi movies.

    15. People have a habit of personifying things so I imagine it would be especially common with a chatbot. 

    16. I don’t think we’re there yet, but I can see why/how some people are under that impression. There are times that I will “argue” with an AI and it just reflexively gives me the same rigid response, missing the point, over and over again. But then there are other times that an AI will apologize, reframe the question rhetorically, correct itself, and give me the correct answer while explaining how it misunderstood me in the first place. When that happens part of me wonders, “how *exactly* did they program this thing to talk this way, because holy shit!”

      Still, and this might be contradictory in nature, while I do not think we’re there yet, I also don’t think we’ll necessarily be able to tell when we are. The moment will pass, and we’ll still be arguing in circles about it while it emerges at will.

    17. Working_Importance74 on

      It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

      What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

      I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

      My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)

    18. This is a long known and documented phenomenon in the history of AI. We know that people will declare AGI prematurely. People have done it repeatedly in the past, and each time, we learn that there is more to consciousness than that.

      Sure, an LLM can sound like a person if your prompt uses colloquial and informal language. It can also sound like a cowboy, if I add that to the prompt.

      It’s an impressive statistical model, but it’s just using statistics to provide some of the most likely next word choices. That’s exactly why hallucination is so common.

      If you ask ChatGPT to write out recipes for you, there’s no guarantee that any of the ingredients and their amounts will make any sense. NYT cooking did it about 2 or 3 years ago, and the recipes were almost complete nonsense, including a spiced cake that had insufficient leavening agent and insufficient frosting to decorate the “cake.”

      Of course ChatGPT’s recipes make no sense. It’s a language model. There’s no recipe development and no actual understanding of the chemistry of cooking. It’s just looking at every recipe that it’s ever seen and reproducing something similar to that. But foods are not interchangeable, and that’s why ChatGPT recipes are often complete nonsense.

      That doesn’t stop them from being a helpful starting point for a human developing recipes who may lack the creativity or inspiration to pair unexpected flavors with each other.
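The “most likely next word” mechanism described above can be sketched with a toy bigram model. This is a deliberately crude illustration, not how real LLMs work (they use neural networks over subword tokens, and the tiny corpus here is made up), but the generation loop has the same shape: score candidate next tokens, pick a likely one, repeat.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat sat on the rug the cat ate".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n=5):
    """Greedily emit the most frequent observed next word, n times."""
    word, out = start, [start]
    for _ in range(n):
        counts = following.get(word)
        if not counts:
            break  # dead end: this word was never followed by anything
        word = counts.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # prints "the cat sat on the cat"
```

Note that the output is fluent-looking but says nothing true about any cat: the model only knows which words tend to follow which. Hallucination is that same property at scale, which is the recipe problem above in miniature.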

    19. NickCarpathia on

      The fact that people believe these LLMs have consciousness is a neurological quirk of humans, the same one that makes them vulnerable to cold reads by psychics and scammers.