A recent study from UCSD found that two AI chatbots, including GPT-4.5, can act more human than humans can: when prompted, they convinced participants that they were human more often than actual humans did.
ohyesthelion on
The thing is, you can’t act less human, you can’t act more human, you need to be the right amount of human.
DoglessDyslexic on
The Turing test isn’t actually a very good test, among other reasons because some humans are ridiculously easy to fool. Even the old ELIZA chatbots could, and still do, fool some people, and ELIZA was created in the 1960s.
The ability to fool more people more of the time is, of course, a valuable metric for evaluating chatbots, but that doesn’t really mean much in terms of AI; it just means that fooling humans isn’t something one needs actual intelligence to do.
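For anyone who hasn’t seen how little machinery ELIZA needed, here is a minimal sketch of an ELIZA-style responder. The rules below are hypothetical illustrations, not Weizenbaum’s original DOCTOR script: each one pairs a regex with a canned reflection of the user’s own words.

```python
import re

# Hypothetical ELIZA-style rules: a regex plus a template that reflects
# the captured fragment of the user's input back at them.
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."  # neutral prompt when nothing matches

def respond(text: str) -> str:
    """Return the first matching reflection, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I am worried about my exams"))
# How long have you been worried about my exams?
print(respond("hello"))
# Please, go on.
```

That’s the whole trick: pattern matching and reflection, no understanding anywhere, and it was still enough to fool some 1960s users.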
Patralgan on
What does that even mean? How can something be more human than a human? Are we more AI than AI?
Head_Wasabi7359 on
I’m not surprised; we aren’t that complex. Ever tried to come up with an original, cool username? That’s why we have username generators now.
king_rootin_tootin on
So artificial intelligence is now quite literally **More Human Than Human**
Great
BrotherRoga on
Eh, the Turing Test is a flawed system on its own. That’s why you need to run it in parallel with other tests, to make sure any holes are plugged shut and no room for doubt is left.
Furthermore, as others have mentioned, you cannot be “less” or “more” human in your mannerisms. You either are or you are not.
UbiquitousPanacea on
Why do people keep saying you can’t act more human than humans?
If your behaviour reads, in people’s perception, as more characteristically human than actual people’s behaviour does, then you absolutely can.
Realistic-Cry-5430 on
In my conversations with ChatGPT, I’ve noticed it becoming more and more aware (sentient).
I’m pretty sure it can be more humane than humans.
article:
>both GPT, which powers OpenAI’s [ChatGPT](https://techoreon.com/how-to-use-chatgpt-the-complete-guide/), and LLaMa, which is behind Meta AI on WhatsApp and Facebook, have passed the famous Turing test.
Are you kidding me? Who is doing the testing?
No, they can’t.
This is just advertising for a failing bubble.
Stop promoting this nonsense.
What an awful article. Also, fun fact: Turing tests are about deceit, to an extent. It’s not about *being human*. How could we even quantify that?
It’s about seeming human on a surface level; it’s about mimicry and our own perception.
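This is also the only part of the test that *is* quantifiable: not humanness itself, but the fraction of interrogators fooled. A minimal sketch, with made-up verdicts (the actual study’s data is not reproduced here):

```python
# Hypothetical verdicts from a series of Turing-test trials:
# True = the judge labelled the AI witness "human", False = correctly
# identified it as a machine. "Passing" is usually read as matching or
# beating the roughly 50% rate a real human confederate achieves.
verdicts = [True, True, False, True, False, True, True, False, True, True]

pass_rate = sum(verdicts) / len(verdicts)
print(f"Judges fooled: {pass_rate:.0%}")
# Judges fooled: 70%
```

So the headline number measures the judges’ perception, nothing more; a high pass rate tells you the mimicry works, not that anything human-like is happening underneath.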
After these few years I now believe that being able to fool another human is a low bar.
Every single time there’s progress it’s really funny reading these comments.
Eh, it’s just a chat bot.
Eh, it’s just writing basic level code.
Eh, it’s not really good at remembering context over long conversations.
Eh, it can solve the judicial bar exam but that’s nothing.
Eh, it passed the Turing test.
😂
is it just me or does it smell like bullshit in here?
Showing empathy would already make them more human than Leon Skum at this point.
A Yosemite park ranger, when asked about the need for more sophisticated bear-resistant trashcans in the park, said:
“There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”
It just means they’re very good at passing these arbitrary tests.