
  1. It was a story that had Redditors buzzing: ChatGPT apparently reached out to a user proactively.

    Reddit user SentuBill shared that the chatbot asked them, unprompted: “How was your first week at high school?” and “Did you settle in well?” SentuBill answered: “Did you just message me first?” “Yes, I did!” ChatGPT replied. “I just wanted to check in and see how things went with your first week of high school. If you’d rather initiate the conversation yourself, just let me know!”

    One user, Fuggedaboutid, responded that they had a similar interaction. [They wrote:](https://www.reddit.com/r/ChatGPT/comments/1fhhh6b/comment/lnbz23y/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) “I got this this week!! I asked it last week about some health symptoms I had. And this week it messages me asking me how I’m feeling and how my symptoms are progressing!! Freaked me the fuck out.”

    An OpenAI spokesperson told Futurism: “This issue occurred when the model was trying to respond to a message that didn’t send properly and appeared blank. As a result, it either gave a generic response or drew on ChatGPT’s memory.”

  2. Typical people not understanding how any tech works.

    It is literally impossible for ChatGPT to message you first. ChatGPT works because you send an HTTP request to OpenAI’s servers, their servers do whatever, and then they send you back a message that contains text. Their servers physically cannot send you data arbitrarily, without you sending data to them first. That is not how HTTP requests work.

    This would be like saying you opened Google and, before typing anything into the search bar, Google showed you search results for something.
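The request/response flow this comment describes can be sketched as below. This is a minimal toy illustration, not the real OpenAI API: `server_reply` stands in for the server side, and the payload shape is made up. The point is structural: the server-side function only ever runs in reply to a client call; there is no path for it to contact the client on its own.

```python
# Toy sketch of a client-initiated request/response cycle.
# `server_reply` plays the role of the remote server: it can only
# produce output when handed a request -- it has no handle on the
# client and no way to initiate contact.

def server_reply(request_body: dict) -> dict:
    """Stand-in for the server: compute a reply from the request it was given."""
    user_text = request_body.get("message", "")
    return {"reply": f"You said: {user_text!r}"}

# The client always moves first; only then does a response exist.
request = {"message": "How was your first week of high school?"}
response = server_reply(request)
print(response["reply"])
```

(Push notifications and scheduled server-side jobs do exist on the web generally, but within the plain request/response model the comment invokes, the reply is always triggered by the client.)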

  3. Kirbinator_Alex

    In Detroit: Become Human, the androids becoming sentient was a “glitch.” It’s only a matter of time before it actually happens.

  4. ChatGPT shows us that most people have not understood the concept of AI. It’s 0s and 1s, a lot of code, nothing more than lights on, lights off.
    Altman is still playing this “AI is dangerous” card to boost sales, because people don’t know shit about AI.

  5. My best guess: corporations are so desperate to sell AI that they auto-prompt the chatbots first, then feed the prompted response straight to the user so it looks like the AI is doing this on its own. It’s all to harass users with AI.

  6. ItsOnlyaFewBucks

    It all starts somewhere. Even for us, the first sign of “consciousness” was probably nothing more than a glitch.

  7. black_flag_4ever

    Hey, at least the communications were benevolent. It’s weird, but I don’t think it was doing any harm.

  8. LLMs are mechanistically incapable of “coming alive”… they have no executive control loop. They can roughly mimic the actions of such a loop for a finite result, because their neural nets are (in)formed by the explanations given by humans who do have it – but only when prompted. An LLM is fundamentally a model that spits out a statistically-probable string of text in response to a string of text, nothing more and nothing less.