13 Comments

  1. “Executives acknowledged that it is normal not to fully understand all the processes by which an AI arrives at a result. As an example, they cited how the company’s AI program adapted itself after being prompted in Bengali, the language of Bangladesh, “which it was not trained to know”.

    Google’s CEO, Sundar Pichai, was asked with great concern: “You don’t fully understand how it works, and yet you’ve made it available to society?” He replied: “It’s not a big deal, I don’t think we fully understand how the human mind works either”.

    In one case, Anthropic had Claude write poems and found that the model plans ahead, choosing the rhyming word at the end of the next line in advance rather than just improvising word by word:

    “We set out to demonstrate that the model did not plan ahead, and we found that it did.”

  2. sweetteatime on

    I don’t understand why we’re letting a few smart tech people create something that can very well change us forever. They can’t even figure out how it’s doing things and it’s just a snowball.

  3. shackleford1917 on

    The ‘we don’t know how it works’ is the scariest part of AI to me.

  4. one-hit-blunder on

    If you can’t 1: ask it, and 2: be certain it’s telling the truth, it probably shouldn’t be available to the public just yet. Response and obedience coding can be made concrete, no?

  5. michael-65536 on

    This is normal for things other than AI too.

    For most of the new developments of the last several thousand years, we didn’t really know how they worked. Some of them we still don’t.

    Under most circumstances, experience of what’s likely to happen is adequate, and precise perfect understanding adds only slightly to the utility.

    May not be the best idea, but it’s just how humans do things.

  6. MobileEnvironment393 on

    Remember people, just because we don’t understand something doesn’t make it superintelligence or even basic intelligence.

    We used to think the sun revolved around the earth.

    We used to think the earth was flat.

    We used to think leeches cured diseases.

    Ancient people thought the sun was a god.

    And nobody knows where socks go when you do your laundry and come back with fewer socks. Doesn’t mean magic is happening.

  7. IIlilIIlllIIlilII on

    Genuine question: Do they really not understand it, or are they just saying this to market their AI as something so mysterious and advanced that even its own creators don’t fully understand it?

    With all the AI marketing and misinformation nowadays, I’m more prone to believe they are just trying to market it as something mysterious in order to get more attention and investment.

  8. Right, but didn’t they design this one?
    Surely if we keep trying to make AI like us humans, it will act and behave as humans do, i.e., making human mistakes and even behaving irrationally?
    Or am I missing something?

  9. AI is exactly like the human mind in that we don’t fully understand either of them and never will.

  10. Yeah, now we’re just into “works in mysterious ways” nonsense, trying to draw an analogy between a statistical model getting stuck on things because the maths tells it to, and complex, intelligent biological beings getting fixated on random things.

    Just because you DON’T KNOW doesn’t mean you can draw parallels with other stuff you DON’T KNOW and pretend it’s the same problem.