27 Comments

  1. In other news: AI supporters believe AI has already taken over. AI people believe AI blah blah blah.

  2. cromstantinople on

    “There are still big problems with generative AI’s Achilles heel — the way it makes things up. Reliability and hallucinations are an even bigger problem if you’re going to turn AI into autonomous agents: Unless OpenAI and its rivals can persuade customers and users that agents can be trusted to perform tasks without going off the rails, the companies’ vision of autonomous agents will flop.”

    That’s a pretty big problem.

  3. Breakthrough this, breakthrough that. Let me know when they know the number of R's in strawberry and I'll give a shit. AI people are always going on and on about the next best thing, but the fucking thing can't even count on its own yet.

    Edit: I stand corrected, they did fix this issue. Thanks to the commenters pointing that out. I do wonder how reliable these breakthroughs will be, though, considering the models' instability and reputation for making things up.

  4. A business meeting with the government to make sure that this game-breaking technology will only be used to make money for like 5 people.

    *Shocked Pikachu Face*

  5. Sam Altman? The same guy who was really full of shit before? Why do you think he stopped lying now?

  6. I’m like 35% sure he’s going to declare that they have achieved AGI. It gets them out of a contract thing with Microsoft, and Altman just seems like the type of guy to want the honor of announcing the first AGI even if it’s not quite there yet.

  7. SandboxSurvivalist on

    My prediction is that he’s going to tell them that they are really super duper close to taking the next steps toward an amazing breakthrough that will open up new avenues of research that could potentially take them one step closer to AGI and that the government should definitely step in before other companies catch up because OpenAI is the only company that can be trusted to do this safely.

    He’s going to say all that in the most annoying vocal fry ever heard by human ears so that you know he’s super serious.

  8. Isn’t it great to know that any world-changing developments in AI are going to be run by the Trump administration, who will surely use that knowledge for the good of the country and mankind, and not for any corrupt purpose?

  9. ArtLeading5605 on

    Sam’s business model: extract value from every bit of knowledge humans have ever discovered or developed, make trillions selling it all back to said humans and taking their jobs, and share none of the proceeds with those humans afterward.

  10. He’s gonna beg for that government money because nobody wants to pay for a chatbot subscription.

  11. FlaccidEggroll on

    Or it’s just another tech oligarch there to stifle competition and shift things in his favor. I find that more likely than superintelligence.

  12. bentaldbentald on

    Even as someone fascinated by the progress in the AI space, these sorts of headlines and ‘stories’ are becoming really boring.

    We haven’t even seen a fully functioning agent yet, and we’re expected to buy into the idea that super agents – whatever tf that means – are just around the corner.

    I expect AGI to come at some point, but for the time being it is a lot of zzzzzzzz.

  13. Previous-Display-593 on

    This is such crazy speculation and editorialization.

    This could be about anything.

  14. Lol, it was just exposed that they were cheating on their last benchmarks… They had full access to the FrontierMath dataset… Good luck with that.

  15. -oo_oo_-o-o_-o- on

    “PhD level super agents” of course means whatever you want it to mean. There is no definition of “PhD level,” nor any way to really assess that something is at that level. A PhD in what, even? “Agent” actually does technically have a definition in this context, but I’m guessing it’s not that either.

    Great news guys, now we can misrepresent dissertation chapters as badly as the press!

  16. Billionaires: “Hey Doctor AI, how do we fix our economy and government?”

    PhD AI: “You should greatly increase taxes on the wealthy and get them out of your government.”

    Billionaires: “Uh – this thing needs to be controlled by us only.”

  17. This guy is primarily a promoter. He’s likely seeking investment or currying favor for favorable tax/regulation policy. I wouldn’t believe anything about claims like this without more evidence.

  18. Nah! Probably just sharing insider information to pump stock prices.

    Military contractors do the same: “I made a bigger, more expensive, more destructive missile. Can you create a conflict so we can bomb a poor country for profit?” It’s a win/win if Congress also owns stock in defense companies.

  19. Significant_Swing_76 on

    And Trump will say “sure, but Elon needs to own 51% of OpenAI”…
    “…unless you make a counteroffer that’s bigger than Elon’s…”

  20. Perhaps AI, behind closed doors, is able to replace most jobs now. We need to focus on getting this automation established to provide for the common citizen, or we’re all going into servitude.

  21. This feels like a very dumb article. Which AI researchers are going around saying that PhD-level super agents are coming?

    Probably the most substantial evidence we’ve seen lately is just Noam Brown on Twitter saying that the early o1-to-o3 scaling results are good, but otherwise telling people to calm down. Besides that, it’s Twitter influencers / schizos just making shit up, and Zuckerberg’s off-hand comment that AI will perform as well as a mid-level engineer this year (never mind his Metaverse predictions).

    So yeah, which insiders, and what remarks? Did any reach out to Axios directly?

    This is why I like The Information. They don’t treat anonymous Twitter shitposting as something to be taken seriously. Actual researchers from OpenAI are more than willing to discuss inside matters with them.

    EDIT: Reread it a third time. Axios has sources in the US government and within the labs saying that internal projections have been revised to be more optimistic on timelines.

    But, you know, no leaks on new models or whatever, just “wow, scaling o3 seems pretty promising”.

  22. Makes sense: a lot of published papers and code are public, just like the majority of open source software. Coding AI and research AI are good candidates.

    The problem I have is that PhD super agents are only possible if we trust the peer review process (as a firewall for publishing), and IIRC the peer review process has been tainted in the last decade, with PhD students and universities rushing to publish for funding, business investment, or even propaganda, and exploiting services like arXiv with a flood of bad research. Use that as training data for OpenAI and it’s garbage in, garbage out.

  23. Our government leaders should be asking themselves how they can make up for all the lost tax revenue from the significant losses to the labor market.