Holy crap that *is* the actual headline and subheader… 😆
I like the cut of this article’s jib!
DST2287 on
“Sam Altman says”? Yeah, no one gives a flying fuck what he has to say.
KB_Sez on
In one year, OpenAI will be bankrupt and gone.
The bubble will burst, and they will be the first to go.
Banana-phone15 on
ChatGPT can’t do a timer. Instead of saying it doesn’t have the feature, it just lies to you with a fake time. Good job, Sam Altman.
Un-Quote on
Anthropic is going to add a timer feature to Claude in an afternoon just for the love of the game
FiveHeadedSnake on
ChatGPT needs to lay off the sycophancy – no layered meaning here.
GeneralCommand4459 on
Siri can finally look smug for 12 months.
BaffledInUSA on
Sounds like an Elon Musk promise.
DM_me_ur_PPSN on
Feed ChatGPT a series of values and ask it to make them comma-separated but otherwise unchanged; it can’t do that either. Anthropic are talking about having withheld releasing Skynet, and yet LLMs can’t do the most basic of tasks.
The whole thing is a trillion-dollar Ponzi scheme between Nvidia, the AI companies and the datacentre companies – with a healthy sprinkling of VCs and lobbyists wanking themselves to death over it all.
TriggerHydrant on
Yeah, and they fucked up their TTS and audio playback on iOS so badly that I – a ‘vibe coder’ – could do a better job, which is fucking wild.
essidus on
That’s because ChatGPT is an LLM, not an agent. And in fact, it would be a terrible agent if it were allowed to act like one, because its only job is to take text input and provide vaguely intelligible text output.
The best and singular use of ChatGPT is as a language interpretation layer between the user and the actual systems: interpreting normal human language for the computer, and turning the computer’s output into something human-digestible. This ongoing effort to make LLMs do everything under the sun is ill-advised at best.
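To be concrete, here’s a toy sketch with made-up names (ask_llm is just a stand-in for whatever model call you’d actually make): the LLM only turns the sentence into a structured intent, and a boring deterministic timer does the real work.

```python
# Toy sketch of "LLM as interpretation layer": the model only turns language
# into a structured intent; a plain, reliable timer system does the work.
import re
import threading

def ask_llm(prompt: str) -> dict:
    # Stand-in for a real model call that would be instructed to emit
    # structured output like {"action": "set_timer", "seconds": 300}.
    match = re.search(r"(\d+)\s*minute", prompt)
    return {"action": "set_timer", "seconds": int(match.group(1)) * 60} if match else {"action": "unknown"}

def set_timer(seconds: int) -> None:
    # The "actual system": dumb, deterministic, stateful.
    threading.Timer(seconds, lambda: print("Timer done!")).start()
    print(f"Timer set for {seconds} seconds.")

intent = ask_llm("set a timer for 5 minutes")
if intent["action"] == "set_timer":
    set_timer(intent["seconds"])
```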
stacecom on
It can write a script to start a timer. But the execution is left as an exercise to the reader.
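Something like this, for example (just a sketch of the kind of thing it will gladly write but can’t run for you):

```python
# A 5-minute countdown timer, the sort of script ChatGPT will happily produce.
# Running it is your job.
import time

def countdown(seconds: int) -> None:
    while seconds > 0:
        mins, secs = divmod(seconds, 60)
        print(f"\r{mins:02d}:{secs:02d} remaining", end="", flush=True)
        time.sleep(1)
        seconds -= 1
    print("\nTime's up!")

countdown(5 * 60)
```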
Jolva on
I couldn’t care less if AI can start a timer.
SplendidPunkinButter on
Sam Altman isn’t an engineer. He’s a manager.
GoopInThisBowlIsVile on
Can’t wait for my corporate overlords to lay off a ton of additional employees to justify their investment in OpenAI.
ten_year_rebound on
Have it code its own timer app, then start the timer.
marmot1101 on
I mean, that’s not as weird as it sounds. Chat is call and response; a timer is continuous. LLM calls are highly distributed; timers have to be on the same thread. Sure, they could implement a timer, but it would probably require special infrastructure, and ChatGPT operates at a huge scale.
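Rough sketch of what that special infrastructure could look like, purely hypothetically: instead of holding a thread per timer, you store an end timestamp and let any stateless worker answer “how long is left?” later.

```python
# Minimal sketch: a timer as stored state rather than a running thread.
# Any stateless worker can answer later requests by comparing timestamps.
import time

timers: dict[str, float] = {}  # timer_id -> end timestamp (stand-in for a real datastore)

def set_timer(timer_id: str, seconds: int) -> None:
    timers[timer_id] = time.time() + seconds

def remaining(timer_id: str) -> float:
    return max(0.0, timers[timer_id] - time.time())

set_timer("egg", 300)
print(f"About {round(remaining('egg'))} seconds left.")
```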
And all that for a “who gives a fuck” feature. From “Hey Siri, timer, 5 minutes” to a mechanical egg timer, that problem is already well solved.
That’s not to say Sam Altman isn’t a dumb, greasy Rod Blagojevich-lookalike asshole (he is), but not for this reason. Seriously, dude should rock the Blago hair helmet. They’re cut from the same cloth.
factoid_ on
The problem with AI companies is that they have a working product with some compelling use cases, but it’s massively immature technology.
The responsible thing to do is to scale it slowly and work on making models more compute-efficient.
Their current plan is “make models smarter by using more context, more memory and more compute until we reach the limit of the global supply chain”. And it’s fucking stupid. The plan is “light cash on fire and hope the world catches up”
NIRPL on
It’s unfortunate (yet pretty understandable) that current safety measures are pretty much punishing the human for presenting the false promises of the AI.
I get why we are starting with this approach, but eventually (probably pretty soon) we won’t be able to keep up.
For example, it will be like punishing someone for presenting a website from a Google search as reliable information, when it turns out Google didn’t want to disappoint them, so it made a fake website with everything they wanted.
How is anyone going to be able to efficiently and consistently fact-check? Idk, but good thing we’re not pushing AI into everything until we figure it out.
_sp00ky_ on
That’s my issue so far trying to use AI at work: when it doesn’t know something or can’t find something, it just makes stuff up. Stuff that looks right but is just fabricated.
correctingStupid on
To be fair, my Google Home piece of shit plays music half the time I ask it to set a timer. That’s why it’s in the fucking trash now.
wweezy007 on
How are people on a Technology sub this dense? The voice model the dude in the video was using doesn’t have access to tools. Tools are exactly what they sound like: they’re used by the model to extend its capabilities, like writing code, creating files and so on. To put it in human terms, tools are like arms and legs, and the task is to walk from X to Y and carry goods along the way: the brain understands the task, but the body just isn’t capable of fulfilling it.
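To spell it out, a tool is basically a function description the model can ask the host app to run. Here’s a made-up sketch (not any vendor’s actual API): the model is shown something like TOOLS below and, when asked for a timer, emits a structured call instead of prose; the host app executes it.

```python
# Sketch of tool use: the model sees the tool description and responds with a
# structured call; the host application actually runs it. Without the tool
# wired up, the model can only talk about timers.
import json
import threading

TOOLS = [{
    "name": "set_timer",
    "description": "Start a countdown timer",
    "parameters": {"seconds": "integer, duration of the timer in seconds"},
}]

def handle_tool_call(call: dict) -> str:
    if call["name"] == "set_timer":
        secs = call["arguments"]["seconds"]
        threading.Timer(secs, lambda: print("Ding!")).start()
        return f"Timer started for {secs} seconds."
    return "Unknown tool."

# Pretend the model, having seen TOOLS, returned this instead of text:
model_output = json.dumps({"name": "set_timer", "arguments": {"seconds": 300}})
print(handle_tool_call(json.loads(model_output)))
```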
TachiH on
It shows how few people understand that an LLM doesn’t magically have the ability to do anything you want. It’s a brilliantly coded predictive text and analytic engine, but it’s still not remotely intelligent.
Many-Resolve2465 on
It’s because the chat interactions aren’t stateful. Even in the early days you could break chat models by asking the time, because the time it takes to run inference on your request and produce a response creates a catch-22. Each time it fetches the time and prepares to respond, it reasons that the time has since changed and it needs to go back and fetch the new time. This creates an infinite loop, and it’s unable to answer the question the way a human would. A human would just use the relative measurement “about 15 seconds remaining”, understanding that time is passing as they respond.

Google does this natively with Google Home by adding “about” to an imperative response. I assume Google Home is an agent + LLM and not just an LLM. As a matter of fact, when Google first integrated Gemini into Google Home, I observed that it behaved more like a raw LLM than its predecessor, and it was garbage. It has since improved, and I assume it’s because they changed the mode to agent + LLM, with an agent gating responses for certain tool calls.
Pseudocode logic might look like:
If the user requests the time, fetch the current time and respond “about {time} left on the timer.”
LLMs in raw form do not have imperative programming logic, so an agent would have to manage these gates and respond to the user based on hard-coded conditions. LLMs are not agents. I would guess they will have to build agents in the future to handle this kind of request. Agents, however, are expensive to operate and easy to break, which is why a raw LLM is preferred for simple chat sessions.
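In runnable form, that gate might look something like this (a sketch; llm_reply is a stand-in for the raw model call):

```python
# Sketch of an agent gating a time request: a hard-coded rule answers with a
# relative "about ..." measurement instead of letting the raw LLM reason itself
# into a loop about the clock having moved on while it responds.
import time

timer_end = time.time() + 300  # pretend a 5-minute timer was set earlier

def llm_reply(user_message: str) -> str:
    return "Stand-in for the raw model on everything else."

def agent(user_message: str) -> str:
    if "time" in user_message.lower():
        remaining = max(0, round(timer_end - time.time()))
        return f"About {remaining} seconds left on the timer."
    return llm_reply(user_message)

print(agent("How much time is left on my timer?"))
```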
So yeah, basically people should remember that at the end of the day, all tech is dumb, even the more sophisticated versions.
No_Performance8733 on
Sex abuser Sam Altman?
No! Poor fellow….
Rurumo666 on
Would you let him babysit your kids, folks?
Bmandk on
Is it just me, or is it stupid to want a timer in an LLM?
“Tool company says it will take a year to add sawing function to a hammer” is the same kind of vibe that I’m getting. Use the right tool for the right job.
GrimDacra on
Pedo says what?
lalachef on
I work for a company that just started using AI chatbots to answer phones after hours. My manager and I listened to a call yesterday that went exactly as I predicted: a guy with a thick accent, calling the wrong number.
The AI was just trying to please him by making false promises to resolve his issue. He was asking about a delivery… We don’t deliver anything. We provide a service. The AI insisted that we would come through with the delivery.
AI can’t be trusted as an answering service, let alone be responsible for keeping track of time. It will just tell you what you want to hear every time you ask.
M4Lki3r on
It’s all just a parlor trick. It feeds back to you what you give it, just in a different format. If it doesn’t have that frame of reference, it doesn’t know what to respond with.