AI is at its most impressive when the answers to the questions it is asked are already in its training data. That's why it can score almost 100% on law and medical exams: the questions have been discussed so often on the internet that all the answers are in training data scraped from it. This can make AI very useful for narrow tasks, say detecting breast cancer in X-rays, but it's much less useful when it has to deal with new information that doesn't come from extensive training data.
For obvious reasons, it does not enjoy those advantages when it comes to news and current affairs. The great drawback of current AI is that it lacks reasoning ability, so it frequently makes simple errors when it encounters new combinations of information that aren't in its training data.
All the big tech companies developing AI are collectively pouring hundreds of billions of dollars into the efforts. To varying degrees, they are under huge pressure to justify this to investors. Hence, there is a rush to integrate AI into everything.
Perhaps the hope is that fundamental problems with reasoning will be quickly solved along the way. But they haven’t been, and so we see ridiculous outcomes like this.
knotatumah
*”Research also shows water is hot when boiled and air is necessary for breathing.”*
Seriously, AI has shown over and over and over again that it is not a reliable source of factual information and must be fact-checked regularly; yet we go through this with every industry in every applicable usage of AI, and somehow it's news every time.
evilspyboy
Language models should not be used as knowledge repositories. They should be used to interpret language, drawing the facts from elsewhere.
OldWoodFrame
I asked ChatGPT and it said 5-30% of chatbot responses contain misinformation. And that was misinformation!
SadWrongdoer4655
It would be interesting if they separated out the different models and compared them. Surely the new models like o3 and o3 Mini are more accurate than GPT-4??
TheSleepingPoet
It's not as if the news sources online are 100% accurate. Almost all news is influenced by opinion, political views, and social interpretation. I seldom read a news report online or in traditional media that is not in some way inaccurate or could be argued to be based on an outright lie.
Tha_Watcher
For those of us who’ve frequently interacted with chatbots, I’m sure this news isn’t particularly surprising in the slightest!
gurufi
What's new? The DELIBERATE INACCURACIES have been happening with CNN, BBC, FOX et al. for years. They now have serious competition from AI, and seemingly they don't like it one bit.
J0ats
It would be good if this were compared with a similar study on human responses about news.