7 Comments

  1. **Learning from AI summaries leads to shallower knowledge than web search**

    A set of experiments found that individuals learning about a topic from large language model summaries develop shallower knowledge than those who learn through standard web search. **Individuals who learned from large language models felt less invested in forming their advice, and created advice that was sparser and less original than advice based on learning through web search**. The research was published in PNAS Nexus.

    Across the experiments, participants who used LLM summaries spent less time learning and reported learning fewer new things. They invested less thought in their advice and spent less time writing it, and as a result felt less ownership of the advice they produced. Overall, this supports the idea that learning from LLM summaries results in shallower learning and lower investment in both acquiring knowledge and putting it to use.

    Participants learning from web searches and websites produced richer advice with more original content. Their advice texts were longer, more dissimilar to each other, and more semantically unique.
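
    The summary above doesn’t spell out how “more dissimilar to each other, and more semantically unique” was measured. As a rough illustration only, one common approach is to embed each advice text and treat low average pairwise cosine similarity as high uniqueness; the model name, sample texts, and scoring below are my own assumptions, not the study’s pipeline.

    ```python
    # Minimal sketch (not the study's actual method): score how
    # "semantically unique" each advice text is by embedding the texts
    # and averaging pairwise cosine similarity. Lower mean similarity
    # to the other texts = more unique. Sample texts are invented.
    import numpy as np
    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

    advice_texts = [
        "Budget first, then automate a monthly transfer into savings.",
        "Track every expense for a month before setting a savings goal.",
        "Put your savings on autopilot the day your paycheck arrives.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(advice_texts, normalize_embeddings=True)  # unit-length rows
    sim = emb @ emb.T  # cosine similarity matrix (rows are normalized)

    for i in range(len(advice_texts)):
        # Mean similarity to every *other* text; lower = more unique.
        others = [sim[i, j] for j in range(len(advice_texts)) if j != i]
        print(f"text {i}: mean similarity to others = {np.mean(others):.3f}")
    ```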

    For those interested, here’s the link to the peer-reviewed journal article:

    https://academic.oup.com/pnasnexus/article/4/10/pgaf316/8303888

  2. WTFwhatthehell on

    “less original”

    Was there any assessment of how likely it was for “original” to mean “wrong”?

  3. Imagine a dystopian future where nobody has thinking skills because they never trained them, and everyone relies on so-called artificial ~~intelligence~~. Everyone gets the same, often wrong, answers, believes them unconditionally at the level of religion, and persecutes *heretics* who dare to have their own thoughts, while bad actors poison LLM training data for personal gain.

  4. MajorInWumbology1234 on

    That makes sense, as having an LLM explain things to you is basically having one person who kinda understands something explain it to you. A lot of people would say the LLM doesn’t “understand” the subject, and that’s true, but I’d also argue a lot of people are mostly regurgitating information about topics they don’t fully understand, either.

    I’ve only recently jumped on the AI bandwagon, but I’m using it to give me a framework of topics to research independently rather than trying to learn anything from it directly. My formal education ended at high school, but I like learning things recreationally and have recently been asking AI how to patch gaps in my knowledge.

    Anti-AI sentiment is strong on Reddit, and there’s no denying that companies are pushing it excessively and for tasks it has no business doing, but I think a lot of the issues are on the human side, because people engage with AI with unrealistic and lazy expectations. It can’t do things for you, but it can certainly help you do things.

  5. Own-Animator-7526 on

    >*This research used the GPT-3.5-turbo model for the ChatGPT condition in Experiment 1.*

    In my opinion this is a poorly designed study with an unwarranted conclusion. A better LLM that had been instructed to provide “*richer advice with more* analytical *content*” would have been a fairer comparison. In effect, the search response is unbounded, while the vanilla LLM response is designed to provide a concise summary for a browser, not a learner. It is hardly surprising that it is sparser by comparison.

    I use these tools on a daily basis to get summaries of papers and theses and answers to more general questions. The LLM has been told that I am an *academic researcher*, and in my experience it provides thorough responses at an appropriate level.

    A simple example of why I think the LLM experience is a clear win: I can ask it to use formal terminology, but add parenthetical explanations for technical jargon it thinks I won’t understand. On request, the level of the discussion can be raised or simplified. And this is the fundamental point I think papers like this miss: an LLM discussion is what you make it. A sketch of the kind of setup I mean is below.
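
    For illustration only (the prompt wording and question are my own, not the study’s): the same gpt-3.5-turbo model can behave very differently once a system prompt describes who the reader is and how the explanation should be pitched.

    ```python
    # Minimal sketch contrasting an uninstructed query (roughly the
    # study's vanilla condition) with one given a "learner" system
    # prompt. The prompt wording is illustrative, not the paper's;
    # assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI  # pip install openai

    client = OpenAI()
    question = "How does compound interest work?"

    # Vanilla: no instructions, just the bare question.
    vanilla = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model used in Experiment 1
        messages=[{"role": "user", "content": question}],
    )

    # Instructed: tell the model who the reader is and how to pitch it.
    instructed = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "The user is an academic researcher. Use formal "
                    "terminology, but add parenthetical explanations "
                    "for any technical jargon they may not know."
                ),
            },
            {"role": "user", "content": question},
        ],
    )

    print("vanilla:\n", vanilla.choices[0].message.content)
    print("instructed:\n", instructed.choices[0].message.content)
    ```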

  6. Odd-Outcome-3191 on

    I think this tracks with what I’ve seen in people studying from textbooks vs. short-form video content/reviews.

    Like yeah, you get the information quicker from LLMs than from videos/reviews, and those are faster than a textbook. But the work of locating the information you need, reading through material that turns out to be irrelevant, and parsing which parts matter is an important part of information retention.

  7. another_random_bit on

    That would be true of any summary-based learning versus digging deeper while learning.

    No one’s stopping anyone from going deeper after reading the AI summary.