“Researchers at MIT have developed a framework called [Self-Adapting Language Models](https://arxiv.org/abs/2506.10943) (SEAL) that enables large language models (LLMs) to continuously learn and adapt by updating their own internal parameters.
SEAL teaches an LLM to generate its own training data and update instructions, allowing it to permanently absorb new knowledge and learn new tasks.”
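The loop the quote describes (model writes its own "self-edits", then fine-tunes on them) can be sketched in miniature. This is not the SEAL implementation: the real system uses an actual LLM and a reinforcement-learning outer loop, while here `generate_self_edit` and `sft_step` are toy stand-ins on a single scalar weight, purely to show the shape of the inner loop.

```python
# Toy sketch of a SEAL-style self-edit loop (assumptions: toy model
# y = weight * x; in the real framework an LLM generates the
# self-edits and SFT updates the LLM's own parameters).

def generate_self_edit(context):
    # In SEAL the model itself writes synthetic training pairs
    # ("self-edits") from new context; here we fake pairs for y = 2x.
    return [(x, 2.0 * x) for x in context]

def sft_step(weight, pairs, lr=0.01):
    # One gradient step on mean squared error for y = weight * x.
    grad = sum(2 * (weight * x - y) * x for x, y in pairs) / len(pairs)
    return weight - lr * grad

def seal_inner_loop(weight, context, steps=50):
    # Generate self-edits once, then repeatedly fine-tune on them,
    # permanently changing the model's parameters.
    pairs = generate_self_edit(context)
    for _ in range(steps):
        weight = sft_step(weight, pairs)
    return weight

w = seal_inner_loop(0.0, context=[1.0, 2.0, 3.0])
```

After the loop, `w` has moved close to the target slope 2.0: the "knowledge" in the self-generated data is now baked into the weight itself, which is the point the quote is making about permanent absorption.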
tim_dude:
How long until it begins to learn at a geometric rate?
PsionicBurst:
Isn’t this a recursion issue, where if you have an algorithmic idiocy (ai) that inferences text that is considered “best fit”, won’t the resulting ai be really disappointing? Too many ai posts in this sub.
YsoL8:
Exactly the kind of fundamental step forward that's likely to ensure AI develops much faster than people generally expect.
Now the basics are understood, every further step forward is likely to translate directly into more capable AI. And there’s now huge numbers of people looking at it.