Google's A2A release isn't as flashy as other recent launches, such as photorealistic image generation, but creating a way for AI agents to work together raises a question: what if the next generation of AI were architected like a brain, with discretely trained LLMs acting as different neural structures to solve problems? Could this architecture make AI resistant to disinformation and advance the field toward AGI?

Think of a future-state A2A as acting like the neural pathways between different LLMs. Each LLM would be trained on a discrete dataset and carry a distinct expertise. Conflicts between their responses would then be resolved by a governing LLM that weighs accuracy and synthesizes a nuanced final answer.
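To make the idea concrete, here is a minimal sketch of the arbitration step described above: several "expert" agents answer the same query, and a governing agent pools their self-reported confidence to pick a consensus. All agent names, weights, and answers are hypothetical stand-ins for discretely trained LLMs; this is an illustration of the concept, not the A2A protocol's actual API.

```python
# Hypothetical sketch of a "governing LLM" arbitrating expert responses.
# Agent names, confidences, and answers are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ExpertResponse:
    agent: str
    answer: str
    confidence: float  # self-reported, 0.0 to 1.0

def govern(responses):
    """Pool confidence per distinct answer and return the highest-scoring one."""
    scores = defaultdict(float)
    for r in responses:
        scores[r.answer] += r.confidence
    # The governing agent picks the answer with the highest pooled confidence.
    return max(scores, key=scores.get)

responses = [
    ExpertResponse("medical-llm", "Answer A", 0.9),
    ExpertResponse("legal-llm", "Answer B", 0.4),
    ExpertResponse("finance-llm", "Answer A", 0.3),
]
print(govern(responses))  # Answer A (pooled 1.2 vs 0.4)
```

In a real system the governor would itself be an LLM judging nuance and evidence, not a vote counter, but the structural point is the same: disagreement between discretely trained experts becomes a signal the governor can weigh rather than a single model's blind spot.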

https://betterwithrobots.substack.com/p/the-cortex-link?r=1w3nvi&utm_campaign=post&utm_medium=web&triedRedirect=true



  1. Sweaty_Yogurt_5744 on

Could a future-state A2A be used to chain multiple agents together to act in concert, resembling the neural pathways and cortices of an organic brain? Would AIs trained on discrete datasets construct more accurate responses, be more resistant to disinformation contained in their datasets, and advance the field toward AGI? Conversely, would this model be unwieldy or struggle with cognitive dissonance? Discussion here should weigh these questions and whether this concept might be incorporated into the product roadmaps of the companies developing AI today.

  2. michael-65536 on

    I think this is probably only one step away from AGI, if the subnetworks are trained separately.

    Once the whole assemblage is trained together, like human brains are, it will probably lead to AGI.