Google has introduced a new AI model designed to help robots better understand and interact with the physical world, addressing one of the biggest challenges in robotics: reasoning beyond instructions.
The model, Gemini Robotics-ER 1.6, focuses on “embodied reasoning,” enabling robots to interpret visual inputs, plan tasks, and determine when a task is complete.
This marks a shift from command-following machines to systems capable of making context-aware decisions.