Google DeepMind unveils new AI models for controlling robots

Nikesh Vaishnav

Google DeepMind, Google’s AI research lab, on Wednesday announced new AI models called Gemini Robotics designed to enable real-world machines to interact with objects, navigate environments, and more.

DeepMind published a series of demo videos showing robots equipped with Gemini Robotics folding paper, putting a pair of glasses into a case, and other tasks in response to voice commands. According to the lab, Gemini Robotics was trained to generalize behavior across a range of different robotics hardware, and to connect items robots can “see” with actions they might take.

DeepMind claims that in tests, Gemini Robotics enabled robots to perform well in environments not included in the training data. The lab has also released a slimmed-down model, Gemini Robotics-ER, which researchers can use to train their own models for robotics control, along with a benchmark called Asimov for gauging safety risks in AI-powered robots.
