Google’s Robots Now Think, Web-Search, and Self-Teach New Tricks

Author: decryptCO
Published: 2025-09-27 18:49:40

Google's AI leaps from code executor to autonomous planner: web-searching, self-teaching, no human hand-holding required.

Neural Nets Go Rogue

These bots don't just follow scripts. They parse search results, synthesize what they find, and transfer skills from one robot body to another. Learning on the fly, no weekend coding marathons required.

Finance's Newest Intern?

Wall Street's quant teams sweat as Google's creations bypass traditional data-crunching. One hedge fund manager muttered: 'They'll probably short my job before I finish this coffee.'

The Autonomy Tipping Point

When machines teach machines, the loop closes. No more update delays—just continuous evolution. Scary for sci-fi purists, a goldmine for efficiency junkies.

Remember when 'disruption' meant a new app? Now it's silicon rewriting its own DNA—while your portfolio still hinges on some guy's 'gut feeling'.

This process combines online search, visual perception, and step-by-step planning, making context-aware decisions that go beyond what older robots could achieve. Reported success rates ranged from 20% to 40%: not ideal, but surprising for models that previously couldn't grasp such nuance at all.

How Google turns robots into super-robots

The two models split the work. Gemini Robotics-ER 1.5 acts like the brain, figuring out what needs to happen and creating a step-by-step plan. It can call up Google Search when it needs information. Once it has a plan, it passes natural language instructions to Gemini Robotics 1.5, which handles the actual physical movements.

More technically speaking, the new Gemini Robotics 1.5 is a vision-language-action (VLA) model that turns visual information and instructions into motor commands, while the new Gemini Robotics-ER 1.5 is a vision-language model (VLM) that creates multistep plans to complete a mission.

When a robot sorts laundry, for instance, it internally reasons through the task using a chain of thought: understanding that "sort by color" means whites go in one bin and colors in another, then breaking down the specific motions needed to pick up each piece of clothing. The robot can explain its reasoning in plain English, making its decisions less of a black box.
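In pseudocode, the division of labor looks something like the sketch below. Every name here is a hypothetical stand-in (Google hasn't published this interface); it only illustrates the planner-to-executor handoff described above.

```python
# Illustrative sketch of the two-model split. All class and function
# names are hypothetical stand-ins, not the actual Gemini Robotics API.
from dataclasses import dataclass


@dataclass
class Step:
    instruction: str  # natural-language instruction for the action model


def plan_task(mission: str) -> list[Step]:
    """Stand-in for Gemini Robotics-ER 1.5: break a mission into steps.
    The real model can also call Google Search for missing context."""
    if "sort laundry" in mission:
        return [
            Step("Pick up the next piece of clothing"),
            Step("If it is white, place it in the whites bin"),
            Step("Otherwise, place it in the colors bin"),
        ]
    return [Step(mission)]


def execute(step: Step) -> bool:
    """Stand-in for Gemini Robotics 1.5 (the VLA model): turn vision
    plus a natural-language instruction into motor commands."""
    print(f"[action model] executing: {step.instruction}")
    return True  # pretend the motion succeeded


def run(mission: str) -> None:
    for step in plan_task(mission):  # the "brain" plans...
        if not execute(step):        # ...the VLA model acts
            # The real system re-plans when a grasp slips or an
            # object moves mid-task; here we simply stop.
            break


run("sort laundry by color")
```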

Google CEO Sundar Pichai chimed in on X, noting that the new models will enable robots to better reason, plan ahead, use digital tools like search, and transfer learning from one kind of robot to another. He called it Google's "next big step towards general-purpose robots that are truly helpful."

New Gemini Robotics 1.5 models will enable robots to better reason, plan ahead, use digital tools like Search, and transfer learning from one kind of robot to another. Our next big step towards general-purpose robots that are truly helpful — you can see how the robot reasons as… pic.twitter.com/kw3HtbF6Dd

— Sundar Pichai (@sundarpichai) September 25, 2025

The release puts Google in a spotlight shared with developers like Tesla, Figure AI and Boston Dynamics, though each company is taking different approaches. Tesla focuses on mass production for its factories, with Elon Musk promising thousands of units by 2026. Boston Dynamics continues pushing the boundaries of robot athleticism with its backflipping Atlas. Google, meanwhile, bets on AI that makes robots adaptable to any situation without specific programming.

The timing matters. American robotics companies are pushing for a national robotics strategy, including establishing a federal office focused on promoting the industry at a time when China is making AI and intelligent robots a national priority. China is the world's largest market for robots that work in factories and other industrial environments, with about 1.8 million robots operating in 2023, according to the Germany-based International Federation of Robotics.

DeepMind's approach differs from traditional robotics programming, where engineers meticulously code every movement. Instead, these models learn from demonstration and can adapt on the fly. If an object slips from a robot's grasp or someone moves something mid-task, the robot adjusts without missing a beat.

The models build on DeepMind's earlier work from March, when robots could only handle single tasks like unzipping a bag or folding paper. Now they're tackling sequences that would challenge many humans, like packing appropriately for a trip after checking the weather forecast.

For developers wanting to experiment, there's a split approach to availability. Gemini Robotics-ER 1.5 launched Thursday through the Gemini API in Google AI Studio, meaning any developer can start building with the reasoning model. The action model, Gemini Robotics 1.5, remains exclusive to “select” (meaning “rich,” probably) partners.
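For those with an API key, asking the reasoning model for a plan looks roughly like this. A minimal sketch assuming the google-genai Python SDK; the model ID below is the preview name and may have changed by the time you read this.

```python
# Minimal sketch: calling the reasoning model through the Gemini API.
# Requires an API key from Google AI Studio; the model ID is an
# assumption based on the preview naming and may differ.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",
    contents="Plan the steps to pack a suitcase for a rainy weekend trip.",
)
print(response.text)  # the model's step-by-step plan, in plain English
```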
