Skild AI has unveiled its latest look at a generalized brain for robotics. The company aims to provide a general-purpose brain, called the Skild Brain, capable of controlling diverse robots across environments and tasks. A newly released video highlights the company's early progress toward that goal.

Physical AI represents the convergence of artificial intelligence (AI) with physical systems like robots that can sense, act, and learn in real-world environments. It enables intelligent agents to process data, make decisions, and interact physically with their surroundings. The importance of Physical AI stems from its ability to bridge the gap between AI in software and tangible action in the physical world.

“Robotics is marred by Moravec’s paradox: the hard problems are easy and the easy problems are hard. A lot of current robotics models focus on tasks that are hard for humans and easy for robots: dancing, kung-fu, because they are free-space actions and do not require any generalization,” said Deepak Pathak, CEO and co-founder of Skild AI. “Skild AI models can not only solve these easy tasks but also solve everyday hard tasks such as climbing stairs even under adversarial conditions, or assembling fine-grained items, which require vision and reasoning about contact dynamics.”

It’s been a little over a year since the company closed a $300 million Series A round to fund this development cycle. In that time, the company has grown to more than 25 employees and raised a total of $435 million across two funding rounds.

Several other notable companies are also launching physical AI solutions. Physical Intelligence, co-founded by UC Berkeley professor Sergey Levine, is chasing the same end goal: a single brain, or foundation model, for any robot.


RoboBusiness 2025 Explores Physical AI

Physical AI will be a main topic at RoboBusiness (Oct. 15-16 in Santa Clara), the premier event for robotics developers, produced by The Robot Report. Deepu Talla, NVIDIA’s Vice President of Robotics and Edge AI, will deliver the opening keynote, “Physical AI for the New Era of Robotics.” He’ll explore the requirements for Physical AI, in which models can perceive, reason, and act in real-world environments.

Other talks about Physical AI will include:

How Multi-Model Decision Agents Improve Performance, Safety, Scale
Speaker: Robert Sun, Founding Engineer at Dexterity

How AI Enhances ABB’s Robot Performance
Speaker: Thomas-Tianwei Wang, Lead AI Application Engineer, ABB Robotics

Sim2Real Reinforcement Learning: Training Robots for the Real World
Speakers: Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering, UC Berkeley; Jeff Mahler, Co-Founder & CTO, Ambi Robotics

The Generalization Gap: Why Physical AI Needs Smarter Data Curation
Speaker: Benji Barash, Co-Founder & CEO, Roboto

Advancing Human-Robot Collaboration Through Natural Language AI
Speaker: Han-Pang Chiu, Technical Director, Center for Vision Technologies, Vision and Robotics Laboratory, SRI

5 Keys to Deploying AI-Powered Robots in Manufacturing
Speaker: SK Gupta, Co-Founder, Chief Scientist, GrayMatter Robotics

AI for Dexterity & Adaptation in High-Stakes Environments
Speaker: Vivian Chu, Co-Founder & Chief Innovation Officer, Diligent Robotics

Dexterous Robots in the Age of Embodied AI
Speaker: Mihai Jalobeanu, Founder & CEO, Dexman AI


Companies like NVIDIA are developing foundation models for robotics and creating simulation environments like Omniverse to train robots in virtual settings.

Boston Dynamics and Agility Robotics are designing physical humanoid and quadruped robots capable of performing complex movements and interacting with their surroundings. Waymo is a prominent example in the transportation sector, with its self-driving vehicles relying on Physical AI to navigate complex road conditions and anticipate interactions with other vehicles and pedestrians.

In warehouse automation, Amazon Robotics uses physical AI to optimize inventory movement and improve order fulfillment. These examples highlight the broad application and increasing focus on bringing AI out of the digital realm and into physical operations.

The Skild Brain is designed to be safe around humans while being highly adaptive to disturbances and human interactions.

A challenge in building a robotics foundation model is the limited availability of large-scale robotics data, since collecting real-world data with hardware is slow and expensive. Skild AI has addressed this by leveraging large-scale simulation and human videos from the internet to pre-train its foundation model. This approach allows the company to achieve scale before post-training the model with targeted real-world data to deliver working solutions to customers.
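Skild AI has not published its training code, but the two-stage recipe it describes, broad pre-training on cheap data followed by targeted post-training, is a common pattern in robot learning. The PyTorch sketch below is purely illustrative: the toy policy network, the random data generators, and every hyperparameter are hypothetical stand-ins, not Skild's actual model, data, or method.

```python
# Illustrative sketch only: a generic two-stage pipeline in which a policy is
# pre-trained on large-scale, inexpensive data (e.g., simulation or video-derived
# features) and then post-trained on a much smaller set of real-robot examples.
# All names, shapes, and hyperparameters are hypothetical.
import torch
import torch.nn as nn


def make_policy(obs_dim: int = 64, act_dim: int = 12) -> nn.Module:
    """A tiny policy head standing in for a large robot foundation model."""
    return nn.Sequential(
        nn.Linear(obs_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, act_dim),
    )


def run_stage(policy: nn.Module, batches, lr: float) -> None:
    """One training stage: behavior-cloning-style regression on (obs, action) pairs."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for obs, target_action in batches:
        opt.zero_grad()
        loss = loss_fn(policy(obs), target_action)
        loss.backward()
        opt.step()


def fake_batches(num_batches: int, batch_size: int = 32):
    """Random tensors standing in for simulated or real-robot trajectories."""
    for _ in range(num_batches):
        yield torch.randn(batch_size, 64), torch.randn(batch_size, 12)


if __name__ == "__main__":
    policy = make_policy()
    # Stage 1: pre-train at scale on cheap data (simulation, internet video).
    run_stage(policy, fake_batches(num_batches=1000), lr=1e-3)
    # Stage 2: post-train on a small amount of targeted real-world data,
    # typically at a lower learning rate to preserve the pre-trained skills.
    run_stage(policy, fake_batches(num_batches=50), lr=1e-4)
```

The point of the sketch is the ordering, not the architecture: scale comes from the first stage, where data is abundant, while the second stage adapts the model to the specific robots and tasks a customer actually runs.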

Raviraj Jain, Partner at Lightspeed, said, “Skild’s foundation models are truly generalizable across form factors, already showing emergent capabilities and are extremely robust – they represent a new paradigm in embodied AI. Unlike several other robotics demos that are often overfitted for the specific demo environment, Skild robots truly work ‘in-the-wild,’ safely navigating and co-existing with humans.”
