NVIDIA unveils generative physical AI tools, revolutionizing industrial automation and robotics
- NVIDIA expands its Omniverse platform with generative AI models and blueprints for physical AI applications, including robotics, autonomous vehicles, and vision AI.
- Leading companies like Accenture, Siemens, Microsoft, and Ansys are adopting Omniverse to accelerate industrial AI development.
- New tools include generative AI models for 3D world-building, digital twin blueprints, and real-time physics visualization for industrial workflows.
- Physical AI enables autonomous machines to perceive, understand, and interact with the real world, transforming industries like manufacturing, logistics, and healthcare.
- These advancements could also have chilling implications, accelerating the deployment of capable autonomous robots in the real world.
New models and frameworks accelerate world building for physical AI
At the 2025 Consumer Electronics Show (CES) in Las Vegas, NVIDIA unveiled a suite of generative AI tools designed to revolutionize how industries build and deploy autonomous systems. The company announced new models, frameworks, and blueprints for its Omniverse platform, which integrates physical AI into robotics, autonomous vehicles, and industrial automation. With global leaders like Siemens, Microsoft, and Accenture already adopting the technology, NVIDIA is positioning itself at the forefront of the next industrial revolution.
Creating realistic 3D environments for physical AI training is a complex process that NVIDIA is simplifying with generative AI. The company introduced new tools, including the USD Code and USD Search NVIDIA NIM™ microservices, which allow developers to generate or search for OpenUSD assets using text prompts. Additionally, the NVIDIA Edify SimReady model automates the labeling of 3D assets with physical attributes like materials and physics, reducing processing times from 40 hours to mere minutes.
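As a rough illustration of the text-prompt workflow, the sketch below builds an OpenAI-style chat request asking a model for OpenUSD code. NIM microservices generally expose OpenAI-compatible APIs, but the endpoint URL and model id shown here are placeholders (assumptions), not verified values from NVIDIA's documentation.

```python
import json

# Hypothetical endpoint and model id -- NIM microservices typically expose an
# OpenAI-compatible chat API, but consult NVIDIA's docs for the actual values.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumption
MODEL = "nvidia/usdcode"  # placeholder model id, not verified

def build_usd_code_request(prompt: str) -> dict:
    """Build an OpenAI-style chat request asking the model for OpenUSD code."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,   # low temperature favors deterministic code output
        "max_tokens": 512,
    }

payload = build_usd_code_request(
    "Write USD Python that creates a cube prim named /World/Crate"
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the service with an API key; the response's message content would contain the generated USD snippet.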
NVIDIA’s Omniverse platform, paired with Cosmos™ world foundation models, acts as a synthetic data multiplication engine. Developers can create photo-realistic 3D scenarios and generate vast amounts of synthetic data for training physical AI systems.
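The "data multiplication" idea can be sketched at toy scale with simple domain randomization: starting from one authored scene description, emit many jittered variants for training. This is an illustrative analogy only, not NVIDIA's pipeline; the scene fields and jitter ranges are invented for the example.

```python
import random

# One hand-authored base scene (hypothetical fields for illustration).
BASE_SCENE = {"object": "pallet", "position": (0.0, 0.0, 0.0), "lighting": 1.0}

def randomize(scene: dict, rng: random.Random) -> dict:
    """Return a variant with jittered position and lighting intensity."""
    x, y, z = scene["position"]
    return {
        "object": scene["object"],
        "position": (x + rng.uniform(-1, 1), y + rng.uniform(-1, 1), z),
        "lighting": scene["lighting"] * rng.uniform(0.5, 1.5),
    }

def multiply(scene: dict, n: int, seed: int = 0) -> list:
    """Multiply one scene into n randomized training variants."""
    rng = random.Random(seed)
    return [randomize(scene, rng) for _ in range(n)]

variants = multiply(BASE_SCENE, 1000)
print(len(variants))  # 1000 variants from a single authored scene
```

In the real pipeline the "variants" are photorealistic rendered scenarios rather than dictionaries, but the multiplication principle is the same.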
“Physical AI will revolutionize the $50 trillion manufacturing and logistics industries,” said Jensen Huang, NVIDIA’s founder and CEO. “Everything that moves—from cars and trucks to factories and warehouses—will be robotic and embodied by AI.”
NVIDIA Omniverse blueprints speed up industrial, robotic workflows
During its CES keynote, NVIDIA introduced four new blueprints to streamline the development of OpenUSD-based digital twins for physical AI. These include:
- Mega: A blueprint for testing robot fleets in industrial factory or warehouse digital twins before real-world deployment.
- Autonomous vehicle (AV) simulation: A tool for replaying driving data, generating ground-truth data, and performing closed-loop testing for AV development.
- Omniverse Spatial Streaming to Apple Vision Pro: Enables immersive streaming of large-scale industrial digital twins to Apple’s mixed-reality headset.
- Real-Time Digital Twins for Computer-Aided Engineering (CAE): A reference workflow for real-time physics visualization, built on NVIDIA CUDA-X™ acceleration and Omniverse libraries.
These blueprints are already being adopted by industry leaders. For example, Accenture is using Mega to build next-generation autonomous warehouses for KION, a German supply chain solutions provider. Meanwhile, Foretellix is leveraging the AV simulation blueprint to optimize testing and validation for autonomous vehicles.
NVIDIA’s Omniverse platform is also gaining traction among global software developers and professional services firms. Siemens announced the availability of Teamcenter Digital Reality Viewer, the first Siemens Xcelerator application powered by Omniverse libraries. Cadence is integrating Omniverse into its Allegro electronic design software, while Ansys is adopting the platform for its Fluent CAE application.
In the automotive sector, Katana Studio is using Omniverse spatial streaming to create custom car configurators for Nissan and Volkswagen, enhancing the customer design experience. Innoactive, an XR streaming platform, is enabling Volkswagen Group to conduct design reviews at human-eye resolution using Apple Vision Pro.
Why physical AI matters today
Physical AI represents a paradigm shift in how autonomous systems interact with the real world. Unlike traditional generative AI models, which excel in language and abstract tasks, physical AI understands spatial relationships and physical behavior. This capability is critical for applications like robotics, autonomous vehicles, and smart spaces, where machines must navigate and adapt to dynamic environments.
For example, autonomous mobile robots (AMRs) in warehouses can now avoid obstacles and adjust their movements based on real-time sensor feedback. Surgical robots are learning intricate tasks like threading needles, while humanoid robots are developing the fine motor skills needed for diverse tasks. In the automotive sector, AVs trained on physical AI can better detect pedestrians, respond to traffic conditions, and navigate complex urban environments. Now, imagine what this technology could achieve for the military-industrial complex: preparing humanoid robots for wartime applications before they ever hit the battlefield.
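The obstacle-avoidance behavior described for warehouse AMRs can be sketched as a simple proportional speed controller, a toy illustration rather than anything from NVIDIA's stack: the robot drives at full speed when its forward range sensor sees open space, ramps down as an obstacle nears, and stops inside a safety margin. All thresholds below are invented for the example.

```python
def speed_command(distance_m: float,
                  max_speed: float = 2.0,
                  stop_dist: float = 0.5,
                  slow_dist: float = 3.0) -> float:
    """Map a forward range-sensor reading (meters) to a commanded speed (m/s).

    Full speed beyond slow_dist, linear ramp down to zero at stop_dist.
    """
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= slow_dist:
        return max_speed
    # Linear interpolation between the stop and slow thresholds.
    return max_speed * (distance_m - stop_dist) / (slow_dist - stop_dist)

for d in (5.0, 1.75, 0.3):
    print(f"{d} m -> {speed_command(d):.2f} m/s")
```

Real AMRs fuse many sensors and plan full trajectories, but this captures the core feedback loop: perception updates directly modulate motion.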
NVIDIA’s latest advancements in generative physical AI highlight the company’s commitment to driving innovation in industrial automation and robotics. By providing tools that simplify 3D world-building, accelerate digital twin development, and enhance real-time physics visualization, NVIDIA is empowering industries to re-imagine what’s possible. As physical AI continues to evolve, its impact will ripple across sectors, from manufacturing and logistics to healthcare and transportation, heralding a future where autonomous systems seamlessly integrate into the fabric of everyday life.
Sources include:
NVIDIANews.nvidia.com
NVIDIA.com