At CES 2026, Nvidia unveiled Alpamayo, a complete open-source AI suite that aims to give machines more human-like reasoning in robotics and autonomous driving. The platform combines AI models, simulation tools, and a vast body of driving data to enable self-driving cars to comprehend, plan, and justify their behavior in complex real-world scenarios, much as a human would.
At the core of the launch is Alpamayo-1, which Nvidia describes as the first chain-of-thought, vision-language-action (VLA) reasoning model built specifically for autonomous vehicle research. Featuring 10 billion parameters, the model analyzes video inputs to generate driving paths while transparently showing the logic behind each decision. The company has released the model on Hugging Face with open weights and training scripts, allowing developers to adapt it for real-world vehicles or build complementary tools such as auto-labeling systems and evaluation frameworks. Nvidia says future revisions will be more powerful, offer more configuration options, and support optional commercial deployment.
To support large-scale training, Nvidia is also releasing an extensive driving dataset of 1,700 hours of footage, captured across varied locations, road types, weather conditions, and rare driving situations. Alongside it comes AlpaSim, an open-source simulation platform that helps developers recreate real-life driving conditions, including sensors, traffic behavior, and road infrastructure. With this tool, developers can safely test and refine autonomous systems before they are allowed on public roads.
A key focus of Alpamayo is explainability. By processing situations step by step, the reasoning-based models make it easier to understand why a vehicle takes a particular action, an aspect Nvidia believes is essential for building trust, safety, and regulatory acceptance. The platform is integrated with Nvidia's Halos safety system, which is designed to support safe and scalable autonomous deployment.
Speaking at the event, Nvidia CEO Jensen Huang called Alpamayo "the ChatGPT moment for physical AI". He argued that Alpamayo would allow self-driving cars to handle atypical and unexpected scenarios while explaining their reasoning in a way that is easy to understand. Although the technology has been warmly received and is seen as a significant step forward for self-driving cars, it also underscores the fast pace and growing complexity of AI development. By continually releasing large-scale models, datasets, and simulations, Nvidia is pushing the limits of innovation, even as concerns about governance, safety standards, and regulation continue to lag behind.
With Alpamayo, Nvidia is positioning itself at the center of the next phase of autonomous mobility—one where machines don’t just react, but reason.