Nvidia (NVDA) CEO Jensen Huang unveiled Alpamayo, a family of open models built for autonomous driving, during his keynote address at CES 2026 in Las Vegas on Monday.
“Today, we are introducing Alpamayo, the world’s first thinking model for autonomous driving,” Huang said. “Not only does your car drive as you would expect it to drive, but it reasons out any situation it could come upon.”
Mercedes-Benz’s (MBGAF)(MBGYY) new CLA model, the brand’s first vehicle built on the MB.OS platform, will include Nvidia’s full-stack DRIVE AV software, AI infrastructure and accelerated compute. The two companies have collaborated on the system for several years.
Huang said the vehicles will be available in the U.S. during the first quarter of 2026, in Europe during the second quarter, and in other markets and regions during the second half of 2026.
“This is the first large-scale physical AI market,” Huang said. “We can all agree it is fully here. In the next 10 years I’m fairly certain a large percentage of the world’s cars will be autonomous.”
Other companies adopting Alpamayo models for autonomous vehicles include JLR, Lucid (LCID) and Uber (UBER). Alpamayo was trained with Nvidia’s Cosmos, a world foundation model that uses synthetic data to train robots for the physical world. Cosmos synthesized millions of driving miles for Alpamayo before the model was ever deployed on roadways.
“It’s impossible for us to generate every single instance that could happen in every country in the world with every single driver,” Huang said. “But it can respond to a new situation and apply reason to generate a proper response.”
The CLA received a five-star safety rating from the European New Car Assessment Programme (Euro NCAP) for its accident mitigation and avoidance features. Its safety systems include multiple guardrails and redundancies to navigate complex and unforeseen environments.
Alpamayo’s emergence foreshadows the greater role Nvidia plans to play in the rise of physical AI, or robotics.
“The next era for AI is robotics,” Huang said.
Vera Rubin enters full production
Huang also revealed that Vera Rubin, Nvidia’s next-generation computing platform and the successor to Grace Blackwell, has entered full production.
It begins with a Vera CPU, a Rubin GPU and 17,000 additional components on a Vera Rubin compute board, a complete redesign with no cables or fans. In total, the platform features six new chips: the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet switch.
“Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof,” said Huang. “With our annual cadence of delivering a new generation of AI supercomputers — and extreme codesign across six new chips — Rubin takes a giant leap toward the next frontier of AI.”
Infrastructure software and storage partners on the Vera Rubin include AIC, Cloudian, DDN, Dell (DELL), HPE (HPE), Hitachi Vantara, IBM (IBM), NetApp (NTAP), Nutanix (NTNX), Pure Storage (PSTG), Supermicro (SMCI), SUSE, VAST Data and WEKA.
AI labs, startups and cloud service providers expected to adopt Rubin include Amazon Web Services (AMZN), Anthropic, Black Forest Labs, Cisco (CSCO), Cohere, CoreWeave (CRWV), Cursor, Dell, Google (GOOG)(GOOGL), HPE, Lenovo, Meta (META), Microsoft (MSFT), Mistral AI, Nebius (NBIS), OpenAI, Oracle (ORCL) and Perplexity.
“The efficiency gains in the NVIDIA Rubin platform represent the kind of infrastructure progress that enables longer memory, better reasoning and more reliable outputs,” said Anthropic co-founder and CEO Dario Amodei. “Our collaboration with NVIDIA helps power our safety research and our frontier models.”