Nvidia announces open AI models for autonomous driving
Nvidia presented a package of new open AI models and developer tools aimed at accelerating research in autonomous driving and robotics, underlining the company’s broader push to build what it calls the foundation for physical AI, Qazinform News Agency correspondent reports, citing TechCrunch.
At the center of the release is Alpamayo R1, an open reasoning vision-language model created specifically for autonomous driving research. Nvidia describes it as the first vision-language-action model focused on this field. Such systems combine visual perception with language-based reasoning, allowing vehicles to interpret their surroundings while linking what they see to the decision-making processes used for navigation and safety.
Alpamayo R1 is built on Nvidia’s Cosmos Reason framework, a family of models designed to work through complex decisions before producing an output. The Cosmos line was first introduced in January 2025 and expanded with additional models in August. By applying this reasoning approach to driving scenarios, Nvidia aims to help autonomous systems handle more subtle, human-like judgments in real traffic situations.
The company says technologies of this kind are essential for reaching Level 4 autonomy, the stage at which vehicles are capable of driving themselves fully within defined areas and under specific conditions.
To encourage adoption and experimentation, Nvidia has released Alpamayo R1 publicly through GitHub and Hugging Face, making it accessible to researchers and developers worldwide.
Earlier, Qazinform News Agency reported that Nvidia had posted stronger-than-expected quarterly results.