Cryptopolitan 2025-12-02 09:34:41

Nvidia rolls out new open models to boost physical and digital AI

Nvidia has unveiled new tools and open models for physical and digital AI, including Nvidia Drive Alpamayo-R1 (AR1) for autonomous driving. The tech giant announced at the NeurIPS AI conference that it is further expanding its collection of open AI models, tools, and datasets, saying the new releases aim to support research in the AI industry and beyond. It touted Alpamayo-R1 as the world's first industry-scale open reasoning vision language action (VLA) model for self-driving cars, and also hinted at new datasets and models for AI safety and speech.

The tech firm's researchers have prepared over 70 papers, workshops, and talks for the conference, sharing projects that span medical research, autonomous driving, and AI reasoning.

Nvidia shows stronger commitment to open source

The tech giant demonstrated a more substantial commitment to open source at the event, an effort recognized by AI benchmarking platform Artificial Analysis with its new Openness Index. The index rated the company's Nemotron family of AI tools among the best available, based on the amount of technical information shared, the usability of model licenses, and the clarity of data-use policies.

Meanwhile, AR1 integrates chain-of-thought AI reasoning with path planning to enable level 4 autonomy and enhance autonomous vehicle (AV) safety across varied road scenarios. The chipmaker said earlier generations of autonomous driving models struggled with situations such as a pedestrian-heavy intersection, a car double-parked in a bicycle lane, or an approaching road closure. Reasoning, however, gives autonomous vehicles the common sense to drive more like humans: AR1 breaks a scenario down, reasons through each step to weigh possible outcomes, and then uses contextual data to choose the most effective course of action.

Nvidia claims that AR1's chain-of-thought reasoning lets it process data along its path and use that information to plan a trajectory, such as stopping for jaywalkers. The model's open foundation is based on the firm's Cosmos Reason, which researchers can customize for non-commercial use cases, including benchmarking and experimental AV applications, according to the chipmaker. Nvidia Drive Alpamayo-R1 will be available on Hugging Face and GitHub, while a subset of the data used to train and evaluate the model will be available through Nvidia Physical AI Open Datasets (a minimal download sketch appears below).

Reinforcement learning proves effective for AR1's post-training

Nvidia researchers said reinforcement learning has proven effective for AR1's post-training. They noted that developers can also learn how to use and post-train Cosmos-based models with step-by-step reasoning, with examples of quick-start inference and advanced post-training in the Cosmos Cookbook, a comprehensive guide for physical AI developers that covers data curation, model evaluation, and synthetic data generation.

Meanwhile, the chipmaker said the possibilities for Cosmos-based applications are virtually limitless, pointing to examples such as LidarGen, Omniverse NuRec Fixer, Cosmos Policy, and ProtoMotions3.
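For developers who want to experiment with AR1 once the release lands, the checkpoint and the public data subset should be retrievable with the standard huggingface_hub client. The snippet below is only a minimal sketch: both repository ids are assumptions for illustration, not names confirmed in Nvidia's announcement.

# Minimal sketch: fetching the (hypothetical) Alpamayo-R1 checkpoint and the
# public dataset subset from Hugging Face. Both repo ids are assumptions.
from huggingface_hub import snapshot_download

# Download the model weights (repo id is hypothetical).
model_dir = snapshot_download(
    repo_id="nvidia/Alpamayo-R1",                      # assumed repo id
    local_dir="checkpoints/alpamayo-r1",
)

# Download the train/eval data subset (repo id is hypothetical).
data_dir = snapshot_download(
    repo_id="nvidia/PhysicalAI-AV-Alpamayo-Subset",    # assumed repo id
    repo_type="dataset",
    local_dir="data/alpamayo-r1-subset",
)

print(f"Model files in {model_dir}, data files in {data_dir}")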
The tech firm boasted that LidarGen was the world's first model to generate lidar data for autonomous vehicle simulations, and that its Omniverse NuRec Fixer model for robotics and AV simulation taps into Nvidia's Cosmos Predict. ProtoMotions3 is an open-source, GPU-accelerated framework built on Nvidia Newton and Isaac Lab that can be used to train physically simulated humanoid robots and digital humans, according to the chipmaker.

The Cosmos world foundation models (WFMs) can be used to generate realistic scenes, while policy models can be trained in Isaac Sim and Isaac Lab. The data generated by those policy models can then be used to post-train the chipmaker's Groot N models for robotics.
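The closing pipeline Nvidia describes, training a policy in simulation and then using the data that policy generates to post-train a larger model, follows a familiar imitation-style pattern. The toy sketch below illustrates only that general pattern in plain PyTorch; it does not use Isaac Sim, Isaac Lab, or Groot N APIs, and every class, dimension, and hyperparameter in it is a placeholder.

# Toy sketch of the "simulate, collect, post-train" loop described above.
# TeacherPolicy stands in for a policy trained in simulation; StudentModel
# stands in for a larger model being post-trained on the collected data.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 8

class TeacherPolicy(nn.Module):
    """Stand-in for a policy trained in a simulator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
    def forward(self, obs):
        return self.net(obs)

class StudentModel(nn.Module):
    """Stand-in for a foundation model being post-trained on simulated data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))
    def forward(self, obs):
        return self.net(obs)

def collect_rollouts(policy, num_steps=1024):
    """Generate (observation, action) pairs from the simulated policy."""
    obs = torch.randn(num_steps, OBS_DIM)   # placeholder for simulator observations
    with torch.no_grad():
        actions = policy(obs)
    return obs, actions

teacher, student = TeacherPolicy(), StudentModel()
obs, target_actions = collect_rollouts(teacher)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
for epoch in range(5):
    pred = student(obs)
    loss = nn.functional.mse_loss(pred, target_actions)  # imitate the simulated policy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")

In practice the observations would come from a simulator rather than torch.randn, and the student would be a pretrained foundation model being fine-tuned, but the collect-then-post-train structure is the same.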
