Cryptopolitan 2025-12-02 09:34:41

Nvidia rolls out new open models to boost physical and digital AI

Nvidia has unveiled new tools and open models for physical and digital AI, including Nvidia Drive Alpamayo-R1 (AR1) for autonomous driving. The tech giant announced at the NeurIPS AI conference that it is expanding its collection of open AI models, tools, and datasets.

Nvidia said its new open physical and digital AI models aim to support research in the AI industry and beyond. It touted Alpamayo-R1 as the world's first industry-scale open reasoning vision-language-action (VLA) model for self-driving cars, and also hinted at new datasets and models for AI safety and speech.

The company's researchers have prepared more than 70 papers, workshops, and talks for the conference, sharing projects that span medical research, autonomous driving, and AI reasoning.

Nvidia shows stronger commitment to open source

The tech giant demonstrated a deeper commitment to open source at the event, an effort recognized by the AI benchmarking platform Artificial Analysis through its new Openness Index. The index rated the company's Nemotron family of models as one of the best available, based on how much technical information is shared, how easy the model licenses are to work with, and how clearly data use is documented.

Meanwhile, AR1 integrates chain-of-thought reasoning with path planning to enable Level 4 autonomy and improve autonomous vehicle (AV) safety across varied road scenarios. The chipmaker said earlier autonomous driving models struggled with situations such as a pedestrian-heavy intersection, a car double-parked in a bicycle lane, or an approaching road closure. Reasoning, it argues, gives autonomous vehicles the common sense to handle such cases the way a human driver would.

AR1 achieves this by breaking a scenario down and reasoning through each step to weigh possible outcomes, then using contextual data to choose the most effective course of action. Nvidia says the model's chain-of-thought reasoning lets it interpret what it observes along its route and use that information to plan a trajectory, such as stopping for jaywalkers.

The model's open foundation is built on the company's Cosmos Reason, which researchers can customize for non-commercial use cases, including benchmarking and experimental AV applications. Nvidia Drive Alpamayo-R1 will be available on Hugging Face and GitHub, while a subset of the data used to train and evaluate the model is available through the Nvidia Physical AI Open Datasets.

Reinforcement learning proves effective for AR1's post-training

Nvidia researchers reported that reinforcement learning has proven effective for AR1's post-training. They added that developers can learn how to use and post-train Cosmos-based models with step-by-step reasoning, with examples of quick-start inference and advanced post-training available in the Cosmos Cookbook. The comprehensive guide for physical AI developers covers step-by-step data curation, model evaluation, and synthetic data generation.

The chipmaker said the possibilities for Cosmos-based applications are virtually limitless, pointing to examples such as LidarGen, Omniverse NuRec Fixer, Cosmos Policy, and ProtoMotions3.
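Before turning to those applications, here is a minimal sketch of how a researcher might fetch the AR1 checkpoint for the kind of quick-start workflow the Cosmos Cookbook describes, assuming the model is published as a standard Hugging Face repository. The repository id and local path below are placeholders for illustration, not confirmed release names, and this is not the Cookbook's own code.

```python
# Illustrative sketch only: the repo id is a placeholder assumption, not a
# confirmed Nvidia release name. Requires `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

# Download the model weights and configs to a local directory for offline use.
local_dir = snapshot_download(
    repo_id="nvidia/Alpamayo-R1",   # hypothetical repo id; check Nvidia's Hugging Face page
    local_dir="./alpamayo-r1",
)

print(f"Checkpoint files downloaded to: {local_dir}")
```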
The tech firm boasted that LidarGen is the world's first model to generate lidar data for autonomous vehicle simulations, and noted that its Omniverse NuRec Fixer model for robotics and AV simulation builds on Nvidia's Cosmos Predict. ProtoMotions3 is an open-source, GPU-accelerated framework built on Nvidia Newton and Isaac Lab that can be used to train physically simulated humanoid robots and digital humans, according to the chipmaker.

Cosmos world foundation models (WFMs) can be used to generate realistic scenes, and policy models can be trained in Isaac Sim and Isaac Lab. The data generated from those policy models can then be used to post-train the chipmaker's GR00T N models for robotics.
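To illustrate that simulate-then-post-train pipeline in the most generic terms, the sketch below collects policy rollouts from a simulator and saves them for a later post-training step. It uses the open-source Gymnasium API with a random policy as a stand-in; it is not Isaac Sim, Isaac Lab, or GR00T code, and the environment name and output file are placeholders.

```python
# Generic stand-in for "run a policy in simulation, then reuse the rollouts as
# post-training data". Not Isaac Sim / Isaac Lab / GR00T code. Requires `pip install gymnasium`.
import pickle
import gymnasium as gym

env = gym.make("CartPole-v1")  # placeholder environment, not a robotics simulator
rollouts = []

for episode in range(10):
    obs, info = env.reset(seed=episode)
    done = False
    trajectory = []
    while not done:
        action = env.action_space.sample()          # stand-in for a trained policy
        next_obs, reward, terminated, truncated, info = env.step(action)
        trajectory.append((obs, action, reward))    # (state, action, reward) tuples
        obs = next_obs
        done = terminated or truncated
    rollouts.append(trajectory)

# Persist the rollouts so a downstream post-training job can consume them.
with open("rollouts.pkl", "wb") as f:
    pickle.dump(rollouts, f)

print(f"Saved {len(rollouts)} episodes of simulated experience.")
```

In a real pipeline, the random policy would be replaced by one trained in simulation, and the saved trajectories would feed a post-training recipe of the kind the Cosmos Cookbook documents.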
