
Nvidia Announces New AI Chips

On Tuesday, March 18, at its annual GTC conference, Nvidia announced new chips designed for developing and deploying artificial intelligence models.


Chief executive officer Jensen Huang unveiled a chip family called Blackwell Ultra, which is set to go on sale in the second half of this year. He also presented the next-generation graphics processing units, called Vera Rubin, which are expected to become available next year.

The company, led by Jensen Huang, is one of the main beneficiaries of the artificial intelligence boom that began after OpenAI’s ChatGPT debut in November 2022. Nvidia, based in Santa Clara, California, is the leading global supplier of the chips needed to train machine intelligence models and keep them running. Chips are the basic hardware foundation without which the artificial intelligence industry could neither exist nor advance, and Nvidia’s graphics processing units hold most of the market for advanced AI development. The company’s sales have increased sixfold since the end of 2022 amid the rapid scaling of the artificial intelligence industry, and last year its market capitalization crossed the historic $3 trillion mark, making Nvidia one of the most valuable companies in the world. Its prospects remain broadly positive: the AI industry continues to develop at a high pace, so demand for chips is likely to keep growing, and Nvidia has no comparable competitors, as other chipmakers produce in smaller volumes and their products are less popular with customers.

Software developers and investors are closely watching the new chips from the company headed by Jensen Huang. It is important for them to understand whether Nvidia’s new products deliver enough performance and efficiency to justify the largest end customers, including Microsoft, Google, and Amazon, continuing to spend billions of dollars on data centers built around the Santa Clara-based company’s chips.

Jensen Huang said that over the last year, almost the entire world got involved. According to him, the computing requirement, the scaling law of artificial intelligence, has proved more resilient and has in fact hyper-accelerated.

The announcements made on Tuesday, as noted by the media, are a test of Nvidia’s annual release strategy, under which the company debuts a new chip family every year. Before the artificial intelligence boom, Nvidia released new chip architectures every two years.

The GTC conference in San Jose, California, serves as a showcase of the company’s strength. Ahead of the event, the media reported that 25,000 attendees and hundreds of firms, including Microsoft, Waymo, and Ford, were expected to take part and discuss ways of using Nvidia hardware for artificial intelligence. General Motors said it will use Nvidia’s services for its next-generation vehicles.

The chip architecture that follows Rubin will be named after physicist Richard Feynman, Nvidia announced on Tuesday, continuing the company’s practice of naming chip families after scientists. Sales of Feynman chips are expected to launch in 2028.

Nvidia also announced new laptops and desktops using its chips, including personal computers called DGX Spark and DGX Station. The company said these devices will be able to run large artificial intelligence models such as Llama or DeepSeek.

Nvidia also announced updates to its networking parts for tying hundreds or thousands of graphics processing units together so they work as one, as well as a software package called Dynamo, which helps users get the most out of their chips.

Nvidia plans to begin shipping systems based on its next-generation graphics processing unit family, named after astronomer Vera Rubin, in the second half of 2026. The system consists of two main components: a central processing unit called Vera and a new graphics processing unit design called Rubin.

Nvidia representatives said that Vera is the company’s first custom central processing unit design, based on a core architecture called Olympus.

Previously, when Nvidia needed central processing units, it used an off-the-shelf design from Arm. Firms that have developed custom Arm core designs, such as Qualcomm and Apple, say the approach allows chips to be more tailored and unlocks better performance.

Nvidia said the custom Vera design will be twice as fast as the central processing unit used in last year’s Grace Blackwell chips.

Paired with Vera, Rubin can manage 50 petaflops while doing inference, more than double the 20 petaflops of the current Blackwell chips. Rubin can also support up to 288 gigabytes of fast memory, one of the core specifications that artificial intelligence developers watch.

Nvidia is also changing what it calls a graphics processing unit. The company said that Rubin actually consists of two GPUs.

The current Blackwell graphics processing unit, which is already on the market, is itself made of two separate chips assembled and operated as one.

Starting with Rubin, Nvidia said, when it combines two or more dies into a single chip, it will count them as separate graphics processing units. In the second half of 2027, the company intends to release a chip called Rubin Next, which will combine four dies into a single chip, doubling Rubin’s speed, and Nvidia will refer to it as four GPUs.

The firm said the chips will come in a rack called Vera Rubin NVL144. Previous versions of Nvidia’s racks were called NVL72.

The company also announced a new version of its Blackwell family of chips, called Blackwell Ultra. The chip will be able to produce more tokens per second, meaning it can generate more content in the same amount of time as its predecessor.

Nvidia said cloud providers can use Blackwell Ultra to offer premium artificial intelligence services for time-sensitive applications, allowing them to make as much as 50 times the revenue from the new chips as from the Hopper generation, which shipped in 2023.

Blackwell Ultra will come in a version with two chips paired with an Nvidia Arm central processing unit, called GB300, and a version with just the graphics processing unit, called B300. There will also be versions with eight GPUs in a single server blade and a rack version with 72 Blackwell chips.

Nvidia said the top four cloud companies have deployed three times as many Blackwell chips as Hopper chips.

The Chinese artificial intelligence model DeepSeek R1, which debuted in January, unsettled some Nvidia investors. At the same time, Nvidia has embraced the software: the chipmaker will use the model to benchmark several of its new products.

Many artificial intelligence observers have said that DeepSeek R1 threatens Nvidia’s business, claiming the model requires fewer chips than comparable systems developed by US-based companies. Jensen Huang, however, does not see DeepSeek R1 as a threat. According to him, the model is actually a good sign for Nvidia, because DeepSeek R1 uses a reasoning process that requires a lot of computing power to provide users with better answers.

Nvidia representatives said the new Blackwell Ultra chips are better suited to reasoning models, as the company has designed them to perform inference more efficiently. As new reasoning models demand more computing power, Nvidia’s chips will be able to handle it.

Jensen Huang said that over the past two or three years a major breakthrough, a fundamental advance, has occurred in artificial intelligence, which he described as agentic AI. This type of artificial intelligence can reason about how to answer a question or solve a problem, and it is more practical for performing a wide range of tasks.

As we have reported earlier, Nvidia Releases Gaming Chips.

Serhii Mikhailov

