Nvidia Unveils GH200 Super Chip and GH200 DGX Compute Platform for AI

The big picture: Nvidia’s new strategy centers around generative AI, large language models and recommender systems, as well as its latest DGX supercomputer. The company believes these will soon be the “digital engines of the modern economy” as companies like Meta, Google and Microsoft are racing to realize the benefits of artificial intelligence using Nvidia’s Grace, Hopper and Ada Lovelace hardware architectures.

It’s no secret by now that Nvidia has gone all-in on selling shovels to companies large and small that are feverishly delving into the land of generative AI in search of digital treasures. The company is well positioned to capitalize on this trend and could very well become the first chipmaker with a $1 trillion valuation, more than double that of TSMC, the company that manufactures more than half of the world’s most advanced chips.

Nvidia’s announcements at Computex 2023 reflect this new strategy very well. Nvidia CEO Jensen Huang revealed the company’s Grace Hopper GH200 superchips are now in full production, highlighting their potential to accelerate computing services and software for new business models and optimize existing ones.

Huang says the tech industry has hit a hard wall with traditional architecture in recent years, which is why it is increasingly turning to GPUs and accelerated computing to solve complex computing tasks. To meet this increased demand, Nvidia has developed a new DGX GH200 supercomputing platform that packs 256 Grace Hopper GH200 super chips.

Each Grace Hopper unit combines a Grace CPU and an H100 Tensor Core GPU, and the DGX GH200 system is allegedly capable of delivering an exaflop of compute performance and ten times the memory bandwidth of the previous generation. For reference, the first exascale computer was the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee, which hit 1.2 exaflops in last year’s Linpack benchmark, taking the crown from Japan’s Fugaku system.

The DGX GH200 also features 144 terabytes of shared memory, 500 times more than the DGX A100 system it is replacing. This should allow companies to more easily build and run generative AI models like the one behind ChatGPT. Nvidia says Microsoft, Google Cloud and Meta are among the first customers for the new supercomputer, while Japan’s SoftBank is looking to bring GH200 super chips to data centers across the country.

Nvidia will also connect four DGX GH200 systems via Quantum-2 InfiniBand networking, with bandwidth of up to 400 Gb/s, to create its own AI supercomputer called Helios. Separately, the company is introducing more than 400 different system configurations coming to market in the coming months that integrate the Hopper, Grace, and Ada Lovelace architectures for a variety of high-performance computing applications.

Standing next to a full-size visual representation of the DGX GH200 on stage, Huang described it as “four elephants, one GPU,” since any GH200 unit has access to the entire 144-terabyte memory pool. He also joked that audience members were probably wondering whether the new system could run Crysis. Given that enthusiasts have managed to run the famous title directly from the VRAM of a GeForce RTX 3090, you could probably run many thousands of simultaneous instances on a monster like the DGX GH200.

One thing is certain: Nvidia is laser-focused on capitalizing on the AI chip boom, as advances in this area now generate more than half of its revenue. The new DGX supercomputer is yet another attempt to keep the industry hooked on Nvidia products. Whether a company wants to power 5G networks, generative AI services, factory robots, augmented and virtual reality experiences, or advertising engines, Nvidia wants to be its go-to provider for accelerated computing.

Gamers are still on the company’s radar, albeit more through the lens of what AI can do to improve gaming experiences. For example, Nvidia’s new Avatar Cloud Engine for Games will allow developers to enhance interactions with non-player characters by linking them to a large language model. The company wouldn’t say what the system requirements for this new technology will be, but we do know that Nvidia’s research arm is busy exploring ways to use artificial intelligence to optimize assets in future games.
