Nvidia CEO shares vision for data center overhaul

[Image: Nvidia signage]

Nvidia experienced a noticeable gain of 24.7% last week, driven in part by a statement from CEO Jensen Huang the day before the big stock surge that emphasized the need to replace outdated data center equipment with new chips as more companies integrate artificial intelligence.

“The computer industry is going through two simultaneous transitions: accelerated computing and generative AI,” Huang said in Nvidia’s quarterly earnings statement to investors on May 24. “One trillion dollars of installed global data center infrastructure will shift from general-purpose to accelerated computing as enterprises rush to apply generative AI into every product, service and business process.”

Obsolete equipment will be replaced, but at what cost?

However, there are questions about the extent and cost of these replacements. Implementing AI may require specialized processors or other hardware upgrades for optimal performance, but how much a given business must replace, and at what price, depends on its specific requirements and should be assessed case by case.

Bradley Shimmin, AI and data industry analyst at technology research and advisory group Omdia, recognizes the potential for companies to capitalize on the generative AI trend, which may require new approaches to acceleration hardware. However, Shimmin doesn’t fully share Huang’s belief that data centers need to replace all equipment.

“For many use cases, especially those involving very demanding model-training requirements, enterprises will look to reduce costs and accelerate time to market by investing in the latest and greatest AI hardware acceleration,” Shimmin said. “However, there is a reverse trend right now where researchers are learning to do more with less: models with fewer parameters, highly curated datasets, and smarter training/optimization using PEFT [parameter-efficient fine-tuning] and LoRA, for example.”
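To make the parameter-efficiency point concrete, here is a minimal sketch of the low-rank idea behind LoRA, written in PyTorch. The layer sizes, rank, and scaling factor are arbitrary assumptions for illustration only, not figures from Nvidia, Omdia, or any particular model.

```python
# Minimal sketch of the LoRA idea: instead of updating a full weight matrix W,
# train a small low-rank correction B @ A, so far fewer parameters need gradients.
# All dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Frozen "pretrained" weight, standing in for one layer of a large model.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable low-rank factors: only rank * (in + out) parameters.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(in_features=1024, out_features=1024, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # roughly 16K of 1.06M
```

In this toy setup only about 1.5% of the layer’s parameters are trained, which is the kind of saving that lets enterprises fine-tune models without the full-scale hardware Huang describes.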

Financial hurdles and physical limitations of the data center

Beyond the physical limitations of existing facilities, any attempt to improve transistor density is not without obstacles. Building fabs comes at a high cost, especially when coupled with the growing expense of cutting-edge process nodes. Data center leaders must address these financial concerns as they strive to meet ever-increasing demand for more advanced data center infrastructure.

As the data center industry continues to evolve, finding cost-effective ways to increase transistor density, along with retaining skilled staff, will be a critical focus for data center operators.

Expanding ecosystems and chip architectures

Chipmakers are also rushing to support generative AI use cases on smaller target platforms, such as Samsung’s efforts to run large models on-chip and on-phone, Shimmin pointed out. This indicates that the overall ecosystem will expand across various chip types and deployment configurations, including back-end training and edge or on-device inference. More chip architectures, such as RISC-V, FPGAs, and GPUs, and specialized solutions such as AWS Trainium and Inferentia, will play a significant role in this evolving landscape.

“It’s easy to see that the whole ecosystem will explode,” Shimmin said.

AI has become a focal point for investors and data center infrastructure managers because of its growing demands for scale, driven largely by the overwhelming success of OpenAI’s GPT models.

But creating powerful language or image models is something only a few companies can do. In the past, significant improvements came from scaling smaller models up to data-center-sized systems. To keep pushing the boundaries of the technology, companies will need to invest in better and more advanced hardware, lending much credence to Huang’s statement.

Karl Freund, founder and principal analyst at Cambrian-AI Research, told Data Center Knowledge that he would never bet that Jensen was wrong.

“He is a visionary like no other,” said Freund. “Jensen has said for years that the data center would be accelerated, and it is happening. By processor type, the GPU segment accounted for the top revenue share of 46.1% in 2021.”

Nvidia investors, however, may want to temper their expectations of a continued earnings rally. There are indications that simply scaling models ever larger may soon level off. While implementing AI may require specialized processor or hardware upgrades for optimal performance, the extent of replacements will likely vary between companies. As the technology ecosystem evolves, optimizations and advances in AI models are expected to offer workarounds that balance hardware needs.
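As one concrete example of such a model-level workaround, the sketch below illustrates post-training weight quantization, a general technique the article itself does not name; the tensor sizes and the simple per-tensor int8 scheme are assumptions made purely for illustration. Storing weights as 8-bit integers instead of 32-bit floats cuts their memory footprint roughly fourfold, which can reduce, though not eliminate, the pressure to buy new hardware.

```python
# Illustrative sketch of post-training int8 weight quantization, one of the
# model-level optimizations that can reduce hardware requirements.
# The weight tensor and quantization scheme are toy assumptions.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a single per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
print(f"float32: {w.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```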

Sam Altman, the OpenAI CEO who recently asked Congress to consider regulatory proposals on AI, said further advances in AI won’t come from making models bigger.

“I think we’re at the end of the era where there’s going to be these, like, giant, giant models,” Altman told an audience at an event held at MIT in early April, Wired reported. “We’ll make them better in other ways.”


