One of the leading companies in AI technology is Nvidia. The firm has recently introduced the latest iteration of its Grace Hopper platform, the GH200.
This advanced platform features the groundbreaking Grace Hopper Superchip, specifically designed for high-performance generative AI models like ChatGPT and Google Bard. The key innovation lies in the world’s first HBM3e processor customized for generative AI.
The GH200 chip comes in a dual configuration that provides up to 3.5 times more memory capacity and 3 times more bandwidth than the previous generation.
In this configuration, a single server packs 144 Arm Neoverse cores, 282GB of the newest HBM3e memory technology, and eight petaflops of AI performance.
As demand for Large Language Models (LLMs) like ChatGPT grows, the expense of running these models poses a challenge to their adoption. The new chip lowers this hurdle through improved energy efficiency.
The Grace Hopper chip has the potential to make AI technologies more accessible to businesses. This could lead to wider implementation of ChatGPT and similar LLMs, as affordability becomes less of a barrier.
Nvidia founder and CEO Jensen Huang emphasizes that the GH200 platform addresses the rising demand for generative AI: it offers better memory technology and bandwidth for improved throughput, and it allows GPUs to be connected easily for enhanced performance across data centers.
Nvidia expects leading system manufacturers to introduce platforms based on the GH200 Grace Hopper Superchip by Q2 of calendar year 2024.