What is NVIDIA’s Grace Hopper?
The Grace Hopper Superchip can be thought of as two chips on a single board: NVIDIA’s Hopper GPU and NVIDIA’s Grace CPU, tied together by NVIDIA’s signature NVLink-C2C technology to deliver exceptional levels of AI-accelerated performance. When I mention CPU+GPU, note that both parts are made by NVIDIA; in a way, you can say NVIDIA has finally entered the CPU market. A single Grace CPU carries 72 Arm v9 cores, and the dual-die Grace CPU Superchip scales that to 144 cores with up to 1 TB/s of memory bandwidth. The GPU side uses NVIDIA’s upcoming Hopper architecture (the data-center counterpart to Lovelace for consumers). Hopper GPUs feature 80 billion transistors built on the cutting-edge TSMC 4N process. Importantly, 4N falls under the umbrella of TSMC’s 5nm family, so it is best thought of as a refined, NVIDIA-tuned variant of that node rather than an entirely new one.
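To make the CPU+GPU pairing a little more concrete, here is a minimal sketch of the kind of code such a design is meant to serve: standard CUDA managed (unified) memory, where the CPU and GPU touch the same allocation without explicit copies. The CUDA calls below are ordinary and run on any recent CUDA setup; treating this as representative of what NVLink-C2C coherence accelerates on Grace Hopper is my assumption for illustration, not something specific to this chip.

```cuda
// Minimal sketch: CPU and GPU work on the same allocation without explicit copies.
// This is generic CUDA managed memory; on a coherent CPU+GPU design like Grace Hopper,
// the chip-to-chip link is what services these shared accesses (illustrative assumption).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;               // GPU writes directly into the shared allocation
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation visible to both CPU and GPU -- no cudaMemcpy needed.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f; // CPU initializes

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU updates in place
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);          // CPU reads the result: 2.0
    cudaFree(data);
    return 0;
}
```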
Grace CPU Architecture
NVIDIA’s new Scalable Coherency Fabric (SCF), a mesh interconnect, provides a massive 3.2 TB/s of bandwidth across the various units of the Grace chip. The mesh scales to 72+ cores, and each Grace CPU carries 117 MB of L3 cache. Another diagram gives us much more information: every CPU supports up to 68 PCIe Gen 5.0 lanes (12+56), including 4 PCIe 5.0 x16 connections. In addition, 16 LPDDR5X memory controllers (MC) can also be found on the chip.
Information Regarding NVLink-C2C
GPUs now process data faster than ever; however, the bandwidth and latency of transfers between CPU and GPU remain a bottleneck. To counter this, NVIDIA built a custom CPU+GPU Superchip, largely removing that bottleneck and maximizing bandwidth. The NVLink-C2C interface provides around 900 GB/s of bandwidth, roughly 7x that of a PCIe 5.0 x16 link (which tops out at about 128 GB/s in both directions combined). Efficiency-wise, NVLink-C2C uses just 1.3 pJ/bit, making it up to 5x more energy efficient than a PCIe 5.0 interface.
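For a sense of how a figure like 900 GB/s would show up in practice, here is a hedged sketch that times a plain host-to-device copy using CUDA events. The timing code itself is standard CUDA; the expectation that it would report roughly PCIe-class numbers on a normal workstation and far higher numbers over an NVLink-C2C connected system is an assumption based on the figures above, not a measurement.

```cuda
// Rough host-to-device bandwidth check using CUDA events.
// The number it reports reflects whatever link connects CPU and GPU memory
// (PCIe on a typical workstation; NVLink-C2C on a Grace Hopper system -- assumption).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ull << 30;              // 1 GiB transfer
    void *host = nullptr, *dev = nullptr;

    cudaMallocHost(&host, bytes);                 // pinned host memory for a fair measurement
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host->Device: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaFree(dev);
    cudaFreeHost(host);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```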
Release Date
NVIDIA’s Grace CPUs and Hopper GPUs are slated to launch sometime in Q1/Q2 2023. The Grace CPU is geared toward high performance computing, whereas the Hopper GPU is targeted at AI training and HPC workloads.