Advanced Micro Devices’ (AMD) fourth generation EPYC “Bergamo” processors take an interesting approach to optimisation for the cloud computing data centre market. Rather than push to higher clock rates, AMD designed the device geometry around a clock rate of 3.1 GHz, which shrinks each CPU core by around 35%. This permits packing up to 128 cores into a single package, all implementing the full instruction set (no more “performance” vs. “efficiency” cores, which don’t fit the cloud provider business model), while providing better energy efficiency per core.
The new AMD Instinct MI300A artificial intelligence (AI) and high-performance computing (HPC) accelerator integrates 24 Zen 4 CPU cores, 228 GPU compute units, and 128 GB of on-chip high-bandwidth memory shared between the CPU and GPU. A subsequent MI300X model, optimised for AI acceleration work, will delete the CPU cores to provide 304 GPU compute units and 192 GB of on-chip memory. A server configuration with eight MI300 chips will be available for building supercomputers.
Here is more on the AMD AI announcements from WccfTech.