
AMD Announces the 5th Gen AMD EPYC Processors: Everything You Need to Know

In October 2024, AMD introduced its 5th Gen EPYC processors, codenamed Turin, delivering a major step forward in enterprise server capabilities.


"Time to upgrade to AMD 5th Gen EPYC processors?"

Built for high-performance computing (HPC), artificial intelligence (AI), and cloud environments, these processors bring higher core counts, faster memory, and better energy efficiency than the previous generation, helping data centers and businesses handle demanding workloads more effectively and scale up their operations with ease.


 
 

Architectural Advancements: Zen 5 and Core Density


The 5th Gen EPYC processors are powered by AMD's Zen 5 architecture, with up to 192 cores and 384 threads per CPU (the highest-core-count SKUs use the compact Zen 5c core). This core density makes the new EPYC processors ideal for tasks requiring massive parallel processing, such as AI model training, HPC simulations, and big data analytics. AVX-512 is now implemented with a full 512-bit data path, rather than the 256-bit double-pumped approach of Zen 4, which benefits compute-heavy workloads like AI inference and deep learning.
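
If you want to confirm that a given host actually exposes AVX-512 before scheduling compute-heavy jobs on it, the relevant CPU flags are visible in /proc/cpuinfo on Linux. A minimal sketch in Python; which AVX-512 sub-features matter depends on your workload, so treat the list below as an example:

    # Report which AVX-512 sub-features this Linux host advertises.
    # Flag names follow the standard /proc/cpuinfo convention.
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags")).split()

    for isa in ("avx512f", "avx512bw", "avx512vl", "avx512_vnni", "avx512_bf16"):
        print(f"{isa}: {'present' if isa in flags else 'missing'}")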


"5th Gen AMD EPYC CPU: 192 cores and 384 threads!"

In addition to the higher core density, the processors deliver an IPC (instructions per clock) gain of around 17%, with specific workloads like AI and HPC seeing as much as a 37% performance-per-clock improvement over the previous generation. Boost clocks reach up to 5 GHz on certain models, so single-threaded workloads are well served too.
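
As a rough way to translate those figures into expected single-thread gains, combine the IPC uplift with the change in clock speed. The sketch below uses AMD's published IPC numbers; the previous-generation boost clock is a placeholder to replace with the SKU you are actually running:

    # Back-of-the-envelope single-thread uplift: IPC gain x clock-speed ratio.
    ipc_gain_general = 1.17   # ~17% average IPC uplift (AMD's published figure)
    ipc_gain_ai_hpc  = 1.37   # up to ~37% for selected AI/HPC workloads
    old_boost_ghz    = 3.7    # placeholder: boost clock of the CPU being replaced
    new_boost_ghz    = 5.0    # top boost clock quoted for 5th Gen EPYC

    clock_ratio = new_boost_ghz / old_boost_ghz
    print(f"General workloads: ~{ipc_gain_general * clock_ratio:.2f}x per thread")
    print(f"AI/HPC workloads:  ~{ipc_gain_ai_hpc * clock_ratio:.2f}x per thread")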


AI and HPC Workloads: EPYC Shines


AMD has positioned its 5th Gen EPYC processors as a leading solution for AI and machine learning workloads, especially when paired with AMD Instinct MI325X GPUs. These processors are designed to accelerate AI training and inference by leveraging AVX-512 optimizations and high core counts. In some AI training scenarios, the EPYC 9575F offers up to 20% better performance than equivalent Intel Xeon configurations, making it an attractive option for enterprises building AI clusters.


Intel's closest equivalents are the Xeon 6 "Granite Rapids" and earlier "Emerald Rapids" series, which also target AI and machine learning workloads. These Xeons feature AVX-512 support, and Granite Rapids adds MRDIMM technology that provides higher peak memory bandwidth than AMD's EPYC platform. With their focus on memory bandwidth and efficiency, they are strong competitors for similar AI and HPC tasks.


Server Models Built for 5th Gen AMD EPYC


Several major server brands have incorporated the 5th Gen AMD EPYC processors into their platforms, making it easier for enterprises to adopt the new technology:


  • Hewlett Packard Enterprise (HPE):

    • The HPE ProLiant DL385 Gen11 server is optimized for memory-intensive workloads and virtualization tasks, benefiting from the improved core count and memory bandwidth of the EPYC processors.

    • HPE's GPU-dense AI platforms, such as the ProLiant Compute XD685, pair EPYC processors with GPU accelerators, offering an ideal environment for AI and HPC workloads.


  • Dell Technologies:

    • The Dell PowerEdge R7715 and R7725 servers offer an excellent balance of performance and scalability for AI, data analytics, and cloud computing workloads.


  • Lenovo:

    • Lenovo ThinkSystem SR655 V3 and SR665 V3 servers are optimized for databases, analytics, and virtualization, benefiting from the EPYC processors’ performance-per-watt advantages.


  • Supermicro:

    • The Supermicro A+ Series leverages the core density and energy efficiency of AMD EPYC processors, making it a popular choice for cloud and HPC environments.

 

Whether you're upgrading existing infrastructure or building new systems, 5th Gen AMD EPYC processors are an excellent choice. Here are some common scenarios:


Upgrade Scenarios


For companies running older servers, such as Intel Cascade Lake or AMD EPYC Rome, upgrading to the 5th Gen EPYC processors can significantly reduce the number of servers required while improving performance. AMD claims that two 192-core EPYC Turin servers can replace as many as seven older dual-socket Intel Cascade Lake servers, saving up to 68% in power consumption while maintaining or even exceeding performance. This consolidation not only reduces hardware sprawl but also lowers power and cooling costs.
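
The arithmetic behind that kind of consolidation claim is easy to reproduce for your own estate. The sketch below uses illustrative placeholder power figures rather than measured values, so substitute your own core counts and wall power:

    # Rough consolidation estimate: legacy fleet vs. new fleet.
    old_servers    = 7
    old_cores_each = 2 * 28    # dual-socket Cascade Lake, e.g. 28 cores per CPU
    old_power_w    = 900       # placeholder wall power per legacy server

    new_servers    = 2
    new_cores_each = 192       # one 192-core EPYC Turin CPU per server (assumption)
    new_power_w    = 1100      # placeholder wall power per new server

    core_ratio   = new_servers * new_cores_each / (old_servers * old_cores_each)
    power_saving = 1 - new_servers * new_power_w / (old_servers * old_power_w)
    print(f"New cores vs. old cores: {core_ratio:.2f}x")
    print(f"Estimated power saving:  {power_saving:.0%}")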


New Purchase Scenarios


If your organization is building new AI or HPC clusters, or expanding cloud infrastructure, the 5th Gen EPYC processors are designed to handle the most demanding workloads efficiently. The high core counts, energy efficiency, and scalability make them an ideal fit for cloud service providers and enterprises looking to future-proof their infrastructure. Additionally, the processors' ability to host more virtual machines (VMs) or containers per server ensures optimal resource usage, reducing operational costs in the long run.
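
For capacity planning, a quick way to gauge VM density on a dual-socket Turin host is to start from logical CPUs and an assumed overcommit ratio. Both the average VM size and the overcommit factor below are assumptions to adjust for your own environment:

    # Approximate VM capacity of a dual-socket host with 192-core CPUs.
    cores_per_socket = 192
    sockets          = 2
    threads_per_core = 2      # SMT enabled
    vcpus_per_vm     = 8      # assumed average VM size
    overcommit       = 1.5    # assumed vCPU:pCPU overcommit ratio

    logical_cpus = cores_per_socket * sockets * threads_per_core
    max_vms = int(logical_cpus * overcommit / vcpus_per_vm)
    print(f"Approximate VM capacity per host: {max_vms}")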

 
 

Power Efficiency and Performance Gains


One of the most compelling aspects of the 5th Gen AMD EPYC processors is their performance per watt. While the top SKUs carry TDPs of up to 500W, the additional performance they deliver makes them more energy-efficient than comparable Intel processors for the same amount of work. This is particularly beneficial for data centers aiming to reduce energy consumption without sacrificing performance.


Memory Bandwidth and Scalability Considerations


The 5th Gen EPYC processors support 12 channels of DDR5 memory per socket at speeds up to 6400 MT/s. However, it's worth noting that MRDIMM support in Intel's latest Xeon 6 processors offers even higher peak memory bandwidth, which can be a factor for HPC customers who prioritize memory performance. Nonetheless, AMD's chiplet design keeps the memory controllers on a separate I/O die, so higher core counts do not reduce the memory bandwidth available per socket, making these processors versatile across different workload types.
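
The theoretical ceiling behind those memory figures is straightforward to compute: channels × transfer rate × 8 bytes per transfer. The sketch below assumes all 12 channels per socket are populated; sustained real-world bandwidth will land below this number:

    # Theoretical peak DDR5 bandwidth per socket.
    channels       = 12        # DDR5 channels per socket on the SP5 platform
    transfers_s    = 6400e6    # DDR5-6400 -> 6.4 billion transfers per second
    bytes_per_xfer = 8         # 64-bit data path per channel

    peak_gb_s = channels * transfers_s * bytes_per_xfer / 1e9
    print(f"Theoretical peak per socket: {peak_gb_s:.1f} GB/s")   # ~614.4 GB/s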

 
 
