Intel ‘Emerald Rapids’ 5th-Gen Xeon Platinum 8592+ Review: 64 Cores, Tripled L3 Cache and Faster Memory Deliver Impressive AI Performance

Intel is keeping things competitive: the company’s new $11,600 flagship 64-core Emerald Rapids 5th-Gen Xeon Platinum 8592+ arrives as part of a complete refresh of its Xeon product stack as it grapples with AMD’s EPYC Genoa lineup, which continues to chew away at Intel’s market share. Our benchmarks in this article show that Emerald Rapids delivers surprisingly impressive performance uplifts, drastically improving Intel’s competitive footing against AMD’s competing chips. Critically, Intel’s new chips have also arrived on schedule, a much-needed confirmation that the company’s turnaround remains on track.

For Emerald Rapids, Intel has added four cores to the flagship over the prior generation, bringing it to 64 cores and providing up to 128 cores and 256 threads per dual-socket server. The company has also tripled the L3 cache and moved to faster DDR5-5600 memory across the breadth of its product stack. In concert with other targeted improvements, including a significant redesign of the die architecture, Intel claims these changes deliver gen-on-gen gains of 42% in AI inference, 21% in general compute workloads, and 36% in performance-per-watt.
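
As a quick sanity check rather than anything Intel supplies, the short sketch below reads the socket, core, thread, and L3 figures a Linux host reports through the standard lscpu utility; on a dual-socket 8592+ system you would expect 2 sockets, 128 physical cores and 256 threads.

```python
# Illustrative sketch (our own, not from Intel): query basic CPU topology on Linux.
# Assumes the standard `lscpu` utility from util-linux is installed.
import subprocess

def lscpu_field(name: str) -> str:
    """Return a single field from `lscpu` output, e.g. 'CPU(s)' or 'L3 cache'."""
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == name:
            return value.strip()
    return "n/a"

if __name__ == "__main__":
    # A dual-socket Xeon Platinum 8592+ box should report 2 sockets,
    # 64 cores per socket, 2 threads per core, 256 logical CPUs.
    for field in ("Socket(s)", "Core(s) per socket", "Thread(s) per core", "CPU(s)", "L3 cache"):
        print(f"{field}: {lscpu_field(field)}")
```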

As with the previous-gen Sapphire Rapids processors, Emerald Rapids leverages the ‘Intel 7’ process, albeit a more refined version of the node, and the slightly enhanced Raptor Cove microarchitecture. However, the new Emerald Rapids server chips come with plenty of innovations and design modifications that far exceed what we’ve come to expect from a refresh generation. Intel moved from the complex quad-chiplet design of the top-tier Sapphire Rapids chips to a simpler two-die design that wields a total of 61 billion transistors, with the new die offering a more consistent latency profile. Despite the redesign, Emerald Rapids maintains backward compatibility with the existing Sapphire Rapids ‘Eagle Stream’ platform, reducing validation time and allowing for fast market uptake of the new processors.
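
The claim of a more consistent latency profile is something you can eyeball yourself. The sketch below, an illustration rather than anything from Intel’s documentation, dumps the NUMA distance table Linux exposes under /sys/devices/system/node; a smaller spread between local and remote distances suggests a flatter topology.

```python
# Illustrative sketch: dump the NUMA distance table from sysfs as a rough proxy
# for how uniform cross-die memory latency looks on a given platform.
# The sysfs layout is standard Linux; the interpretation is our assumption.
from pathlib import Path

def numa_distances() -> dict:
    """Map each NUMA node ID to its reported distances to all nodes."""
    table = {}
    for node_dir in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        node_id = int(node_dir.name[len("node"):])
        table[node_id] = [int(d) for d in (node_dir / "distance").read_text().split()]
    return table

if __name__ == "__main__":
    for node, dists in numa_distances().items():
        # A small gap between the local distance and remote distances
        # points to a more consistent latency profile.
        print(f"node{node}: {dists}")
```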

Emerald Rapids still trails in overall core counts: AMD’s Genoa tops out at 96 cores with the EPYC 9654, a 32-core advantage. As such, Emerald Rapids won’t be able to match Genoa in many of the densest general compute workloads; the latter’s 50% core-count advantage is tough to beat in most parallel workloads. However, Intel’s chips still satisfy the requirements of the majority of the market (the highest-tier chips always comprise a much smaller portion of the market than the mid-range), and Intel leans on its suite of built-in accelerators and its performance in AI workloads to tackle AMD’s competing 64-core chips with what it claims is a superior blend of performance and power efficiency.
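
For a rough sense of what that 50% core-count gap means in practice, here is a back-of-the-envelope Amdahl’s-law sketch. The parallel fractions are our own assumptions, not figures from either vendor; it simply shows that the extra cores only pay off fully when a workload is almost perfectly parallel.

```python
# Back-of-the-envelope sketch (our illustration): ideal Amdahl's-law scaling
# for a 64-core part vs. a 96-core part at a few assumed parallel fractions.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup for a workload with the given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    for p in (0.90, 0.95, 0.99):
        s64 = amdahl_speedup(p, 64)
        s96 = amdahl_speedup(p, 96)
        # The 96-core part's lead approaches the full 1.5x only as the
        # workload becomes nearly fully parallel.
        print(f"parallel={p:.2f}  64c={s64:5.1f}x  96c={s96:5.1f}x  ratio={s96 / s64:.2f}")
```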

There’s no doubt that Emerald Rapids significantly improves Intel’s competitive posture in the data center, but AMD’s Genoa launched late last year, and the company’s Zen 5-powered Turin counterpunch is due in 2024. Those chips will face Intel’s Granite Rapids processors, which are scheduled for the first half of 2024. A new battlefield has also formed — AMD has its density-optimized Bergamo with up to 128 cores in the market, and Intel will answer with its Sierra Forest lineup with up to 288 cores early next year.

It’s clear that the goalposts will shift soon for the general-purpose processors we have under the microscope today; here’s how Intel’s Emerald Rapids stacks up against AMD’s current roster. 

Intel Emerald Rapids 5th-Gen Xeon Specifications and Pricing
