Friday, April 18, 2025

New AMD AI Advancements – EPYC CPU vs Nvidia Grace, Ryzen AI MAX 395 Performance Benchmarks, Versal AI Edge in Space


AMD announces the following updates across its AI compute portfolio:

AMD EPYC leadership in enterprise AI: AMD EPYC processors, built on the x86 architecture, deliver leadership performance and compatibility with broadly deployed workloads when compared to Arm-based solutions. AMD EPYC processors outperform Nvidia Grace in key enterprise AI workloads, with 2.75x better power efficiency in dual-socket configurations and 2.17x higher performance in database workloads.

AMD Ryzen AI MAX+ 395 offers AI performance uplift over competition: The new Ryzen AI MAX+ 395 represents a significant leap in AI processing capabilities for consumer laptops, combining Zen 5 CPU cores with a 50 TOPS XDNA 2 NPU and integrated GPU to offer unprecedented AI performance for premium thin and light devices. It demonstrates remarkable improvements in running local AI models, specifically in LLM applications, with significant performance advantages over competitors.

AMD Versal AI Edge adaptive SoC qualified for spaceflight: The Versal AI Edge XQRVE2302 adaptive SoC achieves Class B spaceflight qualification, bringing accelerated AI inferencing to space with enhanced AI Engines in a smaller, more power-efficient package, enabling critical space applications.

The AMD Advantage for AI and Data Centers: In the rapidly evolving AI landscape, GPUs have become indispensable, driving advancements from deep learning to complex data analytics. The latest AMD Instinct™ accelerators are purpose-built for exceptional performance and efficiency, delivering leadership capabilities across foundation model training, fine-tuning, and inference. However, AI is not a one-size-fits-all challenge, and neither is enterprise IT infrastructure. While GPUs remain essential for large-scale generative AI, 5th Gen AMD EPYC™ CPUs are the world's best CPUs for enterprise AI, providing a scalable, cost-effective path for AI-enhanced applications and host node performance. With high-frequency processing, leadership memory capacity, and seamless x86 compatibility, AMD delivers an evolutionary path to AI acceleration, enabling organizations to integrate, scale, and modernize their infrastructure at their own pace, with little disruption.

Flight-Qualified AMD XQR Versal SoC Brings Accelerated AI Inferencing to Space: AMD Versal™ AI Edge XQRVE2302 becomes the second radiation-tolerant device in the space-grade (XQR) Versal adaptive SoC portfolio to be qualified for spaceflight, having achieved Class B qualification. Derived from the US military specification MIL-PRF-38535, the completion of Class B qualification, together with the publishing of the production data sheet, allows customers to begin placing orders, with devices expected to begin shipping in the fall.

AMD Versal AI Edge XQRVE2302 devices bring accelerated AI inferencing to space with integrated, enhanced AMD AI Engines (AIE) optimized for machine learning applications.

Known as AIE-ML, these compute engines deliver enhanced support for data types prevalent in AI inferencing, offering 2X the INT8 and 16X the BFLOAT16 performance, with reduced latency compared to first-generation AI Engines. Additionally, local memory has doubled via new memory tiles providing high bandwidth shared memory access.

The XQRVE2302 brings powerful computing into a small form factor (23mm x 23mm package) and is the industry's first adaptive SoC for space applications offered in such a compact package. It features the same high-performance processor system as the larger Versal AI Core XQRVC1902 space-grade device in less than 30% of the board area, resulting in a smaller footprint and a significant reduction in power consumption.

XQRVE2302 devices feature a dual-core Arm Cortex-A72 application processor and a dual-core Arm Cortex-R5F real-time processor, along with AIE-ML, DSP blocks, and FPGA programmable logic. With these features, developers can convert raw sensor data into useful information, making it ideal for edge processing in space, such as image detection and classification, autonomous navigation, and sensor data processing.

AMD Ryzen™ AI MAX+ 395 Processor: Breakthrough AI Performance In Thin And Light: The AMD Ryzen™ AI MAX+ 395 (codename: ‘Strix Halo’) is the most powerful x86 APU and delivers a significant performance boost over the competition. Powered by “Zen 5” CPU cores, a 50+ peak AI TOPS XDNA™ 2 NPU and a truly massive integrated GPU driven by 40 AMD RDNA™ 3.5 CUs, the Ryzen™ AI MAX+ 395 is a transformative upgrade for the premium thin and light form factor. The Ryzen™ AI MAX+ 395 is available in options ranging from 32GB all the way up to 128GB of unified memory – out of which up to 96GB can be converted to VRAM through AMD Variable Graphics Memory.

The Ryzen™ AI MAX+ 395 excels in consumer AI workloads such as LM Studio, a llama.cpp-powered application. Shaping up to be the must-have app for client LLM workloads, LM Studio allows users to run the latest language models locally without any technical knowledge required. Deploying new AI text and vision models on day one has never been simpler.
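For readers who want a feel for what "running locally" looks like in practice, the sketch below queries a model that LM Studio is already serving through its local, OpenAI-compatible endpoint (by default at http://localhost:1234). This is a minimal illustration, not an AMD-provided workflow; the model name is a placeholder for whichever model is loaded in LM Studio, and the prompt is arbitrary.

```python
# Minimal sketch: querying a locally served LM Studio model.
# Assumes LM Studio's local server is enabled on its default address
# and a model is already loaded; "local-model" is a placeholder name.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; LM Studio answers with the loaded model
        "messages": [
            {"role": "user", "content": "Summarize what an NPU does in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API shape, existing client tooling can generally point at the local server instead of a cloud service, which is what makes on-device LLMs on a laptop-class APU practical.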

The ‘Strix Halo’ platform extends AMD performance leadership in LM Studio with the new AMD Ryzen™ AI MAX+ series of processors. As a primer: model size is dictated by the number of parameters and the precision used. Generally speaking, doubling the parameter count (on the same architecture) or doubling the precision will also double the size of the model. Most of the competition's current-generation offerings in this space max out at 32GB of on-package memory, which caps the size of models that can run locally, whereas the Ryzen™ AI MAX+ 395's up to 96GB of graphics-addressable memory leaves far more headroom for larger models.
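As a rough illustration of that sizing rule (weights only, ignoring KV cache and runtime overhead, and not an official AMD sizing tool), the snippet below estimates model footprints at a few common precisions:

```python
# Back-of-the-envelope sketch of the rule above:
# model size ≈ parameter count × bytes per parameter (weights only).
BYTES_PER_PARAM = {"FP16/BF16": 2.0, "INT8": 1.0, "INT4": 0.5}

def model_size_gb(params_billions: float, precision: str) -> float:
    """Approximate weight footprint in GB for a given parameter count and precision."""
    return params_billions * BYTES_PER_PARAM[precision]

for params in (8, 70):
    for prec in ("FP16/BF16", "INT8", "INT4"):
        print(f"{params}B parameters @ {prec}: ~{model_size_gb(params, prec):.0f} GB")
```

At 16-bit precision a 70B-parameter model already needs roughly 140GB for weights alone, well beyond a 32GB memory pool even with aggressive quantization headroom, which is why a large pool of unified, graphics-addressable memory matters for local LLM work.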

Covered By: NCN MAGAZINE / AMD

If you have an interesting article / report / case study to share, please get in touch with us at editors@roymediative.com, roy@roymediative.com, 9811346846 / 9625243429
