Customers deploying the latest AMD Instinct accelerator portfolio include major industry players such as Dell Technologies, Hewlett Packard Enterprise, Lenovo, Meta, Microsoft, Oracle, and Supermicro.
AMD has unveiled new accelerators and processors designed for the efficient execution of large language models (LLMs), a domain of generative AI chip technology where Nvidia currently leads. The AMD Instinct MI300X chip stands out for its industry-leading memory bandwidth tailored for generative AI and superior performance in LLM training and inference. Meanwhile, the AMD Instinct MI300A accelerated processing unit (APU) combines the cutting-edge AMD CDNA 3 architecture with “Zen 4” CPUs, delivering groundbreaking performance for both high-performance computing (HPC) and AI workloads.
Victor Peng, President of AMD, emphasized, “AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments.” Peng highlighted AMD’s holistic approach, combining advanced hardware, software, and an open ecosystem that enables cloud providers, OEMs, and ODMs to bring to market technologies that help enterprises adopt and deploy AI-powered solutions.
Compared with its predecessor, the AMD Instinct MI250X, the AMD Instinct MI300X offers significant enhancements: nearly 40% more compute units, 1.5 times more memory capacity, 1.7 times more peak theoretical memory bandwidth, support for new math formats such as FP8, and support for sparsity. These features are designed to meet the evolving demands of AI and HPC workloads.
Specifically, the AMD Instinct MI300X accelerators feature a class-leading 192GB memory capacity and a remarkable 5.3 TB per second peak memory bandwidth, ensuring optimal performance for increasingly sophisticated AI workloads.
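To see why memory bandwidth matters for LLM serving, consider a rough back-of-envelope sketch: autoregressive decoding is typically memory-bandwidth-bound, since generating each token requires streaming roughly all of the model's weights from HBM once. The 192 GB capacity and 5.3 TB/s figures below come from the announcement; the 70-billion-parameter FP16 model is an illustrative assumption, not something AMD cites.

```python
def min_decode_latency_ms(params_billion: float, bytes_per_param: float,
                          bandwidth_tb_s: float) -> float:
    """Lower bound on per-token decode latency for a bandwidth-bound LLM:
    time to stream all weights from memory once."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return weight_bytes / (bandwidth_tb_s * 1e12) * 1e3

# A hypothetical 70B-parameter model in FP16 (2 bytes/param) needs
# 140 GB of weights, which fits within the MI300X's 192 GB of HBM.
latency = min_decode_latency_ms(70, 2, 5.3)
print(f"{latency:.1f} ms per token")  # ~26.4 ms lower bound at 5.3 TB/s
```

Under these assumptions, larger memory capacity lets a single accelerator hold models that would otherwise be sharded across devices, and higher bandwidth directly raises the ceiling on tokens per second.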
The AMD Instinct Platform, built on an industry-standard OCP design, stands out as a leading generative AI platform. It incorporates eight MI300X accelerators, offering an industry-leading 1.5TB of HBM3 memory capacity, further solidifying AMD’s position in the AI landscape.
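The platform's quoted 1.5 TB figure follows directly from its configuration, as a quick check confirms:

```python
# Eight MI300X accelerators at 192 GB of HBM3 each.
accelerators = 8
hbm3_per_accelerator_gb = 192
total_gb = accelerators * hbm3_per_accelerator_gb
print(total_gb)  # 1536 GB, i.e. roughly 1.5 TB across the platform
```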