AMD Instinct™ MI300 Series
Accelerators
Leadership Generative AI Accelerators and Data Center APUs
Supercharging AI and HPC
AMD Instinct™ MI300 Series accelerators are uniquely well-suited to power even the most demanding AI and HPC workloads, offering exceptional compute performance, large memory density, high bandwidth memory, and support for specialized data formats.
Under the Hood
AMD Instinct MI300 Series accelerators are built on AMD CDNA™ 3 architecture, which offers Matrix Core Technologies and support for a broad range of precision capabilities—from the highly efficient INT8 and FP8 (including sparsity support for AI), to the most demanding FP64 for HPC.
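To give a feel for how wide that precision range is, here is a minimal, illustrative sketch of the representable ranges of the formats mentioned above. It is not tied to any AMD API; the FP8 value assumes the E4M3 variant defined in the OCP 8-bit floating-point specification, which is the one commonly used for AI workloads.

```python
import sys

# Numeric ranges of the precision formats mentioned above (illustrative only).
int8_range = (-128, 127)          # 8-bit signed integer
fp8_e4m3_max = 448.0              # largest finite FP8 E4M3 value (OCP FP8 spec)
fp64_max = sys.float_info.max     # largest finite IEEE 754 double, ~1.8e308

print(int8_range, fp8_e4m3_max, f"{fp64_max:.3e}")
```

The roughly 300 orders of magnitude between FP8 and FP64 maxima is why AI inference can run in narrow formats while HPC solvers still need full double precision.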
Meet the Series
Explore AMD Instinct MI300X accelerators, the AMD Instinct MI300X Platform, and AMD Instinct MI300A APUs.
AMD Instinct MI300X Accelerators
AMD Instinct MI300X Series accelerators are designed to deliver leadership performance for Generative AI workloads and HPC applications.
  • 304 GPU Compute Units (CUs)
  • 192 GB HBM3 Memory
  • 5.3 TB/s Peak Theoretical Memory Bandwidth
Offers approximately 6.8X the AI training workload performance using FP8 vs. MI250 accelerators using FP16¹
Runs the 66B-parameter Hugging Face OPT transformer Large Language Model (LLM) on a single GPU²
AMD Instinct MI300X Platform
The AMD Instinct MI300X Platform integrates 8 fully connected MI300X GPU OAM modules onto an industry-standard OCP design via 4th-Gen AMD Infinity Fabric™ links, delivering up to 1.5 TB of HBM3 capacity for low-latency AI processing. This ready-to-deploy platform can accelerate time-to-market and reduce development costs when adding MI300X accelerators into existing AI rack and server infrastructure.
  • 8 MI300X GPU OAM modules
  • 1.5 TB Total HBM3 Memory
  • 42.4 TB/s Peak Theoretical Aggregate Memory Bandwidth
Expected to deliver 20.9 PFLOPS of FP16 and BF16 peak theoretical floating-point performance with sparsity³
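The platform totals above follow directly from the per-GPU MI300X figures. As a quick sanity check (assuming simple linear aggregation across the 8 OAM modules, with 1 TB = 1024 GB):

```python
# Per-GPU MI300X specifications quoted above
HBM3_PER_GPU_GB = 192     # GB HBM3 per MI300X
BW_PER_GPU_TBS = 5.3      # TB/s peak theoretical memory bandwidth per MI300X
GPUS_PER_PLATFORM = 8     # MI300X OAM modules per platform

total_hbm_tb = HBM3_PER_GPU_GB * GPUS_PER_PLATFORM / 1024  # 1536 GB = 1.5 TB
aggregate_bw_tbs = BW_PER_GPU_TBS * GPUS_PER_PLATFORM      # 42.4 TB/s

print(total_hbm_tb)                 # 1.5
print(round(aggregate_bw_tbs, 1))   # 42.4
```

Both derived values match the platform's quoted 1.5 TB total HBM3 capacity and 42.4 TB/s aggregate bandwidth.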
AMD Instinct MI300A APUs
AMD Instinct MI300A accelerated processing units (APUs) combine the power of AMD Instinct accelerators and AMD EPYC™ processors with shared memory to enable enhanced efficiency, flexibility, and programmability. They are designed to accelerate the convergence of AI and HPC, helping advance research and propel new discoveries.
  • 228 GPU Compute Units (CUs)
  • 24 “Zen 4” x86 CPU Cores
  • 128 GB Unified HBM3 Memory
  • 5.3 TB/s Peak Theoretical Memory Bandwidth
Offers approximately 2.6X the HPC workload performance per watt using FP32 compared to AMD Instinct MI250X accelerators⁴
Request consultation
Contact ASBIS experts to get more information about AMD Instinct MI300 Series accelerators.
©2024 Advanced Micro Devices, Inc. All rights reserved.
AMD, the AMD logo, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.