AMD Selects Samsung Electronics as Main Supplier of HBM3E Memory for AI Chips

Semiconductor giant AMD has selected Samsung Electronics as its primary partner for High Bandwidth Memory (HBM3E). The memory is destined for the company's latest artificial intelligence accelerators, the MI325X and MI350 series. The deal marks a new stage in the collaboration between AMD and Samsung and a significant shift in the balance of the semiconductor industry. In today's market, where billions of dollars are being invested in AI infrastructure, supply stability is a critical factor for hardware vendors.

Technological Partnership Details: 12-Layer Breakthrough

Samsung passed AMD's rigorous quality-control audits and will supply 12-layer HBM3E modules. This is a significant achievement for Samsung, which previously faced yield challenges in HBM production. The move from HBM3 to HBM3E (Enhanced) brings a 50% increase in per-pin speed, up to 9.6 Gbps. The 12-layer technology provides:

  • 36 Gigabytes of memory in a single stack — the highest capacity in the industry at launch.
  • 1.2 Terabytes/second bandwidth, which is a massive leap over the 819 GB/s provided by standard HBM3.
  • Advanced TSV (Through-Silicon Via): Utilizing refined vertical interconnects to minimize signal delay between the 12 stacked dies.
  • Improved Thermal Management: Despite the stack height, Samsung uses specialized non-conductive film (NCF) to maintain structural integrity and heat dissipation.
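The headline numbers above follow directly from the HBM interface arithmetic. A quick sketch, assuming the standard 1024-bit bus per HBM stack and 24 Gb (3 GB) dies, which is how the quoted figures are typically derived:

```python
# Back-of-the-envelope check of the HBM3E figures quoted above,
# assuming the standard 1024-bit data bus per HBM stack.

BUS_WIDTH_BITS = 1024  # data pins per HBM stack

def stack_bandwidth_gbs(pin_speed_gbps: float) -> float:
    """Peak bandwidth of one stack in GB/s: pins * per-pin rate / 8 bits per byte."""
    return BUS_WIDTH_BITS * pin_speed_gbps / 8

hbm3 = stack_bandwidth_gbs(6.4)    # HBM3:  819.2 GB/s
hbm3e = stack_bandwidth_gbs(9.6)   # HBM3E: 1228.8 GB/s (~1.2 TB/s)

print(f"HBM3:  {hbm3:.1f} GB/s")
print(f"HBM3E: {hbm3e:.1f} GB/s ({hbm3e / hbm3 - 1:.0%} faster)")

# Capacity side: 12 dies of 24 Gb (3 GB) each per stack,
# and eight stacks per GPU as on the MI325X.
gb_per_stack = 12 * 3  # 36 GB
print(f"Per stack: {gb_per_stack} GB; 8 stacks: {8 * gb_per_stack} GB")
```

The 50% per-pin speedup (6.4 to 9.6 Gbps) translates one-to-one into the 819 GB/s to ~1.2 TB/s bandwidth jump, and eight 36 GB stacks yield the 288 GB total quoted for the MI325X.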

This move will allow AMD to reduce its dependence on SK Hynix and increase production volumes to meet surging demand from cloud providers and AI researchers. As businesses demand concrete results, having a reliable memory pipeline is essential for delivering next-gen hardware on time. By securing Samsung as a primary partner, AMD ensures it won't be sidelined by the massive supply requirements of its competitors.

Leadership Position and Roadmap

AMD CEO Lisa Su, speaking about future plans, noted: "Our AI roadmap is focused on annual performance growth, and collaboration with Samsung will help us maintain this pace." Su confirmed that the company plans to release a new generation of AI accelerators every year to stay competitive.

The AMD AI Roadmap is structured as follows:

  • MI325X (2025) — The current flagship featuring HBM3E memory and 288GB of total capacity.
  • MI350 (2026) — New architecture based on CDNA 4, targeting a 35x performance increase in inference.
  • MI400 (2027) — The next generation featuring the upcoming HBM4 standard with 16-layer and 20-layer stack options.

The leap to HBM4 in 2027 is expected to be even more dramatic, with memory bandwidth exceeding 2 TB/s per GPU. This aggressive plan puts AMD in direct competition with NVIDIA, which has also moved to an annual update cycle for its Blackwell and future architectures. The battle for the AI data center is no longer just about the GPU core, but about the speed and volume of the memory surrounding it.

Why This Is Important for Samsung

This agreement is vital for Samsung, which had previously lagged behind competitors in the HBM3E market. According to Samsung's own forecasts, the development of artificial intelligence will sharply increase chip demand, and the partnership with AMD helps the company catch this wave. It is a reputational breakthrough for Samsung's HBM business: the company spent billions of dollars modernizing production lines, and receiving AMD certification proves that the investment has paid off. At the same time, Samsung's mobile division faces internal challenges, making the chip business even more strategically significant.

Supply Chain Diversification and Global Market

Analysts estimate that diversifying suppliers will help AMD compete with NVIDIA, as memory chip shortages remain a major industry challenge. In the past two years, HBM shortages have delayed AI accelerator production and significantly impacted the market. Supply chain stability is especially relevant as:

  • Global demand for AI accelerators grows by 40% annually.
  • HBM memory production is concentrated in only three companies (Samsung, SK Hynix, Micron).
  • Geopolitical risks threaten the Asian semiconductor industry hubs.
  • New data centers are opening worldwide at an unprecedented scale.

Strategic Approach to Competition

AMD's strategy in competing with NVIDIA is based on several key elements. First, price competition — the MI325X accelerator is positioned more attractively than NVIDIA's H200 or upcoming B200 solutions for certain scale-out workloads. Second is the open ecosystem approach — AMD's ROCm platform is open source, giving developers more flexibility than closed proprietary stacks. As the rapid growth of AI startups shows, there is room for more than one dominant player. Despite the collaboration between Google Cloud and NVIDIA, large cloud providers are actively seeking alternative high-performance suppliers.

Future Perspective and AI Economy

The future of the semiconductor industry is directly linked to AI development progress. Partnerships like the one between AMD and Samsung are significant milestones. As demonstrated by Tesla's Terafab project, large companies are increasingly dedicating resources to their own and partner-led AI chip production, placing traditional chipmakers in a highly dynamic landscape. Experts predict the AI chip market will reach $300 billion by 2027, and AMD's market share growth depends on the success of the MI350/MI400 series.

Frequently Asked Questions

What is HBM3E memory?

HBM3E (High Bandwidth Memory 3 Enhanced) is an enhanced revision of the HBM3 standard, designed for AI workloads. It provides roughly 1.2 TB/s of bandwidth per stack, and 12-layer configurations add 36 GB of capacity per stack.

Why is memory type important for AI?

Training and running Large Language Models (LLMs) requires processing enormous amounts of data rapidly. HBM provides 10 times more bandwidth than conventional DDR memory used in desktops.

What challenges did Samsung face with HBM?

Samsung initially had quality control issues with HBM3E yields, causing some major vendors to delay adoption. The company has since invested heavily to fix these production bottlenecks.

Can AMD really compete with NVIDIA?

AMD has a real chance to capture significant market share via price competition, a 12-layer capacity advantage, and its commitment to the open ROCm software ecosystem.

When will MI400 with HBM4 arrive?

According to current roadmaps, MI400 is slated for a 2027 launch, targeting the next massive leap in AI training efficiency and model size capability.