Meta Platforms has announced a multi-year agreement with Advanced Micro Devices (AMD) to power its artificial intelligence infrastructure with up to 6 gigawatts (GW) of AMD Instinct GPUs, marking one of the largest AI compute partnerships in the industry.
The collaboration is aimed at supporting Meta’s long-term ambition to build next-generation AI systems and advance what it calls “personal superintelligence.” The company said the scale of its expanding AI workloads requires massive, energy-efficient compute infrastructure capable of handling both training and inference at unprecedented levels.
Under the agreement, Meta and AMD will align their roadmaps across silicon, systems and software, enabling tighter vertical integration across Meta’s infrastructure stack. The partnership spans multiple generations of AMD Instinct GPUs, EPYC CPUs and rack-scale AI systems.
Shipments to support the first GPU deployments are expected to begin in the second half of 2026. These deployments will be built on the Helios rack-scale architecture, which AMD unveiled at the Open Compute Project Global Summit last year and developed in collaboration with Meta.
AMD Chair and CEO Lisa Su said the expanded strategic partnership will accelerate one of the industry's largest AI infrastructure rollouts, positioning AMD at the centre of the global buildout of AI compute.
Meta Founder and CEO Mark Zuckerberg described the deal as a key step in diversifying the company’s compute ecosystem. The agreement forms part of Meta’s broader “Meta Compute” initiative, which seeks to scale infrastructure for the AI era through a portfolio-based strategy.
In addition to third-party hardware partnerships, Meta is advancing its in-house Meta Training and Inference Accelerator (MTIA) silicon program. By combining external and proprietary technologies, the company aims to build a resilient, flexible AI infrastructure capable of delivering advanced AI experiences to billions of users worldwide.


