Broadcom Lands Multi-GW Meta AI Chip Deal Through 2029
Meta commits to 1 gigawatt of custom MTIA accelerators built on Broadcom's platform, extending their partnership five more years.
Broadcom just locked in another massive AI chip deal—this time with Meta, and it runs through the end of the decade.
The companies announced Tuesday that they've extended an existing partnership to 2029, with Meta committing to deploy multiple gigawatts of custom silicon based on Broadcom's XPU accelerator platform. The initial deployment: 1 gigawatt of Meta's MTIA training and inference accelerators.
Shares of Broadcom rose 5% on the news. This follows last week's Google and Anthropic agreements, cementing Broadcom's position as the go-to partner for hyperscaler custom silicon.
What Meta Is Building
MTIA—Meta Training and Inference Accelerator—is Meta's in-house chip designed specifically for its AI workloads. The next generation will be the first AI silicon manufactured on a 2nm process node.
The partnership covers the full stack: chip design, advanced packaging, and network interconnects. Broadcom provides the engineering expertise; Meta owns the design and workload optimization.
Why build custom? Cost and efficiency. General-purpose GPUs are powerful but expensive. Meta runs AI inference at massive scale across Facebook, Instagram, WhatsApp, and Threads. Custom silicon tuned to those specific workloads delivers better performance per dollar.
It's the same logic driving Google's TPUs, Amazon's Trainium, and Microsoft's Maia. The hyperscalers want to reduce reliance on NVIDIA's premium-priced hardware.
The Scale of the Commitment
One gigawatt of AI accelerators is an enormous deployment. For perspective, NVIDIA's H100 data center GPU draws about 700 watts, so one gigawatt represents the power draw of roughly 1.4 million H100s.
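The back-of-envelope arithmetic, assuming a nominal 700 W per-chip draw (the figure cited above, ignoring cooling and facility overhead):

```python
# Back-of-envelope: how many 700 W accelerators fit in a 1 GW power budget?
GIGAWATT = 1_000_000_000   # watts
CHIP_WATTS = 700           # approximate per-chip draw (H100-class)

chips = GIGAWATT // CHIP_WATTS
print(f"{chips:,} H100-equivalents")  # → 1,428,571 H100-equivalents
```

Real deployments would support fewer chips per gigawatt, since some of that power feeds networking, cooling, and conversion losses rather than the accelerators themselves.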
And that's just the initial commitment. The announcement mentions "multiple gigawatts" over the partnership's lifetime.
Meta's AI infrastructure spending has exploded. The company committed over $65 billion in capex for 2026, with the majority going toward AI compute. This deal locks in Broadcom as the silicon partner for a significant chunk of that buildout.
Hock Tan Steps Back from Meta's Board
Buried in the announcement: Broadcom CEO Hock Tan will leave Meta's board of directors but transition to an advisory role focused on Meta's custom silicon roadmap.
The move makes sense. As Broadcom's custom chip business expands, Tan serving on a major customer's board creates potential conflicts. An advisory role keeps the relationship tight without the governance complications.
Tan built Broadcom into a $700 billion company through disciplined acquisitions and ruthless cost management. His input on Meta's silicon strategy is worth the advisory arrangement.
Broadcom's Custom Chip Empire
This deal reinforces a pattern. Broadcom isn't trying to out-GPU NVIDIA. Instead, it's becoming the custom-silicon partner of choice for every hyperscaler that wants to build its own chips.
The business model works like this: hyperscalers design chips optimized for their specific workloads, then partner with Broadcom for physical design expertise, advanced packaging technology, and supply chain management through the foundries.
Broadcom captures margin on every chip shipped without bearing the R&D risk of designing general-purpose processors. And because each customer's design is proprietary, switching costs are high once a partnership is established.
The Google deal extends through 2031. The Anthropic infrastructure agreement is multi-year. Now Meta through 2029. That's a decade of locked-in revenue from three of the largest AI spenders on the planet.
What It Means for the Sector
The custom silicon trend has legs. Every major tech company is either building its own chips or actively considering it.
For NVIDIA, this represents gradual market share erosion at the margin. The company still dominates training workloads and will likely maintain its lead in the most demanding applications. But inference at scale—where cost per query matters most—is increasingly moving to custom silicon.
For semiconductor equipment suppliers like ASML, more custom chips mean more orders. Someone has to manufacture all this silicon, and that ultimately flows through the foundries and their equipment providers.
For Broadcom, the bull thesis keeps getting stronger. The company's pivot toward AI infrastructure—both custom chips and networking—positions it to capture multiple revenue streams from the hyperscaler capex buildout.
At 28x forward earnings, the stock isn't cheap. But revenue visibility through decade-end and monopoly-like positioning in custom silicon justify the premium.