Meta Extends Broadcom AI Chip Partnership to 2029, Targeting 2nm Process
The social media giant is doubling down on custom silicon to power inference workloads, betting purpose-built chips will outpace Nvidia's GPUs in cost efficiency at scale.

Meta has extended its custom AI chip partnership with Broadcom through 2029, committing to at least four generations of silicon designed to power the recommendation and ranking systems that underpin Facebook, Instagram, and its other platforms.
The expanded deal positions Meta's MTIA program as the first in the industry to adopt a 2-nanometer manufacturing process, according to Broadcom. The chipmaker will also supply Ethernet networking technology to connect Meta's expanding AI compute clusters. Chief Executive Mark Zuckerberg framed the collaboration as essential infrastructure "to deliver personal superintelligence to billions of people," spanning chip design, packaging, and networking.
Meta's first-generation MTIA 300 chip is already operational, handling inference workloads, the process by which trained AI models respond to user queries in real time. Three additional chip generations are planned through 2027, all optimized for inference rather than the training tasks that dominate Nvidia's GPU business.
The partnership reflects Meta's strategic bet that custom silicon tailored to its specific workloads will deliver superior cost efficiency compared to general-purpose accelerators, particularly at the scale Meta operates. TSMC, the world's dominant contract chipmaker, reported that AI-related demand remains "extremely robust" and that the company is booked through 2028, underscoring the supply constraints facing all players in the space.
Meta is following a path pioneered by Google, which began producing custom AI accelerators in 2015 and has since deployed them across its search, advertising, and cloud infrastructure. The strategy represents a direct challenge to Nvidia's dominance in AI compute, where the company's GPUs account for more than 60 percent of global AI processing capacity. Amazon has also gained traction with its Trainium chips, winning major clients by offering alternatives to Nvidia's tightly controlled ecosystem.
The move comes as AI infrastructure spending reaches unprecedented levels. Global enterprise AI investment hit $581.7 billion in 2025, nearly double the prior year, with the United States accounting for nearly half of that total. Yet the benefits remain concentrated: TSMC alone captures the bulk of chip fabrication revenue, while a handful of hyperscalers and chipmakers control AI compute power, which has increased thirtyfold in three years.
Sources
https://thenextweb.com/news/meta-and-broadcom-extend-their-ai-chip-deal-to-2029
Focuses on MTIA chip generations, 2nm process leadership, and Meta's strategy to outperform Nvidia GPUs in cost efficiency at scale
https://www.tipranks.com/news/meta-platforms-meta-partners-with-broadcom-on-custom-ai-microchips
Emphasizes investor perspective on Meta's chip partnership and its implications for stock performance and competitive positioning
https://m.theblockbeats.info/en/news/61982
Contextualizes deal within global AI compute power concentration, noting Nvidia's 60% market share and TSMC's foundry dominance
https://www.pcgamer.com/hardware/tsmcs-latest-bank-report-is-exactly-as-youd-expect-it-mo-wafers-mo-money/
Highlights TSMC's record profits, robust AI demand, and capacity booked through 2028, underscoring supply constraints
