Meta Platforms (META) is planning to deploy four new generations of its self-developed AI chips by 2027.
Meta said that while it remains committed to a diverse silicon portfolio and to using the best solutions available, both internally and externally, the Meta Training and Inference Accelerator, or MTIA, its family of homegrown AI chips developed in partnership with Broadcom (AVGO), has been and will remain an important part of Meta's AI infrastructure strategy.
On Wednesday, Meta announced plans for the four new chips — MTIA 300, MTIA 400, MTIA 450 and MTIA 500 — as part of its efforts to diversify its chip portfolio, reduce reliance on outside chips, and bring down costs.
Meta had recently announced deals to spend billions on AI infrastructure from Nvidia (NVDA), Advanced Micro Devices (AMD), and Alphabet’s (GOOG) (GOOGL) Google.
The company said the new chips have either already been deployed or are scheduled for deployment in 2026 or 2027. They expand workload coverage from ranking and recommendation, or R&R, inference to R&R training, general GenAI workloads, and GenAI inference with targeted optimizations.
AI inference is the process of running a trained AI model to make predictions on new, unseen data.
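To make that concrete, here is a minimal, purely illustrative sketch in Python using the open-source PyTorch library; the tiny stand-in model and input below are assumptions for demonstration and have nothing to do with MTIA hardware:

```python
import torch

# Stand-in for a model whose weights have already been trained;
# a real deployment would load previously trained weights instead.
model = torch.nn.Linear(4, 2)
model.eval()  # switch to evaluation mode, disabling training-only behavior

new_data = torch.randn(1, 4)  # stand-in for new, unseen input
with torch.no_grad():         # inference computes no gradients
    prediction = model(new_data)

print(prediction)  # the model's output (its prediction) for the new input
```

Inference accelerators are designed to run this kind of forward pass at data-center scale as quickly and cheaply as possible.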
Meta said MTIA 300 is already in production for R&R training, while MTIA 400 has finished testing in its labs and is on track to be deployed in the company's data centers.
The company noted that MTIA 450 and MTIA 500 are scheduled for mass deployment in early 2027.
Anticipating the rise in GenAI inference demand, Meta said, MTIA 400 evolved into MTIA 450, which carries specific optimizations for GenAI inference. The company added that because the bandwidth of high-bandwidth memory, or HBM, is the most important factor affecting GenAI inference performance, it doubled HBM bandwidth from MTIA 400 to MTIA 450, while MTIA 500 increases HBM bandwidth by a further 50% over MTIA 450.
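Worked out, those figures compound: if MTIA 400 delivers some baseline HBM bandwidth B (an illustrative placeholder; no absolute figures are given here), MTIA 450 delivers 2B, and MTIA 500 delivers 2B × 1.5 = 3B, or triple the MTIA 400 figure.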
“Given the rapid pace of AI innovation, we have built the capability to ship a new chip roughly every six months,” said a Meta blog post written by Yee Jiun Song, Andrew Tulloch, Harikrishna Reddy, CQ Tang, and Vijay Thakkar.
Last month, The Information reported that Meta was facing issues with its internally developed AI chips and had discarded its most advanced chip, shifting focus to a less complicated version.
Meta said on Wednesday that chip designs are based on projected workloads, but by the time the hardware reaches production, often two years later, those workloads may have shifted substantially. Rather than placing a single bet and waiting years for it to pay off, the company has taken an iterative approach: each MTIA generation builds on the last, using modular chiplets, incorporating the latest AI workload insights and hardware technologies, and deploying on a shorter cadence.
The tech giant noted that this tighter loop keeps its hardware better aligned with evolving models while enabling faster adoption of new technology.
Shares of Meta were largely flat on Wednesday.