Meta Platforms (META) plans to expand its custom silicon efforts beyond simple recommendation algorithms to eventually support complex AI model training, according to a Bloomberg report citing comments the company's CFO made at a recent conference.
The company believes some of its workloads, especially those tied to AI models and personalized recommendations, are highly customized, and that building its own chips could improve performance and efficiency. The strategy aims to tailor hardware to Meta's (META) unique needs, reducing reliance on third-party suppliers like Nvidia (NVDA) and AMD (AMD).
“Some of our workloads really are very customized to us,” Meta CFO Susan Li commented at a technology conference. “The sort of ranking and recommendations workloads have been where we have started, and that’s the place where we have rolled out custom silicon at the most scale. But we expect and are hopeful that we are going to expand that over time, including eventually to training AI models,” she added.
Testing of Meta’s (META) first in-house AI training chip began in early 2025, and executives had earlier targeted a 2026 rollout for generative AI, though advanced designs like Olympus faced setbacks in February.
Still, Meta (META) continues to prioritize proprietary silicon for its most customized workloads, a move that aligns with broader industry trends toward optimizing AI infrastructure costs and performance.