Meta Platforms has begun testing its first in-house chip designed for artificial intelligence (AI) training, marking a key step in the company’s strategy to develop custom silicon and reduce dependence on external suppliers such as Nvidia, two sources familiar with the matter told Reuters.
The social media giant has initiated a limited deployment of the chip and may scale up production for wider use if testing proves successful, the sources said. The move aligns with Meta’s broader efforts to cut infrastructure costs as it ramps up investment in AI-driven technologies.
Meta, which owns Facebook, Instagram, and WhatsApp, has projected total expenses for 2025 to be between $114 billion and $119 billion, with capital expenditures—primarily for AI infrastructure—expected to reach up to $65 billion.
One source described Meta’s new AI training chip as a dedicated accelerator, meaning it is specifically designed for AI-related tasks rather than general computing. This design could improve power efficiency compared to traditional GPUs used for AI workloads. The company has partnered with Taiwan Semiconductor Manufacturing Co (TSMC) for chip production, the source added.
The test deployment follows the completion of Meta’s first “tape-out” of the chip, a crucial stage in semiconductor development in which a finished design is sent to a manufacturing facility. The process typically costs tens of millions of dollars and takes several months to complete, with no guarantee of success. Should the test fail, Meta would need to diagnose the problem and repeat the tape-out process.
The AI training chip is the latest development in Meta’s Meta Training and Inference Accelerator (MTIA) program, which has faced challenges in the past, including the cancellation of a chip at a similar phase of development. However, the company successfully introduced an MTIA inference chip last year to power recommendation systems on Facebook and Instagram.
Meta executives have stated their goal is to deploy in-house chips for AI training by 2026, starting with recommendation systems before expanding to generative AI applications such as Meta AI. “We’re working on how we would do training for recommender systems and then eventually how we think about training and inference for generative AI,” said Chris Cox, Meta’s Chief Product Officer, at a recent technology conference.
Meta previously abandoned an internally developed AI inference chip after limited success in small-scale testing, opting instead to place multi-billion-dollar orders for Nvidia GPUs in 2022. Despite its ongoing investments in in-house silicon, Meta remains one of Nvidia’s largest customers, using its GPUs for AI model training, advertising systems, and its Llama foundation models.
Meta’s efforts come amid a broader industry debate over whether scaling AI models by simply adding computing power will keep delivering gains. The recent launch of cost-efficient AI models from Chinese startup DeepSeek, which lean more heavily on inference-time computation than on sheer training scale, has fueled that debate over the future direction of AI development.
Following DeepSeek’s advancements, Nvidia’s stock faced a sharp decline before recovering most of its losses, with investors betting that the company’s hardware will remain central to AI training and inference applications.