Meta Accelerates AI Ambitions: Four New MTIA Chip Generations Slated Within Two Years

Meta has put the semiconductor industry on notice, revealing plans to launch four new generations of its proprietary MTIA AI accelerator chips within the next 24 months. The aggressive in-house silicon roadmap, centered on ranking, recommendations, and generative AI workloads, marks a major shift as the tech giant adopts a 'portfolio approach' that blends custom chips with external suppliers to avoid vendor lock-in and keep control of its core AI infrastructure.
Meta rolls out four MTIA chip generations on a faster schedule
Meta said it already uses hundreds of thousands of MTIA chips for inference work tied to both organic content and ads across its apps.
The chips are built for the company’s own workloads, not for general use. That matters because, Meta said, the hardware is part of a custom full-stack design, giving it a system tuned for the jobs it runs every day.
The company said that setup delivers better compute efficiency for its specific use cases and lowers costs compared with general-purpose chips.
The next phase is a larger rollout. Meta said it is building MTIA 300, 400, 450, and 500, with each version bringing gains in compute, memory bandwidth, and efficiency. MTIA 300 is already in production and will handle ranking and recommendations training.
MTIA 400, 450, and 500 can run all workloads, but Meta said those chips will mainly be used for GenAI inference in production in the near term and through 2027.
The company also said the silicon is modular, letting new chips slot into existing rack infrastructure. That shortens the gap between design and deployment.
On release speed, Meta said the industry usually launches a new AI chip every one to two years, but it now has the capacity to release its own chips every six months or less by reusing modular designs.
Meta builds its AI chip strategy around inference and open standards
The company said its MTIA strategy rests on three parts: fast iteration, an inference-first design, and easy adoption through common standards.
On the first point, Meta said the shorter release cycle helps it adjust faster as AI techniques change, bring in newer hardware technology, and reduce the cost of developing and deploying fresh chip versions.
On the second point, Meta drew a line between its plan and the usual market model. The company said most mainstream chips are designed first for large GenAI pre-training jobs and then repurposed for other work, often at worse cost efficiency.
Meta said it is doing the opposite. MTIA 450 and 500 are being tuned first for GenAI inference, then applied to ranking and recommendations (both training and inference) and to GenAI training when needed.
The company also said MTIA is built from the start on standard tools and systems, including PyTorch, vLLM, Triton, and the Open Compute Project. Its system and rack designs also follow OCP standards for use in data centers.
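That standards-first claim is visible in PyTorch itself, which documents a torch.mtia device backend in recent releases. As a rough illustration only (a minimal sketch, not Meta’s production code), device-agnostic PyTorch inference can target MTIA the same way it targets any other accelerator, falling back to CPU where no MTIA build or hardware is present:

```python
import torch

# Recent PyTorch releases document a torch.mtia backend; this sketch
# assumes such a build and falls back to CPU when MTIA is unavailable.
use_mtia = getattr(torch, "mtia", None) is not None and torch.mtia.is_available()
device = torch.device("mtia" if use_mtia else "cpu")

# A stand-in ranking model: the same model code runs on any backend.
model = torch.nn.Linear(128, 8).to(device).eval()
batch = torch.randn(4, 128, device=device)

with torch.no_grad():
    scores = model(batch)  # inference runs on MTIA when present

print(f"ran on {device}: {scores.shape}")
```

The point of the common layers Meta cites (PyTorch, vLLM, Triton) is exactly this: software written against them does not have to be rewritten for each new chip generation.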
Meta added that no single chip can cover every demand it has, which is why it plans to deploy different chips for different workloads while pushing toward what it called “personal superintelligence for all.”