Meta Considers TPUs to Diversify Computing Hardware
Meta is reportedly considering the use of Google’s Tensor Processing Units (TPUs) in its data centers starting in 2027, a move that could disrupt NVIDIA’s long-standing lead in high-performance computing hardware.
The company is said to be in discussions to invest billions in TPUs, exploring both long-term deployment and the option to rent Google’s chips via Google Cloud as early as next year.
This shift comes as major developers look to diversify suppliers amid rising demand and concerns about heavy reliance on NVIDIA GPUs, which remain the industry standard for training and running large-scale machine learning models.
Google’s latest model, Gemini 3, was also trained on TPUs, highlighting the capabilities of the platform. Following reports of the Meta–Google talks, Alphabet shares rose as much as 2.7%, while NVIDIA shares fell by a similar margin, signaling investor anticipation of a potential change in market dynamics.
TPUs Gain Traction as a High-Performance Alternative
If finalized, this agreement would strengthen TPUs as a viable alternative in high-performance computing. Google has already committed up to one million TPUs to Anthropic under a separate deal.
With Meta’s capital expenditures expected to surpass $100 billion in 2026, analysts estimate the company could spend $40–$50 billion on inference-chip capacity next year alone, which may further boost demand for Google Cloud services. TPUs, which Google began designing roughly a decade ago specifically for large-scale machine-learning workloads, are increasingly seen as a power-efficient, purpose-built alternative to general-purpose GPUs.