Microsoft’s Access to OpenAI’s System-Level IP
Microsoft CEO Satya Nadella said the company has full access to OpenAI’s system-level intellectual property, outlining how Microsoft will balance its own chip development with continued large-scale use of NVIDIA GPUs. In an interview, he explained that Microsoft receives every part of OpenAI’s accelerator-related IP except consumer hardware, responding “All of it” when asked about the extent of access.
OpenAI recently announced a multi-year partnership with Broadcom to co-develop and deploy 10 gigawatts of custom accelerators and networking hardware, significantly expanding its computing capabilities. Nadella noted that Microsoft had previously shared its own IP with OpenAI while building supercomputers, establishing a reciprocal technology exchange. He said Microsoft benefits from OpenAI’s ongoing system-level innovation through this continued access.
This access allows Microsoft to advance its own Maia chip program at a steady pace, even as competitors like Google develop custom silicon. Nadella emphasized that in-house chip efforts succeed only when backed by internal model demand: specialized hardware pays off only if the company's own AI workloads generate enough usage to justify it.
Balancing NVIDIA, Custom Silicon, and Future Infrastructure
Nadella acknowledged that Microsoft still relies heavily on NVIDIA GPUs and that any new accelerator must be competitive with NVIDIA’s existing offerings. He said that when evaluating hardware across a large fleet, total cost of ownership becomes the key factor. He also pointed out that major cloud providers face similar considerations, noting that companies like Google and Amazon continue to buy NVIDIA hardware because it remains flexible, highly capable, and backed by broad customer demand.
He outlined that Microsoft’s hardware strategy is informed by years of operating with multiple chip generations and vendors, explaining that the company previously ran large volumes of Intel hardware before adding AMD and, later, its own Arm-based Cobalt processors. This history illustrates Microsoft’s experience managing mixed-silicon environments.
Nadella said Microsoft will maintain a tight feedback loop between its MAI models and its hardware roadmap, ensuring that new microarchitectures align with the company’s own AI workloads. At the same time, the company will continue rapid deployment of NVIDIA systems. He concluded by stating that Microsoft will first deploy systems developed by OpenAI and then extend those designs across its broader infrastructure.