Microsoft says its new AI chip surpasses Amazon and Google

Microsoft has introduced Maia 200, its most powerful in-house AI accelerator to date, designed to cut the cost of running large AI models while boosting speed and efficiency, a Qazinform News Agency correspondent reports.

photo: QAZINFORM

The new chip is built specifically for inference, the stage where AI systems generate answers, text, or images for users.

Maia 200 delivers a major jump in performance per dollar compared with the hardware currently deployed across Microsoft's data centers: the company says it is about 30% more cost-efficient than the latest systems in its fleet.

According to Microsoft, Maia 200 also outperforms competing chips from other cloud providers. The company says it delivers three times the FP4 performance of the third-generation Amazon Trainium and higher FP8 performance than Google's seventh-generation TPU.

Maia 200 is manufactured on an advanced 3-nanometer process and packs more than 140 billion transistors. It is optimized for low-precision computing, now common in large AI models, allowing it to generate responses faster while using less power. In practical terms, Microsoft says the chip can comfortably handle today's biggest models and is ready for even larger ones in the future.
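To illustrate why low-precision formats such as FP4 and FP8 matter, the sketch below shows a simple symmetric 4-bit quantization of model weights in plain Python. This is a generic, hypothetical example of the technique, not Microsoft's actual FP4 pipeline; the function names and values are invented for illustration.

```python
def quantize_int4(weights):
    """Map float weights onto 16 integer levels (-8..7) plus one scale factor.

    Illustrative only: real FP4 formats also carry an exponent, but the
    storage saving is the same idea -- 4 bits per value instead of 32.
    """
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -0.93, 0.37]
q, scale = quantize_int4(weights)
approx = dequantize(q, scale)
# Each weight now occupies 4 bits instead of 32 (an 8x memory saving),
# at the cost of a small rounding error per value.
```

Chips built for low precision move and multiply these narrow values far faster than full 32-bit floats, which is the source of the speed and power gains the article describes.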

The new accelerator will power multiple Microsoft services, including Azure, Microsoft Foundry, and Microsoft 365 Copilot. It will also be used to run the latest GPT-5.2 models from OpenAI. Inside Microsoft, the Superintelligence team plans to use Maia 200 to generate synthetic data and improve future in-house AI systems.

Maia 200 is already live in Microsoft’s US Central data center near Des Moines, Iowa, with a second deployment planned for the Phoenix, Arizona region. More locations are expected to follow as Microsoft expands the rollout across its global cloud network.

Alongside the hardware launch, Microsoft is previewing a Maia software development kit. The tools are designed to help developers port existing AI models to the new chip and fine-tune performance without deep hardware expertise. Support covers popular AI frameworks as well as low-level controls for teams that need more customization.

Earlier, Qazinform News Agency reported that Microsoft, NVIDIA, and Anthropic had forged a major new AI alliance.