SANTA CLARA – Intel and Google have announced a multiyear collaboration to advance the next generation of AI and cloud infrastructure, reinforcing the critical role of CPUs and custom infrastructure processing units (IPUs) in scaling modern, heterogeneous AI systems.
As AI adoption accelerates, infrastructure is becoming more complex and heterogeneous, driving increased reliance on CPUs for orchestration, data processing and system-level performance. Through this collaboration, Intel and Google will align across multiple generations of Intel Xeon processors to improve performance, energy efficiency and total cost of ownership across Google’s global infrastructure.
Google Cloud continues to deploy Intel Xeon processors across its workload-optimized instances, including the latest Intel Xeon 6 processors powering C4 and N4 instances. These platforms support a broad range of workloads—from large-scale AI training coordination to latency-sensitive inference and general-purpose computing.
In parallel, Intel and Google are expanding their co-development of custom ASIC-based IPUs. These programmable accelerators offload networking, storage and security functions from host CPUs – improving utilization, increasing efficiency and enabling more predictable performance across hyperscale AI environments.
IPUs are a critical component of modern data center architectures. By handling infrastructure tasks traditionally managed by CPUs, they unlock greater effective compute capacity and allow cloud providers to scale more efficiently without increasing overall system complexity. Together, Xeon CPUs and IPUs form a tightly integrated platform that balances general-purpose compute with purpose-built infrastructure acceleration, delivering more efficient, flexible and scalable AI systems.
“AI is reshaping how infrastructure is built and scaled,” said Lip-Bu Tan, CEO of Intel. “Scaling AI requires more than accelerators – it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
“CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment,” said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.”