Gruve Secures $50 Million Follow-on Series A Financing

REDWOOD CITY – Gruve (https://gruve.ai/), a provider of AI services and infrastructure, announced the availability of more than 500 megawatts of distributed AI inference capacity across the United States. The company also secured a $50 million follow-on Series A financing to accelerate deployments, expand strategic partnerships, and scale its full-stack agentic services.

The financing brings Gruve's total funding to $87.5 million and was led by Xora Innovation (backed by Temasek), with participation from Mayfield, Cisco Investments, Acclimate Ventures, AI Space, and other strategic investors.

The capital accelerates Gruve's ability to make low-latency AI inference capacity immediately available across Tier 1 and Tier 2 U.S. cities and to scale efficiently as demand grows, without multi-year data center buildouts.

The Execution Gap in AI

As inference becomes the dominant AI workload, infrastructure has emerged as the industry's primary constraint. While models, agents, and hardware continue to see breakthroughs, the systems running them have not kept pace.

Most production inference today relies on infrastructure that was never designed for low-latency, high-throughput, cost-sensitive AI, resulting in unsustainable costs, mounting technical debt, and weak unit economics.

Gruve's Inference Infrastructure Fabric was built to close this gap.

Inference Infrastructure Services Purpose-Built for Production AI Workloads

Gruve's Inference Infrastructure Fabric is a distributed platform engineered specifically for production-grade AI inference, delivering predictable latency, scalable throughput, and industry-leading economics.

Key capabilities include:

- 500MW+ of expandable U.S. capacity, leveraging excess power and existing infrastructure near Tier 1 and Tier 2 cities, enabled by long-term partnerships with Lineage, Inc. and other major colocation providers
- Modular, high-density, rack-scale inference capacity, engineered for cost efficiency in inference-heavy workloads and rapid deployment
- A distributed, low-latency edge fabric for seamless connectivity and workload orchestration across sites
- Full-stack operations, including a 24x7 AI-powered SOC, network services, and cluster management to meet enterprise-grade reliability and performance standards

Gruve is bringing 30MW live today across four U.S. sites, with additional capacity under development and further near-term expansions in Japan and Western Europe. This approach bypasses multi-year data center build cycles and delivers AI-ready capacity in months instead of years.

"Gruve's Inference Infrastructure Fabric combines modular, state-of-the-art pods with a distributed network architecture to enable rapid capacity deployment in power-available locations today, without compromising on latency. As demand for inference accelerates, scalable, low-latency infrastructure with strong unit economics is increasingly critical, and Gruve is well positioned to meet that need as it scales in 2026."

— Phil Inagaki, Managing Partner and Chief Investment Officer, Xora Innovation

"We're launching our Inference Infrastructure with 30MW across four U.S. sites, immediate capacity available nationwide, and near-term expansions in Japan and Western Europe. Combined with our 24x7 AI-powered SOC, inference fabric, and infrastructure operations, Gruve is ready to support customers at true production scale."

— Tanuj Mohan, GM & SVP, AI Platform Services, Gruve
