Gruve Secures $50 Million Follow-on Series A Financing

REDWOOD CITY – Gruve (https://gruve.ai/), a provider of AI services and infrastructure, announced the availability of more than 500 megawatts of distributed AI inference capacity across the United States. The company also secured a $50 million follow-on Series A financing to accelerate deployments, expand strategic partnerships, and scale its full-stack agentic services.

The financing brings Gruve's total funding to $87.5 million. The round was led by Xora Innovation (backed by Temasek), with participation from Mayfield, Cisco Investments, Acclimate Ventures, AI Space, and other strategic investors.

The capital accelerates Gruve's ability to make low-latency AI inference capacity immediately available across Tier 1 and Tier 2 U.S. cities and to scale efficiently as demand grows, without multi-year data center buildouts.

The Execution Gap in AI

As inference becomes the dominant AI workload, infrastructure has emerged as the industry's primary constraint. While models, agents, and hardware continue to see breakthroughs, the systems running them have not kept pace.

Most production inference today relies on infrastructure that was never designed for low-latency, high-throughput, cost-sensitive AI, resulting in unsustainable costs, mounting technical debt, and weak unit economics.

Gruve's Inference Infrastructure Fabric was built to close this gap.

Inference Infrastructure Services Purpose-Built for Production AI Workloads

Gruve's Inference Infrastructure Fabric is a distributed platform engineered specifically for production-grade AI inference, delivering predictable latency, scalable throughput, and industry-leading economics.

Key capabilities include:

- 500MW+ of expandable U.S. capacity, leveraging excess power and existing infrastructure near Tier 1 and Tier 2 cities, enabled by long-term partnerships with Lineage, Inc. and other major colocation providers
- Modular, high-density, rack-scale inference capacity, engineered for cost efficiency in inference-heavy workloads and rapid deployment
- A distributed, low-latency edge fabric for seamless connectivity and workload orchestration across sites
- Full-stack operations, including a 24×7 AI-powered SOC, network services, and cluster management to meet enterprise-grade reliability and performance standards

Gruve has 30MW live today across four U.S. sites, with additional capacity under development and near-term expansions planned in Japan and Western Europe. This approach bypasses multi-year data center build cycles and delivers AI-ready capacity in months instead of years.

"Gruve's Inference Infrastructure Fabric combines modular, state-of-the-art pods with a distributed network architecture to enable rapid capacity deployment in power-available locations today, without compromising on latency. As demand for inference accelerates, scalable, low-latency infrastructure with strong unit economics is increasingly critical, and Gruve is well positioned to meet that need as it scales in 2026."
– Phil Inagaki, Managing Partner and Chief Investment Officer, Xora Innovation

"We're launching our Inference Infrastructure with 30MW across four U.S. sites, immediate capacity available nationwide, and near-term expansions in Japan and Western Europe. Combined with our 24×7 AI-powered SOC, inference fabric, and infrastructure operations, Gruve is ready to support customers at true production scale."
– Tanuj Mohan, GM & SVP, AI Platform Services, Gruve
