Gruve Secures $50 Million Follow-on Series A Financing

REDWOOD CITY – Gruve (https://gruve.ai/), a provider of AI services and infrastructure, announced the availability of more than 500 megawatts of distributed AI inference capacity across the United States. The company also secured a $50 million follow-on Series A financing to accelerate deployments, expand strategic partnerships, and scale its full-stack agentic services.

The financing brings Gruve’s total funding to $87.5 million and was led by Xora Innovation (backed by Temasek), with participation from Mayfield, Cisco Investments, Acclimate Ventures, AI Space, and other strategic investors.

The capital accelerates Gruve’s ability to make low-latency AI inference capacity immediately available across Tier 1 and Tier 2 U.S. cities and to scale efficiently as demand grows, without multi-year data center buildouts.

The Execution Gap in AI

As inference becomes the dominant AI workload, infrastructure has emerged as the industry’s primary constraint. While models, agents, and hardware continue to see breakthroughs, the systems running them have not kept pace.

Most production inference today relies on infrastructure that was never designed for low-latency, high-throughput, cost-sensitive AI, resulting in unsustainable costs, mounting technical debt, and weak unit economics.

Gruve’s Inference Infrastructure Fabric was built to close this gap.

Inference Infrastructure Services Purpose-Built for Production AI Workloads

Gruve’s Inference Infrastructure Fabric is a distributed platform engineered specifically for production-grade AI inference, delivering predictable latency, scalable throughput, and industry-leading economics.

Key capabilities include:

- 500MW+ of expandable U.S. capacity, leveraging excess power and existing infrastructure near Tier 1 and Tier 2 cities, enabled by long-term partnerships with Lineage, Inc. and other major colocation providers
- Modular, high-density, rack-scale inference capacity, engineered for cost efficiency in inference-heavy workloads and for rapid deployment
- A distributed, low-latency edge fabric for seamless connectivity and workload orchestration across sites (see the sketch below)
- Full-stack operations, including a 24×7 AI-powered SOC, network services, and cluster management to meet enterprise-grade reliability and performance standards

Gruve has 30MW live today across four U.S. sites, with additional capacity under development and further near-term expansions planned in Japan and Western Europe. This approach bypasses multi-year data center build cycles and delivers AI-ready capacity in months instead of years.
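Gruve has not published implementation details for the edge fabric, so the short Python sketch below is purely illustrative: it shows one generic way a latency-aware router might choose among distributed inference sites under a latency budget. The Site type, pick_site function, site names, and all numbers are hypothetical and are not Gruve’s API.

    from dataclasses import dataclass

    # Hypothetical sketch only: Gruve has not published its fabric internals.
    # This shows one generic way a latency-aware router could pick a site.

    @dataclass
    class Site:
        name: str
        rtt_ms: float        # measured round-trip latency from the client region
        free_capacity: int   # available inference slots (arbitrary units)

    def pick_site(sites: list[Site], max_rtt_ms: float = 50.0) -> Site:
        """Prefer the lowest-latency site that still has free capacity."""
        eligible = [s for s in sites
                    if s.free_capacity > 0 and s.rtt_ms <= max_rtt_ms]
        if not eligible:
            # No site meets the latency budget; fall back to the least-loaded one.
            return max(sites, key=lambda s: s.free_capacity)
        return min(eligible, key=lambda s: s.rtt_ms)

    sites = [
        Site("us-east", rtt_ms=12.0, free_capacity=40),
        Site("us-central", rtt_ms=28.0, free_capacity=0),
        Site("us-west", rtt_ms=45.0, free_capacity=75),
    ]
    print(pick_site(sites).name)  # -> "us-east"

The fallback to the least-loaded site reflects a common orchestration trade-off: honor the latency budget when a nearby site has headroom, but never fail a request solely for lack of one.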
“Gruve’s Inference Infrastructure Fabric combines modular, state-of-the-art pods with a distributed network architecture to enable rapid capacity deployment in power-available locations today, without compromising on latency. As demand for inference accelerates, scalable, low-latency infrastructure with strong unit economics is increasingly critical, and Gruve is well positioned to meet that need as it scales in 2026.”

— Phil Inagaki, Managing Partner and Chief Investment Officer, Xora Innovation

“We’re launching our Inference Infrastructure with 30MW across four U.S. sites, immediate capacity available nationwide, and near-term expansions in Japan and Western Europe. Combined with our 24×7 AI-powered SOC, inference fabric, and infrastructure operations, Gruve is ready to support customers at true production scale.”

— Tanuj Mohan, GM & SVP, AI Platform Services, Gruve
