AI Research Lab Goodfire Scores $150 Million

SAN FRANCISCO — Goodfire—the AI research lab using interpretability to understand, learn from, and design models—announced a $150 million Series B funding round at a $1.25 billion valuation. The round was led by B Capital, with participation from existing investors Juniper Ventures, Menlo Ventures, Lightspeed Venture Partners, South Park Commons, and Wing Venture Capital, and new investors DFJ Growth, Salesforce Ventures, Eric Schmidt, and others. The funding, which comes less than a year after Goodfire's Series A, will enable the company to advance frontier research initiatives, build the next generation of its core product, and scale partnerships across AI agents and the life sciences.

Interpretability is the science of how neural networks work internally and how modifying their inner mechanisms can shape their behavior—for example, adjusting a reasoning model's internal concepts to change how it thinks and responds. Interpretability also enables AI-to-human knowledge transfer: extracting novel insights from powerful AI models. Goodfire recently identified a novel class of Alzheimer's biomarkers this way, by applying interpretability techniques to an epigenetic model built by Prima Mente—the first major finding in the natural sciences obtained by reverse-engineering a foundation model.

"We are building the most consequential technology of our time without a true understanding of how to design models that do what we want," said Yan-David "Yanda" Erlich, former COO and CRO at Weights & Biases and General Partner at B Capital. "At Weights & Biases, I watched thousands of ML teams struggle with the same fundamental problem: they could track their experiments and monitor their models, but they couldn't truly understand why their models behaved the way they did. Bridging that gap is the next frontier. Goodfire is unlocking the ability to truly steer what models learn, make them safer and more useful, and extract the vast knowledge they contain."

Most companies building AI models today treat them as black boxes. Goodfire believes that approach leaves society flying blind—and that deeply understanding how models work "under the hood" is critical to building and deploying safe, powerful AI systems. The company is pursuing research that turns AI into something that can be understood, debugged, and intentionally designed, like written software.

"Interpretability, for us, is the toolset for a new domain of science: a way to form hypotheses, run experiments, and ultimately design intelligence rather than stumbling into it," explained Goodfire CEO Eric Ho. "Every engineering discipline has been gated by fundamental science—like steam engines before thermodynamics—and AI is at that inflection point now."

Goodfire is part of an emerging cadre of research-first "neolabs"—AI companies pursuing breakthroughs in model training that have been neglected by "scaling labs" such as OpenAI and Google DeepMind.

So far, the company has demonstrated the value of its interpretability-driven approach in two key domains: scientific discovery and model design.

On the scientific discovery front, Goodfire has focused on deciphering scientific foundation models with partners such as Mayo Clinic, Arc Institute, and Prima Mente, exemplified by its identification of a new class of biomarkers for Alzheimer's detection. Because AI models already surpass human understanding in many scientific domains, such as materials discovery and protein folding, studying how those models work can yield novel insights and expand the horizons of human knowledge. The company plans to continue scaling its pipeline for scientific discovery with new collaborators.

On the model design front, Goodfire has focused on teaching models directly through their internal mechanisms. The company recently developed methods to efficiently retrain a model's behavior by precisely targeting parts of its inner workings; one application reduced hallucinations by half in a large language model. Goodfire is betting that this approach will underpin a paradigm shift in how AI is built, in which models become far more reliable and people can precisely and efficiently dictate how models should behave, without off-target effects.

The new funding will support Goodfire's work to rethink training and build a "model design environment"—a platform for understanding, debugging, and intentionally designing AI models at scale. The platform will leverage frontier interpretability techniques to let users reach inside models, identify the parts responsible for the behaviors they want to change, and train or intervene on those specific subunits.

The company also plans to continue its greenfield research into fundamental model understanding and new interpretability methods.

Goodfire's team comprises leading AI researchers from DeepMind and OpenAI, academics from Harvard, Stanford, and other institutions, and top ML engineering talent from OpenAI and Google.
