SAN JOSE — Lambda, a cloud company founded by AI engineers and powered by NVIDIA GPUs, has raised a $320 million Series C led by Thomas Tull’s US Innovative Technology Fund (USIT), with participation from new investors B Capital, SK Telecom, and funds and accounts advised by T. Rowe Price Associates, Inc., as well as existing investors Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures, among others.
The company plans to use the new capital to expand its AI cloud business, including Lambda’s popular on-demand and reserved cloud offerings.
Founded in 2012, Lambda has over a decade of experience building AI infrastructure at scale and has amassed more than 100,000 customer sign-ups on Lambda Cloud. An early provider of NVIDIA H100 Tensor Core GPUs, Lambda is chosen by AI developers for the fastest access to the latest architectures for training, fine-tuning, and inference on generative AI, large language models, and foundation models.
Lambda’s hardware and private cloud business serves over 5,000 customers across manufacturing, healthcare, pharmaceuticals, financial services, and the U.S. government. Lambda’s AI Cloud has been adopted by the world’s leading companies and research institutions including Anyscale, Rakuten, The AI Institute, and multiple enterprises with over a trillion dollars of market capitalization.
“AI is fundamentally restructuring science, commerce, and industry. Over the next 10 years, every human endeavor will be augmented by the integration of LLMs and generative AI,” said Lambda CEO and co-founder Stephen Balaban. “This AI rollout is going to require a lot of GPUs. This latest financing supports our mission to make GPU compute as ubiquitous as electricity.”
“Investing in strong infrastructure to power and disseminate cutting-edge technology is imperative to ensure the U.S. remains a global leader in AI advancement,” said Thomas Tull, Chairman of USIT. “A long-time pioneer in deep learning, Lambda offers an unmatched combination of hardware and cloud infrastructure alongside connective software and tools that enable AI developers to build with efficiency and speed. We believe Lambda’s platform will serve as the foundation for the AI hyperscalers of tomorrow.”
Since the company’s last funding announcement in March 2023, Lambda has become one of the first public clouds to deploy NVIDIA H100 GPUs and GH200 Superchip-powered systems. Despite a sharp increase in demand for generative AI, Lambda has maintained high availability of the latest NVIDIA GPUs at some of the lowest per-hour prices in the world. Customers choose Lambda for how easy it is to access NVIDIA GPUs for large-scale distributed training and inference. One of the most useful tools for these workloads, especially when they span multiple GPUs and multiple nodes, is Ray, an open-source framework created by Anyscale.
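To illustrate the kind of workload involved, here is a minimal sketch of how a multi-GPU job can be expressed with Ray’s public API. The function name, prompt data, and one-GPU-per-task setting are placeholder assumptions for illustration, not details from Lambda or Anyscale, and the snippet assumes a Ray cluster with at least one visible GPU.

    import ray

    ray.init()  # connect to an existing Ray cluster, or start a local one

    @ray.remote(num_gpus=1)  # reserve one GPU per task (placeholder setting)
    def run_inference(prompt: str) -> str:
        # On a multi-node cluster, Ray schedules each task onto a node with
        # a free GPU; the actual model call is left as a stub here.
        return f"completion for: {prompt}"

    prompts = ["example prompt A", "example prompt B"]
    results = ray.get([run_inference.remote(p) for p in prompts])
    print(results)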
“The pace of innovation and workload growth in AI have suddenly made Moore’s Law obsolete,” said Dr. Ion Stoica, co-founder and Executive Chairman at Anyscale. “Lambda is addressing a critical market need for accessible, affordable, highly-performant cloud infrastructure designed specifically for AI workloads. Our ongoing partnership with Lambda expands customer choice and paves the way forward in state-of-the-art price-performance in AI.”