Venture Capital

Coactive AI Lands $30 Million Series B

SAN JOSE — Coactive AI has landed $30 million in Series B funding, cementing its position as the leading platform for analyzing images and videos. Cherryrock Capital, making its first investment, co-led the round with Emerson Collective, joined by significant participation from Greycroft and by previous investors Andreessen Horowitz, Bessemer Venture Partners, and Exceptional Capital.

Coactive unlocks the untapped potential of images and videos for applications ranging from intelligent search to video analytics, with no metadata or tags required, providing an enterprise-grade operating system for visual content.

“We are thrilled to be in partnership with Cody, Will, and the entire Coactive team. We believe they are using the most advanced technology for unstructured data to solve real customer problems. We look forward to bringing our operational expertise to help them scale as they build a world class multimodal AI company,” said Stacy Brown-Philpot, Co-founder & Managing Partner, Cherryrock Capital.

While computer systems have driven massive paradigm shifts in the past, from digital transformation to the big data movement, visual data (images and videos) has remained elusive even as it makes up more and more of everyday life. The way we work, communicate, shop, and are entertained is increasingly visual thanks to the rise of video conferencing, social media, e-commerce, and streaming services. Even the web looks very different today than it did in its early days in the 1990s. Images and videos are everywhere, yet our systems have been blind to that change.

Enterprises are stuck in a world of tag, load, search (TLS): they must tag raw assets with human or machine annotations, load those annotations into their systems as metadata, and search based on those annotations. That process is slow, expensive, and inflexible, and it does not scale to the volume of image and video content we have today. And with the rise of Generative AI producing ever more visual content, it simply will not be able to keep up.
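The limitation is easy to see in a minimal sketch of the TLS flow. Everything below is illustrative, not any real product's API: `MetadataStore` and `annotate` are hypothetical stand-ins, and the point is simply that search can only surface what was tagged up front.

```python
# Hypothetical sketch of the tag-load-search (TLS) flow; names are illustrative.
from dataclasses import dataclass, field


@dataclass
class MetadataStore:
    """Search index that only knows about pre-assigned tags."""
    tags_by_asset: dict[str, set[str]] = field(default_factory=dict)

    def load(self, asset_id: str, tags: set[str]) -> None:
        self.tags_by_asset[asset_id] = tags

    def search(self, query_tag: str) -> list[str]:
        # Search is just a filter over whatever tags were loaded up front.
        return [a for a, tags in self.tags_by_asset.items() if query_tag in tags]


def annotate(asset_path: str) -> set[str]:
    # Stand-in for slow, costly human or machine labeling of every asset.
    return {"beach", "sunset"}  # placeholder labels


store = MetadataStore()
store.load("clip_001.mp4", annotate("clip_001.mp4"))  # tag first, then load
print(store.search("sunset"))     # ['clip_001.mp4']
print(store.search("surfboard"))  # [] -- anything not tagged up front is invisible
```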

Coactive flips the process on its head with a load, search, tag (LST) approach. With our platform, enterprises can load and index the raw images and videos directly and make them searchable, with no metadata or tags required. Tags are only needed for enterprise-specific terminology and domain-specific concepts where precision matters, or for backward compatibility with existing metadata systems. We do all of the heavy lifting in a scalable and secure way to understand the pixels and audio directly, giving enterprises a new set of superpowers. Find clips in less than a second, not hours. Develop a sixth sense for content that violates standards and practices or editorial guidelines. Cover breaking news before AI can generate it. And turn raw video files from a tax into an asset that generates new revenue streams. The transition from TLS to LST is akin to the shift from ETL (Extract, Transform, Load) to ELT (Extract, Load, Transform) in data processing, and it represents a fundamental leap forward, much as the automobile was to the horse.
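For contrast, here is an equally minimal sketch of the LST flow. The embedding functions are placeholders (a real system would use a vision-language model over the pixels and audio), and none of the names reflect Coactive's actual API; the sketch only shows the ordering: load and index first, search by free-text query, and tag afterward only where precision or backward compatibility demands it.

```python
# Hypothetical sketch of the load-search-tag (LST) flow; embeddings are faked.
import math
import random

random.seed(0)


def embed_asset(asset_path: str) -> list[float]:
    # Placeholder: pretend we computed an embedding from the raw pixels/audio.
    return [random.random() for _ in range(8)]


def embed_text(query: str) -> list[float]:
    # Placeholder: pretend we embedded the free-text query into the same space.
    return [random.random() for _ in range(8)]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


# 1. Load: index the raw assets directly -- no tags, no metadata.
index = {path: embed_asset(path) for path in ["clip_001.mp4", "clip_002.mp4"]}

# 2. Search: rank assets against a natural-language query at query time.
query_vec = embed_text("surfer at sunset")
ranked = sorted(index, key=lambda p: cosine(index[p], query_vec), reverse=True)
print(ranked)

# 3. Tag (optional): only domain-specific concepts that need precision or
#    backward compatibility get materialized as metadata afterward.
tags = {"clip_001.mp4": {"brand-safety:ok"}}
```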