PALO ALTO — Pika, an AI video platform that is redesigning the video-making and editing experience, has raised a Series A funding round of $35 million, led by Lightspeed Venture Partners.
Pika has now raised a total of $55 million in the company’s first six months, with pre-seed and seed rounds led by Nat Friedman and Daniel Gross. Additional backers include prominent angel investors in AI, among them Elad Gil, Adam D’Angelo (Founder and CEO of Quora), Andrej Karpathy, Clem Delangue (Co-Founder and CEO of Hugging Face and Partner at Factorial Capital), Craig Kallman (Chairman and CEO of Atlantic Records) and Alex Chung (Co-Founder of Giphy), as well as venture firms Homebrew, Conviction Capital, SV Angel and Ben’s Bites.
Pika has also unveiled Pika 1.0, a major product upgrade that includes a new AI model that can generate and edit videos in diverse styles such as 3D animation, anime or cinematic, as well as a new web experience that makes the platform easier to use.
The first version of Pika launched in beta on Discord in late April 2023 and today has more than 500,000 users generating millions of videos each week. Top Pika users on Discord spend up to 10 hours a day creating videos with Pika. Pika-generated videos have gone viral on social media: on TikTok alone, the #pikalabs hashtag has nearly 30 million views.
Video is one of the most widely used creative mediums, dominating social media, entertainment and educational platforms, but it remains complicated and resource-intensive to create. While other AI video tools are primarily focused on professionals and commercial use, Pika has designed a video-making and editing experience that is effortless and accessible to the everyday consumer and creator. Anyone can be a creative director with Pika.
“My co-founder and I are creatives at heart. We know firsthand that making high-quality content is difficult and expensive, and we built Pika to give everyone, from home users to film professionals, the tools to bring high-quality video to life,” said Demi Guo, Pika co-founder and CEO. “Our vision is to enable anyone to be the director of their stories and to bring out the creator in all of us.”
The new Pika 1.0 includes the following:
- A new generative AI model that creates even higher-quality video.
- New features that let you edit videos with AI, in addition to generating videos in many new styles:
  - Text-to-Video and Image-to-Video: Type a few lines of text or upload an image on Pika, and the platform creates a short, high-quality video using AI.
  - Video-to-Video: Transform your existing videos into different styles, including different characters and objects, while maintaining the structure of the video. For example, turn a live-action video into an animated one.
  - Expand: Expand the canvas or aspect ratio of a video. For example, change a video from a TikTok 9:16 format to a widescreen 16:9 format, and the AI model will predict the content beyond the borders of the original video.
  - Change: Edit video content with AI, such as changing someone’s clothing, adding another character, changing the environment or adding props.
  - Extend: Extend the length of an existing video clip using AI.
“Just as other new AI products have done for text and images, professional-quality video creation will also become democratized by generative AI. We believe Pika will lead that transformation,” said Michael Mignano, partner at Lightspeed Venture Partners. “Given such an impressive technical foundation, rooted in an early passion for creativity, the Pika team seems destined to change how we all share our stories visually. At Lightspeed, we couldn’t be more excited to support their mission to allow anyone to bring their creative vision to life through video, and we’re thrilled to be investing alongside other amazing investors at the forefront of AI.”
Pika was founded by two experts in AI: Demi Guo, co-founder and CEO, and Chenlin Meng, co-founder and CTO, both former PhD students from Stanford University’s prominent AI Lab. Before her time at Stanford, Demi was the youngest full-time employee at Meta AI Research, joining as a college sophomore, and she has won numerous international awards in software development. Chenlin has published more than 28 research papers in the last three years, including Denoising Diffusion Implicit Models (DDIM), which is now a default approach for content generation and has been widely used in OpenAI’s DALL-E 2, Google’s Imagen and Stability AI’s Stable Diffusion.