Rewriting The Script

Runway Looks to Turn Text into Blockbusters

Every Sunday, we highlight one company with an in-depth breakdown of its business, its financial situation, and whether it represents a worthy investment opportunity - large or small. In today's edition, we'll look at Runway AI, a cutting-edge artificial intelligence startup looking to revolutionize content creation for industries including film, television, and social media.

Generative AI startup Runway recently raised a noteworthy $100 million in a Series D funding round, resulting in a significant increase in the company's valuation to $1.5 billion. This development comes just five months after its Series C round, during which the startup was valued at $500 million. With a buzzing market, a fresh injection of capital, and a skyrocketing valuation, the future appears bright for Runway AI. But what's driving this buzz, and how will Runway fare in the rapidly evolving AI-driven video industry?

Crafting the Future of Artistry

Runway AI was founded in 2018 by Cristóbal Valenzuela, Anastasis Germanidis, and Alejandro Matamala-Ortiz, who met at NYU's Interactive Telecommunications Program. The graduate program focuses on the intersection of technology and the arts, and it sparked the founders' interest in unconventional applications of neural networks, specifically tools for artists and filmmakers.

According to the founders, many considered their idea of developing AI models to power image and video editing and generation tools "crazy." The founders envisioned a platform that would eventually enable creators to generate full videos from mere text prompts. Despite the initial skepticism, this moonshot vision eventually materialized, capturing the attention of investors who saw the potential in Runway's full-stack strategy.

This approach involved building everything from the underlying models and infrastructure to the end application, and it helped Runway attract funding from reputable venture capital firms such as Amplify Partners, Lux Capital, Madrona, Coatue, and Felicis Ventures.

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, later funded the computing costs required to train the model on a much larger dataset. By 2022, Stability AI had taken Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. Runway would eventually part ways with Stability AI, developing its own model for text-to-video generation.

The Art of AI Editing

Since its inception in 2018, Runway AI has developed a range of AI-powered video editing software in a bid to fundamentally alter the way content is created and edited. The company's web-based video editor offers a broad set of tools, including frame interpolation, background removal, blur effects, audio cleaning or removal, and motion tracking.

In addition to its editing tools, Runway operates Runway Studios, an entertainment and production division. The company's creative suite has become a valuable resource for filmmaking professionals, who use it to save significant time on editing tasks that were traditionally done by hand.

Runway's AI Magic Tools enable users to edit and generate content in various ways, such as removing backgrounds from videos, erasing or replacing parts of an image, and blurring out faces or backgrounds. These innovative features have made Runway AI a game-changer in the video editing landscape.

The company has also ventured into promoting AI-generated content, hosting a festival for AI-generated short films in San Francisco in February of this year. The event showcased the work of ten finalists, demonstrating the capabilities of generative AI tools in filmmaking. Runway AI co-founder Cristóbal Valenzuela has said he plans to host the festival again, and other established festivals have expressed interest in collaborating.

From Oscars to Late Night TV

Runway AI's tools have already been put to use in the industry, illustrating their value in various creative applications. In the Oscar-winning film Everything Everywhere All at Once, VFX artist Evan Halleck utilized Runway's Green Screen Background Remover for one of the Moving Rocks scenes.

Halleck mentioned in an interview that he wished he had discovered Runway's tools earlier, as they could have automated laborious and time-consuming processes.

Runway's editing tools have also found their way into television, streamlining workflows for The Late Show with Stephen Colbert. The company claims its tools have compressed a workflow that once took six hours into just six minutes. In addition to the Green Screen tool, the show's editors use Inpainting, which removes or paints over objects in videos.

Runway describes its product as a Hollywood production studio in the browser, and its tools have already made an impact on the industry. By providing accessible and efficient tools, Runway AI aims to empower creators and professionals alike to reimagine the way content is produced.

Picture-Perfect Pixels

At its core, Runway AI's technology enables users to generate videos from simple textual descriptions. To initiate the process, users provide a brief description of a scene, ideally involving some action, such as "a rainy day in New York City" or "a cat with a cellphone in an apartment." Within a minute or two, the tool produces a video based on the description.

Runway's technology can generate videos of common images, like a dog sleeping on a rug, or combine unrelated concepts to create amusing scenarios, like a lion at a birthday party. However, the resulting videos are currently limited to four seconds and may appear choppy and blurry upon close examination.
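For readers who like to see the mechanics, here is a minimal sketch of what driving a text-to-video service like this could look like from code. The endpoint, field names, and response shape are assumptions made up for illustration - this is not Runway's actual API.

```python
import time
import requests

# Hypothetical endpoint and credentials - illustrative only, not Runway's real API.
API_URL = "https://api.example-video-gen.com/v1/generations"
API_KEY = "YOUR_API_KEY"

def generate_clip(prompt: str) -> bytes:
    """Submit a short scene description and poll until the clip is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Kick off generation with a brief prompt, e.g. "a rainy day in New York City".
    job = requests.post(API_URL, json={"prompt": prompt}, headers=headers).json()

    # Generation takes on the order of a minute or two, so poll for completion.
    while True:
        status = requests.get(f"{API_URL}/{job['id']}", headers=headers).json()
        if status["state"] == "succeeded":
            return requests.get(status["video_url"]).content
        if status["state"] == "failed":
            raise RuntimeError("generation failed")
        time.sleep(5)

clip = generate_clip("a cat with a cellphone in an apartment")
with open("clip.mp4", "wb") as f:
    f.write(clip)  # current outputs are roughly four-second clips
```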

Runway's generative AI system learns by analyzing digital data, including photos, videos, and captions describing their content. Researchers are confident that by training the technology on ever-increasing amounts of data, they can quickly enhance and broaden its capabilities. The company says its system will soon generate professional-looking mini-movies, complete with music and dialogue.

The output generated by Runway's system is difficult to categorize, as it is neither a photo nor a cartoon. Instead, it consists of numerous pixels blended to create a realistic video. The company plans to integrate its technology with other tools aimed at expediting the work of professional artists.

Runway AI's Gen-1

Gen-1 is a pioneering AI model developed by Runway AI that can transform existing videos into new ones using text prompts or reference images. The company recently launched a mobile app for iOS devices, enabling users to access Gen-1 directly from their phones.

The app allows users to transform an existing video based on text, image, or video input, functioning similarly to a style transfer tool but generating entirely new videos as output rather than applying filters. Gen-1 distinguishes itself from other text-to-video models with its ability to deliver higher quality by transforming existing footage. The app is designed to be user-friendly for beginners while also offering advanced options for experienced users to fine-tune results or experiment with more extreme settings.

Gen-1 enables users to apply various aesthetics or themes to their videos, such as transforming them into watercolor paintings, charcoal sketches, or claymation-style scenes. However, some limitations include working with footage no longer than five seconds, restricted prompts, and certain copyright restrictions. Processing videos takes about two to three minutes, with processing happening in the cloud and expected to speed up over time.

The app offers both Standard ($144/year) and Pro ($345/year) plans. The Standard plan grants users 525 credits and limits video uploads to five seconds in length, with each generated second consuming 14 credits. The Pro plan provides additional credits, 1080p video, unlimited projects, and access to all of Runway's 30+ AI tools, among other features.
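A quick back-of-the-envelope calculation, using only the figures quoted above and assuming the 525 credits represent the plan's full allotment, shows how far the Standard plan stretches:

```python
# Back-of-the-envelope math for the Standard plan, using the figures quoted above.
annual_price = 144          # USD per year, Standard plan
credits_included = 525      # credits granted on the Standard plan
credits_per_second = 14     # each generated second consumes 14 credits
max_clip_seconds = 5        # uploads are capped at five seconds

total_seconds = credits_included / credits_per_second    # 37.5 seconds of output
full_clips = int(total_seconds // max_clip_seconds)      # 7 full five-second clips
cost_per_second = annual_price / total_seconds           # ~$3.84 per generated second

print(f"{total_seconds:.1f} seconds of video, {full_clips} full clips, "
      f"${cost_per_second:.2f} per second")
```

In other words, those credits translate to roughly 37 seconds of generated footage - about seven full-length clips - at a little under $4 per generated second on the Standard plan.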

Second Generation Improvements

Runway AI's Gen-2 software represents a significant leap forward in AI video generation compared to its predecessor, Gen-1. The most notable improvement is the ability to create entirely new videos from text descriptions alone, rather than merely modifying preexisting footage. Users can still apply the composition and style of a text prompt to a source video when they have one, but Gen-2 no longer requires existing footage to work from.

The output quality of videos produced by Gen-2 is noticeably superior to that of Gen-1: the videos are more realistic, with sharper pixel quality. A GIF-like motion still gives them away as AI-generated, but the overall quality showcases the technology's potential and positions Runway AI's Gen-2 as a standout product in the current market.

Gen-2 introduces three new modes that expand its capabilities and offer greater creative freedom to users:

  1. Mode 1 (Text to Video): This mode allows users to create videos from scratch using only a text prompt. With just a few words describing the desired scene, Gen-2 can generate a completely new video.

  2. Mode 2 (Text + Image to Video): In this mode, users can generate a video based on both a driving image and a text prompt. By combining visual and textual information, Gen-2 can create videos that more closely align with the user's vision.

  3. Mode 3 (Image to Video): This mode enables the generation of a video using just a driving image as input. The AI then creates a video sequence that reflects the visual elements and style of the input image.

These new modes, combined with the enhanced output quality, make Runway AI's Gen-2 an invaluable tool for content creators. With increased flexibility and creative possibilities, users can generate videos from text and image prompts more efficiently and effectively than ever before.
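One way to picture the difference between the modes is as optional inputs feeding a single generator - supply text, an image, or both, and the mode follows from what you provide. The function and parameter names below are hypothetical, sketched purely to illustrate that distinction rather than to mirror Runway's actual interface.

```python
from typing import Optional

def generate_video(prompt: Optional[str] = None,
                   driving_image: Optional[bytes] = None) -> bytes:
    """Illustrative dispatcher over Gen-2's three input modes (names are hypothetical)."""
    if prompt and driving_image:
        # Mode 2: text + image - the prompt steers content, the image anchors look and composition.
        return _run_generation(prompt=prompt, image=driving_image)
    if prompt:
        # Mode 1: text only - the scene is synthesized entirely from the description.
        return _run_generation(prompt=prompt)
    if driving_image:
        # Mode 3: image only - the model animates a sequence in the style of the input image.
        return _run_generation(image=driving_image)
    raise ValueError("Provide a text prompt, a driving image, or both.")

def _run_generation(**inputs) -> bytes:
    # Placeholder for the actual model call; a real system would return encoded video bytes.
    raise NotImplementedError
```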

The Future of Filmmaking

The film and television industry has always faced challenges around the high cost and time-consuming nature of content production. Traditional visual effects (VFX) and computer-generated imagery (CGI), while effective, are expensive and slow to produce. Moreover, the demand for high-quality, engaging content keeps growing, with streaming giants like Netflix and Disney spending billions of dollars annually to keep their audiences captivated. AI is already making waves in the movie industry, with some companies leveraging the technology to create more realistic and cost-effective visuals.

For instance, Miramax recently used Metaphysic's AI model to de-age Tom Hanks in the film Here, resulting in more realistic and budget-friendly visuals than traditional VFX or CGI methods allow. In addition, AI-based de-noising algorithms have begun to outperform hand-coded algorithms in both speed and accuracy, drastically reducing render times and enhancing artists' workflows.

Industry leaders have also started to recognize the potential of AI in film and television production. Warner Bros and 20th Century Studios have both invested in AI technology to streamline various aspects of their decision-making, from release dates to audience segmentation. Smaller studios like TheSoul are using AI to expedite the production process, enabling them to create more content in less time.

This is where Runway's technology can make a significant impact. By offering a more efficient way to generate videos from text and image prompts, Runway can potentially revolutionize the film and television industry by partnering with various studios and creating an entirely new genre of AI-generated content. Runway's Gen-2 AI software can help production teams quickly generate realistic visuals, storyboards, and even entire scenes, reducing the time and cost associated with traditional production methods. Furthermore, Runway's technology can democratize content creation, allowing smaller studios and independent filmmakers to produce high-quality visuals without the need for a large budget. By integrating AI-generated content into their workflows, filmmakers can explore new creative possibilities, pushing the boundaries of storytelling and visual effects.

The Creative Revolution

Runway's technology has the potential to transform not only the film and television industry but also the landscape of social media content creation. The demand for engaging and unique content is growing across platforms like TikTok, YouTube, and Instagram. AI can help alleviate some of the pain points associated with content creation on these platforms, making it easier for creators to produce captivating videos and grow their online presence. While tech giants like Google and Meta have developed their own AI models for content generation, the adoption of these tools will depend on their ease of use and seamless integration with popular platforms. Runway's Gen-2 AI software could provide a more user-friendly solution for content creators, giving them the ability to generate high-quality videos with minimal effort.

Social media platforms today have to strike a delicate balance between retaining influential creators, motivating new creators, and growing the viewer base. These platforms often engage in incentive wars to attract and retain creators, such as TikTok's alleged manual promotion of videos and YouTube Shorts' lowered subscriber requirements for revenue generation. Generative AI has the potential to overcome these challenges by revolutionizing the way content is produced and consumed on social media. AI-driven video platforms can help break down barriers to value creation by guiding creators on what drives engagement and showing relevant content to viewers. This guidance, along with reduced barriers, empowers creators to produce more valuable content and ultimately increases the platform's overall value. As the line between creators and viewers becomes increasingly blurred, AI-generated content can reach a wider audience and encourage greater interaction between users.

The economic impact of generative AI on social media content creation could be substantial. Traditionally, a small percentage of popular content on a platform has compensated for a much larger share of less popular content. With the help of AI-generated content and algorithmic recommendations, however, creators can supercharge the success of popular content while also improving the profitability of less popular content. Lower barriers to creation and better guidance can result in a more diverse and engaging content ecosystem on social media platforms. Essentially, Runway's technology has the potential to significantly impact social media content creation by providing an easy-to-use, AI-driven solution for creators. By addressing pain points in content creation and opening up new possibilities for engaging, unique content, Runway can help reshape the content seen on social media.

Bottom Line

By combining AI advancements with user-friendly tools, Runway has the potential to address the challenges faced by traditional content creation methods and provide unprecedented opportunities for creators and viewers alike. The integration of AI-powered solutions like Runway's Gen-2 can democratize content creation, allowing for more diverse and engaging experiences, while also cultivating innovation and growth in the digital world. Ultimately, Runway's technology signifies a bold step forward in content generation, paving the way for an exciting and dynamic future for the creative industry.