New York City-based Runway ML, also known as Runway, was one of the first startups to focus on high-quality, realistic generative AI video creation models.
But after debuting its Gen-1 model in February 2023 and Gen-2 in June 2023, the company has since seen its star eclipsed by other highly realistic AI video generators, namely OpenAI’s unreleased Sora model and the Dream Machine model from Luma AI, released last week.
That’s changing today, though, as Runway returns to the generative AI video wars in a big way: it announced Gen-3 Alpha, which a blog post describes as the “first of a coming series of models trained by Runway on new infrastructure built for large-scale multimodal training,” and “a step toward building general world models,” or AI models that can “represent and simulate a wide range of situations and interactions such as those found in the real world.” Watch sample videos created with Gen-3 Alpha by Runway below in this article:
Gen-3 Alpha allows users to generate detailed, highly realistic 10-second video clips with precise control over a range of emotional expressions and camera movements.
According to an email from a Runway spokesperson to VentureBeat: “This initial rollout will support 5- and 10-second generations, with noticeably faster generation times. Generating a 5-second clip takes 45 seconds, and generating a 10-second clip takes 90 seconds.”
No precise release date has been given for the model yet, with Runway so far only showing demo videos on its website and its social account on X (though presumably it will come to Runway’s paid plan, which starts at $15 per month or $144 per year).
After this article was published, VentureBeat interviewed Anastasis Germanidis, co-founder and chief technology officer (CTO) of Runway, who confirmed that the new Gen-3 Alpha model would be available first to paying Runway subscribers in “days,” but that the free tier was on deck to get the model at a yet-to-be-announced point in the future.
A Runway spokesperson echoed this statement, sending VentureBeat an email saying, “Gen-3 Alpha will be available in the coming days to paying Runway subscribers, our Creative Partners Program, and Enterprise users.”
On LinkedIn, Runway user Gabe Michael stated that he expected to gain access later this week.

On X, Germanidis wrote that Gen-3 Alpha will power Runway’s existing generation modes as well as “some new ones that are now only possible with a more capable base model.”
Germanidis also wrote that since the release of Gen-2 in 2023, Runway has learned that “video diffusion models don’t come close to saturating the performance gains from scaling, and that by learning the task of predicting video, those models build really powerful representations of the visual world.”
Diffusion is the process of training an AI model to reconstruct images (still or moving) of concepts out of pixelated “noise,” based on having learned those concepts from annotated image/video and text pairs.
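To make the idea concrete, here is a minimal, illustrative sketch of the denoising objective at the heart of diffusion training (DDPM-style). It is not Runway’s actual method or code: the tiny linear “denoiser,” the noise-schedule values, and the random tensors standing in for training frames are all assumptions for the example.

```python
# Illustrative DDPM-style training step: corrupt clean data with noise
# at a random timestep, then train a model to predict that noise.
# NOT Runway's code; a real video model would be a large spatiotemporal
# network conditioned on text embeddings and the timestep.
import torch
import torch.nn as nn

T = 1000  # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule (assumed values)
alphas_cumprod = torch.cumprod(1.0 - betas, 0)   # cumulative signal-retention factor

# Stand-in denoiser over flattened 3x32x32 "frames".
denoiser = nn.Linear(3 * 32 * 32, 3 * 32 * 32)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(clean_frames: torch.Tensor) -> float:
    """One denoising-diffusion training step on a batch of clean frames."""
    b = clean_frames.shape[0]
    x0 = clean_frames.view(b, -1)
    t = torch.randint(0, T, (b,))                # random timestep per sample
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(1)
    # Forward (noising) process: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # The model learns to predict the injected noise from the noisy input.
    loss = nn.functional.mse_loss(denoiser(x_t), noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: random tensors standing in for batches of training frames.
for _ in range(3):
    print(training_step(torch.randn(8, 3, 32, 32)))
```

At generation time, the trained model runs this process in reverse, iteratively stripping predicted noise from a random tensor until a coherent image or video frame emerges.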
Runway says in its blog post that Gen-3 Alpha was “trained jointly on videos and images” and was “a collaborative effort between an interdisciplinary team of research scientists, engineers and artists,” although specific data sets have not yet been revealed — following the trend of most other leading AI media generators, which also don’t reveal exactly what data their models are trained on, or whether it was acquired through paid licensing deals or simply scraped from the web.
Critics believe that AI model makers should pay the original creators of their training data through licensing agreements, and have even filed copyright infringement lawsuits to that end, but AI model companies generally take the position that they are legally allowed to train on publicly posted data.
Runway’s spokesperson emailed VentureBeat the following response when asked what training data was used in Gen-3 Alpha: “We have an internal research team that oversees all of our training, and we use curated, internal data sets to train our models.”
Interestingly, Runway also notes that it is already “collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3,” which “allows for more stylistically controlled and consistent characters” and targets “specific artistic and narrative requirements,” among other options.
No specific organizations are named, though filmmakers behind critically acclaimed and award-winning films such as Everything Everywhere All at Once and The People’s Joker have revealed that they used Runway to create effects for parts of their films.
Runway includes a form in the Gen-3 Alpha announcement inviting other organizations interested in obtaining custom versions of the new model to sign up. No pricing has been made public for how much it costs to train a custom model.
Meanwhile, it’s clear that Runway isn’t giving up the fight to be a dominant player or leader in the rapidly changing generative AI video creation space.