Runway’s Gen-2 AI Software Creates Full Videos From Text Descriptions
In a major leap for the artificial intelligence (AI) industry, Runway has unveiled its Gen-2 software, which can create full videos from text descriptions. The company’s first AI video tool, Gen-1, could already generate new videos using data from existing ones, but Gen-2 takes things a step further. The model has been in development since September of last year, and Runway describes it as the first publicly available text-to-video model on the market.
Gen-2 builds on the composition and style features of Gen-1, which could apply the look of an image or text prompt to the structure of a source video to create a new one (a process also called video-to-video). Gen-2 goes further: it can create entirely new video content from a text description alone. Generating video from nothing but words is a significant breakthrough for the industry. With Gen-2, if you can say it, you can now see it, a capability Runway is calling “Text to Video.”
The web-based platform can generate relatively high-resolution videos compared to what is currently available, and while the results are not yet photorealistic, they demonstrate the power of the technology. Runway claims that a simple text prompt like “An aerial shot of a mountain landscape” can produce a strikingly beautiful clip, and a prompt like “a close up of an eye” can yield a short video that showcases the model’s capabilities.
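For readers curious what working with such a model might look like programmatically, the sketch below shows a generic submit-and-poll flow for a text-to-video service. The endpoint URL, field names, and job states here are illustrative assumptions for this example only, not Runway’s documented API.

```python
import time
import requests

# Hypothetical endpoint and key; a real service's URL, fields, and auth will differ.
API_URL = "https://api.example-video-gen.com/v1/text-to-video"
API_KEY = "YOUR_API_KEY"


def generate_video(prompt: str, timeout_s: int = 300) -> bytes:
    """Submit a text prompt and return the finished MP4 bytes (illustrative flow)."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submit the generation job with the text prompt.
    job = requests.post(API_URL, headers=headers, json={"prompt": prompt}).json()
    job_id = job["id"]

    # 2. Poll until the job finishes; video generation is typically asynchronous.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{API_URL}/{job_id}", headers=headers).json()
        if status["state"] == "succeeded":
            # 3. Download the rendered clip.
            return requests.get(status["video_url"]).content
        if status["state"] == "failed":
            raise RuntimeError(f"Generation failed: {status.get('error')}")
        time.sleep(5)
    raise TimeoutError("Video generation did not finish in time")


if __name__ == "__main__":
    clip = generate_video("An aerial shot of a mountain landscape")
    with open("mountain.mp4", "wb") as f:
        f.write(clip)
```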
Runway believes that deep learning techniques applied to audiovisual content will revolutionize art, creativity, and design tools. The company says, “Deep neural networks for image and video synthesis are becoming increasingly precise, realistic, and controllable. In a couple of years, we have gone from blurry low-resolution images to both highly realistic and aesthetic imagery allowing for the rise of synthetic media.” Runway Research positions itself at the forefront of these developments, and the company says it aims to make the future of content creation accessible, controllable, and empowering for users.
While these generated clips cannot yet seamlessly replace real footage, the technology is advancing rapidly. Developments in the text-to-image space, such as Midjourney, are a good indicator of where the industry is headed: Midjourney’s recently released version 5 made significant strides toward images that are difficult to distinguish from actual photographs. Runway will likely see many competitors crop up quickly as text-to-video generation becomes more prevalent.
In conclusion, Runway’s Gen-2 software is a game-changer for the AI industry. By creating full videos from nothing more than text descriptions, it shows how quickly the technology is advancing and how it could transform the way we create and consume content. While the software is not yet perfect, it is a strong indicator of what is possible and offers a glimpse into the future of content creation.