Google unveils Veo, a high-definition AI video generator that may rival Sora



Still images taken from videos generated by Google Veo.

Google / Benj Edwards

On Tuesday at Google I/O 2024, Google announced Veo, a new AI video synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI’s Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use.

Veo reportedly includes the ability to edit existing videos using text commands, maintain visual consistency across frames, and generate video sequences lasting up to and beyond 60 seconds from a single prompt or a series of prompts that form a narrative. The company says it can generate detailed scenes and apply cinematic effects such as time-lapses, aerial shots, and various visual styles.

Since the launch of DALL-E 2 in April 2022, we’ve seen a parade of new image synthesis and video synthesis models that aim to allow anyone who can type a written description to create a detailed image or video. While neither technology has been fully refined, both AI image and video generators have been steadily growing more capable.

In February, we covered a preview of OpenAI’s Sora video generator, which many at the time believed to represent the best AI video synthesis the industry could offer. It impressed Tyler Perry enough that he put his film studio expansions on hold. However, so far, OpenAI has not provided general access to the tool—instead, it has limited its use to a select group of testers.

Now, Google’s Veo appears at first glance to be capable of video generation feats similar to Sora. We have not tried it ourselves, so we can only go by the cherry-picked demonstration videos the company has provided on its website. That means anyone viewing them should take Google’s claims with a huge grain of salt, because the generation results may not be typical.

Veo’s example videos include a cowboy riding a horse, a fast-tracking shot down a suburban street, kebabs roasting on a grill, a time-lapse of a sunflower opening, and more. Conspicuously absent are any detailed depictions of humans, which have historically been tricky for AI image and video models to generate without obvious deformations.

Google says that Veo builds upon the company’s previous video generation models, including Generative Query Network (GQN), DVD-GAN, Imagen-Video, Phenaki, WALT, VideoPoet, and Lumiere. To enhance quality and efficiency, Veo works with compressed “latent” video representations, and Google added more detailed captions to the videos used to train Veo, allowing the model to interpret prompts more accurately.

Veo also seems notable in that it supports filmmaking commands: “When given both an input video and editing command, like adding kayaks to an aerial shot of a coastline, Veo can apply this command to the initial video and create a new, edited video,” the company says.

While the demos seem impressive at first glance (especially compared to Will Smith eating spaghetti), Google acknowledges AI video generation is difficult. “Maintaining visual consistency can be a challenge for video generation models,” the company writes. “Characters, objects, or even entire scenes can flicker, jump, or morph unexpectedly between frames, disrupting the viewing experience.”

Google has tried to mitigate those drawbacks with “cutting-edge latent diffusion transformers,” which is basically meaningless marketing talk without specifics. But the company is confident enough in the model that it is working with actor Donald Glover and his studio, Gilga, to create an AI-generated demonstration film that will debut soon.

Initially, Veo will be accessible to selected creators through VideoFX, a new experimental tool available on Google’s AI Test Kitchen website, labs.google. Creators can join a waitlist for VideoFX to potentially gain access to Veo’s features in the coming weeks. Google plans to integrate some of Veo’s capabilities into YouTube Shorts and other products in the future.

There’s no word yet about where Google got the training data for Veo (if we had to guess, YouTube was likely involved). But Google states that it is taking a “responsible” approach with Veo. According to the company, “Videos created by Veo are watermarked using SynthID, our cutting-edge tool for watermarking and identifying AI-generated content, and passed through safety filters and memorization checking processes that help mitigate privacy, copyright, and bias risks.”


