Adobe lets customers test Firefly AI video generator

Adobe’s AI model for video generation is now available in a limited beta, enabling users to create short video clips from text and image prompts.

The Firefly Video model, first unveiled in April, is the latest generative AI model Adobe has developed for its Creative Cloud products — the others cover image, design and vector graphic generation.

From Monday, there are two ways to access the Firefly Video model as part of the beta trial.

One is the text and image to video generation that Adobe previewed last month, accessible in the Firefly web app at firefly.adobe.com. This enables users to create five-second, 720p-resolution videos from natural-language text prompts. These can contain realistic video footage and 2D or 3D animations. It’s also possible to generate video using still images as a prompt, meaning a photograph or illustration could be used to create b-roll footage.

To provide greater control over the output, there are options for camera angle, shot size, motion and zoom, and Adobe says it’s working on more ways to direct the AI-generated video.

Waiting list

Adobe said it only trains the video model on stock footage and public domain data that it has rights to use for training its AI models. It won’t use customer data or data scraped from the internet, it said.

To access the beta, you’ll need to join the waitlist. It’s free for now, though Adobe said in a news release that it will reveal pricing information once the Firefly Video model gets a full launch.

Adobe is one of several technology companies working on AI video generation capabilities. OpenAI’s Sora promises to let users create minute-long video clips, while Meta recently announced its Movie Gen video model and Google unveiled Veo back in May. However, none of these tools are publicly available at this stage.

Extended remix

The other way to access the Firefly Video model is with the Generative Extend tool, available in beta in the video editing app Premiere Pro. Generative Extend can be used to create new frames to lengthen a video clip — although only by a couple of seconds, enabling an editor to hold a shot longer or create smoother transitions. Footage created with Generative Extend must be 1920×1080 or 1280×720 during the beta, though Adobe said it’s working on support for higher resolutions.

Background audio can also be extended for up to 10 seconds, thanks to Adobe’s AI audio generation technology, though spoken dialogue can’t be generated.

At its MAX conference on Monday, Adobe also announced that its GenStudio for Marketing Performance app, designed to help businesses manage the influx of AI-generated content, is now generally available.
