Movie Gen – The future of AI video generation

by Brenden Burgess


Meta, the parent company of Facebook and Instagram, has introduced a new artificial intelligence model called Movie Gen, designed to significantly improve video creation. This AI-powered video generator can produce high-definition videos with sound from nothing but text prompts. The announcement of Movie Gen marks Meta's latest foray into generative AI, placing it in direct competition with industry giants such as OpenAI and Google.

At its core, Movie Gen lets users create entirely new video clips from simple text inputs such as: “A sloth with pink sunglasses lying on a donut float in a pool.” The model represents a leap forward in video generation, pushing the limits of creativity for filmmakers, content creators, and enthusiasts. Videos can be produced in various aspect ratios and can last up to 16 seconds, making them suitable for a wide range of uses, from social media posts to short films. The technology builds on Meta's previous work in video synthesis, such as Make-A-Scene and the Emu image synthesis model.

In addition to creating new videos from scratch, Movie Gen offers advanced editing capabilities. Users can upload existing videos or images and modify them with simple text commands. For example, a still image of a person can be transformed into a moving video in which the person performs actions described in the prompt. The customization of existing footage does not stop there: users can modify specific details such as the background, objects, and even costumes. These edits, all driven by text prompts, demonstrate the precision and versatility of Movie Gen's features.

But what really distinguishes Movie Gen from its competitors is its integrated high-quality audio generation. The AI can create soundtracks, sound effects, and ambient noise that synchronize with the visuals of the generated video. Users can provide text prompts for specific audio cues, such as “rustling leaves” or “footsteps on gravel,” and Movie Gen will incorporate those sounds into the scene. The model can generate up to 45 seconds of audio, ensuring that even short films or detailed clips are accompanied by dynamic soundscapes. Meta AI also mentioned that the model includes an audio extension technique that allows seamless audio looping for longer videos.

The unveiling of Movie Gen comes at a time when other major players in the AI industry are developing similar tools. OpenAI announced its text-to-video model Sora earlier this year, but the model has not yet been released to the public. And Runway recently introduced its latest generative AI platform, Gen-3 Alpha.

However, Movie Gen stands out for its ability to handle several tasks: generating new video content, editing existing clips, and incorporating personalized elements, all while maintaining the integrity of the original video. According to Meta AI, Movie Gen outperformed competing models in blind tests of both video and audio generation.

Despite the excitement surrounding Movie Gen, Meta says the tool is not yet ready for public release. According to the company, the technology is still too expensive to run efficiently and generation times are longer than desired. These technical limitations mean that Movie Gen will remain in development for now, with no defined timeline for when it will be made available to developers or the general public.

https://www.youtube.com/watch?v=Svtdag9zqzc
