Whether they're aspiring filmmakers hoping to make it in Hollywood or creators who enjoy making videos for their audiences, we believe everyone should have access to tools that help enhance their creativity. Today, we're excited to premiere Meta Movie Gen, our breakthrough generative AI research for media, which spans the image, video, and audio modalities. Our latest research demonstrates how simple text inputs can be used to produce custom videos and sounds, edit existing videos, and transform a personal image into a unique video. Movie Gen outperforms similar models in the industry across these tasks when evaluated by humans.
This work is part of our long, proven track record of sharing fundamental AI research with the community. Our first wave of generative AI work started with the Make-A-Scene series of models, which enabled the creation of image, audio, video, and 3D animation. With the advent of diffusion models, a second wave of work followed with the Llama Image foundation models, which enabled higher-quality generation of images and video, as well as image editing. Movie Gen is our third wave, combining all of these modalities and giving the people who use the models finer-grained control than has ever been possible before. As with previous generations, we anticipate that these models will enable a variety of new products that could accelerate creativity.
While there are many exciting use cases for these foundation models, it's important to note that generative AI isn't a replacement for the work of artists and animators. We're sharing this research because we believe in the power of this technology to help people express themselves in new ways and to provide opportunities to people who might not otherwise have them. Our hope is that one day everyone will have the opportunity to bring their artistic visions to life and create high-definition videos and audio using Movie Gen.