China's first AIGC director co-creation program is about to go live, and text-to-video tools are truly being put to use.
Recently, Meta launched new AI-based video editing features that build on text-to-image technology to achieve text-to-video, generating videos from users' text prompts through AI models.
On December 6, China's first AIGC director co-creation program is about to go live.
This program is jointly initiated by Kuaishou's video generation model product "Keling AI" along with nine well-known directors.
Yesterday, the titles and trailers of the nine AIGC short films were released. According to the disclosed information, these experimental shorts span genres including fantasy, supernatural tales, family drama, and animation, each running about three minutes.
As of late July, "Sanxingdui: The Future Revelation" had accumulated over 140 million total views, including 135 million plays on Douyin, ranking among the platform's top five hottest short dramas; the five released episodes of "The Shan Hai Qi Jing: Splitting Waves and Cutting Through Waves" have drawn more than 52 million views, with total topic exposure exceeding 430 million.
Seizing the text-to-video track has become a consensus among AI model companies at home and abroad.
The domestic text-to-video large-model company Vidu released the new Vidu 1.5 version in November, which can understand diverse inputs and has overcome the "consistency" problem. A person in charge at Tencent's Hunyuan told media reporters that current video generation technology is not yet usable enough for large-scale commercial deployment and that many technical difficulties remain to be overcome. The more important focus at this stage is open-sourcing, so that more people can use the model, spinning its flywheel faster and driving optimization.
Overseas, Sora is reported to be potentially launching officially before the end of 2024, and Meta has partnered with the Hollywood studio Blumhouse Productions to jointly develop its AI video model, Movie Gen.
Kaiyuan Securities believes that after more than a year of iteration, multimodal video large models have steadily strengthened their role in empowering content production and expanding creative boundaries, and may open up commercial space in the AI film and television field in the future.
Hong Kong-listed companies related to AI text-to-video:
Kuaishou-W (01024): According to Kuaishou, all of these short films were generated with Keling AI; the directors relied entirely on the video generation large model, with the community deeply involved in movie-grade content creation, a first in China. In July this year, Douyin and Kuaishou successively launched the two AI micro short dramas "Sanxingdui: The Future Revelation" and "The Shan Hai Qi Jing: Splitting Waves and Cutting Through Waves", neither of which involved live-action shooting or real human performances, yet both gained significant attention.
Tencent (00700): Tencent's Hunyuan large model has officially launched video generation capability and open-sourced a 13-billion-parameter video generation model, currently the largest open-source video model available.
Alibaba-SW (09988): Alibaba launched the Tongyi Wanxiang video generation function in September, supporting the generation of up to 5 seconds of high-definition, film-grade video.
Meitu (01357): Meitu upgraded the video generation capability of its Qixiang large model in September.