
Fourth Paradigm (6682.HK): Strong Q1 Results; Over a Decade of Deep Focus on Industry Large Models Underpins Long-Term Growth

Haitong Securities ·  Jun 18

Q1 performance was strong, and heavy R&D investment is building a technical moat. On May 28, Fourth Paradigm released its core business progress for 2024Q1. The company continues to pursue technological innovation and steady business expansion, using "AI+" to deliver value to thousands of industries. In 2024Q1, total revenue was RMB 830 million, up 28.5% year over year, and gross profit was RMB 340 million, up 21.1% year over year. The company drives development through innovation: R&D investment was RMB 350 million, an R&D expense ratio of 42.0%, ensuring continuous improvement of its competitiveness.

In March 2024, the company released Sage AIOS 5.0, positioned as an industry large-model platform and the fifth major upgrade of the product in ten years. Built on the technical principle of "predicting the next item of any modality," it can construct industry large models from data of different modalities across industry scenarios, breaking the limitation that large language models can only predict text and greatly expanding the application scope of industry models.

The product upgrade has driven a sharp rise in the core business, and the scale effect of industry large models is remarkable. In 2024Q1, revenue from the Sage AI Platform business was RMB 500 million, up 84.8% year over year, accounting for 60.6% of the Group's total revenue. The customer base is also becoming more diverse: the company had 124 users in total in 2024Q1, of which 54 were benchmark users, and average revenue per benchmark user rose 64.0% year over year. From January 1, 2020 to March 31, 2024, the company served a cumulative total of 1,058 users.

In addition, the company has always combined technology with humanistic care, paying close attention to social issues and actively assuming corporate social responsibility. In 2024Q1, it worked with a water conservancy authority to build an intelligent flood-control large model that deploys flood-control work in advance ahead of the summer flood season. The model monitors and predicts flood risk in real time, optimizes emergency response and resource scheduling, and issues early warnings to reduce disaster losses, safeguarding residents' lives and property and enhancing society's overall disaster resilience.

The company has spent more than a decade developing industry large models, positioning them as new infrastructure for "AI + thousands of industries".

Over the ten-plus years since its founding, the company has focused on industry large models and promoted the intelligent transformation of thousands of industries.

From April 28 to 29, the company took part in the Zhongguancun Forum annual meeting as one of the representative artificial intelligence enterprises, where it once again redefined the development path of industry large models and publicly demonstrated the full capabilities of its industry large-model platform, Sage AIOS 5.0, for the first time. Hu Shiwei, co-founder and president of the company, noted at the event that, as the key to applying artificial intelligence technology, industry large models will become new infrastructure for "AI + thousands of industries," and that a true industry model is not limited to the industry language models commonly understood by the public.

Hu Shiwei pointed out that the real value of industry large models lies in a large collection of scenario-specific models that solve enterprises' core business pain points. Industry large models draw on massive, high-quality data from each industry to expand model parameter scale and improve strategic efficiency for the industry. The company's main business, the 4Paradigm Sage AI Platform, is its primary vehicle for developing industry large models; it contributes nearly 60% of total revenue, and its scale effect is remarkable. Over the years the company has helped customers build industry large models for scenarios such as risk control, personalized recommendation, scheduling, and supply chain management, empowering high-value enterprise businesses, and has penetrated the finance, retail, manufacturing, energy and power, telecommunications, and healthcare industries. Hu Shiwei said the value of building industry large models is that artificial intelligence can be applied across multiple scenarios at a more reasonable, lower cost. The company therefore adapts to local conditions and the characteristics of the Chinese market to deliver the refined industry models each industry and scenario requires. As the coverage of industry large models grows denser, they converge like streams into a sea, which is also the company's distinctive path toward AGI.

The Sage AI Platform continues to iterate, and Sage AIOS 5.0 greatly expands the application scope of industry large models.

The Sage AI Platform has gone through five iterations over ten years, and each upgrade addressed the main pain points of applying artificial intelligence at the time, improving development efficiency and lowering the development threshold. Sage AI Platform 1.0 greatly improved model accuracy through a high-dimensional, real-time, self-learning framework. Sage AI Platform 2.0 drastically lowered the threshold for model development with the automated modeling tool HyperCycle. Sage AI Platform 3.0 standardized AI data governance and production deployment, completing the "last mile" from modeling to application. Sage AI Platform 4.0 introduced North Star metrics to maximize the value of AI applications and strengthen enterprises' core competitiveness. In March 2024, the company released the industry large-model platform Sage AIOS 5.0, which can build industry large models from data of different modalities across industry scenarios, breaking the existing pattern in which an industry model can only feed industry text into a large language model to generate the next word, and greatly expanding the application scope of industry large models. Sage AIOS 5.0 provides a range of development tools and an enterprise-grade model management platform; business staff can carry out the entire process of building, optimizing, reporting on, and managing industry large models through natural-language interaction (an enterprise-level AI Agent). On the data side, the platform supports storage, retrieval, and computation of data in multiple modalities, and through labeling and feedback can turn everyone into a "supplier" of high-quality data.
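To make the "predict the next item of any modality" idea more concrete, the minimal Python sketch below contrasts a text-only next-token interface with a multimodal "next event" interface of the kind described above. It is purely illustrative: all class and field names are hypothetical and are not the Sage AIOS API.

```python
# Illustrative sketch only -- hypothetical names, not the Sage AIOS API.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Event:
    """One observation in a business scenario, tagged with its modality."""
    modality: str    # e.g. "text", "transaction", "sensor_reading"
    payload: object  # raw data for that modality


class NextTokenModel:
    """Text-only LLM: history and prediction are both text tokens."""
    def predict_next(self, tokens: Sequence[str]) -> str:
        return "<next-token>"  # placeholder for a real neural network


class NextAnythingModel:
    """Industry model in the article's sense: the history can mix modalities,
    and the prediction is whatever event type the scenario cares about."""
    def predict_next(self, events: Sequence[Event], target_modality: str) -> Event:
        # A real system would encode each modality and decode into the target one.
        return Event(modality=target_modality, payload="<predicted-event>")


if __name__ == "__main__":
    history = [
        Event("transaction", {"amount": 120.0, "merchant": "M1"}),
        Event("sensor_reading", {"water_level_m": 3.2}),
        Event("text", "customer service ticket: ..."),
    ]
    print(NextAnythingModel().predict_next(history, target_modality="transaction"))
```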

In terms of model capability, users can build industry large models for different scenarios on top of foundation models such as large language models and multimodal models, together with the platform's development tools, to provide more vertical industry capabilities. In terms of computing power, the platform adapts to mainstream hardware, and its self-developed inference framework and acceleration card greatly reduce the cost of large-model inference and applications. In addition, the platform offers an algorithm disclosure platform and a large network of professional talent to ease the talent shortage in industrial applications of large models. The company will continue to polish this industry large-model foundation and contribute to the innovative development of "AI + thousands of industries". We believe the ten years of continuous iteration of the Sage AI Platform are also ten years in which the company's understanding of industry large models has steadily deepened, building a moat that is difficult to replicate. As the company continues to polish its industry large-model foundation, its growth is expected to keep accelerating.

The company has released a large-model inference acceleration card and inference framework, raising inference performance tenfold and halving costs. To break the GPU memory bottleneck in large-model inference, Fourth Paradigm released the large-model inference framework SLXLLM and the hardware inference acceleration card 4Paradigm Sage LLM Accelerator (SLX for short).

Through multi-task shared memory and processing optimization, large-model inference performance increased tenfold. With no loss of model quality, FP16 inference of 6B/7B models on eight 24GB GPUs raised the number of deployable models from 8 to 16, lifted GPU utilization from 55% to 100%, and cut inference cost to half of the original. Notably, this capability will also be integrated into 4Paradigm Sage AIOS 5.0 to promote large-model adoption. A widely recognized bottleneck in large-model inference today is GPU memory. Like compute, memory is a key measure of GPU performance and stores data such as intermediate results and model parameters. During large-model inference, limited GPU memory prevents the GPU's compute from being fully utilized, so GPU utilization stays low and inference costs remain high. To address this, Fourth Paradigm released the inference framework SLXLLM and the inference acceleration card SLX. With the two jointly optimized, large-model inference performance in text-generation scenarios improved tenfold; for example, in an inference test of a 72B model on four 80GB GPUs, the SLXLLM+SLX solution ran 40 tasks concurrently versus 4 with vLLM. In addition, the SLX acceleration card is also compatible with mainstream large-model inference frameworks such as TGI, FastLLM, and vLLM, improving inference performance by roughly 1-8x.
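As a rough sanity check on the memory figures above, the back-of-the-envelope calculation below (a sketch based on standard FP16 sizing, not vendor data) shows why a 24GB card holds roughly one copy of a 6B/7B model's FP16 weights, so that going from 8 to 16 deployed models on eight cards implies the kind of cross-task storage sharing described.

```python
# Back-of-the-envelope FP16 memory check; the sharing interpretation is an
# assumption on our part, not taken from Fourth Paradigm's documentation.
GIB = 1024 ** 3


def fp16_weights_gib(n_params: float) -> float:
    """FP16 stores 2 bytes per parameter."""
    return n_params * 2 / GIB


for label, n in [("6B", 6e9), ("7B", 7e9)]:
    print(f"{label} model, FP16 weights: ~{fp16_weights_gib(n):.1f} GiB")
# 6B -> ~11.2 GiB, 7B -> ~13.0 GiB

# On a 24 GiB card, one copy of the weights plus KV cache and activations
# leaves no room for a fully independent second copy, which matches the
# baseline of 8 models across 8 GPUs. Reaching 16 deployments without
# quality loss therefore implies that concurrent tasks share weight/KV
# storage -- consistent with the "multi-task shared memory" cited above.
```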

We believe that, beyond its AI platform, the company now also offers a large-model inference acceleration card and inference framework, which both rounds out its business layout and creates potential new growth drivers for the future.

Profit forecast and investment advice. We believe the company's overall Q1 performance was strong, confirming the viability of its current business model. Its decade-plus accumulation in industry large models is expected to be gradually released as the AI industry develops rapidly, becoming an important driver of earnings growth, and the outlook for the full year is promising. Taking these factors together, we assign Fourth Paradigm a 2024 PS of 6-7x, corresponding to a reasonable value range of HK$73.44-85.68 (1 HKD = RMB 0.9254), and initiate coverage with an "Outperform" rating.

Risk warning. The risk that AI development falls short of expectations; the risk that AI commercialization falls short of expectations; etc.


