Concept News | Financial Associated Press Theme Library Adds “Multi-Modal Model”

cls.cn · Dec 8, 2023 09:10

① On December 6, 2023, Google announced the launch of Gemini, which it describes as its largest and most capable artificial intelligence model to date. As the first multi-modal model released by Google, and a first worldwide, Gemini is the first model to surpass human experts on the MMLU benchmark. ② A list of multi-modal model concept stocks follows.

On December 6, 2023, Google announced the launch of Gemini, which it describes as its largest and most capable artificial intelligence model to date. As the first multi-modal model released by Google, and a first worldwide, Gemini is the first model to surpass human experts on the MMLU benchmark. The model comes in three sizes, Gemini Ultra, Gemini Pro, and Gemini Nano, and supports both cloud-side and device-side deployment.

According to a research report published by Ping An Securities, the Gemini large model launched by Google stands out for its capabilities in the multi-modal field: 1) In the text field, Gemini Ultra leads GPT-4 on multiple benchmarks and has become the first model to surpass human experts on massive multi-task language understanding (MMLU). 2) In the multi-modal field, Gemini Ultra also surpassed GPT-4V on multiple image, video, and audio benchmarks. 3) Beyond combining modalities, Gemini also demonstrated powerful abilities in processing multi-modal input and performing cross-modal inference. The current intense competition in multi-modal AI will continue to raise the overall capability level of large models, helping to expand the application scenarios and boundaries of large models and promoting a further explosion of AI applications.

According to a report released by market research firm IDC on November 2, 2023, the global AI application software market was worth 64 billion US dollars in 2022 and is expected to grow to 279 billion US dollars by 2027, a compound annual growth rate (CAGR) of 31.4%.
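The growth math above uses the standard formula CAGR = (end / start)^(1/years) − 1. The sketch below is illustrative only: the `cagr` helper is a made-up name, and the exact base period and rounding behind IDC's published 31.4% figure are not stated in the article, so applying the textbook formula directly to the two endpoint values may yield a somewhat different rate.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth rate
    that takes start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# IDC's endpoint figures: $64B in 2022, $279B forecast for 2027.
rate = cagr(64, 279, 2027 - 2022)
print(f"{rate:.1%}")  # textbook formula applied to the endpoints
```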

Relevant constituent stock information:

1) Thors

Reason for selection: The company is one of the earliest companies in China to develop artificial intelligence technology, and owns autonomous, controllable underlying technology for multi-modal content processing in fields such as NLP, knowledge graphs, OCR, and image and video structuring.

2) Xinhuanet

Reason for selection: Xinhua Zhiyun, a company in which Xinhuanet holds a stake, launched the Media Brain platform in 2017 and has long been committed to applying AI technology to empower content production. The company initiated and formulated standards for machine-generated content (MGC) and launched the MAGIC platform, which, based on self-developed intelligent short-video production technology, can quickly and automatically produce a wide variety of short videos; it is already in commercial use in media, cultural tourism, finance, conferences, and other sectors.

3) Tom Cat

Reason for selection: The company's domestic R&D team, in collaboration with Xihu Xinchen, has developed a multi-modal AI Tom Cat product that has initially realized functions such as photo recognition, spoken-English learning for beginners, interest guidance, popular-science education, AI life photos, AI-generated picture books, and contextual dialogue. In November 2023, the company stated that "Talking Ben AI", the first AI mobile game developed by its overseas team, had begun its first round of overseas testing in Slovenia, Cyprus, South Africa, and other regions.

4) Yuncong Technology

Reason for selection: The company focuses on the fields of vision, speech, and multi-modality, and is committed to providing "intelligent brain" services for intelligent connectivity.

5) Bohui Technology

Reason for selection: The company has stepped up R&D in natural language processing and machine vision. Using artificial intelligence, big data, and other technologies, it analyzes collected data, performs feature learning and sample training, and has built an intelligent supervision model, improving its ability to process and analyze multi-modal data such as text, images, audio, and video.

6) iFLYTEK

Reason for selection: At the 2022 global "1024 Developer Festival", the company officially released AIBOT, the iFLYTEK robot super-brain platform, which has already been connected to the Spark cognitive large model. Relying on the "iFLYTEK Super Brain 2030" technology base, the platform offers robot development centered on an AI capability "nebula", multi-modal interaction, intelligent motion, model training, asset generation, and software and hardware access.

7) Yijiahe

Reason for selection: YJH-LM, a large model based on multi-modal hyper-fusion technology released by the company, has completed functional testing on the company's commercial cleaning robot.

8) DeepGlint

Reason for selection: The company's self-developed large models are mainly in the visual field, and also have multi-modal capabilities such as speech and semantics.

9) Suzhou Keda

Reason for selection: In the field of artificial intelligence, the company has long cultivated deep-learning-based AI technology, closely follows cutting-edge trends, and officially launched the KD-GPT large model in July 2023, which includes a multi-modal large model, an AIGC image model, and industry models; these have begun to take shape and are being applied in actual projects.

Use the Financial Associated Press search function to search the "Multi-Modal Model" theme to view the complete list of constituent stocks.

The translation is provided by third-party software.


The above content is for informational or educational purposes only and does not constitute any investment advice related to Futu. Although we strive to ensure the truthfulness, accuracy, and originality of all such content, we cannot guarantee it.