
Large Language Models Are Not the "Endgame" of AI? Meta's Chief Scientist: They Still Fall Short of Human Intelligence

cls.cn ·  May 23 17:09

① Yann LeCun, chief AI scientist at Meta, believes that all existing large language models have only a very limited grasp of logic and will never match human reasoning and planning ability; ② LeCun said his team is working to build artificial intelligence that can form "common sense," an approach he calls "world modeling"; ③ LeCun's views have also caused deep divisions within Meta.

Cailian Press, May 23 (Editor Zhou Ziyi) — Yann LeCun, the chief artificial intelligence (AI) scientist at Meta, believes that existing large language models (LLMs) will never achieve human-like reasoning and planning abilities.

LeCun said that large language models have "a very limited understanding of logic. They do not understand the physical world, have no persistent memory, cannot reason in any reasonable definition of the term, and cannot plan hierarchically."

In a recent interview, he argued that it is impossible to build artificial general intelligence (AGI) comparable to human intelligence on top of today's advanced large language models, because these models can answer prompts accurately only if they have been fed the right training data, and are therefore "intrinsically unsafe."

Specifically, LeCun believes that although current large language models perform impressively at natural language processing, conversational understanding, dialogue interaction, and text generation, they remain at heart a "statistical modeling" technique: by learning the statistical regularities in data, they mimic understanding without genuinely being able to understand or reason.

Meanwhile, LeCun himself is working hard to develop a new generation of AI systems. He hopes these systems will endow machines with human-level intelligence and ultimately create "superintelligence" in machines, though he noted that this vision may take ten years to realize.

The "world modeling" approach

LeCun manages a team of around 500 people at Meta's Fundamental AI Research (FAIR) lab. They are working to build AI that can form "common sense" and learn how the world works in a human-like way, by observing and experiencing it, ultimately achieving artificial general intelligence (AGI) through an approach known as "world modeling."

LeCun first published a paper on the "world modeling" vision in 2022, and Meta has since released two research models based on the approach.

LeCun recently noted that FAIR is testing a variety of ideas in the hope that AI can eventually reach the level of human intelligence. However, "there is a lot of uncertainty and exploration in this, and we cannot yet tell which ones will succeed or which will ultimately be chosen."

Still, he firmly believes, "We are on the cusp of the next generation of artificial intelligence systems."

Internal conflicts

However, the scientist's experimental vision is a costly gamble for Meta at a time when investors would rather see quick returns on their AI investments.

As a result, a split between "short-term revenue" and "long-term value" has emerged inside Meta, one visible in the creation of the GenAI team last year.

Meta founded FAIR in 2013 to pioneer AI research and recruit top scholars in the field. In 2023, however, Meta created a separate GenAI team led by Chief Product Officer Chris Cox. The new team drew a number of AI researchers and engineers away from FAIR, led the work on the Llama 3 model, and integrated it into products such as the company's new AI assistant and image-generation tools.

Some insiders believe the GenAI team was established partly because of a conceptual clash between LeCun and Meta Chief Executive Mark Zuckerberg. Under pressure from investors to turn a profit, Zuckerberg has been pushing for more commercial applications of AI, while FAIR's academic culture had left Meta looking slightly "weak" in the generative AI boom.

LeCun voiced these views just as Meta and its rivals were rolling out enhanced versions of their large language models: OpenAI released the faster GPT-4o model last week, Google launched a new "multimodal" AI assistant, Project Astra, and Meta itself launched the latest Llama 3 model last month.

LeCun is dismissive of these latest models. "This evolution of large language models is superficial and limited," he believes. "The models learn only when human engineers step in and train them on that information, rather than drawing conclusions naturally the way humans do." The remark also amounts to a slap in the face for Meta's own Llama models.

Despite the clash of ideas, people familiar with the matter say LeCun remains one of Zuckerberg's core advisors because of his excellent reputation in the field of artificial intelligence.
