Source: Wallstreetcn
Author: Jiang Zihan
Elon Musk plans to feed large volumes of footage from existing Tesla cars into Tesla's AI system so the algorithm can learn safe driving through a technique called "imitation learning." However, a fully autonomous driving system built on imitation learning alone still has flaws, and further progress will require breakthroughs in AI that may take some time.
Elon Musk is betting that robotaxis will drive Tesla into a new era of profitability, but some media analysis suggests that his approach may be wrong.
On Monday, November 4th, The Wall Street Journal reported that Musk's plan to achieve autonomous driving revolves around what he calls "end-to-end artificial intelligence." Musk plans to feed a large amount of video footage from existing Tesla cars to Tesla's AI system, allowing the algorithms to learn safe driving.
Musk's approach contrasts sharply with that of other autonomous driving companies. While Waymo, Google's sister company and the industry leader in autonomous driving, also relies heavily on artificial intelligence, its method breaks the autonomous driving problem into more specific tasks and draws on data from multiple sensors, such as lidar and radar, to give its cars a richer view of the environment.
In essence, Musk aims to build an AI system that learns by watching humans drive, a technique known as imitation learning. Companies like Waymo, by contrast, let the AI drive and correct its errors along the way, a technique known as reinforcement learning.
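The difference between the two paradigms can be sketched in miniature. The following toy is purely illustrative and bears no resemblance to either company's real systems: the one-dimensional "steering gain," the invented reward function, and the hill-climbing loop are all assumptions made for the sake of the example. The imitator copies logged human actions; the reinforcement learner has no human labels and instead keeps whatever behavior scores better.

```python
# Toy contrast between imitation learning and reinforcement learning.
# Everything here (the target gain, the reward function) is invented.
import numpy as np

rng = np.random.default_rng(2)

TARGET = -0.8  # the "safe" steering gain a human driver uses (invented)

def reward(w):
    # Higher reward the closer a policy's behavior is to safe driving.
    return -(w - TARGET) ** 2

# Imitation learning: supervised regression on human (obs, action) logs.
obs = rng.uniform(-1, 1, 500)
acts = TARGET * obs                       # what the human actually did
w_imit = float(np.polyfit(obs, acts, 1)[0])

# Reinforcement learning: no human labels; try actions, keep what scores.
w_rl = 0.0
for _ in range(200):
    candidate = w_rl + rng.normal(0, 0.1)  # perturb its own behavior
    if reward(candidate) > reward(w_rl):   # corrected when it errs
        w_rl = candidate

print(round(w_imit, 2), round(w_rl, 2))    # both should land near -0.8
```

Both routes can reach similar behavior, but the imitator needs human demonstrations to copy, while the reinforcement learner needs a way to score and correct its own driving, which is the distinction the article draws between Tesla's and Waymo's approaches.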
AI experts suggest that Tesla's approach of building a fully autonomous driving system based on imitation learning requires AI breakthroughs, which might take some time.
Shortcomings of Musk's Method
Musk believes that Tesla's strength lies in the built-in cameras on all of its vehicles, which capture a large amount of real-world driving footage. Its robotaxis can therefore draw on a wealth of real driving video data, including everything collected by the existing Full Self-Driving (FSD) system in Tesla cars.
Using this passively recorded data to train Tesla's AI requires a technique called imitation learning. Computer scientist Timothy B. Lee said that to benefit from this data, Tesla's AI must watch millions of hours of human driving video and try to imitate human actions.
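At its core, the imitation learning Lee describes is supervised learning: treat features of each video frame as inputs and the human driver's logged action as the target, then fit a model to reproduce that action. The sketch below is a minimal behavioral-cloning toy, not Tesla's actual pipeline; the three frame features and the hidden "human" weights are invented for illustration.

```python
# Minimal behavioral-cloning sketch: fit a model to reproduce the
# actions a human driver took, frame by frame. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each video frame has been reduced to 3 features
# (e.g. lane offset, road curvature, distance to the lead car).
frames = rng.normal(size=(1000, 3))

# The logged human steering commands we want to imitate,
# generated here by hidden "human" weights.
human_w = np.array([-0.8, 0.3, 0.1])
human_steering = frames @ human_w

# Behavioral cloning = ordinary supervised regression on those logs.
learned_w, *_ = np.linalg.lstsq(frames, human_steering, rcond=None)

print(np.round(learned_w, 2))  # recovers the human's weights
```

With clean, noise-free logs the regression recovers the human policy exactly; the hard part in practice is that real driving logs are noisy, high-dimensional video rather than three tidy features.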
Experts point out that Musk's approach has flaws.
First, systems trained primarily through imitation learning may fail when they encounter situations that fall outside the scope of their training data.
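This failure mode, often called distribution shift, can be shown with a tiny invented example: a model fit only to a narrow slice of situations extrapolates badly outside it. The quadratic "true" human behavior and the speed ranges below are made up purely to illustrate the point.

```python
# Toy illustration of distribution shift: an imitator trained only on
# moderate speeds extrapolates badly at speeds it never saw.
import numpy as np

rng = np.random.default_rng(1)

def human_action(speed):
    # Invented nonlinear human behavior: braking grows with speed^2.
    return 0.5 * speed ** 2

# Training data covers only moderate speeds (the training distribution).
train_speed = rng.uniform(0.0, 1.0, size=200)
train_action = human_action(train_speed)

# Fit a straight line: a fine approximation inside the training range.
slope, intercept = np.polyfit(train_speed, train_action, 1)

in_dist = 0.5    # a situation like those seen in training
out_dist = 3.0   # far outside anything in the training data

in_err = abs((slope * in_dist + intercept) - human_action(in_dist))
out_err = abs((slope * out_dist + intercept) - human_action(out_dist))
print(in_err, out_err)  # out-of-distribution error is far larger
```

The model looks competent on familiar inputs while being badly wrong on rare ones, which is exactly why a system trained mostly on routine driving footage can behave unpredictably in unusual road situations.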
Second, Tesla's heavy focus on "end-to-end AI" has turned its system into a complex black box, making it difficult to understand why the system behaves the way it does, and equally difficult to find ways to correct those behaviors.
For example, Tesla's current Full Self-Driving system can operate on most city roads and highways, but it requires close driver monitoring because the system can make sudden and potentially fatal decisions, such as steering directly into the path of other vehicles, running red lights, or failing to stop for trains in foggy weather.
The U.S. federal auto safety regulator, the National Highway Traffic Safety Administration, recently announced an investigation into the role Tesla's Full Self-Driving system played in fatal accidents.
Waymo co-founder Anthony Levandowski said that Musk's goal of launching a fully autonomous driving system within a year is unrealistic. Creating the kind of system Musk envisions may require further breakthroughs in AI technology, and when those breakthroughs will arrive remains uncertain.
Editor/Jeffy