Automotive AI Foundation Model Technology and Application Trends Report 2024: AI Foundation Models Evolve Rapidly, Bringing New Opportunities

DUBLIN, March 6, 2024 /PRNewswire/ — The “Automotive AI Foundation Model Technology and Application Trends Report, 2023-2024” report has been added to ResearchAndMarkets.com’s offering.

Since 2023, ever more vehicle models have been connected with foundation models, and a growing number of Tier 1 suppliers have launched automotive foundation model solutions. In particular, Tesla's major progress with FSD V12 and the launch of Sora have accelerated the implementation of AI foundation models in cockpits and intelligent driving.

In February 2024, Tesla began pushing FSD V12.2.1, which adopts an end-to-end autonomous driving model, to customers in the United States, no longer just to employees and testers. According to feedback from the first customers, FSD V12 is powerful enough that ordinary people who previously neither believed in nor used autonomous driving now dare to use it. For example, FSD V12 can steer around puddles on the road. A Tesla engineer commented that this kind of driving behavior is difficult to implement with explicit code, but Tesla's end-to-end approach makes it almost effortless.

The development of AI foundation models for autonomous driving can be divided into four phases. 

  • Phase 1 uses a foundation model (Transformer) at the perception level.
  • Phase 2 is modularization, with foundation models used in perception, planning & control, and decision-making.
  • Phase 3 is end-to-end foundation models (one ‘end’ is raw data from sensors, and the other ‘end’ directly outputs driving actions).
  • Phase 4 is moving from vertical AI toward artificial general intelligence (the world model of AGI).

Most companies are now in Phase 2, while Tesla FSD V12 is already in Phase 3. Other OEMs and Tier 1s are following Tesla's end-to-end FSD V12. On January 30, 2024, Xpeng Motor announced that its end-to-end model would be rolled out to vehicles in the next step, and NIO and Li Auto are also expected to launch end-to-end autonomous driving models in 2024.

FSD V12's driving decisions are generated by an AI algorithm: end-to-end neural networks trained on massive amounts of video data replace more than 300,000 lines of C++ code. FSD V12 charts a new path that still needs to be verified; if it proves feasible, it will have a disruptive impact on the industry.
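
To make "end-to-end" concrete, below is a minimal PyTorch-style sketch of a policy that maps raw camera video directly to control commands. Every module, size, and name here is an illustrative assumption, not Tesla's actual FSD V12 architecture.

```python
# Minimal sketch of an "end-to-end" driving policy: raw camera frames in,
# control commands out. All names and sizes are illustrative assumptions;
# this is not Tesla's actual FSD V12 architecture.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self, num_cams=8, embed_dim=256):
        super().__init__()
        # Per-frame visual encoder (stand-in for a large vision backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Temporal fusion over the video clip (stand-in for a transformer).
        self.temporal = nn.GRU(embed_dim * num_cams, embed_dim, batch_first=True)
        # Head maps fused features directly to controls: [steer, accel].
        self.head = nn.Linear(embed_dim, 2)

    def forward(self, frames):
        # frames: (batch, time, num_cams, 3, H, W)
        b, t, n, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t * n, c, h, w))
        feats = feats.reshape(b, t, -1)      # (batch, time, num_cams * embed)
        fused, _ = self.temporal(feats)
        return self.head(fused[:, -1])       # controls for the latest frame

policy = EndToEndPolicy()
clip = torch.randn(1, 4, 8, 3, 128, 128)     # 4 frames from 8 cameras
print(policy(clip))                          # tensor([[steer, accel]])
```

The point of the sketch is the interface: no hand-written driving rules appear anywhere between pixels and controls, which is exactly what the 300,000 lines of C++ used to provide.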

On February 16, 2024, OpenAI introduced the text-to-video model Sora, signaling the coming wide adoption of AI video applications. Sora not only generates videos of up to 60 seconds from text or images, but also far outperforms previous technologies in video generation quality, complex scene and character generation, and physical world simulation.

Through vision, both Sora and FSD V12 enable AI to understand and even simulate the real physical world. Elon Musk believes that FSD V12 and Sora are just two fruits of AI's ability to recognize and understand the world through vision: FSD is ultimately used for driving behaviors, while Sora is used to generate videos.

The popularity of Sora is further evidence of the soundness of FSD V12's approach; Musk commented: "Tesla generative video from last year".

AI foundation models evolve rapidly, bringing new opportunities.

In the past three years, foundation models for autonomous driving have undergone several evolutions, and leading automakers have had to rewrite their autonomous driving systems almost every year, which also provides entry opportunities for latecomers.

At CVPR 2023, UniAD, an end-to-end autonomous driving algorithm jointly released by SenseTime, OpenDriveLab and Horizon Robotics, won the Best Paper Award.

In early 2024, Waytous' technical team and the Institute of Automation of the Chinese Academy of Sciences jointly proposed GenAD, the industry's first generative end-to-end autonomous driving model, which combines generative AI with end-to-end autonomous driving technology. It breaks away from UniAD's progressive, staged end-to-end solution and explores a new end-to-end autonomous driving mode. The key is using generative AI to predict the temporal evolution of the vehicle and its surroundings from past scenes.
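
The generative idea can be sketched as follows: summarize past scenes as a state sequence, autoregressively generate future states, and read the ego trajectory off the rollout. This is a hedged illustration under assumed names and dimensions, not GenAD's published implementation.

```python
# Hedged sketch of generative end-to-end planning: treat past scene states
# as a sequence, autoregressively generate future states, then decode the
# ego trajectory from the rollout. All names and dimensions are hypothetical.
import torch
import torch.nn as nn

class SceneRollout(nn.Module):
    def __init__(self, state_dim=64, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(state_dim, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        self.next_state = nn.Linear(state_dim, state_dim)  # generative step
        self.to_ego_xy = nn.Linear(state_dim, 2)           # decode ego position

    def forward(self, past_states, horizon=6):
        # past_states: (batch, T_past, state_dim) summarizing vehicle + scene
        seq = past_states
        trajectory = []
        for _ in range(horizon):               # autoregressive rollout
            ctx = self.backbone(seq)
            nxt = self.next_state(ctx[:, -1:]) # predict the next scene state
            trajectory.append(self.to_ego_xy(nxt))
            seq = torch.cat([seq, nxt], dim=1)
        return torch.cat(trajectory, dim=1)    # (batch, horizon, 2) waypoints

model = SceneRollout()
past = torch.randn(1, 4, 64)                   # 4 past scene states
print(model(past).shape)                       # torch.Size([1, 6, 2])
```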

In February 2024, Horizon Robotics and Huazhong University of Science and Technology proposed VADv2, an end-to-end driving model based on probabilistic planning. VADv2 takes multi-view image sequences as input in a streaming manner, transforms the sensor data into environmental token embeddings, outputs a probabilistic distribution over actions, and samples one action to control the vehicle. Using only camera sensors, VADv2 achieves state-of-the-art closed-loop performance on the CARLA Town05 benchmark, far better than all existing approaches, and runs stably in a fully end-to-end manner even without a rule-based wrapper.
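
The probabilistic-planning interface just described can be sketched as follows: environment token embeddings go in, a distribution over a fixed action vocabulary comes out, and one action is sampled. Sizes and module names below are assumptions for illustration, not the paper's exact model.

```python
# Sketch of a VADv2-style probabilistic planner: score a fixed vocabulary
# of candidate actions against environment tokens, form a distribution,
# and sample one action. Sizes and names are assumptions for illustration.
import torch
import torch.nn as nn

VOCAB_SIZE = 4096  # assumed size of the discretized action vocabulary

class ProbabilisticPlanner(nn.Module):
    def __init__(self, token_dim=256, vocab_size=VOCAB_SIZE):
        super().__init__()
        # One learnable query per candidate action in the vocabulary.
        self.action_queries = nn.Embedding(vocab_size, token_dim)
        # Cross-attention: action queries attend to environment tokens.
        self.attn = nn.MultiheadAttention(token_dim, num_heads=8, batch_first=True)
        self.score = nn.Linear(token_dim, 1)

    def forward(self, env_tokens):
        # env_tokens: (batch, n_tokens, token_dim) from the image encoder
        b = env_tokens.size(0)
        queries = self.action_queries.weight.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.attn(queries, env_tokens, env_tokens)
        logits = self.score(fused).squeeze(-1)            # (batch, vocab_size)
        probs = torch.softmax(logits, dim=-1)             # distribution over actions
        action = torch.multinomial(probs, num_samples=1)  # sample one action
        return probs, action

planner = ProbabilisticPlanner()
env = torch.randn(1, 300, 256)     # e.g. 300 environment token embeddings
probs, action = planner(env)
print(action.item(), probs[0, action].item())
```

Outputting a distribution rather than a single trajectory lets the planner represent multiple plausible maneuvers in ambiguous scenes, with sampling (or argmax) deciding the one actually executed.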

On the Town05 Long benchmark, VADv2 achieved a Drive Score of 85.1, a Route Completion of 98.4, and an Infraction Score of 0.87. Compared with the previous state-of-the-art method, VADv2 achieves higher Route Completion while improving Drive Score by 9.0. Notably, VADv2 uses only cameras as perception input, whereas DriveMLM uses both cameras and LiDAR. Compared with the previous best camera-only method, VADv2's advantage is even greater, with a Drive Score increase of up to 16.8.

Also in February 2024, the Institute for Interdisciplinary Information Sciences at Tsinghua University and Li Auto introduced DriveVLM. A sequence of images is processed by a large vision-language model (VLM), which performs chain-of-thought (CoT) reasoning to produce driving plans. The VLM comprises a visual encoder and a large language model (LLM).

Because VLMs are limited in spatial reasoning and demand heavy computation, the DriveVLM team proposed DriveVLM-Dual, a hybrid system that combines the strengths of DriveVLM with those of conventional autonomous driving pipelines. DriveVLM-Dual optionally pairs DriveVLM with conventional 3D perception and planning modules, such as a 3D object detector, an occupancy network, and a motion planner, allowing the system to achieve 3D localization and high-frequency planning. This dual-system design, analogous to the slow and fast thinking processes of the human brain, adapts effectively to the varying complexity of driving scenarios.
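
A minimal sketch of that dual-system control loop is given below, assuming a roughly 1 Hz slow VLM planner and a 10 Hz fast conventional planner; the function names and rates are hypothetical stand-ins, not the DriveVLM-Dual implementation.

```python
# Hedged sketch of a dual-system loop: a slow VLM planner proposes a coarse
# plan at low frequency, while a fast conventional planner runs every cycle
# and is seeded by the latest slow plan. Names and rates are hypothetical.
import time

SLOW_PERIOD_S = 1.0    # assumed VLM reasoning rate (~1 Hz, "slow thinking")
FAST_PERIOD_S = 0.1    # assumed conventional planner rate (~10 Hz, "fast thinking")

def slow_vlm_plan(images):
    """Stand-in for chain-of-thought reasoning by the large VLM."""
    return {"coarse_waypoints": [(0, 0), (5, 1), (10, 2)]}

def fast_pipeline_plan(sensors, seed_plan):
    """Stand-in for 3D detection + occupancy + motion planning, refined
    around the coarse waypoints proposed by the slow system."""
    return {"trajectory": seed_plan["coarse_waypoints"], "refined": True}

def control_loop(get_images, get_sensors, cycles=20):
    last_slow = time.monotonic()
    seed = slow_vlm_plan(get_images())           # initial slow-system plan
    for _ in range(cycles):
        now = time.monotonic()
        if now - last_slow >= SLOW_PERIOD_S:     # low-frequency VLM update
            seed = slow_vlm_plan(get_images())
            last_slow = now
        plan = fast_pipeline_plan(get_sensors(), seed)  # high-frequency planning
        # ...here plan["trajectory"] would go to the vehicle controller...
        time.sleep(FAST_PERIOD_S)

control_loop(get_images=lambda: [], get_sensors=lambda: {}, cycles=5)
```

The design choice mirrors the text: the expensive VLM never blocks the control loop, yet its reasoning still steers the high-frequency planner via the shared seed plan.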

Key Topics Covered:



1 Classification of Autonomous Driving (AD) Algorithms and Common Algorithm Models

1.1 AD System Classification and Software 2.0

1.2 Baidu AD Algorithm Development History

1.3 Tesla AD Algorithm Development History

1.4 Neural Network Model

1.5 Traditional AD AI Algorithms (Small Model)

1.6 Transformer and BEV (Foundation Model)

1.7 End-to-end Foundation Model Cases



2 Overview of AI Foundation Model and Intelligent Computing Center

2.1 AI Foundation Model

2.2 Application of AI Foundation Model in Automotive

2.3 Autonomous Driving (AD) Multimodal Basic Foundation Model

2.4 Intelligent Computing Center



3 Tesla Algorithm and Foundation Model Analysis

3.1 Algorithm Fusion of CNN and Transformer

3.2 Transformer Turns 2D into 3D

3.3 Occupancy Network, Semantic Segmentation and Time-space Sequence

3.4 LaneGCN and Search Tree

3.5 Data Closed Loop and Data Engine



4 AI Algorithms and Foundation Model Providers

4.1 Haomo.ai

4.2 QCraft

4.3 Baidu

4.4 Inspur

4.5 SenseTime

4.6 Huawei

4.7 Unisound

4.8 iFLYTEK

4.9 AISpeech

4.10 Megvii Technology

4.11 Volcengine

4.12 Tencent Cloud

4.13 Other Companies

4.13.1 Banma Zhixing

4.13.2 ThunderSoft

4.13.3 Horizon Robotics’ End-side Deployment of Foundation Model



5 Foundation Model of OEMs

5.1 Xpeng Motor

5.2 Li Auto

5.3 Geely

5.4 BYD

5.5 GM

5.6 Changan Automobile

5.7 Other Auto Enterprises

5.7.1 GWM: All-round Layout of AI Foundation Model

5.7.2 Chery: EXEED STERRA ES Equipped with Cognitive Foundation Model

5.7.3 GAC

5.7.4 SAIC-GM-Wuling

5.7.5 Mercedes-Benz

5.7.6 Volkswagen

5.7.7 Stellantis

5.7.8 PSA



6 Application Trends of Sora and AI Foundation Model in Automotive

6.1 Analysis of Sora Text-to-Video Foundation Model

6.2 Explanation of Sora’s Underlying Algorithm Architecture

6.3 Generative World Model and Intelligent Vehicle Industry

6.4 Application Trends of AI Foundation Model in Automotive

6.5 AI Foundation Model Requirements for Chips

Companies Profiled

  • Haomo.ai
  • QCraft
  • Baidu
  • Inspur
  • SenseTime
  • Huawei
  • Unisound
  • iFLYTEK
  • AISpeech
  • Megvii Technology
  • Volcengine
  • Tencent Cloud
  • Banma Zhixing
  • ThunderSoft
  • Horizon Robotics
  • Xpeng Motor
  • Li Auto
  • Geely
  • BYD
  • GM
  • Deepal GPT
  • GWM
  • Chery
  • GAC
  • SAIC-GM-Wuling
  • Mercedes-Benz
  • Volkswagen
  • Stellantis
  • PSA

For more information about this report visit https://www.researchandmarkets.com/r/o1dfga

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world’s leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Media Contact: 

Research and Markets

Laura Wood, Senior Manager

[email protected]

For E.S.T Office Hours Call +1-917-300-0470

For U.S./CAN Toll Free Call +1-800-526-8630

For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907

Fax (outside U.S.): +353-1-481-1716

Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg

View original content: https://www.prnewswire.com/news-releases/automotive-ai-foundation-model-technology-and-application-trends-report-2024-ai-foundation-models-evolve-rapidly-bringing-new-opportunities-302081061.html

SOURCE Research and Markets
