At the WAIC 2025 forum on AI-powered innovation in autonomous driving, held on July 26, the achievements of the Shanghai Autonomous Driving Training Ground were officially announced. Led by the Intelligent Vehicle Innovation Development Platform, the training ground partnered with organizations including Shanghai Yidian, Cupas, SenseTime, SAIC Group, and Zhiji Automobile, integrating the full technical chain of 'public infrastructure - data collection - data processing - world model application - on-vehicle model training.' As a key project in Shanghai's high-level autonomous driving area, the training ground aims to foster an internationally competitive smart connected vehicle industry cluster.

Wang Xiaogang, co-founder, executive director, and CTO of SenseTime and CEO of SenseTime Zhiying, was invited to jointly announce the achievements and deliver a keynote speech. He said Zhiying is honored to contribute to the construction of the Shanghai Autonomous Driving Training Ground, leveraging its leading data-generation and simulation-testing capabilities to accelerate the deployment and adoption of safe, reliable assisted driving systems and to set new standards for assisted driving.

At this year's WAIC 2025, Zhiying also upgraded 'Zhiying Kaiwu,' the industry's first mass-produced, interactive world model, officially launching the first generative world model product platform in the assisted driving field alongside 'WorldSim-Drive,' the largest-scale generative driving dataset, to continuously empower the assisted driving industry. Attendees at the SenseTime Zhiying booth could interact with the world model in real time and experience its industry-leading data-generation capabilities.

As assisted driving technology advances rapidly, data-driven end-to-end solutions are becoming mainstream.
Domestic automakers, however, struggle to obtain high-quality driving data: collection is difficult and costly, scene distribution is insufficient, and closed-loop testing verification is inadequate. Zhiying therefore created the globally leading 'Zhiying Kaiwu' world model, which simulates real-world driving scenarios to generate high-definition simulation data that is realistic, controllable, diverse, and consistently high quality.

On the Shanghai Autonomous Driving Training Ground platform, using computational and data infrastructure developed with partners, the scene videos generated by the 'Zhiying Kaiwu' world model maintain spatial-temporal consistency across 11 camera perspectives, last up to 150 seconds, and reach professional-grade 1080p resolution. Training scenarios support element-level fine-grained control, enabling free editing and adjustment, including one-click generation of extremely high-risk scenarios and controlled generation of diverse scenes.

Zhiying is currently collaborating closely with Zhiji Automobile, under SAIC Group, to build an end-to-end data factory aimed at mass production. The two parties have worked to define and plan high-value end-to-end scenarios and have successfully streamlined the data-generation chain for scenarios such as cut-ins and collisions. In the next phase, they will jointly build an industry-leading, efficient end-to-end data production engine, covering high-value scenarios with a million clips of synthetic data to empower the development of SAIC's end-to-end intelligent driving.

Meanwhile, data generated by 'Zhiying Kaiwu' already spans thousands of long-tail scenarios, enabling closed-loop simulation testing. Zhiying and Zhiji Automobile are also collaborating on closed-loop simulation testing, creating rich test-scene data for scenarios such as emergency braking in lane and navigating roundabouts.
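For concreteness, the published generation specs above (11 camera views, clips up to 150 seconds, 1080p resolution) can be captured in a small validation sketch. The schema below is hypothetical (the article does not describe Zhiying's data format); only the numeric specs come from the text:

```python
from dataclasses import dataclass

# Specs quoted in the article for 'Zhiying Kaiwu' generated clips.
NUM_CAMERA_VIEWS = 11        # spatial-temporal consistency across 11 perspectives
MAX_DURATION_S = 150         # clips last up to 150 seconds
RESOLUTION = (1920, 1080)    # professional-grade 1080p


@dataclass
class GeneratedClip:
    """Hypothetical metadata record for one generated multi-camera clip."""
    clip_id: str
    num_views: int
    duration_s: float
    width: int
    height: int

    def meets_spec(self) -> bool:
        """Check the clip against the published generation specs."""
        return (
            self.num_views == NUM_CAMERA_VIEWS
            and 0 < self.duration_s <= MAX_DURATION_S
            and (self.width, self.height) == RESOLUTION
        )


clip = GeneratedClip("demo-0001", num_views=11, duration_s=150.0,
                     width=1920, height=1080)
print(clip.meets_spec())  # True
```

A check like this would typically sit at the intake stage of a data pipeline, rejecting clips that fall outside the advertised envelope before they reach training.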
In the future, the two parties plan to jointly develop a library of tens of millions of generated scenes, building comprehensive test cases that cover all driving possibilities to further ensure driving safety.

The newly upgraded 'Zhiying Kaiwu' is the industry's first mass-produced, interactive world model. At WAIC 2025, Zhiying showcased the upgraded model and officially launched the first generative world model product platform for assisted driving, open for trial use by B-end and C-end users. Built on industry-SOTA world model technology, the platform combines a strong understanding of physical laws with controllable scenario generation, serving as an innovative tool to address the data bottleneck in assisted driving. On one hand, it allows flexible customization of scene videos, supporting editing and generalization of elements such as camera perspectives, weather conditions, and road types to enrich the diversity of training scenarios. On the other hand, it generates multiple scenarios with one click from text prompts, making it simple and user-friendly.

Zhiying also released 'WorldSim-Drive,' the industry's largest generative driving dataset. Using the 'Kaiwu' world model, Zhiying has produced over 1 million clips of generative data tailored for mass production, covering a wide array of scene types: more than 50 weather and lighting conditions, 200 types of traffic signs, and 300 types of road-connection scenarios. The generated data achieves multi-perspective spatial-temporal consistency, with durations reaching several minutes and 1080p resolution matching real data.

'Zhiying Kaiwu' is the first world model in the industry applied to producing ground-truth training data, with high production efficiency.
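The category counts quoted for 'WorldSim-Drive' imply a large combinatorial label space even before element-level edits. A minimal sketch of that arithmetic, with a hypothetical uniform scene-configuration sampler (category names are placeholders; only the counts come from the article):

```python
import random

# Category counts quoted for the 'WorldSim-Drive' dataset.
N_WEATHER_LIGHTING = 50    # weather and lighting conditions
N_TRAFFIC_SIGNS = 200      # traffic-sign types
N_ROAD_CONNECTIONS = 300   # road-connection scenario types

# Raw label combinations across the three axes alone:
combinations = N_WEATHER_LIGHTING * N_TRAFFIC_SIGNS * N_ROAD_CONNECTIONS
print(combinations)  # 3000000


def sample_scene(rng: random.Random) -> dict:
    """Hypothetical sampler: draw one scene configuration uniformly."""
    return {
        "weather_lighting": rng.randrange(N_WEATHER_LIGHTING),
        "traffic_sign": rng.randrange(N_TRAFFIC_SIGNS),
        "road_connection": rng.randrange(N_ROAD_CONNECTIONS),
    }


print(sample_scene(random.Random(0)))
```

With 3 million possible label combinations and over 1 million released clips, broad coverage of the space is plausible only because generation, unlike road collection, can target configurations on demand.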
Running on a single A100 GPU, 'Zhiying Kaiwu' generates as much data per day as can be collected by 10 dedicated data-collection cars, 100 road-test vehicles, or 500 mass-produced vehicles. Currently, 20% of Zhiying's data is produced through the world model.

At the SenseTime Zhiying booth at WAIC 2025, attendees could also try the industry's first generative world model product platform: through a simple, intuitive interface, they generated scene videos by entering text or selecting scene images, experiencing the industry-leading capabilities behind the assisted driving dataset. In addition, they could interact with the world model in real time, as if 'driving' inside the environment it generates.
Shanghai Autonomous Driving Training Ground Unveiled at WAIC 2025
