DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation

CASIA¹,  GigaAI²
*Equal Contribution

DriveDreamer-2 can produce multi-view driving videos based on user descriptions: "On a rainy day, there is a car cutting in."

DriveDreamer-2 demonstrates powerful capabilities in generating multi-view driving videos. The generation quality of DriveDreamer-2 surpasses other state-of-the-art methods and effectively enhances downstream tasks.

Abstract

World models have demonstrated superiority in autonomous driving, particularly in the generation of multi-view driving videos. However, significant challenges remain in generating customized driving videos. In this paper, we propose DriveDreamer-2, which builds upon the framework of DriveDreamer and incorporates a Large Language Model (LLM) to generate user-defined driving videos. Specifically, an LLM interface is first incorporated to convert a user's query into agent trajectories. Subsequently, an HDMap adhering to traffic regulations is generated based on the trajectories. Finally, we propose the Unified Multi-View Model to enhance temporal and spatial coherence in the generated driving videos. DriveDreamer-2 is the first world model to generate customized driving videos; it can generate uncommon driving scenarios (e.g., vehicles abruptly cutting in) in a user-friendly manner. Moreover, experimental results demonstrate that the generated videos enhance the training of driving perception methods (e.g., 3D detection and tracking). Furthermore, the video generation quality of DriveDreamer-2 surpasses other state-of-the-art methods, achieving FID and FVD scores of 11.2 and 55.7, relative improvements of 30% and 50%, respectively.

Method

The overall framework of DriveDreamer-2 involves initially generating agent trajectories according to the user query, followed by producing a realistic HDMap, and finally generating multi-view driving videos.
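The three-stage pipeline above can be sketched as a simple data flow. This is an illustrative mock-up, not the authors' actual API: every function name and data format below is a hypothetical placeholder standing in for the LLM interface, the HDMap generator, and the Unified Multi-View Model.

```python
# Hypothetical sketch of the DriveDreamer-2 pipeline. All names and
# data structures are illustrative placeholders, not the real system.

def llm_to_trajectories(query: str) -> list[dict]:
    """Stage 1 (placeholder): an LLM interface converts the user's text
    query into agent trajectories, here dummy 2D waypoints per agent."""
    return [{"agent": "cut_in_car",
             "waypoints": [(0.0, 3.5), (5.0, 1.0), (10.0, 0.0)]}]

def trajectories_to_hdmap(trajs: list[dict]) -> dict:
    """Stage 2 (placeholder): generate an HDMap consistent with traffic
    regulations around the trajectories, here dummy lane polylines."""
    return {"lanes": [[(float(x), 0.0) for x in range(0, 15, 5)]],
            "agents": trajs}

def hdmap_to_videos(hdmap: dict, n_views: int = 6) -> list[str]:
    """Stage 3 (placeholder): the Unified Multi-View Model renders
    temporally and spatially coherent multi-view driving videos."""
    return [f"view_{i}.mp4" for i in range(n_views)]

query = "On a rainy day, there is a car cutting in."
videos = hdmap_to_videos(trajectories_to_hdmap(llm_to_trajectories(query)))
print(videos)  # six per-camera video identifiers
```

The key design point this sketch reflects is that structural conditions (trajectories, HDMap) are generated first from language, so the video model never has to interpret free-form text directly.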

Results with Generated Structural Information

"Daytime / rainy day / at night, a car abruptly cutting in from the right rear of ego-car."


"Rainy day, car abruptly cutting in from the left rear of ego-car. (long video)"


"Daytime, the ego-car changes lanes to the right side. (long video)"


"Rainy day, a person crosses the road in the front of the ego-car. (long video)"


Results with nuScenes Structural Information

"Daytime / rainy day / at night, ego-car drives through urban street, surrounded by a flow of vehicles on both sides."


"Daytime / rainy day / at night, a bus is positioned to the left front of the ego-car, with a pedestrian near the bus."


"Rainy day, the windshield wipers of the truck are continuously clearing the windshield."


"Rainy day, the ego-car makes a left turn at the traffic signal, with vehicles behind proceeding straight through the intersection. (long video)"


"Daytime, the ego-car drives straight through the traffic light, with a truck situated to the left front and pedestrians crossing on the right side. (long video)"

BibTeX

If you use our work in your research, please cite:

@article{zhao2024drive,
  title={DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation},
  author={Zhao, Guosheng and Wang, Xiaofeng and Zhu, Zheng and Chen, Xinze and Huang, Guan and Bao, Xiaoyi and Wang, Xingang},
  journal={arXiv preprint arXiv:2403.06845},
  year={2024}
}