Publication | Closed Access

V2V4Real: A Real-World Large-Scale Dataset for Vehicle-to-Vehicle Cooperative Perception

Citations: 184
References: 41
Year: 2023

TLDR

Autonomous vehicle perception is limited by occlusions and short sensing range, hindering Level 5 autonomy. V2V cooperative perception promises to overcome these limitations, but progress has been stalled by the absence of a real-world dataset. We introduce V2V4Real, the first large-scale, real-world, multi-modal dataset for cooperative perception research. Collected by two sensor-equipped vehicles over 410 km of driving, V2V4Real provides 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes, and HD maps, and defines three tasks (cooperative 3D detection, cooperative 3D tracking, and Sim2Real domain adaptation) together with comprehensive benchmarks of recent algorithms.

Abstract

Modern perception systems of autonomous vehicles are known to be sensitive to occlusions and to lack long-range perceiving capability, which has been one of the key bottlenecks preventing Level 5 autonomy. Recent research has demonstrated that Vehicle-to-Vehicle (V2V) cooperative perception systems have great potential to revolutionize the autonomous driving industry. However, the lack of a real-world dataset hinders the progress of this field. To facilitate the development of cooperative perception, we present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception. The data is collected by two vehicles equipped with multi-modal sensors driving together through diverse scenarios. Our V2V4Real dataset covers a driving area of 410 km and comprises 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HD maps that cover all the driving routes. V2V4Real introduces three perception tasks: cooperative 3D object detection, cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative perception. We provide comprehensive benchmarks of recent cooperative perception algorithms on the three tasks. The V2V4Real dataset can be found at research.seas.ucla.edu/mobility-lab/v2v4real/.
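To make the cooperative setting concrete, the sketch below shows one common baseline idea: "early fusion," where the cooperating vehicle's LiDAR sweep is transformed into the ego vehicle's coordinate frame and merged before running an ordinary single-vehicle 3D detector. This is a minimal illustration, not the official V2V4Real API; the VehicleFrame fields and the fuse_point_clouds helper are hypothetical, assuming each frame provides a LiDAR sweep and a vehicle-to-world pose from localization.

```python
# Minimal sketch of point-cloud-level ("early") fusion between two vehicles.
# All names here are illustrative assumptions, not the dataset's actual API.
from dataclasses import dataclass
import numpy as np


@dataclass
class VehicleFrame:
    lidar_points: np.ndarray     # (N, 4) x, y, z, intensity in the vehicle's own frame
    world_T_vehicle: np.ndarray  # (4, 4) rigid vehicle-to-world pose


def fuse_point_clouds(ego: VehicleFrame, coop: VehicleFrame) -> np.ndarray:
    """Project the cooperating vehicle's LiDAR into the ego frame and concatenate."""
    # Transform mapping cooperator coordinates into the ego coordinate frame.
    ego_T_coop = np.linalg.inv(ego.world_T_vehicle) @ coop.world_T_vehicle

    # Apply the rigid transform to the cooperator's points (homogeneous coordinates).
    xyz = coop.lidar_points[:, :3]
    xyz_h = np.hstack([xyz, np.ones((xyz.shape[0], 1))])
    xyz_in_ego = (ego_T_coop @ xyz_h.T).T[:, :3]

    coop_in_ego = np.hstack([xyz_in_ego, coop.lidar_points[:, 3:4]])
    # The merged cloud can then be fed to any single-vehicle 3D detector.
    return np.vstack([ego.lidar_points, coop_in_ego])
```

Cooperative detection methods generally differ in what the vehicles exchange: raw points (early fusion, as above), learned intermediate features, or final detected boxes (late fusion), trading communication bandwidth against accuracy.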
