Publication | Closed Access
SparseFusion: Fusing Multi-Modal Sparse Representations for Multi-Sensor 3D Object Detection
Citations: 92
References: 34
Year: 2023
Venue: Unknown
Topics: Engineering, Machine Learning, Depth Map, 3D Computer Vision, Image Analysis, Data Science, Pattern Recognition, Multimodal Sensor Fusion, Computational Geometry, Camera Modalities, Machine Vision, Object Detection, Computer Science, Deep Learning, 3D Object Recognition, Multi-modality Candidates, Computer Vision, 3D Vision, Modality-specific Detectors, Scene Modeling
Abstract

By identifying four important components of existing LiDAR-camera 3D object detection methods (LiDAR candidates, camera candidates, transformation, and fusion outputs), we observe that all existing methods either find dense candidates or yield dense representations of scenes. However, given that objects occupy only a small part of a scene, finding dense candidates and generating dense representations is noisy and inefficient. We propose SparseFusion, a novel multi-sensor 3D detection method that exclusively uses sparse candidates and sparse representations. Specifically, SparseFusion utilizes the outputs of parallel detectors in the LiDAR and camera modalities as sparse candidates for fusion. We transform the camera candidates into the LiDAR coordinate space by disentangling the object representations. We then fuse the multi-modality candidates in a unified 3D space with a lightweight self-attention module. To mitigate negative transfer between modalities, we propose novel semantic and geometric cross-modality transfer modules that are applied prior to the modality-specific detectors. SparseFusion achieves state-of-the-art performance on the nuScenes benchmark while also running at the fastest speed, even outperforming methods with stronger backbones. We perform extensive experiments to demonstrate the effectiveness and efficiency of our modules and overall method pipeline. Our code will be made publicly available at https://github.com/yichen928/SparseFusion.
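To make the fusion step concrete, below is a minimal PyTorch sketch of fusing sparse per-modality candidates with a single lightweight self-attention layer. The class name, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class SparseCandidateFusion(nn.Module):
    """Sketch of fusing sparse LiDAR and camera candidates with
    self-attention. Names and shapes are hypothetical, not the
    SparseFusion reference implementation."""

    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        # One lightweight self-attention layer applied over the
        # concatenated candidate sets from both modalities.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, lidar_feats, camera_feats):
        # lidar_feats:  (B, N_l, C) candidate features from the LiDAR detector
        # camera_feats: (B, N_c, C) camera candidates already transformed
        #               into the LiDAR coordinate space
        candidates = torch.cat([lidar_feats, camera_feats], dim=1)  # (B, N_l + N_c, C)
        fused, _ = self.attn(candidates, candidates, candidates)
        # Residual connection followed by layer normalization.
        return self.norm(candidates + fused)

# Toy usage: 200 sparse candidates per modality, 256-dim features.
fusion = SparseCandidateFusion()
lidar = torch.randn(2, 200, 256)
camera = torch.randn(2, 200, 256)
out = fusion(lidar, camera)  # (2, 400, 256) fused candidate features
```

Because attention here runs over a few hundred candidates rather than a dense scene representation, its cost stays small, which mirrors the efficiency argument the abstract makes for sparse candidates.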