W. Wu, X. Deng, P. Jiang, S. Wan and Y. Guo, "CrossFuser: Multi-Modal Feature Fusion for End-to-End Autonomous Driving Under Unseen Weather Conditions," in IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 12, pp. 14378-14392, Dec. 2023, doi: 10.1109/TITS.2023.3307589. (CAS Tier 1 journal)

  • Release time: 2024-03-13

  • Journal: IEEE Transactions on Intelligent Transportation Systems

  • Abstract: Multi-modal fusion is a promising approach to boosting autonomous driving performance and has already received a large amount of attention. Meanwhile, to increase driving reliability across distinct scenarios, autonomous driving algorithms must handle weather events unseen in the training dataset, known as the Out-Of-Distribution (OOD) problem. In this paper, we consider both aspects and propose an end-to-end multi-modal domain-enhanced framework, namely CrossFuser, to meet safety-oriented driving requirements. CrossFuser first integrates the image and lidar modalities to generate a robust environmental representation through conjoint mapping, elastic disentanglement, and an attention mechanism. The perception embedding is then used to calculate corresponding waypoints with a waypoint prediction network consisting of Gated Recurrent Units (GRUs). Finally, control commands are computed by low-level control functions. We conduct experiments on the Car Learning to Act (CARLA) driving simulator involving complex weather conditions in urban scenarios; the results show that CrossFuser outperforms the state of the art.
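The GRU-based waypoint decoding mentioned in the abstract can be sketched as an autoregressive rollout: a fused perception embedding initializes the hidden state, and each step predicts a displacement toward the next waypoint. This is a minimal NumPy illustration of that idea, not the paper's implementation; all dimensions, weight initializations, and function names here are assumptions, and the weights are random rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell in NumPy. In the paper the parameters would be
    trained end-to-end; here they are random, for shape illustration only."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        # Stacked gate weights: update (z), reset (r), candidate (n)
        self.W = rng.uniform(-scale, scale, (3 * hidden_dim, input_dim))
        self.U = rng.uniform(-scale, scale, (3 * hidden_dim, hidden_dim))
        self.b = np.zeros(3 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        gx = self.W @ x + self.b
        gh = self.U @ h
        H = self.hidden_dim
        z = sigmoid(gx[:H] + gh[:H])            # update gate
        r = sigmoid(gx[H:2*H] + gh[H:2*H])      # reset gate
        n = np.tanh(gx[2*H:] + r * gh[2*H:])    # candidate state
        return (1 - z) * n + z * h

def predict_waypoints(perception_embedding, num_waypoints=4):
    """Autoregressively roll out future (x, y) waypoints from a fused
    perception embedding -- a hypothetical sketch of GRU waypoint decoding."""
    hidden_dim = perception_embedding.shape[0]
    cell = GRUCell(input_dim=2, hidden_dim=hidden_dim)
    # Linear head mapping hidden state -> (dx, dy) displacement (random here).
    head = np.random.default_rng(1).uniform(-0.1, 0.1, (2, hidden_dim))
    h = perception_embedding        # hidden state initialized from perception
    wp = np.zeros(2)                # start at the ego-vehicle origin
    waypoints = []
    for _ in range(num_waypoints):
        h = cell.step(wp, h)        # condition on the previous waypoint
        wp = wp + head @ h          # predict a displacement and accumulate
        waypoints.append(wp.copy())
    return np.stack(waypoints)

emb = np.random.default_rng(42).standard_normal(64)
wps = predict_waypoints(emb)
print(wps.shape)  # (4, 2): four future (x, y) waypoints
```

Downstream, such waypoints would be consumed by low-level controllers (e.g. lateral and longitudinal control) to produce steering and throttle commands, as the abstract describes.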

  • Note: http://faculty.csu.edu.cn/dengxiaoheng/zh_CN/lwcg/10445/content/49304.htm



  • Attachments:

  • 5-CrossFuser_Multi-Modal_Feature_Fusion_for_End-to-End_Autonomous_Driving_Under_Unseen_Weather_Conditions.pdf   
Technology Support: WCMC  Copyright © 1999-2023, Central South University. All Rights Reserved.