CarFormer: Self-Driving with Learned Object-Centric Representations


Shadi Hamdan
Fatma Güney


Department of Computer Engineering, Koc University
KUIS AI Center
ECCV 2024



Method overview figure


TL;DR: We introduce CarFormer, an auto-regressive transformer model that can both drive and act as a world model by predicting future states. We show that a learned, self-supervised, object-centric representation for self-driving based on slot attention contains the information necessary for driving, such as the speed and orientation of vehicles.
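To make this concrete, below is a minimal PyTorch sketch of an autoregressive transformer that consumes per-timestep slot tokens and outputs both a driving action and a prediction of the next step's slots. This is not the paper's implementation: all module names, dimensions, slot counts, heads, and the action parameterization are illustrative assumptions.

import torch
import torch.nn as nn

class TinyCarFormer(nn.Module):
    """Sketch of an autoregressive transformer over slot tokens that outputs
    a driving action and the next step's slots. Sizes, heads, and the action
    parameterization are illustrative, not the paper's configuration."""

    def __init__(self, slot_dim=128, n_slots=7, d_model=256,
                 n_layers=4, n_heads=8, action_dim=2):
        super().__init__()
        self.embed = nn.Linear(slot_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, action_dim)  # e.g., steer, throttle
        self.slot_head = nn.Linear(d_model, slot_dim)      # next-step slot prediction

    def forward(self, slots):
        # slots: (B, T, S, slot_dim) -> one token per slot per timestep
        B, T, S, D = slots.shape
        x = self.embed(slots.reshape(B, T * S, D))
        # causal mask: tokens cannot attend to later positions
        mask = torch.triu(torch.ones(T * S, T * S, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=mask).reshape(B, T, S, -1)
        action = self.action_head(h[:, -1].mean(dim=1))  # drive from the last step
        next_slots = self.slot_head(h[:, -1])            # act as a world model
        return action, next_slots

model = TinyCarFormer()
slots = torch.randn(2, 6, 7, 128)      # a batch of 6-step slot sequences
action, next_slots = model(slots)
print(action.shape, next_slots.shape)  # torch.Size([2, 2]) torch.Size([2, 7, 128])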



Abstract

The choice of representation plays a key role in self-driving. Bird’s eye view (BEV) representations have shown remarkable performance in recent years. In this paper, we propose to learn object-centric representations in BEV to distill a complex scene into more actionable information for self-driving. We first learn to place objects into slots with a slot attention model on BEV sequences. Based on these object-centric representations, we then train a transformer to learn to drive as well as reason about the future of other vehicles. We find that object-centric slot representations outperform both scene-level and object-level approaches that use the exact attributes of objects. Slot representations naturally incorporate information about objects from their spatial and temporal context, such as position, heading, and speed, without explicitly providing it. Our model with slots achieves a higher completion rate on the provided routes and, consequently, a higher driving score, with lower variance across multiple runs, affirming slots as a reliable alternative among object-centric approaches. Additionally, we validate our model’s performance as a world model through forecasting experiments, demonstrating its capability to accurately predict future slot representations.
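For readers unfamiliar with slot attention, the following is a compact, simplified sketch of the iterative slot attention update (Locatello et al., 2020) applied to flattened BEV features. The residual MLP of the original formulation is omitted for brevity, and all hyperparameters (number of slots, feature dimension, iterations) are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Simplified Slot Attention (Locatello et al., 2020) over flattened
    BEV features. The residual MLP of the original is omitted, and all
    hyperparameters are illustrative, not the paper's configuration."""

    def __init__(self, n_slots=7, dim=128, n_iters=3):
        super().__init__()
        self.n_slots, self.n_iters, self.scale = n_slots, n_iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_sigma = nn.Parameter(torch.rand(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (B, N, dim), N flattened BEV grid cells
        B, _, D = feats.shape
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        # initialize slots from a learned Gaussian
        slots = self.slots_mu + self.slots_sigma.abs() * torch.randn(B, self.n_slots, D)
        for _ in range(self.n_iters):
            q = self.to_q(self.norm_slots(slots))
            # slots compete for grid cells: softmax over the slot axis
            attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean over cells
            updates = attn @ v                            # (B, n_slots, dim)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).reshape(B, self.n_slots, D)
        return slots

bev = torch.randn(2, 64 * 64, 128)  # e.g., a 64x64 BEV grid with 128-dim features
print(SlotAttention()(bev).shape)   # torch.Size([2, 7, 128])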




What is the best representation of the state?

Ways of representing the input


Quantitative Results

Longest6 results

Quantitative results on Longest6. Results are averaged over 3 evaluations on the Longest6 benchmark. CarFormer, using self-supervised slot representations, outperforms other models in terms of driving score.



Forecasting results

Quantitative results on forecasting. These results show the ability of CarFormer to predict the future BEV representation at T = t+1 and T = t+4. CarFormer outperforms the simple copy baseline.
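For intuition, the copy baseline simply reuses the current slots as the prediction at every future horizon; a useful world model should achieve lower error than this. A hypothetical sketch, with random tensors standing in for slot sequences:

import torch

def mse(pred, target):
    """Mean squared error between predicted and ground-truth future slots."""
    return torch.mean((pred - target) ** 2).item()

seq = torch.randn(10, 7, 128)  # hypothetical slot sequence: (T, n_slots, slot_dim)
t = 5
for horizon in (1, 4):
    copy_pred = seq[t]  # copy baseline: predict that the future equals the present
    print(f"copy baseline MSE at t+{horizon}: {mse(copy_pred, seq[t + horizon]):.3f}")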


Paper

CarFormer: Self-Driving with Learned Object-Centric Representations

Shadi Hamdan and Fatma Güney

ECCV 2024

@InProceedings{Hamdan2024ECCV,
  title     = {{CarFormer}: Self-Driving with Learned Object-Centric Representations},
  author    = {Shadi Hamdan and Fatma Güney},
  booktitle = {ECCV},
  year      = {2024}
}