Vision-based Autonomous Trajectory Drifting using Deep Reinforcement Learning

Conference: AmEC 2024 – Automotive meets Electronics & Control - 14th GMM Symposium
14.03.2024-15.03.2024 in Dortmund, Germany

Proceedings: GMM-Fb. 108: AmEC 2024

Pages: 6
Language: English
Type: PDF

Authors:
Domberg, Fabian; Barkow, Ben; Schildbach, Georg (Institute for Electrical Engineering in Medicine, Universität zu Lübeck, Germany)

Abstract:
We present a method for training an autonomous vehicle to drift along different trajectories. Unlike previous work, we use only visual input from a front-facing camera. Because it is difficult to train end-to-end policies to achieve such complex behavior from image input alone, we train the parts of our neural network separately: supervised learning is used to train a CNN-based feature estimator that predicts the vehicle state, which is then fed into a fully connected network trained with reinforcement learning. This modular training allows for better interpretability and more reliable convergence than end-to-end approaches, which must optimize over the much larger combined search space. While the final trained controller shows some undesirable oscillations during drifting and overly cautious driving, it demonstrates the general feasibility of drifting using only visual input.
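To make the described two-stage setup concrete, below is a minimal PyTorch-style sketch of the pipeline: a CNN feature estimator fitted with supervised learning on labeled (image, state) pairs, then frozen so that only a small fully connected policy remains to be trained with reinforcement learning. All names (FeatureEstimator, Policy, labeled_loader), layer sizes, the state dimension (six values), and the action dimension (two values, e.g. steering and throttle) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FeatureEstimator(nn.Module):
    """CNN mapping a front-facing camera image to an estimated vehicle state."""
    def __init__(self, state_dim: int = 6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, state_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(image))

class Policy(nn.Module):
    """Fully connected network mapping the estimated state to control actions."""
    def __init__(self, state_dim: int = 6, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Stage 1 (supervised): regress the estimator onto ground-truth vehicle states.
estimator = FeatureEstimator()
mse = nn.MSELoss()
opt = torch.optim.Adam(estimator.parameters(), lr=1e-4)
# for image, true_state in labeled_loader:   # hypothetical labeled dataset
#     opt.zero_grad()
#     loss = mse(estimator(image), true_state)
#     loss.backward()
#     opt.step()

# Stage 2 (RL): freeze the estimator; the RL algorithm (e.g. an actor-critic
# method) now only has to search the small state-to-action mapping.
for p in estimator.parameters():
    p.requires_grad_(False)
policy = Policy()
with torch.no_grad():
    state = estimator(torch.randn(1, 3, 128, 128))  # dummy camera frame
action = policy(state)  # steering and throttle command for the simulator
```

Freezing the estimator before the RL stage is what shrinks the search space: the policy optimization never touches the convolutional weights, which is the source of the interpretability and convergence benefits the abstract describes.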