Vision-based Autonomous Trajectory Drifting using Deep Reinforcement Learning

Conference: AmEC 2024 – Automotive meets Electronics & Control – 14th GMM Symposium
03/14/2024 – 03/15/2024 in Dortmund, Germany

Proceedings: GMM-Fb. 108: AmEC 2024

Pages: 6
Language: English
Type: PDF

Authors:
Domberg, Fabian; Barkow, Ben; Schildbach, Georg (Institute for Electrical Engineering in Medicine, Universität zu Lübeck, Germany)

Abstract:
We present a method for training an autonomous vehicle to drift along different trajectories. Unlike previous work, we use only visual input from a front-facing camera. Because it is difficult to train end-to-end policies to achieve such complex behavior from image input alone, we train the parts of our neural network separately. We use supervised learning to train a CNN-based feature estimator that predicts the vehicle state, which is then fed into a fully connected network trained with reinforcement learning. This staged training allows for better interpretability and more reliable convergence than end-to-end approaches, which must optimize over the much larger combined search space. While the final trained controller shows some undesirable oscillations during drifting and overly cautious driving, it demonstrates the feasibility of drifting using only visual inputs.
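
To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the setup the abstract describes: a CNN state estimator trained by supervised regression, whose frozen output feeds a fully connected policy that would then be trained with reinforcement learning. All layer sizes, the state and action dimensions, and the dummy training data are assumptions for illustration, not details taken from the paper.

    import torch
    import torch.nn as nn

    class StateEstimator(nn.Module):
        # CNN that regresses a low-dimensional vehicle state from a camera image.
        def __init__(self, state_dim: int = 6):  # state_dim is an assumed value
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, state_dim)

        def forward(self, image):
            return self.head(self.features(image))

    class DriftPolicy(nn.Module):
        # Fully connected policy: estimated state -> e.g. steering/throttle in [-1, 1].
        def __init__(self, state_dim: int = 6, action_dim: int = 2):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, action_dim), nn.Tanh(),
            )

        def forward(self, state):
            return self.mlp(state)

    # Stage 1: supervised regression of the state estimator against
    # ground-truth vehicle states (e.g. from a simulator); dummy data here.
    estimator = StateEstimator()
    optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)
    images = torch.randn(32, 3, 96, 96)   # dummy camera batch
    states = torch.randn(32, 6)           # dummy ground-truth states
    loss = nn.functional.mse_loss(estimator(images), states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Stage 2: freeze the estimator and train only the policy with RL.
    # Any standard policy-gradient / actor-critic algorithm fits here;
    # a single forward pass stands in for the RL update loop.
    for p in estimator.parameters():
        p.requires_grad_(False)
    policy = DriftPolicy()
    with torch.no_grad():
        est_state = estimator(images)
    actions = policy(est_state)           # would be executed in the environment

The design choice mirrored in this sketch is that the two networks never share gradients: the estimator's supervised targets anchor the intermediate representation, which keeps the RL stage's search space small and, per the abstract, improves interpretability and convergence compared to end-to-end training.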