Radar-Based Gesture Recognition Using Gaussian Mixture Variational Autoencoder
Conference: ICMIM 2024 - 7th IEEE MTT Conference
April 16–17, 2024, in Boppard
Proceedings: ITG-Fb. 315: ICMIM 2024
Pages: 4 | Language: English | Type: PDF
Authors:
Janjua, Muhammad Ghufran; Kaiser, Kevin; Arsalan, Muhammad; Mauro, Gianfranco; Schoenfeldt, Stephan; Issakov, Vadim
Abstract:
Radar-based gesture recognition serves as an intuitive alternative to conventional click- and touch-based human-machine interfaces. Current deep learning methods in this domain rely heavily on extensive labeled datasets for effective training; however, labeling radar data demands a substantial amount of time, labor, and specialized expertise. In this paper, we introduce a gesture-recognition solution based on the Gaussian Mixture Variational Autoencoder (GMVAE). It is tailored for both unsupervised scenarios, where labeled data is entirely absent, and semi-supervised scenarios, where only a limited amount of labeled data is available. Our approach processes raw radar data to extract range, velocity, and angle features over time, which are subsequently fed into the GMVAE model for gesture classification. Our experiments demonstrate promising outcomes in both settings. In the unsupervised case, without using any labels for training, the proposed model achieves an accuracy of 83.34 % on the test dataset. In the semi-supervised setting, our approach performs on par with supervised methods, achieving 98.12 % test accuracy using only 50 % of the labels for training.
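The abstract describes a two-stage pipeline: radar preprocessing into range, velocity, and angle features over time, followed by GMVAE-based classification. The paper's exact preprocessing is not detailed here, so the following is only a minimal sketch of conventional FFT-based feature extraction for an FMCW-style raw data cube; the cube shape [frames, chirps, rx, samples] and the angle-FFT padding length are illustrative assumptions, not values from the paper.

```python
# Illustrative FMCW-style feature extraction (assumed shapes; the paper's
# actual pipeline is not specified in the abstract).
import numpy as np

def extract_features(raw: np.ndarray):
    """raw: complex IQ cube of shape [frames, chirps, rx, samples]."""
    # Range FFT over fast-time ADC samples (windowed to suppress sidelobes).
    rng = np.fft.fft(raw * np.hanning(raw.shape[-1]), axis=-1)
    # Doppler FFT over slow time (chirps) -> velocity axis.
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)
    # Angle FFT across RX antennas, zero-padded (padding length assumed).
    ang = np.fft.fftshift(np.fft.fft(dop, n=32, axis=2), axes=2)
    power = np.abs(ang) ** 2
    # Collapse to per-frame range, velocity, and angle profiles over time.
    range_t = power.sum(axis=(1, 2))  # [frames, samples]
    vel_t = power.sum(axis=(2, 3))    # [frames, chirps]
    ang_t = power.sum(axis=(1, 3))    # [frames, 32]
    return range_t, vel_t, ang_t
```

The GMVAE itself is likewise only named in the abstract. Below is a minimal PyTorch sketch of the general technique: the latent prior is a mixture of K Gaussians with one component per gesture, the categorical posterior q(y|x) doubles as the classifier, the ELBO marginalizes the Gaussian KL over components, and a cross-entropy term on q(y|x) for labeled samples yields the semi-supervised variant. The class count K, layer widths, and latent size are placeholders, not the paper's architecture.

```python
# Minimal GMVAE sketch (illustrative; not the paper's exact model).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D_IN, D_Z = 8, 256, 16  # assumed: gesture classes, feature dim, latent dim

class GMVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(D_IN, 128), nn.ReLU())
        self.cls = nn.Linear(128, K)           # q(y|x): component/gesture logits
        self.mu = nn.Linear(128 + K, D_Z)      # q(z|x,y) mean
        self.logvar = nn.Linear(128 + K, D_Z)  # q(z|x,y) log-variance
        self.prior_mu = nn.Parameter(torch.randn(K, D_Z))      # p(z|y) means
        self.prior_logvar = nn.Parameter(torch.zeros(K, D_Z))  # p(z|y) log-vars
        self.dec = nn.Sequential(nn.Linear(D_Z, 128), nn.ReLU(),
                                 nn.Linear(128, D_IN))

    def elbo(self, x):
        h = self.backbone(x)
        qy = F.softmax(self.cls(h), dim=-1)  # [B, K]
        # KL(q(y|x) || Uniform(K)) discourages collapsing onto one component.
        loss = (qy * (qy.clamp_min(1e-8).log() + math.log(K))).sum(-1)
        # Marginalize the Gaussian terms over components, weighted by q(y|x).
        for k in range(K):
            y = F.one_hot(torch.full((x.size(0),), k, dtype=torch.long), K).float()
            mu = self.mu(torch.cat([h, y], -1))
            logvar = self.logvar(torch.cat([h, y], -1))
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            recon = F.mse_loss(self.dec(z), x, reduction="none").sum(-1)
            # KL between diagonal Gaussians q(z|x,y=k) and prior p(z|y=k).
            kl = 0.5 * (self.prior_logvar[k] - logvar - 1
                        + (logvar.exp() + (mu - self.prior_mu[k]) ** 2)
                        / self.prior_logvar[k].exp()).sum(-1)
            loss = loss + qy[:, k] * (recon + kl)
        return loss.mean(), qy  # qy.argmax(-1) is the predicted gesture/cluster

def semi_supervised_step(model, x_unlab, x_lab, y_lab):
    # Unsupervised ELBO on unlabeled data; on labeled data, additionally
    # supervise q(y|x) with a cross-entropy term (weighting assumed equal).
    loss_u, _ = model.elbo(x_unlab)
    loss_l, qy = model.elbo(x_lab)
    return loss_u + loss_l + F.nll_loss(qy.clamp_min(1e-8).log(), y_lab)
```

In the fully unsupervised setting, learned mixture components must still be mapped to gesture labels after training, e.g. by majority vote or Hungarian matching against a small labeled evaluation set; whether the paper uses such a mapping is not stated in the abstract.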