Self-Learning and Active-Learning for Electromyography-to-Speech Conversion
Conference: Speech Communication - 15th ITG Conference
09/20/2023 - 09/22/2023 at Aachen
doi:10.30420/456164048
Proceedings: ITG-Fb. 312: Speech Communication
Pages: 5
Language: English
Type: PDF
Authors:
Ren, Zhao; Scheck, Kevin; Schultz, Tanja (Cognitive Systems Lab, University of Bremen, Germany)
Abstract:
Electromyography-to-Speech conversion has demonstrated its potential to synthesise speech from electromyography signals for silent speech interfaces. In this study, we specifically tackle the lack of corresponding speech samples during real-life usage of Electromyography-to-Speech. We propose a framework that combines self-learning and active-learning: it balances automatically annotating electromyography signals with model-predicted speech samples against requesting ground-truth speech targets from the user. The framework is validated in a set of session-independent experiments. The results demonstrate that the proposed framework effectively improves model performance by increasing the size of the training data while lowering the human effort required to generate speech targets. The framework therefore appears promising for real-life Electromyography-to-Speech conversion.
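To illustrate the idea described in the abstract, the following is a minimal sketch of how a combined self-learning/active-learning annotation step could look. It is not the authors' implementation; the names (`model_predict`, `request_speech_from_user`, `confidence_threshold`, `TrainingExample`) and the confidence-based selection rule are assumptions introduced purely for illustration.

```python
# Hypothetical sketch: combining self-learning and active-learning to build
# training pairs from unlabeled EMG recordings. Not the paper's actual code.

from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np


@dataclass
class TrainingExample:
    emg: np.ndarray        # EMG signal or features
    speech: np.ndarray     # target speech features (e.g. a mel spectrogram)
    is_pseudo_label: bool  # True if the target was predicted by the model


def annotate_emg_batch(
    model_predict: Callable[[np.ndarray], Tuple[np.ndarray, float]],
    unlabeled_emg: List[np.ndarray],
    request_speech_from_user: Callable[[np.ndarray], np.ndarray],
    confidence_threshold: float = 0.8,  # assumed selection criterion
) -> List[TrainingExample]:
    """Turn unlabeled EMG recordings into new training pairs.

    model_predict returns (predicted_speech_features, confidence_score).
    High-confidence predictions become pseudo-targets (self-learning);
    low-confidence recordings are routed to the user, who records the
    ground-truth speech target (active-learning), keeping human effort low.
    """
    new_examples: List[TrainingExample] = []
    for emg in unlabeled_emg:
        predicted_speech, confidence = model_predict(emg)
        if confidence >= confidence_threshold:
            # Self-learning: keep the model's own prediction as the target.
            new_examples.append(TrainingExample(emg, predicted_speech, True))
        else:
            # Active-learning: ask the user for a ground-truth speech target.
            ground_truth = request_speech_from_user(emg)
            new_examples.append(TrainingExample(emg, ground_truth, False))
    return new_examples
```

In such a setup, raising `confidence_threshold` shifts the balance toward user-provided targets (more effort, cleaner labels), while lowering it relies more on pseudo-labels; the paper's contribution lies in how this trade-off is balanced, which the sketch only gestures at.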