A Comparison of EMG-to-Speech Conversion for Isolated and Continuous Speech
Conference: Speech Communication - 13. ITG-Fachtagung Sprachkommunikation
10/10/2018 - 10/12/2018 at Oldenburg, Germany
Proceedings: Speech Communication
Pages: 5
Language: English
Type: PDF
Authors:
Diener, Lorenz; Bredehoeft, Sebastian; Schultz, Tanja (Cognitive Systems Lab, University of Bremen, Bremen, Germany)
Abstract:
This paper presents initial results of EMG-to-Speech conversion on our new EMG-to-Speech corpus. The corpus consists of parallel facial array sEMG and read audible speech signals recorded from multiple speakers. It contains different styles of utterances – continuous sentences, isolated words, and isolated consonant-vowel combinations – which allows us to evaluate the performance of EMG-to-Speech conversion both when converting each style directly and when training a system on one style to convert another. We find that our system handles isolated-word and consonant-vowel utterances better than continuous speech. We also find that it is possible to use a model trained on one style to convert utterances of another; however, performance suffers compared to training within the target style, especially when going from isolated to continuous speech.