Single Channel Source Separation in the Wild – Conversational Speech in Realistic Environments
Conference: Speech Communication - 15th ITG Conference
20.09.2023-22.09.2023 in Aachen
doi:10.30420/456164018
Proceedings: ITG-Fb. 312: Speech Communication
Pages: 5
Language: English
Type: PDF
Authors:
Berger, Emil; Schuppler, Barbara; Pernkopf, Franz; Hagmueller, Martin (Signal Processing and Speech Communication Laboratory, Graz University of Technology, Graz, Austria)
Abstract:
Recent progress in Single Channel Source Separation (SCSS) using deep neural networks has led to impressive performance gains, but also to larger models that require tremendous amounts of data. This demand is typically met with artificially composed speech and noise mixtures that do not capture the real-life characteristics of conversations taking place in noisy environments. This paper introduces a new dataset containing task-oriented dialogues spoken in a realistic environment and presents experimental results for two SCSS architectures - the Conv-TasNet and the transformer-based MossFormer. Overall, we observe a severe performance drop of up to 4.3 dB (SI-SDR improvement) for the 8 kHz variant of the Conv-TasNet. For same-sex speaker pairs, the gap is even larger, up to 6 dB. Only the model using a 16 kHz sample rate performs at a comparable level for mixed-sex speaker pairs. Our findings illustrate the need for realistic data in both training and evaluation.