Comparative Analysis of the wav2vec 2.0 Feature Extractor
Conference: Speech Communication - 15th ITG Conference
20-22 September 2023 in Aachen
doi:10.30420/456164025
Proceedings: ITG-Fb. 312: Speech Communication
Pages: 5
Language: English
Type: PDF
Authors:
Vieting, Peter (Machine Learning and Human Language Technology, RWTH Aachen University, Germany)
Schlueter, Ralf; Ney, Hermann (Machine Learning and Human Language Technology, RWTH Aachen University, Germany & AppTek GmbH, Germany)
Abstract:
Automatic speech recognition (ASR) systems typically use handcrafted feature extraction pipelines. To avoid their inherent information loss and to achieve more consistent modeling from speech to transcribed text, neural raw waveform feature extractors (FEs) are an appealing approach. The wav2vec 2.0 model, which has recently gained great popularity, likewise uses a convolutional FE that operates directly on the speech waveform. However, this FE has not yet been studied extensively in the literature. In this work, we study its capability to replace standard feature extraction methods in a connectionist temporal classification (CTC) ASR model and compare it to an alternative neural FE. We show that both are competitive with traditional FEs on the LibriSpeech benchmark and analyze the effect of the individual components. Furthermore, we analyze the learned filters and show that the most important information for the ASR system is obtained by a set of bandpass filters.
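The convolutional FE discussed in the abstract downsamples the raw waveform through a stack of strided 1-D convolutions. As a minimal sketch, assuming the layer configuration published with the original wav2vec 2.0 model (seven layers with kernel sizes 10, 3, 3, 3, 3, 2, 2 and strides 5, 2, 2, 2, 2, 2, 2; these numbers are not restated in the abstract itself), one can compute the resulting frame rate and receptive field:

```python
# Hedged sketch: geometry of a strided 1-D conv stack like the
# wav2vec 2.0 feature extractor, operating on 16 kHz raw audio.
# Kernel sizes and strides below are the published wav2vec 2.0
# defaults, assumed here for illustration.

KERNELS = (10, 3, 3, 3, 3, 2, 2)  # kernel size per conv layer
STRIDES = (5, 2, 2, 2, 2, 2, 2)   # stride per conv layer
SAMPLE_RATE = 16_000              # Hz, as used for LibriSpeech audio

def stack_geometry(kernels, strides):
    """Return (total_stride, receptive_field) in input samples."""
    receptive_field, jump = 1, 1
    for k, s in zip(kernels, strides):
        receptive_field += (k - 1) * jump  # each layer widens the window
        jump *= s                          # and multiplies the hop size
    return jump, receptive_field

total_stride, receptive_field = stack_geometry(KERNELS, STRIDES)
print(f"hop: {total_stride} samples "
      f"= {1000 * total_stride / SAMPLE_RATE:.0f} ms")
print(f"window: {receptive_field} samples "
      f"= {1000 * receptive_field / SAMPLE_RATE:.0f} ms")
# -> hop: 320 samples = 20 ms; window: 400 samples = 25 ms
```

With this configuration the FE emits one feature vector every 20 ms over a 25 ms window, i.e. a frame rate comparable to standard handcrafted front-ends such as log-mel filterbanks, which is what makes it a drop-in replacement in a CTC ASR pipeline.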