Improvement of Weak Coupling Between Questions and Answers for Machine Reading Comprehension
Conference: ICMLCA 2021 - 2nd International Conference on Machine Learning and Computer Application
12/17/2021 - 12/19/2021 at Shenyang, China
Proceedings: ICMLCA 2021
Pages: 5
Language: English
Type: PDF
Authors:
Zhou, Enji; Zou, Hang; Xiao, Yuhang (Wuhan Research Institute of Posts and Telecommunications; Nanjing Fenghuo Tiandi Communication Technology Co., Ltd, Nanjing, Jiangsu, China)
Liao, Wenjian (Nanjing Fenghuo Tiandi Communication Technology Co., Ltd, Nanjing, Jiangsu, China)
Abstract:
Machine reading comprehension (MRC) requires a machine to answer questions based on given paragraphs. In recent years, pre-trained language models have made outstanding progress on many tasks, including machine reading comprehension. Building on a pre-trained language model, we propose an extractive reading comprehension model based on contrastive learning. The model consists of two parts: a question-answer encoder and a reading comprehension model. We extract question-answer text pairs, use contrastive learning to build a representation learning system, and map questions and answers into the same semantic space to train the question-answer encoder and judge their coupling. The reading comprehension model contains two sub-models, an ordinary reading model and a re-reading model, which draw on how humans read: if the model encounters a question it cannot answer decisively, it re-reads the paragraph to gather more detail. The re-reading model targets weakly coupled question-answer pairs, matches them with an explicit attention mechanism, and further explores the inner connection between the question and the paragraph. We conducted experiments on the SQuAD 2.0 dataset and obtained significant improvements. Experimental results show that our model clearly outperforms the traditional BiDAF model and the BERT baseline.
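The abstract does not give implementation details for the question-answer encoder, so the following is only a minimal sketch of the contrastive idea it describes: questions and answers are mapped into one semantic space and matching pairs are pulled together. The in-batch-negative (InfoNCE-style) objective, the GRU encoder, and all sizes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn.functional as F
from torch import nn

class QAEncoder(nn.Module):
    """Hypothetical encoder mapping questions and answers into a shared space."""
    def __init__(self, vocab_size=30522, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, h = self.encoder(x)                 # final hidden state as the sequence vector
        return F.normalize(h.squeeze(0), dim=-1)

def contrastive_loss(q_vec, a_vec, temperature=0.05):
    """In-batch negatives: each question's true answer is the positive,
    every other answer in the batch is a negative."""
    logits = q_vec @ a_vec.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(q_vec.size(0))      # diagonal entries are the matches
    return F.cross_entropy(logits, targets)

# Toy usage: random token ids stand in for tokenized question/answer pairs.
enc = QAEncoder()
questions = torch.randint(0, 30522, (8, 20))
answers = torch.randint(0, 30522, (8, 12))
loss = contrastive_loss(enc(questions), enc(answers))
loss.backward()
```

At inference time, the cosine similarity between a question vector and an answer vector can serve as the coupling score that decides whether the re-reading model is invoked.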
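Likewise, the abstract does not specify how the explicit attention in the re-reading model is computed. The sketch below shows one plausible cross-attention reading of it: paragraph tokens attend over question tokens to form a question-aware paragraph representation, from which answer start and end positions are scored. The fusion by concatenation and the linear start/end heads are assumptions, not the published method.

```python
import torch
import torch.nn.functional as F
from torch import nn

class ExplicitAttentionMatcher(nn.Module):
    """Hypothetical re-reading step: cross-attention between paragraph and question."""
    def __init__(self, hidden=256):
        super().__init__()
        self.start_head = nn.Linear(2 * hidden, 1)
        self.end_head = nn.Linear(2 * hidden, 1)

    def forward(self, paragraph, question):
        # paragraph: (B, Lp, H), question: (B, Lq, H) contextual token vectors
        scores = torch.bmm(paragraph, question.transpose(1, 2))  # (B, Lp, Lq)
        attn = F.softmax(scores, dim=-1)                         # paragraph-to-question attention
        q_aware = torch.bmm(attn, question)                      # (B, Lp, H)
        fused = torch.cat([paragraph, q_aware], dim=-1)          # (B, Lp, 2H)
        start_logits = self.start_head(fused).squeeze(-1)        # (B, Lp)
        end_logits = self.end_head(fused).squeeze(-1)            # (B, Lp)
        return start_logits, end_logits

# Toy usage with random tensors standing in for pre-trained LM outputs.
matcher = ExplicitAttentionMatcher()
para = torch.randn(2, 50, 256)
ques = torch.randn(2, 12, 256)
start, end = matcher(para, ques)
```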