ABSTRACT

Teaching machines to read natural language documents, comprehend them, and answer questions about them remains an elusive task. Traditional question-answering systems relied on rule-based and keyword-matching algorithms and lacked genuine natural language understanding. Machine reading comprehension (MRC) is a family of reading comprehension models that enables a machine to learn from a given context and infer the answer to a question through language understanding. Neural machine reading comprehension builds such models on advances in deep neural networks and has achieved performance far exceeding that of non-neural, feature-based approaches. This article focuses on MRC span-extraction tasks using Transformer models and additionally presents an overview of MRC tasks, trends, modules, benchmark datasets, implementation details, and empirical results.