ABSTRACT

This chapter presents the basics of natural language processing (NLP) and recurrent neural networks (RNNs). It highlights the structure of RNNs and their ability to process sequential data and model problems with a temporal component. The chapter also surveys real-life applications of NLP and RNNs, including time-series forecasting, text processing, speech recognition, gesture recognition, and sentiment analysis. It explains how modern pretrained language models, such as BERT, GPT, and ELMo, offer advanced capabilities and play a significant role in pushing the boundaries of NLP. The concepts covered in this chapter relate to Case Study IV of Chapter 9, which demonstrates the image-captioning problem using a CNN and an RNN.