Chapter
Thinking Deeply: Neural Networks and Deep Learning
ABSTRACT
An artificial neural network is effectively a computing system that combines its inputs, typically in a nonlinear manner, to calculate outputs that can be compared to expected outcomes. This chapter explains the general architecture of a neural network in terms of layers and nodes, covers forward and backward propagation, and discusses convolutional and recurrent neural networks. The neuron doctrine, proposed around 1888 by the Spanish Nobel Prize winner Santiago Ramón y Cajal, is the basis of modern neuroscience. An important feature of the architecture of our neural networks is that the nodes are arranged in layers. A convolutional neural network is a type of artificial neural network that relies on convolution to learn patterns in the training data provided. The unfolded version of a recurrent neural network clearly depicts the importance of the sequence of inputs and outputs during training.
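As a rough illustration of the layered architecture and the forward and backward propagation the chapter covers, the following NumPy sketch trains a tiny network with one hidden layer. The XOR task, layer sizes, learning rate, and loss function are illustrative assumptions chosen for brevity; they are not taken from the chapter itself.

import numpy as np

# Illustrative sketch only: the XOR task, layer sizes, and learning rate
# below are assumptions for demonstration, not details from the chapter.

rng = np.random.default_rng(0)

# Training data: inputs and the expected outcomes they should produce.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Nodes arranged in layers: 2 inputs -> 4 hidden nodes -> 1 output node.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10_000):
    # Forward propagation: inputs are combined nonlinearly, layer by layer.
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    y_hat = sigmoid(h @ W2 + b2)    # network outputs

    # Compare outputs to expected outcomes (mean squared error).
    loss = np.mean((y_hat - y) ** 2)

    # Backward propagation: the chain rule yields gradients for each layer.
    d_out = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_hidden = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0, keepdims=True)

    # Gradient-descent update of the weights and biases.
    W1 -= learning_rate * dW1; b1 -= learning_rate * db1
    W2 -= learning_rate * dW2; b2 -= learning_rate * db2

print("final loss:", loss)
print("predictions:", y_hat.round(2).ravel())

Running the sketch drives the loss toward zero and the predictions toward the expected XOR outputs, which is the essence of the forward-then-backward training loop described in the abstract.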