ABSTRACT

Machine learning (ML) algorithms typically search for an optimal representation of the data using a feedback signal in the form of an objective function. Most ML algorithms, however, can apply only one or two layers of data transformation to learn the output representation. Like other ML algorithms, deep neural networks (DNNs) learn by mapping features to targets through a process of simple data transformations and feedback signals; DNNs, however, place an emphasis on learning successive layers of increasingly meaningful representations. Feedforward DNNs are composed of densely connected layers, where the inputs influence each successive layer, which in turn influences the final output layer. Hyperparameter tuning for DNNs tends to be more involved than for other ML models because of the number of hyperparameters that can, and often should, be assessed and the dependencies among them. Consequently, training DNNs often requires more time and attention than other ML algorithms.
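
To make the feedforward architecture described above concrete, the following is a minimal sketch of a densely connected DNN, written in Python with the Keras API (an assumption; the source may use a different framework). The toy data, layer sizes, and training settings are illustrative choices, not values from this work.

```python
# Minimal sketch of a feedforward DNN: successive densely connected
# layers, each feeding the next, ending in an output layer.
# All sizes and settings below are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1,000 samples with 20 features and binary targets.
rng = np.random.default_rng(42)
X = rng.random((1000, 20)).astype("float32")
y = rng.integers(0, 2, size=(1000,)).astype("float32")

# Each dense layer learns a successive representation of the inputs;
# the final sigmoid layer produces the output.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # final output layer
])

# The objective (loss) function supplies the feedback signal that
# guides the search for a useful representation.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```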