ABSTRACT

Loss functions are a crucial component of training a deep learning model: they quantify how well the trained model solves its appointed task. Based on this quantification of performance, the model's parameters are adjusted during training to further improve its ability to solve the task. The loss function thus partly dictates the characteristics the model learns and, when properly tailored to a task, can be used to train models that achieve impressive clinical results. This chapter first presents the defining properties of a loss function and the specific data preparation that certain loss functions require. A series of loss functions, their applications, and their motivations are then discussed. Because not all loss functions are robust for all segmentation tasks, specific problems such as class imbalance and sparsely labeled data are addressed, and solutions to both issues, in the form of modified loss functions, are examined. In total, twelve loss functions are presented, including cross entropy, Dice, Hausdorff, focal, sensitivity-specificity, and Tversky. Finally, the chapter concludes with a review of how to choose an initial loss function, real-time performance evaluation strategies, and troubleshooting methods.