ABSTRACT

In diagnostic medical imaging, the patient's body orientation, or the view in which the scan was acquired, is often not recorded explicitly when images are stored in digital archive systems such as the Picture Archiving and Communication System (PACS). Scans may be acquired in several orientations: the anterior (frontal) view, the posterior (back) view, and the lateral (side) views, namely left lateral and right lateral. However, computer-aided diagnosis systems often lack this piece of header information for an image. Identifying the orientation of an image is required for qualitative and quantitative analysis in many diagnostic applications. If patient body orientations are not recorded, or are documented with an incorrect label, automated indexing may become inconsistent and may lead to misinterpretation by both computers and radiologists. The objective of this chapter is to investigate this problem and to develop a learnable neural model that accurately identifies the view positions of different body parts, such as the spine, cranium, abdomen, arm, and foot, available in the ImageCLEF 2009 dataset. Four convolutional neural networks, ResNet18, AlexNet, GoogLeNet, and SqueezeNet, were used as transfer-learning approaches to classify a new set of images according to their orientation. A new model, ViewNet, is also proposed; experimental evaluation showed that it outperformed all the other convolutional neural network (CNN) based models, predicting the orientation label with an accuracy of 85.71% and demonstrating its effectiveness in diagnostic applications.
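
As an illustration of the transfer-learning setup summarized above, the following is a minimal sketch in PyTorch, assuming four orientation classes (anterior, posterior, left lateral, right lateral); the class count, learning rate, and the choice to freeze the backbone are assumptions for illustration, not details taken from the chapter.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed: anterior, posterior, left lateral, right lateral

# Load a ResNet18 pretrained on ImageNet (one of the four backbones named above)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final fully connected layer with a new orientation-classification head
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the pretrained backbone so only the new head is trained,
# a standard transfer-learning choice (assumption, not from the chapter)
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # illustrative rate
criterion = nn.CrossEntropyLoss()
```

The same head-replacement pattern applies to the other backbones mentioned (AlexNet, GoogLeNet, SqueezeNet), with only the name of the final layer differing between architectures.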