ABSTRACT

The present study aims to design a machine vision system using a deep learning algorithm for quality monitoring of iron ores. A total of 53 image samples were used for model calibration and testing: the model was trained on 45 image samples and tested on the remaining 9. Model parameters such as the number of nodes and the number of hidden layers were optimized based on the Root Mean Squared Error (RMSE) values, and the RMSE was lowest for the network architecture with 5 nodes and 3 hidden layers. The performance of the optimized model was evaluated using four indices: RMSE, Normalized Mean Square Error (NMSE), R-squared, and bias, which were obtained as 8.77, 0.0026, 0.87, and −1.14, respectively. The results indicate that the model gives satisfactory performance in predicting the quality of iron ores.
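
The abstract does not include the authors' code, so the following is a minimal sketch of the workflow it describes: a grid search over network width (nodes) and depth (hidden layers) selected by RMSE, followed by evaluation with the four stated indices. The `MLPRegressor` model, the synthetic placeholder data (mirroring only the 45/9 train/test split), the NMSE normalization (MSE divided by the variance of the observations), and the sign convention for bias (predicted minus observed) are all assumptions, not details taken from the paper.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Placeholder for image-derived features and ore-quality labels;
# only the study's 45/9 train/test split is mirrored here.
X_train, y_train = rng.random((45, 16)), rng.random(45) * 100
X_test, y_test = rng.random((9, 16)), rng.random(9) * 100

# Grid search: pick the (nodes, hidden layers) combination with lowest RMSE.
best_rmse, best_arch = np.inf, None
for nodes in (3, 5, 7, 10):
    for layers in (1, 2, 3, 4):
        model = MLPRegressor(hidden_layer_sizes=(nodes,) * layers,
                             max_iter=2000, random_state=0).fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        if rmse < best_rmse:
            best_rmse, best_arch = rmse, (nodes, layers)

# Refit the selected architecture and compute the four evaluation indices.
nodes, layers = best_arch
model = MLPRegressor(hidden_layer_sizes=(nodes,) * layers,
                     max_iter=2000, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
nmse = mean_squared_error(y_test, pred) / np.var(y_test)  # one common normalization; the paper's may differ
r2 = r2_score(y_test, pred)
bias = np.mean(pred - y_test)  # sign convention assumed
print(f"arch={best_arch}, RMSE={rmse:.2f}, NMSE={nmse:.4f}, R2={r2:.2f}, bias={bias:.2f}")
```

With this selection procedure, the reported optimum (5 nodes, 3 hidden layers) would correspond to the grid point minimizing test-set RMSE; on the paper's actual image features the search would be run over feature vectors extracted from the ore images rather than random placeholders.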