ABSTRACT

Feature extraction is a prominent step in any image analytics study, and in health informatics especially, biomedical images must be represented as feature vectors before further analysis. Training deep learning models from scratch, however, incurs high computational overhead on large image datasets. Transfer learning offers a compromise; hence, this study evaluates the performance of five deep image embedders on X-ray and computed tomography images for feature extraction towards predicting Covid-19 cases. The performance of three predictive models is evaluated with the F1 measure, the receiver operating characteristic curve, and multidimensional scaling to ascertain their precision on Covid-19 health informatics. Experimental results show that VGG16 and Painters attain the highest weighted-average precision of 0.997 and 0.998, respectively, with the SVM and MLP models, while the F1 measure ranks Painters as the best embedder for both SVM and MLP. Multidimensional scaling shows that the feature vectors extracted by Inception v3 and Painters are loosely networked, whereas the data points generated by SqueezeNet, VGG16, and VGG19 are closely knit, with a few relatively isolated data points.
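
The sketch below is a minimal illustration of the embedder-plus-classifier pipeline described above, assuming a Keras VGG16 backbone with average pooling as the image embedder and scikit-learn's SVC and MLPClassifier as two of the predictive models; the random input batch, labels, and hyperparameters are placeholders rather than the study's actual data or configuration.

```python
# Illustrative sketch: pretrained VGG16 as a deep image embedder feeding
# SVM and MLP classifiers, evaluated with weighted precision and F1.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, f1_score
from sklearn.manifold import MDS

# Pretrained VGG16 without its classification head acts as the embedder;
# global average pooling yields one 512-dimensional vector per image.
embedder = VGG16(weights="imagenet", include_top=False, pooling="avg")

# Placeholder batch standing in for 224x224 chest X-ray / CT images and
# binary Covid-19 labels; real images would be loaded from disk instead.
rng = np.random.default_rng(0)
images = rng.random((40, 224, 224, 3)) * 255.0
labels = rng.integers(0, 2, size=40)

# Extract feature vectors (the "deep image embedding" step).
X = embedder.predict(preprocess_input(images), verbose=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)

# Fit two predictive models on the embedded features and report metrics.
for name, clf in [("SVM", SVC()), ("MLP", MLPClassifier(max_iter=500))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name,
          "weighted precision:", precision_score(y_test, pred, average="weighted"),
          "weighted F1:", f1_score(y_test, pred, average="weighted"))

# Project the feature vectors to 2-D with multidimensional scaling to
# inspect how tightly or loosely the embedded data points cluster.
coords = MDS(n_components=2, random_state=0).fit_transform(X)
```

In this setup the embedder could be swapped for any of the other backbones compared in the study (e.g. VGG19, Inception v3, SqueezeNet), keeping the downstream classifiers and evaluation metrics unchanged.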