Diagnostic error accounts for approximately 10% of patient deaths and for 6–17% of adverse events occurring during hospitalization. With roughly 20 million radiology errors per year and 30,000 practicing radiologists, this averages to just under 700 errors per radiologist annually. Errors in diagnosis have been associated with factors that influence clinical reasoning, including intelligence, knowledge, age, affect, experience, physical state (fatigue), and gender (a male predilection for risk taking). These factors, together with the limited access to radiologic specialists for up to two-thirds of the world's population, make a more urgent case for AI in medical imaging, much of which centers on machine learning.

Operator dependency in radiologic procedures, particularly sonography, has led researchers to develop automated image interpretation techniques similar to those used in histopathology. AI now allows image analysis to be connected with diagnostic outcome in real time, and it has the potential to assist with care, teaching, and the diagnosis of illness. According to the market research firm Tractica, the worldwide market for virtual digital assistants will reach $16 billion by 2021.

As applied to radiomics, the term "machine learning" describes the high-throughput extraction of quantitative imaging features with the intent of creating minable databases from radiological images. It comprises several algorithmic modeling approaches, one of the best described of which is deep learning. This technology is built on deep neural networks, or large logic networks, organized into three basic layers: input, hidden, and output. The input layer processes large amounts of relevant data. The hidden layer tests and compares new data against pre-existing data, classifying and re-classifying in real time, with particular connections weighted by their degree of influence. The output layer uses confidence scores to select the best outcome from among the various predicted outcomes.
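The three-layer architecture described above can be sketched as a minimal feedforward pass. This is a toy illustration only: the layer sizes, random weights, and three hypothetical diagnostic classes are assumptions for demonstration, not details from any actual radiomics system.

```python
import math
import random

random.seed(0)

def softmax(scores):
    # Convert raw output scores into confidence scores that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def layer(inputs, weights, activation):
    # One fully connected layer: each neuron applies a weighted sum of
    # every input (each weight is a "degree of influence"), then activates.
    return [activation(sum(w * x for w, x in zip(row, inputs))) for row in weights]

relu = lambda z: max(0.0, z)
identity = lambda z: z

# Input layer: a feature vector extracted from an image (8 toy features).
features = [random.gauss(0, 1) for _ in range(8)]

# Hidden layer: 16 neurons with randomly initialized weighted connections.
W_hidden = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
hidden = layer(features, W_hidden, relu)

# Output layer: raw scores over 3 hypothetical diagnostic classes,
# converted to confidence scores via softmax.
W_out = [[random.gauss(0, 1) for _ in range(16)] for _ in range(3)]
confidences = softmax(layer(hidden, W_out, identity))

# The predicted outcome is the class with the highest confidence.
prediction = confidences.index(max(confidences))
```

In a trained network the weights would be learned from labeled imaging data rather than drawn at random; the structure of the forward pass, however, is the same.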
Principles of AI permeate many disciplines, such as the financial sector, where deep learning helps identify signals across millions of data points. In that context, AI reports on behavior, and its output must therefore remain open to interpretation when atypical behavior patterns appear.

Unsupervised feature extraction draws on images and clinical narrative text, allowing high-throughput analysis of clinical data. The steps in this process include disease detection, lesion segmentation, diagnosis, treatment selection, response assessment via repeat imaging, and the use of patient data to generate clinical predictions about potential outcomes. Feedforward neural networks have been used to analyze the prognosis of bladder cancer recurrence from histopathology, and many urologists have applied this technology with favorable results to the interpretation of renal and prostatic ultrasound.

Looking to the future, AI's continued development will likely follow Satya Nadella's three phases for the many technological breakthroughs that preceded it: first, invention and design of the technology itself; second, retrofitting (e.g., engineers receive new training, and traditional radiologic equipment is redesigned and rebuilt); third, navigation of the dissonance, distortion, and dislocation, in which challenging novel questions are raised. These may include what the physician's function will be when radiomics enables the detection of trends in a particular illness, or when a computer can detect lesions unseen by the radiologist. Many agree that AI should augment, rather than replace, human ability. Along with the considerations discussed above, its deployment must incorporate appropriate protections for privacy, transparency (sufficient for accountability), and security.
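The imaging-analysis steps listed earlier (detection, segmentation, diagnosis, treatment selection, response assessment) can be pictured as a chain of processing stages. The sketch below is purely illustrative: every function name, decision rule, and data shape is a hypothetical stand-in, not a clinical algorithm.

```python
# A hypothetical end-to-end imaging pipeline mirroring the listed steps.
# "Images" here are flat lists of pixel intensities in [0, 1].

def detect_disease(image):
    # Step 1: flag suspicious regions (stub: simple intensity threshold).
    return [i for i, px in enumerate(image) if px > 0.8]

def segment_lesions(image, regions):
    # Step 2: group flagged pixels into lesions (stub: one lesion per region).
    return [{"location": r, "intensity": image[r]} for r in regions]

def diagnose(lesions):
    # Step 3: map lesion findings to a diagnosis (stub: any lesion is suspicious).
    return "suspicious" if lesions else "normal"

def select_treatment(diagnosis):
    # Step 4: choose a management plan from the diagnosis.
    return {"suspicious": "biopsy", "normal": "routine follow-up"}[diagnosis]

def assess_response(before, after):
    # Step 5: compare lesion counts across repeat imaging.
    return "responding" if len(after) < len(before) else "stable"

baseline = [0.1, 0.9, 0.85, 0.2]
followup = [0.1, 0.9, 0.2, 0.2]

lesions_before = segment_lesions(baseline, detect_disease(baseline))
lesions_after = segment_lesions(followup, detect_disease(followup))
plan = select_treatment(diagnose(lesions_before))
response = assess_response(lesions_before, lesions_after)
```

In a real radiomics system each stage would be a learned model (e.g., a segmentation network) rather than a threshold rule, but the flow of data from detection through outcome prediction follows the same shape.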