ABSTRACT

Prior to the present decade, getting a computer to distinguish objects in images (e.g., dog vs. cat) involved thinking about the observable image features that humans use for this task, carefully coding computer algorithms to characterize these features, and combining the strengths of these characterizations to produce an estimate of the likelihood that the image contains one of the objects. In addition to specific logic for the task at hand, the algorithms must account for artifacts of the imaging process, e.g., noise and irregular illumination, as well as normal variations in the objects themselves. Conventional radiomics is the analogous process of characterizing these “human-engineered” features, e.g., intensity/color distribution, margin sharpness, and image texture, and possibly adding the results of human annotation, yielding a feature vector that mathematically describes the object’s visual characteristics. This chapter describes the process and challenges of deriving these features so that they can be used for object classification and/or to derive associations between the object and clinical data such as survival, response to specific therapy, or molecular characteristics of the object.
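To make this pipeline concrete, the following is a minimal sketch, not taken from the chapter, of assembling such a feature vector for a segmented lesion in a 2-D grayscale image using NumPy, SciPy, and scikit-image. The function name `lesion_feature_vector` and the particular choices of intensity statistics, margin-sharpness proxy, and texture measures are illustrative assumptions, not a standard radiomics API.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops


def lesion_feature_vector(image, mask):
    """Toy conventional-radiomics feature vector (illustrative, not standard)."""
    mask = mask.astype(bool)
    voxels = image[mask].astype(float)

    # First-order features: summary statistics of the intensity distribution.
    intensity = [voxels.mean(), voxels.std(),
                 np.percentile(voxels, 10), np.percentile(voxels, 90)]

    # Margin-sharpness proxy: mean gradient magnitude along the lesion border.
    border = mask ^ ndimage.binary_erosion(mask)
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    margin = [grad[border].mean()]

    # Texture: gray-level co-occurrence matrix over the lesion's bounding box,
    # quantized to 32 gray levels.
    rows, cols = np.nonzero(mask)
    patch = image[rows.min():rows.max() + 1,
                  cols.min():cols.max() + 1].astype(float)
    levels = 32
    bins = np.linspace(patch.min(), patch.max(), levels)
    quantized = (np.digitize(patch, bins) - 1).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop)[0, 0]
               for prop in ("contrast", "homogeneity", "energy", "correlation")]

    return np.array(intensity + margin + texture)
```

In practice, each element of the resulting vector corresponds to one hand-engineered descriptor, and the full vector is what downstream classifiers or association analyses (survival, therapy response, molecular characteristics) consume.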