ABSTRACT

This work presents a holistic picture of sketch-based interactions, from two-dimensional sketching in physical and digital ink to sketch interactions in the three-dimensional (3D) domain. It presents the technologies and algorithms needed to interpret and use sketches, and discusses open issues and limitations of existing techniques. Traditional approaches to sketch simplification focus on selecting features that correspond to the psycho-visual constructs governing human understanding of a sketch; in deep-learning approaches, these constructs are instead learned through training. This has the advantage that the learned convolutional kernels are no longer restricted to the few constructs chosen for implementation, but can be adapted from the training examples provided. One important consideration in deep-learning approaches is their strong dependence on the datasets available for training.
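The contrast between hand-designed features and learned convolutional kernels can be illustrated with a minimal sketch in Python. The convolution routine, the Sobel filter as a stand-in for a hand-chosen psycho-visual feature, and the randomly initialised "learned" kernel are all illustrative assumptions, not any specific method from the literature; in a real deep-learning pipeline the kernel values would be updated by gradient descent on a training set of sketches.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Traditional approach: a hand-designed filter encoding one chosen
# construct (here, vertical-edge response via a Sobel kernel).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Deep-learning approach: kernel values are free parameters,
# initialised randomly and then fitted to training sketches
# (the training loop itself is omitted here).
rng = np.random.default_rng(0)
learned_kernel = rng.normal(0.0, 0.1, size=(3, 3))

# A toy 8x8 "sketch": a single vertical stroke in column 4.
sketch = np.zeros((8, 8))
sketch[:, 4] = 1.0

edge_response = conv2d(sketch, sobel_x)  # strong response along the stroke
```

The hand-designed kernel responds only to the construct it was built for, whereas a learned kernel's response is determined by whatever regularities the training data contain, which is precisely why the choice of training dataset matters so much.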