ABSTRACT

Computer vision research stands at an exciting moment, with imaging sensors now ubiquitous in DSLRs, cellphones, and laptops. Together with the advent of large-scale computing power and global internet connectivity, these factors have eased the restrictions that limited amounts of data once imposed on the statistical learning of visual models, turning the so-called curse of dimensionality into a blessing of dimensionality [30]. Nonetheless, this new paradigm brings its own set of challenges: first, scalable algorithmic solutions are needed to harness data that cannot be stored or processed on a single computer; second, models that enforce Occam's razor's notion of simplicity become of utmost importance, both to preserve model interpretability and to avoid the risk of overfitting.