One of the new challenges of our current digital age is processing and analyzing vast amounts of data (Gaber, 2012; Gaber et al., 2005; L'Heureux et al., 2017). Data are collected via many different devices, such as smartphones and wearables, and in various contexts, for example at home, while navigating, or in a hospital. The common characteristic of these data sets is that they are often too large to be processed at once and/or accumulative in nature, with new data points continually augmenting the data set. Even though computational power is increasing exponentially, the storage, processing, and analysis of such data remain challenging (Ippel et al., 2016a, 2019; Yang et al., 2017). Storing all data can be expensive, and fitting complex models can be time consuming. Even fitting ‘simple’ models like linear regressions can become too time consuming on large or, even worse, growing data sets. Moreover, the methods typically used to analyze such large data sets are often black boxes, making their results difficult to explain (Rudin, 2019).
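A minimal sketch of the accumulative-data problem described above: rather than storing every observation and recomputing a statistic from scratch each time a new data point arrives, some statistics can be updated online, in constant time per observation and without retaining earlier points. The function name and the example data are illustrative, not taken from the literature cited here.

```python
def update_mean(mean, n, x):
    """Return the updated sample mean and count after observing a new point x."""
    n += 1
    mean += (x - mean) / n  # incremental update; no earlier points needed
    return mean, n

# Process a growing stream one observation at a time.
mean, n = 0.0, 0
for x in [2.0, 4.0, 6.0, 8.0]:
    mean, n = update_mean(mean, n, x)

print(mean)  # 5.0
```

Each update touches only the current point and two running quantities, so the cost per new observation stays constant even as the data set grows.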