ABSTRACT

In recent years, many stream learning algorithms have been developed. Most of them learn decision models that continuously evolve over time, run in resource-aware environments, and detect and react to changes in the environment generating the data. One important issue, not yet convincingly addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. In this chapter we present a general framework for assessing the quality of streaming learning algorithms. We defend the use of predictive sequential (prequential) error estimates over a sliding window to assess the performance of learning algorithms that learn from open-ended data streams in non-stationary environments. This chapter studies the convergence properties of these estimates and methods to comparatively assess algorithm performance.
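To make the central idea concrete, the following is a minimal sketch (not taken from the chapter) of a prequential error estimate computed over a sliding window: each incoming example is first used to test the current model and only then to train it, and the error is averaged over the most recent window of examples. The class name, the window size, and the `predict`/`learn` interface of the model are illustrative assumptions.

```python
from collections import deque


class PrequentialWindowError:
    """Sliding-window prequential 0/1 error for a test-then-train stream."""

    def __init__(self, window_size=1000):
        # Keep only the losses of the most recent `window_size` examples.
        self.losses = deque(maxlen=window_size)

    def update(self, y_true, y_pred):
        # Record 1 for a misclassification, 0 for a correct prediction.
        self.losses.append(0 if y_pred == y_true else 1)

    def estimate(self):
        # Mean loss over the current window (0.0 before any example is seen).
        return sum(self.losses) / len(self.losses) if self.losses else 0.0


# Hypothetical usage with any incremental classifier exposing predict/learn:
#
# metric = PrequentialWindowError(window_size=500)
# for x, y in stream:                # open-ended data stream
#     y_hat = model.predict(x)       # test first ...
#     metric.update(y, y_hat)        # ... record the loss ...
#     model.learn(x, y)              # ... then train on the same example
#     current_error = metric.estimate()
```

Restricting the estimate to a sliding window, rather than averaging over the whole stream, is what lets the measured error track a model that evolves under non-stationary data.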