ABSTRACT

The anomaly detection task is to recognize the presence of an unusual (and potentially hazardous) state within the behaviors or activities of a system, with respect to some model of "normal" behavior that may be either hard-coded or learned from observation. We focus here on learning models of normalcy at the user behavioral level. An anomaly detection agent faces many learning problems, including learning from streams of temporal data, learning from instances of a single class, and adapting to a dynamically changing concept. The domain is further complicated by the trusted insider problem (distinguishing innocuous from malicious behavior changes on the part of a trusted user) and the hostile training problem (avoiding learning the behavioral patterns of a hostile user who is attempting to deceive the agent).
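To make the learning problems above concrete, the following is a minimal illustrative sketch, not the paper's method: a one-class detector over a stream of scalar "activity" measurements that adapts its model of normalcy via exponentially weighted statistics (the decay rate, the 3-sigma threshold, and the update-only-on-normal rule are all assumptions introduced for illustration).

```python
class StreamingAnomalyDetector:
    """Toy one-class, drift-adaptive detector (illustrative assumption,
    not the user-behavior model described in the abstract)."""

    def __init__(self, alpha=0.05, threshold=3.0):
        self.alpha = alpha          # decay rate: larger values adapt faster to concept drift
        self.threshold = threshold  # flag points beyond this many standard deviations
        self.mean = None            # running estimate of "normal" behavior
        self.var = 1.0              # running variance estimate

    def observe(self, x):
        """Score x against the current model of normal, then update the model."""
        if self.mean is None:       # first observation seeds the model of normalcy
            self.mean = x
            return False
        std = self.var ** 0.5
        anomalous = abs(x - self.mean) > self.threshold * std
        # Update only on non-anomalous points: a crude guard against the
        # hostile training problem (avoid learning from suspect behavior).
        if not anomalous:
            diff = x - self.mean
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous
```

A short usage example: feeding values near 10 trains the model, after which a far-off value is flagged.

```python
detector = StreamingAnomalyDetector()
for x in [10.0, 10.5, 9.8, 10.2, 10.1]:
    detector.observe(x)          # normal traffic, model adapts
print(detector.observe(100.0))   # unusual activity is flagged
```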