ABSTRACT

This chapter provides an overview of the testing methodology and the experimental results. It explores how concept drift is injected into the dataset and examines the anomaly detection rate in the presence of concept drift, which is difficult to achieve. The datasets used for training and testing were created from trace files received from the University of Calgary project. The framework processes these commands, updating its data structures with the command distribution and a bounded concept-drift variance. It is then ready to process the next set of commands and, upon request, can produce predicted variants based on the concept drift. These produced commands, or new ones, can be used to update the concept drift, providing a constantly evolving command distribution that represents an individual. Sudden changes that do not fit within the calculated concept drift can be flagged as suspicious and therefore possibly representative of an insider threat.
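The mechanism described above can be illustrated with a minimal sketch. This is not the chapter's actual framework; the class name, the L1-distance measure, and the `drift_bound` threshold are all illustrative assumptions standing in for the bounded concept-drift variance described in the text.

```python
from collections import Counter


class DriftTracker:
    """Illustrative sketch (not the chapter's framework): tracks a per-user
    command distribution and flags batches whose change exceeds a bounded
    drift threshold."""

    def __init__(self, drift_bound=0.5):
        # Assumed bound: maximum allowed L1 distance between the stored
        # distribution and a new batch's distribution.
        self.drift_bound = drift_bound
        self.counts = Counter()

    def _distribution(self, counts):
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()} if total else {}

    def distance(self, batch):
        """L1 distance between the stored and the batch's distribution."""
        p = self._distribution(self.counts)
        q = self._distribution(Counter(batch))
        return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

    def process(self, batch):
        """Absorb the batch if it fits within the drift bound (evolving
        distribution); otherwise flag it as a sudden, suspicious change."""
        if self.counts and self.distance(batch) > self.drift_bound:
            return False  # sudden change: possible insider threat
        self.counts.update(batch)  # gradual drift is absorbed
        return True
```

Under these assumptions, batches with a composition similar to the user's history are absorbed and shift the distribution gradually, while a batch of commands far outside the established profile is rejected rather than absorbed.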