ABSTRACT

Intrusion Detection Systems (IDSs) are an essential part of any security posture in both industry and academia. In general, evaluating an intrusion detection method should consider a variety of factors, including overhead, complexity, interpretability of alarms, upgradeability, and resilience to evasion. To introduce the problem of evaluating classification accuracy for IDSs, it is useful to examine how classification algorithms are evaluated in other domains. Several information assurance tools, such as IDSs, static analysis tools, and anti-virus software, can be modeled as detection algorithms. Detection rate (DR) and false positive rate (FPR) are the most common metrics for evaluating the classification accuracy of an IDS and are often treated as benchmarks in the field of intrusion detection. Evaluating IDSs is nonetheless an inherently difficult problem: many factors affect intrusion detection systems, from the diversity of network traffic to the unpredictable nature of a deployed environment, which can make traditional statistical evaluation methods inappropriate or misleading.
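As a brief illustration of the two metrics named above, the standard definitions compute DR (also called true positive rate, or recall) and FPR from the entries of a binary confusion matrix. The sketch below uses hypothetical counts for illustration; the function and variable names are not taken from any particular IDS evaluation framework.

```python
def detection_rates(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """Compute detection rate and false positive rate from confusion-matrix counts.

    tp: attacks correctly flagged      fn: attacks missed
    fp: benign events flagged          tn: benign events correctly ignored
    """
    dr = tp / (tp + fn)    # fraction of actual attacks that were detected
    fpr = fp / (fp + tn)   # fraction of benign events incorrectly flagged
    return dr, fpr


# Hypothetical evaluation run: 100 attack events and 1000 benign events.
dr, fpr = detection_rates(tp=90, fn=10, fp=20, tn=980)
print(f"DR = {dr:.2f}, FPR = {fpr:.2f}")  # DR = 0.90, FPR = 0.02
```

Note that the same (DR, FPR) pair can summarize very different operational outcomes depending on the base rate of attacks in deployed traffic, which is one reason the abstract cautions that traditional statistical evaluation can mislead.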