ABSTRACT

Commonly deployed rule-based detection engines use a very basic form of correlation, based primarily on streams of information. Firewall logs, IPS/IDS alerts, and SNMP messages from antivirus servers are all sent to a single point, which watches for two pieces of data arriving within a certain temporal proximity to each other according to a rule. If two such pieces of data come through the stream within the specified interval, the rule fires and an action takes place as a result. That action (e.g., checking whether a buffer overflow directed at a device was actually successful) is largely manual in nature, performed by humans at the speed of phone calls and the rate at which an e-mailed trouble ticket can be read and acted upon. As a result, in many large environments it can take up to a week to determine whether a security incident is real or merely a false positive, and in a week an attacker can do a great deal to further strengthen his or her position in a targeted environment, as many organizations have already begun to discover. The goal of the approach listed here is to
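The stream-based correlation described above can be sketched as follows. This is an illustrative example only, not code from any deployed engine; the event names (`ids_alert`, `firewall_drop`), the `ProximityRule` class, and the 60-second window are all hypothetical choices made for the sketch.

```python
# Minimal sketch of a temporal-proximity correlation rule: the rule fires
# when two distinct event types both arrive within `window` seconds.
# All names here are illustrative assumptions, not a real product's API.
from collections import deque


class ProximityRule:
    """Fires when event_a and event_b both arrive within `window` seconds."""

    def __init__(self, event_a, event_b, window):
        self.event_a, self.event_b, self.window = event_a, event_b, window
        self.recent = deque()  # (timestamp, name) pairs still inside the window

    def process(self, timestamp, name):
        """Feed one event from the stream; return True if the rule fires."""
        if name not in (self.event_a, self.event_b):
            return False  # unrelated event type; the rule ignores it
        # Evict events that have aged out of the correlation window.
        while self.recent and timestamp - self.recent[0][0] > self.window:
            self.recent.popleft()
        # Fire only if the *other* event type is still inside the window.
        fired = any(n != name for _, n in self.recent)
        self.recent.append((timestamp, name))
        return fired


# Example: correlate an IDS alert with a firewall drop within 60 seconds.
rule = ProximityRule("ids_alert", "firewall_drop", window=60)
rule.process(0, "firewall_drop")   # only one half of the pair so far
rule.process(30, "ids_alert")      # both events within 60 s: rule fires
```

The point of the sketch is how little the engine actually decides: firing the rule only flags the pair, and everything downstream of that flag (such as verifying that the attack succeeded) is the manual work the abstract describes.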

To examine the nature of the problem, it is important to first establish the difference between the two types of rules being processed. The underlying rule engines define the nature of the rules, which in turn define the limitations and challenges of event processing that uses those rules. Significant changes to the rule engines also incur changes to the architecture, which has both advantages and challenges that will be discussed later.