Anomaly-based intrusion detection discriminates between malicious and legitimate behaviors by characterizing system normality in terms of particular observable subjects. Because system normality is constructed solely from an observed sample of normally occurring patterns, anomaly detectors invariably suffer from excessive false alerts. Adaptability is therefore a desirable feature that enables an anomaly detector to alleviate, if not eliminate, such annoyance. To achieve it, we can either design self-learning anomaly detectors that capture drifts in system normality or develop postprocessing mechanisms that deal with detector outputs. As the former methodology is usually scenario- and application-specific, in this article we focus on the latter. In particular, our design starts from three key observations: (1) most anomaly detectors are threshold-based and parametric, that is, configurable by a set of parameters; (2) anomaly detectors differ in operational environment and in operational capability, in terms of detection coverage and blind spots; and (3) an intrusive anomaly may leave traces across multiple system layers, incurring different observable events of interest. First, we present a statistical framework to formally characterize and analyze the basic behaviors of anomaly detectors by examining the properties of their operational environments. This framework then serves as the theoretical basis for an adaptive middleware, called M-AID, that optimally integrates a number of observation-specific, parameterizable anomaly detectors. Specifically, M-AID treats these fine-grained anomaly detectors as a whole and casts their collective behaviors in a framework formulated as a Multiagent Partially Observable Markov Decision Process (MPO-MDP).
The generic anomaly detection models of M-AID are thus inferred automatically via a reinforcement learning algorithm that dynamically adjusts the behaviors of the anomaly detectors in accordance with a reward signal, which is defined and quantified by a suite of evaluation metrics. Fundamentally, the distributed and autonomous architecture makes M-AID scalable, dependable, and adaptable, and the reward signal allows security administrators to specify cost factors and take the operational context into account when selecting rational responses. Finally, a host-based prototype of M-AID is developed, along with a comprehensive experimental evaluation and comparative studies.
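To illustrate the flavor of the approach, the following is a minimal, self-contained sketch of a threshold-based parametric anomaly detector whose operating point is selected by a reward-driven learning loop. Everything here is an assumption for exposition (the class names, the cost weights, and the epsilon-greedy bandit update are illustrative), not the paper's actual M-AID algorithm or its MPO-MDP machinery.

```python
import random

class ThresholdDetector:
    """A parametric anomaly detector (illustrative): an observation is
    flagged as anomalous when its anomaly score exceeds the threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def detect(self, score):
        return score > self.threshold

def reward(alerts, labels, c_fp=1.0, c_fn=5.0):
    """Cost-based reward: penalize false positives and, more heavily,
    false negatives -- mirroring how an administrator might weight the
    two error types. The weights c_fp and c_fn are arbitrary here."""
    fp = sum(1 for a, y in zip(alerts, labels) if a and not y)
    fn = sum(1 for a, y in zip(alerts, labels) if not a and y)
    return -(c_fp * fp + c_fn * fn)

def adapt(candidates, batches, episodes=200, eps=0.1, lr=0.2):
    """Epsilon-greedy bandit over candidate thresholds: each episode,
    pick a threshold, score one batch of labeled observations, and move
    that threshold's running value estimate toward the observed reward."""
    q = {t: 0.0 for t in candidates}
    for ep in range(episodes):
        scores, labels = batches[ep % len(batches)]
        t = (random.choice(candidates) if random.random() < eps
             else max(q, key=q.get))
        det = ThresholdDetector(t)
        r = reward([det.detect(s) for s in scores], labels)
        q[t] += lr * (r - q[t])
    return max(q, key=q.get)  # threshold with the best estimated reward

if __name__ == "__main__":
    random.seed(0)
    # Synthetic data: normal scores cluster near 0.3, intrusions near 0.8.
    batches = []
    for _ in range(10):
        data = ([(random.gauss(0.3, 0.1), False) for _ in range(50)]
                + [(random.gauss(0.8, 0.1), True) for _ in range(5)])
        batches.append(([s for s, _ in data], [y for _, y in data]))
    best = adapt([round(0.1 * k, 1) for k in range(1, 10)], batches)
    print("selected threshold:", best)
```

In this toy setting the loop settles on a threshold between the two score clusters, because the cost-weighted reward punishes both drowning in false alerts (threshold too low) and missing intrusions (threshold too high); the reward weights play the role of the administrator-specified cost factors described above.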
ACM Transactions on Autonomous and Adaptive Systems (TAAS) – Association for Computing Machinery
Published: Nov 1, 2009