ABSTRACT

CONTENTS

5.1 Introduction
5.2 Background
5.3 Trust Management Model
    5.3.1 Satisfaction Mapping
    5.3.2 Dirichlet-Based Model
    5.3.3 Evaluating the Trustworthiness of a Peer
5.4 Test Message Exchange Rate and Scalability of Our System
5.5 Robustness against Common Threats
    5.5.1 Newcomer Attacks
    5.5.2 Betrayal Attacks
    5.5.3 Collusion Attacks
    5.5.4 Inconsistency Attacks
5.6 Simulations and Experimental Results
    5.6.1 Simulation Setting
    5.6.2 Modeling the Expertise Level of a Peer
    5.6.3 Deception Models
    5.6.4 Trust Values and Confidence Levels for Honest Peers
    5.6.5 Trust Values for Dishonest Peers
    5.6.6 Robustness of Our Trust Model
    5.6.7 Scalability of Our Trust Model
    5.6.8 Efficiency of Our Trust Model
5.7 Conclusions and Future Work

In the previous chapter we presented the architecture design of a peer-to-peer-based intrusion detection network (IDN). In this chapter we focus on the design of the trust component. Trust management is critical because it distinguishes malicious peers from honest ones and thereby improves intrusion detection accuracy. It is a central component of the IDN architecture: most other components, including resource management, acquaintance management, and collaborative intrusion detection, rely on its input. In an IDN, a malicious (or malfunctioning) IDS can send false intrusion assessments or useless information to degrade the performance of other IDSs in the collaboration network. If several nodes are controlled by the same adversary, they can easily collude to send false intrusion assessments. Moreover, IDSs may have different levels of expertise in intrusion assessment, so the quality of their information varies. To protect an IDN from malicious attacks, as well as to find expert IDSs to consult for intrusion assessment, it is important to evaluate the trustworthiness of participating IDSs. Because the trust model itself may also be the target of malicious attacks, robustness is a desired feature of the trust management scheme in collaborative intrusion detection networks.
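To make the idea of evaluating a peer's trustworthiness concrete before the formal treatment in Section 5.3, the following is a minimal illustrative sketch of a Dirichlet-style trust estimate. It assumes past interactions with a peer are mapped to one of k discrete satisfaction levels (the chapter's actual satisfaction mapping, priors, and weighting are defined in Sections 5.3.1–5.3.3; the function name and the uniform prior here are illustrative assumptions, not the chapter's exact formulation):

```python
def dirichlet_trust(counts, prior=1.0):
    """Illustrative Dirichlet-based trust estimate (not the chapter's exact model).

    counts[i] = number of past interactions with this peer rated at
    satisfaction level i, where level i carries satisfaction value
    i / (k - 1) in [0, 1].  `prior` is a symmetric Dirichlet prior.
    """
    k = len(counts)
    total = sum(counts) + k * prior
    # Posterior mean probability of each satisfaction level
    probs = [(c + prior) / total for c in counts]
    # Trust value = expected satisfaction under the posterior
    return sum(p * (i / (k - 1)) for i, p in enumerate(probs))
```

With no observations the estimate falls back to the neutral prior mean (0.5 for a symmetric prior), and it moves toward high or low trust as consistently satisfying or unsatisfying interactions accumulate; the amount of accumulated evidence also naturally supports a confidence level, a point developed in Section 5.3.3.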