ABSTRACT

Outlier detection is important for many applications and has recently attracted much attention in the data mining research community. Existing density-based methods identify outliers by computing the neighborhood of every object, which is expensive. In this paper, we present a new density-based method that detects outliers via random sampling. The method reuses neighbor information that has already been obtained to reduce the number of neighborhood queries, making it faster than other density-based approaches. We compare our approach with LOF through theoretical analysis, and experimental results show that it outperforms existing density-based methods in running time.