Yang Song, Alek Kolcz, and C. Lee Giles
Email spam has become a major problem for Internet users and providers. One major obstacle to its eradication is that potential solutions need to ensure a very low false-positive rate (FPR), which tends to be difficult in practice. We address the problem of low-FPR classification in the context of Naive Bayes, which represents one of the most popular machine learning models applied in the spam filtering domain. Drawing on recent extensions, we propose a new term weight aggregation function, which leads to markedly better results than the standard alternatives. We identify short instances as ones with disproportionately poor performance and counter this behavior with a collaborative-filtering-based feature augmentation. Finally, we propose a tree-based classifier cascade for which decision thresholds of the leaf nodes are jointly optimized for best overall performance. These improvements, both individually and in aggregate, lead to substantially better detection rates at high precision when compared to some of the best variants of Naive Bayes proposed to date.
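The abstract centers on Naive Bayes spam filtering with decision thresholds tuned for a low false-positive rate. As a minimal illustrative sketch (not the paper's actual method), a multinomial Naive Bayes filter with Laplace smoothing and an adjustable log-odds threshold might look like the following; all function names and the toy data are assumptions for illustration:

```python
import math
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    """Train a multinomial Naive Bayes model with Laplace (add-alpha) smoothing.
    docs: list of token lists; labels: parallel list of class names."""
    classes = sorted(set(labels))
    token_counts = {c: Counter() for c in classes}
    doc_counts = Counter(labels)
    for tokens, y in zip(docs, labels):
        token_counts[y].update(tokens)
    vocab = {t for c in classes for t in token_counts[c]}
    model = {"log_prior": {}, "log_lik": {}, "log_default": {}}
    for c in classes:
        model["log_prior"][c] = math.log(doc_counts[c] / len(labels))
        denom = sum(token_counts[c].values()) + alpha * len(vocab)
        model["log_lik"][c] = {
            t: math.log((token_counts[c][t] + alpha) / denom) for t in vocab
        }
        # Smoothed probability assigned to tokens unseen in class c.
        model["log_default"][c] = math.log(alpha / denom)
    return model

def log_odds(model, tokens, pos="spam", neg="ham"):
    """Posterior log-odds of the positive (spam) class for a token list."""
    scores = {}
    for c in (pos, neg):
        s = model["log_prior"][c]
        for t in tokens:
            s += model["log_lik"][c].get(t, model["log_default"][c])
        scores[c] = s
    return scores[pos] - scores[neg]

def classify(model, tokens, threshold=0.0):
    """Raising the threshold trades detection rate for a lower FPR."""
    return "spam" if log_odds(model, tokens) > threshold else "ham"
```

A toy usage: training on two spam and two ham token lists, `classify(model, ["free", "cash"])` returns `"spam"` at the default threshold, while a much higher threshold (e.g. `10.0`) suppresses the spam verdict, illustrating the precision/recall trade-off the abstract optimizes.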
Published in: Journal of Software: Practice and Experience (SPE)
Copyright © 2007 Wiley. All rights reserved.