Secure Personalization: Towards Trustworthy Recommender Systems*

Publicly accessible adaptive systems such as recommender systems present a security problem. Attackers, who cannot be readily distinguished from ordinary users, may introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. The ability to understand, identify, and defeat such “bias injection” attacks will have significant implications for a variety of adaptive information systems that rely on users’ input for learning user or group profiles. Many such systems have open components through which a malicious user or an automated agent can affect the overall system behavior. Among the most widely used adaptive systems are Web personalization and recommender systems, which are often deployed in e-commerce. Users have come to trust such systems to reduce the burden of navigating large information spaces and product catalogs. Preserving this trust is important both for users and for site owners, and it depends on the perception of recommender systems as objective, unbiased, and accurate.

Recent research has begun to examine the vulnerabilities and robustness of different recommendation techniques, such as collaborative filtering, in the face of bias injection attacks. In this presentation, I will outline some of the major issues in building secure recommender systems, concentrating in particular on the modeling of attacks, their impact on various recommendation algorithms, and methods for the automatic detection of attack profiles. I will introduce several new attack models not previously studied and present simulation-based evaluation results showing which attack models are most successful against common recommendation techniques. The evaluation criteria consider both the overall impact on the system’s ability to make predictions and generate recommendations, and the degree of knowledge about the system an attacker needs to mount a realistic and successful attack. Our study, to date, shows that standard collaborative filtering algorithms are highly vulnerable to specific attack models, but that hybrid algorithms, which integrate semantic knowledge about items with user profiles, may provide a higher degree of robustness.
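To make the threat concrete, the following sketch shows a generic “push” attack against plain user-based collaborative filtering. This is an illustrative toy, not the specific attack models or algorithms from the talk: the rating data, the three-profile attack size, and the use of Pearson correlation with mean-deviation weighting are all assumptions chosen for brevity.

```python
import math

def pearson(u, v):
    """Pearson correlation over the items co-rated by users u and v."""
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = math.sqrt(sum((u[i] - mu) ** 2 for i in common)) * \
          math.sqrt(sum((v[i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

def predict(profiles, user, item):
    """User-based CF prediction: the active user's mean rating plus the
    similarity-weighted mean deviations of neighbors who rated the item."""
    target = profiles[user]
    mean_u = sum(target.values()) / len(target)
    num = den = 0.0
    for name, other in profiles.items():
        if name == user or item not in other:
            continue
        w = pearson(target, other)
        if w <= 0:          # keep only positively correlated neighbors
            continue
        mean_v = sum(other.values()) / len(other)
        num += w * (other[item] - mean_v)
        den += abs(w)
    return mean_u + num / den if den else mean_u

# Toy database (hypothetical ratings on a 1-5 scale); "i4" is the
# target item the attacker wants pushed, unrated by active user "u3".
profiles = {
    "u1": {"i1": 4, "i2": 3, "i3": 5, "i4": 2},
    "u2": {"i1": 4, "i2": 2, "i3": 4, "i4": 2},
    "u3": {"i1": 5, "i2": 3, "i3": 4},
}

before = predict(profiles, "u3", "i4")

# Bias injection: each attack profile rates filler items near typical
# values (so it correlates with genuine users) and gives the target
# item the maximum rating.
for k in range(3):
    profiles[f"attack{k}"] = {"i1": 4, "i2": 3, "i3": 4, "i4": 5}

after = predict(profiles, "u3", "i4")
print(f"prediction for i4: {before:.2f} -> {after:.2f}")
```

Because the injected profiles look like plausible neighbors, they acquire positive similarity weights and drag the prediction for the target item upward; here the predicted rating for `i4` rises well above its pre-attack value. The same mechanism underlies the more refined attack models the abstract refers to.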

*This research is supported in part by the National Science Foundation Cyber Trust program under Grant IIS-0430303.

Speaker Details

Dr. Bamshad Mobasher is an Associate Professor of Computer Science and the director of the Center for Web Intelligence at DePaul University in Chicago. His research areas include Web mining, Web personalization, predictive user modeling, agent-based systems, and information retrieval. He has published more than 80 papers and articles in these areas. As the director of the Center for Web Intelligence, Dr. Mobasher directs research in Web mining and personalization and oversees several joint projects with industry. He regularly conducts seminars and delivers presentations to a variety of companies and organizations involved with Web usage and e-commerce data analysis. Dr. Mobasher has served as an organizer and on the program committees of numerous conferences and workshops in the areas of Web data mining, Artificial Intelligence, and Autonomous Agents. His most recent activities include an edited volume, “Intelligent Techniques for Web Personalization”, published by Springer, culminating from a series of successful workshops on the same topic. He is also the guest editor for an upcoming special issue of ACM Transactions on Internet Technologies on Web personalization. More detailed information, as well as electronic versions of many publications, is available at: http://maya.cs.depaul.edu/~mobasher/.

Date:
Speakers:
Bamshad Mobasher
Affiliation:
DePaul University