Large-scale Linear and Kernel Classification – Part 1

Linear and kernel methods are important machine learning techniques for data classification. Popular examples include support vector machines (SVM) and logistic regression. We begin with an introduction to the subject, deriving the optimization problems behind these methods from several perspectives. This discussion is useful because the relationship between, for example, SVM and logistic regression is a frequent source of confusion. We then investigate techniques for solving the optimization problems that arise in linear and kernel classification, presenting details of two representative approaches: coordinate descent methods and Newton methods. Extending these optimization techniques to handle big data in multi-core or distributed environments has recently become an important research direction. We present some promising results and discuss future challenges.
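
To make the relationship between SVM and logistic regression concrete, here is the standard unified formulation (a sketch for orientation, not the talk's own derivation): given training instances $(x_i, y_i)$ with labels $y_i \in \{-1, +1\}$, both methods solve

    \min_{w} \; \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi(w; x_i, y_i),

where $C > 0$ is a penalty parameter and only the loss $\xi$ differs: the hinge loss $\max(0, 1 - y_i w^T x_i)$ gives the L1-loss SVM, its square gives the L2-loss SVM, and the logistic loss $\log(1 + e^{-y_i w^T x_i})$ gives logistic regression.

As a rough illustration of the coordinate descent setting mentioned above, the Python sketch below performs dual coordinate descent for the L1-loss linear SVM, in the spirit of the approach used in LIBLINEAR. It is a minimal sketch under our own naming (dual_cd_linear_svm and its parameters are not from the talk or the library), and it omits practical refinements such as shrinking:

    import numpy as np

    def dual_cd_linear_svm(X, y, C=1.0, n_epochs=10):
        """Simplified dual coordinate descent for the L1-loss linear SVM.

        Solves min_w 0.5*||w||^2 + C*sum_i max(0, 1 - y_i w^T x_i)
        through its dual, updating one dual variable alpha_i at a time.
        A sketch only; production solvers add shrinking and stopping rules.
        """
        l, n = X.shape
        alpha = np.zeros(l)
        w = np.zeros(n)                     # maintained as w = sum_i alpha_i y_i x_i
        Qii = np.einsum('ij,ij->i', X, X)   # diagonal of Q: ||x_i||^2 (since y_i^2 = 1)
        for _ in range(n_epochs):
            for i in np.random.permutation(l):
                if Qii[i] == 0.0:
                    continue
                G = y[i] * w.dot(X[i]) - 1.0             # gradient of dual objective in alpha_i
                new_alpha = min(max(alpha[i] - G / Qii[i], 0.0), C)  # projected 1-D Newton step
                d = new_alpha - alpha[i]
                if d != 0.0:
                    w += d * y[i] * X[i]                 # O(n) update keeps w consistent
                    alpha[i] = new_alpha
        return w

A quick check on synthetic data (two Gaussian blobs) illustrates the call:

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.hstack([-np.ones(50), np.ones(50)])
    w = dual_cd_linear_svm(X, y, C=1.0)
    print((np.sign(X.dot(w)) == y).mean())   # training accuracy

The key design point, which the talk covers in detail, is that each coordinate update costs only O(n) because w is maintained incrementally rather than recomputed from all alpha values.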

Speaker Details

Chih-Jen Lin is currently a distinguished professor in the Department of Computer Science, National Taiwan University. He obtained his B.S. degree from National Taiwan University in 1993 and his Ph.D. degree from the University of Michigan in 1998. His major research areas include machine learning, data mining, and numerical optimization. He is best known for his work on support vector machines (SVM) for data classification; his software LIBSVM is one of the most widely used and cited SVM packages. For his research he has received many awards, including the ACM KDD 2010 and ACM RecSys 2013 best paper awards. He is an IEEE Fellow, an AAAI Fellow, and an ACM Distinguished Scientist for his contributions to machine learning algorithms and software design. More information about him can be found at http://www.csie.ntu.edu.tw/~cjlin.

Date:
Speakers: Chih-Jen Lin
Affiliation: National Taiwan University