A New Perspective on Sparse Support Vector Machines (SVMs) and a Projected Gradient Method
The presentation consists of two parts. In the first, I introduce a novel convex relaxation of the sparse support vector machine (SVM), based on the perspective relaxation of mixed-integer nonlinear programs. The sparse SVM formulation seeks to minimize the zero-norm of the separating hyperplane's normal vector together with a standard SVM hinge-loss penalty; we then extend the approach to a zero-one loss penalty. The proposed relaxation is a second-order cone formulation that can be solved efficiently by standard conic optimization solvers. Computational experiments compare the solution properties and classification performance of the new formulation with those of sparse SVM formulations previously suggested in the literature.
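As a rough sketch of the setup described above (the symbols below are my notation, not taken from the talk): given training data $(x_i, y_i) \in \mathbb{R}^n \times \{-1, +1\}$, $i = 1, \dots, m$, the sparse SVM with hinge loss can be written as

$$\min_{w \in \mathbb{R}^n,\, b \in \mathbb{R}} \;\; \|w\|_0 \;+\; C \sum_{i=1}^{m} \max\{0,\, 1 - y_i (w^\top x_i + b)\},$$

where $\|w\|_0$ counts the nonzero entries of $w$ and $C > 0$ trades off sparsity against classification error. In the perspective-relaxation approach, one common construction introduces indicators $z_j \in \{0,1\}$ with $w_j = 0$ whenever $z_j = 0$, replaces $\|w\|_0$ by $\sum_j z_j$, and relaxes $z_j$ to $[0,1]$ while tightening the formulation with rotated second-order cone constraints of the form $w_j^2 \le s_j z_j$; the exact formulation used in the talk may differ.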
In the second part of the talk I will discuss recent work on a projected gradient method for quadratic programs with second-order cone constraints. The method that we propose generalizes well-studied algorithms for bound-constrained quadratic programming. Preliminary computational experiments on several classes of random instances show a clear advantage of our method over standard conic and nonlinear optimization solvers.
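To illustrate the flavor of such a method (this is a minimal generic sketch, not the authors' algorithm): projected gradient methods repeatedly take a gradient step on the quadratic objective and project back onto the feasible cone. The projection onto a single second-order cone has a well-known closed form, which makes each iteration cheap.

```python
import numpy as np

def project_soc(z):
    """Project z = (v, t) onto the second-order cone {(v, t) : ||v||_2 <= t}.

    Closed-form projection: identity inside the cone, the origin inside
    the polar cone, and a scaled boundary point otherwise.
    """
    v, t = z[:-1], z[-1]
    nv = np.linalg.norm(v)
    if nv <= t:                      # already feasible
        return z.copy()
    if nv <= -t:                     # inside the polar cone
        return np.zeros_like(z)
    alpha = (nv + t) / 2.0           # nearest point on the cone boundary
    return np.concatenate([alpha * v / nv, [alpha]])

def projected_gradient(Q, c, x0, step, iters=500):
    """Minimize 0.5*x'Qx + c'x over one second-order cone by projected gradient.

    A fixed step size is used for simplicity; practical methods would use a
    line search or spectral step, as in bound-constrained QP algorithms.
    """
    x = x0.copy()
    for _ in range(iters):
        grad = Q @ x + c
        x = project_soc(x - step * grad)
    return x

# Tiny 2-D instance: minimize 0.5*||x||^2 - 2*x[0] subject to |x[0]| <= x[1].
# The unconstrained minimizer (2, 0) is infeasible; the optimum is (1, 1).
Q = np.eye(2)
c = np.array([-2.0, 0.0])
x = projected_gradient(Q, c, np.zeros(2), step=0.5)
```

The per-iteration cost is one matrix-vector product plus an O(n) projection, which is the structural advantage over interior-point conic solvers on large, well-conditioned instances.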