Publication | Closed Access

Training support vector machines: an application to face detection

Citations: 2.5K
References: 11
Year: 2002

TLDR

Support Vector Machines, developed by V. Vapnik and his team, provide a new method for training polynomial, neural network, or radial basis function classifiers. The study applies SVMs to computer vision and introduces a decomposition algorithm that iteratively solves small quadratic sub-problems and evaluates optimality conditions, guaranteeing global optimality while reducing memory demands enough to train on datasets with tens of thousands of points.
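
The linearly constrained quadratic program the summary refers to is, in its standard dual form (notation assumed here; symbols are not taken from the paper itself):

\[
\max_{\alpha} \;\; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \;\; 0 \le \alpha_i \le C.
\]

The Hessian \(Q_{ij} = y_i y_j K(x_i, x_j)\) is completely dense, so storing it costs \(O(n^2)\) memory, which is what motivates solving the problem by decomposition into small sub-problems.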

Abstract

We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or radial basis function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
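
The decomposition idea (solve a small sub-problem over a working set of dual variables, check optimality conditions, repeat) can be illustrated with a simplified two-variable working-set scheme in the style of SMO. This is a sketch of the general technique, not the authors' exact algorithm: their working sets are larger and their selection heuristics differ, but the loop structure, the KKT-violation check, and the closed-form sub-problem solve are the same ingredients.

```python
import numpy as np

def svm_smo(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
    """Train a linear-kernel SVM by decomposition: repeatedly pick a pair
    of dual variables that violates the KKT optimality conditions, solve
    the two-variable quadratic sub-problem in closed form, update the bias,
    and stop when no violations remain (the stopping criterion)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    K = X @ X.T                          # dense kernel (Gram) matrix
    alpha, b = np.zeros(n), 0.0
    passes, sweeps = 0, 0
    while passes < max_passes and sweeps < 1000:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]   # prediction error on point i
            # KKT-violation check for alpha_i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = int(rng.integers(n - 1))
                j += j >= i              # random partner index j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai, aj = alpha[i], alpha[j]
                # box constraints of the two-variable sub-problem
                if y[i] != y[j]:
                    L, H = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    L, H = max(0.0, ai + aj - C), min(C, ai + aj)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                # closed-form solution of the sub-problem, clipped to the box
                alpha[j] = np.clip(aj - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj) < 1e-5:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                # keep the bias consistent with an interior multiplier
                b1 = b - Ei - y[i] * (alpha[i] - ai) * K[i, i] - y[j] * (alpha[j] - aj) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai) * K[i, j] - y[j] * (alpha[j] - aj) * K[j, j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = 0 if changed == 0 else passes  # reset below
        passes = passes + 1 if changed == 0 else 0
        sweeps += 1
    w = (alpha * y) @ X                  # recover primal weights (linear kernel only)
    return w, b

# tiny linearly separable toy problem
X = np.array([[2., 2.], [3., 3.], [2., 3.], [0., 0.], [1., 0.], [0., 1.]])
y = np.array([1., 1., 1., -1., -1., -1.])
w, b = svm_smo(X, y)
pred = np.sign(X @ w + b)
```

Only the dense kernel matrix `K` betrays the memory problem the paper targets: at 50,000 points it would need on the order of 2.5 billion entries, which is exactly why the sub-problems must be kept small and `K` computed in pieces in a serious implementation.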
