Publication | Closed Access
An overview of statistical learning theory
Citations: 6.2K
References: 21
Year: 1999
Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990s, new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.
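The abstract mentions support vector machines only at a high level. As a purely illustrative sketch (not the construction from the paper), a soft-margin *linear* SVM can be trained by sub-gradient descent on the regularized hinge loss, L(w, b) = (λ/2)·‖w‖² + (1/n)·Σ max(0, 1 − yᵢ(w·xᵢ + b)). All function names, the toy data, and the hyperparameters below are hypothetical choices for demonstration:

```python
# Hypothetical sketch: soft-margin linear SVM via sub-gradient descent
# on the regularized hinge loss. Pure-Python, for illustration only.

def train_linear_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """Minimize (lam/2)*||w||^2 + (1/n)*sum(max(0, 1 - y*(w.x + b)))."""
    dim = len(points[0])
    w = [0.0] * dim
    b = 0.0
    n = len(points)
    for _ in range(epochs):
        # Gradient of the L2 regularization term.
        gw = [lam * wi for wi in w]
        gb = 0.0
        for x, y in zip(points, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point violates the margin: hinge sub-gradient
                for j in range(dim):
                    gw[j] -= y * x[j] / n
                gb -= y / n
        w = [wi - lr * gwi for wi, gwi in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Classify x by the sign of the decision function w.x + b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data: class +1 near (2, 2), class -1 near (-2, -2).
pts = [(2.0, 2.0), (2.5, 1.5), (-2.0, -2.0), (-1.5, -2.5)]
ys = [1, 1, -1, -1]
w, b = train_linear_svm(pts, ys)
```

Real uses would rely on the dual formulation and kernels to handle the nonlinear, high-dimensional estimation problems the abstract alludes to; this primal sketch only shows the margin-maximization idea in its simplest form.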
| Year | Citations |
|---|---|
| 1995 | 39.8K |
| 1995 | 31.8K |
| 1999 | 26.9K |
| 1992 | 11.5K |
| 1998 | 8K |
| 1999 | 5.8K |
| 1991 | 5K |
| 1971 | 2.4K |
| 1989 | 1.8K |
| 1995 | 1.3K |