Publication | Open Access

Rate of approximation results motivated by robust neural network learning

61 Citations · 11 References · Year: 1993

Abstract

The set of functions which a single-hidden-layer neural network can approximate is increasingly well understood, yet our knowledge of how the approximation error depends upon the number of hidden units, i.e. the rate of approximation, remains relatively primitive. Barron [1991] and Jones [1992] give bounds on the rate of approximation valid for Hilbert spaces. We derive bounds for L_p spaces, 1 < p < ∞, recovering the O(1/√n) bounds of Barron and Jones for the case p = 2. The results were motivated in part by the desire to understand approximation in the more "robust" (resistant to exemplar noise) L_p, 1 ≤ p < 2, norms.
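The O(1/√n) rate for p = 2 has a short probabilistic (Maurey-type) demonstration: if the target lies in the convex hull of the hidden-unit family, averaging n units sampled from the mixture gives L2 error shrinking like 1/√n. Below is a minimal Python sketch of that sampling argument on a synthetic target; the target construction, weight distributions, and grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic target: f = average of M random sigmoidal units, so f lies in
# the convex hull of the unit family (hypothetical construction).
M = 2000
x = np.linspace(-1.0, 1.0, 400)
w = rng.normal(size=M)   # input weights, drawn arbitrarily for illustration
b = rng.normal(size=M)   # biases
units = 1.0 / (1.0 + np.exp(-(np.outer(w, x) + b[:, None])))  # shape (M, 400)
f = units.mean(axis=0)

def l2_error(n, trials=50):
    """Mean empirical L2 error of an n-unit approximant built by
    sampling n units uniformly from the mixture and averaging them."""
    errs = []
    for _ in range(trials):
        idx = rng.integers(0, M, size=n)
        f_n = units[idx].mean(axis=0)
        errs.append(np.sqrt(np.mean((f_n - f) ** 2)))
    return float(np.mean(errs))

# Quadrupling n should roughly halve the error, consistent with O(1/sqrt(n)).
for n in (10, 40, 160):
    print(n, l2_error(n))
```

The same sampling trick is what breaks down for p ≠ 2, which is why extending the Hilbert-space argument to general L_p spaces, as the paper does, requires additional work.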
