Concepedia

TLDR

Supervised learning often suffers from small training sets, and exploiting prior knowledge, whether domain-specific or learned from prototypical examples, is a natural way to improve generalization. The paper proposes using such prior knowledge to create virtual examples, thereby expanding the effective size of the training set. The authors show that in some contexts this strategy is mathematically equivalent to incorporating the prior knowledge as a regularizer, and they demonstrate it on object recognition and speech recognition tasks, where generating useful virtual examples is highly nontrivial.
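
One way to see the equivalence claim is a standard Taylor-expansion argument in the style of noise-injection-as-regularization results. The sketch below is not the paper's derivation: it assumes virtual examples of the simple form x + epsilon with small zero-mean perturbations, whereas the paper treats more general, transformation-based virtual examples.

% A compilable sketch (an assumption-laden illustration, not the paper's
% derivation) of why training on virtual examples x + epsilon can act as
% a regularizer.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let virtual examples be $x + \epsilon$ with $\mathbb{E}[\epsilon] = 0$ and
$\mathbb{E}[\epsilon\epsilon^{\top}] = \sigma^{2} I$. Expanding $f$ to second
order around $x$,
\[
f(x+\epsilon) \approx f(x) + \epsilon^{\top}\nabla f(x)
  + \tfrac{1}{2}\,\epsilon^{\top} H_{f}(x)\,\epsilon ,
\]
so the expected squared loss over the virtual examples is
\[
\mathbb{E}_{\epsilon}\!\left[\bigl(f(x+\epsilon)-y\bigr)^{2}\right]
  \approx \bigl(f(x)-y\bigr)^{2}
  + \sigma^{2}\,\lVert \nabla f(x) \rVert^{2}
  + \sigma^{2}\,\bigl(f(x)-y\bigr)\operatorname{tr} H_{f}(x)
  + O(\sigma^{4}).
\]
To leading order, the added terms penalize the sensitivity of $f$ to the
perturbation: an effective Tikhonov-type smoothness regularizer.
\end{document}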

Abstract

One of the key problems in supervised learning is the insufficient size of the training set. The natural way for an intelligent learner to counter this problem and successfully generalize is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. We discuss the notion of using prior knowledge by creating virtual examples and thereby expanding the effective training-set size. We show that in some contexts this idea is mathematically equivalent to incorporating the prior knowledge as a regularizer, suggesting that the strategy is well motivated. The process of creating virtual examples in real-world pattern recognition tasks is highly nontrivial. We provide demonstrative examples from object recognition and speech recognition to illustrate the idea.
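
As a concrete illustration of the idea in the abstract, here is a minimal Python sketch of virtual-example generation for images, assuming the prior knowledge is that class labels are invariant to small rotations and translations. The function name virtual_examples, the transformation ranges, and the use of scipy.ndimage are illustrative choices, not from the paper.

# A minimal sketch of virtual-example generation, assuming the prior
# knowledge is invariance of image labels to small rotations and shifts.
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def virtual_examples(image, label, n_virtual=5, max_angle=10.0, max_shift=2.0):
    """Create n_virtual (image, label) pairs by applying label-preserving
    transformations drawn from the assumed invariance group."""
    out = []
    for _ in range(n_virtual):
        angle = rng.uniform(-max_angle, max_angle)            # small rotation
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)   # small translation
        virtual = shift(rotate(image, angle, reshape=False, mode="nearest"),
                        (dy, dx), mode="nearest")
        out.append((virtual, label))  # the label is unchanged by assumption
    return out

# Usage: expand a tiny training set with virtual examples.
X = [rng.random((28, 28)) for _ in range(3)]   # stand-in images
y = [0, 1, 0]
augmented = list(zip(X, y))
for x, t in zip(X, y):
    augmented.extend(virtual_examples(x, t))
print(len(augmented))  # 3 real + 15 virtual examples = 18

Each real example spawns several virtual ones, so the effective training set grows severalfold, and any learner can then be trained on the augmented set as usual.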
