Publication | Closed Access
Totally corrective boosting algorithms that maximize the margin
121
Citations
17
References
2006
Year
Unknown Venue
Artificial Intelligence, Engineering, Machine Learning, Data Science, Data Mining, Pattern Recognition, Stochastic Optimization, Updated Distribution, Weak Hypothesis, Knowledge Discovery, Convex Optimization, Cleaner Boosting Method, Computational Learning Theory, Computer Science, Classifier System, Statistical Learning Theory, Supervised Learning, Linear Optimization
We consider boosting algorithms that maintain a distribution over a set of examples. At each iteration a weak hypothesis is received and the distribution is updated. We motivate these updates as minimizing the relative entropy subject to linear constraints. For example, AdaBoost constrains the edge of the last hypothesis w.r.t. the updated distribution to be at most γ = 0. In this sense, AdaBoost is "corrective" w.r.t. the last hypothesis only. A cleaner boosting method is to be "totally corrective": the edges of all past hypotheses are constrained to be at most γ, where γ is suitably adapted. Using new techniques, we prove the same iteration bounds for the totally corrective algorithms as for their corrective versions. Moreover, with adaptive γ, the algorithms provably maximize the margin. Experimentally, the totally corrective versions return smaller convex combinations of weak hypotheses than the corrective ones, and they are competitive with LPBoost, a totally corrective boosting algorithm with no regularization for which no iteration bound is known.
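The totally corrective update described in the abstract can be sketched as a small convex program: minimize the relative entropy KL(d ‖ d₀) over the probability simplex, subject to the edge of every past hypothesis being at most γ. The sketch below is an illustrative reconstruction using a generic solver (`scipy.optimize.minimize` with SLSQP), not the authors' implementation; the function name, the matrix `U` of example margins, and the choice of solver are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def totally_corrective_update(U, gamma, d0=None):
    """Illustrative totally corrective step (not the paper's code).

    Minimize KL(d || d0) over the probability simplex subject to the
    edge of every past hypothesis being at most gamma.

    U:     (T, n) array with U[t, i] = y_i * h_t(x_i), the margin of
           past hypothesis t on example i.
    gamma: scalar edge cap (suitably adapted in the paper's algorithms).
    d0:    reference distribution; uniform if omitted.
    """
    T, n = U.shape
    if d0 is None:
        d0 = np.full(n, 1.0 / n)

    def rel_entropy(d):
        d = np.clip(d, 1e-12, None)  # keep log well-defined
        return float(np.sum(d * np.log(d / d0)))

    # d must sum to 1; each past hypothesis' edge d @ U[t] must be <= gamma.
    cons = [{"type": "eq", "fun": lambda d: d.sum() - 1.0}]
    for t in range(T):
        cons.append({"type": "ineq",
                     "fun": lambda d, u=U[t]: gamma - d @ u})

    res = minimize(rel_entropy, d0, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x
```

For example, with a single past hypothesis that is correct on three of four examples and γ = 0, the update shifts at least half of the probability mass onto the misclassified example while keeping the rest as close to uniform as the constraint allows.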