Publication | Open Access
An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation
Citations: 313
References: 42
Year: 2009
Keywords: Engineering, Machine Learning, Data Aggregation, Computational Complexity, Ordered Aggregation, Ensemble Methods, Classification Method, Aggregate Function, Data Science, Data Mining, Pattern Recognition, Complementary Classifiers, Combinatorial Optimization, Multiple Classifier System, Pruned Ensembles, Sorting Algorithm, Computer Science, Data Classification, Classifier System, Bagging Ensembles, Ensemble Algorithm
Bagging ensembles can be improved by selecting subsets of complementary classifiers. The original bagging algorithm leaves the order of aggregation unspecified; under a random order, the generalization error typically decreases monotonically toward an asymptote as the ensemble grows. The study analyzes pruning strategies that reduce ensemble size while increasing accuracy: the classifiers are reordered so that the error reaches a minimum at intermediate ensemble sizes, a leading fraction of the ordered classifiers is retained, and the resulting pruned ensembles are evaluated on benchmark tasks. Ordered aggregation achieves a minimum error below bagging's asymptote and produces pruned ensembles that are competitive, in performance and robustness, with more computationally expensive methods that directly select optimal subensembles.
Several pruning strategies that can be used to reduce the size and increase the accuracy of bagging ensembles are analyzed. These heuristics select subsets of complementary classifiers that, when combined, can perform better than the whole ensemble. The pruning methods investigated are based on modifying the order of aggregation of classifiers in the ensemble. In the original bagging algorithm, the order of aggregation is left unspecified. When this order is random, the generalization error typically decreases as the number of classifiers in the ensemble increases. If an appropriate ordering for the aggregation process is devised, the generalization error reaches a minimum at intermediate numbers of classifiers. This minimum lies below the asymptotic error of bagging. Pruned ensembles are obtained by retaining a fraction of the classifiers in the ordered ensemble. The performance of these pruned ensembles is evaluated in several benchmark classification tasks under different training conditions. The results of this empirical investigation show that ordered aggregation can be used for the efficient generation of pruned ensembles that are competitive, in terms of performance and robustness of classification, with computationally more costly methods that directly select optimal or near-optimal subensembles.
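The ordered-aggregation idea described above can be illustrated with a minimal sketch. This is not the authors' exact procedure: the decision-stump base learner, the synthetic data, and the greedy "reduce-error" ordering criterion (append the classifier that most lowers validation error of the growing subensemble) are all illustrative assumptions. Retaining a leading fraction of the ordered classifiers then gives the pruned ensemble.

```python
import numpy as np

# Hedged sketch of ordered aggregation for a bagged ensemble.
# Base learners, data, and the greedy ordering criterion are illustrative.

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold, sign) stump with lowest training error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.mean(pred != y)
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best[1:]

def stump_predict(stump, X):
    j, t, s = stump
    return np.where(X[:, j] > t, s, -s)

def bagged_stumps(X, y, n_estimators=25):
    """Bagging: each stump is trained on a bootstrap sample of the training set."""
    n = len(y)
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_estimators))]

def ensemble_error(stumps, X, y):
    """Error of the majority vote; ties broken toward the +1 class."""
    votes = sum(stump_predict(s, X) for s in stumps)
    return np.mean(np.sign(votes + 1e-9) != y)

def reduce_error_order(ensemble, X_val, y_val):
    """Greedily append the classifier that most lowers validation error."""
    order, remaining = [], list(ensemble)
    while remaining:
        best = min(remaining,
                   key=lambda s: ensemble_error(order + [s], X_val, y_val))
        order.append(best)
        remaining.remove(best)
    return order

# Synthetic two-class problem with a linear boundary.
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

ensemble = bagged_stumps(X_tr, y_tr)
ordered = reduce_error_order(ensemble, X_val, y_val)
pruned = ordered[: len(ordered) // 5]  # retain ~20% of the ordered classifiers

full_err = ensemble_error(ensemble, X_val, y_val)
pruned_err = ensemble_error(pruned, X_val, y_val)
```

With an appropriate ordering, the error of the growing subensemble typically dips below the full ensemble's error at intermediate sizes, which is why retaining only the leading fraction can both shrink the ensemble and improve accuracy.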