Publication | Closed Access
Learning Discriminative Features via Label Consistent Neural Network
26 Citations | 42 References | Year: 2017 | Unknown Venue
Topics: Convolutional Neural Network · Engineering · Machine Learning · Autoencoders · Data Science · Pattern Recognition · Self-supervised Learning · Robot Learning · Semi-supervised Learning · Supervised Learning · Machine Vision · Feature Learning · Computer Science · Deep Learning · Computer Vision · Discriminative Features · Label Consistency Regularization · Gradient Vanishing · Label Consistency Constraint
Deep Convolutional Neural Networks (CNNs) enforce supervised information only at the output layer; hidden layers are trained by back-propagating the prediction error from the output layer without explicit supervision. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision in late hidden layers in a novel way. We associate each neuron in a hidden layer with a particular class label and encourage it to be activated for input signals from that class. More specifically, we introduce a label consistency regularization called the "discriminative representation error" loss for late hidden layers and combine it with the classification error loss to build our overall objective function. This label consistency constraint alleviates the common problem of vanishing gradients and leads to faster convergence; it also makes the features derived from late hidden layers discriminative enough for classification even with a simple k-NN classifier. Experimental results demonstrate that our approach achieves state-of-the-art performance on several public datasets for action and object category recognition.
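The combined objective described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple round-robin assignment of hidden neurons to classes and a squared-error form of the "discriminative representation error" term, added to a standard softmax cross-entropy classification loss (the function name and the `lam` weight are illustrative, not from the paper):

```python
import numpy as np

def label_consistent_loss(hidden, logits, labels, n_classes, lam=0.1):
    """Sketch of a combined objective: classification cross-entropy plus
    a label-consistency ("discriminative representation error") term.

    hidden: (n, h) activations of a late hidden layer
    logits: (n, c) output-layer scores
    labels: (n,) integer class labels
    """
    n, h = hidden.shape
    # Hypothetical neuron-to-class assignment: neuron j belongs to class j % n_classes.
    neuron_class = np.arange(h) % n_classes
    # Ideal label-consistent target: a neuron should fire only for its own class.
    target = (neuron_class[None, :] == labels[:, None]).astype(float)
    rep_err = np.mean((hidden - target) ** 2)  # discriminative representation error

    # Numerically stable softmax cross-entropy on the output layer.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(n), labels])

    return ce + lam * rep_err
```

Under this sketch, hidden activations that match the ideal label-consistent pattern incur zero representation error, so only the classification term remains; mismatched activations are penalized in proportion to `lam`.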