Publication | Open Access
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
Citations: 33
References: 26
Year: 2019
Exciting new work on generalization bounds for neural networks (NNs) by Neyshabur et al. and Bartlett et al. depends closely on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator). This raises the interesting question of whether controlling these quantities might improve the generalization behaviour of NNs. To this end, we propose stable rank normalization (SRN), a novel, optimal, and computationally efficient weight-normalization scheme that minimizes the stable rank of a linear operator. Surprisingly, we find that SRN, despite being a non-convex problem, has a unique optimal solution. Moreover, we show that SRN allows control of the data-dependent empirical Lipschitz constant, which, in contrast to the Lipschitz upper bound, reflects the true behaviour of a model on a given dataset. We provide thorough analyses showing that SRN, when applied to the linear layers of an NN for classification, yields striking improvements of 11.3% on the generalization gap compared to the standard NN, along with a significant reduction in memorization. When applied to the discriminator of GANs (called SRN-GAN), it improves Inception, FID, and Neural divergence scores on the CIFAR-10/100 and CelebA datasets, while learning mappings with low empirical Lipschitz constants.
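To make the central quantity concrete, below is a minimal, hedged Python/NumPy sketch of the stable rank, srank(W) = ||W||_F^2 / ||W||_2^2, together with a simple illustrative rescaling that pushes a weight matrix toward a target stable rank by shrinking the non-leading spectrum while keeping the leading singular direction intact. The function names (`stable_rank`, `stable_rank_normalize`) and the rescaling rule are illustrative assumptions, not the exact optimal partitioning scheme derived in the paper.

```python
# Minimal sketch (assumed helper names; not the paper's exact optimal scheme).
import numpy as np

def stable_rank(W: np.ndarray) -> float:
    """Stable rank: squared Frobenius norm divided by squared spectral norm."""
    sv = np.linalg.svd(W, compute_uv=False)
    return float(np.sum(sv ** 2) / sv[0] ** 2)

def stable_rank_normalize(W: np.ndarray, target: float) -> np.ndarray:
    """Illustrative normalization: keep the leading rank-1 component and
    shrink the residual so the stable rank drops to `target`."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    if stable_rank(W) <= target:
        return W  # already at or below the target stable rank
    top = s[0] * np.outer(U[:, 0], Vt[0, :])  # leading singular component
    residual = W - top                         # remaining spectrum
    # Pick gamma so that (s1^2 + gamma^2 * sum_{i>1} s_i^2) / s1^2 = target.
    res_energy = np.sum(s[1:] ** 2)
    gamma = np.sqrt(max(target - 1.0, 0.0) * s[0] ** 2 / res_energy) if res_energy > 0 else 0.0
    return top + gamma * residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32))
    W_srn = stable_rank_normalize(W, target=4.0)
    # Prints the stable rank before and after normalization (after ~ target).
    print(stable_rank(W), stable_rank(W_srn))
```

Because the rank-1 leading part and the residual share singular vectors, scaling the residual by gamma leaves the top singular value untouched and shrinks all others, which is why the closed-form choice of gamma lands exactly on the requested stable rank.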