Concepedia

Publication | Open Access

Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures

Citations: 42

References: 0

Year: 2016

Abstract

Major winning Convolutional Neural Networks (CNNs), such as AlexNet, VGGNet, ResNet, and GoogleNet, include tens to hundreds of millions of parameters, which impose considerable computation and memory overhead. This limits their practical use for training, optimization, and memory efficiency. Conversely, the light-weight architectures proposed to address this issue mainly suffer from low accuracy. These inefficiencies mostly stem from following an ad hoc design procedure. We propose a simple architecture, called SimpleNet, based on a set of design principles, with which we empirically show that a well-crafted yet simple and reasonably deep architecture can perform on par with deeper and more complex architectures. SimpleNet provides a good trade-off between computation/memory efficiency and accuracy. Our simple 13-layer architecture outperforms most of the deeper and more complex architectures to date, such as VGGNet, ResNet, and GoogleNet, on several well-known benchmarks while having 2 to 25 times fewer parameters and operations. This makes it very suitable for embedded systems or systems with computational and memory limitations. We achieved state-of-the-art results on CIFAR10, outperforming several heavier architectures, near-state-of-the-art results on MNIST, and competitive results on CIFAR100 and SVHN. We also outperformed much larger and deeper architectures, such as VGGNet and popular variants of ResNet, among others, on the ImageNet dataset. Models are made available at: https://github.com/Coderx7/SimpleNet
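To make the "simple, reasonably deep" design concrete, below is a minimal PyTorch sketch of a SimpleNet-style plain CNN: 13 convolutional layers, each followed by batch normalization and ReLU, a few max-pooling stages, and a single linear classifier. The class name SimpleNetSketch, the channel plan in cfg, and the pooling positions are illustrative assumptions, not the paper's exact configuration; the authors' actual models are in the repository linked above.

# Illustrative sketch only; exact widths and pooling positions are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel_size=3):
    # The repeated building block: Conv -> BatchNorm -> ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SimpleNetSketch(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Hypothetical channel plan with 13 conv layers; 'M' marks a 2x2 max-pool.
        cfg = [64, 128, 128, 128, 'M', 128, 128, 256, 'M',
               256, 256, 512, 'M', 512, 512, 512]
        layers, in_ch = [], 3
        for v in cfg:
            if v == 'M':
                layers.append(nn.MaxPool2d(2))
            else:
                layers.append(conv_block(in_ch, v))
                in_ch = v
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveMaxPool2d(1)  # global pooling before the classifier
        self.classifier = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = SimpleNetSketch(num_classes=10)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params / 1e6:.1f}M")
    out = model(torch.randn(2, 3, 32, 32))  # CIFAR-10-sized input
    print(out.shape)  # torch.Size([2, 10])

The parameter count printed by the sketch makes the abstract's efficiency claim easy to check against heavier baselines such as VGGNet: the network is a plain feed-forward stack with no residual connections or inception-style branches, which is precisely what keeps it small and cheap to run.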