DeepBurning
Publication | Closed Access
Year: 2016 · Venue: unknown · Citations: 207 · References: 10
Keywords: NN Accelerators, Deep Neural Networks, Engineering, Machine Learning, Hardware Acceleration, Hardware Algorithm, Computer Engineering, Computer Architecture, Machine Learning Accelerators, Neural Architecture Search, Domain-specific Accelerator, Embedded Machine Learning, Computer Science, Parallel Computing, Deep Learning, Generated Learning Accelerators
Recent advances in Neural Networks (NN) are enabling more and more innovative applications. As an energy-efficient hardware solution, machine learning accelerators for CNNs or traditional ANNs are also gaining popularity in embedded vision, robotics, and cyber-physical systems. However, the design parameters of NN models vary significantly from application to application. Hence, it is hard to provide one general, highly efficient hardware solution that accommodates all of them, and it is also impractical for domain-specific developers to customize their own hardware targeting a specific NN model. To deal with this dilemma, this study proposes a design automation tool, DeepBurning, which allows application developers to build from scratch learning accelerators that target their specific NN models, with custom configurations and optimized performance. DeepBurning includes an RTL-level accelerator generator and a coordinated compiler that generates the control flow and data layout under user-specified constraints. The results can be used to implement FPGA-based NN accelerators or to guide chip design at an early design stage. In general, DeepBurning supports a large family of NN models and greatly simplifies the design flow of NN accelerators for machine learning and AI application developers. The evaluation shows that the generated learning accelerators burned onto our FPGA board exhibit superior power efficiency compared to state-of-the-art FPGA-based solutions.
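The compiler described above must map each layer of a user-specified NN model onto a fixed hardware budget. As a minimal sketch of that idea (not DeepBurning's actual interface — the `LayerSpec`, `AcceleratorConfig`, and `schedule` names and the MAC-counting scheme are illustrative assumptions), one can imagine a pass that computes how many hardware passes each layer needs when its compute demand exceeds the available multiply-accumulate units:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LayerSpec:
    """User-supplied description of one NN layer (hypothetical schema)."""
    kind: str          # e.g. "conv", "pool", "fc"
    in_channels: int
    out_channels: int
    kernel: int        # kernel width/height; 1 for fully-connected layers

@dataclass
class AcceleratorConfig:
    """A user-specified constraint (illustrative only)."""
    max_macs: int      # MAC units available on the target FPGA

def schedule(model: List[LayerSpec], cfg: AcceleratorConfig) -> List[dict]:
    """Derive a per-layer control-flow entry: how many sequential passes
    a layer needs when its per-output MAC demand exceeds the budget."""
    plan = []
    for layer in model:
        macs_needed = layer.in_channels * layer.out_channels * layer.kernel ** 2
        passes = -(-macs_needed // cfg.max_macs)  # ceiling division
        plan.append({"kind": layer.kind, "passes": passes})
    return plan
```

For example, a small two-layer model under a 256-MAC budget would fold the convolution (3 × 16 × 3² = 432 MACs) into two passes while the fully-connected layer (160 MACs) fits in one. A real generator would additionally emit the RTL configuration and data layout, which this sketch omits.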