PuDianNao
Publication (closed access), 2015, venue unknown. 253 citations, 35 references.
Keywords: Artificial Intelligence, Engineering, Machine Learning, Data Science, ML Domain, Hardware Acceleration, Machine Learning Model, Machine Learning Tool, Computer Engineering, Computer Architecture, Domain-specific Accelerator, Embedded Machine Learning, Parallel Programming, Computer Science, Parallel Computing, ML Technique
Machine Learning (ML) techniques are pervasive in emerging commercial applications, but they must be supported by powerful computer systems to process very large datasets. General-purpose CPUs and GPUs offer straightforward solutions, but their energy efficiency is limited by the overheads of their flexibility. Hardware accelerators can achieve better energy efficiency, but each accelerator typically supports only a single ML technique (or family of techniques). According to the well-known No-Free-Lunch theorem in ML, however, a technique that performs well on one dataset may perform poorly on another, so such an accelerator can sometimes deliver poor learning accuracy. Even setting accuracy aside, such an accelerator can become inapplicable simply because the concrete ML task changes, or because the user chooses a different ML technique.
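The No-Free-Lunch argument above can be seen concretely: a technique that excels on one dataset can fail on another with a different structure. The sketch below (an illustrative assumption, not from the paper; the dataset generators and classifier implementations are hypothetical stand-ins for real ML workloads) compares a linear perceptron against a 1-nearest-neighbor classifier on two synthetic datasets, one linearly separable and one XOR-shaped. The perceptron handles the first but not the second, while 1-NN handles both, so no single fixed technique dominates.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(n=400):
    # Linearly separable: two well-separated Gaussian clusters.
    X = np.vstack([rng.normal([-2, -2], 0.5, (n // 2, 2)),
                   rng.normal([2, 2], 0.5, (n // 2, 2))])
    y = np.repeat([0, 1], n // 2)
    return X, y

def make_xor(n=400):
    # XOR pattern: labels flip across both axes, not linearly separable.
    X = rng.uniform(-1, 1, (n, 2))
    y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
    return X, y

def perceptron_acc(Xtr, ytr, Xte, yte, epochs=50):
    # Classic perceptron with a bias feature; labels mapped to {-1, +1}.
    A = np.hstack([Xtr, np.ones((len(Xtr), 1))])
    t = 2 * ytr - 1
    w = np.zeros(A.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(A, t):
            if ti * (w @ xi) <= 0:   # misclassified -> update
                w += ti * xi
    B = np.hstack([Xte, np.ones((len(Xte), 1))])
    return ((B @ w > 0).astype(int) == yte).mean()

def knn1_acc(Xtr, ytr, Xte, yte):
    # 1-nearest-neighbor: predict the label of the closest training point.
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return (ytr[d.argmin(axis=1)] == yte).mean()

def split(X, y, frac=0.5):
    idx = rng.permutation(len(X))
    cut = int(len(X) * frac)
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

acc = {}
for name, maker in [("blobs", make_blobs), ("xor", make_xor)]:
    Xtr, ytr, Xte, yte = split(*maker())
    acc[("linear", name)] = perceptron_acc(Xtr, ytr, Xte, yte)
    acc[("1nn", name)] = knn1_acc(Xtr, ytr, Xte, yte)
    print(name, acc[("linear", name)], acc[("1nn", name)])
```

On the separable blobs both models score highly, but on the XOR data the linear model degrades to near chance while 1-NN remains accurate, mirroring the paper's motivation for an accelerator that supports multiple ML techniques rather than one.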