Publication | Open Access
MTAdam: Automatic Balancing of Multiple Training Loss Terms
Citations: 12 | References: 28 | Year: 2021
Artificial Intelligence · Model Optimization · Engineering · Machine Learning · Data Science · Data Mining · Machine Learning Model · Generative Adversarial Network · Predictive Analytics · Loss Term · Loss Terms · Multi-task Learning · Automatic Balancing · Computer Science · Deep Learning · Neural Architecture Search · Supervised Learning · Multiple Loss Terms
When training neural models, it is common to combine multiple loss terms. Balancing these terms requires considerable human effort and is computationally demanding. Moreover, the optimal trade-off between the loss terms can change as training progresses, e.g., for adversarial terms. In this work, we generalize the Adam optimization algorithm to handle multiple loss terms. The guiding principle is that for every layer, the gradient magnitudes of the terms should be balanced. To this end, Multi-Term Adam (MTAdam) computes the derivative of each loss term separately, infers the first and second moments per parameter and loss term, and calculates a first moment for the per-layer magnitude of the gradients arising from each loss. This magnitude is used to continuously balance the gradients across all layers, in a manner that both varies from one layer to the next and dynamically changes over time. Our results show that training with the new method leads to fast recovery from suboptimal initial loss weighting and to training outcomes that match or improve upon conventional training with the prescribed hyperparameters of each method.
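The mechanism described in the abstract can be sketched as follows. This is an illustrative NumPy reconstruction based only on the abstract, not the authors' implementation: per-term gradients are rescaled so their per-layer magnitudes match an anchor term, and Adam-style moments are kept per parameter and per loss term. The function name `mtadam_step`, the choice of term 0 as the anchor, and the extra decay rate `beta3` for the magnitude moment are assumptions for this sketch.

```python
import numpy as np

def mtadam_step(params, grads_per_term, state, lr=1e-3,
                beta1=0.9, beta2=0.999, beta3=0.9, eps=1e-8):
    """One MTAdam-style update (illustrative sketch, not the paper's code).

    params:         dict layer_name -> parameter array
    grads_per_term: list (one entry per loss term) of dicts
                    layer_name -> gradient array for that layer
    state:          optimizer state; pass {} on the first call
    """
    n_terms = len(grads_per_term)
    if not state:
        state.update(
            t=0,
            # first and second moments, kept per loss term and per parameter
            m=[{k: np.zeros_like(v) for k, v in params.items()}
               for _ in range(n_terms)],
            v=[{k: np.zeros_like(v) for k, v in params.items()}
               for _ in range(n_terms)],
            # EMA of per-layer gradient magnitude, per loss term
            mag=[{k: 0.0 for k in params} for _ in range(n_terms)])
    state['t'] += 1
    t = state['t']
    for name, p in params.items():
        # update the magnitude moment of each term for this layer
        for i, grads in enumerate(grads_per_term):
            g_norm = float(np.linalg.norm(grads[name]))
            state['mag'][i][name] = (beta3 * state['mag'][i][name]
                                     + (1 - beta3) * g_norm)
        # balance all terms toward the magnitude of term 0 (assumed anchor)
        anchor = state['mag'][0][name] + eps
        update = np.zeros_like(p)
        for i, grads in enumerate(grads_per_term):
            # rescale this term's gradient so its layer-wise magnitude
            # matches the anchor term's magnitude
            g = grads[name] * anchor / (state['mag'][i][name] + eps)
            m = state['m'][i][name] = beta1 * state['m'][i][name] + (1 - beta1) * g
            v = state['v'][i][name] = beta2 * state['v'][i][name] + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
            v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
            update += m_hat / (np.sqrt(v_hat) + eps)
        params[name] = p - lr * update
```

Because the rescaling happens per layer and the magnitude moment is an exponential moving average, the effective weighting both differs across layers and drifts over training, which is how the method can recover from a poor initial loss weighting.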