Publication | Open Access
Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference
345 Citations | 0 References | Year: 2018
Abstract

Poor performance in continual learning over non-stationary data distributions remains a major challenge in scaling neural network learning to more human-realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization-based meta-learning. This method learns parameters that make interference from future gradients less likely and transfer from future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments, demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment becomes more non-stationary and as the fraction of total experiences stored becomes smaller.
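The gradient-alignment view in the abstract can be made concrete: for two examples x_i and x_j, transfer corresponds to a positive inner product between their loss gradients and interference to a negative one, so the paper's objective amounts to minimizing E[L(x_i) + L(x_j) - alpha * grad L(x_i) . grad L(x_j)]. Below is a minimal sketch of one MER step, assuming a PyTorch model; the helper names (mer_update, reservoir_update), the single replay batch, and the plain-SGD inner loop are simplifications of the paper's Algorithm 1, which nests Reptile-style meta-updates both within and across several batches.

```python
import copy
import random
import torch

def reservoir_update(memory, example, seen, capacity):
    """Reservoir sampling, as used in the paper, so every example seen so
    far has equal probability of residing in the fixed-size buffer."""
    if len(memory) < capacity:
        memory.append(example)
    else:
        j = random.randint(0, seen)
        if j < capacity:
            memory[j] = example

def mer_update(model, loss_fn, new_example, memory,
               lr=0.03, gamma=1.0, batch_size=5):
    """One simplified Meta-Experience Replay step: SGD over a batch that
    mixes the incoming example with replayed memories, followed by a
    Reptile-style interpolation toward the pre-batch weights. The
    interpolation implicitly rewards gradient alignment across examples."""
    theta_before = copy.deepcopy(model.state_dict())
    # Build a replay batch: the new example plus samples drawn from memory.
    batch = [new_example] + random.sample(
        memory, min(batch_size - 1, len(memory)))
    for x, y in batch:
        # Inner loop: plain SGD on each example in sequence.
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr * p.grad
    # Reptile meta-update: move only a fraction gamma of the way from the
    # pre-batch weights toward the post-batch weights.
    theta_after = model.state_dict()
    model.load_state_dict({
        k: theta_before[k] + gamma * (theta_after[k] - theta_before[k])
        for k in theta_before
    })
```

With gamma = 1 this reduces to ordinary experience replay with sequential SGD; gamma < 1 dampens the update, and in the full algorithm the same interpolation is also applied across batches, which is what ties the update to the transfer/interference trade-off rather than to raw loss alone.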