Concepedia

TLDR

Artificial intelligence models struggle to learn new tasks quickly without forgetting prior knowledge, a challenge that the emerging paradigm of continual learning addresses by learning tasks sequentially. The authors propose Reinforced Continual Learning, a novel approach that uses reinforcement learning to search for the best neural architecture for each incoming task. Experiments on sequential classification tasks over variants of MNIST and CIFAR-100 show that Reinforced Continual Learning effectively prevents catastrophic forgetting, fits new tasks well, and outperforms existing continual learning alternatives for deep networks.

Abstract

Most artificial intelligence models are limited in their ability to solve new tasks quickly without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to address this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each incoming task via carefully designed reinforcement learning strategies. We name it Reinforced Continual Learning. Our method not only performs well at preventing catastrophic forgetting but also fits new tasks well. Experiments on sequential classification tasks with variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks.
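To make the idea concrete, the loop below is a minimal sketch of searching for a per-task network expansion with a reward signal, in the spirit of the approach described above. It is not the paper's implementation: the paper trains a controller with reinforcement learning, whereas this sketch uses a naive best-of-N random search, and the function `train_and_evaluate` is a hypothetical placeholder for training the expanded network and measuring validation accuracy.

```python
import random

# Candidate numbers of units to add per layer when a new task arrives.
# (Illustrative values, not taken from the paper.)
ACTIONS = [0, 1, 2, 4, 8]


def train_and_evaluate(units_added, task_id):
    """Hypothetical stand-in for training the expanded network on a task.

    Returns a mock reward: an accuracy proxy minus a penalty on added
    capacity. A real implementation would train only the new units
    (keeping old weights fixed to avoid forgetting) and return
    validation accuracy minus a complexity cost.
    """
    accuracy_proxy = 1.0 - 1.0 / (1 + sum(units_added))
    complexity_penalty = 0.01 * sum(units_added)
    return accuracy_proxy - complexity_penalty


def search_expansion(num_layers, task_id, trials=200, seed=0):
    """Naive search over per-layer expansion sizes for one task.

    Stands in for the paper's reinforcement-learning controller; here
    we simply sample action sequences and keep the best-rewarded one.
    """
    rng = random.Random(seed)
    best_actions, best_reward = None, float("-inf")
    for _ in range(trials):
        actions = [rng.choice(ACTIONS) for _ in range(num_layers)]
        reward = train_and_evaluate(actions, task_id)
        if reward > best_reward:
            best_actions, best_reward = actions, reward
    return best_actions, best_reward


if __name__ == "__main__":
    actions, reward = search_expansion(num_layers=3, task_id=0)
    print(actions, round(reward, 3))
```

The key design point this sketch preserves is that the architecture decision (how much capacity to add) is made per task and driven by a reward, rather than being fixed in advance.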
