SS-IL: Separated Softmax for Incremental Learning

Hongjoon Ahn, Jihwan Kwak, Subin Lim, Hyeonsu Bang, Hyojun Kim, Taesup Moon

We consider the class incremental learning (CIL) problem, in which a learning agent continuously learns new classes from incrementally arriving training data batches and aims to predict well on all the classes learned so far. The main challenge of the problem is catastrophic forgetting, and for exemplar-memory based CIL methods, it is generally known that forgetting is commonly caused by the prediction score bias injected by the data imbalance between the new classes and the old classes (in the exemplar memory). While several methods have been proposed to correct such score bias through additional post-processing, e.g., score re-scaling or balanced fine-tuning, no systematic analysis of the root cause of the bias has been carried out. To that end, we analyze and show that computing the softmax probabilities jointly over the output scores of all old and new classes can be the main source of the bias, and we propose a new CIL method, Separated Softmax for Incremental Learning (SS-IL). SS-IL consists of a separated softmax (SS) output layer and ratio-preserving (RP) mini-batches, combined with task-wise knowledge distillation (TKD). Through extensive experiments, we show that SS-IL achieves strong state-of-the-art accuracy on several large-scale benchmarks. We also show that SS-IL makes much more balanced predictions, without any of the additional post-processing steps required by other baselines.
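The abstract gives no implementation details, but the two loss components it names can be illustrated concretely. Below is a minimal PyTorch sketch of a separated-softmax cross-entropy and a task-wise distillation term; it is not the authors' code. The function names, argument layout, temperature `T`, and the assumption that class indices are ordered task by task (old classes first) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def separated_softmax_ce(logits, targets, n_old, n_new):
    """Cross-entropy where the softmax is computed separately over the
    old-class and new-class output scores, so the two groups do not
    compete for probability mass (the bias source the abstract identifies
    in the usual joint softmax)."""
    is_old = targets < n_old                      # exemplar-memory samples
    loss = logits.new_zeros(())
    if is_old.any():
        # softmax restricted to the old-class score block
        loss = loss + F.cross_entropy(
            logits[is_old, :n_old], targets[is_old], reduction="sum")
    if (~is_old).any():
        # softmax restricted to the new-class score block;
        # shift labels so they index into that block
        loss = loss + F.cross_entropy(
            logits[~is_old, n_old:n_old + n_new],
            targets[~is_old] - n_old, reduction="sum")
    return loss / logits.size(0)

def task_wise_kd(logits, old_logits, task_sizes, T=2.0):
    """Task-wise knowledge distillation: KL divergence between the previous
    model's and current model's softmax outputs, computed within each old
    task's class block rather than jointly over all classes."""
    loss, start = logits.new_zeros(()), 0
    for size in task_sizes:                       # class counts per old task
        cur = F.log_softmax(logits[:, start:start + size] / T, dim=1)
        prev = F.softmax(old_logits[:, start:start + size] / T, dim=1)
        loss = loss + F.kl_div(cur, prev, reduction="batchmean") * T * T
        start += size
    return loss
```

On this reading, the ratio-preserving (RP) mini-batches would be formed by sampling a fixed proportion of exemplar and new-class examples in every batch, so that both branches of the separated loss receive samples at each step.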
