Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning

Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, Fei Sha

Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to reliably recognize well-populated visual concepts while efficiently learning emerging new categories from a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, either by learning strong classifiers for populated categories or by learning to learn few-shot classifiers for the tail classes. In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- during deployment, a model is required not only to learn about "tail" categories from few shots but also to classify the "head" and "tail" categories simultaneously. We propose ClAssifier SynThesis LEarning (CASTLE), a learning framework that learns to synthesize calibrated few-shot classifiers, in addition to the multi-class classifiers of the "head" classes, with a shared neural dictionary, shedding light upon inductive GFSL. Furthermore, we propose an adaptive version of CASTLE (ACASTLE) that adapts the "head" classifiers conditioned on the incoming "tail" training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, ACASTLE can handle generalized few-shot learning with classes from heterogeneous domains effectively. CASTLE and ACASTLE demonstrate superior performance over existing GFSL algorithms and strong baselines on the MiniImageNet and TieredImageNet data sets. More interestingly, they outperform previous state-of-the-art methods when evaluated on standard few-shot learning.
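To make the central idea concrete, the following is a minimal NumPy sketch of synthesizing a few-shot classifier from a shared neural dictionary, in the spirit of the mechanism described above. All sizes, the attention form, and the residual combination are illustrative assumptions, not the paper's exact architecture; in the actual method the dictionary and embeddings are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: embedding dimension d, dictionary of K key/value pairs.
# In the real model these would be learned; here they are random for illustration.
d, K = 8, 16
dict_keys = rng.standard_normal((K, d))    # shared dictionary keys
dict_values = rng.standard_normal((K, d))  # shared dictionary values

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def synthesize_classifier(support_embeddings):
    """Synthesize a classifier weight vector for one novel 'tail' class.

    The class prototype (mean of the few support embeddings) attends over
    the shared dictionary; the attention-weighted sum of dictionary values,
    combined residually with the prototype, yields the classifier weights.
    """
    prototype = support_embeddings.mean(axis=0)      # (d,)
    attn = softmax(prototype @ dict_keys.T)          # (K,) attention scores
    w = prototype + attn @ dict_values               # residual synthesis
    return w / np.linalg.norm(w)                     # normalized classifier

# 1-shot example: a single support embedding for a novel class produces
# a full classifier weight vector that can be stacked with "head" classifiers.
support = rng.standard_normal((1, d))
w_tail = synthesize_classifier(support)
print(w_tail.shape)
```

The synthesized vector can be concatenated with the many-shot "head" classifier weights so a single softmax scores head and tail classes jointly, which is the setting GFSL evaluates.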
