Data Augmentation for Spoken Language Understanding via Pretrained Models

Baolin Peng, Chenguang Zhu, Michael Zeng, Jianfeng Gao

The training of spoken language understanding (SLU) models often faces the problem of data scarcity. In this paper, we put forward a data augmentation method with pretrained language models to boost the variability and accuracy of generated utterances. Furthermore, we investigate and propose solutions to two previously overlooked scenarios of data scarcity in SLU: i) Rich-in-Ontology: ontology information with numerous valid dialogue acts is given; ii) Rich-in-Utterance: a large number of unlabelled utterances are available. Empirical results show that our method can produce synthetic training data that boosts the performance of language understanding models in various scenarios.
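To make the core idea concrete, below is a minimal sketch of generating synthetic utterances from a pretrained language model conditioned on a serialized dialogue act. The paper does not specify this exact setup; the choice of GPT-2, the `augment` helper, the prompt format, and the sampling parameters are all illustrative assumptions. In a full pipeline, the model would first be fine-tuned on (dialogue act, utterance) pairs from the seed training data; only the sampling step is shown here.

```python
# A hypothetical sketch of LM-based utterance generation for data augmentation.
# Assumes a GPT-2-style model; prompt format and hyperparameters are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def augment(dialogue_act: str, num_samples: int = 3) -> list[str]:
    """Sample candidate utterances conditioned on a serialized dialogue act."""
    prompt = f"Dialogue act: {dialogue_act}\nUtterance:"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            max_length=input_ids.shape[1] + 30,
            do_sample=True,          # nucleus sampling increases variability
            top_p=0.9,
            num_return_sequences=num_samples,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the generated continuation, one candidate per sample.
    return [
        tokenizer.decode(out[input_ids.shape[1]:], skip_special_tokens=True).strip()
        for out in outputs
    ]

# Rich-in-Ontology example: a dialogue act drawn from the ontology yields
# sampled utterances that become synthetic labelled training pairs.
print(augment("inform(food=italian, area=centre)"))
```

In the Rich-in-Utterance scenario the direction would be reversed: unlabelled utterances are paired with predicted labels rather than generated from given dialogue acts.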
