ResNeXt and Res2Net Structures for Speaker Verification

Tianyan Zhou, Yong Zhao, Jian Wu

The ResNet-based architecture has been widely adopted to extract speaker embeddings for text-independent speaker verification systems. By introducing residual connections into the CNN and standardizing the residual blocks, the ResNet structure makes it possible to train deep networks that achieve highly competitive recognition performance. However, when the input feature space becomes more complicated, simply increasing the depth and width of the ResNet network may not fully realize its performance potential. In this paper, we present two extensions of the ResNet architecture, ResNeXt and Res2Net, for speaker verification. Originally proposed for image recognition, ResNeXt and Res2Net introduce two additional dimensions, cardinality and scale, beyond depth and width, to improve the model's representation capacity. By increasing the scale dimension, the Res2Net model can represent multi-scale features at various granularities, which particularly benefits speaker verification on short utterances. We evaluate the proposed systems on three speaker verification tasks. Experiments on the VoxCeleb test set demonstrate that ResNeXt and Res2Net significantly outperform the conventional ResNet model, with Res2Net achieving the best performance, reducing the EER by 18.5% relative. Experiments on two additional internal test sets with mismatched conditions further confirm that the ResNeXt and Res2Net architectures generalize well under noisy environments and segment-length variations.
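The scale dimension described above can be illustrated with a small numpy sketch of the Res2Net hierarchical split: the input channels are divided into `scale` groups, the first group passes through unchanged, and each later group is filtered after adding the previous group's output, yielding receptive fields of increasing size. The `toy_filter` function below is a hypothetical stand-in for the 3x3 convolution in a real Res2Net block; it is chosen only to keep the sketch self-contained.

```python
import numpy as np

def toy_filter(v):
    # Hypothetical stand-in for the per-group 3x3 convolution
    # used in an actual Res2Net block.
    return v + 1.0

def res2net_split(x, scale=4):
    """Schematic Res2Net hierarchical connection over 1-D 'channels'."""
    xs = np.split(x, scale)          # divide channels into `scale` groups
    ys = [xs[0]]                     # first group: identity, no filtering
    prev = None
    for i in range(1, scale):
        # each later group sees its own split plus the previous output,
        # so the effective receptive field grows with the group index
        inp = xs[i] if prev is None else xs[i] + prev
        prev = toy_filter(inp)
        ys.append(prev)
    return np.concatenate(ys)

x = np.arange(8, dtype=float)
y = res2net_split(x, scale=4)        # same shape as the input
```

Because the groups are processed sequentially and their outputs re-concatenated, the block's output keeps the input's channel count while mixing information across multiple granularities, which is the property the abstract credits for the gains on short utterances.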
