PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping

Junhyeok Lee, Seungu Han, Hyunjae Cho, Wonbin Jung

Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground-truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis. This conventional training causes overfitting for both the discriminators and the generator, leading to periodicity artifacts in the generated audio signal. In this work, we present PhaseAug, the first differentiable augmentation for speech synthesis, which rotates the phase of each frequency bin to simulate one-to-many mapping. With our proposed method, we outperform baselines without any architecture modification. Code and audio samples will be available at https://github.com/mindslab-ai/phaseaug.
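The core idea can be illustrated with a minimal NumPy sketch: rotating the phase of each frequency bin produces a different waveform with the same magnitude spectrum, so many waveforms map to the same mel-spectrogram. This is only a simplified, whole-signal FFT illustration with a hypothetical `rotate_phase` function; the actual PhaseAug operates on STFT frames and is implemented differentiably (e.g. in PyTorch) inside the vocoder training loop.

```python
import numpy as np

def rotate_phase(wav, rng):
    """Sketch of the idea behind PhaseAug (not the authors' implementation):
    rotate the phase of each frequency bin by a random angle, which changes
    the waveform while leaving the magnitude spectrum intact."""
    spec = np.fft.rfft(wav)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)  # one angle per bin
    phi[0] = 0.0    # keep DC real
    phi[-1] = 0.0   # keep Nyquist real (even-length input)
    rotated = spec * np.exp(1j * phi)                     # phase rotation
    return np.fft.irfft(rotated, n=len(wav))

rng = np.random.default_rng(0)
wav = rng.standard_normal(16000)   # 1 s of dummy audio at 16 kHz
aug = rotate_phase(wav, rng)
# aug differs from wav sample-by-sample, yet their magnitude spectra
# (and hence their mel-spectrograms) are identical.
```

Because the magnitude spectrum is preserved, the augmented waveform is still a valid target for the same mel-spectrogram input, which is what lets the augmentation simulate the one-to-many mapping during GAN training.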
