Deep Network Perceptual Losses for Speech Denoising

Mark R. Saddler, Andrew Francl, Jenelle Feather, Kaizhi Qian, Yang Zhang, Josh H. McDermott

Contemporary speech enhancement predominantly relies on audio transforms that are trained to reconstruct a clean speech waveform. Here we investigate whether deep feature representations learned for audio classification tasks can be used to improve denoising. We first trained deep neural networks to classify either spoken words or environmental sounds from audio. We then trained an audio transform to map noisy speech to an audio waveform that minimized 'perceptual' losses derived from the recognition network. When the transform was trained to minimize the difference in the deep feature representations between the output audio and the corresponding clean audio, it removed noise substantially better than baseline methods trained to reconstruct clean waveforms. The learned deep features were essential for this improvement, as features from untrained networks with random weights did not provide the same benefit. These results suggest that deep feature representations can serve as perceptual metrics to guide speech enhancement.
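The deep-feature loss described in the abstract admits a compact implementation. The sketch below, assuming a frozen, pretrained PyTorch recognition network with 1-D convolutional layers, computes the perceptual loss as the summed L1 distance between the intermediate activations elicited by the denoised output and by the clean target. The `extract_features` helper and all names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a deep-feature ("perceptual") loss for speech denoising.
# Assumes `recognizer` is a frozen, pretrained audio classifier; the helper
# `extract_features` and the choice of Conv1d layers are hypothetical.
import torch
import torch.nn as nn


def extract_features(recognizer: nn.Module, audio: torch.Tensor) -> list[torch.Tensor]:
    """Collect intermediate activations from every Conv1d layer via forward hooks."""
    features: list[torch.Tensor] = []
    hooks = []
    for module in recognizer.modules():
        if isinstance(module, nn.Conv1d):
            hooks.append(module.register_forward_hook(
                lambda _mod, _inp, out: features.append(out)))
    recognizer(audio)          # forward pass fills `features`
    for h in hooks:
        h.remove()
    return features


def perceptual_loss(recognizer: nn.Module,
                    denoised: torch.Tensor,
                    clean: torch.Tensor) -> torch.Tensor:
    """Summed L1 distance between deep features of denoised and clean audio."""
    feats_denoised = extract_features(recognizer, denoised)
    with torch.no_grad():      # clean-audio features are fixed targets
        feats_clean = extract_features(recognizer, clean)
    return sum(torch.mean(torch.abs(fd - fc))
               for fd, fc in zip(feats_denoised, feats_clean))
```

In training, gradients from this loss would flow through the frozen recognizer into the parameters of the denoising transform, e.g. `perceptual_loss(recognizer, transform(noisy), clean).backward()`; the recognizer's own weights stay fixed throughout.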
