Debiasing Multilingual Word Embeddings: A Case Study of Three Indian Languages

Srijan Bansal, Vishal Garimella, Ayush Suhane, Animesh Mukherjee

In this paper, we advance the current state-of-the-art method for debiasing monolingual word embeddings so that it generalizes well in a multilingual setting. We consider different methods to quantify bias and different debiasing approaches for both monolingual and multilingual settings, and we demonstrate the significance of our bias-mitigation approach on downstream NLP applications. Our proposed methods establish state-of-the-art performance for debiasing multilingual embeddings for three Indian languages (Hindi, Bengali, and Telugu) in addition to English. We believe that our work will open up new opportunities in building unbiased downstream NLP applications that are inherently dependent on the quality of the word embeddings used.
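
To make the notion of "debiasing word embeddings" concrete, the sketch below shows the standard projection-based (hard-debiasing) baseline for monolingual embeddings: a bias direction is estimated from definitional word pairs and then removed from each word vector. This is a minimal illustration of the general technique, not the multilingual method proposed in the paper; the function names and toy vectors are assumptions for the example.

```python
import numpy as np

def bias_direction(word_vecs, pairs):
    """Estimate a bias direction from definitional word pairs
    (e.g. ("he", "she")) by averaging their normalized differences."""
    diffs = []
    for a, b in pairs:
        d = word_vecs[a] - word_vecs[b]
        diffs.append(d / np.linalg.norm(d))
    direction = np.mean(diffs, axis=0)
    return direction / np.linalg.norm(direction)

def debias(vec, direction):
    """Remove the component of `vec` along the bias direction
    (the hard-debiasing projection step)."""
    return vec - np.dot(vec, direction) * direction

# Toy usage with random 4-dimensional "embeddings" (illustrative only).
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=4) for w in ["he", "she", "doctor"]}
g = bias_direction(word_vecs, [("he", "she")])
debiased_doctor = debias(word_vecs["doctor"], g)
print(np.dot(debiased_doctor, g))  # ~0: no remaining component along the bias direction
```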
