Fairness Considered Harmful: On the Non-portability of Fair-ML in India

Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran

Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations. In this paper, we ask how portable the assumptions of this largely Western take on algorithmic fairness are to a different geo-cultural context such as India. Based on 36 expert interviews with Indian scholars, and an analysis of emerging algorithmic deployments in India, we identify three clusters of challenges that engulf the large distance between machine learning models and oppressed communities in India. We argue that a mere translation of technical fairness work to Indian subgroups may serve only as window dressing, and instead call for a collective re-imagining of Fair-ML by re-contextualising data and models, empowering oppressed communities, and, most importantly, enabling ecosystems.
