Towards Fair and Decentralized Privacy-Preserving Deep Learning

Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin

In current deep learning paradigms, standalone training tends to result in overfitting and low utility. This problem can be addressed by either a centralized framework, in which a central server trains a global model on the joint data from all parties, or a distributed framework, in which a parameter server aggregates local model updates. Unfortunately, both server-based frameworks suffer from a single point of failure, which decentralized frameworks are inherently resistant to. However, all existing collaborative learning frameworks, whether distributed or decentralized, overlook an important aspect of participation: collaborative fairness. In particular, all parties receive models of similar quality, even those that contribute only marginally with low-quality data. To address this issue, we propose a Decentralized Privacy-Preserving Deep Learning framework, called DPPDL. To the best of our knowledge, this is the first investigation of collaborative fairness in deep learning, and we propose two novel strategies that guarantee both fairness and privacy. We experimentally demonstrate, on benchmark image datasets, that DPPDL achieves fairness, privacy, and accuracy in collaborative deep learning simultaneously. Moreover, it provides a viable solution for detecting and reducing the impact of low-quality parties in a collaborative learning system.
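For context, the following is a minimal sketch of the parameter-server aggregation that the abstract contrasts against: each party computes an update on its local data and a central server averages the updates into a global model. The toy linear-regression task, the party count, and the `local_update` helper are illustrative assumptions for this sketch; they are not part of DPPDL.

```python
import numpy as np

# Minimal sketch of server-based aggregation (FedAvg-style averaging),
# the baseline the abstract contrasts with. The model, data, and
# hyperparameters below are illustrative assumptions, not DPPDL itself.

rng = np.random.default_rng(0)
num_parties, dim, rounds, lr = 4, 10, 50, 0.1

# Each party holds its own private data for a shared regression task.
true_w = rng.normal(size=dim)
party_data = []
for _ in range(num_parties):
    X = rng.normal(size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    party_data.append((X, y))

def local_update(w, X, y, lr):
    """One local gradient step on a party's data; returns its model update."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return -lr * grad

w_global = np.zeros(dim)
for _ in range(rounds):
    # Parties compute updates locally; the parameter server averages them.
    updates = [local_update(w_global, X, y, lr) for X, y in party_data]
    w_global += np.mean(updates, axis=0)

print("residual error:", np.linalg.norm(w_global - true_w))
```

The server performing the averaging step is the single point of failure the abstract refers to, and note that every party receives the same `w_global` regardless of contribution, which is the fairness issue the paper targets; a decentralized framework such as DPPDL removes the central aggregator altogether.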
