Multi-Server Secure Aggregation with Unreliable Communication Links

Kai Liang, Songze Li, Ming Ding, Youlong Wu

In many distributed learning setups such as federated learning (FL), client nodes at the edge use individually collected data to compute local gradients and send them to a central master server. The master server then aggregates the received gradients and broadcasts the aggregated gradient to all clients, allowing the clients to update the global model. In this paper, we consider multi-server federated learning with secure aggregation and unreliable communication links. We first define a threat model using Shannon's information-theoretic security framework and propose a novel scheme called Lagrange Coding with Mask (LCM), which divides the servers into groups and combines coding and masking techniques. LCM can achieve a trade-off between the uplink and downlink communication loads by adjusting the number of servers in each group. Furthermore, we derive lower bounds on the uplink and downlink communication loads, respectively, and prove that LCM achieves the optimal uplink communication load, which is independent of the number of colluding clients.
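To make the masking idea concrete, below is a minimal sketch of additive secret sharing over a prime field, the basic primitive behind multi-server secure aggregation of this kind: each server sees only uniformly random shares, yet the share-sums reveal the aggregate. The field size, dimensions, and one-round structure are illustrative assumptions; the actual LCM scheme additionally uses Lagrange coding and server grouping.

```python
import numpy as np

P = 2**31 - 1  # prime field size; an illustrative choice
rng = np.random.default_rng(0)

def make_shares(x: np.ndarray, num_servers: int) -> list[np.ndarray]:
    """Split a vector x in F_P into additive shares that sum to x mod P.

    Any num_servers - 1 shares are jointly uniform, so no proper subset
    of servers learns anything about x (information-theoretic security).
    """
    shares = [rng.integers(0, P, size=x.shape) for _ in range(num_servers - 1)]
    shares.append((x - sum(shares)) % P)  # last share completes the sum
    return shares

# Three clients, each holding a (quantized) gradient of dimension 4 in F_P.
num_servers = 2
gradients = [rng.integers(0, P, size=4) for _ in range(3)]

# Uplink: client i sends its j-th share to server j.
server_inbox = [[] for _ in range(num_servers)]
for g in gradients:
    for j, share in enumerate(make_shares(g, num_servers)):
        server_inbox[j].append(share)

# Each server aggregates only its own shares; individually these partial
# sums are still uniformly random and leak no single client's gradient.
server_sums = [sum(inbox) % P for inbox in server_inbox]

# Downlink: combining the servers' partial sums recovers the true aggregate.
aggregate = sum(server_sums) % P
assert np.array_equal(aggregate, sum(gradients) % P)
print(aggregate)
```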
