Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning. (arXiv:2303.11337v1 [cs.LG])

Federated learning has gained popularity as a solution to data availability
and privacy challenges in machine learning. However, the aggregation process of
local model updates to obtain a global model in federated learning is
susceptible to malicious attacks, such as backdoor poisoning, label-flipping,
and membership inference. Malicious users aim to sabotage the collaborative
learning process by training their local models on poisoned data. In this
paper, we propose a novel robust aggregation approach based on recursive
Euclidean distance calculation. Our approach measures the distance of the local
models from the previous global model and assigns weights accordingly. Local
models far away from the global model are assigned smaller weights to minimize
the data poisoning effect during aggregation. Our experiments demonstrate that
the proposed algorithm outperforms state-of-the-art algorithms by at least
$5\%$ in accuracy while reducing time complexity by less than $55\%$. Our
contribution is significant as it addresses the critical issue of malicious
attacks in federated learning while improving the accuracy of the global model.
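The weighting idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: the function name, the inverse-distance weighting rule, and the `eps` smoothing term are all assumptions made for the example; the paper's recursive formulation may differ.

```python
import numpy as np

def distance_weighted_aggregate(local_models, prev_global, eps=1e-12):
    """Aggregate local model parameter vectors into a new global model,
    down-weighting updates that lie far from the previous global model.

    Hypothetical sketch of distance-based robust aggregation; the actual
    recursive Euclidean distance rule in the paper may differ.
    """
    local_models = [np.asarray(m, dtype=float) for m in local_models]
    prev_global = np.asarray(prev_global, dtype=float)

    # Euclidean distance of each local model from the previous global model.
    dists = np.array([np.linalg.norm(m - prev_global) for m in local_models])

    # Inverse-distance weights: models far from the global model (possible
    # poisoning) receive smaller weights; eps avoids division by zero.
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()

    # Weighted average of the local models forms the new global model.
    return sum(w * m for w, m in zip(weights, local_models))
```

For example, with two honest updates near the previous global model and one outlier far from it, the outlier's contribution to the average is heavily suppressed, limiting the effect of a poisoned local model on the aggregate.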


