Seminar | Mathematics and Computer Science

Privacy-Preserving Distributed Optimization Algorithms and Software Framework

LANS Seminar

Abstract: In distributed optimization (DO), multiple agents cooperate to minimize a global objective function, expressed as a sum of local objective functions, subject to some constraints. Specifically, each agent solves a local optimization model constructed from its own data as well as information received from its neighbors in a communication network. Compared with a centralized framework, DO has several advantages: (i) less communication, since the agents share only limited amounts of information; (ii) robustness to the failure of individual agents; and (iii) the potential for computational speedup through parallel computation. Because of these advantages, DO has been applied in areas such as power systems and machine learning. However, most DO algorithms do not guarantee data privacy even though raw data are not shared, which limits the practical use of DO in applications involving sensitive personal data.
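To make the setup concrete, the following is a minimal sketch (not the speaker's algorithm) of consensus-based distributed gradient descent on a toy problem: each agent holds a private quadratic objective, averages iterates with its ring neighbors via a mixing matrix, and takes a local gradient step. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy setup: agent i privately holds f_i(x) = 0.5 * (x - c_i)^2.
# The global objective sum_i f_i is minimized at x = mean(c).
rng = np.random.default_rng(0)
n_agents = 4
c = rng.normal(size=n_agents)   # each agent's private data
x = np.zeros(n_agents)          # each agent's local iterate

# Ring communication network: each agent mixes with its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

step = 0.1
for _ in range(500):
    grad = x - c                # local gradient of f_i at x_i
    x = W @ x - step * grad     # consensus averaging + local descent
```

With a constant step size the local iterates reach consensus only up to a small step-size-dependent error; the average of the iterates still converges to the global minimizer, which is why diminishing step sizes (or the error-free schemes discussed in this talk) are of interest.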

To address the data privacy concern in DO, we propose privacy-preserving DO algorithms that provide a statistical guarantee of data privacy known as differential privacy, and we apply them to optimization problems in power systems as well as to federated learning (FL). The proposed algorithms stand out in that the sequence of iterates converges in expectation to an optimal solution, rather than only to within an error bound of it. Additionally, we develop and release an open-source software framework for privacy-preserving FL that enables real-world FL by communicating information via gRPC, as well as simulation of FL on HPC architectures.
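For intuition on the privacy mechanism, here is a minimal sketch of the standard Laplace mechanism, a common way to achieve epsilon-differential privacy: before an agent shares its iterate, it adds noise calibrated to the message's sensitivity. The function name, sensitivity bound, and epsilon value are illustrative assumptions, not the specific algorithm presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize(message, sensitivity, epsilon):
    """Laplace mechanism: add i.i.d. noise with scale = sensitivity / epsilon,
    so the released message is epsilon-differentially private with respect
    to a bounded change in the agent's local data."""
    scale = sensitivity / epsilon
    return message + rng.laplace(loc=0.0, scale=scale, size=np.shape(message))

# An agent perturbs its local iterate before sending it to neighbors.
x_local = np.array([0.2, -0.4])
noisy_message = privatize(x_local, sensitivity=0.1, epsilon=1.0)
```

Smaller epsilon means stronger privacy but larger noise; the injected noise is what typically forces the error bounds that the proposed algorithms avoid in expectation.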