Upcoming Events

Distributed Multi-agent Optimization in a Stochastic Derivative-free Setting

October 24, 2013 10:30AM to 11:30AM
Presenter 
Jeffrey Larson, Postdoc Interviewee
Location 
Building 240, Room 4301
Type 
Seminar
Abstract:
This talk presents a distributed algorithm for the cooperative optimization of multiple agents connected by a network. We attempt to optimize a global objective that is the sum of the agents' objectives, while assuming each agent can query only its own objective under stochastic noise. While convergence results for the complete algorithm have yet to be established, we present two intermediate results that are of interest in their own right. We first show that a common technique for distributed optimization, the alternating direction method of multipliers (ADMM), converges when agents' models are inexact. Such inexact minimization is shown to improve the performance of ADMM on problem classes where it is already successful.
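The consensus form of ADMM referenced above can be sketched in a few lines. The following is a minimal illustration, not the speaker's algorithm: the agent objectives, curvatures, and penalty parameter are all hypothetical, chosen as quadratics so each agent's local minimization has a closed form.

```python
# Consensus ADMM sketch for minimizing sum_i f_i(x) across agents.
# Illustrative agent objectives f_i(x) = 0.5 * a_i * (x - b_i)^2.
a = [1.0, 2.0, 4.0]   # agent curvatures (hypothetical)
b = [0.0, 1.0, 3.0]   # agent minimizers (hypothetical)
rho = 1.0             # ADMM penalty parameter (hypothetical)
n = len(a)

x = [0.0] * n         # each agent's local copy of the decision variable
u = [0.0] * n         # scaled dual variables
z = 0.0               # consensus variable

for _ in range(200):
    # Local step: agent i minimizes f_i(x) + (rho/2) * (x - z + u_i)^2,
    # which for a quadratic f_i has the closed form below.
    x = [(a[i] * b[i] + rho * (z - u[i])) / (a[i] + rho) for i in range(n)]
    # Consensus step: average the dual-corrected local solutions.
    z = sum(x[i] + u[i] for i in range(n)) / n
    # Dual step: accumulate each agent's disagreement with the consensus.
    u = [u[i] + x[i] - z for i in range(n)]

# Analytic minimizer of the summed quadratics, for comparison.
x_star = sum(a[i] * b[i] for i in range(n)) / sum(a)
print(round(z, 4), round(x_star, 4))
```

Inexactness, in the sense discussed in the talk, would enter in the local step: each agent solves its subproblem only approximately rather than in closed form.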

We then propose a method for stochastic optimization that constructs models used within a trust-region framework, and we prove its almost-sure convergence. When optimizing stochastic functions without derivatives, many existing algorithms either

  1. evaluate the function in a fixed pattern,
  2. require the user to define a decaying sequence of step sizes, or
  3. repeatedly sample points of interest.

Our method avoids such requirements and is shown to outperform existing algorithms on a benchmark suite of problems. Lastly, we outline how these two results will be combined to prove convergence in the stochastic multi-agent setting.
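The model-based trust-region framework mentioned above can be illustrated with a minimal one-dimensional sketch. This is not the speaker's method: the test objective is hypothetical and noiseless (the talk's setting adds stochastic noise to every evaluation), and the acceptance and radius-update constants are arbitrary choices for illustration.

```python
# Minimal 1-D derivative-free trust-region sketch: interpolate a quadratic
# model from three samples, minimize it within the region, accept or reject.
def f(t):
    # Hypothetical smooth objective with minimizer at t = 3.
    return (t - 3.0) ** 2 + 1.0

x, delta = 0.0, 1.0               # current iterate and trust-region radius
for _ in range(30):
    # Build a quadratic model from samples at x - delta, x, x + delta.
    fm, f0, fp = f(x - delta), f(x), f(x + delta)
    g = (fp - fm) / (2 * delta)           # model gradient at x
    h = (fp - 2 * f0 + fm) / delta ** 2   # model curvature at x
    # Minimize the model over the interval [x - delta, x + delta].
    if h > 0 and abs(g / h) <= delta:
        s = -g / h                        # interior model minimizer
    else:
        s = -delta if g > 0 else delta    # step to the region boundary
    pred = -(g * s + 0.5 * h * s * s)     # decrease predicted by the model
    actual = f0 - f(x + s)                # decrease actually observed
    if pred > 0 and actual / pred > 0.1:
        x += s                            # successful step: accept it
        delta = min(2 * delta, 1.0)       # and possibly grow the region
    else:
        delta *= 0.5                      # unsuccessful: shrink the region
print(round(x, 3))
```

With noisy function values, the same skeleton requires care in how the interpolation points are sampled and how model quality is certified, which is where the talk's convergence analysis comes in.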