Allerton 2015 Paper Abstract


Paper ThB3.4

Xi, Chenguang (Tufts University), Khan, Usman A. (Tufts University)

Directed-Distributed Gradient Descent

Scheduled for presentation during the Regular Session "Optimization" (ThB3), Thursday, October 1, 2015, 11:30−11:50, Butternut

53rd Annual Allerton Conference on Communication, Control, and Computing, Sept 29-Oct 2, 2015, Allerton Park and Retreat Center, Monticello, IL, USA


Keywords: Optimization, Decentralized and Distributed Control, Distributed Computation on Networks


Distributed Gradient Descent (DGD) is a well-established algorithm for minimizing a sum of objective functions distributed among the agents of a network, under the assumption that the network is undirected, i.e., that the weight matrices can be chosen doubly-stochastic. In this paper, we present a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), that solves the same problem over directed graphs. In each iteration of D-DGD, every agent maintains an additional variable that records the change in its state evolution. The algorithm simultaneously constructs a row-stochastic matrix and a column-stochastic matrix, instead of a single doubly-stochastic matrix. The analysis shows that D-DGD converges at a rate of O((ln k)/√k).
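The abstract describes the key mechanism only at a high level. The following is a minimal sketch, in the spirit of the surplus-based scheme the abstract outlines: each agent keeps its state x and an extra surplus variable y recording the change in its state; a row-stochastic matrix A mixes states and a column-stochastic matrix B mixes surpluses. The 4-agent directed graph, the weight construction, the quadratic objectives, the coupling parameter eps, and the step-size schedule are all illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Directed, strongly connected graph on 4 agents:
# edges 0->1, 1->2, 2->3, 3->0, 0->2 (self-loops implicit).
n = 4
out_nbrs = {0: [1, 2], 1: [2], 2: [3], 3: [0]}
in_nbrs = {0: [3], 1: [0], 2: [0, 1], 3: [2]}

# Row-stochastic A: each agent averages itself with its in-neighbors
# (each agent only needs to know who it receives from).
A = np.zeros((n, n))
for i in range(n):
    nbrs = in_nbrs[i] + [i]
    for j in nbrs:
        A[i, j] = 1.0 / len(nbrs)

# Column-stochastic B: each agent splits its surplus equally among
# itself and its out-neighbors (only out-degree knowledge needed).
B = np.zeros((n, n))
for j in range(n):
    nbrs = out_nbrs[j] + [j]
    for i in nbrs:
        B[i, j] = 1.0 / len(nbrs)

# Illustrative local objectives f_i(x) = 0.5*(x - b[i])^2, so the
# global minimizer of sum_i f_i is mean(b) = 2.5.
b = np.array([1.0, 2.0, 3.0, 4.0])

def d_dgd(iters, eps=0.05, c=0.3):
    """Surplus-based distributed gradient sketch (assumed update rule)."""
    x = np.zeros(n)  # agent states
    y = np.zeros(n)  # surplus variables recording state change
    for k in range(iters):
        alpha = c / np.sqrt(k + 1)        # diminishing step size
        grad = x - b                      # local gradients of f_i
        x_new = A @ x + eps * y - alpha * grad
        y_new = B @ y - eps * y + (x - A @ x)
        x, y = x_new, y_new
    return x, y

x, y = d_dgd(30000)
print(x)  # each entry approaches the global minimizer mean(b) = 2.5
```

Note the conservation property that motivates the surplus variable: because B is column-stochastic, sum(x) + sum(y) is preserved by the mixing step, so the network-wide average is not distorted even though A alone is only row-stochastic.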


