Tuesday, June 2, 2015

ADMM

“Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers”

Solves problems of the form

minimize    f(x) + g(z)
subject to  A x + B z = c

where f and g are convex.

Precursors:
- Dual Ascent
  Gradient (or subgradient) ascent on the dual problem: alternate minimizing the Lagrangian over x with a dual-variable update (a minimal sketch appears after this list).

- Dual Decomposition
  Related ideas appear in well-known work by Dantzig and Wolfe [44] and Benders [13] on large-scale linear programming, as well as in Dantzig's seminal book [43].
  The general idea of dual decomposition appears to be originally due to Everett [69] and is explored in many early references [107, 84, 117, 14].
  The dual problem can be solved with the subgradient method of Shor [152].
  Good references on dual methods and decomposition include the book by Bertsekas [16, chapter 6] and the survey by Nedic and Ozdaglar [131].
  More generally, these ideas lead to decentralized optimization.

- Augmented Lagrangians and the Method of Multipliers
  Augmented Lagrangian methods yield convergence without assumptions like strict convexity or finiteness of f.
  Augmented Lagrangians and the method of multipliers for constrained optimization were first proposed by Hestenes [97, 98] and Powell [138]. (The basic update is sketched after this list.)
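
To make the first two precursors concrete, here is a minimal sketch (not from the paper) of dual ascent on an equality-constrained quadratic problem; the problem data and the step size alpha are illustrative choices. When f is separable across blocks of x, the x-minimization splits into independent subproblems solved in parallel, which is dual decomposition.

```python
import numpy as np

# Dual ascent for: minimize (1/2) x^T x + q^T x   subject to  A x = b.
# Lagrangian: L(x, y) = (1/2) x^T x + q^T x + y^T (A x - b).
np.random.seed(0)
n, m = 5, 3
q = np.random.randn(n)        # illustrative problem data
A = np.random.randn(m, n)
b = np.random.randn(m)

y = np.zeros(m)               # dual variable
alpha = 0.05                  # dual step size (illustrative; must be small enough)
for k in range(1000):
    # x-step: minimize L(x, y) over x  =>  x = -(q + A^T y)
    x = -(q + A.T @ y)
    # dual step: gradient ascent on the dual, direction A x - b
    y = y + alpha * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```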
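And a minimal sketch of the method of multipliers on the same illustrative problem: the only changes are that the x-step minimizes the augmented Lagrangian L_rho(x, y) = f(x) + y^T (A x - b) + (rho/2) ||A x - b||_2^2, and the dual step uses the penalty parameter rho as its step size.

```python
import numpy as np

# Method of multipliers for: minimize (1/2) x^T x + q^T x   subject to  A x = b.
np.random.seed(0)
n, m = 5, 3
q = np.random.randn(n)        # same illustrative data as the sketch above
A = np.random.randn(m, n)
b = np.random.randn(m)

y = np.zeros(m)               # dual variable
rho = 1.0                     # penalty parameter (illustrative)
for k in range(100):
    # x-step: minimize the augmented Lagrangian over x
    #   =>  (I + rho A^T A) x = -(q + A^T y) + rho A^T b
    x = np.linalg.solve(np.eye(n) + rho * (A.T @ A),
                        -(q + A.T @ y) + rho * (A.T @ b))
    # dual update with step size rho
    y = y + rho * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```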

Alternating Direction Method of Multipliers:
The iterations can be written in unscaled form (with the dual variable y) or in the usually more compact scaled form (with the scaled dual variable u = (1/rho) y).

Algorithms:
Scaled form:

x^{k+1} = argmin_x ( f(x) + (rho/2) || A x + B z^k - c + u^k ||_2^2 )
z^{k+1} = argmin_z ( g(z) + (rho/2) || A x^{k+1} + B z - c + u^k ||_2^2 )
u^{k+1} = u^k + A x^{k+1} + B z^{k+1} - c

A very useful algorithm; I need to spend more time on it to understand it better.
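
As a concrete instance, here is a small sketch of scaled-form ADMM applied to the lasso, minimize (1/2)||A x - b||_2^2 + lam ||z||_1 subject to x - z = 0; the problem data, rho, lam, and the iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for (1/2)||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)                                           # scaled dual variable
    AtA_rhoI = A.T @ A + rho * np.eye(n)                      # fixed across iterations
    Atb = A.T @ b
    for k in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))    # x-minimization
        z = soft_threshold(x + u, lam / rho)                  # z-minimization (prox of l1)
        u = u + x - z                                         # scaled dual update
    return z

# Illustrative data: sparse ground truth plus a little noise
np.random.seed(0)
A = np.random.randn(50, 20)
x_true = np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * np.random.randn(50)
x_hat = admm_lasso(A, b, lam=1.0)
print("nonzeros recovered:", np.flatnonzero(np.round(x_hat, 3)))
```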
