Thursday, March 19, 2015

Reading List - 2015.3.18


“Fronthaul Compression for Cloud Radio Access Networks”
A survey of work on fronthaul compression, with emphasis on advanced signal-processing solutions based on network information-theoretic concepts.

- Multiterminal compression
  Uplink: the key technique is distributed compression, i.e., Wyner-Ziv coding [10] (a toy sketch follows this list).
  Downlink: multivariate compression [9, Ch. 9].
- Structured coding
  compute-and-forward [11]
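To fix the Wyner-Ziv idea in my head (quantize finely, transmit only a bin index, let the decoder resolve the ambiguity using its side information), here is a toy scalar sketch. The jointly Gaussian model, quantizer step, and number of bins are all my own illustrative assumptions, not the actual scheme of [10]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the decoder's side information y is a noisy view of the
# source x seen at the encoder (model and parameters are assumptions).
n = 10000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.1, n)

step = 0.05                          # fine scalar quantizer step
q = np.round(x / step).astype(int)   # full quantizer index

# Binning: send only the index modulo M (log2(M) bits/sample instead of
# the full index); y will disambiguate which bin member was meant.
M = 16
b = q % M

# Decoder: pick the reconstruction level in the bin closest to y.
k = np.round((y / step - b) / M)
x_hat = (b + k * M) * step

print("MSE with side information:", np.mean((x - x_hat) ** 2))
```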

  • Uplink:
- Point-to-Point Fronthaul Compression

- Distributed Fronthaul Compression
  First proposed in [17].
  Sequential decompression [9, Ch. 10], [18], [19].
  Wyner-Ziv compression.
  Channel decoding algorithms: message passing or trellis search [10].
  The optimization problem of this scheme is handled with a block-coordinate optimization approach that leverages a key result in [20] (a toy sketch follows this list).

- Compute-and-forward [11]
- Multihop Fronthaul Topology [22]
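The paper's actual fronthaul optimization is not reproduced here, but the block-coordinate pattern it relies on is easy to sketch on a toy quadratic: optimize one block of variables while holding the others fixed, then cycle. The blocks, matrix, and exact per-block solves below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.normal(size=n)               # objective: 0.5 x^T A x - b^T x

blocks = [np.arange(0, 3), np.arange(3, 6)]
x = np.zeros(n)

for it in range(50):
    for blk in blocks:
        # Exactly minimize over one block with the other block fixed:
        # solve A[blk, blk] x_blk = b[blk] - A[blk, rest] x_rest.
        rest = np.setdiff1d(np.arange(n), blk)
        rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
        x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)

print("distance to optimum:", np.linalg.norm(x - np.linalg.solve(A, b)))
```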

  • Downlink:
- Point-to-Point Fronthaul Compression
- Multivariate Fronthaul Compression
- Compute-and-forward [24]
- "dirty paper" nonlinear precoding [25]
- Performance Evaluation
  Cell-edge throughput versus the average per-UE spectral efficiency [8, Fig. 5].

Saturday, March 14, 2015

Reading List 2015.3.14


“Gradient Descent for Unconstrained Smooth Minimization” - Quanming Yao [1]

1. Rudiments
- Taylor's Theorem
- Lipschitz Constant
- Convex and Strong Convex
- Hessian. Condition Number & Bound on Hessian
- Class of Differentiable Functions

2. Gradient Descent
- Gradient as a General Descent Method:
  Choose the step size to ensure convergence to a stationary point of f(x) (a minimal sketch follows this list).
- Gradient Descent under Strong Convexity
  Convergence rate: linear, contracting by a factor of 1 - 1/κ per iteration, where κ is the condition number.
- Gradient Descent under Weak Convexity
  Convergence rate: A_0/(c A_0 k + 1), i.e., O(1/k).
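A minimal gradient-descent sketch with the standard constant step size 1/L, where L is the Lipschitz constant of the gradient; the quadratic test problem is my own assumption, not anything from the notes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
M = rng.normal(size=(n, n))
A = M @ M.T + np.eye(n)              # positive definite -> strongly convex
b = rng.normal(size=n)

f = lambda z: 0.5 * z @ A @ z - b @ z
grad = lambda z: A @ z - b

eigs = np.linalg.eigvalsh(A)
L, m = eigs[-1], eigs[0]             # Lipschitz / strong-convexity constants
print("condition number kappa:", L / m)

x = np.zeros(n)
for k in range(500):
    x = x - (1.0 / L) * grad(x)      # constant step 1/L guarantees descent

print("suboptimality:", f(x) - f(np.linalg.solve(A, b)))
```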

3. Message from Quadratic Programming
  first-order gradient descent: O(1/k) rate for weakly convex functions; O((1 - 1/κ)^k) for strongly convex ones.
- Heavy ball. Needs to know the function's parameters (see the sketch below).
- Conjugate Gradient. Avoids needing the function's parameters.
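A heavy-ball sketch on the same kind of quadratic; the classical Polyak tuning below requires both the strong-convexity constant m and the Lipschitz constant L, which is exactly the "needs the function's parameters" drawback (conjugate gradient sidesteps this via line search and conjugate directions). The test problem is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
M = rng.normal(size=(n, n))
A = M @ M.T + np.eye(n)
b = rng.normal(size=n)
grad = lambda z: A @ z - b

eigs = np.linalg.eigvalsh(A)
m, L = eigs[0], eigs[-1]

# Classical heavy-ball tuning for quadratics: BOTH m and L are needed.
alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2

x, x_prev = np.zeros(n), np.zeros(n)
for k in range(200):
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

print("heavy-ball error:", np.linalg.norm(x - np.linalg.solve(A, b)))
```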

4. Accelerated Gradient Descent
- Weakly convex case (a Nesterov sketch follows this list)
- Strongly convex case
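For the weakly convex case, a sketch of Nesterov's scheme with the standard (k - 1)/(k + 2) momentum weight, which improves the O(1/k) rate of plain gradient descent to O(1/k^2); the rank-deficient quadratic is an assumed test case:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
M = rng.normal(size=(n, 5))
A = M @ M.T                          # rank-deficient: convex, not strongly
z = rng.normal(size=n)
b = A @ z                            # grad(z) = 0, so z is a minimizer

f = lambda v: 0.5 * v @ A @ v - b @ v
grad = lambda v: A @ v - b
L = np.linalg.eigvalsh(A)[-1]

x = np.zeros(n)
y = np.zeros(n)
for k in range(1, 300):
    x_next = y - (1.0 / L) * grad(y)               # gradient step at y
    y = x_next + (k - 1) / (k + 2) * (x_next - x)  # momentum extrapolation
    x = x_next

print("suboptimality:", f(x) - f(z))
```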

5. Newton-Type Methods

The proof of Lemma 1.2 in [1] refers to Lecture 2.

It is very difficult for me to understand everything in the paper right now.

Thursday, March 12, 2015

Reading List 2015.3.12


“Alternative Distributed Algorithms for Network Utility Maximization”

Decomposition techniques: primal decomposition & dual decomposition methods

subproblems (separable) & master problem (updates the coupling variables)

Coupling variables are handled by the primal method; coupling constraints are handled by the dual method.

- Direct Primal and Direct Dual Decompositions
- Indirect Primal and Indirect Dual Decompositions (transform coupling constraint into coupling variable)
- Multilevel Primal and Dual Decompositions
  Problem (17) has two sets of constraints (similar to my problem); it can be handled by dual-primal or dual-dual decomposition.
- Gradient/Subgradient Methods
  choices of step size [33], [34], [36].
- Standard Dual-Based Algorithm for Basic NUM (Network Utility Maximization); a minimal sketch follows this list.
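A minimal sketch of the dual-based algorithm for basic NUM, i.e., maximize Σ_s log(x_s) subject to R x ≤ c: each source solves a local subproblem given the link prices, and the master problem updates the prices by projected subgradient ascent. The tiny 2-link/3-source topology, capacities, and step size are my own assumptions:

```python
import numpy as np

# Topology: source 1 uses links 1 and 2; source 2 uses link 1 only;
# source 3 uses link 2 only (routing matrix and capacities are assumed).
R = np.array([[1, 1, 0],
              [1, 0, 1]])
c = np.array([1.0, 2.0])
lam = np.ones(2)                     # link prices (dual variables)
alpha = 0.05                         # subgradient step size

for t in range(2000):
    q = R.T @ lam                    # total path price seen by each source
    x = 1.0 / q                      # source subproblem: argmax log(x) - q x
    # Master problem: projected subgradient ascent on the link prices.
    lam = np.maximum(lam + alpha * (R @ x - c), 1e-6)

print("rates:", x)
print("link loads vs capacities:", R @ x, c)
```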

Application:
- Power-Constrained Rate Allocation
- QoS Rate Allocation
- Hybrid Rate-Based and Price-Based Rate Allocation
- Multipath-Routing Rate Allocation

Reading List 2015.3.11

“Distributed Methods for Constrained Nonconvex Multi-Agent Optimization - Part I: Theory”
Comparison of some methods:
1) Feasible Sequential Quadratic Programming (FSQP) methods [2]: maintain feasibility, but are centralized.
2) Parallel Variable Distribution (PVD) methods [3]-[5]: parallel, but require a fair amount of information exchange/knowledge, and convergence is guaranteed only for convex constraints or for nonconvex but block-separable constraints.
3) SCA algorithms [6]-[11]: [6], [7], [11] are centralized; [8]-[10] are distributed, but assume convex and separable constraints.

Wednesday, March 11, 2015

Reading List 2015.3.11 - Parallel variable distribution

  • “Parallel variable distribution”
- Unconstrained parallel variable distribution;
- PVD with block separable constraints;
- PVD with general constraints: min f(x) such that g(x) <= 0;
  Handling inseparable constraints: exterior penalty [8], or augmented Lagrangian methods [17], [3] (a minimal penalty sketch follows this list). To avoid both difficulties of the above: the dual differentiable exact penalty function [10].
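A minimal exterior-penalty sketch: replace min f(x) s.t. g(x) ≤ 0 by the unconstrained problem min f(x) + ρ max(0, g(x))² and grow ρ. The toy instance (pull a point toward a while keeping sum(x) ≤ 1) and the step-size rule are my own assumptions:

```python
import numpy as np

a = np.array([1.0, 1.0])             # minimize ||x - a||^2 ...
g = lambda x: np.sum(x) - 1.0        # ... subject to sum(x) <= 1
f_grad = lambda x: 2.0 * (x - a)
g_grad = np.ones(2)

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:
    step = 1.0 / (2.0 + 4.0 * rho)   # roughly 1/L for this objective
    for _ in range(4000):            # inner gradient descent, warm-started
        pen_grad = 2.0 * rho * max(0.0, g(x)) * g_grad
        x = x - step * (f_grad(x) + pen_grad)
    print(f"rho={rho:7.1f}  x={x}  violation={max(0.0, g(x)):.4f}")
```

The printed violation shrinks as ρ grows, which is the usual behavior of exterior penalties: iterates approach feasibility from outside.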

  • “Parallel variable distribution for constrained optimization”
In parallel algorithms, an iteration consists of two steps: parallelization and synchronization [2]; the sketch after the next paragraph illustrates this pattern.

Some methods: block Jacobi [2], updated conjugate subspaces [10], coordinate descent [21], parallel gradient distribution [14], and PVD.
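A sketch of the parallelization/synchronization pattern, simplified to a block-Jacobi flavor (no PVD "forget-me-not" directions): every block subproblem is solved in parallel from the same iterate, then synchronization assembles the next iterate. The quadratic objective and the two-block split are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # SPD quadratic: 0.5 x^T A x - b^T x
b = rng.normal(size=n)
blocks = [np.arange(0, 4), np.arange(4, 8)]

x = np.zeros(n)
for it in range(100):
    # Parallelization: each block solves its subproblem from the SAME x
    # (these solves could run on separate processors).
    proposals = []
    for blk in blocks:
        rest = np.setdiff1d(np.arange(n), blk)
        rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
        proposals.append(np.linalg.solve(A[np.ix_(blk, blk)], rhs))
    # Synchronization: assemble the block proposals into the next iterate.
    for blk, p in zip(blocks, proposals):
        x[blk] = p

print("residual ||Ax - b||:", np.linalg.norm(A @ x - b))
```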

- Nonconvex separable constraints
- Convex inseparable constraints


  • “On the Convergence of Constrained Parallel Variable Distribution Algorithms”
Also mentions some methods: block Jacobi [2], coordinate descent [26], parallel gradient distribution algorithms [16].
Mainly proves convergence for optimization problems with general convex constraints.