Dykstra's Projection Algorithm
Dykstra's algorithm is a method that computes a point in the intersection of convex sets, and is a variant of the alternating projection method (also called the projections onto convex sets, or POCS, method). In its simplest form, the method finds a point in the intersection of two convex sets by iteratively projecting onto each of the convex sets; it differs from the alternating projection method in that there are intermediate steps. A parallel version of the algorithm was developed by Gaffke and Mathar. The method is named after Richard L. Dykstra, who proposed it in the 1980s.

A key difference between Dykstra's algorithm and the standard alternating projection method occurs when there is more than one point in the intersection of the two sets. In this case, the alternating projection method gives some arbitrary point in this intersection, whereas Dykstra's algorithm gives a specific point: the projection of ''r'' onto the intersection, where ''r'' is the initial point used in the algorithm.


Algorithm

Dykstra's algorithm finds, for each r, the unique point \bar{x} \in C \cap D such that

: \| \bar{x} - r \|^2 \le \| x - r \|^2 \quad \text{for all } x \in C \cap D,

where C, D are convex sets. This problem is equivalent to finding the projection of r onto the set C \cap D, which we denote by \mathcal{P}_{C \cap D}(r). To use Dykstra's algorithm, one must know how to project onto the sets C and D separately.

First, consider the basic alternating projection (also known as POCS) method (first studied, in the case when the sets C, D were linear subspaces, by John von Neumann), which initializes x_0 = r and then generates the sequence

: x_{k+1} = \mathcal{P}_C \left( \mathcal{P}_D ( x_k ) \right).

Dykstra's algorithm is of a similar form, but uses additional auxiliary variables. Start with x_0 = r, p_0 = q_0 = 0 and update by

: y_k = \mathcal{P}_D( x_k + p_k )
: p_{k+1} = x_k + p_k - y_k
: x_{k+1} = \mathcal{P}_C( y_k + q_k )
: q_{k+1} = y_k + q_k - x_{k+1}.

Then the sequence (x_k) converges to the solution of the original problem. For convergence results and a modern perspective on the literature, see P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, eds.), pp. 185–212, Springer, New York, 2011.
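
The update rules above translate directly into code. The following is a minimal NumPy sketch (not from the original source): the projection operators project_C and project_D onto the individual sets are assumed to be supplied by the caller, and the iteration cap and stopping tolerance are arbitrary illustrative choices.

```python
import numpy as np

def dykstra(project_C, project_D, r, num_iters=1000, tol=1e-10):
    """Approximate the projection of r onto C ∩ D with Dykstra's iterations."""
    x = np.asarray(r, dtype=float)
    p = np.zeros_like(x)  # auxiliary variable attached to D
    q = np.zeros_like(x)  # auxiliary variable attached to C
    for _ in range(num_iters):
        y = project_D(x + p)       # y_k     = P_D(x_k + p_k)
        p = x + p - y              # p_{k+1} = x_k + p_k - y_k
        x_new = project_C(y + q)   # x_{k+1} = P_C(y_k + q_k)
        q = y + q - x_new          # q_{k+1} = y_k + q_k - x_{k+1}
        if np.linalg.norm(x_new - x) < tol:  # crude stopping test
            return x_new
        x = x_new
    return x
```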

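To illustrate the difference from plain alternating projections noted in the introduction, the following sketch uses hypothetical set choices: C is the halfplane b ≤ 0 and D is the closed unit disk, with starting point r = (0.5, 2). Alternating projections stop at approximately (0.24, 0), which is merely some point of the intersection, while Dykstra's iterations (reusing the dykstra() sketch above) return approximately (0.5, 0), the actual projection of r onto C ∩ D.

```python
import numpy as np

def project_C(z):
    # Projection onto the halfplane b <= 0: clip the second coordinate.
    return np.array([z[0], min(z[1], 0.0)])

def project_D(z):
    # Projection onto the closed unit disk: rescale points outside it.
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

r = np.array([0.5, 2.0])

# Plain alternating projections (POCS): x_{k+1} = P_C(P_D(x_k)).
x = r.copy()
for _ in range(100):
    x = project_C(project_D(x))
print("POCS limit:   ", x)                                  # ~ [0.2425, 0.0]

# Dykstra's algorithm converges to the projection of r onto C ∩ D.
print("Dykstra limit:", dykstra(project_C, project_D, r))   # ~ [0.5, 0.0]
```

Both limits lie in C ∩ D, but only Dykstra's limit minimizes the distance to the starting point r.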

