The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can now be viewed as a special case of many other, more general methods.


Basic algorithm

The original Landweber algorithm attempts to recover a signal ''x'' from (noisy) measurements ''y''. The linear version assumes that y = Ax for a linear operator ''A''. When the problem is in finite dimensions, ''A'' is just a matrix. When ''A'' is nonsingular, an explicit solution is x = A^{-1} y. However, if ''A'' is ill-conditioned, the explicit solution is a poor choice since it is sensitive to any noise in the data ''y''. If ''A'' is singular, the explicit solution does not exist at all. The Landweber algorithm is an attempt to regularize the problem, and is one of the alternatives to Tikhonov regularization. We may view the Landweber algorithm as solving
: \min_x \|Ax - y\|_2^2 / 2
using an iterative method. The algorithm is given by the update
: x_{k+1} = x_k - \omega A^*(Ax_k - y),
where the relaxation factor \omega satisfies 0 < \omega < 2/\sigma_1^2. Here \sigma_1 is the largest singular value of A. If we write f(x) = \|Ax - y\|_2^2 / 2, then the update can be written in terms of the gradient
: x_{k+1} = x_k - \omega \nabla f(x_k),
and hence the algorithm is a special case of gradient descent.

For ill-posed problems, the iterative method needs to be stopped at a suitable iteration index, because it semi-converges: the iterates approach a regularized solution during the first iterations but become unstable in later iterations. The reciprocal of the iteration index, 1/k, acts as a regularization parameter. A suitable stopping index is found when the mismatch \|Ax_k - y\|_2^2 approaches the noise level. Using the Landweber iteration as a regularization algorithm has been discussed in the literature.
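The update above is simple enough to sketch in a few lines of NumPy. This is a minimal illustration, not part of the original method description: the matrix, data, iteration count, and the choice \omega = 1/\sigma_1^2 are all illustrative assumptions.

```python
# Minimal sketch of the Landweber iteration x_{k+1} = x_k - omega * A^T (A x_k - y).
import numpy as np

def landweber(A, y, n_iter, omega=None):
    """Run n_iter Landweber updates; omega defaults to 1/sigma_1^2, inside (0, 2/sigma_1^2)."""
    if omega is None:
        sigma1 = np.linalg.norm(A, 2)   # largest singular value of A
        omega = 1.0 / sigma1**2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - omega * A.T @ (A @ x - y)
    return x

# Illustrative well-conditioned, noiseless problem: the iterates converge
# to the least-squares solution, which here is the true signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.arange(1.0, 6.0)
y = A @ x_true
x_hat = landweber(A, y, n_iter=2000)
```

With noisy data and an ill-conditioned A, one would instead stop early, as discussed above, once \|Ax_k - y\|_2^2 reaches the noise level.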


Nonlinear extension

In general, the updates generated by
: x_{k+1} = x_k - \tau \nabla f(x_k)
will generate a sequence x_k that converges to a minimizer of ''f'' whenever ''f'' is convex and the stepsize \tau is chosen such that 0 < \tau < 2/L, where L is the Lipschitz constant of \nabla f (for the linear problem above, L = \sigma_1^2). Since this is a special type of gradient descent, there is currently not much benefit to analyzing it on its own as the nonlinear Landweber iteration, but such analysis was performed historically by many communities that were not aware of unifying frameworks. The nonlinear Landweber problem has been studied in many papers across these communities.
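The nonlinear update can be sketched the same way as the linear one, with \nabla f supplied as a function. The objective below is an illustrative choice of a smooth convex function whose gradient has Lipschitz constant 1, so any stepsize 0 < \tau < 2 satisfies the condition above.

```python
# Gradient descent x_{k+1} = x_k - tau * grad_f(x_k), the nonlinear analogue
# of the Landweber update. Minimal sketch with an illustrative objective.
import numpy as np

def gradient_descent(grad_f, x0, tau, n_iter):
    x = x0.copy()
    for _ in range(n_iter):
        x = x - tau * grad_f(x)
    return x

# Convex objective f(x) = sum_i log cosh(x_i - c_i), minimized at x = c.
# Its gradient is tanh(x - c), which is 1-Lipschitz, so tau = 1.0 < 2 is valid.
c = np.array([1.0, -2.0, 0.5])
grad_f = lambda x: np.tanh(x - c)
x_hat = gradient_descent(grad_f, np.zeros(3), tau=1.0, n_iter=200)
```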


Extension to constrained problems

If ''f'' is a convex function and ''C'' is a convex set, then the problem
: \min_{x \in C} f(x)
can be solved by the constrained, nonlinear Landweber iteration
: x_{k+1} = \mathcal{P}_C\left( x_k - \tau \nabla f(x_k) \right),
where \mathcal{P}_C is the projection onto the set ''C''. Convergence is guaranteed when 0 < \tau < 2/\|A\|^2. This is again a special case of projected gradient descent (which is a special case of the forward–backward algorithm), as discussed by Combettes and Pesquet (2011).
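For a constraint set such as the nonnegative orthant C = \{x : x \ge 0\}, the projection is simply componentwise clipping, which makes the constrained iteration easy to sketch. The problem instance below is an illustrative assumption: a noiseless least-squares problem whose true signal is already feasible.

```python
# Projected Landweber iteration for min ||Ax - y||^2 / 2 subject to x >= 0.
# Minimal sketch; the projection onto the nonnegative orthant is clipping at zero.
import numpy as np

def projected_landweber(A, y, n_iter):
    tau = 1.0 / np.linalg.norm(A, 2)**2     # inside (0, 2/||A||^2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - tau * A.T @ (A @ x - y)     # gradient step
        x = np.maximum(x, 0.0)              # projection onto C = {x : x >= 0}
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
x_true = np.array([0.0, 2.0, 0.0, 1.5])     # feasible (nonnegative) signal
y = A @ x_true
x_hat = projected_landweber(A, y, n_iter=3000)
```

Replacing the projection \mathcal{P}_C with a proximal operator yields the more general forward–backward splitting mentioned above.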


Applications

Since the method has been around since the 1950s, it has been adopted and rediscovered by many scientific communities, especially those studying ill-posed problems. In X-ray computed tomography it is called SIRT, the simultaneous iterative reconstruction technique. It has also been used in the computer vision and signal restoration communities, and in image processing, since many image problems, such as deconvolution, are ill-posed. Variants of this method have also been used in sparse approximation and compressed sensing settings (A. Kyrillidis and V. Cevher, "Recipes on hard thresholding methods," 2011, pp. 353–356, doi:10.1109/CAMSAP.2011.6136024).


References

Landweber, L. (1951). "An iteration formula for Fredholm integral equations of the first kind." ''American Journal of Mathematics'' 73, 615–624.

Combettes, P. L. and Pesquet, J.-C. (2011). "Proximal splitting methods in signal processing." In ''Fixed-Point Algorithms for Inverse Problems in Science and Engineering'' (H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, eds.), pp. 185–212. Springer, New York.