Moreau Envelope
The Moreau envelope (or the Moreau–Yosida regularization) M_f of a proper lower semi-continuous convex function f is a smoothed version of f. It was proposed by Jean-Jacques Moreau in 1965. The Moreau envelope has important applications in mathematical optimization: minimizing M_f and minimizing f are equivalent problems in the sense that the sets of minimizers of f and of M_f are the same. However, first-order optimization algorithms can be applied directly to M_f, since f may be non-differentiable while M_f is always continuously differentiable. Indeed, many proximal gradient methods can be interpreted as gradient descent on M_f.

Definition

The Moreau envelope of a proper lower semi-continuous convex function f from a Hilbert space \mathcal{X} to (-\infty,+\infty] is defined as

M_f(v) = \inf_{x \in \mathcal{X}} \left( f(x) + \frac{1}{2} \| x - v \|_2^2 \right).

Given a parameter \lambda \in \mathbb{R}^+, the Moreau envelope of \lambda f is also called the Moreau envelope of f with parameter \lambda ...
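A concrete instance (the example function is mine, not from the excerpt): the Moreau envelope of f(x) = |x| is the Huber function, M_f(v) = v^2/2 for |v| <= 1 and |v| - 1/2 otherwise. A brute-force sketch can check this numerically:

```python
import numpy as np

def moreau_envelope(f, v, grid):
    # Brute-force infimum of f(x) + (1/2)(x - v)^2 over a fine grid;
    # an illustration only, not a practical algorithm.
    return np.min(f(grid) + 0.5 * (grid - v) ** 2)

grid = np.linspace(-5, 5, 200001)
for v in [-3.0, -0.5, 0.0, 0.7, 2.0]:
    huber = 0.5 * v**2 if abs(v) <= 1 else abs(v) - 0.5
    approx = moreau_envelope(np.abs, v, grid)
    assert abs(approx - huber) < 1e-4
```

Note that M_f is differentiable everywhere even though |x| is not, which is the smoothing the excerpt describes.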

Semi-continuity
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper (respectively, lower) semicontinuous at a point x_0 if, roughly speaking, the function values for arguments near x_0 are not much higher (respectively, lower) than f\left(x_0\right). A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function and increase its value at a certain point x_0 to f\left(x_0\right) + c for some c > 0, then the result is upper semicontinuous; if we decrease its value to f\left(x_0\right) - c, then the result is lower semicontinuous. The notion of upper and lower semicontinuous functions was first introduced and studied by René Baire in his thesis in 1899.

Definitions

Assume throughout that X is a topological space and f : X \to \overline{\R} is a function with values in the extended real numbers \overline{\R} = \R \cup \{-\infty, +\infty\} ...
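A small numeric illustration (my own, with an admittedly crude stand-in for the limsup): the floor function is upper semicontinuous, since at each point x_0 the values near x_0 never exceed floor(x_0) in the limit.

```python
import math

def approx_limsup(f, x0, eps=1e-6, n=1000):
    # Sample f on a punctured neighborhood of x0 and take the max;
    # for piecewise-constant f this is a reasonable stand-in for limsup.
    pts = [x0 - eps + 2 * eps * k / n for k in range(n + 1) if k != n // 2]
    return max(f(x) for x in pts)

# Upper semicontinuity of floor: f(x0) >= limsup of nearby values.
for x0 in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    assert math.floor(x0) >= approx_limsup(math.floor, x0)
```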

Convex Conjugate
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). It allows in particular for a far-reaching generalization of Lagrangian duality.

Definition

Let X be a real topological vector space and let X^* be the dual space to X. Denote by

\langle \cdot, \cdot \rangle : X^* \times X \to \mathbb{R}

the canonical dual pairing, which is defined by \left( x^*, x \right) \mapsto x^*(x). For a function f : X \to \mathbb{R} \cup \{-\infty, +\infty\} taking values on the extended real number line, its convex conjugate is the function

f^* : X^* \to \mathbb{R} \cup \{-\infty, +\infty\}

whose value at x^* \in X^* is defined to be the supremum:

f^*\left( x^* \right) := \sup \left\{ \langle x^*, x \rangle - f(x) : x \in X \right\},

or, equivalently, in terms of the infimum:

f^*\left( x^* \right) := -\inf \left\{ f(x) - \langle x^*, x \rangle : x \in X \right\}.

This definition can ...
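As a quick check (the example function is mine, not from the excerpt): f(x) = x^2/2 is its own conjugate, since f*(p) = sup_x (p x - x^2/2) = p^2/2. A grid-based supremum confirms this:

```python
import numpy as np

def conjugate(f, p, grid):
    # Brute-force supremum of p*x - f(x) over a grid; a sketch only.
    return np.max(p * grid - f(grid))

grid = np.linspace(-10, 10, 100001)
for p in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    approx = conjugate(lambda x: 0.5 * x**2, p, grid)
    assert abs(approx - 0.5 * p**2) < 1e-6
```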

Proximal Gradient Method
Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form

\min_{x \in \mathbb{R}^N} \sum_{i=1}^n f_i(x)

where f_i : \mathbb{R}^N \to \mathbb{R}, \ i = 1, \dots, n are possibly non-differentiable convex functions. The lack of differentiability rules out conventional smooth optimization techniques like the steepest descent method and the conjugate gradient method, but proximal gradient methods can be used instead. Proximal gradient methods start with a splitting step, in which the functions f_1, \dots, f_n are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-differentiable function among f_1, \dots, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, projected Landweber, projected gradient, alternating projections, alternating- ...
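A minimal sketch of one such method, the iterative shrinkage-thresholding algorithm (ISTA), applied to the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 (the problem data below are made up for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=1000):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true                             # noiseless synthetic data
x_hat = ista(A, b, lam=0.01)
# With small lam and noiseless data, x_hat should be close to x_true.
assert np.linalg.norm(x_hat - x_true) < 0.05
```

Each iteration is a gradient step on the smooth part followed by the proximity operator of the non-smooth part, exactly the splitting the excerpt describes.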

Proximal Operator
In mathematical optimization, the proximal operator is an operator associated with a proper, lower semi-continuous convex function f from a Hilbert space \mathcal{X} to (-\infty,+\infty], and is defined by:

\operatorname{prox}_f(v) = \arg\min_{x \in \mathcal{X}} \left( f(x) + \frac{1}{2} \| x - v \|_{\mathcal{X}}^2 \right).

(An (extended) real-valued function f on a Hilbert space is said to be proper if it is not identically equal to +\infty, and -\infty is not in its image.)

For any function in this class, the minimizer of the right-hand side above is unique, hence making the proximal operator well-defined. The proximal operator is used in proximal gradient methods, which are frequently used in optimization algorithms for non-differentiable problems such as total variation denoising.

Properties

The \operatorname{prox} of a proper, lower semi-continuous convex function f enjoys several useful properties for optimization.
* Fixed points of \operatorname{prox}_f are minimizers of f: \{ x \in \mathcal{X} \mid \operatorname{prox}_f x = x \} = \arg\min f.
* G ...
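A worked instance (the example f is mine, not from the excerpt): for f(x) = lam*|x| the proximal operator has the closed form prox_f(v) = sign(v)*max(|v| - lam, 0), known as soft thresholding. The sketch below verifies this against a brute-force argmin and checks the fixed-point property:

```python
import numpy as np

def prox_l1(v, lam):
    # Soft thresholding: the proximal operator of lam*|.|.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

grid = np.linspace(-10, 10, 200001)
lam = 0.5
for v in [3.0, -0.2, 0.0, 0.7]:
    # Brute-force argmin of f(x) + 0.5*(x - v)^2 over a fine grid.
    brute = grid[np.argmin(lam * np.abs(grid) + 0.5 * (grid - v) ** 2)]
    assert abs(brute - prox_l1(v, lam)) < 1e-4

# Fixed points of prox_f are minimizers of f: here x = 0 minimizes lam*|x|.
assert prox_l1(0.0, lam) == 0.0
```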

Cole–Hopf Transformation
The Cole–Hopf transformation is a method of solving parabolic partial differential equations (PDEs) with a quadratic nonlinearity of the form

u_t - a \Delta u + b \| \nabla u \|^2 = 0, \quad u(0, x) = g(x),

where x \in \mathbb{R}^d, a, b are constants, \Delta is the Laplace operator, \nabla is the gradient, and \| \cdot \| is the \ell^2-norm. By assuming that w = \phi(u), where \phi(\cdot) is an unknown smooth function, we may calculate

w_t = \phi'(u) u_t, \quad \Delta w = \phi'(u) \Delta u + \phi''(u) \| \nabla u \|^2,

which implies that

w_t = \phi'(u) u_t = \phi'(u) \left( a \Delta u - b \| \nabla u \|^2 \right) = a \Delta w - (a \phi'' + b \phi') \| \nabla u \|^2 = a \Delta w

if we constrain \phi to satisfy a \phi'' + b \phi' = 0; one such choice is \phi(u) = e^{-(b/a)u}. Then we may transform the original nonlinear PDE into the canonical heat equation by using the transformation w = e^{-(b/a)u}. This is ''the Cole–Hopf transformation''. With the transformation, the following initial-value problem can now be solved: w_t - a \Delta w = 0, \quad w( ...
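The derivation can be checked symbolically on a concrete one-dimensional example (the particular heat-equation solution below is my own choice for illustration): if w solves w_t = a*w_xx, then u = -(a/b)*log(w) should solve the quadratically nonlinear PDE.

```python
import sympy as sp

t, x = sp.symbols("t x")
a, b = sp.symbols("a b", positive=True)

w = sp.exp(a * t + x)                  # a simple 1-D heat-equation solution
assert sp.simplify(sp.diff(w, t) - a * sp.diff(w, x, 2)) == 0

u = -(a / b) * sp.log(w)               # inverse Cole-Hopf transformation
# Residual of u_t - a*u_xx + b*(u_x)^2 = 0 should vanish identically.
residual = sp.diff(u, t) - a * sp.diff(u, x, 2) + b * sp.diff(u, x) ** 2
assert sp.simplify(residual) == 0
```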

Stanley Osher
Stanley Osher (born April 24, 1942) is an American mathematician, known for his many contributions in shock capturing, level-set methods, and PDE-based methods in computer vision and image processing. Osher is a professor at the University of California, Los Angeles (UCLA), Director of Special Projects in the Institute for Pure and Applied Mathematics (IPAM) and member of the California NanoSystems Institute (CNSI) at UCLA. He has a daughter, Kathryn, and a son, Joel.

Education
* BS, Brooklyn College, 1962
* MS, New York University, 1964
* PhD, New York University, 1966

Research interests
* Level-set methods for computing moving fronts
* Approximation methods for hyperbolic conservation laws and Hamilton–Jacobi equations
* Total variation (TV) and other PDE-based image processing techniques
* Scientific computing
* Applied partial differential equations
* L1/TV-based convex optimization

Osher is listed as an ISI highly cited researcher.

Research contributions
Osher was ...

Hamilton–Jacobi Equation
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is particularly useful in identifying conserved quantities for mechanical systems, which may be possible even when the mechanical problem itself cannot be solved completely. The Hamilton–Jacobi equation is also the only formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the H ...

Viscosity Solution
In mathematics, the viscosity solution concept was introduced in the early 1980s by Pierre-Louis Lions and Michael G. Crandall as a generalization of the classical concept of what is meant by a 'solution' to a partial differential equation (PDE). It has been found that the viscosity solution is the natural solution concept to use in many applications of PDEs, including for example first-order equations arising in dynamic programming (the Hamilton–Jacobi–Bellman equation), differential games (the Hamilton–Jacobi–Isaacs equation) or front evolution problems, as well as second-order equations such as the ones arising in stochastic optimal control or stochastic differential games.

The classical concept was that a PDE

F(x, u, Du, D^2 u) = 0

over a domain x \in \Omega has a solution if we can find a function u(x) continuous and differentiable over the entire domain such that x, u, Du, D^2 u satisfy the above equation at every point. If a scalar equation is degenerate ...
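A standard one-dimensional example (not from the excerpt above) shows why the generalized notion is needed:

```latex
% The eikonal equation with zero boundary data,
\[
  |u'(x)| - 1 = 0 \quad \text{on } (-1,1), \qquad u(-1) = u(1) = 0,
\]
% admits no C^1 solution, yet
\[
  u(x) = 1 - |x|
\]
% is its unique viscosity solution: at the kink x = 0 no smooth test
% function can touch u from below, so the supersolution test is vacuous
% there, while every smooth \varphi touching u from above at 0 satisfies
% |\varphi'(0)| \le 1, i.e. the subsolution inequality holds.
```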

Fenchel's Duality Theorem
In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel.

Let ''ƒ'' be a proper convex function on R''n'' and let ''g'' be a proper concave function on R''n''. Then, if regularity conditions are satisfied,

\inf_x ( f(x) - g(x) ) = \sup_p ( g_*(p) - f^*(p)),

where ''ƒ''* is the convex conjugate of ''ƒ'' (also referred to as the Fenchel–Legendre transform) and ''g''* is the concave conjugate of ''g''. That is,

f^*\left( x^* \right) := \sup \left\{ \langle x^*, x \rangle - f(x) \right\}
g_*\left( x^* \right) := \inf \left\{ \langle x^*, x \rangle - g(x) \right\}

Mathematical theorem

Let ''X'' and ''Y'' be Banach spaces, f: X \to \mathbb{R} \cup \{+\infty\} and g: Y \to \mathbb{R} \cup \{+\infty\} be convex functions and A: X \to Y be a bounded linear map. Then the Fenchel problems:

p^* = \inf_{x \in X} \{ f(x) + g(Ax) \}
d^* = \sup_{y^* \in Y^*} \{ -f^*(A^* y^*) - g^*(-y^*) \}

satisfy weak duality, i.e. p^* \geq d^*. Note that f^*, g^* are the convex conjugates of ''f'', ''g'' respectively, and A^* is the adjoint operator. The perturbation function for this du ...
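A numeric sanity check of the equality (the example functions are my own, not from the excerpt): for f(x) = x^2 (convex) and g(x) = -(x-1)^2 (concave), both sides can be computed by brute force on a grid, and both equal 1/2 analytically (minimum at x = 1/2, maximum at p = 1).

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
p = np.linspace(-10, 10, 2001)

f = x ** 2
g = -(x - 1) ** 2

primal = np.min(f - g)                          # inf_x ( f(x) - g(x) )
f_conj = np.max(np.outer(p, x) - f, axis=1)     # f^*(p) = sup_x <p,x> - f(x)
g_conj = np.min(np.outer(p, x) - g, axis=1)     # g_*(p) = inf_x <p,x> - g(x)
dual = np.max(g_conj - f_conj)                  # sup_p ( g_*(p) - f^*(p) )

assert abs(primal - 0.5) < 1e-3 and abs(dual - 0.5) < 1e-3
```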

Convex Function
In mathematics, a real-valued function is called convex if the line segment between any two points on the graph of the function lies above the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include the quadratic function x^2 and the exponential function e^x. In simple terms, a convex function refers to a function whose graph is shaped like a cup \cup, while a concave function's graph is shaped like a cap \cap. Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has n ...
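The defining chord inequality can be spot-checked numerically (a sketch with randomly sampled points, my own construction): for a convex f, f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for all t in [0, 1], and both examples from the excerpt, x^2 and e^x, satisfy it.

```python
import math
import random

def satisfies_chord_inequality(f, trials=10000, tol=1e-9):
    # Randomized test of f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y);
    # the tolerance absorbs floating-point rounding.
    rng = random.Random(0)
    for _ in range(trials):
        x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
        t = rng.random()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
            return False
    return True

assert satisfies_chord_inequality(lambda x: x * x)
assert satisfies_chord_inequality(math.exp)
# A non-convex function fails the test:
assert not satisfies_chord_inequality(math.sin)
```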
