The least mean squares (LMS) filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to reduce the current sample error, instead of minimizing the total error over all of n, the LMS algorithm can be derived from the Wiener filter.


Derivation of the Wiener filter for system identification

Given a known input signal s[n], the output of an unknown LTI system x[n] can be expressed as:

x[n] = \sum_{k=0}^{N-1} h_k s[n-k] + w[n]

where h_k are the unknown filter tap coefficients and w[n] is noise.

The model system \hat{x}[n], using a Wiener filter solution of order N, can be expressed as:

\hat{x}[n] = \sum_{k=0}^{N-1} \hat{h}_k s[n-k]

where \hat{h}_k are the filter tap coefficients to be determined.

The error between the model and the unknown system is:

e[n] = x[n] - \hat{x}[n]

The total squared error E can be expressed as:

E = \sum_{n=-\infty}^{\infty} e[n]^2
  = \sum_{n=-\infty}^{\infty} (x[n] - \hat{x}[n])^2
  = \sum_{n=-\infty}^{\infty} (x[n]^2 - 2x[n]\hat{x}[n] + \hat{x}[n]^2)

Apply the minimum mean-square error criterion over all of n by setting the gradient of E to zero:

\nabla E = 0, which means \frac{\partial E}{\partial \hat{h}_i} = 0 for all i = 0, 1, 2, ..., N-1

\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x[n]^2 - 2x[n]\hat{x}[n] + \hat{x}[n]^2 \right]

Substitute the definition of \hat{x}[n]:

\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x[n]^2 - 2x[n]\sum_{k=0}^{N-1}\hat{h}_k s[n-k] + \left(\sum_{k=0}^{N-1}\hat{h}_k s[n-k]\right)^2 \right]

Distribute the partial derivative:

\frac{\partial E}{\partial \hat{h}_i} = \sum_{n=-\infty}^{\infty} \left[ -2x[n]s[n-i] + 2\left(\sum_{k=0}^{N-1}\hat{h}_k s[n-k]\right)s[n-i] \right]

Using the definition of discrete cross-correlation, R_{xy}(i) = \sum_{n=-\infty}^{\infty} x[n]y[n-i]:

\frac{\partial E}{\partial \hat{h}_i} = -2R_{xs}(i) + 2\sum_{k=0}^{N-1}\hat{h}_k R_{ss}(i-k) = 0

Rearrange the terms:

R_{xs}(i) = \sum_{k=0}^{N-1}\hat{h}_k R_{ss}(i-k) for all i = 0, 1, 2, ..., N-1

This is a system of N equations in N unknowns, which can be solved. The resulting coefficients of the Wiener filter are given by W = R_{ss}^{-1} P_{xs}, where P_{xs} is the cross-correlation vector between x and s.
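The normal equations above can be sketched numerically. The following is a minimal NumPy illustration, not a reference implementation: the 4-tap impulse response h, the signal lengths, and the noise level are all arbitrary choices for the example. It estimates the correlations R_ss and R_xs from finite data, builds the N-by-N system, and solves W = R_ss^{-1} P_{xs}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown LTI system: a 4-tap impulse response h_k.
h = np.array([0.5, -0.3, 0.2, 0.1])
N = len(h)

# Known input s[n] and observed noisy output x[n] = sum_k h_k s[n-k] + w[n].
s = rng.standard_normal(5000)
x = np.convolve(s, h)[:len(s)] + 0.01 * rng.standard_normal(len(s))

def corr(a, b, lag):
    """Unnormalized correlation R_ab(lag) = sum_n a[n] b[n-lag], estimated
    over the available samples."""
    if lag < 0:
        return corr(b, a, -lag)
    return np.dot(a[lag:], b[:len(b) - lag])

# R_ss(i-k) fills the N x N autocorrelation matrix; R_xs(i) is the
# cross-correlation vector P_xs.
R = np.array([[corr(s, s, i - k) for k in range(N)] for i in range(N)])
P = np.array([corr(x, s, i) for i in range(N)])

# Solve the normal equations: W = R_ss^{-1} P_xs.
h_hat = np.linalg.solve(R, P)
print(np.round(h_hat, 2))  # close to h = [0.5, -0.3, 0.2, 0.1]
```

Because the correlations are estimated from a finite record rather than the infinite sums of the derivation, the recovered taps match h only up to estimation error, which shrinks as the record length grows.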


Derivation of the LMS algorithm

By relaxing the infinite sum of the Wiener filter to just the error at time n, the LMS algorithm can be derived.

The squared error can be expressed as:

E = (d[n] - y[n])^2

where d[n] is the desired signal and y[n] = \sum_{k=0}^{N-1} w_k x[n-k] is the adaptive filter output. Using the minimum mean-square error criterion, take the gradient:

\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i}(d[n] - y[n])^2

Apply the chain rule and substitute the definition of y[n]:

\frac{\partial E}{\partial w_i} = 2(d[n] - y[n])\frac{\partial}{\partial w_i}\left(d[n] - \sum_{k=0}^{N-1} w_k x[n-k]\right)

\frac{\partial E}{\partial w_i} = -2e[n]\,x[n-i]

where e[n] = d[n] - y[n]. Using gradient descent with a step size \mu:

w[n+1] = w[n] - \mu\frac{\partial E}{\partial w}

which becomes, for i = 0, 1, ..., N-1:

w_i[n+1] = w_i[n] + 2\mu\,e[n]\,x[n-i]

This is the LMS update equation.
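The update equation above can be sketched as a sample-by-sample loop. This is an illustrative NumPy example under assumed conditions, not a production adaptive filter: the 4-tap system h, the step size mu, and the noise-free desired signal are all choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unknown 4-tap FIR system to identify.
h = np.array([0.5, -0.3, 0.2, 0.1])
N = len(h)

# Input x[n] and desired signal d[n] (noise-free here for clarity).
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)]

mu = 0.005          # step size
w = np.zeros(N)     # adaptive weights w_i, initialized to zero

for n in range(N, len(x)):
    x_win = x[n - np.arange(N)]      # [x[n], x[n-1], ..., x[n-N+1]]
    y = w @ x_win                    # filter output y[n]
    e = d[n] - y                     # error e[n] = d[n] - y[n]
    w = w + 2 * mu * e * x_win       # w_i[n+1] = w_i[n] + 2*mu*e[n]*x[n-i]

print(np.round(w, 2))  # close to h = [0.5, -0.3, 0.2, 0.1]
```

Unlike the Wiener solution, no correlation matrix is formed or inverted: each sample nudges the weights along the instantaneous gradient estimate, so the weights converge toward the Wiener solution over time rather than being computed in one step.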


See also

* Wiener filter
* Least mean squares filter


References

* J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 4th ed., 2007.