Kalman filter

In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory. This digital filter is sometimes termed the ''Stratonovich–Kalman–Bucy filter'' because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special-case linear filter's equations appeared in papers by Stratonovich that were published before summer 1960, when Kalman met with Stratonovich during a conference in Moscow.

Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships. Furthermore, Kalman filtering is a concept much applied in time series analysis for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Because of the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.

The algorithm works by a two-phase process. For the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements, the previously calculated state, and its uncertainty matrix; no additional past information is required.

Optimality of Kalman filtering assumes that the errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, the Kalman filter is the best possible ''linear'' estimator in the minimum mean-square-error sense.

Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion and distributed sensor networks to develop distributed or consensus Kalman filtering.


History

The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it sometimes to be known as Kalman–Bucy filtering. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements. It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program, resulting in its incorporation in the Apollo navigation computer. The filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).

Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.


Overview of the calculation

Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.

Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as a weighted average of the system's predicted state and of the new measurement. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.

The relative certainty of the measurements and of the current state estimate is an important consideration, and it is common to discuss the filter's response in terms of the Kalman filter's ''gain''. The Kalman gain is the weight given to the measurements relative to the current state estimate, and it can be "tuned" to achieve particular performance. With a high gain, the filter places more weight on the most recent measurements and thus conforms to them more responsively. With a low gain, the filter conforms more closely to the model predictions. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness.

When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
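
To illustrate the gain trade-off described above with a deliberately stripped-down example (a fixed, hand-chosen scalar gain rather than the computed Kalman gain), consider the following Python sketch:

```python
import numpy as np

def fixed_gain_filter(measurements, gain, x0=0.0):
    """Scalar filter with a constant (non-adaptive) gain.

    Each step blends the previous estimate with the new measurement:
    a high gain trusts the measurement, a low gain trusts the model
    (here the trivial model 'the state does not change').
    """
    x = x0
    estimates = []
    for z in measurements:
        x = x + gain * (z - x)   # weighted average of prediction and measurement
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
z = 1.0 + 0.5 * rng.standard_normal(100)   # noisy measurements of a constant
smooth = fixed_gain_filter(z, gain=0.05)   # low gain: smooth but slow to respond
jumpy = fixed_gain_filter(z, gain=0.9)     # high gain: responsive but noisy
print(smooth[-1], jumpy[-1])
```

The full Kalman filter differs in that the gain is recomputed at every step from the covariances rather than fixed by hand.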


Example application

As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.

For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance. Perhaps the covariance is proportional to the speed of the truck, because we are more uncertain about the accuracy of the dead-reckoning position estimate at high speeds but very certain about it at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.


Technical description and context

The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications, from radar and computer vision to the estimation of structural macroeconomic models, and it is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian (LQG) control problem. The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory.

In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.

In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations.

A wide variety of Kalman filters exists by now: Kalman's original formulation, now termed the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.


Underlying dynamic system model

Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground-truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables take values in a continuous space, as opposed to the discrete state space of the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and in Hamilton (1994), ''Time Series Analysis'', Princeton University Press, Chapter 13, 'The Kalman Filter'.

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step ''k'':
* F''k'', the state-transition model;
* H''k'', the observation model;
* Q''k'', the covariance of the process noise;
* R''k'', the covariance of the observation noise;
* and sometimes B''k'', the control-input model, as described below; if B''k'' is included, then there is also
* u''k'', the control vector, representing the controlling input into the control-input model.

The Kalman filter model assumes the true state at time ''k'' is evolved from the state at (''k'' − 1) according to
:\mathbf{x}_k = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k
where
* F''k'' is the state transition model which is applied to the previous state x''k''−1;
* B''k'' is the control-input model which is applied to the control vector u''k'';
* w''k'' is the process noise, which is assumed to be drawn from a zero-mean multivariate normal distribution \mathcal{N} with covariance Q''k'': \mathbf{w}_k \sim \mathcal{N}\left(0, \mathbf{Q}_k\right).

At time ''k'' an observation (or measurement) z''k'' of the true state x''k'' is made according to
:\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k
where
* H''k'' is the observation model, which maps the true state space into the observed space, and
* v''k'' is the observation noise, which is assumed to be zero-mean Gaussian white noise with covariance R''k'': \mathbf{v}_k \sim \mathcal{N}\left(0, \mathbf{R}_k\right).

The initial state and the noise vectors at each step are all assumed to be mutually independent.

Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input and can therefore bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.


Details

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation \hat{\mathbf{x}}_{n\mid m} represents the estimate of \mathbf{x} at time ''n'' given observations up to and including time ''m'' ≤ ''n''. The state of the filter is represented by two variables:
* \hat{\mathbf{x}}_{k\mid k}, the ''a posteriori'' state estimate at time ''k'' given observations up to and including time ''k'';
* \mathbf{P}_{k\mid k}, the ''a posteriori'' estimate covariance matrix (a measure of the estimated accuracy of the state estimate).
The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the ''a priori'' state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current ''a priori'' prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the ''a posteriori'' state estimate.

Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple update steps may be performed (typically with different observation matrices \mathbf{H}_k).


Predict
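
The predict-phase equations are the standard ones, stated here for completeness; they follow from the model above and are used in the derivations below.

Predicted (''a priori'') state estimate:
:\hat{\mathbf{x}}_{k\mid k-1} = \mathbf{F}_k\hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{B}_k \mathbf{u}_k

Predicted (''a priori'') estimate covariance:
:\mathbf{P}_{k\mid k-1} = \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k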


Update
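
The update-phase equations, likewise standard and consistent with the derivations below, are:

Innovation (measurement pre-fit residual):
:\tilde{\mathbf{y}}_k = \mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}

Innovation covariance:
:\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k

Optimal Kalman gain:
:\mathbf{K}_k = \mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T}\mathbf{S}_k^{-1}

Updated (''a posteriori'') state estimate:
:\hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\tilde{\mathbf{y}}_k

Updated (''a posteriori'') estimate covariance:
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k\mid k-1}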

The formula for the updated (''a posteriori'') estimate covariance above is valid for the optimal gain K''k'' that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the ''derivations'' section, where the formula valid for any K''k'' is also shown. A more intuitive way to express the updated state estimate (\hat{\mathbf{x}}_{k\mid k}) is
:\hat{\mathbf{x}}_{k\mid k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \mathbf{z}_k
This expression is reminiscent of the linear interpolation x = (1 − t)a + tb for t between 0 and 1. In our case:
* t is the Kalman gain (\mathbf{K}_k), a matrix that takes values from \mathbf{0} (high error in the sensor) to \mathbf{I} (low error);
* a is the value estimated from the model;
* b is the value from the measurement.
This expression also resembles the alpha beta filter update step.
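
The two phases translate directly into code. Below is a minimal NumPy sketch of one predict step and one update step, assuming the model matrices are supplied as arrays; the function names are illustrative rather than taken from any particular library:

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """Predict phase: propagate the state estimate and its covariance."""
    x = F @ x if B is None else F @ x + B @ u   # a priori state estimate
    P = F @ P @ F.T + Q                         # a priori estimate covariance
    return x, P

def update(x, P, z, H, R):
    """Update phase: fold the measurement z into the estimate."""
    y = z - H @ x                               # innovation (pre-fit residual)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # optimal Kalman gain
    x = x + K @ y                               # a posteriori state estimate
    P = (np.eye(len(x)) - K @ H) @ P            # a posteriori covariance (optimal gain only)
    return x, P
```

The covariance update shown is the short form valid only for the optimal gain; the Joseph form discussed in the derivations below is safer numerically.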


Invariants

If the model is accurate, and the values for \hat{\mathbf{x}}_{0\mid 0} and \mathbf{P}_{0\mid 0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:
:\begin{align} \operatorname{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right] &= \operatorname{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right] = 0 \\ \operatorname{E}\left[\tilde{\mathbf{y}}_k\right] &= 0 \end{align}
where \operatorname{E}[\xi] is the expected value of \xi. That is, all estimates have a mean error of zero. Also:
:\begin{align} \mathbf{P}_{k\mid k} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right) \\ \mathbf{P}_{k\mid k-1} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right) \\ \mathbf{S}_k &= \operatorname{cov}\left(\tilde{\mathbf{y}}_k\right) \end{align}
so covariance matrices accurately reflect the covariance of estimates.


Estimation of the noise covariances Q''k'' and R''k''

Practical implementation of a Kalman filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices Q''k'' and R''k''. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the ''autocovariance least-squares (ALS)'' technique, which uses the time-lagged autocovariances of routine operating data to estimate the covariances. GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has also been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to autocovariance least-squares methods.


Optimality and performance

It follows from theory that the Kalman filter is the optimal linear filter in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters. Several methods for noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, so the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing the performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.
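
As an illustrative (not prescribed) diagnostic, the whiteness of recorded innovations can be checked through their sample autocorrelation; for a well-tuned filter, values at nonzero lags should be statistically indistinguishable from zero:

```python
import numpy as np

def innovation_autocorrelation(innovations, max_lag=10):
    """Sample autocorrelation of a scalar innovation sequence.

    For an optimal filter the innovations are white, so the values at
    lags >= 1 should lie within roughly +/- 2/sqrt(N) of zero.
    """
    y = np.asarray(innovations, dtype=float)
    y = y - y.mean()
    n = len(y)
    denom = np.sum(y * y)
    return np.array([np.sum(y[: n - lag] * y[lag:]) / denom
                     for lag in range(max_lag + 1)])
```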


Example application, technical

Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δ''t'' seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter.

Since \mathbf{F}, \mathbf{H}, \mathbf{R}, \mathbf{Q} are constant, their time indices are dropped. The position and velocity of the truck are described by the linear state space
:\mathbf{x}_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}
where \dot{x} is the velocity, that is, the derivative of position with respect to time. We assume that between the (''k'' − 1) and ''k'' timestep, uncontrolled forces cause a constant acceleration of ''a''''k'' that is normally distributed with mean 0 and standard deviation ''σ''''a''. From Newton's laws of motion we conclude that
:\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G} a_k
(there is no \mathbf{B}\mathbf{u} term since there are no known control inputs; instead, ''a''''k'' is the effect of an unknown input and \mathbf{G} applies that effect to the state vector) where
:\begin{align} \mathbf{F} &= \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \\ \mathbf{G} &= \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} \end{align}
so that
:\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{w}_k
where
:\begin{align} \mathbf{w}_k &\sim N(0, \mathbf{Q}) \\ \mathbf{Q} &= \mathbf{G}\mathbf{G}^\textsf{T}\sigma_a^2 = \begin{bmatrix} \frac{1}{4}\Delta t^4 & \frac{1}{2}\Delta t^3 \\ \frac{1}{2}\Delta t^3 & \Delta t^2 \end{bmatrix}\sigma_a^2. \end{align}
The matrix \mathbf{Q} is not full rank (it is of rank one if \Delta t \neq 0). Hence, the distribution N(0, \mathbf{Q}) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is
:\mathbf{w}_k \sim \mathbf{G} \cdot N\left(0, \sigma_a^2\right).

At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise ''v''''k'' is also normally distributed, with mean 0 and standard deviation ''σ''''z'':
:\mathbf{z}_k = \mathbf{H}\mathbf{x}_k + \mathbf{v}_k
where
:\mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}
and
:\mathbf{R} = \mathrm{E}\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}
We know the initial starting state of the truck with perfect precision, so we initialize
:\hat{\mathbf{x}}_{0\mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
and, to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
:\mathbf{P}_{0\mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
:\mathbf{P}_{0\mid 0} = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_{\dot{x}}^2 \end{bmatrix}
The filter will then prefer the information from the first measurements over the information already in the model.
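
To make the example concrete, here is a minimal, self-contained NumPy sketch that simulates this truck model and runs the filter on the resulting noisy measurements. The numeric values of Δt, σ_a, and σ_z are illustrative choices, not part of the original example:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, sigma_a, sigma_z = 1.0, 0.2, 3.0           # illustrative values

F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
G = np.array([[0.5 * dt**2], [dt]])            # maps acceleration onto the state
Q = G @ G.T * sigma_a**2                       # process noise covariance (rank one)
H = np.array([[1.0, 0.0]])                     # only position is measured
R = np.array([[sigma_z**2]])                   # measurement noise covariance
I2 = np.eye(2)

x_true = np.zeros(2)                           # truck starts at rest at position 0
x_est, P = np.zeros(2), np.zeros((2, 2))       # exact initial knowledge

for k in range(50):
    a = rng.normal(0.0, sigma_a)               # random buffeting acceleration
    x_true = F @ x_true + (G * a).ravel()      # true dynamics
    z = H @ x_true + rng.normal(0.0, sigma_z, 1)  # noisy position measurement
    # Predict (no control input)
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_est = x_est + K @ (z - H @ x_est)
    P = (I2 - K @ H) @ P

print("true state:     ", x_true)
print("estimated state:", x_est)
```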


Asymptotic form

For simplicity, assume that the control input \mathbf{u}_k = \mathbf{0}. Then the Kalman filter may be written:
:\hat{\mathbf{x}}_{k\mid k} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{K}_k\left[\mathbf{z}_k - \mathbf{H}_k \mathbf{F}_k \hat{\mathbf{x}}_{k-1\mid k-1}\right]
A similar equation holds if we include a non-zero control input. Gain matrices \mathbf{K}_k evolve independently of the measurements \mathbf{z}_k. From above, the four equations needed for updating the Kalman gain are as follows:
:\begin{align}
\mathbf{P}_{k\mid k-1} &= \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k, \\
\mathbf{S}_k &= \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T}, \\
\mathbf{K}_k &= \mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}, \\
\mathbf{P}_{k\mid k} &= \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}.
\end{align}
Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices \mathbf{K}_k to an asymptotic matrix \mathbf{K}_\infty holds under conditions established in Walrand and Dimakis. Simulations establish the number of steps to convergence. For the moving truck example described above, with \Delta t = 1 and \sigma_a^2 = \sigma_z^2 = \sigma_x^2 = \sigma_{\dot{x}}^2 = 1, simulation shows convergence in 10 iterations.

Using the asymptotic gain, and assuming \mathbf{H}_k and \mathbf{F}_k are independent of ''k'', the Kalman filter becomes a linear time-invariant filter:
:\hat{\mathbf{x}}_{k\mid k} = \mathbf{F} \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{K}_\infty\left[\mathbf{z}_k - \mathbf{H}\mathbf{F} \hat{\mathbf{x}}_{k-1\mid k-1}\right]
The asymptotic gain \mathbf{K}_\infty, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance \mathbf{P}_\infty:
:\mathbf{P}_\infty = \mathbf{F}\left(\mathbf{P}_\infty - \mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{H}\mathbf{P}_\infty\mathbf{H}^\textsf{T} + \mathbf{R}\right)^{-1} \mathbf{H}\mathbf{P}_\infty\right) \mathbf{F}^\textsf{T} + \mathbf{Q}.
The asymptotic gain is then computed as before:
:\mathbf{K}_\infty = \mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{R} + \mathbf{H} \mathbf{P}_\infty \mathbf{H}^\textsf{T}\right)^{-1}.
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
:\hat{\mathbf{x}}_{k+1\mid k} = \mathbf{F}\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{B}\mathbf{u}_k + \overline{\mathbf{K}}_\infty\left[\mathbf{z}_k - \mathbf{H}\hat{\mathbf{x}}_{k\mid k-1}\right],
where
:\overline{\mathbf{K}}_\infty = \mathbf{F}\mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{R} + \mathbf{H} \mathbf{P}_\infty \mathbf{H}^\textsf{T}\right)^{-1}.
This leads to an estimator of the form
:\hat{\mathbf{x}}_{k+1\mid k} = \left(\mathbf{F} - \overline{\mathbf{K}}_\infty\mathbf{H}\right)\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{B}\mathbf{u}_k + \overline{\mathbf{K}}_\infty\mathbf{z}_k.
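
Numerically, the asymptotic quantities can be obtained by simply iterating the four gain-update equations above to a fixed point; a sketch follows. (A production implementation would more likely call a dedicated discrete algebraic Riccati solver, e.g. SciPy's solve_discrete_are; that alternative is assumed rather than shown here.)

```python
import numpy as np

def asymptotic_gain(F, H, Q, R, iters=1000):
    """Iterate the predicted-covariance Riccati recursion to a fixed point
    and return the asymptotic gain K_inf and covariance P_inf."""
    P = Q.copy()                                  # any positive semidefinite start
    I = np.eye(P.shape[0])
    for _ in range(iters):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = F @ ((I - K @ H) @ P) @ F.T + Q       # next predicted covariance
    S = H @ P @ H.T + R
    K_inf = P @ H.T @ np.linalg.inv(S)            # asymptotic filter gain
    return K_inf, P
```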


Derivations

The Kalman filter can be derived as a generalized least squares method operating on previous data.


Deriving the ''a posteriori'' estimate covariance matrix

Starting with our invariant on the error covariance P''k''|''k'' as above
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)
substitute in the definition of \hat{\mathbf{x}}_{k\mid k}
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\mathbf{x}_k - \left(\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\tilde{\mathbf{y}}_k\right)\right]
and substitute \tilde{\mathbf{y}}_k
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left(\mathbf{x}_k - \left[\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}\right)\right]\right)
and \mathbf{z}_k
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left(\mathbf{x}_k - \left[\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{H}_k\mathbf{x}_k + \mathbf{v}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}\right)\right]\right)
and by collecting the error vectors we get
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right) - \mathbf{K}_k \mathbf{v}_k\right]
Since the measurement error v''k'' is uncorrelated with the other terms, this becomes
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right)\right] + \operatorname{cov}\left[\mathbf{K}_k \mathbf{v}_k\right]
by the properties of vector covariance this becomes
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right)\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k\operatorname{cov}\left(\mathbf{v}_k\right)\mathbf{K}_k^\textsf{T}
which, using our invariant on P''k''|''k''−1 and the definition of R''k'', becomes
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1} \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^\textsf{T}
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of K''k''. It turns out that if K''k'' is the optimal Kalman gain, this can be simplified further as shown below.
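
In code, the Joseph form is a direct transcription of the final formula above; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def joseph_update(P_prior, K, H, R):
    """Covariance update in Joseph form: valid for any gain K, and
    numerically safer because the result stays symmetric positive semidefinite."""
    I = np.eye(P_prior.shape[0])
    A = I - K @ H
    return A @ P_prior @ A.T + K @ R @ K.T
```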


Kalman gain derivation

The Kalman filter is a minimum mean-square error (MMSE) estimator. The error in the ''a posteriori'' state estimation is
:\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}
We seek to minimize the expected value of the square of the magnitude of this vector, \operatorname{E}\left[\left\|\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right\|^2\right]. This is equivalent to minimizing the trace of the ''a posteriori'' estimate covariance matrix \mathbf{P}_{k\mid k}. By expanding out the terms in the equation above and collecting, we get:
:\begin{align} \mathbf{P}_{k\mid k} &= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k\right) \mathbf{K}_k^\textsf{T} \\ &= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \mathbf{S}_k\mathbf{K}_k^\textsf{T} \end{align}
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that
:\frac{\partial\, \operatorname{tr}\left(\mathbf{P}_{k\mid k}\right)}{\partial\, \mathbf{K}_k} = -2\left(\mathbf{H}_k \mathbf{P}_{k\mid k-1}\right)^\textsf{T} + 2\mathbf{K}_k \mathbf{S}_k = 0.
Solving this for K''k'' yields the Kalman gain:
:\begin{align} \mathbf{K}_k \mathbf{S}_k &= \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1}\right)^\textsf{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \\ \Rightarrow \mathbf{K}_k &= \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \end{align}
This gain, which is known as the ''optimal Kalman gain'', is the one that yields MMSE estimates when used.
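
The minimizing property of this gain can be checked numerically: evaluating the Joseph-form covariance (which is valid for any gain) at the optimal gain and at perturbed gains, the trace is never smaller at the optimum. A small sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[2.0, 0.3], [0.3, 1.0]])             # a priori covariance
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
S = H @ P @ H.T + R
K_opt = P @ H.T @ np.linalg.inv(S)                 # optimal Kalman gain

def trace_post(K):                                 # Joseph form works for any K
    A = np.eye(2) - K @ H
    return np.trace(A @ P @ A.T + K @ R @ K.T)

for _ in range(5):
    K_pert = K_opt + 0.1 * rng.standard_normal(K_opt.shape)
    assert trace_post(K_pert) >= trace_post(K_opt)  # optimal gain minimizes the trace
```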


Simplification of the ''a posteriori'' error covariance formula

The formula used to calculate the ''a posteriori'' error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by S''k''K''k''T, it follows that
:\mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\textsf{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T}
Referring back to our expanded formula for the ''a posteriori'' error covariance,
:\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\textsf{T}
we find the last two terms cancel out, giving
:\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}.
This formula is computationally cheaper and thus nearly always used in practice, but it is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the ''a posteriori'' error covariance formula as derived above (the Joseph form) must be used.


Sensitivity analysis

The Kalman filtering equations provide an estimate of the state \hat{\mathbf{x}}_{k\mid k} and its error covariance \mathbf{P}_{k\mid k} recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter. In the absence of reliable statistics or the true values of the noise covariance matrices \mathbf{Q}_k and \mathbf{R}_k, the expression
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k\mid k-1}\left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^\textsf{T}
no longer provides the actual error covariance. In other words, \mathbf{P}_{k\mid k} \neq E\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right].

In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices \mathbf{F}_k and \mathbf{H}_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs.

This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by \mathbf{Q}^a_k and \mathbf{R}^a_k respectively, whereas the design values used in the estimator are \mathbf{Q}_k and \mathbf{R}_k respectively. The actual error covariance is denoted by \mathbf{P}_{k\mid k}^a, and \mathbf{P}_{k\mid k} as computed by the Kalman filter is referred to as the Riccati variable. When \mathbf{Q}_k \equiv \mathbf{Q}^a_k and \mathbf{R}_k \equiv \mathbf{R}^a_k, this means that \mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k}^a. While computing the actual error covariance using \mathbf{P}_{k\mid k}^a = E\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right], substituting for \hat{\mathbf{x}}_{k\mid k} and using the facts that E\left[\mathbf{w}_k\mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}_k^a and E\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \mathbf{R}_k^a, results in the following recursive equations for \mathbf{P}_{k\mid k}^a:
:\mathbf{P}_{k\mid k-1}^a = \mathbf{F}_k\mathbf{P}_{k-1\mid k-1}^a \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k^a
and
:\mathbf{P}_{k\mid k}^a = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}_{k\mid k-1}^a \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k \mathbf{R}_k^a \mathbf{K}_k^\textsf{T}
While computing \mathbf{P}_{k\mid k}, by design the filter implicitly assumes that E\left[\mathbf{w}_k \mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}_k and E\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \mathbf{R}_k. The recursive expressions for \mathbf{P}_{k\mid k}^a and \mathbf{P}_{k\mid k} are identical except for the presence of \mathbf{Q}_k^a and \mathbf{R}_k^a in place of the design values \mathbf{Q}_k and \mathbf{R}_k respectively. Research has been done to analyze the robustness of Kalman filter systems.
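
These recursions are straightforward to evaluate alongside the filter itself; the following sketch propagates both the Riccati variable and the actual error covariance under mismatched design and actual noise covariances, with all matrices assumed time-invariant for brevity:

```python
import numpy as np

def error_covariances(F, H, Q, R, Q_a, R_a, P0, steps=100):
    """Propagate the filter's Riccati variable P (design Q, R) and the
    actual error covariance P_a (true Q_a, R_a) driven by the same gains."""
    P, P_a = P0.copy(), P0.copy()
    I = np.eye(P0.shape[0])
    for _ in range(steps):
        P = F @ P @ F.T + Q                            # design prediction
        P_a = F @ P_a @ F.T + Q_a                      # actual prediction
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain uses design values
        A = I - K @ H
        P = A @ P                                      # filter's claimed covariance
        P_a = A @ P_a @ A.T + K @ R_a @ K.T            # actual covariance (Joseph form)
    return P, P_a
```

When Q_a = Q and R_a = R the two outputs coincide, matching the identity noted above.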


Square root form

One problem with the Kalman filter is its numerical stability. If the process noise covariance Q''k'' is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite.

Positive definite matrices have the property that they have a triangular matrix square root P = S·S^T. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·U^T, where U is a unit triangular matrix (with unit diagonal) and D is a diagonal matrix.

Between the two, the U-D factorization uses the same amount of storage and somewhat less computation, and it is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.

The L·D·L^T decomposition of the innovation covariance matrix S''k'' is the basis for another type of numerically efficient and robust square root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·L^T structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state variables \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1} that are associated with auxiliary observations in y''k''. The L·D·L^T square-root filter requires orthogonalization of the observation vector. This may be done with the inverse square root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).


Parallel form

The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is, however, possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä (2021). The filter solution can then be retrieved by the use of a prefix sum algorithm, which can be efficiently implemented on GPUs. This reduces the computational complexity from O(N) in the number of time steps to O(\log N).


Relationship to recursive Bayesian estimation

The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.

In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state:
:p(\mathbf{x}_k \mid \mathbf{x}_0, \dots, \mathbf{x}_{k-1}) = p(\mathbf{x}_k \mid \mathbf{x}_{k-1})
Similarly, the measurement at the ''k''-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state:
:p(\mathbf{z}_k \mid \mathbf{x}_0, \dots, \mathbf{x}_k) = p(\mathbf{z}_k \mid \mathbf{x}_k)
Using these assumptions, the probability distribution over all states of the hidden Markov model can be written simply as:
:p\left(\mathbf{x}_0, \dots, \mathbf{x}_k, \mathbf{z}_1, \dots, \mathbf{z}_k\right) = p\left(\mathbf{x}_0\right)\prod_{i=1}^k p\left(\mathbf{z}_i \mid \mathbf{x}_i\right)p\left(\mathbf{x}_i \mid \mathbf{x}_{i-1}\right)
However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. This results in the ''predict'' and ''update'' phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (''k'' − 1)-th timestep to the ''k''-th and the probability distribution associated with the previous state, over all possible x_{k-1}:
:p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right) = \int p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) p\left(\mathbf{x}_{k-1} \mid \mathbf{Z}_{k-1}\right)\, d\mathbf{x}_{k-1}
The measurement set up to time ''t'' is
:\mathbf{Z}_t = \left\{\mathbf{z}_1, \dots, \mathbf{z}_t\right\}
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state:
:p\left(\mathbf{x}_k \mid \mathbf{Z}_k\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right)}{p\left(\mathbf{z}_k \mid \mathbf{Z}_{k-1}\right)}
The denominator
:p\left(\mathbf{z}_k \mid \mathbf{Z}_{k-1}\right) = \int p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right)\, d\mathbf{x}_k
is a normalization term. The remaining probability density functions are
:\begin{align}
p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) &= \mathcal{N}\left(\mathbf{F}_k\mathbf{x}_{k-1}, \mathbf{Q}_k\right) \\
p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) &= \mathcal{N}\left(\mathbf{H}_k\mathbf{x}_k, \mathbf{R}_k\right) \\
p\left(\mathbf{x}_{k-1} \mid \mathbf{Z}_{k-1}\right) &= \mathcal{N}\left(\hat{\mathbf{x}}_{k-1\mid k-1}, \mathbf{P}_{k-1\mid k-1}\right)
\end{align}
The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements; therefore the PDF for \mathbf{x}_k given the measurements \mathbf{Z}_k is the Kalman filter estimate.


Marginal likelihood

Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for ''generating'' a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is
# Sample a hidden state \mathbf{x}_0 from the Gaussian prior distribution p\left(\mathbf{x}_0\right) = \mathcal{N}\left(\hat{\mathbf{x}}_{0\mid 0}, \mathbf{P}_{0\mid 0}\right).
# Sample an observation \mathbf{z}_0 from the observation model p\left(\mathbf{z}_0 \mid \mathbf{x}_0\right) = \mathcal{N}\left(\mathbf{H}_0 \mathbf{x}_0, \mathbf{R}_0\right).
# For k = 1, 2, 3, \ldots, do
## Sample the next hidden state \mathbf{x}_k from the transition model p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) = \mathcal{N}\left(\mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k, \mathbf{Q}_k\right).
## Sample an observation \mathbf{z}_k from the observation model p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) = \mathcal{N}\left(\mathbf{H}_k \mathbf{x}_k, \mathbf{R}_k\right).
This process has identical structure to the
hidden Markov model A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process — call it X — with unobservable ("''hidden''") states. As part of the definition, HMM requires that there be an ...
, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. In some applications, it is useful to compute the ''probability'' that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using
Bayesian model comparison The Bayes factor is a ratio of two competing statistical models represented by their marginal likelihood, and is used to quantify the support for one model over the other. The models in questions can have a common set of parameters, such as a nul ...
. It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the
chain rule In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions and in terms of the derivatives of and . More precisely, if h=f\circ g is the function such that h(x)=f(g(x)) for every , ...
, the likelihood can be factored as the product of the probability of each observation given previous observations,
:p(\mathbf{z}) = \prod_{k=0}^T p\left(\mathbf{z}_k \mid \mathbf{z}_{k-1}, \ldots, \mathbf{z}_0\right),
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{P}_{k\mid k-1}. Thus the marginal likelihood is given by
:\begin{align} p(\mathbf{z}) &= \prod_{k=0}^T \int p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{z}_{k-1}, \ldots, \mathbf{z}_0\right) d\mathbf{x}_k \\ &= \prod_{k=0}^T \int \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k \mathbf{x}_k, \mathbf{R}_k\right) \mathcal{N}\left(\mathbf{x}_k; \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{P}_{k\mid k-1}\right) d\mathbf{x}_k \\ &= \prod_{k=0}^T \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T}\right) \\ &= \prod_{k=0}^T \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{S}_k\right), \end{align}
i.e., a product of Gaussian densities, each corresponding to the density of one observation z_k under the current filtering distribution \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{S}_k. This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the ''log'' marginal likelihood \ell = \log p(\mathbf{z}) instead. Adopting the convention \ell^{(-1)} = 0, this can be done via the recursive update rule
:\ell^{(k)} = \ell^{(k-1)} - \frac{1}{2} \left(\tilde{\mathbf{y}}_k^\textsf{T} \mathbf{S}_k^{-1} \tilde{\mathbf{y}}_k + \log \left|\mathbf{S}_k\right| + d_y \log 2\pi\right),
where d_y is the dimension of the measurement vector. An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object-tracking scenario where a stream of observations is the input, but it is unknown how many objects are in the scene (or the number of objects is known but is greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) will typically form different track-association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. It is therefore important to compute the likelihood of the observations for the different hypotheses under consideration, so that the most likely one can be found.
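The log-likelihood recursion above translates directly into code. A minimal numpy sketch, assuming the innovation and the innovation covariance come from a standard filter loop (names are illustrative):

import numpy as np

def log_likelihood_update(ell, y, S):
    """Add one observation's contribution to the running log marginal likelihood.

    y : innovation z_k - H_k x_{k|k-1};  S : innovation covariance S_k.
    """
    d_y = y.shape[0]
    _, logdet = np.linalg.slogdet(S)   # log|S_k| computed in an underflow-safe way
    return ell - 0.5 * (y @ np.linalg.solve(S, y) + logdet + d_y * np.log(2 * np.pi))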


Information filter

In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and
information Information is an abstract concept that refers to that which has the power to inform. At the most fundamental level information pertains to the interpretation of that which may be sensed. Any natural process that is not completely random, ...
vector respectively. These are defined as:
:\begin{align} \mathbf{Y}_{k\mid k} &= \mathbf{P}_{k\mid k}^{-1} \\ \hat{\mathbf{y}}_{k\mid k} &= \mathbf{P}_{k\mid k}^{-1} \hat{\mathbf{x}}_{k\mid k} \end{align}
Similarly the predicted covariance and state have equivalent information forms, defined as:
:\begin{align} \mathbf{Y}_{k\mid k-1} &= \mathbf{P}_{k\mid k-1}^{-1} \\ \hat{\mathbf{y}}_{k\mid k-1} &= \mathbf{P}_{k\mid k-1}^{-1} \hat{\mathbf{x}}_{k\mid k-1} \end{align}
as have the measurement covariance and measurement vector, which are defined as:
:\begin{align} \mathbf{I}_k &= \mathbf{H}_k^\textsf{T} \mathbf{R}_k^{-1} \mathbf{H}_k \\ \mathbf{i}_k &= \mathbf{H}_k^\textsf{T} \mathbf{R}_k^{-1} \mathbf{z}_k \end{align}
The information update now becomes a trivial sum.
:\begin{align} \mathbf{Y}_{k\mid k} &= \mathbf{Y}_{k\mid k-1} + \mathbf{I}_k \\ \hat{\mathbf{y}}_{k\mid k} &= \hat{\mathbf{y}}_{k\mid k-1} + \mathbf{i}_k \end{align}
The main advantage of the information filter is that ''N'' measurements can be filtered at each time step simply by summing their information matrices and vectors.
:\begin{align} \mathbf{Y}_{k\mid k} &= \mathbf{Y}_{k\mid k-1} + \sum_{j=1}^N \mathbf{I}_{k,j} \\ \hat{\mathbf{y}}_{k\mid k} &= \hat{\mathbf{y}}_{k\mid k-1} + \sum_{j=1}^N \mathbf{i}_{k,j} \end{align}
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.
:\begin{align} \mathbf{M}_k &= \left[\mathbf{F}_k^{-1}\right]^\textsf{T} \mathbf{Y}_{k-1\mid k-1} \mathbf{F}_k^{-1} \\ \mathbf{C}_k &= \mathbf{M}_k \left[\mathbf{M}_k + \mathbf{Q}_k^{-1}\right]^{-1} \\ \mathbf{L}_k &= \mathbf{I} - \mathbf{C}_k \\ \mathbf{Y}_{k\mid k-1} &= \mathbf{L}_k \mathbf{M}_k \mathbf{L}_k^\textsf{T} + \mathbf{C}_k \mathbf{Q}_k^{-1} \mathbf{C}_k^\textsf{T} \\ \hat{\mathbf{y}}_{k\mid k-1} &= \mathbf{L}_k \left[\mathbf{F}_k^{-1}\right]^\textsf{T} \hat{\mathbf{y}}_{k-1\mid k-1} \end{align}
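A minimal numpy sketch of the multi-measurement information update, assuming per-sensor lists zs, Hs, Rs of measurements and models (the names and list-based interface are illustrative):

import numpy as np

def information_update(Y_pred, y_pred, zs, Hs, Rs):
    """Fuse N measurements in one step by summing information contributions."""
    Y = Y_pred.copy()
    y = y_pred.copy()
    for z, H, R in zip(zs, Hs, Rs):
        R_inv = np.linalg.inv(R)
        Y += H.T @ R_inv @ H          # information matrix contribution I_k
        y += H.T @ R_inv @ z          # information vector contribution i_k
    return Y, y                       # state estimate is recovered as inv(Y) @ y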


Fixed-lag smoother

The optimal fixed-lag smoother provides the optimal estimate of \hat{\mathbf{x}}_{t-N\mid t} for a given fixed lag N using the measurements from \mathbf{z}_1 to \mathbf{z}_t. It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:
:\begin{bmatrix} \hat{\mathbf{x}}_{t\mid t} \\ \hat{\mathbf{x}}_{t-1\mid t} \\ \vdots \\ \hat{\mathbf{x}}_{t-N+1\mid t} \end{bmatrix} = \begin{bmatrix} \mathbf{I} \\ 0 \\ \vdots \\ 0 \end{bmatrix} \hat{\mathbf{x}}_{t\mid t-1} + \begin{bmatrix} 0 & \ldots & 0 \\ \mathbf{I} & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \mathbf{I} \end{bmatrix} \begin{bmatrix} \hat{\mathbf{x}}_{t-1\mid t-1} \\ \hat{\mathbf{x}}_{t-2\mid t-1} \\ \vdots \\ \hat{\mathbf{x}}_{t-N+1\mid t-1} \end{bmatrix} + \begin{bmatrix} \mathbf{K}^{(0)} \\ \mathbf{K}^{(1)} \\ \vdots \\ \mathbf{K}^{(N-1)} \end{bmatrix} \mathbf{y}_{t\mid t-1}
where:
* \hat{\mathbf{x}}_{t\mid t-1} is estimated via a standard Kalman filter;
* \mathbf{y}_{t\mid t-1} = \mathbf{z}_t - \mathbf{H}\hat{\mathbf{x}}_{t\mid t-1} is the innovation produced considering the estimate of the standard Kalman filter;
* the various \hat{\mathbf{x}}_{t-i\mid t} with i = 1, \ldots, N-1 are new variables; i.e., they do not appear in the standard Kalman filter;
* the gains are computed via the following scheme:
*: \mathbf{K}^{(i)} = \mathbf{P}^{(i)} \mathbf{H}^\textsf{T} \left[\mathbf{H} \mathbf{P} \mathbf{H}^\textsf{T} + \mathbf{R}\right]^{-1}
:and
:: \mathbf{P}^{(i)} = \mathbf{P} \left[\left(\mathbf{F} - \mathbf{K} \mathbf{H}\right)^\textsf{T}\right]^i
:where \mathbf{P} and \mathbf{K} are the prediction error covariance and the gain of the standard Kalman filter (i.e., \mathbf{P}_{t\mid t-1}).
If the estimation error covariance is defined so that
:\mathbf{P}_i := E\left[\left(\mathbf{x}_{t-i} - \hat{\mathbf{x}}_{t-i\mid t}\right)^* \left(\mathbf{x}_{t-i} - \hat{\mathbf{x}}_{t-i\mid t}\right) \mid z_1, \ldots, z_t\right],
then we have that the improvement on the estimation of \mathbf{x}_{t-i} is given by:
:\mathbf{P} - \mathbf{P}_i = \sum_{j=0}^i \left[\mathbf{P}^{(j)} \mathbf{H}^\textsf{T} \left(\mathbf{H} \mathbf{P} \mathbf{H}^\textsf{T} + \mathbf{R}\right)^{-1} \mathbf{H} \left(\mathbf{P}^{(j)}\right)^\textsf{T}\right]


Fixed-interval smoothers

The optimal fixed-interval smoother provides the optimal estimate of \hat{\mathbf{x}}_{k\mid n} (k < n) using the measurements from a fixed interval \mathbf{z}_1 to \mathbf{z}_n. This is also called "Kalman smoothing". There are several smoothing algorithms in common use.


Rauch–Tung–Striebel

The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed-interval smoothing. The forward pass is the same as the regular Kalman filter algorithm. These ''filtered'' a-priori and a-posteriori state estimates \hat{\mathbf{x}}_{k\mid k-1}, \hat{\mathbf{x}}_{k\mid k} and covariances \mathbf{P}_{k\mid k-1}, \mathbf{P}_{k\mid k} are saved for use in the backward pass (for retrodiction). In the backward pass, we compute the ''smoothed'' state estimates \hat{\mathbf{x}}_{k\mid n} and covariances \mathbf{P}_{k\mid n}. We start at the last time step and proceed backward in time using the following recursive equations:
:\begin{align} \hat{\mathbf{x}}_{k\mid n} &= \hat{\mathbf{x}}_{k\mid k} + \mathbf{C}_k \left(\hat{\mathbf{x}}_{k+1\mid n} - \hat{\mathbf{x}}_{k+1\mid k}\right) \\ \mathbf{P}_{k\mid n} &= \mathbf{P}_{k\mid k} + \mathbf{C}_k \left(\mathbf{P}_{k+1\mid n} - \mathbf{P}_{k+1\mid k}\right) \mathbf{C}_k^\textsf{T} \end{align}
where
:\mathbf{C}_k = \mathbf{P}_{k\mid k} \mathbf{F}_{k+1}^\textsf{T} \mathbf{P}_{k+1\mid k}^{-1}.
\mathbf{x}_{k\mid k} is the a-posteriori state estimate of timestep k and \mathbf{x}_{k+1\mid k} is the a-priori state estimate of timestep k + 1. The same notation applies to the covariance.
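A minimal numpy sketch of the backward pass, assuming a time-invariant transition matrix F and lists of filtered and predicted moments saved from the forward pass (the names and the list indexing convention, x_pred[k] = x_{k|k-1}, are illustrative):

import numpy as np

def rts_smoother(x_filt, P_filt, x_pred, P_pred, F):
    """Backward pass over saved filtered (k|k) and predicted (k|k-1) moments."""
    n = len(x_filt)
    x_smooth = list(x_filt)
    P_smooth = list(P_filt)
    for k in range(n - 2, -1, -1):
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])     # smoother gain C_k
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k + 1]) @ C.T
    return x_smooth, P_smooth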


Modified Bryson–Frazier smoother

An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed-interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are
:\begin{align} \tilde{\Lambda}_k &= \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \mathbf{H}_k + \hat{\mathbf{C}}_k^\textsf{T} \hat{\Lambda}_k \hat{\mathbf{C}}_k \\ \hat{\Lambda}_{k-1} &= \mathbf{F}_k^\textsf{T} \tilde{\Lambda}_k \mathbf{F}_k \\ \hat{\Lambda}_n &= 0 \\ \tilde{\lambda}_k &= -\mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \mathbf{y}_k + \hat{\mathbf{C}}_k^\textsf{T} \hat{\lambda}_k \\ \hat{\lambda}_{k-1} &= \mathbf{F}_k^\textsf{T} \tilde{\lambda}_k \\ \hat{\lambda}_n &= 0 \end{align}
where \mathbf{S}_k is the residual covariance and \hat{\mathbf{C}}_k = \mathbf{I} - \mathbf{K}_k \mathbf{H}_k. The smoothed state and covariance can then be found by substitution in the equations
:\begin{align} \mathbf{P}_{k\mid n} &= \mathbf{P}_{k\mid k} - \mathbf{P}_{k\mid k} \hat{\Lambda}_k \mathbf{P}_{k\mid k} \\ \mathbf{x}_{k\mid n} &= \mathbf{x}_{k\mid k} - \mathbf{P}_{k\mid k} \hat{\lambda}_k \end{align}
or
:\begin{align} \mathbf{P}_{k\mid n} &= \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \tilde{\Lambda}_k \mathbf{P}_{k\mid k-1} \\ \mathbf{x}_{k\mid n} &= \mathbf{x}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \tilde{\lambda}_k. \end{align}
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.


Minimum-variance smoother

The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter. The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
:\begin{align} \hat{\mathbf{x}}_{k+1\mid k} &= \left(\mathbf{F}_k - \mathbf{K}_k \mathbf{H}_k\right) \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \mathbf{z}_k \\ \alpha_k &= -\mathbf{S}_k^{-1/2} \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{S}_k^{-1/2} \mathbf{z}_k \end{align}
The above system is known as the inverse Wiener–Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass \beta_k may be calculated by operating the forward equations on the time-reversed \alpha_k and time reversing the result. In the case of output estimation, the smoothed estimate is given by
:\hat{\mathbf{y}}_{k\mid N} = \mathbf{z}_k - \mathbf{R}_k \beta_k
Taking the causal part of this minimum-variance smoother yields
:\hat{\mathbf{y}}_{k\mid k} = \mathbf{z}_k - \mathbf{R}_k \mathbf{S}_k^{-1/2} \alpha_k
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. A continuous-time version of the above smoother is described in the literature. Expectation–maximization algorithms may be employed to calculate approximate
maximum likelihood In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed stat ...
estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive-definite term to the Riccati equation. In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).


Frequency-weighted Kalman filters

Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let \mathbf{y} - \hat{\mathbf{y}} denote the output estimation error exhibited by a conventional Kalman filter. Also, let \mathbf{W} denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of \mathbf{W}\left(\mathbf{y} - \hat{\mathbf{y}}\right) arises by simply constructing \mathbf{W}^{-1} \hat{\mathbf{y}}. The design of \mathbf{W} remains an open question. One way of proceeding is to identify a system which generates the estimation error and to set \mathbf{W} equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.


Nonlinear filters

The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be
nonlinear In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many oth ...
. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is more suitable depends on the non-linearity indices of the process and observation models.


Extended Kalman filter

In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions must be differentiable.
:\begin{align} \mathbf{x}_k &= f(\mathbf{x}_{k-1}, \mathbf{u}_k) + \mathbf{w}_k \\ \mathbf{z}_k &= h(\mathbf{x}_k) + \mathbf{v}_k \end{align}
The function ''f'' can be used to compute the predicted state from the previous estimate and similarly the function ''h'' can be used to compute the predicted measurement from the predicted state. However, ''f'' and ''h'' cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the
Jacobian In mathematics, a Jacobian, named for Carl Gustav Jacob Jacobi, may refer to: * Jacobian matrix and determinant * Jacobian elliptic functions * Jacobian variety *Intermediate Jacobian In mathematics, the intermediate Jacobian of a compact Kähle ...
) is computed. At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
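A minimal numpy sketch of one EKF cycle, assuming user-supplied model functions f, h and their Jacobians F_jac, H_jac (these names are illustrative, not a fixed interface):

import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: nonlinear mean propagation, Jacobian-based covariance algebra."""
    # Predict: propagate the mean through f, the covariance through the Jacobian of f.
    F = F_jac(x, u)                       # Jacobian evaluated at the current estimate
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state.
    H = H_jac(x_pred)
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new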


Unscented Kalman filter

When the state transition and observation models—that is, the predict and update functions f and h—are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points is used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or
Taylor series In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor ser ...
expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).


Sigma points

For a random vector \mathbf{x} = (x_1, \dots, x_L), sigma points are any set of vectors
:\{\mathbf{s}_0, \dots, \mathbf{s}_N\} = \bigl\{(s_{0,1}, \dots, s_{0,L}), \dots, (s_{N,1}, \dots, s_{N,L})\bigr\}
attributed with
* first-order weights W_0^a, \dots, W_N^a that fulfill
# \sum_{j=0}^N W_j^a = 1
# for all i = 1, \dots, L: E[x_i] = \sum_{j=0}^N W_j^a s_{j,i}
* second-order weights W_0^c, \dots, W_N^c that fulfill
# \sum_{j=0}^N W_j^c = 1
# for all pairs (i, l) \in \{1, \dots, L\}^2: E[x_i x_l] = \sum_{j=0}^N W_j^c s_{j,i} s_{j,l}.
A simple choice of sigma points and weights for \mathbf{x}_{k-1\mid k-1} in the UKF algorithm is
:\begin{align} \mathbf{s}_0 &= \hat{\mathbf{x}}_{k-1\mid k-1} \\ -1 &< W_0^a = W_0^c < 1 \\ \mathbf{s}_j &= \hat{\mathbf{x}}_{k-1\mid k-1} + \sqrt{\frac{L}{1 - W_0}} \mathbf{A}_j, \quad j = 1, \dots, L \\ \mathbf{s}_{L+j} &= \hat{\mathbf{x}}_{k-1\mid k-1} - \sqrt{\frac{L}{1 - W_0}} \mathbf{A}_j, \quad j = 1, \dots, L \\ W_j^a &= W_j^c = \frac{1 - W_0}{2L}, \quad j = 1, \dots, 2L \end{align}
where \hat{\mathbf{x}}_{k-1\mid k-1} is the mean estimate of \mathbf{x}_{k-1\mid k-1}. The vector \mathbf{A}_j is the ''j''th column of \mathbf{A} where \mathbf{P}_{k-1\mid k-1} = \mathbf{A}\mathbf{A}^\textsf{T}. Typically, \mathbf{A} is obtained via Cholesky decomposition of \mathbf{P}_{k-1\mid k-1}. With some care the filter equations can be expressed in such a way that \mathbf{A} is evaluated directly without intermediate calculations of \mathbf{P}_{k-1\mid k-1}. This is referred to as the ''square-root unscented Kalman filter''. The weight of the mean value, W_0, can be chosen arbitrarily. Another popular parameterization (which generalizes the above) is
:\begin{align} \mathbf{s}_0 &= \hat{\mathbf{x}}_{k-1\mid k-1} \\ W_0^a &= \frac{\alpha^2 \kappa - L}{\alpha^2 \kappa} \\ W_0^c &= W_0^a + 1 - \alpha^2 + \beta \\ \mathbf{s}_j &= \hat{\mathbf{x}}_{k-1\mid k-1} + \alpha \sqrt{\kappa} \mathbf{A}_j, \quad j = 1, \dots, L \\ \mathbf{s}_{L+j} &= \hat{\mathbf{x}}_{k-1\mid k-1} - \alpha \sqrt{\kappa} \mathbf{A}_j, \quad j = 1, \dots, L \\ W_j^a &= W_j^c = \frac{1}{2 \alpha^2 \kappa}, \quad j = 1, \dots, 2L. \end{align}
\alpha and \kappa control the spread of the sigma points. \beta is related to the distribution of x. Appropriate values depend on the problem at hand, but a typical recommendation is \alpha = 10^{-3}, \kappa = 1, and \beta = 2. However, a larger value of \alpha (e.g., \alpha = 1) may be beneficial in order to better capture the spread of the distribution and possible nonlinearities. If the true distribution of x is Gaussian, \beta = 2 is optimal.
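A minimal numpy sketch of the generalized parameterization above; the function name sigma_points and the use of numpy's Cholesky routine for the factor \mathbf{A} are illustrative choices:

import numpy as np

def sigma_points(x, P, alpha=1e-3, kappa=1.0, beta=2.0):
    """Generate 2L+1 sigma points and weights for a Gaussian with moments (x, P)."""
    L = len(x)
    A = np.linalg.cholesky(P)                  # P = A A^T
    s = np.zeros((2 * L + 1, L))
    s[0] = x
    for j in range(L):
        s[j + 1]     = x + alpha * np.sqrt(kappa) * A[:, j]
        s[L + j + 1] = x - alpha * np.sqrt(kappa) * A[:, j]
    Wa = np.full(2 * L + 1, 1.0 / (2 * alpha**2 * kappa))
    Wc = Wa.copy()
    Wa[0] = (alpha**2 * kappa - L) / (alpha**2 * kappa)
    Wc[0] = Wa[0] + 1 - alpha**2 + beta
    return s, Wa, Wc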


Predict

As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance, \hat{\mathbf{x}}_{k-1\mid k-1} and \mathbf{P}_{k-1\mid k-1}, one obtains N = 2L + 1 sigma points as described in the section above. The sigma points are propagated through the transition function ''f''.
:\mathbf{x}_j = f\left(\mathbf{s}_j\right), \quad j = 0, \dots, 2L.
The propagated sigma points are weighed to produce the predicted mean and covariance.
:\begin{align} \hat{\mathbf{x}}_{k\mid k-1} &= \sum_{j=0}^{2L} W_j^a \mathbf{x}_j \\ \mathbf{P}_{k\mid k-1} &= \sum_{j=0}^{2L} W_j^c \left(\mathbf{x}_j - \hat{\mathbf{x}}_{k\mid k-1}\right)\left(\mathbf{x}_j - \hat{\mathbf{x}}_{k\mid k-1}\right)^\textsf{T} + \mathbf{Q}_k \end{align}
where W_j^a are the first-order weights of the original sigma points, and W_j^c are the second-order weights. The matrix \mathbf{Q}_k is the covariance of the transition noise, \mathbf{w}_k.
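A minimal numpy sketch of this prediction step, reusing the sigma_points sketch above (names are illustrative):

import numpy as np

def ukf_predict(x, P, f, Q, **sp_kwargs):
    """UKF predict: push sigma points through f, then re-form the moments."""
    s, Wa, Wc = sigma_points(x, P, **sp_kwargs)    # sketch from the previous section
    X = np.array([f(si) for si in s])              # propagated sigma points
    x_pred = Wa @ X                                # weighted predicted mean
    d = X - x_pred
    P_pred = (Wc[:, None] * d).T @ d + Q           # weighted predicted covariance
    return x_pred, P_pred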


Update

Given prediction estimates \hat{\mathbf{x}}_{k\mid k-1} and \mathbf{P}_{k\mid k-1}, a new set of N = 2L + 1 sigma points \mathbf{s}_0, \dots, \mathbf{s}_{2L} with corresponding first-order weights W_0^a, \dots, W_{2L}^a and second-order weights W_0^c, \dots, W_{2L}^c is calculated. These sigma points are transformed through the measurement function h.
:\mathbf{z}_j = h(\mathbf{s}_j), \quad j = 0, 1, \dots, 2L.
Then the empirical mean and covariance of the transformed points are calculated.
:\begin{align} \hat{\mathbf{z}} &= \sum_{j=0}^{2L} W_j^a \mathbf{z}_j \\ \hat{\mathbf{S}}_k &= \sum_{j=0}^{2L} W_j^c \left(\mathbf{z}_j - \hat{\mathbf{z}}\right)\left(\mathbf{z}_j - \hat{\mathbf{z}}\right)^\textsf{T} + \mathbf{R}_k \end{align}
where \mathbf{R}_k is the covariance matrix of the observation noise, \mathbf{v}_k. Additionally, the cross covariance matrix is also needed
:\mathbf{C}_{xz} = \sum_{j=0}^{2L} W_j^c \left(\mathbf{s}_j - \hat{\mathbf{x}}_{k\mid k-1}\right)\left(\mathbf{z}_j - \hat{\mathbf{z}}\right)^\textsf{T}.
The Kalman gain is
:\mathbf{K}_k = \mathbf{C}_{xz} \hat{\mathbf{S}}_k^{-1}.
The updated mean and covariance estimates are
:\begin{align} \hat{\mathbf{x}}_{k\mid k} &= \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{z}_k - \hat{\mathbf{z}}\right) \\ \mathbf{P}_{k\mid k} &= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \hat{\mathbf{S}}_k \mathbf{K}_k^\textsf{T}. \end{align}
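A minimal numpy sketch of this update step, again reusing the sigma_points sketch above (names are illustrative):

import numpy as np

def ukf_update(x_pred, P_pred, z, h, R, **sp_kwargs):
    """UKF update: transform fresh sigma points through h, then apply the gain."""
    s, Wa, Wc = sigma_points(x_pred, P_pred, **sp_kwargs)
    Z = np.array([h(si) for si in s])              # measurement sigma points
    z_hat = Wa @ Z                                 # predicted measurement
    dz = Z - z_hat
    dx = s - x_pred
    S = (Wc[:, None] * dz).T @ dz + R              # innovation covariance S_k
    C = (Wc[:, None] * dx).T @ dz                  # cross covariance C_xz
    K = C @ np.linalg.inv(S)                       # Kalman gain
    return x_pred + K @ (z - z_hat), P_pred - K @ S @ K.T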


Discriminative Kalman filter

When the observation model p(\mathbf{z}_k \mid \mathbf{x}_k) is highly non-linear and/or non-Gaussian, it may prove advantageous to apply
Bayes' rule In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule), named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For exa ...
and estimate
:p(\mathbf{z}_k \mid \mathbf{x}_k) \approx \frac{p(\mathbf{x}_k \mid \mathbf{z}_k)}{p(\mathbf{x}_k)}
where p(\mathbf{x}_k \mid \mathbf{z}_k) \approx \mathcal{N}(g(\mathbf{z}_k), Q(\mathbf{z}_k)) for nonlinear functions g, Q. This replaces the generative specification of the standard Kalman filter with a
discriminative model Discriminative models, also referred to as conditional models, are a class of logistical models used for classification or regression. They distinguish decision boundaries through observed data, such as pass/fail, win/lose, alive/dead or healthy/sic ...
for the latent states given observations. Under a stationary state model
:\begin{align} p(\mathbf{x}_1) &= \mathcal{N}(\mathbf{0}, \mathbf{T}), \\ p(\mathbf{x}_k \mid \mathbf{x}_{k-1}) &= \mathcal{N}(\mathbf{A}\mathbf{x}_{k-1}, \mathbf{\Gamma}), \end{align}
where \mathbf{T} = \mathbf{A}\mathbf{T}\mathbf{A}^\intercal + \mathbf{\Gamma}, if
:p(\mathbf{x}_{k-1} \mid \mathbf{Z}_{k-1}) \approx \mathcal{N}(\hat{\boldsymbol{\mu}}_{k-1}, \hat{\boldsymbol{\Sigma}}_{k-1}),
then given a new observation \mathbf{z}_k, it follows that
:p(\mathbf{x}_k \mid \mathbf{Z}_k) \approx \mathcal{N}(\hat{\boldsymbol{\mu}}_k, \hat{\boldsymbol{\Sigma}}_k)
where
:\begin{align} \mathbf{M}_k &= \mathbf{A}\hat{\boldsymbol{\Sigma}}_{k-1}\mathbf{A}^\intercal + \mathbf{\Gamma}, \\ \hat{\boldsymbol{\Sigma}}_k &= \left(\mathbf{M}_k^{-1} + Q(\mathbf{z}_k)^{-1} - \mathbf{T}^{-1}\right)^{-1}, \\ \hat{\boldsymbol{\mu}}_k &= \hat{\boldsymbol{\Sigma}}_k \left(\mathbf{M}_k^{-1}\mathbf{A}\hat{\boldsymbol{\mu}}_{k-1} + Q(\mathbf{z}_k)^{-1} g(\mathbf{z}_k)\right). \end{align}
Note that this approximation requires Q(\mathbf{z}_k)^{-1} - \mathbf{T}^{-1} to be positive-definite; in the case that it is not,
:\hat{\boldsymbol{\Sigma}}_k = \left(\mathbf{M}_k^{-1} + Q(\mathbf{z}_k)^{-1}\right)^{-1}
is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states and can be used to build filters that are particularly robust to nonstationarities in the observation model.
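A minimal numpy sketch of one discriminative update, assuming g and Q_fn are supplied by a previously trained regressor (the names and interface are illustrative):

import numpy as np

def dkf_step(mu, Sigma, z, g, Q_fn, A, Gamma, T):
    """One discriminative Kalman filter update for the stationary state model."""
    M = A @ Sigma @ A.T + Gamma                    # predicted covariance M_k
    M_inv = np.linalg.inv(M)
    Q_inv = np.linalg.inv(Q_fn(z))                 # Q(z_k)^{-1}
    T_inv = np.linalg.inv(T)
    # Apply the corrected posterior covariance only when Q^{-1} - T^{-1} is p.d.
    if np.all(np.linalg.eigvalsh(Q_inv - T_inv) > 0):
        Sigma_new = np.linalg.inv(M_inv + Q_inv - T_inv)
    else:
        Sigma_new = np.linalg.inv(M_inv + Q_inv)
    mu_new = Sigma_new @ (M_inv @ (A @ mu) + Q_inv @ g(z))
    return mu_new, Sigma_new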


Adaptive Kalman filter

Adaptive Kalman filters allow the filter to adapt to process dynamics that are not modeled in the process model \mathbf{F}(t), as happens, for example, when tracking a maneuvering target with a constant-velocity (reduced-order) Kalman filter.


Kalman–Bucy filter

Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous-time version of Kalman filtering. It is based on the state space model
:\begin{align} \frac{d}{dt}\mathbf{x}(t) &= \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t) \\ \mathbf{z}(t) &= \mathbf{H}(t)\mathbf{x}(t) + \mathbf{v}(t) \end{align}
where \mathbf{Q}(t) and \mathbf{R}(t) represent the intensities (or, more accurately: the power spectral density, PSD, matrices) of the two white noise terms \mathbf{w}(t) and \mathbf{v}(t), respectively. The filter consists of two differential equations, one for the state estimate and one for the covariance:
:\begin{align} \frac{d}{dt}\hat{\mathbf{x}}(t) &= \mathbf{F}(t)\hat{\mathbf{x}}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{K}(t)\left(\mathbf{z}(t) - \mathbf{H}(t)\hat{\mathbf{x}}(t)\right) \\ \frac{d}{dt}\mathbf{P}(t) &= \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}^\textsf{T}(t) + \mathbf{Q}(t) - \mathbf{K}(t)\mathbf{R}(t)\mathbf{K}^\textsf{T}(t) \end{align}
where the Kalman gain is given by
:\mathbf{K}(t) = \mathbf{P}(t)\mathbf{H}^\textsf{T}(t)\mathbf{R}^{-1}(t)
Note that in this expression for \mathbf{K}(t) the covariance of the observation noise \mathbf{R}(t) represents at the same time the covariance of the prediction error (or ''innovation'') \tilde{\mathbf{y}}(t) = \mathbf{z}(t) - \mathbf{H}(t)\hat{\mathbf{x}}(t); these covariances are equal only in the case of continuous time. The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time. The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
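A minimal numpy sketch that advances the coupled estimate and Riccati equations by explicit Euler integration; the fixed-step integrator is an illustrative simplification, and a practical implementation would use a more careful ODE solver:

import numpy as np

def kalman_bucy_step(x, P, z, u, F, B, H, Q, R, dt):
    """Advance the coupled estimate/Riccati ODEs by one Euler step of size dt."""
    K = P @ H.T @ np.linalg.inv(R)                 # gain K(t)
    dx = F @ x + B @ u + K @ (z - H @ x)           # state estimate ODE
    dP = F @ P + P @ F.T + Q - K @ R @ K.T         # matrix Riccati ODE
    return x + dt * dx, P + dt * dP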


Hybrid Kalman filter

Most physical systems are represented as continuous-time models while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by
:\begin{align} \dot{\mathbf{x}}(t) &= \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t), &\mathbf{w}(t) &\sim N\left(\mathbf{0}, \mathbf{Q}(t)\right) \\ \mathbf{z}_k &= \mathbf{H}_k\mathbf{x}_k + \mathbf{v}_k, &\mathbf{v}_k &\sim N(\mathbf{0}, \mathbf{R}_k) \end{align}
where
:\mathbf{x}_k = \mathbf{x}(t_k).


Initialize

:\hat{\mathbf{x}}_{0\mid 0} = E\left[\mathbf{x}(t_0)\right], \quad \mathbf{P}_{0\mid 0} = \operatorname{Var}\left[\mathbf{x}\left(t_0\right)\right]


Predict

:\begin{align} \dot{\hat{\mathbf{x}}}(t) &= \mathbf{F}(t)\hat{\mathbf{x}}(t) + \mathbf{B}(t)\mathbf{u}(t), \text{ with } \hat{\mathbf{x}}\left(t_{k-1}\right) = \hat{\mathbf{x}}_{k-1\mid k-1} \\ \Rightarrow \hat{\mathbf{x}}_{k\mid k-1} &= \hat{\mathbf{x}}\left(t_k\right) \\ \dot{\mathbf{P}}(t) &= \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}(t)^\textsf{T} + \mathbf{Q}(t), \text{ with } \mathbf{P}\left(t_{k-1}\right) = \mathbf{P}_{k-1\mid k-1} \\ \Rightarrow \mathbf{P}_{k\mid k-1} &= \mathbf{P}\left(t_k\right) \end{align}
The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., \mathbf{K}(t) = 0. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. For the case of
linear time invariant In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defi ...
systems, the continuous time dynamics can be exactly discretized into a discrete time system using
matrix exponential In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential give ...
s.
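One standard construction for this exact discretization is Van Loan's method (named here for illustration; the text above does not specify a method). A sketch using scipy's matrix exponential:

import numpy as np
from scipy.linalg import expm

def discretize_lti(F, Q, dt):
    """Exact discretization of continuous LTI dynamics via Van Loan's method."""
    n = F.shape[0]
    # Build the block matrix [[-F, Q], [0, F^T]] * dt and exponentiate it.
    M = np.block([[-F, Q], [np.zeros((n, n)), F.T]]) * dt
    E = expm(M)
    F_d = E[n:, n:].T          # discrete transition matrix exp(F dt)
    Q_d = F_d @ E[:n, n:]      # discrete process noise covariance
    return F_d, Q_d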


Update

:\begin{align} \mathbf{K}_k &= \mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T} \left(\mathbf{H}_k\mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T} + \mathbf{R}_k\right)^{-1} \\ \hat{\mathbf{x}}_{k\mid k} &= \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}\right) \\ \mathbf{P}_{k\mid k} &= \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k\mid k-1} \end{align}
The update equations are identical to those of the discrete-time Kalman filter.


Variants for the recovery of sparse signals

The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.


Relation to Gaussian processes

Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for
Gaussian process regression In statistics, originally in geostatistics, kriging or Kriging, also known as Gaussian process regression, is a method of interpolation based on Gaussian process governed by prior covariances. Under suitable assumptions of the prior, kriging giv ...
.


Applications

* Attitude and heading reference systems * Autopilot *
Electric battery An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its neg ...
state of charge (SoC) estimation * Brain–computer interfaces * Chaotic signals * Tracking and vertex fitting of
charged particles In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, ...
in particle detectors * Tracking of objects in
computer vision Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human ...
* Dynamic positioning in shipping *
Economics Economics () is the social science that studies the production, distribution, and consumption of goods and services. Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics anal ...
, in particular
macroeconomics Macroeconomics (from the Greek prefix ''makro-'' meaning "large" + ''economics'') is a branch of economics dealing with performance, structure, behavior, and decision-making of an economy as a whole. For example, using interest rates, taxes, and ...
, time series analysis, and
econometrics Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships. M. Hashem Pesaran (1987). "Econometrics," '' The New Palgrave: A Dictionary of Economics'', v. 2, p. 8 p. ...
*
Inertial guidance system An inertial navigation system (INS) is a navigation device that uses motion sensors ( accelerometers), rotation sensors ( gyroscopes) and a computer to continuously calculate by dead reckoning the position, the orientation, and the velocity ...
*
Nuclear medicine Nuclear medicine or nucleology is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease. Nuclear imaging, in a sense, is " radiology done inside out" because it records radiation emi ...
– single photon emission computed tomography image restoration * Orbit determination * Power system state estimation * Radar tracker *
Satellite navigation system A satellite or artificial satellite is an object intentionally placed into orbit in outer space. Except for passive satellites, most satellites have an electricity generation system for equipment on board, such as solar panels or radioiso ...
s *
Seismology Seismology (; from Ancient Greek σεισμός (''seismós'') meaning "earthquake" and -λογία (''-logía'') meaning "study of") is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other ...
* Sensorless control of AC motor
variable-frequency drive A variable-frequency drive (VFD) is a type of motor drive used in electro-mechanical drive systems to control AC motor speed and torque by varying motor input frequency and, depending on topology, to control associated voltage or current va ...
s * Simultaneous localization and mapping * Speech enhancement *
Visual odometry In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Expl ...
*
Weather forecasting Weather forecasting is the application of science and technology to predict the conditions of the atmosphere for a given location and time. People have attempted to predict the weather informally for millennia and formally since the 19th cen ...
* Navigation system *
3D modeling In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based representation of any surface of an object (inanimate or living) in three dimensions via specialized software by manipulating edges, vertices, a ...
* Structural health monitoring * Human sensorimotor processing


See also

*
Alpha beta filter An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filterEli Brookner: Tracking and Kalman Filtering Made Easy. Wiley-Interscience, 1st edition, 4 1998.) is a simplified form of observer for estimation, data smoothing and contr ...
* Inverse-variance weighting *
Covariance intersection Covariance intersection is an algorithm for combining two or more estimates of state variables in a Kalman filter when the correlation between them is unknown. Specification Items of information a and b are known and are to be fused into informat ...
*
Data assimilation Data assimilation is a mathematical discipline that seeks to optimally combine theory (usually in the form of a numerical model) with observations. There may be a number of different goals sought – for example, to determine the optimal state es ...
* Ensemble Kalman filter * Extended Kalman filter * Fast Kalman filter * Filtering problem (stochastic processes) * Generalized filtering * Invariant extended Kalman filter * Kernel adaptive filter * Masreliez's theorem * Moving horizon estimation *
Particle filter Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the inte ...
estimator *
PID controller A proportional–integral–derivative controller (PID controller or three-term controller) is a control loop mechanism employing feedback that is widely used in industrial control systems and a variety of other applications requiring continuou ...
* Predictor–corrector method * Recursive least squares filter *
Schmidt–Kalman filter The Schmidt–Kalman Filter is a modification of the Kalman filter for reducing the dimensionality of the state estimate, while still considering the effects of the additional state in the calculation of the covariance matrix and the Kalman gains. ...
*
Separation principle In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved b ...
* Sliding mode control * State-transition matrix * Stochastic differential equations * Switching Kalman filter


References




External links


A New Approach to Linear Filtering and Prediction Problems
by R. E. Kalman, 1960
Kalman and Bayesian Filters in Python
Open source Kalman filtering textbook.
How a Kalman filter works, in pictures
Illuminates the Kalman filter with pictures and colors
''Kalman–Bucy Filter''
a derivation of the Kalman–Bucy Filter
Kalman filter in Javascript
Open source Kalman filter library for node.js and the web browser.
An Introduction to the Kalman Filter
SIGGRAPH 2001 Course, Greg Welch and Gary Bishop
Kalman Filter
webpage, with many links
Kalman Filter Explained Simply
Step-by-Step Tutorial of the Kalman Filter with Equations
Gerald J. Bierman's Estimation Subroutine Library
Corresponds to the code in the research monograph "Factorization Methods for Discrete Sequential Estimation" originally published by Academic Press in 1977. Republished by Dover.
Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library
UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter.
Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping
Vehicle moving in 1D, 2D and 3D

Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter
: Corresponds to the paper "estimating and testing exponential-affine term structure models by kalman filter" published by Review of Quantitative Finance and Accounting in 1999.
Online demo of the Kalman Filter
Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments.

A Tutorial on Filtering and Estimation
Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence
by Yu-Chi Ho
*Simo Särkkä (2013). ''Bayesian Filtering and Smoothing''. Cambridge University Press. Full text available on the author's webpage: https://users.aalto.fi/~ssarkka/.