In control engineering, a state-space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations or difference equations. State variables are variables whose values evolve over time in a way that depends on the values they have at any given time and on the externally imposed values of input variables. Output variables’ values depend on the values of the state variables. The "state space" is the Euclidean space in which the variables on the axes are the state variables. The state of the system can be represented as a ''state vector'' within that space. To abstract from the number of inputs, outputs and states, these variables are expressed as vectors.

If the dynamical system is linear, time-invariant, and finite-dimensional, then the differential and algebraic equations may be written in matrix form. The state-space method is characterized by a significant algebraization of general system theory, which makes it possible to use Kronecker vector-matrix structures; these structures can be applied efficiently to the study of systems with or without modulation. The state-space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q \times p Laplace transforms to encode all the information about a system. Unlike the frequency-domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions.

The state-space model can be applied in subjects such as economics, statistics, computer science, electrical engineering, and neuroscience. In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index, identify turning points of the business cycle, and estimate GDP using latent and unobserved time series. Many applications rely on the Kalman filter to produce estimates of the current unknown state variables using their previous observations.


State variables

The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, n, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable, when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy-storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables, or the system cannot be solved.


Linear systems

The most general state-space representation of a linear system with p inputs, q outputs and n state variables is written in the following form:
:\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t)
:\mathbf{y}(t) = \mathbf{C}(t) \mathbf{x}(t) + \mathbf{D}(t) \mathbf{u}(t)
where:
:\mathbf{x}(\cdot) is called the "state vector",  \mathbf{x}(t) \in \mathbb{R}^n;
:\mathbf{y}(\cdot) is called the "output vector",  \mathbf{y}(t) \in \mathbb{R}^q;
:\mathbf{u}(\cdot) is called the "input (or control) vector",  \mathbf{u}(t) \in \mathbb{R}^p;
:\mathbf{A}(\cdot) is the "state (or system) matrix",  \dim[\mathbf{A}(\cdot)] = n \times n,
:\mathbf{B}(\cdot) is the "input matrix",  \dim[\mathbf{B}(\cdot)] = n \times p,
:\mathbf{C}(\cdot) is the "output matrix",  \dim[\mathbf{C}(\cdot)] = q \times n,
:\mathbf{D}(\cdot) is the "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, \mathbf{D}(\cdot) is the zero matrix),  \dim[\mathbf{D}(\cdot)] = q \times p,
:\dot{\mathbf{x}}(t) := \frac{d}{dt} \mathbf{x}(t).
In this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common LTI case, the matrices are time-invariant. The time variable t can be continuous (e.g. t \in \mathbb{R}) or discrete (e.g. t \in \mathbb{Z}). In the latter case, the time variable k is usually used instead of t. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model can thus be continuous-time or discrete-time, and time-invariant or time-variant.
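As a concrete illustration of the continuous-time LTI form above, the following Python sketch builds a small state-space model with SciPy; the numerical values of A, B, C and D are hypothetical example values, not taken from the text.

```python
import numpy as np
from scipy import signal

# Hypothetical 2-state, single-input, single-output continuous-time LTI system:
#   x'(t) = A x(t) + B u(t)
#   y(t)  = C x(t) + D u(t)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # state (system) matrix, n x n
B = np.array([[0.0],
              [1.0]])          # input matrix, n x p
C = np.array([[1.0, 0.0]])     # output matrix, q x n
D = np.array([[0.0]])          # feedthrough matrix, q x p

sys = signal.StateSpace(A, B, C, D)

# Step response of the model, i.e. y(t) for a unit step input u(t) = 1.
t, y = signal.step(sys)
print(y[-1])   # value the output has settled toward at the end of the horizon
```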


Example: continuous-time LTI case

Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix \mathbf{A}. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:
: \textbf{G}(s) = k \frac{(s - z_1)(s - z_2)(s - z_3)}{(s - p_1)(s - p_2)(s - p_3)(s - p_4)}.
The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of s\mathbf{I} - \mathbf{A},
:\lambda(s) = |s\mathbf{I} - \mathbf{A}|.
The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability. The zeros found in the numerator of \textbf{G}(s) can similarly be used to determine whether the system is minimum phase. The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).


Controllability

The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if
:\operatorname{rank}\begin{bmatrix}\mathbf{B} & \mathbf{A}\mathbf{B} & \mathbf{A}^{2}\mathbf{B} & \cdots & \mathbf{A}^{n-1}\mathbf{B}\end{bmatrix} = n,
where rank is the number of linearly independent rows in a matrix, and where ''n'' is the number of state variables.
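The rank test above is easy to evaluate numerically; the sketch below (hypothetical matrices, numpy only) builds the controllability matrix [B, AB, ..., A^(n-1)B] and checks its rank.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical example matrices.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

R = controllability_matrix(A, B)
print(np.linalg.matrix_rank(R) == A.shape[0])   # True -> controllable
```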


Observability

Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system). A continuous time-invariant linear state-space model is observable if and only if
:\operatorname{rank}\begin{bmatrix}\mathbf{C} \\ \mathbf{C}\mathbf{A} \\ \vdots \\ \mathbf{C}\mathbf{A}^{n-1}\end{bmatrix} = n.
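The dual test can be coded in the same way; this sketch (again with hypothetical matrices) stacks C, CA, ..., CA^(n-1) and checks the rank.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical example matrices.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O) == A.shape[0])   # True -> observable
```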


Transfer function

The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way: First, taking the Laplace transform of
:\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t)
yields
:s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A} \mathbf{X}(s) + \mathbf{B} \mathbf{U}(s).
Next, we simplify for \mathbf{X}(s), giving
:(s\mathbf{I} - \mathbf{A})\mathbf{X}(s) = \mathbf{x}(0) + \mathbf{B}\mathbf{U}(s)
and thus
:\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0) + (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s).
Substituting for \mathbf{X}(s) in the output equation
:\mathbf{Y}(s) = \mathbf{C}\mathbf{X}(s) + \mathbf{D}\mathbf{U}(s)
gives
:\mathbf{Y}(s) = \mathbf{C}\left((s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0) + (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s)\right) + \mathbf{D}\mathbf{U}(s).
Assuming zero initial conditions \mathbf{x}(0) = \mathbf{0} and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input, G(s) = Y(s)/U(s). For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from
:\mathbf{Y}(s) = \mathbf{G}(s) \mathbf{U}(s)
using the method of equating the coefficients, which yields
:\mathbf{G}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}.
Consequently, \mathbf{G}(s) is a matrix with the dimension q \times p which contains transfer functions for each input–output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
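For a numerical state-space model, SciPy's ss2tf evaluates G(s) = C(sI - A)^(-1)B + D as polynomial coefficients; the matrices below are the same hypothetical example used earlier.

```python
import numpy as np
from scipy import signal

# Hypothetical example matrices (2-state SISO system).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# G(s) = C (sI - A)^(-1) B + D, returned as numerator/denominator
# polynomial coefficients in descending powers of s.
num, den = signal.ss2tf(A, B, C, D)
print(num)   # [[0. 0. 1.]]  -> numerator 1
print(den)   # [1. 3. 2.]    -> denominator s^2 + 3 s + 2
```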


Canonical realizations

Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system): Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
: \textbf{G}(s) = \frac{n_1 s^3 + n_2 s^2 + n_3 s + n_4}{s^4 + d_1 s^3 + d_2 s^2 + d_3 s + d_4}.
The coefficients can now be inserted directly into the state-space model by the following approach:
:\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -d_4 & -d_3 & -d_2 & -d_1 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\mathbf{u}(t)
: \mathbf{y}(t) = \begin{bmatrix} n_4 & n_3 & n_2 & n_1 \end{bmatrix} \mathbf{x}(t).
This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state). The transfer function coefficients can also be used to construct another type of canonical form
:\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 0 & 0 & -d_4 \\ 1 & 0 & 0 & -d_3 \\ 0 & 1 & 0 & -d_2 \\ 0 & 0 & 1 & -d_1 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} n_4 \\ n_3 \\ n_2 \\ n_1 \end{bmatrix}\mathbf{u}(t)
: \mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}\mathbf{x}(t).
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
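The controllable canonical form above is mechanical enough to script; the helper below is a sketch of that construction for a strictly proper SISO transfer function (the example coefficients are hypothetical).

```python
import numpy as np

def controllable_canonical_form(num, den):
    """Build the controllable canonical form of a strictly proper SISO
    transfer function.  `den` holds [1, d1, ..., dn] and `num` the numerator
    coefficients, both in descending powers of s."""
    den = np.asarray(den, dtype=float)
    den = den / den[0]                      # normalise the leading coefficient
    n = len(den) - 1                        # system order
    num = np.concatenate([np.zeros(n - len(num)), np.asarray(num, float)])

    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # chain of integrators (superdiagonal)
    A[-1, :] = -den[:0:-1]                  # last row: [-d_n ... -d_1]
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = num[::-1].reshape(1, n)             # [n_n ... n_1]
    return A, B, C

# Hypothetical example: G(s) = (s + 3) / (s^2 + 3 s + 2)
A, B, C = controllable_canonical_form([1.0, 3.0], [1.0, 3.0, 2.0])
print(A)     # [[ 0.  1.]  [-2. -3.]]
print(B.T)   # [[0. 1.]]
print(C)     # [[3. 1.]]
```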


Proper transfer functions

Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant.
: \textbf{G}(s) = \textbf{G}_\mathrm{SP}(s) + \textbf{G}(\infty).
The strictly proper transfer function can then be transformed into a canonical state-space realization using the techniques shown above. The state-space realization of the constant is trivially \mathbf{y}(t) = \textbf{G}(\infty)\mathbf{u}(t). Together we then get a state-space realization with matrices ''A'', ''B'' and ''C'' determined by the strictly proper part, and matrix ''D'' determined by the constant. Here is an example to clear things up a bit:
: \textbf{G}(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1} = \frac{s + 2}{s^2 + 2s + 1} + 1
which yields the following controllable realization
:\dot{\mathbf{x}}(t) = \begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\mathbf{u}(t)
: \mathbf{y}(t) = \begin{bmatrix} 1 & 2 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1 \end{bmatrix}\mathbf{u}(t)
Notice how the output also depends directly on the input. This is due to the \textbf{G}(\infty) constant in the transfer function.
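The split into a constant and a strictly proper part is just polynomial division of the numerator by the denominator; a short sketch using numpy reproduces the example above.

```python
import numpy as np

# Split a proper transfer function into a constant feedthrough term and a
# strictly proper remainder, G(s) = G(inf) + G_sp(s), by polynomial division.
# Example from the text: G(s) = (s^2 + 3 s + 3) / (s^2 + 2 s + 1).
num = np.array([1.0, 3.0, 3.0])   # s^2 + 3 s + 3
den = np.array([1.0, 2.0, 1.0])   # s^2 + 2 s + 1

quotient, remainder = np.polydiv(num, den)
print(quotient)    # [1.]     -> G(inf) = 1, i.e. the D matrix
print(remainder)   # [1. 2.]  -> strictly proper part (s + 2)/(s^2 + 2 s + 1)
```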


Feedback

A common method for feedback is to multiply the output by a matrix ''K'' and set this as the input to the system: \mathbf{u}(t) = K \mathbf{y}(t). Since the values of ''K'' are unrestricted, the values can easily be negated for negative feedback. The presence of a negative sign (the common notation) is merely notational and its absence has no impact on the end results.
: \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)
: \mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)
becomes
: \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B K \mathbf{y}(t)
: \mathbf{y}(t) = C \mathbf{x}(t) + D K \mathbf{y}(t)
Solving the output equation for \mathbf{y}(t) and substituting in the state equation results in
: \dot{\mathbf{x}}(t) = \left(A + B K \left(I - D K\right)^{-1} C \right) \mathbf{x}(t)
: \mathbf{y}(t) = \left(I - D K\right)^{-1} C \mathbf{x}(t)
The advantage of this is that the eigenvalues of ''A'' can be controlled by setting ''K'' appropriately through eigendecomposition of \left(A + B K \left(I - D K\right)^{-1} C \right). This assumes that the closed-loop system is controllable or that the unstable eigenvalues of ''A'' can be made stable through appropriate choice of ''K''.
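A quick numerical check of the closed-loop expression is straightforward; the sketch below forms A + BK(I - DK)^(-1)C for hypothetical matrices and an arbitrarily chosen gain K, and prints its eigenvalues.

```python
import numpy as np

# Hypothetical open-loop matrices and an arbitrary output-feedback gain K,
# with u(t) = K y(t) as in the text.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[-4.0]])

I = np.eye(D.shape[0])                               # identity, output dimension
A_closed = A + B @ K @ np.linalg.inv(I - D @ K) @ C  # closed-loop state matrix
print(np.linalg.eigvals(A_closed))                   # closed-loop eigenvalues
```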


Example

For a strictly proper system ''D'' equals zero. Another fairly common situation is when all states are outputs, i.e. ''y'' = ''x'', which yields ''C'' = ''I'', the identity matrix. This would then result in the simpler equations
: \dot{\mathbf{x}}(t) = \left(A + B K \right) \mathbf{x}(t)
: \mathbf{y}(t) = \mathbf{x}(t)
This reduces the necessary eigendecomposition to just A + B K.
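With full state feedback (C = I, D = 0) the closed-loop matrix is simply A + BK, and K can be chosen to place its eigenvalues. SciPy's place_poles does this, but uses the sign convention A - BK, so its gain is negated below to match the text's convention; the matrices and target poles are hypothetical.

```python
import numpy as np
from scipy import signal

# Hypothetical system and desired closed-loop eigenvalues.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
desired_poles = np.array([-4.0, -5.0])

placed = signal.place_poles(A, B, desired_poles)
K = -placed.gain_matrix                  # so that eig(A + B K) = desired_poles
print(np.linalg.eigvals(A + B @ K))      # approximately [-4., -5.]
```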


Feedback with setpoint (reference) input

In addition to feedback, an input, r(t), can be added such that \mathbf{u}(t) = -K \mathbf{y}(t) + \mathbf{r}(t).
: \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)
: \mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)
becomes
: \dot{\mathbf{x}}(t) = A \mathbf{x}(t) - B K \mathbf{y}(t) + B \mathbf{r}(t)
: \mathbf{y}(t) = C \mathbf{x}(t) - D K \mathbf{y}(t) + D \mathbf{r}(t)
Solving the output equation for \mathbf{y}(t) and substituting in the state equation results in
: \dot{\mathbf{x}}(t) = \left(A - B K \left(I + D K\right)^{-1} C \right) \mathbf{x}(t) + B \left(I - K \left(I + D K\right)^{-1} D \right) \mathbf{r}(t)
: \mathbf{y}(t) = \left(I + D K\right)^{-1} C \mathbf{x}(t) + \left(I + D K\right)^{-1} D \mathbf{r}(t)
One fairly common simplification to this system is removing ''D'', which reduces the equations to
: \dot{\mathbf{x}}(t) = \left(A - B K C \right) \mathbf{x}(t) + B \mathbf{r}(t)
: \mathbf{y}(t) = C \mathbf{x}(t)
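In the simplified case without D, the closed-loop system driven by the reference is (A - BKC, B, C); a small sketch with hypothetical values builds it and looks at its step response to r(t).

```python
import numpy as np
from scipy import signal

# Closed loop with output feedback and a reference input, u(t) = -K y(t) + r(t),
# in the simplified case D = 0:
#   x'(t) = (A - B K C) x(t) + B r(t),   y(t) = C x(t)
# Hypothetical example matrices and gain.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0]])

A_closed = A - B @ K @ C
closed_loop = signal.StateSpace(A_closed, B, C, np.zeros((1, 1)))

# Step response to the reference r(t) = 1.
t, y = signal.step(closed_loop)
print(y[-1])   # steady-state output for a unit reference
```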


Moving object example

A classical linear system is that of one-dimensional movement of an object (e.g., a cart). Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring give
:m \ddot{y}(t) = u(t) - b\dot{y}(t) - k y(t)
where
*y(t) is position; \dot{y}(t) is velocity; \ddot{y}(t) is acceleration
*u(t) is an applied force
*b is the viscous friction coefficient
*k is the spring constant
*m is the mass of the object
The state equation would then become
:\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u(t)
:y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
where
*x_1(t) represents the position of the object
*x_2(t) = \dot{x}_1(t) is the velocity of the object
*\dot{x}_2(t) = \ddot{x}_1(t) is the acceleration of the object
*the output y(t) is the position of the object
The controllability test is then
:\begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} & \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{m} \\ \frac{1}{m} & -\frac{b}{m^2} \end{bmatrix}
which has full rank for all b and m. This means that if the initial state of the system is known (y(t), \dot{y}(t), \ddot{y}(t)), and if b and m are constants, then there is a force u that could move the cart into any other position in the system. The observability test is then
:\begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
which also has full rank. Therefore, this system is both controllable and observable.
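The same rank tests can be evaluated numerically for the cart; the parameter values below are hypothetical, chosen only so the matrices contain concrete numbers.

```python
import numpy as np

# Mass-spring-damper cart from the text:  m y'' = u - b y' - k y,
# with state x1 = position and x2 = velocity.  Hypothetical parameters.
m, b, k = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

controllability = np.hstack([B, A @ B])
observability = np.vstack([C, C @ A])
print(np.linalg.matrix_rank(controllability))   # 2 -> controllable
print(np.linalg.matrix_rank(observability))     # 2 -> observable
```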


Nonlinear systems

The more general form of a state-space model can be written as two functions.
:\dot{\mathbf{x}}(t) = \mathbf{f}(t, x(t), u(t))
:\mathbf{y}(t) = \mathbf{h}(t, x(t), u(t))
The first is the state equation and the latter is the output equation. If the function f(\cdot,\cdot,\cdot) is a linear combination of states and inputs then the equations can be written in matrix notation like above. The u(t) argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).
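In code, a nonlinear state-space model is naturally expressed as a pair of functions f and h that can be handed to a numerical integrator; the dynamics below are a made-up example, not from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'(t) = f(t, x, u),  y(t) = h(t, x, u); hypothetical example functions.
def f(t, x, u):
    return np.array([x[1], -x[0] ** 3 + u])   # made-up nonlinear dynamics

def h(t, x, u):
    return x[0]                               # output is the first state

u_const = 0.0                                 # unforced case: u can be dropped
sol = solve_ivp(lambda t, x: f(t, x, u_const), (0.0, 10.0), [1.0, 0.0])
y = [h(t, x, u_const) for t, x in zip(sol.t, sol.y.T)]
print(y[-1])
```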


Pendulum example

A classic nonlinear system is a simple unforced pendulum
:m\ell^2\ddot\theta(t) = - m\ell g\sin\theta(t) - k\ell\dot\theta(t)
where
*\theta(t) is the angle of the pendulum with respect to the direction of gravity
*m is the mass of the pendulum (the pendulum rod's mass is assumed to be zero)
*g is the gravitational acceleration
*k is the coefficient of friction at the pivot point
*\ell is the radius of the pendulum (to the center of gravity of the mass m)
The state equations are then
:\dot{x}_1(t) = x_2(t)
:\dot{x}_2(t) = - \frac{g}{\ell}\sin x_1(t) - \frac{k}{m\ell} x_2(t)
where
*x_1(t) = \theta(t) is the angle of the pendulum
*x_2(t) = \dot{x}_1(t) is the rotational velocity of the pendulum
*\dot{x}_2 = \ddot{x}_1 is the rotational acceleration of the pendulum
Instead, the state equation can be written in the general form
:\dot{\mathbf{x}}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \mathbf{f}(t, x(t)) = \begin{bmatrix} x_2(t) \\ - \frac{g}{\ell}\sin x_1(t) - \frac{k}{m\ell} x_2(t) \end{bmatrix}.
The equilibrium/stationary points of a system are those where \dot{\mathbf{x}} = \mathbf{0}, and so the equilibrium points of the pendulum are those that satisfy
:\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} n\pi \\ 0 \end{bmatrix}
for integers ''n''.
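The pendulum state equations can be integrated numerically; the sketch below uses SciPy's solve_ivp with hypothetical parameter values and an initial condition near the upright equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Unforced pendulum from the text, x1 = angle, x2 = angular velocity:
#   x1' = x2
#   x2' = -(g/l) sin(x1) - (k/(m*l)) x2
# Parameter values are hypothetical, chosen only for the simulation.
g, l, m, k = 9.81, 1.0, 1.0, 0.2

def pendulum(t, x):
    return [x[1], -(g / l) * np.sin(x[0]) - (k / (m * l)) * x[1]]

# Start near the upright (unstable) equilibrium x1 = pi and let friction
# carry the motion toward a stable equilibrium.
sol = solve_ivp(pendulum, (0.0, 20.0), [np.pi - 0.01, 0.0], max_step=0.01)
print(sol.y[0, -1])   # final angle, settling toward an equilibrium n*pi
```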


See also

* Control engineering
* Control theory
* State observer
* Observability
* Controllability
* Discretization of state-space models
* Phase space, for information about phase state (like state space) in physics and mathematics.
* State space, for information about state space with discrete states in computer science.
* State space (physics), for information about state space in physics.
* Kalman filter, for a statistical application.


External links

* Wolfram Language functions for linear state-space models
