
In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
A time series is very frequently plotted via a run chart (which is a temporal line chart). Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.
Time series ''analysis'' comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series ''forecasting'' is the use of a model to predict future values based on previously observed values. Generally, time series data is modelled as a stochastic process. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis", which refers in particular to relationships between different points in time within a single series.
Time series data have a natural temporal ordering. This makes time series analysis distinct from
cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from
spatial data analysis where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time so that values for a given period will be expressed as deriving in some way from past values, rather than from future values (see time reversibility).
Time series analysis can be applied to
real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language).
Methods for analysis
Methods for time series analysis may be divided into two classes:
frequency-domain methods and
time-domain methods. The former include
spectral analysis and
wavelet analysis; the latter include
auto-correlation and
cross-correlation analysis. In the time domain, correlation analysis can be performed in a filter-like manner using
scaled correlation, thereby mitigating the need to operate in the frequency domain.
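As a sketch of time-domain analysis, the sample autocorrelation function can be computed directly; the NumPy-only implementation and the noisy sine series (period 20) are illustrative choices, not a prescribed method:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag (a minimal sketch)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / var for k in range(max_lag + 1)])

# A noisy sine wave with period 20: the ACF is 1 at lag 0, dips negative
# near half the period, and peaks again near the full period.
t = np.arange(200)
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 20) + 0.1 * rng.standard_normal(200)
acf = autocorrelation(series, 40)
```

The repeated peak near lag 20 is what reveals the underlying period without moving to the frequency domain.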
Additionally, time series analysis techniques may be divided into
parametric and
non-parametric methods. The
parametric approaches assume that the underlying
stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an
autoregressive or
moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast,
non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
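As an illustration of the parametric approach, a first-order autoregressive model can be fitted by least squares; the coefficient value 0.7 and the hand-rolled NumPy estimation are illustrative assumptions (a library such as statsmodels would normally be used):

```python
import numpy as np

# Parametric sketch: simulate an AR(1) process x_t = phi * x_{t-1} + eps_t,
# then recover phi by least squares (regress x_t on x_{t-1}).
rng = np.random.default_rng(1)
phi_true = 0.7        # illustrative coefficient
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Closed-form least-squares estimate of the single parameter.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
```

The whole stochastic process is summarized by one number, which is exactly the small-parameter-count premise of parametric methods.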
Methods of time series analysis may also be divided into
linear
and
non-linear, and
univariate and
multivariate.
Panel data
A time series is one type of
panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a
cross-sectional data
set). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the distinction rests on the non-time identifier alone, then the data set is a cross-sectional data set candidate.
Analysis
There are several types of motivation and data analysis available for time series which are appropriate for different purposes.
Motivation
In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other applications are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering, classification, query by content, anomaly detection as well as forecasting.
Exploratory analysis

A simple way to examine a regular time series is manually with a line chart. The datagraphic shows tuberculosis deaths in the United States, along with the yearly change and the percentage change from year to year. The total number of deaths declined in every year until the mid-1980s, after which there were occasional increases, often proportionately - but not absolutely - quite large.
A study of corporate data analysts found two challenges to exploratory time series analysis: discovering the shape of interesting patterns, and finding an explanation for these patterns. Visual tools that represent time series data as
heat map matrices can help overcome these challenges.
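The yearly change and year-over-year percentage change shown in such a graphic can be computed directly; the death counts below are hypothetical placeholders, not the actual US figures:

```python
# Hypothetical yearly death counts (illustrative, not the real US data).
deaths = [50000, 45000, 41000, 38500]

# Absolute yearly change and year-over-year percentage change.
change = [b - a for a, b in zip(deaths, deaths[1:])]
pct_change = [100.0 * (b - a) / a for a, b in zip(deaths, deaths[1:])]
```

Note that a small absolute increase late in a declining series can still be a large percentage increase, which is the effect described above.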
Estimation, filtering, and smoothing
This approach may be based on
harmonic analysis and filtering of signals in the
frequency domain using the
Fourier transform, and
spectral density estimation. Its development was significantly accelerated during World War II by mathematician Norbert Wiener, electrical engineers Rudolf E. Kálmán, Dennis Gabor and others for filtering signals from noise and predicting signal values at a certain point in time.
An equivalent effect may be achieved in the time domain, as in a Kalman filter; see filtering and smoothing for more techniques.
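A minimal scalar Kalman filter can illustrate the time-domain approach; the assumed model (a constant signal observed in Gaussian noise) and the noise variances are illustrative choices:

```python
import numpy as np

# Minimal scalar Kalman filter: estimate a constant signal from noisy
# measurements. q and r (process and measurement noise variances) are
# illustrative tuning choices.
rng = np.random.default_rng(2)
true_value = 5.0
obs = true_value + rng.standard_normal(200)   # noisy measurements

q, r = 1e-4, 1.0
x_hat, p = 0.0, 1e3      # initial estimate and its (large) variance
estimates = []
for z in obs:
    p += q                        # predict: variance grows by process noise
    k = p / (p + r)               # Kalman gain
    x_hat += k * (z - x_hat)      # correct using the measurement residual
    p *= (1 - k)                  # posterior variance shrinks
    estimates.append(x_hat)
```

Each step blends the prediction with the new measurement, weighted by their relative uncertainties, which is the filtering-and-smoothing idea in its simplest form.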
Other related techniques include:
*
Autocorrelation analysis to examine
serial dependence
*
Spectral analysis to examine cyclic behavior which need not be related to
seasonality. For example, sunspot activity varies over 11-year cycles. Other common examples include celestial phenomena, weather patterns, neural activity, commodity prices, and economic activity.
* Separation into components representing trend, seasonality, slow and fast variation, and cyclical irregularity: see
trend estimation and
decomposition of time series
Curve fitting
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or
smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is
regression analysis, which focuses more on questions of
statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables.
Extrapolation refers to the use of a fitted curve beyond the
range of the observed data, and is subject to a
degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data.

For processes that are expected to generally grow in magnitude, one of the curves in the graphic (and many others) can be fitted by estimating its parameters.
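For example, an exponential growth curve y = a·exp(b·t) can be fitted by taking logarithms, which turns it into a linear least-squares problem; the parameter values and noise level here are illustrative:

```python
import numpy as np

# Fit y = a * exp(b * t) by log-linear least squares.
# True values a=2.0, b=0.1 and the small noise level are illustrative.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)
y = 2.0 * np.exp(0.1 * t) * np.exp(0.01 * rng.standard_normal(50))

b_hat, log_a_hat = np.polyfit(t, np.log(y), 1)   # slope, intercept of log y
a_hat = np.exp(log_a_hat)
```

The log transform is a common trick for growth curves, though it implicitly assumes the noise is multiplicative; for additive noise a nonlinear fitter would be more appropriate.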
The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines"). Interpolation is useful where the data surrounding the missing data is available and its trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates. Alternatively, polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fitted in time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set. Spline interpolation, however, yields a piecewise continuous function composed of many polynomials to model the data set.
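The contrast between a single global polynomial and a piecewise fit can be sketched as follows; piecewise-linear interpolation (np.interp) stands in for spline interpolation to keep the example dependency-free, and the data points are illustrative:

```python
import numpy as np

# Piecewise interpolation passes through every known point exactly;
# a single low-degree polynomial fitted by regression generally does not.
x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.0, 1.0, 0.0, 1.0])

# Piecewise-linear interpolation between the known points ("knots").
y_mid = np.interp(1.5, x_known, y_known)

# One quadratic fitted to all four points by least squares.
coeffs = np.polyfit(x_known, y_known, 2)
residual = float(np.max(np.abs(np.polyval(coeffs, x_known) - y_known)))
```

The oscillating data cannot be matched by one quadratic, so the regression leaves a visible residual, while the piecewise fit is exact at the knots.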
Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
Function approximation
In general, a function approximation problem asks us to select a
function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.
One can distinguish two major classes of function approximation problems: First, for known target functions,
approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it ''g'', may be unknown; instead of an explicit formula, only a set of points (a time series) of the form (''x'', ''g''(''x'')) is provided. Depending on the structure of the
domain and
codomain of ''g'', several techniques for approximating ''g'' may be applicable. For example, if ''g'' is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of ''g'' is a finite set, one is dealing with a classification problem instead. A related problem of ''online'' time series approximation is to summarize the data in one pass and construct an approximate representation that can support a variety of time series queries with bounds on worst-case error.
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
Prediction and forecasting
In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known as forecasting.
* Fully formed statistical models for stochastic simulation purposes, so as to generate alternative versions of the time series, representing what might happen over non-specific time-periods in the future
* Simple or fully formed statistical models to describe the likely outcome of the time series in the immediate future, given knowledge of the most recent outcomes (forecasting).
* Forecasting on time series is usually done using automated statistical software packages and programming languages, such as
Julia,
Python,
R,
SAS,
SPSS and many others.
* Forecasting on large scale data can be done with
Apache Spark using the Spark-TS library, a third-party package.
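A minimal forecasting sketch, using simple exponential smoothing implemented by hand rather than one of the packages above; the history and smoothing factor are illustrative:

```python
# Simple exponential smoothing: the forecast is an exponentially weighted
# average of past observations. alpha is an illustrative tuning choice.
def ses_forecast(series, alpha=0.5):
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level   # flat forecast for all future steps

history = [10.0, 12.0, 11.0, 13.0, 12.0]
forecast = ses_forecast(history, alpha=0.5)
```

This is the "simple statistical model" end of the spectrum: one parameter, a flat forecast, and no attempt to model trend or seasonality.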
Classification
Assigning time series patterns to a specific category, for example identifying a word based on a series of hand movements in sign language.
Segmentation
Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties. For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem using change-point detection, or by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.
Clustering
Time series data may be clustered; however, special care has to be taken when considering subsequence clustering.
Time series clustering may be split into
* whole time series clustering (multiple time series for which to find a cluster)
* subsequence time series clustering (single time series, split into chunks using sliding windows)
* time point clustering
Subsequence time series clustering
Subsequence time series clustering resulted in unstable (random) clusters ''induced by the feature extraction'' using chunking with sliding windows. It was found that the cluster centers (the average of the time series in a cluster, itself also a time series) follow an arbitrarily shifted sine pattern (regardless of the dataset, even on realizations of a random walk). This means that the found cluster centers are non-descriptive for the dataset because the cluster centers are always nonrepresentative sine waves.
Models
Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the ''
autoregressive'' (AR) models, the ''integrated'' (I) models, and the ''moving-average'' (MA) models. These three classes depend ''linearly'' on previous data points.
Combinations of these ideas produce autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models. The autoregressive fractionally integrated moving-average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing" time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's control. For these models, the acronyms are extended with a final "X" for "exogenous".
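The "integrated" part of these models refers to differencing, which can be shown directly; the linear-trend series below is a toy example:

```python
import numpy as np

# First differencing removes a linear trend, leaving a constant series
# that AR/MA terms could then model. The trend is a toy example.
t = np.arange(10)
series = 3.0 * t + 2.0
diffed = np.diff(series)   # constant 3.0 everywhere
```

An ARIMA(p, 1, q) model is simply an ARMA(p, q) model applied to this once-differenced series.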
Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical investigations can indicate the advantage of using predictions derived from non-linear models, over those from linear models, as for example in nonlinear autoregressive exogenous models. Further references on nonlinear time series analysis include (Kantz and Schreiber) and (Abarbanel).
Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models are termed autoregressive conditional heteroskedasticity (ARCH) models, and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related to, or predicted by, recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate time-varying process, as in a doubly stochastic model.
In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor. Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling volatility evolution.
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMMs are widely used in speech recognition, for translating a time series of spoken words into text.
Many of these models are collected in the Python package sktime.
Notation
A number of different notations are in use for time-series analysis. A common notation specifying a time series ''X'' that is indexed by the natural numbers is written
:''X'' = (''X''_1, ''X''_2, ...).
Another common notation is
:''Y'' = (''Y''_''t'' : ''t'' ∈ ''T''),
where ''T'' is the index set.
Conditions
There are two sets of conditions under which much of the theory is built:
* Stationary process
* Ergodic process
Ergodicity implies stationarity, but the converse is not necessarily the case. Stationarity is usually classified into strict stationarity and wide-sense or second-order stationarity. Both models and applications can be developed under each of these conditions, although the models in the latter case might be considered as only partly specified.
In addition, time-series analysis can be applied where the series are seasonally stationary or non-stationary. Situations where the amplitudes of frequency components change with time can be dealt with in time-frequency analysis, which makes use of a time–frequency representation of a time series or signal.
Tools
Tools for investigating time-series data include:
* Consideration of the autocorrelation function and the spectral density function (also cross-correlation functions and cross-spectral density functions)
* Scaled cross- and auto-correlation functions to remove contributions of slow components
* Performing a Fourier transform to investigate the series in the frequency domain
* Performing a clustering analysis
* Discrete, continuous or mixed spectra of time series, depending on whether the time series contains a (generalized) harmonic signal or not
* Use of a digital filter to remove unwanted noise
* Principal component analysis (or empirical orthogonal function analysis)
* Singular spectrum analysis
* "Structural" models:
** General state space models
** Unobserved components models
* Machine learning
** Artificial neural networks
** Support vector machine
** Fuzzy logic
** Gaussian process
** Genetic programming
** Gene expression programming
** Hidden Markov model
** Multi expression programming
* Queueing theory analysis
* Control chart
** Shewhart individuals control chart
** CUSUM chart
** EWMA chart
* Detrended fluctuation analysis
* Nonlinear mixed-effects modeling
* Dynamic time warping
* Dynamic Bayesian network
* Time-frequency analysis techniques:
** Fast Fourier transform
** Continuous wavelet transform
** Short-time Fourier transform
** Chirplet transform
** Fractional Fourier transform
* Chaotic analysis
** Correlation dimension
** Recurrence plots
** Recurrence quantification analysis
** Lyapunov exponents
** Entropy encoding
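As a concrete instance of the first tool in the list above, the sample autocorrelation function can be computed directly, with no library support (a plain sketch using the common biased estimator that normalizes by the lag-0 variance):

```python
def acf(x, max_lag):
    """Sample autocorrelation function of a series x for lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)  # lag-0 sum of squares
    out = []
    for k in range(max_lag + 1):
        cov = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n))
        out.append(cov / var)
    return out
```

For example, a strictly alternating series has an autocorrelation near −1 at lag 1, while the value at lag 0 is always exactly 1.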
Measures
Time-series metrics or features that can be used for time series classification or regression analysis:
* Univariate linear measures
** Moment (mathematics)
** Spectral band power
** Spectral edge frequency
** Accumulated energy (signal processing)
** Characteristics of the autocorrelation function
** Hjorth parameters
** FFT parameters
** Autoregressive model parameters
** Mann–Kendall test
* Univariate non-linear measures
** Measures based on the correlation sum
** Correlation dimension
** Correlation integral
** Correlation density
** Correlation entropy
** Approximate entropy
** Sample entropy
** Wavelet entropy
** Dispersion entropy
** Fluctuation dispersion entropy
** Rényi entropy
** Higher-order methods
** Marginal predictability
** Dynamical similarity index
** State space dissimilarity measures
** Lyapunov exponent
** Permutation methods
** Local flow
* Other univariate measures
** Algorithmic complexity
** Kolmogorov complexity estimates
** Hidden Markov model states
** Rough path signature
** Surrogate time series and surrogate correction
** Loss of recurrence (degree of non-stationarity)
* Bivariate linear measures
** Maximum linear cross-correlation
** Linear Coherence (signal processing)
* Bivariate non-linear measures
** Non-linear interdependence
** Dynamical Entrainment (physics)
** Measures for phase synchronization
** Measures for phase locking
* Similarity measures:
** Cross-correlation
** Dynamic time warping
** Hidden Markov model
** Edit distance
** Total correlation
** Newey–West estimator
** Prais–Winsten transformation
** Data as vectors in a metrizable space
*** Minkowski distance
*** Mahalanobis distance
** Data as time series with envelopes
*** Global standard deviation
*** Local standard deviation
*** Windowed standard deviation
** Data interpreted as stochastic series
*** Pearson product-moment correlation coefficient
*** Spearman's rank correlation coefficient
** Data interpreted as a probability distribution function
*** Kolmogorov–Smirnov test
*** Cramér–von Mises criterion
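To make one of the similarity measures above concrete, a minimal dynamic time warping (DTW) distance can be written as a short dynamic program (an illustrative sketch using absolute difference as the local cost, not a production implementation):

```python
from math import inf

def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    # D[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike a pointwise metric such as the Minkowski distance, DTW allows elastic alignment in time, so two series that differ only in local tempo can still have distance zero.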
Visualization
Time series can be visualized with two categories of chart: overlapping charts and separated charts. Overlapping charts display all time series on the same layout, while separated charts present them on different layouts (but aligned for comparison purposes).
Overlapping charts
* Braided graphs
* Line charts
* Slope graphs
Separated charts
* Horizon graphs
* Reduced line chart (small multiples)
* Silhouette graph
* Circular silhouette graph
Further reading
* Durbin J., Koopman S. J. (2001), ''Time Series Analysis by State Space Methods'', Oxford University Press.
* Priestley, M. B. (1981), ''Spectral Analysis and Time Series'', Academic Press.
* Shumway R. H., Stoffer D. S. (2017), ''Time Series Analysis and its Applications: With R Examples (ed. 4)'', Springer.
* Weigend A. S., Gershenfeld N. A. (Eds.) (1994), ''Time Series Prediction: Forecasting the Future and Understanding the Past''. Proceedings of the NATO Advanced Research Workshop on Comparative Time Series Analysis (Santa Fe, May 1992), Addison-Wesley.
* Woodward, W. A., Gray, H. L. & Elliott, A. C. (2012), ''Applied Time Series Analysis'', CRC Press.
External links
Introduction to Time Series Analysis (Engineering Statistics Handbook), a practical guide to time series analysis.