Surrogate data testing (or the ''method of surrogate data'') is a statistical proof-by-contradiction technique similar to permutation tests and parametric bootstrapping. It is used to detect non-linearity in a time series. The technique involves specifying a null hypothesis H_0 describing a linear process and then generating several surrogate data sets according to H_0 using Monte Carlo methods. A discriminating statistic is then calculated for the original time series and for each surrogate set. If the value of the statistic is significantly different for the original series than for the surrogate sets, the null hypothesis is rejected and non-linearity is assumed.

The particular surrogate data testing method to be used is directly related to the null hypothesis. Usually this is similar to the following: ''the data is a realization of a stationary linear system, whose output has possibly been measured by a monotonically increasing, possibly nonlinear (but static) function''. Here ''linear'' means that each value depends linearly on past values, or on present and past values of some independent and identically distributed (i.i.d.) process, usually also Gaussian; this is equivalent to saying that the process is of ARMA type. In the case of flows (continuous-time systems), linearity means that the system can be expressed by a linear differential equation. In this hypothesis, the ''static'' measurement function is one which depends only on the present value of its argument, not on past ones.
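In practice, the test reduces to comparing a single number computed from the data against the distribution of the same number over the surrogates. The following is a minimal sketch in Python with NumPy; the discriminating statistic (a time-reversal asymmetry measure) and the function names are illustrative choices, not taken from any standard library, and any statistic sensitive to non-linearity together with any surrogate generator consistent with the chosen null hypothesis could be substituted.

```python
# A minimal sketch of a surrogate data test (assumes NumPy).
# `time_reversal_asymmetry`, `surrogate_test` and `iid_surrogate` are
# illustrative names, not part of any standard library.
import numpy as np


def time_reversal_asymmetry(x, lag=1):
    """A simple nonlinear discriminating statistic (its expected value is
    zero for time-reversible processes, e.g. linear Gaussian ones)."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5


def surrogate_test(x, make_surrogate, statistic, n_surrogates=99, seed=0):
    """Compare the statistic of the original series with its distribution
    over surrogates; return (original value, empirical two-sided p-value)."""
    rng = np.random.default_rng(seed)
    t0 = statistic(x)
    t_surr = np.array([statistic(make_surrogate(x, rng))
                       for _ in range(n_surrogates)])
    centre = t_surr.mean()
    # Rank-based p-value: how often a surrogate is at least as extreme as the data
    p = (1 + np.sum(np.abs(t_surr - centre) >= abs(t0 - centre))) / (n_surrogates + 1)
    return t0, p


def iid_surrogate(x, rng):
    """Surrogate generator for the simplest null hypothesis
    (uncorrelated i.i.d. noise): a random permutation of the data."""
    return rng.permutation(x)


# Usage: t0, p = surrogate_test(data, iid_surrogate, time_reversal_asymmetry)
```

The null hypothesis is rejected when the resulting p-value falls below the chosen significance level.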


Methods

Many algorithms to generate surrogate data have been proposed. They are usually classified in two groups:

* ''Typical realizations'': data series are generated as outputs of a model well fitted to the original data.
* ''Constrained realizations'': data series are created directly from the original data, generally by some suitable transformation of it.

The latter methods do not depend on a particular model, nor on any parameters; they are therefore non-parametric. These surrogate data methods are usually based on preserving the linear structure of the original series (for instance, by preserving the autocorrelation function, or equivalently the periodogram, an estimate of the sample spectrum).

Among constrained-realization methods, the most widely used (and thus the ones that could be called the ''classical methods'') are:

# Algorithm 0, or RS (for ''Random Shuffle''): new data are created simply by random permutations of the original series. This concept is also used in permutation tests. The permutations guarantee the same amplitude distribution as the original series, but destroy any temporal correlation that may have been present in the original data. This method is associated with the null hypothesis that the data are uncorrelated i.i.d. noise (possibly Gaussian and measured by a static nonlinear function).
# Algorithm 1, or RP (for ''Random Phases''; also known as FT, for ''Fourier Transform''): in order to preserve the linear correlation (the periodogram) of the series, surrogate data are created by taking the inverse Fourier transform of the moduli of the Fourier transform of the original data combined with new (uniformly random) phases. If the surrogates must be real, the Fourier phases must be antisymmetric with respect to the central value of the data.
# Algorithm 2, or AAFT (for ''Amplitude Adjusted Fourier Transform''): this method combines, approximately, the advantages of the two previous ones: it tries to preserve both the linear structure and the amplitude distribution. It consists of these steps:
#* scaling the data to a Gaussian distribution (''Gaussianization'');
#* performing an RP transformation of the new data;
#* finally, applying the inverse of the first transformation (''de-Gaussianization'').
#: The drawback of this method is precisely that the last step changes the linear structure somewhat.
# Iterative algorithm 2, or IAAFT (for ''Iterative Amplitude Adjusted Fourier Transform''): this algorithm is an iterative version of AAFT. Its steps are repeated until the autocorrelation function is sufficiently similar to that of the original series, or until the amplitudes no longer change.

A minimal sketch of the RP and AAFT generators is given at the end of this section.

Many other surrogate data methods have been proposed, some based on optimizations to achieve an autocorrelation close to the original one, some based on wavelet transforms, and some capable of dealing with certain types of non-stationary data.

The above-mentioned techniques are called linear surrogate methods, because they are based on a linear process and address a linear null hypothesis. Broadly speaking, these methods are useful for data showing irregular fluctuations (short-term variability), and data with such behaviour abound in the real world. However, we often observe data with obvious periodicity, for example annual sunspot numbers or the electrocardiogram (ECG). Time series exhibiting strong periodicities are clearly not consistent with the linear null hypotheses, and specific algorithms and null hypotheses have been proposed to tackle this case.
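As a concrete illustration of Algorithms 1 and 2, the following is a minimal sketch, assuming NumPy; the function names are illustrative rather than taken from any particular package. The RP surrogate keeps the Fourier amplitudes (and hence the periodogram) of the original series, while the AAFT surrogate additionally keeps its amplitude distribution.

```python
# A minimal sketch of the RP (Algorithm 1) and AAFT (Algorithm 2) generators
# (assumes NumPy; function names are illustrative).
import numpy as np


def random_phase_surrogate(x, rng):
    """Algorithm 1 (RP/FT): keep the Fourier amplitudes, randomize the phases.

    Using the real-input FFT automatically enforces the phase antisymmetry
    needed for a real-valued surrogate.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    new_spec = np.abs(spectrum) * np.exp(1j * phases)
    new_spec[0] = spectrum[0]          # keep the mean (zero-frequency term)
    if n % 2 == 0:
        new_spec[-1] = spectrum[-1]    # keep the Nyquist component real
    return np.fft.irfft(new_spec, n)


def aaft_surrogate(x, rng):
    """Algorithm 2 (AAFT): Gaussianize, phase-randomize, de-Gaussianize."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x))
    # Gaussianization: Gaussian noise reordered to follow the ranks of the data
    gauss = np.sort(rng.standard_normal(n))[ranks]
    # RP step on the Gaussianized series
    gauss_surr = random_phase_surrogate(gauss, rng)
    # De-Gaussianization: original values reordered to follow the surrogate's ranks
    return np.sort(x)[np.argsort(np.argsort(gauss_surr))]
```

An IAAFT generator would alternate between enforcing the original Fourier amplitudes and re-imposing the original amplitude distribution by rank ordering, iterating until the periodogram and the amplitudes both stabilize.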


See also

* Resampling (statistics)
* Permutation test

