List Of Things Named After Andrey Markov
This article is a list of things named after Andrey Markov, an influential Russian mathematician.
* Chebyshev–Markov–Stieltjes inequalities
* Dynamics of Markovian particles
* Dynamic Markov compression
* Gauss–Markov theorem
* Gauss–Markov process
* Markov blanket
** Markov boundary
* Markov chain
** Markov chain central limit theorem
** Additive Markov chain
** Markov additive process
** Absorbing Markov chain
** Continuous-time Markov chain
** Discrete-time Markov chain
** Nearly completely decomposable Markov chain
** Quantum Markov chain
** Telescoping Markov chain
* Markov condition
** Causal Markov condition
* Markov model
** Hidden Markov model
** Hidden semi-Markov model
** Layered hidden Markov model
** Hierarchical hidden Markov model
** Maximum-entropy Markov model
** Variable-order Markov model
* Markov renewal process
* Markov chain mixing time
* Markov kernel
* Piecewise-deterministic Markov process
* Markovian arrival process ...



Andrey Markov
Andrey Andreyevich Markov (first name also spelled "Andrei"; in older works also spelled "Markoff") (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. Markov and his younger brother Vladimir Andreevich Markov (1871–1897) proved the Markov brothers' inequality. His son, another Andrey Andreyevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory.

Biography
Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student, and he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra) ...



Markov Condition
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time. The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model. A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items. An example of a model for such a field is the Ising model. A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.

Introduction
A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the presen ...
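As a minimal illustration (not part of the source article; the two-state "weather" chain and its transition probabilities are invented), the following Python sketch samples a discrete-time Markov chain in which the next state is drawn from a distribution that depends only on the current state:

import random

# Hypothetical two-state chain, invented for illustration.
# P[current][next] is the one-step transition probability.
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state using only the current state (the Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

rng = random.Random(0)
state, path = "sunny", ["sunny"]
for _ in range(10):
    state = step(state, rng)
    path.append(state)
print(path)

Nothing in step consults the path already taken; that is precisely the memorylessness described above.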


Markov Strategy
In game theory (the study of mathematical models of strategic interactions among rational agents), a Markov strategy is one that depends only on state variables that summarize the history of the game in one way or another. For instance, a state variable can be the current play in a repeated game, or it can be any interpretation of a recent sequence of play. A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game. Markov strategies are named after Andrey Markov, whose work on stochastic processes underlies the Markov property that such strategies satisfy. ...


Markovian Arrival Process
In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process (MAP or MArP) is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process, where the time between each arrival is exponentially distributed. The processes were first suggested by Neuts in 1979.

Definition
A Markov arrival process is defined by two matrices, ''D''0 and ''D''1, where elements of ''D''0 represent hidden transitions and elements of ''D''1 observable transitions. The block matrix ''Q'' below is a transition rate matrix for a continuous-time Markov chain:

: Q = \begin{bmatrix} D_0 & D_1 & 0 & 0 & \dots \\ 0 & D_0 & D_1 & 0 & \dots \\ 0 & 0 & D_0 & D_1 & \dots \\ \vdots & \vdots & \ddots & \ddots & \ddots \end{bmatrix}.

The simplest example is a Poisson process, where ''D''0 = −''λ'' and ''D''1 = ''λ'': there is only one possible transition, it is observable, and it occurs at rate ''λ''. For ''Q'' to be a valid transition ...
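As a sketch of the definition above (the two-phase rate values are made up for illustration, not taken from the source), the following Python snippet assembles a finite upper-left corner of the block matrix ''Q'' and checks the validity condition that each row of ''D''0 + ''D''1 sums to zero:

import numpy as np

# Hypothetical two-phase MAP rates, chosen only for illustration.
# D0 holds hidden transitions (and the negative diagonal), D1 observable ones.
D0 = np.array([[-3.0,  1.0],
               [ 0.5, -4.0]])
D1 = np.array([[1.5, 0.5],
               [2.0, 1.5]])

# Validity check: every row of D0 + D1 must sum to 0.
assert np.allclose((D0 + D1).sum(axis=1), 0.0)

def map_generator(levels):
    """Assemble the upper-left corner of the (infinite) block matrix Q:
    D0 on the diagonal blocks, D1 on the super-diagonal blocks."""
    m = D0.shape[0]
    Q = np.zeros((levels * m, levels * m))
    for k in range(levels):
        Q[k*m:(k+1)*m, k*m:(k+1)*m] = D0
        if k + 1 < levels:
            Q[k*m:(k+1)*m, (k+1)*m:(k+2)*m] = D1
    return Q

print(map_generator(3))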


Piecewise-deterministic Markov Process
In probability theory, a piecewise-deterministic Markov process (PDMP) is a process whose behaviour is governed by random jumps at points in time, but whose evolution is deterministically governed by an ordinary differential equation between those times. The class of models is "wide enough to include as special cases virtually all the non-diffusion models of applied probability." The process is defined by three quantities: the flow, the jump rate, and the transition measure. The model was first introduced in a paper by Mark H. A. Davis in 1984.

Examples
Piecewise linear models such as Markov chains, continuous-time Markov chains, the M/G/1 queue, the GI/G/1 queue and the fluid queue can be encapsulated as PDMPs with simple differential equations.

Applications
PDMPs have been shown to be useful in ruin theory, queueing theory, for modelling biochemical processes such as DNA replication in eukaryotes and subtilin production by the organism B. subtilis, and for modelling earthquakes ...
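To make the three defining quantities concrete, here is a toy Python sketch (entirely illustrative, not Davis's formulation): the flow is the ODE dx/dt = −x solved in closed form, the jump rate is a constant λ, and the transition measure is degenerate (each jump adds 1 to the state):

import math
import random

LAMBDA = 0.5  # assumed constant jump rate, chosen for illustration

def simulate(horizon, x0=1.0, seed=0):
    """Simulate a toy PDMP: exponential-decay flow between random jumps."""
    rng = random.Random(seed)
    t, x, trace = 0.0, x0, [(0.0, x0)]
    while True:
        wait = rng.expovariate(LAMBDA)      # time until the next random jump
        if t + wait > horizon:
            break
        t += wait
        x = x * math.exp(-wait)             # deterministic flow dx/dt = -x
        x += 1.0                            # the jump (degenerate transition measure)
        trace.append((t, x))
    return trace

for t, x in simulate(horizon=10.0):
    print(f"t={t:6.3f}  x={x:.3f}")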




Markov Kernel
In probability theory, a Markov kernel (also known as a stochastic kernel or probability kernel) is a map that, in the general theory of Markov processes, plays the role that the transition matrix does in the theory of Markov processes with a finite state space.

Formal definition
Let (X,\mathcal A) and (Y,\mathcal B) be measurable spaces. A ''Markov kernel'' with source (X,\mathcal A) and target (Y,\mathcal B) is a map \kappa : \mathcal B \times X \to [0,1] with the following properties:
# For every (fixed) B \in \mathcal B, the map x \mapsto \kappa(B, x) is \mathcal A-measurable
# For every (fixed) x \in X, the map B \mapsto \kappa(B, x) is a probability measure on (Y, \mathcal B)
In other words, it associates to each point x \in X a probability measure \kappa(dy, x): B \mapsto \kappa(B, x) on (Y,\mathcal B) such that, for every measurable set B\in\mathcal B, the map x\mapsto \kappa(B, x) is measurable with respect to the \sigma-algebra \mathcal A.

Examples
Simple ra ...
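On finite spaces the definition reduces to a row-stochastic matrix, and property 1 (measurability) holds automatically. A small Python sketch, with the spaces and probabilities invented for illustration:

# A Markov kernel between finite spaces is just a row-stochastic matrix:
# kappa(B, x) sums the row belonging to x over the set B.
X = ["x1", "x2"]
Y = ["y1", "y2", "y3"]

# Hypothetical transition probabilities, one row per point of X.
rows = {
    "x1": {"y1": 0.2, "y2": 0.5, "y3": 0.3},
    "x2": {"y1": 0.7, "y2": 0.1, "y3": 0.2},
}

def kappa(B, x):
    """Kernel value: probability of landing in B ⊆ Y when starting from x."""
    return sum(rows[x][y] for y in B)

# Property 2: for fixed x, B ↦ kappa(B, x) is a probability measure on Y.
for x in X:
    assert abs(kappa(Y, x) - 1.0) < 1e-12
    assert kappa([], x) == 0.0

print(kappa(["y1", "y2"], "x1"))  # 0.7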


Markov Chain Mixing Time
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution. More precisely, a fundamental result about Markov chains is that a finite-state irreducible aperiodic chain has a unique stationary distribution ''π'' and, regardless of the initial state, the time-''t'' distribution of the chain converges to ''π'' as ''t'' tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must ''t'' be until the time-''t'' distribution is approximately ''π''? One variant, ''variation distance mixing time'', is defined as the smallest ''t'' such that the total variation distance of probability measures is small:

: |\Pr(X_t \in A) - \pi(A)| \leq 1/4

for all subsets A of states and all initial states. This is the sense in which Bayer and Diaconis proved that the number of riffle shuffles needed to mix an ordinary 52 card deck is 7. Mathematical theory focuses on how mixing times chan ...
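As an illustration of the variation-distance definition (the 3-state transition matrix is invented), the following Python snippet powers the transition matrix until the worst-case total variation distance to ''π'' drops to 1/4:

import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1), for illustration only.
P = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.6,  0.2 ],
              [0.3, 0.3,  0.4 ]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

def mixing_time(P, pi, eps=0.25):
    """Smallest t with max over initial states of TV(P^t(x, .), pi) <= eps;
    total variation distance is half the L1 distance between distributions."""
    Pt = np.eye(P.shape[0])
    t = 0
    while 0.5 * np.abs(Pt - pi).sum(axis=1).max() > eps:
        Pt = Pt @ P
        t += 1
    return t

print("mixing time:", mixing_time(P, pi))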



Markov Renewal Process
In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes like Markov chains, Poisson processes and renewal processes can be derived as special cases of MRPs.

Definition
Consider a state space \mathrm{S}. Consider a set of random variables (X_n,T_n), where T_n are the jump times and X_n are the associated states in the Markov chain (see Figure). Let the inter-arrival time be \tau_n = T_n - T_{n-1}. Then the sequence (X_n,T_n) is called a Markov renewal process if

: \Pr(\tau_{n+1} \le t, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) = \Pr(\tau_{n+1} \le t, X_{n+1} = j \mid X_n = i) \quad \forall n \ge 1,\ t \ge 0,\ i,j \in \mathrm{S}

Relation to other stochastic processes
# If we define a new stochastic process Y_t := X_n for t \in [T_n, T_{n+1}), the resulting process is not in general a continuous-time Markov chain/process (CTMC), because the holding times need not be exponentially distributed. Instead the process is Markovian only at the specified jump instants. This is the rationale behind the name ''Semi''-Mark ...
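A minimal simulation sketch (states, transition matrix and holding-time distributions are all invented): the embedded chain X_n jumps according to a transition matrix, while the inter-arrival times \tau_n are drawn from a state-dependent, deliberately non-exponential distribution, so the pair sequence (X_n, T_n) is an MRP but the induced process is not a CTMC:

import random

P = [[0.3, 0.7],
     [0.6, 0.4]]                        # embedded-chain transition matrix
HOLD = {0: (0.5, 1.5), 1: (1.0, 3.0)}   # uniform(a, b) holding times per state

def simulate(n_jumps, seed=0):
    """Generate the sequence (X_n, T_n) of states and jump times."""
    rng = random.Random(seed)
    x, t, history = 0, 0.0, [(0, 0.0)]
    for _ in range(n_jumps):
        t += rng.uniform(*HOLD[x])                  # tau_{n+1}: non-exponential
        x = 0 if rng.random() < P[x][0] else 1      # X_{n+1}: embedded chain step
        history.append((x, t))
    return history

for x, t in simulate(8):
    print(f"X={x}  T={t:.3f}")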


Variable-order Markov Model
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization. This realization sequence is often called the ''context''; therefore the VOM models are also called ''context trees''. VOM models are nicely rendered by colorized probabilistic suffix trees (PST). The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.

Example
Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {''a'', ''b'', ''c''}. Specifically, consider the string constructed from infinite concate ...
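The following Python sketch (a deliberately simplified stand-in for a probabilistic suffix tree, trained here on a periodic string chosen for illustration) counts next-symbol frequencies for every context up to a maximum order and predicts using the longest context actually observed, which is the "variable" part of the model:

from collections import defaultdict

MAX_ORDER = 3
counts = defaultdict(lambda: defaultdict(int))

def train(sequence):
    """Record next-symbol counts for every context of length 0..MAX_ORDER."""
    for i in range(len(sequence)):
        for k in range(MAX_ORDER + 1):
            if i - k < 0:
                break
            counts[sequence[i - k:i]][sequence[i]] += 1

def predict(history):
    """Return the next-symbol distribution for the longest known context."""
    for k in range(min(MAX_ORDER, len(history)), -1, -1):
        context = history[len(history) - k:]
        if context in counts:
            total = sum(counts[context].values())
            return {s: c / total for s, c in counts[context].items()}
    return {}

train("aaabc" * 20)   # a periodic string over the alphabet {a, b, c}
print(predict("aa"))  # after "aa", either "a" or "b" may follow
print(predict("aaa")) # after "aaa", "b" follows deterministically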


Maximum-entropy Markov Model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.

Model
Suppose we have a sequence of observations O_1, \dots, O_n that we seek to tag with the labels S_1, \dots, S_n that maximize the conditional probability P(S_1, \dots, S_n \mid O_1, \dots, O_n). In an MEMM, this probability is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label ...
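A toy sketch of this factorization (the labels, observations and probability table are all invented; a real MEMM would learn per-state maximum-entropy distributions from features) that finds the labeling maximizing the product of per-position transition probabilities:

from itertools import product

LABELS = ["N", "V"]

def p_transition(s, s_prev, o):
    """Stand-in for P(s | s_prev, o); hand-filled here instead of learned."""
    table = {
        ("N", "start", "dog"): 0.9, ("V", "start", "dog"): 0.1,
        ("N", "start", "runs"): 0.3, ("V", "start", "runs"): 0.7,
        ("N", "N", "dog"): 0.6, ("V", "N", "dog"): 0.4,
        ("N", "N", "runs"): 0.2, ("V", "N", "runs"): 0.8,
        ("N", "V", "dog"): 0.8, ("V", "V", "dog"): 0.2,
        ("N", "V", "runs"): 0.5, ("V", "V", "runs"): 0.5,
    }
    return table[(s, s_prev, o)]

def best_labeling(observations):
    """Brute-force argmax of prod_t P(S_t | S_{t-1}, O_t)."""
    best, best_p = None, -1.0
    for labels in product(LABELS, repeat=len(observations)):
        p, prev = 1.0, "start"
        for s, o in zip(labels, observations):
            p *= p_transition(s, prev, o)
            prev = s
        if p > best_p:
            best, best_p = labels, p
    return best, best_p

print(best_labeling(["dog", "runs"]))  # ('N', 'V') with probability 0.9 * 0.8

A practical implementation would use the Viterbi algorithm rather than brute force, but the enumeration makes the factored conditional probability explicit.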




Hierarchical Hidden Markov Model
The hierarchical hidden Markov model (HHMM) is a statistical model derived from the hidden Markov model (HMM). In an HHMM, each state is considered to be a self-contained probabilistic model. More precisely, each state of the HHMM is itself an HHMM. HHMMs and HMMs are useful in many fields, including pattern recognition.

Background
It is sometimes useful to use HMMs in specific structures in order to facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data is available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM into a greater structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently when it comes to the amount of training data required.

Description
In the hierarchical hidden Markov model (HHMM), each state is considered to ...


Layered Hidden Markov Model
The layered hidden Markov model (LHMM) is a statistical model derived from the hidden Markov model (HMM). An LHMM consists of ''N'' levels of HMMs, where the HMMs on level ''i'' + 1 correspond to observation symbols or probability generators at level ''i''. Every level ''i'' of the LHMM consists of ''K''''i'' HMMs running in parallel.

Background
LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because less training data is needed.

The layered hidden Markov model
A layered hidden Markov mod ...