List Of Things Named After Andrey Markov
This article is a list of things named after Andrey Markov, an influential Russian mathematician.

* Chebyshev–Markov–Stieltjes inequalities
* Dynamics of Markovian particles
* Dynamic Markov compression
* Gauss–Markov theorem
* Gauss–Markov process
* Markov blanket
** Markov boundary
* Markov chain
** Markov chain central limit theorem
** Additive Markov chain
** Markov additive process
** Absorbing Markov chain
** Continuous-time Markov chain
** Discrete-time Markov chain
** Nearly completely decomposable Markov chain
** Quantum Markov chain
** Telescoping Markov chain
* Markov condition
** Causal Markov condition
* Markov model
** Hidden Markov model
** Hidden semi-Markov model
** Layered hidden Markov model
** Hierarchical hidden Markov model
** Maximum-entropy Markov model
** Variable-order Markov model
* Markov renewal process
* Markov chain mixing time
* Markov kernel
* Piecewise-deterministic Markov process
* Markovian arrival process
…
Andrey Markov
Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. He was also a strong chess player, close to master level. Markov and his younger brother Vladimir Andreyevich Markov (1871–1897) proved the Markov brothers' inequality. His son, also named Andrey Andreyevich Markov (1903–1979), was a notable mathematician in his own right, making contributions to constructive mathematics and recursive function theory.

Biography

Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student. In his academics he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University) …
Markov Property
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time. The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model. A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items. An example of a model for such a field is the Ising model. A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.

Introduction

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state …
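As a minimal illustration of the property, the following Python sketch simulates a two-state chain whose next state is sampled from the current state alone; the states and transition probabilities are invented for the example.

```python
import random

# A minimal sketch (not from the article): a two-state weather chain whose
# next state is sampled from the current state alone -- the Markov property.
# The transition probabilities below are illustrative assumptions.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state: str) -> str:
    """Sample the next state; the full history is never consulted."""
    states, probs = zip(*TRANSITIONS[state])
    return random.choices(states, weights=probs, k=1)[0]

def simulate(start: str, n: int) -> list[str]:
    path = [start]
    for _ in range(n):
        path.append(step(path[-1]))  # only path[-1] matters
    return path

print(simulate("sunny", 10))
```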
Markov Strategy
In game theory, a Markov strategy is a strategy that depends only on the current state of the game, rather than the full history of past actions. The state summarizes all relevant past information needed for decision-making. For example, in a repeated game, the state could be the outcome of the most recent round or any summary statistic that captures the strategic situation or recent sequence of play. A profile of Markov strategies forms a Markov perfect equilibrium if it constitutes a Nash equilibrium in every possible state of the game. Markov strategies are widely used in dynamic and stochastic games, where the state evolves over time according to probabilistic rules. Although the concept is named after Andrey Markov due to its reliance on the Markov property, …
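The following toy Python sketch (the game and strategies are illustrative, not from the article) contrasts a Markov strategy, which reads only the current state, with a history-dependent one.

```python
# A minimal sketch (assumptions, not from the article): in a repeated
# prisoner's dilemma, take the state to be the opponent's previous action.
# Tit-for-tat is then a Markov strategy: a map from state to action that
# ignores everything but the current state.
from typing import Optional

def tit_for_tat(state: Optional[str]) -> str:
    """state is the opponent's last action ('C' or 'D'), or None in round 1."""
    return "C" if state is None else state

# A history-dependent strategy, by contrast, needs the whole record:
def grim_trigger(history: list[str]) -> str:
    return "D" if "D" in history else "C"  # not Markov in this state space

print(tit_for_tat(None), tit_for_tat("D"))  # C D
```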
Markovian Arrival Process
In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process (MAP or MArP) is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process where the time between each arrival is exponentially distributed. The processes were first suggested by Marcel F. Neuts in 1979.

Definition

A Markov arrival process is defined by two matrices, ''D''0 and ''D''1, where elements of ''D''0 represent hidden transitions and elements of ''D''1 observable transitions. The block matrix ''Q'' below is a transition rate matrix for a continuous-time Markov chain:

:Q = \begin{bmatrix}
D_0 & D_1 & 0 & 0 & \dots \\
0 & D_0 & D_1 & 0 & \dots \\
0 & 0 & D_0 & D_1 & \dots \\
\vdots & \vdots & \ddots & \ddots & \ddots
\end{bmatrix}.

The simplest example is a Poisson process where ''D''0 = −''λ'' and ''D''1 = ''λ'', where there is only one possible transition, it is observable, and it occurs at rate ''λ''. For ''Q'' to be a valid transition rate matrix, …
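As a hedged illustration of the ''D''0/''D''1 convention above, the sketch below simulates arrival times of a small MAP; the two-phase matrices are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal simulation sketch under the D0/D1 convention above; the concrete
# matrices are illustrative assumptions (a 2-phase MAP), not from the article.
D0 = np.array([[-3.0,  1.0],
               [ 0.0, -2.0]])  # hidden transitions (diagonal = -total rate)
D1 = np.array([[ 2.0,  0.0],
               [ 1.5,  0.5]])  # observable transitions (arrivals)

def simulate_map(D0, D1, n_arrivals, phase=0):
    """Return the first n_arrivals arrival times of the MAP (D0, D1)."""
    t, arrivals = 0.0, []
    while len(arrivals) < n_arrivals:
        total_rate = -D0[phase, phase]
        t += rng.exponential(1.0 / total_rate)
        # Competing events: hidden moves D0[phase, j] (j != phase)
        # and observable moves D1[phase, j] (any j).
        hidden = [(("h", j), D0[phase, j]) for j in range(len(D0)) if j != phase]
        observed = [(("o", j), D1[phase, j]) for j in range(len(D1))]
        events, rates = zip(*(hidden + observed))
        probs = np.array(rates) / total_rate
        kind, j = events[rng.choice(len(events), p=probs)]
        if kind == "o":
            arrivals.append(t)
        phase = j
    return arrivals

print(simulate_map(D0, D1, 5))
```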
Piecewise-deterministic Markov Process
In probability theory, a piecewise-deterministic Markov process (PDMP) is a process whose behaviour is governed by random jumps at points in time, but whose evolution is deterministically governed by an ordinary differential equation between those times. The class of models is "wide enough to include as special cases virtually all the non-diffusion models of applied probability." The process is defined by three quantities: the flow, the jump rate, and the transition measure. The model was first introduced in a paper by Mark H. A. Davis in 1984.

Examples

Piecewise linear models such as Markov chains, continuous-time Markov chains, the M/G/1 queue, the GI/G/1 queue and the fluid queue can be encapsulated as PDMPs with simple differential equations.

Applications

PDMPs have been shown useful in ruin theory and queueing theory, for modelling biochemical processes such as DNA replication in eukaryotes and subtilin production by the organism B. subtilis, and for modelling …
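The following toy simulation sketches the three ingredients named above for an assumed simple case: linear deterministic flow, a constant jump rate, and a jump that halves the state.

```python
import random

# A toy PDMP sketch (illustrative assumptions, not Davis's notation):
# between jumps the state grows deterministically (dx/dt = 1, the "flow");
# jumps occur at a constant rate LAM (the "jump rate") and halve the state
# (the "transition measure" is a point mass at x/2).
LAM = 0.5  # jump intensity

def simulate_pdmp(x0=0.0, t_end=20.0, seed=1):
    random.seed(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        wait = random.expovariate(LAM)             # time to next random jump
        if t + wait > t_end:
            path.append((t_end, x + (t_end - t)))  # deterministic flow only
            return path
        t += wait
        x = (x + wait) / 2                         # flow up to the jump, then halve
        path.append((t, x))

for t, x in simulate_pdmp()[:6]:
    print(f"t={t:6.2f}  x={x:.3f}")
```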
Markov Kernel
In probability theory, a Markov kernel (also known as a stochastic kernel or probability kernel) is a map that, in the general theory of Markov processes, plays the role that the transition matrix does in the theory of Markov processes with a finite state space.

Formal definition

Let (X,\mathcal A) and (Y,\mathcal B) be measurable spaces. A ''Markov kernel'' with source (X,\mathcal A) and target (Y,\mathcal B), sometimes written as \kappa:(X,\mathcal A)\to(Y,\mathcal B), is a function \kappa : \mathcal B \times X \to [0,1] with the following properties:

# For every (fixed) B_0 \in \mathcal B, the map x \mapsto \kappa(B_0, x) is \mathcal A-measurable.
# For every (fixed) x_0 \in X, the map B \mapsto \kappa(B, x_0) is a probability measure on (Y, \mathcal B).

In other words, it associates to each point x \in X a probability measure \kappa(dy, x): B \mapsto \kappa(B, x) on (Y,\mathcal B) such that, for every measurable set B\in\mathcal B, the map x\mapsto \kappa(B, x) is measurable …
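In the finite case the definition reduces to a row-stochastic matrix; the sketch below (matrix invented for illustration) spells that out and checks the two defining properties.

```python
import numpy as np

# A finite-state sketch (my illustration, not the article's): with X = Y =
# {0, 1, 2}, a Markov kernel is just a row-stochastic matrix P, and
# kappa(B, x) = sum of P[x, j] over j in B.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])  # each row is a probability measure

def kappa(B: set[int], x: int) -> float:
    """kappa(B, x): probability of landing in the set B when starting at x."""
    return float(sum(P[x, j] for j in B))

# Property 2: for fixed x, B -> kappa(B, x) is a probability measure.
assert abs(kappa({0, 1, 2}, 1) - 1.0) < 1e-12
# Measure additivity on disjoint sets:
assert abs(kappa({0, 2}, 1) - (kappa({0}, 1) + kappa({2}, 1))) < 1e-12

print(kappa({1, 2}, 0))  # 0.5
```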
Markov Chain Mixing Time
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution. More precisely, a fundamental result about Markov chains is that a finite state irreducible aperiodic chain has a unique stationary distribution \pi and, regardless of the initial state, the time-''t'' distribution of the chain converges to \pi as ''t'' tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must ''t'' be until the time-''t'' distribution is approximately \pi? One variant, ''total variation distance mixing time'', is defined as the smallest ''t'' such that the total variation distance of probability measures is small:

:t_{\mathrm{mix}}(\varepsilon) = \min \left\{ t \ge 0 : \max_{x} \left\| P^t(x, \cdot) - \pi \right\|_{\mathrm{TV}} \le \varepsilon \right\}.

Choosing a different \varepsilon, as long as \varepsilon < 1/2, can only change the mixing time up to a constant factor (depending on \varepsilon), and so one often fixes \varepsilon = 1/4 and simply writes t_{\mathrm{mix}} …
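The definition can be computed directly for a small chain; the following sketch (the example matrix is my own) finds t_{\mathrm{mix}} by powering the transition matrix.

```python
import numpy as np

# A direct computational sketch of the definition above (finite chain,
# illustrative matrix): find the smallest t whose worst-case total variation
# distance to the stationary distribution is at most eps.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

def tv(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(p - q).sum()

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    Pt = np.eye(len(P))
    for t in range(1, t_max + 1):
        Pt = Pt @ P  # Pt = P^t; row x is the time-t distribution from x
        if max(tv(Pt[x], pi) for x in range(len(P))) <= eps:
            return t
    return None

print(pi, mixing_time(P, pi))
```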
Markov (surname)
Markov (Bulgarian: Марков), Markova, and Markoff are common surnames used in Russia and Bulgaria. Notable people with the name include:

Academics

* Ivana Markova (1938–2024), Czechoslovak-British emeritus professor of psychology at the University of Stirling
* John Markoff (sociologist) (born 1942), American professor of sociology and history at the University of Pittsburgh
* Konstantin Markov (1905–1980), Soviet geomorphologist and quaternary geologist

Mathematics, science, and technology

* Alexander V. Markov (born 1965), Russian biologist
* Andrey Markov (1856–1922), Russian mathematician
* Andrey Markov Jr. (1903–1979), Russian mathematician and son of Andrey Markov
* Elena Vladimirovna Markova (1923–2023), Soviet and Russian cyberneticist, Doctor of Technical Sciences, gulag convict and memoirist
* John Markoff (born 1949), American journalist of computer industry and technology
* Moisey Markov (1908–1994), Russian physicist
* Vladimir Andreevich Markov (1871–1897), Russian mathematician …
Variable-order Markov Model
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization. This realization sequence is often called the ''context''; therefore the VOM models are also called ''context trees''. VOM models are nicely rendered by colorized probabilistic suffix trees (PST). The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.

Example

Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {''a'', ''b'', ''c''}. Specifically, consider the string constructed from infinite concatenations …
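A minimal sketch of the variable-order idea, with an invented back-off scheme over contexts of length at most 2, trained on repetitions of the sub-string ''aaabc'':

```python
from collections import defaultdict

# A small sketch of the variable-order idea (my own toy code): store counts
# for contexts of length 0..MAX_ORDER and predict with the longest context
# actually seen, so different symbols are conditioned on different numbers
# of past symbols.
MAX_ORDER = 2

def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    for i, ch in enumerate(text):
        for k in range(MAX_ORDER + 1):
            if i >= k:
                counts[text[i - k:i]][ch] += 1
    return counts

def predict(counts, history):
    # Back off from the longest suffix of the history to shorter ones.
    for k in range(min(MAX_ORDER, len(history)), -1, -1):
        ctx = history[len(history) - k:]
        if ctx in counts:
            dist = counts[ctx]
            total = sum(dist.values())
            return {ch: n / total for ch, n in dist.items()}
    return {}

counts = train("aaabc" * 100)
print(predict(counts, "ab"))  # after "ab", 'c' is certain
print(predict(counts, "aa"))  # after "aa", 'a' and 'b' are both possible
```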
Maximum-entropy Markov Model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.

Model

Suppose we have a sequence of observations O_1, \dots, O_n that we seek to tag with the labels S_1, \dots, S_n that maximize the conditional probability P(S_1, \dots, S_n \mid O_1, \dots, O_n). In an MEMM, this probability is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label: …
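A schematic sketch of this factorization, with invented feature weights standing in for a trained maximum-entropy model:

```python
import math

# A schematic sketch of the factorization above (toy feature weights, my own):
# P(S | O) = product over t of P(s_t | s_{t-1}, o_t), with each local
# distribution a maximum-entropy (softmax) model over simple features.
LABELS = ["N", "V"]
W = {  # hypothetical weights for feature (prev_label, observation, label)
    ("N", "dog", "N"): 2.0, ("N", "runs", "V"): 2.0,
    ("START", "dog", "N"): 1.5, ("V", "dog", "N"): 1.0,
}

def p_local(label, prev, obs):
    """Softmax over labels given (previous label, current observation)."""
    scores = [math.exp(W.get((prev, obs, l), 0.0)) for l in LABELS]
    return math.exp(W.get((prev, obs, label), 0.0)) / sum(scores)

def p_sequence(labels, obs):
    prob, prev = 1.0, "START"
    for s, o in zip(labels, obs):
        prob *= p_local(s, prev, o)  # depends only on (prev label, obs)
        prev = s
    return prob

print(p_sequence(["N", "V"], ["dog", "runs"]))
```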
Hierarchical Hidden Markov Model
The hierarchical hidden Markov model (HHMM) is a statistical model derived from the hidden Markov model (HMM). In an HHMM, each state is considered to be a self-contained probabilistic model. More precisely, each state of the HHMM is itself an HHMM. HHMMs and HMMs are useful in many fields, including pattern recognition.

Background

It is sometimes useful to use HMMs in specific structures in order to facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data is available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM into a greater structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently when it comes to the amount of training data required.

Description

In the hierarchical hidden Markov model (HHMM), each state is considered to be …
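A purely structural sketch (my own illustration) of the recursive definition, where internal states contain sub-models and production states emit symbols:

```python
from dataclasses import dataclass, field

# A structural sketch only (my own illustration): each internal state of an
# HHMM is itself a sub-model, while production states emit symbols directly.
@dataclass
class HHMMState:
    name: str
    emissions: dict[str, float] = field(default_factory=dict)   # production state
    children: list["HHMMState"] = field(default_factory=list)   # internal state
    transitions: dict[str, float] = field(default_factory=dict)

    def is_production(self) -> bool:
        return not self.children

# A two-level toy: the root's states are themselves small models.
leaf_a = HHMMState("emit-a", emissions={"a": 0.9, "b": 0.1})
leaf_b = HHMMState("emit-b", emissions={"b": 1.0})
phrase = HHMMState("phrase", children=[leaf_a, leaf_b],
                   transitions={"emit-a": 0.5, "emit-b": 0.5})
root = HHMMState("root", children=[phrase])

print(phrase.is_production(), leaf_a.is_production())  # False True
```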
Layered Hidden Markov Model
The layered hidden Markov model (LHMM) is a statistical model derived from the hidden Markov model (HMM). A layered hidden Markov model consists of ''N'' levels of HMMs, where the HMMs on level ''i'' + 1 correspond to observation symbols or probability generators at level ''i''. Every level ''i'' of the LHMM consists of ''K''''i'' HMMs running in parallel.

Background

LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because less training data is needed.

The layered hidden Markov model

A layered hidden Markov model (LHMM) …
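A pipeline sketch of the layering (with toy stand-ins for the per-level HMMs): each level's inferred symbols become the observation stream for the level above.

```python
# A pipeline sketch of the layered idea (toy stand-ins, my own): each level's
# inferred symbols become the observation stream for the level above. Real
# LHMM levels would run HMM inference (e.g., Viterbi decoding); here each
# level is a simple stand-in mapping to keep the structure visible.
def level1(sensor):
    # e.g., raw readings -> primitive events
    return ["move" if v > 0.5 else "still" for v in sensor]

def level2(events):
    # e.g., primitive events -> activities, over a sliding window
    out = []
    for i in range(len(events) - 2):
        window = events[i:i + 3]
        out.append("walking" if window.count("move") >= 2 else "resting")
    return out

sensor = [0.1, 0.7, 0.9, 0.8, 0.2, 0.1]
events = level1(sensor)  # level-1 output feeds level 2
print(events)
print(level2(events))
```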