In physics, maximum entropy thermodynamics (colloquially, ''MaxEnt'' thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in 1957 in the ''Physical Review''.


Maximum Shannon entropy

Central to the MaxEnt thesis is the principle of maximum entropy. It takes as given a partly specified model and some specified data related to the model, and selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy,

:S_I = - \sum_i p_i \ln p_i .

This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function). A direct connection is thus made between the equilibrium thermodynamic entropy ''S''Th, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:

:S_\text{Th}(P,V,T,\ldots)_\text{eqm} = k_B \, S_I(P,V,T,\ldots)

''kB'', Boltzmann's constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann's constant).
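As a concrete illustration of the Gibbs algorithm in its simplest discrete form, the following sketch (in Python, with made-up energy levels and a made-up target mean energy) maximizes ''S''I subject to normalization and a fixed expectation value of the energy. The maximizing distribution has the canonical form ''p''i proportional to exp(−''βE''i), with the Lagrange multiplier ''β'' fixed numerically so that the constraint is met; the normalizing sum is the partition function mentioned above.

<syntaxhighlight lang="python">
# Minimal sketch of the Gibbs algorithm: maximize Shannon entropy
# subject to normalization and a fixed expectation value <E>.
# The maximizing distribution is p_i = exp(-beta*E_i)/Z, with the
# Lagrange multiplier beta fixed by the constraint sum_i p_i E_i = <E>.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy levels (arbitrary units)
E_mean = 1.2                          # illustrative constraint on <E>

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

# Solve <E>(beta) = E_mean for beta (mean energy is monotone decreasing in beta).
beta = brentq(lambda b: mean_energy(b) - E_mean, -50.0, 50.0)

w = np.exp(-beta * E)
Z = w.sum()                           # partition function
p = w / Z                             # MaxEnt (canonical) distribution
S_I = -np.sum(p * np.log(p))          # maximized Shannon entropy

print("beta =", beta)
print("p    =", p)
print("<E>  =", p @ E, "(target", E_mean, ")")
print("S_I  =", S_I)
</syntaxhighlight>

In this discrete setting the Lagrange multiplier plays the role of an inverse temperature, which is how the identification with equilibrium thermodynamics is made.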
However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising:

:S_I = - \sum_\Gamma p_\Gamma \ln p_\Gamma

This "information entropy" does ''not'' necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time.

For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, the maximum entropy approach yields the Onsager reciprocal relations and the Green–Kubo relations directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, a general definition of entropy for microscopic statistical mechanical accounts is lacking, just as it is for macroscopic descriptions.

''Technical note'': For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions. Instead the appropriate quantity to maximize is the "relative information entropy",

:H_c = -\int p(x)\log\frac{p(x)}{m(x)}\,dx.

''Hc'' is the negative of the Kullback–Leibler divergence, or discrimination information, of ''m''(''x'') from ''p''(''x''), where ''m''(''x'') is a prior invariant measure for the variable(s). The relative entropy ''Hc'' is always less than or equal to zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on ''p''(''x'') rather than ''m''(''x''). Unlike the Shannon entropy, the relative entropy ''Hc'' has the advantage of remaining finite and well-defined for continuous ''x'', and invariant under 1-to-1 coordinate transformations. The two expressions coincide (up to an additive constant that does not affect the maximization) for discrete probability distributions, if one can make the assumption that ''m''(''x''i) is uniform – i.e. the principle of equal a priori probability, which underlies statistical thermodynamics.
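For the discrete case the relationship between ''Hc'' and the Shannon entropy can be checked directly. The following sketch (illustrative numbers only) computes ''Hc'' against a uniform prior measure and confirms that it equals ''S''I minus log ''N'', so the two maximizations pick out the same distribution.

<syntaxhighlight lang="python">
# Discrete analogue of the relative information entropy H_c = -D_KL(p || m).
# With a uniform reference measure m, H_c equals the Shannon entropy minus log N,
# so maximizing H_c and maximizing S_I select the same distribution.
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # illustrative distribution
m = np.full_like(p, 1.0 / p.size)          # uniform prior measure

H_c = -np.sum(p * np.log(p / m))           # = -KL(p || m), always <= 0
S_I = -np.sum(p * np.log(p))               # Shannon entropy

print("H_c          =", H_c)
print("S_I - log(N) =", S_I - np.log(p.size))   # identical to H_c
</syntaxhighlight>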


Philosophical implications

Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below.


The nature of the probabilities in statistical mechanics

Jaynes (1985, 2003, ''et passim'') discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and remains reliable today.

Jaynes also used the word 'subjective' in this context because others had used it there. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists".

The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality. The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed, ''a priori''. For this reason MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account.


Is entropy "real"?

The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt ''STh'' is as "real" as the entropy in classical thermodynamics. Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description.


Is ergodic theory relevant?

The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on ''different'' systems, not again and again on the ''same'' system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in.

However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time.

If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model.


The second law

According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information, and then follow each of those microstates forward in time:

:\Delta S_I = 0 \,

However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6''N''-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities.

Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables—i.e., that none of the history of the system matters, so that it can all be ignored. The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy ''STh''(1), should reproduce the expectation values of the observed macroscopic variables at time ''t''2. However it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy ''STh''(2) assuredly ''will'' measure the maximum entropy distribution, by construction. Therefore, we expect:

:S_\text{Th}^{(2)} \geq S_\text{Th}^{(1)}

At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6''N''-dimensional probability distribution, this result represents coarse graining—i.e., information loss by smoothing out very fine-scale detail.
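This coarse-graining picture can be reproduced numerically. The sketch below (illustrative parameters; Arnold's cat map is used only as a stand-in for a generic volume-preserving, mixing dynamics) starts a cloud of points in a small region of a two-dimensional phase space and tracks the Shannon entropy of its binned occupancy: the fine-grained dynamics conserves phase-space volume, but the coarse-grained entropy climbs toward its maximum, the logarithm of the number of bins, as the cloud stretches into thin filaments.

<syntaxhighlight lang="python">
# Coarse-graining illustration: a volume-preserving map keeps fine-grained
# information constant, but the entropy of a *binned* (coarse-grained)
# description of the same point cloud increases toward its maximum, log(n_bins^2).
# Arnold's cat map on the unit torus stands in for generic mixing,
# measure-preserving dynamics; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_bins, n_steps = 100_000, 16, 8

# Start with all points in a small corner of the unit square (low coarse entropy).
x = rng.uniform(0.0, 0.1, n_points)
y = rng.uniform(0.0, 0.1, n_points)

def coarse_entropy(x, y, n_bins):
    """Shannon entropy of the occupancy of an n_bins x n_bins grid."""
    hist, _, _ = np.histogram2d(x, y, bins=n_bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for step in range(n_steps + 1):
    print(f"step {step}: coarse-grained entropy = {coarse_entropy(x, y, n_bins):.3f} "
          f"(max = {np.log(n_bins**2):.3f})")
    # Arnold's cat map (area-preserving): (x, y) -> (x + y, x + 2y) mod 1
    x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
</syntaxhighlight>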


Caveats with the argument

Some caveats should be considered with the above.

1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a ''prediction''. It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system. It is also sometimes suggested that quantum measurement, especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy).

2. The argument so far has glossed over the question of ''fluctuations''. It has also implicitly assumed that the uncertainty predicted at time ''t''1 for the variables at time ''t''2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new ''SI''(2) which is ''less'' than ''SI''(1). (Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time ''t''1 is now ''also'' reduced from ''SI''(1) to ''SI''(2)). We know that ''STh''(2) > ''SI''(2); but we can now no longer be certain that it is greater than ''STh''(1) = ''SI''(1). This then leaves open the possibility for fluctuations in ''STh''. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy fluctuation theorem, which can be established as a consequence of the time-dependent MaxEnt picture. A minimal numerical illustration of such an entropy-reducing measurement update is sketched after this list.

3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time ''t''2, we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past. The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing (Jaynes, 1979). It means that there is thus clear evidence that some important physical information has been missed in the specification of the problem. If it is correct that the dynamics ''are'' time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time).
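As a minimal numerical illustration of caveat 2 (with entirely made-up numbers), the sketch below updates a broad prior over a few discrete states with a moderately informative measurement likelihood via Bayes' rule; the posterior entropy ''SI''(2) comes out smaller than the prior entropy ''SI''(1).

<syntaxhighlight lang="python">
# Sketch of caveat 2: an informative measurement reduces the information entropy.
# The prior over states is broad; after conditioning on a measurement outcome via
# Bayes' rule, the posterior is sharper, so S_I(2) < S_I(1).  Numbers are illustrative.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

prior = np.array([0.25, 0.25, 0.25, 0.25])        # S_I(1): maximal uncertainty
likelihood = np.array([0.70, 0.15, 0.10, 0.05])   # P(observed outcome | state)

posterior = prior * likelihood
posterior /= posterior.sum()                      # Bayes' rule

print("S_I(1) (prior)     =", shannon_entropy(prior))
print("S_I(2) (posterior) =", shannon_entropy(posterior))
</syntaxhighlight>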


Criticisms

Maximum entropy thermodynamics has some important opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far from equilibrium. The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt school and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result".

Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until there is found a clear physical definition of entropy. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold, so that neither system has a well defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, for entropy for non-equilibrium systems in general, the definition will need at least to involve specification of the process, including non-zero fluxes, beyond the classical static thermodynamic state variables.

The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." (Attard, P. (2012). ''Non-Equilibrium Thermodynamics and Statistical Mechanics: Foundations and Applications'', Oxford University Press, Oxford UK, p. 161.) The physically defined second entropy can also be considered from an informational viewpoint.


See also

* Edwin Thompson Jaynes
* First law of thermodynamics
* Second law of thermodynamics
* Principle of maximum entropy
* Principle of Minimum Discrimination Information
* Kullback–Leibler divergence
* Quantum relative entropy
* Information theory and measure theory
* Entropy power inequality


References


Bibliography of cited references

* Guttmann, Y.M. (1999). ''The Concept of Probability in Statistical Physics'', Cambridge University Press, Cambridge UK.


Further reading

* Shows invalidity of Dewar's derivations (a) of maximum entropy production (MaxEP) from the fluctuation theorem for far-from-equilibrium systems, and (b) of a claimed link between MaxEP and self-organized criticality.
* Grandy, W. T., 1987. ''Foundations of Statistical Mechanics. Vol. 1: Equilibrium Theory; Vol. 2: Nonequilibrium Phenomena''. Dordrecht: D. Reidel.
* Extensive archive of further papers by E.T. Jaynes on probability and physics. Many are collected in Rosenkrantz, R. D. (ed.) (1983). ''E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics''. Dordrecht: D. Reidel.