The Bennett acceptance ratio method (BAR) is an algorithm for estimating the difference in free energy between two systems (usually the systems will be simulated on the computer).
It was suggested by Charles H. Bennett in 1976.

Preliminaries

Take a system in a certain super (i.e. Gibbs) state. By performing a Metropolis Monte Carlo walk it is possible to sample the landscape of states that the system moves between, using the equation
:$p(\text{State}_x \rightarrow \text{State}_y) = \min \left( e^{-\beta \, \Delta U}, 1 \right) = M(\beta \, \Delta U)$
where Δ''U'' = ''U''(State_{''y''}) − ''U''(State_{''x''}) is the difference in potential energy, β = 1/''kT'' (''T'' is the temperature in kelvins, while ''k'' is the Boltzmann constant), and $M(x) \equiv \min(e^{-x}, 1)$ is the Metropolis function.

The resulting states are then sampled according to the Boltzmann distribution of the super state at temperature ''T''. Alternatively, if the system is dynamically simulated in the canonical ensemble (also called the ''NVT'' ensemble), the resulting states along the simulated trajectory are likewise distributed. Averaging along the trajectory (in either formulation) is denoted by angle brackets $\left\langle \cdots \right\rangle$.

Suppose that two super states of interest, A and B, are given. We assume that they have a common configuration space, i.e., they share all of their micro states, but the energies associated to these (and hence the probabilities) differ because of a change in some parameter (such as the strength of a certain interaction). The basic question to be addressed is, then, how can the Helmholtz free energy change (Δ''F'' = ''F''_{B} − ''F''_{A}) on moving between the two super states be calculated from sampling in both ensembles? The kinetic-energy part of the free energy is equal between the two states, so it can be ignored. Analogously, the Gibbs free energy corresponds to the ''NpT'' ensemble.

The general case

Bennett shows that for every function ''f'' satisfying the condition $f(x)/f(-x) \equiv e^{-x}$ (which is essentially the detailed balance condition), and for every energy offset ''C'', one has the exact relationship
:$e^{-\beta (\Delta F - C)} = \frac{\left\langle f(\beta (U_\text{B} - U_\text{A} - C)) \right\rangle_\text{A}}{\left\langle f(\beta (U_\text{A} - U_\text{B} + C)) \right\rangle_\text{B}}$
where ''U''_{A} and ''U''_{B} are the potential energies of the same configurations, calculated using potential function A (when the system is in superstate A) and potential function B (when the system is in superstate B) respectively.
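The detailed balance condition on ''f'' is easy to verify numerically. The following sketch (function names are ours, for illustration) checks $f(x)/f(-x) = e^{-x}$ both for the Metropolis function and for the Fermi function $1/(1 + e^{x})$ that appears in the most efficient case below:

```python
import numpy as np

def metropolis(x):
    # M(x) = min(e^{-x}, 1)
    return np.minimum(np.exp(-x), 1.0)

def fermi(x):
    # f(x) = 1 / (1 + e^{x})
    return 1.0 / (1.0 + np.exp(x))

# Verify the detailed balance condition f(x) / f(-x) = e^{-x} on a grid.
x = np.linspace(-5.0, 5.0, 101)
for f in (metropolis, fermi):
    assert np.allclose(f(x) / f(-x), np.exp(-x))
```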

The basic case

Substituting for ''f'' the Metropolis function defined above (which satisfies the detailed balance condition), and setting ''C'' to zero, gives
:$e^{-\beta \, \Delta F} = \frac{\left\langle M(\beta (U_\text{B} - U_\text{A})) \right\rangle_\text{A}}{\left\langle M(\beta (U_\text{A} - U_\text{B})) \right\rangle_\text{B}}$
The advantage of this formulation (apart from its simplicity) is that it can be computed without performing two simulations, one in each specific ensemble. Indeed, it is possible to define an extra kind of "potential switching" Metropolis trial move (taken every fixed number of steps), such that the single sampling from the "mixed" ensemble suffices for the computation.
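As a minimal sketch (variable names are ours; β = 1 energy units assumed): given forward samples ''w_F'' of ''U''_{B} − ''U''_{A} from ensemble A and reverse samples ''w_R'' of ''U''_{A} − ''U''_{B} from ensemble B, the ''C'' = 0 estimator reads:

```python
import numpy as np

def bar_basic(w_F, w_R, beta=1.0):
    """BAR with f = Metropolis and C = 0 (a sketch, not production code).

    w_F: samples of U_B - U_A drawn in ensemble A (forward)
    w_R: samples of U_A - U_B drawn in ensemble B (reverse)
    """
    M = lambda x: np.minimum(np.exp(-x), 1.0)  # Metropolis function
    num = np.mean(M(beta * np.asarray(w_F, dtype=float)))
    den = np.mean(M(beta * np.asarray(w_R, dtype=float)))
    # e^{-beta DF} = <M(beta (U_B - U_A))>_A / <M(beta (U_A - U_B))>_B
    return -np.log(num / den) / beta

# Sanity check: a constant potential offset c gives Delta F = c exactly.
print(bar_basic([1.7] * 1000, [-1.7] * 1000))  # ~ 1.7
```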

The most efficient case

Bennett explores which specific expression for Δ''F'' is the most efficient, in the sense of yielding the smallest standard error for a given simulation time. He shows that the optimal choice is to take
# $f(x) \equiv \frac{1}{1 + e^{x}}$, which is essentially the Fermi–Dirac distribution (indeed satisfying the detailed balance condition).
# $C \approx \Delta F$. This value, of course, is not known (it is exactly what one is trying to compute), but it can be chosen approximately in a self-consistent manner.
Some assumptions needed for the efficiency are the following:
# The densities of the two super states (in their common configuration space) should have a large overlap. Otherwise, a chain of super states between A and B may be needed, such that the overlap of each two consecutive super states is adequate.
# The sample size should be large. In particular, as successive states are correlated, the simulation time should be much larger than the correlation time.
# The cost of simulating both ensembles should be approximately equal, in which case the system is sampled roughly equally in both super states. Otherwise, the optimal expression for ''C'' is modified, and the sampling should devote equal times (rather than equal numbers of time steps) to the two ensembles.
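One way to implement the self-consistent choice of ''C'' is to solve for the value at which Bennett's ratio equals one, so that ''C'' = Δ''F''. The sketch below (our own minimal version, assuming equal numbers of forward and reverse samples and β = 1 units) does this by bisection:

```python
import numpy as np

def bar_self_consistent(w_F, w_R, beta=1.0, tol=1e-10):
    """Solve for C = Delta F with the Fermi function (a sketch; assumes
    equal numbers of forward and reverse samples).

    At the self-consistent point, <f(beta (w_F - C))>_A equals
    <f(beta (w_R + C))>_B, so Bennett's ratio is one and C = Delta F.
    """
    w_F = np.asarray(w_F, dtype=float)
    w_R = np.asarray(w_R, dtype=float)
    fermi = lambda x: 1.0 / (1.0 + np.exp(x))
    # g is monotonically increasing in C, so bisection finds the unique root.
    g = lambda C: np.mean(fermi(beta * (w_F - C))) - np.mean(fermi(beta * (w_R + C)))
    lo, hi = -50.0, 50.0  # assumed bracket for Delta F
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: a constant potential offset c again gives Delta F = c.
print(bar_self_consistent([2.5] * 200, [-2.5] * 200))  # ~ 2.5
```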

Multistate Bennett acceptance ratio

The multistate Bennett acceptance ratio (MBAR) is a generalization of the Bennett acceptance ratio that calculates the (relative) free energies of several super states simultaneously. It essentially reduces to the BAR method when only two super states are involved.
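As a toy sketch (our own naive fixed-point iteration in reduced units, not the production pymbar implementation), the MBAR self-consistent equations can be solved as follows; `u_kn[k, n]` is the reduced potential of pooled sample `n` evaluated with potential function `k`:

```python
import numpy as np
from scipy.special import logsumexp

def mbar(u_kn, N_k, n_iter=200):
    """Toy MBAR solver (naive self-consistent iteration, for illustration).

    u_kn[k, n]: reduced potential (beta * U) of pooled sample n in state k
    N_k[k]:     number of samples drawn from state k (sum N_k = n samples)
    Returns reduced free energies f_k, normalized so that f_0 = 0.
    """
    K, N = u_kn.shape
    f = np.zeros(K)
    for _ in range(n_iter):
        # log of the mixture denominator sum_k N_k exp(f_k - u_kn[k, n])
        log_den = logsumexp(np.log(N_k)[:, None] + f[:, None] - u_kn, axis=0)
        f = -logsumexp(-u_kn - log_den[None, :], axis=1)
        f -= f[0]  # fix the additive gauge
    return f

# Sanity check: if state 1 is state 0 shifted by a constant c, then f_1 = c.
rng = np.random.default_rng(1)
u0 = rng.normal(size=100)
c = 0.8
u_kn = np.vstack([u0, u0 + c])
print(mbar(u_kn, np.array([50, 50])))  # f[0] = 0, f[1] ~ 0.8
```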

Relation to other methods

The perturbation theory method

This method, also called free energy perturbation (or FEP), involves sampling from super state A only. It requires that all the high probability configurations of super state B are contained in the high probability configurations of super state A, which is a much more stringent requirement than the overlap condition stated above.

The exact (infinite order) result

:$e^{-\beta \, \Delta F} = \left\langle e^{-\beta (U_\text{B} - U_\text{A})} \right\rangle_\text{A}$
or
:$\Delta F = -kT \cdot \log \left\langle e^{-\beta (U_\text{B} - U_\text{A})} \right\rangle_\text{A}$
This exact result can be obtained from the general BAR method, using (for example) the Metropolis function, in the limit $C \rightarrow -\infty$. Indeed, in that case, the denominator of the general case expression above tends to 1, while the numerator tends to $e^{\beta C} \left\langle e^{-\beta (U_\text{B} - U_\text{A})} \right\rangle_\text{A}$. A direct derivation from the definitions is more straightforward, though.
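The exponential average is usually computed with a max-shift for numerical stability; a minimal sketch (names are ours, β = 1 units assumed):

```python
import numpy as np

def fep(w_F, beta=1.0):
    """Zwanzig / FEP estimate from forward samples only
    (w_F holds samples of U_B - U_A drawn in ensemble A)."""
    w = beta * np.asarray(w_F, dtype=float)
    shift = w.min()  # factor out the dominant exponential for stability
    # Delta F = -kT log <e^{-beta (U_B - U_A)}>_A
    return (shift - np.log(np.mean(np.exp(-(w - shift))))) / beta

# Sanity check: a constant offset c gives Delta F = c exactly.
print(fep([3.2] * 10))  # ~ 3.2
```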

The second order (approximate) result

Assuming that $U_\text{B} - U_\text{A} \ll kT$ and Taylor expanding the second exact perturbation theory expression to second order, one gets the approximation
:$\Delta F \approx \left\langle U_\text{B} - U_\text{A} \right\rangle_\text{A} - \frac{\beta}{2} \left( \left\langle (U_\text{B} - U_\text{A})^2 \right\rangle_\text{A} - \left\langle U_\text{B} - U_\text{A} \right\rangle_\text{A}^2 \right)$
Note that the first term is the expected value of the energy difference, while the second is essentially its variance.
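A quick numerical illustration (our own toy check, β = 1 units): for Gaussian-distributed energy differences the second-order expression happens to be exact, since the exponential average of $\mathcal{N}(\mu, \sigma^2)$ gives $\Delta F = \mu - \beta \sigma^2 / 2$:

```python
import numpy as np

def fep_second_order(w_F, beta=1.0):
    # Delta F ~= <w>_A - (beta/2) Var(w)_A, with w = U_B - U_A sampled in A
    w = np.asarray(w_F, dtype=float)
    return np.mean(w) - 0.5 * beta * np.var(w)

# For Gaussian w ~ N(mu, sigma^2) the expansion is exact:
# Delta F = mu - beta * sigma^2 / 2.
rng = np.random.default_rng(0)
w = rng.normal(2.0, 0.1, 200_000)
print(fep_second_order(w))  # close to 2.0 - 0.5 * 0.01 = 1.995
```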

The first order inequalities

Using the convexity of the log function appearing in the exact perturbation analysis result, together with Jensen's inequality, gives an inequality at the linear level; combined with the analogous result for the B ensemble, one gets the following version of the Gibbs–Bogoliubov inequality:
:$\left\langle U_\text{B} - U_\text{A} \right\rangle_\text{B} \le \Delta F \le \left\langle U_\text{B} - U_\text{A} \right\rangle_\text{A}$
Note that the inequality agrees with the negative sign of the coefficient of the (positive) variance term in the second order result.
The thermodynamic integration method

Writing the potential energy as depending on a continuous parameter, $U_\text{A} = U(\lambda = 0)$ and $U_\text{B} = U(\lambda = 1)$, one has the exact result
:$\frac{dF}{d\lambda} = \left\langle \frac{\partial U}{\partial \lambda} \right\rangle_\lambda$
This can either be directly verified from the definitions or seen from the limit of the above Gibbs–Bogoliubov inequalities when the super states A and B are infinitesimally close in λ. We can therefore write
:$\Delta F = \int_0^1 \left\langle \frac{\partial U}{\partial \lambda} \right\rangle_\lambda \, d\lambda$
which is the thermodynamic integration (or TI) result. It can be approximated by dividing the range between states A and B into many values of λ at which the expectation value is estimated, and performing numerical integration.
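The λ integral is evaluated numerically in practice; the sketch below (hypothetical inputs: precomputed trajectory averages of ∂''U''/∂λ at a grid of λ values) applies the trapezoidal rule:

```python
import numpy as np

def ti_estimate(dudl_means, lambdas):
    """Trapezoidal-rule estimate of Delta F = int_0^1 <dU/dlambda>_lambda dlambda.

    dudl_means[i]: trajectory average of dU/dlambda simulated at lambdas[i]
    """
    d = np.asarray(dudl_means, dtype=float)
    lam = np.asarray(lambdas, dtype=float)
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(lam)))

# Toy check: for a linear path U(lambda) = U_A + lambda (U_B - U_A) with a
# constant average <U_B - U_A> = 1.3 at every lambda, Delta F = 1.3.
lams = np.linspace(0.0, 1.0, 11)
print(ti_estimate(np.full(11, 1.3), lams))  # ~ 1.3
```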

Implementation

The Bennett acceptance ratio method is implemented in modern molecular dynamics systems, such as Gromacs. Python-based code for MBAR and BAR is available for download.

See also

* Parallel tempering

References

Charles H. Bennett (1976). Efficient estimation of free energy differences from Monte Carlo data. ''Journal of Computational Physics'' 22: 245–268.

External links

Bennett Acceptance Ratio, from AlchemistryWiki.

Multistate Bennett Acceptance Ratio, from AlchemistryWiki.

Weighted Histogram Analysis Method (MBAR being the unbinned case), from AlchemistryWiki.