Random Subcube Model

In statistical mechanics, the random-subcube model (RSM) is an exactly solvable model that reproduces key properties of hard constraint satisfaction problems (CSPs) and optimization problems, such as the geometrical organization of solutions, the effects of frozen variables, and the limitations of various algorithms such as decimation schemes. The RSM consists of a set of ''N'' binary variables, where solutions are defined as points in a hypercube. The model introduces clusters, which are random subcubes of the hypercube, representing groups of solutions sharing specific characteristics. As the density of constraints increases, the solution space undergoes a series of phase transitions similar to those observed in CSPs such as random k-satisfiability (k-SAT) and random k-coloring (k-COL). These transitions include clustering, condensation, and ultimately the unsatisfiable phase, where no solutions exist. The RSM is equivalent to these real CSPs in the limit of large constraint size: notably, it reproduces the cluster size distribution and freezing properties of k-SAT and k-COL in the large-k limit. This is similar to how the random energy model is the large-p limit of the p-spin glass model.


Setup


Subcubes

There are N particles. Each particle can be in one of two states, -1 or +1. The state space \{-1,+1\}^N has 2^N states, but not all of them are available: only those satisfying the constraints are allowed. Each constraint is a subset A_i of the state space. Each A_i is a "subcube", structured as A_i = \prod_{j=1}^N A_{ij}, where each A_{ij} is one of \{-1\}, \{+1\}, \{-1,+1\}. The available states are the union of these subsets: S = \cup_i A_i.
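The subcube structure above can be made concrete with a short brute-force sketch (for small N only; the function and variable names are illustrative, not from the source):

```python
from itertools import product

# A subcube A_i is a length-N tuple whose j-th entry is the set of values
# allowed for particle j: {-1}, {+1}, or {-1, +1}.
def in_subcube(state, subcube):
    """Check whether a state (tuple of +/-1 values) lies in the subcube."""
    return all(x in allowed for x, allowed in zip(state, subcube))

def allowed_states(subcubes, N):
    """The available states S = union of the subcubes A_i, by enumeration."""
    return {s for s in product((-1, +1), repeat=N)
            if any(in_subcube(s, A) for A in subcubes)}

# Example with N = 3: A1 freezes particle 0 to -1, A2 freezes particle 2 to +1.
A1 = ({-1}, {-1, +1}, {-1, +1})
A2 = ({-1, +1}, {-1, +1}, {+1})
S = allowed_states([A1, A2], 3)
print(len(S))  # |A1| + |A2| - |A1 ∩ A2| = 4 + 4 - 2 = 6
```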


Random subcube model

Each random subcube model is defined by two parameters p \in (0, 1) and \alpha > 0. To generate a random subcube A_i, sample its components A_{ij} i.i.d. according to \begin{cases} \Pr(A_{ij} = \{-1\}) = p/2 \\ \Pr(A_{ij} = \{+1\}) = p/2 \\ \Pr(A_{ij} = \{-1,+1\}) = 1-p \end{cases} Now sample 2^{(1-\alpha)N} random subcubes, and take their union.
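The sampling procedure can be sketched as follows (a minimal illustration; `sample_rsm` is a hypothetical name, and the number of subcubes is rounded down to an integer):

```python
import random

def sample_rsm(N, alpha, p, rng=random):
    """Sample one RSM instance: floor(2^((1-alpha)N)) random subcubes.
    Each component is {-1} or {+1} with probability p/2 each,
    and the free set {-1, +1} with probability 1-p."""
    n_clusters = int(2 ** ((1 - alpha) * N))
    subcubes = []
    for _ in range(n_clusters):
        cube = []
        for _ in range(N):
            u = rng.random()
            if u < p / 2:
                cube.append(frozenset({-1}))
            elif u < p:
                cube.append(frozenset({+1}))
            else:
                cube.append(frozenset({-1, +1}))
        subcubes.append(tuple(cube))
    return subcubes

random.seed(0)
cubes = sample_rsm(N=10, alpha=0.5, p=0.3)
print(len(cubes))  # 2^((1-0.5)*10) = 32 subcubes
```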


Entropies

The entropy density of the r-th cluster, in bits, is s_r := \frac{1}{N} \log_2 |A_r|. The entropy density of the system, in bits, is s := \frac{1}{N} \log_2 |\cup_r A_r|.
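Since a subcube's size is 2 raised to its number of free components, the cluster entropy density is just the fraction of free components; the system entropy density requires enumerating the union (feasible only for small N). A sketch, with illustrative names:

```python
import math
from itertools import product

def cluster_entropy_density(subcube):
    """s_r = (1/N) log2 |A_r|, and |A_r| = 2^(number of free components)."""
    N = len(subcube)
    free = sum(1 for comp in subcube if len(comp) == 2)
    return free / N

def total_entropy_density(subcubes, N):
    """s = (1/N) log2 |union of clusters|, by brute-force enumeration."""
    S = {s for s in product((-1, +1), repeat=N)
         if any(all(x in a for x, a in zip(s, A)) for A in subcubes)}
    return math.log2(len(S)) / N if S else float("-inf")

# One frozen component and two free components: s_r = 2/3.
A1 = (frozenset({-1}), frozenset({-1, +1}), frozenset({-1, +1}))
print(cluster_entropy_density(A1))
```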


Phase structure


Cluster sizes and numbers

Let n(s) be the number of clusters with entropy density s; then it is binomially distributed, thus \begin{aligned} E[n(s)] &= 2^{(1-\alpha)N} P \to 2^{N\Sigma(s)} \\ \operatorname{Var}[n(s)] &= 2^{(1-\alpha)N} P(1-P) \\ \frac{\operatorname{Var}[n(s)]}{E[n(s)]^2} &\to 2^{-N\Sigma(s)} \end{aligned} where \begin{aligned} P &:= \binom{N}{sN} (1-p)^{sN} p^{(1-s)N}, \\ \Sigma(s) &:= 1-\alpha - D_{KL}(s \,\|\, 1-p), \\ D_{KL}(s \,\|\, 1-p) &:= s\log_2\frac{s}{1-p} + (1-s) \log_2\frac{1-s}{p}. \end{aligned} By the Chebyshev inequality, if \Sigma(s) > 0, then n(s) concentrates to its mean value. Otherwise, since E[n(s)] \to 0, n(s) also concentrates to 0 by the Markov inequality. Thus, n(s) \to \begin{cases} 2^{N\Sigma(s)} \quad &\text{if } \Sigma(s) > 0 \\ 0 \quad &\text{if } \Sigma(s) < 0 \end{cases} almost surely as N\to\infty. When \Sigma(s) = 0 exactly, the two forces exactly balance each other out, and n(s) does not collapse, but instead converges in distribution to the Poisson distribution Poisson(1) by the law of small numbers.
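The complexity function \Sigma(s) determines the whole phase structure, so it is worth having as a small utility (a sketch; `kl` and `complexity` are illustrative names, with the convention 0 log 0 = 0):

```python
import math

def kl(s, q):
    """Binary Kullback-Leibler divergence D(s || q) in bits, with 0 log 0 = 0."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(s, q) + term(1 - s, 1 - q)

def complexity(s, alpha, p):
    """Sigma(s) = 1 - alpha - D(s || 1-p): the exponential growth rate of the
    number of clusters with entropy density s; for large N, clusters with
    Sigma(s) < 0 are almost surely absent."""
    return 1 - alpha - kl(s, 1 - p)

# At s = 1-p the KL term vanishes, so Sigma(1-p) = 1 - alpha.
print(complexity(0.7, alpha=0.4, p=0.3))  # 0.6
```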


Liquid phase

For each state, the number of clusters it is in is also binomially distributed, with expectation 2^{(1-\alpha)N}(1-p/2)^N = 2^{N(\log_2(2-p)-\alpha)}. So if \alpha < \log_2(2-p), then it concentrates to 2^{N(\log_2(2-p)-\alpha)}, and so each state is in an exponential number of clusters. Indeed, in that case, the probability that ''all'' states are allowed is \left[1-\left(1-(1 - p/2)^N\right)^{2^{(1-\alpha)N}}\right]^{2^N} \to 1, since the probability that a fixed state is in no cluster, \left(1-(1-p/2)^N\right)^{2^{(1-\alpha)N}} \sim e^{-2^{N(\log_2(2-p)-\alpha)}}, is doubly exponentially small. Thus almost surely, all states are allowed, and the entropy density is 1 bit per particle.
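The identity 2^{(1-\alpha)N}(1-p/2)^N = 2^{N(\log_2(2-p)-\alpha)} follows from 1-p/2 = (2-p)/2, so the sign of the exponent \log_2(2-p)-\alpha decides the phase. A sketch of this bookkeeping (function name illustrative):

```python
import math

def clusters_per_state_exponent(alpha, p):
    """Growth exponent of the expected number of clusters containing a fixed
    state: 2^{(1-alpha)N} (1-p/2)^N = 2^{N (log2(2-p) - alpha)}."""
    return math.log2(2 - p) - alpha

p = 0.5
alpha_d = math.log2(2 - p)  # clustering threshold for this p
print(clusters_per_state_exponent(alpha_d - 0.1, p) > 0)  # liquid: exponentially many
print(clusters_per_state_exponent(alpha_d + 0.1, p) < 0)  # clustered: typically none
```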


Clustered phase

If \alpha > \alpha_d := \log_2(2-p), then the expected number of clusters containing a given state concentrates to zero exponentially, and so most states are not in any cluster, while those that are in one are exponentially unlikely to be in more than one. Thus, almost all states are in zero clusters, and of those in at least one cluster, almost all are in just one cluster. The state space is thus, roughly speaking, the disjoint union of the clusters. Almost surely, there are n(s) = 2^{N\Sigma(s)} clusters of size 2^{Ns}; therefore, the state space is dominated by clusters with optimal entropy density s^* = \arg \max_s (\Sigma (s) + s). Thus, in the clustered phase, the state space is almost entirely partitioned among 2^{N\Sigma(s^*)} clusters of size 2^{Ns^*} each. Roughly, the state space looks like exponentially many equally-sized clusters.
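The maximization defining s^* can be done numerically; the sketch below uses a dense grid search over \Sigma(s)+s = 1-\alpha-D(s\|1-p)+s (names and the grid resolution are illustrative choices):

```python
import math

def kl(s, q):
    """Binary KL divergence D(s || q) in bits, with 0 log 0 = 0."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(s, q) + term(1 - s, 1 - q)

def dominant_entropy_density(alpha, p, grid=10**5):
    """s* = argmax_s (Sigma(s) + s), by grid search on [0, 1]."""
    best_s, best_val = 0.0, float("-inf")
    for k in range(grid + 1):
        s = k / grid
        val = 1 - alpha - kl(s, 1 - p) + s
        if val > best_val:
            best_s, best_val = s, val
    return best_s

# p = 0.5, alpha = 0.8 lies in the clustered regime (alpha_d = log2(1.5) ~ 0.585).
p, alpha = 0.5, 0.8
s_star = dominant_entropy_density(alpha, p)
print(round(s_star, 3))  # 0.667
```

Setting the derivative of \Sigma(s)+s to zero gives the stationarity condition \Sigma'(s^*) = -1, which the grid search approximates without calculus.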


Condensation phase

Another phase transition occurs when \Sigma(s^*) = 0, that is, \alpha = \alpha_c := \frac{p}{2-p}+\log_2(2-p). When \alpha > \alpha_c, the optimal entropy density becomes unreachable, as there almost surely exist zero clusters with entropy density s^*. Instead, the state space is dominated by clusters with entropy density close to s_c, the larger solution to \Sigma(s_c) = 0. Near s_c, the contribution of clusters with entropy density s = s_c - \delta to the total state space is \underbrace{2^{N\Sigma(s)}}_{\text{number of clusters}} \times \underbrace{2^{Ns}}_{\text{cluster size}} = 2^{N(\Sigma(s)+s)} = 2^{N(s_c - (1-|\Sigma'(s_c)|)\delta)}. At large N, the possible entropy densities are s_c, s_c - 1/N, s_c - 2/N, \dots, and their contributions are 2^{Ns_c}, 2^{Ns_c}2^{-(1-|\Sigma'(s_c)|)}, 2^{Ns_c}2^{-2(1-|\Sigma'(s_c)|)}, \dots, a geometrically decaying sequence. Thus, for any \epsilon > 0, in the N \to \infty limit, over 1-\epsilon of the total state space is covered by only a finite number of clusters. The state space looks partitioned into clusters with exponentially decaying sizes. This is the condensation phase.
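The threshold \alpha_c and the root s_c can be computed numerically; the sketch below finds the larger root of \Sigma(s) = 0 by bisection on [1-p, 1], where \Sigma is maximal at s = 1-p and decreasing to the right of it (names are illustrative):

```python
import math

def kl(s, q):
    """Binary KL divergence D(s || q) in bits, with 0 log 0 = 0."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(s, q) + term(1 - s, 1 - q)

def sigma(s, alpha, p):
    """Complexity Sigma(s) = 1 - alpha - D(s || 1-p)."""
    return 1 - alpha - kl(s, 1 - p)

def s_c(alpha, p, iters=100):
    """Larger root of Sigma(s) = 0, by bisection on [1-p, 1]."""
    lo, hi = 1 - p, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sigma(mid, alpha, p) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = 0.5
alpha_c = p / (2 - p) + math.log2(2 - p)  # condensation threshold from the text
print(round(alpha_c, 3))  # 0.918
```

As a consistency check, exactly at \alpha = \alpha_c the root s_c coincides with the optimal density s^* (here 2(1-p)/(2-p) = 2/3 for p = 0.5).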


Unsatisfiable phase

When \alpha > 1, the number of clusters 2^{(1-\alpha)N} converges to zero, so almost surely there are no clusters, and hence no allowed states.


Extensions

The RSM can be extended to include energy landscapes, allowing for the study of glassy behavior, temperature chaos, and the dynamic transition.


See also

* Random energy model

