Overview
Raghavan gives this description: ''We first show the existence of a provably good approximate solution using the probabilistic method...''

Efficiency
In a typical application of the method, the goal is to be able to implement the resulting deterministic process by a reasonably efficient algorithm (the word ''efficient'' usually means an algorithm that runs in polynomial time).

Example using conditional expectations
This example demonstrates the method of conditional probabilities using a conditional expectation.

Max-Cut Lemma
Given any undirected graph ''G'' = (''V'', ''E''), the vertices can be colored black and white so that at least |''E''|/2 of the edges are ''cut'' (that is, have endpoints of different colors).

Probabilistic proof. Color each vertex black or white by flipping a fair coin. By calculation, for any edge ''e'' in ''E'', the probability that it is cut is 1/2. Thus, by linearity of expectation, the expected number of edges cut is |''E''|/2. Thus, there exists a coloring that cuts at least |''E''|/2 edges. ''QED''
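The expectation in this proof is easy to check empirically. The following is a minimal Python sketch (the 5-cycle test graph and all names here are illustrative choices, not from the original):

<syntaxhighlight lang="python">
import random

def random_cut_size(n, edges):
    """Color each of n vertices black/white by a fair coin; count cut edges."""
    color = [random.random() < 0.5 for _ in range(n)]
    return sum(1 for u, v in edges if color[u] != color[v])

# Illustrative test graph: the 5-cycle, so |E|/2 = 2.5.
edges = [(i, (i + 1) % 5) for i in range(5)]
trials = 100_000
avg = sum(random_cut_size(5, edges) for _ in range(trials)) / trials
print(f"average cut: {avg:.3f}  (|E|/2 = {len(edges) / 2})")
</syntaxhighlight>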
The method of conditional probabilities with conditional expectations
To apply the method of conditional probabilities, first model the random experiment as a sequence of small random steps. In this case it is natural to consider each step to be the choice of color for a particular vertex (so there are |''V''| steps).

Next, replace the random choice at each step by a deterministic choice, so as to keep the conditional probability of failure, given the vertices colored so far, below 1. (Here ''failure'' means that finally fewer than |''E''|/2 edges are cut.)

In this case, the conditional probability of failure is not easy to calculate. Indeed, the original proof did not calculate the probability of failure directly; instead, the proof worked by showing that the expected number of cut edges was at least |''E''|/2.

Let random variable ''Q'' be the number of edges cut. To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of ''Q'' at or above the threshold |''E''|/2. (This is because as long as the conditional expectation of ''Q'' is at least |''E''|/2, there must be some still-reachable outcome where ''Q'' is at least |''E''|/2, so the conditional probability of reaching such an outcome is positive.) To keep the conditional expectation of ''Q'' at |''E''|/2 or above, the algorithm will, at each step, color the vertex under consideration so as to ''maximize'' the resulting conditional expectation of ''Q''. This suffices, because there must be some child whose conditional expectation is at least the current state's conditional expectation (and thus at least |''E''|/2).

Given that some of the vertices are colored already, what is this conditional expectation? Following the logic of the original proof, the conditional expectation of the number of cut edges is

:: ''the number of edges whose endpoints are colored differently so far''
:: + (1/2)·(''the number of edges with at least one endpoint not yet colored'').
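As a concrete sketch of this formula (the function name and graph representation are our own choices, not from the original), the conditional expectation can be computed directly from a partial coloring:

<syntaxhighlight lang="python">
def conditional_expectation(edges, color):
    """Expected number of cut edges, given the partial coloring ``color``.

    ``color`` maps each already-colored vertex to 'black' or 'white';
    uncolored vertices are simply absent from the mapping.
    """
    expectation = 0.0
    for u, v in edges:
        if u in color and v in color:
            # Both endpoints colored: the edge is cut or not, deterministically.
            expectation += 1.0 if color[u] != color[v] else 0.0
        else:
            # At least one endpoint still random: cut with probability 1/2.
            expectation += 0.5
    return expectation
</syntaxhighlight>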
Algorithm

The algorithm colors each vertex to maximize the resulting value of the above conditional expectation. This is guaranteed to keep the conditional expectation at |''E''|/2 or above, and so is guaranteed to keep the conditional probability of failure below 1, which in turn guarantees a successful outcome. By calculation, the algorithm simplifies to the following:

 1. For each vertex ''u'' in ''V'' (in any order):
 2.   Consider the already-colored neighboring vertices of ''u''.
 3.   Among these vertices, if more are black than white, then color ''u'' white.
 4.   Otherwise, color ''u'' black.

Because of its derivation, this deterministic algorithm is guaranteed to cut at least half the edges of the given graph. (This makes it a 0.5-approximation algorithm for Max-cut.)
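In Python, the simplified algorithm might be rendered as follows (a sketch under our own naming and graph representation):

<syntaxhighlight lang="python">
def greedy_max_cut(vertices, edges):
    """Deterministically 2-color the vertices so at least |E|/2 edges are cut."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    color = {}  # vertex -> 'black' or 'white'
    for u in vertices:  # any fixed order works
        done = [color[w] for w in neighbors[u] if w in color]
        # Give u the color that cuts the majority of already-colored neighbors.
        color[u] = 'white' if done.count('black') > done.count('white') else 'black'
    return color

# Example: on the 5-cycle the coloring must cut at least 2.5, hence 3, edges.
vertices = list(range(5))
edges = [(i, (i + 1) % 5) for i in range(5)]
coloring = greedy_max_cut(vertices, edges)
assert sum(1 for u, v in edges if coloring[u] != coloring[v]) >= len(edges) / 2
</syntaxhighlight>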
Example using pessimistic estimators

The next example demonstrates the use of pessimistic estimators.

Turán's theorem
One way of stating Turán's theorem is the following:

: Any graph ''G'' = (''V'', ''E'') contains an independent set of size at least |''V''|/(''D''+1), where ''D'' = 2|''E''|/|''V''| is the average degree of the graph.

Probabilistic proof of Turán's theorem
Consider the following random process for constructing an independent set ''S'':

 1. Initialize ''S'' to be the empty set.
 2. For each vertex ''u'' in ''V'' in random order:
 3.   If no neighbors of ''u'' are in ''S'', add ''u'' to ''S''.
 4. Return ''S''.

Clearly the process computes an independent set. Any vertex ''u'' that is considered before all of its neighbors will be added to ''S''. Thus, letting ''d''(''u'') denote the degree of ''u'', the probability that ''u'' is added to ''S'' is at least 1/(''d''(''u'')+1). By linearity of expectation, the expected size of ''S'' is at least

: <math>\sum_{u\in V} \frac{1}{d(u)+1} \;\ge\; \frac{|V|}{D+1}.</math>

(The inequality above follows because 1/(''x''+1) is convex in ''x'', so the left-hand side is minimized, subject to the sum of the degrees being fixed at 2|''E''|, when each ''d''(''u'') = ''D'' = 2|''E''|/|''V''|.) ''QED''
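This random process is short enough to simulate directly; the sketch below (test graph and names illustrative, not from the original) estimates the expected size of ''S'' against the bound |''V''|/(''D''+1):

<syntaxhighlight lang="python">
import random

def random_independent_set(vertices, neighbors):
    """Consider the vertices in random order; add u to S if no neighbor of u is in S."""
    order = list(vertices)
    random.shuffle(order)
    S = set()
    for u in order:
        if not neighbors[u] & S:
            S.add(u)
    return S

# Illustrative test graph: the 5-cycle, where D = 2 and |V|/(D+1) = 5/3.
neighbors = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
trials = 100_000
avg = sum(len(random_independent_set(range(5), neighbors))
          for _ in range(trials)) / trials
print(f"average |S|: {avg:.3f}  (bound: {5 / 3:.3f})")
</syntaxhighlight>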
The method of conditional probabilities using pessimistic estimators

In this case, the random process has |''V''| steps. Each step considers some not-yet considered vertex ''u'' and adds ''u'' to ''S'' if none of its neighbors have yet been added. Let random variable ''Q'' be the number of vertices added to ''S''. The proof shows that ''E''[''Q''] ≥ |''V''|/(''D''+1).

We will replace each random step by a deterministic step that keeps the conditional expectation of ''Q'' at or above |''V''|/(''D''+1). This will ensure a successful outcome, that is, one in which the independent set ''S'' has size at least |''V''|/(''D''+1), realizing the bound in Turán's theorem.

Given that the first ''t'' steps have been taken, let ''S''<sup>(''t'')</sup> denote the vertices added so far. Let ''R''<sup>(''t'')</sup> denote those vertices that have not yet been considered, and that have no neighbors in ''S''<sup>(''t'')</sup>. Given the first ''t'' steps, following the reasoning in the original proof, any given vertex ''w'' in ''R''<sup>(''t'')</sup> has conditional probability at least 1/(''d''(''w'')+1) of being added to ''S'', so the conditional expectation of ''Q'' is ''at least''

: <math>|S^{(t)}| + \sum_{w\in R^{(t)}} \frac{1}{d(w)+1}.</math>

Let ''Q''<sup>(''t'')</sup> denote the above quantity, which is called a pessimistic estimator for the conditional expectation.

The proof showed that the pessimistic estimator is initially at least |''V''|/(''D''+1). (That is, ''Q''<sup>(0)</sup> ≥ |''V''|/(''D''+1).) The algorithm will make each choice to keep the pessimistic estimator from decreasing, that is, so that ''Q''<sup>(''t''+1)</sup> ≥ ''Q''<sup>(''t'')</sup> for each ''t''. Since the pessimistic estimator is a lower bound on the conditional expectation, this will ensure that the conditional expectation stays above |''V''|/(''D''+1), which in turn will ensure that the conditional probability of failure stays below 1.

Let ''u'' be the vertex considered by the algorithm in the next ((''t''+1)-st) step. If ''u'' already has a neighbor in ''S'', then ''u'' is not added to ''S'' and (by inspection of ''Q''<sup>(''t'')</sup>) the pessimistic estimator is unchanged. If ''u'' does ''not'' have a neighbor in ''S'', then ''u'' is added to ''S''.

By calculation, if ''u'' is chosen randomly from the remaining vertices, the expected increase in the pessimistic estimator is non-negative. [The calculation: conditioned on choosing a vertex in ''R''<sup>(''t'')</sup>, the probability that a given term 1/(''d''(''w'')+1) is dropped from the sum in the pessimistic estimator is at most (''d''(''w'')+1)/|''R''<sup>(''t'')</sup>|, so the expected decrease in each term in the sum is at most 1/|''R''<sup>(''t'')</sup>|. There are |''R''<sup>(''t'')</sup>| terms in the sum. Thus, the expected decrease in the sum is at most 1. Meanwhile, the size of ''S'' increases by 1.]

Thus, there must exist some choice of ''u'' that keeps the pessimistic estimator from decreasing.
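Written out in code, the pessimistic estimator ''Q''<sup>(''t'')</sup> is a one-line computation on the current state (a sketch; the argument names are ours):

<syntaxhighlight lang="python">
def pessimistic_estimator(S, R, degree):
    """Q(t) = |S(t)| + sum over w in R(t) of 1/(d(w)+1).

    S: vertices added so far; R: not-yet-considered vertices with no
    neighbor in S; degree: the *original* degree d(w) of each vertex.
    """
    return len(S) + sum(1.0 / (degree[w] + 1) for w in R)
</syntaxhighlight>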
Algorithm maximizing the pessimistic estimator

The algorithm below chooses each vertex ''u'' to maximize the resulting pessimistic estimator. By the previous considerations, this keeps the pessimistic estimator from decreasing and guarantees a successful outcome. Below, ''N''<sup>(''t'')</sup>(''u'') denotes the neighbors of ''u'' in ''R''<sup>(''t'')</sup> (that is, neighbors of ''u'' that are neither in ''S'' nor have a neighbor in ''S'').

 1. Initialize ''S'' to be the empty set.
 2. While there exists a not-yet-considered vertex ''u'' with no neighbor in ''S'':
 3.   Add such a vertex ''u'' to ''S'', where ''u'' minimizes <math>\textstyle\sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1}</math>.
 4. Return ''S''.
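A Python sketch of this algorithm (our own naming; it tracks ''R''<sup>(''t'')</sup> explicitly and always adds the vertex whose removal drops the least estimator mass):

<syntaxhighlight lang="python">
def turan_independent_set(vertices, neighbors):
    """Return an independent set of size at least |V|/(D+1)."""
    degree = {v: len(neighbors[v]) for v in vertices}
    S = set()
    R = set(vertices)  # not yet considered, no neighbor in S

    def dropped_mass(u):
        # Estimator terms lost from the sum if u is added to S.
        return sum(1.0 / (degree[w] + 1) for w in (neighbors[u] & R) | {u})

    while R:
        u = min(R, key=dropped_mass)  # maximizes the resulting estimator
        S.add(u)
        R -= neighbors[u] | {u}
    return S

# Example: on the 5-cycle this guarantees |S| >= 5/3, i.e. at least 2 vertices.
neighbors = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert len(turan_independent_set(range(5), neighbors)) >= 5 / 3
</syntaxhighlight>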
Algorithms that don't maximize the pessimistic estimator

For the method of conditional probabilities to work, it suffices if the algorithm keeps the pessimistic estimator from decreasing (or increasing, as appropriate). The algorithm does not necessarily have to maximize (or minimize) the pessimistic estimator. This gives some flexibility in deriving the algorithm. The next two algorithms illustrate this; a code sketch of the second follows the analysis below.

 1. Initialize ''S'' to be the empty set.
 2. While there exists a vertex ''u'' in the graph with no neighbor in ''S'':
 3.   Add such a vertex ''u'' to ''S'', where ''u'' minimizes ''d''(''u'') (the initial degree of ''u'').
 4. Return ''S''.

 1. Initialize ''S'' to be the empty set.
 2. While the remaining graph is not empty:
 3.   Add a vertex ''u'' to ''S'', where ''u'' has minimum degree in the ''remaining'' graph.
 4.   Delete ''u'' and all of its neighbors from the graph.
 5. Return ''S''.

Each algorithm is analyzed with the same pessimistic estimator as before. With each step of either algorithm, the net increase in the pessimistic estimator is

: <math>1 - \sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1},</math>

where ''N''<sup>(''t'')</sup>(''u'') denotes the neighbors of ''u'' in the remaining graph (that is, in ''R''<sup>(''t'')</sup>).

For the first algorithm, the net increase is non-negative because, by the choice of ''u'',

: <math>\sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1} \;\le\; (d(u)+1)\cdot\frac{1}{d(u)+1} = 1,</math>

where ''d''(''u'') is the degree of ''u'' in the original graph.

For the second algorithm, the net increase is non-negative because, by the choice of ''u'',

: <math>\sum_{w\in N^{(t)}(u)\cup\{u\}} \frac{1}{d(w)+1} \;\le\; (d'(u)+1)\cdot\frac{1}{d'(u)+1} = 1,</math>

where ''d′''(''u'') is the degree of ''u'' in the remaining graph.
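For instance, the second algorithm translates into the familiar minimum-degree greedy heuristic (a sketch; the names are ours):

<syntaxhighlight lang="python">
def min_degree_independent_set(neighbors):
    """Repeatedly add a minimum-degree vertex of the remaining graph to S,
    then delete it and its neighbors from the graph."""
    remaining = {v: set(ns) for v, ns in neighbors.items()}
    S = set()
    while remaining:
        u = min(remaining, key=lambda v: len(remaining[v]))
        S.add(u)
        removed = remaining[u] | {u}
        for w in removed:
            for x in remaining[w] - removed:
                remaining[x].discard(w)  # w is being deleted from the graph
            del remaining[w]
    return S

# Example: on the 5-cycle this returns an independent set of size >= 5/3.
neighbors = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert len(min_degree_independent_set(neighbors)) >= 5 / 3
</syntaxhighlight>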
See also

* Probabilistic method