Greedy algorithm

A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with submodular structure.
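As a minimal sketch of the nearest-neighbour heuristic described above (the function name and distance-matrix representation are illustrative assumptions, not from the source), the greedy tour is built by repeatedly visiting the closest unvisited city:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited city.

    dist is assumed to be a square list-of-lists of pairwise distances;
    the result is a tour order, not necessarily an optimal one.
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        # Locally optimal choice: the nearest city not yet visited.
        nearest = min(unvisited, key=lambda city: dist[last][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```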


Specifics

Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties:

* Greedy choice property: We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or on all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices (see the sketch after this list). This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution.
* Optimal substructure: "A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the sub-problems."
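To illustrate the greedy-choice structure (one locally best choice per step, never reconsidered), here is a minimal coin-change sketch; the function name and denominations are illustrative assumptions. The greedy choice happens to be optimal for canonical coin systems such as US coinage, but not for arbitrary denominations:

```python
def greedy_coin_change(amount, denominations=(25, 10, 5, 1)):
    """Greedily pay `amount` using the largest coin that still fits.

    Each step makes one locally optimal choice (the biggest usable coin)
    and reduces the problem to a smaller remaining amount; earlier
    choices are never revisited.
    """
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)   # greedy choice
            amount -= coin       # smaller subproblem remains
    if amount != 0:
        raise ValueError("amount cannot be represented with these coins")
    return coins
```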


Cases of failure

Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the ''unique worst possible'' solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour. For other possible examples, see horizon effect.
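As a concrete illustration (an assumed example, not from the source), the coin-change sketch from the Specifics section above returns a suboptimal answer when the denominations are not canonical:

```python
# Assumes greedy_coin_change from the sketch in the Specifics section.
# For an amount of 6 with denominations {4, 3, 1}, greedy takes 4 + 1 + 1
# (three coins), while the optimal answer is 3 + 3 (two coins).
print(greedy_coin_change(6, denominations=(4, 3, 1)))  # prints [4, 1, 1]
```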


Types

Greedy algorithms can be characterized as being 'short-sighted', and also as 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:
* Pure greedy algorithms
* Orthogonal greedy algorithms
* Relaxed greedy algorithms


Theory

Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems, and so natural questions are:
* For which problems do greedy algorithms perform optimally?
* For which problems do greedy algorithms guarantee an approximately optimal solution?
* For which problems is the greedy algorithm guaranteed ''not'' to produce an optimal solution?
A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover.


Matroids

A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.
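A minimal sketch of the matroid greedy algorithm follows; the interface, with an `is_independent` oracle supplied by the caller, is an illustrative assumption rather than a standard API. Elements are considered in order of decreasing weight and kept whenever independence is preserved:

```python
def matroid_greedy(elements, weight, is_independent):
    """Greedy algorithm for a maximum-weight independent set.

    `elements` is an iterable of ground-set elements, `weight` maps an
    element to its weight, and `is_independent` tests whether a list of
    elements is independent.  When the independence system really is a
    matroid, this greedy procedure returns an optimal solution.
    """
    chosen = []
    # Consider elements from heaviest to lightest.
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)          # keep it; the choice is never reconsidered
    return chosen
```

For the graphic matroid of a graph, for example, this procedure specializes to Kruskal-style construction of maximum-weight spanning forests.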


Submodular functions

A function f defined on subsets of a set \Omega is called submodular if for every S, T \subseteq \Omega we have that f(S)+f(T)\geq f(S\cup T)+f(S\cap T). Suppose one wants to find a set S which maximizes f. The greedy algorithm, which builds up a set S by incrementally adding the element which increases f the most at each step, produces as output a set whose value is at least (1 - 1/e) \max_{X \subseteq \Omega} f(X). That is, greedy performs within a constant factor of (1 - 1/e) \approx 0.63 of the optimal solution. Similar guarantees are provable when additional constraints, such as cardinality constraints, are imposed on the output, though often slight variations on the greedy algorithm are required. See for an overview.
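As an illustrative sketch (the coverage objective and the cardinality bound k are assumptions chosen for the example, not part of the statement above), the greedy rule of adding the element with the largest marginal gain looks like this for a coverage function, which is monotone submodular:

```python
def greedy_max_coverage(sets, k):
    """Pick at most k sets whose union covers as many items as possible.

    Coverage is a monotone submodular function, so this greedy loop
    achieves at least a (1 - 1/e) fraction of the optimal coverage.
    `sets` maps a set name to the collection of items it covers.
    """
    covered = set()
    chosen = []
    for _ in range(k):
        # Greedy step: the set with the largest marginal gain.
        best = max(sets, key=lambda name: len(set(sets[name]) - covered))
        if not set(sets[best]) - covered:
            break                      # no further improvement is possible
        chosen.append(best)
        covered |= set(sets[best])
    return chosen, covered
```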


Other problems with guarantees

Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include:
* Set cover
* The Steiner tree problem
* Load balancing (a sketch follows this list)
* Independent set
Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case.
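For instance, a minimal sketch of the greedy list-scheduling rule for load balancing assigns each job to the currently least-loaded machine, which gives a constant-factor guarantee on the makespan; the function name and the job/machine representation are illustrative assumptions:

```python
import heapq

def greedy_load_balance(jobs, num_machines):
    """Assign each job (a processing time) to the least-loaded machine.

    This greedy rule guarantees a makespan within a constant factor of
    optimal, but it is not optimal in general.
    """
    # Min-heap of (current_load, machine_index).
    machines = [(0, m) for m in range(num_machines)]
    heapq.heapify(machines)
    assignment = {m: [] for m in range(num_machines)}
    for job in jobs:
        load, m = heapq.heappop(machines)    # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(machines, (load + job, m))
    makespan = max(load for load, _ in machines)
    return assignment, makespan
```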


Applications

Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all known greedy coloring algorithms for the graph coloring problem and all other NP-complete problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum. If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees and the algorithm for finding optimum Huffman trees.

Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash tables.
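A minimal sketch of greedy geographic routing (the node names, coordinate representation, and neighbour map below are illustrative assumptions) hands the message to whichever neighbour is nearest to the destination, stopping if no neighbour is closer than the current node:

```python
import math

def greedy_route(graph, positions, source, destination):
    """Forward a message greedily towards the destination's coordinates.

    `graph` maps a node to its neighbours, `positions` maps a node to
    (x, y) coordinates.  Routing stops at a dead end if no neighbour is
    closer to the destination than the current node.
    """
    def dist(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by)

    path = [source]
    current = source
    while current != destination:
        if not graph[current]:
            break                     # isolated node: nowhere to forward
        # Greedy choice: the neighbour closest to the destination.
        nxt = min(graph[current], key=lambda n: dist(n, destination))
        if dist(nxt, destination) >= dist(current, destination):
            break                     # local minimum: greedy routing fails here
        path.append(nxt)
        current = nxt
    return path
```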


Examples

* The activity selection problem is characteristic of this class of problems, where the goal is to pick the maximum number of activities that do not clash with each other (a minimal sketch appears after this list).
* In the Macintosh computer game ''Crystal Quest'' the objective is to collect crystals, in a fashion similar to the travelling salesman problem. The game has a demo mode, where the game uses a greedy algorithm to go to every crystal. The artificial intelligence does not account for obstacles, so the demo mode often ends quickly.
* Matching pursuit is an example of a greedy algorithm applied on signal approximation.
* A greedy algorithm finds the optimal solution to Malfatti's problem of finding three disjoint circles within a given triangle that maximize the total area of the circles; it is conjectured that the same greedy algorithm is optimal for any number of circles.
* A greedy algorithm is used to construct a Huffman tree during Huffman coding, where it finds an optimal solution.
* In decision tree learning, greedy algorithms are commonly used; however, they are not guaranteed to find the optimal solution.
** One popular such algorithm is the ID3 algorithm for decision tree construction.
* Dijkstra's algorithm and the related A* search algorithm are verifiably optimal greedy algorithms for graph search and shortest path finding.
** A* search is conditionally optimal, requiring an "admissible heuristic" that will not overestimate path costs.
* Kruskal's algorithm and Prim's algorithm are greedy algorithms for constructing minimum spanning trees of a given connected graph. They always find an optimal solution, which may not be unique in general.
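As referenced in the first item above, here is a minimal sketch of the greedy solution to the activity selection problem; the function name and the (start, finish) tuple representation are illustrative assumptions. Sorting by finish time and always taking the earliest-finishing compatible activity is provably optimal:

```python
def select_activities(activities):
    """Greedy activity selection: maximize the number of non-overlapping activities.

    `activities` is an iterable of (start, finish) pairs.  Choosing the
    activity that finishes earliest at each step is provably optimal.
    """
    selected = []
    last_finish = float("-inf")
    # Greedy order: earliest finish time first.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # compatible with the previous choice
            selected.append((start, finish))
            last_finish = finish
    return selected
```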


See also

* Best-first search
* Epsilon-greedy strategy
* Greedy algorithm for Egyptian fractions
* Greedy source
* Hill climbing
* Horizon effect
* Matroid


