Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.

The algorithm exists in many variants. Dijkstra's original algorithm found the shortest path between two given nodes, but a more common variant fixes a single node as the "source" node and finds shortest paths from the source to all other nodes in the graph, producing a shortest-path tree. For a given source node in the graph, the algorithm finds the shortest path between that node and every other. It can also be used for finding the shortest paths from a single node to a single destination node by stopping the algorithm once the shortest path to the destination node has been determined. For example, if the nodes of the graph represent cities and edge path costs represent driving distances between pairs of cities connected by a direct road (for simplicity, ignore red lights, stop signs, toll roads and other obstructions), Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. A widely used application of shortest-path algorithms is in network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and OSPF (Open Shortest Path First). It is also employed as a subroutine in other algorithms such as Johnson's.

Dijkstra's algorithm uses labels that are positive integers or real numbers, which are totally ordered. It can be generalized to use any labels that are partially ordered, provided the subsequent labels (a subsequent label is produced when traversing an edge) are monotonically non-decreasing. This generalization is called the generic Dijkstra shortest-path algorithm.

Dijkstra's algorithm uses a data structure for storing and querying partial solutions sorted by distance from the start. While the original algorithm uses a min-priority queue and runs in time \Theta((|V| + |E|) \log |V|) (where |V| is the number of nodes and |E| is the number of edges), it can also be implemented in \Theta(|V|^2) time using an array. The idea of this algorithm also appears in earlier work. Fredman and Tarjan later proposed using a Fibonacci heap min-priority queue to optimize the running time to \Theta(|E| + |V| \log |V|). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs, etc.) can be improved further, as detailed in Specialized variants below. Additionally, if preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster.

In some fields, artificial intelligence in particular, Dijkstra's algorithm or a variant of it is known as uniform-cost search and is formulated as an instance of the more general idea of best-first search.


History

Dijkstra thought about the shortest path problem when working at the Mathematical Center in Amsterdam in 1956 as a programmer to demonstrate the capabilities of a new computer called ARMAC. His objective was to choose both a problem and a solution (that would be produced by computer) that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (64, so that 6 bits would be sufficient to encode the city number). A year later, he came across another problem from hardware engineers working on the institute's next computer: minimize the amount of wire needed to connect the pins on the back panel of the machine. As a solution, he re-discovered the algorithm known as Prim's minimal spanning tree algorithm (known earlier to Jarník, and also rediscovered by
Prim). Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník.


Algorithm

Let the node at which we are starting be called the initial node. Let the distance of node Y be the distance from the initial node to Y. Dijkstra's algorithm will initially start with infinite distances and will try to improve them step by step.

1. Mark all nodes unvisited. Create a set of all the unvisited nodes called the unvisited set.
2. Assign to every node a tentative distance value: set it to zero for the initial node and to infinity for all other nodes. During the run of the algorithm, the tentative distance of a node v is the length of the shortest path discovered so far between v and the starting node. Since initially no path is known to any vertex other than the source itself (which is a path of length zero), all other tentative distances are initially set to infinity. Set the initial node as current.
3. For the current node, consider all of its unvisited neighbors and calculate their tentative distances through the current node. Compare the newly calculated tentative distance to the one currently assigned to the neighbor and assign it the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B through A will be 6 + 2 = 8. If B was previously marked with a distance greater than 8, change it to 8. Otherwise, keep the current value.
4. When we are done considering all of the unvisited neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again (this is valid and optimal in connection with the behavior in step 6: the next nodes to visit are always taken in order of smallest distance from the initial node, so any later visit would have a greater distance).
5. If the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; this occurs when there is no connection between the initial node and the remaining unvisited nodes), then stop. The algorithm has finished.
6. Otherwise, select the unvisited node that is marked with the smallest tentative distance, set it as the new current node, and go back to step 3.

When planning a route, it is actually not necessary to wait until the destination node is "visited" as above: the algorithm can stop once the destination node has the smallest tentative distance among all "unvisited" nodes (and thus could be selected as the next "current").
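
The steps above translate almost directly into code. The following is a minimal sketch in Python, assuming the graph is given as a dictionary that maps each node to a dictionary of neighbor-to-edge-length pairs; the function name dijkstra and this adjacency representation are illustrative choices, not part of the original presentation.

import math

def dijkstra(graph, source):
    """Single-source shortest paths, following the six steps above.

    graph: dict mapping node -> dict of neighbor -> non-negative edge length.
    Returns (dist, prev): tentative distances and predecessor ("arrow") map.
    """
    dist = {v: math.inf for v in graph}   # step 2: all tentative distances start at infinity
    prev = {v: None for v in graph}       # back-pointers used later to recover the paths
    dist[source] = 0
    unvisited = set(graph)                # step 1: the unvisited set

    while unvisited:
        # step 6: pick the unvisited node with the smallest tentative distance
        current = min(unvisited, key=lambda v: dist[v])
        if dist[current] == math.inf:
            break                         # step 5: the remaining nodes are unreachable
        unvisited.remove(current)         # step 4: mark the current node as visited

        for neighbor, length in graph[current].items():
            if neighbor in unvisited:     # step 3: relax edges to unvisited neighbors
                alt = dist[current] + length
                if alt < dist[neighbor]:
                    dist[neighbor] = alt
                    prev[neighbor] = current
    return dist, prev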


Description

Suppose you would like to find the shortest path between two intersections on a city map: a starting point and a destination. Dijkstra's algorithm initially marks the distance (from the starting point) to every other intersection on the map with infinity. This is done not to imply that there is an infinite distance, but to note that those intersections have not been visited yet. Some variants of this method leave the intersections' distances unlabeled. Now select the current intersection at each iteration. For the first iteration, the current intersection will be the starting point, and the distance to it (the intersection's label) will be zero. For subsequent iterations (after the first), the current intersection will be a closest unvisited intersection to the starting point (this will be easy to find).

From the current intersection, update the distance to every unvisited intersection that is directly connected to it. This is done by determining the sum of the distance between an unvisited intersection and the value of the current intersection and then relabeling the unvisited intersection with this value (the sum) if it is less than the unvisited intersection's current value. In effect, the intersection is relabeled if the path to it through the current intersection is shorter than the previously known paths. To facilitate shortest path identification, in pencil, mark the road with an arrow pointing to the relabeled intersection if you label or relabel it, and erase all others pointing to it. After you have updated the distances to each neighboring intersection, mark the current intersection as visited and select the unvisited intersection with minimal distance from the starting point (the lowest label) as the new current intersection. Intersections marked as visited are labeled with the shortest path from the starting point to them and will not be revisited or returned to.

Continue this process of updating the neighboring intersections with the shortest distances, marking the current intersection as visited, and moving on to the closest unvisited intersection, until you have marked the destination as visited. Once you have marked the destination as visited (as is the case with any visited intersection), you have determined the shortest path to it from the starting point and can trace your way back following the arrows in reverse. In the algorithm's implementations, this is usually done (after the algorithm has reached the destination node) by following the nodes' parents from the destination node up to the starting node; that is why we also keep track of each node's parent.

This algorithm makes no attempt at direct "exploration" towards the destination as one might expect. Rather, the sole consideration in determining the next "current" intersection is its distance from the starting point. The algorithm therefore expands outward from the starting point, iteratively considering every node that is closer in terms of shortest-path distance, until it reaches the destination. When understood in this way, it is clear how the algorithm necessarily finds the shortest path. However, it may also reveal one of the algorithm's weaknesses: its relative slowness in some topologies.
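
As a small concrete illustration, here is the hypothetical dijkstra function from the sketch above applied to an invented four-intersection map; the prev map plays the role of the pencil arrows.

# A tiny made-up "city map": values are road lengths between intersections.
road_map = {
    "A": {"B": 7, "C": 3},
    "B": {"A": 7, "C": 1, "D": 2},
    "C": {"A": 3, "B": 1, "D": 6},
    "D": {"B": 2, "C": 6},
}

dist, prev = dijkstra(road_map, "A")
print(dist)   # {'A': 0, 'B': 4, 'C': 3, 'D': 6}
print(prev)   # the "arrows": {'A': None, 'B': 'C', 'C': 'A', 'D': 'B'}
# Tracing the arrows back from D gives D <- B <- C <- A, i.e. the route A, C, B, D of length 6.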


Pseudocode

In the following pseudocode, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from the source to the given vertex (equivalently, it is the next hop on the path from the given vertex to the source). The code u ← vertex in Q with min dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. Graph.Edges(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 14 is the length of the path from the root node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, that current path is replaced with this alt path.

 1  function Dijkstra(Graph, source):
 2
 3      for each vertex v in Graph.Vertices:
 4          dist[v] ← INFINITY
 5          prev[v] ← UNDEFINED
 6          add v to Q
 7      dist[source] ← 0
 8
 9      while Q is not empty:
10          u ← vertex in Q with min dist[u]
11          remove u from Q
12
13          for each neighbor v of u still in Q:
14              alt ← dist[u] + Graph.Edges(u, v)
15              if alt < dist[v]:
16                  dist[v] ← alt
17                  prev[v] ← u
18
19      return dist[], prev[]

If we are only interested in a shortest path between vertices source and target, we can terminate the search after line 10 if u = target. Now we can read the shortest path from source to target by reverse iteration:

S ← empty sequence
u ← target
if prev[u] is defined or u = source:      // Do something only if the vertex is reachable
    while u is defined:                   // Construct the shortest path with a stack S
        insert u at the beginning of S    // Push the vertex onto the stack
        u ← prev[u]                       // Traverse from target to source

Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists.

A more general problem would be to find all the shortest paths between source and target (there might be several different ones of the same length). Then instead of storing only a single node in each entry of prev[] we would store all nodes satisfying the relaxation condition. For example, if two nodes u1 and u2 both connect to v and both of them lie on different shortest paths through v (because the edge cost is the same in both cases), then we would add both u1 and u2 to prev[v]. When the algorithm completes, the prev[] data structure will actually describe a graph that is a subset of the original graph with some edges removed. Its key property will be that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph will be the shortest path between those nodes in the original graph, and all paths of that length from the original graph will be present in the new graph. Then to actually find all these shortest paths between two given nodes we would use a path-finding algorithm on the new graph, such as depth-first search.
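
The reverse iteration above is easy to mirror in code. Here is a minimal sketch that assumes the prev map produced by the earlier dijkstra sketch, with None playing the role of UNDEFINED:

def shortest_path(prev, source, target):
    """Rebuild one shortest path from source to target by walking prev backwards."""
    path = []
    u = target
    if prev[u] is not None or u == source:   # do something only if the vertex is reachable
        while u is not None:
            path.insert(0, u)                # push the vertex onto the front of the sequence
            u = prev[u]
    return path                              # empty list if target is unreachable

# With the road_map example above: shortest_path(prev, "A", "D") == ['A', 'C', 'B', 'D']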


Using a priority queue

A min-priority queue is an abstract data type that provides three basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heaps or Brodal queues offer optimal implementations for those three operations. As the algorithm is slightly different, it is presented here in pseudocode as well:

function Dijkstra(Graph, source):
    dist[source] ← 0                       // Initialization
    create vertex priority queue Q

    for each vertex v in Graph.Vertices:
        if v ≠ source
            dist[v] ← INFINITY             // Unknown distance from source to v
            prev[v] ← UNDEFINED            // Predecessor of v
        Q.add_with_priority(v, dist[v])

    while Q is not empty:                  // The main loop
        u ← Q.extract_min()                // Remove and return best vertex
        for each neighbor v of u:          // Go through all v neighbors of u
            alt ← dist[u] + Graph.Edges(u, v)
            if alt < dist[v]:
                dist[v] ← alt
                prev[v] ← u
                Q.decrease_priority(v, alt)

    return dist, prev

Instead of filling the priority queue with all nodes in the initialization phase, it is also possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation if the node is not already in the queue.

Yet another alternative is to add nodes unconditionally to the priority queue and to instead check after extraction that no shorter connection was found yet. This can be done by additionally extracting the associated priority p from the queue and only processing further if p ≤ dist[u] inside the while Q is not empty loop. Observe that p < dist[u] cannot ever hold, because of the update dist[v] ← alt when updating the queue. See https://cs.stackexchange.com/questions/118388/dijkstra-without-decrease-key for discussion. These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs.
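
The decrease-key-free alternative just described maps naturally onto Python's heapq module, a binary min-heap with no decrease-priority operation. A minimal sketch, using the same hypothetical adjacency-dictionary graph format as the earlier examples:

import heapq
import math

def dijkstra_heap(graph, source):
    """Dijkstra's algorithm with a binary heap and lazy deletion of stale queue entries."""
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]                    # entries are (priority p, vertex u)

    while queue:
        p, u = heapq.heappop(queue)          # extract_min
        if p > dist[u]:
            continue                         # stale entry: a shorter path to u was already found
        for v, length in graph[u].items():
            alt = dist[u] + length
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = u
                heapq.heappush(queue, (alt, v))   # push a new entry instead of decrease_priority
    return dist, prev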


Proof of correctness

Proof of Dijkstra's algorithm is constructed by induction on the number of visited nodes.

Invariant hypothesis: for each visited node v, dist[v] is the shortest distance from source to v; and for each unvisited node u, dist[u] is the shortest distance from source to u when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[u] is the actual shortest distance for unvisited nodes, while dist[v] is the actual shortest distance for visited ones.)

The base case is when there is just one visited node, namely the initial node source, in which case the hypothesis is trivial.

Next, assume the hypothesis holds for k-1 visited nodes, and let u be the node the algorithm chooses as the k-th visited node. We claim that dist[u] is the shortest distance from source to u. To prove that claim, we proceed by contradiction: suppose a shorter path existed. There are two cases: either that shorter path contains another unvisited node, or it does not.

In the first case, let w be the first unvisited node on that shorter path. By the induction hypothesis, the shortest paths from source to u and to w via visited nodes only have costs dist[u] and dist[w] respectively, so the shorter path from source to u through w costs at least dist[w] plus the minimal cost of going from w to u. As the edge costs are positive, the minimal cost of going from w to u is a positive number. Also, dist[u] ≤ dist[w], because the algorithm picked u instead of w. This is a contradiction: the path was assumed to cost less than dist[u], yet it costs at least dist[w] plus a positive number, which is more than dist[u].

In the second case, the shorter path contains no unvisited node other than u itself; let w be the last-but-one node on that path, which is therefore visited. That means dist[w] + Graph.Edges(w, u) < dist[u], which is a contradiction, because by the time w was visited the algorithm would already have set dist[u] to at most dist[w] + Graph.Edges(w, u).

For all other visited nodes v, the induction hypothesis already tells us that dist[v] is the shortest distance from source, and processing u does not change that. After processing u it will still be true that for each unvisited node w, dist[w] is the shortest distance from source to w using visited nodes only: if there were a shorter path that does not go through u, we would have found it previously, and if there were a shorter path using u, we would have updated it when processing u.

After all nodes are visited, the shortest path from source to any node consists only of visited nodes; therefore dist[v] is the shortest distance for every node v.
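
For reference, the invariant can be stated compactly in LaTeX. Below, S is the set of visited nodes, s the source, \delta(s, v) the true shortest-path distance, and \delta_S(s, u) the length of a shortest path from s to u whose vertices, except possibly u itself, all lie in S (taken as \infty if no such path exists); these symbols are notation introduced only for this summary.

% Invariant maintained after every iteration of Dijkstra's algorithm
\begin{align*}
  \forall v \in S    &: \quad \mathrm{dist}[v] = \delta(s, v), \\
  \forall u \notin S &: \quad \mathrm{dist}[u] = \delta_S(s, u).
\end{align*}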


Running time

Bounds of the running time of Dijkstra's algorithm on a graph can be expressed as a function of the number of edges, denoted |E|, and the number of vertices, denoted |V|, using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because |E| is O(|V|^2) for any graph, but that simplification disregards the fact that in some problems, other upper bounds on |E| may hold.

For any data structure for the vertex set Q, the running time is in

\Theta(|E| \cdot T_\mathrm{dk} + |V| \cdot T_\mathrm{em}),

where T_\mathrm{dk} and T_\mathrm{em} are the complexities of the decrease-key and extract-minimum operations in Q, respectively.

The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is \Theta(|E| + |V|^2) = \Theta(|V|^2).

For sparse graphs, that is, graphs with far fewer than |V|^2 edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, or Fibonacci heap as a priority queue to implement extracting the minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to keep this structure up to date as the priority queue changes. With a self-balancing binary search tree or binary heap, the algorithm requires

\Theta((|E| + |V|) \log |V|)

time in the worst case (where \log denotes the binary logarithm \log_2); for connected graphs this time bound can be simplified to \Theta(|E| \log |V|). The Fibonacci heap improves this to

\Theta(|E| + |V| \log |V|).

When using binary heaps, the average case time complexity is lower than the worst case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by \Theta(|V| \log (|E|/|V|)), giving a total running time of

O\left(|E| + |V| \log \frac{|E|}{|V|} \log |V|\right).
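
As a quick cross-check, substituting the per-operation costs of the common queue implementations into the general bound \Theta(|E| \cdot T_\mathrm{dk} + |V| \cdot T_\mathrm{em}) reproduces the figures above (the Fibonacci heap costs are amortized):

\begin{array}{lccl}
\text{queue for } Q & T_\mathrm{dk} & T_\mathrm{em} & \text{total} \\
\hline
\text{array / linked list} & \Theta(1) & \Theta(|V|) & \Theta(|E| + |V|^2) = \Theta(|V|^2) \\
\text{binary heap} & \Theta(\log |V|) & \Theta(\log |V|) & \Theta((|E| + |V|) \log |V|) \\
\text{Fibonacci heap} & \Theta(1) & \Theta(\log |V|) & \Theta(|E| + |V| \log |V|)
\end{array}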


Practical optimizations and infinite graphs

In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up the queue operations. In a route-finding problem, Felner finds that the queue can be a factor 500–600 smaller, taking some 40% of the running time.

Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is called uniform-cost search (UCS) in the artificial intelligence literature and can be expressed in pseudocode as

procedure uniform_cost_search(start) is
    node ← start
    frontier ← priority queue containing node only
    expanded ← empty set
    do
        if frontier is empty then
            return failure
        node ← frontier.pop()
        if node is a goal state then
            return solution(node)
        expanded.add(node)
        for each of node's neighbors n do
            if n is not in expanded and not in frontier then
                frontier.add(n)
            else if n is in frontier with higher cost
                replace existing node with n

The complexity of this algorithm can be expressed in an alternative way for very large graphs: if C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in O(b^{1 + \lfloor C^* / \varepsilon \rfloor}).

Further optimizations of Dijkstra's algorithm for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm, graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce source-to-target routing to connecting the source and target to their respective "transit nodes" followed by shortest-path computation between these transit nodes using a "highway". Combinations of such techniques may be needed for optimal practical performance on specific problems.
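
The uniform-cost search pseudocode above can be sketched in Python using heapq, with lazy deletion standing in for the "replace existing node" step; the function name, the is_goal predicate, and the adjacency-dictionary graph format are illustrative assumptions carried over from the earlier examples.

import heapq

def uniform_cost_search(graph, start, is_goal):
    """Expand nodes in order of path cost until a goal node is popped from the frontier."""
    frontier = [(0, start, [start])]          # entries are (path cost, node, path so far)
    best_cost = {start: 0}
    expanded = set()

    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in expanded:
            continue                          # stale entry: node was already expanded more cheaply
        if is_goal(node):
            return cost, path                 # goal test happens on expansion, as in UCS
        expanded.add(node)
        for n, length in graph[node].items():
            new_cost = cost + length
            if n not in expanded and new_cost < best_cost.get(n, float("inf")):
                best_cost[n] = new_cost
                heapq.heappush(frontier, (new_cost, n, path + [n]))
    return None                               # failure: no goal node is reachable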


Specialized variants

When arc weights are small integers (bounded by a parameter C), specialized queues which take advantage of this fact can be used to speed up Dijkstra's algorithm. The first algorithm of this type was Dial's algorithm for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time O(|E| + |V|C). The use of a Van Emde Boas tree as the priority queue brings the complexity to O(|E| \log\log C). Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time O(|E| + |V|\sqrt{\log C}). Finally, the best algorithms in this special case run in O(|E| \log\log |V|) time and in O(|E| + |V| \min\{(\log |V|)^{1/3+\varepsilon}, (\log C)^{1/4+\varepsilon}\}) time, respectively.
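
The bucket-queue idea behind Dial's algorithm can be sketched in Python as follows, assuming non-negative integer edge weights bounded by a known constant C; the function name dial_dijkstra and the adjacency-dictionary format are illustrative.

import math

def dial_dijkstra(graph, source, C):
    """Dijkstra with a bucket queue: bucket i holds nodes whose tentative distance is i."""
    max_dist = len(graph) * C                     # safe upper bound on any finite distance
    buckets = [[] for _ in range(max_dist + 1)]   # one bucket per possible distance value
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    buckets[0].append(source)

    for d in range(max_dist + 1):                 # scan buckets in increasing distance order
        for u in buckets[d]:
            if d != dist[u]:
                continue                          # stale entry: u was moved to a lower bucket
            for v, length in graph[u].items():
                alt = d + length
                if alt < dist[v]:
                    dist[v] = alt
                    buckets[alt].append(v)        # "decrease key" = drop into a lower bucket
    return dist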


Related problems and algorithms

The functionality of Dijkstra's original algorithm can be extended with a variety of modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest path calculated. The secondary solutions are then ranked and presented after the first optimal solution.

Dijkstra's algorithm is usually the working principle behind link-state routing protocols, OSPF and IS-IS being the most common ones.

Unlike Dijkstra's algorithm, the Bellman–Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. The presence of such cycles means there is no shortest path, since the total weight becomes lower each time the cycle is traversed. (This statement assumes that a "path" is allowed to repeat vertices. In graph theory that is normally not allowed. In theoretical computer science it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle negative-weight edges by combining it with the Bellman–Ford algorithm (to remove negative edges and detect negative cycles); such an algorithm is called Johnson's algorithm.

The A* algorithm is a generalization of Dijkstra's algorithm that cuts down on the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the "distance" to the target. This approach can be viewed from the perspective of linear programming: there is a natural linear program for computing shortest paths, and solutions to its dual linear program are feasible if and only if they form a consistent heuristic (speaking roughly, since the sign conventions differ from place to place in the literature). This feasible dual / consistent heuristic defines a non-negative reduced cost, and A* is essentially running Dijkstra's algorithm with these reduced costs. If the dual satisfies the weaker condition of admissibility, then A* is instead more akin to the Bellman–Ford algorithm.

The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges.

Breadth-first search can be viewed as a special case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue.

The fast marching method can be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh.


Dynamic programming perspective

From a dynamic programming point of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. In fact, Dijkstra's explanation of the logic behind the algorithm is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.


Applications

Least-cost paths are calculated, for instance, to establish tracks of electricity lines or oil pipelines. The algorithm has also been used to calculate optimal long-distance footpaths in Ethiopia and contrast them with the situation on the ground. (Nyssen, J., Tesfaalem Ghebreyohannes, Hailemariam Meaza, Dondeyne, S., 2020. Exploration of a medieval African map (Aksum, Ethiopia) – How do historical maps fit with topography? In: De Ryck, M., Nyssen, J., Van Acker, K., Van Roy, W., Liber Amicorum: Philippe De Maeyer In Kaart. Wachtebeke (Belgium): University Press: 165-178.)


See also

* A* search algorithm
* Bellman–Ford algorithm
* Euclidean shortest path
* Floyd–Warshall algorithm
* Johnson's algorithm
* Longest path problem
* Parallel all-pairs shortest path algorithm




External links


Oral history interview with Edsger W. Dijkstra, Charles Babbage Institute, University of Minnesota, Minneapolis
Implementation of Dijkstra's algorithm using TDD, Robert Cecil Martin, The Clean Code Blog