
In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.
Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a
heuristic is an approach to solving problems without well-defined correct or optimal results.
[David A. Grossman, Ophir Frieder, ''Information Retrieval: Algorithms and Heuristics'', 2nd edition, 2004, ] For example, although social media
recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an
effective method, an algorithm can be expressed within a finite amount of space and time
["Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2).] and in a well-defined
formal language[Well defined concerning the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2).] for calculating a
function. Starting from an initial state and initial input (perhaps
empty), the instructions describe a computation that, when
executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily
deterministic; some algorithms, known as
randomized algorithms, incorporate random input.
Etymology
Around 825 AD, Persian scientist and polymath
Muḥammad ibn Mūsā al-Khwārizmī wrote ''kitāb al-ḥisāb al-hindī'' ("Book of Indian computation") and ''kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī'' ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the
Hindu–Arabic numeral system and
arithmetic appeared, for example ''Liber Alghoarismi de practica arismetrice'', attributed to
John of Seville, and ''Liber Algorismi de numero Indorum'', attributed to
Adelard of Bath.
[Blair, Ann, Duguid, Paul, Goeing, Anja-Silvia and Grafton, Anthony. Information: A Historical Companion, Princeton: Princeton University Press, 2021. p. 247] Here, ''alghoarismi'' or ''algorismi'' is the
Latinization of Al-Khwarizmi's name;
the text starts with the phrase ''Dixit Algorismi'', or "Thus spoke Al-Khwarizmi".
The word ''
algorism'' in English came to mean the use of place-value notation in calculations; it occurs in the ''
Ancrene Wisse'' from circa 1225. By the time
Geoffrey Chaucer wrote ''The Canterbury Tales'' in the late 14th century, he used a variant of the same word in describing ''augrym stones'', stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (''arithmos'', "number"; ''cf.'' "arithmetic"), the Latin word was altered to ''algorithmus''. By 1596, this form of the word was used in English, as ''algorithm'', by Thomas Hood.
Definition
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. Some authors define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
History
Ancient algorithms
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to around 2500 BC describes the earliest division algorithm.
During the Hammurabi dynasty, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient
Egyptian mathematics, dating back to the
Rhind Mathematical Papyrus (c. 1550 BC).
Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the ''Introduction to Arithmetic'' by Nicomachus, and the Euclidean algorithm, which was first described in ''Euclid's Elements'' (c. 300 BC).
Examples of ancient Indian mathematics included the
Shulba Sutras, the
Kerala School, and the
Brāhmasphuṭasiddhānta.
The first cryptographic algorithm for deciphering encrypted code was developed by
Al-Kindi, a 9th-century Arab mathematician, in ''A Manuscript On Deciphering Cryptographic Messages''. He gave the first description of
cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
Computers
Weight-driven clocks
Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]", specifically the verge escapement
mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical
automata" in the 13th century and "computational machines"—the
difference and
analytical engines of
Charles Babbage and
Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real
Turing-complete computer instead of just a
calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
Electromechanical relay
Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape was in use, as were Hollerith cards (c. 1890). Then came the teleprinter with its punched-paper use of Baudot code on tape.
Telephone-switching networks of
electromechanical relays were invented in 1835. These led to the invention of the digital adding device by
George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
Formalization

In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the ''Entscheidungsproblem'' (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Representations
Algorithms can be expressed in many kinds of notation, including
natural languages,
pseudocode,
flowcharts,
drakon-charts,
programming languages or
control tables (processed by
interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.
Turing machines
There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see
finite-state machine,
state-transition table, and
control table for more), as flowcharts and drakon-charts (see
state diagram for more), as a form of rudimentary
machine code or
assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description.
[Sipser 2006:157] A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine.
An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states.
In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.
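As a minimal sketch of such a table-driven, formal description (Python chosen for illustration; every name below is invented for this example, not taken from the article), a Turing machine given as a state-transition table can be executed by a few lines of interpreter:

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """Execute a Turing machine whose rules map (state, symbol) to
    (new_state, symbol_to_write, move), with move in {"L", "R"}."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example table: a one-state machine that flips every bit, halting at the first blank.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("1011", flip_rules))  # -> 0100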
Flowchart representation
The graphical aid called a
flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
Algorithmic analysis
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of ''n'' numbers would have a time requirement of O(''n''), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1), otherwise O(''n'') is required.
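A minimal sketch of that analysis in Python (illustrative only, not part of the original text): the running sum is the only value kept besides the loop position, so the time grows with ''n'' while the auxiliary space stays constant.

def sum_of_list(numbers):
    """Add up a list of n numbers in O(n) time and O(1) auxiliary space."""
    total = 0                 # the only extra value kept besides the loop position
    for value in numbers:     # one pass over the input: O(n) steps
        total += value
    return total

print(sum_of_list([3, 1, 4, 1, 5]))  # 14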
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log ''n'')) outperforms a sequential search (cost O(''n'')) when used for table lookups on sorted lists or arrays.
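The following Python sketch (illustrative, with names chosen for this example) contrasts the two searches; binary search halves the remaining range at each comparison, which is where the O(log ''n'') cost comes from, but it assumes the input is already sorted.

def sequential_search(items, target):
    """O(n): may inspect every element."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the remaining range on each comparison; requires sorted input."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(sequential_search(data, 13), binary_search(data, 13))  # 5 5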
Formal versus empirical
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation.
Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their
algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance.
Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to
FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications.
[Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price, ACM-SIAM Symposium On Discrete Algorithms (SODA), Kyoto, January 2012. See also the sFFT Web Page.] Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Best case and worst case
The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources.
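As an illustration (a Python sketch, not from the source), linear search exhibits both extremes: the best case finds the target in the first position after a single comparison, and the worst case inspects every element because the target is absent.

def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for index, value in enumerate(items):
        if value == target:
            return index            # best case: target is items[0], one comparison
    return -1                       # worst case: target absent, n comparisons

data = [7, 3, 9, 4]
print(linear_search(data, 7))   # best case: index 0, found immediately
print(linear_search(data, 8))   # worst case: -1 after checking every element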
Design
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as
divide-and-conquer or
dynamic programming within
operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the
big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases.
Structured programming
Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
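As a small illustration of these canonical structures (a Python sketch; the choice of Euclid's algorithm is this example's, not the source's), the following program uses only SEQUENCE and WHILE-DO, with no jumps:

def gcd(a, b):
    """Greatest common divisor of two nonnegative integers via Euclid's algorithm,
    written with only structured constructs (no GOTO)."""
    while b != 0:          # WHILE-DO
        a, b = b, a % b    # SEQUENCE inside the loop body
    return a

print(gcd(252, 105))  # 21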
Legal status
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in ''
Gottschalk v. Benson''). However, practical applications of algorithms are sometimes patentable. For example, in ''Diamond v. Diehr'', the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The
patenting of software is controversial, and there are criticized patents involving algorithms, especially
data compression algorithms, such as
Unisys's
LZW patent. Additionally, some cryptographic algorithms have export restrictions (see
export of cryptography).
Classification
By implementation
; Recursion
: A
recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common
functional programming method.
Iterative algorithms use repetitions such as
loops or data structures like
stacks to solve problems. Problems may be suited for one implementation or the other. The
Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa (see the sketch after this list).
; Serial, parallel or distributed
: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike
parallel or
distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
; Deterministic or non-deterministic
:
Deterministic algorithms solve the problem with exact decisions at every step; whereas
non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of
heuristics.
; Exact or approximate
: While many algorithms reach an exact solution,
approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, the
Knapsack problem, where there is a set of items, and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value.
; Quantum algorithm
:
Quantum algorithms run on a realistic model of
quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of
quantum computing, such as quantum superposition or quantum entanglement.
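The sketch referenced in the recursion entry above (Python, illustrative only): the Tower of Hanoi solved recursively, and an equivalent iterative version that replaces the call stack with an explicit stack of pending work.

def hanoi_recursive(n, src, dst, aux, moves):
    """Recursive solution: move n disks from src to dst using aux."""
    if n == 0:
        return
    hanoi_recursive(n - 1, src, aux, dst, moves)
    moves.append((src, dst))
    hanoi_recursive(n - 1, aux, dst, src, moves)

def hanoi_iterative(n, src, dst, aux):
    """Equivalent iterative solution using an explicit stack of pending work."""
    moves, stack = [], [("solve", n, src, dst, aux)]
    while stack:
        frame = stack.pop()
        if frame[0] == "move":
            moves.append((frame[1], frame[2]))
            continue
        _, k, s, d, a = frame
        if k == 0:
            continue
        # Push in reverse so the work is done in the same order as the recursion.
        stack.append(("solve", k - 1, a, d, s))
        stack.append(("move", s, d))
        stack.append(("solve", k - 1, s, a, d))
    return moves

recursive_moves = []
hanoi_recursive(3, "A", "C", "B", recursive_moves)
assert recursive_moves == hanoi_iterative(3, "A", "C", "B")  # same 7 moves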
By design paradigm
Another way of classifying algorithms is by their design methodology or
paradigm. Some common paradigms are:
;
Brute-force or exhaustive search
: Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
; Divide and conquer
: A
divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually
recursively) until the instances are small enough to solve easily.
Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged (see the sketch after this list). A simpler variant of divide and conquer, called prune and search or ''decrease-and-conquer'', solves one smaller instance of itself and does not require a merge step. An example of a prune and search algorithm is the binary search algorithm.
; Search and enumeration
: Many problems (such as playing
chess) can be modelled as problems on
graphs. A
graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes
search algorithms,
branch and bound enumeration, and
backtracking.
;
Randomized algorithm
: Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some
randomness. Whether randomized algorithms with
polynomial time complexity can be the fastest algorithm for some problems is an open question known as the
P versus NP problem. There are two large classes of such algorithms:
#
Monte Carlo algorithms return a correct answer with high probability. E.g.
RP is the subclass of these that run in
polynomial time.
#
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g.
ZPP.
;
Reduction of complexity
: This technique transforms difficult problems into better-known problems solvable with (hopefully)
asymptotically optimal algorithms. The goal is to find a reducing algorithm whose
complexity is not dominated by the resulting reduced algorithms. For example, one
selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as ''
transform and conquer''.
;
Back tracking
: In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
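The merge sort sketch referenced in the divide-and-conquer entry above (Python, illustrative only): the list is split, each half is sorted the same way, and the sorted halves are merged.

def merge_sort(items):
    """Divide and conquer: split, sort each half recursively, then merge."""
    if len(items) <= 1:                 # small enough to solve directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # conquer the two halves
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0             # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3]))  # [1, 2, 3, 4, 5, 7]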
Optimization problems
For
optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
;
Linear programming
: When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular
simplex algorithm.
[
George B. Dantzig and Mukund N. Thapa. 2003. ''Linear Programming 2: Theory and Extensions''. Springer-Verlag.] Problems that can be solved with linear programming include the
maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be
integers, then it is classified in
integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
;
Dynamic programming
: When a problem shows optimal substructures—meaning the optimal solution can be constructed from optimal solutions to subproblems—and
overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called ''dynamic programming'' avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together (see the sketch after this list). Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
; The greedy method
:
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems, they always find the optimal solution but for others they may stop at
local optima. The most popular use of greedy algorithms is finding minimal spanning trees of graphs without negative cycles.
Huffman Tree,
Kruskal,
Prim,
Sollin are greedy algorithms that can solve this optimization problem.
;The heuristic method
:In
optimization problems,
heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include
local search,
tabu search,
simulated annealing, and
genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an
approximation algorithm.
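The sketch referenced in the dynamic programming entry above (Python, illustrative only): a naive recursive Fibonacci recomputes overlapping subproblems exponentially often, while memoizing each result makes the same recursion run in linear time.

from functools import lru_cache

def fib_naive(n):
    """Plain recursion: overlapping subproblems are recomputed, O(2**n) calls."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recursion with memoization: each subproblem is solved once, O(n) calls."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))  # 55 55
print(fib_memo(90))                 # feasible only with memoization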
Examples
One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
''High-level description:''
# If a set of numbers is empty, then there is no highest number.
# Assume the first number in the set is the largest.
# For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest.
# When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set.
''(Quasi-)formal description:''
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in
pseudocode or
pidgin code:
Input: A list of numbers ''L''.
Output: The largest number in the list ''L''.
if ''L.size'' = 0 return null
''largest'' ← ''L''[0]
for each ''item'' in ''L'', do
    if ''item'' > ''largest'', then
        ''largest'' ← ''item''
return ''largest''
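A direct Python rendering of the same pseudocode (an illustrative sketch; in practice Python's built-in max() performs this task):

def largest_number(L):
    """Return the largest number in the list L, or None if L is empty."""
    if len(L) == 0:
        return None
    largest = L[0]
    for item in L:
        if item > largest:
            largest = item
    return largest

print(largest_number([3, 9, 2, 40, 7]))  # 40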
See also
*
Abstract machine
*
ALGOL
*
Algorithm = Logic + Control
*
Algorithm aversion
*
Algorithm engineering
*
Algorithm characterizations
*
Algorithmic bias
*
Algorithmic composition
*
Algorithmic entities
*
Algorithmic synthesis
*
Algorithmic technique
*
Algorithmic topology
*
Computational mathematics
*
Garbage in, garbage out
* ''Introduction to Algorithms'' (textbook)
*
Government by algorithm
*
List of algorithms
*
List of algorithm general topics
*
Medium is the message
*
Regulation of algorithms
*
Theory of computation
**
Computability theory
**
Computational complexity theory
Notes
Bibliography
*
* Bell, C. Gordon and Newell, Allen (1971), ''Computer Structures: Readings and Examples'', McGraw–Hill Book Company, New York. .
* Includes a bibliography of 56 references.
* ,
* : cf. Chapter 3 ''Turing machines'' where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
*
* Campagnolo, M.L.,
Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In ''Proc. of the 4th Conference on Real Numbers and Computers'', Odense University, pp. 91–109
* Reprinted in ''The Undecidable'', p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (''The Undecidable'') where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
* Reprinted in ''The Undecidable'', p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
*
* Davis gives commentary before each article. Papers of
Gödel,
Alonzo Church, Turing, Rosser, Kleene, and Emil Post
are included; those cited in the article are listed here by author's name.
* Davis offers concise biographies of
Leibniz,
Boole,
Frege,
Cantor,
Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
*
*
*
* ,
*
Yuri Gurevich, ''Sequential Abstract State Machines Capture Sequential Algorithms'', ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources.
* , 3rd edition 1976
(pbk.)
* , . Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
* Presented to the American Mathematical Society, September 1935. Reprinted in ''The Undecidable'', p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper ''An Unsolvable Problem of Elementary Number Theory'' that proved the "decision problem" to be "undecidable" (i.e., a negative result).
* Reprinted in ''The Undecidable'', p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the
Church thesis).
*
*
*
* Kosovsky, N.K. ''Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms'', LSU Publ., Leningrad, 1981
*
*
A.A. Markov (1954) ''Theory of algorithms''. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .]
* Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 ''Computability, Effective Procedures and Algorithms. Infinite machines.''
* Reprinted in ''The Undecidable'', pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called
Church–Turing thesis.
*
* Reprinted in ''The Undecidable'', p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, ''The Undecidable'')
*
*
*
*
* Cf. in particular the first chapter titled: ''Algorithms, Turing Machines, and Programs''. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an ''algorithm''" (p. 4).
*
* . Corrections, ibid, vol. 43(1937) pp. 544–546. Reprinted in ''The Undecidable'', p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK.
* Reprinted in ''The Undecidable'', pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton.
*
United States Patent and Trademark Office
(2006)
''2106.02 Mathematical Algorithms: 2100 Patentability'', Manual of Patent Examining Procedure (MPEP). Latest revision August 2006.
* Zaslavsky, C. (1970). Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria. The Two-Year College Mathematics Journal, 1(2), 76–99. https://doi.org/10.2307/3027363
Further reading
*
*
*
*
*
*
* Jon Kleinberg, Éva Tardos (2006): ''Algorithm Design'', Pearson/Addison-Wesley, ISBN 978-0-321-29535-4
*
Knuth, Donald E. (2000).
''Selected Papers on Analysis of Algorithms''. Stanford, California: Center for the Study of Language and Information.
* Knuth, Donald E. (2010).
''Selected Papers on Design of Algorithms''. Stanford, California: Center for the Study of Language and Information.
*
*
External links
*
*
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
; Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Association for Computing Machinery
The Stanford GraphBase – Stanford University