In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as ''n'' in asymptotic notation). It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given ''any'' input of size ''n'', and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms. The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
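As an illustration of that contrast (this example is not in the original article; linear search is used here only because its two measures are easy to state), the Python sketch below counts comparisons for linear search: the worst case inspects all n elements, while the average over uniformly random targets is about (n + 1)/2.

    import random

    def linear_search_steps(arr, target):
        """Return the number of comparisons linear search makes."""
        steps = 0
        for x in arr:
            steps += 1
            if x == target:
                break
        return steps

    n = 1000
    arr = list(range(n))

    # Worst case: the target is the last element, so all n elements are examined.
    worst = linear_search_steps(arr, n - 1)

    # Average case: the target position is uniformly random, about (n + 1) / 2 steps.
    trials = 10_000
    avg = sum(linear_search_steps(arr, random.randrange(n)) for _ in range(trials)) / trials

    print(worst, avg)  # 1000 and roughly 500.5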


Definition

Given a model of computation and an algorithm \mathsf{A} that halts on each input s, the mapping t_{\mathsf{A}} \colon \{0, 1\}^\star \to \N is called the time complexity of \mathsf{A} if, for every input string s, \mathsf{A} halts after exactly t_{\mathsf{A}}(s) steps.

Since we are usually interested in the dependence of the time complexity on different input lengths, abusing terminology, the time complexity is sometimes referred to as the mapping t_{\mathsf{A}} \colon \N \to \N, defined by the maximal complexity

t_{\mathsf{A}}(n) := \max_{|s| \le n} t_{\mathsf{A}}(s)

of inputs s with length or size \le n. Similar definitions can be given for space complexity, randomness complexity, etc.
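The following Python sketch (not from the original text; the palindrome checker is a toy algorithm chosen here purely for illustration) shows how this maximal complexity t_{\mathsf{A}}(n) can be evaluated by brute force, running the algorithm on every binary input of length at most n and taking the largest step count.

    from itertools import product

    def palindrome_steps(s: str) -> int:
        """Toy 'algorithm': palindrome check with early exit.
        Returns the number of character comparisons performed."""
        steps = 0
        i, j = 0, len(s) - 1
        while i < j:
            steps += 1
            if s[i] != s[j]:
                break
            i += 1
            j -= 1
        return steps

    def worst_case_t(n: int) -> int:
        """t_A(n): maximum step count over all binary inputs of length <= n."""
        return max(
            palindrome_steps(''.join(bits))
            for k in range(n + 1)
            for bits in product('01', repeat=k)
        )

    for n in range(8):
        print(n, worst_case_t(n))  # grows like floor(n / 2)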


Ways of speaking

Very frequently, the complexity t_{\mathsf{A}} of an algorithm \mathsf{A} is given in asymptotic big-O notation, which gives its growth rate in the form t_{\mathsf{A}} = O(g(n)) with a certain real-valued comparison function g(n) and the meaning:

* There exists a positive real number M and a natural number n_0 such that |t_{\mathsf{A}}(n)| \le M g(n) for all n \ge n_0.

Quite frequently, the wording is:

* "Algorithm \mathsf{A} has the worst-case complexity O(g(n))."

or even only:

* "Algorithm \mathsf{A} has complexity O(g(n))."
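As a hedged illustration (the step-count function t and the witnesses M and n_0 below are made up for this example), the following sketch checks the defining inequality over a finite range; such a check can refute, but never prove, an asymptotic claim, since big-O quantifies over all n \ge n_0.

    def satisfies_bound(t, g, M, n0, n_max):
        """Check the defining inequality |t(n)| <= M * g(n) for all
        n0 <= n <= n_max.  A finite check can only refute a claimed
        big-O bound; it cannot prove it for all n."""
        return all(abs(t(n)) <= M * g(n) for n in range(n0, n_max + 1))

    # Example: t(n) = 3n^2 + 10n is O(n^2) with witnesses M = 4, n0 = 10,
    # since 3n^2 + 10n <= 4n^2 exactly when n >= 10.
    t = lambda n: 3 * n**2 + 10 * n
    g = lambda n: n**2
    print(satisfies_bound(t, g, M=4, n0=10, n_max=100_000))  # True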


Examples

Consider performing insertion sort on n numbers on a random-access machine. The best case for the algorithm occurs when the numbers are already sorted, in which case it takes O(n) steps to perform the task. However, the worst-case input for the algorithm is when the numbers are reverse sorted, and sorting them takes O(n^2) steps; therefore the worst-case time complexity of insertion sort is O(n^2).
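A minimal sketch of this example, counting key comparisons so the best- and worst-case step counts can be observed directly (the exact counts n - 1 and n(n - 1)/2 depend on this particular counting convention):

    def insertion_sort_steps(arr):
        """Insertion sort; returns (sorted list, number of key comparisons)."""
        a = list(arr)
        steps = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                steps += 1           # one comparison of key against a[j]
                if a[j] <= key:
                    break
                a[j + 1] = a[j]      # shift the larger element right
                j -= 1
            a[j + 1] = key
        return a, steps

    n = 100
    _, best = insertion_sort_steps(range(n))          # already sorted
    _, worst = insertion_sort_steps(range(n, 0, -1))  # reverse sorted
    print(best, worst)  # 99 comparisons (n - 1) and 4950 (n(n - 1)/2)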


See also

* Analysis of algorithms


References

* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. Chapter 2.2: Analyzing algorithms, pp. 25-27.
* Oded Goldreich. ''Computational Complexity: A Conceptual Perspective''. Cambridge University Press, 2008. ISBN 0-521-88473-X, p. 32.