Selection by sorting
By sorting the list or array and then selecting the desired element, selection can be reduced to sorting. Sorting requires O(''n'' log ''n'') time, so this is slower than the linear-time algorithms below when only a single selection is needed, but it is efficient when many selections must be made from the same data, in which case one initial, expensive sort can be followed by many cheap selection operations (O(1) each for an array).
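As a minimal Python sketch of this reduction (the function names are illustrative, and ranks are counted from 1):

 def select_by_sorting(a, k):
     """kth smallest element via a full O(n log n) sort."""
     return sorted(a)[k - 1]

 def select_many(a, ranks):
     """Amortize one sort over many selections: each lookup is then O(1)."""
     s = sorted(a)
     return [s[k - 1] for k in ranks]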
Unordered partial sorting
If partial sorting is relaxed so that the ''k'' smallest elements are returned, but not in order, the factor of O(''k'' log ''k'') can be eliminated. An additional maximum selection (taking O(''k'') time) is required, but since ''k'' ≤ ''n'', this still yields asymptotic complexity of O(''n''). In fact, partition-based selection algorithms yield both the ''k''th smallest element itself and the ''k'' smallest elements (with the other elements not in order). This can be done in O(''n'') time – the average complexity of Quickselect, and the worst-case complexity of refined partition-based selection algorithms.

Conversely, given a selection algorithm, one can easily get an unordered partial sort (the ''k'' smallest elements, not in order) in O(''n'') time by iterating through the list and recording all elements less than the ''k''th element. If this results in fewer than ''k'' − 1 elements, any remaining elements equal the ''k''th element. Care must be taken, due to the possibility of equality of elements: one must not include all elements less than ''or equal to'' the ''k''th element, as elements greater than the ''k''th element may also be equal to it. Thus unordered partial sorting (the lowest ''k'' elements, not ordered) and selection of the ''k''th element are very similar problems. Not only do they have the same asymptotic complexity, O(''n''), but a solution to either one can be converted into a solution to the other by a straightforward algorithm (finding a maximum of ''k'' elements, or filtering elements of the list below the value of the ''k''th element).
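The following Python sketch shows both conversions; the routine names are illustrative, and select(a, k) is assumed to return the ''k''th smallest value in linear time:

 def partial_sort_unordered(a, k, select):
     """k smallest elements of a (unordered), given a selection routine."""
     kth = select(a, k)
     smallest = [x for x in a if x < kth]     # strictly less: at most k - 1 elements
     smallest += [kth] * (k - len(smallest))  # fill the remaining slots with ties
     return smallest

 def select_from_partial(a, k, partial):
     """kth smallest, given an unordered-partial-sort routine."""
     return max(partial(a, k))                # largest of the k smallest, O(k)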
Partial selection sort
A simple example of selection by partial sorting is to use the partial selection sort. The obvious linear-time algorithm to find the minimum (resp. maximum) – iterating over the list and keeping track of the minimum (resp. maximum) element so far – can be seen as a partial selection sort that selects the single smallest element. However, many other partial sorts also reduce to this algorithm for the case ''k'' = 1, such as a partial heap sort. More generally, a partial selection sort yields a simple selection algorithm which takes O(''kn'') time. This is asymptotically inefficient, but can be sufficiently efficient if ''k'' is small, and is easy to implement. Concretely, we simply find the minimum value and move it to the beginning, repeating on the remaining list until we have accumulated ''k'' elements, and then return the ''k''th element. Here is a pseudocode version of the partial selection sort-based algorithm:

 function select(list[1..n], k)
     for i from 1 to k
         minIndex = i
         minValue = list[i]
         for j from i+1 to n do
             if list[j] < minValue then
                 minIndex = j
                 minValue = list[j]
         swap list[i] and list[minIndex]
     return list[k]
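The same algorithm as runnable Python, as a sketch (zero-based indexing internally, with ''k'' counted from 1 as above):

 def partial_selection_select(a, k):
     """O(kn) selection: move the minimum of the remaining suffix
     to the front, k times, then return the kth smallest."""
     a = list(a)                          # work on a copy
     n = len(a)
     for i in range(k):
         min_index = i
         for j in range(i + 1, n):
             if a[j] < a[min_index]:
                 min_index = j
         a[i], a[min_index] = a[min_index], a[i]
     return a[k - 1]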
Partition-based selection
Linear performance can be achieved by a partition-based selection algorithm, most basically Quickselect. Quickselect is a variant of Quicksort – in both, one chooses a pivot and then partitions the data by it, but while Quicksort recurses on both sides of the partition, Quickselect only recurses on one side, namely the side on which the desired ''k''th element lies. As with Quicksort, this has optimal average performance, in this case linear, but poor worst-case performance, in this case quadratic. This occurs for instance when the first element is taken as the pivot on already-sorted data while searching for the maximum element. In practice this can be avoided by choosing a random element as pivot, which yields almost certain linear performance. Alternatively, a more careful deterministic pivot strategy can be used, such as median of medians. These are combined in the hybrid introselect algorithm (analogous to introsort), which starts with Quickselect but falls back to median of medians if progress is slow, resulting in both fast average performance and optimal worst-case performance of O(''n''). The partition-based algorithms are generally done in place, and thus partially sort the data as a side effect. They can be done out of place, leaving the original data unchanged, at the cost of O(''n'') additional space.
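As a sketch, here is an out-of-place Quickselect in Python with a random pivot; as noted above, the out-of-place variant costs O(''n'') extra space, and the three-way split is one way of handling equal elements:

 import random

 def quickselect(a, k):
     """Expected-linear-time selection of the kth smallest (k is 1-based)."""
     a = list(a)
     while len(a) > 1:
         pivot = random.choice(a)                 # random pivot: almost certain O(n)
         less = [x for x in a if x < pivot]
         equal = [x for x in a if x == pivot]
         if k <= len(less):
             a = less                             # recurse on the low side only
         elif k <= len(less) + len(equal):
             return pivot                         # the kth element equals the pivot
         else:
             k -= len(less) + len(equal)
             a = [x for x in a if x > pivot]      # recurse on the high side only
     return a[0]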
Median selection as pivot strategy
A median-selection algorithm can be used to yield a general selection algorithm or sorting algorithm, by applying it as the pivot strategy in Quickselect or Quicksort; if the median-selection algorithm is asymptotically optimal (linear-time), the resulting selection or sorting algorithm is as well. In fact, an exact median is not necessary – an approximate median is sufficient. In the median of medians selection algorithm, the pivot strategy computes an approximate median and uses this as pivot, recursing on a smaller set to compute this pivot. In practice the overhead of pivot computation is significant, so these algorithms are generally not used, but the technique is of theoretical interest in relating selection and sorting algorithms.

In detail, given a median-selection algorithm, one can use it as a pivot strategy in Quickselect, obtaining a selection algorithm. If the median-selection algorithm is optimal, meaning O(''n''), then the resulting general selection algorithm is also optimal, again meaning linear. This is because Quickselect is a divide and conquer algorithm, and using the median at each pivot means that at each step the search set decreases by half in size, so the overall complexity is a geometric series times the complexity of each step, and thus simply a constant times the complexity of a single step – in fact 2 times, by summing the series.

Similarly, given a median-selection algorithm or a general selection algorithm applied to find the median, one can use it as a pivot strategy in Quicksort, obtaining a sorting algorithm. If the selection algorithm is optimal, meaning O(''n''), then the resulting sorting algorithm is optimal, meaning O(''n'' log ''n''). The median is the best pivot for sorting, as it evenly divides the data, and thus guarantees optimal sorting, assuming the selection algorithm is optimal. A sorting analog to median of medians exists, using the pivot strategy (approximate median) in Quicksort, and it similarly yields an optimal Quicksort.
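A Python sketch of this idea: the pivot is chosen by recursively selecting the median of the groups-of-five medians, giving worst-case linear time overall. The helper structure is my own; a production version would partition in place:

 def select_mom(a, k):
     """Worst-case linear selection of the kth smallest (k is 1-based)."""
     if len(a) <= 5:
         return sorted(a)[k - 1]
     # Median of each group of five, then recursively the median of those
     # medians, used as an approximate-median pivot.
     groups = [sorted(a[i:i + 5]) for i in range(0, len(a), 5)]
     medians = [g[(len(g) - 1) // 2] for g in groups]
     pivot = select_mom(medians, (len(medians) + 1) // 2)
     less = [x for x in a if x < pivot]
     equal = [x for x in a if x == pivot]
     if k <= len(less):
         return select_mom(less, k)
     if k <= len(less) + len(equal):
         return pivot
     return select_mom([x for x in a if x > pivot], k - len(less) - len(equal))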
Incremental sorting by selection
Conversely to selection by sorting, one can incrementally sort by repeated selection. Abstractly, selection yields only a single element, the ''k''th element. However, practical selection algorithms frequently involve partial sorting, or can be modified to do so. Selecting by partial sorting naturally does so, sorting the elements up to ''k'', and selecting by partitioning also sorts some elements: the pivots are sorted to their correct positions, with the ''k''th element being the final pivot, and the elements between the pivots having values between the pivot values. The difference between partition-based selection and partition-based sorting, as in Quickselect versus Quicksort, is that in selection one recurses on only one side of each pivot, sorting only the pivots (an average of log(''n'') pivots are used), rather than recursing on both sides of the pivot. This can be used to speed up subsequent selections on the same data; in the extreme, a fully sorted array allows O(1) selection. Further, compared with first doing a full sort, incrementally sorting by repeated selection amortizes the sorting cost over multiple selections.

For partially sorted data (up to ''k''), so long as the partially sorted data and the index ''k'' up to which the data is sorted are recorded, subsequent selections of ''j'' less than or equal to ''k'' can simply select the ''j''th element, as it is already sorted, while selections of ''j'' greater than ''k'' only need to sort the elements above the ''k''th position. For partitioned data, if the list of pivots is stored (for example, in a sorted list of the indices), then subsequent selections only need to select in the interval between two pivots (the nearest pivots below and above). The biggest gain is from the top-level pivots, which eliminate costly large partitions: a single pivot near the middle of the data cuts the time for future selections in half. The pivot list will grow over subsequent selections, as the data becomes more sorted, and can even be passed to a partition-based sort as the basis of a full sort.
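The pivot-storing idea can be sketched in Python as follows; the class and method names are illustrative, and a real implementation would refine the bookkeeping. Stored pivot positions bound the interval a later query has to touch:

 import bisect
 import random

 class IncrementalSelector:
     """Repeated selection on one array, remembering pivot positions.
     After each pass the pivots sit at their sorted positions, so a later
     query only works inside the interval between the nearest stored pivots."""
     def __init__(self, a):
         self.a = list(a)
         self.pivots = [-1, len(a)]        # sentinel "pivot" positions

     def select(self, k):                  # k is a 0-based rank
         i = bisect.bisect_right(self.pivots, k)
         lo = self.pivots[i - 1] + 1       # interval of ranks around k
         hi = self.pivots[i] - 1
         while lo < hi:
             p = self._partition(lo, hi)
             bisect.insort(self.pivots, p) # remember the new pivot position
             if k < p:
                 hi = p - 1
             elif k > p:
                 lo = p + 1
             else:
                 break
         return self.a[k]

     def _partition(self, lo, hi):
         """Lomuto partition of a[lo..hi] around a random pivot; returns its final index."""
         r = random.randint(lo, hi)
         self.a[r], self.a[hi] = self.a[hi], self.a[r]
         pivot, store = self.a[hi], lo
         for j in range(lo, hi):
             if self.a[j] < pivot:
                 self.a[j], self.a[store] = self.a[store], self.a[j]
                 store += 1
         self.a[store], self.a[hi] = self.a[hi], self.a[store]
         return store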
Using data structures to select in sublinear time
Given an unorganized list of data, linear time (Ω(''n'')) is required to find the minimum element, because we must examine every element (otherwise, we might miss it). If we organize the list, for example by keeping it sorted at all times, then selecting the ''k''th largest element is trivial, but insertion then requires linear time, as do other operations such as combining two lists. The strategy to find an order statistic in sublinear time is to store the data in an organized fashion using suitable data structures that facilitate the selection.
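For example, keeping the data sorted at all times with the standard bisect module makes selection a constant-time array index while pushing the cost onto insertion, as the paragraph above describes (a minimal sketch):

 import bisect

 class SortedBag:
     """Keep the data sorted at all times: selection is O(1) indexing,
     but each insertion costs O(n) for the shift."""
     def __init__(self, items=()):
         self.data = sorted(items)      # one-time O(n log n)
     def insert(self, x):
         bisect.insort(self.data, x)    # O(log n) search + O(n) shift
     def select(self, k):
         return self.data[k - 1]        # kth smallest, k is 1-based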
Lower bounds
In ''The Art of Computer Programming'', Donald E. Knuth discussed a number of lower bounds for the number of comparisons required to locate the ''t'' smallest entries of an unorganized list of ''n'' items (using only comparisons). There is a trivial lower bound of ''n'' − 1 for the minimum or maximum entry. To see this, consider a tournament where each game represents one comparison. Since every player except the winner of the tournament must lose a game before we know the winner, we have a lower bound of ''n'' − 1 comparisons. The story becomes more complex for other indexes. We define <math>W_t(n)</math> as the minimum number of comparisons required to find the ''t'' smallest values. Knuth references a paper published by S. S. Kislitsyn, which shows an upper bound on this value:
: <math>W_t(n) \le n - t + \sum_{n+1-t < j \le n} \lceil \log_2 j \rceil \quad \text{for } n \ge t.</math>
This bound is achievable for ''t'' = 2, but better, more complex bounds are known for larger ''t''.
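For instance, for ''t'' = 1 the sum is empty and the bound reduces to the tournament bound of ''n'' − 1 given above. For ''t'' = 2 it gives <math>W_2(n) \le n - 2 + \lceil \log_2 n \rceil</math>, matching the classical strategy for finding the second smallest: run a tournament to find the minimum (''n'' − 1 comparisons), then take the minimum of the at most <math>\lceil \log_2 n \rceil</math> elements that lost directly to the winner (at most <math>\lceil \log_2 n \rceil - 1</math> further comparisons), since the second smallest must have been eliminated by the smallest.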
Space complexity
The required space complexity of selection is O(1) additional storage, in addition to storing the array in which selection is being performed. Such space complexity can be achieved while preserving optimal O(''n'') time complexity.
Online selection algorithm
Online selection may refer narrowly to computing the ''k''th smallest element of a stream, in which case partial sorting algorithms (with ''k'' + O(1) space for the ''k'' smallest elements so far) can be used, but partition-based algorithms cannot be. Alternatively, selection itself may be required to be online, that is, an element can only be selected from a sequential input at the moment of observation, and each selection (or refusal) is irrevocable. The problem is then to select, under these constraints, a specific element of the input sequence (for example, the largest or the smallest value) with the largest probability. This problem can be tackled by the Odds algorithm, which yields the optimal strategy under an independence condition; the algorithm is itself optimal, with the number of computations being linear in the length of the input. The simplest example is the secretary problem of choosing the maximum with high probability, in which case the optimal strategy (on random data) is to track the running maximum of the first ''n''/''e'' elements and reject them, and then select the first element that is higher than this maximum.
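For the streaming reading of the problem, a bounded heap gives the ''k'' + O(1)-space approach mentioned above. A Python sketch using the standard heapq module (values are negated to simulate a max-heap, and the stream is assumed to contain at least ''k'' items):

 import heapq

 def kth_smallest_of_stream(stream, k):
     """kth smallest of a one-pass stream in O(n log k) time, k + O(1) space."""
     heap = []                          # max-heap of the k smallest so far (negated)
     for x in stream:
         if len(heap) < k:
             heapq.heappush(heap, -x)
         elif x < -heap[0]:             # x displaces the current kth smallest
             heapq.heapreplace(heap, -x)
     return -heap[0]                    # root of the max-heap is the kth smallest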
Related problems
One may generalize the selection problem to apply to ranges within a list, yielding the problem of range queries. The question of range median queries (computing the medians of multiple ranges) has been analyzed.
Language support
Very few languages have built-in support for general selection, although many provide facilities for finding the smallest or largest element of a list. A notable exception is C++, which provides a templated nth_element method with a guarantee of expected linear time, and which also partitions the data, requiring that the ''n''th element be sorted into its correct place, that elements before the ''n''th element are less than it, and that elements after the ''n''th element are greater than it. It is implied, but not required, that it is based on Hoare's algorithm (or some variant), by its requirement of expected linear time and partitioning of the data.
For Perl, the module Sort::Key::Top, available from CPAN, provides a set of functions to select the top ''n'' elements of a list. Python's standard library includes heapq.nsmallest() and heapq.nlargest(), returning sorted lists, in O(''n'' log ''k'') time. NumPy has the partition() function.
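A brief usage sketch of the Python facilities just mentioned (the printed values assume this sample data):

 import heapq
 import numpy as np

 data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
 print(heapq.nsmallest(3, data))     # [1, 2, 3] – sorted, in O(n log k) time
 print(heapq.nlargest(2, data))      # [9, 8]
 # numpy.partition places the element of (0-based) rank 2 at index 2, with
 # smaller elements before it and larger ones after, much like nth_element.
 print(np.partition(np.array(data), 2)[2])   # 3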
Matlab includes maxk() and mink() functions, which return the maximal (respectively, minimal) ''k'' values in a vector as well as their indices.
Because language support for sorting is more ubiquitous, the simplistic approach of sorting followed by indexing is preferred in many environments despite its disadvantage in speed. Indeed, for lazy languages, this simplistic approach can even achieve the best complexity possible for the ''k'' smallest/greatest sorted (with maximum/minimum as a special case) if the sort is lazy enough.
References
Bibliography
* Donald Knuth. ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition. Addison-Wesley, 1997. Section 5.3.3: Minimum-Comparison Selection, pp. 207–219.
* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. Chapter 9: Medians and Order Statistics, pp. 183–196. Section 14.1: Dynamic order statistics, pp. 302–308.