
The knapsack problem is the following problem in combinatorial optimization:
:''Given a set of items, each with a weight and a value, determine which items to include in the collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.''
It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation, where the decision-makers have to choose from a set of non-divisible projects or tasks under a fixed budget or time constraint, respectively.
The knapsack problem has been studied for more than a century, with early works dating as far back as 1897.
The subset sum problem is a special case of the decision and 0-1 problems in which, for each kind of item, the weight equals the value: w_i = v_i. In the field of cryptography, the term ''knapsack problem'' is often used to refer specifically to the subset sum problem. The subset sum problem is one of
Karp's 21 NP-complete problems.
Applications
Knapsack problems appear in real-world decision-making processes in a wide variety of fields, such as finding the least wasteful way to cut raw materials, selection of investments and portfolios, selection of assets for asset-backed securitization, and generating keys for the Merkle–Hellman and other knapsack cryptosystems.
One early application of knapsack algorithms was in the construction and scoring of tests in which the test-takers have a choice as to which questions they answer. For small examples, it is a fairly simple process to provide the test-takers with such a choice. For example, if an exam contains 12 questions each worth 10 points, the test-taker need only answer 10 questions to achieve a maximum possible score of 100 points. However, on tests with a heterogeneous distribution of point values, it is more difficult to provide choices. Feuerman and Weiss proposed a system in which students are given a heterogeneous test with a total of 125 possible points. The students are asked to answer all of the questions to the best of their abilities. Of the possible subsets of problems whose total point values add up to 100, a knapsack algorithm would determine which subset gives each student the highest possible score.
A 1999 study of the Stony Brook University Algorithm Repository showed that, out of 75 algorithmic problems related to the field of combinatorial algorithms and algorithm engineering, the knapsack problem was the 19th most popular and the third most needed after suffix trees and the bin packing problem.
Definition
The most common problem being solved is the 0-1 knapsack problem, which restricts the number x_i of copies of each kind of item to zero or one. Given a set of n items numbered from 1 up to n, each with a weight w_i and a value v_i, along with a maximum weight capacity W,
: maximize \sum_{i=1}^n v_i x_i
: subject to \sum_{i=1}^n w_i x_i \le W and x_i \in \{0, 1\}.
Here x_i represents the number of instances of item i to include in the knapsack. Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity.
The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number x_i of copies of each kind of item to a maximum non-negative integer value c:
: maximize \sum_{i=1}^n v_i x_i
: subject to \sum_{i=1}^n w_i x_i \le W and x_i \in \{0, 1, 2, \dots, c\}.
The unbounded knapsack problem (UKP) places no upper bound on the number of copies of each kind of item and can be formulated as above except that the only restriction on x_i is that it is a non-negative integer:
: maximize \sum_{i=1}^n v_i x_i
: subject to \sum_{i=1}^n w_i x_i \le W and x_i \ge 0, x_i integer.
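To make the 0-1 formulation concrete, the following is a minimal brute-force sketch (function and variable names are illustrative, not part of any standard library); it simply enumerates every assignment of the x_i, so it is exponential in n and only practical for very small instances:

from itertools import product

def knapsack_brute_force(values, weights, W):
    # Enumerate every assignment x in {0,1}^n and keep the best feasible one.
    best = 0
    for x in product((0, 1), repeat=len(values)):
        weight = sum(xi * wi for xi, wi in zip(x, weights))
        if weight <= W:
            best = max(best, sum(xi * vi for xi, vi in zip(x, values)))
    return best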
One example of the unbounded knapsack problem is given using the figure shown at the beginning of this article and the text "if any number of each book is available" in the caption of that figure.
Computational complexity
The knapsack problem is interesting from the perspective of computer science for many reasons:
* The decision problem form of the knapsack problem (''Can a value of at least V be achieved without exceeding the weight W?'') is NP-complete, thus there is no known algorithm that is both correct and fast (polynomial-time) in all cases.
* There is no known polynomial algorithm which can tell, given a solution, whether it is optimal (which would mean that there is no solution with a larger ''V''). This problem is co-NP-complete.
* There is a pseudo-polynomial time algorithm using dynamic programming.
* There is a
fully polynomial-time approximation scheme, which uses the pseudo-polynomial time algorithm as a subroutine, described below.
* Many cases that arise in practice, and "random instances" from some distributions, can nonetheless be solved exactly.
There is a link between the "decision" and "optimization" problems in that if there exists a polynomial algorithm that solves the "decision" problem, then one can find the maximum value for the optimization problem in polynomial time by applying this algorithm iteratively while increasing the threshold value ''k''. On the other hand, if an algorithm finds the optimal value of the optimization problem in polynomial time, then the decision problem can be solved in polynomial time by comparing the value of the solution output by this algorithm with the value of ''k''. Thus, both versions of the problem are of similar difficulty.
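As an illustration of the first direction, here is a hedged sketch assuming a hypothetical polynomial-time decision oracle can_achieve(items, W, k); binary search over ''k'' is used as a refinement of the iterative increase described above, so the number of oracle calls stays polynomial in the input length:

def optimal_value(items, W, can_achieve):
    # items: list of (weight, value) pairs; can_achieve is the decision oracle.
    lo, hi = 0, sum(v for _, v in items)  # the optimum lies in [0, sum of values]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if can_achieve(items, W, mid):    # a value of at least mid is achievable
            lo = mid
        else:
            hi = mid - 1
    return lo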
One theme in research literature is to identify what the "hard" instances of the knapsack problem look like,[Pisinger, D. 2003. "Where are the hard knapsack problems?" Technical Report 2003/08, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark] or viewed another way, to identify what properties of instances in practice might make them more amenable than their worst-case NP-complete behaviour suggests.
The goal in finding these "hard" instances is for their use in public-key cryptography systems, such as the Merkle–Hellman knapsack cryptosystem. More generally, better understanding of the structure of the space of instances of an optimization problem helps to advance the study of the particular problem and can improve algorithm selection.
The hardness of the knapsack problem also depends on the form of the input. If the weights and profits are given as integers, it is weakly NP-complete, while it is strongly NP-complete if the weights and profits are given as rational numbers. However, in the case of rational weights and profits it still admits a fully polynomial-time approximation scheme.
Unit-cost models
The NP-hardness of the knapsack problem relates to computational models in which the size of integers matters (such as the Turing machine). In contrast, decision trees count each decision as a single step. Dobkin and Lipton show an \Omega(n^2) lower bound on linear decision trees for the knapsack problem, that is, trees where decision nodes test the sign of affine functions. This was generalized to algebraic decision trees by Steele and Yao.
If the elements in the problem are real numbers or rationals, the decision-tree lower bound extends to the real random-access machine model with an instruction set that includes addition, subtraction and multiplication of real numbers, as well as comparison and either division or remaindering ("floor"). This model covers more algorithms than the algebraic decision-tree model, as it encompasses algorithms that use indexing into tables. However, in this model all program steps are counted, not just decisions.
An ''upper bound'' for a decision-tree model was given by Meyer auf der Heide, who showed that for every ''n'' there exists a linear decision tree of polynomial depth that solves the subset-sum problem with ''n'' items. Note that this does not imply any upper bound for an algorithm that should solve the problem for ''any given n''.
Solving
Several algorithms are available to solve knapsack problems, based on the dynamic programming approach, the branch and bound approach,[S. Martello, P. Toth, ''Knapsack Problems: Algorithms and Computer Implementations'', John Wiley and Sons, 1990] or hybridizations of both approaches.[S. Martello, D. Pisinger, P. Toth, "Dynamic programming and strong bounds for the 0-1 knapsack problem", ''Manag. Sci.'', 45:414–424, 1999]
Dynamic programming in-advance algorithm
The unbounded knapsack problem (UKP) places no restriction on the number of copies of each kind of item. Besides, here we assume that x_i \ge 0. Define m[w] to be the maximum value attainable with total weight less than or equal to w:
: m[w] = \max\left(\sum_{i=1}^n v_i x_i\right)
: subject to \sum_{i=1}^n w_i x_i \le w and x_i \ge 0.
Then m[w] satisfies m[0] = 0 and m[w] = \max_{i : w_i \le w} (v_i + m[w - w_i]), where the maximum of the empty set is taken to be zero. Tabulating the results from m[0] up through m[W] gives the solution; since each entry examines at most n items and there are W + 1 entries, the dynamic programming solution runs in O(nW) time. Dividing w_1, \ldots, w_n, W by their greatest common divisor is a way to improve the running time.
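The recurrence translates directly into a short bottom-up program. The following is a minimal sketch (names are illustrative), assuming all weights and the capacity are positive integers:

def unbounded_knapsack(values, weights, W):
    # m[w] = best value achievable with total weight at most w.
    m = [0] * (W + 1)
    for w in range(1, W + 1):
        for vi, wi in zip(values, weights):
            if wi <= w:
                m[w] = max(m[w], vi + m[w - wi])
    return m[W]  # O(nW) time, O(W) space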
Even if P≠NP, the O(nW) complexity does not contradict the fact that the knapsack problem is NP-complete, since W, unlike n, is not polynomial in the length of the input to the problem. The length of the W input to the problem is proportional to the number of bits in W, \log W, not to W itself. However, since this runtime is pseudopolynomial, this makes the (decision version of the) knapsack problem a weakly NP-complete problem.
0-1 knapsack problem
A similar dynamic programming solution for the 0-1 knapsack problem also runs in pseudo-polynomial time. Assume w_1,\,w_2,\,\ldots,\,w_n,\,W are strictly positive integers. Define m[i, w] to be the maximum value that can be attained with weight less than or equal to w using items up to i (first i items).
We can define m[i, w] recursively as follows: (Definition A)
* m[0,\,w] = 0
* m[i,\,w] = m[i-1,\,w] if w_i > w (the new item is more than the current weight limit)
* m[i,\,w] = \max(m[i-1,\,w],\ m[i-1,\,w-w_i] + v_i) if w_i \le w.
The solution can then be found by calculating m[n, W]. To do this efficiently, we can use a table to store previous computations.
The following is pseudocode for the dynamic program:

// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
// NOTE: The array "v" and array "w" are assumed to store all relevant values starting at index 1.
array m[0..n, 0..W]
for j from 0 to W do:
    m[0, j] := 0
for i from 1 to n do:
    m[i, 0] := 0
for i from 1 to n do:
    for j from 1 to W do:
        if w[i] > j then:
            m[i, j] := m[i-1, j]
        else:
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])
This solution will therefore run in O(nW) time and O(nW) space.
(If we only need the value m[n, W], we can modify the code so that the amount of memory required is O(W), storing only the two most recent rows of the array "m".)
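In fact a single row suffices if the capacity loop runs downward, so that each item is counted at most once. The following is a minimal sketch of that O(W)-space refinement (names are illustrative):

def knapsack_01(values, weights, W):
    m = [0] * (W + 1)
    for vi, wi in zip(values, weights):
        for j in range(W, wi - 1, -1):  # descending j keeps each item 0/1
            m[j] = max(m[j], m[j - wi] + vi)
    return m[W]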
However, if we take it a step or two further, we should realize that the method will run in time between O(nW) and O(2^n). From Definition A, there is no need to compute all table entries: once the set of items under consideration is fixed, only certain total weights can actually be reached, so the program above computes more than necessary because the weight j ranges over every value from 0 to W. From this perspective, we can program this method recursively, with memoization, so that only the needed entries are computed.
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
// NOTE: The array "v" and array "w" are assumed to store all relevant values starting at index 1.
Define value[n, W]
Initialize all value[i, j] = -1
Define m(i, j) as:  // the maximum value using the first i items, under total weight limit j
    if i == 0 or j <= 0 then:
        value[i, j] := 0
        return
    if value[i-1, j] = -1 then:  // m(i-1, j) has not been calculated yet
        m(i-1, j)
    if w[i] > j then:  // the item cannot fit in the bag
        value[i, j] := value[i-1, j]
    else:
        if value[i-1, j-w[i]] = -1 then:  // m(i-1, j-w[i]) has not been calculated yet
            m(i-1, j-w[i])
        value[i, j] := max(value[i-1, j], value[i-1, j-w[i]] + v[i])
Run m(n, W)
For example, there are 10 different items and the weight limit is 67. So,
: w_1=23, w_2=26, w_3=20, w_4=18, w_5=32, w_6=27, w_7=29, w_8=26, w_9=30, w_{10}=27
: v_1=505, v_2=352, v_3=458, v_4=220, v_5=354, v_6=414, v_7=498, v_8=545, v_9=473, v_{10}=543
If you use the above method to compute m(10, 67), you will get the following, excluding calls that produce m(i, j) = 0:
: m(10, 67) = 1270
: m(9, 67) = 1270, m(9, 40) = 678
: m(8, 67) = 1270, m(8, 40) = 678, m(8, 37) = 545
: m(7, 67) = 1183, m(7, 41) = 725, m(7, 40) = 678, m(7, 37) = 505
: m(6, 67) = 1183, m(6, 41) = 725, m(6, 40) = 678, m(6, 38) = 678, m(6, 37) = 505
: m(5, 67) = 1183, m(5, 41) = 725, m(5, 40) = 678, m(5, 38) = 678, m(5, 37) = 505
: m(4, 67) = 1183, m(4, 41) = 725, m(4, 40) = 678, m(4, 38) = 678, m(4, 37) = 505, m(4, 35) = 505
: m(3, 67) = 963, m(3, 49) = 963, m(3, 41) = 505, m(3, 40) = 505, m(3, 38) = 505, m(3, 37) = 505, m(3, 35) = 505, m(3, 23) = 505, m(3, 22) = 458, m(3, 20) = 458
: m(2, 67) = 857, m(2, 49) = 857, m(2, 47) = 505, m(2, 41) = 505, m(2, 40) = 505, m(2, 38) = 505, m(2, 37) = 505, m(2, 35) = 505, m(2, 29) = 505, m(2, 23) = 505
: m(1, 67) = 505, m(1, 49) = 505, m(1, 47) = 505, m(1, 41) = 505, m(1, 40) = 505, m(1, 38) = 505, m(1, 37) = 505, m(1, 35) = 505, m(1, 29) = 505, m(1, 23) = 505
Besides, we can break the recursion and convert it into a tree. Then we can cut some leaves and use parallel computing to expedite the running of this method.
To find the actual subset of items, rather than just their total value, we can run this after running the function above:

/**
 * Returns the indices of the items of the optimal knapsack.
 * i: We can include items 1 through i in the knapsack
 * j: maximum weight of the knapsack
 */
function knapsack(i: int, j: int): Set
    if i == 0 then:
        return {}
    if m[i, j] > m[i-1, j] then:
        return {i} ∪ knapsack(i-1, j-w[i])
    else:
        return knapsack(i-1, j)

knapsack(n, W)
Meet-in-the-middle
Another algorithm for 0-1 knapsack, discovered in 1974 and sometimes called "meet-in-the-middle" due to parallels to a similarly named algorithm in cryptography, is exponential in the number of different items but may be preferable to the DP algorithm when W is large compared to ''n''. In particular, if the w_i are nonnegative but not integers, we could still use the dynamic programming algorithm by scaling and rounding (i.e. using fixed-point arithmetic), but if the problem requires d fractional digits of precision to arrive at the correct answer, W will need to be scaled by 10^d, and the DP algorithm will require O(W 10^d) space and O(nW 10^d) time.
algorithm Meet-in-the-middle is
    input: A set of items with weights and values.
    output: The greatest combined value of a subset.
    partition the set into two sets ''A'' and ''B'' of approximately equal size
    compute the weights and values of all subsets of each set
    for each subset of ''A'' do
        find the subset of ''B'' of greatest value such that the combined weight is less than ''W''
        keep track of the greatest combined value seen so far
The algorithm takes O(2^{n/2}) space, and efficient implementations of step 3 (for instance, sorting the subsets of B by weight, discarding subsets of B which weigh more than other subsets of B of greater or equal value, and using binary search to find the best match) result in a runtime of O(n 2^{n/2}). As with the meet in the middle attack in cryptography, this improves on the O(n 2^n) runtime of a naive brute force approach (examining all subsets of \{1, \ldots, n\}), at the cost of using exponential rather than constant space (see also baby-step giant-step). The current state of the art improvement to the meet-in-the-middle algorithm, using insights from Schroeppel and Shamir's algorithm for subset sum, provides as a corollary a randomized algorithm for knapsack which preserves the O^*(2^{n/2}) (up to polynomial factors) running time and reduces the space requirements to O^*(2^{0.249999n}) (see Corollary 1.4). In contrast, the best known deterministic algorithm runs in O^*(2^{n/2}) time with a slightly worse space complexity of O^*(2^{n/4}).
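The following is a minimal sketch of the basic O(n 2^{n/2}) version (names are illustrative); it enumerates the subsets of each half, prunes the dominated subsets of B, and binary-searches for the best complement:

from bisect import bisect_right
from itertools import combinations

def knapsack_mitm(items, W):
    # items: list of (weight, value) pairs; returns the best achievable value.
    half = len(items) // 2
    A, B = items[:half], items[half:]

    def subsets(group):
        # Weight/value of every subset of the group that fits on its own.
        out = []
        for r in range(len(group) + 1):
            for combo in combinations(group, r):
                w = sum(x[0] for x in combo)
                if w <= W:
                    out.append((w, sum(x[1] for x in combo)))
        return out

    # Sort B's subsets by weight; keep only those improving on the best
    # value seen so far, which discards dominated subsets.
    weights, best, top = [], [], -1
    for w, v in sorted(subsets(B)):
        if v > top:
            top = v
            weights.append(w)
            best.append(top)

    answer = 0
    for w, v in subsets(A):
        k = bisect_right(weights, W - w) - 1  # heaviest admissible B-subset
        if k >= 0:
            answer = max(answer, v + best[k])
    return answer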
Approximation algorithms
As for most NP-complete problems, it may be enough to find workable solutions even if they are not optimal. Preferably, however, the approximation comes with a guarantee of the difference between the value of the solution found and the value of the optimal solution.
As with many useful but computationally complex problems, there has been substantial research on creating and analyzing algorithms that approximate a solution. The knapsack problem, though NP-hard, is one of a collection of problems that can still be approximated to any specified degree. This means that the problem has a polynomial time approximation scheme. To be exact, the knapsack problem has a fully polynomial time approximation scheme (FPTAS).[Vazirani, Vijay. ''Approximation Algorithms''. Springer-Verlag Berlin Heidelberg, 2003]
Greedy approximation algorithm
George Dantzig proposed a greedy approximation algorithm to solve the unbounded knapsack problem. His version sorts the items in decreasing order of value per unit of weight, v_1/w_1 \ge \cdots \ge v_n/w_n. It then proceeds to insert them into the sack, starting with as many copies as possible of the first kind of item until there is no longer space in the sack for more. Provided that there is an unlimited supply of each kind of item, if m is the maximum value of items that fit into the sack, then the greedy algorithm is guaranteed to achieve at least a value of m/2.
For the bounded problem, where the supply of each kind of item is limited, the above algorithm may be far from optimal. Nevertheless, a simple modification allows us to solve this case: Assume for simplicity that all items individually fit in the sack (w_i \le W for all i). Construct a solution S_1 by packing items greedily as long as possible, i.e. S_1 = \{1, \ldots, k\} where k = \max\{k' : \sum_{i=1}^{k'} w_i \le W\}. Furthermore, construct a second solution S_2 = \{k+1\} containing the first item that did not fit. Since the value of S_1 \cup S_2 provides an upper bound for the LP relaxation of the problem, one of the sets must have value at least m/2; we thus return whichever of S_1 and S_2 has the better value to obtain a 1/2-approximation.
It can be shown that the average performance converges to the optimal solution in distribution at the error rate n^{-1/2}.
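The following is a minimal sketch of the modified greedy rule for the 0-1 case (names are illustrative), assuming each item individually fits, i.e. w_i \le W:

def greedy_half_approx(values, weights, W):
    # Sort item indices by value per unit of weight, best first.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    s1_value, room, s2_value = 0, W, 0
    for i in order:
        if weights[i] <= room:
            s1_value += values[i]   # S1: pack greedily while the prefix fits
            room -= weights[i]
        else:
            s2_value = values[i]    # S2: the first item that did not fit
            break
    return max(s1_value, s2_value)  # at least half the optimal value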
Fully polynomial time approximation scheme
The fully polynomial time approximation scheme (FPTAS) for the knapsack problem takes advantage of the fact that the reason the problem has no known polynomial time solutions is that the profits associated with the items are not restricted. If one rounds off some of the least significant digits of the profit values, then the rounded profits will be bounded by a polynomial in n and 1/ε, where ε is a bound on the correctness of the solution. This restriction then means that an algorithm can find a solution in polynomial time that is correct within a factor of (1-ε) of the optimal solution.
algorithm FPTAS is
    input: ε ∈ (0,1]
           a list A of n items, specified by their values, v_i, and weights w_i
    output: S', the FPTAS solution
    P := \max\{v_i : 1 \le i \le n\}  // the highest item value
    K := ε P / n
    for i from 1 to n do
        v'_i := \left\lfloor v_i / K \right\rfloor
    end for
    return the solution, S', using the v'_i values in the dynamic program outlined above
Theorem: The set S' computed by the algorithm above satisfies \mathrm{profit}(S') \ge (1-\varepsilon) \cdot \mathrm{profit}(S^*), where S^* is an optimal solution.
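A minimal Python sketch of the scheme follows (names are illustrative). Note that for a true polynomial-time bound the final dynamic program should be indexed by the rounded values, whose sum is O(n^2/\varepsilon); a weight-indexed DP is reused here purely for brevity:

from math import floor

def knapsack_fptas(values, weights, W, eps):
    n = len(values)
    P = max(values)                       # the highest item value
    K = eps * P / n
    scaled = [floor(v / K) for v in values]
    # Reuse the 0-1 DP on the rounded values (weight-indexed for brevity).
    m = [0] * (W + 1)
    for sv, wi in zip(scaled, weights):
        for j in range(W, wi - 1, -1):
            m[j] = max(m[j], m[j - wi] + sv)
    return m[W] * K                       # approximation of the optimal value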
Quantum approximate optimization
The quantum approximate optimization algorithm (QAOA) can be employed to solve the knapsack problem using quantum computation by minimizing the Hamiltonian of the problem. The knapsack Hamiltonian is constructed by embedding the constraint condition into the cost function of the problem with a penalty term:
: H = -\sum_{i=1}^n v_i x_i + P \left(\sum_{i=1}^n w_i x_i - W\right)^2,
where P is the penalty constant, determined by case-specific fine-tuning.
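As an illustrative sketch (not tied to any particular quantum SDK), the quadratic form of this Hamiltonian can be assembled as a QUBO matrix; slack handling for the inequality constraint is deliberately omitted, so as written the penalty punishes any deviation from total weight exactly W:

import numpy as np

def knapsack_qubo(values, weights, W, penalty):
    # Expand H = -sum v_i x_i + P (sum w_i x_i - W)^2 using x_i^2 = x_i.
    n = len(values)
    Q = np.zeros((n, n))
    for i in range(n):
        # Diagonal: -v_i from the objective, P(w_i^2 - 2 W w_i) from the penalty.
        Q[i, i] = -values[i] + penalty * (weights[i]**2 - 2 * W * weights[i])
        for j in range(i + 1, n):
            # Off-diagonal cross terms from the squared constraint: 2 P w_i w_j.
            Q[i, j] = 2 * penalty * weights[i] * weights[j]
    return Q  # the constant offset P*W^2 is dropped; it does not shift the argmin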
Dominance relations
Solving the unbounded knapsack problem can be made easier by throwing away items which will never be needed. For a given item i, suppose we could find a set of items J such that their total weight is less than the weight of i, and their total value is greater than the value of i. Then i cannot appear in the optimal solution, because we could always improve any potential solution containing i by replacing i with the set J. Therefore, we can disregard the i-th item altogether. In such cases, J is said to dominate i. (Note that this does not apply to bounded knapsack problems, since we may have already used up the items in J.)
Finding dominance relations allows us to significantly reduce the size of the search space. There are several different types of dominance relations, which all satisfy an inequality of the form:
: \sum_{j \in J} w_j\,x_j \le \alpha\,w_i and \sum_{j \in J} v_j\,x_j \ge \alpha\,v_i for some x \in \mathbb{Z}_+^n,
where \alpha \in \mathbb{Z}_+, J \subsetneq N and i \notin J. The vector x denotes the number of copies of each member of J.
;Collective dominance: The i-th item is collectively dominated by J, written as i \ll J, if the total weight of some combination of items in J is less than w_i and their total value is greater than v_i. Formally, \sum_{j \in J} w_j\,x_j \le w_i and \sum_{j \in J} v_j\,x_j \ge v_i for some x \in \mathbb{Z}_+^n, i.e. \alpha = 1. Verifying this dominance is computationally hard, so it can only be used with a dynamic programming approach. In fact, this is equivalent to solving a smaller knapsack decision problem where V = v_i, W = w_i, and the items are restricted to J.
;Threshold dominance: The i-th item is threshold dominated by J, written as i \prec\prec J, if some number of copies of i are dominated by J. Formally, \sum_{j \in J} w_j\,x_j \le \alpha\,w_i and \sum_{j \in J} v_j\,x_j \ge \alpha\,v_i for some x \in \mathbb{Z}_+^n and \alpha \ge 1. This is a generalization of collective dominance, used in the EDUK algorithm. The smallest such \alpha defines the threshold of the item i, written t_i = (\alpha - 1)\,w_i. In this case, the optimal solution could contain at most \alpha - 1 copies of i.
;Multiple dominance: The i-th item is multiply dominated by a single item j, written as i \ll_m j, if i is dominated by some number of copies of j. Formally, w_j\,x_j \le w_i and v_j\,x_j \ge v_i for some x_j \in \mathbb{Z}_+, i.e. J = \{j\}, \alpha = 1, x_j = \lfloor w_i / w_j \rfloor. This dominance could be efficiently used during preprocessing because it can be detected relatively easily; a sketch of such a preprocessing pass follows this list.
;Modular dominance: Let b be the ''best item'', i.e. v_b / w_b \ge v_i / w_i for all i. This is the item with the greatest density of value. The i-th item is modularly dominated by a single item j, written as i \ll_\equiv j, if i is dominated by j plus several copies of b. Formally, w_j + t\,w_b \le w_i and v_j + t\,v_b \ge v_i for some t \in \mathbb{Z}_+, i.e. J = \{b, j\}, \alpha = 1, x_b = t, x_j = 1.
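The following is an illustrative preprocessing pass based on multiple dominance, the easiest relation to detect. It assumes pairwise distinct items, since two identical items would otherwise dominate each other and both be discarded:

def remove_multiply_dominated(items):
    # items: list of (weight, value) pairs, assumed pairwise distinct.
    kept = []
    for i, (wi, vi) in enumerate(items):
        # Item i is discarded if floor(w_i / w_j) copies of some single
        # item j fit within weight w_i and are worth at least v_i.
        if not any((wi // wj) * vj >= vi
                   for j, (wj, vj) in enumerate(items) if j != i):
            kept.append((wi, vi))
    return kept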
Variations
There are many variations of the knapsack problem that have arisen from the vast number of applications of the basic problem. The main variations occur by changing the number of some problem parameter such as the number of items, number of objectives, or even the number of knapsacks.
Multi-dimensional objective
Here, instead of a single objective (e.g. maximizing the monetary profit from the items in the knapsack), there can be several objectives. For example, there could be environmental or social concerns as well as economic goals. Problems frequently addressed include portfolio and transportation logistics optimizations.
As an example, suppose you run a cruise ship. You have to decide how many famous comedians to hire. This boat can handle no more than one ton of passengers and the entertainers must weigh less than 1000 lbs. Each comedian has a weight, brings in business based on their popularity and asks for a specific salary. In this example, you have multiple objectives. You want, of course, to maximize the popularity of your entertainers while minimizing their salaries. Also, you want to have as many entertainers as possible.
Multi-dimensional weight
Here, the weight of knapsack item i is given by a D-dimensional vector \overline{w_i} = (w_{i1}, \ldots, w_{iD}) and the knapsack has a D-dimensional capacity vector (W_1, \ldots, W_D). The target is to maximize the sum of the values of the items in the knapsack so that the sum of weights in each dimension d does not exceed W_d.
Multi-dimensional knapsack is computationally harder than knapsack; even for D = 2, the problem does not have an EPTAS unless P = NP. However, the algorithm of Cohen and Grebla[Cohen, R. and Grebla, G. 2014. "Multi-Dimensional OFDMA Scheduling in a Wireless Network with Relay Nodes", in ''Proc. IEEE INFOCOM'14'', 2427–2435] is shown to solve sparse instances efficiently. An instance of multi-dimensional knapsack is sparse if there is a set J = \{1, \ldots, m\} of dimensions such that for every knapsack item i there exists z > m such that for all j \in J \cup \{z\}, w_{ij} \ge 0, and for all y \notin J \cup \{z\}, w_{iy} = 0. Such instances occur, for example, when scheduling packets in a wireless network with relay nodes. The same algorithm also solves sparse instances of the multiple choice variant, multiple-choice multi-dimensional knapsack.
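For intuition, the exact DP for the 0-1 problem extends directly to D = 2 by indexing the table with both capacities, at the cost of a multiplicative blow-up. A minimal sketch (names are illustrative):

def knapsack_2d(values, weights, caps):
    # weights[i] = (a_i, b_i); caps = (W1, W2). Time and space O(n*W1*W2).
    W1, W2 = caps
    m = [[0] * (W2 + 1) for _ in range(W1 + 1)]
    for vi, (a, b) in zip(values, weights):
        for j in range(W1, a - 1, -1):      # descending keeps each item 0/1
            for k in range(W2, b - 1, -1):
                m[j][k] = max(m[j][k], m[j - a][k - b] + vi)
    return m[W1][W2]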
The IHS (Increasing Height Shelf) algorithm is optimal for 2D knapsack (packing squares into a two-dimensional unit size square) when there are at most five squares in an optimal packing.
Multiple knapsacks
Here, there are multiple knapsacks. This may seem like a trivial change, but it is not equivalent to adding to the capacity of the initial knapsack, as each knapsack has its own capacity constraint. This variation is used in many loading and scheduling problems in Operations Research and has a polynomial-time approximation scheme. This variation is similar to the bin packing problem. It differs from the bin packing problem in that a subset of items can be selected, whereas, in the bin packing problem, all items have to be packed to certain bins.
Quadratic
The quadratic knapsack problem maximizes a quadratic objective function subject to binary and linear capacity constraints. The problem was introduced by Gallo, Hammer, and Simeone in 1980; however, the first treatment of the problem dates back to Witzgall in 1975.
Geometric
In the geometric knapsack problem, there is a set of rectangles with different values, and a rectangular knapsack. The goal is to pack the largest possible value into the knapsack.
Online
In the online knapsack problem, the items come one by one. Whenever an item arrives, we must decide immediately whether to put it in the knapsack or discard it. There are two variants: (a) non-removable - an inserted item remains in the knapsack forever; (b) removable - an inserted item may be removed later, to make room for a new item.
Han, Kawase and Makino present a randomized algorithm for the unweighted non-removable setting. It is 2-competitive, which is the best possible. For the weighted removable setting, they give a 2-competitive algorithm, prove a lower bound of ~1.368 for randomized algorithms, and prove that no deterministic algorithm can have a constant competitive ratio. For the unweighted removable setting, they give a 10/7-competitive algorithm, and prove a lower bound of 1.25.
There are several other papers on the online knapsack problem.
External links
* Lecture slides on the knapsack problem
* PYAsUKP: Yet Another solver for the Unbounded Knapsack Problem, with code taking advantage of the dominance relations in a hybrid algorithm, benchmarks and downloadable copies of some papers
* Home page of David Pisinger, with downloadable copies of some papers on the publication list (including "Where are the hard knapsack problems?")
* Knapsack Problem solutions in many languages at Rosetta Code
* Dynamic Programming algorithm to 0/1 Knapsack problem
* Solving 0-1-KNAPSACK with Genetic Algorithms in Ruby
* Optimizing Three-Dimensional Bin Packing
* Knapsack Integer Programming Solution in Python using Gekko (optimization software)