Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to ''perform'' the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.

There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices ''A'', ''B'', ''C'', and ''D'', there are five possible options:

:((''AB'')''C'')''D'' = (''A''(''BC''))''D'' = (''AB'')(''CD'') = ''A''((''BC'')''D'') = ''A''(''B''(''CD'')).

Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is ''X'' × ''Y'' by a matrix that is ''Y'' × ''Z'' requires ''XYZ'' ordinary multiplications and ''X''(''Y'' − 1)''Z'' ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity. If ''A'' is a 10 × 30 matrix, ''B'' is a 30 × 5 matrix, and ''C'' is a 5 × 60 matrix, then

:computing (''AB'')''C'' needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
:computing ''A''(''BC'') needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.

Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of ''n'' matrices?" Checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large ''n''. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.
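This arithmetic can be checked with a few lines of Python (a throwaway sketch; the helper name cost is illustrative, not part of any library):

    def cost(p: int, q: int, r: int) -> int:
        # multiplying a p×q matrix by a q×r matrix takes p*q*r scalar multiplications
        return p * q * r

    # A: 10×30, B: 30×5, C: 5×60
    print(cost(10, 30, 5) + cost(10, 5, 60))   # (AB)C: 4500
    print(cost(30, 5, 60) + cost(10, 30, 60))  # A(BC): 27000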


A dynamic programming algorithm

To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following
recursive algorithm:
* Take the sequence of matrices and separate it into two subsequences.
* Find the minimum cost of multiplying out each subsequence.
* Add these costs together, and add in the cost of multiplying the two result matrices.
* Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.

For example, if we have four matrices ''ABCD'', we compute the cost required to find each of (''A'')(''BCD''), (''AB'')(''CD''), and (''ABC'')(''D''), making recursive calls to find the minimum cost to compute ''ABC'', ''AB'', ''CD'', and ''BCD''. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
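A minimal Python sketch of this naive recursion (assuming, as in the pseudocode further below, that matrix ''i'' has dimensions dims[i−1] × dims[i]; the function names are illustrative):

    def naive_matrix_chain_cost(dims: list[int]) -> int:
        # cost(i, j) = minimum cost of multiplying matrices i+1 .. j
        def cost(i: int, j: int) -> int:
            if j - i <= 1:
                return 0  # zero or one matrix: no multiplication needed
            # try every split point k, paying dims[i]*dims[k]*dims[j]
            # to multiply the two partial products together
            return min(cost(i, k) + cost(k, j) + dims[i] * dims[k] * dims[j]
                       for k in range(i + 1, j))
        return cost(0, len(dims) - 1)

    print(naive_matrix_chain_cost([10, 30, 5, 60]))  # 4500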
However, this algorithm has exponential runtime complexity, making it as inefficient as the naive approach of trying every possible parenthesization. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ''ABC'' and ''AB''. But finding the best cost for computing ''ABC'' also requires finding the best cost for ''AB''. As the recursion grows deeper, more and more of this type of unnecessary repetition occurs. One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it.
Since there are about ''n''²/2 different subsequences, where ''n'' is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(''n''³) from O(2ⁿ), which is more than efficient enough for real applications. This is top-down dynamic programming.

The following bottom-up approach computes, for each 2 ≤ ''k'' ≤ ''n'', the minimum costs of all subsequences of length ''k'', using the costs of smaller subsequences already computed. It has the same asymptotic runtime and requires no recursion. Pseudocode (note: the first index for dims is 0, and the first index for m and s is 1):

    // Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
    MatrixChainOrder(int dims[])
    {
        n = dims.length - 1;
        // m[i, j] = minimum number of scalar multiplications needed to
        // compute the matrix A[i]A[i+1]...A[j] = A[i..j];
        // the cost is zero when "multiplying" a single matrix
        for (i = 1; i <= n; i++)
            m[i, i] = 0;

        for (len = 2; len <= n; len++) {  // subsequence lengths
            for (i = 1; i <= n - len + 1; i++) {
                j = i + len - 1;
                m[i, j] = MAXINT;
                for (k = i; k <= j - 1; k++) {
                    cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
                    if (cost < m[i, j]) {
                        m[i, j] = cost;
                        s[i, j] = k;  // index of the split that achieved minimal cost
                    }
                }
            }
        }
    }

A Python implementation using the memoization decorator from the standard library:

    from functools import cache

    def matrixChainOrder(dims: list[int]) -> int:
        # a(i, j) is the minimum cost of multiplying matrices i+1 .. j,
        # where matrix m has dimension dims[m-1] x dims[m]
        @cache
        def a(i, j):
            return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
                        for k in range(i + 1, j)),
                       default=0)
        return a(0, len(dims) - 1)

A
Java
implementation using zero-based array indexes, along with a convenience method for printing the solved order of operations, is sketched below; the listing mirrors the bottom-up pseudocode above, and the printing helper is one plausible rendering rather than a canonical one:
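    public class MatrixOrderOptimization {
        protected int[][] m; // m[i][j]: minimal cost of computing A_i .. A_j
        protected int[][] s; // s[i][j]: split index that achieves m[i][j]

        public void matrixChainOrder(int[] dims) {
            int n = dims.length - 1;
            m = new int[n][n];
            s = new int[n][n];
            for (int len = 1; len < n; len++) {       // len = j - i
                for (int i = 0; i < n - len; i++) {
                    int j = i + len;
                    m[i][j] = Integer.MAX_VALUE;
                    for (int k = i; k < j; k++) {
                        int cost = m[i][k] + m[k + 1][j]
                                 + dims[i] * dims[k + 1] * dims[j + 1];
                        if (cost < m[i][j]) {
                            m[i][j] = cost;
                            s[i][j] = k;
                        }
                    }
                }
            }
        }

        // Prints one multiplication per line, innermost products first.
        public void printOptimalParenthesizations() {
            boolean[] inAResult = new boolean[s.length];
            printOptimalParenthesizations(s, 0, s.length - 1, inAResult);
        }

        void printOptimalParenthesizations(int[][] s, int i, int j, boolean[] inAResult) {
            if (i != j) {
                printOptimalParenthesizations(s, i, s[i][j], inAResult);
                printOptimalParenthesizations(s, s[i][j] + 1, j, inAResult);
                String istr = inAResult[i] ? "_result" : "";
                String jstr = inAResult[j] ? "_result" : "";
                System.out.println("A_" + i + istr + " * A_" + j + jstr);
                inAResult[i] = true;
                inAResult[j] = true;
            }
        }

        public static void main(String[] args) {
            MatrixOrderOptimization opt = new MatrixOrderOptimization();
            opt.matrixChainOrder(new int[] {10, 30, 5, 60});
            opt.printOptimalParenthesizations();       // A_0 * A_1, then A_0_result * A_2
            System.out.println(opt.m[0][opt.m.length - 1]); // 4500
        }
    }

At the end of this program, we have the minimum cost for the full sequence.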


More efficient algorithms

There are algorithms that are more efficient than the ''O''(''n''³) dynamic programming algorithm, though they are more complex.


Hu & Shing

An algorithm published by Hu and Shing achieves ''O''(''n'' log ''n'')
computational complexity
. They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of
triangulation
of a
regular polygon
. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other ''n'' sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are labeled with the dimensions of the matrix represented by that side. With ''n'' matrices in the multiplication chain there are ''n''−1
binary operation
s and ''C''_(''n''−1) ways of placing parentheses, where ''C''_(''n''−1) is the (''n''−1)-th
Catalan number
. The algorithm exploits the fact that there are also ''C''_(''n''−1) possible triangulations of a polygon with ''n''+1 sides. For example, the possible triangulations of a regular
hexagon
correspond to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices.

For the example below, there are four sides: ''A'', ''B'', ''C'' and the final result ''ABC''. ''A'' is a 10×30 matrix, ''B'' is a 30×5 matrix, ''C'' is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square. The matrix product ''AB'' is a 10×5 matrix and ''BC'' is a 30×60 matrix. The two possible triangulations in this example are the polygon representation of (''AB'')''C'' and the polygon representation of ''A''(''BC'').

The cost of a single triangle in terms of the number of multiplications needed is the product of its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:

:(''AB'')''C'': (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
:''A''(''BC''): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications

Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in ''O''(''n'' log ''n'') time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper. The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof.
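As a sanity check of this correspondence, here is a throwaway Python sketch of the cubic dynamic program over vertex weights (not Hu & Shing's ''O''(''n'' log ''n'') algorithm):

    def min_triangulation_cost(w: list[int]) -> int:
        # Minimum total cost over all triangulations of a convex polygon
        # whose vertices carry the weights w, where a triangle costs the
        # product of its three vertex weights.
        n = len(w)
        best = [[0] * n for _ in range(n)]  # best[i][j]: sub-polygon w[i..j]
        for length in range(2, n):
            for i in range(n - length):
                j = i + length
                best[i][j] = min(best[i][k] + best[k][j] + w[i] * w[k] * w[j]
                                 for k in range(i + 1, j))
        return best[0][n - 1]

    # Vertex weights of the square above, read clockwise from the base:
    print(min_triangulation_cost([10, 30, 5, 60]))  # 4500, matching (AB)C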


Other ''O''(''n'' log ''n'') algorithms

Wang, Zhu and Tian have published a simplified ''O''(''n'' log ''m'') algorithm, where ''n'' is the number of matrices in the chain and ''m'' is the number of local minima in the dimension sequence of the given matrix chain. Nimbark, Gohel, and Doshi have published a greedy ''O''(''n'' log ''n'') algorithm, but their proof of optimality is incorrect, and their algorithm fails to produce the most efficient parenthesization for some matrix chains.


Chin-Hu-Shing approximate solution

An algorithm created independently by Chin and by Hu & Shing runs in ''O''(''n'') time and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution, or a solution which is only 1-2 percent worse than the optimal one.

The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex ''V'' of the polygon is associated a weight ''w''. Suppose we have three consecutive vertices V_(i−1), V_i, V_(i+1), and that V_min is the vertex with minimum weight w_min. We look at the quadrilateral with vertices V_min, V_(i−1), V_i, V_(i+1) (in clockwise order). We can triangulate it in two ways:
* (V_min, V_(i−1), V_i) and (V_min, V_(i+1), V_i), with cost w_min·w_(i−1)·w_i + w_min·w_(i+1)·w_i
* (V_min, V_(i−1), V_(i+1)) and (V_(i−1), V_i, V_(i+1)), with cost w_min·w_(i−1)·w_(i+1) + w_(i−1)·w_i·w_(i+1).
Therefore, if
: w_min·w_(i−1)·w_(i+1) + w_(i−1)·w_i·w_(i+1) < w_min·w_(i−1)·w_i + w_min·w_(i+1)·w_i,
or equivalently
: 1/w_i + 1/w_min < 1/w_(i−1) + 1/w_(i+1),
we remove the vertex V_i from the polygon and add the side (V_(i−1), V_(i+1)) to the triangulation. We repeat this process until no V_i satisfies the condition above. For all the remaining vertices V_n, we add the side (V_min, V_n) to the triangulation. This gives us a nearly optimal triangulation.
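For illustration, the removal test can be written as a few lines of Python (a sketch; the product form of the inequality avoids division, and the names are illustrative):

    def should_remove(w_min: int, w_prev: int, w_i: int, w_next: int) -> bool:
        # Keep V_i: triangles (V_min, V_prev, V_i) and (V_min, V_next, V_i)
        keep = w_min * w_prev * w_i + w_min * w_next * w_i
        # Cut V_i off: triangles (V_min, V_prev, V_next) and (V_prev, V_i, V_next)
        cut = w_min * w_prev * w_next + w_prev * w_i * w_next
        return cut < keep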


Generalizations

The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence. One somewhat contrived special case of this is
string concatenation
of a list of strings. In C, for example, the cost of concatenating two strings of length ''m'' and ''n'' using strcat is O(''m'' + ''n''), since we need O(''m'') time to find the end of the first string and O(''n'') time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless, because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists.

Another generalization is to solve the problem when parallel processors are available. In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum, because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches: see Heejo Lee, Jong Kim, Sungje Hong, and Sunggu Lee, "Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems", ''IEEE Trans. on Parallel and Distributed Systems'', Vol. 14, No. 4, pp. 394–407, Apr. 2003.
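To make the generalization concrete, here is a Python sketch of the same cubic dynamic program with a pluggable cost function, specialized to operations (such as concatenation) whose result size is the sum of the operand sizes; the function names are illustrative:

    def min_combine_cost(sizes, combine_cost):
        # Minimal total cost of reducing the sequence with an associative
        # binary operation, where combine_cost(a, b) prices joining partial
        # results of sizes a and b (yielding a result of size a + b).
        n = len(sizes)
        best = [[0] * n for _ in range(n)]   # best[i][j]: cost for items i..j
        size = [[0] * n for _ in range(n)]   # size[i][j]: size of result i..j
        for i in range(n):
            size[i][i] = sizes[i]
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length - 1
                size[i][j] = sum(sizes[i:j + 1])
                best[i][j] = min(best[i][k] + best[k + 1][j]
                                 + combine_cost(size[i][k], size[k + 1][j])
                                 for k in range(i, j))
        return best[0][n - 1]

    # strcat-style cost: O(m + n) to append a length-n string to a length-m one
    print(min_combine_cost([5, 2, 9, 4], lambda m, n: m + n))  # 40

For the parallel variant described above, the sum best[i][k] + best[k + 1][j] would become a maximum over the two subproblems.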


See also

* Associahedron
* Tamari lattice

