Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered. Matrix-chain multiplication is a classic problem that can be solved using dynamic programming.
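The save-answers-in-a-table idea can be sketched with a small memoized recursion. This is an illustrative example (the function and its cache are not from the text), showing how each shared subproblem is computed only once:

```python
# Sketch of the dynamic-programming idea: solve each subproblem
# once and save its answer in a table (a dict here), so shared
# subsubproblems are never recomputed.
def fib(n, table={}):
    """Memoized Fibonacci: each value of n is computed at most once."""
    if n in table:
        return table[n]          # answer already saved in the table
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    table[n] = result            # save the answer for later reuse
    return result

print(fib(30))  # 832040
```

Without the table, the plain recursion would solve the same subproblems exponentially many times; with it, the running time is linear in `n`.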

**Matrix-chain multiplication**

Given a sequence of matrices, find the most efficient way to multiply these matrices together. We have many options to multiply a chain of matrices because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result will be the same.

We are given a sequence (chain) *A*_{1}, *A*_{2}, . . . , *A*_{n} of *n* matrices to be multiplied, and we wish to compute the product

*A* = *A*_{1} × *A*_{2} × ⋯ × *A*_{n}

The way we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the product. To illustrate the different costs incurred by different parenthesizations of a matrix product, consider the problem of a chain *A*_{1}, *A*_{2}, *A*_{3} of three matrices. Suppose that the dimensions of the matrices are 10 × 100, 100 × 5, and 5 × 50, respectively. If we multiply according to the parenthesization ((*A*_{1}*A*_{2})*A*_{3}), we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product *A*_{1}*A*_{2}, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by *A*_{3}, for a total of 7500 scalar multiplications. If instead we multiply according to the parenthesization (*A*_{1}(*A*_{2}*A*_{3})), we perform 100 · 5 · 50 = 25,000 scalar multiplications to compute the 100 × 50 matrix product *A*_{2}*A*_{3}, plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply *A*_{1} by this matrix, for a total of 75,000 scalar multiplications. Thus, computing the product according to the first parenthesization is 10 times faster.
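The arithmetic above can be checked directly. A sketch, using the example's dimensions (multiplying a p × q matrix by a q × r matrix costs p · q · r scalar multiplications):

```python
# Dimensions from the example: A1 is 10x100, A2 is 100x5, A3 is 5x50.
# Multiplying a p x q matrix by a q x r matrix costs p*q*r scalar
# multiplications.

# ((A1 A2) A3): first A1 A2 (10x100 times 100x5, giving 10x5),
# then that 10x5 result times A3 (5x50).
cost_left = 10 * 100 * 5 + 10 * 5 * 50     # 5000 + 2500 = 7500

# (A1 (A2 A3)): first A2 A3 (100x5 times 5x50, giving 100x50),
# then A1 (10x100) times that 100x50 result.
cost_right = 100 * 5 * 50 + 10 * 100 * 50  # 25000 + 50000 = 75000

print(cost_left, cost_right)  # 7500 75000
```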

The *matrix-chain multiplication problem* can be stated as follows: given a chain *A*_{1}, *A*_{2}, . . . , *A*_{n} of *n* matrices, where for *i* = 1, 2, . . . , *n*, matrix *A*_{i} has dimension *p*_{i-1} × *p*_{i}, fully parenthesize the product *A*_{1}*A*_{2} ⋯ *A*_{n} in a way that minimizes the number of scalar multiplications.

The first step of the dynamic-programming paradigm is to characterize the structure of an optimal solution. For the matrix-chain multiplication problem, we can perform this step as follows. For convenience, let us adopt the notation *A*_{i..j} for the matrix that results from evaluating the product *A*_{i}*A*_{i+1} ⋯ *A*_{j}. An optimal parenthesization of the product *A*_{1}*A*_{2} ⋯ *A*_{n} splits the product between *A*_{k} and *A*_{k+1} for some integer *k* in the range 1 ≤ *k* < *n*. That is, for some value of *k*, we first compute the matrices *A*_{1..k} and *A*_{k+1..n} and then multiply them together to produce the final product *A*_{1..n}. The cost of this optimal parenthesization is thus the cost of computing the matrix *A*_{1..k}, plus the cost of computing *A*_{k+1..n}, plus the cost of multiplying them together.
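The split-at-*k* structure translates directly into a recursive cost function. A minimal sketch (plain recursion, before any table is introduced; the function name is illustrative):

```python
def min_cost(p, i, j):
    """Minimum scalar multiplications to compute A_i..j, where
    matrix A_k has dimensions p[k-1] x p[k].  Tries every split
    point k between A_k and A_{k+1}, exactly as described above."""
    if i == j:
        return 0  # a single matrix needs no multiplications
    return min(
        min_cost(p, i, k)                 # cost of computing A_i..k
        + min_cost(p, k + 1, j)           # cost of computing A_{k+1}..j
        + p[i - 1] * p[k] * p[j]          # cost of multiplying them
        for k in range(i, j)
    )

# The three-matrix example: dimensions 10x100, 100x5, 5x50.
print(min_cost([10, 100, 5, 50], 1, 3))  # 7500
```

Because the recursion re-solves the same subchains many times, this version takes exponential time; saving the answers in a table, as the dynamic-programming paradigm prescribes, removes that redundancy.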

We observed that an optimal parenthesization of *A*_{1}*A*_{2} ⋯ *A*_{n} that splits the product between *A*_{k} and *A*_{k+1} contains within it optimal solutions to the problems of parenthesizing *A*_{1}*A*_{2} ⋯ *A*_{k} and *A*_{k+1}*A*_{k+2} ⋯ *A*_{n}. The technique that we used to show that subproblems have optimal solutions is typical.
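This optimal-substructure property justifies a bottom-up tabular solution. A sketch of one standard formulation (variable names `m` and `s` follow common convention and are not fixed by the text): `m[i][j]` holds the minimum number of scalar multiplications for *A*_{i..j}, and `s[i][j]` records the split point *k* that achieves it.

```python
def matrix_chain_order(p):
    """Bottom-up dynamic programming for matrix-chain multiplication.
    p[k-1] x p[k] is the dimension of matrix A_k, for k = 1..n.
    Returns (m, s): m[i][j] is the minimum scalar-multiplication
    count for A_i..j, and s[i][j] the optimal split point."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):            # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q              # save the best answer
                    s[i][j] = k              # and where to split
    return m, s

# The three-matrix example: the best cost is 7500, splitting after A2,
# i.e. the parenthesization ((A1 A2) A3).
m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])  # 7500 2
```

Each subchain's answer is computed once and saved in `m`, so the running time is O(n³) with O(n²) space, in contrast to the exponential plain recursion.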
