## Introduction

Dynamic programming (DP) is a powerful technique for solving a particular class of problems. The idea is simple: once we have solved a problem for a given input, save the result for future reference so that the same problem is never solved twice. If the given problem can be broken into smaller subproblems, and those subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, that is a big hint for DP. The other requirement is that optimal solutions to the subproblems contribute to the optimal solution of the given problem (the optimal substructure property).

There are two ways of doing this.

- Top-Down : Start solving the given problem by breaking it down. If a subproblem has already been solved, just return the saved answer; otherwise solve it and save the answer. This technique is called memoization: cache the result of a function for a given set of arguments so that the computation is never repeated, and return the cached result whenever the same computation is needed again.
- Bottom-Up : Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblems up towards the given problem. This guarantees that every subproblem is solved before it is needed.
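The two styles can be sketched side by side in Python. The problem and function names here are illustrative (counting lattice paths in an m × n grid, moving only right or down), not something from the text:

```python
from functools import lru_cache

@lru_cache(maxsize=None)              # top-down: cache each sub-result
def paths_top_down(m, n):
    if m == 0 or n == 0:
        return 1                      # a single row or column has one path
    return paths_top_down(m - 1, n) + paths_top_down(m, n - 1)

def paths_bottom_up(m, n):
    # bottom-up: fill a table starting from the trivial subproblems
    table = [[1] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            table[i][j] = table[i - 1][j] + table[i][j - 1]
    return table[m][n]

print(paths_top_down(3, 3), paths_bottom_up(3, 3))  # both print 20
```

The top-down version recurses from the original problem and lets the cache stop repeated work; the bottom-up version makes the solving order explicit by iterating over the table.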

What is the difference between tabulation (the typical dynamic programming technique) and memoization?

When you solve a dynamic programming problem using tabulation you solve the problem “bottom up”, i.e., by solving all related sub-problems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the “top” / original problem is then computed.

If you use memoization to solve the problem, you do it by maintaining a map of already solved subproblems. You work “top down” in the sense that you solve the “top” problem first (which typically recurses down to solve the subproblems).

## Example

```
// Fibonacci - Simple Recursive Solution (pseudocode)
// fibb(0) = 0, fibb(1) = 1; fibb(n) = fibb(n-1) + fibb(n-2)
fibb(n):
    if n < 2 then
        return n
    else
        return fibb(n-1) + fibb(n-2)
```

```
// Fibonacci: Iterative Bottom-Up Solution
// Compute a table of values of fibb(0) up to fibb(n).
// We always have solutions to subproblems when we need them.
fibb(n):
    -- Precond: n >= 0
    A: Array(0..n) of Natural;
    A(0) := 0;
    if n > 0 then
        A(1) := 1;
        for i in 2 .. n loop
            A(i) := A(i-1) + A(i-2);
        end loop;
    end if;
    return A(n)
```

```
// Fibonacci: Memoized, Recursive Top-Down Solution
// A linear recursive algorithm that uses memoization: recurse down to the
// base case, then fill the array while returning.
// We arrange the recursion so that A(n-2) is calculated before it is needed.
fib(n):
    A: Array(0..n) of Natural := (0 => 0, 1 => 1, others => 0);
    return fib-aux(n, A)

fib-aux(n, A: in out array):
    if A(n) = 0 and then n /= 0 then     -- Check if A(n) is calculated yet
        A(n) := fib-aux(n-1, A) + A(n-2); -- Store solution
    end if;
    return A(n)                           -- Return solution
```
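For reference, the three variants above might be rendered as runnable Python roughly like this (function names are my own; the memoized version uses a dictionary instead of a pre-sized array):

```python
def fib_recursive(n):
    # simple recursion: exponential time due to repeated subproblems
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_bottom_up(n):
    # iterative table fill, O(n) time and space
    a = [0] * (n + 1)
    if n > 0:
        a[1] = 1
        for i in range(2, n + 1):
            a[i] = a[i - 1] + a[i - 2]
    return a[n]

def fib_memoized(n, memo=None):
    # top-down recursion with a cache of already solved subproblems
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memoized(n - 1, memo) + fib_memoized(n - 2, memo)
    return memo[n]
```

All three compute the same values; only the first one repeats work.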

## Steps for solving a problem

When developing a dynamic-programming algorithm, we follow a sequence of four steps:

- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution, typically in a bottom-up fashion.
- Construct an optimal solution from computed information.
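As a sketch of how the four steps come together, consider the classic rod-cutting problem (the function name and the example prices here are illustrative): an optimal cut is a first piece plus an optimal solution for the remainder (steps 1–2), the table is filled bottom-up (step 3), and recording each choice lets us rebuild the actual cuts (step 4):

```python
def cut_rod(prices, n):
    """Max revenue from a rod of length n, where prices[i] is the price of a
    piece of length i (prices[0] unused). Returns the best revenue and one
    optimal list of piece lengths."""
    revenue = [0] * (n + 1)       # revenue[j]: best value for length j
    first_cut = [0] * (n + 1)     # records the choice that achieved it
    for j in range(1, n + 1):     # bottom-up over subproblem sizes
        best = -1
        for i in range(1, j + 1): # try every length for the first piece
            if prices[i] + revenue[j - i] > best:
                best = prices[i] + revenue[j - i]
                first_cut[j] = i
        revenue[j] = best
    cuts, j = [], n               # step 4: reconstruct from recorded choices
    while j > 0:
        cuts.append(first_cut[j])
        j -= first_cut[j]
    return revenue[n], cuts

print(cut_rod([0, 1, 5, 8, 9], 4))  # best is two pieces of length 2
```

Keeping `first_cut` alongside `revenue` is what makes the final step possible: the value table alone tells you how much an optimal solution is worth, not what it looks like.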