# Dynamic Programming

In this tutorial, you will learn what dynamic programming is. You will also see how dynamic programming compares to greedy algorithms for solving optimization problems.


## What is Dynamic Programming?

Dynamic programming (DP) is an algorithmic technique for solving an optimization problem by breaking it down into simpler subproblems and exploiting the fact that the optimal solution to the overall problem depends on the optimal solutions to its subproblems.

Dynamic programming is a technique in computer programming that helps to efficiently solve a class of problems that have overlapping subproblems and the optimal substructure property.

Such problems involve repeatedly calculating the value of the same subproblems to find the optimal solution.

## Dynamic Programming Example

Consider the example of generating the Fibonacci sequence.

If the sequence is F(1), F(2), F(3), ..., F(50), it follows the rule F(n) = F(n-1) + F(n-2):

```
F(50) = F(49) + F(48)
F(49) = F(48) + F(47)
F(48) = F(47) + F(46)
...
```

Notice the overlapping subproblems: we need to calculate F(48) in order to calculate both F(50) and F(49). This is exactly the kind of problem where dynamic programming shines.
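A short Python sketch makes the overlap concrete: counting calls in a naive recursive implementation shows the same subproblem being solved over and over (the counter and function names here are just for illustration).

```python
from collections import Counter

calls = Counter()  # how many times each F(k) is computed

def naive_fib(n):
    # Naive recursion: recomputes every subproblem from scratch.
    calls[n] += 1
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

result = naive_fib(10)
print(result, calls[2])  # F(2) alone is recomputed dozens of times
```

Even for n = 10, the subproblem F(2) is solved 34 separate times; the repetition grows exponentially with n.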

## How Dynamic Programming Works

Dynamic programming works by storing the results of subproblems so that when their solutions are needed again, they are at hand and we do not have to recalculate them.

This technique of storing the values of subproblems is called memoization. By saving the values in an array or map, we save time on computations of subproblems we have already encountered.

```
var m = map(0 → 0, 1 → 1)

function fib(n)
    if key n is not in map m
        m[n] = fib(n − 1) + fib(n − 2)
    return m[n]
```

Dynamic programming by memoization is a top-down approach to dynamic programming. By reversing the direction in which the algorithm works, i.e. by starting from the base case and working towards the solution, we can also implement dynamic programming in a bottom-up manner.

```
function fib(n)
    if n = 0
        return 0
    else
        var prevFib = 0, currFib = 1
        repeat n − 1 times
            var newFib = prevFib + currFib
            prevFib = currFib
            currFib = newFib
        return currFib
```
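The bottom-up pseudocode translates directly to Python (a sketch; two variables replace the whole cache, since each step only needs the previous two values):

```python
def fib(n):
    # Start from the base case and build up to F(n) iteratively.
    if n == 0:
        return 0
    prev_fib, curr_fib = 0, 1
    for _ in range(n - 1):
        # Shift the window forward: F(k-1), F(k) → F(k), F(k+1).
        prev_fib, curr_fib = curr_fib, prev_fib + curr_fib
    return curr_fib

print(fib(50))  # 12586269025
```

Note that the bottom-up version uses constant extra memory, whereas the memoized version stores every intermediate result.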

## Recursion versus Dynamic Programming

Dynamic programming is mostly applied to recursive algorithms. This is not a coincidence: most optimization problems require recursion, and dynamic programming is used for optimization.

However, not all problems that use recursion can use dynamic programming. Unless there are overlapping subproblems, as in the Fibonacci sequence problem, recursion can only arrive at the solution using a divide-and-conquer approach.

That is why a recursive algorithm like Merge Sort cannot use dynamic programming: its subproblems do not overlap in any way.

## Greedy Algorithms versus Dynamic Programming

Greedy algorithms are similar to dynamic programming in that both are tools for optimization.

However, a greedy algorithm looks for the locally optimal solution, in other words a greedy choice, in the hope of finding a global optimum. Hence, a greedy algorithm can make a choice that looks optimal at the time but turns out to be costly down the line, and it does not guarantee a globally optimal solution.

Dynamic programming, on the other hand, finds the optimal solution to each subproblem and then makes an informed choice to combine the results of those subproblems to find the overall optimal solution.
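The classic coin-change problem illustrates the difference (a sketch; the coin set {1, 3, 4} is a standard counterexample, not from the text above):

```python
def greedy_coins(coins, amount):
    # Greedy: always take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    # DP: compute the optimal coin count for every sub-amount 0..amount.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

# To make 6 from coins {1, 3, 4}: greedy picks 4 + 1 + 1 (3 coins),
# while DP finds 3 + 3 (2 coins).
print(greedy_coins([1, 3, 4], 6), dp_coins([1, 3, 4], 6))
```

The greedy choice of the coin 4 looks optimal at that moment but forces two extra 1-coins later, exactly the pitfall described above.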

Thanks for reading! We hope you found this tutorial helpful.