A student asked this in class today: What’s the difference between Recursion, Memoization and Dynamic Programming?

I was quite surprised that said student couldn’t find a good answer online, so I made one. Here’s the answer:

Okay. Here’s the short version: Recursion + Memoization = Dynamic Programming.

Was going to go through this at recitation but wtheck. :D

## Recursion

Recursion is recursion is recursion but it ends somewhere. Basically, a recursive expression is one that depends on previous values of the same expression, and we have a base condition. Think Fibonacci numbers. In this case, I’m going to use the example of the coin change problem (which you’ve probably done in class).

The question goes like this: coins come in 35 cents, 25 cents, 15 cents, 10 cents, 5 cents, and 1 cent. So I write it like `int[] combinations = new int[] {35, 25, 15, 10, 5, 1}`. Now, given x cents, what’s the minimum number of coins I need?

I can derive a recursive equation for this:

Let’s say C(x) is a magical function: C(x) gives me the minimum number of coins required to represent x cents. I can define C(x) recursively:

C(x) = min{ C(x − c) + 1 : c is a coin denomination, c ≤ x }

Why? Say x=105. Then, the minimum number of coins required to piece together 105 cents is the minimum of

• the minimum number of coins required to piece together 70 cents, plus another 35 cent coin
• the minimum number of coins required to piece together 80 cents, plus another 25 cent coin
• the minimum number of coins required to piece together 90 cents, plus another 15 cent coin, and so on for the remaining denominations.

What about our base condition? Since the smallest denomination is 1 cent, I can define C(0) = 0 and C(1) = 1. Then, the entire recursive function is basically:
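The original code isn’t reproduced here, so here’s a minimal sketch of what that recursive function might look like in Java. The method name `minimumCoin` and the `costs.add(...)` pattern match the snippet referenced later in this post; the rest of the structure is my own reconstruction:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CoinChange {

    // C(x): the minimum number of coins needed to piece together x cents.
    static int minimumCoin(int[] combinations, int x) {
        if (x == 0) return 0; // base case: zero cents needs zero coins

        List<Integer> costs = new ArrayList<>();
        for (int combination : combinations) {
            if (combination <= x) {
                // one 'combination' coin, plus the best way to make up the rest
                costs.add(minimumCoin(combinations, x - combination) + 1);
            }
        }
        return Collections.min(costs); // safe: the 1-cent coin always applies when x >= 1
    }

    public static void main(String[] args) {
        int[] combinations = {35, 25, 15, 10, 5, 1};
        System.out.println(minimumCoin(combinations, 21)); // 3 coins: 15 + 5 + 1
    }
}
```

Note the example uses 21 cents rather than 105, because the naive version is already painfully slow for larger amounts, which is exactly the point of the next section.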

## Memoization

The problem with this is that you’re doing a lot of repeat work. Let’s do a trace:

• To calculate C(105), you need C(70), C(80), C(90), C(95), C(100), and C(104).
• For each of those, you’d need more values.

Convince yourself that those values repeat by drawing up a giant tree. You’re doing a lot of duplicate work. This is basically the same reason why Fibonacci recursion is so inefficient.
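To see just how explosive the duplication gets, here’s a small (hypothetical) call counter for the naive Fibonacci recursion; the counter is my own addition for illustration:

```java
public class FibCalls {

    static long calls = 0; // counts every invocation of fib

    // Naive recursive Fibonacci: recomputes the same subproblems over and over.
    static long fib(int n) {
        calls++;
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        fib(30);
        System.out.println(calls); // millions of calls just to compute fib(30) once
    }
}
```

A memoized version would make only one real computation per distinct argument, i.e. a few dozen calls instead of millions.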

The reason is that naive recursion is forgetful. Each call only remembers results for the current layer, and that’s it. Instead, what if there were a way to remember all the past results you’ve computed so far? Memoization is exactly that.

The only major difference between this and the previous recursive solution is that instead of recursing down `costs.add(minimumCoin(combinations, x - combination) + 1)` every single time, we check whether this value has been calculated before in `int[] memory`. If it has, we take the value straightaway instead of recalculating it. We only calculate if it hasn’t been calculated yet (i.e. it still equals the initial value of `Integer.MAX_VALUE`).
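Putting that together, a sketch of the memoized version. The `memory` array and the `Integer.MAX_VALUE` sentinel follow the description above; the surrounding structure is an assumption on my part:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CoinChangeMemo {

    // memory[x] caches C(x); Integer.MAX_VALUE marks "not computed yet"
    static int[] memory;

    static int minimumCoin(int[] combinations, int x) {
        if (x == 0) return 0; // base case: zero cents needs zero coins
        if (memory[x] != Integer.MAX_VALUE) {
            return memory[x]; // calculated before: take the value straightaway
        }

        List<Integer> costs = new ArrayList<>();
        for (int combination : combinations) {
            if (combination <= x) {
                costs.add(minimumCoin(combinations, x - combination) + 1);
            }
        }
        memory[x] = Collections.min(costs); // remember it for next time
        return memory[x];
    }

    public static void main(String[] args) {
        int[] combinations = {35, 25, 15, 10, 5, 1};
        int x = 105;
        memory = new int[x + 1];
        Arrays.fill(memory, Integer.MAX_VALUE);
        System.out.println(minimumCoin(combinations, x)); // 3 coins: 35 + 35 + 35
    }
}
```

With the cache in place, each subproblem C(0) through C(105) is solved at most once, so 105 cents is answered instantly where the naive version would take effectively forever.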

This is memoization.

## Dynamic Programming

So DP really comprises two parts:

• Getting a recursive equation
• Coming up with a memoized way to do this

Usually, the memoized solution is much easier to write iteratively than recursively. I just stuck to recursion here to extend from the original recursion example. Plus, providing you with the iterative solution would be too much of a giveaway.