Jump Game VI — Algorithms & Visualizations

Federico Feresini
6 min read · Jan 3, 2021

Introduction
I’ve recently come across one of the thousands of problems that can be solved with dynamic programming techniques. I remember that, when I was trying to get my head around this topic, one of the most frustrating aspects was the evergreen Fibonacci Numbers example. It was everywhere, but I had the impression that reading the same stories about overlapping subproblems and optimal substructures over and over, always with the same example and the same images in front of my eyes, wasn’t really paying off the effort. Thus, I’ve decided to write this article, in which we will brush up on some dynamic programming principles starting from a practical example, but this time ditching the Fibonacci Numbers!

The problem
This medium-difficulty problem, titled “Jump Game VI”, was part of a challenge hosted by LeetCode:

You are given a 0-indexed integer array nums and an integer k.

You are initially standing at index 0. In one move, you can jump at most k steps forward without going outside the boundaries of the array. That is, you can jump from index i to any index in the range [i + 1, min(n - 1, i + k)] inclusive.

You want to reach the last index of the array (index n - 1). Your score is the sum of all nums[j] for each index j you visited in the array.

Return the maximum score you can get.

Constraints:
1 <= nums.length, k <= 10⁵
-10⁴ <= nums[i] <= 10⁴

Example:
Input:
nums = [1,-1,-2,4,3,-7], k = 2
Output: 0
Explanation: You can choose your jumps forming the subsequence [1,-1,4,3,-7]. The sum is 0.

The First Approach
First off, I generally like to draw down the recursion tree, because it helps me visualize the problem and come up with possible solutions:

Scanning the whole solution space as shown in the above image is not a wrong approach; nevertheless, it is computationally too expensive: the time complexity is O(k^n), where n is the number of elements in the input array and each node of the recursion tree branches into up to k children. It might work for small inputs, but it has no chance with 10⁵ elements.
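To make the blow-up concrete, here is a brute-force sketch of that recursion tree in Python (the function name is mine; this is for illustration only, not a submission-ready solution):

```python
def max_score_brute(nums, k):
    """Explore every possible jump sequence: exponential time, illustration only."""
    n = len(nums)

    def best_from(i):
        if i == n - 1:
            return nums[i]
        # Branch into every reachable index: up to k children per node.
        return nums[i] + max(best_from(j) for j in range(i + 1, min(n, i + k + 1)))

    return best_from(0)

print(max_score_brute([1, -1, -2, 4, 3, -7], 2))  # 0
```

Each call spawns up to k further calls, which is exactly why this version cannot cope with large inputs.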

The Second Approach
When it comes to solving problems with dynamic programming, the most crucial step is finding the recursive equation linking two subproblems, where the solution of one is required to compute the result of another, similar subproblem. What does all of that mean? To figure it out, we can take a closer look at the above recursion tree: it didn’t help us solve the problem right away, but perhaps it contains some information we can tap into:

Let’s consider any of the nodes.

In this specific node, we have reached the element [4].
Whatever the path that took us from the root to this node, from now on we will try to understand where to jump next to maximize the final result.

In other words, whatever the actual score is so far, how can we maximize the sum from node 4 onwards? If this specific case looks a bit confusing to you, let’s turn it into a more generic one:

We are considering the element [4] at position i. Independently of what we picked beforehand, and whatever elements sit after it, how can we get the maximum final sum?

We can’t do much for the past, but for sure we want to follow the path leading to the maximum sum!

Basically, at each step we maximize the final sum if we pick the current node and add the best result we can get from the subarray of elements that come after it.

I know, it really sounds like the original problem itself: given an array, we want to take the first node and add its value to the best result we can get from the array containing all the remaining nodes.

So let’s introduce something that can help us build the solution: MaxSumStartingAt[i], an array that, we will assume, contains the best result for each subarray of elements from index i to the end. We still don’t know how we will populate this array (one problem at a time), but at least now we can write down the equation linking the subproblems:

If it wasn’t clear enough, let me write it down in plain English:

MaxSumStartingAt[i] =
array[i] + MAX(MaxSumStartingAt[i+1], …, MaxSumStartingAt[i+k])

means:

The maximum sum I can obtain from my array is the value of the first element (The problem specifically says that we have to keep the first element) plus the maximum sum I can obtain from the subarrays starting from one node I can jump to.

Which turns the problem into finding the solution for MaxSumStartingAt(0):

It’s easy to spot that in order to compute MaxSumStartingAt(i), first we need to calculate MaxSumStartingAt(i+1), MaxSumStartingAt(i+2), …, MaxSumStartingAt(i+k).
And to compute MaxSumStartingAt(i+1), we first need MaxSumStartingAt(i+2), MaxSumStartingAt(i+3), …, MaxSumStartingAt(i+1+k), and so on.
For example, given nums = [1,-1,-2,4,3,-7], k = 2

To compute the MaxSumStartingAt[0], first I need to know array[0] (which is nums[0], easy one), MaxSumStartingAt[1] and MaxSumStartingAt[2]

So apparently, to compute an element of the MaxSumStartingAt array, we need the results of up to k elements ahead.
Where should we start then? Obviously from the element that doesn’t need any other results: in this case, the last one (check out the graph above; no outgoing arrows from MaxSumStartingAt[5] means that its result doesn’t hinge on any other result).
MaxSumStartingAt[5] is nothing other than the last element itself. As a matter of fact, if the input array were nums = [-7], the maximum sum we could obtain would be the element itself.

Computing the MaxSumStartingAt[5] gives us all the information we need to compute MaxSumStartingAt[4]:

Now that we have both MaxSumStartingAt[5] and MaxSumStartingAt[4], we can compute MaxSumStartingAt[3]:

And now that we have MaxSumStartingAt[3] and MaxSumStartingAt[4], we can compute MaxSumStartingAt[2], and so on down to MaxSumStartingAt[0].
The final result is:

And our solution sits at MaxSumStartingAt[0].
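The whole right-to-left fill above can be reproduced in a few lines of Python (a sketch; `best` is my name for the MaxSumStartingAt array):

```python
nums, k = [1, -1, -2, 4, 3, -7], 2
n = len(nums)
best = [0] * n                    # best[i] plays the role of MaxSumStartingAt[i]
best[n - 1] = nums[n - 1]         # base case: the last element by itself
for i in range(n - 2, -1, -1):    # fill from right to left
    best[i] = nums[i] + max(best[i + 1 : min(n, i + k + 1)])
print(best)      # [0, -1, -2, 0, -4, -7]
print(best[0])   # 0
```

Reading the table right to left reproduces the steps we just walked through, and the answer sits at `best[0]`.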

I hope everything is clear now. To come up with a dynamic programming solution, we had to figure out how the problem can be broken down into smaller, identical subproblems, and then link them together, because the solution of one contributes to the solution of the next. Drawing a DAG (directed acyclic graph) to visualize the dependencies between the subproblems is a crucial step for me, but the more experience you gain, the faster and more immediate this step becomes.

Let’s jump into the code:

The Code
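Here is one possible Python implementation of the idea above (a sketch, not necessarily the original code). Note that the straightforward fill costs O(n·k), which can be too slow when both n and k reach 10⁵; this version keeps the maximum of the jump window in a monotonic deque, a standard optimization not covered above, bringing the total cost down to O(n):

```python
from collections import deque


def max_result(nums: list[int], k: int) -> int:
    """MaxSumStartingAt computed right to left, with a monotonic deque
    holding candidate indices in decreasing order of their best sums."""
    n = len(nums)
    best = [0] * n
    best[n - 1] = nums[n - 1]       # base case: the last element by itself
    window = deque([n - 1])         # indices j in [i+1, i+k], best[] decreasing
    for i in range(n - 2, -1, -1):
        while window and window[0] > i + k:
            window.popleft()        # front index fell out of the jump window
        best[i] = nums[i] + best[window[0]]
        while window and best[window[-1]] <= best[i]:
            window.pop()            # drop candidates dominated by best[i]
        window.append(i)
    return best[0]


print(max_result([1, -1, -2, 4, 3, -7], 2))  # 0
```

Swapping the deque for a plain `max` over the slice `best[i+1 : i+k+1]` gives the simpler O(n·k) version that mirrors the equations above one-for-one.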

Conclusions
I hope you enjoyed this article and learned something new. I can’t wait to read your feedback.
For other challenge solutions, don’t hesitate to visit my profile. Have a great day!

“To teach yourself is a noble cause, but it is even nobler to teach others” — Mark Twain
