[Algorithm] Magical Recurrence


In a recent algorithmic coding contest in which I didn't do well, the hardest problem killed me; it was the one that separated me from the superior coders. Still, it's an interesting problem. It is essentially a 0/1 knapsack problem on a tree where each node is associated with an item of a given value and size, and the only constraint is that the set of selected nodes must be connected.
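To make the statement concrete, here is a brute force one could write for tiny inputs (my own sketch, not the intended solution; names like brute_force, adj, size_, and the capacity h are mine): enumerate every subset of nodes, keep only the connected ones that fit in the bag, and take the best value.

  // Brute force for the connected 0/1 knapsack on a tree: exponential, tiny N only.
  #include <vector>
  #include <algorithm>
  using namespace std;

  long long brute_force(const vector<vector<int>>& adj,   // adjacency lists of the tree
                        const vector<long long>& value,   // item values
                        const vector<int>& size_,         // item sizes
                        int h) {                          // bag capacity
      int n = adj.size();
      long long best = 0;                                 // the empty selection is allowed
      for (int mask = 1; mask < (1 << n); ++mask) {
          long long val = 0;
          int tot = 0, start = -1;
          for (int u = 0; u < n; ++u)
              if (mask >> u & 1) { val += value[u]; tot += size_[u]; start = u; }
          if (tot > h) continue;
          // check that the chosen nodes are connected: DFS restricted to the subset
          vector<bool> vis(n, false);
          vector<int> stk = {start};
          vis[start] = true;
          int reached = 0;
          while (!stk.empty()) {
              int u = stk.back(); stk.pop_back(); ++reached;
              for (int v : adj[u])
                  if ((mask >> v & 1) && !vis[v]) { vis[v] = true; stk.push_back(v); }
          }
          if (reached == __builtin_popcount(mask)) best = max(best, val);
      }
      return best;
  }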

Since the tree is not rooted, the problem differs from the knapsack problem on a rooted tree, i.e. TPK, only in that the root has to be chosen in TPK, if possible. So one insight I failed to come up with, which makes TPK applicable, is to iteratively find the "central" node of the tree, use it as the root, and solve the rooted sub-problem recursively. Then, if solving TPK on a tree with N nodes takes O(N), the overall complexity is O(N log N) instead of O(N²). Happily I came up with my own O(N) algorithm to find a central node after the contest, so it is a good exercise to think of one yourself. To clarify, a central node is one none of whose child subtrees has more than N/2 nodes.
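For reference, here is one way such a centroid search could look in O(N) (my own sketch; find_centroid and adj are names I made up): root the tree anywhere, compute subtree sizes, and pick a node for which every component left after removing it, i.e. its child subtrees and the part above it, has at most N/2 nodes.

  // Find a central node (centroid) of a tree given as adjacency lists, in O(N).
  #include <vector>
  #include <algorithm>
  using namespace std;

  int find_centroid(const vector<vector<int>>& adj) {
      int n = adj.size();
      vector<int> sz(n, 1), par(n, -1), order;
      order.reserve(n);
      // iterative DFS from node 0: records a traversal order and parents
      vector<int> stk = {0};
      vector<bool> seen(n, false);
      seen[0] = true;
      while (!stk.empty()) {
          int u = stk.back(); stk.pop_back();
          order.push_back(u);
          for (int v : adj[u])
              if (!seen[v]) { seen[v] = true; par[v] = u; stk.push_back(v); }
      }
      // accumulate subtree sizes bottom-up (children appear after parents in order)
      for (int i = n - 1; i > 0; --i) sz[par[order[i]]] += sz[order[i]];
      // a centroid leaves no component bigger than n/2 when removed
      for (int u = 0; u < n; ++u) {
          int biggest = n - sz[u];                        // the part above u
          for (int v : adj[u]) if (v != par[u]) biggest = max(biggest, sz[v]);
          if (biggest <= n / 2) return u;
      }
      return 0;                                           // unreachable for a valid tree
  }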

The main reason I'm writing this article is the algorithm that solves TPK in O(Nh), where h is the bag capacity. During the contest I did find a paper describing a DFS-based DP algorithm, which I adapted to keep D[u][k][b], meaning the maximum value achievable with capacity b if every node on the path from the root to node u is chosen, and all other nodes up to u in DFS traversal order, plus the k-th child subtree of u, are considered; when k is 0 no child subtree of u is considered. For anyone interested, it is described in [1]. I'm guessing the better solution we're going to discuss is covered by [2], but I didn't manage to get its content for free within two days; the least I'd have had to pay was $24. Regardless of whether [2] is the key to the solution, I've at least figured something out after the contest.

What is interesting is that the solution posted after the contest, and likely covered in [2], keeps D[u][b], the maximum value attainable up to node u in DFS traversal order with capacity b. The recurrence is surprisingly simple:

Recurrence

  1. if (size[u] ≤ b) { D[u][b] = MAX(D[u − 1][b − size[u]] + value[u], D[u][b]); }
  2. D[u + subtree_size[u] − 1][b] = MAX(D[u + subtree_size[u] − 1][b], D[u − 1][b]);

Here u − 1 is the node traversed right before u; D[0][b] is taken to be 0 for every b (the empty prefix before the first node), and every other entry starts out with no value.
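Before asking why this works, here is how the recurrence might look as real code (a sketch in my own notation; tree_knapsack, children, value, size_, and h are my names, and the "no value" initialization follows my reading of the proof below). It computes the best selection in which every chosen node's ancestors are chosen too, i.e. a connected set containing the root, or the empty set.

  #include <vector>
  #include <algorithm>
  #include <functional>
  #include <climits>
  using namespace std;

  // Best value of a selection whose chosen nodes are closed under taking parents
  // (a connected set containing the root, or the empty set), total size at most h.
  // Runs in O(N*h).
  long long tree_knapsack(const vector<vector<int>>& children,   // rooted at node 0
                          const vector<long long>& value,
                          const vector<int>& size_,
                          int h) {
      int n = children.size();
      vector<int> order, sub(n, 1);
      order.reserve(n);
      function<void(int)> dfs = [&](int u) {        // preorder + subtree sizes
          order.push_back(u);
          for (int v : children[u]) { dfs(v); sub[u] += sub[v]; }
      };
      dfs(0);

      const long long NONE = LLONG_MIN / 2;         // "no D value yet"
      // D[i][b] is the table above with i = 1 + DFS position; D[0][b] = 0 is the
      // empty prefix before the first node.
      vector<vector<long long>> D(n + 1, vector<long long>(h + 1, NONE));
      for (int b = 0; b <= h; ++b) D[0][b] = 0;

      for (int i = 0; i < n; ++i) {
          int u = order[i];
          int last = i + sub[u];                    // 1 + position of the last node in u's subtree
          for (int b = 0; b <= h; ++b) {
              // (1) take u on top of the node traversed right before it
              if (size_[u] <= b && D[i][b - size_[u]] != NONE)
                  D[i + 1][b] = max(D[i + 1][b], D[i][b - size_[u]] + value[u]);
              // (2) skip u together with its whole subtree
              if (D[i][b] != NONE)
                  D[last][b] = max(D[last][b], D[i][b]);
          }
      }
      return D[n][h];                               // value at the last node in DFS order
  }

For the unrooted contest problem one would call this with a central node as the root and recurse into the remaining components, as described earlier.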

Does the recurrence make sense? The second update is very obvious -- if we don't take node u, then the whole subtree rooted at u is not taken. However, the first update doesn't sound correct to me -- if we take node u, then we have to make sure all ancestors of u are taken too, but D[u − 1][b − size[u]] doesn't imply that; it is simply the optimal value obtainable when considering nodes up to the one right before u!

Confused, I created a simple test case and checked the D values of some intermediate nodes. In particular, I built a first tree and appended one more node at the end of its DFS traversal to get a second tree. Then I ran the algorithm and found that the D values of a node present in both trees DIFFER!

So the problem setter's interpretation of D is obviously NOT precise, if his/her solution actually works. Eventually, I was able to show that this magical recurrence does work.

Lemma

If u is the last node traversed by DFS, then D[u][b] is the optimal value that can be collected from the tree with bag capacity b.

Proof

We proceed by induction on N, the tree size, since induction is close to DP in nature. Suppose the Lemma holds for every tree with up to N − 1 nodes (the base case of a single node is immediate). First, the case where the root has only one child is trivial -- we can descend from the root until we hit a node with multiple children and reduce the problem. So assume the root has multiple children, and consider the last child u of the root traversed by DFS. Let t(u) be the subtree rooted at u, let T be the original tree, and let T1 be T with t(u) removed, so that the DFS traversal of T1 ends at u − 1, the node visited right before u.

By induction, D[u − 1][b] is the optimal value for T1. Now the only subtree_size affected by adding t(u) is subtree_size[root]. The only update this moves is the (2) update issued at the root, which in T lands on v, the last node of T, instead of on u − 1; since that update merely contributes the value 0 of taking nothing, and the same option is now credited to D[v][b], it doesn't matter to us. So D[x][] is otherwise unchanged for any x between the root and u − 1, inclusive, and we will show that D[v][b] is the optimal value for T.

How will D[v][b] get updated? By (2) in the recurrence it is at least D[u − 1][b] -- sure, if we don't take u, then D[u − 1][b] is our answer, and by induction we know it is correct. In fact, apart from the root update just mentioned, this is the only (2) update on t(u) that reads a value from T1, because by the nature of DFS no other subtree rooted in T1 ends at a node of t(u). The only remaining influence of T1 on t(u) is the update of D[u][b] based on (1).

If b < size[u], (1) cannot fire, so no node of t(u) gets a D value at capacity b except v, which gets D[v][b] = D[u − 1][b]. This is expected, because nothing in t(u) can be taken when size[u] alone already exceeds the capacity b.

If b ≥ size[u], we set D[u][b] = D[u − 1][b − size[u]] + value[u] directly. Then the DP computes D in the subproblem of t(u) as usual, except for the added term D[u − 1][b − size[u]] on D[u][b]. All other nodes of t(u) are updated as usual, but with the blessing of that added term, so we know that before the aforementioned (2) update on D[v][b], that entry holds the optimal value we can get when the entire tree T is considered and node u is taken. After the (2) update, D[v][b] also accounts for not taking t(u). Therefore D[v][b] is optimal.

Q.E.D.
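For readers who, like me, only believe a magical recurrence after poking at it, here is a small harness one could use to check the Lemma numerically (entirely my own; it reuses the tree_knapsack sketch from above and compares it against a brute force over all parent-closed subsets of small random rooted trees).

  // Randomized sanity check: tree_knapsack vs. brute force on small random rooted trees.
  #include <cstdio>
  #include <cstdlib>
  #include <vector>
  #include <algorithm>
  using namespace std;

  long long tree_knapsack(const vector<vector<int>>& children,
                          const vector<long long>& value,
                          const vector<int>& size_, int h);   // the sketch above

  int main() {
      srand(12345);
      for (int iter = 0; iter < 1000; ++iter) {
          int n = 1 + rand() % 10, h = rand() % 15;
          vector<int> par(n, -1), size_(n);
          vector<long long> value(n);
          vector<vector<int>> children(n);
          for (int u = 1; u < n; ++u) {
              par[u] = rand() % u;                            // random rooted tree on {0,...,n-1}
              children[par[u]].push_back(u);
          }
          for (int u = 0; u < n; ++u) { size_[u] = 1 + rand() % 5; value[u] = rand() % 100; }

          long long naive = 0;                                // best over all parent-closed subsets
          for (int mask = 0; mask < (1 << n); ++mask) {
              long long val = 0; int tot = 0; bool ok = true;
              for (int u = 0; u < n; ++u) if (mask >> u & 1) {
                  if (par[u] != -1 && !(mask >> par[u] & 1)) { ok = false; break; }
                  val += value[u]; tot += size_[u];
              }
              if (ok && tot <= h) naive = max(naive, val);
          }
          if (naive != tree_knapsack(children, value, size_, h)) { puts("mismatch"); return 1; }
      }
      puts("all good");
      return 0;
  }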

By the way, although both DP algorithms have the same asymptotic time complexity, I couldn't make the one in [1] pass the contest's time limit even with quite a bit of optimization! That is because each of its nodes has to compute one extra value, so the table it keeps is roughly twice as big. Such is the power of the magical recurrence!


post by Shen-Fu Tsai

References

[1] D. S. Johnson and K. A. Niemi. On knapsacks, partitions, and a new dynamic programming technique for trees.
[2] Geon Cho and Dong X. Shaw. A depth-first dynamic programming algorithm for the tree knapsack problem.