5:13 Longest Palindromic Sequence (problem 1)
21:55 Second problem (Optimal Search Trees) starts
54:14 Alternating Coins Game (3rd and last example)
you saved my energy :D
Thanks man. Saved me time which I spent on writing this comment lol
@@robertalaverdyan3150 Haha. It took you THAT LONG to write this comment !? :p ...Just kiddin! Glad it could be of use to you.
All of Prof. Devadas's lectures are just amazing. In particular, this is probably the best video I have watched about DP.
One of the best DP lectures I have ever seen. Thanks to MIT OpenCourseWare and Prof. Srinivas Devadas.
A counter-example to the greedy optimal BST algorithm using 3 nodes has w1 = 10, w2 = 11, w3 = 12. Cheers!
Srinivas and Erik you guys are the real Gs!
If you really want a counterexample for 3 nodes, I suggest 1 (w=20), 2 (w=19), 3 (w=18).
Greedy would unbalance the tree into the chain 1 -> 2 -> 3, giving a cost of 112.
The optimal solution is the balanced tree with 2 at the root and 1, 3 as children, which costs 95.
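For anyone who wants to check the numbers, here is a small Python sketch (my own code and naming, not the lecture's) that computes the greedy chain cost and the DP optimum for these weights, using the recurrence from the lecture.

from functools import lru_cache

w = [20, 19, 18]                       # w[0] is key 1, w[1] is key 2, w[2] is key 3
prefix = [0]
for wk in w:
    prefix.append(prefix[-1] + wk)

def w_sum(i, j):                       # sum of w[i..j]; empty range if i > j
    return prefix[j + 1] - prefix[i] if i <= j else 0

@lru_cache(maxsize=None)
def e(i, j):                           # cost of an optimal BST on keys i..j (root at depth 1)
    if i > j:
        return 0
    # recurrence from the lecture: e(i, j) = min over roots r of e(i, r-1) + e(r+1, j) + w(i, j)
    return min(e(i, r - 1) + e(r + 1, j) + w_sum(i, j) for r in range(i, j + 1))

greedy_cost = sum(wk * depth for depth, wk in enumerate(w, start=1))   # chain 1 -> 2 -> 3
print(greedy_cost)    # 112
print(e(0, 2))        # 95 (achieved by the balanced tree with 2 at the root)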
Although I have absolutely no idea what you are talking about, I have to concur.
Good example. Also note that if you just choose the individual weights to be N+1, N+2, ..., N+n, greedy gives a completely unbalanced tree (similar to a linked list). But if N is high enough, those weight differences become negligible, so a balanced BST would be best.
It was a very good lecture; everything was simplified and the class was quite interactive. Thanks!
dp[i, j] = min(dp[i, j], e(i, r-1) + e(r+1, j) + wr * (depth+1)) also works if you add the depth every time you go down the recursion tree.
@@useForwardMax You are completely right. In fact, I did not have r in the dp when I tested it, it just slipped when I made this comment, haha.
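In case it helps, here is a minimal memoized sketch of that depth-carrying variant (my own code and naming, so the exact signature differs from the comment above, and it reuses the hypothetical weights 20, 19, 18 from the counterexample). The trade-off is the extra depth parameter, so it tracks O(n^3) subproblems instead of O(n^2).

from functools import lru_cache

w = [20, 19, 18]   # hypothetical weights from the counterexample above (0-indexed keys)

@lru_cache(maxsize=None)
def e(i, j, depth):
    # cost of the best BST on keys i..j whose root sits at the given absolute depth
    if i > j:
        return 0
    return min(w[r] * depth + e(i, r - 1, depth + 1) + e(r + 1, j, depth + 1)
               for r in range(i, j + 1))

print(e(0, 2, 1))   # 95, the same optimum the w(i, j)-based recurrence gives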
I can see Eric Demaine sitting in the first row at 16:16. He might be wondering: no subproblems and guessing, just directly the recurrence relation :D
hahahaha
Erik*. Get It Right! =P
Who is he?
@@mridulkumar786 The other lecturer. Google him. =)
@@mridulkumar786 Erik is a prodigy; he received his PhD at age 20.
"If you go first your guaranteed to not lose". Man goes second and still won.
In the last example I don't really understand why we should assume that the second player chooses the min of our options at that moment. Isn't that just playing greedily? It is said that they play optimally, so why don't we model their decision as another DP?
Why do we need to add the weights of all the nodes (51:40)? I thought we only needed to add wr (i ≤ r ≤ j).
It is very subtle, indeed. As the instructor said, for e(i, j) you add the weights wi through wj once. At the next depth, when you compute e(i, r-1), you again add wi through w(r-1), so in total wi through w(r-1) have been added twice, which corresponds to their increased depth. The same goes for e(r+1, j): w(r+1) through wj are added once more in that calculation, so in total they too are added twice. But wr is added only once, since the root stays at the upper level and is not included in either e(i, r-1) or e(r+1, j).
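To make the counting concrete with the 3-key counterexample above (my own example, not from the lecture): picking r = 2 as the root gives e(1,3) = w(1,3) + e(1,1) + e(3,3) = (w1 + w2 + w3) + w1 + w3 = 2*w1 + 1*w2 + 2*w3, i.e. each weight multiplied by its depth in the balanced tree (w2 at depth 1, w1 and w3 at depth 2).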
@@yesil1026 Woww!! That's a nice explanation Thanks :)
The third question should be a minimax problem.
Erik Demaine 16:17
What are the wi's and pi's? I didn't understand that yet.
First example question: the pseudocode shown at 22:33 only works with contiguous palindromes, am I wrong?
Whereas the brain teasers at the start weren't necessarily contiguous, e.g. "rotator" out of "turboventilator".
I don't see how the code can ever figure out "rotator" if you start it with L(0, end).
Also, I don't see why the number of subproblems is n squared.
The code as written only calculates the length of the longest (not necessarily contiguous) palindromic subsequence, but it will in fact give L(0, 14) = 7:
L(turboventilator) = max(L(urboventilator), L(turboventilato)), since the first and last characters differ.
(As for the count: there is one subproblem L(i, j) per pair i ≤ j, hence O(n²) subproblems.)
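If it helps, here is a tiny memoized sketch of that recurrence (my own code, not the pseudocode from the board); it returns 7 for "turboventilator".

from functools import lru_cache

def lps_length(s):
    @lru_cache(maxsize=None)
    def L(i, j):
        # length of the longest palindromic subsequence of s[i..j]
        if i > j:
            return 0
        if i == j:
            return 1
        if s[i] == s[j]:
            return 2 + L(i + 1, j - 1)
        # ends differ: drop one end or the other, as in the comment above
        return max(L(i + 1, j), L(i, j - 1))
    return L(0, len(s) - 1)

print(lps_length("turboventilator"))  # 7, e.g. "rotator" as a noncontiguous subsequence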
@@digama0 👍🏻
Greedy al gore rhythm sounds like my kind of thing! I use greedy algorithms at my home in my own programs all the time
Why were there only 5 trees for n=3? (Why did he look only at the different structures, and not at different ways to form the multiplication?)
Because it is a binary search tree: the sorted order of the keys is fixed, so only the shape can vary, and there are 5 possible shapes on 3 nodes.
Thank You sir !! :)
Can anyone help me with the complexity of the coin game question? How is the number of subproblems n squared?
The number of pairs (i, j) with i ≤ j is n(n+1)/2, which is O(n²).
Why do we need to minimize over the opponent's moves? If we just consider all possible moves the opponent can play and maximize, shouldn't we still get the correct answer? Unless we specifically demand that the opponent optimizes as well.
We are actually asking the opponent to play their best move by minimizing our optimal gain.
Maybe an easier (to understand) solution is that given V(i, j) you pick i or j and pass the move to the opponent. Whatever his best is, you get the rest. So V(i, j) = max(v[i] + sum(i+1, j) - V(i+1, j), v[j] + sum(i, j-1) - V(i, j-1)). Here sum(i, j) is O(1) given O(n) precomputation.
I'm guessing that since it's zero-sum, maximizing our score and minimizing theirs would give an identical solution.
@@antontitov2 I am not sure what you mean by sum(i, j) here. It looks to me as if your and Praveen's point can be made more clearly by omitting the references to sum, like so (using C's "?:" syntax): V(i, j) = (i == j) ? v[i] : max(v[i] - V(i+1, j), v[j] - V(i, j-1)).
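For what it's worth, here is a small memoized sketch of that difference formulation (my own code and example coin values, purely illustrative): V(i, j) is the best margin (my coins minus the opponent's) on coins v[i..j] when I move first and both sides play optimally.

from functools import lru_cache

def first_player_margin(v):
    @lru_cache(maxsize=None)
    def V(i, j):
        if i == j:
            return v[i]
        # take an end coin; the opponent then plays the same game on the rest,
        # so whatever margin they achieve counts against me
        return max(v[i] - V(i + 1, j), v[j] - V(i, j - 1))
    return V(0, len(v) - 1)

coins = (5, 3, 7, 10)                  # hypothetical example values
print(first_player_margin(coins))      # 5: the first player ends up 15 vs 10 with optimal play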
Actually, we are computing the maximum value that we can be certain to win even if the opponent plays optimally.
So I don't get it... why would a teacher be sitting on the students' benches when another teacher is teaching...? :X
To learn; there's always time to learn.
So do Srinivas Devadas and Erik Demaine both teach the same class? Or are these two different classes? Also, is this master's material?
+Jonathan Morales Strength There are three instructors for this undergraduate course: Prof. Erik Demaine, Prof. Srinivas Devadas, Prof. Nancy Lynch. See the course on MIT OpenCourseWare for more information at ocw.mit.edu/6-046JS15
This is an undergraduate-level course. They taught it together.
1:04:04 DP maximize
One thing I don't get about all these recursions: why don't you guys mention that the stack is limited and can only hold a limited number of recursive calls, so for a big enough input you might get a stack overflow?
Or is this all just theory, and one shouldn't be thinking about how to use it in practice?
Virtually every DP problem using recursion + memoization (i.e. top-down approach) can be implemented in an iterative way starting from the base cases and building up (i.e. bottom-up approach). Concretely, if the subproblems depend, say, on two parameters i,j in some range, then you iteratively fill a 2-dimensional matrix which at the entry (i,j) stores the value of subproblem i,j.
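For instance, a bottom-up version of the longest-palindromic-subsequence problem from the first example might look like this (my own sketch, just one possible implementation): the table is filled by increasing range length, so no recursion stack is needed.

def lps_length_bottom_up(s):
    n = len(s)
    if n == 0:
        return 0
    L = [[0] * n for _ in range(n)]        # L[i][j] = answer for s[i..j]
    for i in range(n):
        L[i][i] = 1                        # base case: single characters
    for length in range(2, n + 1):         # build up from shorter to longer ranges
        for i in range(0, n - length + 1):
            j = i + length - 1
            if s[i] == s[j]:
                L[i][j] = 2 + (L[i + 1][j - 1] if length > 2 else 0)
            else:
                L[i][j] = max(L[i + 1][j], L[i][j - 1])
    return L[0][n - 1]

print(lps_length_bottom_up("turboventilator"))  # 7, same as the recursive version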
Also, if an algorithm requires a deep stack you can always heap-allocate it by using an explicit stack data structure so that you won't overflow unless you actually run out of physical memory. Usually, even "bad" recursive algorithms only recurse O(n) deep at most, so you need O(n) extra space which is reasonable.
16:16 That's Erik, right?
Ya Erik Demaine ruclips.net/video/OQ5jsbhAv_M/видео.html
What a catch!
I guess we found the reason why there is so much plastic in the ocean: those students solve too many problems and receive too many frisbees.
With all due respect, you should start monetizing your videos; that would also help you.