At first I was completely confused and irritated, so I started looking for comments expressing the same feeling. Then I watched it again and started to get the idea, and now that I've finished a third time things are crystal clear. For those who feel lost at the beginning, don't give up; try a few more times and you'll get it.
Your repository of algorithms containing running times for each, and video explanations has been absolutely invaluable! Thanks so much!
simple + clear + concise explanation = thank you so much.
please continue ❤
I completely don't understand.
Mission completed unsuccessfully
✌️
fellow brother
the dp approach is a little hard to comprehend, let alone code up.
I advise you to learn the memoization approach first, since it's a lot more intuitive.
fr whats he yapping
Thank you for giving us your time and sharing your knowledge.
Thank you so much for the clear explanation of how to know which items are added to the knapsack.
Best explanation I have found so far. Thank you.
I was having a really hard time with DP, but it's so much easier to imagine the solution to every DP problem as the shortest-paths tree of a directed acyclic graph. So in this case it would follow the topological sort and ONLY update (or relax) a cell if the DP's condition is met. For actual graphs this is exactly what we do to find the shortest-paths tree of a DAG.
Hi William, your graph theory playlist is awesome, but your DP playlist has a flaw. These videos explain how the solutions to these problems work, but not how one can use the general steps of DP to come up with them. I think the reason DP is hard for most people is the pedagogy. I think DP should be taught by identifying sub-problems, showing the recurrence relationship between them, and then showing how we can connect it all together to solve the problem. While watching this, my brain tended to focus more on the visual aspect of how the table is being built than on what every step is telling me to do. I hope I made my point clear. Thanks for making these videos for us for free. I'll pay it forward by making such videos myself once I have fully understood these concepts from the ground up.
Thanks for the feedback Saad, I agree that just showing someone the solution is a bit like giving a man a fish, rather than teaching him how to fish. In more recent videos I've been trying to do more of the latter as I believe it helps in the learning process.
Finally I understand. It took me a few tries but it's clear now, thank you.
Thank you so much! I'm studying for Brazilian informatics; it's difficult to find content in my native language, and the ones I did find I didn't understand at all. The second time I watched this I understood everything.
Thanks, I understood that no possibility is missed, but at the start it feels like some possibility would be missed, since we gave the first element priority over the second, and the second over the third!!
Hi William, be highly blessed. You have really educated me on a lot of programming concepts. Keep sharing your gift.
💯
Crystal clear explanation! Appreciate your effort!!
With this third video (and the two previous ones) this topic has finally cleared up. Thanks to the author!
Nice video, but it doesn't really explain the intuition behind how one would arrive at this solution.
This was very simple and perfect explanation 🙏🙏
Beautiful explanation! Thanks a million times
I think entries [4,4] and [5,4] should both be 5 instead of 6 in your example. [5,6] should also be 8. Nonetheless, good video. I didn’t understand any solutions until this one, so thanks!
Is there a way to solve a knapsack problem that has extra restrictions, where items can't be adjacent? Or, to be specific, three adjacent items aren't allowed but two are.
Invaluable content THANK YOU SO MUCH
Could you simply:
1. Create ObjArray = [{weight: 1, value: 5}, {weight: 2, value: 3}, {weight: 4, value: 8}, {weight: 3, value: 6}, {weight: 4, value: 2}]
2. Create a ratioArray[] containing the simplified ratios of each object such as ratioArray = [1:5, 2:3, 1:2, 1:2, 2:1]
3. Sort ratioArray[] from highest to lowest
4. Loop through ratioArray[] and add elements to the knapsack until the total weight meets the weight capacity.
In the case where the total weight of the chosen items is less than the capacity, but the next item in the sorted order is heavier than the room you have left, take the remaining capacity (say 2 kg left) and look for the best weight-to-value ratio among all items of at most 2 kg.
That way, for any given weight, the best weight-to-value selection would be made.
There are some edge cases, for example imagine a capacity of 100 with weights of [51,50,50] and values [3,2,2].
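To make that edge case concrete, here is a minimal sketch (my own Java illustration with made-up class and variable names, not code from the video) of the greedy-by-ratio idea proposed above, run on exactly those numbers:

import java.util.Arrays;
import java.util.Comparator;

public class GreedyKnapsackCounterexample {
  public static void main(String[] args) {
    int capacity = 100;
    int[] weights = {51, 50, 50};
    int[] values = {3, 2, 2};

    // Sort item indices by value-to-weight ratio, highest first (the proposed greedy rule).
    Integer[] order = {0, 1, 2};
    Arrays.sort(order, Comparator.comparingDouble((Integer i) -> -(double) values[i] / weights[i]));

    int remaining = capacity, greedyValue = 0;
    for (int i : order) {
      if (weights[i] <= remaining) { // take an item only if it still fits
        remaining -= weights[i];
        greedyValue += values[i];
      }
    }
    // Greedy grabs the weight-51 item first (ratio 3/51 is the highest), after which
    // neither weight-50 item fits, so it finishes with value 3. The optimal choice is
    // the two weight-50 items, for a value of 4.
    System.out.println("greedy value = " + greedyValue); // prints 3
  }
}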
What you are describing is a greedy algorithm. Greedy algorithms do not always provide the optimal solution.
I wanted to find out why. Wikipedia says (en.wikipedia.org/wiki/Greedy_algorithm#Matroids):
Intuitively, you might want to consider:
"Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties:
- Greedy choice property
- Optimal substructure
Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the unique worst possible solution. One example is the travelling salesman problem"
More formally:
"A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally."
I asked GPT4 to explain the above statement to a 5 year old. Take this with a grain of salt as there is no guarantee this is correct:
"Sure, let's imagine we're playing a game with building blocks. Each block is a different shape and color. Now, I give you some rules to play this game:
We start with no blocks.
If you have a good set of blocks (we'll call it a 'good set' if it follows our rules), and you want to take away some blocks, it'll still be a good set!
Suppose you have a good set, but it's smaller than your friend's good set. Then, there's definitely a block in your friend's set that you can add to your set and it will still be a good set.
Now, when we're playing the game, you want to make the 'best' set of blocks you can, according to these rules. So, what do you do? You start with no blocks, then you keep adding the 'best' block you can that doesn't break the rules. This is kind of like choosing the biggest piece of candy you can without getting a tummy ache!
This is a lot like something mathematicians call a 'matroid'. It's a special rule for playing with blocks (or numbers, or anything else!) that means the 'greedy' strategy - always picking the best thing next - will give you the best set of blocks you can have!
Remember, this is a really simplified explanation, and the real idea is a bit more complicated. But I hope it helps you get a sense of what matroids are like."
Given a problem to solve, how can one determine whether it can be formulated as a matroid?:
"Identifying whether a problem can be formulated as a matroid essentially involves checking if the problem satisfies the three properties of a matroid. The problem or system can be seen as a matroid if:
The empty set is independent: This is usually straightforward to check. Essentially, you need to see if doing nothing or having no elements is a valid option in your problem.
Hereditary property: This means if a set is independent, then all of its subsets should be independent too. In practical terms, if a solution to your problem is valid, then any solution that "does less" should also be valid. If there are exceptions to this rule in your problem, then it can't be formulated as a matroid.
Exchange property: This is often the trickiest to check. It means that if you have two independent sets and one is larger than the other, then there must be some element in the larger set that you can add to the smaller one to still keep it independent. Essentially, it says that there's always a way to "grow" a smaller solution into a larger one while keeping it valid. If this doesn't hold, then the problem can't be formulated as a matroid.
Applying these properties to your problem can require some insight and creativity, especially since the concept of an "element" and an "independent set" might not be immediately obvious. In many problems, it can be useful to think about what the "elements" and "independent sets" could represent and whether the matroid properties are satisfied. For example, in a network design problem, the elements might represent potential links in the network, and an independent set might represent a subset of links that forms a tree.
If all three properties are satisfied, then the problem can be formulated as a matroid, and a greedy algorithm can be used to find an optimal solution."
Perform this analysis on the 0/1 knapsack problem:
"The 0/1 Knapsack problem can be simply stated as: given a set of items, each with a weight and a value, determine a subset of items to include in a knapsack such that the total weight is less than or equal to a given limit and the total value is as large as possible.
Now, let's see how the 0/1 Knapsack problem holds up against the three properties that define a matroid:
The empty set is independent: Yes, an empty set of items does satisfy the constraints of the knapsack problem. The total weight and total value of an empty set are both 0, which are valid.
Hereditary property: This property is also satisfied in the knapsack problem. If we have a valid subset of items that do not exceed the weight limit of the knapsack, any subset of these items will also be a valid set (i.e., it will also not exceed the weight limit).
Exchange property: This is where the 0/1 knapsack problem fails to be a matroid. The exchange property requires that if you have two independent sets and one is larger than the other, then there must be some element in the larger set that you can add to the smaller one to still keep it independent. But in the knapsack problem, adding an item from a larger set to a smaller one may cause the total weight to exceed the weight limit of the knapsack, violating the constraints of the problem.
In conclusion, while the 0/1 Knapsack problem satisfies the first two properties of matroids, it does not satisfy the exchange property. Therefore, the 0/1 Knapsack problem cannot be formulated as a matroid. This is why we cannot use a greedy algorithm to solve the 0/1 Knapsack problem and instead use techniques such as dynamic programming."
I then asked it to compare it to a problem where your solution would in fact work:
What about the fractional knapsack problem:
"The Fractional Knapsack problem is a variation of the Knapsack problem where you're allowed to take fractions of items instead of being limited to taking the whole item or leaving it (as in the 0/1 Knapsack problem).
Now let's analyze it with the three matroid properties:
The empty set is independent: An empty set of items is indeed a valid solution for the Fractional Knapsack problem, as it doesn't violate the weight constraint. Therefore, this property is satisfied.
Hereditary property: If you have a valid set of items for the Fractional Knapsack problem (i.e., the total weight doesn't exceed the maximum weight), any subset of these items (or any fraction of them) is also valid. Hence, this property is satisfied.
Exchange property: This property is where the Fractional Knapsack problem differs fundamentally from the 0/1 Knapsack problem. In the Fractional Knapsack problem, if you have an independent set that is smaller than another independent set, you can always take enough of an item from the larger set and add it to the smaller set to make it larger without violating the weight constraint. Therefore, the exchange property is satisfied.
So, the Fractional Knapsack problem satisfies all the properties of a matroid. Consequently, it can be solved using a greedy algorithm, which in this case involves picking items with the highest value-to-weight ratio first until the knapsack is full."
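For comparison with the 0/1 case in the video, here is a minimal sketch of that greedy strategy for the fractional variant (my own Java illustration with invented names, not code from the video or the repo), on a tiny made-up input:

import java.util.Arrays;
import java.util.Comparator;

public class FractionalKnapsackSketch {
  // Best value when fractions of items may be taken: greedily consume items
  // in decreasing value-to-weight ratio until the capacity is used up.
  static double fractionalKnapsack(int[] weights, int[] values, int capacity) {
    Integer[] order = new Integer[weights.length];
    for (int i = 0; i < order.length; i++) order[i] = i;
    Arrays.sort(order, Comparator.comparingDouble((Integer i) -> -(double) values[i] / weights[i]));

    double remaining = capacity, total = 0;
    for (int i : order) {
      if (remaining <= 0) break;
      double take = Math.min(weights[i], remaining); // amount of this item that still fits
      total += take * values[i] / (double) weights[i];
      remaining -= take;
    }
    return total;
  }

  public static void main(String[] args) {
    // Capacity 10 with items (value 2, weight 1) and (value 8, weight 10):
    // greedy takes all of the first item and 9/10 of the second, for 2 + 7.2 = 9.2.
    System.out.println(fractionalKnapsack(new int[] {1, 10}, new int[] {2, 8}, 10));
  }
}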
Yeah, that's the greedy algorithm, nice one, but we're trying to do the dynamic programming version here.
Great Fan of Your Video Series. You did a great job. Thanks a lottttttt!
You're awesome, bro; thank you so much.
I found this video very helpful, thank you
Amazing explanation! Thank you so much!
Great video as always!
your explanations are phenomenal. Thank you
Ty man. Finally makes total sense...
Thank you for the video sir. It is helpful!
Man you are a legend!! Thank you!!
Why do you make your arrays 1 based? It seems like you just add a lot of confusion and make more work for yourself.
Great explanation, thank you :)
Great job, sir!!! Though it's not what I'm searching for: how do I solve the knapsack problem using linear programming?
Is there a way around it? Or can you recommend a textbook?
5:10
Really confuses me, because how does the algorithm know whether the capacity is filled or not? Sure, if I take dp[i-1][j], I take the cell one row above and I know it respects the capacity as well, but at which point can the algorithm casually decide to include 2 items, or x items respectively, in the knapsack?
Like, in dp[2][4], we're talking about item 2, but for some reason we can fit both items in there because the weight limit is 4. Somewhere the algorithm kind of figured out that the remaining weight limit was reduced to 1 because item 1 was inserted, and because item 2 has weight 1, we can insert item 2 as well?
Wow what a problem. So intense.
Thanks a lot! love your videos
This was great. Thank you.
Great video as always thanks a lot!
It's somewhat clear, but it doesn't explain how to arrive at this solution in the first place. However, I am just beginning to learn DP, so perhaps it's only a misunderstanding on my part.
very helpful, thanks a lot
the diagonal arrow was confusing.
Hi, I want to ask: when you insert the value for the next cell (blue arrow), why does it point to the previous item (the diagonal arrow), when you said that for the next cell we use the value of the current item?
This is sooooooo good.
Splendid 😍
How do you make such awesome videos? What tools do you use?
I don't get this diagonal arrow.
What is the formula to know which element it should point to?
The diagonal arrow represents including the current element in the knapsack (dp[i-1][sz-w] + v in the code), and the arrow downwards represents keeping the best optimal value from before (dp[i-1][sz] in the code). You select the cell which has the greater value, and if they're the same value it doesn't matter.
@@WilliamFiset-videos thanks. Can you please do more videos on suffix and lcp arrays? Efficient algorithms to construct them, use cases and so forth. Thanks for the awesome content.
If we include the item, we still look at the row above, but shifted over by the weight of the current item, because that cell is the best possible value for the capacity that remains once we reserve room for this item.
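In case it helps to see those two arrows as code, here is a minimal bottom-up sketch of that recurrence in Java (variable names are my own, not taken from the video): dp[i - 1][sz] is the downward arrow and dp[i - 1][sz - w] + v is the diagonal one, exactly as in William's reply above.

public class Knapsack01Sketch {
  // dp[i][sz] = best value achievable using only the first i items with a knapsack of size sz.
  static int knapsack01(int capacity, int[] weights, int[] values) {
    int n = weights.length;
    int[][] dp = new int[n + 1][capacity + 1]; // row 0 and column 0 stay 0 (no items / no room)
    for (int i = 1; i <= n; i++) {
      int w = weights[i - 1], v = values[i - 1]; // item i lives at array index i - 1
      for (int sz = 0; sz <= capacity; sz++) {
        dp[i][sz] = dp[i - 1][sz]; // downward arrow: skip item i, keep the previous best
        if (sz >= w) // diagonal arrow: include item i, leaving sz - w capacity for items 1..i-1
          dp[i][sz] = Math.max(dp[i][sz], dp[i - 1][sz - w] + v);
      }
    }
    return dp[n][capacity];
  }

  public static void main(String[] args) {
    // Tiny made-up example: capacity 4, items (weight 3, value 4) and (weight 1, value 6).
    System.out.println(knapsack01(4, new int[] {3, 1}, new int[] {4, 6})); // prints 10
  }
}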
Why is the first blue arrow pointed at (0,0)?
At the end, in your program, why itemIndex = i-1 instead of i?
According to you tube, I gave the 670th like :)
Knapsack approach so each time the profit gets closer
Thank you so much!
Why in the world do you not just give, right at the beginning, a layman's-terms formula for finding each cell's max? This video did nothing for me.
very helpful!
How is it correct for v4=5, w4=4? I think you should put the value 5 instead of 6. Please explain.
Thank you!!!
Try first calculating the "Wh/kg" value, then sort, then fill your knapsack in that order; it's always the optimal selection.
If you can't see that: it's always the optimal use of the capacity being optimized if you pick by the ratio order; if something doesn't fit, just pick the next one, until you can't fit any more in the knapsack.
How do you prove your algorithm is good, optimal, or doing anything?
Simple counter example:
Capacity =10
Items = [
[value=2, weight=1],
[value=8, weight=10]
]
Updating the weight to match the capacity if it's the last item that would fit also doesn't work in general.
@@alexdelacalzada2969 Counter to what? You must be dreaming. Optimal in what sense, by your rules or mine?
@@alexdelacalzada2969 In the sense of the optimal filling order (most value per pick) it's always the most optimal, even if you don't fill the knapsack completely.
Shouldn't row 1, column 7 and row 1, column 6 be 4, because we can fit 2 elements (each with weight 3) into a knapsack that has a max weight of 7 or 6?
Hi Varun, in the 0/1 Knapsack problem each item only exists once, so you cannot choose an item multiple times. I believe there's a variation of the Knapsack problem called the 0/∞ Knapsack problem which allows selecting an item multiple times.
It appears I implemented this a while ago, the problem is called the Unbounded Knapsack problem. Instead of looking diagonally to include a new element, you would look across to the left (in the same row) to select the same item multiple times. Here's an impl:
github.com/williamfiset/Algorithms/blob/master/src/main/java/com/williamfiset/algorithms/dp/KnapsackUnbounded.java
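If it helps before opening the repo link, here is roughly what "looking across to the left in the same row" means, as a minimal Java sketch (my own simplified version with invented names, not the code at that link):

public class UnboundedKnapsackSketch {
  // dp[i][sz] = best value using items 1..i, where each item may be taken any number of times.
  static int unboundedKnapsack(int capacity, int[] weights, int[] values) {
    int n = weights.length;
    int[][] dp = new int[n + 1][capacity + 1];
    for (int i = 1; i <= n; i++) {
      int w = weights[i - 1], v = values[i - 1];
      for (int sz = 0; sz <= capacity; sz++) {
        dp[i][sz] = dp[i - 1][sz]; // skip item i entirely
        if (sz >= w) // look left in the SAME row, so item i may be reused
          dp[i][sz] = Math.max(dp[i][sz], dp[i][sz - w] + v);
      }
    }
    return dp[n][capacity];
  }

  public static void main(String[] args) {
    // A single item of weight 3 and value 4 with capacity 7 can be taken twice, giving 8.
    System.out.println(unboundedKnapsack(7, new int[] {3}, new int[] {4}));
  }
}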
Thanks!
Hello sir, I saw the code for the unbounded knapsack in your repository; if you have some free time, please make a video about it for us.
You have to help a thief steal as many gold coins as possible from a gold mine. There he sees N gold boxes; each gold box consists of Ai plates, and each plate consists of Bi gold coins. Your task is to print the maximum number of gold coins the thief can steal if he can take at most T plates. This problem is solved by this method, right?
Perfect explanation as to how to find the highest value and also what items to include! One request though, can you please post videos with a white background? It becomes difficult for me to watch these videos due to contrast sensitivity. Thank you!
Use a color-inverting extension for your comfort, but don't ask him to change the background. These videos are just too perfect for any changes.
Thank you
thank
Please add more DP videos.
You should be teaching at Coursera. You provide a much better explanation. Thanks!
You didn't say how to calculate where the blue arrow points to.