02:00 priority queue
06:15 heap
09:34 heap as a tree
10:20 max and min heaps
16:40 heap operations
19:55 max_heapify example
26:06 max_heapify complexity
30:05 build max-heap
35:05 build_max_heap complexity
48:00 heap-sort
thanks
"The cutest little data structure ever invented, the heap."
- Prof Srini Devadas @0:22
this guy is the fucking boss. I'd never skip a lecture with a professor like that.
even an 8:30 lecture? :P
GreyFace. Dude, that's the best - you don't even have to miss work to go to class.
shoutout to the camera guy
I'm a student of Computer Science studying Data Structures this semester... I want to say that this professor is really a master at what he is doing! Thanks MIT, for this free video recording. It was really very helpful. +1!
He is an Indian
What are you doing now?
Check out Abdul Bari's algorithms lectures.
I hope they will be helpful.
@@laraibanwar1618 You can't compare Abdul Bari's lectures and MIT OCW. Abdul Bari's lectures are helpful if your exam is near and you are thinking about getting marks, but OCW is helpful if you want to go deep.
@@skittles6486 +inf
Seriously, OCW provides much more in-depth knowledge.
For those who are confused by the n/4, n/8, n/16, ... equation used to describe the complexity of build-heap, here's why:
He's actually going bottom-up, where level 0 is the bottommost level with n/2 nodes (n being the total number of nodes in the tree). This is contrary to the usual depiction of a tree, where level 0 is the root with 1 node. So, thinking about it generally: as the work per node increases at each level, the number of nodes at each level halves. In the mathematical equation you have one factor that grows while the other shrinks to compensate. That's why the complexity is O(n).
In his equation he starts one level up from the bottom, because there is no work at the bottommost level of the tree - those nodes have no children. The bottom level has n/2 nodes, the level above it has n/4 nodes, and so on. So he starts the equation at n/4, since you do at most one swap per node at that level, 2 swaps per node at the level above it, 3 swaps per node at the level above that, and so on. Although the number of swaps per node goes up at every level, the number of nodes goes down just as fast to compensate. If you're still confused, take pen and paper and draw it out - it will be apparent. Hope that helps.
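To restate that argument as a worked sum (my own restatement of the standard bound, not a quote from the lecture; h counts levels above the leaves, and there are at most n/2^(h+1) nodes at height h):

\sum_{h=1}^{\lg n} \frac{n}{2^{h+1}} \cdot O(h) \;=\; O\!\Big(n \sum_{h \ge 1} \frac{h}{2^{h+1}}\Big) \;=\; O(n), \qquad \text{since } \sum_{h \ge 1} \frac{h}{2^{h}} = 2.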
You're amazing.
Thank You!
Thank you, kind stranger!
But then, in that case, won't the complexity of all tree algorithms reduce to O(n) instead of O(n log n)? I am confused. We always consider the complexity of operations like insert or search in a tree to be log n, and n log n for n items.
@@MsRP2Bang You've got that wrong. It's not O(n log n).
BST:
Operation   Average     Worst case
Space       O(n)        O(n)
Search      O(log n)    O(n)
Insert      O(log n)    O(n)
Delete      O(log n)    O(n)
BINARY HEAP:
Operation   Average     Worst case
Space       O(n)        O(n)
Search      O(n)        O(n)
Insert      O(1)        O(log n)
Delete      O(log n)    O(log n)
Peek        O(1)        O(1)
Couldn't understand where that last term in the summation came from. (k + 1) / 2^k
25:28 - it should be "Exchange A[4] with A[9]", since the keys 4 and 8 were swapped, not the keys 4 and 2.
Welcome to MIT, where even our chalkboards are better than everyone else's. Seriously, though, I've never seen a chalkboard so clean and clear.
This lecture was literally uploaded 8 years back and shot almost a decade back, yet it feels so timelessly new. I never thought I'd ever like this course till this lecture series. Prof Srini, you are a legitimate BAWSE. Respect++
I haven't had a single CS teacher that speaks English well enough to even remotely explain concepts. I should be paying 40K to RUclips a year instead.
This hits hard
I think that's Indian accent.
You mean 'only' accent
@@Naton I don't know. I never met one
This comment is just too relatable
What a great era. I, an ordinary person, can study algorithms through lectures offered by top-notch universities for free.
I never gave a donation to my own university because the quality of the courses it offered simply sucked. But after watching only one lecture from this course, I decided to donate to MIT on a regular basis.
True. With this kind of teaching it deserves the title of a university; I don't know what the others are, but they claim to be 'universities'.
Did u ?😅
First time a youtube video was actually helpful. No questions remain. Explained everything in a very short time. Great lecture.
I think the fact that it is from MIT makes a lot of people think they are getting a better lecture than they would at their school... Not the case... He is teaching straight from the Cormen textbook, chapter 6, exact same examples... I think it is a great supplemental lecture... I only understood it after reading the textbook and the first lecture from my prof... I think the fact that we are hearing it for the second time helps a ton, and I thank MIT a lot for explaining the same topic in a different light. I am now happy to know that my school isn't cheating me on the curriculum, just far less competition.
+yingyangjedi I agree with this. I saw the lecture on merge sort, and he used the same examples for the insertion sort chapter that comes before.
yingyangjedi thanks for mentioning the book. Sometimes the lecturers forget to tell us the book they use, and they (the books) are usually more straightforward.
Try taking courses at my university. I was an exchange student in Canada and took abstract algebra there, taught by a native speaker, and I got a B+ - not a fantastic grade, but at least I got most of the material. Then, back at my university, during a quantum mechanics course the prof (a mainland China professor) was explaining something related to abstract algebra. I knew the subject he was trying to explain but just couldn't really follow his (messy) writing and explaining (or mumbling?). I think English should be taken into account when they select professors.
Thanks for pointing to the book
Haven't yet read Cormen, but what I feel is this is such an amazing lecture, where the Professor gives appropriate amount of time to each and every step of the algorithm and then beautifully explains the math behind all of the stuff. I love this :)
This is honestly by far the best video on max/min heaps on the internet. His explanation of building a max-heap was extremely simple and intuitive. The reason we start at i = n/2 and go down to 1 is that this ensures the max_heapify assumption (both subtrees are already max-heaps) is always true. Ingenious really.
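A minimal Python sketch of that idea (0-based arrays, so the loop runs from n//2 - 1 down to 0 instead of from n/2 down to 1; function names are mine, not the lecture's):

def max_heapify(a, i, heap_size):
    # assumes the subtrees rooted at the children of i are already max-heaps
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]  # swap with the larger child
        max_heapify(a, largest, heap_size)   # keep sifting down

def build_max_heap(a):
    # nodes n//2 .. n-1 are leaves, so every iteration sees two max-heap subtrees
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)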
Great and very passionate lecturer. But mind the error @25:35
"Exchange A[4] with A[8]" should be with A[9] instead as we are referring, between brackets, to indices of each node in the tree not to their values, so don't be confused.
+encryptionalgorithm I got it too, HAHA
+Z UU Good that you spotted it as well ;)
+encryptionalgorithm that's right... this also proves the students are half asleep in class :D
+encryptionalgorithm Thanks!..And nobody in the class cared to correct him. But a good lecture tho
+ziddy26 You are welcome! His way of teaching is so amazing that it makes me feel the students were "possessed" when listening to him, which I can understand :)
I wish I had such lectures and professors at my university. Everything is clear!
42:20 the last term should be 1 ((lg n - 1)c)
------
Total amount of work in the for loop:
n/4 (1c) + n/8 (2c) + n/16 (3c) + ... + 1 ((lg n - 1)c)
exactly
Nope. If n = 1, the last term should be 0; in your case it is negative, i.e. lg 1 - 1 = -1. Actually he just missed one last term, which is (k+2)/2^(k+1), after he defines n/4 = 2^k. You can verify with pen and paper.
Really helpful and inspiring lectures. I am so lucky to be born in this era. Online lectures are a brilliant idea!
In MAX_HEAPIFY operation (time 25:35), the step after calling MAX_HEAPIFY(A,4) should be like Exchange A[4] with A[9].
@44:05 the sum of this series is 4. Rearrange the sum as an upside-down pyramid: the first layer is (1/2^0 + 1/2^1 + ...), the second layer (1/2^1 + 1/2^2 + ...), so 1/2^0 is summed once, 1/2^1 twice, etc. Then the sum of the first layer is 2, the second layer (1/2)*2, etc. Finally, the total is 2 * (1/2^0 + 1/2^1 + ...), which is 4. We can do the rearrangement because this series is absolutely convergent, as you can verify by the ratio test.
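The same bound written out as algebra (my restatement, splitting each term (i+1)/2^i into 1/2^i + i/2^i):

S = \sum_{i \ge 0} \frac{i+1}{2^{i}} = \sum_{i \ge 0} \frac{1}{2^{i}} + \sum_{i \ge 1} \frac{i}{2^{i}} = 2 + \frac{1}{2}\sum_{j \ge 0} \frac{j+1}{2^{j}} = 2 + \frac{S}{2} \;\Rightarrow\; S = 4.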
For the less geometrically inclined: it is the Maclaurin series of 1/(1-x)^2 evaluated at x = 1/2. The lecture treats it as the geometric series, i.e. the Maclaurin series of 1/(1-x), but it is actually the derivative of that.
What a Excellent Source of Crystal Clear Knowledge on each and every topics . Hats off to MIT profs ......
their professor can actually explain things, WOW
Gratitude to the mit due to them i can study these lectures for free.
I like this professor.
I prefer the other guy. It's mostly just a matter of accent though XD
One of the best CS open course in the world
I hope this comment is helpful for people who are taking this for the first time. I'm reviewing this course since I took a similar one on my uni 6 years ago and back then I hated it. Now, I really like it, but that's because I vaguely know what to expect.
If you feel a bit shaky on understanding sums (like me :D), then I think it's a good rule of thumb to understand the following ideas. I understood these ideas beforehand (by chance) and they helped me immediately understand everything he did, regarding the sums.
IDEA 1
SIGMA(n^2) is between (1/3)n^3 and n^3. This means that most, if not all, diverging sums of the form SIGMA(n^2) have O(n^3) as their answer (see the worked inequality after this comment).
Explanation of the lower bound: a sum is basically a blocky form of integration, so you can use calculus to get the lower bound. Integrating n^2 gives (1/3)n^3. Integration works on continuous numbers, not discrete numbers, so the actual answer is a bit different. Here is how: since the actual answer always involves extra additive terms (of the form an^3 + bn^2 + cn) and we don't have those, the lower bound is (1/3)n^3.
Explanation of the upper bound: the upper bound is n^3, because if you take the sum of n^2 and always use n instead of summing i from 1 to n, then with n = 3 (for example) you get 3^2 + 3^2 + 3^2 = 9 + 9 + 9 = 27 = 3^3.
Source: I thought a lot about sums in the shower, because I realized they are quite key to complexity analysis and they aren't really clear cut. I also read Knuth's book about sums (chapter 2, p. 21 to 66) and he showed to me how discrete sums are basically a variant of integration. His book gave me the idea that using normal ideas about calculus are an approximation for the much more complex calculus he presented in his book -- the calculus of finite differences.
The same trick works for SIGMA(n).
IDEA 2
Another thing one needs to understand is series of the form 1/2 + 1/4 + 1/8 + ... + 1/n; such a sum will always be the first term plus the first term, so in this case 1/2 + 1/2. See a numberphile video on it here: ruclips.net/video/u7Z9UnWOJNY/видео.html
---
Another thing I noticed is that he has two modes of analyzing time complexity. (1) he goes line by line and (2) he does some form of summation by looking at the data structure. He did this in lecture 3 as well, but then visually. I was able to immediately get to the answer of O(n) because I visualized the operations done on the data structure, as opposed to analyzing line by line. Because of this, I have the following strategy to solve these big O questions:
1. See if I can solve the question by visualizing the data structure and what operations are done on it and sum it.
2. If I'm not able to, then count line by line, with the potential of me being wrong.
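For IDEA 1, here is the worked inequality referenced above (my restatement, not from the lecture):

\int_{0}^{n} x^{2}\,dx \;\le\; \sum_{i=1}^{n} i^{2} \;\le\; n \cdot n^{2} \quad\Longrightarrow\quad \frac{n^{3}}{3} \;\le\; \sum_{i=1}^{n} i^{2} \;\le\; n^{3}, \qquad \text{i.e. } \sum_{i=1}^{n} i^{2} = \Theta(n^{3}).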
Thanks a lot. Though I did not understand a word about your first idea, your second idea was very helpful for me.
Why has lg(n) been replaced with (k+1)/(2^k) [43:21]? If n/4 is 2^k, then shouldn't lg(n) be k+2?
***** Correct me if I'm wrong, but it's (n+1)/4 = 2^k => lg(n+1) = k + 2, lg(n+1) - 1 = k + 2 - 1 => there are lg(n+1) - 1 levels = k+1, and since each iteration results in division by 2^k we get (k+1)/2^k. I found that the above statements had too many typos, such as '-' written as '+', etc. Please do rectify if I got it all wrong.
Thanks for the explanation though!!
+Pawel Englert As per what he has taken, n/4 = 2^k => n = 2^(k+2) => lg(n) = k+2. So you are right. I got the same. It must be (k+2)/(2^k).
Got the same. it must be k+2.
Yes, sat there staring at that for like two minutes. Definitely k+2.
Hey, I think you guys are right with the math for lg(n) => k+2 where n/4 = 2^k
BUT, I think the lecturer meant 1*(lg(n) - 1)*c for the last term (since there are only lg(n) - 1 levels *below* the root node), which implies k+1.
Hope that helps, let me know if I'm missing anything.
32:40 that kid should drop out of MIT and become an orchestra conductor
You're not as funny as you think.
Anurag Baundwal You're too serious.
I'm impressed by the chalks and blackboards.
Yes, It's so real, raw and deep. I like it too!
25:40 - shouldn't A[4] be exchanged with A[9]? Or should it be exchanged with A[8]??
A[9] , it is a typo
THAT is how you teach!
I need to say one thing: in real code most languages index arrays from 0. If you keep the lecture's 1-based formulas on a 0-based array, parent(i) breaks: i/2 should be replaced with (i-1)/2, the left child becomes 2*i + 1, and the right child becomes 2*i + 2.
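A tiny sketch of the two conventions side by side (hypothetical helper names, just for illustration):

# 1-based indexing, as in the lecture and CLRS
def parent_1(i): return i // 2
def left_1(i):   return 2 * i
def right_1(i):  return 2 * i + 1

# 0-based indexing, as in most languages
def parent_0(i): return (i - 1) // 2
def left_0(i):   return 2 * i + 1
def right_0(i):  return 2 * i + 2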
Some points I would like to mention if you guys are coding heap sort (see the sketch below):
1. While building the max-heap from the unordered array, start from (n/2) - 1 (because the index starts from 0) down to 0.
2. Make sure you reduce the heap size after swapping the first and last elements.
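A minimal 0-based heap-sort sketch showing both points (assuming the standard algorithm from the lecture; function names are my own):

def heap_sort(a):
    n = len(a)

    def sift_down(i, heap_size):
        # max_heapify, 0-based and iterative
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < heap_size and a[left] > a[largest]:
                largest = left
            if right < heap_size and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    # point 1: build the max-heap starting at n//2 - 1, going down to 0
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)

    # point 2: after swapping the first and last elements, shrink the heap
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)  # heap size is now `end`
    return a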
@@gabrielkennethmarinas6244 sir, that was 2 years ago; thank you for helping us out.
32:47 if you watch it at 2x speed it looks like that student is casting a spell on the professor lol
😂😂😂😂
"Abra cadabra, you must give me a pillow"
lol
Harry potter though 😂 😂
MIT is secretly Hogwarts lol 😂
In case anyone is wondering,@25:30 he should've written "Exchange A[4] with A[9]"
The professor's striped shirts' kung fu is stronger than my graphics card's kung fu.
XD
Aliasing
It's a kung fu that stood the tests of time...
Thank you, Prof. Srini Devadas and MIT!
the most excellent explanation of heap I have ever seen.
At 23:43,
child nodes 4 and 5 have values 14 and 7, while parent node 2 has value 4.
Now the important note that the professor didn't spell out:
suppose the child values were 7 and 14 instead (i.e. the left and right nodes interchanged); then you would swap the parent with the right child, because it is the larger of the two children.
He did mention that we take the max of the child nodes and swap it with the parent node, so this would have been redundant.
This teacher is so good at explaining concepts .
"The Pseudocode is in the notes". Me 10 years late halfway across the world checking the notes I've been taking
you can find the notes and other material on the site mentioned in the description.
Awesomely explained. I would have 100% attendance with lectures like these.
How curious - explore it for yourselves ✌😂
The beauty of this lecture is that all the examples are taken from CLRS...
At 25:40 A[4] is exchanged with A[9], right? It is a 1-based array as shown in diagram.
yep, that's right - that was an omission
30:58 "I'm gonna write pseudocode for build_max_heap, 'cause it's 2 lines of code. That's about the limit of a program I can understand." LOL
I'm so grateful MIT, SO GRATEFUL! :') Thank you so much for this!
It's so cool that these top schools release courses like this one online free of charge. I may not get a chance to go to MIT
Watch 30:00-47:00 if you want to know how build max heap takes O(n) time instead of O(log n)
To prove the expression is bounded: let S = 1 + 2/2 + 3/(2^2) + ... + (k+1)/(2^k). Then S = 1 + (1+1)/2 + (1+2)/(2^2) + ... + (1+k)/(2^k) = [1 + 1/2 + 1/(2^2) + ... + 1/(2^k)] + (1/2)S as k --> positive infinity. So S = 2 + (1/2)S, giving S = 4 as k --> positive infinity. S is bounded by 4.
43:10
Why is the last term, 1*(lg n * c), simplified to (k+1) instead of (k+2)?
n/4 = 2^k -> n = 2^(k+2) -> lgn = k+2?
Actually he just missed one last term which is (k+2)/2^(k+1). You can have a pen to verify. The denominator also needs to increase by 1.
Thanks M.I.T and professor for providing this for free.
Does anyone notice that the teacher made a mistake at 25:36? Exchange A[4] with A[9], not A[8].
Should that be the exchange of A[4] with A[9] instead of A[8] at 25:30?
About half way through the video I realized that this course uses the same book as my course, making this lecture series all the more helpful!! Thank you so much!!!
Which book?
At minute 45:00, explaining the time complexity of the max_heapify cost analysis, shouldn't the last term of the series be (k+2)/2^k instead of (k+1)/2^k, assuming n/4 = 2^k?!
actually he just missed one last term which is (k+2)/2^(k+1). You can have a pen to verify.
Great lecture, but at 42:30 - 45:00 it's bounded by the constant 4, not 3. It's not really important, but I noticed.
Thank you MIT and a big thank you to the professor.
Awesome Lecture, Loved it!!
At 40:30 the professor writes n/4 nodes at level 1 - what is level 1?
Is it counted from the bottom of the heap, or the top of the heap?
I think he's talking about the bottom of the heap, and you work your way up to the root i = 1
a bit confusing, i agree.
i believe he means bottom-up.
43:21 the last term should be (k+2)/(2^k), right? As lg n = k+2 (given n/4 = 2^k, i.e. lg(n/4) = k, i.e. lg n - lg 4 = k, which implies lg n = k+2).
Actually he just missed one last term which is (k+2)/2^(k+1) after he defines n/4=2^k. You can have a pen to verify.
I agree with you, but then it doesn't work with the summation formula.
Heapify complexity analysis starts at 37:20
25:40 , A[4] should be exchanged with A[9] , not A[8].
45:00 , he just missed one last term which is (k+2)/2^(k+1) after he defines n/4=2^k.
I think there is a small mistake at 25:36!
Instead of A[8] it should be A[9]!
But it was a great lecture!
+Rayhan Mahmud Yep it's a mistake
But great work by MIT ... :)
I noticed he is mixing print with cursive on the board and now I can't stop looking for it. It's amazingly easy to read his writing, but I think it is interesting that I didn't notice it until lecture 4.
God bless these free lectures!
These are really helping. Definitely better than my algorithms and programming lecturer.
There are n/2 nodes at level 0 - the leaves.
On the 1st level there are half as many nodes as on the level below (level 0), i.e. 1/2 * n/2 = n/4.
@40:18 why did he say "n/4 nodes at level 1"?? Wouldn't it be "n/2 nodes at level 1", since we are starting from node n/2 in the loop?? Or does it not matter either way??
He's going in a bottom-up direction so I think he really meant "n/4 nodes are 1 level above the bottom"
Excellent lecturer, I love algorithms and data structures! :)
Why does MIT start indices at 1 lol?
That's so a binary tree can be constructed using the following:
Left child: i * 2
Right child: (i * 2) + 1
Parent: Math.floor(i / 2)
For example, from this array:
// 0 1 2 3 4 5 6 7 8 9
[null, 'A', 'C', 'B', 'D', 'F', 'G', 'J', 'H', 'K']
We can derive the following tree:
          (A)
         /   \
      (C)     (B)
     /   \   /   \
   (D)  (F) (G)  (J)
   /  \
 (H)  (K)
Or they just love Pascal
That's because they follow the book: "Introduction to Algorithms" by Cormen. The book uses indices that start from 1
Dudes... only the first reply gave the right answer. If the root index is 0 and you keep the 1-based formulas, the indices of its children are 0 * 2 = 0 and 0 * 2 + 1 = 1. Yeah, there occurs a corner case.
@@euiyoungchung8492 What he asked was why they use indices that start from 1 (not just this video). If you look at previous videos, Insertion sort, Merge Sort..etc, all use indices that start from 1. The reason is simple - They follow the book "Introduction to Algorithms" (as mentioned in the course's website), which follows this convention
At 25:45 there is one mistake: A[4] must be exchanged with A[9], not with A[8]... please make sure of this.
Good to see one of the many Indians there at MIT.
He is teaching foreign students when Indian students need him. He does no good for Indians who are not at MIT. Thanks to OpenCourseWare for making this video available.
Can someone help me understand how he came up with the substitution n/4 = 2^k such that lg(n) = k+1? I mean, if I take n/4 = 2^k => n = 2^(k+2), then lg(n) = lg[2^(k+2)] = (k+2)*lg 2 = k+2, which is not equal to k+1. Look at 43:18. Thanks in advance!
I have the same doubt
thank you for replying and maintaining the channel. I learned a lot.
32:36 What is meant by backseat property of leaves ?
11:05 "Key of the node" means the actual value in the array?
Finally, I understood heaps and heap sort. :D
He came to class with just a couple of sheets as teaching material, but carried a whole bag of cushions XD
25:36 exchange a[4] with a[9] not a[8]
My prof only explains from slides. I prefer this style of teaching - board and chalk. He explains and writes at the same time, so much easier to understand!!!
Only one or two of them are good; I prefer MIT lectures over IIT ones any day. They're to the point and don't waste time circling around the topic and taking forever to get to the actual point, which is the case with most IIT professors.
Could anybody please explain the line the professor has used at 7:41 that starts from, " And we want ... " ? Thanks in advance.
Thanks MIT for this lecture.
Great lecture @MIT OpenCourseWare Lecture 4
One question:
At the end of the lecture, after removing the max from the heap, isn't step 1 necessary again to make it a max-heap before the next removal of A[1], which should be the max?
Heap Sort @30:07
7:20, Sorry, I'd say "Heap is an array visualized as a nearly full binary tree"; the concept the professor used, "complete binary tree", is by convention actually "full binary tree". Except for that, wonderful course!
Because a heap is already a complete binary tree, just not a full binary tree. So we should drop "nearly" from the professor's phrase "nearly complete binary tree".
What does the max_heapify precondition at 20:20 mean? ... I did not understand the condition... someone help
At 7:00, who numbers an array from slot 1???
Shouldn't the array have slots 0 to 9, and NOT 1 to 10
Because this algorithm doesn't work if the first slot is 0: the root's children would then be 0*2 = 0 and 0*2+1 = 1, which is incorrect.
Marko prološčić
It will work from 0 with: left = 2i + 1, right = 2i + 2 - there are 2 ways.
MATLAB uses 1-indexed arrays; it's a matter of convention and of no practical importance.
Marko prološčić
I see it now Marko,
With the root at index 0 use left = 2*index+1 and right = 2*index+2
With the root at index 1 use left = 2*index and right = 2*index+1
But I think I was more commenting on the strangeness of seeing a 10 slot array indexed from 1-10 instead of 0-9.
I can't believe I ended up watching the whole video....lol....so clear
Prof. resembles Rahul Dravid (Indian Cricketer).... Same smile and laughter too. Two greats!.
What are those cushion things that he hands out to students that participate? Also, excellent lecture. MIT's professors seem to be about 10000x better than mine. Really appreciate MIT putting this up. Thank you!
Thanks MIT for this great series of lectures! Really appreciated.
MAXHeapify is at 22:13
A 52+ minute video for heap sort!!!!
#mitocw should have released it module-wise.
When using a max-heap for sorting, instead of just plucking off the max element and then re-heapifying, why not grab the larger-valued child too, since those are the 2 largest in the max-heap? Only one comparison is needed to get the correct sorted order for those 2. Then those elements can be deleted from the heap before re-heapifying. I wonder if that would have any impact on execution speed.
No wonder they are the best - what a high-quality class.
It gives me goosebumps... whenever I see Indians at such places.
Nobody explains max_heapify clearly...
I couldn't get the max_heapify algorithm even here.
24:39. Take a look: what if instead of "8" there was "20"? Even after all of the steps he described, the subheap rooted at index 2 would violate the max-heap property. So the question is - how do we fix this?
he's assuming the two subtrees are already max heaps
Heaps are so elegant.