*My takeaways:*
1. Prerequisites for MIT 6.0002 2:16
2. What is a computation model 4:17
3. Optimization models 5:47
- Knapsack problem 8:04
- Solutions of knapsack problem: brute force algorithm 16:18, greedy algorithm 19:38 and problem with greedy algorithm 37:05
Thanks a lot, it really helped me.
@@vegitoblue21 nice GH Z 👍
GH Z you’re welcome
W, need more comments like these
Professor Guttag gives simple, easily understandable explanations of otherwise pretty complex optimization problems (especially discrete optimization). It is so nice that MIT is making these lectures public.
One of the best things about the age we live in is that we all have FREE access to amazing lectures like these from MIT, no matter where we are
agreed lol.
Especially during the pandemic and lockdown.
And we recognize that watching videos of lectures is meaningless for most people.
we know
And one of the worst things about the age we live in is that we have to spend 8-9 hours a day in front of a computer screen wasting our lives on menial corporate tasks instead of watching lectures like these and applying what we learned from them to do something really meaningful.
For anyone interested, this course starts in March 2021 on edX. It's free, with an optional certificate for $75.
Personal Notes.
1. The key function maps elements (items) to numbers; it tells us what we mean by "best". The professor wants one algorithm he can reuse independently of any particular definition of best.
2. lambda creates an anonymous function (great for one-liners): it takes a list of parameters and evaluates a single expression (lambda <params>: <expression>).
3. Greedy algorithms don't guarantee an optimal solution. Different greedy metrics tested in the lecture: greedy by value (take the biggest value first), greedy by cost (take the cheapest items first, in hopes of fitting as many items as possible), and greedy by density (take the highest value per cost first). See the sketch below.
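A minimal Python sketch of this greedy scheme (my own reconstruction, not the lecture's exact code; it assumes items expose getValue() and getCost() methods, as the lecture's Food class does):

```python
def greedy(items, max_cost, key_function):
    """Greedy 0/1 knapsack: take items in key_function order while they fit."""
    items_copy = sorted(items, key=key_function, reverse=True)  # O(n log n)
    result, total_value, total_cost = [], 0.0, 0.0
    for item in items_copy:                                     # O(n)
        if total_cost + item.getCost() <= max_cost:
            result.append(item)
            total_cost += item.getCost()
            total_value += item.getValue()
    return result, total_value

# The three metrics from point 3, written as lambdas (point 2):
by_value   = lambda item: item.getValue()                    # biggest value first
by_cost    = lambda item: 1 / item.getCost()                 # cheapest first
by_density = lambda item: item.getValue() / item.getCost()   # best value per cost
```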
I am amazed that these courses are freely available. Thank you, MIT!
I'm working on an MS in data science, and man do I wish I had this guy. My professors over complicate everything.
This course is gold. This quality does not exist anywhere else. I read the book, watched all the videos, and solved the programming assignments. Thanks MIT and Professor Guttag!
You can find assignment solutions for 6.0001 and 6.0002 on my github account: github.com/emtsdmr
Hey, do we get a certificate on completion? Just curious.
Hey I'm having hard time completing the last problem set. Can you please help me?
thank you so much mit, I am a colombian student and without you I wouldn't be able to take this kind of courses
Imagination expansion is the single most valuable skill to learn, one that can assist further learning in the future. This imagination comes in forms like the mind palace, a.k.a. the art of memory (see "Learning How to Learn")... This lecture made me think about why I became interested in machine learning, and it made the path seem less intimidating, which makes me glad that I found this lecture playlist and YouTube channel.
Just finished 6.0001. If you want to go through 6.0002 with me, I'm starting today!
How does optimization work?
6:08 Example: a route by car from A to B.
The objective is to minimize travel time, so the objective function = sum of minutes spent getting from A to B.
On top of that we layer a set of constraints (empty by default).
The fastest way to Boston might be by plane, but that's impossible on a $100 budget.
Time constraint: must arrive before 5 pm.
The bus costs only $15 but can't arrive before 5, so we infer it's better to drive.
Constraints help eliminate some candidate solutions; this asymmetry is why objective functions and constraints are handled differently.
Knapsack: a burglar with limited space and more items available than he can take.
11:00 Continuous (fractional) knapsack problems are solved nicely by a greedy algorithm that takes the best item first. Not so for the 0/1 knapsack, where each decision affects the later decisions.
You can end up with different solutions (e.g., 1300 vs. 1450); greedy does not guarantee the best answer.
Formalizing, assume n items:
0. a total maximum weight w
1. a vector L of the available items
2. a vector V where V[i] = 1 if item i is taken
16:30 Brute-force algorithm: generate all subsets of the items (the power set) and keep the best one that fits the weight budget (sketch after these notes).
23:31 A key function is used to sort the items (based on some criterion).
Take an item, subtract its calories from the remaining budget.
If an item would put us over budget, don't stop ("wait and see"): check the remaining items, since a cheaper one may still fit. 🤔
Algorithm efficiency? Python's built-in sort is timsort; the professor says it's n log n like quicksort/mergesort (n = len(items)).
n log n for the sort + n for the loop = O(n log n) overall, fine even for large inputs (1M items).
Greedy by cost means taking the cheapest items first.
We get different answers with different greedy metrics: only locally optimal choices are made at each point, so we can get stuck at a local optimum that is not the best one.
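A self-contained sketch of the 16:30 brute-force idea from these notes (my own reconstruction, with items simplified to (name, value, cost) tuples rather than the lecture's Food objects):

```python
from itertools import combinations

def brute_force_knapsack(items, max_cost):
    """Enumerate the power set and keep the best subset within budget.

    items: list of (name, value, cost) tuples. There are 2**n subsets,
    so this is only viable for small n -- which is the lecture's point.
    """
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):      # all subsets of size r
            cost = sum(c for _, _, c in subset)
            value = sum(v for _, v, _ in subset)
            if cost <= max_cost and value > best_value:
                best_value, best_subset = value, subset
    return best_subset, best_value
```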
Thank You MIT
Same bro lmao, I apologize for my broke ass
yeah, same goal
Now you have money, so donate already
donate bro
Now it's time
A good example of the global vs. local optimum:
Problem: consider vals = {1/2, 1/3, 1/4}, and then find the subset of values in vals such that the sum of the values in this subset is as large as possible, but also constrained to be at most 5/8.
A greedy algorithm using 'next largest' as its metric takes 1/2 first, and then neither 1/3 nor 1/4 fits within the 5/8 budget, so it returns 1/2. However, if you're not confined to that greedy choice, you can see that 1/3 + 1/4 = 7/12, which is less than 5/8 but better than our greedy result of 1/2. The point is that greedy algorithms give you different results to the knapsack problem depending on what your metric is (our greedy metric here was 'next largest', but we could have chosen something else; in fact, 'next smallest' would have found the global optimum!). "Local optimum" in this context refers to the optimal solution *for a given metric* ('next largest', which yielded our result of 1/2), which, as mentioned, isn't necessarily the same as the best possible global solution (our result of 7/12) to a knapsack (optimisation) problem.
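You can verify this with exact arithmetic; a quick sketch using Python's fractions module (my own check, not from the lecture):

```python
from fractions import Fraction
from itertools import combinations

vals = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]
cap = Fraction(5, 8)

# Greedy with the 'next largest' metric: takes 1/2, then nothing else fits.
chosen = []
for v in sorted(vals, reverse=True):
    if sum(chosen) + v <= cap:
        chosen.append(v)
print(sum(chosen))  # 1/2

# Global optimum over all subsets within the budget: 1/3 + 1/4 = 7/12.
best = max((s for r in range(len(vals) + 1)
            for s in combinations(vals, r) if sum(s) <= cap), key=sum)
print(sum(best))    # 7/12
```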
Thank you! I was confused that he was describing a local optimum with those examples, because the metrics he uses are qualitatively different; i.e., it might be more desirable to me to have slightly fewer overall calories while maximising "value" (how much I like the food) rather than minimising cost. What seems significant for determining the optimum is the _order_ of the elements, and the metric (the key function) determines the order. So the global optimum is the solution with the biggest total across all orderings (or, more generally, across all feasible subsets).
The fact that his name basically 'means' "good day" in German and "abdominal label" in English cheers me up for some reason.
What a brilliant lecture and an amazing professor. He reminded me of what a pleasure it is to attend university.
Great lecture. Really looking forward to diving into this second part of the course. Thank you MIT for uploading these.
It is so nice that MIT is making these lectures public 🎉
Great content, teacher, and course. Thank you so much for uploading it.
Easy introduction;
Using the human mind as an example for understanding how mental cognition takes place: in logic sets, to more logic sets, taken in relation to personal information that is believed, from the correlation of past believed information that foundationally supports anything believed by that individual to be true.
*Because beliefs equal what we deem to be real (more on that later). For example, Artificial Intelligence is computationally created (unintentionally) but found to be necessary based upon exposure to beliefs, or purposely created by its creators (humans) without knowledge of the methods being used, for an outside source of creation.
This is the greatest factor of creation. It is statistically possible to re-create what has been proven, and even possible to prove that nothing is random, in the event that it understands the mirrored language which it comparatively recognizes as belonging to a "conscious" observation of some outcome. If the created language is newly acquired and unknown, then no phenomena are observed to validate its existence. Therefore, no new DATA is confirmed and a moment for observational phenomena was lost (some call this luck). In the event that new information is realized and then turns into data due to conscious observation, it will be consciously compared to what is known in some context that cognitively validates a past experience deemed factual and correct, thereby creating a sense of belief. *If the Universe offers assistance to the creation of other Universes, and its nature is to produce systems that are mirrored in reproduction, then it would seem relative. Some of these observations would be similar, metaphor-like, opposite, symbolically important, or whatever is consciously observed and believed to be factual, or possibly thought to have somehow shaped or formed the connected understandings of the unique observer.
We could jump into many academic subject matters and show how conscious creation, through cross-sourcing one subject matter to the next, helps to identify the creation of anything, because everything is a "system", per se...
Hyperparameter tuning is making so much sense now! Thank you so much for this.
how???
what's that?
This is the best teacher; I've realized that most MIT teachers are great. Wish I could study there.
Great content and teacher.
A little remark on the code:
names, values, and calories are not of the same length: names has 9 entries, and cake is indeed excluded.
If you are confused about when "Wednesday" is: yes, it is "2. Optimization Problems", next on autoplay.
I just have two words: Thank You
Anyone here because of the damn quarantine?
I suppose you are optimizing your time
I don't even know how I got here lol
Maybe want to become bald.
4:10
Start
Fantastic course, thank you to MIT, like many here I will donate when I start earning!
The length of the list of names is 9, but the length of the lists of values and calories is 8. Therefore, no value or calories are assigned to the cake. But the lecture is really great... minor mistake...
The 'no good solution' statement for the 0/1 knapsack problem is true if we assume P ≠ NP.
It feels funny to hear absolute silence in response to some questions, as if even MIT students don't know the answer or are afraid of answering wrong.
Woah, nice video. Didn't expect to see the knapsack algorithm used in data science... We learnt it in design and analysis of algorithms... Interesting. I got an idea... maybe I can do something innovative 🤔
By the way, love from India
What a personable prof!
This John Guttag guy, I like his style
He is a legend, a great explainer.
I love the way they teach us... awesome, I've had a great experience... great content, and also valuable...
Wish I could attend in person. Great lecture; just sad there's not enough interaction.
It is such a shame that this video has 287K views while the last video has only 20K. Why don't people complete the course?
Thank you MIT
31:40 The moment the professor discovers that no one understood anything.
Because he is teaching the wrong folk.
Jesus that was so cringe
@dothemathright 1111 that is so true, haha
dothemathright 1111 By this definition, no person at time t will understand lambda functions unless they already know them; and if we let t = 0, no one understands lambdas, therefore no one will ever be able to understand lambdas, and therefore lambdas are useless.
@@ramind10001 It's almost as if he was joking ...
6:14 Shouldn't it be an objective value rather than a function here? What am I missing? Minimum time would be a value, right?
I love this guy! Man literally threw out candy to encourage students to answer questions, that’s so cute lol
What about a genetic tournament algorithm?
I can't believe this is for free
Timsort is a variant of quicksort? And quicksort has worst-case complexity similar to mergesort?? I guess I don't understand computational complexity that well :(
[36:00]
I don't get why we get different answers from the greedy algorithm as long as we use the same items and the same key function.
It does local optimization, but that doesn't mean the local optimization differs each time we run the program given the same parameters.
I coded exactly what's in the video, but when I run it I get the error "name 'Food' is not defined" at line 17 (buildMenu). Does anyone have any ideas? 😢
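That error usually means the Food class is missing or defined below the code that uses it: define the class before buildMenu runs. For reference, a sketch along the lines of the lecture's class (reconstructed from notes, so details may differ from the actual handout):

```python
class Food(object):
    """One menu item; must be defined before buildMenu references it."""
    def __init__(self, n, v, w):
        self.name = n
        self.value = v
        self.calories = w
    def getValue(self):
        return self.value
    def getCost(self):
        return self.calories
    def density(self):
        return self.getValue() / self.getCost()
    def __str__(self):
        return self.name + ': <' + str(self.value) + ', ' + str(self.calories) + '>'

def buildMenu(names, values, calories):
    """Build a list of Foods, one per value; a 9th name with no matching
    value/calories (the slide's cake) is silently dropped."""
    menu = []
    for i in range(len(values)):
        menu.append(Food(names[i], values[i], calories[i]))
    return menu
```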
What a cliffhanger to end on! :)
This Parachute is a knapsack! XD
Thanks for the assist, Ana ❤️
LOLLLL I love when no one can answer his questions. Omg, I feel so bad for that professor.
Thank you from Algeria
36:06 Do you mean calories as weight, sir?
This food-reward thing reminds me of my relationship with my dog. :) Anyhow, good explanation and overall definition of such concepts!
32:14 I feel so bad for the prof... he's trying so hard to build a connection with his students...
Where did the I[i] come from? Shouldn't it be L[i]?
He didn't define it at the beginning as a list, but it is the list of item values and weights.
The professor knows how to solve complex optimization problems but doesn't know what to do when the screen freezes. Calls the assistant.
Thank you for these lectures. If I come into money I will make a large donation.
Which book is used for this course, and where can I find exercises on the different topics covered in the course, if there are any?
The textbook is Guttag, John. Introduction to Computation and Programming Using Python: With Application to Understanding Data. 2nd ed. MIT Press, 2016. ISBN: 9780262529624. It is available both in hard copy and as an e-book. (mitpress.mit.edu/9780262529624). The course materials are available on MIT OpenCourseWare at: ocw.mit.edu/6-0002F16. Best wishes on your studies!
Are the numbers inside the 'values' array randomly picked by the instructor, or do they act as a grading scale for each menu item?
I think they are a grading scale he has chosen to order the items according to how much value they have to him (how much he likes them).
Great course...but objectively speaking...we are always looking for a=b...now subjectively speaking...a=whatever the *user* wants... which brings us back to why we stick to frigging applying a linear transform on everything...
Just finished the exam for this... If only this had been uploaded a few months ago...
32:31 Really, no one can answer?!
Is there a specific order in which I should watch the different playlists for ML?
Yes, it depends on what you want to learn.
Dr. Ana Bell from 6.0001 pops up in this video... Did any of you guys notice?
35:18
very good lecture
Why are there 9 names but only 8 values and calories?
I think it's a minor mistake. You have to omit cake.
thanks, MIT
This is amazing
36:48 The donut should have 95 in calories instead of the 195 shown in the result, and the apple should be 150, not 95.
Thank You
What are the prerequesites of this course?
6.0001 Introduction to Computer Science and Programming in Python is the prerequisite for the course. See the course (and the prerequisite) on MIT OpenCourseWare at: ocw.mit.edu/6-0002F16. Best wishes on your studies!
thank you, MIT OCW
So I have learned machine learning, Python, SQL, Tableau, Power BI, and Flask in 10 months, thanks to corona... ugggh
What have you put into practice?
@@ArunKumar-yb2jn Got a job in a business analyst role
@@axa3547 What does a business analyst do? Work with Excel or code?
@@ArunKumar-yb2jn Depends on you, whichever tool you want to use; I use both.
Did you get the job without a diploma in those, simply by skill?
Great!
thanks
Boy, talk about your cliffhangers.
Quicksort's worst case is O(n^2). The professor probably meant to say average-case complexity.
It probably was a white lie; having to explain the actual difference between average- and worst-case time complexity would drive people's attention away from the actual problem, imo.
Would've been better if he had just used merge sort, which the students already knew, though.
Maybe he was saying the worst case for Timsort is O(n log n). en.wikipedia.org/wiki/Timsort
But timsort is not a quicksort, it is more like a mergesort.
Excellent. Could you also upload physics and mathematics videos with Spanish subtitles or translated into Spanish? Thanks.
@Nicolás Gómez Aragón roflmao, I got it.
Learn English
La Casa de Papel knows the 0/1 knapsack problem, omg!
28:42 What is "items" used for there?
The 'list' of Food items, i.e., the menu.
Thank you, I enjoyed it.
Source code of that example program, please?
ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0002-introduction-to-computational-thinking-and-data-science-fall-2016/lecture-slides-and-files/
Cheers!!!
error?
thnx MIT
I could have gotten the candy reward; it was so obvious that the answer was "food".
Americans love examples with food.
What programming classes should I take before learning this course?
Thanks
The Syllabus lists the Prerequisites as "6.0001 Introduction to Computer Science and Programming in Python or permission of instructor." See the course on MIT OpenCourseWare for more info at: ocw.mit.edu/6-0002F16. Best wishes on your studies!
@@mitocw That doesn't cover the math prerequisites.
👍 Good morning, good video
damn, I'm hooked
27:11 So n = len(items) has a computation time of O(n log n), huh? I only understood it just now. Thank you, sir.
No, "itemsCopy = sorted(itmes, key = keyFunction, reverse = True)" has a complexity of O(nlogn) as the fastest sorting algorithm has that complexity. by "n = len(items)" the professor means that in O(nlogn) n is equal to the number of items we have to sort.
Hey, help me: is he using Python(x,y)?
One more question: why does the density function return self.getValue() / self.getCost()?
value / cost gives you how much value is packed into 1 unit of cost for the object, and he chose to call that the density.
cool
RE: Carnegie Hall Joke.
--> Is that where Inglourious Basterds got the line from?
Kind of frustrating that he starts talking about vectors without ever actually explaining what they are. This wasn't covered in 6.0001.
Vectors come from maths.
Think of them as objects that, when added to another object of the same type, give you an object of the same type, and, when scaled by a constant float, give you an object of the same type.
@@sharan9993 So are they a specific type of object, like a tuple or a list? I fully understand what a vector is in a math context; that's simple. What was skipped over here was their adaptation and use in coding applications.
@@winkcrittenden6011 In maths we define them as 2 coordinates in 2D, 3 in 3D, etc.
Here we define them as arrays.
In Python, an array can be implemented using a list, so each vector can be thought of as a list of n numbers; if we move to higher dimensions, we increase the size of the list.
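A tiny illustration of that in Python (my own sketch, not from the lecture):

```python
def vec_add(u, v):
    """Adding two same-dimension vectors gives a vector of the same type."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return [a + b for a, b in zip(u, v)]

def vec_scale(c, u):
    """Scaling a vector by a constant gives a vector of the same dimension."""
    return [c * a for a in u]

print(vec_add([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
print(vec_scale(2.0, [1, 2, 3]))      # [2.0, 4.0, 6.0]
```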
@@sharan9993 that actually helps a lot. Thank you
@@winkcrittenden6011 What are you learning this for?
machine learning?
There's no value or calories for the name 'cake' :(
Love the prof and the content, but what's with the lame students? I was yelling FOOD! FOOD! while watching this at around the 32:00 mark, and the students were just not interested in answering or participating. Don't they know how lucky they are to be sitting there?
Tbh I had to pause and go back a couple of times to understand what the prof was saying, so maybe they were just a bit confused.
I don't think it's a great idea to throw in a new programming concept (and not a trivial one, btw) in a first lecture where you want to focus on the main subject and communicate an intuitive view. Students who get lost in the programming side will lose the optimization side too.
Then they wouldn't be MIT students...