Lecture 12: Square Roots, Newton's Method
- Published: 28 May 2024
- MIT 6.006 Introduction to Algorithms, Fall 2011
View the complete course: ocw.mit.edu/6-006F11
Instructor: Srini Devadas
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
00:00 review of last lecture
05:10 error analysis of Newton's Method
10:40 multiplication algorithms
13:20 generalized Karatsuba algorithm
17:31 Schönhage-Strassen algorithm
19:06 Fürer's algorithm
23:20 high precision division
32:35 example
36:00 complexity of division
47:40 complexity of computing square roots
complexity of the whole lecture is Θ(n!)
most of this went way over my head but i'm glad it exists and is available for free.
don't worry clear concepts first then watch it again
Whoever controlled the camera helped make this lecture much easier indeed. Thank you so much for constantly turning and zooming in on the professor's face! Looking at the prof's face is exactly what I need to understand the lecture; you're a genius for knowing this.
Oh god... I came here as a simple programmer looking for a way to write my own square root function for decimal numbers. Who would have thought this was so complex.
Same.
Actually, creating your own sqrt is not that hard. The hard part here is the multiplication and division of large numbers (larger than the 64-bit long type). If you want a sqrt that fits in 64 bits, you just use one expression (Newton's method) and apply it a few times to get the desired accuracy.
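For anyone wanting to try it, a minimal sketch of that idea in Python, using the plain Newton (Babylonian) update x ← (x + a/x)/2 on ordinary doubles; the stopping tolerance is an assumption, not something from the lecture:

```python
def newton_sqrt(a):
    """Approximate sqrt(a) for a > 0 with Newton's method on
    f(x) = x^2 - a; each step roughly doubles the correct digits."""
    x = a if a > 1 else 1.0  # crude starting guess
    while True:
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < 1e-12 * nxt:  # stop once converged
            return nxt
        x = nxt
```

A handful of iterations reaches double precision for small inputs; large inputs take extra steps only because the crude starting guess is far from the root.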
Step up your game . Simple is not good enough.
It's high school math wtf is so "complicated"?
@@47Mortuus Which high school? :) I really wanna know
Great lecture! Easy to follow with his explanations.
Newton's method is a more general form of what they used. The case used in the class is called the Babylonian Method (also a special case of Newton's Method), but it had been used for centuries before Newton. I thought the Babylonians deserved a mention here (it would be better if they mentioned it in class from now on).
Exactly. The algorithm presented is the classic 'Babylonian Method To Compute Square Roots'
Indians knew that too
@@golupandey1006 Yes, brother ✌️
The views of this lecture series are also converging at a quadratic rate, video by video XD
So we are calculating (2·10^(2d))^(1/2) so that we deal with numbers at our full precision of d digits from the beginning, and afterwards it's easy to shift by d.
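The scaling trick described above can be sketched with Python's arbitrary-precision integers; math.isqrt here stands in for the lecture's Newton-based integer square root:

```python
import math

def sqrt2_to_d_digits(d):
    # floor(sqrt(2 * 10^(2d))) = floor(sqrt(2) * 10^d):
    # scaling by 10^(2d) under the root is a shift by d outside it
    return math.isqrt(2 * 10 ** (2 * d))

print(sqrt2_to_d_digits(10))  # 14142135623, i.e. sqrt(2) ≈ 1.4142135623
```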
I love this so much!
Can you help me out here: for high precision a/b, we computed 1/b first (using R/b, since shifting by R is then easy). But how do we multiply a by 1/b? Is this done by our high-precision multiplication methods?
@@akash9907 yes, the algorithms can be found in the last lecture (the naive algorithm or Karatsuba) or in this lecture (Toom-Cook/Toom-3, Schönhage-Strassen)
I stopped at 32:00. I am going to BFS first. If I have time someday in the future I will definitely finish this lecture. Thanks for the lecture but I don't think I have sufficient math background for this right now.
Yep, if you can't even understand high school math, you shouldn't be here
Someone in the comments is being very rude
@@zeronothinghere9334 Probably bragging that he's above someone.
This recitation is helpful, you can check it:
ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/recitation-videos/recitation-12-karatsuba-multiplication-newtons-method/
Have you come back and seen the rest?
Can anyone explain what it means to start with a small number of precision digits and then increase the precision, i.e. 1, 2, 4, ..., d/2, d? A small example or something would be a great help.
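One way to see it: each Newton step roughly doubles the number of correct digits, so the early steps can run at tiny precision and only the last step needs all d digits. A small experiment (the function and its digit-counting are illustrative, not from the lecture):

```python
import math

def digits_correct_per_step(d=64, steps=8):
    """Run plain integer Newton for floor(sqrt(2 * 10^(2d))) and count
    how many leading digits match the true answer after each step.
    The count roughly doubles per step, which is why the lecture can
    grow the working precision as 1, 2, 4, ..., d/2, d."""
    n = 2 * 10 ** (2 * d)
    truth = str(math.isqrt(n))
    x = 10 ** d  # starting guess with about one correct digit
    counts = []
    for _ in range(steps):
        x = (x + n // x) // 2
        s = str(x)
        match = 0
        while match < len(s) and match < len(truth) and s[match] == truth[match]:
            match += 1
        counts.append(match)
    return counts

print(digits_correct_per_step())
```

The printed counts climb roughly as 1, 3, 6, 12, 24, ... until the full answer is reached, so running step i at only ~2^i digits of precision wastes nothing.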
I'm going to have to rewatch lecture 11 and 12 again another time. Lecture 12 was in another language lol
Is the material in this lecture in CLRS? Couldn’t find it.
not in CLRS.
There should be an example to explain the last part, I mean a slightly more careful analysis of the division complexity.
Does anybody know what he gave to the students that answered the questions?
Cushions, because the chairs in the classrooms are not so comfortable.
Can anyone explain why we can assume that after approximating R/b, computing (R/b)/R would be cheap? R/b is not necessarily 2^k, right? Thanks
for real, he does explain this idea well
@zach you got it, good for you. Why reply if you're not saying anything worthwhile?
@@yicheng1991 what does "REP of a/b" mean?
Yes, we choose R to be 2^k. We need to find floor(a/b), so we need a high-precision 1/b; to get it we compute floor(R/b) first and then simply do a right shift at the end.
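A sketch of that whole recipe, assuming R = 2^k with k picked large enough; the small correction loop at the end just makes the floor exact after the approximate reciprocal:

```python
def floor_div(a, b):
    """Compute floor(a / b) using only multiplication, shifts, and +/-.

    Newton's method computes x ≈ R/b with R = 2^k, so the division by R
    inside each step is a right shift; then a // b ≈ (a * x) >> k."""
    assert a >= 0 and b > 0
    k = 2 * max(a.bit_length(), b.bit_length()) + 4
    # initial guess 1.5 * 2^(k - bitlen(b)): within a factor of 2 of R/b
    x = 3 << (k - b.bit_length() - 1)
    for _ in range(k.bit_length() + 2):   # error squares each step
        x = 2 * x - ((b * x * x) >> k)    # Newton step for R/b
    q = (a * x) >> k                      # approximate quotient
    while (q + 1) * b <= a:               # fix the tiny remaining error
        q += 1
    while q * b > a:
        q -= 1
    return q
```

The choice of k and the iteration count here are conservative assumptions made for the sketch; the lecture's point survives either way, since every division has been replaced by multiplications and shifts.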
cushions. An incentive so that students participate in class. Mentioned in the 1st video of the playlist.
Dear MIT OpenCourseWare: Thank you for these wonderful videos! But please instruct your camera people not to zoom in so much. It makes it much harder to see the bigger picture.
When you're learning math/CS, you often have a number of equations on the board, and as you're figuring out what's going on, your eye jumps back and forth between them. That's for *you* to do, not for the video operator to focus on one particular formula and say "now look at only this one." I'd say that in general, you never need to zoom in closer than one board's worth of material.
How does 5T(n/3) become n^(log_3 5)?
Refer to the Master theorem, or to the previous lectures (merge sort, heaps) for understanding how to calculate complexity using a recursion tree.
@@raghavendrakaushik4871 Thank you. Can you point me to the actual video or link that explains this? I was watching the whole playlist but I obviously missed the part where this was explained.
@@crjacinro ruclips.net/video/Kg4bqzAqRBM/видео.html
here the prof explains calculating complexity using a recursion tree
this is a nice place to read about the Master theorem I mentioned - brilliant.org/wiki/master-theorem/
@@raghavendrakaushik4871 thank you
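As a sanity check on that exponent, one can count the scalar multiplications such a recursion actually performs and compare against n^(log_3 5) ≈ n^1.465 (the function below is an illustrative toy, not the lecture's code):

```python
import math

def mults(n):
    """Count the scalar multiplications in a Toom-3-style recursion
    that replaces one size-n product with 5 products of size n/3."""
    if n <= 1:
        return 1
    return 5 * mults(n // 3)

# For n = 3^k this is exactly 5^k = n^(log_3 5), which is what the
# master theorem gives for T(n) = 5 T(n/3) + O(n).
n = 3 ** 8
print(mults(n), round(n ** math.log(5, 3)))
```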
After lectures 11 and 12, I have implemented my own square root, multiplication, and division methods using only +, -, >>,
in code.py, starting at line 597
You are really hardworking! I just read through the problem sets and did all the non-coding problems.
good job!
Can anyone tell me how we get 9 multiplications when multiplying (X1, X2, X3) by (Y1, Y2, Y3)?
I don't get it
Imagine multiplying 123 * 456... you have to multiply 9 times!
6*3, 6*2, 6*1, 5*3, 5*2, 5*1, etc.
@@crjacinro thanks 😊
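The 3-parts-times-3-parts count can be made concrete with a toy schoolbook multiplier that tallies its elementary products (an illustrative sketch; single decimal digits stand in for the big chunks X1..X3, Y1..Y3):

```python
def schoolbook_parts(x_parts, y_parts):
    """Multiply two numbers split into equal-size parts the naive way:
    every part of x meets every part of y, so 3 parts x 3 parts = 9
    elementary multiplications (Toom-3 gets this down to 5)."""
    base = 10  # each part is one decimal digit in this toy example
    total, count = 0, 0
    for i, xi in enumerate(x_parts):      # parts listed least-significant first
        for j, yj in enumerate(y_parts):
            total += xi * yj * base ** (i + j)
            count += 1
    return total, count

# 123 * 456 with digit parts [3, 2, 1] and [6, 5, 4]
product, count = schoolbook_parts([3, 2, 1], [6, 5, 4])
print(product, count)  # 56088 9
```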
Appreciate your effort🙏
38:20 ooooo getting mad
For square root just raise x to the 1/2 power (0.5) -- converts into very fast assembly.
Lucius Dark How would you do that to a million places?
@@TheBilly that's a joke :))
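The joke aside, the contrast is easy to demonstrate: x ** 0.5 goes through a hardware double with about 16 significant digits, while the integer route used in the lecture has no such ceiling. A sketch:

```python
import math

d = 20
n = 2 * 10 ** (2 * d)
exact = math.isqrt(n)        # arbitrary precision: every digit correct
from_float = int(n ** 0.5)   # hardware double: only ~16 digits survive

print(exact)       # 141421356237309504880
print(from_float)  # same leading digits, junk at the end
```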
I don't understand a single thing lol
I gave a nearly identical lecture to my fellow gifted 7th graders many years ago... all but 2 out of 12 looked at me like I was insane. The 2 who understood it are a physician and a theoretical physicist...both are amateur programmers/hackers/pot heads/smart asses...I invest our money and get free healthcare as well as get to argue about string theory while we toke on a bong just like in our mit days. Funny how life works out from random meeting of the minds :)
they are being too abstract
divisions are not necessary. Search the internet.
There is a version that uses a "magic number" to get the initial estimate when using 32bit floats.
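That is the well-known "fast inverse square root" trick with the magic constant 0x5f3759df. A Python sketch that reinterprets the 32-bit float's bits the way the original C code does (struct handles the bit-reinterpretation):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) for a positive 32-bit float without
    division: reinterpret the float's bits as an integer, apply the
    magic constant, then do one Newton step (multiplications only)."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # float bits as uint32
    i = (0x5f3759df - (i >> 1)) & 0xffffffff          # magic initial estimate
    y = struct.unpack('<I', struct.pack('<f', 0))[0]  # placeholder, replaced below
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # bits back to float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton refinement
```

After the single Newton step the relative error is well under 1%, which was good enough for the graphics code this trick comes from.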
I disagree that the complexity of division is the same as the complexity of multiplication. Fast square-root routines try to avoid divisions because division is slow.
Newton's method should be simple in most cases. The professor makes it 100 times harder by trying to take the square root of a very large number represented as an integer instead of a floating point number.
> The professor makes it 100 times harder by trying to take the square root of a very large number represented as an integer instead of a floating point number.
Floating-point numbers wouldn't give you arbitrary precision (denoted d in the lecture), unlike integers, and that is the whole point of this and the previous lecture.