4. Factorization into A = LU
- Published: 18 Oct 2024
- MIT 18.06 Linear Algebra, Spring 2005
Instructor: Gilbert Strang
View the complete course: ocw.mit.edu/18-...
RUclips Playlist: • MIT 18.06 Linear Algeb...
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
The shoes and socks analogy for inverses of matrix products is probably the cutest thing a math genius has ever said.
It is in every textbook.
@Reed Morris Who put together the ranking you speak of? This is not something one can evaluate objectively. Try again.
And of course this is basic math. This is a freshman or sophomore undergraduate math class. How does that relate to my original comment at all?
@@SilverArro I know not of the ranking you speak of; how can you rank the cuteness of all geniuses by your own mechanisms? He isn't very bright in my book, but I guess he can still be a math genius. How do you know he didn't just study hard? Not everyone competent is a genius just because they are to you.
@@SilverArro Most places require 2 years of college calculus before linear algebra; that's the third year, btw, just saying. Saying it's entry level doesn't uplift you at all.
@@Gojam12 The ranking thing was in reply to a comment that has apparently since been deleted. I really have no idea what you’re talking about and I don’t particularly care. You do not need 2 years of college calculus to take linear - 2 semesters maybe (that’s a single year). I took AP Calc in high school and went straight into Multivariate Calc and Linear my freshman year. Since this is MIT, I’m going to guess many students are in a similar boat. Linear Algebra is indeed basic math, sorry. It is where most students just begin to get their feet wet in digging into mathematical theory. If you want to argue further over something so trivial, feel free to keep arguing with yourself here - I won’t be replying again. This was a lighthearted comment and certainly not meant to be a treatise on mathematical genius - I find it amusing that you should need that pointed out to you. Goodbye.
The internet is such a wonder! Thanks to it, I can learn from great educators like Prof. Strang from the comfort of my home. What a nice era to be a human in.
Gilbert Strang lecture: "and this is a matrix..."
Gilbert Strang textbook: "Find the corners of a square in n dimensions and whether vectors a, s, d,e w,wieidwdjdkdk are contained in the cube...."
lmaoo omg
THAT IS SO TRUE OMG
So in other words he ain't worth a shit, is that what you mean? Because I agree
he even admits that some of its examples are dumb
@@Gojam12 what do you mean?
Lecture timeline Links
Lecture 0:00
What's the Inverse of a Product 0:25
Inverse of a Transposed Matrix 4:02
How's A related to U 7:51
3x3 LU Decomposition (without Row Exchange) 13:53
L is product of inverses 16:45
How expensive is Elimination 26:05
LU Decomposition (with Row exchange) 40:18
Permutations for Row exchanges 41:15
awesome
Solomon Xie You are the hero everyone needs :D
What is the book that the students use for this course?
@@saurycarmona5716 The professor himself is the author of the book. It's called
'INTRODUCTION TO LINEAR ALGEBRA'
By GILBERT STRANG
Thank you MIT for this... giving the world the privilege to view and learn from your content... it's commendable, and I am grateful.
how many times has watching a lecture brought a smile to your face ? I was constantly smiling - every time he pointed out something that I hadn't thought of in the way he mentions it. Such an amazing teacher!
The moment he said the inverses of these matrices (permutation matrices) are just their transposes...
Blew my mind I had to pause to check all of them... wow
@@ranjanachaudhary2110 Same lol that blew my mind
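Instead of pausing to check them by hand, the claim is easy to verify in code. A quick NumPy sketch of my own (not from the lecture) checking that every 3x3 permutation matrix is inverted by its transpose:

```python
import numpy as np
from itertools import permutations

# Check the claim for all six 3x3 permutation matrices: P^-1 = P^T.
I = np.eye(3)
checks = []
for perm in permutations(range(3)):
    P = I[list(perm)]                           # reorder the rows of I
    checks.append(np.array_equal(P.T @ P, I))   # the transpose undoes P
print(all(checks))   # True
```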
46:15 Holy shit! He gave a teaser to abstract algebra right there! I just finished abstract algebra and was just watching these lectures because... I don't need a reason, and I just noticed that now! Prof Strang is amazing. I am glad I can watch these lectures from anywhere and at anytime I want :)
Jͼan Yep. He gives a very nice and basic example of a group. The permutation group is often one of the first examples you examine when studying group theory.
Holy Beautiful
“Because... I don’t need a reason,” you are damn right
I studied discrete maths set theory today, and it just clicked that it's an algebraic group the moment he said closure. Small moments of happiness :)
In Germany our linear algebra courses open with group and field theory, and some later on even do modules if they're feeling mean. I kind of like the approach of geometry and computations first though, at least for physics.
I remember being a student and rushing out after class, as these students did. But now at the ripe age of 35, I see these students doing the same and I think "HOW DARE THEY NOT STOP AND APPLAUD FOR SUCH A MASTERFUL PERFORMANCE"
I feel exactly the same
@Supersnowva he's great
It would be hard for a beginning linear algebra student to appreciate how much better Strang’s teaching is than most courses.
I did my maths degree in the late 1970s-early 1980s. I did a load of linear algebra, I didn't realize how lucky I was. Watching professor Strang just makes me want to pick up an algebra book, and work through it. Bravo Professor.
If reincarnation and time travel both turn out to be things, I want to come back as a student in his class.
He just gave an intro to computational complexity in CS, measured in order notations such as Big O, Omega, etc. Pure gold how he also almost put the definition of little o right there. Applause
For those unaware, this video was originally uploaded in very bad quality (you will see complaints about this in the next one) and MIT OCW claimed to have lost the original recording and thus were unable to upload it in higher quality. Fortunately, they seem to have found the tapes. Thanks MIT OCW!
Yeah, those unaware fools... what do they know! ... I remember the bad quality and was afraid of lecture 4 when I repeated the course... but let's not scare the young folks with stories from the past... let it rest...
And where exactly is this HQ version? Or is there one that's more echo-y than this one?
@@windowsforvista ruclips.net/video/5hO3MrzPa0A/видео.html
@@briann10 Damn that is bad!
Professor Strang really enjoys teaching. I so appreciate that I could learn from him! I like the way he teaches so much!
I literally waited years for this video. I'm going to binge-watch this bitch
"I'm sorry that's on tape" Strang 2005
Isn't it year 2000? 2005 is the year they published the "tape", I think.
I know of no soft power more effective than these lectures. Thank you MIT for the generosity and commitment.
I feel so sorry for the younger version of me who didn't know about this great course and the nicest instructor. Poor guy just hated his math classes. Thanks MIT, thanks dear Gilbert Strang :)
I will never understand the people who disliked these videos. I have never liked revisiting a topic, unless it is from Deep Learning. But this! This is a gem that I will revisit my entire life, any given day.
Bored? Pick a topic from this series.
Depressed? Pick a topic from this series.
Need inspiration? Pick a topic from this series.
Time on your hands? Still need a hint?
26:05 "What did it cost?"
~40:00 "Everything"
I bet 2$
I guess I am the lucky one, since I've just started to watch this video lecture series today!
God bless MIT and Professor Strang. Such a bright light for a wonderful course!
He just nonchalantly planted the idea of Group Theory at the very end of the lecture - genius!
If only one could make a math playlist of all the best lecturers in the world... maybe I will do this.
kindly share that genius playlist here..
Yes pls share playlist
yes please do
@@ManishKumar-xx7ny This guy is doing most of the same. Follow him.
"The Bright Side of Mathematics"
36:49 it's (1/3)n^3 + (1/2)n^2 + (1/6)n for those wanting to know the exact answer.
Porque
Can you clear up one thing: why is it not 99 squared, or (n-1) squared, for the first step? What is the significance of saying "about 100 squared" when there is no operation specifically on the first row?
Ajitesh Bhan That is because we only want to know the highest order of the possible answer, as an estimate of the so-called "cost" or "complexity". Try n from 1 to 10 and you will find n cubed is significantly greater than n squared as n grows bigger and bigger, so we just estimate the highest order, n cubed, which is good enough to know the cost, because n squared and the other terms are so small compared with n cubed.
Ajitesh Bhan The first step is n(n-1) if we do an accurate count, but n squared is fine for the same reason: we just want the approximate order of the cost. It is order 2, obviously, so n squared is okay instead of the more accurate n(n-1).
Still an approximation. You’re assuming that the cost for an n x n matrix is approximately n squared when it’s n squared minus n.
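The approximation being discussed is easy to quantify. A quick Python check of my own (not from the lecture), comparing the exact sum of squares with Strang's n^3/3 estimate:

```python
# The exact elimination count from the thread, n^2 + (n-1)^2 + ... + 1^2,
# against Strang's n^3/3 estimate.
def elimination_cost(n):
    return sum(k * k for k in range(1, n + 1))

n = 100
exact = elimination_cost(n)                       # 338350 for n = 100
assert exact == n * (n + 1) * (2 * n + 1) // 6    # the closed form
print(exact / (n ** 3 / 3))                       # about 1.015
```

So for n = 100 the n^3/3 estimate is already within about 1.5% of the exact count.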
Good God these lectures are a perfect addendum when trying to learn this topic from the book alone. Thank you thank you thank you
Note to self: inverting the elementary matrices is simple; just flip the negative entry to positive.
Also, when we do no row exchanges, L comes out directly: take the identity matrix and, for the operations used to produce zeros at the E21, E31, and E32 positions, reverse the sign of each multiplier and write it in the respective position.
And when we do row exchanges to get the U matrix, it is still simple; compared to the no-exchange case, permutation matrices also appear among the elementary matrices, and their inverses are very simple to calculate.
A single row-exchange permutation matrix is its own inverse; in general, a permutation matrix's inverse is its transpose.
Congratulations for finding this recording! Thank you a lot!!!
n^2 + (n-1)^2 + (n-2)^2 + ... + 2^2 + 1^2 = n * (n+1) * (2n + 1) / 6. As n becomes "big", this sum approaches n^3 / 3. His approach in the lecture is also good. Plot a graph of y = x^2 and identify the points x = n, x = n-1, x = n-2, ..., x = 1, and you'll find that if n is big enough, the discrete plots start to look more and more like the curve y = x^2, which then allows you to approximate the area under the curve. Again, what you get for a reasonably large n is n^3 / 3. One final thing is, if these operations are performed in a loop, you'll need way more time, because his analysis assumes an operation on an entire row. To achieve this, you would need vectorized code that can operate on the entire row at once. Hope this helped someone :)
Shouldn't it be : n(n+1)/2
2x2 ----> 1 operations
3x3 ------>1+2=3 operations
4x4 -------> 3+3=6 operations
5x5-------->6+4=10 operations
6x6--------->10+5 = 15 operations
7x7---------> 15+6 = 21 operations
Hence 1+2+3+4+5+6+.....n = n(n+1)/2
I considered multiplying a row by a constant and then subtracting it from another row as one operation.
@@mkjav596 If you multiply a row (say of size n) by a constant, then the cost is O(n).
We generally consider it linear complexity rather than just a constant, since for bigger n (say > 10000) counting a whole-row operation as constant time hides real cost.
@@mkjav596 thanks
Dude, your comment is really helpful to me. May I ask you for more details about the second point of your comment, that loop thing? I'm not getting that point clearly.
Another visualization is building a pyramid starting with a block of base n^2 and height equal to one, then another smaller block with base (n-1)^2 and height one on top and so on eventually resulting in the overall height of n. When n grows and the point of observation moves away from the pyramid such that the height appears to be constant, the blocky pyramid becomes increasingly smooth and the volume approaches 1/3n^3.
The best teacher teaching this material as far as I know. I wonder whether his books are as good as his lectures. May he still have a long and healthy life. :)
I have Strang's Linear Algebra for Everyone. It's a decent book. I prefer Bretscher's Linear Algebra with Applications
great lectures,, much better than any paid courses on Udemy or other sites
The explanation is so clear that for a simple (3x3) matrix I can directly write down L given the multiplier at each elimination step, without going through matrix inversion and multiplication.
Here is the process:
(1) Flip the sign of multiplier at each elimination step.
(2) Directly add it in the L matrix in the same position (index) of L.
In the example at 18:00, flip (-2) to get 2, then put 2 in position L[2,1]; flip (-5) to get 5, and put 5 in position L[3,2]. So I got L.
BTW, another way to understand that L is better than E is that:
(1) When producing E, interfering operations happen, and thus a new (implicit) relationship between row 1 and row 3 is formed. As a result, a new entry (10) appears at position E[3, 1] to reflect that newly created relationship between row 1 and row 3.
(2) When producing L, the operations are in the right order, so no interfering operations and no implicit relationships are generated. So we can just plug each multiplier directly into L, without worrying about missing any entries.
(3) As a side note, the entry value tells you the multiplier, BUT more importantly, the index of each entry tells you the relationship between rows. E.g., in matrix E32, the entry E[3, 2] = -5 means the change coming from row 2 to row 3, with (-5) as the amount. Getting an explicit explanation of the role of entry indexes helps a lot in building intuition in the long run.
Thank you Dr. Strang. You are the hero of Linear Algebra!
"When producing L, the operations are in the right order"
I don't understand why that is. The order is determined by the order of the E_i, just in each case it is the inverse of the respective E... How can the order be "right" if it was determined by the order of the E_i. There are multiple possible sequences of E_i after all.
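The recipe described above (record each elimination multiplier and drop it, sign un-flipped, straight into L) can be sketched in a few lines of NumPy; the matrix here is my own example, not the one from the lecture:

```python
import numpy as np

# Sketch of the recipe above: during elimination, each multiplier m
# (row_i -= m * row_j) goes straight into L at position [i, j].
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

n = A.shape[0]
U = A.copy()
L = np.eye(n)
for j in range(n):                  # pivot columns, left to right
    for i in range(j + 1, n):       # rows below the pivot
        m = U[i, j] / U[j, j]       # the elimination multiplier
        U[i] -= m * U[j]            # E step: subtract m times the pivot row
        L[i, j] = m                 # inverse step: the multiplier lands in L
assert np.allclose(L @ U, A)        # A = LU, no row exchanges needed
```

Because the loop works top-down, each multiplier is applied to rows that earlier steps have already finished with, which is why nothing "interferes" and L fills in entry by entry.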
This man is just a genius in the purest meaning of the word. He is like Neo, he can see matrices everywhere.
Thank god someone uploaded a better audio/video quality version; the other one was abysmal.
An extraordinary teacher. Thank you MIT.
This is SO helpful, more than thankful for this upload. I really like this professor too.
Thank you Dr. W. G. Strang, for all this knowledge you have favored us all with.
The way he connects the dots. Wow!
Ah that explains it. I don’t understand the math but now I finally understand why my socks are getting wet when it rains
Thank you so much. This is the proper education people should receive.
36:49 the precise answer would be n(n²-1)/3..
When I first ran into linear algebra at university I was so stuck trying to understand even the basic topics of my courses. Then after 2-3 years I discovered Mr. Strang's lectures, and I have to say I am so grateful for this professor, because his teaching approach made me understand the whole concept of linear algebra and I actually found it very interesting for the first time in my life. Plus I finally passed my courses after all these years xD God bless you Mr. Strang :)
Put on socks > put on shoes; thus, to invert the process, take off shoes > take off socks 😂 Great analogy!
But do you put on one sock, then one shoe, then the other sock, then the other shoe? Or both socks first, then both shoes? And do you have to take them off in the same order?
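The shoes-socks rule is easy to sanity-check numerically. A quick NumPy sketch of my own with random matrices:

```python
import numpy as np

# The shoes-socks rule: (AB)^-1 = B^-1 A^-1, i.e. undo the last step first.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```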
uuuu the quality in this video is better than the other one in the previous playlist
Thanks, Dr. Strang. I always enjoy your lectures.
I just repeated watching this video twice and then got the idea of why L is better than E. Thanks Dr. Strang
Didn't know there were videos of his classes. I have been learning from his books in my school.
I don't understand how some people find it easy to disrespect teachers. In Bangladesh, we respect our teachers.
Thanks, MIT, for sharing this type of document with the world. Thanks...
(1/3)*n^3, magically! Hmm! But hey, it makes sense: it is a sum of all (n-x)^2, and since he treated the pivot index as a continuous variable, the discrete sum turns into an integral, and there you have it: (1/3)n^3.
Group. They are a nice little GROUP.
Wonderful explanation, prof. Gilbert. Thank you!
Since all eliminations must be done by computers for large matrices, intuitive approaches fail quickly. So precise, rigorous algorithms are the only practical way to do elimination. Gilbert Strang's style defies the rigorous approach, and does so on purpose to breathe life into the dull process of elimination.
So glad there's a solution to the Sox Shoe Primacy Dilemma.
thanks, Profesor Gilbert Strang
Finally! Where did you end up finding this??? Cached in someone's IE?
They restored it by training a neural net on all the other videos together and feeding it the low quality one as input to transform
@@nickpayne4724 no way this is the work of a neural net.
His closet consists of 7 version of the exact same outfit.
My exact inference
@Emanuel D Underrated comment LMAO
He's a superhero! What do you expect?
@Wilhelm Eley Dayum!
@Wilhelm Eley If he changes his outfit daily with periodicity coprime with 7 (basically "any number"), over time we would see him using all his clothes, thanks to Bézout's identity. u.u We just need to spot the minimal differences in his outfits and, statistically, hope we are right about the number of clothes he uses. We are doing our jobs right. u.u
The educational lead up to 40:07 "We really have discussed the most fundamental algorithm for a system of equations."
Yes in fact, the shoes-socks rule stands well in this science
Man the chalk glides so smoothly across the black board, when I give tutorials at my university it's usually a huge pain in the ass to draw stuff on there because it just feels like shit haha
I didn't understand at the first viewing, nor at the 2nd, but at the 3rd or even the 4th time, wow, CRAZY approach!
A 100×100 matrix will take at most 49950 elimination steps to reach triangular form, if and only if none of the elements equal 0. Explanation (a formula as per me 😁): a 100×100 matrix has 10000 elements, so 100 pivots, which are not to be changed, leaving 10000-100 = 99900 elements. We are supposed to make either an upper or a lower triangle, so we have to change half of the elements, skipping the pivots: 1/2 × 99900 = 49950 elements, so 49950 steps 😎
I would say that the most efficient way of solving Ax=b would be solving A^T A x = A^T b (a least squares problem) using the CG algorithm, due to my personal amazement with the CG method. Pretty sure that's not the case, but this method gives me chills. Hahaha
36:05 to sleep in front of 300k people... a fucking legend
This Python program shows that the total sum approaches n cubed over 3:

import matplotlib.pyplot as plt

def sum_of_squares(n):
    # 1^2 + 2^2 + ... + n^2: the estimated elimination cost
    return sum(i * i for i in range(1, n + 1))

ns = list(range(1, 1001))
plt.plot(ns, [sum_of_squares(n) for n in ns], label="sum of squares")
plt.plot(ns, [n**3 / 3 for n in ns], "--", label="n^3 / 3")
plt.legend()
plt.show()
How is the number of operations n^2?
2x2 ----> 1 operations
3x3 ------>1+2=3 operations
4x4 -------> 3+3=6 operations
5x5-------->6+4=10 operations
6x6--------->10+5 = 15 operations
7x7---------> 15+6 = 21 operations
So it should be 1+2+3+4+5+6+.....n = n(n+1)/2
Correct me if I am wrong
This lecture on factorization is very helpful; however, I don't fully understand why this concept is important.
lower and upper
Great video! I love this series. In this lecture, Dr. Strang briefly mentions that the cost of operations for the det(A) will be (n!) He also shows us how the cost of getting A into upper triangular form U will be (1/3)n^3. But from Lecture 2 we know that one way of finding det(A) is to get A into U and then simply find the product of all n pivots. So it seems like the cost for finding det(A) would just be a bit more than (1/3)n^3, perhaps (1/3)n^3 + n. I must be missing something here; any thoughts?
If I understand you correctly, I think the new term n is ignored because it's not significant compared to n^3 as n goes to infinity. I'm no expert and I'm not even sure if you are right. But if you are, and you are wondering why the n is ignored compared to n^3, i think this is the reason.
Weston Loucks, you're right. When you calculate the upper triangular form and then multiply the pivots, the work scales with n^3.
When he says the effort is n!, he refers to a calculation with the Laplace (cofactor) formula.
plug in some numbers and see how rational the outcomes seem
The lower term for large n almost vanishes so it isn't significant in Big O notation stuff. n^3 will dominate as n goes to infinity. The n! comes from a popular determinant based algorithm.
Yeah, I was thinking about it, and it came to me that, say we have a 100 by 100 matrix. Now if we count a multiplication and a subtraction as two operations, then to reach a state where the first column of the matrix has only one nonzero element (our pivot in row 1), we do 100 subtractions for each of the 99 rows, as the first row remains unchanged; also, since we are multiplying all the elements of a row by some multiplier, we have 100 multiplications, 99 times. So the answer should be about 2*100*99. Generalizing for n elements, it comes to 2*n*(n-1), and so the total operations will be 2*[(1/3)(n^3) - (1/2)(n^2)]
PS: If we take into account the discreteness, the total number of operations = 2*[n(n+1)(n-1)/3]
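The point raised earlier in this thread (get det(A) from the pivots of U at roughly n^3/3 cost, instead of the n! cofactor expansion) can be sketched directly. A minimal NumPy sketch of my own, assuming no row exchanges are needed (nonzero pivots); the test matrix is my own example:

```python
import numpy as np

# Eliminate to U (about n^3/3 work), then det(A) is the product of the
# pivots -- no n! cofactor expansion needed.
def det_by_pivots(A):
    U = A.astype(float).copy()
    n = U.shape[0]
    for j in range(n):
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return U.diagonal().prod()

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
assert np.isclose(det_by_pivots(A), np.linalg.det(A))
```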
Thanks for this one ! Was awaiting it for a long time :D
This professor is awesome!
I really find it funny how towards the end of the lecture most students can't wait to go..😂
thank you mit for this
June 15/2019-Very good lecture!
Look, I'm going to say this: I learned much of this in Pre-Calculus in 1981 👍🏾
Correct me if I'm wrong.
I was following the lecture series in order but I don't think transpose was taught in any of the previous lectures.
Lol, you are watching MIT courseware... they expect you to know the basics; they won't spoon-feed you everything!!!!
Gilbert Strang a legend
I=Lu
Tnx to MIT for this kind of stuff.
Thanks, Dr. Strang
Thanks Gilbert!
This actually makes sense now! Thank you!
love this course
Thanks a Lot, Sir & MIT for bringing out these excellent lecture series on Linear Algebra. May I know where one can find the corresponding problems & assignments for these lectures. Thanks.
The course materials are on MIT OpenCourseWare at: ocw.mit.edu/18-06S05. We also recommend you look at the OCW Scholar version of the course. It has more materials to help self-learners out: ocw.mit.edu/18-06SCF11. Best wishes on your studies!
@@mitocw Thanks for sharing the requested course contents
This is some dark magic right here.
At 39:00
The cost for b is not EXACTLY n^2 operations like he says in the video.
Yes, there are n elements, and we assume that all the elements are nonzero from the beginning. But that doesn't mean that EVERY element is being changed. For this to be true (that we are using n^2 operations) we ALSO must assume that there are no 1's in the pivot positions from the beginning. For example, if there were 1's in all the pivot positions from the beginning, the cost for b would be n^2 - 100 (unlikely, but as an example). So the cost for b is not exactly, but CLOSE to, n^2.
Why is it n^2 operations to obtain a column of n zeros?
At 24:20 when he says that the multiplier goes directly into L, he means the negative, right? If you keep track of the operation on the left side, it inverts when you bring it to the right side.
He defines operations as multiplication + subtractions so by definition the inverses have the positive multipliers.
Can anyone explain why the E21 in 11:16 is easy to invert? Did he teach about the skills in the previous lecture? Or the skill is taught in the readings?
If there is -4, just put 4 in the same place. It is related to the ways of understanding matrix multiplication. Start with EA = U; this means you are doing one step of elimination to A, say step E21. The step is: take row 2 minus 4 times row 1 (of A). This is what the second row of E, i.e., (-4, 1), means. Now you want to cancel this step to get A = LU. You need to add the 4 times row 1 back to row 2. So the new (4, 1) means: take the changed row 2 and add 4 times row 1 back.
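The sign-flip rule for the 11:16 example can be verified directly in NumPy (a small check of my own):

```python
import numpy as np

# E21 subtracts 4 times row 1 from row 2; its inverse just adds it back,
# so the only change is flipping -4 to +4.
E = np.array([[1., 0.],
              [-4., 1.]])
E_inv = np.array([[1., 0.],
                  [4., 1.]])
assert np.allclose(E @ E_inv, np.eye(2))        # they undo each other
assert np.allclose(np.linalg.inv(E), E_inv)     # inv(E) is just the sign flip
```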
Why is a space a subspace of itself? A container is not an object inside of itself, and the components of an object are not the sum of its components.
hmm is there a complementary book we must follow? I'm puzzled as I don't think we've introduced the elementary matrices in previous lectures, or Upper/lower matrices. I feel this is not beginner friendly.
We recommend you view the course materials along with the videos at: ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011. For further study, there are suggested readings in Professor Strang’s textbook (both the 4th and 5th editions):
Strang, Gilbert. Introduction to Linear Algebra. 4th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2009. ISBN: 9780980232714
Strang, Gilbert. Introduction to Linear Algebra. 5th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2016. ISBN: 9780980232776
Best wishes on your studies!
@@mitocw Thank you much. Already looking in the resources and things are making more sense. Thank you for taking the time.
at 32:35 => Should not the first cost be 99*2 instead of 100^2 because there are 99 rows below first one. For pivoting each row below, you need to multiply the first row by some constant then subtract from the row you are pivoting. So, essentially are we not ending up having 2 operations per row for those 99 rows, hence, 99*2 instead of 100^2 ???
I think I have figured it out. For each element in any row below the first one, you compute (that element - multiplier*corresponding first-row element), and that is 1 operation. So, you have 99 rows below the first row, each having 100 elements, so it should be 100*99 operations, which Prof. Strang writes as about 100^2.
Thanks
@@bridge5189 But he said that after 1st step 1st row(which is obviously not changed) , 2nd row and 1st column only these are clean. So, for this I think he only meant by cleaning up the 1st column (just like Gauss), So, in that case the operations for just 1st step would only be 99
@@anuragagarwal5480 Elimination steps are nothing but writing linear combinations of the given system of algebraic equations by multiplying by some constants and then adding/subtracting them from each other.
So, when you do first elimination step for bringing in zero at second row's first column by subtracting the second row from some constant times the first row, you would have to do the same operation for the whole second row. Thus, you would get n operations, one for each element in the second row. Here, n = 100.
Similarly, you would keep on doing this for bringing zero in the first element of all the rows beneath the second row, which would be 98 in total.
Hence, you would end up doing 100*99 operations in total.
@@bridge5189 Yes, but we also don't want to make our 2nd row's 2nd column and respectively in further rows to be zero as we want our diagonal elements to be non-zero.
2. So, the total number of operations must be 100*99 to complete the elimination process; then why is sir taking another (n-1)² + (n-2)² + ...... + 2² + 1² ≈ n(n+1)(2n+1)/6 operations to complete the whole elimination process?
The first step's cost is taken as about n^2 (100^2).
Now, what we would need in our matrix is bringing zero in the second column starting from 3rd row to 100th row. This would be just like bringing zeros in the first column from the second row to last row in any (n-1)×(n-1) matrix. So, for this we can say we would need about (n-1)^2 operations.
So, we have 1^2 + 2^2 + ...... + n^2 = n(n+1)(2n+1)/6
4:57 "If I transpose these guys, that product, then again, ...": why did he jump suddenly to this theorem without any definition of transpose or the logical derivation? Did I miss something?
I finally got stuck at this video. I guess I haven't mastered what was taught in the last lecture. Will revise it by solving the assigned problem set like the MIT students. Cannot demand the same progress when I don't practice as much as the MIT students do.
He proves that you multiply in reverse order with inverses. He doesn't prove that with transpose. I want to work it out - I think it has to do with the fact that you multiply row * matrix, you don't multiply column * matrix. The rows of A-transpose are the columns of A. So you have to reverse order to multiply row * matrix.
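While working out a proof, a numerical check is a useful sanity test. A quick NumPy sketch of my own, with rectangular matrices so the dimensions force the reversed order:

```python
import numpy as np

# (AB)^T = B^T A^T: the same order reversal as with inverses.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
assert np.allclose((A @ B).T, B.T @ A.T)   # shapes (4,2) on both sides
```

The shapes alone already hint at the rule: (AB)^T is 4x2, and only B^T (4x3) times A^T (3x2) can produce that.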
No doubt, sir, you are a blessing for mathematicians and for everyone related to this field; I often enjoy your lectures in my vacations.
Can someone explain why the computatinal expense is roughly 100^2 for the first step? I thought it would be roughly proportional to n = 100
Because once you eliminate a single element from a row, the whole row changes; as each row has 100 elements, all of those change, i.e., in a single row operation you change 100 elements. So in the first step, when you eliminate the first element of all 99 rows (below the first row), a total of 100*99 elements change, which is roughly taken as 100^2.
@@raqeebkhan8678 Ooookey, ty. So as I understand, we calculated the expense per entry, not per row.
@@raqeebkhan8678 thanks that confused me too
37:29 Here's where the genius says about his favourite subject.
26:10
I have a question about the cost of elimination.
Ax=b
For an nxn matrix A
For the 1st pivot, instead of 100^2, shouldn’t it cost 100*99?
2nd pivot, instead of 99^2, shouldn’t it cost 99*98?
3rd pivot, instead of 98^2, shouldn’t it cost 98*97?
So, instead of n^2+(n-1)^2+(n-2)^2+...., shouldn’t it be n(n-1)+(n-1)(n-2)+(n-2)(n-3)+.... instead?
Also, how did he get n^2 for the matrix b?
How did transpose suddenly come into the picture?
Was about to ask the same thing!
Thank you for your lessons!
Yea see you on Wednesday
A*I is A ... correction at 3:04
What is the insight about E vs L? And can the matrix L be computed in place, using A's matrix memory?
Thank you so much!
What can I say, thank you!
39:35, the cost for the columns is about n^2 or (1/2)n^2? I think it should be about (1/2)n^2.
Yes, I think so too. And to be precise it would be n*(n-1)/2
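One way to see the (1/2)n^2 figure is to count the back-substitution work on Ux = c directly; a small sketch of my own (counting one division per pivot plus one multiply-subtract per already-known unknown):

```python
# Back substitution on Ux = c: solving for x_i (from the bottom up) costs
# one division plus one multiply-subtract per already-known unknown, so the
# rows cost 1, 2, ..., n operations -- about n^2/2 in total.
def back_substitution_ops(n):
    return sum((n - 1 - i) + 1 for i in range(n))   # = n(n+1)/2

assert back_substitution_ops(100) == 100 * 101 // 2   # 5050, roughly 100^2/2
```

Under this counting the exact total is n(n+1)/2 rather than n(n-1)/2; the difference is just whether each pivot division is counted as an operation, and either way the leading term is n^2/2.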