Another perfect lecture, finally we can understand such a beautiful subject and not just memorize it like mindless robots. Thank you so much Ritvik, you're our hero! Gratitude from Brazil.
You are most welcome
"I want to make sure to show you the actual applications..." God bless this man.
Thanks :)
3 years later and still the goat
This is a huge gem! I love all your videos, they're always a beautiful mix of theory, applications, and visual examples. I also think they're the perfect length, as well as depth and breadth of connected material covered. That's a delicate balance most technical YouTube videos fail at and what makes yours special. 👍
Wow, thank you!
Excellent! What you said in the intro is the exact reason why I found your video!
Man, you should have been my math teacher at the undergrad level. I would have scored more than I actually did. Simple yet effective explanation.
Best explanation in the world of the rank of a matrix and how it relates to data science.
wonderfully explained. thanks
Best explanation I've seen about this subject! Thank you!
Awesome explanation!!
This is the best linear algebra explanation I've ever heard, and I've watched basically everything. The only thing you missed was the geometric interpretation: the point that the basis axes don't change.
Still, absolutely excellent. 3b1b is the one everyone praises when actually he confuses simple things. You did the reverse.
I like the way you link these things with application, which is mind blowing...whenever I look for answer, I come here. thanks for all your videos.
The explanation is really Awesome!!!
Thank you so much!!
Thank you for making this so clear and specific!
You’re so gifted at explaining things in an easy to understand way! Thank you!
Happy to help!
Thank you Ritvik, you explained it in a beautiful, much-needed way.
OMG you are an excellent teacher!
I'm majoring in Economics in South Korea. This video helped me so much. Thank you!
Straight to the point and elegantly explained. Love it!
PERFECT! As a programmer, I found the process just like "data normalization", which is indeed recommended and useful, amazing. One stupid question: what's the difference between the column-by-column check you did and (row-by-row) echelon form? I've seen some use echelon.
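For what it's worth, the two checks always agree, since the row rank and the column rank of any matrix are equal. A quick sketch with sympy, using an illustrative matrix rather than the one from the video:

```python
import sympy

# Illustrative 3x2 matrix: the second row is twice the first.
A = sympy.Matrix([[1, 2], [2, 4], [3, 5]])

# Row-based check: reduce to (reduced) row echelon form and count pivots.
rref, pivots = A.rref()
print(rref)          # two nonzero rows -> row rank 2
print(len(pivots))   # 2

# Column-based check, as in the video: sympy's rank() gives the same answer.
print(A.rank())      # 2
```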
This was the best, and filled many gaps in my mind, bravo👏
Your explanation is awesome man. I simply love the way you explain the concept.
Outstanding video; the best I have seen on the subject!
Excellent explanation.
Thank you so much, you're great at explaining, and I appreciate you including the real-world application of the concept; that helps to connect the dots!
Superb explanation
Helped me with my JEE exam and I learnt something new. Good video!
4-5 years spent trying to understand the real-world use case, that's so true brother, and the same goes for many other concepts as well.
Crystal Clear, very well explained.
so clear and easy to understand! amazing!!
Thank you so much, I was struggling to learn this topic from every resource but didn't understand a bit :)
Cool! This is the first time that I really get the rank of a matrix.
Really, thank you, it is a very beneficial video; it's the first time I've understood the rank of a matrix.
Amazing content as always Ritvik!
Incredible! Thank you so much for the intuitive video.
No problem!
Great video, thanks so much!!
Masterclass
Brilliant
Another great video, thanks Ritvik! Could you please make one about the determinant / trace / diagonalization? This stuff comes up in many Linear Algebra courses, and I specifically wonder how it is used in Data Science.
Thank you, sir!
excellent explanation! Thank you so much!
fantastic explanation!
Can you explain its use in solving physical problems?
nice explanation
Very good Video! Keep up the good work!!!
Good topic. It turns out that a deep neural network framework is pretty convenient for solving for the two low-rank approximation matrices, or finding the exact solution matrices if they exist. I came up with the following technique: in TensorFlow, you use two Embedding layers with your choice of k and one Lambda layer to do a matrix multiply. Your loss function can be a typical choice like L2 distance between the result of the Lambda layer and the entry of the original big matrix. Each entry of the original big matrix constitutes one training example. The optimizer is your choice, like Adam; everyone loves the Adam optimizer. I came up with this arrangement to do movie recommendations on the MovieLens dataset, and it's better than the Alternating Least Squares algorithm for many reasons, one big one being that with the DNN technique you completely avoid the dumb assumption that the missing entries of the original matrix are zero. Of course, if you are not missing any values, then ALS is probably fine.
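For anyone who wants to try this, here is a minimal sketch of the arrangement described above, assuming TensorFlow 2.x / Keras; the sizes and the random (rows, cols, vals) triples are toy placeholders standing in for observed MovieLens entries:

```python
import numpy as np
import tensorflow as tf

k = 16                      # chosen rank of the approximation
n_rows, n_cols = 100, 50    # toy matrix dimensions

# Toy (row, col, value) triples standing in for the observed entries of the
# big matrix; with MovieLens these would be (user, movie, rating).
rng = np.random.default_rng(0)
rows = rng.integers(0, n_rows, size=1000)
cols = rng.integers(0, n_cols, size=1000)
vals = rng.uniform(1, 5, size=1000).astype("float32")

row_in = tf.keras.Input(shape=(), dtype="int32")
col_in = tf.keras.Input(shape=(), dtype="int32")

# The two rank-k factor matrices, learned as embedding tables.
row_vec = tf.keras.layers.Embedding(n_rows, k)(row_in)   # (batch, k)
col_vec = tf.keras.layers.Embedding(n_cols, k)(col_in)   # (batch, k)

# Lambda layer: the dot product of the two k-vectors reconstructs one
# entry of the original matrix.
pred = tf.keras.layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=-1))([row_vec, col_vec])

model = tf.keras.Model([row_in, col_in], pred)
model.compile(optimizer="adam", loss="mse")   # L2-style loss, Adam optimizer

# Each observed entry is one training example, so missing entries are never
# assumed to be zero; they simply never appear in the training data.
model.fit([rows, cols], vals, epochs=5, batch_size=64)
```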
nice
Gem content. Worth subscribing.
At 9:10, how does A' have 8 numbers? How come it's 4x2? Can anyone please explain this to me? I don't get it.
Can there be any connection to eigenvectors given the relation to PCA?
Thanks, it was really useful. Hope you get more views ! ;)
Hi :) thank you for this video. I wish I'd watched this video before the SVD video. Would you please make a video about latent factor decomposition and the CUR model for approximation?
Thank you!
You're welcome!
Thanks sir
Amazing!
Neat...👌🏽
Thanks!
very nice video
And which math book do you recommend for an in-depth understanding of data science, ML, and AI at the same time, with practical concepts? Just the way you teach (not pure useless math formulas without any data science-related explanation).
Very very good lecture
Just asking: isn't it k/p + p/N?
Great 👍
Such a simple idea used by a major paper: LoRA (Low-Rank Adaptation of Large Language Models).
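To make the connection concrete, here is a tiny numpy sketch of the LoRA idea with hypothetical dimensions: instead of learning a full d x d update to a frozen weight matrix W, you learn two rank-r factors B (d x r) and A (r x d), so the effective weight is W + BA with far fewer trainable parameters.

```python
import numpy as np

d, r = 1024, 8                  # hypothetical model width and chosen rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))     # frozen pretrained weight
B = np.zeros((d, r))            # LoRA factors: B starts at zero,
A = rng.normal(size=(r, d))     # A is randomly initialized

W_adapted = W + B @ A           # rank-r update, since rank(B @ A) <= r

# Parameter savings: 2*d*r trainable numbers instead of d*d.
print(2 * d * r, "vs", d * d)   # 16384 vs 1048576
```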
Off topic, but you should make a video on implementing linear Bayes / Bayesian logistic regression / similar. It would be on-topic for your channel and would also complement your non-Bayesian implementations.
Can you make a video on the trace of a matrix? Does it have any particular purpose? Thank you.
Is this the fundamental idea behind LoRA finetuning of AI models?
I'll probably come back again after getting some sense of it (because it's the first time I've heard that this kind of concept exists :/).
Brilliant!!! Do teachers know this?
Revenge of the dorks, let alone the nerds.
fk, u make it so simple, thanks
You're good alright
I can't see the left side of the board tho
What I couldn't understand in a whole fooking year of my varsity life.
Never mind I see it.
What about this matrix:
1 2 3
4 5 6
7 8 9
The actual rank is 2, but with your method it must be 1.
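A quick numpy check, for the record: the rank here is indeed 2, and the column-by-column method agrees, because the second column adds a new direction even though the third one does not:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.linalg.matrix_rank(M))   # 2

# Column check: col 2 is not a multiple of col 1 (new direction),
# but col 3 = 2*col2 - col1 (no new direction), so the rank is 2, not 1.
print(np.allclose(M[:, 2], 2 * M[:, 1] - M[:, 0]))   # True
```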
I can't speak English.
Great explanation!
🙏 thanks