Another perfect lecture, finally we can understand such a beautiful subject and not just memorize it like mindless robots. Thank you so much Ritvik, you're our hero! Gratitude from Brazil
You are most welcome
"I want to make sure to show you the actual applications..." God bless this man.
Thanks :)
3 years later and still the goat
Man, you should have been my math teacher at undergrad level. I would have scored more than what I actually did. Simple yet effective explanation.
This is a huge gem! I love all your videos; they're always a beautiful mix of theory, application, and visual examples. I also think they're the perfect length as well as depth and breadth of connected material covered. That's a delicate balance most technical YouTube videos fail at and what makes yours special. 👍
Wow, thank you!
Best explanation in the world of the rank of a matrix and how it relates to data science
Straight to the point and elegantly explained. Love it!
Your explanation is awesome man. I simply love the way you explain the concept.
4-5 years spent trying to understand the real-world use case, that's so true brother, for many other concepts as well.
wonderfully explained. thanks
I like the way you link these things with application, which is mind blowing...whenever I look for answer, I come here. thanks for all your videos.
This was the best, and filled many gaps in my mind, bravo👏
Thank you for making this so clear and specific!
Thank you Ritvik, you explained in a much needed beautiful way
Thank you so much, you're great at explaining and I appreciate you including the application of the concept in the real world, that helps to connect the points!
The explanation is really Awesome!!!
Thank you so much!!
Outstanding video; the best I have seen on the subject!
Amazing content as always Ritvik!
so clear and easy to understand! amazing!!
This is the best linear algebra explanation I've ever heard, and I've watched basically everything. The only thing you missed was the geometric interpretation, the fact that the basis axes don't change.
Still, absolutely excellent. 3b1b is the one everyone praises when actually he confuses simple things. You did the reverse.
Awesome explanation!!
Incredible! Thank you so much for the intuitive video.
No problem!
You’re so gifted at explaining things in an easy to understand way! Thank you!
Happy to help!
Thank you so much, I was struggling to learn this topic from every resource but didn't understand a bit :)
Very good Video! Keep up the good work!!!
Really, thank you, it's a very beneficial video; it's the first time I've understood the rank of a matrix.
Helped me for my JEE exam and I learnt something new. Good video!
OMG you are an excellent teacher!
Great video, thanks so much!!
Cool! This is the first time that I really get the rank of a matrix.
excellent explanation! Thank you so much!
Excellent explanation.
Crystal Clear, very well explained.
PERFECT! As a programmer, I found the process just like "data normalization", which is indeed recommended and useful, amazing. One stupid question: what's the difference between the column-by-column check you did and (row-by-row) echelon form? I've seen some use echelon.
fantastic explanation!
I'm majoring in Economics in South Korea. This video helped me so much. Thank you
Gem content. Worth subscribing.
Thanks, it was really useful. Hope you get more views ! ;)
Another great video, thanks Ritvik! Could you please make one about the determinant / trace / diagonalization? Many people see this stuff in Linear Algebra courses, and I specifically wonder how they're used in Data Science.
Superb explanation
Thank you, sir!
Great explanation!
🙏 thanks
Amazing!
Can you explain its use in solving physical problems?
nice explanation
very nice video
Good topic. It turns out that a deep neural network framework is pretty convenient for solving for the two low-rank approximation matrices, or finding the exact solution matrices if they exist. I came up with the following technique: in TensorFlow you use two Embedding layers with your choice of k and one Lambda layer to do a matrix multiply. Your loss function can be a typical choice like L2 distance between the result of the Lambda layer and the entry of the original big matrix. Each entry of the original big matrix constitutes one training example. The optimizer is your choice, like Adam; everyone loves the Adam optimizer. I came up with this arrangement to do movie recommendations on the MovieLens dataset, and it's better than the Alternating Least Squares algorithm for many reasons, one big one being that with the DNN technique you completely avoid making the dumb assumption that the missing entries of the original matrix are zeros. Of course, if you are not missing any values, then ALS is probably fine.
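A minimal NumPy sketch of the same idea, with plain per-entry SGD standing in for the Keras Embedding/Lambda setup described above (the ratings matrix, rank k, learning rate, and epoch count are all made up for illustration):

```python
import numpy as np

# Toy "ratings" matrix; np.nan marks missing entries (never treated as 0).
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 1.0, 5.0]])
n_users, n_items = R.shape
k = 2  # chosen rank of the approximation

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))  # "user embedding" matrix
V = rng.normal(scale=0.1, size=(n_items, k))  # "item embedding" matrix

# Each observed entry is one training example, as in the comment above.
observed = [(i, j) for i in range(n_users)
            for j in range(n_items) if not np.isnan(R[i, j])]

lr = 0.01
for _ in range(5000):
    for i, j in observed:
        err = R[i, j] - U[i] @ V[j]  # residual of the squared-error loss
        U[i], V[j] = U[i] + lr * err * V[j], V[j] + lr * err * U[i]

approx = U @ V.T  # rank-k reconstruction; the nan cells become predictions
```

The key point from the comment survives here: the loss only ever touches observed entries, so missing values are predicted rather than assumed to be zero.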
Hi :) thank you for this video. I wish I'd watched it before the SVD video. Would you please make a video about latent factor decomposition and the CUR model for approximation?
Can there be any connection to eigenvectors given the relation to PCA?
Such a simple idea used by a major paper: LoRA - Low Rank Adaptation for Large Language Models
Masterclass
Off topic, but you should make a video on implementing Bayesian linear regression / Bayesian logistic regression / similar. It would be on-topic for your channel and would also complement your non-Bayesian implementations.
nice
Brilliant
Can you make a video on the trace of a matrix? Does it have any particular objective? Thank you.
Thank you!
You're welcome!
Thanks sir
Great 👍
At 9:10, how does A' have 8 numbers? How come it's 4x2? Can anyone please explain this to me? I don't get it.
Neat...👌🏽
Thanks!
And which math book do you recommend for an in-depth understanding of data science, ML and AI at the same time, with practical concepts? Just the way you teach (not pure useless math formulas without any data science related explanation).
Very very good lecture
Just, isn't it K/p + p/N?
Brilliant!!! Do teachers know this?
Revenge of the dorks, let alone the nerds.
I'll probably come back again after getting some background (because it's the first time I've heard of this kind of concept :/)
Is this the fundamental idea behind LoRA finetuning of AI models?
fk, you make it so simple, thanks
You're good alright
I can't see the left side of the board tho
What I couldn't understand in a whole fooking year of my varsity life.
Never mind I see it.
What about this matrix?
1 2 3
4 5 6
7 8 9
The actual rank is 2, but with your method it must be 1.
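A quick NumPy sanity check on the matrix above (not from the video, just for verification): the third row equals 2*(second row) - (first row), so only two rows are linearly independent and the rank is indeed 2.

```python
import numpy as np

# The matrix from the comment above.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# row3 = 2*row2 - row1, so the rows span only a 2-dimensional space.
print(np.linalg.matrix_rank(A))  # → 2
```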
Why is A' 4x2?
I can't speak English.