I listened to my lecturer and was convinced that not even she understood her own lecture... You clarified a lot for me in only three minutes...
More common than you'd like to think; I'm convinced my lecturers don't understand it either.
Your videos are absolutely amazing. Please keep making these, eventually the serious view numbers will come!
Please don't stop making videos, you are helping a lot of people
This video is perfectly clear. I learned SVM in class but was confused by the lecture, and it is much clearer now.👍
I went through the class video for 1 hour and didn't understand a thing.. thank god you taught me in 3 min.. you are a legend bro
Fantastic job at making a not so simple concept easily understandable, the video was perfect, nothing can be removed from it.
Incredible video, no messing around with long introductions, not patronising and easy to follow. It should be used as a guide for other people making educational videos!
thank you, i don't know why uni profs won't explain stuff this easily
Dude this video is so awesome. Teaching to perfection. Thank you for your service to humanity
seriously !!!! simplest explanation of Kernel in SVM ever seen , just wow
thank you so so much bro for the hard work you are doing to make such great videos ;*
Wow! I can't believe I didn't find this channel until now- your videos are amazing! As a creator myself, I understand how much work must go into this, so HUGE props!! Liked and subscribed 💛
Please keep making this content. It is so intuitive. I don't understand why channels like this don't grow while shitty content grows exponentially.
The best and most concise tutorial on Kernel tricks and SVM.
Absolutely incredible explanation 👏
Man, I just checked and you haven't uploaded any new videos in 2 years!
Hope you're doing well and come back with more of these amazing videos
Thanks a lot! I struggled to understand the difference between conventional dimension lifting and the "trick". Now it's crystal clear! Great explanation.
Wonderful!!
I am Korean so I am not good at English, but your teaching is very clear and easy to understand. Thank you teacher!
You saved me for my Data mining exam tomorrow
🙏
Understood fully, thank you. The code sample is a nice bonus. Awesome explanation.
Fantastic video, but I think you should mention at the end why the "kernel trick" isn't practical with lots of data (i.e. why deep learning is used much more than the "kernel trick" in this age of big data): given a kernel, you need to store the entire matrix of kernel evaluations between every pair of data points. There are ways to mitigate this a bit (for example Random Fourier Features and the Nyström method), but this is still a huge issue that no one seems to have figured out how to fix.
On the other hand, if you have a small amount of complicated data then the kernel trick is very useful! For example a medical researcher might only have access to the horrifically complicated genomic data of patients at their hospital.
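To make the mitigation idea concrete, here is a rough numpy sketch of Random Fourier Features for the RBF kernel; the sizes, gamma value, and variable names below are just made up for illustration, not anything from the video:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, D = 500, 10, 0.5, 2000   # points, input dim, kernel width, feature dim
X = rng.normal(size=(n, d))

# Exact RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2): needs O(n^2) memory.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-gamma * sq_dists)

# Random Fourier Features: w ~ N(0, 2*gamma*I), b ~ Uniform[0, 2*pi],
# z(x) = sqrt(2/D) * cos(W^T x + b), so that z(x) . z(y) approximates k(x, y).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

K_approx = Z @ Z.T                       # only the n x D feature matrix must be stored
print(np.abs(K_exact - K_approx).max())  # the approximation error shrinks as D grows
```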
so amazingly simple and clear explanation, thank you so much !
You are amazing bro, come back and make more stuff like this
Great Visuals and explanation. Got it in One go. Thanks
Amazing videos by explaining different concepts in simple words.
Please have long videos as well..
excellent video, short but informative!
Wow...this is for free.
Amazing visuals!
just subscribed after watching only this video. Hoping to find more good content like this on your channel
Simply AMAZING! Thanks a lot!!!
the simplest lecture on this to exist
Wow, I am sharing this everywhere bro. Fantastic videos, we will grow together !!
thank you so much, you have nailed it. The kernel is crystal clear after watching your video, thanks again
What software do you use to make these videos? and why have you stopped making videos!? and why did you start?
Thank you for this clear explanation.
Holy sh*, my lecturer recommended us a reading but I was lost in the math terms and formulas, and I didn't even understand what the purpose of the kernel was. Incredible what you did in just three mins, thank you!
super smooth explanation. Thanks!
At 2:11 the kernel and the transformation function shown do not match. The transformation function is missing the element 1 as its last component and needs a scaling factor of sqrt(2) in front of the first 3 elements.
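For anyone who wants to verify this, here is a quick numerical check, assuming the kernel in question is the degree-2 polynomial kernel k(x, y) = (1 + x^T y)^2 on 2D inputs (the exact frame from the video isn't reproduced here):

```python
import numpy as np

def kernel(x, y):
    return (1 + x @ y) ** 2

def feature_map(x):
    # sqrt(2) on the first three components, constant 1 as the last component
    return np.array([np.sqrt(2) * x[0], np.sqrt(2) * x[1],
                     np.sqrt(2) * x[0] * x[1], x[0] ** 2, x[1] ** 2, 1.0])

x, y = np.array([0.3, -1.2]), np.array([2.0, 0.7])
print(kernel(x, y), feature_map(x) @ feature_map(y))  # the two values agree
```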
👍
Keep doing the Good work 👏
Boy am I feeling lucky that I watched this video
Such an amazing explanation!
i swear that's better than a 2h lecture
to the point, concise, easy to understand, and even with a code sample
thanks!
Great..... Great presentation... This is what I mean when I say use visual graphics to explain concepts
i have never understood kernel trick in SVM better.
Very good video! Nice visualization! :)
for the given time, it is a good explanation
Gonna try the Kernel Trick!
why did you have the "1 +" term in your polynomial kernel?
Wow, very nicely explained
Quick question: How do we choose the gamma parameter in the RBF kernel at 3:00? By, say, cross validation?
The feature map f(x) shown at 2:11 is incorrect, please double check.
Hmmmm!
Grant Sanderson is getting a serious contender right here
Thanks, Sir. Your explanation is incredibly amazing
So nice of you! You are most welcome!
@Visually Explained It helps so much. I'm waiting for the next video
Amazing Man.. 😍
Just Awesome...
I wish I could like this video twice.
Bravo! Great lecture! How the hell do you do these interactive function animations?
Damn, liked and subscribed! :) Thanks!
incredible explanation!
Thank you so much, excellent content.
Wow! Great video, thanks :)
i love it ! thanks for the explanation
Can you make a video on the Unique Games Conjecture / unique label cover? That would be very helpful
Amazing explanation
Super good explanation
fantastic contribution
that's so powerful to understand
That was amazing, thanks 😊
Are you computing the kernel value for each pair of points?
Thank you sir
Loved it
i hope u come back , i really like this content , pls
I really like this video. Thanks!~
Really great video, thanks for sharing! Out of interest, what do you use to produce your videos?
Thank you!! I use Blender3D and manim
awesome video, please make a video explaining the K-Nearest Neighbors algorithm also
Amazing video! How do you animate your videos?
Thanks! I use manim and Blender3D
Thank you!! Nice video :)
Great Video Sir
kindly continue your content
What did you use for visualization ?
Blender, manim library, and after effect
you are amazing, you saved me !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
YAYY!
Holy shit these are good videos!
did you use Unity to produce this video?
he said blender
How do I use this when my decision boundary needs to be a spiral?
I'm confused, how do I display the hyperplane with a polynomial kernel? Help me please?
Thanks! How do you know which gamma to use?
What kind of software did you use to create such beautiful illustrations?
I have added a list to the video description
@@VisuallyExplained appreciated!
Fantastic video
Just a fantastic explanation. I was wondering how much time it takes to make such a high quality video, and what software he is using to do it? Does anyone know?
Fantastic video, thankyou
Thank you too!
very informative video!!
really a superb video! Thank you.
So nice of you :-)
Awesome! Thanks a ton!
amazing !! subscribed
As I see it, kernel-based regression is a type of symbolic regression
Why did you stop making these videos?
Great video! How does this differ from kernel ridge regression?
Kernel ridge regression is just that: regression using the kernel trick. Namely, instead of fitting a hyperplane of best fit directly, you use the kernel trick to implicitly and nonlinearly map the data into a high-dimensional (sometimes infinite-dimensional) space. But as with SVM, we don't need this map explicitly; we only need the kernel matrix at the data points to run the algorithm in practice.
"Ridge" just refers to adding an l2 penalty to avoid overfitting. "Lasso" refers to an l1 penalty, and I think in practice people even use l1+l2 penalties.
In which program can you do this?
Does the label array have to be binary?
Why isn’t the use of kernels considered overfitting?
You can control the flexibility of the kernel to avoid overfitting. For example, in the case of the radial basis function kernel you could use a lower value of gamma to avoid overfitting, as is shown in the video.
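In practice gamma is often tuned by cross-validation; here is a small scikit-learn sketch of that (the grid values and the toy dataset are arbitrary):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Large gamma tends to overfit, small gamma gives a smoother boundary;
# 5-fold cross-validation picks a reasonable value in between.
search = GridSearchCV(SVC(kernel="rbf"),
                      param_grid={"gamma": [0.01, 0.1, 1, 10, 100]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```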
Loved your video..can you Mentor me?
I really would have liked to know why you claim that we only need to compute inner products. Does it arise from the dual problem? If I remember correctly, that problem features such scalar products. And why is that better?
If you write the dual problem (e.g., page 4 of www.robots.ox.ac.uk/~az/lectures/ml/lect3.pdf) you can see indeed that it only depends on inner products of the training points. There is, however, a more "intuitive" way to see why SVM only cares about the inner products without actually writing the dual problem. If I give you the matrix X = (xi^T xj) of pairwise inner products of n points, you can recover the coordinates of the n points "up to rotation", for example by computing the matrix square root of X. In other words, all sets of n points that have the inner product matrix X are obtained by rotating the n points whose coordinates are the columns of sqrtm(X). Now, if you want to separate the n points with a hyperplane, your problem doesn't fundamentally change if all the points are rotated in some arbitrary way (because you can just rotate the separating hyperplane in the same way). So the knowledge of the matrix of inner products X is sufficient for SVM to do its job.
As to why that's helpful, let's say we have 100 points, and each point has a million features (which can easily happen if you "lift" the data). That's 100 million numbers you need to store. However, the matrix of inner products will only be 100x100, which is a huge saving!
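Here is a small numerical illustration of both points, scaled down so it actually runs (50 points with 2000 lifted features instead of 100 points with a million):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n, d = 50, 2000                    # scaled-down stand-in for 100 points x 1e6 features
P = rng.normal(size=(n, d))        # n points, each with d "lifted" features

G = P @ P.T                        # n x n matrix of pairwise inner products
Q = np.real(sqrtm(G))              # rows of Q: coordinates recovered "up to rotation"

# Q reproduces the same pairwise inner products, so a hyperplane separating the
# rows of Q corresponds (after a rotation) to one separating the original points.
print(np.allclose(Q @ Q.T, G, atol=1e-6))
print(P.size, "numbers for the lifted points vs", G.size, "for the inner-product matrix")
```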
@@VisuallyExplained this is my other account. The first one is for work, I didn't realize I used that one.
Thanks for taking the time to answer :). Interesting, and not quite trivial, that the matrix contains all the relevant information.
I have already read some more sources on it now. Since you seem to understand the math involved, maybe you can help me with another question. The question is why exactly we use the kernel trick instead of simply using a usual transformation into another vector space and then using a usual linear SVM. This seems like it would work, so there has to be a motivation for the kernel trick. I have already read that it has better performance, but even the book "Hands-On Machine Learning" only says that it "makes the whole process more efficient", which says practically nothing about the motivation. One thing one can easily notice is that since the dual problem optimizes only for the Lagrange multipliers, we have to calculate the kernel only once before training. This also seems to be the reason why the kernel trick only works for the dual problem. But I was wondering whether this is the whole motivation or if there is some more magic that I missed here?
@@crush3dices There are basically two main reasons. The first, which you already alluded to, is performance: it's more efficient to compute k(x,x') than the transformation f(x) if f is very high-dimensional (or worse, infinite-dimensional). The second reason is practical: sometimes it is easier to choose a kernel than a transformation f.
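A rough illustration of the first reason, using a degree-5 polynomial kernel on 100-dimensional inputs as an example (the numbers are arbitrary):

```python
from math import comb
import numpy as np

n_features, degree = 100, 5
# Number of monomials of degree <= 5 in 100 variables, i.e. the size of the explicit lift:
print(comb(n_features + degree, degree))     # ~96.5 million features per point

x, y = np.random.default_rng(0).normal(size=(2, n_features))
k = (1 + x @ y) ** degree                    # the kernel value needs only one dot product
print(k)
```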
@@VisuallyExplained alright thanks.
Still don't really understand, but I'm closer, thanks!