SVM (The Math) : Data Science Concepts

  • Published: 26 Sep 2024
  • Let's get mathematical.
    SVM Intuition Video: • Support Vector Machine...

Comments • 198

  • @stanlukash33
    @stanlukash33 3 years ago +200

    This guy is underrated for real. YouTube - throw him into recommendations.

    • @jmspiers
      @jmspiers 3 years ago +6

      I know... I recommend him all the time on Reddit.

    • @backstroke0810
      @backstroke0810 2 years ago +1

      True! He deserves way more subscribers. He should prepare a booklet like statquest did, but of his own. Would definitely buy it!

    • @aravind_selvam
      @aravind_selvam 2 years ago +1

      True!!

  • @supersql8406
    @supersql8406 3 years ago +65

    This guy is super smart and he takes sophisticated concepts and explains them in a way that's digestible without mocking the theory! What a great teacher!

  • @ragyakaul6027
    @ragyakaul6027 2 years ago +31

    I can't explain how grateful I am for your channel! I am doing an introductory machine learning course at Uni and it's extremely challenging, as it's full of complex concepts and the basics aren't explored thoroughly. Many videos I came across on YouTube were overly simplified and only helped me very briefly to make sense of my course. However, your videos offer the perfect balance: you explore the complex maths and don't oversimplify it, but do so in a way that's easy to understand. I read through this concept several times before watching your video, but only now do I feel as if I TRULY understand it. I HIGHLY appreciate the work you do and look forward to supporting your channel.

  • @shusrutorishik8159
    @shusrutorishik8159 3 years ago +15

    This has been simultaneously the simplest, most detailed and yet most concise explanation of this topic I've come across so far. Much appreciated! I hope you keep making awesome content!

    • @ritvikmath
      @ritvikmath  3 years ago +2

      Glad it was helpful!

    • @friktogurg9242
      @friktogurg9242 1 month ago

      @@ritvikmath Is it possible to find w and b if you are not explicitly given constraints?
      Is it possible to find the values of w and b without explicitly solving the optimization problem?
      Can both be done through geometric intuition?

  • @sejmou
    @sejmou 1 year ago +5

    In case you're also having trouble figuring out how we arrive at k=1/||w|| from k * (w*w/||w||) = 1:
    remember that the dot product of any vector with itself is equal to its squared magnitude. Then, w*w can also be expressed as ||w||^2.
    ||w||^2/||w|| simplifies to just ||w||. Finally bring ||w|| to the other side by dividing the whole equation by ||w||, and you're done :)
    if you also have trouble understanding why exactly the dot product of any vector with itself is equal to its squared magnitude, it also helps to know that the magnitude of a vector is the square root of the sum of squares of its components, and that sqrt(x) * sqrt(x) = x
    I hope that somehow makes sense if you're struggling, it surely took me a while to get that lol

    • @FootballIsLife00
      @FootballIsLife00 9 months ago

      I almost forgot this rule, thank you brother for saving my day

    • @mdrashadalhasanrony8694
      @mdrashadalhasanrony8694 2 months ago

      yes. w*w = ||w||*||w||*cos 0 = (||w||)^2
      the angle is 0 degrees because we are dotting the same vector with itself
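
A quick numeric check of the identity used in this thread, namely that the dot product of a vector with itself equals its squared magnitude, which is what collapses k * (w.w)/||w|| = 1 down to k = 1/||w||. A minimal sketch, assuming NumPy; the example vector is made up for illustration.

```python
import numpy as np

w = np.array([3.0, 4.0])  # an arbitrary weight vector

# w.w equals ||w||^2
print(np.dot(w, w))            # 25.0
print(np.linalg.norm(w) ** 2)  # 25.0

# hence k = 1/||w|| satisfies k * (w.w)/||w|| = 1
k = 1 / np.linalg.norm(w)
print(k * np.dot(w, w) / np.linalg.norm(w))  # 1.0
```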

  • @KARINEMOOSE
    @KARINEMOOSE 2 years ago +2

    I'm a PhD student studying data mining and I just wanted to commend you for this SUPERB explanation. I can't thank you enough for explaining this so clearly. Keep up the excellent work!!

  • @vedantpuranik8619
    @vedantpuranik8619 2 years ago +1

    This is the best and most comprehensible math video on hard margin SVM I have seen to date!

  • @tollesch_tieries
    @tollesch_tieries 1 month ago

    THE BEST EXPLANATION of SVM on YouTube! And the whole internet! THANK YOU!

  • @FPrimeHD1618
    @FPrimeHD1618 1 year ago +1

    Just to add onto all the love, I'm a data scientist in marketing and you are my number one channel for reviewing concepts. You are a very talented individual!

  • @velevki
    @velevki 2 years ago +1

    You answered all the questions I had in mind without me even asking them to you. This was an amazing walkthrough. Thank you!

  • @honeyBadger582
    @honeyBadger582 3 years ago +9

    That's what I've been waiting for! Thanks a lot. Great video!

  • @srivatsa1193
    @srivatsa1193 3 years ago +4

    This is the best and the most intuitive explanation for SVM. It is really hard for me to actually read research papers and understand what story each line of the equation is telling. But you made it soo intuitive. Thanks a ton! Please Please make more videos like this

  • @lakhanpal1987
    @lakhanpal1987 2 years ago +1

    Great video on SVM. Simple to understand.

  • @suparnaprasad8187
    @suparnaprasad8187 8 days ago

    The best video I've watched on SVMs! Thank you so much!!

  • @pavelrozsypal8956
    @pavelrozsypal8956 2 years ago

    Another great video on SVM. As a mathematician I do appreciate your succinct yet accurate exposition not playing around with irrelevant details.

  • @stephonhenry-rerrie3997
    @stephonhenry-rerrie3997 2 years ago

    I think this might be top 5 explanations of SVM mathematics all-time. Very well done

  • @polarbear986
    @polarbear986 2 years ago

    I finally get SVM after watching a lot of tutorials on YouTube. Clever explanation. Thank you

  • @TheWhyNotSeries
    @TheWhyNotSeries 3 years ago +7

    At 5:10, I don't get how you obtain K from the last simplification. Can you/someone please explain?
    Btw beautiful video!

    • @ritvikmath
      @ritvikmath  3 years ago +10

      thanks! I did indeed kind of skip a step. The missing step is that the dot product of a vector with itself is the square of the magnitude of the vector, i.e. w · w = ||w||^2

    • @TheWhyNotSeries
      @TheWhyNotSeries 3 years ago +1

      @@ritvikmath right, thank you!!
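
Writing the skipped step at 5:10 out in full, using the fact from the reply above (a short sketch in the same notation the video uses):

    k * (w . w) / ||w|| = 1
    =>  k * ||w||^2 / ||w|| = 1      (since w . w = ||w||^2)
    =>  k * ||w|| = 1
    =>  k = 1 / ||w||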

  • @Shaan11s
    @Shaan11s 6 months ago

    your videos are what allowed me to take a spring break vacation bro, saved me so much time thank you

  • @usmanabbas7
    @usmanabbas7 2 years ago +1

    You and statquest are the perfect combination :) Thanks for all of your hard work.

  • @lisaxu1848
    @lisaxu1848 2 years ago

    studying my masters in data science and this is a brilliant easy to understand explanation tying graphical and mathematical concepts - thank you!

  • @chimetone
    @chimetone 6 months ago

    Best high-level explanation of SVMs out there, huge thanks

    • @ritvikmath
      @ritvikmath  6 months ago

      Glad it was helpful!

  • @yangwang9688
    @yangwang9688 3 years ago +1

    Very easy to follow the concept! Thanks for this wonderful video! Looking forward to seeing next video!

  • @gdivadnosdivad6185
    @gdivadnosdivad6185 10 months ago

    I love your channel. You explain difficult concepts in a way that even my dear grandmother, who never went to college, could understand. Excellent job, sir! You should become a professor one day. You would be good.

  • @prathamghavri
    @prathamghavri 6 months ago

    Thanks man, great explanation. Was trying to understand the math for 2 days, finally got it

  • @mindyquan3141
    @mindyquan3141 2 years ago

    So simple, so clear!!! Wish all teachers were like this!

  • @clifftondouangdara6249
    @clifftondouangdara6249 2 years ago

    Thank you so much for this video! I am learning about SVM now and your tutorial perfectly breaks it down for me!

  • @nikkatalnikov
    @nikkatalnikov 3 years ago

    Great video as usual!
    A possible side note - I find the 3D picture even more intuitive.
    Add a z-direction, which can basically be shrunk to [-1; 1], as the class prediction dimension, while x1 and x2 are the feature dimensions.
    Hence, the margin hyperplane "sits" exactly on (x1; x2; 0).
    This is also helpful for further explanation of what SVM kernels are and why a kernel alters the norms (e.g. distances) between data points, but not the data points themselves.

  • @germinchan
    @germinchan 2 years ago +1

    This is very clearly defined. Thank you.
    But could someone explain to me what w is? How can I visualize it and calculate it?

  • @borisshpilyuck3560
    @borisshpilyuck3560 4 months ago +1

    Great video! Why can we assume that the right-hand side of wx - b in those three lines is 1, 0, -1?

  • @zz-9463
    @zz-9463 3 years ago +1

    very informative and helpful video to help understand the SVM! Thanks for such a great video! You deserve more subscribers

  • @maheshsonawane8737
    @maheshsonawane8737 1 year ago

    🌟Magnificent🌟 I actually understood this loss function by watching it once. Very nice explanation of the math. I saw a lot of other lectures, but you can't understand the math without graphical visualization.

  • @houyao2147
    @houyao2147 3 years ago

    It's so easy to understand this math stuff! Best explanation ever in such a short video.

  • @rndtnt
    @rndtnt 2 years ago +1

    Hi, how exactly did you choose 1 and -1, the values of wx - b where x is a support vector? wx - b = 0 for x on the separating line makes sense, however. Could it have other values?

  • @more-uv4nl
    @more-uv4nl 5 months ago

    this guy explained what my professors couldn't explain in 2 hours 😂😂😂

  • @nishanttailor4786
    @nishanttailor4786 2 years ago

    Just Amazing Clarity of Topics!!

  • @jingzhouzhao8609
    @jingzhouzhao8609 4 months ago

    thank you for your genius explanation. At 5:11, before getting the value of k, the equation k * (w * w) / (magnitude of w) = 1 contains w * w, so why doesn't the resulting k contain w in the end?

  • @techienomadiso8970
    @techienomadiso8970 1 year ago

    This is seriously good stuff. I have not seen a better SVM explanation

  • @ananya___1625
    @ananya___1625 2 years ago +2

    Awesome explanation
    I've a doubt (might be silly): how did people come up with W.X-b=1 and W.X-b=-1? Does the 1, -1 in these equations tell us something? For some reason, I'm unable to get the intuition of 1, -1 in the above equations (although I understood that they are parallel lines).
    Someone pls help me

    • @pauledam2174
      @pauledam2174 11 months ago

      I have the same question.

    • @mohamedahmedfathy84
      @mohamedahmedfathy84 1 month ago

      Maybe it's an assumption, so that the margin can be expressed via the magnitude of w and easily interpreted? I don't know really

  • @salzshady8794
    @salzshady8794 3 years ago +9

    Could you do the math behind each machine learning algorithm? Also, would you be doing Neural Networks in the future?

    • @marthalanaveen
      @marthalanaveen 3 years ago

      along with the assumptions of supervised and un-supervised ML algorithms that deal specifically with structured data.

    • @ritvikmath
      @ritvikmath  3 years ago +3

      Yup neural nets are coming up

    • @jjabrahamzjjabrhamaz1568
      @jjabrahamzjjabrhamaz1568 3 years ago

      @@ritvikmath CNN's and Super Resolution PLEASE PLEASE PLEASE

  • @WassupCarlton
    @WassupCarlton 5 months ago

    This is giving "Jacked Kal Penn clearly explains spicy math" and I am HERE for it

  • @acidaly
    @acidaly 2 years ago +3

    The equations for the points on the margins are:
    w.x - b = 1
    w.x - b = -1
    That means we have fixed our margin to "2" (from -1 to +1). But our problem is to maximize the margin, so shouldn't we keep it a variable? Like:
    w.x - b = +r
    w.x - b = -r
    where maximizing r is our goal?

    • @davud7525
      @davud7525 1 year ago

      Have you figured it out?
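
A short sketch of why setting the right-hand sides to +1 and -1 loses no generality (this normalization argument is implicit in the setup rather than stated in the video): any pair (w, b) can be rescaled, so the constant r gets absorbed into w and b.

    w.x - b = +r  and  w.x - b = -r
    <=>  (w/r).x - (b/r) = +1  and  (w/r).x - (b/r) = -1

The distance between the two margin planes is 2r/||w|| = 2/||w/r||, so after renaming w/r -> w and b/r -> b, "maximize r" becomes exactly "maximize 2/||w||", i.e. minimize ||w||, with the right-hand sides fixed at +1 and -1.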

  • @badermuteb4552
    @badermuteb4552 3 years ago +2

    Thank you so much. This is what I have been looking for for such a long time. Would you please do the math behind other ML and DL algorithms?

  • @pedrocolangelo5844
    @pedrocolangelo5844 1 year ago

    Once again, ritvikmath being a lifesaver for me. If I understand the underlying math behind these concepts, it is because of him

  • @himanshu1056
    @himanshu1056 3 years ago

    Best video on large margin classifiers 👍

  • @lemongrass3628
    @lemongrass3628 1 year ago

    You are an amazing elucidator👍

  • @emid6811
    @emid6811 2 years ago +1

    Such a clear explanation! Thank you!!!

  • @madshyom6257
    @madshyom6257 2 years ago

    Bro, you're a superhero

  • @fengjeremy7878
    @fengjeremy7878 2 years ago +1

    Hi ritvik! I wonder what is the geometric intuition of the vector w? We want to minimize ||w||, but what does w look like on the graph?

  • @ifyifemanima3972
    @ifyifemanima3972 1 year ago

    Thank you for this video. Thanks for simplifying SVM.

  • @mykhailoseniutovych6099
    @mykhailoseniutovych6099 4 months ago

    Great video, with an easy-to-follow explanation. However, you formulated the optimization problem that needs to be solved by the end of the video. The most interesting question now is how to actually solve this optimization problem. Can you give some directions on how this problem is actually solved?

  • @sukritgarg3175
    @sukritgarg3175 5 months ago

    Holy shit what a banger of a video this is

  • @nickmillican22
    @nickmillican22 3 years ago +7

    Question on the notation.
    The image shows that the vector between the central line and the decision line is w. So I think that w is the length of the decision boundary. But then we go on to show that the length of the decision boundary is k = 1/||w||. So I'm not clear on what w (or k, for that matter) are actually representing.

    • @WassupCarlton
      @WassupCarlton 5 months ago

      I too expected k to equal the length of that vector w :-/
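
A short note that may untangle the two symbols (a sketch using only the setup from the video): w is a normal vector to the decision boundary, and its length is arbitrary; k = 1/||w|| is the distance from the decision boundary to either margin plane, i.e. half the margin width.

    For any x1, x2 on the boundary:  w.x1 - b = 0 and w.x2 - b = 0  =>  w.(x1 - x2) = 0,
    so w is perpendicular to the boundary, whatever its length.
    Starting from a boundary point x0 and moving a distance k along the unit normal w/||w||
    lands on the margin plane:  w.(x0 + k*w/||w||) - b = k*||w|| = 1  =>  k = 1/||w||.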

  • @ShakrinJahanMozumder
    @ShakrinJahanMozumder 4 days ago

    Great work! Just one confusion: why minus b? Your response would be highly appreciated!

  • @SreehariNarasipur
    @SreehariNarasipur 1 year ago

    Excellent explanation Ritvik

  • @sorrefly
    @sorrefly 3 years ago +1

    I'm not sure, but I think you forgot to say that in order to have the margins at +-1 you should scale w and b by a multiplicative constant. Otherwise I can't explain how we could have a distance of 1 from the middle.
    The rest of the video is awesome, thank you very much :)

  • @asharnk
    @asharnk 1 year ago

    What an amazing video bro. Keep going.

  • @TheCsePower
    @TheCsePower 2 years ago

    You should mention that your W is an arbitrary direction vector of the hyperplane. (it is not the same size as the margin)

  • @jaibhambra
    @jaibhambra 2 years ago

    Absolutely amazing channel! You're a great teacher

  • @junderfitting8717
    @junderfitting8717 2 years ago

    Terrific tutorial, saved me.
    At 5:12, to simplify k*(W*W)/||w|| = 1, W means the vector w.
    W*W = ||w||*||w||*cos 0; cos 0 = 1; thus k*(||w||*||w||*1)/||w|| = 1, and k = 1/||w||.
    The vector x is actually a point (x0, x1, ..., xn) that lies on the Decision Boundary, i.e. the vector x starts at the origin and ends at the D.B.

    • @ketankumar5689
      @ketankumar5689 8 months ago

      Why are we multiplying by the unit vector of w, given that w is normal to the plane? Is the vector x also normal to the plane along the direction of w? But x is a point on that plane, in which case k would be 0. I am confused. Can you please simplify?

  • @dcodsp_
    @dcodsp_ 1 year ago

    Thanks for such a brilliant explanation, really appreciate your work!!

  • @TheOilDoctor
    @TheOilDoctor 11 months ago

    great, concise explanation !

  • @joyc5784
    @joyc5784 2 years ago +1

    Other references use the plus (+) sign, w x + b = 0. Why was this changed to a minus sign in your example, w x - b = 0, or wx - b > 1? Hope you could answer. Thanks

  • @akshaypai2096
    @akshaypai2096 3 years ago +1

    Can you please do videos on normal to a plane, distance of a point from a plane and other basic aspects of linear algebra...
    Big fan and an early subscriber🙏🏻keep growing!

    • @ritvikmath
      @ritvikmath  3 years ago +1

      That's a good idea; I've been thinking about next videos, and these linear algebra basics would likely be helpful in understanding the more difficult concepts to come. Thanks for the input!

    • @akshaypai2096
      @akshaypai2096 3 years ago

      @@ritvikmath I'm a big fan of your content since I saw your videos on time series AR and MAs... now I'm going through the math behind ML, but given I have a business undergrad degree, I don't have the intuition behind a lot of very basic stuff, hence a video series on those would be a great help for people like me 👍🏻 Always happy to help

  • @zhiyuzhang7096
    @zhiyuzhang7096 8 months ago

    bro is a savior

  • @ht2239
    @ht2239 3 years ago

    You explained this topic really well and helped me a lot! Great work!

  • @Jayanth_mohan
    @Jayanth_mohan 2 years ago

    This really helped me learn the math of svm thanks !!

  • @AchrafMessaoudi-d3o
    @AchrafMessaoudi-d3o 8 months ago +1

    you are my savior

  • @Snaqex
    @Snaqex 8 months ago

    You're so unbelievably good at explaining :)

  • @godse54
    @godse54 3 years ago +1

    Pls also make one for svm regression.. you are amazing

  • @BlueDopamine
    @BlueDopamine 2 years ago

    I am very happy that I found your YT channel. Awesome videos! I was unable to understand SVM until now!!!!

  • @SESHUNITR
    @SESHUNITR 2 years ago +1

    very informative and intuitive

  • @kanishksoman7830
    @kanishksoman7830 1 year ago

    Hi Ritvik, you are a great teacher of stats, calculus and ML/DL!
    I have one question regarding the equations. Why is the decision boundary equation W.X - b = 0? Shouldn't it be W.X + b = 0? I know the derivation and the procedure to find the maximal margin are not affected, but I don't understand -b. Please let me know if the sign is inconsequential. If it is, why is it? Thanks!

  • @maujmishra4098
    @maujmishra4098 3 years ago +2

    Great explanation!
    I just want to know how the vector 'w' is perpendicular to the plane?

  • @learn5081
    @learn5081 3 years ago

    very helpful! I always wanted to learn the math behind the model! thanks!

  • @wildbear7877
    @wildbear7877 1 year ago

    You explained this topic perfectly! Amazing!

  • @debirath4916
    @debirath4916 8 months ago

    It is a great video for understanding SVM,
    but for the hard margin equation W * X + B >= 1, is it + or -? In the video we are saying it is -.

  • @Cobyboss12345
    @Cobyboss12345 1 year ago

    you are the smartest person I know

  • @TheCsePower
    @TheCsePower 2 years ago

    Great video! I found your notation for x to be quite confusing. I think the small x should be x11, x12, x13 to x1p. Say GPA is xi1 and MCAT is xi2. Then the student data for these two features will be: student 1 (x11, x12), student 2 (x21, x22), student 3 (x31, x32)

  • @yashshah4172
    @yashshah4172 3 years ago +1

    Hey Ritvik, nice video, can you please cover the kernelization part too?

  • @anishmohan7813
    @anishmohan7813 1 year ago

    Thanks for this wonderful video.
    I understand that the equation of the blue dotted line (plane) is W.X+b = 0.
    But how can we decide the other two lines? I mean, how can those be W.X+b = +1 and W.X+b = -1?
    And if they are, then the width is 2, right? How can we maximize it, it is fixed, isn't it?
    I know I am talking nonsense :) I don't have anyone else to ask this :)
    Thanks in advance!

  • @NiladriBhattacharjya
    @NiladriBhattacharjya 1 year ago

    Amazing explanation!

  • @logicverse
    @logicverse 2 years ago +1

    It is unclear how you derived the equations of the planes. For instance, why is it w.x-b=1 and not w.x-b=2?

  • @fatriantobong
    @fatriantobong 7 months ago

    maybe the question is, what algorithm does SVM use to look for the weights or coefficients of the hyperplane?

  • @maurosobreira8695
    @maurosobreira8695 2 years ago

    Amazing teaching skills - Thanks a lot!

  • @naengmyeonkulukulu
    @naengmyeonkulukulu 1 year ago +3

    Hi all, at 5:14, how does he get from k (W.W/|| W ||) =1 to k = 1/|| W ||?
    Appreciate if anyone can enlighten me

    • @raulfernandez9370
      @raulfernandez9370 7 months ago

      || W || = [W.W]^{1/2} so, square everything to get rid of the square root in the denominator and there you have it.

  • @akashnayak6144
    @akashnayak6144 2 years ago +2

    Loved it!

  • @akwagaiusakwajunior2903
    @akwagaiusakwajunior2903 2 years ago

    How will the algorithm classify if an arbitrary observation lies within the hyperplane?

  • @pradeepsvs
    @pradeepsvs 1 year ago

    @ritvikmath, why is the intercept (b) negative? The equation of the line/plane/hyperplane should be w1x1 + w2x2 + w3x3 + b = 0, i.e. wx + b = 0 should be the line equation, shouldn't it?

    • @ketankumar5689
      @ketankumar5689 8 months ago

      Were you able to understand it? I am still confused about that negative b.
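
On the recurring "why minus b" question in this thread and a few above, a brief sketch of why the sign is only a convention: writing the plane as w.x - b = 0 or as w.x + b' = 0 describes exactly the same hyperplane, with b' = -b.

    w.x - b = 0  <=>  w.x + b' = 0,  where b' = -b

Only the sign of the learned intercept flips; the decision boundary, the margin planes, and the width 2/||w|| are unchanged, which is why the derivation works either way.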

  • @mensahjacob3453
    @mensahjacob3453 3 years ago

    Thank you, sir. You really simplified the concept. I have subscribed already and am waiting patiently for more videos 😊

  • @YonatanDan-z3m
    @YonatanDan-z3m 1 year ago

    phenomenal

  • @trishulcurtis1810
    @trishulcurtis1810 2 years ago

    Great explanation!

  • @robfurlong8868
    @robfurlong8868 9 months ago

    @ritvikmath - Thanks for this great explanation. I have noticed other material online advises the equation for the hyperplane is w.x+b=0 rather than w.x-b=0. Can you confirm which is accurate?

  • @superbatman1462
    @superbatman1462 3 years ago

    Easily Explained 👍,
    Can you also explain how SVM works with respect to regression problems?

  • @fengjeremy7878
    @fengjeremy7878 2 years ago

    Thank you! I am wondering why we use "+1 and -1" instead of "+1 and 0" to classify these two areas?

  • @Pazurrr1501
    @Pazurrr1501 2 years ago

    BRILLIANT!

  • @shubhamguptamusic
    @shubhamguptamusic 3 years ago +1

    woww what an explanation..........great

  • @bhuvaneshkumarsrivastava906
    @bhuvaneshkumarsrivastava906 3 years ago

    Eagerly waiting for your video on SVM Soft margin :D

  • @jaisheel008
    @jaisheel008 3 years ago +2

    How do I choose the values for w vector and b ??

    • @andreykol13
      @andreykol13 3 years ago +2

      you might want to search 'lagrange multipliers' for solving this problem
      and maybe this will also help: web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf

    • @jaisheel008
      @jaisheel008 3 years ago +1

      Thanks for your inputs Andrey !!
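
To make the reply above concrete: rather than solving the dual with Lagrange multipliers by hand, one can let an off-the-shelf solver return w and b. A minimal sketch assuming scikit-learn and NumPy; the toy data and the large-C approximation of the hard margin are illustrative choices, not from the video.

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable data (made-up values)
X = np.array([[2.0, 2.0], [3.0, 3.0], [3.5, 2.5],   # class +1
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.0]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])

# A very large C makes the soft-margin problem behave like the hard margin in the video
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]          # weight vector w
b = -clf.intercept_[0]    # sklearn's decision function is w.x + intercept; the video writes w.x - b

print("w =", w, "b =", b, "margin width =", 2 / np.linalg.norm(w))
# Support vectors should satisfy w.x - b close to +1 or -1
print(X[clf.support_] @ w - b)
```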

  • @Max-my6rk
    @Max-my6rk 3 years ago

    Smart! This is the easiest way to come up with the margin when given theta (or weight)... gosh..