Lecture 16 - Radial Basis Functions

  • Published: 15 Jan 2025

Comments • 47

  • @pyro226
    @pyro226 5 years ago +17

    Just wanted to say, thank you so much for this series. It's really good.

  • @Rbtamaki
    @Rbtamaki 12 years ago +3

    Good lecture, I'm not from an English speaking country, but your spelling was really clear and, most importantly, the lecture itself was linear, with a clear reasoning, had a good pace and was not confusing at all. Thank you very much.

  • @frankd1156
    @frankd1156 4 years ago +4

    Wow, wow... enlightenment! Every detail explained, and it makes sense. This is pure understanding.

  • @AndyLee-xq8wq
    @AndyLee-xq8wq 1 year ago

    wow wow wow! Lucky to have this lecture!

  • @movax20h
    @movax20h 8 years ago +2

    Around 13:28, one can easily choose an extremely high gamma, making the matrix close to diagonal, with 1s on the diagonal and vanishingly small elements off it. This makes the solution w_m = y_m for every m. Which is quite obvious: all elements in the sum are small, except for the one that is exp(0) == 1. If there are repeated points, there are probably some minor tricks for that too, maybe w_m = y_m / k, where k is the repetition count of a point.
    For other values of gamma, it is not obvious whether the solution always exists.
    In fact, I think you can select a different gamma for each dimension, essentially making it a diagonal form, or maybe even an arbitrary diagonalizable form (in some different linear basis), and it will still work.

    • @Bing.W
      @Bing.W 7 years ago +2

      This is exactly what the professor mentioned about the impact of the gamma value choice. With an extremely high value of gamma(s), you virtually generate a discrete function h(x) that maps h(xn) = yn. It has no smoothness, hence resulting in a terrible generalization problem.
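
The large-gamma effect described in this thread is easy to verify numerically. Below is a minimal numpy sketch (the data values are made up for illustration): with Φ[n, m] = exp(−γ(x_n − x_m)²), a huge γ drives every off-diagonal entry to zero, so solving Φw = y returns w_m = y_m.

```python
import numpy as np

# Made-up 1-D data set: N points x_n with target values y_n.
X = np.array([0.0, 0.3, 0.55, 0.8, 1.0])
y = np.array([1.0, -0.5, 0.25, 0.9, -1.0])

def gaussian_matrix(X, gamma):
    """Phi[n, m] = exp(-gamma * (x_n - x_m)^2), the exact-interpolation matrix."""
    diff = X[:, None] - X[None, :]
    return np.exp(-gamma * diff**2)

# Moderate gamma: the weights mix contributions from several points.
w_mid = np.linalg.solve(gaussian_matrix(X, 10.0), y)

# Extreme gamma: Phi is numerically the identity, so w_m = y_m.
w_big = np.linalg.solve(gaussian_matrix(X, 1e6), y)
print(np.allclose(w_big, y))  # → True
```

With γ = 10 the weights differ noticeably from y; with γ = 10⁶ the off-diagonal entries underflow to zero and the solution degenerates to the discrete look-up table w = y that the reply above describes.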

  • @HajjAyman
    @HajjAyman 12 years ago

    Loved the RBF model and Lloyd's algorithm! I even started to better understand SVM in this lecture after you've compared it with RBF and neural networks. Thanks.

  • @LowLifeGraphicsProgrammer
    @LowLifeGraphicsProgrammer 4 years ago +2

    Excellent explanation, thanks a lot!

  • @longluongtuanmb62s
    @longluongtuanmb62s 11 years ago +1

    Thank you very much, sir. Your lectures give me more interest in Machine Learning :)

  • @fradamor
    @fradamor 8 years ago +9

    Very good lecture...
    The book should also cover these topics. :-)
    Thank you for your work and for sharing it.

    • @hugosalescorrea6997
      @hugosalescorrea6997 6 years ago +1

      There are e-chapters for these subjects, as mentioned on the professor's webpage: work.caltech.edu/textbook.html

    • @tungo7941
      @tungo7941 4 years ago +1

      hi, I've just found the additional chapters. They're available here: amlbook.com/support.html

    • @fradamor
      @fradamor 4 years ago +1

      @@tungo7941 It seems that the last e-chapter uploaded is e-Chapter 9 - Learning Aides. Have you found another one? Thanks for your work.

    • @tungo7941
      @tungo7941 4 years ago

      @@fradamor No, I've just found these chapters; it seems that chapter 9 is the last one. :)

  • @brainstormingsharing1309
    @brainstormingsharing1309 4 years ago +1

    Absolutely well done and definitely keep it up!!! 👍👍👍👍👍

  • @m.hosseinrahimi5611
    @m.hosseinrahimi5611 5 years ago

    Best online material about RBF!

  • @lilianedeaquino6685
    @lilianedeaquino6685 2 years ago

    Thank you for this!

  • @sanador2826
    @sanador2826 2 years ago

    This rocked!

  • @ShoaibKhan
    @ShoaibKhan 11 years ago +3

    Thanks!
    Really helpful!

  • @supremachine
    @supremachine 2 years ago

    16:46 For the graphs regarding the gammas, what do the axes represent? My first thought was x and y, but how do we plot the part of the curve for which we do not have points, since we only have the points we are trying to fit the model to? Thanks in advance.

    • @llmstr
      @llmstr 19 days ago

      I think the y-axis is h(x) and the x-axis is x, from x_1 to x_N. Correct me if I'm wrong.
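
That reading of the axes can be made concrete: once the weights are solved from the N training points, h(x) = Σ_n w_n exp(−γ(x − x_n)²) is defined for every x, so the curve between data points is just h evaluated on a dense grid. A small numpy sketch (the data and γ are invented for illustration):

```python
import numpy as np

# Training points (x_n, y_n) that the RBF model fits (values made up).
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
gamma = 8.0

# Exact interpolation: solve Phi w = y, where Phi[n, m] = exp(-gamma (x_n - x_m)^2).
Phi = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
w = np.linalg.solve(Phi, y)

# h(x) = sum_n w_n exp(-gamma (x - x_n)^2) is defined for ANY x, so the curve
# is drawn by evaluating h on a dense grid -- x-axis is x, y-axis is h(x).
grid = np.linspace(0.0, 1.0, 201)
h = np.exp(-gamma * (grid[:, None] - X[None, :]) ** 2) @ w
```

Plotting `grid` against `h` gives the kind of curve shown at 16:46: the training points are the only places where h is pinned to the data, and γ controls how h behaves in between.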

  • @ahmedshalaby6976
    @ahmedshalaby6976 2 years ago

    Excellent explanation, thanks a lot :)

  • @nias2631
    @nias2631 8 years ago

    Outstanding lecture Professor!

  • @yunlongsong7618
    @yunlongsong7618 7 years ago

    This video is awesome. Really helpful

  • @andreluisal
    @andreluisal 3 years ago

    Excellent!!!!

  • @leiqin5756
    @leiqin5756 11 years ago

    Great Lecture! Thanks a lot!

  • @rockfordlines3547
    @rockfordlines3547 2 years ago

    fantastic

  • @zingg7203
    @zingg7203 5 years ago +1

    RBF 17:40

  • @deeplearning3077
    @deeplearning3077 6 years ago +1

    Thanks for this input :-)

  • @dc33333
    @dc33333 5 years ago

    Good speaker, so clear.

  • @LeFawkes
    @LeFawkes 10 years ago

    Amazing! Excellent!

  • @MoeketsiNdaba
    @MoeketsiNdaba 7 years ago

    This guy is good! Ottimo (excellent).

  • @tshepisomokoena1751
    @tshepisomokoena1751 10 years ago

    Wow, Thank you Sir!

  • @mmbadrm
    @mmbadrm 5 years ago

    Thanks a lot

  • @brainstormingsharing1309
    @brainstormingsharing1309 4 years ago +1

    👍👍👍👍👍👍👍👍👍

  • @scorch_d62
    @scorch_d62 8 years ago +1

    How could SVM be expanded to non-binary classification, and does it still perform as well on non-binary classification as binary classification? I know he said that they don't perform as well on real-valued functions, but does that refer to anything that isn't binary?

    • @tejaskumthekar4155
      @tejaskumthekar4155 7 years ago +1

      To my understanding, SVMs can definitely be expanded to non-binary, i.e. multi-class classification, and they still perform as well as they would on a binary classification problem.
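
One standard reduction is one-vs-rest: train one binary scorer per class and predict with the most confident one. Below is a self-contained numpy sketch of that reduction, using a least-squares linear scorer as a stand-in for the binary SVM purely to keep the example short (the data and function names are made up):

```python
import numpy as np

def ovr_fit(X, y, classes):
    """One-vs-rest reduction: one binary (+1/-1) scorer per class.
    A least-squares linear scorer stands in for the binary SVM here."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
    W = []
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)        # class c vs the rest
        w, *_ = np.linalg.lstsq(Xb, t, rcond=None)
        W.append(w)
    return np.array(W)

def ovr_predict(X, W, classes):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return classes[(Xb @ W.T).argmax(axis=1)]  # most confident scorer wins

# Three linearly separable point clouds, one per class.
X = np.array([[0, 0], [0.2, 0.1], [5, 0], [5.2, 0.1], [0, 5], [0.1, 5.2]])
y = np.array([0, 0, 1, 1, 2, 2])
classes = np.array([0, 1, 2])
W = ovr_fit(X, y, classes)
print(ovr_predict(X, W, classes))  # → [0 0 1 1 2 2]
```

One-vs-one (a binary machine per pair of classes, with a majority vote) is the other common reduction; both wrap the binary machine unchanged.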

  • @niamatkhan5563
    @niamatkhan5563 1 year ago

    Center points: 0, 0.5, 1
    Evaluation points: 0, 0.5, 0.6, 0.7, 0.8, 1
    Shape parameter: 3
    Function: e^(sin πx)
    Solve this using the MQ RBF interpolation technique.
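
For what it's worth, this exercise can be sketched in a few lines of numpy, assuming the common multiquadric convention φ(r) = √(r² + c²) with c the shape parameter (conventions for the shape parameter vary, so treat this as one possible reading of the problem):

```python
import numpy as np

# Exercise data from the comment above.
centers = np.array([0.0, 0.5, 1.0])
eval_pts = np.array([0.0, 0.5, 0.6, 0.7, 0.8, 1.0])
c = 3.0
f = lambda x: np.exp(np.sin(np.pi * x))

def mq(r, c):
    """Multiquadric basis function phi(r) = sqrt(r^2 + c^2) (assumed convention)."""
    return np.sqrt(r**2 + c**2)

# Solve the 3x3 interpolation system A w = f(centers).
A = mq(np.abs(centers[:, None] - centers[None, :]), c)
w = np.linalg.solve(A, f(centers))

# Interpolant s(x) = sum_j w_j * phi(|x - c_j|), evaluated at the requested points.
B = mq(np.abs(eval_pts[:, None] - centers[None, :]), c)
s = B @ w
```

The interpolant s reproduces f exactly at the three centers (which are also evaluation points); the remaining evaluation points 0.6, 0.7, 0.8 show the approximation error.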

  • @vishnumurali6613
    @vishnumurali6613 7 years ago

    Great!!

  • @phuongnamtran8161
    @phuongnamtran8161 6 years ago

    thanks

  • @meetplace
    @meetplace 8 years ago

    I think the conjecture is valid for an (n-1)-dimensional space; awaiting the proof for an infinite-dimensional space... if I'm wrong, I apologise.

  • @erfanebrahimi9748
    @erfanebrahimi9748 7 years ago

    Very nice lecture, thanks for sharing.
    SVM can only be used for binary classification, although there are techniques to use it for non-binary classification as well. Is it the same for RBF? The classification example in this lecture was also binary.

  • @brandomiranda6703
    @brandomiranda6703 9 years ago

    From the regularization variational problem, does one get the one with movable centers or does one get the one with data points as centers?

    • @Bing.W
      @Bing.W 7 years ago

      It does not automatically give you the clusters. So yes, it is RBF on the data points.
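
For the movable-center version, the centers come from a separate clustering step, which is Lloyd's algorithm in the lecture. A minimal numpy sketch of that step (the data and the deterministic initialization are made up for the demo; in practice the initial centers are chosen randomly):

```python
import numpy as np

def lloyd(X, K, init_idx, iters=50):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and
    mean updates -- the step used to place the K movable RBF centers."""
    centers = X[init_idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

# Two well-separated blobs; init with one point from each (a deterministic
# choice for the demo).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(5.0, 0.3, (30, 2))])
mu = lloyd(X, K=2, init_idx=[0, 30])
# mu[0] converges to the mean of the first blob, mu[1] to the second.
```

The K centers returned here are then used as the μ_k of the RBF network, with the weights solved afterwards; note the clustering step is unsupervised and never looks at the targets y.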

  • @zhz538
    @zhz538 1 year ago

    Requesting an episode with an in-depth breakdown, based on its official website, of Montage Technology, the mainland-China DDR5 memory interface chip company.

  • @RahulSharma-oc2qd
    @RahulSharma-oc2qd 3 years ago

    The video could have been better if you had shown x, x_n, and y_n on a graph during the initial minutes. It was helpful, but I had to struggle a lot before catching up with all the notation used, as I don't have a mathematical background.