RBF Networks

  • Published: 15 Jan 2025

Comments • 119

  • @ilyaturner4587
    @ilyaturner4587 7 months ago +5

    This video is insanely good. You uploaded it 7 years ago, I hope you're doing well today!

  • @rishabhlaheja7689
    @rishabhlaheja7689 5 years ago +70

    By far the only well-explained tutorial on RBF, thank you!!! 👌

    • @Mortz76
      @Mortz76 5 years ago +2

      Agreed, 100%! :-)

    • @rmttbj
      @rmttbj 3 years ago

      Year 2021 - I agree with you

  • @itsfabiolous
    @itsfabiolous 3 years ago +6

    As many people mentioned, you did a great job explaining this with a minimal amount of complexity to grasp the concept and investigate further!

  • @alonmota3099
    @alonmota3099 3 years ago +2

    You just out-taught the hell out of all my graduation teachers, in English (which is not even my primary language). Thanks man!

  • @karannchew2534
    @karannchew2534 1 year ago +14

    Notes for my future revision.
    RBF has only three layers: Input, Hidden, and Output.
    Number of input nodes = number of variables (or features). Example: if each input digit is a grid of 300 pixels, then the number of input nodes is 300.
    Number of hidden nodes depends on model optimisation. Each hidden node is an RBF function, e.g. Gaussian, with a beta parameter. The number of variables (or dimensions) of the function is the same as the number (dimension) of the input variables.
    Number of nodes in the output layer:
    a) for classification = number of possible results/classes
    b) for value estimation = one node?
    Unlike a multi-layer perceptron, an RBF network does not have any weights associated with the connections between the input layer and the hidden layer. Instead of weights, this layer has the RBF node's position, width, and height.
    Weights are associated with the connections between the hidden layer and the output layer. As with an MLP, these weights are optimized when the RBF network is trained.
    The output layer nodes of an RBF network are the same as those in an MLP. They contain a combination function - usually a weighted sum of the inputs - and a transfer function that is often linear or logistic.

  • @vewmet
    @vewmet 6 years ago +3

    Dear brother, I swear your videos are the best for the topic. Precise visuals, examples, crisp explanation! Keep your channel active, wanna see it grow.

  • @adelsalam9735
    @adelsalam9735 1 year ago +2

    No one, and I mean no one, explains this like you do. Thanks, thanks a lot, thank you very much.

  • @LJHuang-jn8bj
    @LJHuang-jn8bj 7 years ago +9

    Extremely clear explanation. You are a very smart teacher. Thank you.

  • @iansimpson7546
    @iansimpson7546 4 years ago +2

    Just such a good explanation, by far the simplest and most complete I've seen out of several videos - thank you!

  • @lyesboudia
    @lyesboudia 11 months ago

    The best video that clearly and simply explains RBF. I have seen many videos but this one is by far the best; I learned a lot. Unfortunately the channel is no longer active.

  • @chieeyeoh6204
    @chieeyeoh6204 4 years ago +1

    I am blessed to be able to hear from you

  • @sairandhripatil7884
    @sairandhripatil7884 2 years ago

    Very clear explanation. So far the best video on RBF networks on the internet. Thank you!

  • @maxwell77176
    @maxwell77176 7 months ago

    I wish he had continued his videos. Great job!

  • @arunprasad8606
    @arunprasad8606 5 years ago

    One of the best explanations of RBF. I tried understanding them from several texts but this one is crystal clear.

  • @bbulletube
    @bbulletube 2 months ago

    What a nice explanation of a complex theme. Thanks for sharing your knowledge.

  • @sahith2547
    @sahith2547 2 years ago

    Great explanation... far better than many other explanations... Thank you... it helped.

  • @rbca
    @rbca 5 years ago +4

    What a great video! Thank you for the easy and visualized explanations!

  • @adnaneakk1068
    @adnaneakk1068 2 years ago

    I bet you are a master at MATH in general, because someone who understands it can simplify it... that's why this video is so easy to understand.

  • @sebastianromerolaguna7408
    @sebastianromerolaguna7408 3 years ago

    Thank you,
    I am learning about this, and it's good to learn it in a short time.
    Have a good day man.

  • @pavelzobov
    @pavelzobov 2 years ago

    Great video, it's much easier than what I got in uni. Keep going!

  • @tyfooods
    @tyfooods 3 years ago

    So many kudos to you. It goes to show that it doesn't matter how fancy your video is if your explanation is trash. Well done!

  • @AlexXPandian
    @AlexXPandian 1 year ago

    Very well explained with great intuitive motivations.

  • @hirenthakkar9962
    @hirenthakkar9962 6 years ago

    Excellent explanation. The complexity of the algorithm is simplified. Thank you.

  • @samlighthero5465
    @samlighthero5465 7 years ago

    Great explanation! You did a great job breaking down these complicated ideas.

  • @SEK117
    @SEK117 8 years ago

    Great video. I was just struggling with RBFs, but your video made them much more understandable, thanks.

  • @LuthandoMaqondo
    @LuthandoMaqondo 7 years ago +4

    macheads101, thank you bro. You made everything simple & demystified.

  • @glokta1
    @glokta1 1 year ago

    Hah, I was very impressed by how you broke the concept down, so I went link surfing and realized you now work at OpenAI. Can't say I'm surprised :)

  • @andyd568
    @andyd568 7 years ago +1

    Your explanations are very clear. Thanks.

  • @tymothylim6550
    @tymothylim6550 3 years ago

    Thank you very much for this video! I learnt a lot and am thankful for the good use of slides as well! Great work!

  • @talessomensi2078
    @talessomensi2078 4 years ago

    Great explanation, that's real didactics. Thank you very much!

  • @bhaumikchoksi8198
    @bhaumikchoksi8198 7 years ago

    Excellent tutorial! Finally found a video that's easy to understand. Thanks a lot!

  • @kaushilkundalia9653
    @kaushilkundalia9653 5 years ago

    Such a clear and in-depth explanation, thank you.

  • @Monotheism-MonoTheos
    @Monotheism-MonoTheos 3 years ago

    You're gonna set this YouTube platform on fire... wow!!!! Blast, man!

  • @ashishintown
    @ashishintown 2 months ago

    Thank you so much for the explanation.

  • @Rene-gx7fh
    @Rene-gx7fh 7 years ago +1

    Super helpful! Please continue

  • @maxcrous
    @maxcrous 5 years ago

    Great step by step explanation, thank you!

  • @3000franky
    @3000franky 10 months ago

    Very well explained and the diagrams are helpful

  • @tanugupta3921
    @tanugupta3921 5 years ago

    Really helpful video. Thanks Macheads101

  • @AAA.BBBAAA
    @AAA.BBBAAA 7 years ago +1

    Thanks for your useful video. I want to know about the method we can use to correct the weights. Do we derive the RBF to correct the weights? How?

  • @Ray11mond
    @Ray11mond 1 month ago

    Great!
    You have done it, brother. I hope you are or will become successful.

  • @cameronmackay1606
    @cameronmackay1606 4 years ago

    Amazing explanation. Thanks!

  • @rv-b9z
    @rv-b9z 4 years ago

    Thank youuu so muchhhhh, this video is a gem for a beginner; it basically cleared all my doubts! ❤️

  • @chri_pierma
    @chri_pierma 4 years ago

    You are better than my teacher

  • @reubengutmann7773
    @reubengutmann7773 4 years ago

    Really great explanatory video!!

  • @anagabrielacruzbaltuano2752
    @anagabrielacruzbaltuano2752 5 years ago

    Thank you very much for your explanation. Very good video. Congratulations!

  • @rightmrs8187
    @rightmrs8187 8 years ago +2

    This video saved my day! I am learning RBF for forecasting and don't know where to start! I want to use RBF to correct the forecasting errors. Do you have any advice on materials like books, videos, or papers that can help me? Thank you so much!

  • @perfectketchup
    @perfectketchup 6 years ago +1

    It's basically a Gaussian mixture model in the hidden layer, with a Gaussian activation function (like a kernel machine). The question is how do I backpropagate the means and variances of these Gaussians? Another question on RBF networks: the hidden-to-output layer weights sum to 1, and you can estimate a PDF with it. What makes this sum-to-one constraint on the weights happen? You can't have it in a normal MLP.

  • @arunbali7480
    @arunbali7480 3 years ago

    Thank you Sir for this wonderful video, I have a question. How are the basis functions determined in practice? Why did you choose the Gaussian function as the basis function?

  • @kundaichinomona9958
    @kundaichinomona9958 4 years ago

    You made it so simple, thanks.

  • @saraincin8055
    @saraincin8055 4 years ago

    Great explanation! Thank you

  • @r.walid2323
    @r.walid2323 2 years ago

    Thanks for your explanation.

  • @amirrezaghafoori7593
    @amirrezaghafoori7593 5 years ago

    Thanks a lot! Very nice and clear explanation.

  • @Artformatics
    @Artformatics 5 years ago

    Great video. Don't delete.

  • @parkboulevard4167
    @parkboulevard4167 4 years ago

    Great explanation!

  • @finderlandrs7965
    @finderlandrs7965 4 years ago

    If I use k-means to find the centers, do I then just need to train the output neurons?

  • @iidtxbc
    @iidtxbc 3 years ago

    So, if I use RBF, I can do clustering and classification?

  • @jaredTsunami
    @jaredTsunami 8 years ago +5

    Can you make a video on Neuroevolution and explain how genetic algorithms work? And nice video.

    • @macheads101
      @macheads101  8 years ago +3

      Haha, it's funny that you asked this question. Recently, I have been developing a neuroevolution algorithm for training "large" neural networks--something which hasn't been done yet (at least not fully). I definitely plan to make a video on neuroevolution soon in which I will hopefully (knock on wood) show off some of my new results.

    • @corey333p
      @corey333p 7 years ago

      macheads101 I would be interested to see how that turns out. I experimented with a genetic-algorithm neural network that handles only the weights, which worked in parallel with gradient descent. I hoped the network would have an edge at breaking out of local minima and eventually reach better peak performance. I found that gradient descent did most of the work, and even the improvements made by crossing elites in the population weren't superior to continuous gradient descent with any one member. I didn't run any very long-term tests, but in the tests I did run I didn't find any final error rates getting much lower than networks trained without the genetic method.
      I eventually want to tinker with genetically evolving topologies (NEAT, HyperNEAT), but I haven't got around to it yet. Again, I would be very interested in watching that video if you do make it!

    • @macheads101
      @macheads101  7 years ago +1

      As an update, I don't think I want to make an evolution video anymore. While I was able to train some networks with evolution, the training was *way* slower than with gradient descent. I just see no practical motivation for it.

  • @sepidet6970
    @sepidet6970 5 years ago

    Nicely explained, thanks.

  • @BrettClimb
    @BrettClimb 5 years ago

    I may have missed it, but do you talk about how to determine beta? Nice explanation btw, best video I've seen on RBF.

  • @paknbagn9917
    @paknbagn9917 7 years ago

    You are awesome, you explain really well and smoothly. Thanks.

  • @MuhammadAbdullah-wr3nh
    @MuhammadAbdullah-wr3nh 3 months ago

    Hi, I really enjoyed your video; you explained it very well. I have a question at 8:00: when we are increasing beta, shouldn't the slope drop quickly as the size of the circle increases?

  • @micknamens8659
    @micknamens8659 1 year ago

    I assume you preprocessed the image to center and scale them before feeding the pixels into the input layer. IMO rotation correction would be too complicated.

  • @petejuilangchu
    @petejuilangchu 7 years ago

    Cool and great tutorial. Thanks for your efforts, which helped a Chinese student understand the content, and for your standard English. Please keep on introducing more neural networks to us. BTW, could you illustrate the differences between some typical locally connected neural networks such as RBF, B-spline basis, and the CMAC? THANK YOU IN ADVANCE.

  • @AlexanderBollbach
    @AlexanderBollbach 8 years ago +1

    Are RBF networks typically a single layer, as shown in the video? How would multiple layers or the concept of a hidden layer work for an RBF network?

    • @macheads101
      @macheads101  8 years ago

      Typically, RBF networks are "shallow", consisting of one RBF layer and then a layer of output neurons. The layer of radial basis functions is essentially the hidden layer.
      While I have never seen this in practice, it is theoretically possible to create "deep" RBF networks. Just imagine treating the output of one RBF network as a point (each output neuron gives one coordinate) and then feeding this point into a second RBF network. Whether or not this would be useful or easy to train is a different question.

    • @AlexanderBollbach
      @AlexanderBollbach 8 years ago

      Interesting. Often my first thought with 'neural' algorithms is how it can be made similar to a ConvNet, where you have successive layers computing increasingly abstract features. So I was wondering what properties successive radial functions would have, but perhaps that's a topic I currently cannot address.

  • @felixnaujoks4873
    @felixnaujoks4873 2 years ago

    Just so good!

  • @vahidjoudakian8649
    @vahidjoudakian8649 5 years ago

    Excellent, thank you

  • @MrStudent1978
    @MrStudent1978 6 years ago

    Thanks for this very beautiful explanation. Can you please make a video on the use of RBFNs for solving partial differential equations?

  • @RahulSharma-oc2qd
    @RahulSharma-oc2qd 3 years ago

    In the initial seconds of the video you said RNNs are helpful in pattern recognition... so are CNNs... are CNNs and RNNs somewhat similar in a way?
    The centers of the circles need to be the same as the inputs (data points); is it possible to have centers other than the data points?

  • @LG-nm1xg
    @LG-nm1xg 6 years ago

    Would rotation and scaling of the original figure improve the accuracy of prediction?

  • @eduardojreis
    @eduardojreis 6 years ago

    I have no one else to ask, I tried to implement it, but it is not learning. Not sure why.
    Any tips about possible pitfalls?

  • @deepikaupadhyay3206
    @deepikaupadhyay3206 6 years ago

    Thanks, great explanation :)

  • @LuthandoMaqondo
    @LuthandoMaqondo 7 years ago

    Hi Macheads101, I wanted to know if we may look at the source code for the handwriting demo, to modify it for use in more general purposes?

  • @gulseminyesilyurt7282
    @gulseminyesilyurt7282 5 years ago

    Thanks for the video! Say I use two output neurons per class, so I have a different set of weights for each neuron. When I am training the weights between the hidden and each output neuron (with the least squares method), should I use only the observations that belong to the associated output neuron? I will be glad if you can help.

  • @MuslimMosaic
    @MuslimMosaic 3 years ago

    Legend. Thank you.

  • @Harsh__Pandya
    @Harsh__Pandya 8 months ago

    Thanks, this was helpful.

  • @WahranRai
    @WahranRai 6 years ago

    With which software/tool did you capture your handwritten digits?

  • @kiriakipoursaitidou2732
    @kiriakipoursaitidou2732 6 years ago

    I thought that β is the variance of the kernel. So, if you have lower variance - a "thinner" kernel (e.g. Gaussian) - then you can have smaller circles (though with less smoothness).

  • @divyareddy6767
    @divyareddy6767 3 years ago

    Thank you, great stuff.

  • @GunamaniJena
    @GunamaniJena 4 years ago

    Excellent video.

  • @jianjunzhang9108
    @jianjunzhang9108 7 years ago

    Hi machead, excellent work! Just one question: how do you interpret the output of an RBFNN as a probability, the way an MLPNN with sigmoid activation does? I mean, when I train an RBFNN with multiple 0-1 target outputs, the predictions are usually real numbers varying in the interval [-0.sth, +1.sth]; it does not seem like a probability to me... Can you comment on this?

  • @diracsea2774
    @diracsea2774 7 years ago

    Great video as always, long-time fan.

  • @Schmuck
    @Schmuck 8 years ago +2

    Hey, can you explain to me how you learned to make use of the mnist dataset in Go? I'm looking at github.com/unixpickle/mnist/blob/master/dataset.go but I really don't understand how you're decompressing it and turning it into a go file to be used. Could you point me in the right direction of how you learned how to do that?

    • @macheads101
      @macheads101  8 years ago +2

      I used a package called go-bindata to embed the MNIST data in a Go source file. This data is compressed with gzip, so my code does the following: decompress embedded MNIST files (line 159-169) -> decode files into "images" (171-207). I suspect it is 171-207 that you are confused about. The MNIST files come in a binary format, so the code there is just dealing with the specific bytes in the MNIST file.

    • @Schmuck
      @Schmuck 8 years ago +1

      Thank you so much, you're honestly the most helpful YouTuber and active in replies. I hope your channel does well.

  • @miscymo
    @miscymo 7 years ago

    Nice explanation

  • @lutzoffun2459
    @lutzoffun2459 4 years ago

    Thank you!

  • @houkensjtu
    @houkensjtu 8 years ago

    Hi Alex! I know it's a weird idea and it's totally irrelevant to your videos, but if you are going to take the GRE test, would you consider doing a video talking about how you plan to tackle the test? ... cheers

  • @salmaabdelmonem7482
    @salmaabdelmonem7482 2 years ago

    Thank you

  • @MiMi-zl9um
    @MiMi-zl9um 7 years ago

    Hi! I enjoyed the video; it really helped me get a better understanding of RBF networks.
    Can you explain how to code an RBF in MATLAB?

  • @saidul14319
    @saidul14319 5 years ago

    Amazing!!

  • @fernandodpbgb4109
    @fernandodpbgb4109 6 years ago

    Thanks a lot. BTW, have they ever told you you resemble John Lennon?

  • @vedantjoshi1487
    @vedantjoshi1487 5 years ago

    Illusion in this video at 4:30... if you stare continuously at the person, the red circles on the green plane (upper-left image) vanish... and you see a full green sheet... amazing how our brain just removes the red circles... just a fun comment!!!

  • @garryyegor9008
    @garryyegor9008 6 years ago +1

    Do you mean the RBF as an activation function, or what?

  • @brainlink_
    @brainlink_ 2 years ago

    FOR ITALIANS: I made a playlist on RBF networks! :) here -> ruclips.net/video/fcBz-3NchCI/видео.html

  • @shubhgupta6110
    @shubhgupta6110 5 years ago

    Damn, you're good.

  • @lokeshmagnani7854
    @lokeshmagnani7854 5 years ago

    Hey! I need your help regarding one of the RBF programs. Could you please help me?

  • @satishpatro
    @satishpatro 7 years ago

    Hi,
    I need some suggestions for a minor project.
    -----------------------
    We took a liver disease patient dataset, balanced it using SMOTE, and applied an RBF classifier and a Naive Bayes one.
    I don't know what to do for the major project. Any suggestions, including an extension of that project or any new one building on the previous projects, that would at least be available on the internet?
    There is no one to guide us, and submission of the topic is tomorrow.
    Thank you

  • @kristoffervagenes5560
    @kristoffervagenes5560 8 years ago

    Hi, could you make a video about how to see or control someone's Mac, from a Mac? Plzz

  • @TylerMatthewHarris
    @TylerMatthewHarris 7 years ago

    Higher dimensions

  • @shreeyajoshi9771
    @shreeyajoshi9771 3 years ago

    Amazing explanation! Thanks loads!