What do neural networks learn?

  • Published: 8 Sep 2024

Comments • 50

  • @oauthaccount7974
    @oauthaccount7974 2 years ago +1

    How vivid the NN concept is! OMG, I can't believe it. I was looking for this for a long time; finally I got it. :)

  • @lcsxwtian
    @lcsxwtian 5 years ago +16

    Please keep doing this, Brandon. We can't thank you enough, and leaving you a nice comment like this is the least we can do :)

  • @mostinho7
    @mostinho7 4 years ago +2

    12:00 How a logistic regression curve is used as a classifier. It classifies a continuous variable into categories (one input), but can be extended to multiple inputs.
    12:52 With two inputs, the curve becomes 3D.
    IMPORTANT: the contour lines projected onto the x1, x2 plane show the "decision boundary". Logistic regression always has linear contour lines, which is why it is considered a linear classifier (see the sketch after these notes).
    16:40 (Non-linear) how a curve is used as a classifier (the network has a single output, plotted as a function of the input).
    With only one input x and one output y, classification is done by this 2D curve: where the curve is above a certain line (say y = 0) is category A, and where it is below that line is category B.
    A non-linear classifier doesn't just split the points into two categories; the categories can also be interleaved.
    So we can classify an input into two categories using only one output node, by specifying a threshold line/value for the output (y = threshold) and looking at where that line intersects our curve.
    If our neural network has two inputs and one output, the generated curve is 3D (19:40).
    Another way to classify an input into two classes is to have two output nodes (what I'm used to).
    TODO: continue from 21:59.
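
    A minimal sketch of two of the ideas above, with made-up weights and a stand-in curve (nothing here comes from the video):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # (1) Logistic regression with two inputs: the 0.5 contour of
    # sigmoid(w1*x1 + w2*x2 + b) is exactly the line w1*x1 + w2*x2 + b = 0,
    # which is why the decision boundary is linear.
    w1, w2, b = 2.0, -1.0, 0.5          # arbitrary example weights

    def logreg_class(x1, x2):
        return "A" if sigmoid(w1 * x1 + w2 * x2 + b) > 0.5 else "B"

    # (2) One input, one output, non-linear curve: thresholding the output
    # can interleave the categories along the x axis.
    def curve(x):
        return np.sin(x)                # stand-in for a trained network's output

    def nonlinear_class(x, threshold=0.0):
        return "A" if curve(x) > threshold else "B"

    print(logreg_class(1.0, 0.5))       # A: 2.0*1.0 - 1.0*0.5 + 0.5 > 0
    for x in [-4.0, -1.0, 1.0, 4.0]:
        print(x, nonlinear_class(x))    # A, B, A, B -- interleaved categories
    ```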

  • @chernettuge4629
    @chernettuge4629 6 months ago +2

    Respect, Sir. Thank you so much; I am more than satisfied with your lecture.

  • @estifanosabebaw1468
    @estifanosabebaw1468 5 months ago

    The depth of the explanation and visualization: there are no words to describe how much they express and how much they help in grasping the most fundamental, core concepts of neural networks.
    THANKS Bra

  • @shedrackjassen913
    @shedrackjassen913 1 year ago +1

    This was very satisfying. Keep up the good work!

  • @lakeguy65616
    @lakeguy65616 5 years ago +4

    Hands down, this is the best video explaining the concepts and boundaries of Neural Networks I've ever watched. Well done!

  • @xarisalkiviadis2162
    @xarisalkiviadis2162 5 months ago

    What a diamond of a channel I just found... incredible!

  • @johanneszwilling
    @johanneszwilling 5 years ago +4

    😍 Thank You! 😊 Got myself a whiteboard now 😀 Much more fun learning this stuff

  • @petrmervart4994
    @petrmervart4994 5 years ago +2

    Excellent explanation. I really like examples with different numbers of outputs and hidden nodes.

  • @philipthatcher2068
    @philipthatcher2068 5 years ago +3

    Brilliant. Best explanation of this ever. Great visuals too.

  • @stasgridnev
    @stasgridnev 5 years ago +4

    Wow, that is just great. Thanks for the awesome explanation with visualisation. I made several notes based on your video. Thanks, wish you luck.

  • @chloegame5838
    @chloegame5838 2 years ago

    Finally, the video I was looking for! A clear and brilliant explanation of NNs that ties together decision boundaries, linear and nonlinear functions, and what the values mean throughout a NN.

    • @BrandonRohrer
      @BrandonRohrer  2 years ago +2

      I'm really happy to hear it, Chloe. This video has resonated with a much smaller audience than some of my others, but it's one of my favorites and one that I'm proud of.

    • @user-kf5jx1ug2v
      @user-kf5jx1ug2v 1 year ago

      !

  • @somdubey5436
    @somdubey5436 4 years ago

    Great work, and very informative. One thing that really made me think is how anyone could even dislike this video.

  • @hackercop
    @hackercop 2 years ago

    This explains activation functions very well!

  • @greglee7708
    @greglee7708 5 years ago +3

    Very well explained, thank you.

  • @ayush612
    @ayush612 5 years ago +2

    Thanks, Brandon. You are awesome!

  • @waterflowzz
    @waterflowzz 3 years ago +1

    Wow this is the best explanation of a neural network I’ve ever seen! This channel is so underrated. I hope you get way more subs/views.

    • @BrandonRohrer
      @BrandonRohrer  3 years ago

      Thanks! :) I'm really happy it hit the spot.

  • @carlavirhuez4785
    @carlavirhuez4785 5 years ago

    Dear Brandon, you have just saved me a lot of time, and your explanation is very simple and intuitive. You have helped this humble student :´)

  • @sridharjayaraman8094
    @sridharjayaraman8094 5 years ago

    Awesome - many, many thanks. One of the best lectures for an intuitive understanding of NNs.

  • @MartinLichtblau
    @MartinLichtblau 5 years ago +1

    Excellent explanations for so many deep insights. 👍🏼

  • @larsoevlisen
    @larsoevlisen 5 years ago +2

    Thank you for your videos! They have helped me a lot in digesting related information.
    One piece of feedback on the visual side: when working with complex structures visually (like the layer diagrams in this video), adding focus to the objects you are talking about could greatly help viewers' ability to follow your narrative.
    I don't know how you create these graphics, but with this video as an example, you could e.g. outline and light up the nodes you speak of, and outline the whole of the layer boxes (the green boxes surrounding layers).
    Thank you for your contribution.

    • @BrandonRohrer
      @BrandonRohrer  5 years ago

      Thanks, Lars! I appreciate the feedback. I really like your idea for diagram highlighting, and I'll see if I can fold it into my next video.

  • @purnasaigudikandula3532
    @purnasaigudikandula3532 5 years ago +4

    Please try to make a video explaining the math behind every machine learning algorithm. Every beginner out there can get the theory of an algorithm but can't get the math behind it.

  • @177heimu
    @177heimu 4 years ago +1

    Great explanation of the topic! Any plans to start a mini-series on mathematics covering calculus, statistics, and linear algebra? I believe many would benefit from it :)

  • @abhinav9561
    @abhinav9561 2 years ago +1

    Thanks, Brandon. Very helpful and much needed. The graphs really helped with the intuition.
    Can I ask how you made those non-linear graphs?

    • @BrandonRohrer
      @BrandonRohrer  2 years ago

      I'm glad to hear it, Abhinav! Here is the code I used to make the illustrations: github.com/brohrer/what_nns_learn

    • @abhinav9561
      @abhinav9561 2 years ago

      @@BrandonRohrer thanks

  • @Silvannetwork
    @Silvannetwork 5 years ago

    Great and informative video. You are definitely underrated

  • @sakcee
    @sakcee 8 months ago

    excellent!

  • @MohamedMahmoud-ul4ip
    @MohamedMahmoud-ul4ip 5 years ago

    AMAZING!!!!!!!!!!!!!!!!!!!!
    THANK YOU VERY MUCH

  • @jenyasidyakin8061
    @jenyasidyakin8061 5 years ago +1

    Wow, that was very clear! Can you do a course on Bayesian statistics?

    • @BrandonRohrer
      @BrandonRohrer  5 years ago

      Thanks! Have you watched this one yet?: ruclips.net/video/5NMxiOGL39M/видео.html

  • @svein2330
    @svein2330 4 years ago

    Excellent.

  • @lifeinruroc5918
    @lifeinruroc5918 1 year ago

    Any chance you could quickly explain why the resulting models are straight lines?

  • @harrypotter6505
    @harrypotter6505 1 year ago +1

    I am stuck on understanding why the "i" below the summation sign is necessary. Someone please help: would it make a difference not to write that "i" below?

    • @BrandonRohrer
      @BrandonRohrer  1 year ago

      The i underneath SUM_i a_i means to sum the terms over all values of i, for example a_0 + a_1 + a_2 + a_3 + ...
      If you leave the i off, it is usually understood; it means the same thing.
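
      As a small illustration (made-up values, just to show that the i names the loop index), SUM_i a_i is a loop over every index i:

      ```python
      # a_0 ... a_3
      a = [2.0, -1.0, 0.5, 3.0]

      total = 0.0
      for i in range(len(a)):   # the "i" names the index being summed over
          total += a[i]

      print(total)              # 4.5, the same as sum(a)
      ```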

  • @igorg4129
    @igorg4129 4 years ago

    Great video, thanks. Please, someone help me understand something.
    1) I do not understand where (at which stage of the network) you can observe a sigmoidal 3D surface like the one shown at 13:05.
    In my opinion you only get a plane defined by the weights and bias: (x1*w1 + x2*w2 + b = output). Into this plane you plug the x1 and x2 you have, and you get a simple number (a scalar) on the output axis. This scalar will NEVER tell anyone that it came from a plane when it is plugged into the sigmoid formula at the next step. Thus the sigmoid trick is a totally 2D operation: you just plug the scalar from the previous step in on the x axis and get the y value as the final output of the first layer. So such a 3D sigmoidal surface as shown never exists, in my opinion... Please tell me what I missed.
    2) At 14:50 (similar to the first question): what do you mean by "when we add them together"? I mean, where do you mathematically perform this addition of one 3D curve to another? Correct me if I am wrong, but each activation function gives me in the end only one simple number (a scalar)! In the case of a sigmoid this scalar is between 0 and 1, say 0.7, but it is just a scalar and NOT a surface! Technically, when this 0.7 reaches the second layer, it acts like a regular input and NO ONE KNOWS that it was born of some sigmoid. Could you please clarify this point for me?
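
    A minimal numpy sketch of the point at issue, assuming a 2-input logistic unit with made-up weights (an illustration, not code from the video): the "surface" is the unit's output viewed as a function over the whole (x1, x2) plane, while a single forward pass only samples it at one point.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w1, w2, b = 1.5, -2.0, 0.3

    # One forward pass: a single scalar, as the question says.
    print(sigmoid(w1 * 1.0 + w2 * 0.5 + b))

    # The surface at 13:05: the same unit evaluated at a grid of (x1, x2) points.
    x1, x2 = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
    surface = sigmoid(w1 * x1 + w2 * x2 + b)   # shape (61, 61): a sigmoidal sheet

    # "Adding them together" at 14:50: the next layer takes a weighted sum of
    # such surfaces, one per hidden node, and the result is again a surface.
    surface2 = sigmoid(-1.0 * x1 + 0.5 * x2 - 0.2)
    combined = 0.8 * surface - 0.6 * surface2
    print(combined.shape)                      # (61, 61)
    ```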

  • @AshutoshRaj
    @AshutoshRaj 5 years ago

    Awesome, man!! Can you relate the basis and weights of a neural network?

  • @pauldacus4590
    @pauldacus4590 5 years ago

    Thanks embroidered stockings Grampa Christmas!

  • @gaureesha9840
    @gaureesha9840 5 years ago

    Can a bunch of sigmoid activations produce a non-linear classifier?

    • @BrandonRohrer
      @BrandonRohrer  5 years ago

      Yep, in multiple layers they can do the same thing as hyperbolic tangents, except that the functions they create fall between 0 and 1, rather than between -1 and 1.
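
      A quick numerical check of the two ranges mentioned above (a sketch, using numpy):

      ```python
      import numpy as np

      z = np.linspace(-10.0, 10.0, 5)
      print(1.0 / (1.0 + np.exp(-z)))   # logistic sigmoid: values in (0, 1)
      print(np.tanh(z))                 # hyperbolic tangent: values in (-1, 1)

      # They are related by tanh(z) = 2*sigmoid(2z) - 1, which is why stacked
      # sigmoid layers can build the same functions, just shifted and rescaled.
      print(np.allclose(np.tanh(z), 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0))  # True
      ```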

  • @gavin5861
    @gavin5861 5 years ago +1

    🤯

  • @SnoopyDoofie
    @SnoopyDoofie 1 year ago

    8 minutes in, and still very abstract. No thanks. There are better explanations.