How Deep Neural Networks Work

  • Published: 22 Dec 2024

Comments • 871

  • @flavialan4544
    @flavialan4544 3 years ago +88

    This should be recommended as the 1st video to watch when it comes to learning neural networks

    • @DR-bq4ph
      @DR-bq4ph 2 years ago +1

      Yes

    • @ckpioo
      @ckpioo 9 months ago +1

      yes I agree, but for simplicity's sake he should have done 0 to 1, with 0 being black, 1 being white, and 0.5 being grey, because almost everyone follows that pattern, and for new learners it's a bit harder to switch from thinking about -1 to 1 to 0 to 1

    • @SimonZimmermann82
      @SimonZimmermann82 5 days ago

      Totally agree. I understood the principle right away, and now we have our own NN in operation

  • @heyasmusic7553
    @heyasmusic7553 1 year ago +9

    I watched your videos 3 years ago. It's almost nostalgic. You may not see this, but you're one of the reasons I kept moving through with Machine Learning

    • @BrandonRohrer
      @BrandonRohrer  1 year ago +2

      I legit cried a little bit. Thank you for this.

  • @danklabunde
    @danklabunde 4 years ago +71

    I've been struggling to wrap my head around this topic for a few days now. You went through everything very slowly and thoroughly, and I'm now ready to dive into more complex lessons on this. Thank you so much, Brandon!

  • @biokult7828
    @biokult7828 7 years ago +94

    "Connections are weighted, MEANING".... Holy fuck..... after viewing numerous videos from youtube, online courses and google talks.... (often with comments below saying "thanks for the clear explanation").... This is the FIRST person I have EVER seen who has actually explained what the purpose of weights is....

    • @Tremor244
      @Tremor244 7 years ago +3

      I feel the same, even though I still can't completely understand how weighting works :/

    • @garretthart4883
      @garretthart4883 7 years ago +2

      Tremor244 I am by no means an expert, but weighting is what makes the network "learn" to be correct. By changing the weights, it changes the output of each neuron and eventually the output of the network. If you tune the weights enough, you will eventually get an output that is what it is supposed to be. I hope this helps

    • @LuxSolari
      @LuxSolari 7 years ago +30

      I don't work with neural networks but with other types of machine learning. But weighting is more or less the same in all these fields of mathematics.
      You want a system that, provided with an input (an image, for instance), achieves its classification as the output. For instance you have a scenery (input) and you want to know if it's from vacations at the mountains or at the beach (a classification, i.e. the output).
      So you pass the image through a set of filters: (1) does the image have umbrellas? (2) does it have clouds? (3) is there a lot of blue? (4) is there a lot of brown?, etc.
      If the image passes a specific combination of filters, there is a greater probability that the image is of a specific type (for instance, if the image (1) has umbrellas, (3) is blueish and isn't (4) brownish, it's more likely to be from the BEACH). But how much more likely?
      That's when the WEIGHTING comes into play. Through machine learning we want to calculate some coefficients (weights) that state a sort of likelihood of an image passing a filter, given its type (for instance, if it has umbrellas there's a probability of 0.9 out of 1 (90%) that it is from the beach and not from a mountain, but if there's a lot of blue maybe only 0.6 of those images are from the beach, and so the WEIGHT IS LIGHTER. That means that, if the image passes a filter of COLOR BLUE it is likely to be from a BEACH, but if it passes a filter of UMBRELLAS it is EVEN MORE LIKELY). Weights, then, are a parameter of RELEVANCE of each of the selected filters for achieving the correct classification.
      So we make the machine learn from LOTS (thousands, perhaps) of images that we KNOW are from the beach or the mountain. One image from the beach has umbrellas, so the classification through the filters was correct and the WEIGHT for the umbrellas is increased. But if there is an image of the mountains with umbrellas and the program says it's from the beach, the weight goes down for the umbrellas. Once we have done this with a lot of images, the weights are FINE-TUNED to classify correctly most of the time (if the filters are any good... if we chose the wrong filters from the beginning, then there's a chance the classifier won't get any better even when fed with lots of images. That could also happen if the training images are biased, i.e. if they don't represent the real set of images that we want to classify).
      I hope this works better for you!
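
      To make the idea above concrete, here is a tiny Python sketch of that weighted-filter voting. The filters, weights, and update rule below are made-up assumptions for illustration (a perceptron-style rule), not anything from the video:

          # Toy beach-vs-mountain classifier: weighted votes from yes/no filters.
          def classify(features, weights):
              # features: 1/0 answers to (umbrellas, clouds, blue, brown)
              score = sum(w * f for w, f in zip(weights, features))
              return "beach" if score > 0 else "mountain"

          weights = [0.9, 0.0, 0.6, -0.7]   # umbrellas count more than blue

          # Perceptron-style learning: when a labeled image is misclassified,
          # nudge each weight toward the correct answer.
          def update(weights, features, label, lr=0.1):
              # label: +1 for beach, -1 for mountain
              score = sum(w * f for w, f in zip(weights, features))
              predicted = 1 if score > 0 else -1
              if predicted != label:
                  weights = [w + lr * label * f for w, f in zip(weights, features)]
              return weights

          print(classify((1, 0, 1, 0), weights))   # umbrellas + blue -> beach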

    • @anselmoufc
      @anselmoufc 7 years ago +5

      If you have had a course on linear regression, you will recognize that weights are equivalent to parameters. They are just "free variables" you adjust in order to match inputs with outputs. In one-dimensional linear regression, the parameters are the slope and offset of a line; you adjust them so that the distance between the line and your points (your training examples) is the least. Neural networks use the same idea as statistical regression. The main difference is that neural networks use a lot of weights (parameters), and for this reason you have to care about overfitting. This in general does not happen in linear regression, since the models are way more parsimonious (they use only a few parameters). The use of a lot of weights is also the reason why neural networks are good general approximators: the large number of weights gives them high flexibility. They are like bazookas, while statistical regression is more like a small gun. The point is that most of the time you only need a small gun. However, people like to apply neural networks to problems where linear regression would do a good job, since NNs are "sexier".
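
      As a quick illustration of "weights = parameters adjusted to match inputs with outputs", here is a minimal one-dimensional linear regression fit by gradient descent (the data points are invented for the example):

          import numpy as np

          x = np.array([1.0, 2.0, 3.0, 4.0])
          y = np.array([2.1, 3.9, 6.2, 7.8])       # roughly y = 2x

          slope, offset = 0.0, 0.0                 # the two "weights"
          lr = 0.01
          for _ in range(2000):
              err = (slope * x + offset) - y       # prediction error
              slope  -= lr * 2 * np.mean(err * x)  # d(MSE)/d(slope)
              offset -= lr * 2 * np.mean(err)      # d(MSE)/d(offset)

          print(slope, offset)                     # approaches 2 and 0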

    • @madsbjerg8186
      @madsbjerg8186 7 years ago +1

      +Esteban Lucas Solari I want to let you know that I love you for everything you just wrote.

  • @mikewen8216
    @mikewen8216 7 years ago +321

    I've watched many videos and read many blogs and articles; you are literally the best at making these concepts intuitive to understand

    • @behrampatel3563
      @behrampatel3563 7 years ago +9

      I agree. The penny dropped for me today with this video.
      Thank you so much Brandon

    • @a.yashwanth
      @a.yashwanth 5 years ago +8

      3blue1brown

  • @klaudialustig3259
    @klaudialustig3259 7 years ago +14

    I already knew how neural networks work, but next time someone asks me, I'll consider showing him or her this video! Your explanation is visualized really nicely.

  • @MatthewKleinsmith
    @MatthewKleinsmith 7 years ago +69

    Great video. Here are my notes:
    7:54: The edges going into the bottom right node should be white instead of black. This small error repeats throughout the video.
    10:47: You fixed the color error.
    11:15: Man, this video feels good.
    21:41: Man, this video feels really good.
    An extension for the interested:
    Sometimes we calculate the error of a network not by comparing its output to labels immediately, but by first putting its output through a function, and comparing that new output to something we consider to be the truth. That function could be another neural network. For example, in real-time style transfer (Johnson et al.), the network we train takes an image and transforms it into another image; we then take that generated image and analyze it with another neural network, comparing the new output with something we consider to be the truth. The point of the second neural network is to assess the error in the generated image in a deeper way than just calculating errors pixel by pixel with respect to an image we consider to be the truth. The authors of the real-time style transfer paper call this higher-level error "perceptual loss", as opposed to "per-pixel loss".
    I know this was outside the scope of this video, but it was helpful to me to write it, and I hope it will help someone who reads it.
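
    To make the perceptual-loss idea concrete, here is a rough numpy sketch. The feature_net below is a stand-in for the fixed, pretrained network (Johnson et al. use a frozen VGG); the frozen random projection here is purely an assumption for illustration:

        import numpy as np

        def per_pixel_loss(generated, target):
            # Compare images pixel by pixel.
            return np.mean((generated - target) ** 2)

        def feature_net(image):
            # Stand-in for a fixed, pretrained feature extractor:
            # a frozen random projection followed by a ReLU.
            rng = np.random.default_rng(0)            # fixed seed = frozen weights
            w = rng.standard_normal((image.size, 64))
            return np.maximum(image.reshape(-1) @ w, 0.0)

        def perceptual_loss(generated, target):
            # Compare images in feature space instead of pixel space.
            return np.mean((feature_net(generated) - feature_net(target)) ** 2)

        gen, tgt = np.random.rand(8, 8), np.random.rand(8, 8)
        print(per_pixel_loss(gen, tgt), perceptual_loss(gen, tgt))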

    • @humanity3.090
      @humanity3.090 7 years ago +4

      Good to know that I'm not the only one who caught the logical mistakes.
      9:14 Bottom second squash should be vertically inverted, if I'm not mistaken.

    • @ganondorfchampin
      @ganondorfchampin 6 years ago +3

      I had the idea of doing perceptual loss before I even knew the term for it, seems like it would work better for warp transforms and the like versus level transforms.

    • @hozelda
      @hozelda 5 years ago +2

      Alternatively, the edges are correct but the corresponding picture should be flipped.
      Regardless, the final step (the output perceptron at the bottom indicating horizontal) works with either the white-white edges or the black-black edges scenario.

    • @oz459
      @oz459 4 years ago

      thanks :)

    • @sali-math-arts2769
      @sali-math-arts2769 2 years ago

      YES - thanks , I saw that tiny error too 🙂

  • @claireanderson5903
    @claireanderson5903 5 years ago +8

    Brilliant! I was involved 50 years ago in a very early AI project and was exposed to simple neural nets back then. Of course, having no need for neural nets, I forgot most of what I ever knew about them during the interval. And, wow, has the field expanded since then. You have given a very clear and accessible explanation of deep networks and their workings. Will happily subscribe and hope to find further edification on Reinforcement Learning from you. THANK YOU.

  • @andrewschroeder4167
    @andrewschroeder4167 7 years ago +1

    I hate how many people try to explain complicated concepts that require math without using math. Because you used clear mathematical notation, you made this much easier to understand. Thank you so much.

  • @cloudywithachanceofparticl2321
    @cloudywithachanceofparticl2321 7 years ago +31

    As a physics guy coming into coding, this video completely clarified the topic. Your treatment of this topic is perfect!

    •  4 years ago +1

      Don't worry people I asked this guy if he was a physicist

    • @Mau365PP
      @Mau365PP 4 years ago

      @ thanks bro

  • @intros1854
    @intros1854 7 years ago +1

    Finally! You are the only one on the internet who explained this properly!

  • @InsaneAssassin24
    @InsaneAssassin24 7 years ago +3

    As a chemist who just recently took Physical Chemistry, back propagation makes SOOO much more sense to me when you put it into a calculus description, rather than a qualitative one as I've been seeing elsewhere. So THANK YOU!

  • @mukulbarai82
    @mukulbarai82 4 years ago +1

    I've watched many videos on YouTube, but none of them explained the concepts as intuitively as you did. Though I have to watch it again, as I've failed to grasp some concepts, I am sure it will become clear as I watch more.

  • @fghj-zh6cv
    @fghj-zh6cv 7 years ago +1

    This simple lecture truly makes all viewers fully understand the logic behind neural networks. I strongly recommend this video clip to my colleagues working in data-driven industry. Thanks.

  • @rickiehatchell8637
    @rickiehatchell8637 4 years ago +3

    Clean, concise, informative, astonishingly helpful, you have my deepest gratitude.
    I've never seen anyone explain backprop as well as you just did, great job!

  • @abhimanyusingh4281
    @abhimanyusingh4281 7 years ago +1

    I have been trying to develop a DNN for a week. I have seen almost 100 videos, forums, blogs. Of all those, this is the only one with calculus that made complete sense to me. You sir are the real MVP

  • @jabrilsdev
    @jabrilsdev 7 years ago +5

    This is probably the best breakdown I've come across, very dense, you've left no gaps in your explanations! Thanks for the great lesson! Onward to a calculus class!

  • @NewMediaServicesDe
    @NewMediaServicesDe 5 years ago +6

    30 years ago, I studied computer science. We were into pattern recognition and stuff, and I was always interested in learning machines, but couldn't get the underlying principle. Now I've got it. That was simply brilliant. Thanks a lot.

  • @FlashKenTutorials
    @FlashKenTutorials 7 years ago +25

    Clean, concise, informative, astonishingly helpful, you have my deepest gratitude.

  • @alignedbyprinciple
    @alignedbyprinciple 6 years ago +1

    I have seen many, many videos regarding NNs but this is by far the best; Brandon understands the relationship between the NN and the backbone of the NN, which is the underlying math. He clearly presented it in a very intuitive way. Hats off to you, sir. Keep up the good job.

  • @bowbert23
    @bowbert23 1 year ago +3

    I always had trouble intuitively understanding how a derivative works and how its calculation plays out in practice in simple terms. Little did I know starting this video that I'd finally understand it. Thank you! I'm relieved and feel less stupid now.

    • @BrandonRohrer
      @BrandonRohrer  1 year ago

      I'm really happy to hear that Bowbert. Thank you for the note.

  • @thehoxgenre
    @thehoxgenre 5 years ago

    I was amazed by the way you talk and explain very slowly, and you remain slow until the end and don't rush things. Bravo

  • @DeltaTrader
    @DeltaTrader 7 years ago +5

    Possibly one of the best explanations about NN out there... Congratulations!

  • @jones1351
    @jones1351 2 years ago +1

    Fantastic description of what these networks do. I've gone through a few of these explainers, and all they demonstrated was that the person knew their subject; they just couldn't teach it. They talk in jargon that quickly loses those unfamiliar with it. In other words, they're not teaching, they're having a 'conversation' with those who are already versed and have background in the field.
    Einstein is said to have remarked, 'If you can't explain it simply, then you don't understand it yourself.'
    Thanks again. I walk away feeling like I actually learned something. You Can Teach.

    • @BrandonRohrer
      @BrandonRohrer  2 years ago

      Hey thanks! I really appreciate this. It's the highest compliment.

  • @user-kr6dk7bq6b
    @user-kr6dk7bq6b 4 years ago

    It's the first time I get to understand how neural networks work. Thank you.

  • @ViralKiller
    @ViralKiller 1 year ago

    That was incredible... watched 7 videos so far, and every day my brain understands a bit more... I've recently been learning Houdini VEX code, which is 3D graphics programming, and that took 1 year of watching a whole bunch of stuff and not getting it... until I did... so I know I will grasp this soon... I'm sticking to these simple examples for now, until I can code it from scratch in Python

  • @Gunth0r
    @Gunth0r 7 years ago +1

    My kind of teacher! Subscribed! Nice voice, nice face, nice tempo, nice amount of information, nice visuals. You'd almost start to believe this video was produced with the concepts you've talked about.
    And my mind was just blown. I realized that we could make a lot more types of virtual neurons and in that way outclass our own brains (at even a fraction of the informational capacity) with a multitude of task-specific sub-brains forming a higher brain that may or may not develop personality.

  • @SunyangFu
    @SunyangFu 7 years ago +1

    The best and most easily understandable neural net video I have seen

  • @salmamohsen8208
    @salmamohsen8208 5 years ago

    The easiest, most elaborate explanation I have found on this matter

  • @OtRatsaphong
    @OtRatsaphong 5 years ago +11

    Thank you Brandon for taking the time to explain the logic behind neural networks. You have given me enough information to take the next steps towards building one of my own... and thank you YouTube algo for bringing this video to my attention.

  • @radioactium
    @radioactium 7 years ago +8

    Wow, this is a very simple explanation, and it helped me understand the concept of neural networks. Thank you.

  • @tomryan9827
    @tomryan9827 5 years ago

    Great video. A single clear, concrete example is more useful than 100 articles full of abstract equations and brushed-over details. Speaking as someone who's read 100 articles full of abstract equations and brushed-over details.

  • @coolcasper3
    @coolcasper3 7 years ago +15

    This is the most intuitive explanation of neural nets that I've seen, keep up the great content!

  • @yashsharma6112
    @yashsharma6112 8 months ago

    A very rare way to explain a neural network in such great depth. Loved the way you explained it ❤

  • @Sascha8a
    @Sascha8a 7 years ago +6

    This is a really good video! For me as a complete beginner this really helped me understand the basics of neural networks, thanks!

    • @AviPars
      @AviPars 7 years ago +1

      Artem Kovera, lovely book, just downloaded. For the lazy people: amzn.to/2ntC9Zm

  • @sirnate9065
    @sirnate9065 7 years ago +317

    Who else paused the video at 15:10, went and did a semester of calculus, then came back and finished watching?

  • @cveja69
    @cveja69 7 years ago +23

    I almost never post comments, but this one deserves it :D
    Truly great :D

  • @Toonfish_
    @Toonfish_ 7 years ago +48

    I've never seen anyone explain backprop as well as you just did, great job!

    • @ViralKiller
      @ViralKiller 1 year ago +1

      I never understood backprop properly until this video... this was the light bulb

  • @RichaChauhandreams
    @RichaChauhandreams 4 years ago +1

    @Brandon Rohrer For each neuron: 1. First a number is assigned. 2. Then a weight is assigned. 3. Then it is squashed using the sigmoid, and then the values are summed up! Right? My question is whether each weighted input is squashed or the sum of the weighted inputs is squashed, and why is squashing done?

    • @BrandonRohrer
      @BrandonRohrer  4 years ago +1

      Good question Richa, and it will take a bit longer to answer. There's a deeper dive into this material at e2eml.school/312
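
      For what it's worth, the usual convention (and the one in the video) is: each neuron first sums its weighted inputs, then the squashing function is applied once to that sum. Squashing keeps values in a fixed range and adds the nonlinearity that lets stacked layers represent more than a straight line. A minimal sketch, assuming a tanh-style -1..1 squash like the video's:

          import numpy as np

          def neuron(inputs, weights):
              a = np.dot(weights, inputs)   # 1. weighted sum of all inputs
              return np.tanh(a)             # 2. squash the sum, once per neuron

          print(neuron(np.array([1.0, -1.0]), np.array([0.8, -0.5])))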

  • @marioeraso3674
    @marioeraso3674 4 months ago +1

    Awesome description of what neural networks are!

  • @Mr_AciD
    @Mr_AciD 7 years ago +35

    At 7:48, the bottom right receptive field should be Black black white white, not White white black black :)
    Congratulations on the explanation!

    • @yhr4052
      @yhr4052 7 years ago +12

      Yes, there is a mistake.

    • @BrandonRohrer
      @BrandonRohrer  7 years ago +12

      It is true! Good catch both of you.

  • @bestoonhussien2851
    @bestoonhussien2851 7 years ago +4

    I'm in love with the way you explain things! So professional yet simple and easy to follow. Keep it up!

  • @antwonmccadney5994
    @antwonmccadney5994 5 years ago +4

    Holy shit! Now I... I actually get it!
    Thank you!
    Clean, concise, informative, astonishingly helpful, you have my deepest gratitude.

  • @lucazarts25
    @lucazarts25 7 years ago +6

    OMG it's even harder than I expected! Thank you very much for the thorough and thoughtful explanation!

    • @lucazarts25
      @lucazarts25 7 years ago

      it goes without saying that I became a subscriber as well ;)

  • @AnkitSharma-ir8ud
    @AnkitSharma-ir8ud 6 years ago +3

    Really great explanation, Brandon. Also, I greatly appreciate that you share your slides as well, and in raw (PPT) format at that. Great work.

  • @PierreThierryKPH
    @PierreThierryKPH 7 years ago

    Very slowly and clearly gets to the point; a nice and accessible video on the subject.

  • @Kaixo
    @Kaixo 7 years ago

    HELP, is it right that I'm getting values higher than 1 on the output?!?

  • @abhijeetbhowmik2264
    @abhijeetbhowmik2264 7 years ago +1

    The best Back Propagation explanation on YouTube. Thank you sir.

  • @nirbhaythacker6662
    @nirbhaythacker6662 7 years ago

    The functions shown at 4:39 and 20:33 are both referred to as the same function, but the graph at 4:39 is actually
    2*(σ(a) - 0.5), where σ(a) is the sigmoid function.
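
    A quick numerical check of this observation (using numpy; the identity 2*(σ(a) - 0.5) = tanh(a/2) is standard):

        import numpy as np

        def sigmoid(a):
            return 1.0 / (1.0 + np.exp(-a))

        a = np.linspace(-5, 5, 11)
        # The -1..1 curve at 4:39 is a rescaled logistic, i.e. tanh(a/2),
        # not the 0..1 logistic shown at 20:33.
        print(np.allclose(2 * (sigmoid(a) - 0.5), np.tanh(a / 2)))   # True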

  • @DanielMoleGuacamole
    @DanielMoleGuacamole 2 years ago

    Holy, thank you!! I've watched 50-ish tutorials on neural networks, but all of them explained things poorly or too fast. But you went through everything slowly and actually explained all the info clearly!!

    • @BrandonRohrer
      @BrandonRohrer  2 years ago

      Thank you so much! I'm happy to hear how helpful it was, and it means a lot that you would send me a note saying so.

  • @vipinsingh-dj2ty
    @vipinsingh-dj2ty 7 years ago +1

    Literally THE best explanation I found on the internet.

  • @mdellertson
    @mdellertson 7 years ago +1

    Yours was a very easy explanation of deep neural networks. Each step in the process was broken down into bite-sized chunks, making it very clear what's going on inside a deep neural network. Thanks so much!

  • @abubakar205
    @abubakar205 5 years ago +1

    One of the best teachers; you cleared all my doubts about neural networks. Thanks sir, let me click an ad for you

  • @cheaterman49
    @cheaterman49 7 years ago +5

    I totally want to implement this now. Can't be that hard - won't be the best approach, just trying to see how close it can "naturally" get to the ideal solution you displayed and how many iterations of training it takes!

    • @BrandonRohrer
      @BrandonRohrer  7 years ago +14

      Give it a go! Although I would love to get a sticker for my laptop that says "Can't be that hard." I'll put it right above the one that says "What could possibly go wrong?"

    • @fulanomengano8895
      @fulanomengano8895 7 years ago +1

      I've been trying to implement the full network as seen (@ 25:30) in python but hit a roadblock. Have you done it?
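
      For anyone stuck at the same point, here is a minimal from-scratch numpy sketch for the video's 2x2-pixel task. It is not the exact architecture at 25:30 (and the labeling below lumps all remaining patterns in with "diagonal", an assumption for brevity), but the forward pass and backprop steps follow the video's recipe:

          import numpy as np

          rng = np.random.default_rng(0)

          # All 2x2 images, pixels in {-1, +1}:
          # [top-left, top-right, bottom-left, bottom-right]
          X = np.array([[a, b, c, d] for a in (-1, 1) for b in (-1, 1)
                        for c in (-1, 1) for d in (-1, 1)], dtype=float)

          def label(p):
              a, b, c, d = p
              if a == b == c == d:   return 0   # solid
              if a == c and b == d:  return 1   # vertical
              if a == b and c == d:  return 2   # horizontal
              return 3                          # diagonal / everything else
          Y = np.eye(4)[[label(p) for p in X]]  # one-hot targets

          W1 = rng.standard_normal((4, 8)) * 0.5    # input -> hidden weights
          W2 = rng.standard_normal((8, 4)) * 0.5    # hidden -> output weights

          lr = 0.1
          for _ in range(5000):
              H = np.tanh(X @ W1)          # hidden activations (squashed sums)
              out = np.tanh(H @ W2)        # output activations
              err = out - Y                # how wrong each output is
              # Backprop: chain rule through tanh (derivative is 1 - tanh^2)
              d_out = err * (1 - out**2)
              d_hid = (d_out @ W2.T) * (1 - H**2)
              W2 -= lr * H.T @ d_out / len(X)
              W1 -= lr * X.T @ d_hid / len(X)

          print(np.mean(out.argmax(1) == Y.argmax(1)))   # accuracy approaches 1.0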

  • @technoultimategaming2999
    @technoultimategaming2999 4 years ago

    I understand every topic, from computer engineering to calculus and AI basics, but putting them all together and making a digital functioning brain work is magic

  • @dbiswas
    @dbiswas 3 years ago

    Your explanation is by far the best. I am sure you are the best teacher so far. Thanks for uploading such an informative video.

  • @Jojooo64
    @Jojooo64 7 years ago +1

    Best video explaining neural networks I've found so far. Thank you a lot!

  • @AngryCanadian3
    @AngryCanadian3 7 years ago +1

    This is one of the best explanations of neural networks I have seen

  • @BrianKeeganMusic
    @BrianKeeganMusic 6 years ago

    This makes sense and all, but how do neural networks work for prediction modeling, say for a regression problem rather than image classification? Housing prices, for example?
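
    A common answer: keep the hidden layers as-is, but leave the output neuron linear (no squashing) and train against squared error, so the network can output any real number such as a price. A toy numpy sketch with invented data:

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, (200, 2))     # e.g. rescaled [size, rooms]
        y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.05, 200)

        W1 = rng.standard_normal((2, 8)) * 0.5
        w2 = rng.standard_normal(8) * 0.5

        lr = 0.05
        for _ in range(3000):
            H = np.tanh(X @ W1)             # squashed hidden layer
            pred = H @ w2                   # linear output: any real value
            err = pred - y
            d_hid = np.outer(err, w2) * (1 - H**2)
            w2 -= lr * H.T @ err / len(X)
            W1 -= lr * X.T @ d_hid / len(X)

        print(np.mean((pred - y) ** 2))     # mean squared error shrinks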

  • @shahidmahmood7252
    @shahidmahmood7252 7 years ago +1

    Superb!! The best explanation of DL that I have come across after completing Andrew Ng's Stanford ML course. I am a follower now.

  • @WilsonMar1
    @WilsonMar1 7 years ago +2

    I've seen a lot of videos and this is the most clear explanation. Exceptional graphics too.

  • @surtmcgert5087
    @surtmcgert5087 4 years ago +1

    So I have a quick question. In the network you said there were negative weights (black) and positive weights (white), so when you're making the network, do you pre-define which weights should be positive and negative, or do they work themselves out when you train the network?

    • @BrandonRohrer
      @BrandonRohrer  4 years ago

      They work themselves out during training. If you'd like to go deep on the topic, there's a course on it with detailed tutorials and code: e2eml.school/321

    • @surtmcgert5087
      @surtmcgert5087 4 years ago

      @@BrandonRohrer thank you very much, this has cleared a lot of stuff up

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 4 years ago +1

    At 4:32 this isn't a sigmoid function. The sigmoid squashes the value to between 0 and 1.

  • @Ivan_1791
    @Ivan_1791 5 years ago +2

    Best explanation I have seen so far man. Congratulations!

  • @slayemin
    @slayemin 7 years ago +12

    This explanation of back propagation was exactly what I needed. This is very clear and I now have higher confidence in my ability to create my own ANN from scratch.

    • @mehranmemnai
      @mehranmemnai 6 years ago +1

      Same here. My vision is clear

    • @brendawilliams8062
      @brendawilliams8062 2 years ago

      I just enjoy numbers. Anything to do with them is a fantastic thing.

  • @dexmoe
    @dexmoe 7 years ago +305

    Very detailed and clear explanation. Thank you for sharing! :)

  • @srinivasabugada2726
    @srinivasabugada2726 6 years ago

    You explained how neural networks work in a very simple and easy-to-understand manner. Thanks for sharing!

  • @yassinelamarti4157
    @yassinelamarti4157 4 years ago +1

    The best explanation for Neural Networks ever!

  • @antoinedorman
    @antoinedorman 4 years ago

    This is gold if you're looking to learn neural networks!! Well done

  • @ZeroRelevance
    @ZeroRelevance 7 years ago

    Great video, just one question. At roughly 7:56, where you show the shaded squares, in the bottom right it is a bit confusing. The far bottom right one is taking negative input values, which, going off what you said, means it should have black on the top and white on the bottom.

  • @barrotem5627
    @barrotem5627 3 years ago

    9:03. Shouldn't the bottom-most square in the 3rd column be different?
    It has 2 negative weights on its inputs, so shouldn't it be flipped? 🤨

  • @liamdev4855
    @liamdev4855 1 year ago +1

    The *Best* Video for beginners!

  • @GAV32
    @GAV32 7 years ago +1

    Thank you so much! I have been trying to create a neural network of my own for simple tasks, and I haven't been able to learn how until now. Thank you!

    • @TanNguyen-vm2fc
      @TanNguyen-vm2fc 7 years ago +1

      Gavin Haynes, please teach me how to start. I'm a beginner in this field. Thank you

    • @GAV32
      @GAV32 7 years ago

      I haven't made one of my own yet, sorry for the confusion. The difference is that now, I understand how one works, so I can start structuring my information.

  • @ziaurrehman2180
    @ziaurrehman2180 3 years ago

    After wasting too much time, I finally found the right place. Excellent explanation 👏👏👏💯

  • @jacolansac
    @jacolansac 5 years ago

    The internet needed a video like this one. Thanks a lot!

  • @Squash101
    @Squash101 1 year ago +1

    Is there some type of playlist for learning about neural networks?

    • @BrandonRohrer
      @BrandonRohrer  1 year ago

      Give this a try: ruclips.net/video/ILsA4nyG7I0/видео.html

  • @daksheshgupta7045
    @daksheshgupta7045 4 years ago +2

    I have a question: how do we determine the number of hidden layers, and the number of nodes per layer, in a neural network?

    • @BrandonRohrer
      @BrandonRohrer  4 years ago +1

      That's a big question. Here's a course I wrote to help answer it: e2eml.school/314

    • @daksheshgupta7045
      @daksheshgupta7045 4 years ago

      @@BrandonRohrer Thanks, I'll check it out !

  • @kademmohammed6836
    @kademmohammed6836 7 years ago +2

    By far the best video about ANNs I've watched, thank you so much, really clear

  • @GeorgePolzer
    @GeorgePolzer 7 years ago

    Hi Brandon, thank you for this video!
    What I am still missing in my understanding is:
    1) What does the "inside" of a trained CNN look like? What makes it trained, and where do the "weights", if that is the correct term, get stored that make it a trained CNN?
    2) You've covered how the neural net does the actual identification, but not the math/mechanics of how it gets trained.
    3) The same question asked a different way: how is gradient descent implemented, i.e. does it happen during the training phase?
    4) Is part of the training that we have to feed the NN batches of images that are, let's say, cats vs. dogs? That means we are essentially labeling the data, correct?
    5) So once the labeled data has trained the NN, what does that mean in terms of the state the NN is in to be ready to predict?
    Is the "learned" part represented in the "filters", and if not, how are the filters determined?
    Lastly, I see images of features that the NN has identified at different stages, some of which are pretty good pictures of faces. How are those features rendered? Is that part of the training process?
    Thank you.

    • @GeorgePolzer
      @GeorgePolzer 7 years ago

      --------
      An additional question: how does feature engineering play into training a CNN? (A rough sketch addressing these questions follows below.)
      Thank you.
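
      On the questions above, a rough numpy sketch of the moving parts: the "inside" of a trained network is just its weight arrays (question 1); training is gradient descent on a loss computed from labeled examples (questions 2-4); and afterwards the very same arrays are what the network uses to predict (question 5). This is an illustrative generic net with invented sizes and data, not a CNN with learned filters:

          import numpy as np

          rng = np.random.default_rng(0)
          weights = {"W1": rng.standard_normal((4, 8)) * 0.1,   # what a "trained
                     "W2": rng.standard_normal((8, 2)) * 0.1}   # model" stores

          def forward(x, w):
              h = np.tanh(x @ w["W1"])
              return np.tanh(h @ w["W2"]), h

          # One gradient-descent step on a labeled batch ("cat" = class 0,
          # "dog" = class 1 stand in for the labels you attach to images).
          def train_step(x, y_onehot, w, lr=0.1):
              out, h = forward(x, w)
              d_out = (out - y_onehot) * (1 - out**2)    # gradient descent happens
              d_hid = (d_out @ w["W2"].T) * (1 - h**2)   # here, during training
              w["W2"] -= lr * h.T @ d_out / len(x)
              w["W1"] -= lr * x.T @ d_hid / len(x)

          x = rng.uniform(-1, 1, (10, 4))                # a fake labeled batch
          y = np.eye(2)[rng.integers(0, 2, 10)]
          train_step(x, y, weights)

          # "Saving the trained model" just means saving the weight arrays:
          # np.savez("model.npz", **weights)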

  • @opeller
    @opeller 3 years ago +2

    Thank you so much, really helped me understand several things that were hard to understand during class.

  • @makaipost260
    @makaipost260 7 years ago +1

    +Brandon Rohrer How do you determine the weights? [2:36]

    • @MrBrobasaur
      @MrBrobasaur 7 years ago

      I would also very much like to know this.

    • @KlemensSoftware
      @KlemensSoftware 7 years ago +1

      Makai Post, in the beginning you assign them random numbers, because you have to start somewhere. Later you change them to get the optimal values with various methods.
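
      In code, that starting point is often just small random values (the layer sizes here are arbitrary, for illustration):

          import numpy as np

          rng = np.random.default_rng()
          n_in, n_out = 4, 8                            # arbitrary layer sizes
          W = rng.uniform(-1, 1, (n_in, n_out)) * 0.1   # small random start,
                                                        # refined later by training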

    • @AndriyLinnyk
      @AndriyLinnyk 7 years ago

      REMEMBER...every tutorial has its retarded parts.....poor explanation.. I listened to that part like 5 times ...and it still sound retarded...ugghhhh

  • @Thejosiphas
    @Thejosiphas 7 years ago

    I like how much effort you put into making these ideas accessible

  • @Kino-Imsureq
    @Kino-Imsureq 6 years ago +3

    18:40 That's pretty much the same way of expressing error/weight, since you can really just do cancelling

  • @pipchenko
    @pipchenko 7 years ago

    Thank you for the explanation! I have a question: why should weights be in the range -1 to 1, or 0 to 1? Why can't we take -2 to 2, or -3 to 3, and so on? Anyway, we use some activation function, which will squash the output to 0 to 1 or -1 to 1.

  • @pepham
    @pepham 6 years ago

    Thank you for the video. So the basic building blocks of a NN are weighted connections, the sigmoid function, and the rectified linear unit, according to this video. Do most people just put the sigmoid and rectifier in series after each neuron? If not, how does one figure out when to use the sigmoid function and when to use the rectified linear unit?
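
    A common pattern (a convention, not a hard rule): they are used as alternatives per layer rather than chained in series, with ReLUs in the hidden layers and a sigmoid at the output only when you want a 0-to-1 value such as a probability. A minimal sketch:

        import numpy as np

        def relu(a):    return np.maximum(a, 0.0)
        def sigmoid(a): return 1.0 / (1.0 + np.exp(-a))

        def forward(x, W1, W2):
            h = relu(x @ W1)           # hidden layer: rectified linear
            return sigmoid(h @ W2)     # output layer: squashed to 0..1

        rng = np.random.default_rng(0)
        W1, W2 = rng.standard_normal((3, 5)), rng.standard_normal((5, 1))
        print(forward(np.ones(3), W1, W2))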

  • @aronhighgrove4100
    @aronhighgrove4100 5 years ago +2

    Thanks for your excellent explanation! At 4:05 it seems you really use tanh, not the sigmoid/logistic function, since the sigmoid goes from 0.0 to 1.0 but your squashing function goes from -1.0 to 1.0.
    In particular, the definition you give at 20:26 is the sigmoid/logistic function going from 0.0 to 1.0, which does not work out with your example network.
    It's a minor detail, but I tripped over it when I tried to actually compute your example step by step. Maybe worth pointing out with a note in the video? Thanks again!

    • @bhadriv4389
      @bhadriv4389 2 years ago

      Phew, thanks for confirming. I thought I was going crazy and that my previous notes on activations were wrong. Yes, this is tanh and not sigmoid

  • @jonasls
    @jonasls 7 years ago +2

    One of the best videos out there

  • @claritise
    @claritise 7 years ago +1

    I have a question: around 4:00, just before you explain the sigmoid function, the summed weighted value was -1.075, but then one frame later it became positive 1.075. Was there something significant I missed?

    • @MrBrobasaur
      @MrBrobasaur 7 years ago +1

      I thought no one noticed this and posted the question myself. He should post the squashing function that he's using.

    • @BrandonRohrer
      @BrandonRohrer  7 years ago

      Good catch Anna. Here's what I wrote to Michael: Good catch Michael. Yes, this is a straight up error and oversimplification on my part. The error is that it should be -1.075, not positive 1.075. The oversimplification is that the logistic function (that I show the equation for) varies between 0 and 1, not -1 and 1. You are spot on.

    • @claritise
      @claritise 7 years ago

      So the function actually squashes it between 0 and 1 rather than -1 to 1?

  • @Curiumx
    @Curiumx 7 years ago

    To figure out the error-to-weight correspondence function, would you first iterate the neural network (by changing weight values and testing the resulting error) a few times the "numerically expensive" way and figure out the error function for each neuron? More generally, how would you go about finding the error function for each neuron? Is the function always parabolic?
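
    For intuition, here is a one-weight toy example comparing the expensive numerical approach (nudge the weight, re-run, see how the error moves) against the chain-rule slope that backprop computes directly. The error-vs-weight curve is generally not a parabola, though squared error makes it locally bowl-shaped near a minimum:

        import numpy as np

        def error(w, x=1.5, target=0.8):
            return (np.tanh(w * x) - target) ** 2   # one-weight "network"

        w, eps = 0.3, 1e-6
        numerical = (error(w + eps) - error(w - eps)) / (2 * eps)

        # Analytic, via the chain rule: d/dw (tanh(w*x) - t)^2
        out = np.tanh(w * 1.5)
        analytic = 2 * (out - 0.8) * (1 - out**2) * 1.5

        print(numerical, analytic)   # the two slopes agree closely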

  • @Leonardo-fm7fj
    @Leonardo-fm7fj 3 years ago

    I think that at 7:51, the right-most, lowest neuron should be "max activated" when the upper row of pixels is black and the lower row of pixels is white (i.e., the negative of what is displayed in the video; see the lowest right-most square of pixels and the respective neuron)

  • @karannchew2534
    @karannchew2534 5 years ago

    Watched it many times but still confused. (1) Why does the 4-pixel example start with shaded grey, but then the example (from 09:38) uses black and white? (2) At 03:01, how come the weights change from 1.0s to -0.2, 0.0, 0.8, -0.5? Where do these values come from?

  • @ImtiazRashid53
    @ImtiazRashid53 5 years ago +1

    I have one question. In the chaining example, you called 'e' the output, but while describing it you interpreted 'e' as the error. Why is that? And also, is it possible to know the error function?
    Thanks in advance.

  • @solaimanjawad5015
    @solaimanjawad5015 7 years ago

    Aren't the axes for the sigmoid function at 4:12 a little messed up? The sigmoid function can't be negative; it's between 0 and 1.

  • @tobiaskarl4939
    @tobiaskarl4939 4 years ago +1

    Excellently explained!
    Having the automatic subtitle feature enabled would have been nice.

    • @BrandonRohrer
      @BrandonRohrer  4 years ago

      Thanks Tobias! Subtitles for English (and a dozen other languages!) are enabled. I hope they work for you now.

    • @tobiaskarl4939
      @tobiaskarl4939 4 years ago

      @@BrandonRohrer yes, thx a lot.

  • @GnuSnu
    @GnuSnu 1 year ago +2

    Could you make a video about the PPO algorithm?

    • @BrandonRohrer
      @BrandonRohrer  1 year ago +1

      It's on my (very long) list of things I'd like to do.

  • @nemuccio1
    @nemuccio1 4 years ago

    Hi, great tutorial, but I would like to make a clarification: at 7:50, the fourth drawing at the bottom (black-white and black-black) of the second layer is wrong.
    The black-black connections would invert the design; the connection lines should be white-white.
    Also, shouldn't the sigmoid function go from 0 to 1?
    It looks like tanh, -1 to +1, to me.
    Correct me if I'm wrong.
    Thank you
    ric

  • @shivamkeshri487
    @shivamkeshri487 7 years ago +1

    Wow, awesome! I never found a video like this, with a simple example and such clarity about neural networks. It's a tough topic to explain, but you make it easy... thanks!

  • @kd4pba
    @kd4pba 2 years ago

    Damn, this is perfect. I had kept backing away from all this until now. I finally get it. You really helped me, Brandon, thank you.

  • @davidguaita
    @davidguaita 7 years ago +1

    You're the man at explaining these things. Thank you so much.

  • @rahimdehkharghani
    @rahimdehkharghani 4 years ago

    This is the best explanation I have ever seen of neural networks. Thanks very much!

    • @ahuttee
      @ahuttee 4 years ago +1

      You should check out 3blue1brown's series on neural networks, it's absolutely beautiful