Neural Networks Explained from Scratch using Python

  • Published: Nov 16, 2024

Comments • 218

  • @BotAcademyYT
    @BotAcademyYT  3 years ago +79

    Please share this video if you know somebody whom it might help. Thanks :)
    edit: Some people correctly identified the 3Blue1Brown style of the video. That is because I am using the python library manim (created by 3Blue1Brown) for the animations. Link and more information in the description. Huge thanks for all the likes and comments so far. You guys are awesome!

    • @walidbezoui
      @walidbezoui 2 years ago

      WOW FIRST TIME TO KNOW HOW 3Blue1Brown Work Awesoome

    • @jonathanrigby1186
      @jonathanrigby1186 2 years ago

      Can you plz help me with this .. I want a chess ai to teach me what it learnt
      ruclips.net/video/O_NglYqPu4c/видео.html

    • @spendyala
      @spendyala 1 year ago +3

      Can you share your video manim code?

    • @twanwolthaus
      @twanwolthaus 1 year ago

      Incredible video. Not because of your insight, but because how you use visuals to represent the information as digestible as possible.

  • @hepengye4239
    @hepengye4239 3 years ago +147

    As an ML beginner, I know how much effort and time is needed for such visualization of a program. I would like to give you a huge thumb! Thank you for the video.

    • @xcessiveO_o
      @xcessiveO_o 3 years ago +4

      a thumbs up you mean?

    • @EagleMasterNews
      @EagleMasterNews 8 months ago +8

      His thumb is now massive

    • @Ibrahim-o3m7m
      @Ibrahim-o3m7m 4 months ago

      I dont think he want no thumbs

    • @abdulrafaynawaz1335
      @abdulrafaynawaz1335 1 month ago

      He will be in pain if you will give him such a huge thumb... just give him a thumbs up

  • @blzahz7633
    @blzahz7633 2 years ago +92

    I can't say anything that hasn't been said already: This video is golden. The visualization, explaining, everything is just so well done. Phenomenal work.
    I'm basically commenting just for the algo bump this video rightfully deserves.

  • @ejkitchen
    @ejkitchen 3 years ago +78

    FANTASTIC video. Doing Stanford's Coursera Deep Learning Specialization and they should be using your video to teach week 4. Much clearer and far better visualized. Clearly, you put great effort into this. And kudos to using 3Blue1Brown's manim lib. Excellent idea. I am going to put your video link in the course chat room.

  • @magitobaetanto5534
    @magitobaetanto5534 3 years ago +42

    You've just explained very clearly in a single video what others try to vaguely explain in series of dozens videos. Thank you. Fantastic job! Looking forward to more great videos from you.

  • @craftydoeseverything9718
    @craftydoeseverything9718 1 year ago +2

    I know I'm watching this 2 years after it was released but I really can't stress enough how helpful this is. I've seen heaps of videos explaining the math and heaps of videos explaining the code but this video really helped me to link the two together and demystify what is actually happening in both.

  • @pisoiorfan
    @pisoiorfan 1 year ago +3

    That's it! Comprehensive training code loop for a 1 hidden layer NN in just 20 lines. Thank you sir!

  • @Transc3nder
    @Transc3nder 3 years ago +11

    This is so interesting. I always wondered how a neural net works... but it's also good to remind ourselves that we're not as clever as we thought. I feel humbled knowing that there's some fierce minds out there working on these complicated problems.

  • @hridumdhital
    @hridumdhital 4 months ago

    As someone beginning machine learning, this video was so useful for really getting a deep understanding of how neural networks work!

  • @cactus9277
    @cactus9277 3 years ago +6

    for those actually implementing something, note at 12:08 the values in the hidden layer change back to how they were pre sigmoid application

    • @BotAcademyYT
      @BotAcademyYT  3 years ago +1

      good point! Must have missed it when creating the video.

    • @robertplavka6194
      @robertplavka6194 1 year ago

      Yes, but wasn't the value before the sigmoid in the last cell 9? Precisely, I got something like 8.998.
      If I missed something, please explain; I want to know why that is.
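
For readers implementing along, a minimal sketch of the forward pass into the hidden layer (the 784-20 shapes and the `w_i_h`/`b_i_h` names mirror the video's setup, but are assumptions here): every later step must consume the post-sigmoid activations, not the raw pre-activations shown at 12:08.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)

# Hypothetical shapes matching the video's 784-20-10 setup.
x = rng.random((784, 1))                   # one flattened 28x28 image
w_i_h = rng.uniform(-0.5, 0.5, (20, 784))  # input-to-hidden weights
b_i_h = np.zeros((20, 1))                  # hidden biases

# Forward pass: downstream layers use h, the post-sigmoid values.
h_pre = b_i_h + w_i_h @ x
h = sigmoid(h_pre)

print(h.min() > 0 and h.max() < 1)  # True: sigmoid maps into (0, 1)
```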

  • @ElNachoMacho
    @ElNachoMacho 1 year ago +2

    This is the kind of video that I was looking for to get beyond the basics of ML and start gaining a better and deeper understanding. Thank you for putting the effort into making this great video.

  • @ThomasCaetano1970
    @ThomasCaetano1970 8 months ago

    This is a great video even for those who are not into this field. Great voice and explanation of how neural networks work.

  • @mici432
    @mici432 3 years ago +2

    Saw your post on Reddit. Thank you very much for the work you put in your videos. New subscriber.

  • @GaithTalahmeh
    @GaithTalahmeh 3 years ago +8

    Welcome back dude!
    I have been waiting for your comeback for so long
    Please dont go away this long next time :)
    Great editing and audio quality btw
    Reminds me of 3b1b

    • @BotAcademyYT
      @BotAcademyYT  3 years ago +3

      Thanks! I'll try uploading more consistently now that I've finished my Thesis :)

  • @photorealm
    @photorealm 7 months ago

    Excellent video and accompanying code. I just keep staring at the code, its art. And the naming convention with the legend is insightful, the comments tell the story like a first class narrator. Thank you for sharing this.

  • @gonecoastaltoo
    @gonecoastaltoo 3 years ago +2

    Such a great video -- high quality and easy to follow. Thanks.
    One typo in Additional Notes; (X,) + (1,) == (X, 1) -- this is shown correctly in the video, but in the Notes you show result as (1, X)

    • @BotAcademyYT
      @BotAcademyYT  3 years ago +1

      Thank you very much for pointing out the inconsistency. You're right, it is wrong in the description. I just corrected it.
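
A quick NumPy check of the corrected note: concatenating shape tuples with `+` appends the new axis on the right, giving `(X, 1)` rather than `(1, X)`.

```python
import numpy as np

# Tuple concatenation, as in the corrected note: the new axis is
# appended on the right, giving (X, 1) rather than (1, X).
shape = (784,) + (1,)
print(shape)           # (784, 1)

# Typical use: turn a flat MNIST image vector into a column vector.
img = np.zeros(784)
col = img.reshape(img.shape + (1,))
print(col.shape)       # (784, 1)
```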

  • @pythonbibye
    @pythonbibye 3 years ago +34

    I can tell you put a lot of work into this. You deserve more views! (also commenting for algorithm)

  • @AVOWIRENEWS
    @AVOWIRENEWS 9 months ago

    It's great to see content that helps demystify complex topics like neural networks, especially using a versatile language like Python! Understanding neural networks is so vital in today's tech-driven world, and Python is a fantastic tool for hands-on learning. It's amazing how such concepts, once considered highly specialized, are now accessible to a wider audience. This kind of knowledge-sharing really empowers more people to dive into the fascinating world of AI and machine learning! 🌟🐍💻

  • @ThootenTootinTabootin
    @ThootenTootinTabootin 10 months ago

    "does some magic." Great explanation. Thanks.

  • @michaelbarry755
    @michaelbarry755 1 year ago

    Amazing video. Especially the matrix effect on the code in the first second. Love it.

  • @OrigamiCreeper
    @OrigamiCreeper 3 years ago +5

    Nice job with the explanation!!! I felt like I was watching a 3blue1brown video! A few notes:
    1.)You should run through examples more often because that is one of the best ways to understand a concept. For example. you should have run through the algorithm for the cost function so people understand it intuitively.
    2.)It would be nice if you went more in depth behind backpropagation and why it works.
    Things you did well:
    1.)Nice job with the animations and how you simplified them for learning purposes, the diagrams would be much harder to understand if there was actually 784 input layers.
    2.)I love the way you dissect the code line by line!
    I cant wait to see more videos by you I think this channel could get really big!

    • @BotAcademyYT
      @BotAcademyYT  3 years ago +1

      Thank you very much for the great feedback!

  • @eldattackkrossa9886
    @eldattackkrossa9886 3 years ago +3

    oh hell yeah :) just got yourself a new subscriber, support your small channels folks

  • @Lambertusjan
    @Lambertusjan 2 years ago +2

    Thanks for a very clear explanation. I was doing the same from scratch in python, but got stuck at dimensioning the weight matrices correctly, especially in this case with the 784 neuron input. Now i can check if this helps me to complete my own three layer implementation. 😅

  • @dexterroy
    @dexterroy 8 months ago

    Listen to the man, listen well. He is giving accurate and incredibly valuable knowledge and information that took me years to learn.

  • @vxqr2788
    @vxqr2788 3 years ago +1

    Subscribed. We need more channels like this!

  • @angelo9915
    @angelo9915 3 years ago +2

    Amazing video! The explanation was very clear and I understood everything. Really hope you're gonna be posting more videos on neural networks.

  • @eirikd1682
    @eirikd1682 2 years ago +3

    Great video! However, you say that Mean Squared Error is used as the loss function, and you also calculate it, yet "o - l" (seemingly the derivative of the loss function) isn't the derivative of MSE. It's the derivative of categorical cross-entropy ( -np.sum(Y * np.log(output)), with softmax before it). Anyways, keep up the great work :)
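
A hedged aside on this point: the `o - l` shortcut is exactly the gradient of cross-entropy taken through the output nonlinearity (sigmoid with binary cross-entropy, or softmax with categorical cross-entropy); it is not the MSE gradient. A small finite-difference check with illustrative values:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One output unit with label l and pre-activation z (illustrative values).
l, z = 1.0, 0.3
o = sigmoid(z)

# With sigmoid + binary cross-entropy, the sigmoid derivative cancels
# and the gradient with respect to z collapses to o - l.
analytic = o - l

def bce(z):
    o = sigmoid(z)
    return -(l * np.log(o) + (1 - l) * np.log(1 - o))

# Central finite difference of the loss confirms the shortcut.
eps = 1e-6
numeric = (bce(z + eps) - bce(z - eps)) / (2 * eps)
print(abs(analytic - numeric) < 1e-6)  # True
```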

  • @doomcrest8941
    @doomcrest8941 3 years ago +2

    awesome video :) i did not know that you could use that trick for the mse 👍

  • @susakshamjain1926
    @susakshamjain1926 5 months ago

    Best video of ML so far i have seen.

  • @andrewfetterolf7042
    @andrewfetterolf7042 2 years ago +2

    Well done, i couldnt ask for a better video, Germans make the best and most detailed educational videos here on youtube. The pupils of the world say thank you.

  • @Lukas-qy2on
    @Lukas-qy2on 1 year ago

    This video is pretty great, although i had to pause and sketch along and keep referring to the code you showed, it definitely helped me understand better how to do it

  • @devadethan9234
    @devadethan9234 1 year ago

    yes , finally I had found the golden channel
    thanks budd

  • @malamals
    @malamals 3 years ago +2

    Very well explained. I really liked it. making noise for you. Please make such video to understand NLP in the same intuitive way. Thank you :)

  • @kousalyamara8746
    @kousalyamara8746 4 months ago

    The BEST video ever! Hats off to your efforts and a Big Big Thanks for imparting the knowledge to us. I will never forget the concept and ever. 😊

  • @napomokoetle
    @napomokoetle 1 year ago

    Wow! Thanks you so much. You rock. Now looking forward to "Transformers Explained from Scratch using Python" ;)

  • @BlackSheeeper
    @BlackSheeeper 3 years ago +2

    Glad to have you back :D

  • @jordyvandertang2411
    @jordyvandertang2411 3 years ago +2

    hey, this was a great intro! It gave a good playground to experiment with: increasing the nodes of the hidden layer, changing the activation function, and even adding an additional hidden layer to evaluate the effects/effectiveness! With more epochs I could get it above 99% accuracy (on the training set, so it might be overfitted, but hey)

  • @gustavgotthelf7117
    @gustavgotthelf7117 6 months ago

    Best video to this kind of topic on the whole market. Very well done! 😀

  • @chrisogonas
    @chrisogonas 2 years ago

    Superbly illustrated! Thanks for sharing.

  • @mrmotion7942
    @mrmotion7942 3 years ago +2

    Love this so much. So organised and was really helpful. So glad you put the effort into the animation. Keep up the great work!

  • @mateborkesz7278
    @mateborkesz7278 10 months ago

    Such an awesome video! Helped me a lot to understand neural networks. Thanks a bunch!

  • @bdhaliwal24
    @bdhaliwal24 1 year ago

    Fantastic job with your explanation and especially the animations. All of this really helped to connect the dots

  • @brijeshlakhani4155
    @brijeshlakhani4155 3 years ago +2

    This is really helpful for beginners!! Great work always appreciated bro!!

  • @saidhougga2023
    @saidhougga2023 2 years ago

    Amazing visualized explanation

  • @kallattil
    @kallattil 10 months ago

    Excellent content and illustration 🎉

  • @LetsGoSomewhere87
    @LetsGoSomewhere87 3 years ago +2

    Making noise for you, good luck!

  • @oliverb.2083
    @oliverb.2083 3 years ago +2

    For running the code on Ubuntu 20.04 you need to do this:
    git clone github.com/Bot-Academy/NeuralNetworkFromScratch.git
    cd NeuralNetworkFromScratch
    sudo apt-get install python3 python-is-python3 python3-tk -y
    pip install --user poetry
    ~/.local/bin/poetry install
    ~/.local/bin/poetry run python nn.py

  • @jimbauer9508
    @jimbauer9508 3 years ago +6

    Great explanation - Thank you for making this!

  • @v4dl45
    @v4dl45 1 year ago

    Thank you for this amazing video. I understand the huge effort in the animations and I am so grateful. I believe this is THE video for anyone trying to get into machine learning.

  • @alangrant5278
    @alangrant5278 8 months ago

    Gets even more tricky at 50 metres one handed - weak hand!

  • @kenilbhikadiya8073
    @kenilbhikadiya8073 5 months ago

    Great explanation, and hats off to your efforts on these visualisations!!! 🎉❤

  • @Michael-ty2uo
    @Michael-ty2uo 9 months ago

    The first minute of this video got me asking who this dude is and whether he makes more videos explaining complicated topics in a simple way. pls do more

  • @VereskM
    @VereskM 3 years ago +1

    Excellent video. Best of the best :) I want to see more, and a slower walkthrough of the backpropagation algorithm; those are the most interesting moments. Maybe it would be better to make step-by-step slides?

  • @johannesvartdal624
    @johannesvartdal624 9 months ago

    This video feels like a 3Blue1Brown video, and I like it.

  • @DV-IT
    @DV-IT 1 month ago

    This video is perfect for beginners, thank u so much

  • @HanzoHasashi-bv7rm
    @HanzoHasashi-bv7rm 1 year ago

    Video Level: Overpowered!

  • @Darth_Zuko
    @Darth_Zuko 3 years ago +1

    This is one of the best explained videos i've seen for this. great job!
    Hope this comment helps :)

  • @Ibrahim-o3m7m
    @Ibrahim-o3m7m 4 months ago

    How would you do the 50000 samples for training? Great video by the way!

  • @neuralworknet
    @neuralworknet 1 year ago +3

    12:40 Why don't we use the derivative of the activation function for delta_o, when we do use it for delta_h? Any answers???

    • @hidoxy1
      @hidoxy1 11 months ago

      I was confused about the same thing, did you figure it out?

  • @rverm1000
    @rverm1000 8 months ago

    Thanks. I wonder if I could train it for other pictures?

  • @onlineinformation5320
    @onlineinformation5320 9 months ago +1

    As a neural network, I can confirm that we work like this

    • @ziphy_6471
      @ziphy_6471 6 months ago

      Well , your brain is basically a complex neural network
      Plus, our body isn't us; our brain is us. We are just a complex meat neural network controlling a big fleshy, meaty and boney body.

  • @quant-prep2843
    @quant-prep2843 3 years ago

    Most intuitive video on the whole planet. Likewise, can you come up with a brief explanation of the NEAT algorithm as well?

    • @BotAcademyYT
      @BotAcademyYT  3 years ago

      Thanks! I‘ll add it to my list. If more people request it or if I‘m out of video ideas, I‘ll do it :-)

    • @quant-prep2843
      @quant-prep2843 3 years ago

      @@BotAcademyYT Nooo, we cant wait.... i shared this video across all discord servers, and most of em asked , wish this guy could make a video like this on NEAT or hyperNEAT. because there isnt much resources out there. Hope you will make it!

  • @Scronk03
    @Scronk03 3 years ago +1

    Thank you for this. Fantastic video.

  • @Hide310122
    @Hide310122 2 years ago +3

    Such an amazing video with lots of visualization. But I don't think you can simplify delta_o to "o - l" with whatever mathematical tricks. It needs to be "(o - l) * (o * (1 - o))".

    • @Kuratius
      @Kuratius 1 year ago +2

      I think you're right, but for some reason it seems to work anyway

    • @neuralworknet
      @neuralworknet 1 year ago

      yess i have been trying to understand this for weeks 🤯

  • @jonnythrive
    @jonnythrive 2 years ago

    This was actually very good! Subscribed.

  • @asfandiyar5829
    @asfandiyar5829 2 years ago

    You create some amazing content. Really well explained.

  • @miguelhernandez3730
    @miguelhernandez3730 3 years ago +1

    Excellent video

  • @itzblinkzy1728
    @itzblinkzy1728 3 years ago +1

    Amazing video I hope this gets more views.

  • @nomnom8127
    @nomnom8127 3 years ago +2

    Great video

  • @neliodiassantos
    @neliodiassantos 3 years ago +1

    Great work! Thanks for the explanation

  • @2wen98
    @2wen98 1 year ago +1

    how could i split the data into training and testing data?
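
One common approach, sketched here with hypothetical small array sizes (the real MNIST arrays would be shaped like `(60000, 784)`): shuffle the indices once, then slice off the first 80% for training and keep the rest for testing.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins shaped like MNIST data (hypothetical small sizes for the demo).
images = rng.random((100, 784))
labels = rng.integers(0, 10, 100)

# Shuffle indices once, then slice: first 80% train, rest test.
idx = rng.permutation(len(images))
split = int(0.8 * len(images))
x_train, y_train = images[idx[:split]], labels[idx[:split]]
x_test, y_test = images[idx[split:]], labels[idx[split:]]

print(x_train.shape, x_test.shape)  # (80, 784) (20, 784)
```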

  • @maxstengl6344
    @maxstengl6344 3 years ago +2

    at 14:32 you use the updated weights (to the output layer) to calculate the hidden layer deltas. I never saw anyone doing it this way. Usually, the old weights are used and all weights are updated after backprop. I don't think it makes a large difference but I wonder if this is intentional or I am missing something.

    • @FlyingUnosaur
      @FlyingUnosaur 2 years ago +2

      I also think this is a mistake. Andrew Ng emphasized that the weights must be updated after calculating the derivatives.

    • @neuralworknet
      @neuralworknet 1 year ago

      ​@@FlyingUnosauryou are talking about the derivative of activation function right?

    • @appliiblive
      @appliiblive 1 year ago

      Thank you so much for posting this comment, i was wondering why my model was losing accuracy with every epoch. With that little change my accuracy jumped from 20'000 / 60'000 to 56'000 / 60'000
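
A sketch of the fix discussed in this thread, with hypothetical tensors for a 784-20-10 network and a single sample: compute every delta before updating any weights, so the hidden delta is propagated back through the old output weights.

```python
import numpy as np

rng = np.random.default_rng(1)
learn_rate = 0.01

# Hypothetical tensors for a 784-20-10 network and one sample.
h = rng.random((20, 1))          # hidden activations (post-sigmoid)
o = rng.random((10, 1))          # output activations
label = np.eye(10)[:, [3]]       # one-hot label as a (10, 1) column

w_h_o = rng.uniform(-0.5, 0.5, (10, 20))  # hidden-to-output weights

# Compute BOTH deltas before changing any weights, so the hidden
# delta flows back through the pre-update output weights.
delta_o = o - label
delta_h = w_h_o.T @ delta_o * (h * (1 - h))

# Only then apply the updates.
w_h_o += -learn_rate * delta_o @ h.T

print(delta_h.shape)  # (20, 1)
```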

  • @dormetulo
    @dormetulo 3 years ago +2

    Amazing video really helpful!

  • @EnglishRain
    @EnglishRain 3 years ago +2

    Great content, subscribed!

  • @BooleanDisorder
    @BooleanDisorder 8 months ago

    Now, do it again but IN Scratch!😊

  • @hchattaway
    @hchattaway 1 year ago

    Excellent video and explanation of this classic intro to cv... However, when I clone the repo, install poetry and run poetry install, it throws a ton of errors. is there just a requirements.txt file for this that can be used? I am using Ubuntu 23.04 and Python 3.11.3

  • @curtezyt1984
    @curtezyt1984 1 year ago +1

    you got a subscriber ❤

  • @noone-du5qu
    @noone-du5qu 5 months ago

    bro how did u make the first layer know how many color-scale values should be used for the img

  • @cocoarecords
    @cocoarecords 3 years ago +2

    Wow amazing

  • @Maxou
    @Maxou 9 months ago

    Really nice video, keep doing those!!

  • @yoctometric
    @yoctometric 3 years ago +1

    Algy comment right here, thanks for the wonderful video!

  • @jnaneswar1
    @jnaneswar1 2 years ago

    extremely thankful

  • @wariogang1252
    @wariogang1252 3 years ago +1

    Great video, really interesting!

  • @0xxi1
    @0xxi1 1 year ago

    you are the man! My respect goes out to you

  • @rejeanto6508
    @rejeanto6508 1 year ago

    I have a data set of the same size; how do I swap in my data set? I have tried to change it but failed. BTW thank you, this video really helped me

  • @viktorvegh7842
    @viktorvegh7842 8 months ago

    11:32 Why are you checking for the highest value? I don't understand: when the highest is 0.67, it's classified as 0. Can you please explain? Like, what would this number have to be, for example, for the input to be classified as 1?
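
On the classification question: there is no fixed threshold; the predicted digit is simply the index of the largest output activation. A tiny illustration with made-up activations:

```python
import numpy as np

# Made-up output activations for the ten digit classes 0..9.
output = np.array([0.67, 0.12, 0.05, 0.03, 0.02, 0.04, 0.01, 0.02, 0.03, 0.01])

# No threshold is involved: the predicted digit is the index of the
# largest activation, so 0.67 at index 0 means the network says "0".
prediction = output.argmax()
print(prediction)  # 0
```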

  • @danielniels22
    @danielniels22 3 years ago

    hello, will you do one with Cross Entropy as the loss function? Or do you know any video for reference? Because I get too confused reading a book or paper :(

  • @jassi9022
    @jassi9022 3 years ago +2

    brilliant

  • @xXxxSharkLoverxXx
    @xXxxSharkLoverxXx 3 months ago

    I can't find tutorials for Java Script, so I am using this. How do I not use any external downloads, or with my own data that I gather later?

  • @hsa1727
    @hsa1727 3 years ago

    After training I printed the weights and biases, but this is what I get:
    ([[nan nan nan ... nan nan nan]
    [nan nan nan ... nan nan nan]
    [nan nan nan ... nan nan nan]
    ...
    [nan nan nan ... nan nan nan]
    [nan nan nan ... nan nan nan]
    [nan nan nan ... nan nan nan]])
    I don't understand... is there anything I can do?
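
NaNs like these usually mean the forward pass overflowed numerically. Two common remedies, offered as assumptions rather than a certain diagnosis of this particular run: scale the inputs to [0, 1], and clip pre-activations so the exponential inside the sigmoid cannot overflow (lowering the learning rate also helps).

```python
import numpy as np

# 1. Scale raw 0-255 pixel values down to 0-1 before training.
images = np.array([[0, 128, 255]], dtype=np.float64) / 255.0

# 2. Clip pre-activations so np.exp cannot overflow inside sigmoid.
def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# Even extreme pre-activations now yield finite outputs.
print(np.isfinite(sigmoid(np.array([-1e6, 0.0, 1e6]))).all())  # True
```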

  • @Ach_4x
    @Ach_4x 6 months ago

    Hey guys, can someone help me? I have a project where I need to define an automaton for handwritten digit recognition, and I still don't know how to define the states and transitions for my automaton.

  • @_Slach_
    @_Slach_ 3 years ago +1

    11:31 What if the first output neuron wasn't the one with the highest value? Does that mean that the neural network classified the image incorrectly?

  • @Ragul_SL
    @Ragul_SL 8 months ago

    How is the hidden layer size set to 20? How is it decided?

  • @hynesie11
    @hynesie11 9 months ago

    for the first node in the hidden layer you added the bias node of 1, for the rest of the nodes in the hidden layer you multiplied the bias node of 1 ??

  • @OK-dy8tr
    @OK-dy8tr 3 years ago +1

    Lucid explanation !!

  • @himanshusethi8246
    @himanshusethi8246 3 years ago +1

    Thanks a lot sir

  • @heckyes
    @heckyes 1 year ago

    Do these initial layer numbers have to be between 0 and 1? Can't they just be any number if the activation function will clamp them down to be between 0 and 1?

  • @neccatisasmaz5406
    @neccatisasmaz5406 3 years ago

    Great explanation. Plz what's the software used to make this video?

    • @BotAcademyYT
      @BotAcademyYT  3 years ago

      Thanks! github.com/ManimCommunity/manim

  • @cryptoknightatheaume6462
    @cryptoknightatheaume6462 2 years ago

    Awesome, man. Could you please tell me how you made this neural-network animation? It's really nice

  • @mr.talalai3416
    @mr.talalai3416 1 year ago

    Finally, someone who explained this to me so I understand