Why neural networks aren't neural networks

  • Published: 23 Dec 2024

Comments • 478

  • @kaisle8412
    @kaisle8412 3 years ago +763

    "Let's watch that animation again, since it took me so long to make"

    • @phafid
      @phafid 3 years ago +20

      as someone who is struggling with math, this is the equivalent of Picasso

    • @roseproctor3177
      @roseproctor3177 3 years ago +5

      Lol it was a great animation though

    • @tielessin
      @tielessin 3 years ago +8

      Honestly, who can not relate haha

    • @mickolesmana5899
      @mickolesmana5899 3 years ago +1

      fair enough

    • @mxmilkiib
      @mxmilkiib 3 years ago +3

      Yelped, paused, rushed to the comments to thumb up the one about that line.

  • @roygalaasen
    @roygalaasen 3 years ago +361

    First video? Off to a VERY promising start. This is just great! Hope low numbers won’t deter you from making more. (Or the amount of work.) Hope to see more from you.

    • @samsartorial
      @samsartorial 3 years ago +93

      Thanks! I don't think I'm going to do a ton more, since it took me like a week of 14-hour days to make. But I was thinking I might do a video on transformers NNs or something unrelated like reactivity in user interfaces once the semester wraps up. IDK, we'll see.

    • @roygalaasen
      @roygalaasen 3 years ago +20

      @@samsartorial I do understand and appreciate that it is a lot of work. I am glad you took the time for this one video at least! 😃

    • @piter239
      @piter239 3 years ago +10

      Is there a way of increasing the possibility of future videos of this exquisite quality?

    • @romanemul1
      @romanemul1 3 years ago +3

      @@samsartorial Let me tell you: no one expects you to make a video every week or month. Just make them slowly. Otherwise you end up like 3b1b; it looks like he has run out of ideas.

    • @RonWolfHowl
      @RonWolfHowl 3 years ago

      @@samsartorial Those both sound like very exciting topics 😁

  • @CharlesWeill
    @CharlesWeill 3 years ago +56

    Even after doing ML professionally for 5 years, seeing the transformations in this way taught me something new.

    • @revimfadli4666
      @revimfadli4666 2 years ago +2

      You might like reading Chris Olah's blog then

  • @patrickinternational
    @patrickinternational 3 years ago +244

    Oh jeez, this is such a great video. I love how you relate the weighting process in NNs to actual physical weights... brilliant. At the beginning you actually describe linear discriminant analysis as well. This is great because an NN is really just a series of transforms, and this is the best animation I have ever seen, way better than even Grant Sanderson's video on the topic. I added this to the list of all SoME1 videos that I could find.

    • @patrickinternational
      @patrickinternational 3 years ago +1

      ruclips.net/video/MsNQtj3zVs8/видео.html

    • @Walkofsoul
      @Walkofsoul 3 years ago

      @@patrickinternational Thanks !

    • @vtrandal
      @vtrandal 3 years ago

      Off

    • @alejrandom6592
      @alejrandom6592 3 years ago

      Not better, but they complement each other

    • @Artaxerxes.
      @Artaxerxes. 3 years ago +1

      I don't think there's a better video than 3b1b's linear algebra series for understanding linear transformations. This video only makes sense if you've understood that already, and it's perfect to watch once you have. I wish he'd make more, because it was very enjoyable even though it only lasted 10 minutes.

  • @Riley.Rumble
    @Riley.Rumble 3 years ago +103

    This is a great video. I work with neural nets daily and intellectually knew everything you said in this video, but your presentation and visualizations have completely reframed the way I think about NNs. Thank you!

  • @KeirRice
    @KeirRice 3 years ago +5

    I've been reading about neural networks for years with limited understanding. In 10mins you have given me an entirely new and easier to understand perspective. Thank you so much!
    Fantastic work!

  • @edyt4125
    @edyt4125 3 years ago +25

    I have worked extensively with linear and nonlinear transformations of abstract geometries, and this is by far one of the best explanations of their correspondence with “neural networks”!! Great work!

  • @piface3016
    @piface3016 3 years ago +6

    Hey, just wanted to let you know this is one of my favorite videos on RUclips, I absolutely fell in love with it.
    I come from Statistics so when I was first learning about "Neural Networks" and found out that the process of "learning" is literally just minimizing a cost function, that it has no magic going on, my thought was "So it's just MLE? It's just a Math thing?"
    This video is the best piece I've found so far in demystifying neural networks, plus it gives some insight on what's actually going on, besides the usual neuron-layer analogy. That's all!

    • @WelcomeBub
      @WelcomeBub 2 years ago

      Well the field of AI has many more characterizing questions for choosing machine learning methods and models for a problem. For example how do you incorporate new knowledge/data for a trained neural network? NNs aren't really fit for this job at the moment and require relearning everything with the new samples.
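Several comments above note that "learning" in a neural network is nothing more than minimizing a cost function. A minimal sketch of that idea in NumPy, fitting y = w·x by gradient descent on squared error (the data, learning rate, and iteration count are made-up illustrative choices):

```python
import numpy as np

# Toy data drawn from the line y = 3x (made-up example).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * x

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate

for _ in range(200):
    pred = w * x
    # Gradient of the mean squared error with respect to w.
    grad = 2 * np.mean((pred - y) * x)
    w -= lr * grad

print(round(w, 3))  # converges toward 3.0
```

Training a full network is the same loop, just with many more parameters and the gradient computed by backpropagation.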

  • @kaemmili4590
    @kaemmili4590 3 years ago +37

    hey sam, it's marvelously clear, fluid and relevant, we need more of anything you find interesting
    sincerely
    -everyone

  • @PaulScotti
    @PaulScotti 3 years ago +83

    As someone who has used neural networks in a research setting, I never even realized that neural networks are actually just a series of alternating linear and nonlinear transformations. Amazing video, hope you make more :)

    • @DeadtomGCthe2nd
      @DeadtomGCthe2nd 3 years ago +4

      How does that happen? Brilliant teaches this.

    • @Finnnicus
      @Finnnicus 3 years ago +2

      @@fuzzylogicq You're very smart 🌟

  • @kevinknutson4596
    @kevinknutson4596 3 years ago +9

    Very well done video, definitely feels on par with 3b1b's ability to break up and explain complex phenomena.

  • @houcemfehri155
    @houcemfehri155 3 years ago +2

    I can't believe this is your first video, given how high its quality is. Keep making more, you're amazing!!

  • @Geosquare8128
    @Geosquare8128 3 years ago +51

    really great! hope you make some more videos :)

  • @flochforster22
    @flochforster22 3 years ago +3

    I understand more about linear and non-linear transformations, why hyperbolic tangent is useful in NNs, and how NNs really work thanks to this video. Awesome work!
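The point in the comment above about why a nonlinearity like the hyperbolic tangent matters can be checked directly: without it, stacked layers collapse into a single linear map. A small NumPy sketch (the weight values are arbitrary, chosen only for illustration):

```python
import numpy as np

# Fixed example weights (arbitrary values, just for illustration).
W1 = np.array([[1.0, -2.0],
               [0.5,  1.5],
               [2.0,  0.0]])
W2 = np.array([[1.0, 0.0, -1.0],
               [0.5, 2.0,  1.0]])
x = np.array([1.0, 2.0])

# Two purely linear layers collapse into a single linear layer:
two_linear = W2 @ (W1 @ x)
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))  # True

# A tanh between the layers breaks that collapse, which is what
# lets extra layers add expressive power:
with_tanh = W2 @ np.tanh(W1 @ x)
print(np.allclose(with_tanh, one_linear))   # False
```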

  • @kyguypi
    @kyguypi 3 years ago +1

    This video was great! I hope you take it as a compliment that for a second, I thought I was watching 3blue1brown. I understood activation functions coming in to this, but I still feel that I have even more clarity after your animation. Thanks!

  • @TheRealJavahead
    @TheRealJavahead 3 years ago +2

    Great video. Keep them coming. This will significantly aid my ongoing “ML is just statistics” campaign. Thanks. Subscribed.

  • @salmagamal5676
    @salmagamal5676 3 years ago +1

    THISSSSS!! OMG my mind is blown to pieces even though I sort of already knew the information never have I seen anyone put the pieces together like this. My man thank you so much.

  • @rainzhao2000
    @rainzhao2000 3 years ago +4

    I come back to this video often because your animation of an NN transforming the data is just so satisfying.

  • @paolopiaser_SystemsComposer
    @paolopiaser_SystemsComposer 3 years ago +1

    How is it possible that I watched so many different videos explaining ML without really grasping it, and now this guy makes it so clear in 9 minutes? Thank you.
    PS. Just a clarification: the neuron analogy was kind of on point at the time it was created, because researchers were trying to understand how living organisms self-organise, and hence were also trying to build a model of neurons and the brain.

  • @stevenschilizzi4104
    @stevenschilizzi4104 3 years ago

    Brilliant exposition. Will certainly help to blow away the fog of confusion that other sources have generated. Thanks for your hard work!

  • @lb5928
    @lb5928 3 years ago +1

    The single best video ever made on machine/Deep learning. Extremely intuitive and practical explanations and visuals. Well done.

  • @janstaudacher6793
    @janstaudacher6793 3 years ago +2

    How you introduced the scale for classifying the fruits by just one number and therefore introduced logistic regression was just pure genius!

  • @amitbar2121
    @amitbar2121 3 years ago +2

    Fantastic video. I’m extremely impressed, especially with that visualization you worked hard on. Great job, hope to see more videos from you soon!

  • @Yerocregnes
    @Yerocregnes 3 years ago +1

    I only just recently realized that neural nets are just transformations between dimensions, and so much clicked. The idea that there exists an information space that can encode data about whether an animal is a cat or a dog was crazy to me. Keep up the good work

  • @iestynne
    @iestynne 2 years ago

    What a *fantastic* video! I've watched a large number of videos on artificial neural networks over the past few years... yet I learned such a lot from this one! Such a (shockingly) clean perspective on how these systems work.
    The choice of the examples and the clarity of the writing and animation are just superb.
    If you didn't win, it's a travesty.

  • @EnergyWell
    @EnergyWell 3 years ago +11

    This video is wonderful. I am very glad you took the time to visualize these transforms into animations. You are a great teacher!

  • @JoshuaCowling
    @JoshuaCowling 3 years ago +15

    I love your explanation here. This kind of simplification and visualisation is an excellent way to break down some of the barriers around machine learning and expose more people to the processes involved, their limitations and applications. Bravo!

  • @mirllewist3086
    @mirllewist3086 3 years ago

    Professional Data Scientist here - very well done. And, a useful clarification - too much magical thinking out there regarding AI at the moment. Vids like this are a big help. Thanks

  • @jgcornell
    @jgcornell 3 years ago +5

    This is frickin amazing. I just completed a postgrad in which we were just expected to accept that transforming the data was viable. Whilst I mathematically understood it, I never intuited why - a few minutes into this video you visually slapped me in the face and showed me how it is obviously the same as transforming your boundary! I feel both foolish not to have seen it before and so elated!

  • @xxgn
    @xxgn 3 years ago +9

    As someone who took machine learning in college, I have recollections of being surprised when we covered a bunch of techniques and every single technique boiled down to statistics, not only in how they worked but also in proving why they worked (or didn't) for different scenarios.

  • @andrewglick6279
    @andrewglick6279 3 years ago +2

    This video is amazing; I'm sharing it with so many people. I've gotten so tired of seeing the nodes/edges diagrams as an explanations of neural networks--I understood that one layer influenced the next one to get to a final result, but it bothered me that I never understood *how* or *why*. Your explanation in this video is exactly what I was looking for. Thank you!

  • @iwasjason
    @iwasjason 3 years ago +1

    Fantastic video! I love the 3b1b-esque elegance, emphasis on visual intuition, and the build-up to a worthwhile nugget of insight (neural nets aren't magic, just iterated linear and non-linear transformations). Looking forward to your future videos!

  • @switzerland
    @switzerland 3 years ago +1

    Your video made it click in my brain like only 3blue1brown could make when explaining the fouriertransforms. My mind is blown, it makes sense now. Thanks.

  • @Bluedragon2513
    @Bluedragon2513 3 years ago +1

    Great visualization; I had to stop what I was doing and watch because I realized it made so much more sense now

  • @multimolti
    @multimolti 3 years ago +1

    This is by far the best and most concise introduction to Machine Learning, Deep Learning and "AI" that I've ever seen. Great job!

  • @Killadog1980
    @Killadog1980 3 years ago +1

    What a great video! And you only have a single upload! How do you make such an amazing video and explanation without previous uploads?

  • @taggosaurus
    @taggosaurus 3 years ago +6

    2:07 suggestion - probably more clarification is needed on the logistic regression and linear regression terminology. The word 'regression' is pretty much always used for predicting continuous values, not for classification. Logistic regression is a classification method, not prediction, so it's sort of a misnomer kept for historical reasons, from what I've heard.
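As the comment above says, logistic regression classifies by thresholding a probability that comes from passing a linear score through the logistic (sigmoid) function. A minimal sketch, with hypothetical (not fitted) parameters:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned parameters for two input features.
w = np.array([1.2, -0.7])
b = 0.3

def predict_proba(x):
    # The "regression" part: a linear score, passed through sigmoid.
    return sigmoid(w @ x + b)

def classify(x, threshold=0.5):
    # The classification part: threshold the probability.
    return int(predict_proba(x) >= threshold)

print(predict_proba(np.array([0.0, 0.0])))  # sigmoid(0.3) ≈ 0.574
print(classify(np.array([0.0, 0.0])))       # 1
```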

  • @michealhall7776
    @michealhall7776 3 years ago +1

    This is one of the best videos I have seen on the topic, please make more

  • @19vangogh94
    @19vangogh94 3 years ago +1

    The video was awesome! Sam, you have a gift for this; I can totally see your videos teaching millions in a few years

  • @fabianbleile9467
    @fabianbleile9467 3 years ago +2

    hey sam, superb video! I am studying mathematics and I am scratching the surface of the neural network thing from time to time. The block from 5:53 to 6:16 made it super clear to me what neural networks actually are, and it's the first real definition I've seen. It's demystifying and clarifying. I am definitely gonna build upon this concept! huge thanks :)

  • @1996Pinocchio
    @1996Pinocchio 3 years ago +1

    Great video, I especially liked the transitions from one topic to the next, and the animations.

  • @hcv1648
    @hcv1648 3 years ago +3

    You changed my understanding of how I saw neural nets before. Amazed..🙏 would wait for more videos

  • @picumtg5631
    @picumtg5631 3 years ago +3

    even though I don't know how linear transformations work (but this inspired me to learn soon) and I only know the relevant calculus of machine learning, this is really mindblowing. I can see how much thought was put into it, and I thank you for your time and wish you a great life

  • @jamesdunbar2386
    @jamesdunbar2386 3 years ago +1

    Extremely well done video. I hope you make more! I've taken some classes in Machine Learning and use some simple NNs at work so I have some familiarity with the mechanics, but it's nice to see the concepts so neatly spelled out! Very illuminating.

  • @Artifactorfiction
    @Artifactorfiction 3 years ago +2

    This was superb - at some level this demystified for me how the hidden layers do the work - at least when the dimensions are small - just wonderful clarity in this video

  • @a_name_a
    @a_name_a 3 years ago +1

    Great video. I would also mention the convex hull: ANNs cannot extrapolate, they can only interpolate within the convex hull of the training set.

    • @samsartorial
      @samsartorial 3 years ago

      I mean, they generally can't extrapolate much without additional inductive biases. But I'm not so sure about the convex hull thing: arxiv.org/abs/2101.09849

  • @sarthakkhanal6882
    @sarthakkhanal6882 3 years ago +2

    As a teaching assistant for a deep-learning course, I am definitely going to refer my students to this video. It gives a very interesting perspective on what NNs are, and why we use activation functions.

  • @incredulouschordate
      @incredulouschordate 1 year ago

    This is seriously the best NN video I've seen, and yes that's after watching 3B1B's series

  • @silverfishers
    @silverfishers 3 years ago

    Great video man! I like the explanation of the backstory of the name. It helps to give the whole concept some context

  • @TallSchmuck
    @TallSchmuck 3 years ago +3

    Unbelievably amazing video. I thoroughly enjoyed it and learned to look at some parts of AI in a different way. I seriously hope you continue making videos! Great work!

  • @tielessin
    @tielessin 3 years ago +5

    Amazing Video! I started studying AI recently and this has given me a new perspective on the topic.

  • @Cybermage10
    @Cybermage10 3 years ago +1

    Hey Sam! Great to see you're doing well, hi from the old Mines LUG crew!

  • @heynowyouarearock
    @heynowyouarearock 3 years ago +2

    This is the most intuitive video I have ever seen.

  • @RichardAlbertMusic
    @RichardAlbertMusic 3 years ago +1

    Excellent! Especially the part „Let’s play this animation again because it took me so long“ 😂

  • @dariuszb.9778
    @dariuszb.9778 3 years ago +1

    That's why some languages distinguish between "neural" (built from neurons) and "neuronal" (built from simplistic models of neurons) networks.

  • @LinesThatConnect
    @LinesThatConnect 3 years ago +8

    Thanks for this, it demystified the concept quite a bit for me!

  • @jakob3267
    @jakob3267 3 years ago +1

    This is one of the best videos I have seen in a while. I really hope you will make more 🙏

  • @finnaginfrost6297
    @finnaginfrost6297 3 years ago +3

    Absolutely incredible. I also can't think of any berries with those growth patterns. I love how obvious it became that projecting into a higher dimension was important.

  • @raghavendranimiwal9264
    @raghavendranimiwal9264 3 years ago +1

    Great stuff. More power to you. Looking forward to more such amazing videos.

  • @sgaseretto
    @sgaseretto 3 years ago +2

    Damn, I really loved this video! You summarized everything I always say when explaining to people what artificial neural networks actually are, how they really differ from biological neural networks, and why I don't like the metaphor of "neural networks" to describe them, but with great animations! Now I have this reference video to share when someone brings up the topic. Thanks for this really great video!

  • @HighlyShifty
    @HighlyShifty 3 years ago +1

    Fantastic video, it may have taken you a lot of time but it was absolutely worth it! Subscribed in case you do decide to make more.

  • @ZubairKhan-sp8vb
    @ZubairKhan-sp8vb 1 year ago

    Just awesome!!! I have been thinking about neural networks this way, and about how the keyword has stuck to this statistical process. You illustrated it beautifully, just amazing!!

  • @Bokbind
    @Bokbind 3 years ago +1

    Wow, this is incredibly well made! I'm going to subscribe in the hopes you make more like it!

  • @jjcadman
    @jjcadman 3 years ago +1

    Fantastic explanations and visualizations.
    Thanks for all the time & effort you put into this.

  • @k2c027
    @k2c027 3 years ago +1

    Such a great video Sam. Thanks for making this 🙏🏻

  • @andrewfriedrichs9340
    @andrewfriedrichs9340 3 years ago +2

    This example is very cool. With enough transformations you can separate out almost any data set.

  • @EssentialsOfMath
    @EssentialsOfMath 3 years ago

    @3:20 on a very technical/mathematical note, translations are not linear transformations. A linear transformation has to preserve the form ax+y, in the sense that T(ax+y) = aT(x) + T(y). Translations T(x) = x+h fail this because T(ax+y) = ax+y+h while aT(x) + T(y) = ax + y + h + ah.

    • @samsartorial
      @samsartorial 3 years ago +3

      Yeah in retrospect I should have put a little note on screen clarifying that the transformations are actually affine. I don't see the distinction made for neural networks very often, because it is always theoretically possible to emulate affine transformations by linearly transforming through 1 additional dimension (making some assumptions about the input space). Giving your neural network biases to play with in addition to weights just makes training easier.
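The trick described in the reply above, emulating an affine map with a purely linear map through one extra dimension, is the standard homogeneous-coordinates construction. A quick NumPy check (the particular A, b, and x are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0]])  # linear part (weights)
b = np.array([1.0, -3.0])               # translation (bias)
x = np.array([4.0, 5.0])

# Affine map in 2-D: not linear, because of the +b term.
affine = A @ x + b

# Same map as a purely linear one in 3-D: append a constant 1
# to the input and fold b into the last column of the matrix.
M = np.array([[2.0, 0.0,  1.0],
              [1.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
x_h = np.append(x, 1.0)                 # homogeneous coordinates
linear = (M @ x_h)[:2]

print(np.allclose(affine, linear))      # True
```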

  • @konstantint1588
    @konstantint1588 3 years ago +1

    That was absolutely fascinating and amazingly well done!
    I'd pay for a course about this!

  • @deepaks.m.6709
    @deepaks.m.6709 3 years ago +1

    Wow! This is amazing. Thanks for providing a logical perspective for NN's :)

  • @Speed001
    @Speed001 3 years ago +4

    This is amazing. I don't know anything about neural networks, but these are concepts I can understand, at least partially.

  • @archenemy49
    @archenemy49 3 years ago +1

    This is the most beautiful video I have seen in several years. Thank you so much for sharing this enormous observation and perception! Thank you so much!!! ❤️

  • @itsrachelfish
    @itsrachelfish 3 years ago

    How do you only have 494 subscribers!!?! Amazing content

  • @pacukluka
    @pacukluka 1 year ago

    Amazing ! Please keep it up, we have so much yet to learn.

  • @ApplepieFTW
    @ApplepieFTW 3 years ago +1

    Wow, awesome quality and very clearly explained

  • @BlueAgent
    @BlueAgent 3 years ago +1

    Love the visualisation and explaination. Thanks for making the video. :)

  • @Sydra.
    @Sydra. 2 years ago +1

    You made the world a better place with this video!

  • @gregoryg8902
    @gregoryg8902 3 years ago +1

    Wow. Well done. I'm currently studying machine learning and this is a great introduction to many concepts at a high level.

  • @osten222312
    @osten222312 3 years ago +1

    Great work! Hope to read more from you

  • @DoYouHaveAName1
    @DoYouHaveAName1 6 months ago

    AMAZING! The animations and explanation are perfect!
    Thank you for your hard work

  • @yaiirable
    @yaiirable 3 years ago

    That music is really amping up the tension

  • @IllyNexus
    @IllyNexus 3 years ago +1

    Thank you for this amazing piece of knowledge, Sam Sartor. Subscribed, in case you devote some time in the future to other projects of this type. See ya!

  • @nembobuldrini
    @nembobuldrini 2 years ago +1

    Like n. 7001! ;) Brilliant! The best visual explanation of NN I have encountered so far. The visuals are extremely helpful in getting the gist of what a feed-forward neural network does.
    It's important to point out - and this would have spiced up the ending of the video too ;) - that there are other types of neural networks that are more similar to how the brain works. Hebbian learning and recursion are involved in these other types of neural networks, for which the simplification in the terms used in the video would not be quite so straightforward. It would actually be great to see a follow-up video on these kinds of NNs!

    • @diadetediotedio6918
      @diadetediotedio6918 1 year ago

      They are also extreme simplifications of the processes that occur in a brain, and fundamentally they end up being used just like the current "artificial neural networks" when they depend on statistical methods such as backpropagation.

  • @andreheynes4646
    @andreheynes4646 3 years ago +1

    This is beautiful. I understand these things better now. Thank you.

  • @toasteduranium
    @toasteduranium 3 years ago

    I thought this was 3Blue1Brown the whole time! Good job!

  • @GoriIIaTactics
    @GoriIIaTactics 3 years ago

    Very cool explanation and animation. I'm also just very alarmed at how many people in the comments have "used" neural networks and don't understand this already

  • @JITCompilation
    @JITCompilation 3 years ago +1

    Holy crap. This is just what I needed. Are there any sources you recommend for going deeper into what you covered?

  • @kaiserouo
    @kaiserouo 3 years ago

    I used to think of an NN as a big nonlinear machine with so many parameters that it pumps the VC dimension high enough to seem to do any task well, while still being easy enough to optimize the loss for.
    Understanding this as an iterative process of linear & nonlinear transformations tells a better story about NNs. I think it is also like PLAs and feature transformations, but iterating between those two a bunch of times. That helps the network reach a much more complex decision boundary, but instead of one very complex feature transformation (which may be hard to optimize or even think of), it introduces a much simpler and more general model by combining linear & nonlinear transformations, which can reach any complexity we want by simply adding layers.
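A tiny concrete instance of the "iterate linear and nonlinear steps" idea in the comment above: XOR is not linearly separable, but one hidden layer with hand-picked weights separates it. This is the classic textbook construction, not a trained network:

```python
import numpy as np

def step(z):
    # Hard-threshold nonlinearity (0/1), used here for clarity.
    return (z > 0).astype(float)

def xor_net(x1, x2):
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: one unit fires for OR, one for AND.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(W1 @ x + b1)        # h = [OR(x1,x2), AND(x1,x2)]
    # Output layer: OR and not AND, i.e. XOR.
    w2 = np.array([1.0, -1.0])
    b2 = -0.5
    return int(step(w2 @ h + b2))

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```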

  • @ming3706
    @ming3706 1 year ago +1

    I can't believe my high school math is this important

    • @dabidmydarling5398
      @dabidmydarling5398 1 year ago

      I thought the same thing as well! I love ML and statistics!! :)

  • @SirKi-ef5vw
    @SirKi-ef5vw 3 years ago +1

    3:15 Ackchyually... linear transformations cannot translate. The combination of a linear transformation + a translation is called an affine transformation. Sorry, it was bugging me

    • @samsartorial
      @samsartorial 3 years ago

      Yeah this is mentioned in the comments a fair bit. In retrospect I probably should have put a little note on screen clarifying that the transformations are actually affine. I don't see the distinction made for neural networks very often, because it is always theoretically possible to emulate affine transformations by linearly transforming through 1 additional dimension (making some assumptions about the input space). Giving your neural network biases to play with in addition to weights just makes training easier.

  • @piter239
    @piter239 3 years ago +1

    Incredible clever AND insightful!
    I just love it!!!
    Why so few views?!?

  • @cherubin7th
    @cherubin7th 3 years ago +1

    True, if you build neural networks yourself in low-level tools like TensorFlow or PyTorch, you basically just do matrix multiplications and matrix additions, and send the result through some nonlinear function you like.
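The comment above can be made literal: a framework "dense layer" reduced to its primitive operations is one matrix multiply, one bias add, and one elementwise nonlinearity. A plain-NumPy sketch (the shapes and the tanh choice are illustrative assumptions):

```python
import numpy as np

def dense(x, W, b, activation=np.tanh):
    # One layer = matrix multiply + bias add + elementwise nonlinearity.
    return activation(x @ W + b)

rng = np.random.default_rng(42)
x = rng.normal(size=(5, 4))               # batch of 5 inputs, 4 features each
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

# A two-layer network is just the composition of two such calls.
out = dense(dense(x, W1, b1), W2, b2)
print(out.shape)  # (5, 2)
```

Swapping `activation` for the identity would collapse the two layers into one linear map, which is exactly why the nonlinearity is there.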

  • @ShantanuSingh-wc4ou
    @ShantanuSingh-wc4ou 11 months ago

    Please keep sharing your knowledge with us, you are just awesome, and we love your content.🙏

  • @peteraaser370
    @peteraaser370 3 years ago +2

    This was so thoughtfully put together. Consider me inspired

  • @wlmorgan
    @wlmorgan 3 years ago +2

    So uh, we actually have pretty good ideas about how neurons change in response to activity. Signals generate electrical/protein activity in neurons, which can lead to secondary signals that alter protein expression patterns, which in turn alter the electrical activity for the same original signal. The trouble, of course, is that each neuron has a different initial state; though they can be classified broadly and interpreted as classes of neurons which behave in particular ways. These alterations modify the weighting of edges in your brain's network the same way computational neural networks need to modify their edges' weighting. Learning, then, is simply giving training samples and rewarding good matches, which fits the scaffolding theory of learning of Vygotsky and Bruner quite well.

    • @WelcomeBub
      @WelcomeBub 2 years ago

      But for neural networks we don't really have continuous learning, only hitting the reset button and changing up the samples. Applying a NN model does not also trigger learning in the process; the result of the application is simply used to nudge the model's weightings somewhat.

    • @diadetediotedio6918
      @diadetediotedio6918 1 year ago

      Nah, we really don't have such good ideas of what neurons do or how they change exactly.

  • @alangustav7100
    @alangustav7100 3 years ago +1

    this is amazing, very good and intuitive animations, keep up this amazing work

  • @ikartikthakur
    @ikartikthakur 5 months ago

    I am learning new things each day.. thanks, man, for sharing this perspective. I never thought of transforming the data; I was used to rotating the lines..! 👍

  • @etienneboutet7193
    @etienneboutet7193 3 years ago +1

    Great video ! This is a very intuitive approach to neural networks. Thank you

  • @lukamoran
    @lukamoran 3 years ago

    Fantastic video. I almost never comment on videos. Like, literally once or twice a year. This video earned this comment. Keep it up!

  • @audunlarssonkleveland4789
    @audunlarssonkleveland4789 3 years ago

    I'm hoping for more videos, the visuals are great and clean.

  • @Ahmed-Hosam-Elrefai
    @Ahmed-Hosam-Elrefai 3 years ago

    Man! That video was, to put it lightly, pure awesomeness!