"Let's watch that animation again, since it took me so long to make"
as someone who is struggling with math, this is the equivalent of Picasso
Lol it was a great animation though
Honestly, who can not relate haha
fair enough
Yelped, paused, rushed to the comments to thumb up the one about that line.
First video? Off to a VERY promising start. This is just great! Hope low numbers won’t deter you from making more. (Or the amount of work.) Hope to see more from you.
Thanks! I don't think I'm going to do a ton more, since it took me like a week of 14-hour days to make. But I was thinking I might do a video on transformer NNs or something unrelated like reactivity in user interfaces once the semester wraps up. IDK, we'll see.
@@samsartorial I do understand and appreciate that it is a lot of work. I am glad you took the time for this one video at least! 😃
Is there a way of increasing the possibility of future videos of this exquisite quality?
@@samsartorial Lemme tell you: no one expects you to make a video every week or month. Just make them slowly. Otherwise you end up like 3b1b, who looks like he's run out of ideas.
@@samsartorial Those both sound like very exciting topics 😁
Even after doing ML professionally for 5 years, seeing the transformations in this way taught me something new.
You might like reading Chris Olah's blog then
Oh jeez, this is such a great video. I love how you relate the weighting process in NNs to actual weights... brilliant. At the beginning you actually describe linear discriminant analysis as well. This is great because an NN is really just a series of transforms, and this is the best animation I have ever seen, way better than even Grant Sanderson's video on the topic. I added this to the list of all SoME1 videos that I could find:
ruclips.net/video/MsNQtj3zVs8/видео.html
@@patrickinternational Thanks!
Off
Not better, but they complement each other
I don't think there's a better video than 3b1b's linear algebra series for understanding linear transformations. This video makes sense only if you've understood that already, and it's perfect to watch once you have. I wish he'd make more, because this was very enjoyable even though it lasted only 10 minutes.
This is a great video. I work with neural nets daily and intellectually knew everything you said in this video, but your presentation and visualizations has completely reframed the way I think about NNs. Thank you!
I've been reading about neural networks for years with limited understanding. In 10 minutes you have given me an entirely new and easier-to-understand perspective. Thank you so much!
Fantastic work!
I have worked extensively with linear and nonlinear transformations of abstract geometries, and this is by far one of the best explanations of their correspondence with “neural networks”!! Great work!
Hey, just wanted to let you know this is one of my favorite videos on RUclips, I absolutely fell in love with it.
I come from Statistics so when I was first learning about "Neural Networks" and found out that the process of "learning" is literally just minimizing a cost function, that it has no magic going on, my thought was "So it's just MLE? It's just a Math thing?"
This video is the best piece I've found so far in demystifying neural networks, plus it gives some insight on what's actually going on, besides the usual neuron-layer analogy. That's all!
Well, the field of AI has many more characterizing questions for choosing machine learning methods and models for a problem. For example, how do you incorporate new knowledge/data into a trained neural network? NNs aren't really fit for that job at the moment and require relearning everything with the new samples.
hey sam, it's marvelously clear, fluid, and relevant. we need more of anything you find interesting
sincerely,
-everyone
As someone who has used neural networks in a research setting, I never even realized that neural networks are actually just a series of alternating linear and nonlinear transformations. Amazing video, hope you make more :)
How does that happen? Brilliant teaches this.
@@fuzzylogicq You're very smart 🌟
Very well done video, definitely feels on par with 3b1b's ability to break up and explain complex phenomena.
I can't believe this is your first video, given how great the quality is. Keep making more, you're amazing!!
really great! hope you make some more videos :)
Did not expect to find you here
MC Speedrun mods are truly everywhere
based
Geosquare go expose some more speedrunning scammers or something
I understand more about linear and non-linear transformations, why hyperbolic tangent is useful in NNs, and how NNs really work thanks to this video. Awesome work!
This video was great! I hope you take it as a compliment that for a second, I thought I was watching 3blue1brown. I understood activation functions coming in to this, but I still feel that I have even more clarity after your animation. Thanks!
Great video. Keep them coming. This will significantly aid my ongoing “ML is just statistics” campaign. Thanks. Subscribed.
THISSSSS!! OMG my mind is blown to pieces even though I sort of already knew the information never have I seen anyone put the pieces together like this. My man thank you so much.
I come back to this video often because your animation of an NN transforming the data is just so satisfying.
How is it possible that I watched so many different videos explaining ML without really grasping it, and now this guy makes it so clear in 9 minutes? Thank you.
PS: just a clarification: the use of the analogy with neurons was kinda on point at the time it was created, because researchers were trying to understand how living organisms self-organise, hence also trying to create a model of neurons and the brain.
Brilliant exposition. Will certainly help to blow away the fog of confusion that other sources may have generated. Thanks for your hard work!
The single best video ever made on machine/deep learning. Extremely intuitive and practical explanations and visuals. Well done.
How you introduced the scale for classifying the fruits by just one number and therefore introduced logistic regression was just pure genius!
Fantastic video. I’m extremely impressed, especially with that visualization you worked hard on. Great job, hope to see more videos from you soon!
I only just recently realized that neural nets are just transformations between dimensions, and it made so many things click. The idea that there exists an information space that can encode whether an animal is a cat or a dog was crazy to me. Keep up the good work
What a *fantastic* video! I've watched a large number of videos on artificial neural networks over the past few years... yet I learned such a lot from this one! Such a (shockingly) clean perspective on how these systems work.
The choice of the examples and the clarity of the writing and animation are just superb.
If you didn't win, it's a travesty.
This video is wonderful. I am very glad you took the time to visualize these transforms into animations. You are a great teacher!
I love your explanation here. This kind of simplification and visualisation is an excellent way to break down some of the barriers around machine learning and expose more people to the processes involved, their limitations, and their applications. Bravo!
Professional Data Scientist here - very well done. And, a useful clarification - too much magical thinking out there regarding AI at the moment. Vids like this are a big help. Thanks
This is frickin amazing. I just completed a postgrad in which we were just expected to accept that transforming the data was viable. Whilst I understood it mathematically, I never intuited why. A few minutes into this video you visually slapped me in the face and showed me how it is obviously the same as transforming your boundary! I feel both foolish not to have seen it before and elated!
As someone who took machine learning in college, I have recollections of being surprised when we covered a bunch of techniques and every single technique boiled down to statistics, not only in how they worked but also in proving why they worked (or didn't) for different scenarios.
This video is amazing; I'm sharing it with so many people. I've gotten so tired of seeing the nodes/edges diagrams as an explanation of neural networks--I understood that one layer influenced the next one to get to a final result, but it bothered me that I never understood *how* or *why*. Your explanation in this video is exactly what I was looking for. Thank you!
Fantastic video! I love the 3b1b-esque elegance, emphasis on visual intuition, and the build-up to a worthwhile nugget of insight (neural nets aren't magic, just iterated linear and non-linear transformations). Looking forward to your future videos!
Your video made it click in my brain like only 3blue1brown could when explaining Fourier transforms. My mind is blown; it makes sense now. Thanks.
Great visualization; I had to stop what I was doing and watch because I realized it made so much more sense now
This is by far the best and most concise introduction to Machine Learning, Deep Learning and "AI" that I've ever seen. Great job!
What a great video! And you only have a single upload! How do you make such an amazing video and explanation without previous uploads?
2:07 suggestion - probably more clarification is needed on the 'logistic regression' and 'linear regression' terminology. The word 'regression' is pretty much always used for predicting continuous values, not for classification. Logistic regression is a classification method, not regression in that sense, so it's sort of a misnomer for historical reasons, from what I've heard.
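For anyone tripped up by the naming, here's a minimal sketch of why it's both: logistic regression "regresses" a continuous probability, and classification is just thresholding that probability (the weights and sample here are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1) so it can be read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights and bias for a 2-feature classifier.
w = np.array([1.5, -2.0])
b = 0.3

x = np.array([0.8, 0.1])     # one sample
p = sigmoid(w @ x + b)       # the "regression" part: a continuous probability
label = int(p > 0.5)         # the "classification" part: threshold it
print(p, label)
```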
This is one of the best videos I have seen on the topic, please make more
The video was awesome! Sam, you have a gift for this. I can totally see your videos teaching millions in a few years.
hey sam, superb video! I am studying mathematics and I scratch the surface of neural networks from time to time. The block from 5:53 to 6:16 made it super clear to me what neural networks actually are; it's the first real definition I've seen. It's demystifying and clarifying. I am definitely gonna build upon this concept! huge thanks :)
Great video, I especially liked the transitions from one topic to the next, and the animations.
You changed my understanding of how I saw neural nets before. Amazed! 🙏 I'll be waiting for more videos
even though I do not know how linear transformations work (but this inspired me to learn soon) and I only know the relevant calculus of machine learning, this is really mindblowing. I see how much thought was put into it, and I thank you for your time and wish you a great life
Extremely well done video. I hope you make more! I've taken some classes in Machine Learning and use some simple NNs at work so I have some familiarity with the mechanics, but it's nice to see the concepts so neatly spelled out! Very illuminating.
This was superb - at some level this demystified for me how the hidden layers do the work - at least when the dimensions are small - just wonderful clarity in this video
Great video! I would also mention the convex hull: ANNs cannot extrapolate, they can only interpolate within the convex hull of the training set.
I mean, they generally can't extrapolate much without additional inductive biases. But I'm not so sure about the convex hull thing: arxiv.org/abs/2101.09849
As a teaching assistant for a deep-learning course, I am definitely referring students to this video. It gives a very interesting perspective on what NNs are and why we use activation functions.
This is seriously the best NN video I've seen, and yes that's after watching 3B1B's series
Great video man! I like the explanation of the backstory of the name. It helps to give the whole concept some context
Unbelievably amazing video. I thoroughly enjoyed it and learned to look at some parts of AI in a different way. I seriously hope you continue making videos! Great work!
Amazing Video! I started studying AI recently and this has given me a new perspective on the topic.
Hey Sam! Great to see you're doing well, hi from the old Mines LUG crew!
This is the most intuitive video I have ever seen.
Excellent! Especially the part „Let’s play this animation again because it took me so long“ 😂
That's why some languages distinguish between "neural" networks (built from neurons) and "neuronal" networks (built from simplistic models of neurons).
Thanks for this, it demystified the concept quite a bit for me!
This is one of the best videos I have seen in a while. I really hope you will make more 🙏
Absolutely incredible. I also can't think of any berries with those growth patterns. I love how obvious it became that projecting into a higher dimension was important.
Great stuff. More power to you. Looking forward to more such amazing videos.
Damn, I really loved this video! You summarized everything I always say when explaining to people what artificial neural networks actually are, how they are really different from biological neural networks, and why I don't like the metaphor of "neural networks" to describe them, but with great animations! Now I have this reference video to share when someone brings up the topic. Thanks for this really great video!
Fantastic video, it may have taken you a lot of time but it was absolutely worth it! Subscribed in case you do decide to make more.
Just awesome!!! I have been thinking about neural networks this way, and about how the name has stuck to this statistical process. You illustrated it beautifully, just amazing!!
Wow, this is incredibly well made! I'm going to subscribe in the hopes you make more like it!
Fantastic explanations and visualizations.
Thanks for all the time & effort you put into this.
Such a great video Sam. Thanks for making this 🙏🏻
This example is very cool. With enough transformations you can separate out almost any data set.
@3:20 on a very technical/mathematical note, translations are not linear transformations. A linear transformation has to preserve the form ax+y, in the sense that T(ax+y) = aT(x) + T(y). Translations T(x) = x+h fail this because T(ax+y) = ax+y+h while aT(x) + T(y) = ax + y + h + ah.
Yeah in retrospect I should have put a little note on screen clarifying that the transformations are actually affine. I don't see the distinction made for neural networks very often, because it is always theoretically possible to emulate affine transformations by linearly transforming through 1 additional dimension (making some assumptions about the input space). Giving your neural network biases to play with in addition to weights just makes training easier.
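For the curious, a minimal numpy sketch of the emulation trick described above (the specific W and b values are invented for illustration): pad every input with a constant 1, and the affine map Wx + b becomes a single purely linear map one dimension up.

```python
import numpy as np

# Affine map: f(x) = W x + b, with hypothetical values.
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, -3.0])

# Same map as one linear transformation in 3D, acting on inputs
# padded with a constant 1 (homogeneous coordinates).
A = np.block([[W, b[:, None]],
              [np.zeros((1, 2)), np.ones((1, 1))]])

x = np.array([0.5, 2.0])
affine = W @ x + b
linear = (A @ np.append(x, 1.0))[:2]   # pad, transform linearly, drop the extra 1
print(np.allclose(affine, linear))     # True
```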
That was absolutely fascinating and amazingly well done!
I'd pay for course about this!
Wow! This is amazing. Thanks for providing a logical perspective for NN's :)
This is amazing. I don't know anything about neural networks, but these are concepts I can understand, at least partially.
This is the most beautiful video I have seen in several years. Thank you so much for sharing this enormous observation and perception! Thank you so much!!! ❤️
How do you only have 494 subscribers!!?! Amazing content
Amazing ! Please keep it up, we have so much yet to learn.
Wow, awesome quality and very clearly explained
Love the visualisation and explanation. Thanks for making the video. :)
You made the world a better place with this video!
Wow. Well done. I'm currently studying machine learning and this is a great introduction to many concepts at a high level.
Great work! Hope to read more from you
AMAZING! The animations and explanation are perfect!
Thank you for your hard work
That music is really amping up the tension
Thank you for this amazing piece of knowledge Sam Sartor. Subscribed, maybe you are devoting some time in the future for other projects of this type. See ya!
Like n. 7001! ;) Brilliant! The best visual explanation of NN I have encountered so far. The visuals are extremely helpful in getting the gist of what a feed-forward neural network does.
It's important to point out - and this would have spiced up the ending of the video too ;) - that there are other types of neural networks that are more similar to how the brain works. Hebbian learning and recursion are involved in these other types of neural networks, for which a simplification in the terms used in the video would not be quite so straightforward. It would actually be great to see a follow-up video on these kinds of NNs!
Those are also extreme simplifications of the processes that occur in a brain, and fundamentally they end up being used just like the current "artificial neural networks" when they depend on statistical methods such as backpropagation.
This is beautiful. I understand these things better now. Thank you.
I thought this was 3Blue1Brown the whole time! Good job!
Very cool explanation and animation. I'm also just very alarmed at how many people in the comments have "used" neural networks and don't understand this already
Holy crap. This is just what I needed. Are there any sources you recommend for going deeper into what you covered?
I used to think of an NN as a big nonlinear machine with so many parameters that it pumps the VC dimension high enough to seem to do any task well, while still being easy enough to optimize the loss.
Understanding it as an iterative process of linear & nonlinear transformations tells a better story about NNs. I think it is also like PLAs plus feature transformations, but iterating between those two a bunch of times. That helps the network reach a much more complex decision boundary, but instead of 1 very complex feature transformation (which may be hard to optimize or even think of), it uses a much simpler and more general model that combines linear & nonlinear transformations, which can reach any complexity we want by simply adding layers.
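As a rough illustration of that "iterate between the two" picture, here is an untrained forward pass in numpy (layer sizes and weights are arbitrary): each layer is one affine transform followed by one fixed nonlinearity, and the complexity comes purely from stacking the pair.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    # Alternate a linear (affine) transformation with a nonlinear squash.
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# Three hypothetical layers: 2 -> 8 -> 8 -> 1.
sizes = [2, 8, 8, 1]
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(5, 2))      # a batch of 5 untrained inputs
print(forward(x, layers).shape)  # (5, 1)
```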
I can't believe my highschool math is this important
I thought the same thing as well! I love ML and statistics!! :)
3:15 Ackchyually... linear transformations cannot translate. The combination of a linear transformation + a translation is called an affine transformation. Sorry, it was bugging me.
Yeah this is mentioned in the comments a fair bit. In retrospect I probably should have put a little note on screen clarifying that the transformations are actually affine. I don't see the distinction made for neural networks very often, because it is always theoretically possible to emulate affine transformations by linearly transforming through 1 additional dimension (making some assumptions about the input space). Giving your neural network biases to play with in addition to weights just makes training easier.
Incredibly clever AND insightful!
I just love it!!!
Why so few views?!?
True. If you build neural networks yourself in low-level tools like TensorFlow or PyTorch, you basically just do matrix multiplications and matrix additions, and send the result through some nonlinear function you like.
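E.g., one layer written "by hand" in PyTorch really is just those three steps (the shapes here are made up):

```python
import torch

# One layer by hand: multiply, add, squash. (All shapes are arbitrary.)
W = torch.randn(2, 8)   # weights
b = torch.zeros(8)      # biases
x = torch.randn(5, 2)   # a batch of 5 two-dimensional inputs

h = torch.tanh(x @ W + b)   # matrix multiply, matrix add, nonlinear function
print(h.shape)              # torch.Size([5, 8])
```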
Please keep sharing your knowledge with us, you are just awesome, and we love your content.🙏
This was so thoughtfully put together. Consider me inspired
So uh, we actually have pretty good ideas about how neurons change in response to activity. Signals generate electrical/protein activity in neurons, which can lead to secondary signals that alter protein expression patterns, which in turn alter the electrical activity for the same original signal. The trouble, of course, is that each neuron has a different initial state; though they can be classified broadly and interpreted as classes of neurons that behave in particular ways. These alterations modify the weighting of edges in your brain's network the same way true computational neural networks need to modify their edges' weightings. Learning, then, is simply giving training samples and rewarding good matches, which fits the scaffolding theory of learning of Vygotsky and Bruner quite well.
But for neural networks we don't really have continuous learning, only hitting the reset button and changing up the samples. Applying an NN model does not itself trigger learning; the result of each application is simply used to nudge the model's weightings somewhat during training.
Nah, we really don't have such good ideas of what neurons do or how they change exactly.
this is amazing, very good and intuitive animations, keep up this amazing work
I am learning new things each day... thanks, man, for sharing this perspective. I never thought of transforming the data; I was used to rotating the lines! 👍
Great video ! This is a very intuitive approach to neural networks. Thank you
Fantastic video. I almost never comment on videos. Like, literally once or twice a year. This video earned this comment. Keep it up!
I'm hoping for more videos; the visuals are great and clean.
Man! That video was, to put it lightly, pure awesomeness!