@@randompersondfgb yes, you're not understanding. Uncommon Sense is saying "you" nailed it. The you is Cliff Rosen, the original commenter. He's saying that Cliff Rosen nailed it when he wrote the comment "Brilliantly explained".
I'm currently taking a computer science math course where the professor strongly advised everyone to watch this exact video series to get an intuition about what all the math is actually used for.
I don't understand notifications either. What, we're supposed to do things other than watch all the remaining 3b1b videos we haven't yet seen between notifications? Who would be so wasteful with their lives???
This is how you teach Deep Learning, people. The lectures I've seen fall into one of two groups: too hard, or too shallow/general. You've struck a balance between them. Thank you so much!
Around 2 years ago I was a sophomore statistics student and had no idea what deep learning was, until I found this video and the 3b1b channel. His clear explanation of neural networks and his animations blew my mind. Since then I've been on a journey in machine learning. For some random reason I clicked on this video again, and realized how long my journey in this field has been. This video really changed my life and I am really grateful for it.
@@yashrathi6862 The linear algebra series that was recommended in the video is a good start, other than that you should keep watching this video and you will start to understand it better the more you do. I am also in class 11 and that is what helped me
One year ago I found this video. I couldn't understand a single word of it. A year later, I am back and I still cannot understand it. I am fucking stupid.......
@@yashrathi6862 To be honest there are no real "prerequisites" for learning neural networks; in the end it comes down to how familiar you are with the concepts of basic graph theory. However, I admit that it can be pretty overwhelming to try and comprehend all the stuff at once, which is why being comfortable with linear algebra is a must. Apart from that, you should try your hand at programming; perhaps the algorithmic mode of thinking would help you develop an intuition for neural networks. And yes, of course try to explore graph theory, for neural networks will resonate much better with you once you do, imo.
@@omarz5009 The main downside of Python is that it's a high-level language and hence kind of slow. But for ML and NNs it has several powerful libraries (pandas, numpy, tensorflow) which make up for that. Since Python supports extensions written in C, those libraries can be optimized like heck, to the point where bothering with the same stuff in C++ is just wasted time. Plus Python is much easier to learn, hence more people use it and develop for it.
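The point about C-backed libraries making up for interpreter overhead can be sketched directly (NumPy assumed installed; numbers will vary by machine): the same elementwise computation, once as a pure-Python loop and once as a single vectorized NumPy call whose inner loop runs in compiled C.

```python
import time
import numpy as np

n = 1_000_000
xs = list(range(n))
arr = np.arange(n, dtype=np.float64)

# Pure-Python list comprehension: every multiplication goes
# through the interpreter one element at a time.
t0 = time.perf_counter()
squares_py = [x * x for x in xs]
t_py = time.perf_counter() - t0

# NumPy: one call; the loop over elements runs in compiled C code.
t0 = time.perf_counter()
squares_np = arr * arr
t_np = time.perf_counter() - t0

print(f"pure Python: {t_py:.4f}s  NumPy: {t_np:.4f}s")
```

On typical hardware the NumPy version is one to two orders of magnitude faster, which is the whole argument for writing ML code in Python on top of C-backed libraries.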
This is the 80s generation: we listened to rock music and looked for ways to get things done better. We grew up without mobile phones, just sitting in front of a computer or playing basketball outside in the park. We grew up without rap or hip-hop, and without thinking that the gangster is a cool guy! This is what the new generations badly need!
And our generation is unlucky that we had no such mentors, and no internet to deliver their videos. Taking this into account, we demand results, youngsters! We, at least, had an excuse for being dumb :)
This is my first introduction to machine learning, and I only had to watch this twice to get it. It really goes to show how good a teacher this guy is; the effort he puts in is nothing short of amazing!
This video kickstarted my journey in ML a year back. Trust me, back then I watched this video three times to finally understand it. It might be challenging for a few to get, but when you get it, it just feels amazing.
To think that someone would make a video on neural networks and explain them in a way so simple yet insightful is such a blessing, especially for people who want to dig deep into machine learning/deep learning. Thanks 3Blue1Brown!
I totally agree, my friend. Today is a very important day in the history of youtube mathematics. And since I am the 100th person who liked your comment, I would like to give a little inspirational speech: To all mathematicians, physicists, engineers, computer scientists, or people who want to become one of those in the future, today is a very important day. The best youtube mathematician, 3Blue1Brown, has made a video about neural networks and plans to make others about it in the future. I think it's not necessary to explain the inherent significance this topic has concerning the future of our technology and our understanding of the universe and the processes going on in it. These videos will help the new scientific generations to cope with the structures still to be found and to bring on a new and deeper understanding of the things that have been found and examined before. Humanity is reaching a point where the wish to understand the world is higher than it has ever been before. You, dear future scientists, can all be a part of the progress we are going through; you just have to have the Will and the Strength for it. Never give up if things aren't working properly or as you expected, and always remember: in the end, everything will be fine, so if it isn't fine, it's not the end. Actually, I have reached the end of my little inspirational speech (and it is fine ;) ), and to complement it well, I want to quote a famous poem which plays an important role in a very good and famous science fiction movie.... "Do not go gentle into that good night, Old age should burn and rave at close of day; Rage, rage against the dying of the light. Though wise men at their end know dark is right, Because their words had forked no lightning they Do not go gentle into that good night. Good men, the last wave by, crying how bright Their frail deeds might have danced in a green bay, Rage, rage against the dying of the light." Thank you.
I am just astounded. I spent so much time trying to understand this concept. Everywhere I looked people would show the similar neural network animation, but no one ever really explained and exemplified every single step, layer, term and mathematics behind it. The video is really well structured and with amazing animations. Extremely well done. My mind is so blown I can barely write this comment.
I took a deep learning lecture last semester, and my professor couldn't explain in 4 frickin' months what you explained in 20 minutes. Much, much appreciated, man. You're doing awesome work; I hope to learn a lot from you.
how is it possible that I can lie in my bed on a Sunday and am presented with mind-boggling cutting edge knowledge told by an incredibly soothing voice in a world class manner on a 2K screen of a pocket supercomputer basically for free
@@Charge11 And software engineering advancements, thousands of years of intellectual history, biological evolution of conscious brains and so forth. point is, it's miraculous if you step back far enough.
@@duykhanh7746 A bit late, but in case your question hasn't been answered yet: it doesn't really matter if you have a value > 1. Basically anything above 0 is an activation, and you can view the size of "a" as the intensity of the activation. Biological neurons can also be more active by firing in fast succession (up to a maximum possible firing rate of around 250-1000 Hz, depending on the source), but you don't want to introduce things like loops into artificial neurons, so as not to slow down your network. So to simulate this kind of behavior, you just let the output get bigger. You can compensate for the lack of an upper limit in the following neurons by adjusting the weights and the biases. TL;DR: No. :D
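A tiny sketch of the compensation idea in that reply, using a ReLU-style unbounded activation (all numbers arbitrary): a neuron's output can exceed 1, and the next layer simply scales it back down with a smaller weight.

```python
def relu(x):
    # ReLU has no upper bound: any positive input passes through unchanged.
    return max(0.0, x)

# A neuron with a large weighted sum: its activation is 5.0, not capped at 1.
big_activation = relu(5.0)
print(big_activation)  # 5.0

# A following neuron compensates with a small incoming weight,
# so the unbounded output never needs an explicit ceiling.
next_weight = 0.1
next_input = next_weight * big_activation
print(next_input)  # 0.5
```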
Wrote up some notes from the video to read quickly. Hope it helps somebody.
● Neural networks can recognize handwritten digits, letters, words (in general, tokens).
● What are neurons?
○ Something that holds a number in [0, 1].
○ The higher the number, the higher the "activation".
● The input image is a 28*28 grid of pixels, each holding a value between 0 and 1 (its activation number).
○ These 784 pixels form the first layer of the network; the last layer contains 10 neurons (units), one per digit.
○ Activations are passed forward layer by layer, ending at the 10-unit layer, again with values between 0 and 1. The closer a unit's value is to 1, the higher the probability that the scanned image represents that unit's digit, so the index of the unit with the highest value is the network's answer.
○ The choice of 16 units in the two hidden (middle) layers is arbitrary.
○ Each unit is linked to units in the next layer and contributes to their activation, which in turn causes further activation downstream.
○ Each unit can be thought of as identifying how much a certain region "lights up", then sending a value to the next layer, which reacts based on the received values.
○ To find whether a certain unit lights up, let each incoming activation be a_i and assign each connection a weight w_i. The weighted sum is: w1*a1 + w2*a2 + w3*a3 + ... + wn*an
○ Picture these weights laid out on the pixel grid, each either 'on' or 'off' according to whether it is positive or negative; in the video, green represents on and red represents off.
○ Restrict attention to a region where the weights are mostly on: the sum then mostly adds up the activations of the pixels in that region.
○ If the positive weights in some region are surrounded by negative weights, the unit picks up on exactly the kind of edge we're looking for.
○ Of course, the weighted sum can be any number. To 'squish' the whole number line into (0, 1), we apply the function Sigma(x) = 1/(1 + e^-x), the sigmoid function or logistic curve. The unit's activation becomes: Sigmoid(w1*a1 + w2*a2 + ... + wn*an)
○ But what if you don't want the unit to light up for every positive value, and only when the weighted sum fulfills some condition, such as > 10? That threshold is a 'bias for inactivity'. Using this example, the expression becomes: Sigmoid(w1*a1 + w2*a2 + ... + wn*an - 10). Here, 10 is the "bias".
○ All these knobs and dials are what the term "Learning" refers to: finding the values of the weights and biases that produce the expected behavior.
○ The complete expression can be packed into the formula: a(1) = Sigma(W*a(0) + b) ((1) and (0) are superscripts here), where W is a k*n matrix whose elements are the weights, a(0) is an n*1 matrix whose elements are the activations of the previous layer, and b is a k*1 matrix whose elements are the biases.
○ NOTE: the sigmoid function is not used very often now; it has largely been replaced by ReLU (Rectified Linear Unit), defined as ReLU(a) = max(0, a), i.e. f(a) = a for a >= 0 and f(a) = 0 for a < 0.
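The matrix formula in the notes above, a(1) = sigma(W*a(0) + b), can be sketched in a few lines of NumPy. The sizes match the video's network (784 inputs, a hidden layer of 16), but the random weights and biases are purely illustrative; "learning" would mean finding good values for them.

```python
import numpy as np

def sigmoid(x):
    # Squishes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 16
a0 = rng.random(n_in)                       # input activations, each in [0, 1]
W = rng.standard_normal((n_hidden, n_in))   # k x n weight matrix
b = rng.standard_normal(n_hidden)           # one bias per hidden neuron

# One layer of the forward pass: a(1) = sigma(W a(0) + b).
a1 = sigmoid(W @ a0 + b)
print(a1.shape)  # (16,)
```

Each of the 16 hidden activations lands strictly between 0 and 1, exactly as the squishing step in the notes promises.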
Behind this material is an extreme amount of giftedness. Explaining something is not easy: you first need a solid model of the topic in your own brain, and then you need to translate it into a mental model that can be faithfully exported into others' brains. I congratulate you on this excellent job, and I hope you appreciate what you are and what you are doing. This is much more important than how much money this business brings.
I know you read this all the time, but I must say it. Your videos are simply incredible! Your work reshapes education. You deserve every cent that this platform puts in your pocket.
This channel is so damn good. Other channels give some terrible analogies and some other explain it in extreme technical detail. This strikes the perfect balance and provides a foundation to understand the more technical details
This is the best intro to neural networks I have ever seen. The presentation is excellent! The animations are very, very helpful, especially for understanding the formulas and matrices and how they came to be. Thanks a million. Looking forward to the next one.
My goodness, I’ve watched nearly 20 videos on neural networks, and none of them come close to this one in terms of visual representation and clarity. Thank you very much.
3Blue1Brown's "Sigmoid Squishification Function" (11:23) is the most brilliantly named function I have ever heard. Absolutely brilliant: the merger of the technical with the simple, with a double alliteration for easy memory.
Watching this for a second time, and I can't believe how illuminating it is to come back to the basics and get a renewed understanding. Grant, you're a treasure.
One “like” is not enough for the work that has gone into making such a video. This video should be part of the curriculum, and he should get royalties for it. Awesome work!
I am a Data Scientist and I would like to say THANKS. I have NEVER met anyone with the ability to teach complex things this way. A M A Z I N G. Please continue like this, for example with more statistics videos. Your videos could substitute for many university courses.
Finally, a video that does more than just present some neurons and layers and say, “here’s an activation function.” Your video describes how the model is developed and why the algorithmic approach is appropriate for the problems neural networks try to solve. Thanks!
@3Blue1Brown - A quick suggestion: Red-green color deficiency is the most common form of colorblindness. When trying to represent information via a color spectrum, could you please choose colors other than red and green for this reason? Red and blue are good choices because they are distinguishable by both red-green color deficient people as well as blue-yellow color deficient people, which is the second-most common form of colorblindness. I was completely unable to tell which pixels have positive weights and which ones had negative weights in your example due to my colorblindness. Thanks, and keep up the fantastic videos :)
The upper row of this white zone had negative weights, the central part had positive ones, and the bottom row had negative weights. This means that if you have a horizontal line, this neuron will have a high value, but if there's a vertical line or any other pattern, it will have a value closer to 0.
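The weight layout that reply describes can be checked with a toy 3x3 patch (all values hypothetical): negative weights on the top and bottom rows, positive in the middle, so a horizontal stroke produces a large weighted sum while a vertical stroke does not.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weight patch: negative above and below, positive in the middle.
W = np.array([[-1.0, -1.0, -1.0],
              [ 1.0,  1.0,  1.0],
              [-1.0, -1.0, -1.0]])

horizontal = np.array([[0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0],
                       [0.0, 0.0, 0.0]])
vertical = np.array([[0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])

# Weighted sum = elementwise product, summed; then squished by the sigmoid.
print(float(sigmoid((W * horizontal).sum())))  # ~0.95, neuron lights up
print(float(sigmoid((W * vertical).sum())))    # ~0.27, neuron stays dim
```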
It's still refreshing to watch this video, even after so many years. I used to watch it when I started my DS journey, to grasp these intriguing concepts. Such a remarkable video!
This is the first time I'm commenting on a RUclips video and honestly, I'm so thankful people like you exist! I wish only the best for you in whatever you do!
Neural networks is a topic I've wanted an intuitive understanding of for a while. 3b1b has the most intuitive explanations on RUclips. This video could not be any better.
N·J Media - Intuitive understanding is, for example, understanding that in a triangle the side across from a given angle has to increase or decrease in length as that angle does, without a mathematical proof.
3Blue1Brown is the go-to channel that explains complex math concepts with the highest clarity without any loss of complexity of the topic. Simply brilliant!
3Blue1Brown should probably start their own degree program now. The visuals and teaching they provide are just astonishingly simplified and easy to understand. Hats off to you!
Another reason worth mentioning why ReLU is used instead of the sigmoid is simply that it is much cheaper to compute (clipping negative values vs. exponential operations). Another important issue with the σ function is its gradient, which is always below 0.25. Since modern networks tend to have many layers, and because a product of many values < 1 quickly becomes really small (vanishes), networks with a larger number of layers won't train when using the sigmoid. And as always, amazing video, animation and explanation!
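The gradient claim in that comment is easy to verify numerically. A minimal sketch: the sigmoid's derivative is σ'(x) = σ(x)(1 - σ(x)), which peaks at exactly 0.25 at x = 0, so chaining many layers multiplies factors of at most 0.25, while ReLU's gradient is 1 for any positive input.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # sigma'(x) = sigma(x) * (1 - sigma(x)); maximal at x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # ReLU's derivative: 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

print(sigmoid_grad(0.0))  # 0.25, the best case
# Backprop through many sigmoid layers multiplies such factors, so even
# the best case shrinks geometrically: ten layers already give ~1e-6.
print(0.25 ** 10)
# ReLU passes the gradient through unscaled wherever the unit is active.
print(relu_grad(3.7))  # 1.0
```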
This channel and the visualizations it produces to teach subjects like this one is the best advance in the history of communicating mathematical ideas. It's extraordinarily inspiring that one person can have such a large impact on the world today (and for generations to come). Thank you, Grant Sanderson.
I am not really from a math background but I am hugely interested in programming, and I must say this video has made it easy for me to understand the math behind neural networks! I loved it , thank you!!!
I work in a company developing just this kind of stuff. I’m still baffled how incredibly intelligent people are and I have no idea how they can repeatedly accept me as worthy enough to be with them.
ML grad student here and hands down Grant covered an entire chapter concisely and very clearly in this video. I don’t think reading any academic books will give you this amount of intuition on this subject within a few minutes. Still mesmerized by the effort!
Thank you 3b1b. This video certainly gave me a deep enough understanding to allow my neural networks to retain the information. EDIT: seems like I'm not the only one making lame puns about the title.
The first argument in the video, "You can recognize that all of these images are 3's, even though the pixels are very different," is complete bullshit. Handwriting varies *_EXTREMELY_* from person to person, so humans are very used to looking at different ways of writing the same thing, especially with things like cursive. It's not a surprise that we can identify the images; please don't talk like it is a surprise. It makes me feel like you're less intelligent than you really are.
Calm down a little... Everything said in this video is in the context of machine learning, computers, mathematics, algebra etc. So if we want to treat the brain as a complex computer, then its ability to recognize letters from pixels is amazing, and it gives food for thought about how the human brain really works.
Even 6 years after the making of this video, when we already have something as advanced as GPT-4, as a humble beginner in this domain I find this video so, so valuable for understanding the very basics! A huge thank you and kudos, sir!
Wow, a lot of the things I've learned in this first year of systems engineering are captured in this video, but previously I didn't understand the real essence of them. Thank you for these amazing vids! Greetings from Argentina :)
It took me one week to understand this when I was reading university lecture notes. You explained it to me in 20 minutes. You are such a savior. Thanks, 3Blue1Brown!
I'm still in shock that none of my friends recommended this channel to me. Since the day I started watching 3Blue1Brown's videos, my life has been more productive than ever! I never thought a lockdown would result in so much productivity!
I just love the way the concepts of neural networks are explained in this video. After watching it, you feel like you have an idea about the "building blocks" of a neural network. Since I'm new to the topic, it's hard to judge whether crucial things are left out or over-simplified, but I feel it's a great introduction to the topic. Thanks a lot for sharing this!
Before I realized it, I had started to understand neural networks, which had been nothing but a black box to me. I must say, if every teacher taught like you, the world would produce quality engineers and scientists. You really don't need to ask us for subscriptions; your work is so admirable, we can't stop ourselves from subscribing. You have redefined the phrase "Simplicity is the best way to handle complexity." Thank you very much, sir. I wish you stay healthy, wealthy and wise.
I'm in my first year of engineering, looking to go into CS, and this video makes me extremely excited for my coming education. I've already watched so many of your videos, and they've all had a similar effect. Thank you so much!
This is probably the best video that I've seen on the topic of basic artificial neural networks (ANNs). Most of the videos that I've seen on the topic are either overly complex or leave out important information about ANNs, so you're forced to watch many videos on the subject to understand the basics. But this video gives you all the basics without making you feel like there's a lot of information left out. Granted that there's a lot left to learn, but I'm sure chapter two will get into some of that.
This was one of the best tutorials on the fundamentals of neural networks. I was formerly a dentist and am now a neuroscience research fellow working on computer vision applications in behavioral neuroscience, and I have never encountered a tutorial that explains things so simply and concisely. Thanks for that :-)
You're kind of a genius man! I don't care how much you deny it. Your ability to distill these complex concepts into very simple ones and across so many fields in math is amazing. Also, the way you connect different fields of math to explain solutions REALLY shows a different type of mastery. Thank you for all these videos.
Same as davehx; actually it starts from 14:40. What I also find a little bit misleading is that the matrices W and a are shown as having the same number of rows, but that's not the case: the n columns of W match the n rows of a.
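The dimension point in that comment can be checked directly in NumPy (shapes chosen arbitrarily): W is k x n and a is n x 1, so their row counts differ (k vs n), yet the product W @ a is well-defined because the inner dimensions match.

```python
import numpy as np

k, n = 10, 16       # e.g. 10 output neurons fed by 16 hidden neurons
W = np.ones((k, n))
a = np.ones((n, 1))

# Inner dimensions match: n columns of W against n rows of a.
out = W @ a
print(W.shape, a.shape, out.shape)  # (10, 16) (16, 1) (10, 1)
print(float(out[0, 0]))             # each entry is the sum of n ones: 16.0

# Reversing the order fails: (n x 1) @ (k x n) has inner
# dimensions 1 and k, which don't match.
try:
    a @ W
except ValueError:
    print("a @ W raises ValueError: shapes don't align")
```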
Can't wait to see the videos you release on generative models! I've seen your probability videos and they were so great! Personally, I want to say thank you for making all these videos. It really solidifies everything I've learned. The way you use visuals to describe certain concepts is amazing! Keep it up!
Your videos are singlehandedly keeping my PhD research on track. Thank you for your time and effort!
This is what you're studying for your PhD? This is what I learned in high school...
@@randomguy4738 In which class exactly did you learn about neural networks? Did you also learn multivariable calculus (fundamental to even the simplest neural network) in your high school class? I would love to attend!
@@randomguy4738 Learning is not about PhD or high school, it's about need.
Whenever you need it, you learn.
@@P_Stark_3786 Obviously there's a reason they are separated; you won't be awarded a PhD if what you "need" to learn isn't at PhD level.
@@randomguy4738 What you learned and what he studies are on different levels. Just shut your mouth, son.
I am blown away by the visual clarity of this description of otherwise a complex technology! More please, I am willing to pay!
thanks fellow indian bro.
= 2 USD😂😂😂😂😂😂😂😂😂😂😂😂
@@andrewl2787 Don't be rude bro, appreciate his good intention, 2$ might mean a lot from where he's from.
@@Dtomper Well said. Peace and love worldwide.
@@andrewl2787 Okay, so how much did you pay? I'll match you, come on now.
Humanity has benefited a lot from your work Grant. Eternally thankful for your extraordinary work 🙏🏼
Bhai, you're also here
I watch your videos there and learn 😂😂
You're also coming here
Nice 👍
Every few years I come back to watch this series. The most intuitive and understandable explanation of neural networks that exists
It's like coming to look back at art. Pretty much every 3b1b video is a masterpiece!
Most educational videos give viewers the impression that they are learning something, while in reality, they cannot reliably explain any of the important points of the video later, so they haven't really learned anything. But your videos give me the impression that I haven't learned anything, because all the points you make are sort of obvious in isolation, while in reality, after watching them I find myself much better able to explain some of the concepts in simple, accurate terms. I hope more channels follow this pattern of excellent conceptual learning.
Huh, I never thought about it this way, but that's a nice way to phrase what I'm shooting for.
Being able to explain it at a conceptual level isn't good enough. You can only understand it by practicing (i.e., building neural nets yourself and playing with them).
busi magen, I don't have a story nearly as touching as that of you and your Grandmother, but I think I would cite my dad as teaching this by example when I was growing up, in that the way that he would describe things centered on what they're actually doing in simple terms, rather than on learning the appropriate jargon.
:-D Eebsterthegreat: a not-so-obvious, insightful compliment in reality, as long as you don't read the "I haven't learned anything" part in isolation.
A particle accelerator is used for creating/verifying hypotheses. Your analogy is terrible.
Regarding learning a new skill, one needs to practice rather than just passively absorb information. This is why homework exists.
Regarding neural nets, anyone who thinks they can "explain" NNs after watching this video is frankly laughable. (Not saying the content of this video is bad.)
Quote: “Any fool can make something complicated. It takes a genius to make it simple.”..... Nailed it.
"What one fool can do, another can"
Some guy who wrote a really popular calc textbook
It's a statement I don't agree with. At university, we are taught things in a formal and abstract way, not just for the sake of overcomplicating things. I don't think professors, who are primarily researchers, should be considered "fools" because they fail to teach their subject in a more intuitive manner.
@@genericperson8238 a good researcher is not necessarily a good teacher.
@@genericperson8238 Yes, they're a fool in pedagogy
A genius doesn’t really make it simple so much as they make it concise.
I study mathematics, physics and architecture. By definition this man is an ORACLE in the strict meaning of the word.
With all honesty I never imagined someone explaining complex topics with the dexterity this man has. He is literally an institution and an outstanding teacher.
The computer graphics and the illustrations are simply astounding. This guy never evades complexity; he never evades complex arguments. He illustrates the complexity and dives into an exhaustive explanation of the details.
It's extremely rare to see a professor, and such a dedicated creator, put so much effort into explaining, animating and describing mathematics the way he does.
Well said sir. 🙌
@@sdc8591 This video never claimed to be an expert level tutorial so stop comparing it to those type of tutorials.
@@viharcontractor1679 When did I say that? Please read my comment again. I have no issues with the tutorial; my objection is to the comment I replied to. One should always make appropriate comments. Just as it is wrong to say something rude, it is also wrong to give false praise. Have you read the comment from kummer45? Calling the tutor of the video an "Oracle"? Really? Words like that should be used for someone like Swami Vivekananda, not for some ordinary tutorial. It almost hurts to see such misuse of words.
@@sdc8591 Come on man, what is wrong with that comment? The video is fantastic in every way. It is dense enough that I've had to watch it several times over, yet it is able to communicate the concept of a neural network in such a way that even my pea brain can grasp this topic. Please think before commenting and make a proper comparison.
@@JCake First of all, don't use the word 'man', I am a girl. I never said the video is bad. It is fine. Why is everyone coming over here and defending the video? Is it so difficult to understand what I am saying? The comment from kummer45 is an exaggeration, and I stick to that, because it is. If a video is good enough, one does not need to watch it a second time to understand the concept. I had seen one video by Mathew Renze on the same topic. That long tutorial was the first time I came across neural networks. It was more than 1.5 hours of a series of videos. I never watched it again and still remember every single concept. Now if this man is an oracle, what will you call him?
I'm literally experiencing the future of education right now, and this was posted 6 years ago
I have been programming for more than ten years and I have never seen anyone explain a complex idea in such clean and clear terms. Well done.
yeah, that's the power of manim!
can you explain the animation at @9:30
Because you only program without mathematics
secret scientist are much further than this, like they can wirelessly from satellite I think give dream images and change dreams you make yourself... for me they always try to make it ugly,...
Markov Chains ? how simple is that ?
I can't wait for neural networking to be able to recognize my doctor's prescription.
They need to study pharmacists to figure that out.
That would be magnificent!
from what I understand (I am also a dummy, I'm just telling you what I think):
the inputs are the pixels
the weights are how white or black each pixel is
like, let's say we need the first pixel to be white
so we need the computer to know there is a pixel there (hence it's an input)
and we need the computer to change how much that whiteness counts (hence the computer's ability to change weights)
it's actually impossible, that level of calligraphy is indecipherable
Doctors' writing makes String Theory a piece of cake for humans, AIs and aliens.
Fantastic visualized learning!
Brilliantly explained
I had learned about neural networks and knew the mechanics of it. But this is way better explained - you nailed it - brilliantly.
@@uncommonsense9973 Sorry to disappoint you but the commentor isn't the creator of the video lol
@@randompersondfgb he was agreeing with the commenter bro
@@wnyduchess To quote the reply itself; “this is way better explained - *you* nailed it - brilliantly”
@@randompersondfgb yes, you're not understanding. Uncommon Sense is saying "you" nailed it. The you is Cliff Rosen, the original commenter. He's saying that Cliff Rosen nailed it when he wrote the comment "Brilliantly explained".
I'm currently taking a computer science math course where the professor strongly advised everyone to watch this exact video series to get an intuition about what all the math is actually used for.
Bro, which college you studying in now?
@@vgdevi5167 Aarhus University, Denmark
@@vgdevi5167 1st semester :)
Good to learn from, and also, entertaining to watch. double win.
Linear Algebra? That's what I'm taking in about 6 weeks, and it's basically the math behind neural networks
No man, we don't get notifications for your videos. We search for 3b1b. That's how powerful your content is.
I don't understand notifications either. What, we're supposed to do things other than watch all the remaining 3b1b videos we haven't yet seen between notifications? Who would be so wasteful with their lives???
I just type questions into YouTube and always seem to get his videos as answers
@@FlyingSavannahsusually you'd enable notifications only for youtubers who make videos that you're almost always interested in
This is how you teach Deep Learning, people. The lectures I've seen can be categorized into two groups: too hard or too shallow/general. You have struck a balance between them. Thank you so much!
In schools everyone taught us to practice maths but this man teaches us to imagine maths
True🔥🔥
best comment here, period!
Juju
Did you mean 'visualize' maths
They don't teach you maths, they teach you how to solve exam questions. Maths is what 3Blue1Brown teaches.
Around 2 years ago I was a sophomore statistics student and had no idea what deep learning was, until I came across this video and the 3b1b channel. His clear explanation of neural networks and the animations blew my mind. Since then I started my journey in machine learning. For a random reason I clicked on this video again, and realized how long my journey in this field has been. This video really changed my life and I am really grateful for it.
@3Blue1Brown
Please give a heart .......
I am in class 11 currently and unfortunately I am not able to understand this. Could you point me to some prerequisites?
@@yashrathi6862 The linear algebra series that was recommended in the video is a good start, other than that you should keep watching this video and you will start to understand it better the more you do. I am also in class 11 and that is what helped me
One year ago I came across this video. I couldn't understand a single word of it. A year later, I am back and I still cannot understand it.
I am fucking stupid.......
@@yashrathi6862 To be honest, there are no real "prerequisites" for learning neural networks; in the end it comes down to how familiar you are with the concepts of basic graph theory. However, I admit that it can be pretty overwhelming to try and comprehend all the stuff at once, which is why being savvy with linear algebra is a must.
Apart from that, you should try your hand at programming; perhaps the algorithmic mode of thinking would help you develop an intuition for neural networks. And yes, of course try to explore graph theory, for neural networks will resonate much better with you once you do, imo.
It takes 3000-4000 lines of code to make those graphics possible, he's a freakin legend
Which is best for neural nets: Python or C++?
@@omarz5009 The main downside of Python is that it's a high-level language and hence kind of slow. But for ML and NNs it has several powerful libraries (pandas, numpy, tensorflow) which make up for that. Since Python supports calling into C code, those libraries can be optimized like heck, to the point where bothering with the stuff in C++ is just wasted time. Plus Python is much easier to learn, hence more people use it and develop for it.
@@jagaya3662 That makes sense. Thanks for explaining :)
@@anelemlambo497thank you for explaining :)
How were these graphics and animations actually made?
My God! No words to express as to how you made such a complex topic to be understood using visuals so easily! Hats off!!
In class : printf("Hello world");
The exam :
kkkk
Why did you introduce us to quantum mechanics? You suck.
😂😭
@@ThomasJr {i>{
@ゴゴ Joji Joestar ゴゴ Lol that's because physicists couldn't find anything interesting
Our generation is lucky to have mentors like you, thank you so much sir!
Fine Indeed, Refreshing Super Tenacious
This is the 80s generation we were listening rock music and looking how to get things done better we grew without mobile phones just sitting front of a computer or playing basketball outside in the park. We grew without rap, hip-hop, either thinking that the gang is a cool guy! this is what now generations require badly!
@@elgary9074 I hope that someday scientists will be able to understand what you have written.
@@MiguelAngel-fw4sk 🤣
And our generation is unlucky that we had no such mentors and internet to deliver their videos. Taking this into account, we demand results, youngsters! We had, at least, an excuse for being dumb :)
this is my first introduction to machine learning and I only had to watch this twice to get it; really goes to show how good a teacher this guy is. The effort he puts in is nothing short of amazing!
Definitely am gonna have to watch it again. Got half way through and it started to get pretty heavy
I am currently doing my Master's in Data Science and this 18 minute video is better than any course I have taken so far
maybe give up ?
maybe ligma
then you should get into a good university
The fact that I was sent here by my university lecturer is a testament to how good 3Blue1Brown is.
Same here😂
Same here😂😂
Same here 😂😂😂
Same here 😂😂😂😂
Hm......
_Daniel Shiffman?_
This is my very first time commenting on a YouTube video, and it's just to say: This is the best explanation of anything ever.
yet, people don't understand
Some Eng congrats man
i couldn't understand anything over a minute in
Wait until you see his video about the Fourier Transform. My GOD that vid is the best thing i've seen in ages.
One of the few teachers that don't make you feel stupid, but actually help you understand the topic. I appreciate the time you spend on this.
This video kickstarted my journey in ML a year back. Trust me, back then I watched this video three times to finally understand it. It might be challenging for a few to get it, but when you do, it just feels amazing
@@DawnshieId why do you think it cannot go beyond 1?
Felt like the brain chair meme when this video finally clicked (after the 4th watch)
@@chitranshsrivastav4648 How do you weight?
after 8:38 felt really hard to understand.. I will try again and comment back
Hell yeah. Im literally in your shoes rn
I just can't believe this came out 7 years ago.
You are the best.
Same😪
Bio teacher: what is a neuron?
Me: a thing that holds a number between 0 and 1
lmao good one.
get out of my class
InSomnia DrEvil Great explanation, but you ruined the joke lmao
@@prabeshpaudel5615 haha
Also known as FUZZY Logic
"Even when it works, dig into why" - 3B1B. Your lessons are pure gold sir. I'm here after watching the entire Essence of Linear Algebra. Thank you.
THE TEACHING ASIDE , THOSE GRAPHICS MAN! TAKES LOT OF EFFORT!
Exactly....Lot of effort is required to make this type of video.
why iam seeing Indians everywhere
@@hmm7458 cause u are also an indian...
@@kartikeya9997 that's not an answer lmaooo
I know I can't do better. I'll be referring students in my neural networks class to these videos, lol.
To think that someone would make a video of neural network and explain it in a way so simple yet insightful is such a bless especially for people who want to dig deep into machine learning/ deep learning. Thanks 3Blue1Brown!
Is anyone else nominating this series for the "Distill Prize for Clarity" in 2019? I really think he deserves it, excellent visualizations.
Yeah I would, every day of the year.
Totally! Animation and visualization here makes understanding as clear as a crystal!
yesssss
@@benisrood Can it be nominated for anything else?
PART 1? THERE WILL BE MORE? YAS 3BLUE1BROWN IS DOING NEURAL NETWORKS! TODAY IS A GOOD DAY
You will find this series very helpful as well.
ruclips.net/video/bxe2T-V8XRs/видео.html
I totally agree, my friend. Today is a very important day in the history of YouTube mathematics. And since I am the 100th person who liked your comment, I would like to give a little inspirational speech:
To all mathematicians, physicists, engineers, computer scientists or people who want to become one of those in the future,
today is a very important day. The best YouTube mathematician, 3Blue1Brown, has made a video about neural networks and plans to make more about them in the future. I think it's not necessary to explain the inherent significance this topic has concerning the future of our technology and our understanding of the universe and the processes going on in it. These videos will help the new scientific generations cope with the structures still to be found and bring on a new and deeper understanding of the things that have been found and examined before. Humanity is reaching a point where the wish to understand the world is higher than it has ever been before. You, dear future scientists, can all be a part of the progress we are going through; you just have to have the Will and the Strength for it. Never give up if things aren't working properly or as you expected, and always remember: in the end, everything will be fine, so if it isn't fine, it's not the end.
Actually, I have reached the end of my little inspirational speech (and it is fine ;) ), and to complement it well, I want to quote a famous poem which plays an important role in a very good and famous science fiction movie....
"Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light."
Thank you.
This comment is lit!
yas
RNN? LSTM?
I am just astounded. I spent so much time trying to understand this concept. Everywhere I looked, people would show the same neural network animation, but no one ever really explained and exemplified every single step, layer, term and the mathematics behind it.
The video is really well structured and with amazing animations. Extremely well done. My mind is so blown I can barely write this comment.
"My mind is so blown I can barely write this comment." lmao
I took a deep learning lecture last semester and my professor couldn't explain in 4 frickin' months what you explained in 20 mins. Much appreciated, man, you're doing awesome work. I hope to learn a lot from you.
how is it possible that I can lie in my bed on a Sunday and am presented with mind-boggling cutting edge knowledge told by an incredibly soothing voice in a world class manner on a 2K screen of a pocket supercomputer basically for free
Welcome to the 21st century
yet 90% of people use that supercomputer to mindlessly scroll feeds.
@@Charge11 And software engineering advancements, thousands of years of intellectual history, biological evolution of conscious brains and so forth.
point is, it's miraculous if you step back far enough.
because it isn't
It's not free, Google's massive network of AI neuron is harvesting terabytes upon terabytes of information about you every time you click on anything.
I'm studying AI for my masters degree and my professor told everyone to watch this video to understand the concept :D
he knows...
amazing. :D
me too!
School is a scam
Where are you studying ai??, I mean what is the name of your college or university????
As a person who has self-learned a bit of python and is just trying to learn this stuff, this is exactly the best place to begin.
My thoughts exactly!
At the end of the video, he showed the ReLU function f(a)=a for a>0, so the value of the neuron doesn't have to be between 0 and 1?
That's me
@@duykhanh7746 A bit late but if your question hasn't been answered yet: It doesn't really matter if you have a value >1. Basically anything above 0 is an activation and you can also view it as the size of "a" being the intensity of the activation. Biological neurons can also be more active by firing in fast succession (up until they reach the maximum possible firing rate of like 250-1000Hz depending on the source), but you don't want to introduce things like loops in artificial neurons to not slow down your network. So to simulate this kind of behavior, you just let your output get bigger. You can compensate for the lack of an upper limit in the following neurons by adjusting the weights and the biases.
TL;DR: No. :D
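The point in the reply above can be checked in a few lines (a hypothetical sketch; the weight and bias values here are made up for illustration):

```python
def relu(x):
    # ReLU passes positive values through unchanged, so outputs can exceed 1
    return max(0.0, x)

a = relu(3.7)              # 3.7, well above 1: no upper cap
# The next layer can compensate with a small weight and a bias:
w_next, b_next = 0.1, -0.2
z = w_next * a + b_next    # roughly 0.17, back in a modest range
print(a, z)
```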
🙂
This the most comprehensive and understandable explanation of a neural network. Thank you.
I've written up some notes from the video for quick reading. Hope they help somebody.
• Neural networks can recognize handwritten digits, letters, words (in general, tokens)
• What are neurons?
○ Something that holds a number in [0, 1]
○ The higher the number, the higher the "activation"
• Consider a 28×28 grid in which each unit holds a value between 0 and 1 (the activation number)
○ These 784 pixels form the first layer; then come the intermediate layers, and the last layer contains 10 "cells" (units).
○ Values are passed from the previous cells toward the last layer (the 10-unit layer), again between 0 and 1. The closer a value is to 1, the more probable it is that the scanned image represents that unit cell.
So the unit cell containing the highest value indicates, by its index, the digit in the scanned image.
○ The choice of 16 cells in the second and third layers is arbitrary.
○ Each cell is linked to (causes activation in) some, not all, cells in the next layer, which in turn cause further activation.
○ Each cell corresponds to some sort of identification of how much a certain region "lights up", and then sends a value to another node, which reacts based on the received value.
○ To find whether a certain cell will light up, let each cell's activation be 'a', and let each connection be assigned a certain weight 'w'. The sum of the products of each cell's 'a' and 'w' is:
w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an
○ Picture these weights on a grid of cells, each either 'on' or 'off' according to whether its weight is positive or negative. In this case, 'green' represents on, and 'red' represents off.
○ If we concern ourselves with a certain region where the cells are mostly on, we are basically summing up the weights of those grid cells.
○ Then, if you have brighter grid cells in some part, surrounded by dark grid cells, that area is the edge we're looking for.
○ Of course, the weighted sum can give any value. In order to 'squish' the number line into [0, 1], we use the function:
Sigmoid(x) = 1/(1 + e^-x)
which is a sigmoid function, or logistic curve. Our expression now becomes:
Sigmoid(w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an)
○ But what if you don't want the neuron to light up for every positive value, and instead want it to light up only when the weighted sum fulfills some condition, such as being > 10? This is called a 'bias for inactivity'. Using this example, our expression becomes:
Sigmoid(w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an - 10)
Here, 10 is the "bias".
○ All these different knobs and dials lead us to the term "learning", which just means finding the values of the weights and biases that produce the expected behavior.
○ The complete expression above can be condensed into the formula:
a(1) = Sigmoid(W*a(0) + b)
( (1) and (0) are superscripts here )
where W = a k×n matrix whose elements are the weights,
a(0) = an n×1 matrix whose elements are the 'a' of each cell,
b = a k×1 matrix whose elements are the biases of the cells in the next layer.
○ NOTE: The sigmoid function is not used very often now; it has largely been replaced by ReLU (Rectified Linear Unit), defined as:
ReLU(a) = max(0, a), i.e. f(a) = a for a >= 0, and f(a) = 0 for a < 0.
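The notes above can be sketched as a tiny one-layer forward pass (a hypothetical minimal example in numpy, not code from the video; the layer sizes and random values are purely illustrative):

```python
import numpy as np

def sigmoid(x):
    # squishes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # the modern alternative: max(0, a)
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

n, k = 784, 16                          # 28x28 input pixels, 16 hidden cells
a0 = rng.random((n, 1))                 # input activations, each in [0, 1]
W = rng.standard_normal((k, n)) * 0.01  # k x n weight matrix
b = rng.standard_normal((k, 1))         # k x 1 bias vector

a1 = sigmoid(W @ a0 + b)                # a(1) = Sigmoid(W * a(0) + b)
print(a1.shape)                         # (16, 1)
```

Note that the bias vector has k rows, one per cell in the next layer, matching the k rows of W @ a0.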
thanks..
Good work👏👏👍, thnx
Thank you sooooooo Much!!!!!!
You saved my life!!!!
Thanks!
I just watched Welch labs machine learning playlist a few weeks ago. It was mind-blowing. I'm glad you're getting into machine learning too! : )
Yes, Welch Labs is truly great.
Thanks for the recommendation!
Behind this material is an extreme amount of giftedness. Explaining something is not easy. You first need a solid physical model of the topic in your brain, and then you need to translate this model into a mental model that can be faithfully exported into others' brains. I congratulate you on this excellent job and I hope that you appreciate what you are and what you are doing. This is much more important than how much money this business brings.
I know you read this all the time, but I must say it. Your videos are simply incredible! Your work reshapes education. You deserve every cent that this platform puts in your pocket.
This channel is so damn good. Other channels give some terrible analogies and some other explain it in extreme technical detail. This strikes the perfect balance and provides a foundation to understand the more technical details
I wish this guy was my math teacher back in high school.
Just shows that good teaching skills are very rare.
Understandable animations, perfectly timed with the words, and no holes in the explanations: that's what does the trick
This is the best intro to neural networks I have ever seen. The presentation is excellent! The animations are very, very helpful, especially in understanding the formulas and matrices and how they came to be. Thanks a million. Looking forward to the next one.
Every second of this video is a Pre-requisite to the next second of the video :D
My goodness, I’ve watched nearly 20 videos on neural networks, and none of them come close to this one in terms of visual representation and clarity. Thank you very much.
3Blue1Brown
"Sigmoid Squishification Function": 11:23
The most brilliantly named function I have ever heard. Absolutely brilliant. The merger of the technical with the simple, with a double alliteration for easy memory.
Sigma squishy function
watching this for a second time and i can't believe how illuminating it is to come back to the basics and get a renewed understanding -- grant, you're a treasure
One "like" is not enough for the work that has gone into making such a video. This video should be part of the curriculum and he should get royalties for it. Awesome work!
Yes!
+1
Can't believe this video was made 7 years ago. Such a nice explanation. Thanks, man.
I am a Data Scientist and I would like to tell you THANKS.
I have NEVER met anyone with the ability to teach complex things in this way.
A M A Z I N G.
Please continue like this, for example with other statistics videos. Your videos could substitute for many university courses.
May I know which type network connection allows neural network
Finally, a video that does more than just present some neurons and layers and say, “here’s an activation function.” Your video describes how the model is developed and why the algorithmic approach is appropriate for the problems neural networks try to solve. Thanks!
@3Blue1Brown - A quick suggestion: Red-green color deficiency is the most common form of colorblindness. When trying to represent information via a color spectrum, could you please choose colors other than red and green for this reason? Red and blue are good choices because they are distinguishable by both red-green color deficient people as well as blue-yellow color deficient people, which is the second-most common form of colorblindness. I was completely unable to tell which pixels have positive weights and which ones had negative weights in your example due to my colorblindness. Thanks, and keep up the fantastic videos :)
The upper row of this white zone had negative weights, the central part had positive, and the bottom row had negative weights. This means that if you have a horizontal line, this neuron will have high values, but if you have a vertical line or any other pattern, it will have a value closer to 0.
windows 10 has colour filters that will fix this for you. go to settings, ease of access, and click on 'colour filters'
It's still refreshing to watch this video, even after so many years. I used to watch this video when I had started my DS journey and used to grasp these intriguing concepts. Such a remarkable video!
Thanks!
This is the first time I'm commenting on a YouTube video and honestly, I'm so thankful people like you exist! I wish only the best for you in whatever you do!
Neural networks is a topic I've wanted an intuitive understanding of for a while. 3b1b has the most intuitive explanations on YouTube.
This video could not be any better.
MTO Intuitive understanding?
It isn't intuitive understanding if you have been looking for an explanation for a while xd
N·J Media - Intuitive understanding is understanding that in a triangle, for example, the side across from a given angle has to increase or decrease in length relative to its opposite angle, without a mathematical proof.
Commendable beyond words! 7 years later and this video still explains the concept better than anyone today could
and it was GPT o1 that suggested your video 😂
3Blue1Brown is the go-to channel that explains complex math concepts with the highest clarity without any loss of complexity of the topic. Simply brilliant!
The most intuitive channel on YouTube...
You're the first person to explain bias in an intuitive manner. Thank you.
3blue1brown should probably start their own degree program now. The visuals and teaching they provide are just astonishingly simplified and easy to understand. Hats off to you!
Another reason worth mentioning why ReLU is used instead of sigmoid is simply that it is much cheaper to calculate (obviously: cutting negative values vs. exponential operations). Another important issue with the σ function is its gradient, which is always below 0.25. Since modern networks tend to have multiple layers, and because products of multiple values < 1 quickly become really small (vanish), networks with a larger number of layers won't train when using sigmoid.
And as always, amazing video, animation and explanation!
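The gradient claim above is easy to verify numerically (a small illustrative sketch):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# The sigmoid's gradient peaks at x = 0 and never exceeds 0.25:
print(sigmoid_grad(0.0))   # 0.25

# Backprop through many sigmoid layers multiplies such factors together,
# so the gradient reaching the early layers all but vanishes:
print(0.25 ** 10)          # ~9.5e-07

# ReLU's gradient is exactly 1 for positive inputs, so it doesn't shrink:
relu_grad = lambda x: 1.0 if x > 0 else 0.0
print(relu_grad(2.3))      # 1.0
```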
The best introduction to Neural Net's I've ever seen. Kudos!
This will go down as one of the best lectures in history. What an amazing and concise explanation of something I thought I would never understand ...
totally agreed
Activation, weights, bias. I suddenly understood them all. I can't believe it. You're awesome.
This channel and the visualizations it produces to teach subjects like this one is the best advance in the history of communicating mathematical ideas. It's extraordinarily inspiring that one person can have such a large impact on the world today (and for generations to come). Thank you, Grant Sanderson.
Dude, inspiring comment yourself.
I am not really from a math background but I am hugely interested in programming, and I must say this video has made it easy for me to understand the math behind neural networks!
I loved it , thank you!!!
I work in a company developing just this kind of stuff. I’m still baffled how incredibly intelligent people are and I have no idea how they can repeatedly accept me as worthy enough to be with them.
impostor syndrome. There will almost always be someone better than you, but you are probably better than you give yourself credit for
ML grad student here and hands down Grant covered an entire chapter concisely and very clearly in this video. I don’t think reading any academic books will give you this amount of intuition on this subject within a few minutes. Still mesmerized by the effort!
Thank you 3b1b. This video certainly gave me a deep enough understanding to allow my neural networks to retain the information.
EDIT: seems like I'm not the only one making lame puns about the title.
For the first argument in the video: "You can recognize that all of these images are 3's, even though the pixels are very different." is complete bullshit. Handwriting varies *_EXTREMELY_* person by person and so humans are very used to looking at different ways to write the same thing, especially with things like cursive. It's not a surprise that we can identify the images, please don't talk like it is a surprise, makes me feel like you're less intelligent than you really are.
Calm down a little... Everything that's been said in this video is in the context of machine learning, computers, mathematics, algebra etc. So if we want to treat the brain as a complex computer, then its ability to recognize letters from pixels is amazing and gives food for thought about how the human brain really works.
Peter Njeim it's not a surprise that you can identify images. The surprise is how complicated image recognition actually is if you think about it.
Peter Njeim, do people invite you for parties?
Faculty of Khan M
seriously, this is the first time i find that ML makes sense! you are amazing
Even 6 years after the making of this video, when we already have something as advanced as GPT-4, as a humble beginner in this domain, this video is so valuable for understanding the very basics! A huge thank you and kudos, sir!
Wow, a lot of the things I've learned in this first year of systems engineering are captured in this video, but previously I didn't understand the real essence of them. Thank you for these amazing vids! Greetings from Argentina :)
Boludo
It took me a week to understand this from a university lecture. You explained it to me in 20 mins. You are such a savior. Thanks, 3Blue1Brown!
Most fascinating channel on YT, hands down.
Can't believe how well explained and intuitive this is. I aspire to become a teacher like you.
His videos are so elegantly illustrated and flow of thought is so clear. Watching his videos is like listening to music of Mozart to me!
What a time to be alive, with such YouTubers around!
This is the simplest and best way to explain neural networks. It's the best introductory video on neural networks I have watched so far.
I'm still in shock that none of my friends ever recommended this channel to me. Since the day I started watching 3Blue1Brown's videos, my life has been more productive than ever! I never thought a lockdown would result in so much productivity!
they aren't your friend.
@@userwheretogo 😥😥 Hits hard
cool
I just love the way the concepts of neural networks are explained in this video. After watching it, you feel like you have an idea about the "building blocks" of a neural network. Since I'm new to the topic, it's hard to judge whether crucial things are left out or over-simplified, but I feel it's a great introduction to the topic. Thanks a lot for sharing this!
I didn't realize until now that I had started to understand neural networks, which were nothing but a black box to me. I must say, if every teacher taught like you, the world would produce quality engineers and scientists. You really don't need to ask us for subscriptions; your work is so admirable that we can't stop ourselves from subscribing. You have redefined the phrase "Simplicity is the best way to handle complexity". Thank you very much, sir. I wish you health, wealth and wisdom.
Thank you! Such a great video! Please more!
I'm in my first year of engineering, looking to go into CS, and this video makes me extremely excited for my coming education. I've already watched so many of your videos, and they've all had a similar effect. Thank you so much!
Same energy brother!
How has your degree been going?
This is probably the best video that I've seen on the topic of basic artificial neural networks (ANNs). Most of the videos that I've seen on the topic are either overly complex or leave out important information about ANNs, so you're forced to watch many videos on the subject to understand the basics. But this video gives you all the basics without making you feel like there's a lot of information left out. Granted that there's a lot left to learn, but I'm sure chapter two will get into some of that.
Currently doing my capstone on deep learning and this is among the best, and easiest to understand descriptions I have seen.
Thank you for this amazing content. Please keep publishing about physics and CS!
I don't know how to express my gratitude to you ...
all your videos are just amazing and incredibly informative.
This was one of the best tutorials on the fundamentals of neural networks. Formerly I was a dentist, and now I am a neuroscience research fellow working on computer vision applications in behavioral neuroscience, and I have never encountered a tutorial that explains things so simply and concisely. Thanks for that :-)
what ? you never took Markov Chains ?
I wonder how many knobs and dials in my real neurons got tweaked, and how fast, while watching this video. Fantastic. Two thumbs up.
THIS should be the first lecture for everyone starting with NN. Such a gem! ❤
You're kind of a genius man! I don't care how much you deny it. Your ability to distill these complex concepts into very simple ones and across so many fields in math is amazing. Also, the way you connect different fields of math to explain solutions REALLY shows a different type of mastery. Thank you for all these videos.
I will forever be grateful to you for making learning so much fun!
This guy: Uses dark screen to illustrate a long concept so that it's easy on the eyes.
Everyone: "Careful, he's a hero."
Careful, you might poke the boomers who think that black themes are just for edgy people.
Meanwhile schools: LET'S CHOOSE THE BRIGHTEST AND WHITEST COLORS FOR THE LIGHTS, WALLS AND ACTUAL BOARD
They're bound to pay attention then, right?
Me at night 👌
The term "squishification function" really drove the point home for me regarding what these functions were meant to do.
When you simplify the formula into the matrix equation, there's a typo: at 14:46, b_n should be b_k. The subscript should be k instead of n.
Haha, paused and scrolled through looking for this comment. Had even started to doubt myself.
Same as davehx; actually it starts at 14:40. What I also find a little deceiving is that the matrices W and a are shown as having the same number of rows, but that's not the case: the n columns of W match the n rows of a.
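The dimension point in this thread can be confirmed with a quick shape check (sizes here are just the ones from the video's example, 784 inputs and 16 hidden cells):

```python
import numpy as np

k, n = 16, 784        # output cells, input pixels
W = np.zeros((k, n))  # weights: one row per output neuron
a = np.zeros((n, 1))  # activations: n rows, matching W's n columns
b = np.zeros((k, 1))  # biases: k rows (b_k, not b_n), one per output neuron

z = W @ a + b
print(z.shape)        # (16, 1): k rows, like b
```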
Can't wait to see the videos you release on generative models!
I've seen your probability videos and they were so great!
Personally, I want to say thank you for making all these videos. It really solidifies everything I've learned. The way you use visuals to describe certain concepts is amazing!
Keep it up!