How AI Learns Concepts
- Published: Jul 4, 2024
- Why do neural networks need to be deep? In this video we explore how neural networks transform perceptions into concepts. It unravels the mystery behind how machines interpret input data, such as images or sounds, and categorize it into recognizable concepts. From the basic structure of neurons and layers to the intricate interplay of weights and activations, get a comprehensive understanding of the learning process. Explore real-world applications like handwriting recognition, and see how layered processing aids effective data categorization. Whether it's distinguishing between summer and winter days based on temperature and humidity or recognizing handwritten digits, the magic lies in the layered architecture of neural networks. The video shows how these artificial networks mimic the human brain's ability to interpret, recognize, and reason, marking a significant stride in AI research towards machines capable of reasoning. Why layers matter.
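The description's summer-vs-winter example can be sketched as a single artificial neuron. Everything below (the readings, the weights, and the threshold) is invented for illustration and is not taken from the video:

```python
import numpy as np

# Hypothetical (temperature C, humidity %) readings and season labels:
# 1 = summer day, 0 = winter day. Values are invented for illustration.
X = np.array([[30, 40], [28, 55], [25, 35],   # summer-like days
              [2, 80],  [5, 70],  [-3, 85]])  # winter-like days
y = np.array([1, 1, 1, 0, 0, 0])

def neuron(x, w, b):
    """A single artificial neuron: a weighted sum of the inputs plus a
    bias, passed through a step activation (fires or stays silent)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hand-picked weights: warm and dry tends to mean summer.
w, b = np.array([1.0, -0.2]), -5.0
predictions = [neuron(x, w, b) for x in X]
print(predictions)  # [1, 1, 1, 0, 0, 0], matching the labels
```

One neuron can only draw a single straight boundary through this two-feature space; the video's point is that layered networks can carve far richer regions.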
STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel"
SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1
WATCH AI series: ruclips.net/p/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ
so, if neural networks can't reason... why do people call it "artificial intelligence", when intelligence and learning aren't the same thing? for me, neural networks are a good way to store patterns and return the result we want... with brute force
Thank you so much. I had been struggling to understand the concepts behind neural networks. You explained it to us so nicely.
This is maybe the best explanation I have seen of a topic that is rather elusive. I will watch this video again!
@@stevesmith291 so happy to hear it
This was very informative and explained the depth advantage in a really easy to grasp manner. Thank you!
I don't mind that you take your time making these. Your meticulous script preparation & attention to production values allow you to pack massive amounts of information into these videos. You are creating "aha!" moments & rewiring neurons around the world. Bravo!
this one took a long time: ruclips.net/video/OFS90-FX6pg/видео.html
I know a little about computers. Used to be a lot; but, then I retired and computers and computing move on. This was a wonderful explanation. Not too fast, not in the least boring, and I learned some things. Thank you and KUDOS!
so nice to hear
finally done: ruclips.net/video/OFS90-FX6pg/видео.html
Let's all be real here, that last layer is really just on LSD. That's how it all works. Those were some trippy images.
Joking aside, fantastic video!
never would have imagined this stuff this way. the patience and care of thought behind it is just, like, therapeutic to take in. a million thanks man
beautiful
i think the genius here, honestly, is maintaining the output neuron vector as points in 3d space the whole way through. the way to divide points into groups and combine them becomes origami folds for depth. at 12:01 i finally understood that these differing output patterns all fit inside a 3d space, meaning, a brain. I can imagine these little lit-up paths in a brain that the data goes through, but instead of a radioactive isotope, it's a component of a stormcloud, and it routes down the pathway...
You illustrated the finitude of possible induction in perception space, and then at the end what a limited number of neurons can represent while keeping things distinct and recognizable, fulfilling their purpose. Yet we know there's this infinity of things that can be represented in that process. it's really magical, because we go from finitude to infinity and back, without stopping, and without doubling back the way we came.
and what just gave me the chills was that i paused just after 12:00 minutes to write these comments calling what at that moment i thought was magic, and your next line was "and so the magic is..."
Not to get corny about it but woah serendipity. read that as testament to the editing i guess. amazing job on this series. i really did wait this long to watch it all ahah
@@hafty9975 we are definitely in sync
Yes! A new Art of the Problem video!
hey keep going with the videos. The quality of your vids easily justifies 2M subs -- you’ll blow up eventually
Even though I've seen these concepts before this video does a great job of slowly building up the ideas and bringing the viewer along to the next level of understanding.
This was very good. Thank you for taking the time and effort to put this together.
next part: ruclips.net/video/OFS90-FX6pg/видео.html
Sorry for my English
I registered for this channel many years ago and waited eagerly for videos.
The channel is alive!
this deserves a lot more views
I'm leaving this comment here for the RUclips algorithm.
Watching these videos makes me feel just like how I did as a child watching the National Film Board of Canada videos. You've made the correct patterns, well done.
i grew up watching these
It was fascinating to see the images when probing the different layers. The paper folding example was great at explaining this at least for me.
3 years later i finish next part ruclips.net/video/OFS90-FX6pg/видео.html
Sometimes I wish RUclips had a super-like button or something to express how much I like this
Your videos are a thing of beauty! The attention to detail is fascinating, especially how it clarifies the concepts that are explained. I can only imagine how beautiful the world would be if everything was explained in this manner!
this comment made my day thank you
@@ArtOfTheProblem Yeah, learnt sin cos tan a bit by programming a circle and i understood kinda
finally done ruclips.net/video/OFS90-FX6pg/видео.html
i prefer your videos over 3b1b. you include a variety of backgrounds/contexts to help me pay more attention (and not get stuck to the monotone black bg with animations). thank you!!!
thank you for feedback, working hard on next video now
This is fascinating, and the best explanation I've ever seen for how neural networks actually work. You have earned my sub, and I look forward to more insightful explanations of a topic that boggles my mind!
You sir, you deserve much more attention. Very well illustrated and clearly explained. Thanks.
Wow, very well done, and informative as usual. Thank you so much for the thoroughness in your explanation. One of the most underrated RUclipsrs of all time!
Wow...! This was clearly the best explanation of neural networks I've ever seen! For a while I even thought I understood them... ;-) great vid, thx!
me : researching how to print "Hello World"
others from history : researching "how a computer can learn"
You made amazing videos on Khan Academy years back and I've finally stumbled upon your criminally small channel. Keep up the good work, I hope the algorithm tips in your favor one day.
glad you found me Zion, I hope for the tip one day too. thanks for the support
Absolutely loved this! You're truly one of the best at teaching visually
Thanks for making these videos. The paper folding part analogy was really GOOD!
It's actually a paper by Yoshua Bengio - on the number of Linear Regions of deep neural networks
long time no see: ruclips.net/video/OFS90-FX6pg/видео.html
beautifully crafted... we can see the hardwork you have put into it.. subbed
Wow. This is brilliant. You guys are awesome. Thanks everyone involved in production. 👍🏿
Thanks I really hope to follow this up with another video eventually
I don't mind watching an hour of this with just you explaining. Thank you for creating this!
happy you found this channel, i've been dormant for a while but working on this next video...
Awesome presentation! Lots of information in such a short video. Absolutely love it!
awesome thanks for feedback, I have another follow up on the way
So good. The layering was such an important lesson to learn. With the 3D simulation it looks like a cloudy rainbow Rubik's Cube being twisted and turned in our minds.
The ramifications of these learnings are infinite. Imagine what perceptions our minds, as sensory identifiers, are not perceiving yet, and the avenues of worlds this could open up as we use more complex sets of neural sensory functions in our bodies and improve our pattern recognition as individuals, and as a social and planetary society.
edit: I am going to have to go to the beginning of the series and count my blessings
let me know what you think after finishing the series as i'm working on a follow up
What a straightforward explanation.
The paper folding part is a genius explanation!
Love your videos. Every time i see them on my page I just have to watch them
this might be the best video i have ever seen on youtube to date. thank you so much, this was so informative and i absolutely loved the models! Thank you so so much, please keep doing these!
really appreciate it. this was the last video I made, and since then I've pivoted my efforts, but I really do want to come back and do a follow up, especially given the explosion in applications lately related to sequential nets (which is where this video ends)
Very nice. Not so sure about the folding paper, but the visualisations really show how the coordinates are transformed from the complicated manifolds to the relatively simple clusters, and that visualisation can possibly help guide neural network design. Shame you weren't able to answer the final question, ha ha ha!
thanks I got held up working on the last video, I will get to it eventually
Words can't describe this marvellous explanation!
thrilled to get this feedback
These videos are so extremely good! Thanks for making these!
Thank you! Extremely helpful visualizations
wow, new video, thank you so much!
Video is absolutely awesome; the only thing that seemed missing to me is the difference between neural networks and other well-known mathematical models (relational database design and analytics).
Like Graham Todd said in his comment, your vids always deliver waves of "Aha!" moments that join previously distant or incoherent bits of our minds. I hope these vids reach as many schools as possible, kids would benefit immensely and so the larger society of tomorrow. Thanks 🤘
Wow, you are good at teaching. Making obvious the nonobvious is extraordinarily complex.
thank you, i'm still planning to do a follow up to this on sequential problems
this is genuinely one of the best, if not THE best videos about neural networks I have ever seen. Never once have I understood the concept clearly before this!
Faaaantastic, so happy youtube is now showing people this video out of the blue. did you search for it or see it as a suggestion?
@@ArtOfTheProblem i was looking at videos related to these and just a LOT of math videos in general, so youtube recommended it to me. im so glad i clicked on this vid :)
This is the most helpful explanation of neural networks I have yet found. I now feel much more confident in my understanding of what is happening within such a network and the way in which added layers function. Thank you!
so happy to hear this, this was my goal of the series and so to hear it's connecting means a lot
Just awesome how you managed to explain a complex topic in a simple way!
thanks so much!
Thankyou for explaining the fundamental building blocks of a neural network in a way that's easy to understand.
appreciate the feedback
@@ArtOfTheProblem You know, these neural networks are just like building or fixing a car. It's a guy thing.
Dude, this is excellent work; you have explained the secret of neural networks in a really beautiful way. It takes real understanding to distil the information like this. Thank you so much for this.
appreciate it, i worked really hard on this and hit a wall after it. I'm still planning to follow up with more on sequential networks.
Thinking of a NN as partitioning a perception space... just awesome. Thank you so much for this beautiful way of thinking about it.
awesome!!
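The "partitioning a perception space" idea from the comment above can be sketched in a few lines. The weights here are hand-set for illustration (assumed, not from the video): each ReLU neuron draws a line through the 2D input space, and the pattern of which neurons fire assigns every point to a region:

```python
import numpy as np

# Two ReLU neurons, each defining one boundary line in the plane.
W = np.array([[1.0, 0.0],    # neuron 1 fires when x > 0
              [0.0, 1.0]])   # neuron 2 fires when y > 0
b = np.zeros(2)

def region(point):
    """Return the activation pattern (which neurons fire) for a 2D point."""
    pre = W @ np.asarray(point, dtype=float) + b
    return tuple((pre > 0).astype(int))

points = [(2, 3), (-1, 4), (-2, -5), (3, -1)]
print([region(p) for p in points])
# Four points land in four distinct regions (the quadrants here):
# (1, 1), (0, 1), (0, 0), (1, 0)
```

With n such neurons the space is split into many more regions, which is one way to see why adding neurons and layers buys representational power.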
Absolutely amazing video! Thank you for making it!
thanks Zoe!
I have been going through the math behind neural networks for some time now. My goal was to form a good intuition for how they work. I understood some of it but still had a lot of doubts. This video cleared all my doubts. It is an absolute gem. I think someone who is already reading about neural networks would get the most out of this video. This is the most intuitive video I have seen on neural networks. Your script and narration were perfect. Great job, and I hope you keep making similar videos
i really appreciate this, you are right and you are the perfect audience. I spent a loooooong time working on this, and i hit a wall after it when trying to do the same with sequential networks, so it's on ice for now...
I'm from the accounting field. Randomly got this video from Reddit. I have to tell you, your explanation and way of presenting is not just good, it's interesting too. Please continue doing what you are doing.
Another incredible explanation. I’ve got to say, the more I learn about filmmaking the more I appreciate your videos - the use of music is truly a step above any other educational content. A million applauses.
wow really nice to hear this. I get many comments about the music "ruining" the video :) but if you see them as short films then it starts to make sense. I refuse to change!
@@ArtOfTheProblem haha yes! 👏👏 of course not simply short films - short cinematic masterpieces!
@@ArtOfTheProblem The music is way too loud. Please, if it's distracting for some folks, making it a little quieter is hardly going to ruin it for others.
thanks for feedback, i'll work on it @@sams64sf
Holy crap... That was an amazing video!
Mind-blowing explanation! Just too overwhelmed by the amount of imagination and skill you've put behind this gorgeous show! Subscribed right away. Looking forward to having more in the days ahead.
thanks so much, i'm hard at work on the next video and pumped to finally get it out. I spent waaaay too long on the research, i hope it shows again :)
What a wonderful explanation of such a hardcore technical concept. I respect and appreciate the hard work you did to explain this concept easily. That paper analogy was best!
glad you enjoyed this, thanks :)
The legend is back!
Just a remarkable video. The most clear explanation of NNs I’ve ever seen. Really well done.
thank you, glad you found this as it's buried deep in the results!
New video is up on Evolution of Intelligence ruclips.net/video/5EcQ1IcEMFQ/видео.html
What an excellent video.
Fantastic video.
very underrated channel... Amazing work !!
thanks for the support, much appreciated
this has done more for me conceptually than actual ai classes i've taken. thank you.
yes! this is what I was hoping for. those classes are brutal and give no intuition
You've done well to cover an area of the explanation which is rarely done properly. Most of the time its all about the application of the maths, and such, rather than the intuition side of what the network is trying to do. Nice vid :)
so happy people are finding this, exactly what I wanted to do
Excellent video!! Thank you!!
Amazing video! Really appreciate this, great work :)
You are a great thinker and equally good presenter. Thank you for sharing.
Great work
Amazing video! I love how you explain things intuitively. It would be amazing if you did a video about gpt4 since it can sort of "reason" and memorize the conversation.
still working on this one, so much happening it's hard to keep up. thanks!
Thanks for beings a point in my neural network... I appreciate your genius 🎈❤️
glad you enjoyed this video thanks
👌👌👌 awesome explanation and visual representations…
The way you explain such complex concepts is mind-blowing. Thank you so much for teaching us.
appreciate it! still working on the next video..
This is by far the best explanation of neural networks I have ever seen in my life: very simple, but without compromising technicality.
appreciate it!
This video cemented my understanding of neural nets, Thanks 👍
thrilled to hear this. i'm curious, what key questions do you come out of the video with?
This video is a must watch for the so-called "machine learning experts"
Just watched this now, and this explanation is absolutely amazing. Please make more videos about ML :)
thank you for the feedback. At this moment i'm in the rough drafting stage of the video which follows this one. It will probably take me another month or so to write
@@ArtOfTheProblem Thank you so much! I look forward to watching that. I really liked your use of visual analogies, such as paper folding, to better understand what's happening inside of the neural network.
Great video series. Definitely learned some valuable insights, especially about the path that led us to deep learning. You also gave great intuition for the concepts embedded in learning concepts, neural networks, and finally deep learning networks.
However I have a non-trivial addition to the video (although you kind of mention it in the previous video): a very fundamental building block inside a single neuron is its activation function. The important thing is that this function is non-linear and differentiable. Given that, the deep NN can transform the input space such that the output space can be separated into distinct regions (like we've seen in the example of recognizing hand-written digits). In contrast, a deep NN that only has linear activations doesn't have a chance if the problem gets more complex, as we see in the example where the input space is not separable by simple sections (like lines or hyperplanes).
Thanks a lot for your time you put into this series. Hope to see a continuation if you see fit ;)
appreciate this feedback, i'm indeed working on the next video in this series. I'm still stuck on one part, which is illuminating what the 'heads' inside a transformer are learning, and telling the story of recurrence in neural nets.
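The commenter's point about non-linear activations can be illustrated with XOR, a classic function that is not linearly separable. The weights below are hand-set (assumed, not from the video); without the ReLU, the two layers would collapse into a single linear map, which cannot compute XOR:

```python
import numpy as np

# All four 2-bit inputs and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

relu = lambda z: np.maximum(z, 0)  # the non-linearity doing the work

# Hidden layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1)
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])  # output: h1 - 2*h2, thresholded at 0.5

h = relu(X @ W1.T + b1)
out = (h @ w2 > 0.5).astype(int)
print(out.tolist())  # [0, 1, 1, 0]: the network computes XOR
```

If `relu` is replaced by the identity, the output reduces to a linear function of the inputs, and no choice of weights reproduces XOR; that is the "no chance if the problem gets more complex" point above.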
This is not just a great explanation and easy to follow, but also just so soothing
the best explanation I've ever heard
truly intriguing
thrilled to hear it, still trying to crack the next video
Thank you for this video, it opens your mind to a lot of things
best explanation that i see about neural networks
more on the way!
I used to think that the last layer is the coolest one, but now I'll be a fan of the layer next to last.
yes! that's what the next video is all about too, i just figured something cool out. that last layer acts as the "state of mind" of the network, and so in a recurrent network that's the "memory" of the computational process.
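The "state of mind" idea in the reply above can be sketched as a minimal recurrent step. The weights are hand-set (not learned) and chosen purely for illustration:

```python
# In a recurrent net the hidden state is carried forward each step,
# acting as the network's "memory" of everything it has seen so far.
def rnn_step(h, x, w_h=1.0, w_x=1.0):
    # New state is a function of the old state and the new input;
    # with these weights it is simply a running sum.
    return w_h * h + w_x * x

h = 0.0  # the initial "state of mind"
for x in [1, 2, 3]:
    h = rnn_step(h, x)
print(h)  # 6.0: the final state summarizes the whole sequence
```

A trained RNN would use learned weights and a non-linearity, but the structure is the same: one vector, updated step by step, carries the memory.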
thank you so much, finally have a better understanding on how neural networks work
yay! so happy people are finding this
Thanks for sharing!
Hi, your pictures and explanations are just so good, clear, and coherent; they made sense. That's how things should be explained. I want to cite your pictures and some of the wording, and I have no problem mentioning a youtube link instead of a textbook even though it's not peer reviewed. I was wondering: is it OK if I use the link in my bibliography, or do you have a proper article written on it?
That's so awesome, i'd love it if you used that link. please share whatever work you are doing too, thanks
Amazing job!
Great series
Simply wonderful, worth watching for all: freshers, newcomers, experienced persons, etc. This video reminds me of a quote: "When something can be read without effort, great effort has gone into its writing." The same is true for this video. You have lived NNs. Waiting for many more such videos, especially on a basic CNN model.
This quote means a lot to me, I definitely put all the effort into this my brain could possibly muster
That was magnificent, I mean really really really super breathtaking.
thanks Youssef, thrilled youtube started to surface this video i worked my ass off on it
amazing work
Your intuitive explanation of why we need more than one neuron using graphs and folds blew my mind :D As someone who has worked in this field for over 6 years now, I must say I am super impressed :)
fantastic to hear, it gives me motivation to press on with the next video on transformers :)
@@ArtOfTheProblem Nice! Waiting for it and have subscribed :)
The beauty of this explanation made me smile
Loved it! Amazing video :D Thanks for sharing!
appreciate the feedback
If someday I prove P=NP, I'm donating half to you. This channel is that inspirational to me.
godspeed! thank you!
Amazing video!
Wow, this is amazing
I think there is a typo at 5:15. Active and inactive should be flipped for any one line drawn, for consistency. If the circles represent 'active' data points, the active-inactive labels for the slanted line at the right should be flipped.
I absolutely love this content, please make more such videos simplifying complex concepts in ML.
appreciate it, I do have a follow up planned on how nets solve sequential problems but got very busy and hit a few snags. going to take another run at it soon
Great video. Was hoping to see more "maths" in the video though
I now have a new understanding of neural networks thanks
woo hoo!
Well done. Thank you
great video