Thanks, Brandon Rohrer, for the detailed explanation of these concepts. I first saw your video on Linear Regression, which is part of How Data Science Works, and later had to search a lot to find your channel. Once again, thanks for these videos and for sharing your knowledge of Data Science.
Correction at 21:00
Hierarchical Temporal Memory is not a deep learning algorithm.
Wonder what happens if that cooking robot gets hold of HowToBasic videos
that'll be epic
that's a great question lmao
It'll probably poison the first human it comes across.
Nice video, thanks. I think it would be even easier to follow if you had an example with distinct input/outputs though (instead of the am/pm working hours example).
Thanks salle rc. I think showing a classification example would be a good idea. I'll put that in my Future Work queue. In the meantime, here is an example like that, but for convolutional neural networks: ruclips.net/video/FmpDIaiMIeA/видео.html
Ya, the AM/PM example is very difficult to follow
“1.) No formalism of which we can know that it expresses correct (and only correct) thought, can capture our entire abstract thought. 2.) No formalism in which only objectively correct propositions can be derived, can, in its derivable formulae, capture all objectively obtaining conceptual relations.”
Gödel, K. (2003). Collected Works (Vol. IV). Oxford: Oxford University Press, p. 521.
Everybody try this: listen to this explanation of deep neural networks and simultaneously play the Una Mattina album by Ludovico Einaudi in the background... lean back, close your eyes, and relax...
Thank you Gerd. I was looking for the perfect soundtrack. ruclips.net/video/0Bvm9yG4cvs/видео.html
You're welcome ;) It just fits so perfectly ^^
now you made it dramatic LOL
Perfect!! :D
I've watched a dozen of these and I've got to say that this was the best. It explained the basic concept along with some computational logic without going too deep into the math right away; that seems to be a tough combination to find. Many thanks!
That am/pm example is horrible. It's so hard to keep a clear picture in mind of all those am/pm combinations. It would be better to have something more easily imaginable.
Noted Trackman2007. I'll put my thinking hat on and see if I can find something more intuitive for the next go-round.
Thank you sir!
Brandon Rohrer, how is the job of each neuron decided? After the network is trained, can the programmer figure out what each neuron is doing? And how is it decided how many neurons each layer has and how many layers there are?
haha Sorry Brandon Rohrer, but I agree. It's very messy with all those am/pm. The rest of the presentation is very clear, thanks.
Amazing popularization of deep learning! The gradient correction gets a bit confusing, though: why would you correct the weight if, over time, the algorithm is supposed to statistically make coherent and recurring results appear?
The best explanation I've ever seen.
Thanks a lot, Brandon, excellent presentation! I could easily understand the technical aspects thanks to my background in neural networks, which I am more than familiar with.
The am/pm example is kind of confusing; the rest of the presentation was great.
Thank you Brandon. Very clear with great graphics. Very well explained!
Is that morning at 1:32?
But what are the gradient and delta @14:38?
Love it, the best explanation of deep learning I have ever seen. It connects the biological mechanism and AI together. Brilliant!
Great tutorial, thank you. I have some questions. When the learning started in your example, did you first randomly assume in both output neurons that he worked in the morning and not in the evening, then calculate those weights and focus on the bigger one (.6)? If I understood correctly, would the results for the second day be .85 for the first output neuron and .7 for the second? I think I'm missing something here, and I'm also not sure where the actual data fits into all of this. You are a great teacher and I hope that you will continue to create videos like this.
Amazing, way to go Brandon! It was surprisingly easy to understand. Great presentation.
Hi Brandon, just watched both your talks on deep learning. I found them to be the clearest presentations so far on the subject. Thanks. Why do image recognition nets train with such low-res photos? I mean, some of the features, like blood vessels or hairs on the skin, would be useful for classification. Is this a hardware issue, or do higher-res pics just confuse the nets?
I appreciate it deeply, CNC Scotland.
There are a couple of reasons to use lower-res photos. The first is that computational requirements go up dramatically (roughly polynomially) with the number of pixel rows and columns.
Second, if a feature, such as an eye, spans twice as many pixels, the features that identify it must also span twice as many pixels. That requires additional layers in the CNN, further increasing the computational requirements.
Third (this one is speculation on my part): doubling the resolution means there are many more ways the pixel inputs can combine to represent the same eye. This may fundamentally make the learning problem more difficult, at least in the way CNNs approach it. Going to higher resolution may make it easier to identify individual hairs and blood vessels, but tougher to recognize a characteristic nose profile.
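Here's a rough back-of-the-envelope sketch of that first point in Python (the kernel and channel sizes are made-up assumptions, not figures from the talk):

def conv_layer_flops(height, width, kernel=3, in_ch=3, out_ch=32):
    # One multiply-add per kernel element, per input channel,
    # per output channel, per output pixel (same padding assumed).
    return height * width * kernel * kernel * in_ch * out_ch

low = conv_layer_flops(64, 64)     # cost at 64x64 resolution
high = conv_layer_flops(128, 128)  # cost at 128x128 resolution
print(high / low)                  # 4.0: doubling the resolution quadruples
                                   # the cost of every convolutional layer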
"You can substitute magic for deep learning and it fits perfectly"
Magic Demystified
Is there software for image recognition in deep learning, and does it export an XML weights file?
Play at 1.5x playback speed. =)
Nice, now I can watch more videos about the subject in a shorter time!
Niskinatorn
That's a-me!
Ha! Just scrolled down to share the same tip!
i did the same :)
Great lecture, thumbs up. One suggestion: wear a lavalier mic so we can hear you when you walk away from the podium.
Great talk, you did an awesome job explaining. I'm interested in seeing future applications; is there any code available online?
Sorry, no code. These are just illustrative cartoon examples for explaining concepts.
Audio volume is rising and falling.
Yeah, very distracting...
If Estopa, a Spanish band, is similar to The Police or the Bee Gees, something is not working well in Spotify; or Alaska y Dinarama, I love them, but they are quite different from Daft Punk.
Thanks for the videos. You explain really well.
In the early slides you showed an axon touching many dendrites of a single downstream neuron with each touching point having a given weight. Yet in the nice clean circle and stick diagrams, it seems as if you are implying that the axon of one neuron only touches a single dendrite of the downstream neuron. Could one say that if an axon touches several dendrites of a downstream neuron that in effect we can call this one single connection (and thus diagram it this way) but with the weight of those many touch points added together?
Yes, you are spot on. This is a good interpretation for artificial neural networks (ANNs). In actual neurons, multiple connections probably allow for more complex functionality, but that is simplified away in ANNs.
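In code, that equivalence looks something like this (a toy numpy sketch with made-up weights, not anything from the talk):

import numpy as np

x = 0.8  # activity of the upstream neuron (made-up value)
touch_weights = np.array([0.2, -0.1, 0.4])  # one weight per touch point

# Modeling every touch point separately...
separate = np.sum(x * touch_weights)
# ...feeds the downstream neuron the same total input as a single
# connection carrying the summed weight.
combined = x * touch_weights.sum()
print(np.isclose(separate, combined))  # True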
I understood the one-level example: compare the error of the output to adjust the weights. But what do you do in the multi-layer example? For example, you input an image and test whether it's a dog, and the system gets it wrong at the level-4 output. How do you adjust the level-2 and level-3 weights, since we don't have a clear right/wrong for what those nodes represent?
Hi John. This problem is solved by using the error in the final guess to train every weight in every layer. If I adjust one weight in a lower level of the CNN and my final guess gets a little bit more accurate, then I keep that adjustment. This is the power of backpropagation. It takes the error in the final guess and propagates it back through all the previous layers.
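For anyone who wants to see that idea in code, here is a minimal numpy sketch of backpropagation through a tiny two-layer net (the data, layer sizes, and learning rate are made-up assumptions, not code from the talk):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                        # 4 made-up examples, 3 inputs each
y = rng.integers(0, 2, size=(4, 1)).astype(float)  # made-up "is it a dog?" labels

W1 = rng.normal(scale=0.5, size=(3, 5))  # lower-layer weights
W2 = rng.normal(scale=0.5, size=(5, 1))  # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: hidden layer, then final guess.
    h = sigmoid(x @ W1)
    guess = sigmoid(h @ W2)

    # The error is measured only at the final guess; the hidden
    # nodes have no labels of their own.
    err = guess - y

    # Backpropagation: the chain rule pushes that one error signal
    # back through W2, yielding a training signal for W1 as well.
    grad_out = err * guess * (1 - guess)
    grad_hid = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * x.T @ grad_hid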
Clear, easy to understand
Thank you, great visuals and very clear
Great clear, very visuals, wow!
Your lectures are awesome. Would you mind making more lectures on deep neural networks? :p
The audio is broken.
The am/pm example made no sense
I can kind of see what he's getting at with that one but it seems kind of a backward example.
I watched this hoping I'd learn something new about deep learning, but all he talked about was the multi-layer perceptron. I haven't had the chance to study deep learning yet, but I'm pretty sure it's not just a new name for the MLP!
Wow, that was cool.
Thanks for creating this!
Very informative!
How is nobody bothered that he's showing the slideshow the wrong way?
Thanks, it makes sense!
It is interesting, but the sound is off. The speaker should have worn the mic.
Audio is low
Santos Dumont's airplane is the true airplane. lol
The am/pm example is not great; @10:50 you kind of lost me with it, and I was completely unable to understand what came after... Really, choose your examples wisely.
thank you very much
Why is HTM even in the list of Deep Learning technologies??!
Calm down.
Yeah what's up with that!?!??
Aren't numbers just names we give to quantities or values? And aren't names just variables with multiple possible values based on context? Maybe it would be better to talk about value versus variable instead of name versus number.
I could say value vs context, but then that is the definition of a variable.
Hi Mr. Brandon Rohrer, I need to use deep learning for character recognition. Could you help me, please? Thank you.
I got bored... Thoroughly bored... wanted to run away... but was forcing myself every moment to complete the video... I slept in the end...
Santos Dumont's airplane. :D
Orch OR. Brains are quantum.
GREAT
Great
Lvl up Intelligence 1,000