You sir are a fantastic teacher. No fancy gimmicks, no catch phrases. Just pure talent. Hoping to collaborate with you!
As a beginner in ML a lot of this still went over my head but it's the most accessible video I've found yet on GANs! Thank you so much
This is one of the best explanations I have ever seen. You manage to cover the goal and the method intuitively, mathematically, and programmatically, and you did it with a concrete example that was simple enough to work out by hand. I also appreciate that you showed how we might code the rules for a solution, and then showed how you would program a machine learning approach to come up with a similar solution. I hope you continue to make more excellent videos like this!
If you know the basics of CNNs or ML and you are looking to learn the basics of GANs, this video is for you... Very well explained, thank you!
Amazing summary of GANs with the simplest but concise explanations. Thank you!
My fellow data scientists were all about GANs, so I went to learn something about them so I know where I stand with regard to synthetic data. And I'm glad I stumbled upon your video. What a great introduction to the topic! I feel I understand a lot more of what has been said and done about GANs now. Thank you!
Thank you for the clear explanation. Just a couple of comments:
a) At 6:35, I think it should be -1.5 (not -0.5)
b) When creating the discriminator, if the bias is -1, then the threshold between good/bad images should be lower than 1. Otherwise some of the real faces would be labelled as false…
You are right, I was having the same confusion.
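To make the bias point above concrete, here is a minimal sketch of a linear discriminator score; the weights and the 2x2 face example are hypothetical illustrations, not the video's exact numbers:

```python
import numpy as np

def discriminator_score(pixels, weights, bias):
    """Raw linear score before any threshold or activation is applied."""
    return float(np.dot(weights, pixels) + bias)

# Hypothetical 2x2 "face" flattened to 4 pixels: bright diagonal, dark off-diagonal.
face = np.array([1.0, 0.0, 0.0, 1.0])
weights = np.array([1.0, -1.0, -1.0, 1.0])  # reward diagonal, penalize off-diagonal

score_no_bias = discriminator_score(face, weights, 0.0)     # weighted sum alone: 2.0
score_with_bias = discriminator_score(face, weights, -1.0)  # same image after bias: 1.0
```

With a bias of -1 every image scores one unit lower, so a good/bad cutoff calibrated at 1 on the raw weighted sum would have to be lowered accordingly, which is the commenter's point.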
This channel’s been a gem of a find, always a go to source to refresh seemingly complex algorithms in an absurdly intuitive way. Thank you, Luis.
I love his teaching, he makes complex things seem simple.
Mad respect to you for explaining neural networks so clearly in 20 minutes, actually amazing.
Very nice explanation... I started learning GANs from zero, with only a basic understanding of CNNs, and from this video I now understand how GANs work. Thank you!
Finally, GAN math explained in the most elegant way... Thank you, Sir!
You provide by far some of the most descriptive explanations of Neural Network architecture, Machine Learning & statistics out there! Thank you!
By the way, I think you forgot to subtract the bias from the result of the second, noisy image at 6:32. It should be -1.5 instead of -0.5 :)
Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!
Why would anyone dislike this? Seriously, why? Thank you sir!
Thank you Luis for the simplified and very clear explanation. Finally, I feel that I can confidently understand how GANs work. I also really liked the simple toy examples that you usually use to start explaining complicated concepts.
The best video I have come across so far. It is very annoying that so many videos that teach ML use some API (e.g. Keras) to give examples. That teaches how to use the API, but it does not teach you how it really works. I wish more people used this awesome method to teach concepts in ML.
Mr Serrano thank you for existing.
Brilliance is the ability to take the complex and reduce it to simplicity. Brilliant work!
The most informative and intuitive explanation of GANs. For a beginner this video is priceless, as all other resources aren't so patient with the critical details and steps, which doesn't help with the learning process.
Very nice and clear explanation. Every time I need a recap on GANs, I come to this video. Having no code but simple math makes it more meaningful, compared to other channels' videos, which include ML libraries that become obstacles on our way to understanding!
One of the best explanations of the subject I have ever seen, congratulations, you are an excellent teacher!
Best video on GAN explanation hands down
You really have a special talent for generalizing and explaining complex concepts. You go through every question that we could think of during your explanations, and all of them are answered with such pedagogy.
By far, the best explanation of GANS ever! Well done, sir. Many, many thanks!!!!
No words can do justice to this rich explanation. I love it when the visuals, the mathematics, and the code come together. Also, your language was easy and smooth. You made a complex topic so easy to comprehend. Great thanks.
Getting to see this after a heavy day at work is refreshing.
Thank you so much for sharing
I am now watching this while waiting for the laundry. Just great! I also realized that I learned a lot from you about PyTorch! Thank you sir! Keep up the great job!
Love that you broke the concepts down to the micro level. It made the understanding of GANs so simple and yet detailed. Appreciate it.
BRAVO! BRAVO! You are really the Grand Master of Explaining Machine Learning! Let me tell you why: about 2 years ago, when I was looking to learn Python, I was also looking to learn the process of coding a machine learning Python script. At that time, I remember searching and watching a lot of machine learning videos, but things were very fuzzy, since most people just talk and talk and repeat what others said, and no one explained the real roots of the process until I found your videos explaining the actual process of calculating the weights, and from that day my head was able to understand the REAL process of creating the code. And here I am 2 years later, breaking my head over how a GAN really works or how it is made, and BANG! I just finished watching about 40 videos and NO ONE BUT YOU EXPLAINED IT SO WELL, and that is why you are the MASTER of explaining Machine Learning! BRAVO! BRAVO! God Bless You, Luis!!!! What would my life be without you!!! Thank you a million times!
Thank you Alex, that's so nice to hear! It's an honor to be part of your machine learning journey!
This is one of the best explanations I have ever read/watched.
Señor Luis Serrano: You have given an excellent pedagogical demonstration of a topic that others have turned into mystification. I work on the interpretation of Sophocles' Oedipus Rex, and I am trying to translate my knowledge of how a classic literary text communicates, one that "paints" the features of its characters mainly through opposing relations, analogous to those of the slanted portraits you used. Many of those "portraits" come close to the images you would call "noisy," to the point of being discarded by your police... and by me. I mean that, not only in this but in many other respects, I find a closeness between the mathematical tool you present and the one I want to use... but I am not a mathematician. Even so, thanks to my serendipitous search, I have found your Serrano.Academy, and I will keep following your lessons, with gratitude.
I had to watch this video a few times, pause and rewind etc. but it did help me understand the intricacies of GANs. Thank you very much.
This is how teachers should explain. Thanks a lot!
You are the best AI teacher I have ever seen!
Loved the Slanting people demo... Thanks Luis.
Thanks for the slow transitions. And yeah, I am not really good at math and not yet much into harder Python. It's really more useful than other videos out there.
This was a really good video! I'm happy my school included it in the suggested resources
Please sir, make more videos frequently. Your way of teaching is mind-blowing, it is awesome...
Thank you Chetan! Trying to make them more often :)
What a great explanation! It's amazing how you teach complex concepts in such an easy way. I have learned a lot from your videos. Mil gracias!
I was struggling to understand this.... Your video made it so clear and easy to understand... Thank you soooooooo much...... ☺
Thanks for sharing, crystal clear explanation.
I remember everyone requested this from you, and I never imagined that the request would be granted so soon.
If you are reading this, kindly give us an opportunity to talk with you in a YouTube live session.
Thank you! Hoping to have another live session soon, will post an announcement when I know when!
Amazing explanation....
Love from India.....
Dear Serrano. Thank you for your very interesting video. In the generator equation (image generator_math.png), I think there is a missing W_i. Note in your code in function "def derivatives(self, z, discriminator)", you have the line: "factor = -(1-y) * discriminator_weights * x *(1-x)". Parameter "discriminator_weights" represents W_i in the equation, although I believe it is missing in the generator equation. Please, let me know if I'm wrong. Thanks.
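For readers following this thread, here is a minimal sketch of the gradient the comment is describing, with the discriminator weight W_i included in the chain rule; the function name and all numbers are illustrative, not the repository's actual code:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def generator_gradients(z, gen_w, gen_b, disc_w, disc_b):
    """Gradients of E = -ln(D(G(z))) w.r.t. the generator's weights and biases."""
    x = sigmoid(gen_w * z + gen_b)            # generated pixels x_i
    y = sigmoid(np.dot(disc_w, x) + disc_b)   # discriminator's verdict on them
    # Chain rule per pixel i: dE/dw_i = -(1 - y) * W_i * x_i * (1 - x_i) * z,
    # where W_i is the discriminator weight the comment says is missing.
    factor = -(1 - y) * disc_w * x * (1 - x)
    return factor * z, factor                 # (weight gradients, bias gradients)

z = 0.7
gen_w, gen_b = np.array([0.1, -0.2]), np.array([0.0, 0.1])
disc_w, disc_b = np.array([0.5, -0.5]), -0.1
grad_w, grad_b = generator_gradients(z, gen_w, gen_b, disc_w, disc_b)
```

A quick finite-difference check of dE/dw_0 against grad_w[0] confirms that the W_i term belongs in the formula.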
The simplest but most effective explanation I've seen on GAN...Thank you :)
Your explanations are so simple that anyone can understand!!!
Thanks so much
Excellent video. Teaches the basics in a very clear manner. Thank you very much!
Mr. Luis Serrano, your teachings are truly precious. Please keep making Spanish-language versions. Thank you very much.
This is a really great video! You are a very good teacher, with great video quality, and you offer both intuition as well as an applied example - the code. Good job!
Simple and easy narration. Thank you sir
So well explained that it makes GANs easy to understand. Thank you. Big thumbs up.
The best of the GAN tutorials.
This video is really helpful for understanding GANs. The way of teaching is really awesome. I liked it a lot.
Really helpful! Explains GANs very simply for beginners!
Interesting example of a GAN. Really enjoyed your video. Keep up the good work!
Love your explanations and visuals! Thanks again for all you do here Luis, you sir are a gentleman and a scholar🎓🍻
Great explanation! Cheers to all slantland people!!
Thank you very much, Luis. This is the first video I've watched about GANs and I already understood the concept. Impeccable!
THIS IS SOOOOOOOOOOO DAMN GOOD. thanks a lot man. totally understood gans without even a computer vision background
Best ML videos on the net for beginners.
Wow, I never understood the error functions until now! Thank you Luis!
This is a wonderful video, very easy to understand. You have presented such a complex part of deep learning in such a way that it looks so easy! Thank you so much.
I didn't expect this video until the end of the video. This is really helpful! Thank you
Thank you so much for this video! I haven't found any good video explaining GANs as you did!
It's really great to understand a complex concept like GANs in such a simple way.
Awesome, you make neural networks easy and interesting.
Thank you 🙏🏻
Wow! Amazing! Thanks for the simplest explanation I've ever seen 🙂
What an amazing video. I'm really impressed by the example you provided to explain such a complex concept.
Amazing presentation with high quality slides!
Best explanation of GANs on YouTube.
ooooh, I knew your voice sounded familiar! I'm doing your PyTorch course.
where is the course? I can't find it in the channel
@@chawza8402 try on google then
@@gumikebbap Turns out he is the lead instructor of the PyTorch Udacity course, and it's free!
@@chawza8402 Link Please
Hi all! The course is here: www.udacity.com/course/deep-learning-pytorch--ud188
Luis truly applies "KISS" (keep it sweetly simple) to machine learning and re-ignites my love of doing math from scratch!!! Whenever I find anything hard to crack, I just search for it on your channel... Minor typos at Generator's Derivatives (00:19:00), where the D weights are missing and the notations of the G weights and D weights get mixed up, though everything is correct in your code. Kind of cool to work through the simple math and detect those typos... Thanks again Luis for democratizing complex knowledge!! =)
This is the best explanation on the internet. Thanks a lot!
Great job, man. I understood the basics of GANs and now I can work on my StackGAN project.
Thank you very much for this awesome explanation, Luis. Very, very well done!
The best tutor! Excellent tutorial sir.
SIMPLY THE BEST EXPLANATION
Very informative for learning GANs! Congratulations!
You're the best, man. You deserve a million subscribers.
I finally understand the lost key of GAN! Thank you a lot!
WOW! best tutorial on machine learning ever! Thank you
Thank you very much for this nice and very helpful explanation of GANs.
Excellent video - great way to explain a quite complex concept! I learned about your video and github from MIT class "Designing and Building AI Products and Services." Hope that you are getting proper credits from MIT;-)
This is a wonderful introduction to GANs. Many thanks for this - TRIPLE BAM!!!!!!!!!
No wait, this is not a StatQuest video, my bad 😁😁😁😁
LOL! :D I showed this to Josh, he found it hilarious too. BAM!
Wow, good job! You gave me a very good sense of how it works and explained the loss function really well. I finally understand. Thank you!
Thank you Sir, your content is the best; the visuals you put into your videos make it so much easier to understand the concepts. Keep it going, Sir.
OMG this is sooooooo friendly and easy to understand!!!!!! Thank you so much!!!!
Best video on GANs, thanks!
Hi Luis! Great videos! I'm very impressed and happy to see my old friend on YouTube!
Thanks Hyosang!! I also checked out your videos, they’re great! Happy to see you over here after so long, hope all is going well in your side, my friend!
Absolutely stunning explanation!
You are such a good teacher
Love your video. I have a question about what you say at 16:32: shouldn't the discriminator network only be updated with weights from the real images? In other words: why do we use backpropagation to update the weights after feeding it an image from the generator? Isn't the generator making a fake image, and therefore, if you update the weights of the discriminator network on it, wouldn't the discriminator network then be learning how to detect a fake image?
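On the question above: the discriminator is deliberately updated on generated images too, with label 0 ("fake"), and that is exactly how it learns to detect fakes; training alternates between label-1 updates on real images and label-0 updates on generated ones. A minimal sketch of one such alternating step, with purely illustrative numbers and names:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminator_gradients(pixels, weights, bias, label):
    """Cross-entropy gradients for one image with its real/fake label."""
    y = sigmoid(np.dot(weights, pixels) + bias)
    d_score = y - label   # dE/d(raw score) for E = -label*ln(y) - (1-label)*ln(1-y)
    return d_score * pixels, d_score

rng = np.random.default_rng(0)
weights, bias, lr = np.zeros(4), 0.0, 0.1

real_image = np.array([1.0, 0.0, 0.0, 1.0])  # "face": bright diagonal
fake_image = rng.random(4)                   # stand-in for a noisy generator output

# One alternating step: real image with label 1, generated image with label 0.
for image, label in [(real_image, 1.0), (fake_image, 0.0)]:
    grad_w, grad_b = discriminator_gradients(image, weights, bias, label)
    weights -= lr * grad_w
    bias -= lr * grad_b
```

After even this single step, the discriminator scores the real face higher than this random image, which is the behaviour the label-0 updates are buying.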
Such a simple and great explanation. Thank you!
Really nice illustrations!! I understand GANs now.
I find it a great vid. But a question: shouldn't the bias be the same for all the weights of the same neuron? I would imagine that you would have 1.7 for the diagonal values and 0.3 for the non-diagonal ones, since I would use an equal bias. Where am I wrong?
TO BE HONEST I DIDN'T UNDERSTAND, BUT I'M GONNA WATCH THIS VIDEO AGAIN AND AGAIN TILL I UNDERSTAND, EVEN IF IT'S 100 TIMES. BECAUSE I WANNA BE THE BEST PROGRAMMER
So good and easy to understand. Thank you! So generous with your knowledge.
Such a blessing to have you
simply amazing. Thank you so much for your efforts🙏
Great vid. Also, for your generate_random_image() function in your code, all you have to do is this --> np.random.random(4)
Thank you! Didn't know that trick.
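For anyone following along in the notebook, the one-liner suggested above can be sketched like this (assuming generate_random_image is meant to return four pixel values in [0, 1), matching the flattened 2x2 faces):

```python
import numpy as np

def generate_random_image():
    # Four uniform random pixels in [0, 1), one line instead of a loop.
    return np.random.random(4)

image = generate_random_image()
```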