It's so helpful for a GAN beginner. I still don't understand a lot of things, but I got the right direction from this video.
Excellent content added in the last few days by the TensorFlow team
The presenter is using the terms "high" and "low" entropy in a sense that is opposite to the usual interpretation of entropy. If the distribution is flat (equiprobable possibilities), that is characterized as high entropy. If the distribution has a sharp peak, that is described as low entropy.
Edit: He got it right at first, then at 47:50 he switched usage.
Yes, I noticed this as well. Also, your name checks out ;)
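The convention the commenter describes can be checked numerically. A minimal sketch in plain Python (the two distributions are arbitrary examples chosen for illustration):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

flat = [0.25, 0.25, 0.25, 0.25]      # equiprobable: high entropy
peaked = [0.97, 0.01, 0.01, 0.01]    # sharp peak: low entropy

print(f"flat:   H = {shannon_entropy(flat):.4f}")    # = log(4), the maximum for 4 outcomes
print(f"peaked: H = {shannon_entropy(peaked):.4f}")  # much smaller
```

The flat distribution attains the maximum entropy log(4) for four outcomes, while the peaked one comes out far lower, matching the usual usage the commenter is pointing to.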
Very good content, well explained, and the application at the beginning was really cool. I didn't expect such a good video from Google TF.
Your presentation skills are awesome
That's so impressive!! Quite easy to grasp and to the point! Thanks, Joel, for such great efforts! :)
Warm greetings from Algeria. Thank you very much.
Fantastically explained!
In the superresolution example around 29:10, the presentation of the slide could have been better if he included an image of the downsampled version to compare with the results.
Superb video. I'm a beginner, thanks a lot.
Awesome!!! This makes it so easy to implement
Learnt a lot. Thank you for your time too. ;-)
Awesome Video, Enjoyed it !
Thank you for this. Why do we update the discriminator's weights in the second step, when it determines that the image is real? Or is it just part of the process that we run SGD at every step?
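For what it's worth, in the standard GAN training loop the discriminator is updated on both the real batch and the fake batch at every step; the alternation with the generator is the procedure itself, not a conditional on what the discriminator decides. A toy one-dimensional sketch with hand-derived gradients (plain Python; the model sizes, learning rate, and target distribution are all arbitrary choices for illustration):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = a*z + b.
w, c = 0.1, 0.0        # discriminator parameters
a, b = 1.0, 0.0        # generator parameters
lr = 0.05
REAL_MEAN = 3.0        # toy "real data": samples near 3.0

for step in range(2000):
    real = random.gauss(REAL_MEAN, 1.0)
    z = random.gauss(0.0, 1.0)
    fake = a * z + b

    # --- Discriminator step: uses BOTH a real and a fake sample every iteration.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. (w, c):
    gw = -(1 - d_real) * real + d_fake * fake
    gc = -(1 - d_real) + d_fake
    w -= lr * gw
    c -= lr * gc

    # --- Generator step: minimize -log D(G(z)) w.r.t. (a, b).
    d_fake = sigmoid(w * fake + c)
    ga = -(1 - d_fake) * w * z
    gb = -(1 - d_fake) * w
    a -= lr * ga
    b -= lr * gb

print(f"generator now maps z ~ N(0,1) to mean ~ {b:.2f} (real data mean {REAL_MEAN})")
```

The point of the sketch is the schedule: both the `real` term and the `fake` term contribute to the discriminator's gradient on every single step, regardless of what the discriminator currently predicts.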
Hello sir, can you tell me how to convert a GAN-generated dataset into .jpg format? Please.
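One common approach is to map the generator's output back to 8-bit pixels and write each sample out with Pillow. A sketch, assuming the generator emits arrays in [-1, 1] (the usual tanh output range); the shapes and filenames here are hypothetical:

```python
import numpy as np
from PIL import Image

def save_fake_as_jpg(fake, path):
    """Map a generator output in [-1, 1] to uint8 [0, 255] and save as JPEG."""
    img = ((fake + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    if img.ndim == 3 and img.shape[-1] == 1:   # drop trailing channel for grayscale
        img = img[..., 0]
    Image.fromarray(img).save(path, format="JPEG")

# Stand-in for one generated sample, e.g. shape (28, 28, 1) from an MNIST-style GAN:
fake = np.random.uniform(-1.0, 1.0, size=(28, 28, 1)).astype(np.float32)
save_fake_as_jpg(fake, "sample_000.jpg")
```

Looping over a generated batch and formatting the filename (e.g. `f"sample_{i:03d}.jpg"`) writes the whole dataset out as JPEGs. If the generator uses sigmoid output in [0, 1] instead, replace the rescaling with `(fake * 255)`.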
Thanks. Very well done. And one of the best descriptions ever of the Inception Score
Thank you for this great work. May I ask: what do you think about modular neural network (MNN) implementations for combining several models? And which should I choose, PyTorch or TensorFlow? Thank you once again.
Also, I was expecting brief explanations of how GANs were used in each application: for example, how superresolution or image-to-image translation was achieved with them. Not asking for detailed explanations, but insights would have been nice. Overall, I did like this video though.
There are going to be a lot of lies and convoluted explanations to cover up what GANs are really used for.
Also, they've been around a lot longer than people think.
The fact that they're only talking about it now shows a pretty good bit of gatekeeping.
Nice work... please do some hands-on code tutorials.
Thank you. Is there any tutorial on TF-GAN for time-series data sets?
Hello, I'm new to GANs. Can you help me with how to implement a simple handwritten-digit task using the TF-GAN framework?
Superb!
I can't find an easy implementation of TF-GAN. This is not developer-friendly.
Very interesting Thank YOU!
40:10 so, guess-and-check bootstrapping
I'll probably never understand this... Interesting video, thanks.
I thought the same 5 years ago! Just keep going, you'll eventually get it.
Greetings
The real use is generating pictures to cover up people's identities.
The military has already been using it for a while.
Face-swap filters are way more powerful than you think.
The face-swap filters on Snap and IG are intentionally of low quality.
The real stuff they are using is way more powerful and almost always believable.
But I imagine when this was released 4 years ago…
They want the Reddit/Twitter/science clickbait nerd consensus to be worried about "it has feelings,"
so they miss the point and implications of what GANs are really doing.
It's been in military use since 2014. And that's just the stuff that we know about.
SAGAN is no longer SOTA; you guys need to update your references. You should be talking about StyleGAN2-ADA or VQ-GAN.
Subclassing tf.estimator is very painful when we could just use tf.keras...