Thank you for this video! I'm really into the artistic side of GANs, especially asset generation for games. I want to try collecting a dataset of tilesets from classic games (think SNES) and train a GAN to create assets for me and others to use in their games! Art is such a pesky aspect of game development and usually takes longer than the actual code, so I'd love to automate it. It's very accessible too, since what I have in mind are samples under 64x64.
I'd really like to see something on improving resolution using Neural Networks.
Yes, I am looking at that. It will take more horsepower than CoLab, but would also make for a good cloud example.
@HeatonResearch +1 for @Kiran Randhawa's suggestion and +1 for your suggestion. 😊
It would be nice to see different options when it comes to buying GPU time (a comparison of prices, platforms, setup, and demos of the training processes).
What about this?
www.fast.ai/2019/05/03/decrappify/
@HeatonResearch Hi Jeff, do you know any way I can take a person's face that has been recorded in a video (so it stays a video, not an image) and change the face a bit while keeping it the same person, so it's not a different person or a deepfake? I'd like to change the expression, the angle, or the position of the face: looking down or up a little, a tiny tilt, a little smile, a slightly different expression. Is there any way to do this, maybe even changing the hair a bit? Thank you, Jeff.
@HeatonResearch I really need it for animation for movies, to save time. Do you know any way, or any sort of AI, that can do this?
I'd like to see something on generating 3D face models with 2D discrimination from different angles.
Maybe this video is too old, but I honestly tried this with other images and it always says 'could not broadcast input array from shape (96,96,3) into shape (96,96)'.
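That broadcast error usually means the images are a mix of modes: grayscale arrays are (H, W) while RGB arrays are (H, W, 3), so they cannot be stacked into one array. A minimal sketch of the usual fix, forcing every image to one mode before stacking (the grayscale stand-in image here is just for illustration):

```python
from PIL import Image
import numpy as np

# Stand-in for one of your files; real code would use Image.open(path).
img = Image.new("L", (40, 40))          # "L" = grayscale, shape (H, W)

# Force a consistent mode and size before putting it in the array:
img = img.convert("RGB").resize((96, 96))
arr = np.asarray(img)
print(arr.shape)  # (96, 96, 3)
```

Applying `.convert("RGB")` to every image (it also flattens RGBA) should make the shapes consistent.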
Great video, sir. Coming from a poor country like India, all these resources are like gems. Thank you so much, sir.
Great video. The explanations are simple and clear. I have one question: is it possible to use GANs for time series data instead of only images?
Yes you can! It would require some customization. The output layer can be constructed to be anything needed, in theory.
@@HeatonResearch Thank you for the reply. I have been trying to find tutorials on the internet but most of them used GANs for image generation. I hope that maybe in the future you can cover the time series GANs topic.
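One minimal sketch of the customization Jeff describes: swap the 2-D convolutions used for images for 1-D ones, so the generator emits a sequence instead of a picture. The latent size, sequence length, and layer widths below are illustrative assumptions, not values from the video.

```python
import tensorflow as tf
from tensorflow.keras import Sequential, layers

SEED_SIZE = 100   # latent dimension (assumed)
SEQ_LEN = 64      # length of each generated series (assumed)

# Generator: latent vector -> 1-D sequence. Each stride-2
# Conv1DTranspose doubles the sequence length: 16 -> 32 -> 64.
generator = Sequential([
    layers.Dense(16 * 32, activation="relu", input_dim=SEED_SIZE),
    layers.Reshape((16, 32)),
    layers.Conv1DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv1DTranspose(16, 4, strides=2, padding="same", activation="relu"),
    layers.Conv1D(1, 3, padding="same", activation="tanh"),  # one value per step
])

noise = tf.random.normal([8, SEED_SIZE])
series = generator(noise)
print(series.shape)  # (8, 64, 1)
```

The discriminator would mirror this with stride-2 `Conv1D` layers; the training loop itself is unchanged.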
Hello,
Thanks a lot for the implementation!
In the end, the generator model is saved as a .h5 file. If I've already finished training and want to look at the metrics, how could I use this generator.h5 file to extract the losses etc.?
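One caveat worth noting: an .h5 file stores only the model's architecture and weights, not the loss values printed during training, so the losses cannot be recovered from it after the fact. A minimal sketch of logging them yourself during training instead (the `(epoch, gen_loss, disc_loss)` tuples are assumed to come from the notebook's train loop):

```python
import csv

def log_losses(path, history):
    """Write per-epoch losses to CSV; history is a list of
    (epoch, gen_loss, disc_loss) tuples collected while training."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["epoch", "gen_loss", "disc_loss"])
        writer.writerows(history)

# illustrative values only
log_losses("losses.csv", [(1, 1.42, 0.71), (2, 1.35, 0.68)])
```

Calling this at the end of training (or appending once per epoch) leaves a file you can plot later, independent of the saved .h5 model.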
Very clear explanation, thank you!
Great video! Could you please do more on how to tune a realistic GAN?
Thanks. One question: can the GAN be trained on images of different sizes, and then tested with different seed sizes?
There are not many resources for Transfer Learning in GANs!
It would be really great if you made a video on that!
Thanks I will consider that.
Thank you for this great video! I wanted to learn about generating images, and the info you provide about the topic was a great starting point. I've tried to implement the same code with different datasets, but somehow after fifty epochs or so, all the generated sample images turn out identical, and the discriminator's loss tends to zero. Any idea where the problem might be?
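Identical samples plus a discriminator loss near zero are the classic signs of mode collapse: the discriminator has overpowered the generator. One common mitigation is one-sided label smoothing. A sketch against the notebook's loss function, where the 0.9 target (instead of 1.0) is the added change, is:

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    # One-sided label smoothing: train the discriminator toward 0.9
    # for real images rather than 1.0, which keeps it from becoming
    # overconfident and driving its loss to zero.
    real_loss = cross_entropy(tf.ones_like(real_output) * 0.9, real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss
```

Lowering the discriminator's learning rate relative to the generator's is another lever worth trying.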
Really fantastic tutorial. It would have been great to have more shared knowledge on the generator and discriminator setup, in terms of "how deep", "how wide", and why. But regardless, this is probably the best on YouTube.
Best explanation.
Thank you so much, sir.
Thanks for this! A video on how to pull and use Nvidia's weights would be cool.
Thank you for sharing your knowledge! This video is really valuable.
Could you use an image for seeding instead of random noise? I was thinking that if you could, you could make really cool transitions, like 'mutating' one image into something else.
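A vanilla GAN's seed must be a latent vector, not an image (mapping an image back to a seed needs an encoder). But the "mutating" transition is usually done by interpolating between two latent vectors and generating a frame per step. A minimal sketch, where `SEED_SIZE` matches the notebook's latent dimension and the commented `generator.predict` call is the assumed use:

```python
import numpy as np

SEED_SIZE = 100  # latent dimension used in the notebook

def interpolate(z_start, z_end, steps):
    """Linearly blend two latent vectors; feeding each blend to the
    generator yields a smooth 'mutation' between the two outputs."""
    return [z_start + (z_end - z_start) * t
            for t in np.linspace(0.0, 1.0, steps)]

z_a = np.random.normal(size=SEED_SIZE)
z_b = np.random.normal(size=SEED_SIZE)
frames = interpolate(z_a, z_b, steps=30)
# images = [generator.predict(z[None, :]) for z in frames]  # hypothetical
```

Stitching the generated images together as video frames gives the morphing effect.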
Nice video, Jeff. I have a question: who owns the rights to images created or generated using a GAN?
That is a great question, and sadly one I am not qualified to answer. I have not seen the lawyers pay it too much attention yet. My hope would be that they are derivative works.
Great explanation Jeff!
Do you have any tips for hyperparameter tuning in GANs? I've been adapting your code, and the quality of the output for different training data sets and resolutions seems very sensitive to the hyperparameters
Is it possible to detect whether images were created by this technology? Do they have a specific pattern that distinguishes them from natural faces?
Great video, thank you! May I ask how you generate the evolving-seeds video?
Never mind, I got the answer from the video!
How do I see the output?
How do we get the faces to be more realistic? I ran training for 50 epochs, and tried increments of 50 all the way up to 200. However, I am still quite underwhelmed by the images the generator produces. Can someone give some pointers on how to get the quality up?
Does it work to use an image datagen for generating training data, or is flipping not enough? I'd like to try it on something other than faces.
Datagen should be fine; I've not tried that with this code yet.
Hello! Why am I getting this error while executing the code?
ZeroDivisionError Traceback (most recent call last)
in ()
----> 1 train(train_dataset, EPOCHS)
in train(dataset, epochs)
15 disc_loss_list.append(t[1])
16
---> 17 g_loss = sum(gen_loss_list) / len(gen_loss_list)
18 d_loss = sum(disc_loss_list) / len(disc_loss_list)
19
ZeroDivisionError: division by zero
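That division fails because `gen_loss_list` is empty, which means the per-batch loop inside `train` never executed: typically the training images failed to load (wrong path) or the batch size exceeds the number of images, leaving `train_dataset` empty. A small guard, sketched against the names used in that traceback, surfaces the real cause:

```python
def safe_mean(values):
    """Average a list of per-batch losses, failing loudly if the
    dataset loop never produced any batches."""
    if not values:
        raise ValueError("no batches were processed; "
                         "check that train_dataset is not empty")
    return sum(values) / len(values)

# in the train loop, replacing the bare divisions:
# g_loss = safe_mean(gen_loss_list)
# d_loss = safe_mean(disc_loss_list)
```

Checking `len(list(train_dataset))` once before training is a quick way to confirm the dataset actually loaded.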
Please help: how do you choose the numbers "4*4*256" and "seed_size" in the following code? I want to adapt this to my own data and I don't know how.
def build_generator(seed_size, channels):
model = Sequential()
model.add(Dense(4*4*256,activation="relu",input_dim=seed_size))
model.add(Reshape((4,4,256)))
Thanks for the video!
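For context on those two numbers: `seed_size` is simply the length of the random noise vector fed into the generator (100 in the notebook; it is a free choice), and `4*4*256` means the dense layer produces a tiny 4x4 "image" with 256 channels, which each subsequent stride-2 `Conv2DTranspose` then doubles in resolution. To adapt to your own data, pick a starting grid and count the doublings needed to reach your target size. A small sketch of that arithmetic:

```python
def doublings_needed(start, target):
    """How many stride-2 upsampling layers take a spatial size
    from `start` to `target` (target must be start * 2**n)."""
    n = 0
    while start < target:
        start *= 2
        n += 1
    assert start == target, "target must be start times a power of two"
    return n

# Example: starting at 4x4 and aiming for 64x64 images,
# the generator needs four stride-2 Conv2DTranspose layers:
print(doublings_needed(4, 64))  # 4
```

The 256 (channel count) is a capacity knob rather than a constraint; it just has to match the `Reshape((4,4,256))` that follows.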
Which type of GAN is this?
Thanks Jeff. Could you do some instruction on the Keras Concatenate method? I'm trying to combine an LSTM with a ConvLSTM.
I don't know if you saw my previous comment, but the 'fix' code gives me some errors. Any thoughts on what might be wrong?
Your videos have been a great help! But I have a few questions, or potential video ideas.
1. How could I create more realistic faces (less "Frankenstein", as you said)? More epochs don't seem to do much, and I thought maybe deepening the network would help, but I have no idea how to do so.
2. We train the network in your videos, but how do we use the weights we just trained?
Also, in the fix of 7.2, when I ran it the first time in Colab it worked fine, except that the outputs weren't really close to faces and the disc_loss was always around a 1-point-something value, which seems a bit off. Now when I try running it again it gives me some errors, even though I haven't modified anything since the first run. Any idea what's wrong?
Keep up the great work, and I hope you can help me with my questions :)
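On question 2, one way to reuse the trained weights is to reload the saved model and feed it fresh noise. A minimal round-trip sketch; the tiny Dense "generator" and the `face_generator.h5` filename are stand-in assumptions for whatever your notebook actually saved:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

SEED_SIZE = 100

# Stand-in for the trained generator (assume yours was saved with
# generator.save("face_generator.h5") at the end of training).
generator = tf.keras.Sequential(
    [tf.keras.layers.Dense(8, activation="tanh", input_dim=SEED_SIZE)])
generator.save("face_generator.h5")

# Later, or in another session: reload and generate from fresh noise.
generator = load_model("face_generator.h5")
noise = np.random.normal(size=(4, SEED_SIZE))
images = generator.predict(noise)   # tanh output lies in [-1, 1]
images = (images + 1) / 2           # rescale to [0, 1] for display
```

With the real model, each row of `images` is one generated face ready to plot.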
Great video. Can you help with applying deep learning to detecting network intrusions?
Did an intro video on that for the class, ruclips.net/video/VgyKQ5MTDFc/видео.html
Hey guys, where can I get the images for the dataset? I'm new to GANs, sorry. (The image set has 11,682 images, and they are not in my drive.) Cheers, Max
It feels like you can just modify either the optimizer or the loss and use the built-in fit.
Can you please work on a voice dataset as GAN input for constructing face images?
I will have to look into that, thanks!
I got an error here:
AttributeError Traceback (most recent call last)
in
1 # This method returns a helper function to compute cross entropy loss
----> 2 cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
3
4 def discriminator_loss(real_output, fake_output):
5 real_loss = cross_entropy(tf.ones_like(real_output), real_output)
AttributeError: module 'tensorflow._api.v1.keras.losses' has no attribute 'BinaryCrossentropy'
How do I solve this, sir? Please help me with this.
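The `tensorflow._api.v1` in that traceback shows TensorFlow 1.x is active, and the `BinaryCrossentropy` loss class only exists in `tf.keras` from TensorFlow 2.0 onward. The fix is to run under TF 2:

```python
import tensorflow as tf

# Confirm which major version is active.
print(tf.__version__)   # should start with "2."

# If it prints 1.x: in older Colab notebooks, run the magic
#   %tensorflow_version 2.x
# before importing tensorflow; locally, upgrade with
#   pip install --upgrade tensorflow

# Under TF 2 this line works as in the video:
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
```

After switching runtimes, restart the kernel so the old import is discarded.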
I tried to do this with 16x16 PNGs, but I always get "InvalidArgumentError: Matrix size-incompatible" at the line decision = discriminator(generated_image). I don't know how to fix it, and I can't progress further without it.
You have to modify the discriminator so it also accepts 16x16 images, and the generator so it also creates 16x16 images. Check whether somewhere along the way there is code that specifies a higher input resolution than that.
great video Jeff!
God what I would give to have an avocado generator...
I really hope you can put together a Coursera or Udemy course on this, with proper assignments and certificates.
Please make a video on how to deploy machine learning models!
I have a playlist on that topic: ruclips.net/video/H73m9XvKHug/видео.html
I am not able to download the dataset from the second link; it's not working. Can anybody please help me?
Have you found it?
Borat is that you? Where is your mankini?
Watching this video has been a waste of 21:50 minutes; disappointing results.