Correction: With this new training method, you can train a powerful generative model with one-tenth of the training images, not one-tenth fewer images! Even more impressive, haha!
The paper covered:► arxiv.org/abs/2006.06676
GitHub with code:► github.com/NVlabs/stylegan2-ada
NVIDIA's Applied Research Program:► mynvidia.force.com/AccelerateResearch/s/Application
What are GANs ? | Introduction to Generative Adversarial Networks | Face Generation & Editing:► ruclips.net/video/ZnpZsiy_p2M/видео.html
What a time to be alive!
I like when simple changes produce significantly better results, like Munchausen RL :)
Could the same principles be applied for text and other types of data?
Of course! It is called data augmentation and it is frequently used in other types of data as well!
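To make that concrete for text, here is a toy sketch of two classic text augmentations (random word deletion and random word swap); the function names are illustrative, not from any particular library:

```python
import random

def random_deletion(words, p=0.1, rng=None):
    """Drop each word with probability p, but always keep at least one word."""
    rng = rng or random.Random(0)
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]

def random_swap(words, n_swaps=1, rng=None):
    """Swap n_swaps random pairs of words; the vocabulary is unchanged."""
    rng = rng or random.Random(0)
    words = list(words)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

sentence = "data augmentation also works for text".split()
aug_a = random_deletion(sentence, p=0.2)
aug_b = random_swap(sentence, n_swaps=2)
```

The same principle as in images: perturb the input without changing its label, so the model sees more variety from the same data.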
You can make neural nets with fixed dot products and adjustable (parametric) activation functions; the adjustability is simply swapped around. Those nets are highly statistical and behave a lot like GANs when trained as autoencoders.
The fixed dot products can be obtained from fast transforms like the FFT or WHT, which makes the nets very fast and requires far fewer parameters than conventional nets. See Fast Transform fixed-filter-bank neural networks.
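A minimal sketch of that idea, as I understand it: alternate a fixed, fast Walsh–Hadamard transform (the "frozen dot products") with a per-element parametric activation whose slopes are the only learned weights. The function names and the two-slope activation here are my own illustrative choices, not from any specific paper or library.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform, O(n log n); len(x) must be a power of 2.
    With the 1/sqrt(n) scaling, applying it twice returns the original vector."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def parametric_act(x, a, b):
    """Two-sided parametric activation: slope a where x >= 0, slope b where x < 0.
    The vectors a and b are the network's only adjustable parameters."""
    return np.where(x >= 0, a * x, b * x)

def forward(x, layers):
    """Alternate the fixed transform stage with the adjustable activations."""
    for a, b in layers:
        x = parametric_act(wht(x), a, b)
    return x

rng = np.random.default_rng(0)
n = 8
layers = [(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(2)]
y = forward(rng.standard_normal(n), layers)
```

Each layer costs O(n log n) and stores only 2n parameters, versus O(n²) for a dense weight matrix, which is where the speed and parameter savings come from.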
Awesome stuff. Thanks for the video.
Awesome stuff indeed, it could help many applications in the medical field and other fields, since it's usually very hard to have enough data! And thank you very much. :)
@@WhatsAI Brother, I saw your post on ig. I joined your discord too. I am really glad that I found you. I hope you will post videos often. Keep Up The Good Work.
Thank you so much! And I am glad to see you part of the discord community!
I'm currently training a GAN, but at different stages the images look similar to each other. They are still different, but contrast, brightness, and other finer details look the same across all images. With every new epoch these details change, but they remain uniform over the entire latent space. Is this normal? Or is it a type of mode collapse that affects finer details?
I am not a GAN expert, but have you tried using different patch sizes (if you are using a PatchGAN loss) and different weights for your losses?
Sorry for the delayed answer, I somehow missed your comment! If you fixed your problem, please let me know what the solution was!
Wait, you mean to say that people didn’t previously do it this way? That’s the way I’ve always done it!
I've never seen data augmentation in a GAN architecture before, and this one is especially designed to find the right amount of transformations to apply to the data sent to the discriminator, always adjusting toward the amount that maximizes your results! :)
Let me know what you've done! I'm quite interested in how you used data augmentation in your GAN without the augmentations affecting the generator's results!
Data augmentation only on the discriminator?
The discriminator only sees augmented images!
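Exactly, and that placement is the whole trick. Here is a toy sketch of the control flow (the networks, the flip-only augmentation, and the loss are placeholders of my own; in the actual ADA method the probability p is also tuned on the fly from a discriminator-overfitting heuristic rather than fixed):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, p):
    """Illustrative augmentation: flip each image horizontally with probability p.
    ADA uses a much richer pipeline, but the placement is the same."""
    out = images.copy()
    flip = rng.random(len(images)) < p
    out[flip] = out[flip][:, :, ::-1]
    return out

# Toy stand-ins for real networks, just to make the data flow concrete.
G = lambda z: z                  # "generator": identity on noise shaped like images
D = lambda x: float(x.mean())    # "discriminator": returns a scalar score

def d_loss(reals, z, p):
    fakes = G(z)
    # Augmentation happens only on the way INTO the discriminator, for both
    # real and generated images. G's outputs themselves are never altered,
    # so the augmentations cannot leak into the generated distribution.
    return D(augment(reals, p)) - D(augment(fakes, p))

reals = rng.standard_normal((4, 8, 8))
z = rng.standard_normal((4, 8, 8))
loss = d_loss(reals, z, p=0.5)
```

Because both real and fake batches go through the same random augmentations, the discriminator cannot use "was this image augmented?" as a shortcut for "is this image fake?".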
I am doing multi-class image classification using a lightweight CNN. One class of my dataset contains fewer than 100 images, but the other classes have 2000+ images. Can I use a GAN to avoid this imbalance?
You could indeed use GANs to generate images for such smaller classes, but you can always try other approaches, or simply weighting during training. It depends on the task, its complexity, and the images themselves!
@@WhatsAI If I use a GAN to increase the dataset size, will that reduce mispredictions?
Well, as I said, it depends on your dataset: whether the images are complex to generate, and whether the generated images really add variation or just create very similar images, which wouldn't help your model generalize. It depends on the task, your data, and how hard it is to create synthetic data. Some images are simpler to use GANs on for data augmentation because we have a better theoretical understanding of them, like CT scans for example, but it is more challenging for realistic images. The best thing to try first is training with a weighting system that gives more importance to classes based on the number of images per class.
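The weighting idea can be sketched in a few lines: give each class a weight inversely proportional to its frequency (this is the same convention as scikit-learn's `class_weight="balanced"`); the class names below are just examples.

```python
def class_weights(counts):
    """Inverse-frequency class weights: weight_c = total / (n_classes * count_c).
    Rare classes get large weights, common classes get weights below 1."""
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Example matching the thread: one class under 100 images, others with 2000+.
weights = class_weights({"rare": 100, "common_a": 2000, "common_b": 2000})
```

These weights are then used to scale each sample's loss during training, so errors on the rare class cost more than errors on the common ones.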
@@WhatsAI okay got it ..thank u😍
My pleasure!
It ONLY works with FOCUSED, PORTRAIT-like datasets, mostly faces. It is not ready for other, more complex datasets at all; it never converged for me.