Enhancing Limited Data Applications with GAN Training and Augmentations

  • Published: Jan 16, 2025

Comments • 23

  • @WhatsAI
    @WhatsAI  4 years ago +5

    Correction: With this new training method, you can train a powerful generative model with one-tenth of the training images, not one-tenth fewer images! Even more impressive, haha!
    The paper covered: ► arxiv.org/abs/2006.06676
    GitHub with code: ► github.com/NVlabs/stylegan2-ada
    NVIDIA's Applied Research Program: ► mynvidia.force.com/AccelerateResearch/s/Application
    What are GANs? | Introduction to Generative Adversarial Networks | Face Generation & Editing: ► ruclips.net/video/ZnpZsiy_p2M/видео.html
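The core trick in the linked paper (StyleGAN2-ADA) is to augment every image the discriminator sees and to tune the augmentation probability p with an overfitting heuristic, r_t = E[sign(D(real))]. A minimal pure-Python sketch of that controller; the function name and the `target_rt` default (the paper suggests a target around 0.6) are illustrative, not the paper's API:

```python
def update_aug_probability(p, d_real_outputs, target_rt=0.6,
                           adjust=0.01, p_max=1.0):
    """Adjust the augmentation probability p using the overfitting
    heuristic r_t = E[sign(D(real))] from the StyleGAN2-ADA paper.

    If the discriminator is confidently positive on real images
    (r_t above the target), it is overfitting, so augment more;
    otherwise, augment less.
    """
    signs = [1.0 if d > 0 else (-1.0 if d < 0 else 0.0)
             for d in d_real_outputs]
    r_t = sum(signs) / len(signs)
    if r_t > target_rt:
        p += adjust  # overfitting: strengthen augmentation
    else:
        p -= adjust  # underfitting: weaken augmentation
    return min(max(p, 0.0), p_max), r_t
```

In the paper, p is nudged every few minibatches so that r_t hovers near the target, which makes the augmentation strength automatically match how small the dataset is.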

  • @zeryphex
    @zeryphex 4 years ago +10

    What a time to be alive!

  • @JordanMetroidManiac
    @JordanMetroidManiac 4 years ago +3

    I like when simple changes produce significantly better results, like Munchausen RL :)

  • @costa768
    @costa768 4 years ago +4

    Could the same principles be applied for text and other types of data?

    • @WhatsAI
      @WhatsAI  4 years ago +1

      Of course! It is called data augmentation, and it is frequently used with other types of data as well!
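For text, the same idea shows up as operations like synonym replacement, random word dropout, or back-translation. A toy sketch with word dropout only; `augment_text` is a hypothetical helper, not from any library:

```python
import random

def augment_text(sentence, p_drop=0.1, seed=None):
    """Toy text augmentation: randomly drop each word with
    probability p_drop. (Other common choices: synonym replacement,
    random swaps, back-translation.)"""
    rng = random.Random(seed)
    words = sentence.split()
    kept = [w for w in words if rng.random() > p_drop]
    # Never return an empty sentence.
    return " ".join(kept) if kept else words[0]
```

Stronger drop probabilities create more varied (but noisier) training examples, just as heavier image augmentations do.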

  • @notgabby604
    @notgabby604 4 years ago +3

    You can make neural nets with fixed dot products and adjustable (parametric) activation functions, swapping around where the adjustability lives. Those nets are highly statistical and behave much like GANs when trained as autoencoders.
    The fixed dot products can be obtained from fast transforms like the FFT or WHT, which makes the nets very fast and requires far fewer parameters than conventional nets. See "fast transform fixed-filter-bank neural networks."
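The WHT mentioned here can be computed in O(n log n) with the fast Walsh-Hadamard transform, giving a fixed, parameter-free mixing layer. A minimal sketch (unnormalized, power-of-two lengths only):

```python
def fwht(x):
    """In-place fast Walsh-Hadamard transform (unnormalized).
    Acts as a fixed 'dot product' layer computed in O(n log n);
    the input length must be a power of two."""
    x = list(x)
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly pass: combine elements h apart.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

Note that the unnormalized transform is its own inverse up to a factor of n, i.e. applying it twice returns n times the input.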

  • @anishamalblanco7386
    @anishamalblanco7386 4 years ago +1

    Awesome stuff. Thanks for the video.

    • @WhatsAI
      @WhatsAI  4 years ago +3

      Awesome stuff indeed, it could help many applications in the medical field and other fields, since it's usually very hard to have enough data! And thank you very much. :)

    • @anishamalblanco7386
      @anishamalblanco7386 4 years ago +1

      @@WhatsAI Brother, I saw your post on ig. I joined your discord too. I am really glad that I found you. I hope you will post videos often. Keep Up The Good Work.

    • @WhatsAI
      @WhatsAI  4 years ago

      Thank you so much! And I am glad to see you part of the discord community!

  • @XOPOIIIO
    @XOPOIIIO 4 years ago +2

    I'm currently training a GAN, but at different stages the images look similar to each other. They are still different, but contrast, brightness, and other finer details look the same across all images. With every new epoch these details change, but they remain uniform over the entire latent space. Is that normal? Or is it a type of mode collapse that affects finer details?

    • @WhatsAI
      @WhatsAI  3 years ago

      I am not a GAN expert, but have you tried using different patch sizes (if you are using a PatchGAN loss) and different weights for your losses?
      Sorry for the delayed answer, I somehow missed your comment! If you fixed your problem, please let me know what the solution was!

  • @zelexi
    @zelexi 4 years ago +2

    Wait, you mean to say that people didn’t previously do it this way? That’s the way I’ve always done it!

    • @WhatsAI
      @WhatsAI  4 years ago +3

      I've never seen data augmentation in a GAN architecture before, and this method is specifically designed to find the right amount of transformations to apply to the data sent to the discriminator, always tuning it to maximize your results! :)
      Let me know what you've done! I'm quite interested in how you already used data augmentation in your GAN without the augmentations affecting the generator's results!

    • @NicheAsQuiche
      @NicheAsQuiche 4 years ago +2

      Data augmentation only on the discriminator?

    • @WhatsAI
      @WhatsAI  4 years ago +1

      The discriminator only sees augmented images!
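That data flow can be sketched as follows: both real and generated images pass through the same stochastic augmentation before reaching the discriminator, and nothing else changes. The helper names here are illustrative, and in a real framework the augmentations must also be differentiable so the generator's gradients can flow through them:

```python
import random

def discriminator_inputs(reals, fakes, augment, p, rng=None):
    """Sketch of the ADA data flow: the discriminator only ever sees
    (possibly) augmented images. Each image, real or generated, is
    augmented with probability p. `augment` is any per-image transform
    (a hypothetical helper); in practice it must be differentiable so
    the generator's gradient passes through it (autograd handles this
    in a real framework)."""
    rng = rng or random.Random()
    maybe = lambda img: augment(img) if rng.random() < p else img
    return [maybe(r) for r in reals], [maybe(f) for f in fakes]
```

Because the generator is only ever judged through the augmented discriminator, the augmentations do not leak into the generated images themselves, which is the key property of the method.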

  • @Athulyanklife
    @Athulyanklife 2 years ago

    I am doing multi-class image classification using a lightweight CNN. One class of my dataset contains fewer than 100 images, but the other classes have 2000+ images. Can I use a GAN to avoid this imbalance?

    • @WhatsAI
      @WhatsAI  2 years ago

      You could indeed use GANs to generate images for such smaller classes, but you can always try other approaches, or simply class weighting during training. It depends on the task, its complexity, and the images themselves!

    • @Athulyanklife
      @Athulyanklife 2 years ago

      @@WhatsAI If I use a GAN to increase the dataset size, will that reduce mispredictions?

    • @WhatsAI
      @WhatsAI  2 years ago +1

      Well, as I said, it depends on your dataset: whether the images are complex to generate, and whether the generated images really add variation or just create very similar images, which wouldn't help your model generalize. It depends on the task, your data, and how hard it is to create synthetic data. Some images are simpler to use GANs on for data augmentation because we have a better theoretical understanding of them, like CT scans for example, but it is more challenging for realistic images. It really depends on your dataset. The best thing to try first is training with a weighting scheme that gives more importance to a class based on its number of images.
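The class-weighting idea mentioned here can be sketched with the usual inverse-frequency formula (the same heuristic scikit-learn calls "balanced"); `class_weights` is an illustrative name, and the weights would typically be passed to a weighted loss:

```python
def class_weights(counts):
    """Inverse-frequency class weights: weight_i = total / (k * count_i)
    for k classes. Rare classes get proportionally larger weights, a
    common first remedy for imbalance before reaching for GAN-based
    augmentation."""
    n_classes = len(counts)
    total = sum(counts)
    return [total / (n_classes * c) for c in counts]
```

For the commenter's case (one class under 100 images vs. classes with 2000+), the rare class would be weighted roughly 20x more heavily in the loss.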

    • @Athulyanklife
      @Athulyanklife 2 years ago +1

      @@WhatsAI okay got it ..thank u😍

    • @WhatsAI
      @WhatsAI  2 years ago

      My pleasure!

  • @nguyenanhnguyen7658
    @nguyenanhnguyen7658 3 years ago

    It ONLY works with FOCUSED, PORTRAIT-like datasets, and mostly faces. It is not ready for other, more complex datasets at all; it never converged for me.