Exploring the StyleGAN2 Latent Vector: Controlling Facial Properties

  • Published: Jan 5, 2025

Comments • 38

  • @FernandoWittmann
    @FernandoWittmann 4 years ago

    Thanks for sharing your knowledge with us! I was very impressed by the results when you morphed one person into another. I will play with the latent vector to check which other attributes I can extract from it.

  • @sushmam4404
    @sushmam4404 4 years ago +1

    Thank you for the video sir ! Please keep making more such videos.

  • @noeltam75
    @noeltam75 4 years ago +1

    Can you explain the steps at 13:51-14:47? You finally settled on the [30,31,34,35,36] selections, but I did not see you use these selected vector indices in the rest of the video. Please help. Thanks.

  • @hico816
    @hico816 4 years ago +1

    very instructive and entertaining

  • @BradHelm
    @BradHelm 4 years ago +1

    Excellent video exploring nVidia StyleGAN2. I've seen a few people port it over to TF2.x, but the implementations are not as readable as nVidia's code. I played with it for a week or so and generated several terabytes of images while playing with the generating vectors.

    • @HeatonResearch
      @HeatonResearch  4 years ago +1

      Yeah, I've been looking at some of those. For now, the desire to use TF 2.0 has not been enough to push me to evaluate them. Might go there at some point.

    • @BradHelm
      @BradHelm 4 years ago

      @@HeatonResearch Am I correct in thinking that the generative vector manipulations you discuss in this video are really only applicable to the weights in the network at the time you generated the images? It would seem to me that since the GAN uses mutable functions mapping domain vectors to the co-domain of images, any change to the mapping function (transfer learning, for example) would render the vector practically useless (especially as more training is applied and the weights move further from their prior states).
      I will admit it was quite fun exploring the vector space to discern which elements of the generating vector affected which aspects of the output images. It was also fun comparing the model structures (trained my own GANs in a couple of cases) to see how the convolutional layers affected feature detection. One intriguing paper I ran across was the 2017 paper on Shallow-GlassesNet (Basbrain, Al-Taie, et al.) The only reason I went with a different layer structure was that I wanted to be able to adapt my solution to other features beyond glasses/no glasses. Still, it was pretty cool to see how they accomplished a specific task with a minimally complex solution.

  • @adrianpetrescu8583
    @adrianpetrescu8583 4 years ago

    The faces generated without glasses could be added to the no-glasses database, and the model could be retrained again...

  • @KlimovArtem1
    @KlimovArtem1 3 years ago +1

    If a single value in the latent space doesn't really represent a single visual feature (skin color, glasses, etc), then what does it represent? Do you have a video that explains the latent vectors better?

  • @nasiksami2351
    @nasiksami2351 4 years ago

    This one is next level stuff and awesome!

  • @nguyenanhnguyen7658
    @nguyenanhnguyen7658 3 years ago

    This is very helpful. Thank you !

  • @Techning
    @Techning 3 years ago +1

    Thank you! I'm currently trying to find the parts of the vector which affect the shadowing / lighting in the images. I'm using a StyleGAN encoder which creates 18x512 tensors (18 because of the mapping network, I believe). Do you have any idea how to explore the latent space in this case?
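The 18x512 tensor mentioned here is the extended W+ space: one 512-d style vector per synthesis layer. A minimal numpy sketch of layer-wise exploration, assuming a random stand-in latent in place of a real encoded one (in practice the edited tensor would be fed back through the synthesis network):

```python
import numpy as np

# A W+ latent from a StyleGAN encoder: one 512-d style vector per
# synthesis layer (18 layers for a 1024x1024 model). Everything below
# is an illustrative stand-in, not the real generator.
w_plus = np.random.default_rng(0).standard_normal((18, 512))

def perturb_layers(w, layers, direction, strength):
    """Nudge only the chosen layers along a direction vector."""
    out = w.copy()
    out[layers] += strength * direction
    return out

# Coarse layers (roughly 0-3) tend to control pose and face shape,
# middle layers finer geometry, and later layers color and lighting,
# so a lighting hunt would sweep directions over later layers only.
direction = np.random.default_rng(1).standard_normal(512)
direction /= np.linalg.norm(direction)       # unit-length direction
edited = perturb_layers(w_plus, layers=np.arange(8, 18),
                        direction=direction, strength=2.0)
```

Keeping the coarse layers untouched preserves the face's identity while the later layers vary, which narrows the search considerably.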

  • @wold2180
    @wold2180 4 years ago +1

    I really enjoyed the video. What if I want to run my own image, not an nvidia image? I want to try with double eyelids, not glasses. Thank you again for the video.

    • @HeatonResearch
      @HeatonResearch  4 years ago

      That is a bit more complex. I've seen some research that attempts to find a latent vector that matches the input image.
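The research mentioned here is usually called GAN inversion or latent projection: optimize a latent until the generator's output matches the target image. A toy numpy sketch with a stand-in linear "generator" (the real approach optimizes against the actual network, typically with a perceptual loss rather than plain squared error):

```python
import numpy as np

# Toy sketch of latent projection: find z such that G(z) matches a target.
# G here is a stand-in linear map so the idea fits in a few lines.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))     # stand-in "generator" weights
G = lambda z: A @ z                  # maps an 8-d latent to a 64-d "image"

target = G(rng.standard_normal(8))   # pretend this is the input photo
z = np.zeros(8)                      # start from a neutral latent
lr = 1e-3
for _ in range(2000):
    grad = 2 * A.T @ (G(z) - target)  # gradient of ||G(z) - target||^2
    z -= lr * grad
# G(z) now reproduces the target closely.
```

With the real generator the loss surface is far from this nicely convex toy, which is why projection to a matching face is genuinely harder than interpolating between generated ones.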

  • @liellplane145
    @liellplane145 4 years ago +1

    Great video.. I am trying to implement a controlled latent space myself - what is your opinion on "TL-GAN", if you've ever come across it?

  • @sangjunjeong242
    @sangjunjeong242 4 years ago +1

    Such a great video! Thank you for sharing.
    I have a question about StyleGAN: is it possible to change just the eye shape? For example, by changing some of the points in that code, could you add a double eyelid to someone who does not have one, or the opposite way too? Kind of interesting video! Thank you again!

    • @HeatonResearch
      @HeatonResearch  4 years ago +1

      In theory you can; it is all about figuring out how to adjust the latent vector. Usually I simply do it by using guide images and pulling the vector between them. Not that different from breeding dogs/animals. :)
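"Pulling the vector between them" is plain linear interpolation in latent space. A minimal sketch, assuming random stand-in latents where real ones would come from the mapping network or an encoder:

```python
import numpy as np

# Linear interpolation between two latent vectors: z_src might generate
# a face without a double eyelid and z_dst one with it; intermediate
# points morph between the two when fed to the generator.
rng = np.random.default_rng(42)
z_src = rng.standard_normal(512)   # stand-in latents; real ones come
z_dst = rng.standard_normal(512)   # from the mapping network / encoder

def lerp(a, b, t):
    """t=0 gives a, t=1 gives b; values in between blend the two."""
    return (1.0 - t) * a + t * b

frames = [lerp(z_src, z_dst, t) for t in np.linspace(0, 1, 5)]
# Feeding each frame to the generator yields the morph sequence.
```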

    • @sangjunjeong242
      @sangjunjeong242 4 years ago

      @@HeatonResearch Thank you so much!

  • @krlospatrick
    @krlospatrick 3 years ago

    Thanks for sharing your knowledge.
    Does it work with StyleGAN2-ADA ?

  • @kidstabate7191
    @kidstabate7191 4 years ago +1

    Do you have an example which shows how to develop coreference resolution using TensorFlow?

    • @HeatonResearch
      @HeatonResearch  4 years ago

      No, new to me, will have to take a look.

  • @thepyre
    @thepyre 3 years ago

    I've tried doing this with my own pkl file, instead of nvidia's. I modified the code: network_pkl = 'gdrive:networks/stylegan2-ffhq-config-f.pkl' to: network_pkl = '/content/myfile.pkl' and I got an error. Any ideas on what I'm doing wrong?

    • @thepyre
      @thepyre 3 years ago

      is it because I am using stylegan2 ada?

  • @jeetshah8513
    @jeetshah8513 4 years ago

    Sir, you are awesome!!!!

  • @soapsydoopsy4694
    @soapsydoopsy4694 4 years ago +1

    Thumbs up for the share

  • @mshrawan4
    @mshrawan4 3 years ago

    That was awesome! Just curious: is manually searching the subspace through trial and error the only way to find which set of indices corresponds to glasses vs. no glasses?
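Trial and error is not the only way: a common alternative is to hand-label a small set of generated faces and take the difference of mean latents as an attribute direction. A minimal numpy sketch with random stand-in latents (real ones would be the vectors that produced the labeled faces):

```python
import numpy as np

# Label a handful of generated faces by hand (glasses vs. no glasses),
# then take the difference of the mean latents. The resulting direction
# tends to add or remove the attribute when applied to any latent.
rng = np.random.default_rng(1)
z_glasses = rng.standard_normal((20, 512))     # latents that gave glasses
z_no_glasses = rng.standard_normal((20, 512))  # latents that did not

direction = z_glasses.mean(axis=0) - z_no_glasses.mean(axis=0)
direction /= np.linalg.norm(direction)         # unit-length edit direction

z = rng.standard_normal(512)                   # any new latent
z_with_glasses = z + 3.0 * direction           # push toward "glasses"
```

The direction averages away identity-specific variation, leaving mostly the labeled attribute; a linear classifier's normal vector works the same way with more labels.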

  • @hoatong1670
    @hoatong1670 4 years ago

    Thanks so so sooooooooooooooo much.

  • @kakeruuma6710
    @kakeruuma6710 3 years ago

    Dear Prof. Jeff, I have a crazy question. What if there are only human faces in the training set, and after training, I input a fish photo? The target is trying to give the fish glasses. Is this possible?

  • @yilberrojas8306
    @yilberrojas8306 4 years ago +1

    Dear Sir, can I do this but paint the person's lips instead of adding glasses?

    • @HeatonResearch
      @HeatonResearch  4 years ago +1

      Yes, it would be a similar concept. Find samples with the lips in the form/color you desire.

    • @yilberrojas8306
      @yilberrojas8306 4 years ago

      @@HeatonResearch thank...

  • @uParticle
    @uParticle 3 years ago

    Very awesome stuff!! I got an error, though, when trying to load my pickle. Any idea what this is?
    /content/stylegan2/dnnlib/tflib/network.py in __setstate__(self, state)
    276
    277 # Set basic fields.
    --> 278 assert state["version"] in [2, 3, 4]
    279 self.name = state["name"]
    280 self.static_kwargs = util.EasyDict(state["static_kwargs"])

    • @uParticle
      @uParticle 3 years ago

      I now realize that this error is caused by the pkl file being in the wrong format: the network was trained using StyleGAN2-ADA, while the explorer uses StyleGAN2. I figured out how to explore using SG2-ADA, but would still very much love to see how you would do it in a glasses / no glasses kind of situation :)

  • @tolaut
    @tolaut 4 years ago +1

    Hello Jeff, great content as always! :)
    Would you mind giving me your opinion? I am enrolled in a master's program with a focus on Machine Learning / Data Science and I am looking to get a computer. I have my eyes on the new 16" MacBook, but obviously it doesn't have CUDA support for deep learning.
    Would you say that you feel limited by the Google Colab / Kaggle kernel GPUs, or are they just fine? I honestly don't like the idea of getting a Windows machine just for that aspect.
    Thanks in advance!

    • @HeatonResearch
      @HeatonResearch  4 years ago

      IKR... it REALLY annoys me that Mac is not offering nVidia. Otherwise, that would be my dream computer. Currently, I am typing this from a Mac, and I do most of my GPU work in the cloud. I am fond of System76, but have not owned one.... yet...

  • @aminesoulaymani1126
    @aminesoulaymani1126 4 years ago

    thanks a lot

  • @mauricioluisvega8342
    @mauricioluisvega8342 3 years ago

    HOW MANY TIMES DID HE SAY: GLASSES???

  • @moahaimen
    @moahaimen 2 years ago

    I AM A PHD STUDENT AND I NEED YOUR HELP PLZ