Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)

  • Published: 1 Aug 2024
  • Generative adversarial networks (GANs) are a recently introduced class of generative models, designed to produce realistic samples. This tutorial is intended to be accessible to an audience with no prior experience with GANs, and should prepare the audience to make original research contributions applying GANs or improving the core GAN algorithms. GANs are universal approximators of probability distributions. Such models generally have an intractable log-likelihood gradient and require approximations such as Markov chain Monte Carlo or variational lower bounds to make learning feasible. GANs avoid both of these classes of approximations. The learning process consists of a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples originated from the training data or from the generative model. At the Nash equilibrium of this game, the generator network reproduces the data distribution exactly, and the discriminator network cannot distinguish model samples from training data. Both networks can be trained using stochastic gradient descent, with exact gradients computed by backpropagation. (A minimal sketch of this training loop follows the topic list below.)
    Topics include:
    - An introduction to the basics of GANs.
    - A review of work applying GANs to large image generation.
    - Extending the GAN framework to approximate maximum likelihood, rather than minimizing the Jensen-Shannon divergence.
    - Improved model architectures that yield better learning in GANs.
    - Semi-supervised learning with GANs.
    - Research frontiers, including guaranteeing convergence of the GAN game.
    - Other applications of adversarial learning, such as domain adaptation and privacy.
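
    A minimal sketch of the adversarial training loop described in the abstract above, assuming PyTorch; the 1-D "data" distribution, network sizes, and hyperparameters are illustrative choices, not the tutorial's:

    ```python
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: z -> x
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: x -> logit

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    for step in range(2000):
        real = 2.0 + 0.5 * torch.randn(64, 1)  # samples from the data distribution
        fake = G(torch.randn(64, 8))           # samples from the generator

        # Discriminator step: tell training data apart from model samples.
        d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: push D toward calling generated samples real
        # (the common non-saturating heuristic, not the raw minimax loss).
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```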

Comments • 109

  • @RATMCU 4 years ago +205

    Even the lecture on GANs had an adversary.

  • @Prithviization 5 years ago +286

    Schmidhuber vs Goodfellow @ 1:03:00

  • @dasayan05 4 years ago +53

    Schmidhuber: Can I ask a question?
    Goodfellow: Oh, You_Again?

  • @TheAntrikooos 4 years ago +84

    An adversarial interaction about adversarial methods.

  • @christophschmidl 6 years ago +59

    Jürgen Schmidhuber starts talking at 1:02:59.

  • @sairocks128 6 years ago +1

    Thank you very much for uploading.

  • @mranonymous8815 5 years ago +60

    Watch from 1:03:00 to 1:05:04 at double speed. The art of schmidhubering vs the art of patience while being schmidhubered.

    • @RATMCU 4 years ago +5

      He took the time to answer the question for a bigger audience than just Schmidhuber, directing them to the places where it matters, and it also saved time in his presentation.

  • @MunkyChunk 2 years ago

    Very well organised lecture, thank you Ian!

  • @TapabrataGhosh 5 years ago +161

    1:03:00 is what you're all here for

    • @eugenedsky3264 4 years ago

      Learning Finite Automaton Simplifier: drive _ google _ com/file/d/1GSv89tiQmPDcnFEu4n4CqfaJcUJxVmL5KrSCJ047g4o/edit

    • @y.o.2478 2 years ago +1

      So you come to a video about an amazing technology just for petty drama?

    • @olarenee208 2 years ago +6

      @@y.o.2478 yes

    • @adamtran5747 2 years ago +2

      @@y.o.2478 1000%

    • @carlossegura403 2 years ago +1

      @@y.o.2478 Indeed.

  • @masisgroupmarinesoftintell3299 3 years ago

    Thank you for uploading the wonderful video! The explanations were really clear.

  • @ruchirjain1163 2 years ago +22

    I came here just to understand GANs a bit better; I didn't realize I would strike gold at 1:03:00.

    • @kanishktantia7899 2 years ago +1

      Can you share your study plan, i.e., how did you get there?

    • @ruchirjain1163 2 years ago +3

      @@kanishktantia7899 Just watch a couple of videos on this and derive the min-max loss at least once on paper (it's written out after this thread). Then look at a basic implementation of a GAN from scratch. I was doing it just to make a ppt on a research paper (StyleGAN2), so this was more than enough for me. The same goes for any other topic in DL. I was just dipping my feet into the deep sea of DL; I ain't touching it again.

    • @kanishktantia7899 2 years ago

      @@ruchirjain1163 Sure, that might help. I'm looking for higher-study opportunities abroad in this space. Can you help me in any capacity?

    • @ruchirjain1163 2 years ago

      @@kanishktantia7899 It depends, mate. I would say you should try going for a thesis if you have that option at your university.

    • @kanishktantia7899 2 years ago +1

      @@ruchirjain1163 I graduated last year and am working these days; I'm not currently enrolled at any university. That's exactly what I'm looking for: any lab or any university in the US, or maybe somewhere else.
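
    For reference, the min-max loss mentioned a few replies up is the GAN value function from Goodfellow et al. (2014), reproduced here for convenience:

    ```latex
    % D is trained to maximize V, G to minimize it (hence "min-max"):
    \min_G \max_D V(D, G) =
      \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
      + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
    ```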

  • @neuron8186 3 years ago +8

    Really good fellow

  • @wenboma4398 6 years ago

    Good speech, thx

  • @harshinisewani5095 3 years ago

    Can someone give more insight into the exercises?

  • @cueva_mc 3 years ago +4

    Can someone explain in plain English the reason for the confrontation?

    • @Karl_Squell 3 years ago +2

      stats.stackexchange.com/questions/251460/were-generative-adversarial-networks-introduced-by-j%c3%bcrgen-schmidhuber/301280#301280

  • @Peace_in_you 4 years ago

    I have a question: is this kind of network only good for data similar to its training data?

    • @bitbyte8177 4 years ago

      No

    • @busTedOaS 4 years ago +8

      The generator, if successful, will recover the training distribution, and nothing else, if that answers the question.

  • @g.l.5072 1 year ago +1

    I don't think people realize this is one of the most important lectures of the past 100 years... GANs... they will be everywhere soon.

  • @backnforth8401 4 years ago +15

    1:03:00 what a way of getting called out

  • @AvielLivay 2 years ago

    13:24 When searching for the best theta, Ian sums over the log of the probabilities rather than over the probabilities themselves. Why?

    • @piclkesthedrummer6439 1 year ago +1

      If it's still relevant, I'll try to help. This is maximum likelihood estimation: the likelihood is the product of the estimated probabilities over the training dataset. Since the derivative of a product is not easy to work with, we take the log, which turns the product into a sum of logs. Because log is a monotonically increasing function, it doesn't change where the maximum is, and it is easier to differentiate. Hope it helped.
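
      In symbols, the standard identity the reply describes (not taken from the lecture slides):

      ```latex
      % log is monotonically increasing, so the product of likelihoods and the
      % sum of log-likelihoods share the same maximizer theta*.
      \theta^{*} = \arg\max_{\theta} \prod_{i=1}^{m} p_{\theta}\!\left(x^{(i)}\right)
                 = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\theta}\!\left(x^{(i)}\right)
      ```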

  • @svenjaaunes2507 4 years ago +83

    So many people in the wild came up with this idea of training two networks against each other. Some lacked the deeper knowledge to continue; some tackled specific problems instead of generalizing. Ian Goodfellow is just another person who came up with this exact idea (reportedly while drunk or something; that may be wrong, but it was in a casual context for sure). Schmidhuber's paper is also based on this exact idea. Goodfellow should acknowledge the overwhelming overlap, let alone the similarity, but he doesn't, because if he did, it would take all the attention away from him, since the earlier work is the same. He probably wasn't even aware of Schmidhuber's paper, because it dates back more than 20 years earlier. That doesn't justify not giving proper credit, though. Goodfellow merely popularized GANs; he certainly didn't invent them. In fact, I bet there were others even before Schmidhuber who came up with this core idea but just didn't keep at it.

  • @yoloswaggins2161 4 years ago +56

    If only Schmidhuber had more friends in the field he wouldn't be outmaneuvered in this manner.

    • @hummingbird7579 2 years ago +4

      It should not matter whether someone has friends in the field!!!

    • @yoloswaggins2161 2 years ago +5

      @@hummingbird7579 It shouldn't but it does!

    • @hummingbird7579 2 years ago

      @@yoloswaggins2161 History has shown time and time again... you are right.
      It's really a shame.

  • @architjain6749 5 years ago +50

    I don't know if it's just me, but I enjoyed and understood Sir Jürgen Schmidhuber more than Sir Ian Goodfellow. Will definitely go check out his contributions.

    • @kanishktantia7899 2 years ago +1

      How do you study? Can you share your plan with me?

  • @svenjaaunes2507 4 years ago +34

    Predictability minimization is almost literally the same thing as GANs.

    • @busTedOaS 3 years ago +11

      almost literally kind of exactly vaguely the same thing.

  • @nikhilmuthukrishnan7222 6 years ago +1

    Pure genius. Is it available in R?

  • @OttoFazzl 4 years ago +8

    I think he misspoke at 31:39: he said that we want to make sure that x has a higher dimension than z. Instead, z should have a higher dimension than x, to provide full support over the space of x and to avoid learning a lower-dimensional manifold.

    • @busTedOaS 4 years ago +2

      That's almost never the case in practice, where z has about 100 entries and x is 500x500x3 or something similar. If it were the other way around, the noise input would have superfluous entries, which is what he means by "learning lower-dimensional manifolds", I believe. However, in order to perfectly reconstruct the training distribution, I agree that it makes sense to have it exactly that way, and I'm confused by how he worded that whole part. (Typical shapes are sketched after this thread.)

    • @akshayshrivastava97 3 years ago

      Yeah, that confused me too. Good to see someone agrees that it should be the other way round.
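
    To make the dimension comparison in this thread concrete, here is a shape-only sketch; the sizes (a 100-D z, 3x64x64 images) are typical illustrative choices, not taken from the lecture:

    ```python
    import torch
    import torch.nn as nn

    z = torch.randn(16, 100)                                   # batch of 100-D latent codes
    G = nn.Sequential(nn.Linear(100, 3 * 64 * 64), nn.Tanh())  # toy generator
    x = G(z).view(16, 3, 64, 64)                               # 12288-D images: dim(x) >> dim(z)
    print(z.shape, x.shape)  # torch.Size([16, 100]) torch.Size([16, 3, 64, 64])
    ```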

  • @dibyaranjanmishra4272 5 years ago +5

    THERE IS NO BETTER LECTURE ON INTRODUCTION TO GAN. PERIOD.

  • @Marcos10PT 4 years ago +32

    Schmidhuber has a point

    • @hummingbird7579 2 years ago +4

      I feel bad for him. He deserves more recognition.

  • @user-pz7sl4qq9v 2 years ago

    Well, well, well, another prodigy from Stanford who has made AI more complicated and sophisticated for the greater good of humanity 😔... Isn't a GAN what's CONDUCTING the war now lol 😂. I remember the movie WARGAMES.

  • @pranav7471 4 years ago +34

    Such a fake. Schmidhuber is the true creator of GANs, and Goodfellow had the nerve to shut him up.

    • @busTedOaS 4 years ago +7

      Two people inventing more or less the same thing independently of each other has happened many times in history, for example Newton's method or Darwin's theory of natural selection. Calling either of them fake is rather presumptuous.

    • @pranav7471 4 years ago +9

      @@busTedOaS Bro, Goodfellow came decades after this guy; that's not called simultaneous invention.

    • @busTedOaS 4 years ago +4

      @@pranav7471 That's why I said independent, not simultaneous. Bro.

    • @pranav7471 4 years ago +9

      @@busTedOaS I completely understand that, but you need to credit the first creator too; that's the only problem I have. As far as I know, Goodfellow was the first one to make adversarial networks work. Schmidhuber worked with 1000x worse hardware and wasn't able to get any real results, so it was just a cool idea with nothing solid backing it up. Thus Goodfellow deserves credit, but not all of it. Even after all these arguments, Goodfellow refused to acknowledge the clear similarity between the works and cite it, which is blatantly unethical from an academic standpoint.

    • @busTedOaS 4 years ago +2

      @@pranav7471 Schmidhuber had modern hardware in 2014, plus, presumably, years of experience with adversarial models ahead of Goodfellow. I don't see any disadvantage for Schmidhuber there.
      I agree that one should cite related work, and Goodfellow does this consistently; in fact, that's what he did right before the confrontation. Why would he specifically ignore Schmidhuber's work while citing many other works with even stronger similarities? The reasonable explanations I can come up with are 1) personal spite or 2) he honestly thinks the techniques are sufficiently different.

  • @EB3103 3 years ago +10

    This Ian kid is rude and not so Goodfellow. Schmidhuber just politely asked a question and got attacked.

    • @EB3103 3 years ago +1

      And also, Schmidhuber could be his father; he should show a little more respect.

    • @NavinF 2 years ago +3

      @@EB3103 Ok boomer

    • @Nickyreaper2008 2 years ago +3

      We're talking about a guy who says he invented "generative adversarial networks" when the paper clearly mentions 7 other people, university staff, and more working on the project. Of course he's gonna talk like that.

    • @floydamide 3 months ago

      @@Nickyreaper2008 He also likes to call himself "The industry lead"

  • @akshayshrivastava97 3 years ago +4

    OK, people like Dr. Schmidhuber need to understand that a tutorial is not the place for this kind of discussion. It could easily have been taken offline. Also, once the presenter says they have no desire to discuss it at that moment, back off. It's too conceited and self-important to think your argument with the presenter matters more than everyone else (who paid for NIPS and had been looking forward to this tutorial) getting their time and money's worth.
    That being said, I'm not downplaying what Dr. Schmidhuber was trying to point out, simply saying that it could have been discussed differently and elsewhere.

    • @MrMSS22 3 years ago +11

      If Schmidhuber had raised his concern only offline, it would not have gotten into the focus of academic publicity the way it did. It's reasonable to assume that the latter was his intention; therefore, it didn't matter whether Goodfellow's presentation was a tutorial.

  • @youtubeadventurer1881 5 years ago +37

    Why is he obfuscating everything with needless mathematical jargon that most ML researchers won't understand? This stuff actually isn't so complicated that you need a degree in mathematics to understand it. You can understand it on a deep level with only high school mathematics if it is explained properly.

    • @teckyify 5 years ago +43

      Oh sorry, what would you like to talk about instead? Visual Studio Code themes or the latest JavaScript framework? Any university ML course is heavy on math, Einstein.

    • @OttoFazzl 4 years ago +15

      The engineering details are straightforward; however, this is a NIPS lecture, so they have to give theoretical justification for how it works. If you just want to engineer a working system you don't need all of this, I agree with that.

    • @robbiedozier2840 4 years ago +10

      Man, it’s almost like Computer Science is a subset of Mathematics...

    • @robbiedozier2840 3 years ago

      @@sZlMu2vrIDQZBNke8ENmEKvzoZ lmao

    • @judedavis92 2 years ago +1

      Go write your HTML code, kid.

  • @TobiSemester 2 months ago

    My role model. We are science 🔭🧪