Generative Adversarial Networks (GANs) - Computerphile

  • Published: 27 Sep 2024

Comments • 702

  • @vincentpeschar
    @vincentpeschar 7 years ago +2542

    "Neural networks don't have feelings, yet...."

    • @RafidW9
      @RafidW9 7 years ago +37

      Vincent Peschar this is why the AGI will fight back. we abuse them so much lol.

    • @TechyBen
      @TechyBen 7 years ago +19

      Does a rock have feelings? If a rock had feelings, would it matter? Why? (honest questions on logic and peoples feelings)

    • @AlabasterJazz
      @AlabasterJazz 7 years ago +63

      It could be said that any matter that is arranged into any pattern is at some level alive. While a rock wouldn't have feelings nearly as obvious as humans, it still might have some sense of being. Breaking a rock into pieces may not cause it to experience pain or anxiety or pleasure, as its sensory capacity is not sufficient to notice such changes to itself. However, its current makeup and position in the universe is no more or less arbitrary than any other matter in the universe. I guess the follow-up question might be: if all matter, including organisms, is ultimately made up of non-living particles, what is life?

    • @autolykos9822
      @autolykos9822 7 years ago +23

      Yet. Growth mindset.

    • @tylerpeterson4726
      @tylerpeterson4726 7 years ago +25

      TechyBen The problem comes when you start asking if mud has feelings and if people have feelings. Mud and people are generally made of the same materials. It’s just that we are organized in a way that gives us feelings. The religious and non-religious can debate if the soul exists or not, but scientifically we can only differentiate between mud and life based on its level of organization. And so it holds that a highly organized piece of silicon (a computer chip) could also have feelings.

  • @d34d10ck
    @d34d10ck 7 years ago +365

    To call this impressive would be an understatement. That's amazing, fantastic, unbelievable, highly interesting and scary all at once.

    • @naturegirl1999
      @naturegirl1999 4 years ago

      Patrick Bateman why would it be scary?

    • @d34d10ck
      @d34d10ck 4 years ago +7

      @@naturegirl1999 Most technologies can be scary, since they all have the potential of being misused.
      AI can be particularly scary, since we use it for systems that are too complex for us to understand.
      So what we do is hand these complexities over to a computer, in the hope that it handles them the way we think it should. But the truth is that we don't really know what it does, and if we decide to use such technologies in our weapon systems, for example, then it starts getting scary.

    • @insanezombieman753
      @insanezombieman753 4 years ago +15

      @@d34d10ck Interesting. Now let's hear what Paul Allen has to say about this

    • @h0stI13
      @h0stI13 7 months ago +2

      What do you think about it now?

    • @d34d10ck
      @d34d10ck 7 months ago

      @@h0stI13 I can no longer imagine a life without generative AIs. As a developer, I use them all the time and my productivity has increased immensely because of them.

  • @mother3946
    @mother3946 1 year ago +10

    His clarity and simplicity in unpacking a complex topic are just out of this world.

  • @Nalianna
    @Nalianna 7 years ago +1100

    This gentleman explains high level concepts in ways that the layman can understand, AND has an interesting voice to listen to. A++ work

    • @AlexiLaiho227
      @AlexiLaiho227 5 years ago +20

      You should check him out, he made his own YouTube channel. Search for "Robert Miles AI".

    • @savagenovelist2983
      @savagenovelist2983 4 years ago +1

      299 likes, here we go.

    • @giveusascream
      @giveusascream 4 years ago +2

      And mutton chops that I can only dream of

    • @blackcorp0001
      @blackcorp0001 3 years ago +1

      Brain work ... like House work...but deeper

    • @ev6558
      @ev6558 2 years ago +9

      I like that they don't feel the need to do a camera cut every time he pauses to think of his next word. Makes me feel like the video was made for people who are actually interested and not just clickbait for zoomers.

  • @recklessroges
    @recklessroges 7 years ago +393

    The Dell screens have come to worship the Commodore PET.

  • @JamesMBC
    @JamesMBC 6 years ago +38

    Man, one of my favorite videos on this channel. How did I miss it?
    Not only does it make you think about the endless potential of machine learning, it also sheds some light on how natural brains might work. Maybe even a basic aspect of the nature of creativity.
    Getting my mind blown again!

  • @slovnicki
    @slovnicki 5 years ago +69

    "..which is kind of an impressive result." - understatement of the century

    • @jork8206
      @jork8206 3 years ago +3

      Gotta love latent spaces. My favorite was a network that showed a significant correlation between - and - . Assigning any direct meaning to that could be a leap of logic but when you think about it, cats have more visually feminine features than dogs, generally speaking

  • @DotcomL
    @DotcomL 7 years ago +24

    I love the "finding the weakness" analogy. Really helped me to understand.

  • @samre3006
    @samre3006 6 years ago +20

    Never really understood GANs before. Thank you so much for making this so intuitive. Eternally grateful.

  • @bimperbamper8633
    @bimperbamper8633 7 years ago +14

    Only discovered this channel recently and I've been watching nothing but Computerphile videos for a whole week. Love the content you do with Rob Miles - his field of study combined with his explanations make these my favorite videos to watch.
    Thank you!

  • @szynkers
    @szynkers 7 years ago +11

    The only instance that I can remember when a science video presented at my level of understanding genuinely blew my mind at the end. The research on artificial neural networks will surely change computing as we know it.

  • @airportbum5402
    @airportbum5402 1 year ago +1

    I think it's so cool that there is a Linksys WRT-54G and a Commodore PET in the background and they're discussing topics so modern.

  • @chrstfer2452
    @chrstfer2452 7 years ago +39

    "Right now, they're just datapoints" I like this guy

  • @Felixkeeg
    @Felixkeeg 7 years ago +350

    I honestly more often than not click the video based on whether Rob is hosting.

    • @dylanica3387
      @dylanica3387 7 years ago +6

      Same here

    • @VentraleStar
      @VentraleStar 7 years ago +12

      He's cute

    • @HailSagan1
      @HailSagan1 7 years ago +34

      I like all the Computerphile regulars, but yeah, Rob is great. I recommend checking out his personal channel, which focuses on AGI; it's linked in the description above!

    • @cubertmiso
      @cubertmiso 6 years ago

      Cast is great for any channel. Only Philip Moriarty gives weird vibes.

    • @JamesMBC
      @JamesMBC 6 years ago +4

      This guy knows. Rob is the best, and this is fascinating!
      It makes it irresistible to get involved with machine learning.

  • @LP6_yt
    @LP6_yt 7 years ago +66

    Love the Commodore PET on the shelf. Class.

    • @greywolf271
      @greywolf271 7 years ago +3

      Stuff a GAN into 64k. Reminds me of the Chess player written for 4k ram

    • @meanmikebojak1087
      @meanmikebojak1087 4 years ago +1

      I've got a Commodore PET on a shelf too. Mine walks off during POST, so it isn't used anymore. But it looks classy on the shelf.

  • @macronencer
    @macronencer 7 years ago +7

    Love the Commodore PET on the shelf! I played with one of the original PETs when they first came out (the one with the horrid rectilinear keyboard!). We eventually got four of the later models at my school, and before long we were happily playing Space Invaders when the teachers weren't looking... and then doing hex dumps of Space Invaders, working out how it worked, and adding a mod to give it a panic button in case the teacher came into the room so you could hit the button and look as if you were working. To be honest, I'm not sure they would have cared, because we probably learned more by doing the hex dump than we would have with our usual work!

    • @raapyna8544
      @raapyna8544 5 months ago +1

      Oh the effort kids will put in in order to avoid work!

  • @BenGabbay
    @BenGabbay 6 years ago +2

    This is literally one of the most fascinating videos I've ever seen on YouTube.

  • @fast1nakus
    @fast1nakus 5 years ago +2

    I'm pretty sure this is the best format for learning something on YouTube

  • @tohamy1194
    @tohamy1194 7 years ago +16

    I could watch this all day.. like I did yesterday with numberphile :D

  • @surrealdynamics4077
    @surrealdynamics4077 4 years ago +3

    This is so interesting! This is the way thispersondoesnotexist "photos" are made by the machine. Super cool!

  • @tumultuousgamer
    @tumultuousgamer 2 years ago +3

    That last bit was super interesting and mind blowing at the same time! Excellent video!

  • @knightshousegames
    @knightshousegames 7 years ago +71

    "So cats equal zero and dogs equal one. You train it to know the difference" Ultimate final test: show it a Shiba Inu.

    • @GhostGuy764
      @GhostGuy764 7 years ago +11

      knightshousegames Shiba look too happy to be cats.

    • @knightshousegames
      @knightshousegames 7 years ago

      That is what they call a fringe case. My guess is the machine would try to return a 0.5

    • @homer9736
      @homer9736 7 years ago +3

      knightshousegames I think you should ban 0.5, because that's right for both cases always; the machine can't learn from that

    • @hellfiresiayan
      @hellfiresiayan 5 years ago +8

      No because in the end you can tell the network that it is a dog, and it could alter its biases based on that result, so the next 100 times you show it a shiba inu, it might be able to give a better answer. Whether that would negatively affect its ability to identify a cat, however, I have no idea.
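
    A minimal sketch of the cat = 0 / dog = 1 classifier this thread is talking about, assuming a PyTorch-style setup (layer names and sizes are illustrative, not from the video); the sigmoid output is a probability of "dog", so a genuinely ambiguous image really can land near 0.5:

        import torch
        import torch.nn as nn

        # Tiny binary classifier: output near 0 means "cat", near 1 means "dog".
        classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 128),   # assumes 64x64 RGB inputs
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),                  # squashes the score into (0, 1)
        )

        image = torch.rand(1, 3, 64, 64)   # stand-in for a real photo
        p_dog = classifier(image).item()
        print(f"P(dog) = {p_dog:.2f}")     # ~0.5 means the network can't decide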

  • @truppelito
    @truppelito 7 years ago +2

    20 minute video about AI by Rob Miles? YES PLZ

  • @R.Daneel
    @R.Daneel 2 years ago +3

    I love seeing this in 2022, and comparing this to DALL-E, GPT-3, etc. Wow. Five years later, and it's generating "Pink cat on a skateboard in Times Square" at artist quality.
    (@16:25 - Yup. You do. And it does.)

  • @bipolarminddroppings
    @bipolarminddroppings 3 years ago

    The fact that he add "yet" is both exciting and chilling.

  • @viniciusborgesdelima2519
    @viniciusborgesdelima2519 1 year ago +1

    Literally the best explanation possible for such a dense topic, congrats my man, you are incredible!

  • @dibyaranjanmishra4272
    @dibyaranjanmishra4272 7 years ago +10

    excellent explanation!!! one of the best videos ever on computerphile

  • @tonyduncan9852
    @tonyduncan9852 1 year ago +2

    The common room elephant: consciousness is _relative,_ and shared by electronic machinery, and all of Earth's animals, including elephants, and not excluding Man.

  • @picpac2348
    @picpac2348 7 years ago +20

    Would love to see some example pictures of the generated and real pictures.

  • @cazino4
    @cazino4 5 years ago +1

    This guy presents fantastically. Such an interesting topic... I remember seeing an online CS Harvard lecture around a decade ago that used the same concept (having the system compete with another instance of itself) to train a computer chess player...

  • @MetsuryuVids
    @MetsuryuVids 7 years ago +18

    Another cool thing he didn't mention about that experiment with the faces:
    They also tried to generate a picture with only features that were found on men, and one with only features that were found on women, and the network ended up generating "grotesque" pictures that were basically caricatures of a "man" or a "woman".

    • @naturegirl1999
      @naturegirl1999 4 years ago

      Metsuryu Is it possible to see these images?

    • @MetsuryuVids
      @MetsuryuVids 4 years ago +1

      @@naturegirl1999 I saw these somewhere a long time ago, but you can probably try googling something like "AI generated male/female faces"

    • @toomuchcandor3293
      @toomuchcandor3293 3 years ago +2

      @@MetsuryuVids bro thats too general of a search

    • @MetsuryuVids
      @MetsuryuVids 3 years ago

      @@toomuchcandor3293 Yeah, sorry, I don't remember much else. I tried to find it again sometime ago, but with no success.

  • @animanaut
    @animanaut 1 year ago +3

    wild to view this video again in 2023

  • @kashandata
    @kashandata 3 years ago

    The best explanation of GANs I have ever come across.

  • @milomccarty8083
    @milomccarty8083 4 years ago

    Studying computer science now. These videos give me inspiration to try to connect concepts outside of the classroom

  • @alienturtle1946
    @alienturtle1946 11 months ago

    Bro understood the cyclical nature of GANs so well that even his explanation turned cyclical

  • @AdityaRaj-bq7dz
    @AdityaRaj-bq7dz 3 years ago

    the best video on gan I have ever seen, probably this can help me to return to ML

  • @lesbianGreen
    @lesbianGreen 5 years ago +36

    holy moly, this dude has a gift for explaining. awesome work

  • @caty863
    @caty863 11 months ago +1

    "...but neural networks don't have feelings yet."
    Robert Miles throws this out there nonchalantly. I think he knows something we don't. What is it?

  • @jonathanmarino7968
    @jonathanmarino7968 7 years ago +331

    "Neural networks don't have feelings.. yet." lol

    • @maldoran9150
      @maldoran9150 7 years ago +15

      He said it so matter of factly and by the by. Chilly!

    • @ArgentavisMagnificens
      @ArgentavisMagnificens 7 years ago +4

      So you watched the video too?

    • @surrealdynamics4077
      @surrealdynamics4077 4 years ago

      I also paid specific attention to that "yet". It's super cool and scary to live in a time when we can confidently say that software might have feelings in the future

  • @ojaspatil2094
    @ojaspatil2094 6 months ago +3

    6 years ago is crazy

  • @RobinWootton
    @RobinWootton 2 years ago

    Hard to imagine watching television again, when such interesting programs are broadcast here instead.

  • @MrCmon113
    @MrCmon113 5 years ago +1

    "Kind of impressive" is a massive understatement. It's one of the most awesome and scary things I know.

  • @Athenas_Realm_System
    @Athenas_Realm_System 7 years ago +34

    There are quite a few YouTubers that have a lot of content of them playing around with GANs

    • @CaptTerrific
      @CaptTerrific 7 years ago +5

      Any recommendations for particularly interesting ones?

    • @Athenas_Realm_System
      @Athenas_Realm_System 7 years ago +20

      +Higgins2001 carykh being one I can think of that plays around with using a GAN to generate instrumental music by feeding it image representations.

    • @CaptTerrific
      @CaptTerrific 7 years ago +1

      Thanks!!

    • @hanss3147
      @hanss3147 7 years ago +2

      The GAN wasn't exactly very good though.

    • @keithbaton5493
      @keithbaton5493 7 years ago +2

      If I recall, the most recent AI created for DOTA 2 game uses GANs to decimate professional gamers. OpenAI

  • @forkontaerialis5347
    @forkontaerialis5347 7 years ago +3

    This man is the only reason I stay subscribed, he is fantastic

  • @AnindyaMahajan
    @AnindyaMahajan 5 years ago

    It's completely flabbergasting to me how far science has come in the last decade alone!

  • @marcelmersch6797
    @marcelmersch6797 6 years ago +2

    Well explained. Best video about GANs I have seen so far.

  • @JotoCraft
    @JotoCraft 7 years ago +29

    Are the generators producing the same image for the same input?
    If so, could it mean that continuously changing the input by small steps creates a kind of animation?
    If this really is the case I would really like to see such a movie :) (A small sketch of this idea appears at the end of this thread.)

    • @philipphaim3409
      @philipphaim3409 7 years ago +27

      Check out arxiv.org/pdf/1511.06434.pdf, on page 8 the authors have essentially done that!

    • @fleecemaster
      @fleecemaster 7 years ago +3

      Wow, thanks Philipp, fascinating! Page 11 also!

    • @JotoCraft
      @JotoCraft 7 years ago +4

      Thanks, yeah I hoped that the pictures would be better already, but I guess that will change over time :)
      Especially the faces fall into the uncanny valley, I'd say. But besides that, those examples are exactly what I meant.

    • @RobertMilesAI
      @RobertMilesAI 7 years ago +11

      Check out my follow-up video: watch?v=MUVbqQ3STFA
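
    The interpolation asked about above is straightforward to sketch, assuming any trained generator that maps a latent vector to an image (a stand-in network is used here so the snippet runs; a GAN generator is deterministic at inference time, so the same z always gives the same picture):

        import torch
        import torch.nn as nn

        # Stand-in generator; in practice this would be a trained GAN generator.
        G = nn.Sequential(nn.Linear(100, 64 * 64 * 3), nn.Tanh())

        z_start, z_end = torch.randn(100), torch.randn(100)
        frames = []
        for t in torch.linspace(0, 1, steps=30):       # 30 frames of "animation"
            z = (1 - t) * z_start + t * z_end          # small steps through latent space
            frames.append(G(z).reshape(3, 64, 64))     # same z -> same image, so the sequence is smooth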

  • @meghasoni7867
    @meghasoni7867 1 year ago

    High-level concepts explained so beautifully. Fantastic!

  • @PopeLando
    @PopeLando 6 years ago

    This is how evolution works. This Generator/Discriminator mechanism is exactly how, for example, stick insects evolved to look like sticks and leaf insects like leaves. This is the dream of evolutionary computing I had 25 years ago, but didn't know how to implement. See Richard Dawkins "The Blind Watchmaker", where his attempts to "evolve" computerised insects (back in the '90s!) will also help you understand what Robert called latent space.

  • @peabnuts123
    @peabnuts123 7 years ago +4

    The last part, where Rob talks about how meaningful features are mapped to the latent space, is a demonstration of how machine learning can strongly pick up on and perpetuate biases. e.g. If you fed a model a large dataset of people and included whether people were criminals or not as part of your dataset, and you fed it a large amount of criminal photos wherein the subject was dark-skinned, the model may learn that the "Criminal" vector associates with the colour of a person's skin i.e. you are more likely to be guilty of ANY CRIME if you are black.
    If we put these kinds of models in charge of informing decisions (say, generating facial sketches for wanted criminals) we might encode harmful biases into systems we rely on in our day-to-day lives. Thus, these kinds of machine learning need to be handled very carefully in real-world situations!

    • @andrewphillip8432
      @andrewphillip8432 2 years ago +1

      I think this type of machine learning algorithm might actually be somewhat resistant to what you describe, because in order for the discriminator to be consistently fooled, the generator needs to be creating samples that span the whole population of criminal photos. Criminals might have a statistically most likely race, but if the generator is only outputting pictures of that race, then the discriminator would be able to do better than 50% at spotting "fakes" by assuming that all pictures of that race were generated and not real. So the discriminator would actually undo the generator's bias for some time by being reverse-biased. So I think once the generator was fully trained it would be outputting images of criminals of all races, weighted by how many images in the training set were of each race.
      But now that I think about it, if we are using current arrest records as the training material for the GAN, then any current biases that exist with who police choose to arrest will show up in the GAN also, so developing a completely unbiased neural network for what you describe could indeed be challenging.

  • @cl8484
    @cl8484 7 years ago +3

    Very interesting topic and an excellent explanation by Rob! I hardly ever write youtube comments, but this video is great; it deserves all the love it is getting.

  • @Bloomio95
    @Bloomio95 2 years ago

    That last part about the latent space was really valuable insight! Hard to come by

  • @petercourt
    @petercourt 4 years ago

    Latent space description was great!

  • @Im-Hacker
    @Im-Hacker 1 year ago +1

    I'm working on GANs for data augmentation and would be happy to connect with anyone interested

  • @wesleyk.8376
    @wesleyk.8376 5 years ago

    Deeply sophisticated trial and error to produce meaningful visual results. Awesome

  • @mockingbird3809
    @mockingbird3809 5 years ago +1

    Man, the Detective and Forger example is the best example in the world. He is an amazing teacher. I want to learn A LOT from him

  •  6 years ago +1

    Love this guy. Harnessing your concepts here!

  • @mortkebab2849
    @mortkebab2849 5 years ago +7

    "As the system gets better it forces itself to get better." Uh oh, Technological Singularity ahead! lol

  • @Eskermo
    @Eskermo 7 years ago +3

    I'm pretty excited about GANs, but what about dealing with the case where either the generator or the discriminator gets a big edge over the other during training and basically kills further progress of the other network? Robert spoke about training on where the discriminator is weak, but it would be nice to have some more details.

  • @ScottMorgan88
    @ScottMorgan88 7 years ago +2

    Great explanation. Thank you!

  • @ericmyrs
    @ericmyrs 7 years ago

    Making cat pictures with neural networks. What a time to be alive.

  • @abcdxx1059
    @abcdxx1059 5 years ago +1

    What if you could train a network on a game, and on the PC just run a low-res version of the game in a format that's easier for the network to understand; the network then generates the game and presents a different world every time, of course with some static objects such as buildings, and then you use DLSS for upscaling

  • @seditt5146
    @seditt5146 6 years ago

    I love that he said Yet...."Neural networks don't have feelings yet" so nonchalant

  • @logan317b
    @logan317b 5 years ago

    This guy explains very confusing topics in SUCH an understandable way

  • @jasurbekgopirjonov
    @jasurbekgopirjonov 10 months ago

    an amazing explanation of GANs

  • @w000w00t
    @w000w00t 2 years ago

    2022 was the year of latent diffusion!! Disco Diffusion, Midjourney, and now Stable Diffusion is about to make their weights public!! This stuff is so fascinating! :)
    Great talk, by the way!!!

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 1 year ago +1

      And the best thing is that diffusion models aren't GANs, so they won't suffer from mode collapse and other pain like that.

  • @CrusadeVoyager
    @CrusadeVoyager 2 years ago +1

    Nice explanation 👌

  • @sanketshah3568
    @sanketshah3568 4 years ago

    While half of the world is stuck at jobs they don't enjoy, filling spreadsheets and making powerpoint slides, I feel extremely privileged to be a part of something so surreal and otherworldly. As JFK puts it, "We choose to do this, not because it is easy, but because it is hard."

  • @Calligraphybooster
    @Calligraphybooster 2 years ago

    So moving around in latent space would also produce cats moving around. Fun.

  • @rpcruz
    @rpcruz 5 years ago

    Very cool. The only quibble I have with the video is that Rob says things like "this doesn't apply only to networks" and "they can be other processes". Actually, the GAN procedure requires a gradient descent framework because it uses the discriminator's gradients to fix the generator. Maybe you can use other stuff, but it's not as open as he makes it sound, and I don't know of anything other than neural networks being used. (EDIT: Actually, he explains all this at around 12:10.)
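
    A minimal sketch of the coupling that comment describes, assuming PyTorch and tiny stand-in networks (all names and sizes are illustrative, not from the video): the generator update backpropagates through the discriminator, which is why the whole procedure leans on a gradient-descent framework.

        import torch
        import torch.nn as nn

        latent_dim, img_dim = 100, 28 * 28
        G = nn.Sequential(nn.Linear(latent_dim, img_dim), nn.Tanh())   # generator
        D = nn.Sequential(nn.Linear(img_dim, 1), nn.Sigmoid())         # discriminator
        opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCELoss()

        real = torch.rand(64, img_dim)                   # stand-in for a batch of real images
        ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

        # 1) Discriminator step: push real towards 1, generated towards 0.
        fake = G(torch.randn(64, latent_dim)).detach()   # detach: don't update G here
        loss_D = bce(D(real), ones) + bce(D(fake), zeros)
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # 2) Generator step: push D's output on fakes towards 1.
        #    The gradient flows back through D into G -- D's gradients "fix" G.
        fake = G(torch.randn(64, latent_dim))
        loss_G = bce(D(fake), ones)
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()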

  • @aycayigit9582
    @aycayigit9582 5 years ago +1

    Thank you for such interesting video, came here after checking "This Person Does Not Exist" web page.

  • @ZraveX
    @ZraveX 7 years ago +1

    This episode would have greatly benefitted from some generated pictures, even if only as a link in the description.

  • @nateshrager512
    @nateshrager512 7 years ago +1

    Fantastic explanation, love this guy

  • @ikennanw
    @ikennanw 3 years ago

    I wish I saw this earlier. You guys are amazing.

  • @chrisofnottingham
    @chrisofnottingham 7 years ago

    Very interesting. I think perhaps the explanation focused on the interaction between the generator and discriminator such that we lost sight of the system still needing actual pictures of cats.

  • @amoghskulkarni
    @amoghskulkarni 5 years ago +1

    For some reason, whenever someone says "the network has an *understanding* of some topic", I get fascinated and freaked out at the same time.

  • @Rgmenkera
    @Rgmenkera 7 years ago +3

    yes, more of this guy!

  • @leshamokhov
    @leshamokhov 5 years ago +1

    This video should be titled "Ian Goodfellow's 'Introduction to GANs' with the math thrown away".

  • @onionpsi264
    @onionpsi264 5 years ago +2

    Did I miss the part of the series where we learn how the generator is actually structured/produces images? The discriminator is a standard classification neural net, which I know has been covered, but how does a neural net output an image rather than a class? Is the final output layer one pixel in the image?
    Do the "directions in picture space that correspond to cat attributes" that he references around 17:30 correspond to eigenvectors of the generator matrix?

  • @jsbarretto
    @jsbarretto 7 years ago +2

    Holy crap, the implications of this are awesome.

  • @stuartg40
    @stuartg40 4 years ago

    This guy is on the ball: a rare trait indeed.

  • @zappawench6048
    @zappawench6048 4 years ago

    All the oldies and techies keep mentioning the Commodore PET on the shelf. If it had a floppy drive and you inserted a floppy disc, you doubled its memory!

  • @hellothere17552
    @hellothere17552 1 year ago

    "Neural networks don't have feelings yet"
    Nice

  • @namitasodhiya6581
    @namitasodhiya6581 1 month ago

    12:11 to 13:00 great definition of gradient descent
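
    For anyone skimming the comments, the idea in that segment in one line: repeatedly nudge the parameters a small step against the gradient of the loss. A tiny stand-alone illustration (not from the video):

        # Minimise f(x) = (x - 3)^2 by gradient descent; the gradient is 2 * (x - 3).
        x, lr = 0.0, 0.1
        for _ in range(100):
            grad = 2 * (x - 3)
            x -= lr * grad          # step downhill, proportional to the slope
        print(x)                    # converges towards 3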

  • @nicksundby
    @nicksundby 4 years ago

    See that Commodore PET on the shelf, I used to use those at college in the late 70's.

  • @BatteryExhausted
    @BatteryExhausted 7 years ago +10

    I wonder if this issue of classifiers bleeds into the philosophical problem of perfect form?
    The issue being that while we all imagine an apple as a 'perfect form,' there is no perfect apple in reality. All apples are a process, not static objects. Perfect form only exists in the 'ideas space.'

    • @literallybiras
      @literallybiras 7 years ago +2

      Well, if you consider that perfect forms are really a branch of epistemology (rationalism), then it's actually interesting and somewhat expected that the computer classifier holds a "perfect form" and uses it to compare with the others. I don't know if it's a problematic topic in philosophy or more like a philosophical tool for those who understand it. We have actually incorporated this into our language through what we call abstraction, and perfect forms would be abstractions that we consider the best models for that particular concept.

    • @jeffbloom3691
      @jeffbloom3691 6 years ago

      Battery Exhausted. I thought the same thing when I watched this.

  • @audreyh6628
    @audreyh6628 5 years ago +1

    Absolutely fantastic mind/teacher. I am a complete and utter noob to any of these ideas and even I could follow along. Thank you so much

  • @anonanon3066
    @anonanon3066 3 years ago

    6:46 love that "yet"

  • @gabrielebrunini3693
    @gabrielebrunini3693 3 years ago

    You're not the professor, you're the entire university

  • @dark808bb8
    @dark808bb8 7 years ago +1

    wowww, tweak the weights in the direction that maximizes the discriminator's error :O genius
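
    In loss terms, "maximise the discriminator's error" is usually written in one of two ways; a sketch with a stand-in for D(G(z)), not the video's notation:

        import torch

        # minimax:        minimise  log(1 - D(G(z)))  -- can saturate when D confidently rejects fakes
        # non-saturating: minimise -log(D(G(z)))       -- what most implementations use
        # Both push D(G(z)) towards 1, i.e. they maximise the discriminator's error.
        d_out = torch.sigmoid(torch.randn(64, 1))      # stand-in for D(G(z))
        loss_minimax = torch.log(1 - d_out).mean()
        loss_nonsat = (-torch.log(d_out)).mean()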

  • @Kingstanding23
    @Kingstanding23 6 years ago +1

    Well I definitely learned something from this. They used to grind up mummies to make paint!
    “Mum! I need you for my art homework!”

  • @mattg.2774
    @mattg.2774 5 years ago

    In the context of The Matrix, the Architect is the discriminator and the Oracle is the generator.

  • @equious8413
    @equious8413 1 year ago

    I feel like the relational connections in latent space that we see now are a 2D version of our 3D brains, which effectively do the same thing. If compute advances to be able to process the data contained in latent space of exponentially higher dimensional matrices, we'll begin to see real world AGI. The first steps of this can be seen recently in making GPT4 multimodal.

  • @maxm1947
    @maxm1947 2 years ago

    Next time I draw my understanding of my human's latent space of a human, I will call it a stickman

  • @suicidalbanananana
    @suicidalbanananana 7 years ago

    Gotta love the PET in the background of a talk about cats and dogs

  • @alissondamasceno2010
    @alissondamasceno2010 6 years ago

    THIS is the best channel ever!

  • @praveshgupta1993
    @praveshgupta1993 3 years ago

    Nicely explained in layman terms...liked it

  • @tabnovasolutions1593
    @tabnovasolutions1593 5 years ago

    Wow excellent explanation of GAN - thanks a lot

  • @vjp2866
    @vjp2866 3 years ago

    Awesome ! Excellent explaining !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @nullptr.
    @nullptr. 6 years ago

    Love the video, everything is well explained and easy to understand.