Generative Adversarial Networks (GANs) - Computerphile

  • Published: 26 Nov 2024

Comments • 702

  • @Nalianna
    @Nalianna 7 years ago +1104

    This gentleman explains high level concepts in ways that the layman can understand, AND has an interesting voice to listen to. A++ work

    • @AlexiLaiho227
      @AlexiLaiho227 5 years ago +20

      you should check him out, he made his own YouTube channel. Search for "Robert Miles AI"

    • @savagenovelist2983
      @savagenovelist2983 4 years ago +1

      299 likes, here we go.

    • @giveusascream
      @giveusascream 4 years ago +2

      And mutton chops that I can only dream of

    • @blackcorp0001
      @blackcorp0001 3 years ago +1

      Brain work ... like House work...but deeper

    • @ev6558
      @ev6558 3 years ago +9

      I like that they don't feel the need to do a camera cut every time he pauses to think of his next word. Makes me feel like the video was made for people who are actually interested and not just clickbait for zoomers.

  • @mother3946
    @mother3946 1 year ago +14

    His clarity and simplicity in unpacking a complex topic is just out of this world.

  • @Iesmar
    @Iesmar 7 years ago +2553

    "Neural networks don't have feelings, yet...."

    • @RafidW9
      @RafidW9 7 years ago +37

      Vincent Peschar this is why the AGI will fight back. we abuse them so much lol.

    • @TechyBen
      @TechyBen 7 years ago +19

      Does a rock have feelings? If a rock had feelings, would it matter? Why? (honest questions on logic and peoples feelings)

    • @AlabasterJazz
      @AlabasterJazz 7 years ago +63

      It could be said that any matter that is arranged into any pattern is at some level alive. While a rock wouldn't have feelings nearly as obvious as humans, it still might have some sense of being. Breaking a rock into pieces may not cause it to experience pain or anxiety or pleasure, as its sensory capacity is not sufficient to notice such changes to itself. However, its current makeup and position in the universe is no more or less arbitrary than any other matter in the universe. I guess the follow-up question might be: if all matter, including organisms, is ultimately made up of non-living particles, what is life?

    • @autolykos9822
      @autolykos9822 7 years ago +24

      Yet. Growth mindset.

    • @tylerpeterson4726
      @tylerpeterson4726 7 years ago +25

      TechyBen The problem comes when you start asking if mud has feelings and if people have feelings. Mud and people are generally made of the same materials. It’s just that we are organized in a way that gives us feelings. The religious and non-religious can debate if the soul exists or not, but scientifically we can only differentiate between mud and life based on its level of organization. And so it holds that a highly organized piece of silicon (a computer chip) could also have feelings.

  • @d34d10ck
    @d34d10ck 7 years ago +365

    To call this impressive would be an understatement. That's amazing, fantastic, unbelievable, highly interesting and scary all at once.

    • @naturegirl1999
      @naturegirl1999 4 years ago

      Patrick Bateman why would it be scary?

    • @d34d10ck
      @d34d10ck 4 years ago +7

      @@naturegirl1999 Most technologies can be scary, since they all have the potential of being misused.
      AI can be particularly scary, since we use it for systems that are too complex for us to understand.
      So what we do is hand these complexities over to a computer, in the hope that it handles them the way we think it should. But the truth is that we don't really know what it does, and if we decide to use such technologies in our weapon systems, for example, then it starts getting scary.

    • @insanezombieman753
      @insanezombieman753 4 years ago +15

      @@d34d10ck Interesting. Now let's hear what Paul Allen has to say about this

    • @h0stI13
      @h0stI13 9 months ago +2

      What do you think about it now?

    • @d34d10ck
      @d34d10ck 9 months ago

      @@h0stI13 I can no longer imagine a life without generative AIs. As a developer, I use them all the time and my productivity has increased immensely because of them.

  • @madumlao
    @madumlao 7 years ago +259

    I love how quickly he moved past neural networks having feelings.
    "But neural networks don't have feelings (yet) so that's really not an issue. You can just continually hammer on the weak points, find whatever they're having trouble with, and focus on that"
    You just know that our robot masters are just going to replay this over and over again in the trial against humanity.

    • @bionicgirl6826
      @bionicgirl6826 2 years ago +4

      haha you're so funny

    • @qwertysacks
      @qwertysacks 2 years ago +1

      Fish don't have feelings either, but I have no qualms with sardine canning companies packing millions of sardines a year. It's almost like most intelligent agents don't care about automatons, nor should they.

    • @harrygenderson6847
      @harrygenderson6847 1 year ago +11

      @@qwertysacks Fish do have feelings. They have endocrine and nervous systems, and can act scared or whatever. Not that I care much about those feelings, but it's still non-zero. The narrow forms of AI we have at the moment do not have sufficient complexity for feelings.

    • @pigeon3784
      @pigeon3784 1 year ago

      @@harrygenderson6847 Nor will they for many years. It’s a non-issue.

    • @KitsuneShapeShifter
      @KitsuneShapeShifter 1 year ago

      I'm starting to think you're right...

  • @CarterColeisInfamous
    @CarterColeisInfamous 7 years ago +339

    These are some of the coolest networks I've seen so far.

  • @JamesMBC
    @JamesMBC 6 years ago +40

    Man, one of my favorite videos on this channel. How did I miss it?
    Not only does it make you think about the endless potential of machine learning, it also sheds some light on how natural brains might work. Maybe even a basic aspect of the nature of creativity.
    Getting my mind blown again!

  • @realityveil6151
    @realityveil6151 7 years ago +117

    Lost it at "Neural Networks don't have feeling yet."
    It was just the casual way he threw it out there and took it as the most normal thing in the world. Like "Yet" makes total sense.

    • @naturegirl1999
      @naturegirl1999 4 years ago +1

      RealityVeil does it not? The first multicellular organisms didn't have feelings (emotions); over time, emotions were produced, as well as brains.

    • @PaulBillingtonFW
      @PaulBillingtonFW 3 years ago +1

      I'm afraid that is a common issue in AI. NNs might become aware and acquire feelings. Some people still believe that animals do not have feelings. It keeps the world nice and simple.

    • @staazvaind3869
      @staazvaind3869 3 years ago +3

      Just a matter of input data: hormones, brain/body health, and their part in psychology in random situations. It will connect the dots at some point. One could argue "aren't those feelings simulated?", but then ask yourself: "aren't yours?". The structure of mind is based on the structure of input. That's why you shouldn't be afraid of AI with feelings, but of BIG DATA!

  • @samre3006
    @samre3006 6 years ago +20

    Never really understood GANs before. Thank you so much for making this so intuitive. Eternally grateful.

  • @recklessroges
    @recklessroges 7 years ago +393

    The Dell screens have come to worship the Commodore PET.

  • @bimperbamper8633
    @bimperbamper8633 7 years ago +14

    Only discovered this channel recently and I've been watching nothing but Computerphile videos for a whole week. Love the content you do with Rob Miles - his field of study combined with his explanations make these my favorite videos to watch.
    Thank you!

  • @szynkers
    @szynkers 7 years ago +11

    The only instance that I can remember when a science video presented on my level of understanding genuinely blew my mind at the end. The research on artificial neural networks will surely change computing as we know it.

  • @DotcomL
    @DotcomL 7 years ago +24

    I love the "finding the weakness" analogy. Really helped me to understand.

  • @slovnicki
    @slovnicki 5 years ago +69

    "..which is kind of an impressive result." - understatement of the century

    • @jork8206
      @jork8206 3 years ago +3

      Gotta love latent spaces. My favorite was a network that showed a significant correlation between - and - . Assigning any direct meaning to that could be a leap of logic but when you think about it, cats have more visually feminine features than dogs, generally speaking

  • @BenGabbay
    @BenGabbay 7 years ago +2

    This is literally one of the most fascinating videos I've ever seen on YouTube.

  • @tarat.techhh
    @tarat.techhh 4 years ago +25

    I wish i could talk to this guy once... He seems so cool and intelligent at the same time

    • @awambawamb4783
      @awambawamb4783 3 years ago

      Approach him with wine and a supercapacitor. and a throwaway guitar.

  • @fast1nakus
    @fast1nakus 5 years ago +2

    I'm pretty sure this is the best format for learning something on YouTube.

  • @airportbum5402
    @airportbum5402 1 year ago +1

    I think it's so cool that there is a Linksys WRT-54G and a Commodore PET in the background and they're discussing topics so modern.

  • @macronencer
    @macronencer 7 years ago +7

    Love the Commodore PET on the shelf! I played with one of the original PETs when they first came out (the one with the horrid rectilinear keyboard!). We eventually got four of the later models at my school, and before long we were happily playing Space Invaders when the teachers weren't looking... and then doing hex dumps of Space Invaders, working out how it worked, and adding a mod to give it a panic button in case the teacher came into the room so you could hit the button and look as if you were working. To be honest, I'm not sure they would have cared, because we probably learned more by doing the hex dump than we would have with our usual work!

    • @raapyna8544
      @raapyna8544 7 months ago +1

      Oh the effort kids will put in in order to avoid work!

  • @BatteryExhausted
    @BatteryExhausted 7 years ago +6

    With the human analogy, an interesting idea is that you don't just focus on the weak area of learning; you also adapt your teaching technique to enable learning. You change your approach. It may be that the difficulty in learning is not a fault of the student but a 'bug' in the teaching method.
    [1 & 7 look similar, our learning strategy is based on a simplistic shape recognition concept, we adapt our recognition concept (we focus on a particular aspect of the image for example)
    and thus the learner has a 'light bulb' moment as they 'get the point']

  • @tohamy1194
    @tohamy1194 7 years ago +16

    I could watch this all day.. like I did yesterday with numberphile :D

  • @viniciusborgesdelima2519
    @viniciusborgesdelima2519 2 years ago +1

    Literally the best explanation possible for such a dense topic, congrats my man, you are incredible!

  • @kashandata
    @kashandata 4 years ago

    The best explanation of GANs I have ever come across.

  • @Lagrange_Point_6
    @Lagrange_Point_6 7 years ago +66

    Love the Commodore PET on the shelf. Class.

    • @greywolf271
      @greywolf271 7 years ago +3

      Stuff a GAN into 64k. Reminds me of the Chess player written for 4k ram

    • @meanmikebojak1087
      @meanmikebojak1087 4 years ago +1

      I've got a Commodore PET on a shelf too. Mine walks off during POST, so it isn't used anymore. But it looks classy on the shelf.

  • @chrstfer2452
    @chrstfer2452 7 years ago +39

    "Right now, they're just datapoints" I like this guy

  • @georginajo8441
    @georginajo8441 4 years ago +3

    Wow, how can you make something so complex be so easy to understand? Thank you man

  • @Felixkeeg
    @Felixkeeg 7 years ago +350

    I honestly more often than not click the video based on whether Rob is hosting.

    • @dylanica3387
      @dylanica3387 7 years ago +6

      Same here

    • @VentraleStar
      @VentraleStar 7 years ago +12

      He's cute

    • @HailSagan1
      @HailSagan1 7 years ago +34

      I like all the computerphile regulars, but yeah Rob is great. I recommend checking out his personal channel that focuses on AGI's, it's linked in the above description!

    • @cubertmiso
      @cubertmiso 6 years ago

      Cast is great for any channel. Only Philip Moriarty gives weird vibes.

    • @JamesMBC
      @JamesMBC 6 years ago +4

      This guy knows. Rob is the best, and this is fascinating!
      It makes it irresistible to get involved with machine learning.

  • @MetsuryuVids
    @MetsuryuVids 7 years ago +18

    Another cool thing he didn't mention about that experiment with the faces:
    They also tried to generate a picture with only features that were found on men, and one with only features that were found on women, and the network ended up generating "grotesque" pictures that were basically caricatures of a "man" or a "woman".

    • @naturegirl1999
      @naturegirl1999 4 years ago

      Metsuryu Is it possible to see these images?

    • @MetsuryuVids
      @MetsuryuVids 4 years ago +1

      @@naturegirl1999 I saw these somewhere a long time ago, but you can probably try googling something like "AI generated male/female faces"

    • @toomuchcandor3293
      @toomuchcandor3293 4 years ago +2

      @@MetsuryuVids bro thats too general of a search

    • @MetsuryuVids
      @MetsuryuVids 4 years ago

      @@toomuchcandor3293 Yeah, sorry, I don't remember much else. I tried to find it again sometime ago, but with no success.

  • @tumultuousgamer
    @tumultuousgamer 2 years ago +3

    That last bit was super interesting and mind blowing at the same time! Excellent video!

  • @milomccarty8083
    @milomccarty8083 4 years ago

    Studying computer science now. These videos give me inspiration to try to connect concepts outside of the classroom

  • @R.Daneel
    @R.Daneel 2 years ago +3

    I love seeing this in 2022, and comparing this to DALL-E, GPT-3, etc. Wow. Five years later, and it's generating "Pink cat on a skateboard in Times Square" at artist quality.
    (@16:25 - Yup. You do. And it does.)

  • @surrealdynamics4077
    @surrealdynamics4077 4 years ago +3

    This is so interesting! This is the way thispersondoesnotexist "photos" are made by the machine. Super cool!

  • @cazino4
    @cazino4 5 years ago +1

    This guy presents fantastically. Such an interesting topic... I remember seeing an online CS Harvard lecture around a decade ago that used the same concept (having the system compete with another instance of itself) to train a computer chess player...

  • @truppelito
    @truppelito 7 years ago +2

    20 minute video about AI by Rob Miles? YES PLZ

  • @dibyaranjanmishra4272
    @dibyaranjanmishra4272 7 years ago +10

    Excellent explanation!!! One of the best videos ever on Computerphile.

  • @AdityaRaj-bq7dz
    @AdityaRaj-bq7dz 3 years ago

    The best video on GANs I have ever seen; this will probably help me return to ML.

  • @caty863
    @caty863 1 year ago +1

    "...but neural networks don't have feelings yet."
    Robert Miles throws this out there nonchalantly. I think he knows something we don't. What is it?

  • @peabnuts123
    @peabnuts123 7 years ago +4

    The last part, where Rob talks about how meaningful features are mapped to the latent space, is a demonstration of how machine learning can strongly pick up on and perpetuate biases. e.g. If you fed a model a large dataset of people and included whether people were criminals or not as part of your dataset, and you fed it a large number of criminal photos wherein the subject was dark-skinned, the model may learn that the "Criminal" vector associates with the colour of a person's skin, i.e. you are more likely to be guilty of ANY CRIME if you are black.
    If we put these kinds of models in charge of informing decisions (say, generating facial sketches for wanted criminals) we might encode harmful biases into systems we rely on in our day-to-day lives. Thus, these kinds of machine learning need to be handled very carefully in real-world situations! (A small sketch of the latent-direction idea appears after this thread.)

    • @andrewphillip8432
      @andrewphillip8432 2 years ago +1

      I think this type of machine learning algorithm might actually be somewhat resistant to what you describe, because in order for the discriminator to be consistently fooled, the generator needs to be creating samples that span the whole population of criminal photos. Criminals might have a statistically most likely race, but if the generator is only outputting pictures of that race, then the discriminator would be able to do better than 50% at spotting "fakes" by assuming that all pictures of that race were generated and not real. So the discriminator would actually undo the generator's bias for some time by being reverse-biased. So I think once the generator was fully trained it would be outputting images of criminals of all races, weighted by how many images in the training set were of each race.
      But now that I think about it, if we are using current arrest records as the training material for the GAN, then any current biases that exist with who police choose to arrest will show up in the GAN also, so developing a completely unbiased neural network for what you describe could indeed be challenging.
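
A hedged aside on the "directions in latent space" idea the two comments above lean on. This is a minimal illustrative sketch, not the method from the video; `generator` and the hand-labelled latent vectors are hypothetical stand-ins.

```python
# Minimal sketch, assuming `generator` is an already-trained GAN generator and
# that we have latent vectors whose generated outputs were labelled by hand.
import numpy as np

def attribute_direction(z_with_attr, z_without_attr):
    """Estimate a latent direction for an attribute (e.g. 'wears glasses') as
    the difference between the mean latent codes of the two labelled groups."""
    return np.mean(z_with_attr, axis=0) - np.mean(z_without_attr, axis=0)

# glasses_dir = attribute_direction(z_glasses, z_no_glasses)   # hypothetical data
# edited = generator(z_face + glasses_dir)  # nudge a face toward "wears glasses"
# The same mechanism is how an unwanted correlation (say, a "criminal" direction
# tied to skin colour) could end up encoded, which is exactly the bias risk
# described in the comment above.
```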

  • @AnindyaMahajan
    @AnindyaMahajan 6 years ago

    It's completely flabbergasting to me how far science has come in the last decade alone!

  • @knightshousegames
    @knightshousegames 7 years ago +71

    "So cats equal zero and dogs equal one. You train it to know the difference" Ultimate final test: show it a Shiba Inu.

    • @GhostGuy764
      @GhostGuy764 7 years ago +11

      knightshousegames Shiba look too happy to be cats.

    • @knightshousegames
      @knightshousegames 7 years ago

      That is what they call a fringe case. My guess is the machine would try to return a 0.5

    • @homer9736
      @homer9736 7 years ago +3

      knightshousegames I think you should ban 0.5, because that's right for both cases always; the machine can't learn from that.

    • @hellfiresiayan
      @hellfiresiayan 6 years ago +8

      No because in the end you can tell the network that it is a dog, and it could alter its biases based on that result, so the next 100 times you show it a shiba inu, it might be able to give a better answer. Whether that would negatively affect its ability to identify a cat, however, I have no idea.

  • @forkontaerialis5347
    @forkontaerialis5347 7 years ago +3

    This man is the only reason I stay subscribed, he is fantastic

  • @picpac2348
    @picpac2348 7 years ago +20

    Would love to see some example pictures of the generated and real pictures.

  • @meghasoni7867
    @meghasoni7867 2 years ago

    High-level concepts explained so beautifully. Fantastic!

  • @animanaut
    @animanaut 1 year ago +3

    wild to view this video again in 2023

  • @marcelmersch6797
    @marcelmersch6797 6 years ago +2

    Well explained. Best video about GANs I have seen so far.

  • @ojaspatil2094
    @ojaspatil2094 8 months ago +3

    6 years ago is crazy

  • @RobinWootton
    @RobinWootton 3 years ago

    Hard to imagine watching television again, when such interesting programs are broadcast here instead.

  • @Eskermo
    @Eskermo 7 years ago +3

    I'm pretty excited about GANs, but what about dealing with when either the generator or discriminator gets a big edge over the other during training and basically kills further progress of the first network? Robert spoke on training on where the discriminator is weak, but it would be nice to have some more details.

  • @MrCmon113
    @MrCmon113 5 years ago +1

    "Kind of impressive" is a massive understatement. It's one of the most awesome and scary things I know.

  • @lesbianGreen
    @lesbianGreen 5 years ago +36

    holy moly, this dude has a gift for explaining. awesome work

  • @JotoCraft
    @JotoCraft 7 years ago +29

    Are the generators producing the same image for the same input?
    If so, could it mean that continuously changing the input by small steps creates a kind of animation?
    If this really is the case I would really like to see such a movie :) (See the interpolation sketch after this thread.)

    • @philipphaim3409
      @philipphaim3409 7 years ago +27

      Check out arxiv.org/pdf/1511.06434.pdf , on page 8 the authors have essentially done that!

    • @fleecemaster
      @fleecemaster 7 years ago +3

      Wow, thanks Philipp, fascinating! Page 11 also!

    • @JotoCraft
      @JotoCraft 7 years ago +4

      Thanks, yeah I hoped that the pictures would be better already, but I guess that will change over time :)
      Especially the faces fall into the uncanny valley, I'd say. But besides that, those examples are exactly what I meant.

    • @RobertMilesAI
      @RobertMilesAI 7 years ago +11

      Check out my follow-up video: watch?v=MUVbqQ3STFA
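
Following up on the question that opens this thread: a minimal sketch of walking through latent space to get an animation. It assumes `generator` is any trained GAN generator that maps a fixed-size noise vector to an image; the names and sizes are illustrative, not from the video or the linked paper.

```python
# Linear interpolation between two latent vectors; feeding the intermediate
# vectors to a trained generator yields smoothly morphing images ("frames").
import numpy as np

LATENT_DIM = 100  # size of the generator's input noise vector (illustrative)

def interpolate_latents(z_start, z_end, steps=60):
    """Return `steps` latent vectors evenly spaced between z_start and z_end."""
    return [z_start + (z_end - z_start) * t for t in np.linspace(0.0, 1.0, steps)]

# z_a = np.random.randn(LATENT_DIM)
# z_b = np.random.randn(LATENT_DIM)
# frames = [generator(z) for z in interpolate_latents(z_a, z_b)]
# Writing `frames` out as video gives exactly the kind of "movie" asked about:
# generators are deterministic for a fixed input, so small steps in latent
# space give small, smooth changes in the output image.
```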

  • @mickmickymick6927
    @mickmickymick6927 5 years ago

    The videos on Rob's channel are so much better edited

  • @bipolarminddroppings
    @bipolarminddroppings 3 years ago

    The fact that he added "yet" is both exciting and chilling.

  • @PopeLando
    @PopeLando 7 years ago

    This is how evolution works. This Generator/Discriminator mechanism is exactly how, for example, stick insects evolved to look like sticks and leaf insects like leaves. This is the dream of evolutionary computing I had 25 years ago, but didn't know how to implement. See Richard Dawkins "The Blind Watchmaker", where his attempts to "evolve" computerised insects (back in the '90s!) will also help you understand what Robert called latent space.

  • @Bloomio95
    @Bloomio95 3 years ago

    That last part about the latent space was really valuable insight! Hard to come by.

  • @mockingbird3809
    @mockingbird3809 5 years ago +1

    Man, the detective and forger example is the best example in the world. He is an amazing teacher. I want to learn a LOT from him.

  • @cl8484
    @cl8484 7 years ago +3

    Very interesting topic and an excellent explanation by Rob! I hardly ever write youtube comments, but this video is great; it deserves all the love it is getting.

  • @tonyduncan9852
    @tonyduncan9852 1 year ago +2

    The elephant in the room: consciousness is _relative,_ and shared by electronic machinery and all of Earth's animals, including elephants, and not excluding Man.

  • @wesleyk.8376
    @wesleyk.8376 6 years ago

    Deeply sophisticated trial and error to produce meaningful visual results. Awesome

  • @TankSenior
    @TankSenior 7 years ago +12

    That was extremely interesting, thank you for making this episode.

  • @onionpsi264
    @onionpsi264 5 years ago +2

    Did I miss the part of the series where we learn how the generator is actually structured and produces images? The discriminator is a standard classification neural net, which I know has been covered, but how does a neural net output an image rather than a class? Is the final output layer one pixel in the image?
    Do the "directions in picture space that correspond to cat attributes" that he references around 17:30 correspond to eigenvectors of the generator matrix?

  • @DenisDmitrievDeepRobotics
    @DenisDmitrievDeepRobotics 6 years ago

    GANs started the era of regulating feedback in artificial networks, like in their natural prototypes.

  • @AxelWerner
    @AxelWerner 7 years ago

    Talking about developing Skynet and advanced artificial intelligence, while in the background they keep a Commodore PET as their backup system ^-^ PRICELESS!

  • @w000w00t
    @w000w00t 2 years ago

    2022 was the year of latent diffusion!! Disco Diffusion, Midjourney, and now Stable Diffusion is about to make their weights public!! This stuff is so fascinating! :)
    Great talk, by the way!!!

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 1 year ago +1

      And the best thing is that diffusion models aren't GANs, so they won't suffer from mode collapse and other pain like that.

  • @sanketshah3568
    @sanketshah3568 4 years ago

    While half of the world is stuck at jobs they don't enjoy, filling spreadsheets and making powerpoint slides, I feel extremely privileged to be a part of something so surreal and otherworldly. As JFK puts it, "We choose to do this, not because it is easy, but because it is hard."

  • @Dank_Engine
    @Dank_Engine 7 years ago

    People do this intuitively. Competition creates the best among us. It's interesting that competition among peers facilitates growth in machine learning as well.

  • @Athenas_Realm_System
    @Athenas_Realm_System 7 years ago +34

    There are quite a few YouTubers who have a lot of content of them playing around with GANs.

    • @CaptTerrific
      @CaptTerrific 7 years ago +5

      Any recommendations for particularly interesting ones?

    • @Athenas_Realm_System
      @Athenas_Realm_System 7 years ago +20

      +Higgins2001 carykh being one I can think of that plays around with using a GAN to generate instrumental music by feeding it image representations.

    • @CaptTerrific
      @CaptTerrific 7 years ago +1

      Thanks!!

    • @hanss3147
      @hanss3147 7 years ago +2

      The GAN wasn't exactly very good though.

    • @keithbaton5493
      @keithbaton5493 7 years ago +2

      If I recall, the most recent AI created for DOTA 2 game uses GANs to decimate professional gamers. OpenAI

  • @seditt5146
    @seditt5146 6 years ago

    I love that he said "yet" so nonchalantly: "Neural networks don't have feelings yet."

  • @alissondamasceno2010
    @alissondamasceno2010 6 years ago

    THIS is the best channel ever!

  • @Im-Hacker
    @Im-Hacker 1 year ago +1

    I'm working on GANs for data augmentation and would be happy to connect with anyone interested.

  • @SlobodanDan
    @SlobodanDan 7 years ago +1

    Wow. That was a pretty amazing insight. Hope for non-harmful super-intelligence? If we can do broad definitions of concepts like man's face, woman's face and glasses, then perhaps even trickier concepts can be tackled in time.

  • @logan317b
    @logan317b 5 years ago

    This guy explains very confusing topics in SUCH an understandable way.

  • @rpcruz
    @rpcruz 5 years ago

    Very cool. The only quibble I have with the video is that Rob says things like "this doesn't apply only to networks" and "they can be other processes". Actually, the GAN procedure requires a gradient descent framework because it uses the discriminator's gradients to fix the generator. Maybe you can use other stuff, but it's not as open as he makes it sounds, and I don't know of anything other than neural networks being used. (EDIT: Actually, he explains all this at around 12:10.)
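
To make the point about the discriminator's gradients concrete, here is a minimal sketch of one GAN training step in PyTorch. It assumes `G` and `D` are small `torch.nn` modules defined elsewhere, with `D` ending in a sigmoid so its output is a probability; all names and hyperparameters are illustrative, not the exact setup discussed in the video.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, latent_dim=100):
    batch = real.size(0)

    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()                       # no gradients flow into G here
    d_loss = (F.binary_cross_entropy(D(real), torch.ones(batch, 1)) +
              F.binary_cross_entropy(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: the loss is computed *through* D, so the gradients that
    # update G come from the discriminator, which is why the whole procedure
    # leans on a gradient-descent framework, as the comment above says.
    z = torch.randn(batch, latent_dim)
    g_loss = F.binary_cross_entropy(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```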

  • @alienturtle1946
    @alienturtle1946 1 year ago

    Bro understood the cyclical nature of GANs so well that even his explanation turned cyclical

  • @achimvonprittwitz9508
    @achimvonprittwitz9508 5 years ago +2

    Wow this video is amazing. Can he do some live coding/example? Would be interesting to see the pictures.

  • @ikennanw
    @ikennanw 4 years ago

    I wish I saw this earlier. You guys are amazing.

  • @stuartg40
    @stuartg40 4 years ago

    This guy is on the ball: a rare trait indeed.

  • @ZraveX
    @ZraveX 7 years ago +1

    This episode would have greatly benefitted from some generated pictures, even if only as a link in the description.

  • @alexcordero6672
    @alexcordero6672 4 years ago

    We humans do the same thing; or rather, coming up with new information is a challenge for a human as well. This is why we tell people to "think outside the box." Even then, a new out-of-the-box idea will usually originate from a collection of ideas.

  •  6 years ago +1

    Love this guy. Harnessing your concepts here!

  • @kerr.andrew
    @kerr.andrew 7 years ago +14

    "Neural networks don't have feeling YET"

  • @aycayigit9582
    @aycayigit9582 5 years ago +1

    Thank you for such an interesting video; came here after checking the "This Person Does Not Exist" web page.

  • @jasurbekgopirjonov
    @jasurbekgopirjonov 1 year ago

    an amazing explanation of GANs

  • @audreyh6628
    @audreyh6628 5 years ago +1

    Absolutely fantastic mind/teacher. I am a complete and utter noob to any of these ideas and even I could follow along. Thank you so much

  • @petercourt
    @petercourt 4 years ago

    Latent space description was great!

  • @chrisofnottingham
    @chrisofnottingham 7 years ago

    Very interesting. I think perhaps the explanation focused on the interaction between the generator and discriminator such that we lost sight of the system still needing actual pictures of cats.

  • @leetmann
    @leetmann 5 years ago +1

    Man oh man, that explanation was even better than Ian Goodfellow's; a phenomenal video. Hats off.

  • @jonathanmarino7968
    @jonathanmarino7968 7 years ago +331

    "Neural networks don't have feelings.. yet." lol

    • @maldoran9150
      @maldoran9150 7 years ago +15

      He said it so matter-of-factly and by the by. Chilling!

    • @ArgentavisMagnificens
      @ArgentavisMagnificens 7 years ago +4

      So you watched the video too?

    • @surrealdynamics4077
      @surrealdynamics4077 4 years ago

      I also paid specific attention to that "yet". It's super cool and scary to live in a time when we can confidently say that software might have feelings in the future.

  • @namitasodhiya6581
    @namitasodhiya6581 3 months ago

    12:11 to 13:00 great definition of gradient descent

  • @mortkebab2849
    @mortkebab2849 5 years ago +7

    "As the system gets better it forces itself to get better." Uh oh, Technological Singularity ahead! lol

  • @maclee2470
    @maclee2470 5 years ago +1

    It sounds like a GAN is quite similar to actor-critic reinforcement learning, so what is the difference between the two? Thanks

  • @dark808bb8
    @dark808bb8 7 years ago +1

    Wowww, tweak the weights in the direction that maximizes the discriminator's error :O genius

  • @equious8413
    @equious8413 1 year ago

    I feel like the relational connections in latent space that we see now are a 2D version of our 3D brains, which effectively do the same thing. If compute advances to be able to process the data contained in latent space of exponentially higher dimensional matrices, we'll begin to see real world AGI. The first steps of this can be seen recently in making GPT4 multimodal.

  • @AB-Prince
    @AB-Prince 6 years ago +1

    Does that Commodore PET still work?
    I'd like to see a video on the PET's internals.

  • @zahar1875
    @zahar1875 2 years ago +1

    I wish I had teachers like that at university

  • @imchukwu
    @imchukwu 6 years ago +1

    Hi, thanks for the video, really great. Please, I would like to know the smallest number of samples needed to train a GAN, as well as how long an ideal training run would last with a single GPU and 2 CPU cores. Just an estimate.

  • @BatteryExhausted
    @BatteryExhausted 7 years ago +10

    I wonder if this issue of classifiers bleeds into the philosophical problem of perfect form?
    The issue being that while we all imagine an apple as a 'perfect form,' there is no perfect apple in reality. All apples are a process, not static objects. Perfect form only exists in the 'ideas space.'

    • @literallybiras
      @literallybiras 7 years ago +2

      Well, if you consider that perfect forms are really a branch of epistemology (rationalism), then it's actually interesting and somewhat expected that the computer classifier holds a "perfect form" and uses it to compare with the others. I don't know if it's a problematic topic in philosophy or more like a philosophical tool for those who understand it. We actually have incorporated this in our language through what we call abstraction. And perfect forms would be abstractions that we consider the best models for that particular concept.

    • @jeffbloom3691
      @jeffbloom3691 6 years ago

      Battery Exhausted. I thought the same thing when I watched this.

  • @briankrebs7534
    @briankrebs7534 4 years ago +2

    Seeing as the "data" which encodes the appearance of a face or a cat is hardcoded into the genome of the individual, would a GAN theoretically be able to train on matched images of faces and genomes, and then be reverse-engineered to output the most probable genome which would produce the face image given as input?

    • @mme.veronica735
      @mme.veronica735 3 years ago

      There is also a large variety of epigenetic factors, such as nutrition during growth, age, and body fat, that change the appearance of a face, so probably not.

  • @nateshrager512
    @nateshrager512 7 years ago +1

    Fantastic explanation, love this guy

  • @cameroncroker8389
    @cameroncroker8389 4 years ago

    WD, love Rob's explanations!