This gentleman explains high level concepts in ways that the layman can understand, AND has an interesting voice to listen to. A++ work
You should check it out, he made his own YouTube channel. Search for "Robert Miles AI"
299 likes, here we go.
And mutton chops that I can only dream of
Brain work ... like House work...but deeper
I like that they don't feel the need to do a camera cut every time he pauses to think of his next word. Makes me feel like the video was made for people who are actually interested and not just clickbait for zoomers.
His Clarity and simplicity in unpacking a complex topic is just out of this world.
"Neural networks don't have feelings, yet...."
Vincent Peschar this is why the AGI will fight back. we abuse them so much lol.
Does a rock have feelings? If a rock had feelings, would it matter? Why? (honest questions on logic and peoples feelings)
It could be said that any matter that is arranged into any pattern is at some level alive. While a rock wouldn't have feelings nearly as obvious as humans, it still might have some sense of being. Breaking a rock into pieces may not cause it to experience pain or anxiety or pleasure, as its sensory capacity is not sufficient to notice such changes to itself. However, its current makeup and position in the universe is no more or less arbitrary than any other matter in the universe. I guess the follow-up question might be: if all matter, including organisms, is ultimately made up of non-living particles, what is life?
Yet. Growth mindset.
TechyBen The problem comes when you start asking if mud has feelings and if people have feelings. Mud and people are generally made of the same materials. It’s just that we are organized in a way that gives us feelings. The religious and non-religious can debate if the soul exists or not, but scientifically we can only differentiate between mud and life based on its level of organization. And so it holds that a highly organized piece of silicon (a computer chip) could also have feelings.
To call this impressive would be an understatement. That's amazing, fantastic, unbelievable, highly interesting and scary all at once.
Patrick Bateman why would it be scary?
@@naturegirl1999 Most technologies can be scary, since they all have the potential to be misused.
AI can be particularly scary, since we use it for systems that are too complex for us to understand.
So what we do is hand these complexities over to a computer, in the hope that it handles them the way we think it should. But the truth is that we don't really know what it does, and if we decide to use such technologies in our weapon systems, for example, then it starts getting scary.
@@d34d10ck Interesting. Now let's hear what Paul Allen has to say about this
What do you think about it now?
@@h0stI13I can no longer imagine a life without generative AIs. As a developer, I use them all the time and my productivity has increased immensely because of them.
I love how quickly he moved past neural networks having feelings.
"But neural networks don't have feelings (yet) so that's really not an issue. You can just continually hammer on the weak points, find whatever they're having trouble with, and focus on that"
You just know that our robot masters are just going to replay this over and over again in the trial against humanity.
haha you're so funny
Fish don't have feelings either, but I have no qualms against sardine canning companies packing millions of sardines a year. It's almost like most intelligent agents don't care about automatons, nor should they.
@@qwertysacks Fish do have feelings. They have endocrine and nervous systems, and can act scared or whatever. Not that I care much about those feelings, but it's still non-zero. The narrow forms of AI we have at the moment do not have sufficient complexity for feelings.
@@harrygenderson6847 Nor will they for many years. It’s a non-issue.
I'm starting to think you're right...
these are some of the coolest networks ive seen so far
Man, one of my favorite videos on this channel. How did I miss it?
Not only does it make you think about the endless potential of machine learning, it also sheds some light into how natural brains might work. Maybe even a basic aspect of the nature of creativity.
Getting my mind blown again!
Lost it at "Neural Networks don't have feelings yet."
It was just the casual way he threw it out there and took it as the most normal thing in the world. Like "Yet" makes total sense.
RealityVeil does it not? The first multicellular organisms didn't have feelings (emotions); over time, emotions were produced, as well as brains.
I'm afraid that is a common issue in AI. NNs might become aware and acquire feelings. Some people still believe that animals do not have feelings. It keeps the world nice and simple.
Just a matter of input data: hormones and brain/body health and their part in psychology in random situations. It will connect the dots at some point. One could argue "aren't those feelings simulated?", but then ask yourself: "aren't yours?". The structure of the mind is based on the structure of the input. That's why you shouldn't be afraid of AI with feelings but of BIG DATA!
Never really understood GANs before. Thank you so much for making this so intuitive. Eternally grateful.
The Dell screens have come to worship the Commodore PET.
LMAO!
Enjoy your upvote, lol.
Kids going to see grandpa
Only discovered this channel recently and I've been watching nothing but Computerphile videos for a whole week. Love the content you do with Rob Miles - his field of study combined with his explanations make these my favorite videos to watch.
Thank you!
The only instance that I can remember when a science video presented on my level of understanding genuinely blew my mind at the end. The research on artificial neural networks will surely change computing as we know it.
I love the "finding the weakness" analogy. Really helped me to understand.
"..which is kind of an impressive result." - understatement of the century
Gotta love latent spaces. My favorite was a network that showed a significant correlation between - and - . Assigning any direct meaning to that could be a leap of logic, but when you think about it, cats have more visually feminine features than dogs, generally speaking.
This is literally one of the most fascinating videos I've ever seen on YouTube.
I wish I could talk to this guy once... He seems so cool and intelligent at the same time
Approach him with wine and a supercapacitor. and a throwaway guitar.
I'm pretty sure this is the best format for learning something on YouTube
I think it's so cool that there is a Linksys WRT-54G and a Commodore PET in the background and they're discussing topics so modern.
Love the Commodore PET on the shelf! I played with one of the original PETs when they first came out (the one with the horrid rectilinear keyboard!). We eventually got four of the later models at my school, and before long we were happily playing Space Invaders when the teachers weren't looking... and then doing hex dumps of Space Invaders, working out how it worked, and adding a mod to give it a panic button in case the teacher came into the room so you could hit the button and look as if you were working. To be honest, I'm not sure they would have cared, because we probably learned more by doing the hex dump than we would have with our usual work!
Oh the effort kids will put in in order to avoid work!
With the human analogy, an interesting idea is that you don't just focus on the weak area of learning but also adapt your teaching technique to enable learning. You change your approach. It may be that the difficulty in learning is not a fault of the student but a 'bug' in the teaching method.
[1 & 7 look similar; our learning strategy is based on a simplistic shape recognition concept; we adapt our recognition concept (we focus on a particular aspect of the image, for example)
and thus the learner has a 'light bulb' moment as they 'get the point'.]
I could watch this all day.. like I did yesterday with numberphile :D
Literally the best explanation possible for such a dense topic, congrats my man, you are incredible!
The best explanation of GANs I have ever come across.
Love the Commodore PET on the shelf. Class.
Stuff a GAN into 64k. Reminds me of the Chess player written for 4k ram
I've got a Commodore PET on a shelf too. Mine walks off during POST, so it isn't used anymore. But it looks classy on the shelf.
"Right now, they're just datapoints" I like this guy
Wow, how can you make something so complex be so easy to understand? Thank you man
I honestly more often than not click the video based on whether Rob is hosting.
Same here
He's cute
I like all the Computerphile regulars, but yeah Rob is great. I recommend checking out his personal channel that focuses on AGI; it's linked in the description above!
Cast is great for any channel. Only Philip Moriarty gives weird vibes.
This guy knows. Rob is the best, and this is fascinating!
It makes it irresistible to get involved with machine learning.
Another cool thing he didn't mention about that experiment with the faces:
They also tried to generate a picture with only features that were found on men, and one with only features that were found on women, and the network ended up generating "grotesque" pictures that were basically caricatures of a "man" or a "woman".
Metsuryu Is it possible to see these images?
@@naturegirl1999 I saw these somewhere a long time ago, but you can probably try googling something like "AI generated male/female faces"
@@MetsuryuVids bro that's too general of a search
@@toomuchcandor3293 Yeah, sorry, I don't remember much else. I tried to find it again sometime ago, but with no success.
That last bit was super interesting and mind blowing at the same time! Excellent video!
Studying computer science now. These videos give me inspiration to try to connect concepts outside of the classroom
I love seeing this in 2022, and comparing this to DALL-E, GPT-3, etc. Wow. Five years later, and it's generating "Pink cat on a skateboard in Times Square" at artist quality.
(@16:25 - Yup. You do. And it does.)
This is so interesting! This is the way thispersondoesnotexist "photos" are made by the machine. Super cool!
This guy presents fantastically. Such an interesting topic... I remember seeing an online CS Harvard lecture around a decade ago that used the same concept (having the system compete with another instance of itself) to train a computer chess player...
20 minute video about AI by Rob Miles? YES PLZ
excellent explanation!!! one of the best videos ever on computerphile
the best video on gan I have ever seen, probably this can help me to return to ML
"...but neural networks don't have feelings yet."
Robert Miles throws this out there nonchalantly. I think he knows something we don't. What is it?
The last part where Rob talks about how meaningful features are mapped to the latent space is a demonstration of how machine learning can strongly pick up on and perpetuate biases. e.g. If you fed a model a large dataset of people and included whether people were criminals or not as part of your dataset, and you fed it a large amount of criminal photos wherein the subject was dark-skinned, the model may learn that the "Criminal" vector associates with the colour of a person's skin, i.e. you are more likely to be guilty of ANY CRIME if you are black.
If we put these kinds of models in charge of informing decisions (say, generating facial sketches for wanted criminals) we might encode harmful biases into systems we rely on in our day-to-day lives. Thus, these kinds of machine learning need to be handled very carefully in real-world situations!
I think this type of machine learning algorithm might actually be somewhat resistant to what you describe, because in order for the discriminator to be consistently fooled, the generator needs to be creating samples that span the whole population of criminal photos. Criminals might have a statistically most likely race, but if the generator is only outputting pictures of that race, then the discriminator would be able to do better than 50% at spotting "fakes" by assuming that all pictures of that race were generated and not real. So the discriminator would actually undo the generator's bias for some time by being reverse-biased. So I think once the generator was fully trained it would be outputting images of criminals of all races, weighted by how many images in the training set were of each race.
But now that I think about it, if we are using current arrest records as the training material for the GAN, then any current biases that exist with who police choose to arrest will show up in the GAN also, so developing a completely unbiased neural network for what you describe could indeed be challenging.
It's completely flabbergasting to me how far science has come in the last decade alone!
"So cats equal zero and dogs equal one. You train it to know the difference" Ultimate final test: show it a Shiba Inu.
knightshousegames Shibas look too happy to be cats.
That is what they call a fringe case. My guess is the machine would try to return a 0.5
knightshousegames I think you should ban 0.5 because that's right for both cases always; the machine can't learn from that
No because in the end you can tell the network that it is a dog, and it could alter its biases based on that result, so the next 100 times you show it a shiba inu, it might be able to give a better answer. Whether that would negatively affect its ability to identify a cat, however, I have no idea.
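For what it's worth, that "tell the network it's a dog and let it adjust" step is just an ordinary supervised update. A minimal PyTorch sketch of the correction; the tiny fully-connected network, the 64x64 input size and the random tensor standing in for a Shiba Inu photo are all made up for illustration:

```python
import torch
import torch.nn as nn

# Toy binary classifier in the spirit of the video: cat -> 0, dog -> 1.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),                    # output near 0 = "cat", near 1 = "dog"
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

shiba = torch.rand(1, 3, 64, 64)     # stand-in for a confusing Shiba Inu photo
label = torch.tensor([[1.0]])        # we tell the network: this is a dog

before = model(shiba)                # may well hover around 0.5
loss = loss_fn(before, label)
loss.backward()                      # gradients push the weights toward "dog"
optimizer.step()
optimizer.zero_grad()

print(f"before: {before.item():.2f}, after: {model(shiba).item():.2f}")
```

Whether that single correction then degrades the cat class is exactly the forgetting worry raised above; in practice you'd mix the new example into the full training set rather than hammer on it alone.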
This man is the only reason I stay subscribed, he is fantastic
Would love to see some example pictures of the generated and real pictures.
High-level concepts explained so beautifully. Fantastic!
wild to view this video again in 2023
Well explained. Best video about gans i have seen so far.
6 years ago is crazy
Hard to imagine watching television again, when such interesting programs are broadcast here instead.
I'm pretty excited about GANs, but what about when either the generator or the discriminator gets a big edge over the other during training and basically kills further progress? Robert spoke about training on where the discriminator is weak, but it would be nice to have some more details.
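One common, admittedly ad hoc, way practitioners deal with that runaway-winner problem is to skip updates for whichever network is too far ahead. A rough sketch of the idea; the toy networks, the 0.75 accuracy threshold and the random stand-in dataset are all assumptions for illustration, not anything from the video:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64                 # toy sizes for the sketch

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.randn(512, img_dim)        # stand-in for a real dataset

for step in range(1000):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = G(torch.randn(32, latent_dim))

    # Estimate how well the discriminator is doing right now.
    with torch.no_grad():
        d_acc = ((D(real) > 0.5).float().mean()
                 + (D(fake) < 0.5).float().mean()) / 2

    # Only train D while it isn't already dominating, so G gets a chance to catch up.
    if d_acc < 0.75:
        d_loss = (bce(D(real), torch.ones(32, 1))
                  + bce(D(fake.detach()), torch.zeros(32, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

    # The generator always trains: try to make D call the fakes real.
    g_loss = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

There are more principled fixes (Wasserstein losses, gradient penalties, different learning rates for the two networks), but the thresholding trick shows the basic idea of keeping the two players roughly matched.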
"Kind of impressive" is a massive understatement. It's one of the most awesome and scary things I know.
holy moly, this dude has a gift for explaining. awesome work
Are the generators producing the same image, for the same input?
If so, could it mean that continuously changing the input by small steps creates a kind of animation?
If this really is the case I would really like to see such a movie :)
Check out arxiv.org/pdf/1511.06434.pdf, on page 8 the authors have essentially done that!
Wow, thanks Philipp, fascinating! Page 11 also!
Thanks, yeah I hoped that the pictures would be better already, but I guess that will change over time :)
Especially the faces fall into the uncanny valley, I'd say. But besides that, those examples are exactly what I meant.
Check out my follow-up video: watch?v=MUVbqQ3STFA
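For anyone curious what that "animation" looks like in code: a trained generator is deterministic for a fixed input, so walking in small steps between two latent vectors and rendering each step gives a smooth morph. A minimal sketch, where the tiny stand-in generator and the 64x64 image size are assumptions; in practice you'd load a properly trained model:

```python
import torch
import torch.nn as nn

latent_dim = 100

# Placeholder generator so the sketch runs on its own; in reality this would be
# a trained GAN generator loaded from disk.
generator = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh())

z_start = torch.randn(latent_dim)              # one random point in latent space
z_end = torch.randn(latent_dim)                # another

frames = []
for t in torch.linspace(0.0, 1.0, steps=60):   # 60 frames of "animation"
    z = (1 - t) * z_start + t * z_end          # straight-line walk through latent space
    with torch.no_grad():
        img = generator(z).reshape(3, 64, 64)  # same z always gives the same image
    frames.append(img)

# `frames` can then be written out as a GIF or video, e.g. with imageio.
```

The interpolation figures in the paper linked above are made in exactly this way.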
The videos on Rob's channel are so much better edited
The fact that he added "yet" is both exciting and chilling.
This is how evolution works. This Generator/Discriminator mechanism is exactly how, for example, stick insects evolved to look like sticks and leaf insects like leaves. This is the dream of evolutionary computing I had 25 years ago, but didn't know how to implement. See Richard Dawkins "The Blind Watchmaker", where his attempts to "evolve" computerised insects (back in the '90s!) will also help you understand what Robert called latent space.
That last part about the latent space was really valuable insight! Hard to come by.
Man, the detective and forger example is the best example in the world. He is an amazing teacher. I want to learn A LOT from him.
Very interesting topic and an excellent explanation by Rob! I hardly ever write youtube comments, but this video is great; it deserves all the love it is getting.
The common room elephant: consciousness is _relative,_ and shared by electronic machinery, and all of Earth's animals, including elephants, and not excluding Man.
Deeply sophisticated trial and error to produce meaningful visual results. Awesome
That was extremely interesting, thank you for making this episode.
Did I miss the part of the series where we learn how the generator is actually structured and produces images? The discriminator is a standard classification neural net, which I know has been covered, but how does a neural net output an image rather than a class? Is each node of the final output layer one pixel in the image?
Do the "directions in picture space that correspond to cat attributes" that he references around 17:30 correspond to eigenvectors of the generator matrix?
GANs started the era of regulating feedback in artificial networks, like in their natural prototypes.
Talking about developing Skynet and advanced artificial intelligence, while in the background they keep a Commodore PET as their backup system ^-^ PRICELESS!
2022 was the year of latent diffusion!! Disco Diffusion, Midjourney, and now Stable Diffusion is about to make their weights public!! This stuff is so fascinating! :)
Great talk by the way!!!
And the best thing is that diffusion models aren't GANs, so they won't suffer from mode collapse and other pain like that.
While half of the world is stuck at jobs they don't enjoy, filling spreadsheets and making powerpoint slides, I feel extremely privileged to be a part of something so surreal and otherworldly. As JFK puts it, "We choose to do this, not because it is easy, but because it is hard."
People do this intuitively. Competition creates the best among us. It's interesting that competition among peers facilitates growth in machine learning as well.
seems pretty sus ngl
there are quite a few youtubers that have a lot of content on them playing around with GANs
Any recommendations for particularly interesting ones?
+Higgins2001 carykh being one I can think of that plays around with using a GAN to generate instrumental music by feeding it image representations.
Thanks!!
The GAN wasn't exactly very good though.
If I recall, the most recent AI created for the game DOTA 2 uses GANs to decimate professional gamers. OpenAI
I love that he said Yet...."Neural networks don't have feelings yet" so nonchalant
THIS is the best channel ever!
I'm working on GAN for data augmentation and will be happy to connect with interested ones
Wow. That was a pretty amazing insight. Hope for non-harmful super-intelligence? If we can do broad definitions of concepts like man's face, woman's face and glasses, then perhaps even trickier concepts can be tackled in time.
This guy explains very confusing topics in SUCH an understandable way
Very cool. The only quibble I have with the video is that Rob says things like "this doesn't apply only to networks" and "they can be other processes". Actually, the GAN procedure requires a gradient descent framework because it uses the discriminator's gradients to fix the generator. Maybe you can use other stuff, but it's not as open as he makes it sound, and I don't know of anything other than neural networks being used. (EDIT: Actually, he explains all this at around 12:10.)
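To make that point concrete, here is roughly what the generator's update step looks like in an autodiff framework; the loss is computed on the discriminator's output, so the gradient has to flow back through D into G's weights. Toy layer sizes, purely illustrative:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128), nn.Tanh())
D = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

z = torch.randn(32, 16)
fake = G(z)                    # generator output
verdict = D(fake)              # discriminator's opinion, deliberately NOT detached

# Generator loss: label the fakes as "real", so minimizing this loss means
# "maximize the discriminator's error on my samples".
g_loss = nn.functional.binary_cross_entropy(verdict, torch.ones_like(verdict))

g_loss.backward()              # gradients flow back THROUGH D and into G's weights
opt_g.step()                   # but only G's parameters get updated here
opt_g.zero_grad()
```

(D's parameters also accumulate gradients on this pass; a real training loop zeroes them before the discriminator's own update. The point is simply that without a differentiable discriminator there is no gradient signal to improve the generator, which is the constraint described above.)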
Bro understood the cyclical nature of GANs so well that even his explanation turned cyclical
Wow this video is amazing. Can he do some live coding/example? Would be interesting to see the pictures.
I wish I saw this earlier. You guys are amazing.
This guy is on the ball: a rare trait indeed.
This episode would have greatly benefitted from some generated pictures, even if only as a link in the description.
We humans do the same thing; or rather, coming up with new information is a challenge for a human as well. This is why we tell people to "think out of the box." Even then, a new out-of-the-box idea will usually originate from a collection of ideas.
Love this guy. Harnessing your concepts here!
"Neural networks don't have feeling YET"
Thank you for such an interesting video, came here after checking the "This Person Does Not Exist" web page.
an amazing explanation of GANs
Absolutely fantastic mind/teacher. I am a complete and utter noob to any of these ideas and even I could follow along. Thank you so much
Latent space description was great!
Very interesting. I think perhaps the explanation focused on the interaction between the generator and discriminator such that we lost sight of the system still needing actual pictures of cats.
Man oh man, that explanation was even better than Ian Goodfellow's, a phenomenal video. Hats off.
"Neural networks don't have feelings.. yet." lol
He said it so matter-of-factly and by the by. Chilling!
So you watched the video too?
I also paid specific attention to that "yet". It's super cool and scary to live in a time when we can confidently say that software might have feelings in the future.
12:11 to 13:00 great definition of gradient descent
"As the system gets better it forces itself to get better." Uh oh, Technological Singularity ahead! lol
It sounds like a GAN is very similar to actor-critic reinforcement learning, so what is the difference between the two? Thanks
wowww, tweak the weights in the direction that maximizes the discriminator's error :O genius
I feel like the relational connections in latent space that we see now are a 2D version of our 3D brains, which effectively do the same thing. If compute advances to be able to process the data contained in latent space of exponentially higher dimensional matrices, we'll begin to see real world AGI. The first steps of this can be seen recently in making GPT4 multimodal.
Does that Commodore PET still work?
I'd like to see a video on the PET's internals
I wish I had teachers like that at university
Hi, thanks for the video, really great. Please, I would like to know the least number of samples needed to train a GAN system, as well as how long an ideal training run would take with a single GPU and 2 CPU cores. Just an estimate.
I wonder if this issue of classifiers bleeds into the philosophical problem of perfect form?
The issue being that while we all imagine an apple as a 'perfect form,' there is no perfect apple in reality. All apples are a process, not static objects. Perfect form only exists in the 'ideas space.'
Well, if you consider that perfect forms are really a branch of epistemology (rationalism), then it's actually interesting and somewhat expected that the computer classifier holds a "perfect form" and uses it to compare with the others. I don't know if it's a problematic topic in philosophy or more like a philosophical tool for those who understand it. We have actually incorporated this in our language through what we call abstraction. And perfect forms would be abstractions that we consider the best models for that particular concept.
Battery Exhausted. I thought the same thing when I watched this.
Seeing as the "data" which encodes the appearance of a face or a cat is hardcoded into the genome of the individual, would a GAN theoretically be able to train on matched images of faces and genomes, and then reverse engineered to output the most probable genome which would produce the face image as input?
There is also a large variety of epigenetic factors, such as nutrition during growth, age, and body fat, that change the appearance of a face, so probably not.
Fantastic explanation, love this guy
WD, love Rob's explanations!