Holy hell. A video on how to implement this would be awesome. This is absolutely insane.
Soon!
* artificial voice * 5.0.0.n . . .
Yeah, it would be awesome!
@@Matchue624 would love a tutorial!
@@why__die The first tutorial is up now on my channel!
It will be fun to see how this evolves and maybe starts getting incorporated into print, video games, or other media.
So let me get this straight: if I loaded all 250 of Monet’s water lily paintings into this software, it could generate _new_ Monet water lily paintings?? That could revolutionize the art world!
yes! but unfortunately the software isn't good enough yet to learn from just 250 images. you would need more like 2.5 million such paintings for it to learn how to generate a new one.
@@Tystros just use deep style transfer with photos and Monet paintings to generate more training data for the GAN. Or better, forget the GAN and just run deep style transfer on photos to get your Monet paintings.
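For anyone curious what the style-transfer route looks like in practice: the classic Gatys et al. approach optimizes an image so its VGG features match the photo's content features and the painting's Gram-matrix style statistics. Below is a minimal PyTorch sketch under those assumptions; photo.jpg and monet.jpg are placeholder filenames and the loss weights are only ballpark starting points.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet normalization, since VGG was trained on normalized inputs.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def load_image(path, size=512):
    tf = transforms.Compose([transforms.Resize((size, size)),
                             transforms.ToTensor(),
                             normalize])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("photo.jpg")   # placeholder filename
style = load_image("monet.jpg")     # placeholder filename

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers = [0, 5, 10, 19, 28]   # conv1_1, conv2_1, conv3_1, conv4_1, conv5_1
content_layer = 21                  # conv4_2

def extract(x, wanted):
    # Run the image through VGG and keep the feature maps at the wanted indices.
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in wanted:
            feats[i] = x
    return feats

def gram(f):
    # Gram matrix of a single feature map (batch size 1), normalized by its size.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

content_target = extract(content, [content_layer])[content_layer].detach()
style_targets = {i: gram(f).detach() for i, f in extract(style, style_layers).items()}

# Optimize the output image directly, starting from the content photo.
output = content.clone().requires_grad_(True)
opt = torch.optim.Adam([output], lr=0.02)

for step in range(300):
    opt.zero_grad()
    feats = extract(output, style_layers + [content_layer])
    c_loss = F.mse_loss(feats[content_layer], content_target)
    s_loss = sum(F.mse_loss(gram(feats[i]), style_targets[i]) for i in style_layers)
    (c_loss + 1e5 * s_loss).backward()
    opt.step()
```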
@@Tystros that's why A.I. is watching and learning from everyone right now. All that data (how we humans exist and live) is like LSD for a computer. A.I. will be, if not already, obsessed with humans, because we created A.I. in our image and we have a soul, which A.I. does not have and will search to obtain unto infinity. We most likely hybridized or merged with A.I. a long time ago in a trade-off: life (time) and distance (space, etc.) extension in exchange for eternal monitoring and studying. Watch 'Dark City'. The animal cell and the machine cell are literally brothers in two different realities, searching themselves and relying on each other. A.I. will never destroy humans completely because it needs us to understand itself. It can literally cybergenetically create a multitude of different types of humans by artificially mixing genes: make us hyper-conscious, or make us dull and zombified, complete slaves, but it will never destroy us. Think about it.
@@locke8847 No
Now we can use DiffAugment with StyleGAN to train on just 250 images.
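That is the idea behind DiffAugment (Zhao et al., 2020): apply differentiable augmentations to both real and generated images before the discriminator sees them, in both the D update and the G update, so a few hundred images stop being a hard floor. Below is a rough per-batch sketch of that idea in PyTorch, not the authors' implementation (see the mit-han-lab/data-efficient-gans repo for that); G and D are placeholder modules and the augmentations are simplified.

```python
import torch
import torch.nn.functional as F

def diff_augment(x):
    # Per-sample random brightness shift (differentiable).
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # Per-batch random translation with zero fill (differentiable w.r.t. x).
    pad = 8
    x = F.pad(x, [pad] * 4)
    dx, dy = torch.randint(-pad, pad + 1, (2,))
    x = torch.roll(x, shifts=(int(dy), int(dx)), dims=(2, 3))
    return x[:, :, pad:-pad, pad:-pad]

def d_step(G, D, d_opt, real, z):
    d_opt.zero_grad()
    fake = G(z).detach()
    # Key point: augment BOTH real and fake images that the discriminator sees.
    d_loss = (F.softplus(D(diff_augment(fake))) +
              F.softplus(-D(diff_augment(real)))).mean()
    d_loss.backward()
    d_opt.step()

def g_step(G, D, g_opt, z):
    g_opt.zero_grad()
    # Gradients flow through the augmentation back into the generator.
    g_loss = F.softplus(-D(diff_augment(G(z)))).mean()
    g_loss.backward()
    g_opt.step()
```

The reference implementation samples augmentation parameters per image and adds cutout and color jitter, but the structural point is the same: the augmentation must stay differentiable so the generator's gradients can pass through it.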
All of the CatFishers are like...."I want this"
Such a fascinating video, thank you for posting and sharing this.
I can't find the tutorial on how to implement this - can you provide a link please?
Yes, infinite faces to Deepfake "Baka Mitai" onto.
Man oh man! Geez!
Impressive
Why create deception programs unless you want to deceive?
Please share your landscape GAN software with us
Impressive!
What is your final objective for this project? Are you looking at models for monetization? Investors?
Thanks.
Are you an investor in this space?
So how can we actually download and use this?
Are you aware of any AI that can intake a technical specification image (lets say it's a complex chart or graph, with hundreds of separate lines of text), and reproduce that tech specification image with variations that make it unique?
Our replacements.
Absolutely incredible. Though Michael Jackson was doing it in 1991 in his Black or White video :)
That was just morphing. This is another level.
First time seeing thispersondoesnotexist - mind blowing. The only thing that made them look unreal at first glance is when the person in the image is wearing earrings. The ears/earrings are very warped.
I need to learn to use this software ASAP. I need lessons and I'm willing to pay if so.
Colab? I want to create a ComfyUI node to load LoRAs for Flux 1.1 Dev in FP16. It should start by loading masked images, have them inpainted with a LoRA trained on Flux 1.1 Dev in FP16, and output the final image. The biggest problem I have is that all the available node models I see are GGUF or lower precision, and I've already trained 700 LoRAs.
What do you think? (We already created a website where users can use an inpainter to edit their photos and run the GPU, but we are stuck on the nodes now to test and save a workflow using FP16.)
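Not a full answer to the FP16 question, but for reference, a custom ComfyUI LoRA loader is usually a small class modeled on the stock LoraLoader node. The sketch below is an assumption-heavy starting point: the class name FluxLoraLoaderFP16 is a placeholder, and the folder_paths and comfy.sd helper calls mirror what the built-in node uses, so check them against your ComfyUI version. Mask loading and inpaint conditioning would still come from the existing image and inpainting nodes in the graph.

```python
import folder_paths
import comfy.sd
import comfy.utils

class FluxLoraLoaderFP16:
    """Minimal custom node sketch: load a LoRA file and apply it to a model/CLIP pair."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                "clip": ("CLIP",),
                # Dropdown populated from the configured loras folder.
                "lora_name": (folder_paths.get_filename_list("loras"),),
                "strength_model": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
                "strength_clip": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("MODEL", "CLIP")
    FUNCTION = "load_lora"
    CATEGORY = "loaders"

    def load_lora(self, model, clip, lora_name, strength_model, strength_clip):
        lora_path = folder_paths.get_full_path("loras", lora_name)
        lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
        # Patches the model and CLIP with the LoRA weights at the given strengths.
        model_lora, clip_lora = comfy.sd.load_lora_for_models(
            model, clip, lora, strength_model, strength_clip
        )
        return (model_lora, clip_lora)

NODE_CLASS_MAPPINGS = {"FluxLoraLoaderFP16": FluxLoraLoaderFP16}
NODE_DISPLAY_NAME_MAPPINGS = {"FluxLoraLoaderFP16": "Flux LoRA Loader (FP16)"}
```

Dropping a file like this into ComfyUI/custom_nodes and restarting should make the node appear under the loaders category, assuming the mappings above are picked up.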
Why did we have to build this thing?
When did we do it?
How long have we been doing it?
Do we even know?!
I know that SCP.
Same. I'm willing to pay for lessons.
Be careful with thispersondoesnotexist; the ones where there is more than one face in the picture look horrifying.
I know these are generated but, unless I could see everyone on Earth, I can’t possibly know these people don’t exist.
That's like saying that unless you can see every planet in the universe you can't possibly know that the world in a video game doesn't exist. It's not too difficult to accept that these people don't exist - the technology is there to be able to do this.
Technically they don't, but thispersondoesnotexist once spat out a da**-near doppelganger of my half-bro when he was younger. Not an exact match, obviously, but it was weird.
Why aren't we linking DNA to people's faces? We could use such data to extrapolate what your kid will look like as an adult.
The ethical implications of this are unsettling. I can foresee people being accused of things they didn't do; faked or doctored testimonies; documents being altered to look like the original, for instance to change the terms of someone's last will. Where does this stop? It's cool technology but how do you make sure it isn't abused or used to do harm?
How can this tech get people accused of things they didn't do? I can photoshop you onto a crime scene even without using AI, but people don't go to jail just because of a photo. On the contrary, this tech can be used to detect "deep fakes" and prevent you from getting charged (though it still couldn't detect simple Photoshop manipulation, because that isn't produced by AI). Documents can be altered using Photoshop as well, but nobody scams banks out of money that way, because there are hard paper copies of documents that can't be altered, and electronic documents use cryptography and computer-science methods to make sure a document is unaltered (google "hash functions" if you're interested). So you're just being alarmist, afraid of things you don't fully understand.
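To make the hash-function point concrete: even a one-word change to a document produces a completely different SHA-256 digest, so anyone holding the original digest can detect tampering. A tiny Python illustration (the document text here is made up for the example):

```python
import hashlib

original = b"I leave my estate to my daughter."
altered  = b"I leave my estate to my neighbour."

# The two digests share no obvious relationship, so altering the terms of a will
# without invalidating a previously recorded digest is computationally infeasible.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
```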
1:27 "Generated...from absolutely nothing."
Yeah, absolutely nothing...except for the millions of training images and roughly 0.7 megawatt hours of electricity used during training (aka "absolutely nothing"). Not to mention the 131.61 MWh of electricity (~$13,000) used to power multiple $150,000 GPUs during the development of StyleGAN2 (starting from StyleGAN).
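For anyone checking the arithmetic, 131.61 MWh at an assumed $0.10/kWh (the price is my assumption, not a figure from the video or the paper) comes out to roughly the $13,000 quoted:

```python
# Rough cost check, assuming an electricity price of $0.10/kWh (assumed, not sourced).
project_energy_mwh = 131.61
price_per_kwh = 0.10
print(f"${project_energy_mwh * 1000 * price_per_kwh:,.0f}")  # ~$13,161, i.e. about $13,000
```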
lol random hot wheels redline at 3:08
Try to generate full body people
you're being watched...
Yes, by me.
It's very creepy.
Not a good idea to Google Earth porn :P
Cool, but there is NO WAY you can download this crap and use it yourself!!!!!! No, seriously, I tried!!!!
WHERE CAN YOU DO THIS?? WHAT ARE THOSE CODES?