Del Spooner: Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?
Sonny: Soon...
Where can I find more info about the transfer learning paintings bit? I can't find anything
Impressive, starting at 0:32. I know that's an old video, but what's the input for this "brush noise" aspect, like the movie "Loving Vincent"? I mean, it could be great to create a painting or an illustration from a portrait picture and then have this vibrant effect on it.
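If the "brush noise" in the video refers to StyleGAN's per-layer noise inputs (my assumption, I can't confirm it from the clip alone), that flickering painterly texture comes from resampling those noise maps while the latent stays fixed. A minimal sketch with the stylegan2-ada-pytorch code base (repo on your Python path; 'network.pkl' is a placeholder for any pretrained pickle):

    import torch
    import dnnlib, legacy  # modules from the stylegan2-ada-pytorch repo

    device = torch.device('cuda')
    with dnnlib.util.open_url('network.pkl') as f:   # placeholder path/URL to a pretrained pickle
        G = legacy.load_network_pkl(f)['G_ema'].to(device)

    z = torch.randn([1, G.z_dim], device=device)
    w = G.mapping(z, None, truncation_psi=0.7)       # fixed latent: the "painting" itself stays the same
    # noise_mode='random' resamples the per-layer noise on every call, so only the fine,
    # stroke-like texture changes from frame to frame (Loving Vincent-style shimmer).
    frames = [G.synthesis(w, noise_mode='random') for _ in range(8)]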
Ok, so where can I find this UI shown in the video?
How in the world was the interface built?
Anyone know what song was used?
Every day we are getting closer to breaking the simulation.
More like adding new dimensions to the creative process.
How the hell have they achieved this real-time transformation? Every example I tried was a painful process with long iteration times.
Probably ludicrously powerful computers matched with hyper optimized code.
We are speaking about Nvidia and their near-bottomless pits of money here.
Where can I find the tools for similar interaction? What I need is to combine 2 pictures and then apply a style to the result. I was searching all over the net but I haven't found a single encoder into StyleGAN space yet.
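One way to get close to "combine 2 pictures, then apply a style" is style mixing in W space. The sketch below assumes the stylegan2-ada-pytorch code base and mixes two random latents; for two real photos you would first invert each one into W+ (the repo ships a projector script for that) instead of sampling z. 'network.pkl' is a placeholder and the layer cutoff is an arbitrary choice:

    import torch
    import dnnlib, legacy  # modules from the stylegan2-ada-pytorch repo

    device = torch.device('cuda')
    with dnnlib.util.open_url('network.pkl') as f:   # placeholder path/URL to a pretrained pickle
        G = legacy.load_network_pkl(f)['G_ema'].to(device)

    z = torch.randn([2, G.z_dim], device=device)     # stand-ins for your two pictures
    w = G.mapping(z, None, truncation_psi=0.7)       # [2, num_ws, w_dim]

    mixed = w[0:1].clone()
    mixed[:, 8:] = w[1:2, 8:]                        # coarse layers (pose/layout) from A, fine layers (texture/colour) from B
    img = G.synthesis(mixed, noise_mode='const')     # [1, 3, H, W], values roughly in [-1, 1]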
This is amazing
Wow is this available for us to use somewhere? What kind of GPU power do we need to run this?
github.com/NVlabs/stylegan2
It's open source, and you need 4 insanely fast GPUs for about 1 day to train on 512x512 images.
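To be fair, the 4-GPU figure is for training from scratch; just sampling from one of NVIDIA's pretrained pickles runs on a single GPU. A minimal sketch, assuming the PyTorch port (stylegan2-ada-pytorch, mentioned further down) is on your path and 'ffhq.pkl' stands in for any pretrained network pickle path or URL:

    import torch
    import dnnlib, legacy  # modules from the stylegan2-ada-pytorch repo
    from PIL import Image

    device = torch.device('cuda')
    with dnnlib.util.open_url('ffhq.pkl') as f:      # placeholder pickle path/URL
        G = legacy.load_network_pkl(f)['G_ema'].to(device)

    z = torch.randn([1, G.z_dim], device=device)
    w = G.mapping(z, None, truncation_psi=0.7)
    img = G.synthesis(w, noise_mode='const')         # [1, 3, H, W], values roughly in [-1, 1]
    img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
    Image.fromarray(img[0].permute(1, 2, 0).cpu().numpy(), 'RGB').save('sample.png')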
Not sure if this is cooler or creepier.
Where do I download a complete .exe from????? I don't think a program .exe works.
Speechless
This doesn't work on the new RTX 3080. StyleGAN2 only works on CUDA 10 and the RTX 3080 only on CUDA 11. Will there be an update soon?
Doesn't the 3080 only have 10 GB, and doesn't StyleGAN2 require 12 anyway?
@@EkarFCB I'm not sure where you got that information, but it's not true.
It's updated now, search for stylegan2-ada-pytorch.
@@NoahElRhandour Thanks!
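For anyone hitting the CUDA 10 vs 11 and 10 GB questions above, a quick sanity check of what your installed PyTorch build and GPU actually support before running anything:

    import torch

    print('PyTorch:', torch.__version__)
    print('CUDA build:', torch.version.cuda)   # needs to be 11.x for an RTX 3080
    print('GPU:', torch.cuda.get_device_name(0))
    print('VRAM (GB):', round(torch.cuda.get_device_properties(0).total_memory / 1024**3, 1))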
I don't expect an answer, but can you use the artistic style and, instead of using generated faces, import your own face and have the style shown in this video?
Yes
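Roughly how that workflow looks with stylegan2-ada-pytorch, as a hedged sketch: first invert your photo into W+ with the repo's projector script (I'm assuming it writes a projected_w.npz with a 'w' array, going from memory), then push that W+ through a generator that was fine-tuned on paintings. 'painted_faces.pkl' is a hypothetical name for such a fine-tuned pickle:

    import numpy as np
    import torch
    import dnnlib, legacy  # modules from the stylegan2-ada-pytorch repo

    device = torch.device('cuda')
    with dnnlib.util.open_url('painted_faces.pkl') as f:   # hypothetical painting-fine-tuned pickle
        G = legacy.load_network_pkl(f)['G_ema'].to(device)

    # W+ code of your own face, produced beforehand by the projector (filename and 'w' key assumed)
    w = torch.tensor(np.load('projected_w.npz')['w'], device=device)   # [1, num_ws, w_dim]
    img = G.synthesis(w, noise_mode='const')                           # your face, rendered in the painted style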
Is it possible to buy this program to use on my own?
It's free bruv
@@NoahElRhandour Where can I get it?
1:56 Guybrush Threepwood :D
Where can I buy this??
Please, can someone tell me.
It is free; getting the knowledge is the key
@@MrSonance Where do I buy Tai Lopez's knowledge?
Please wrap up two of these programs for me.
Cool, I can't use this.
nice
Something I can't use.
AMD?
How is this a good thing?
cuz it's cool
@@sanictheheadhug3936 no, it really isn't
@@ZirrelStudios I'm calm af; it doesn't make sense to turn something artists can already create with emotion and reason into an algo filter set.
@@ZirrelStudios I've been a programmer, game modder and artist for most of my life, and I hoped for better AI (in sandbox games and particle simulations). Now, with predictive AI layers and giant (mega) datasets, I find it interesting, but the only positive angle of this level and much higher might be something like a transference when our bodies die. I like looking at actual art, listening to music, and creating myself, even with AI (tensor and beyond... who wouldn't want 1-2 TB of RAM and some of the new AI cards with 80 GB per unit stacked in a cube). I just worry about something, blurry....
What about these new "reaction" videos? Some are good, most are a parallel to the laughable AI of times past. How am I bridging this together? Sound samples, patches to quantize music, autotune, giant kits of 3D models and textures (Substance Designer etc.), does this not become an echo of actual artists?
That's a longer version of "how is this a good thing". My stuff: www.supercala.net
@@kbrodeur Pandora's box has been opened: the reality now is no longer "Wow... you're a cool drawer"; it has gone to #Guffawthisiscool, "you want what!!? for something I can click a button for, for nothing?"
hah