Skip the waitlist and invest in blue-chip art for the very first time by signing up for Masterworks: masterworks.art/bycloud
Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more. 🎨
See important Masterworks disclosures: masterworks.io/cd
I'd _really_ recommend looking into Masterworks.
I'm not an expert either, but since you're putting your viewers' money at risk with this sponsorship, I think you have a responsibility to consult one, or at least seek opinions. The caveat near the end sounds eerily similar to the "this isn't financial advice" line from scams, not that I assume that was your intent.
You should know whether what you're promoting is good or not. You don't need to be an expert to give clearer financial advice, which is what this ad categorically is. Telling people to do their own research doesn't mean that will happen, or that they will do it well, but it implies you could do it just as well as them... so why didn't you follow your own advice?
Best regards, I still appreciate your content. I'm just trying to look out for your viewers' money, because it's a company I don't trust. Regardless, some reputable YouTubers take their sponsorship, so there's little to no harm to your reputation ✌
Hey bycloud, could you work your insane magic and make a video on how to make a VTuber using AI in 2023? It's really complicated for me to understand, but I thought I'd reach out in case it's something you'd be interested in. If not, thank you anyways :)
Hey, just wanted to back up Internet Hobo's point of view here. I created an account, and we scheduled a call to "discuss and maybe activate my account". It turns out they wanted me to send them a wire to activate my account, and to do it on that introduction call. I told them I'd rather have them send me all the info by email first, since I didn't like being pressured like that. The guy on the phone immediately flipped and said it's not an investment for me.
They're really acting weird; please take a deep look at how they do business. I wanted to trust you because I value your work, but I'm very suspicious about Masterworks now...
4:32 can you share the image of the dude in the hat standing in the middle of the night? Thanks in advance
Bro, I just found your channel and I already love your videos.
Bycloud, thanks for keeping us updated. It feels like things are moving so fast, soon you won't be able to keep up with your videos 😊
I mean, both Midjourney v4 and Stable Diffusion 2 are kind of old news at this point. But whatever.
AI is evolving so fast
Yeah, on the scale of art history it's a billion times faster. Truly scary.
An order of magnitude faster than iPhones or computers.
Deep learning image generation, mostly.
I hope we'll get a version without filters.
One day, I just wanted to generate some funny images of cats, the next, I was being shot at by an android in the middle of a fiery hellscape.
There's also a closed-source tool based on the open-source Stable Diffusion called WarpFusion. It can actually do a good job of keeping the generated figure consistent in img2img, meaning you can run the frames of a video through it and get consistent scenery or figures instead of the acid trip that SD produces by default at the moment. Haven't seen it covered by anyone yet, even though it's actually kind of game-changing. Like I said, though, it's closed source, so you need to pay to get access. It would be great if something similar were released to the public.
Thanks so much for your updates! Waiting for more
I'm not really convinced by the 2.0 and 2.1 models. 1.4/1.5, with or without custom models, are way more flexible and in most cases better.
Every time I want to give 2.X models a chance, I quickly return to 1.5. Maybe I'm prompting 2.X wrong...?
@ Same here, every time. The only things I've done in SD 2.1 that worked well were large-scale fantasy landscapes and nature-photography-type stuff. For everything else there always seems to be a fine-tuned SD 1.x-based model that yields better results while being way easier to use. Also, having to use stupidly long prompts full of magic words in SD2 to get good results is just too annoying.
Honestly the most value I've gotten from SD2 was making me realize just how stupidly useful negative prompts were in general.
I've gotten some great results with SD2, you just gotta know how to craft a good long prompt. Then I'll usually also have some stolen negative prompts to which I'll add a few extra things.
Sweet side of AI in my opinion
Hey, is there any chance you could make a video (for your bycloudump channel) on how to install Stable Diffusion 2.1 locally? I'd really like to run it on my own machine and think you have the expertise to show a smart way to install and use it. Thanks for the great content either way!
Just install it through the SD webui, and don't forget the config file.
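In case the "config file" bit is cryptic: here's a rough sketch of what that looks like, assuming AUTOMATIC1111's stable-diffusion-webui (an assumption on my part; paths and file names may differ for other forks):

```shell
# Sketch: local SD 2.1 install via AUTOMATIC1111's webui.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# Drop the 2.1 checkpoint into the models folder, and place its .yaml
# inference config next to it with the SAME base name, e.g.:
#   models/Stable-diffusion/v2-1_768-ema-pruned.safetensors
#   models/Stable-diffusion/v2-1_768-ema-pruned.yaml
# Without the matching .yaml, the webui loads SD2 models incorrectly.

./webui.sh   # webui-user.bat on Windows
```

The launcher installs its own Python dependencies on first run, so this really is most of the process.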
What's the difference between DreamBooth, Stable Diffusion, and Midjourney? Are they all just different products run by the same company (like Facebook runs FB, WhatsApp, and Instagram)? Or are they different companies using the same name for their product (like Google for the Google search engine)? Or are they just products with some company behind them (like Microsoft is behind Windows)?
New here, hence curious.
In that case I highly recommend watching his previous videos 😅 he explains DreamBooth very well. But in a nutshell: Stable Diffusion and Midjourney come from two different, competing companies. Midjourney works on Discord and costs money, while Stable Diffusion, developed by Stability AI, is free. DreamBooth, on the other hand, is a fine-tuning method for training/teaching a Stable Diffusion model (e.g. SD 1.5) a new person or style. It can only be run locally if you have a very powerful GPU, hence most people use Google Colabs to do it. I, for example, like TheLastBen's fast-dreambooth.
Stable Diffusion itself, though, can totally run on a regular PC even with a low-VRAM GPU, using launch flags like --medvram on an 8 GB card, which lets you generate images at a higher resolution without SD crashing.
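To make the flag bit concrete, here's what a low-VRAM launch line might look like, again assuming AUTOMATIC1111's webui is the frontend (flag names are that project's; other UIs use different mechanisms):

```shell
# --medvram trades speed for memory by shuttling model parts between
# CPU and GPU during generation; --lowvram is the more aggressive
# variant for cards with even less memory.
python launch.py --medvram
```

On Windows, the same flags usually go into the COMMANDLINE_ARGS line of webui-user.bat instead.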
Has the distilled Stable Diffusion that needs far fewer steps been released yet?
Could you do tests where you pose IRL and use image-to-image to get the exact pose you want?
SD 1.5 trained on anime stuff gives better results than 2.1.
Funny how I found out about Stable Diffusion through your channel, yet knew all this before watching your vids.
I want something that can create frame-by-frame 3D animation.
Did anybody else pause at 1:09 and read the INSANE text?
it's the navy seal copypasta lmao
Wonder how long until AI can write and draw a whole graphic novel.
mix chatgpt and stable diffusion = infinite graphic novel content = more $$$
@@imblank6161 You can make infinite graphic novels, sure, but they won't be worth reading, and people won't spend money on them. There is very little money in AI art for the end users; the money is all for the AI devs.
Can it run on AMD gpus?
no
Even more proof that us artists are being drained of our income faster and faster.
So... If I provide model input for characters in a story, could this AI build an animation from it?
Which AI? He talks about three different AIs in the video. None of them can do temporal cohesion yet; I guess you could try to make an animation from assets you generate, but you'd have to do it with whatever outside techniques you want to use.
not really an animation no
I do struggle with negative prompts on 2.1; I barely saw any meaningful difference.
what's your negative prompt
Doesn't it make sense that banning an AI from drawing what humans look like naked would make it worse at drawing humans? We're just making it harder on ourselves.
Talking about AI "art" while being sponsored by Masterworks... oh, the irony.
money is money
Nice Video 🌸😄🌸
Would it be possible to easily add back in the NSFW and live artist data that was censored?
Caught in 4k
@@enough2715 ?
Good question. I haven't tested SD 2.0 yet, but I heard in a MattVidPro video that there is a Google Colab of 2.1 that is apparently unlocked.
By training your own model, yes.
No way 5 fingers
woof.
Holy shit
Lol @ the navy seal copy pasta
The fact that you can't put in the name of an artist who is still alive is a big loss, in my opinion.
I think it was changed in 2.1
@@Askejm Was it? It depends on the consistency of results too; with all the changes to how prompts behave, it's hard to gauge whether artist tags will work properly anymore.
During my NovelAI tests there were plenty of times where artist tags from Danbooru wouldn't give proper results on their own, without additional prompts, and even then the results weren't that good. 2.1 is clearly focusing on other aspects of prompt handling, so I wouldn't be surprised if artist tags were left behind for a while; maybe in 2.2 or 2.3 we'll see better results with them.
@@sebas8225 After 2.0, the Stability AI has removed NSFW and Copyrighted images from training datasets.
@@ofulgor Is it possible to add them back in if we don't want to be censored?
@@CypherDND You can always use 1.5 or a custom model.
NFT is the future
ai generator is the future
do you see the pattern both will generate frauds then artists
First