Thank you for this tutorial! ❤ Do I need Automatic1111 Stable Diffusion installed to be able to install Forge? I have the oldest version of Automatic1111 installed, and I haven't used or upgraded it because I couldn't keep up with every new update and all the troubleshooting issues, since I have zero knowledge of programming 😢
Not sure if all of those work, but did you install them from the Extensions tab? Go to the Extensions tab, click Available, then click the Load From button; that will load them all. Search for an extension there: for example, I tried "ratio helper" in the search and it installed just fine when I clicked Install and restarted Forge.
I wonder if any of the Stable Diffusion UI makers (Forge, Automatic, ComfyUI etc.) have considered a method for capturing "recommended model settings" like you point out at 3:29, as hunting down a model's recommended settings is a work slow-down. Perhaps you could configure a model or KSampler template that acts as a quick preset per model. It would be kind of cool to have the option to trigger the preset on checkpoint load (but again it should be optional; not everyone would want that in all cases). If this already exists, someone let us know.
There is a preset-saving extension, so you can just save settings and give the preset a name similar to the model so you know what it's for, but many extensions have bugs since the recent updates.
You can read more about it here. I didn't play with them much in Forge, mostly only with the Canny ControlNet. Also keep in mind which version you are using: there are different forks of Forge now, and the main one is used for beta testing, so many things might not work! github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
What if you use both 1.5 and XL checkpoints? Do you have to keep manually changing the FreeU settings every time? Also, with HyperTile, should the tile size be half of the generated dimensions' longest side? What if I am using x2 hires fix, should it be half the size of the hires fix? What do the HyperTile swap size and depth settings do? And is there a way, like in SD.Next, to set HyperTile to automatic mode?
I don't use those settings enough to be able to give you more details. I know that for ControlNet you have to keep changing models manually; 1.5 is different from SDXL, so it needs different models and settings.
@@pixaroma Yes, that's to be expected for ControlNet, as 1.5 and XL use different models. That's easily done. What's not easily done is having to remember the precise FreeU settings for 1.5 and XL. Can't believe there is no preset option to load these on the fly. I'm also disappointed that there is no auto mode for the HyperTile tile size. Damn, all these different UIs should come together and make one ultimate UI.
Great video! I am planning to install Stable Diffusion locally as well, and a friend recommended Forge. Your tutorial seems very easy to follow; however, I have a question: do I need to install Python as well for this to work? Am I missing any step prior to installing Forge that isn't in the video? Thanks in advance.
First of all, I must say thank you. I started with your videos with the latest one, about Flux, and I am sticking around. Forge UI is fantastic! My only question: can I find log files of my prompts anywhere? It would be great to keep them.
Well, each prompt and its settings are saved in the PNG you generated, so if you drag a PNG you like into the PNG Info tab you can see the prompt and settings. For something more complex you probably need a script or an extension; on a quick search, an extension like this might do something similar. I didn't test it, but maybe it gives you some ideas: github.com/ThereforeGames/unprompted
@@CsokaErno I use XnView MP as my default image viewer. It has a meta-info tab on the right, so there is no need to import images anywhere; you can just copy, alt-tab, and paste the prompt and properties into your browser. Besides that, it's a really handy piece of software compared to the vanilla Windows image viewers.
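Since this thread is about pulling the prompt back out of a generated PNG, here is a minimal sketch of how those embedded settings can be read with only the Python standard library. It assumes the metadata sits in a plain tEXt chunk under a "parameters" keyword (which is how A1111-style UIs typically store it); the demo PNG below is a hand-built stand-in, not a real render.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Parse tEXt chunks (keyword -> text) from raw PNG bytes."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length
    return chunks

def make_text_chunk(key: str, text: str) -> bytes:
    """Build one tEXt chunk (used here only to fabricate the demo file)."""
    body = key.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return (struct.pack(">I", len(body)) + b"tEXt" + body
            + struct.pack(">I", zlib.crc32(b"tEXt" + body)))

# A tiny stand-in PNG (signature + tEXt + IEND) to demo the parser
demo = (b"\x89PNG\r\n\x1a\n"
        + make_text_chunk("parameters", "a cat, Steps: 20, Sampler: Euler a")
        + struct.pack(">I", 0) + b"IEND" + struct.pack(">I", zlib.crc32(b"IEND")))

print(png_text_chunks(demo)["parameters"])
```

Pointing the parser at a real generation instead of `demo` (by reading the file with `open(path, "rb")`) should surface the same string the PNG Info tab shows, assuming the UI wrote it as a tEXt chunk.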
Maybe look here: gist.github.com/ShMcK/d14d90abea1437fdc9cfe8ecda864b06 and aws.amazon.com/blogs/machine-learning/use-stable-diffusion-xl-with-amazon-sagemaker-jumpstart-in-amazon-sagemaker-studio/ — as I don't use AWS, I can't help.
Just updated to the latest Forge version, the one that can work with Flux, but I'm using only SDXL on my 8GB card. Every time I do inpainting or img2img the result has lower saturation than the original; is it just me? Assigning a VAE does not solve it 😢
There are a lot of bugs in the new version, so it will take a while for everything to get fixed. This one looks like a similar problem: github.com/lllyasviel/stable-diffusion-webui-forge/issues/1189 — and if you look at the list of open issues, there are around 600: github.com/lllyasviel/stable-diffusion-webui-forge/issues
So, in the arguments section where you put the dark theme I can add: --pin-shared-memory --cuda-malloc --cuda-stream For optimization, right? Thanks for the video!
Yes, I tried all of those as Forge suggested, but they didn't make it faster on my RTX 4090; slower, actually. Maybe they do better for you, but for me it was faster with no arguments.
It's in webui-user.bat: look for set COMMANDLINE_ARGS= and add them after the equals sign. Just like I added the dark theme, you add more, e.g. set COMMANDLINE_ARGS=--theme dark --cuda-stream
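For reference, here is a sketch of how the whole webui-user.bat might look with the theme plus the speed flags from this thread added. The surrounding lines mirror the stock A1111-style file; benchmark each flag, since as noted above they can actually be slower on some cards.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

rem Theme plus the optional memory/CUDA flags discussed in this thread;
rem remove any flag that slows generation down on your card.
set COMMANDLINE_ARGS=--theme dark --pin-shared-memory --cuda-malloc --cuda-stream

call webui.bat
```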
I'm new to this Stable Diffusion GUI. Experienced people, can you please tell me: is Forge WebUI better than Fooocus MRE? If yes, in what respects is it better? Thanks!
There should be a file called update.bat next to run.bat and environment.bat. I've had it there since installation, and you should have it too. Just be careful with updates so you keep a good stable version; check this video: ruclips.net/video/RZJJ_ZrHOc0/видео.htmlsi=rF-9wCmzResJiW3L
I am not sure. Can you join my Facebook group and show me some screenshots, or post there so I can take a look? Do you get any errors, and what does it look like?
Usually the extensions from Automatic are also on Forge, but I'm not sure all of them work; you can try and test. I don't usually use outpainting because it doesn't always do a good job; for that I prefer Photoshop's Generative Fill.
From your video with the purse, and the drinks can in the desert, I understood that Inpaint Background took account of, say the lighting, in the masked-out subject when creating a completely different background, as compared with a simple remove/replace background ignoring the masked area. Have I misunderstood? Does Photoshop Generative Fill allow a completely different background prompt, or only an extension of the existing image within a larger canvas?
@@johnclapperton8211 When you do it with inpainting, it looks at the surroundings to paint better, but it's not always perfect. In Photoshop, when you expand with the crop tool it fills automatically, but afterwards you can select the generated part and tell it with a prompt what you want in there.
@@pixaroma Thank you; I hope someone can answer. I don't have a machine with the required performance for a local installation, so that would help me a lot. Why am I asking? It's just for the seamless-pattern setting that exists in the models presented. This capability isn't offered right now in Fooocus, which is easily accessible with Colab.
Hi, my installed SD Forge doesn't have the update.bat file. Is there any way to update SD Forge without the file? Maybe by adding arguments to look for an update?
I wanted to ask: I have models and LoRAs in my Fooocus folder. Is it possible to copy and paste them into the appropriate Forge folders, or do I have to redownload them from Civitai? ORRR... is there a better way to have these models linked from my Fooocus folder to my Forge folder to save space? I'd rather not redownload them or copy the files over, so I don't fill up my hard drive.
I am posting a tutorial for Forge in a few hours, including how to link the Automatic1111 folder to Forge; you can probably do something similar with the Fooocus paths. I don't have Fooocus to test it, but it worked with the Automatic1111 path. Also, once you have the files, you just copy them to the right location; no need to redownload them.
Thank you for the reply! I wonder if linking the Fooocus folder to Forge will work; I guess it's worth a try. Any idea when your vid will be dropping, so I can keep an eye out for it? @@pixaroma
When I put my model control_v11p_sd15_openpose.pth in ControlNet and try to generate the image, I get the error "TypeError: 'NoneType' object is not iterable". My setup is OpenPose with the openpose_full preprocessor. Can you help me please?
I see you are using v1.5; do you get the same on SDXL? I got that error recently, not in ControlNet but when I used an extension at certain image sizes. Does it work at 1024x1024px? I got the error at other sizes, but it worked at 1024x1024px. I haven't used v1.5 since SDXL appeared.
So I tried with an SDXL ControlNet and I get the same error at certain sizes: for example it works at 1024x1024px or 1024x576px, but I get the error at 1200x672, 912x512, or 1024x816.
@@pixaroma Thank you for your answer. I tried all the sizes, 1024, 512, etc., and it does not work. The problem only appears when I use ControlNet: generating an image otherwise is no problem, but with ControlNet it's impossible to generate a picture.
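The failing sizes in this thread are exactly the ones with a side that is not a multiple of 64 (1200, 672, 912, 816), while the working ones (1024, 576, 512) all are. A quick sketch of that check, with a helper that snaps a dimension to the nearest multiple of 64; the divisible-by-64 rule itself is just the empirical one from this thread, not something guaranteed by ControlNet.

```python
def round_to_64(n: int) -> int:
    """Snap a dimension to the nearest multiple of 64 (ties round up)."""
    return max(64, ((n + 32) // 64) * 64)

# Sizes from the thread: works at 1024x1024 and 1024x576, errors elsewhere
for w, h in [(1024, 1024), (1024, 576), (1200, 672), (912, 512), (1024, 816)]:
    if w % 64 == 0 and h % 64 == 0:
        print(f"{w}x{h}: ok")
    else:
        print(f"{w}x{h}: try {round_to_64(w)}x{round_to_64(h)}")
```

Running it suggests 1216x704 instead of 1200x672, 896x512 instead of 912x512, and 1024x832 instead of 1024x816, which matches the sizes that generated without the error above.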
I use Forge's Deforum tab to create animations. I would like to know how to create the animations within a boundary. I do projection mapping, so I would like to keep the animations within the map of my house. Would you know how to accomplish this? I have a PNG map file that I created but I'm unsure what to use it with. TIA
Sorry, I haven't played with Deforum yet, so I can't help there. I like to create HQ images, and video and animation aren't quite there yet; I am waiting for an improvement before I jump into it.
great video! thanks very much. quick question, do you use tts for narration? If so, it's incredible, may I ask which one? I've been trying to find something decent for my videos. Cheers :)
Mine is off also; it can be activated with some command in the bat file. I tried it, but it made my generation slower, not faster, so I left it deactivated. It appears as a suggestion in the cmd window when you start, and there is also a command I think; I don't remember it now, I just know that with it activated it was slower for me.
You solved the problem I was actually worried about the output speed. So I don't have to worry about CUDA but the internet connection, it should be sometimes fast sometimes slow which affects the output speed. Thanks for the tutorial above.
@@pixaroma I have 4GB of VRAM, and generating with SD realisticVisionV51 works perfectly. As you suggested, I installed Juggernaut_X_RunDiffusion_Hyper, but I still have the same issue.
The one you mention seems to be based on SD 1.5; SD 1.5 models are smaller, usually around 2GB, so you can run it. But an SDXL base model is 5-6GB, and your 4GB of memory might not be able to handle that size, so either it takes a long time to load, like a few minutes, or it crashes. So maybe work with SD v1.5 models for a while, or find a smaller SDXL model if you can.
My built-in controlnet's IP-Adapter is missing its models, and thus, doesn't work. Any ideas? I wanted to install them manually, but the library is different, and so are the files.
Forge still has some problems with control net, check this discussion maybe it helps github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178#discussioncomment-8572388
1. Is there any performance drop if I don't install it on C drive? 2. My C drive is SSD and D drive is HDD. Can I still install on D drive? Will I face any performance difference or issues?
I tested on 6GB and it worked; only ControlNet gave me some crashes, but everything else worked for me, faster than Automatic, which took ages. It's worth a try, and if you don't like how it works you can just delete the folder with all the files. I believe you can use most of the functions; if Automatic1111 works, this should work even faster.
I ran Forge on a 6GB RTX 2060, so it should work. Another solution would be ComfyUI, since Forge will not be updated anymore, and once they update Automatic1111 that will also be a good solution.
Trying to use symbolic links for Forge since I have about 80 models and 180 loras, but it just doesn't seem to see the folder. Wondering if you could help on that.
It doesn't work if you just copy and paste the path. Make sure you reverse the slash symbol: instead of \ it should be /. Try replacing that symbol throughout the path and see if it works, and you need to remove the @REM in front of the line. You edit this in webui-user.bat.
Hmm, I replied a few hours ago but my comment seems to be missing. How did you try to add the path, with copy and paste? You need to switch the backslash \ to a forward slash /; when you copy the path it's one symbol, but in the bat it needs to be the other. Also remove the @REM in front.
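The slash swap described above is just a character replacement; here is a tiny sketch (the folder in the example is hypothetical, so substitute your own model path):

```python
def to_bat_path(win_path: str) -> str:
    """Convert a copied Windows path (backslashes) to forward slashes."""
    return win_path.replace("\\", "/")

# Hypothetical A1111 models folder, as copied from Windows Explorer
print(to_bat_path(r"D:\AI\automatic1111\models\Stable-diffusion"))
```

The printed form is what goes after the equals sign in webui-user.bat, with the @REM removed from the front of the line.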
How do I add the "ip-adapter_face_id_plus" preprocessor for IP-Adapter? It's not in Forge. "ip-adapter_face_id_plus" works better than "InsightFace+CLIP-H (IPAdapter)".
@@Diffusion-oi6ux I have mine in D:\Forge\webui\models\Stable-diffusion — basically, wherever you installed it, the webui folder has models and then Stable-diffusion, and that is where you put the models. Also, use safetensors models instead of ckpt, as they are safer.
@@pixaroma Can you make a video about hands? You replied to me very fast, for which I am very thankful and surprised too; you are doing great. A few people say AI art is very, very bad, and a few like us are excited, because we know there is just no boundary. In 5 years, full-length AI movies may be possible, or it depends.
@@Diffusion-oi6ux For hands I usually just use inpainting, like I did in this video with the geisha's hands. I used the ADetailer extension in A1111 for a while, but for me it was faster with inpainting.
Hey! I have installed Forge, and any files that need to be in the Stable-diffusion folder work great. I watched a few of your videos, and every time I download an SVD file and put it into the SVD folder, then reload and run locally, I do not get that tab section. Any advice? I have tried reinstalling and other Stable Diffusion GitHub downloads, and nothing makes the SVD tab appear. I really want to make image-to-video. Any advice? Or anyone reading this comment?
There is an old version that has SVD but doesn't have Flux and other things, so you can install Forge in a separate folder and go back to that version.
Create a bat file with any name you want, something like rollback.bat, and add this text inside:
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
pause
When you run that rollback.bat it will load that specific version.
Then create a rollforward.bat and add this text inside:
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout main
pause
That will take you back to the main branch so you can get the latest updates.
After that, run rollback.bat to return to the specified point, then start Forge and check that it works. Run rollforward.bat to return to the latest state (only if you have rolled back).
Some people had problems with creating the bat files, so I shared them here as well. Get these 2 bat files from Google Drive: drive.google.com/drive/folders/1bS-6HdLl5AH3Rbd2wHUm_nILUOnu9hmJ?usp=sharing
If it says they are dangerous, it's because they are bat files; you can open them with Notepad and they contain the same code from the video, so they are not dangerous files.
Download and place those two files in the main folder of your Forge UI, where the run and update files are. Double-click rollback.bat and it will go back to a previous commit. Press Enter to exit that screen after it finishes, then double-click run.bat and it will start with that old commit.
To get back to the current version, run rollforward.bat, press Enter to exit that screen after it finishes, then double-click run.bat and it will start with the new commit. The commit I put in the rollback is the version of Forge that has SVD but doesn't have Flux and the new updates; it is an older version.
Hello, I downloaded the 7z file and extracted it using 7-Zip, but it's been 1.5 hours and now 7-Zip says it will take 7 more hours to extract. I stopped extracting; something must be wrong? I have an SSD with a 6000 write speed, 32 GB of RAM, and 12 GB of VRAM. I invested in a new rig for AI, so I'm hoping this is not a hardware issue so soon. Any advice would be much appreciated.
Forge has basic prompt-from-image, but it's not very accurate. In the img2img tab, under Generate, there is a paperclip icon; the first time it will download a model, but after that it should work faster, and it gives a basic description of the image you uploaded in img2img.
Either your video card is not good enough or Forge doesn't recognize it. I am a designer, not a coder, but you can try adding the following arguments in webui-user.bat to see if it works. It needs at least 6GB of VRAM and prefers Nvidia cards, but try it anyway: set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
I see someone already posted that in the bugs area; you can watch it to see if it gets any response if nothing else works: github.com/lllyasviel/stable-diffusion-webui-forge/issues
I don't think so. It's a different interface based on A1111, but made by different developers, so it's not an update, it's a different UI. Also, it seems to work only with Nvidia cards.
@@pixaroma Thanks! Yeah, I do have an Nvidia card, and it seems like everyone is using this one. My web UI looks really outdated and has many things missing. Looks like I have to install all over again? Should I uninstall the SD web UI?
@@ZeroCool22 I think there were some problems with ADetailer and some extensions. With ControlNet, for example, it only works for me if the image width and height are divisible by 64. But just use it for the things that work, and work faster, and use A1111 or something else for the things that don't :)
Any time I put the seed and prompt in the saved image name, I get this error when generating a new image: OSError: [Errno 22] Invalid argument: 'outputs\ — why? How can I fix it?
I don't know; it has error after error. I switched completely to ComfyUI now. You can check the Forge page and its issues; maybe you can find the error there: github.com/lllyasviel/stable-diffusion-webui-forge/issues
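A common cause of that [Errno 22] on Windows is a character in the seed/prompt file-name pattern that Windows forbids in file names (for example : ? " < > |), which a prompt can easily contain. The sketch below is only an illustration of the kind of cleaning such a pattern needs, not Forge's actual saving code; the practical fix is checking your image filename pattern setting for these characters.

```python
import re

# Characters Windows does not allow in file names
_ILLEGAL = '<>:"/\\|?*'

def sanitize_filename(name: str, replacement: str = "_") -> str:
    """Replace characters that are invalid in Windows file names."""
    cleaned = re.sub("[" + re.escape(_ILLEGAL) + "]", replacement, name)
    # Trailing spaces and dots are also invalid on Windows
    return cleaned.rstrip(" .")

print(sanitize_filename('123456789-a cat: "studio portrait"?'))
```

A name built from a seed plus a raw prompt survives this cleaning, while the raw version would trigger exactly the OSError quoted above when the file is written.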
Sorry, I only use SDXL models. With SDXL, the LoRA I tested worked; I can't tell you if it will work with 1.5, since I don't have any 1.5 models or LoRAs. I haven't used them since SDXL appeared.
Update: Check this video How to Install Forge UI & FLUX Models: The Ultimate Guide
ruclips.net/video/BFSDsMz_uE0/видео.html
Here are some useful resources for Stable Diffusion:
Download Stable Diffusion Webui Forge from: github.com/lllyasviel/stable-diffusion-webui-forge
Download Juggernaut XL version 9 from: civitai.com/models/133005/juggernaut-xl?modelVersionId=348913
More info on FreeU:
github.com/ChenyangSi/FreeU
Download more ControlNet SDXL models huggingface.co/lllyasviel/sd_control_collection/tree/main
Extensions used github.com/ahgsql/StyleSelectorXL and github.com/thomasasfk/sd-webui-aspect-ratio-helper
If you have any questions you can post them in Pixaroma Community Group facebook.com/groups/pixaromacrafts/
or Pixaroma Discord Server discord.gg/a8ZM7Qtsqq
Will this work on Mac M1?
Sorry, I don't have a Mac to test it, but I didn't see anything that says it supports Mac, so probably not yet.
@@pixaroma OK tnx
You should probably put these links in the video description. It's way more likely to be noticed.
When I open run.bat it says it couldn't install pip. Can you please help?
Thank you for showing more than just the installation like customizing the ui and settings :-)
Best video on SD Forge on YouTube. Great contribution to the community!
I have been looking for a tutorial like this for months. You have a real talent for this tutorial style and I HIGHLY encourage you to keep making these videos. Information is packed and logically flowing from one point to the next. Subscribed!
I like A1111, but I have found better performance in Forge. I must say you are very good at explaining. Excellent video!
Thank you! Yeah, for me Forge is faster and didn't crash like A1111 :)
Does it crash more or less often? I've been using A1111 for a while now, but it feels like it's been crashing more and more. Especially with SDXL models.
Since I switched to Forge it didn't crash at all; it only crashed when I used ControlNet with an image size not divisible by 64 @@edouarddubois9402
@@pixaroma When you say image size, you mean the actual resolution?
The width and height of the image. Sometimes I got that error when the size was not divisible by 64, but mostly when I used some extensions @@edouarddubois9402
I have to thank you very much for your tutorials. The calm nature of the presentation makes it very easy for the beginner to follow every time, you never feel overwhelmed.
Kind regards and all the best from Germany.
HOLY COW! I've been using A1111 (and now Forge) for a year, so by now I know most of these "hacks", but I wish a video so clear and so thorough existed when I was starting my journey. I even picked up a new nugget here and there. Bravo! Subscribed. And Saved.
Thank you ☺️
Great walkthrough! Just switched over from EasyDiffusion and Forge is a massive improvement in regard to generation speed.
We need more guides like these
Your guides are amazing! Thanks so much. I just learned of this today and I'm already making great stuff.
Watched 10 videos about installing SD Forge, but yours is the best: quick, to the point, super useful tips for beginners, and you even say what to do if it crashes.
Thank you, glad it helped ☺️
Do you know the minimum hardware requirements?
An Nvidia video card; tested with 6GB of VRAM. It may possibly work with 4GB of VRAM, but I'm not sure.
Glad I found this, very concise and well thought out tutorials, just what I needed. Thanks
This was exactly what I needed to get started. This is perfection and I can't thank you enough for your work. Bless you man.
Finally I found the best tutorial channel on YT. Thanks a lot!!!
Thank you very much! Loved how clear you brought everything across! Definitely am gonna hang around here :)
Duuuude!! So detailed, thank you!! Not hush hush, like well done on your style of explaining. Amazing
Great info. Also a quick tip: below the image there is a button to upscale using hires fix, just a quicker way to do it. 09:25 I think that option is new in Forge; it wasn't in A1111.
Thank you! Yeah, I didn't notice that :) good tip
Thanks for your video! I installed Forge yesterday (no stranger to A1111 here), but thought I'd check out a video or two. For preferred defaults, I've been editing ui-config.json; didn't realise there was a more straightforward method via Settings, haha! Dark mode is so much easier on the eye. I set it at browser level so that all pages appear dark; then Display Settings > High Contrast in Windows (7, 8, 10, 11) gives dark mode OS-wide. Thanks for the heads-up on SDXL styles and how their extra prompting works. Two more extensions I use that might help are CivitAI Browser+, which integrates CivitAI into A1111/Forge, and ADetailer (After Detailer), an automatic in-paint utility that tidies up facial features; I find it better than GFPGAN and CodeFormer. Your 7-second image generation near the start of the video took 17 seconds for me with the same settings, on an RTX 3070. Your covering file naming was very helpful too, as I wanted to add the denoising value to the file names and use a suffix instead of file numbers :-)
Glad it helped :) I use an RTX 4090, that's why the generation was faster. Regarding styles, check the latest videos; I have one with 260 art styles :)
@@pixaroma Thanks. I have found what you meant + subscribed
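For anyone curious about the ui-config.json route mentioned above: the file is a flat JSON map of "tab/Control label/property" keys. The exact key names depend on your UI version, so treat this as an illustrative sketch and back the file up before editing:

```json
{
  "txt2img/Sampling steps/value": 30,
  "txt2img/Width/value": 1024,
  "txt2img/Height/value": 1024,
  "txt2img/CFG Scale/value": 5.0
}
```

The Settings-based defaults method shown in the video is the safer option, since a typo in this file can stop the UI from loading its controls.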
Most helpful video on AI to ever exist; give this guy an award please. Very helpful, saved me hours, thank you :)
Hello, there are some tabs on your installation that I don't seem to have, like Train, SVD, and Z123. How do you install those?
Those were in older versions of Forge from a few months ago; they are not in the new Forge. You only get them if you downgrade to a really old version that doesn't have Flux and the new stuff, but has the old stuff.
Incredible and informative! Well done. Thank you so much for the video.
Holy shit!!! I didn't know where the ControlNet files were supposed to go when trying to use them in Forge. The vid helped a lot, thanks!!!!
I have a problem: I install Forge, extract the files, do the update, and it still doesn't start. I press 'run' and it even takes me to the command prompt, but after I press any key to continue, the command prompt just disappears without any message.
It is true that I extracted Forge onto my desktop, but that shouldn't be that much of a deal. Or should it?
Sometimes not all the files are extracted; I used WinRAR to extract, and I put it on a drive. I also have long paths enabled in Windows. When it's on the desktop there is a long path from C:\Users and so on down to the desktop, so maybe that can be a cause.
Hello, really great tutorial. I have a question: I want to use the 4xVALAR upscaler but have no idea where to put it. Could you please tell me, if you have an idea, exactly which folder it should go in?
Go to your webui\models folder and create a folder named ESRGAN there, so you have the path webui\models\ESRGAN, and put the upscaler model in that ESRGAN folder. That worked for me; hope it works for you.
It should be noted, for those who stumbled upon this like I did without knowing any better, that this method only works for nVidia graphics cards. WebUI uses CUDA, which is a proprietary API specifically for nVidia...meaning if you don't have their drivers, you can't natively run Web UI.
Luckily there are forks that exist that do work for AMD Radeon cards, but you'll have to jump through a few more hoops than what is shown here in order to install, and it probably won't run quite as fast as it does on nVidia cards.
Love your video! How do I run Forge on Google Colab Pro? Do I just change my Automatic1111 notebook, or do I need something else?
Sorry, I can't help with Colab; usually there are Colabs made just for Forge, but I'm not sure anyone has made one for the latest versions.
@pixaroma well, thank you very much, I will return to ComfyUI. And it says Automatic1111 is not taken care of yet; yes, many errors.
Wow Thanks for putting in the time to make this!! Is there any guide on using the Train tab (embedding, hypernetwork, train)?
I didn't play with that function yet; training always seems complicated to do. I tried it on A1111 too, but I don't always get good results. It needs good settings, good images, captions; too many things involved, it seems. And now I saw an announcement that Forge is not going to be updated anymore, like it's used more for tests or something.
Very interesting. Good to know there is also another interface
fantastic video. But for some reason my changed parameters wont save. the "steps" parameter does, but not the sampling method, schedule type nor width and height...
It still has bugs since it was updated to the new version, so there are still things that don't work.
@@pixaroma thanks for answering
6:21 Wow, didn't know about that, I thought the only way to change it was to edit it manually in some file I don't remember now.
Still, I would like it to have different defaults for each checkpoint; is that possible?
Try this to see if it still works; they keep updating Forge, so it still has bugs ruclips.net/video/89YRfqArm-Y/видео.htmlsi=kGI45gnzc7iYeFHX
FLUX has been awesome!!!!!
This was VERY helpful, thanks"
Thanks for your video. I have a question: my version doesn't have the training tab. How can I add it? Thank you in advance.
I don't know how to add it to the new version, but you can downgrade to the older version that had it.
Very helpful, I will be watching all the videos in this playlist, thanks! BTW, what do you use for your voice? It's great (if it's not a trade secret, that is).
The voice is from VoiceAir AI; they get it from ElevenLabs, from what I know. I got a lifetime deal a while back.
Thanks for your tutorial. In my version, I don't have the SVD tab. Can you help me?
The new version doesn't have it anymore, only old versions do.
I do not see the SDXL styles that you show in the video at time 18:12 - how do I enable that.
I explain it in this video; it's a file I created that you can download and put in the right folder ruclips.net/video/UyBnkojQdtU/видео.html
@@pixaroma got it
Nice video and tricks bro, thanks!
...20:06 🤣😂😅
✨👌🙂🤗🙂👍✨
Thank you for this tutorial! ❤ Do I need Automatic1111 Stable Diffusion installed to be able to install Forge? I have the oldest version of Automatic1111 installed and I haven't used or upgraded it, as I couldn't keep up with every new update and the other troubleshooting issues, since I have zero knowledge of programming 😢
You don't need to have it installed for Forge to work; it's a different UI similar to Automatic1111, you just install it in a different folder.
excellent guide! subscribed!
How do I use A1111 extensions? I'm trying, but it auto turns them off. And the integrated extensions are 💩. Help
Not sure if all of those work, but did you install them from the Extensions tab? Go to the Extensions tab, click on Available, click the Load From button to load them all, then search for an extension. For example, I searched for "ratio helper" and it installed just fine when I clicked Install and restarted Forge.
i wonder if any of the Stable Diffusion UI makers (Forge, Automatic, ComfyUI etc.) have considered a method for capturing 'recommended model settings' like you point out at 3:29, as going out and hunting down a model's recommended settings is a work slow-down. Perhaps you could configure a 'model or KSampler template' as a quick preset based on the model. It would be kinda cool to have the option to trigger the preset on checkpoint load (but it should be optional, not everyone would want that in all cases). If this already exists, someone let us know.
There is a preset-saving extension, so you can just save settings and give them a name similar to the model you use so you know what they're for, but many extensions have bugs since the updates.
Check the extension in this video to see if it still works ruclips.net/video/89YRfqArm-Y/видео.htmlsi=1va366VyvAt6s1f8
@@pixaroma You rock! Thanks for those informative replies! Will check that out. -- Updated: yep, the config preset still appears to work!
This was excellent!!!!!
You should definitely meet your goal with this video!!
Thank you ☺️
@@pixaroma do you use an app to get the time stamps?
Like the chapters on the RUclips? I use tubebuddy
quick question what's with ipadapters, I cannot acces any preprocesors there are only 3 encoders available, am I missing something?
You can read more about it here; I didn't play with them in Forge, only with the Canny ControlNet mostly. Also keep in mind the version you are using: there are different forks of Forge now, the main one is used for beta testing and many things might not work! github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
What if you use both 1.5 and XL checkpoints? Do you have to keep manually changing the FreeU settings every time? Also, with HyperTile, should the tile size be half of the generated dimensions' longest side? What if I am using x2 hires fix, should it be half the size of the hires fix? What do the HyperTile swap size and depth settings do? And is there a way, like in SD.Next, to set HyperTile to automatic mode?
I don't use those settings much, so I can't give you more details. I know that for ControlNet you have to manually keep changing models; 1.5 is different from SDXL, so it needs different models and settings.
@@pixaroma Yes, that's to be expected for ControlNet, as 1.5 and XL use different models. That's easily done. What's not easily done is having to remember the precise FreeU settings for 1.5 and XL. Can't believe there is no preset option to load these on the fly. I am also disappointed that there is no auto mode for the HyperTile tile size. Damn, all these different UIs should come together and make one ultimate UI.
Great video! I am planning to install Stable Diffusion locally as well, and a friend recommended Forge. Your tutorial seems very easy to follow; however, I have a question: do I need to install Python as well for this to work? Am I missing any steps from the video prior to installing Forge? Thanks in advance
No, Forge installs everything it needs with its own embedded Python environment; you don't need to install it separately.
@@pixaroma Thank you, it's working like a charm!
First of all, I must say thank you. I started with your videos with the latest one, about Flux, and I am stuck here. Forge UI is fantastic! My only question is if I can find log files about prompts? It would be great to keep them.
Well, each prompt and its settings are saved in the PNG you generated, so if you drag a PNG you like into the PNG Info tab you can see the prompt and settings. For something more complex you probably need a script or an extension; on a quick search, maybe an extension like this could do something similar. I didn't test it, but maybe it gives you some ideas: github.com/ThereforeGames/unprompted
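For a scripted prompt log, the settings text that A1111-style UIs embed in each PNG (the same text PNG Info shows) conventionally sits in a tEXt chunk keyed "parameters". Here is a minimal stdlib-only sketch that pulls it back out; treat the chunk key as a convention of those UIs, not a guarantee for every image:

```python
import struct

def read_png_parameters(path):
    """Return the 'parameters' tEXt entry that A1111-style UIs
    write into generated PNGs, or None if it's absent."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte length, 4-byte type, body, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            if key == b"parameters":
                return value.decode("latin-1")
        pos += 8 + length + 4  # skip body plus CRC
    return None
```

Looping this over the files in the outputs folder would give a rough text log of every prompt and its settings.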
@@pixaroma Genial, thank you!
@@CsokaErno I use XnView MP as my default image viewer. It has meta-info tab on the right, no need to import images anywhere, you can just copy, alt-tab and paste prompt+properties into your browser. Besides that, it's a really handy piece of software compared to vanilla windows image viewers.
Best tutorial ever
Is it possible to set up all of this on AWS? Could you please make a video? I am looking for a sketch-to-image model.
I don't use AWS, but I saw online that it is possible; check this article, maybe it helps: stable-diffusion-art.com/aws-ec2/
Could you please tell if this can also be run on AWS sagemaker?
Maybe look here gist.github.com/ShMcK/d14d90abea1437fdc9cfe8ecda864b06 aws.amazon.com/blogs/machine-learning/use-stable-diffusion-xl-with-amazon-sagemaker-jumpstart-in-amazon-sagemaker-studio/ — as I don't use AWS, I can't help.
Just updated to the latest Forge version, the one that can work with Flux, but I'm using only SDXL on my 8GB card: every time I do inpainting or img2img, the result has lower saturation than the original. Is it just me or what? Assigning a VAE does not solve it 😢
There are a lot of bugs in the new version, so it will take a while for them all to get fixed. This looks like a similar problem: github.com/lllyasviel/stable-diffusion-webui-forge/issues/1189 — and if you look at the list of open issues, there are like 600: github.com/lllyasviel/stable-diffusion-webui-forge/issues
@@pixaroma thank you for your answer!
I have a problem launching the Web UI (1:42): "Found no NVIDIA drivers on your system..." Am I able to run it on an RX 580 8GB?
I think it only works with Nvidia video cards for now; that is why it says it didn't find a driver.
Great Video
So, in the arguments section where you put the dark theme I can add:
--pin-shared-memory
--cuda-malloc
--cuda-stream
For optimization, right?
Thanks for the video!
Yes, I tried all those as Forge suggested, but they didn't make it faster on my RTX 4090, slower actually. Maybe they do better for you, but for me it was faster with no arguments.
@@pixaroma gonna try it on my 4080.
It's not working. Is it really the web-ui.bat file where I should put the arguments?
It's in webui-user.bat; look for set COMMANDLINE_ARGS= and add it after the equals sign. Like I added the dark theme, you add more: set COMMANDLINE_ARGS=--theme dark --cuda-stream
I'm new to this Stable Diffusion GUI. Experienced people, can you please answer: is Forge WebUI better than Fooocus MRE? If yes, then in what ways is it better? Thanks!
You can have both installed and play around; just put each in a different folder. Forge has more options and extensions than Fooocus, from what I know.
I don't have an update.bat file. Where do I find it? I can run Forge just fine but have been trying to find out how to update.
It should be next to run.bat and environment.bat, a file called update.bat. I've had it there since installation; yours should have it too. Just be careful with updates so you keep a good stable version; check this video ruclips.net/video/RZJJ_ZrHOc0/видео.htmlsi=rF-9wCmzResJiW3L
Great tips, thanks!
Thank you very much! I tried it. It can generate and download images, but the window that shows the generated image does not work. What should I do?
I am not sure. Can you join my Facebook group and maybe show me some screenshots, or post there so I can take a look? Do you get any errors, or what does it look like?
Go to Settings and the paths for saving. Set the save file paths (output dirs) to full paths, like C:\pathtoyourimagedir
It worked! Thank you very much!!@@Dark_Lobster
Great job keep up the good work
I have A1111, and now I want to install Forge, but Forge uses CUDA 12.1 while I have 11.8 installed for A1111. Should I uninstall CUDA 11.8?
Install it in a different folder; Forge is portable, so it creates its own environment with what it needs.
Is the inpaint background extension available in Forge?
Usually extensions from Automatic1111 also work on Forge, but I'm not sure if all of them do; you can try and test it. I don't usually use outpainting because it doesn't always do a good job; for that I prefer Photoshop generative fill.
From your video with the purse, and the drinks can in the desert, I understood that Inpaint Background took account of, say, the lighting in the masked-out subject when creating a completely different background, as compared with a simple remove/replace background that ignores the masked area. Have I misunderstood?
Does Photoshop Generative Fill allow a completely different background prompt, or only an extension of the existing image within a larger canvas?
@@johnclapperton8211 When you do it with inpaint, it looks at the surroundings to be able to paint better, but it's not always perfect. In Photoshop, when you expand the canvas with the crop tool it fills automatically, but afterwards you can select the generated part and prompt for what you want in there.
Thank you so much. Can it be installed in Colab, like Fooocus?
I am not sure, maybe someone else can answer that
@@pixaroma Thank you, I hope someone can answer. I don't have a machine with the required performance to do a local installation, so that would be a great help. Why am I asking? It's just for the seamless pattern setting that exists in the models presented; this capability isn't offered right now in Fooocus, which is easily accessible with Colab.
Hi, my installed SD Forge doesn't have the update.bat file. Is there any way to update SD Forge without the file? Maybe by adding arguments to look for an update?
It should be there next to run.bat, in the folder you extracted, not in the webui folder.
I wanted to ask: I have models and LoRAs in my Fooocus folder. Is it possible to copy and paste them to the appropriate Forge folders, or do I have to redownload them from Civitai? Or, is there a better way to have these models linked from my Fooocus folder to my Forge folder, to help save space? I'd rather not redownload or copy the files over so I don't fill up my hard drive.
I am adding a tutorial for Forge in a few hours, including how to link the Automatic1111 folders to Forge; you can probably do something similar with the Fooocus paths. I don't have Fooocus to test it, but it worked with the Automatic1111 paths. Also, once you have the files, you can just copy them to the right location, no need to redownload them.
Thank you for the reply! I wonder if it will work linking the Fooocus folder to Forge. I guess it is worth a try. Any idea when your video on that will be dropping, so I can keep an eye out for it? @@pixaroma
I just added it like an hour ago; look for the thumbnail with an engine and "under the hood" in the title.
When I put my model "control_v11p_sd15_openpose.pth" in ControlNet and try to generate the image, I get the error "TypeError: 'NoneType' object is not iterable". My setup is OpenPose with the Openpose_full preprocessor. Can you help me please?
I see you are using v1.5; do you get the same on SDXL? I got that error recently, not in ControlNet but when I used an extension at a different image size. Does it work at 1024x1024px? I got that error at other sizes, but it worked at 1024x1024px. I haven't used v1.5 since SDXL appeared.
So I tried with an SDXL ControlNet and I get the same error if I use certain sizes; for example, it works at 1024x1024px or 1024x576px, but I get that error if I use 1200x672, 912x512 or 1024x816.
It seems the image size needs to be divisible by 64 to work.
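If you want to sanity-check a size before generating, the divisible-by-64 rule from this thread is easy to script; this is just a sketch of the rule, not anything Forge itself exposes:

```python
def snap_to_64(width, height):
    """Round a size down to the nearest multiple of 64, the
    granularity that ControlNet in Forge seemed to need here."""
    return (width // 64) * 64, (height // 64) * 64

# Sizes from the thread: 1200x672 errored, 1024x1024 worked.
print(snap_to_64(1200, 672))   # -> (1152, 640)
print(snap_to_64(1024, 1024))  # -> (1024, 1024)
```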
@@pixaroma
Thank you for your answer. I tried all the sizes, 1024, 512, etc., and it does not work. The problem is only when I want to use ControlNet; generating an image otherwise is no problem, but with ControlNet it's impossible to generate a picture.
@@REUBEUCOP75 maybe you can report it on their page, at issues github.com/lllyasviel/stable-diffusion-webui-forge/issues
I use Forge's Deforum tab to create animations. I would like to know how to create the animations within a boundary. I do projection mapping, so I would like to keep the animations within the map of my house. Would you know how to accomplish this? I have a PNG map file that I created but am unsure what to use it with.
TIA
Sorry, I haven't played with Deforum yet, so I can't help there. I like to create HQ images, and video and animation aren't quite there yet; I am waiting for an improvement before I jump into it.
Thanks for the tutorial. I have many models in Stable Diffusion. Can I use them in Forge UI?
Yes, you can use them just like in other interfaces, if they are in the right folder, or if your settings are changed so Forge can take them from the folder you put them in.
Great video! Thanks very much. Quick question: do you use TTS for narration? If so, it's incredible; may I ask which one? I've been trying to find something decent for my videos. Cheers :)
It's called VoiceAir; they have the voices from ElevenLabs.
thanks alot! I'll have a look at it@@pixaroma
Greetings, it shows "CUDA stream activated: False" in cmd. Does this affect Stable Diffusion? If I have to activate CUDA, how do I do that?
Mine is off too. It can be activated with a command in the bat file; I tried it, but it made my generation slower, not faster, so I left it deactivated. It appears as a suggestion in the cmd window when you start, and there is a command for it, I think; I don't remember it now, I just know that with it activated, it was slower for me.
You solved the problem; I was actually worried about the output speed. So I don't have to worry about CUDA, but about the internet connection, which is sometimes fast and sometimes slow and affects the output speed. Thanks for the tutorial above.
As I'm trying to generate art with the Juggernaut model, the UI shows it's stuck in queue. Any solution?
Do you have enough vram on your video card? Maybe try the hyper version of juggernaut that doesn't need so many steps to generate
@@pixaroma I have 4GB of VRAM. When I generate with SD realisticVisionV51 it works perfectly. As you suggested, I installed Juggernaut_X_RunDiffusion_Hyper, but still the same issue.
The one you mention seems to be based on SD 1.5; the SD 1.5 models are smaller, usually around 2GB, so you can run them. But when you run an SDXL base model that is 5-6GB, your 4GB of memory might not handle that size, so either it takes a long time to load, like a few minutes, or it crashes. So maybe work with SD v1.5 models for a while, or find a smaller SDXL model if you can.
@@pixaromathank you for the assistance 🤗
Hi, what browser do you use? Cheers in advance
I use chrome, but should work in any browser
what AI do you use for voice generation?
VoiceAir Ai
@@pixaroma ty
My built-in controlnet's IP-Adapter is missing its models, and thus, doesn't work. Any ideas? I wanted to install them manually, but the library is different, and so are the files.
Forge still has some problems with control net, check this discussion maybe it helps github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178#discussioncomment-8572388
I have Forge installed via Pinokio but it doesn't look anything like this. Can I install this standalone version on a drive other than my C: drive?
You can install it on any drive; I have mine on D. You just create a folder where you want, put the installation files there, and run it.
@@pixaroma Cheers! I'll give it a go, as no tutorials seem to match the Pinokio version
1. Is there any performance drop if I don't install it on C drive?
2. My C drive is SSD and D drive is HDD. Can I still install on D drive? Will I face any performance difference or issues?
Thanks again for the video! I have a question. Will this model work on my video card if it is only 8 GB? and if not, what options can you recommend?
I tested on 6GB and it worked; only ControlNet gave me some crashes, but other things worked for me, faster than Automatic1111, which took ages. It's worth a try, and if you don't like how it works you can just delete the folder with all the files. I believe you can use most of the functions; if Automatic1111 works, this should work even faster.
Wow! Thank you so much, I'll experiment. @@pixaroma
you only had to watch the video for 30 seconds and your question was answered holy shit
@@schinie3777 That question is not for this video!
I ran Forge on a 6GB RTX 2060, so it should work. Another solution would be ComfyUI, since Forge will not be updated anymore; and once they update Automatic1111, that will also be a good solution.
Trying to use symbolic links for Forge, since I have about 80 models and 180 LoRAs, but it just doesn't seem to see the folder. Wondering if you could help with that.
It doesn't work if you just copy and paste the link; make sure you reverse the slash symbol — instead of \ it should be /. Try replacing the symbol in the whole path and see if it works, and you need to remove the @REM in front of the line. You edit webui-user.bat.
Hmm, I replied a few hours ago but my comment seems to be missing. How did you try to add the path, with copy-paste? You need to switch the backslash to a forward slash; when you copy the link it has one sign, but the bat file wants the other. Also remove the REM in front.
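The slash flip being described is trivial to do programmatically; a tiny sketch (the path is just an example, and the exact argument it goes into in webui-user.bat depends on your setup):

```python
# A Windows path copied from Explorer uses backslashes; the bat file
# reportedly wants forward slashes, so flip them before pasting.
copied = r"D:\AI\models\Stable-diffusion"
fixed = copied.replace("\\", "/")
print(fixed)  # -> D:/AI/models/Stable-diffusion
```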
Where should we put lora files? There is no lora folder in models
You should have a folder for LoRAs; look at this video for how I download them and where I put them ruclips.net/video/q5MgWzZdq9s/видео.htmlsi=nKX2enJ7KPEAoGIF
How do I add the "ip-adapter_face_id_plus" preprocessor for IP-Adapter? It's not in Forge. "ip-adapter_face_id_plus" works better than "InsightFace+CLIP-H (IPAdapter)".
Someone said the names are different; check the discussion on this page github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
You are not installing it on the C drive? So we can also use D or E?
Yeah, mine is on D; I install it where I have more free space ☺️ The drive doesn't matter.
@@pixaroma But all those heavy ckpt files, must they be on the C drive?
@@Diffusion-oi6ux I have mine in D:\Forge\webui\models\Stable-diffusion. Basically, wherever you install it, the webui folder has models and then Stable-diffusion, where you put the models. Also, use safetensors models instead of ckpt, as they are safer.
@@pixaroma Can you make a video about hands?
You replied to me very fast, for which I am very thankful and surprised too; you are doing great.
Some people say AI art is very, very bad.
And a few, like us, are excited, because we know there is just no boundary; in 5 years, full-length AI movies could be made, or it depends.
@@Diffusion-oi6ux For the hands I usually just use inpaint, like I did in this video with the geisha hands. I used the ADetailer extension in A1111 for a while, but for me inpaint kind of worked faster.
Hey! I have installed Forge, and any files that need to be in the Stable-diffusion folder work great. I watched a few of your videos, and every time I download the SVD file and put it into the SVD folder, reload, and run locally, I do not get that tab section. Any advice? I have tried reinstalling and other Stable Diffusion GitHub downloads, and nothing makes the SVD tab appear. I really want to make image-to-video. Any advice? Or anyone reading this comment?
There is an old version that has SVD but doesn't have Flux and other things, so you can install Forge in a separate folder and go back to that version.
Create a bat file with any name you want, something like rollback.bat, and add this text inside:
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
pause
When you run that rollback.bat it will load that specific version
Then create a rollforward.bat and add this text inside:
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout main
pause
That will let you go back to the main version so you can see the latest updates.
After that, run "rollback.bat" to return to the specified commit, then start Forge and check that it works. Run "rollforward.bat" to return to the latest state (only if you have rolled back).
Some people had problems creating the bat files, so I shared them here as well.
Get this 2 bat files from google drive: drive.google.com/drive/folders/1bS-6HdLl5AH3Rbd2wHUm_nILUOnu9hmJ?usp=sharing
If it says it is dangerous, that's because it is a bat file; you can open it with Notepad and it's the same code from the video, so it is not a dangerous file.
Download and place those two files in the main folder of your Forge UI, where the run and update files are.
Run rollback.bat by double-clicking on it and it will go back to a previous commit. Press Enter to exit that screen after it finishes.
Then double-click run.bat and it will start with that old commit.
---
To get back to the current version
run the rollforward.bat, press Enter to exit that screen after it finishes,
then double-click run.bat and it will start with the new commit.
The commit I put in the rollback has the version of Forge with SVD, but doesn't have Flux and the new updates; it's an older version.
Great tutorial, thanks
Thank you ☺️
Hello, I downloaded the 7z file and extracted it using 7-Zip, but it's been 1.5 hours and now 7-Zip says it will take 7 more hours to extract. I stopped extracting; something must be wrong? I have an SSD with 6000 MB/s write speed, 32GB of RAM and 12GB of VRAM. I invested in a new rig for AI, so I'm hoping this is not a hardware issue so soon. Any advice would be much appreciated.
Use different software to extract; I used WinRAR, which worked fast for me.
You're the best
Brother, do you know how to generate a prompt from an image for free? Some websites charge for it.
Forge has a basic prompt-from-image feature, but it's not very accurate. In the img2img tab, under Generate, there is a paperclip icon; the first time it will download a model, but after that it should work faster, and it gives a basic description of the image you uploaded in img2img.
I ran run.bat and it showed "RuntimeError: Torch is not able to use GPU". What happened?
Either your video card is not good enough or Forge doesn't recognize it. I am a designer, not a coder, but you can try adding the following arguments in webui-user.bat to see if it works; it needs at least 6GB of VRAM and prefers Nvidia cards, but try it anyway:
set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
I see someone already posted that in the bugs area; you can watch it to see if it gets any response if nothing else works: github.com/lllyasviel/stable-diffusion-webui-forge/issues
Hey does this have reactor face swap built into it?
I don't see it in the extension list; it comes with ControlNet and SVD, but I don't see ReActor there in the list.
@@pixaroma ah no worries ty!
Hello, I am using the SD web UI; is there any way I can update it to Forge UI without doing everything all over again?
I don't think so; it's a different interface based on A1111 but made by different developers, so it's not an update, it's a different UI. It also seems to work only with Nvidia cards.
@@pixaroma Thanks! Yeah, I do have an Nvidia card, and it seems like everyone is using this one. My web UI looks really outdated and has many things missing. Looks like I have to install it all over again? Should I uninstall the SD web UI?
No, you can have Forge UI in a different folder; it just depends on the space on your hard drive. I have both installed in different folders.
@@pixaroma Oh great to hear, thank you for the info and the tutorial, i really appreciate.
Question: is Forge compatible with Hyper and Lightning models?
I used a Lightning Juggernaut model and it worked. They released Hyper versions too, but I didn't test those yet.
@@pixaroma Ok, thx.
@@pixaroma ADETAILER works too?
@@ZeroCool22 I think there were some problems with ADetailer and some extensions. ControlNet, for example, only works for me if the image width and height are divisible by 64. But just try it; use it for the things that work, and work faster, and use A1111 or another UI for the things that don't :)
What TTS model/software are you using?
VoiceAir AI; they have the voices from ElevenLabs.
Thank You! Brilliant👍
Hi! When I upscale a character, for example, it gets stretched :)) What could be the cause?
I haven't run into that, only maybe if you used a different ratio for the width and height. I switched to ComfyUI.
Any time I include the seed and prompt in the saved image filename, I get this error when generating a new image: OSError: [Errno 22] Invalid argument: 'outputs\ — why? How can I fix it?
I don't know; it has error after error. I switched completely to ComfyUI now. You can check the Forge page and its issues; maybe you can find the error there github.com/lllyasviel/stable-diffusion-webui-forge/issues
How big is the installation? 60GB without models? And is each model around 3GB or 30GB?
I am not sure about the total size, but usually an SDXL model is 6GB and a 1.5 model is around 2GB.
Which version of Python do I have to install?
You don't need to; when you install Forge it gets everything it needs.
I'm using SD 1.5 now. If I use Forge, can I use the LoRAs I used with 1.5?
Sorry, I only use SDXL models; the SDXL LoRAs I tested worked. I can't tell you if it will work with 1.5, since I don't have any 1.5 models or LoRAs anymore — I stopped using them once SDXL appeared.
@@pixaroma ohhh thx big help
Thanks for this tutorial!! I noticed you're able to generate very quickly. Can I ask for your PC specs? Thank you!
I speed up the video sometimes; it takes like 5 seconds for a 1024px image. I have an RTX 4090 with 24GB of VRAM; the more VRAM, the faster the generation.
I have attempted many times but am unable to get this to run with my AMD RX 580; I'm looking for the CPU version.
I think it is looking for an Nvidia driver. You can try ComfyUI or Automatic1111, but it depends on the video card's VRAM.
which GPU do you recommend for the lower budget?
Any Nvidia RTX that you can afford; the more VRAM the better. Minimum 6-8GB of VRAM, but if you can get more, you will generate faster.
Thank You So Much - Bro 😍
You are welcome ☺️
How do I share existing models on A1111 with forge?
You can edit the Forge bat file; I explained it in this video ruclips.net/video/q5MgWzZdq9s/видео.htmlsi=VQDUDjPvi256KCps
Nice! Thank you. But unfortunately ControlNet is dead in this build.
For me it works if the image size, width and height, is divisible by 64.