Managed to optimize a different Civitai model and it works perfectly! Jumped from 2-3it/s to 12.36it/s!!! You SIR do ROCK!!! Keep the vids coming :)
That’s awesome! Don’t know why the first one didn’t work. Might try deleting the folders for that model and trying again.
Your humor and facts are 100% correct, haha. I tried all those options and nope, waste of time. I'm glad I found this. I hope you make more content like this; you're good at it.
Ugh, I know. It was so annoying trying one thing after another and manually trying to fix strings of problems, just to end up saying that even if it can be fixed and used, it's just not worth the effort. :-/
Great informational video! I also appreciate how you caught the disk usage. Appears that the olive method is a pain vs dual booting
Definitely. Anyone who is ok with dual booting and using it in Linux right now absolutely should.
@@FE-Engineer When I have my new computer I will definitely follow your Linux guide and dual boot my computer...
I'm getting this epic error when I press the optimize button: AttributeError: 'ONNXStableDiffusionModel' object has no attribute 'lowvram'
Yes. This is both an epic error, and something seems to have broken in the code. I'm looking at what happened.
Where does the ONNX model exist on our PC? I would like to try importing the ONNX file into another program (Houdini). Should that work?
man I was clueless and wasting time. Thank you very much!
You are welcome. It actually took me way too long to figure out how to do it and get it to work correctly. It is not the most friendly setup. I’m glad it helped! Thank you for watching!
How do you get Olive installed?
Asked a question on one of your other videos that has been answered here. Kept looking and found it, but I keep getting an error 'NoneType' object has no attribute 'lowvram'. Even when I followed the how to get dreamshaper off huggingface. Any suggestions?
Ugh, sorry to be this way. I have a recent video explaining how to fix problems in automatic1111 directml. This lowvram problem came up for a lot of people.
This video has it. Towards the end. Just skip forward to it.
ruclips.net/video/mKxt0kxD5C0/видео.htmlsi=oNAqQLqrvmyCm28N
I have the latest Automatic1111 fork for DirectML, and there are no ONNX or Olive tabs in the web UI.
Ah yes. They went away now. I will have to update the video and description. Sorry about that.
I liked and subscribed. You helped me tons. You are not like those fake channels that talk BS.
I try to provide reasonably easy paths forward for people to get things up and running.
Thank you for watching and supporting my channel. It means the world to me!
nice, simple concise tutorial, thanks again!
You are welcome! Glad it helped :)
The washed out picture is usually a symptom of not using a VAE. Some models have VAE baked in, some do not. I noticed that you don't have visible the option to choose what (separately downloaded) VAE to use. I have enabled the UI element in settings and I usually have vae-ft-mse-840000-ema-pruned.safetensors selected.
Does the VAE work with ONNX?
My understanding was that the VAE did not work with ONNX format…
About the washed-out look: I don't think ONNX is baking in the VAE?
That is absolutely possible. Unfortunately if you are running on windows with olive setting a VAE does not work or function properly. :-/. I tried that.
ONNX doesn't appear as a tab in SD, and it is not a runtime argument for SD either. What am I missing?
Nothing. The code changed. It’s weird now. Stay tuned. I have a new video coming out about this that will be better overall.
I didn't understand well: should I use "Modelname [Optimized]", or keep using the one that I downloaded from Civitai? I'm asking because in the video you still use "Modelname" instead of the one with "[Optimized]".
Once optimized, they will both work the same way; either entry references the optimized files anyway.
Hello. I would like to ask something about the optimization process.
I tried following the instructions you said in the tutorial video and, for some reason this error keeps popping up.
"AttributeError: 'NoneType' object has no attribute 'lowvram'"
What does this mean? And how do I fix this?
I made a video recently 12/2023 about errors coming up for folks with automatic1111 directml. One of them is that lowvram error during optimization.
In the video I show how to get around that error. It’s a hacky fix but it does work to get around it.
ruclips.net/video/mKxt0kxD5C0/видео.htmlsi=Z432ctBls2kEFSOS
I have been having an issue where ONNX says that it only supports 77 or so tokens and that it is truncating the rest of the prompt. I used to use huge long prompts when I was on an NVIDIA card, but now with this RX 7900 GRE I am limited to a short prompt? I must be missing something.
So I saw that as well the last time I was fiddling around with my install on Windows. I think this is something unique to running DirectML and ONNX; when I run it on Linux with ROCm, I don't run into it. While it's definitely a hassle, if you want to use more of the features, running it on Linux right now is a significantly better option overall.
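For anyone curious what that truncation warning means: CLIP, the text encoder behind SD 1.5, has a fixed 77-token context window, so anything past that gets cut. Here's a toy sketch of the idea — real CLIP tokenization is BPE, so the whitespace splitting and the `truncate_prompt` helper here are just stand-ins for illustration:

```python
# Toy illustration of the 77-token context window that the ONNX pipeline
# warns about. Real CLIP tokenization is BPE; splitting on whitespace is
# only a stand-in to show the truncation behavior.
MAX_TOKENS = 77

def truncate_prompt(prompt, max_tokens=MAX_TOKENS):
    """Return the kept portion of the prompt and how many tokens were dropped."""
    tokens = prompt.split()
    kept, dropped = tokens[:max_tokens], tokens[max_tokens:]
    return " ".join(kept), len(dropped)

prompt = " ".join(f"tag{i}" for i in range(100))  # a 100-"token" prompt
kept, dropped = truncate_prompt(prompt)
print(len(kept.split()), dropped)  # 77 23 -> 77 kept, 23 silently dropped
```

That is why huge comma-separated prompt lists that worked on other setups suddenly lose their tail here.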
How do I add embeddings for the ONNX models? I have them in the folder, but Automatic1111 doesn't show them.
You can’t use them with ONNX. At least not yet as far as I am aware.
Why can't I just use the Civitai models normally like before? After fixing that CUDA core error, I can only use ONNX models on Automatic1111?
Remove --onnx from your start command if you don't want to use ONNX. But there is a big performance hit.
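For anyone unsure where that flag lives, a sketch of a webui-user.bat fragment — the flags come from this thread, but the exact file layout on your install is an assumption:

```shell
:: Hypothetical webui-user.bat fragment for the DirectML fork.
:: With ONNX/Olive (faster generation, limited feature set):
set COMMANDLINE_ARGS=--onnx --backend directml

:: Without ONNX (full feature set, big performance hit on DirectML):
:: set COMMANDLINE_ARGS=--backend directml
```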
I installed the "juggernautXL_v8Rundiffusion" and "dreamshaperXL_turboDpmppSDE" models and ran Olive - Optimized checkpoint.
I always see "AssertionError:". How can I load the models? And I can't see the DPM++ 2M SDE sampling method. I can see 6 models.
Help me please. Can't we use these models without optimizing them?
Can't use SDXL.
Can't use those samplers on ONNX.
@@FE-Engineer Is there any possibility to remove ONNX from my computer (AMD processor) so I can use this sampler and these models? I want to be able to operate on all models. Can you help if there is?
The speed increase I saw was crazy, 1920 by 1080 images in 30 secs, but at the moment it's way too limiting. The measly 77 tokens is awful, you can't use LoRAs and textual inversions, and even merging has negligible effect. And like most of you, I've found optimizing to be a crap shoot. Insights that may help some people: as FE stated, it uses a crap ton of memory; I've seen 19 GB of RAM, plus disk space and 8 GB pagefiles. If you try to put in a VAE, it will fail; leave VAE Source Subfolder as "vae". Changing image size also fails. Clearing out the ONNX Model tab also seems to help, as I've seen that referenced during a fail as well. Lastly, thank you kindly FE-Engineer for your tutorials. I'll be following your guide to Linux with ROCm when I get around to buying another HDD, but I suspect I'll be sorely underwhelmed with rendering speeds after experiencing Olive. Like you I have a 7900 XTX and think perhaps I should have spent another grand and given in to Nvidia's price gouging.
My speeds with ROCm on Linux are roughly 17-19 it/s on a normal 512x512 with sd1.5 models.
Obviously size changes and sdxl change things. But overall even compared to olive, I found the slight performance hit to be 100% worth it to be able to just use everything the way it was initially built and intended to be used.
I think you will be pleasantly surprised.
ROCm 6 is out now, but from what I have seen PyTorch is still not really using it. Once we get PyTorch built for ROCm 6, I think it will be even faster.
I keep getting an assertion error when I try to optimize. Any ideas?
Someone else I saw had a similar-sounding issue. They found out it was eating all of their RAM during optimization. You might try using a lowram flag?
You talk about optimized model results, but you chose the unoptimized model from the checkpoint list.
Neat story: once you optimize it, it does not matter. Even though it shows unoptimized and optimized, they both point to the same files.
Great video! Question: can we convert LoRAs and VAEs to ONNX too? I can't seem to figure it out.
Gonna switch to Linux on my main gaming PC soon. But I have to back everything up and reinstall, so that's gonna happen after Christmas is done.
Thanks :)
So, can it be done? Yes.
I went down a huge rabbit hole trying to convert things to ONNX.
My general view is: don’t bother.
Using ROCm on Linux, even if you have to set it all up from scratch, is significantly easier than trying to convert stuff to ONNX, and then you don't need to convert to ONNX at all.
If you do heavy AI and specifically want to get deep into ONNX explicitly then sure. For any casual user don’t bother.
For reference, I spent pretty much every moment not at work for 2 days trying to find a straightforward and easy way to convert to ONNX. Everything I found was busted; the only one that worked was the way I did it in the video. :-/
Thank you so much for the kind words :)
I keep getting "FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\unet_gpu-dml_footprints.json'"
If it is an SDXL model. Those will not work.
@@FE-Engineer It was a Stable Diffusion model, which I've used before without ONNX.
I'm having trouble using Automatic1111 on Windows without ONNX myself. So I'm not sure, but something might be going on.
How do I use an upscaler with that? No matter what I select, it doesn't upscale at all.
You can use the upscale from the extras tab.
If you want inpainting and the AI upscalers, you need to use ROCm on Linux to get all the fancy stuff.
Great tut :) but with all the things needed for converting models... I switched to Linux to run ROCm.
Yea, I generally use my Linux install as well. I mostly made this video because several people asked how to convert Civitai models specifically.
And thank you for the kind words.
I'm on a 6700 XT. It seems to work for now, but do you think I should use any launch arguments? I was using a lot of arguments with the default version.
Not if you are using DirectML and ONNX. When ROCm is available on Windows and AMD GPU users can get the full feature set, those arguments will come back.
adetailer isn't working; any way to get it working?
ONNX has a lot of limitations currently. I have not used adetailer personally so unfortunately I don’t know what the problem is. My guess is it probably has to do with either directml or onnx format more than anything else.
Thx for the answer, but the render preview isn't working either, just like in your video.
OK, I'm pretty much pissed. SD was working fine even without ONNX; now I installed W11 and it's not anymore. Gotta install Linux and dual boot into it.
Man, I love your channel. Do you know photorealistic SD models which work with ONNX? I couldn't find any. SD 1.5 always gives me pics with bad faces; particularly the eyes look horrible, but the rest of the character looks amazing. Only the eyes are kinda cringe. Would be glad if you could suggest some photorealistic models.
Dreamshaper is probably one of the best in my opinion. It is consistently good and pretty realistic overall.
If you go ROCm on Linux you can do SDXL which opens up a lot of possibilities.
Do you have a guide or video for setting up ROCm, and what it is or means? Is it possible to set it up on Windows? @@FE-Engineer
ROCm is not available on windows yet to let you run AI.
ROCm is basically AMD's version of Nvidia's CUDA.
For right now it really only runs AI on linux, it should be coming to windows "soon" but we have also been waiting for quite a while. I check the github progress, and it does look really close, but no idea when it will be up and running.
So if you wanted to really run ROCm on linux, I have a video about making a dual boot PC. Then a video of installing ROCm on linux.
I also will likely have a new video showing installing the newest version of ROCm up soon.
Also as a random side note. I have found Euler and Euler A (Euler Ancestral) tend to produce slightly better results generally for faces and eyes. It’s still wonky sometimes. But some of the other samplers really give me weird results more than I would like.
Thanks, I have experienced the same; Euler is my favourite. But what I have noticed is that if I generate pics with 1.5 of only the face and upper body there is no problem with the eyes, but if I start full-body images the eyes and particularly the faces start looking horrible, I dunno why. @@FE-Engineer
Why don't I have the ONNX and Olive tabs???
The code was updated; read the video description.
Is there any way to do this without access to Stable Diffusion?
Yes. But most of the ways I found to do it were either very complicated or did not work. I initially wanted to cut Stable Diffusion out entirely and just convert model types. It was not very straightforward, especially on AMD cards.
AttributeError: 'NoneType' object has no attribute 'lowvram'
Something has changed and been broken recently in this version of automatic1111 directml. I'm looking into it. For now, the alternatives that work are either Shark from Nod.ai, or ComfyUI.
@@FE-Engineer Maybe a tutorial for ONNX on ComfyUI? I really wanna run ONNX turbo models, as I only have a 5600 XT and still can't get it working with 1111.
Thanks a lot!
You are very welcome! Thanks for watching!
Thank you for your tutorial.
Every time I try to optimize a Civitai model, I get an AssertionError. Does anyone know how to fix this? (RX 7900 XTX GPU, Win11, Automatic1111 webui)
I saw this with a few models that were missing some config file inside them.
For other ones, you absolutely have to make sure the other tabs don't have any information in them; reload the UI and try again. I found most Civitai models seemed to work without issues. Some were missing the VAE and did not function properly without it (they looked washed out).
Which model?
@@FE-Engineer It's almost all models, Stable Diffusion and Stable Diffusion XL. I got epicphotogasm to work on the third try, but only at 512px.
Have you tried installing SD on WSL?
I've tried installing it with ROCm, but haven't got it to work yet.
@@FE-Engineer I tried photopedia
It's much easier for me to use Shark. Yes, you are limited to one LoRA, but you can use different combinations of resolutions, not only 512x512. For my 6700 XT, the generation speed is the same as Olive.
Are you on Windows? I tried Shark on Linux with ROCm, and it was not working properly; I could not get around all the errors.
A bit late, but it does not have to be just 512x512; by default it can be any multiple as well, such as 1024x512. This is because those are the values set when you optimize the model; if you want, you can optimize it with other values and go by multiples of those instead.
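The multiples rule above can be sketched in a few lines. This is only an illustration of the constraint as described in this thread (the `is_supported_size` helper is hypothetical, not part of the webui):

```python
# Sketch: an Olive-optimized model is traced at a base resolution, so
# generation sizes generally need to be multiples of that base. The 512
# default and the 1024x512 example come from this thread.
def is_supported_size(width, height, base=512):
    """True when both dimensions are positive multiples of the base size."""
    return width > 0 and height > 0 and width % base == 0 and height % base == 0

print(is_supported_size(1024, 512))   # True: both dimensions are multiples of 512
print(is_supported_size(1920, 1080))  # False: neither 1920 nor 1080 is a multiple of 512
```

Which matches the earlier report that changing the image size to something arbitrary makes generation fail.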
@@FE-Engineer Same, I tried using Shark on Windows a few months ago, and it was just throwing random errors.
assert conversion_footprint and optimizer_footprint
AssertionError
Edit: Fixed. Cleared the cache and that fixed it. Also, the SSD where my OS is located is running very low on space; don't know why, because my root folder is on my HDD.
Hard to say. If they are on different drives it shouldn't use much, but I'm not sure how it might cache temporarily and where that might end up.
Hey @@FE-Engineer! Thanks for the response! I found a way of making SD work for my 6800 XT by combining your fixes, with very few steps. What I did was extract SD from Git into my custom directory and follow your steps for fixing "RuntimeError: no CUDA GPUs are available". After that SD worked for me normally, except when trying to upscale I was getting a not-enough-VRAM error. I fixed that with these command-line arguments:
--use-directml --opt-sub-quad-attention --opt-split-attention-v1 --medvram --disable-nan-check --no-half --precision full --no-half-vae --autolaunch --listen --use-cpu interrogate gfpgan bsrgan esrgan scunet codeformer
After that everything worked smoothly and with no problems. My it/s seems low at only 2 to 2.38 max; however, it's still reasonably fast. I can't remember if I encountered any other errors, but I think this is all I had to do. I installed Python 3.10.6, Git, and the latest AMD GPU drivers. Thanks for your help, it really helped making it finally work.
You are welcome, I am glad you got it working.
For me, usually I don't have to put no-half, but for some of the SD 2.1 models I do have to use it.
Just be aware that using no-half, for me at least, cuts my performance in half I think. So you might want to test: if you can run properly without needing no-half, you might get better performance, and maybe you can at least use SD 1.5-based models with better performance.
@@FE-Engineer Oh nice, thanks for that info. I will test no-half and see what it does. Even though my it/s is low, the time it takes for images to render is still reasonably fast. But that it/s is still pretty low for a 6800 XT, in my opinion.
keep getting an AssertionError 😓
What does it say?
@@FE-Engineer Ended up uninstalling everything and just following your fix guide, and it's working now. Taking a very long time to optimize with Olive though.
The optimization process is absurdly expensive. RAM, hard drive space, and CPU absolutely go bonkers. It is REALLY hard on a computer to optimize into ONNX. :-/
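Given the resource reports in this thread (~19 GB of RAM, 8 GB pagefiles), a rough pre-flight disk check can save a failed run. The `enough_disk` helper and the 40 GB cushion are my own assumptions, not a documented requirement:

```python
# Hedged pre-flight check before kicking off an Olive optimization run.
# The 40 GB free-disk cushion is an arbitrary guess based on reports in
# this thread, not a documented requirement of Olive.
import shutil

def enough_disk(path=".", need_gb=40):
    """True if the drive holding `path` has at least `need_gb` GB free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= need_gb

if not enough_disk("."):
    print("Warning: low disk space; Olive optimization may fail partway.")
```

Checking the drive that holds your temp directory matters too, since caches may land there rather than next to the webui folder.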
@@FE-Engineer Do you know if I'd be able to share my ONNX folder with my friend, or would he have to do it himself?
There is no Olive tab in my webui 😂
See the video description; the videos were updated and the code changed.
Great video, appreciate your work. I still get an error, though, to which I haven't found any solution. After like 20-50 seconds I get "TypeError: StableDiffusionPipeline.__init__() got an unexpected keyword argument 'text_encoder_2'". Got any idea of what I can do?
This sounds like SDXL? I have not been able to get SDXL to work with ONNX.
You are welcome! Thanks for watching!
You are right; I chose a different model, CyberRealistic, which says it's SD I think, and after following your steps again it worked! @@FE-Engineer
Are you joking??? No freaking Olive nor ONNX tab is here!!!!! HOOOOWWWW?????
I get "KeyError: 'time_embed.0.weight'"
What were you doing? I've never seen that error before.
@@FE-Engineer How much RAM do you need to convert? I looked at my task manager; I have 16 GB of DDR4 RAM, and it ate all of it and gave that error. How much RAM do you have?
Great tutorial, managed to convert a few models successfully. I'm now running into a problem: after I launch SD and try to convert a model, nothing really happens and I get the following error message: "AssertionError: No valid accelerator specified for target system. Please specify the accelerators in the target system or provide valid execution providers. Given execution providers: ['DmlExecutionProvider']. Current accelerators: [...]". I have the feeling I am not launching SD correctly. I use "webui.bat --onnx --backend directml" from an Anaconda prompt in the SD directory; if I just use the shell to launch "webui.bat --onnx --backend directml" without running Anaconda, I don't even see the ONNX and Olive tabs. Any pointers or solutions would be appreciated, and thanks for your videos.
This is strange. I have not heard of anyone else having this problem.
My first suggestion would be to do a full reboot of your computer. Exit out of everything and reboot. After you reboot, open Anaconda, run your startup script, and see if it works then.
Also, I have like a 2-minute video about creating a startup script so that you can just double-click on a file and it starts Stable Diffusion for you.
ruclips.net/video/vKIqd5FDLn0/видео.htmlsi=BTN_lmgD8YdXuZ6Z
I ran into the exact same error, but unchecking "Safety Checker" directly above the convert button fixed it. Hope it works for you too :)