FAQ For the Pin!
Timestamps in description :)
Great great tut! Thanks! Will it work with SDXL? Or only after we get SDXL motion models?
Error occurred when executing ControlNetLoaderAdvanced:
empty() received an invalid combination of arguments - got (tuple, dtype=bool, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
What is the problem?
Great! What about making the imagery less anime-ish and more 3D-realistic?
@@RareTechniques Just change the checkpoint model and/or LoRA file. There are some great ones on Civitai for 3D-realistic people!
Best channel for anything related to animation, AI-powered artistry, and well-founded insights! Whenever there's something new from you, we know it's time to delve in and learn some new techniques. Thanks a lot for everything we've learned from you so far.
Wow, what an uplifting comment. Thank you so much. I absolutely love learning and helping others along the way, if I can - my two passions in life. You made my day; thank you for your support and kind words.
Your video really took away the fear of the unknown, that first intimidating impression I get with every node-based tool.
I’m so glad to hear that, and that’s why I love making this content!
These 26 minutes took me a few hours lol, I was pausing and noting every step, and finally I could start my journey in ComfyUI. Thank you so much!! 🙏🙏
Amazing !!! Thank you for letting me know, and glad I could help 🫶
This is for me the best tutorial there is. Thank you so much.
I appreciate that, thank you 🙌
$40 a month for a 1080!! You can buy one second hand for under £200.
M’yeah. But also many people will need to shell out to build the rest of the machine. Can get pricey. Depends on how long you want this for and how much change is in your pocket.
IMHO a 12 GB 3060 makes more sense: 50% more memory and support for the latest version of CUDA.
I bought one for €220 (~£190), but prices are rising.
Yeah, that is a lot of money for a 1080.
It's not just the card: you're basically renting a virtual PC with storage, memory, a network connection, a GPU and a CPU, and you don't have to worry about it interfering with, or using space on, whatever computing device you use most of the time. You also don't pay for electricity, since it's a flat monthly charge, so you can run it all day and all night. In that case, picking a lower-cost, slower tier might be perfectly fine, since you can batch up lots of work and just go to sleep. Meanwhile, every second you sleep, your 3060 is depreciating, and it's either idle or chewing up electricity that you pay for; same with all the other components. Just the terabyte of storage attached to a full PC is quite useful. And if you pay by the month and don't need it any more, just stop paying: they'll delete it and you can do something else.
@@tysond1500 I like to build and train my AI models on Google Colab. Doesn't cost a penny unless you start building monster models!
First video I've seen on your channel and I already subscribed. The amount of useful information is just on another level. Thank you for sharing.
Great job as always. Your deep knowledge of so many pipelines is praiseworthy. Once again I'm totally impressed. I've been reluctant to load up my computer with models, but the out-of-the-box cloud alternative points me to an approach I hadn't considered. I'm a fan!
Thank you so much!
Wow, that was a really nice animation, and a really nice tutorial and editing. The zoom-ins are super helpful!
Hello my friend !
@@promptmuse hello 🤗
A perfect, succinct, tour de force video!!!
Thank you !
If you already have A1111 installed, you can take advantage of a ComfyUI setting (config file) that points to the models of that install. This will save a ton of HDD space. Otherwise you can end up doubling up on all of those models, and it really adds up fast.
Do you have a video reference for doing that? I already have A1111 locally.
@@Nekotico Edit extra_model_paths.yaml and change the 'base_path' value.
Mine looks like this: 'base_path: E:\AI\sd.webui\webui'
@@Nocare89 ty for sharing!
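For anyone following this thread: ComfyUI ships with an extra_model_paths.yaml.example file in its root folder, which you rename to extra_model_paths.yaml and point at your A1111 install. A minimal sketch of the a111 section (the base_path below is a placeholder; adjust it to wherever your webui actually lives, and see the .example file for the full list of folders):

```yaml
# extra_model_paths.yaml (renamed from extra_model_paths.yaml.example in the ComfyUI root)
a111:
    base_path: C:\AI\stable-diffusion-webui\   # placeholder: your own A1111 folder
    checkpoints: models/Stable-diffusion       # paths below are relative to base_path
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
```

Restart ComfyUI after editing, and the A1111 models should appear in the loader dropdowns alongside ComfyUI's own.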
I don't have Manager as a menu option; how do I get it? This has stopped my project.
Thanks for another great video, PM. I was dabbling with this already, so your vid was perfectly timed for me. I learned some new things and it was very helpful. Keep up the good work!
Awesome, it’s exciting to learn. Don’t get me wrong, still love A1111 UI, but this seems to produce faster results - we all love a bit of that haha. Good luck with your project !
I don’t normally press the alert button. I did for this one!
I hope it didn’t disappoint. Quite a long one, but meant as more of a reference if anyone gets stuck :)
Thank you so much. I had absolutely no clue you could drop outputs back into the node graph!
Glad I could help! 🙌
I"m pretty sure you are actually just an advanced AI sent to help us become better people. Thanks for making this stuff. i'm not trying to be weird; guess im just that good. I have no right to say this, but holy cow Your videos have changed my days and I am grateful. Off to obscurity I go! thank you again!
Haha, no no, I'm just a bored human with too many interests 🤣 You are more than welcome, thanks for the comment!!!
Clicked on this because of the "even on low-end PCs" bit.
My PC is really low-end, a 4 GB card, and ComfyUI runs fine... well, slow, but fine. AnimateDiff is also fine if you lower the resolution and then upscale afterwards. I made a 1080p music video on my potato PC. ComfyUI is fine!
Amazing! That's really good for others to know. Would love to see the music video you created. Did you have to reduce the sample steps, by chance? 🔥
I downloaded a workflow similar to yours and just changed the resolution to 384x384, nothing else, and it didn't run out of memory. The music video is the latest video on my channel; can't post a link, YouTube doesn't like me doing that. I use ComfyUI to make click-bait thumbnails too 😁
@@Satscape Awesome, just subscribed 🔥✊
Can you share the workflow link please? @@Satscape
Thank you for this!!!! I love your videos and you're my first AI crush! :)
Fabulous tutorial! You have fun energy. I’m looking forward to trying the video to video workflow. Thanks so much.
Excellent tutorial video, many thanks!
Thank you so much 🫶🔥
Fantastic and useful information, thanks for your vid.
Thanks so much !! I appreciate your feedback 🫶
This was incredibly informative! Wow, thank you so much. I'm really looking forward to seeing what I'm able to create!
I have a 6 GB VRAM laptop and tried it. The first run failed because of a CUDA out-of-memory error; I fixed it by lowering the resolution from 512x896 to 640x360, and it works. Render time is 40 min for a 12 s video. Hope this helps anyone treating this video tutorial as a low-GPU workflow :)
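Besides lowering the resolution, ComfyUI also has launch flags aimed at small cards; a hedged sketch (flag behaviour can change between versions, so check python main.py --help on your install):

```
# Run from the ComfyUI folder; portable builds can add the flag to run_nvidia_gpu.bat
python main.py --lowvram   # offloads parts of the model to system RAM when VRAM is tight
python main.py --novram    # even more aggressive offloading, for very small cards
```

Combined with a smaller latent resolution and upscaling afterwards, this is usually what keeps 4-6 GB cards workable.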
Thanks. Cleared up all the doubts and errors I had.
Thank you so much ❤️🔥 I’m glad it all worked out.
Finally decided to try, you're a true Goddess. tsvm
It was so helpful, and it's great to know we can use a virtual computer remotely 😊. That is so helpful! Thanks for your knowledge and sharing.
You are welcome. You can use it through an iPad or phone as well; just log in through your Shadow Tech browser and it's there. Love it when tech becomes universal!!!
@promptmuse That's so good to know. I was saving for a better computer for Warp, but will definitely try this instead. In the meantime, thanks so much. I messaged you on Instagram BTW x
Work of the Gods!
Would you happen to know what the differences are between the Load Video (Path) and Load Video (Upload) nodes? Thanks for the video!
Thank you very much for this video, quite helpful and informative. Thanks!
Thank you for the kind comment, I’m glad it helped :) 🙏
Super helpful. Thanks for the tutorial.
You're welcome!
Beautiful video, perfect as always, congratulations ❤
Very nice, thy angelic muse. Anywhere we can get the JSONs from?
Hello, bottom of the description, just added. I'll add some more as well, plus links to more templates :)
Absolutely charming videos from you on AI + 3D animation workflows. Liked, subscribed, bookmarked and eagerly awaiting more content. Cheers.
again a great video, thank you so much
No, thank you for watching !
Awesome and really helpful tutorial!! Didn't you say there is a glossary on your website? I couldn't find it. And can you tell me what all those brackets and ":1.2" in the AnimateDiff prompts are useful for? (Sorry... English is not my mother tongue 🙂)
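For anyone else puzzled by that syntax: in A1111-style prompts (which the batch prompt schedule nodes also parse), brackets control attention weighting, and the trailing number sets the weight explicitly. Roughly:

```
(blue eyes)       weight ~1.1 (each extra pair of parentheses multiplies by ~1.1)
(blue eyes:1.2)   explicit weight of 1.2, i.e. emphasise this phrase
(blue eyes:0.8)   weights below 1.0 de-emphasise the phrase
```

ComfyUI's own prompt parser supports the explicit (phrase:weight) form; some A1111 shorthand, such as [phrase] for de-emphasis, may not carry over, so the explicit form is the safer habit.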
Thanks for this great video. Liked and subscribed, of course. I would love to see you take it a step further by doing a tutorial on creating a 3-minute or longer story using the batch prompt schedule. This would be a challenge, as we are so limited with the motions.
You're a very thorough explainer.
Hi, what about image to video in Comfy with AnimateDiff? I create my images in another image generator, so I don't need the text-to-image part. Do you have a workflow for this, or maybe another video? Thx :-)
Can I download Mac versions of those apps? I have 16 gb of ram on my Macbook.
Here you go, my friend. I haven't tested this method myself, but from the comments it looks like it works :) www.reddit.com/r/StableDiffusion/comments/1506nfu/how_do_i_install_comfyui_on_a_mac/
Great video! Thanks
Thanks, this is an awesome tutorial and it got me up and running really quickly! I did get an issue after installing the motion model. When I try to open the UI I get the error:
No module named 'comfy.options'
It looks like it might be something to do with an incompatible version of Python? I still have the correct file structure, but no options folder.
Let me know if this is something you've run into before, thanks!
Which ControlNet models should I download if I want to use SDXL checkpoints to animate?
Thanks so much, amazing content as usual! Really appreciate the step-by-step details 🙏. I'm wondering - and I couldn't find the answer anywhere - is there a node that allows you to start the animation with an image? I saw "text to video" and "video to video" but is there a workflow that starts with an image (like img2img), then fires through the prompt/checkpoint/video generator to create the animation based on the image, perhaps biased by denoising?
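One common pattern for this in ComfyUI (a rough sketch from community workflows, not something demonstrated in this video): replace the Empty Latent Image with a loaded image encoded to latent space, then let a partial denoise preserve it, exactly as img2img does:

```
Load Image -> VAE Encode -> KSampler -> VAE Decode -> Video Combine
# Keep the AnimateDiff loader on the model path, feed the encoded latent into the
# KSampler's latent_image input, and set denoise below 1.0 (roughly 0.5-0.75)
# so the source image biases the output.
```

The lower the denoise, the more of the original image survives; the higher it is, the more the prompt takes over.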
Really good tutorial !
Please can you make a tutorial about Nightshade?
Oooh yeah, that’s a very interesting topic. Would love to see everyone’s opinion on it.
you just earned my sub. big time.
Thanks for the tut. I still need to install ComfyUI and migrate from A1111.
Thank you for the reply! If you have space I'd keep both; Automatic1111 is still a great UI and can do everything in this video. Choosing a UI is a bit like Marmite: go with the one you feel most comfortable with and which is most efficient for the task.
Thank you very much for the lesson
I need help. I have installed everything, but when trying to make a video-to-video I get the following error:
Error occurred when executing KSampler:
'ControlNetAdvanced' object has no attribute 'model_sampling_current'
Hey, how can I install FFmpeg? I can't seem to find it in the video. Thank you!
Thanks for a wicked tutorial. All is well, but I'm stuck here: Error occurred when executing CLIPTextEncodeSDXL:
'g'
Even though I can see a preview of both loaders and the depth map images, this error pops up right on my ControlNet.
Super new to this: does this mean that if I use an SDXL 1.0 LoRA it won't work?
Can you preview one frame, to see what the prompt does, before the KSampler takes forever rendering the whole animation?
You're looking cool 💥
Hi. Can you use the upscale node for downsizing video?
I tried the text-to-video and it worked great, except that the video was blurry. What do I need to adjust on the KSampler to try and get rid of the blur? Or is there something else I need to adjust? Thanks.
rubbing off on you at the end!
FizzNodes doesn't seem to come up. Not sure if it's deprecated now.
Great content, thank you.
Next time I'll wear sunglasses, cos Bright Lights :)
Got a new light, as that one was crazy !! 🤣
Perfect video.
Thanks so much! I'm glad you enjoyed it :)
Mine says: When loading the graph, the following node types were not found:
ADE_AnimateDiffLoaderWithContext
How would you do image to video?
Hi. Your prompts don't contain keyframes; how do you get animation without keyframes?
Very nice tutorial, but I'm getting the error ""upsample_nearest2d_out_frame" not implemented for 'BFloat16'" when it comes to Video Combine. Any suggestions? Tried Googling, but no one else seems to have the same problem. I'm using Vlad's standalone, fully updated.
Great Video👍
This UI sort of reminds me of Blueprints in Unreal Engine, or even Blender's node system.
Thank you! It really is like Blueprints and the material editors. I think this will be user-friendly for 3D artists and devs. However, it may be a bit intimidating to those who are new to node-based layouts. I always advise people to work with what they are comfortable with, as everything here can be done in Automatic1111 :) which is still an amazing UI in itself!
Though I am still learning the basics of SD & A1111 for still images, I found this to be a brilliant tutorial for when I get advanced enough to try ComfyUI and try animations. Thank you 🙏🏽 so much for the detailed and easy to follow tutorial.
Awesome to hear! I'm going to link some additional resources that you may find helpful for your ComfyUI journey on my website today as well :) www.promptmuse.com
Super helpful, thank you very much! Your videos are always rich and abundant in information. Trying to make my little AI video tale. Any idea how to start the animation with an image, image-to-video?
Why did you decide to use AnimateDiff instead of Deforum? Was there a specific reason?
I do everything up to 8:09, but when I click on the file, the window that opens does nothing :(
Versions prior to V0.22.2 will no longer detect missing nodes unless using a local database. Please update ComfyUI-Manager to the latest version.
Do you think there is a way to combine this with open pose?
There is indeed: there's a multi-ControlNet workflow. Just download the OpenPose model via the ComfyUI Manager and drop it down from the list, just like we did with the lineart: civitai.com/api/download/attachments/213 This workflow is from Inner Reflections :)
@@promptmuse Thanks a bunch! :) :)
Is the NVIDIA T4 good, as Shadow is not available in my region?
Yeah, for sure; it's an RTX-class card and will run all of this. If you don't want to rent an entire operating system, you could opt for RunPod or RunDiffusion. I like having an entire OS so I can run 3D programs and games on it!
Thank you so much, really good video.
After I get done, what am I supposed to do with it?
I started using Stable Diffusion about a week ago, on Automatic1111 and Vladmandic, but decided to switch to ComfyUI due to recurring issues with the other WebUIs (I've spent more time troubleshooting than actually creating...). I've been watching a lot of videos about ComfyUI, and yours has been the most concise, with a lot of clear, easy-to-understand information in a short time. Highly appreciated!
I have a question: do you know how to "switch" the character in a video2video workflow? For example, I'd like to use a video with OpenPose but have a completely different character in the final video.
Would this be done through the use of a LoRA? Thank you!
Yes! You are correct, you can use a LoRA; prompting will help you too, e.g. "beautiful Angelina Jolie, Lara Croft, female" will get you that result. LoRAs will also help with consistency. For the overall look, the model (checkpoint) you select will drive the style, e.g. a Pixar checkpoint will render Angelina Jolie in a Pixar style. Hope that helps :)!
Anyone get this: Error occurred when executing ADE_AnimateDiffLoaderWithContext ... just running Kosinkadink's animated prompt?
The Inner Reflections guide cannot be found.
I had the same problem for a bit. Here's the link.
Your face is brighter than my future
Thank You.....Cool
Hi @Prompt Muse, I have subbed to your channel and followed your tutorial. It runs, but generating the image is very slow. I'm using a Lenovo Y70 with 16 GB RAM and a GeForce GTX 860 (very old laptop). I want to ask: is this because of my laptop, or did I miss something that's causing my generation to be slow? Does it look like I need to upgrade my laptop to use ComfyUI comfortably? I can still do video editing with Adobe Premiere decently, but ComfyUI is very slow. I'd appreciate it if you could give me some direction and a solution; any thoughts? Thank you
Hello friend! The first thing you can do is reduce the sample steps in the KSampler. If that doesn't work, another way to speed up generations is to use an LCM sampler node. There are lots of workflows that can be found with a quick Google search. I hope that helps. I think I'll make a quick video about this 👍
So this is not because my laptop is old? I have been Googling to find solutions too. If you can create a video for this, it would be great; I believe lots of people are having this issue too. Thanks for your kind reply @@promptmuse. Will be waiting for more from your channel.
@@TheAgeofAI_film The newer the better, as with anything.
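For reference, the LCM approach mentioned above usually pairs an LCM LoRA for your base model with very low steps and CFG; a sketch of typical community settings (values vary by model, so treat these as starting points):

```
# KSampler settings commonly used with an LCM LoRA loaded
sampler_name: lcm
steps: 4-8        # instead of the usual 20-30
cfg: 1.0-2.0      # LCM output breaks down at normal CFG values
```

On older cards like the GTX 860 above, cutting steps this way is often the single biggest speedup available.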
11:54 That manager button is missing here. I am confused
The Manager needs to be installed; check the 6:40 mark for that 👍
Thanks a lot. Now everything works like a charm. @@promptmuse
Me too. I was thinking Alicia Keys was showcasing her new role on YouTube. Anyway, you are beautiful, Prompt Muse, and you are real, with a face. ❤🌹❤️ Congratulations
Hi, great video. Followed all the steps (I already have A1111 installed); however, I get no interface loading. I just get "Using pytorch cross attention" as the last line in the console, and it never starts... any ideas? I am using Win 11... which I'm now fully aware is totally S*&T!!
FIXED... just for anyone coming across the same issue: Win 11 Defender was stopping ComfyUI from loading without informing me, so if this occurs, simply allow the Comfy folder through.
Nice tutorial. I tried this on a MacBook Pro M1 Max; unfortunately it does not work with AnimateDiff, though ComfyUI works for images. If anyone knows how to make AnimateDiff work, let me know.
I have AnimateDiff working in ComfyUI on an MBP M1 Max. Are you still in need of help?
@@larsisdahl Yes, I still need help. I followed the tutorial; all I get is a black image when rendering animation with AnimateDiff. I haven't found a solution for this.
Do you see any error messages in the terminal window? @@rutababelyte7790
Hi, I'm also using an M1 running ComfyUI, and when it runs to the KSampler it shows the error "'ModuleList' object has no attribute '1'". Do you have any idea what I should debug? @@larsisdahl
Was looking for a tutorial on the viral Joker Lil Yachty walkout.
I've seen a walkthrough video on this setup with Viggle.
Now that SVD is out, is this obsolete now?
The currently released version does not support vid2vid. But man, I can see it replacing pretty much everything out there in a few weeks 🤯 It's only in the experimental phase; so much more to come from SVD. It still runs using the ComfyUI setup 👍
is this SD only or does it also support SDXL?
It supports SDXL as well 👍
Usually I don't often see girls who dive so deep into this subject. Very well explained!
AHHHHA! white balance!
What is this? Error occurred when executing MiDaS-DepthMapPreprocessor:
I think this is my first comment on YouTube, but you made a very good explanation, not to mention that it's nice to see a pretty face while you explain things to us 🫣. I almost think I'm going to subscribe to your channel 🤔
Is 6 GB of VRAM enough?
I don't have a "Manager" button
Is it just me, or is Prompt Muse's talking cadence starting to sound a lot like Nerdy Rodent's?
OK, it works on my CPU: 30 min per image :)
Damn... what nodes are you using? Are you upscaling, or is this the basic default workflow??
This sounds like rocket science *cry*
Um, 12 GB of VRAM isn't for "low-end PCs".
nice clickbait!
I can run high-end things on the cloud as well if I wanted to -_-
xsqueeze me! $40pm. I'm not gonna act like buying the parts and building a suitable rig will replace the convenience of generating anywhere from anything but damn! Good tutorial btw
🤣 Yeah, it's quite a painful spend on top of everything else atm, but it served me well in the interim as my PC was on its last legs. New rig sorted now and it's heaven 🙏
She's too cute, I can't concentrate. Can she just read the phone book? That would be zexy.