Sebastian, you are the gold standard for AI creators. Top notch. IDK how, but you keep getting exponentially better with each upload.
That's very kind of you, thank you :) 💫
It's funny how I think of things that would be good or helpful in the AI world and then BOOM, you have a new tutorial video on exactly that thing!! I've been thinking about how to do this for a while... perfect that it's bolted right into ComfyUI!! Great video! Up and running immediately... Kind of a pain that the text can't really be edited without cutting and pasting into a regular prompt window, etc... But that's not on you friend! 5 by 5! You earned FiDolla!! Thank you!
Thank you very much for the continued support, so kind of you! 😊💫
Thanks, this works really well with wildcard processor to feed the text into it.
Thanks. I already use Ollama and Florence in ComfyUI. This LLM is a nice resource-efficient alternative.
Can Ollama be used for anything else ?
@@SebAnt image to prompt (llava:7b-v1.6-mistral-q5_K_M)
or enhance prompts (you input just a sentence, but the LLM outputs a detailed prompt)
@@kironlau thank you.
I had previously seen a video about Ollama and was planning to install it this weekend, and now I'm wondering if Searge will suffice.
Can Searge LLM be used in img2img for flux? I want an LLM model that can read my input image and generate a prompt for img2img.
Thanks, appreciate the local and cloud option recommendations for those without the fancy hardware!
lovely, thanks for sharing! btw, how'd you get that pretty little workflow icon on the sidebar?
Probably just the new ui. Showing how to load it in the video if you don't have it already.
Very exciting solution, again!
Hello Sebastian, is there an alternative method to incorporate a positive prompt (clip text encoder) into this workflow to enhance the visual output?
Will you be doing a video on animation in Flux using ComfyUI? Most of the tutorials I've seen are using external websites, rather than a local machine.
Would adding something like "for a T5 encoder" improve the output even more for flux?
Try "FLUX-Prompt-Generator" on Hugging Face. You can select different LLMs in the right-hand generating window.
how did you add height, width / INT in purple with "control_after_generate" is that a special node that you need to install from comfy UI manager? I keep seeing that in samples but cannot find it.
What app did you use? ComfyUI? Why doesn't my ComfyUI look like yours?
Does fooocus do something similar, when expanding your prompts?
I wonder if you ran into llama.dll error and how you resolved it. There is no resolution or fix on the Github page for that node.
👋 Looking forward to this video
🙌🙌🙌🙌
Flux loves long prompts? I'm always cutting my prompts shorter and shorter until I stop getting this weird error: "RuntimeError: stack expects each tensor to be equal size, but got ..." I can't figure out what it means, but shortening the prompt a little usually fixes it. If not, shortening it some more usually does.
You can do more with the Long-CLIP node for ComfyUI. It can extend your token length from 77 up to a max of 248.
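For anyone hitting that stack error: it typically means the encoder is trying to batch token sequences of different lengths, and stacking only works when every sequence is the same size. Here is a rough, purely illustrative Python sketch of the padding idea (this is not Flux's or CLIP's actual tokenizer code; the function names and the pad value are made up for the example):

```python
# Illustrative sketch only: pad/truncate token sequences to one uniform
# length so a batch of them can be stacked without a size mismatch.

def pad_tokens(token_ids, target_len=77, pad_id=0):
    """Truncate or pad a token sequence to exactly target_len entries."""
    clipped = token_ids[:target_len]
    return clipped + [pad_id] * (target_len - len(clipped))

def stack_batch(sequences, target_len=77):
    """Pad every sequence so the whole batch has a uniform shape."""
    return [pad_tokens(seq, target_len) for seq in sequences]

batch = stack_batch([[1, 2, 3], [4, 5, 6, 7, 8]], target_len=6)
print([len(row) for row in batch])  # prints [6, 6]
```

The real encoders do this in tensor form, but the principle is the same: if one chunk of your prompt tokenizes to a different length than another, the stack fails, which is why trimming the prompt can make the error go away.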
I want to try this so badly but I can't get Searge-LLM to install. I get the message "(IMPORT FAILED) Searge-LLM for ComfyUI v1.0" in my manager. When I load your workflow I get missing node types for Searge_Output_Node and Searge_LLM_Node.
Has anyone had this and have a fix?
I get it too
For me, the install fails because I've configured my conda env with Python 3.12... but Searge-LLM is only compliant with Python 3.10 or 3.11.
Hello Sebastian, is it possible to load LLM with an image and have it captioned "ChatGPT" style? Or any method that could caption images somehow similar to ChatGPT but for free or the cheapest option, thanks.
So I'm still missing this... CheckpointLoaderNF4 - where is this?
Update to latest comfy. There's an nf4 guide on my channel also btw
What server service do you use to run Comfy, if any?
huge thanks !!
Happy to help!
I got an error: "The procedure entry point ggml_backend_cuda_log_set_callback could not be located in the dynamic link library"
As did I
Yeah same
same
Did you solve it?
@@Melike-oh1ir No sorry I haven't tried for a while
Also how did you get your manager to stick across the top? Thanks.
Showing in the video
Tutorial on how to create the thumbnail pic? It's gorgeous!
I have an error when loading the workflow. It is related to the CheckpointLoaderNF4 node. "When loading the graph, the following node types were not found: CheckpointLoaderNF4"
How does that Mistral LLM compare to Florence large?
Thanks for the great videos! It's odd how the new Flux Model can generate explicit content without issue, but when it comes to something as simple as showing the middle finger, it always ends up with the index finger instead. And what's this thing about the Flux female chin? Does anyone know how to crack this so it works as intended?
Hello, when I run it, it seems to go through, but I don't see the output in the output text field?
how do i add the mistral model or any other model? I am missing only that.
How much VRAM do you get?
Thanks!
You bet! Thanks again ☺️💫
Now I just hope this makes its way into Forge.
I finally had to move over to ComfyUI... I resisted forever because it seemed so ridiculous, but now that I'm using it, I really like it! I use it for Flux, Pony and XL... I had never tried Pony or XL, but in Comfy they are really easy to use... it only took a couple of days, and there are TONS of workflow examples so that you don't have to reinvent the wheel! So, my advice: Jump in, the Comfy water is... Comfy!! @sebastiankamph, see what I did there! 😛
This is cool, but you can't edit the created prompt afterwards. If you love the prompt it creates but want to edit the subject or one word, you can't. You have to copy and paste it into the previous node, edit it there, then bypass the LLM node, then generate. So not impossible, but an extra step.
Crystools doesn't show up in the bar at the top for me. How did you get that working? Thanks.
How did you get the system usage stats on top of the menu bar?
Crystools
love you so much, Seb
🥰
Does anyone know how I would map this llm_gguf folder in the extra_model_paths.yaml? Is it just that key?
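Not certain of the exact key name (this is an assumption, check the node's README), but since Searge-LLM loads models from ComfyUI/models/llm_gguf, the usual pattern is to mirror that folder name as the key under your custom entry in extra_model_paths.yaml, something like:

```yaml
# Hypothetical sketch for extra_model_paths.yaml - the llm_gguf key and
# the base_path below are assumptions; adjust them to your own layout.
my_external_models:
    base_path: D:/ai-models
    llm_gguf: llm_gguf
```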
I figured it all out myself, but it does not work for me. I had to add a checkpoint loader because yours was red and said undefined, and while it does generate the new prompt after that, it spits out a whole list of size-mismatch errors, so probably not for me. Thanks.
It's built on Flux, in this instance NF4
These two Searge nodes are a great addition. I integrated them into one workflow: a Flux LoRA + flux1-dev-Q8_0.gguf + t5-v1_1-xxl-encoder-Q8_0.gguf + Mistral-7B-Instruct-v0.3.Q4_K_M.gguf, and it works at 5 s/it, 1.25 min to generate. Thank you.
Can you share that WF?
You should do another Seb Ross Discord weekly challenge video, but this time with Flux. I really enjoyed those.
Thank you for the suggestion! I'll try again and see how the views are for those nowadays :)
Hey man, great as always, but one thing I think a lot of people would love to see is a straightforward Flux LoRA training tutorial. Is that in the works?
Guys, I know it's not part of this topic, but I tried to get answers everywhere and could not find anything... please help if you have a free moment, thanks 🙏🙏🙏😬:
I tried installing ComfyUI_UltimateSDUpscale through the manager, updating it, manually installing it through Git, and downloading the raw files and placing them in the correct folder, but all methods failed. The node is considered missing in Comfy and the installation fails.
Does anyone else have this problem? Maybe after a recent ComfyUI update or something?
Thanks.
Doesn't work. It gets stuck trying to download a 312 MB file through Git.
For me Searge is not loading in Think Diffusion :(
Sorry to hear that. Go hop on their Discord, there's a very active support chat there.
How does it compare to Florence2?
Haven't done a comparison, but you can load any .gguf llms
Pleeeeeaaaaase let someone put this into Forge, pleeeeeaase!
"Octopuses." ;)
whats your system specs?
RTX 4090 24gb vram, 64gb ram.
@@sebastiankamph thanks. 1 gpu or 2?
@@naeemulhoque1777 1. Not much use for 2 as of yet. I mean you CAN, like in Swarm etc. But it's really not very useful.
👋
Best AI YouTuber... never asks for Patreon for workflows like most others do.
Thank you, very kind! But some of my posts are locked even if this wasn't ;)
It was just a few months ago that a comfy UI node allegedly for integrating LLMs into your work flow was out there that executed malicious code on your machine.
Be careful out there folks.
Always be careful! This node is created by Searge who is a well known good guy in the community (and also a moderator in my discord). That's of course not a 100% guarantee, but it's almost as good as it can get on the internet I suppose.
It gives (IMPORT FAILED) on the latest ComfyUI.
Same for me, with something like this:
Python.exe: Entry point not found
The procedure entry point ggml_backend_cuda_log_set_callback could not be located in the dynamic link library C:\ComfyUI\venv\lib\site-packages\llama_cpp_cuda\lib\llama.dll.
@@vaishnav7 same
There are some troubleshooting tips on the official github regarding missing llama-cpp. Could check that out: github.com/SeargeDP/ComfyUI_Searge_LLM/tree/main
@@sebastiankamph thankyou 🤍✌️
"python -m pip install llama-cpp-python" did the trick.
"Generate a random image prompt" Oh no! More floods of images that fill the civitai database. LOL!
I'd love to see a video about how to use custom LoRAs for Flux or other models,
because I have no idea how that works!
Great video btw, subbed!
So what does this do exactly in more simple terms? Am wasted and don't have the time to watch the whole video. Would appreciate it thanks :)
This is a very informative video. I had no idea LLM could integrate with Comfy. Concerning the usage of other models, I seem to be getting a NotImplementedError for 4-bit quantization with any model other than the Flux NF4 models. I am still researching this on my machine but it could be related to me using Comfy through SwarmUI.
Solved it. I feel dumb. I didn't notice that CheckpointLoaderNF4 was being used in the workflow.
i prefer the ollama node
Why do you prefer it? 😊
@@sebastiankamph it has more options, and the Ollama service can also run on a different computer, saving you VRAM.
Where is the creative input?! So you type two or three words and... that's it.
Not sure if I like this way of working.
again flux... okay
You can run it for all the models, but it's extra powerful for Flux specifically.
But then you aren't even writing the prompt. One way AI art still takes imagination and effort is in figuring out the prompts. This just makes it as lazy as people that are against AI art say it is. Now you don't even have to think.
flux1-dev-bnb-nf4 is needed,
from Hugging Face, if anyone is wondering why it isn't working.