LLM in ComfyUI Tutorial

  • Published: 7 Nov 2024

Comments • 110

  • @michaelkircher9094
    @michaelkircher9094 2 months ago +2

    Sebastian you are the golden standard for AI creators. Top notch. IDK how but you keep getting exponentially better with each upload.

    • @sebastiankamph
      @sebastiankamph  2 months ago

      That's very kind of you, thank you :) 💫

  • @GenoG
    @GenoG 2 months ago

    It's funny how I think of things that would be good or helpful in the AI world and then BOOM, you have a new tutorial video on exactly that thing!! I've been thinking about how to do this for a while... perfect that it's bolted right into ComfyUI!! Great video! Up and running immediately... Kind of a pain that the text can't really be edited without cutting and pasting into a regular prompt window, etc... But that's not on you friend! 5 by 5! You earned FiDolla!! Thank you!

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Thank you very much for the continued support, so kind of you! 😊💫

  • @lordlucifer989
    @lordlucifer989 19 days ago

    Thanks, this works really well with wildcard processor to feed the text into it.

  • @matze2001
    @matze2001 2 months ago +1

    Thanks. I already use Ollama and Florence in ComfyUI. This LLM is a nice resource-efficient alternative.

    • @SebAnt
      @SebAnt 2 months ago +1

      Can Ollama be used for anything else ?

    • @kironlau
      @kironlau 2 months ago +1

      @@SebAnt Image to prompt (llava:7b-v1.6-mistral-q5_K_M),
      or enhance prompt (you input just a sentence, but the LLM outputs a detailed prompt).
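
The "enhance prompt" idea described above can be sketched as a small instruction-building step; the template below is a hypothetical illustration of what such a node might send to a local GGUF model via llama-cpp-python, not the Searge node's actual text:

```python
# Hypothetical sketch of the "enhance prompt" step: wrap a short user
# sentence in an instruction asking the LLM to expand it into a detailed
# image prompt. In ComfyUI this string would then be passed to a local
# GGUF model; the wording of the template here is illustrative only.

def build_enhance_instruction(user_prompt: str) -> str:
    """Build the instruction string handed to the local LLM."""
    return (
        "You are a prompt writer for a text-to-image model. "
        "Expand the following idea into one detailed, comma-separated "
        "image prompt covering subject, style, lighting and composition.\n"
        f"Idea: {user_prompt}"
    )

instruction = build_enhance_instruction("a fox in the snow")
print(instruction)
```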

    • @SebAnt
      @SebAnt 2 months ago

      @@kironlau Thank you.
      I had previously seen a video about Ollama and was planning to install it this weekend; now I'm wondering if Searge will suffice.

  • @VaiTag08
    @VaiTag08 2 months ago +3

    Can Searge LLM be used in img2img for flux? I want an LLM model that can read my input image and generate a prompt for img2img.

  • @Cu-gp4fy
    @Cu-gp4fy 2 months ago

    Thanks, appreciate the local and cloud option recos for those without the fancy hardware!

  • @ronnykhalil
    @ronnykhalil 2 months ago

    lovely, thanks for sharing! btw, how'd you get that pretty little workflow icon on the sidebar?

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Probably just the new UI. I show how to load it in the video if you don't have it already.

  •  2 months ago

    Very exciting solution, again!

  • @RodrigoAGJ
    @RodrigoAGJ 29 days ago

    Hello Sebastian, is there an alternative method to incorporate a positive prompt (clip text encoder) into this workflow to enhance the visual output?

  • @jonathanzeppa
    @jonathanzeppa 2 months ago

    Will you be doing a video on animation in Flux using ComfyUI? Most of the tutorials I've seen are using external websites, rather than a local machine.

  • @rodrimora
    @rodrimora 2 months ago

    Would adding something like "for a T5 encoder" improve the output even more for flux?

    • @Zuluknob
      @Zuluknob 2 months ago

      Try "FLUX-Prompt-Generator" on Hugging Face. You can select different LLMs in the right-hand generation window.

  • @kritikusi-666
    @kritikusi-666 1 month ago

    How did you add the height/width INT node in purple with "control_after_generate"? Is that a special node you need to install from the ComfyUI Manager? I keep seeing it in samples but cannot find it.

  • @SyamsQbattar
    @SyamsQbattar 17 days ago

    What app did you use? ComfyUI? Why doesn't my ComfyUI look like yours?

  • @eduardmart1237
    @eduardmart1237 2 months ago

    Does Fooocus do something similar when expanding your prompts?

  • @baheth3elmy16
    @baheth3elmy16 2 months ago +1

    I wonder if you ran into the llama.dll error and how you resolved it. There is no resolution or fix on the GitHub page for that node.

  • @LouisGedo
    @LouisGedo 2 months ago

    👋 Looking forward to this video

  • @joromask66
    @joromask66 2 months ago +1

    🙌🙌🙌🙌

  • @hsuan2323
    @hsuan2323 2 months ago +1

    Flux loves long prompts? I'm always cutting my prompts shorter and shorter until I stop getting this weird error: "RuntimeError: stack expects each tensor to be equal size, but got ...". I can't figure out what it means, but shortening the prompt a little usually fixes it; if not, shortening it some more does.

  • @alg4668
    @alg4668 2 months ago

    You can do more with the Long-CLIP ComfyUI node - it raises the max token length from 77 to 248.

  • @mrbendy
    @mrbendy 2 months ago +3

    I want to try this so badly, but I can't get Searge-LLM to install. I get the message "(IMPORT FAILED) Searge-LLM for ComfyUI v1.0" in my Manager. When I load your workflow I get missing node types for Searge_Output_Node and Searge_LLM_Node.
    Has anyone had this and found a fix?

    • @christofferbersau6929
      @christofferbersau6929 1 month ago

      I get it too

    • @JarvisDroid-o5t
      @JarvisDroid-o5t 11 days ago +1

      For me, the install fails because I've configured my conda env with Python 3.12... but Searge-LLM is only compliant with Python 3.10 or 3.11.

  • @Gaby-om5rd
    @Gaby-om5rd 2 months ago

    Hello Sebastian, is it possible to load an LLM with an image and have it captioned ChatGPT-style? Or any method that captions images similarly to ChatGPT, but free or as cheap as possible? Thanks.

  • @SimpleTechAI
    @SimpleTechAI 2 months ago +1

    So I'm still missing this... CheckpointLoaderNF4 - where is this?

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Update to the latest Comfy. There's an NF4 guide on my channel too, btw.

  • @henroc481
    @henroc481 2 months ago

    What server service do you use to run Comfy, if any?

  • @hatuey6326
    @hatuey6326 2 months ago

    huge thanks !!

  • @dustinjohnsenmedia1889
    @dustinjohnsenmedia1889 2 months ago +3

    I got an error: "The procedure entry point ggml_backend_cuda_log_set_callback could not be located in the dynamic link library"

    • @excido7107
      @excido7107 2 months ago

      As did I

    • @nikgrid
      @nikgrid 2 months ago

      Yeah same

    • @THbeto8a
      @THbeto8a 2 months ago

      same

    • @Melike-oh1ir
      @Melike-oh1ir 1 month ago

      Did you solve it?

    • @excido7107
      @excido7107 1 month ago

      @@Melike-oh1ir No, sorry, I haven't tried for a while.

  • @SimpleTechAI
    @SimpleTechAI 2 months ago

    Also how did you get your manager to stick across the top? Thanks.

  • @MustRunTonyo
    @MustRunTonyo 2 months ago

    Tutorial on how to create the thumbnail pic? It's gorgeous!

  • @ghr1965
    @ghr1965 2 months ago

    I have an error when loading the workflow. It is related to the CheckpointLoaderNF4 node. "When loading the graph, the following node types were not found:
    CheckpointLoaderNF4"

  • @gohan2091
    @gohan2091 2 months ago

    How does that Mistral LLM compare to Florence large?

  • @jakkalsvibes
    @jakkalsvibes 2 months ago

    Thanks for the great videos! It's odd how the new Flux Model can generate explicit content without issue, but when it comes to something as simple as showing the middle finger, it always ends up with the index finger instead. And what's this thing about the Flux female chin? Does anyone know how to crack this so it works as intended?

  • @deandresnago2796
    @deandresnago2796 1 month ago

    Hello, when I run it, it seems to go through, but I don't see the output in the output text field?

  • @vandaloart7131
    @vandaloart7131 1 month ago

    How do I add the Mistral model, or any other model? That's the only thing I'm missing.

  • @eduardmart1237
    @eduardmart1237 2 months ago

    How much VRAM do you get?

  • @GenoG
    @GenoG 2 months ago

    Thanks!

  • @ElSarcastro
    @ElSarcastro 2 months ago

    Now I just hope this makes its way into Forge.

    • @GenoG
      @GenoG 2 months ago

      I finally had to move over to ComfyUI... I resisted forever because it seemed so ridiculous, but now that I'm using it, I really like it! I use it for Flux, Pony and XL... I had never tried Pony or XL, but in Comfy they are really easy to use... It only took a couple of days, and there are TONS of workflow examples so you don't have to reinvent the wheel! So, my advice: jump in, the Comfy water is... Comfy!! @sebastiankamph, see what I did there! 😛

  • @therookiesplaybook
    @therookiesplaybook 2 months ago

    This is cool, but you can't edit the created prompt afterwards. If you love the prompt it creates but want to change the subject or a single word, you can't. You have to copy and paste it into the previous node, edit it there, then bypass the LLM node, then generate. So not impossible, but an extra step.

  • @earthequalsmissingcurvesqu9359
    @earthequalsmissingcurvesqu9359 2 months ago

    Crystools doesn't show up in my top bar. How did you get that working? Thanks.

  • @douchymcdouche169
    @douchymcdouche169 2 months ago

    How did you get the system usage stats on top of the menu bar?

  • @chenlin322
    @chenlin322 2 months ago

    love you so much, Seb

  • @JohnVanderbeck
    @JohnVanderbeck 2 months ago

    Does anyone know how I would map this llm_gguf folder in the extra_model_paths.yaml? Is it just that key?
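
On the extra_model_paths.yaml question above, a minimal sketch under stated assumptions: the section name and paths are placeholders, and the "llm_gguf" key assumes that is the folder name the Searge node registers with ComfyUI — verify against the node's source before relying on it.

```yaml
# Hypothetical entry for extra_model_paths.yaml; "my_models" and the
# paths are placeholders. The "llm_gguf" key is an assumption about
# which folder name the Searge node looks models up under.
my_models:
    base_path: D:/ai/models/
    llm_gguf: llm_gguf/
```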

  • @SimpleTechAI
    @SimpleTechAI 2 months ago

    I figured it all out myself, but it doesn't work for me. I had to add a checkpoint loader because yours was red and said undefined, and while it does generate the new prompt, after that it spits out a whole list of size-mismatch errors, so probably not for me. Thanks.

    • @sebastiankamph
      @sebastiankamph  2 months ago

      It's built on Flux, in this instance NF4

  • @xyy2759
    @xyy2759 2 months ago +1

    These two Searge nodes are a great addition. I integrated them into a one-LoRA Flux workflow with flux1-dev-Q8_0.gguf + t5-v1_1-xxl-encoder-Q8_0.gguf + Mistral-7B-Instruct-v0.3.Q4_K_M.gguf, and it works: 5 s/it, 1.25 min to generate. Thank you.

    • @zRegicideTVz
      @zRegicideTVz 2 months ago +3

      Can you share that WF?

  • @Alchete
    @Alchete 2 months ago

    You should do another Seb Ross Discord weekly challenge video, but this time with Flux. I really enjoyed those.

    • @sebastiankamph
      @sebastiankamph  2 months ago +1

      Thank you for the suggestion! I'll try again and see how the views are for those nowadays :)

  • @Rustmonger
    @Rustmonger 2 months ago

    Hey man, great as always, but one thing I think a lot of people would love to see is a straightforward Flux LoRA training tutorial. Is that in the works?

  • @idoshor4470
    @idoshor4470 2 months ago

    Guys, I know it's not part of this topic, but I've tried to get answers everywhere and couldn't find anything... please help if you have a free moment, thanks 🙏🙏🙏😬:
    I tried installing ComfyUI_UltimateSDUpscale through the Manager, updating it, manually installing it through Git, and downloading the raw files and placing them in the correct folder, but all methods failed. The node is considered missing in Comfy and the installation fails.
    Does anyone else have this problem? Maybe after a recent ComfyUI update or something?
    Thanks.

  • @themarlez
    @themarlez 2 months ago

    Doesn't work. It gets stuck trying to download a 312 MB file through Git.

  • @poldilite
    @poldilite 2 months ago

    For me, Searge is not loading in Think Diffusion :(

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Sorry to hear that. Go hop on their Discord, there's a very active support chat there.

  • @ThoughtFission
    @ThoughtFission 2 months ago

    How does it compare to Florence2?

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Haven't done a comparison, but you can load any .gguf LLM.

  • @neme-ye5kl
    @neme-ye5kl 2 months ago +1

    Pleeeeeaaaaase let someone put this into Forge, pleeeeeaase!

  • @arothmanmusic
    @arothmanmusic 2 months ago

    "Octopuses." ;)

  • @naeemulhoque1777
    @naeemulhoque1777 2 months ago

    What are your system specs?

    • @sebastiankamph
      @sebastiankamph  2 months ago

      RTX 4090 with 24 GB VRAM, and 64 GB RAM.

    • @naeemulhoque1777
      @naeemulhoque1777 2 months ago

      @@sebastiankamph Thanks. 1 GPU or 2?

    • @sebastiankamph
      @sebastiankamph  2 months ago

      @@naeemulhoque1777 1. Not much use for 2 as of yet. I mean you CAN, like in Swarm etc., but it's really not very useful.

  • @LouisGedo
    @LouisGedo 2 months ago

    👋

  • @nishu4288
    @nishu4288 2 months ago +1

    Best AI YouTuber... never asks for Patreon for workflows like most others do.

    • @sebastiankamph
      @sebastiankamph  2 months ago +1

      Thank you, very kind! But some of my posts are locked even if this wasn't ;)

  • @msampson3d
    @msampson3d 2 months ago

    It was just a few months ago that a ComfyUI node, allegedly for integrating LLMs into your workflow, turned out to execute malicious code on your machine.
    Be careful out there, folks.

    • @sebastiankamph
      @sebastiankamph  2 months ago +1

      Always be careful! This node is created by Searge who is a well known good guy in the community (and also a moderator in my discord). That's of course not a 100% guarantee, but it's almost as good as it can get on the internet I suppose.

  • @dkemil
    @dkemil 2 months ago +1

    It gives (IMPORT FAILED) on the latest ComfyUI.

    • @vaishnav7
      @vaishnav7 2 months ago

      Same for me, with something like this:
      Python.exe: Entry point not found
      The procedure entry point
      ggml_backend_cuda_log_set_callback could not be located in
      the dynamic link library
      C:\ComfyUI\venv\lib\site-packages\llama_cpp_cuda\lib\llama.dll.

    • @cfcrow
      @cfcrow 2 months ago

      @@vaishnav7 same

    • @sebastiankamph
      @sebastiankamph  2 months ago +1

      There are some troubleshooting tips on the official GitHub regarding the missing llama-cpp. Check that out: github.com/SeargeDP/ComfyUI_Searge_LLM/tree/main

    • @vaishnav7
      @vaishnav7 2 months ago

      @@sebastiankamph thankyou 🤍✌️

    • @dkemil
      @dkemil 2 months ago

      python -m pip install llama-cpp-python
      did the trick

  • @Radarhacke
    @Radarhacke 2 months ago +1

    "Generate a random image prompt" Oh no! More floods of images that fill the civitai database. LOL!

  • @bootinscoot5926
    @bootinscoot5926 2 months ago +1

    I'd love to see a video about how to use custom LoRAs for Flux or other models,
    because I have no idea how that works!
    Great video btw, subbed!

  • @gatwick127
    @gatwick127 2 months ago

    So what does this do exactly, in simpler terms? I'm wasted and don't have time to watch the whole video. Would appreciate it, thanks :)

  • @ShakenBacon.
    @ShakenBacon. 2 months ago

    This is a very informative video. I had no idea an LLM could integrate with Comfy. Concerning the use of other models, I seem to be getting a NotImplementedError for 4-bit quantization with any model other than the Flux NF4 models. I'm still researching this on my machine, but it could be related to me using Comfy through SwarmUI.

    • @ShakenBacon.
      @ShakenBacon. 2 months ago

      Solved it. I feel dumb. I didn't notice that CheckpointLoaderNF4 was being used in the workflow.

  • @AlexanderGarzon
    @AlexanderGarzon 2 months ago

    I prefer the Ollama node.

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Why do you prefer it? 😊

    • @AlexanderGarzon
      @AlexanderGarzon 2 months ago

      @@sebastiankamph It has more options; also, the Ollama service can run on a different computer, saving you VRAM.
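
That remote setup can be sketched against Ollama's REST API (it listens on port 11434 by default, and /api/generate accepts a JSON body with model, prompt, and stream fields); the host address and model name below are assumptions, and the actual network call is left commented out since it needs a running server:

```python
import json
from urllib import request

# Sketch of calling a remote Ollama service, as the comment describes:
# the LLM runs on another machine, saving local VRAM. The host below is
# a hypothetical LAN address; 11434 is Ollama's default port.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """JSON body for Ollama's /api/generate endpoint (streaming off)."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_payload("mistral", "Describe a cozy cabin in winter.")
body = json.dumps(payload).encode()

# Uncomment to actually query the remote server:
# req = request.Request(OLLAMA_URL, data=body,
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
print(payload["model"])
```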

  • @2008spoonman
    @2008spoonman 2 months ago

    Where is the creative input?! So you type two or three words and... that's it.
    Not sure if I like this way of working.

  • @alexblrus9825
    @alexblrus9825 2 months ago

    again flux... okay

    • @sebastiankamph
      @sebastiankamph  2 months ago

      You can run it for all the models, but it's extra powerful for Flux specifically.

  • @gilgamesh.....
    @gilgamesh..... 2 months ago

    But then you aren't even writing the prompt. One way AI art still takes imagination and effort is in figuring out the prompts. This just makes it as lazy as people that are against AI art say it is. Now you don't even have to think.

  • @yinodiaz4290
    @yinodiaz4290 1 month ago

    flux1-dev-bnb-nf4 (from Hugging Face) is needed,
    if anyone is wondering why it isn't working.