Using Llama3.2 to "Chat" with Flux.1 in ComfyUI (8GB+ VRAM)

  • Published: 2 Feb 2025

Comments • 86

  • @grahamulax
    @grahamulax 3 months ago +1

    This is the video I've been waiting for! Downloaded 3.2 like a week ago and sat on it. FINALLLLLLY THE TIME HAS COME!

    • @onlinehorseplay
      @onlinehorseplay 2 months ago

      4 weeks later can we get a quick count on pngs with 'emma watson' in the name? I'm doing a comparison as a sanity check

  • @MarceloPlaza
    @MarceloPlaza 3 months ago +1

    Great workflow, thanks for sharing.

  • @urbanthem
    @urbanthem 3 months ago +4

    Hello! Been following you from the start, but this is straight up amazing.

  • @scobelverse
    @scobelverse 3 months ago +1

    this was really well done

  • @Larimuss
    @Larimuss 3 months ago +5

    Hmm, with my meager 4070 Ti 12GB VRAM, wouldn't it be better to use a GGUF Llama in RAM so the image gen doesn't compete with Llama? Or does it load into RAM every time you queue? I'm guessing a GGUF might not be out yet for this model though.

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      With 12GB you'd want a GGUF that is less than 9GB. With a 1GB Llama 3.2, that should probably fit in!

    • @NerdyRodent
      @NerdyRodent 3 months ago

      @@malditonuke Yup, the small size of llama makes it great!
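
The VRAM budgeting in the reply above can be sanity-checked with quick shell arithmetic. The numbers are illustrative, taken from the thread (a 12GB card, a Flux GGUF under 9GB, a roughly 1GB Llama 3.2 quant); real usage also depends on CLIP, the VAE, and runtime overhead.

```shell
# Rough VRAM headroom estimate (illustrative figures from the thread, in GB)
total=12       # card VRAM
flux_gguf=9    # quantized Flux model
llama=1        # small Llama 3.2 quant
echo "headroom: $((total - flux_gguf - llama)) GB"   # prints: headroom: 2 GB
```

In practice you would pick the largest GGUF quant that keeps this headroom positive.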

  • @juanjesusligero391
    @juanjesusligero391 3 months ago

    Oh, Nerdy Rodent, 🐭🎵
    he really makes my day, ☀😊
    showing us AI, 💻🤖
    in a really British way. ☕🎶

  • @p_p
    @p_p 2 months ago

    I installed Ollama from the website with the exe installer, not really sure if this is running locally or not 🙄🙄

    • @NerdyRodent
      @NerdyRodent 2 months ago

      Yup, as it's installed & running on your PC it's running locally! 👍🏼
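
A quick way to confirm Ollama really is local: its HTTP API listens on localhost (port 11434 by default), so a request like this never leaves your machine. Adjust the port if you changed it.

```shell
# Ping the local Ollama API; a reply means it is running on this machine.
if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is serving locally"
else
  echo "Ollama not reachable - start it (e.g. 'ollama serve') or re-run the installer"
fi
```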

  • @devnull_
    @devnull_ 3 months ago +2

    Thanks! I thought that rat was some Gordon Freeman wannabe :D

  • @PugAshen
    @PugAshen 3 months ago +2

    Great way for a nice workflow. But like many others have mentioned, it will not even open. Clean install of ComfyUI, installed the packages mentioned on your page, but unfortunately nothing happens. Any chance of a checkup?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      Start by clicking “update all” in manager and restarting to ensure you’ve got the latest version! Should be Oct 12 as a minimum

  • @MilesBellas
    @MilesBellas 3 months ago +3

    Open source is becoming amazing!
    NR works in R&D at "The Mouse"?

  • @build.aiagents
    @build.aiagents 3 months ago +1

    Phenomenal

  • @rifz42
    @rifz42 3 months ago

    I found this video "Ollama does Windows?!? Matt Williams" that helped get Ollama working, and I was able to use the workflow. I learned a lot getting it going.

  • @quercus3290
    @quercus3290 3 months ago +2

    it almost looks like auto1111, well done.

  • @amkire65
    @amkire65 3 months ago +8

    I quite like that complicated messy version of ComfyUI... makes me look clever knowing how to use it if anyone sees me working on some images. :) I'll certainly give this a try once I fix my computer.

    • @Bicyclesidewalk
      @Bicyclesidewalk 3 months ago

      Yeah, I like the original ComfyUI as well...yet to sail into these uncharted waters...lol~

  • @hungi
    @hungi 3 months ago

    24GB VRAM recommended -- what's the equivalent on Apple silicon? Would an M3 with 16GB RAM suffice for this exact model?

    • @NerdyRodent
      @NerdyRodent 3 months ago

      I have no idea about Mac stuff, but your best bet for anything AI is Linux + Nvidia!

  • @lucvaligny5410
    @lucvaligny5410 3 months ago +1

    Can't get the API LLM general link to work, while the basic WF started with Ollama from LLM-party is working, but there's so little explanation of how it works, it's a pity.
    I had an error first loading the Rodent WF, but everything fell into place after installing the missing nodes.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      If you want to change from the LLM party node API loader, you can check out the GitHub page for more information on whichever options you’d like to use instead. It does indeed support a lot 😊

  • @MohammedAli-tq8ln
    @MohammedAli-tq8ln 3 months ago

    Very useful!

  • @DaveTheAIMad
    @DaveTheAIMad 3 months ago +1

    Can it be used with Textgen webui? Ollama is awful lol, no way to use it across a network, it won't load your already downloaded LLMs without converting and duplicating them, and it's a pain to set it up for a new folder.
    I love your videos and find them informative, though it does seem you're trying to turn ComfyUI into auto1111 lol. Complexity is not as much an enemy as tools that over-simplify can be... though perhaps that's just a personal standpoint.

  • @freestylekyle
    @freestylekyle 3 months ago

    I had some problems getting it to work; I did an update and refresh but no go. In the end I gave ChatGPT the output and asked it how to fix the errors. Now I've got it going, so maybe give that a try if you're having problems.

  • @ajedi6127
    @ajedi6127 3 months ago +17

    Waaaiiiit a second, you're telling me that comfyUI is now actually comfortable to use?...impressive.

    • @IdRadical
      @IdRadical 3 months ago

      ComfyUI is great, it forces people to learn, and if you've had the pleasure of trying to run Flux on ComfyUI with an AMD GPU you'll know there was plenty of support on GitHub and Hugging Face, with users helping each other. We should give thanks to the coder friends we made along the way. Python is awesome, and with this shift in the number of users in the AI game, we are forced to step our game up.

  • @dkamhaji
    @dkamhaji 3 months ago

    Is there a setting to see the sampling progress as it's happening, so that you can cancel it if it's not what you want? Not sure if it's the custom advanced sampler node that doesn't show you the progress.

  • @NeptuneGadgetBR
    @NeptuneGadgetBR 3 months ago

    Hi, you've got a subscriber here, congratulations on the amazing work. I have an issue: after upscaling it doesn't look perfect, the edges are a bit blurry. Any idea how to solve it? I enabled hi-res... but it didn't fix the issue.

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      Hires mode will do the upscale for you, yes

  • @fullflowstudios
    @fullflowstudios 3 months ago +1

    This is sooo nerdy and sooooo weird. The workflow you show is nowhere to be found in my Comfy install when I browse the 4 templates that are offered. What miracle do you perform to load this new layout into the program?

  • @DezorianGuy
    @DezorianGuy 3 months ago

    Can I switch the LLAMA 3.2 and use another variant of the 3.2 models?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      Yup. Press 2 to go to the LLM settings and change there like in the video!

    • @DezorianGuy
      @DezorianGuy 3 months ago

      @@NerdyRodent What exactly should I change? Which node?

  • @saltygamer8435
    @saltygamer8435 1 month ago

    Can you create a new Docker image for this on RunPod, please?

    • @NerdyRodent
      @NerdyRodent 1 month ago

      I would imagine so. Go for it!

  • @onlinehorseplay
    @onlinehorseplay 2 months ago

    Thanks for using the quotes, and I feel like all things DIY AI should be in quotes, because you're gonna git pipped as a Windows newb (I learned most of it here tho)

  • @antonpictures
    @antonpictures 3 months ago +1

    I wish I had the hardware

  • @purposefully.verbose
    @purposefully.verbose 3 months ago

    So it works well, but it is loading this huge dev model every time... slowly, even on a 3090. Is there some hidden setting to keep it loaded?

  • @13-february
    @13-february 3 months ago

    I watched this video with interest and wanted to try this amazing LLM ability. Unfortunately, I have not been able to get your workflow (the one I found on Hugging Face) to work. Many nodes in it have no connections, and I had to guess where to attach them. I managed to connect some nodes, but some wouldn't cooperate; for example, the value slot of the Control Bridge node remained lit red. I had to set this whole group of nodes to bypass, because I don't even understand what they are for.
    After that, the workflow started working, but the generated image did not match the prompt at all, as if the sampler did not see the prompt, although the LLM group works properly and generates the desired text. I don't understand how to make the workflow work correctly. It would be very kind of you to fix the LLM workflow on Hugging Face.

    • @NerdyRodent
      @NerdyRodent 3 months ago +2

      If you change the way everything is connected then it will definitely work in unexpected ways! The best thing is to simply enter your prompt, and then press queue 😎

    • @13-february
      @13-february 3 months ago

      @@NerdyRodent It doesn't work because many nodes have lost connections. And the queue just stops and does not do anything. So can you please check the workflow and fix it?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      @@13-february To fix your environment try updating ComfyUI, or go with a fresh install!

    • @rifz42
      @rifz42 3 months ago

      @@13-february You have to work through the nodes one by one. I did, but it took a long time. Do you have the GGUF, hyper-flux, a VAE, and CLIPs 1 and 2 (ViT-L-14 text)?

    • @13-february
      @13-february 3 months ago

      @@rifz42 Yes, of course. I am well versed in ComfyUI, and I'm pretty sure the workflow is the problem. I downloaded the one called Flux-Simple-LLM_v0 from Hugging Face and installed all the missing nodes, and Ollama of course. But the problem is that some nodes' slots are not connected to anything. For example, the value slot of the Control Bridge node has no connections, and an error occurs during generation. In addition, some image and model connections were also missing in this workflow. Perhaps the author provides a correct workflow on Patreon, but the one that I found on Hugging Face is completely damaged.

  • @fionaliath6326
    @fionaliath6326 3 months ago

    Weirdly, when I attempt to load these workflows in ComfyUI, literally nothing happens. No errors, no load, just leaves whatever was already open there. Not even a message in the Terminal. I thought it might be my ComfyUI having too many old and/or conflicting nodes, so I did a clean reinstall of the portable version with only the Manager, but that didn't change anything. One of my workflows (in .png format) does load, but the .jsons I downloaded from the repository do absolutely nothing. I re-downloaded them, in case they broke somehow, but that didn't change anything either, and I opened them up to make sure they have contents, and they do (quite a lot). Dunno what's going on there, but it's too bad, because this workflow looks fun.

  • @MissingModd
    @MissingModd 3 months ago

    Nerdy's famous! Wow!

  • @randymonteith1660
    @randymonteith1660 3 months ago

    Error occurred when executing ImpactControlBridge:
    No module named 'comfy_execution'

    • @NerdyRodent
      @NerdyRodent 3 months ago

      My guess would be an old version of that custom node is installed. Click “update all” in manager to ensure you’re up to date!
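
If Manager's "update all" isn't available, a manual equivalent is a git pull per checkout. This is a sketch assuming git-cloned installs; the directory names below are examples, so point them at your own setup.

```shell
# Pull the latest ComfyUI core and the Impact Pack custom node.
# Directory names are examples - adjust to your own install.
for d in ComfyUI ComfyUI/custom_nodes/ComfyUI-Impact-Pack; do
  if [ -d "$d/.git" ]; then
    git -C "$d" pull
  else
    echo "skipping $d (not a git checkout here)"
  fi
done
```

Restart ComfyUI afterwards so the updated nodes are reloaded.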

  • @MilesBellas
    @MilesBellas 3 months ago

    Nvidia Sana test next?
    😊

  • @dracothecreative
    @dracothecreative 3 months ago

    Hey all, so I am using Pinokio for my ComfyUI stuff and everything works fine except Ollama. I installed it, but it's in my user files, and I installed it in Pinokio, but I still get this: Error code: 404 - {'error': {'message': 'model "llama3.2" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}. I know what to do, I just don't know the parent folder it needs to be in... anyone know?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      Did you try pulling it first like in the video?

    • @dracothecreative
      @dracothecreative 3 months ago

      @@NerdyRodent Yeah, and it works now; only Llama is in a weird folder, unrelated. Thanks!

    • @rifz42
      @rifz42 3 months ago

      @@NerdyRodent I have git installed but don't know where you ran the pull command, in what folder? I tried in my ComfyUI install folder and with the git cmd window.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      @@rifz42 git? No, it’s “ollama pull” like in the video 😉
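
To spell that out: `ollama pull` is an Ollama command, not a git one, and it can be run from any regular terminal in any directory, since Ollama manages its own model store. A guarded sketch:

```shell
# Fetch the model with the Ollama CLI (any working directory is fine).
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2   # download the model into Ollama's store
  ollama list            # confirm llama3.2 now appears
else
  echo "ollama is not on PATH - install it from ollama.com first"
fi
```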

  • @4thObserver
    @4thObserver 3 months ago

    Finally, spaghetti monster begone!

  • @phridays
    @phridays 3 months ago

    You lost me super big time in the first minute. I downloaded comfy portable on Windows, I installed manager, I see 0.2.3, and it opens in a web browser. I don't have any workflows top menu, no new buttons on the side, no way to hide the spaghetti. What did I miss?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      If you haven’t turned the beta interface on yet, you can do so in settings - ruclips.net/video/g8W3xe5kRBQ/видео.html

  • @nemonomen3340
    @nemonomen3340 3 months ago

    The use of Llama3.2 is interesting, but I don't think I'd ever use it. It doesn't really seem to do anything that actually improves the image quality or Flux's comprehension.

  • @Chogj5
    @Chogj5 3 months ago

    Would you be able to create a workflow in ComfyUI in which, at the very beginning, you add a photo of a character (let's say a photo in just underwear), then create a so-called bra-and-panties mask? From this mask, the bra and panties are created on a white background. And at the very end, add an upscale to these photos on a white background. I think it wouldn't be a problem for you, and you would really help me a lot.

  • @pink_fluffy_sky
    @pink_fluffy_sky 3 months ago

    I just updated ComfyUI and nothing changed. Do you have to turn on this new one-screen workflow thing somehow?

    • @NerdyRodent
      @NerdyRodent 3 months ago

      If you haven’t turned the beta interface on yet, you can do so in settings - ruclips.net/video/g8W3xe5kRBQ/видео.html

  • @davidberserker6625
    @davidberserker6625 3 months ago +1

    You should show the link connections of the nodes so we understand the workflow better; it doesn't make sense to show the nodes without the connections.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      No. The point of the workflow is that you don’t see or care about any of the spaghetti 😃

    • @davidberserker6625
      @davidberserker6625 3 months ago

      @@NerdyRodent Yes, I got that, but it would be better if you showed the links to explain the logic. I know it's a very simple workflow, but for people learning Comfy it would be convenient, even if at the end you hide all the links. Take it as constructive feedback please, just a thought for your future videos ;)

  • @LouisGedo
    @LouisGedo 3 months ago

    Hi 👋

  • @Wattsepherson
    @Wattsepherson 3 months ago

    So it's not complicated now because... umm.. because they put a screen in front of the complicated stuff so you don't see it but it's still there and if you want everything to work properly you still need to visit the complicated stuff otherwise the umm... the umm the front screen that is less complicated, won't work properly.....
    So that's like having a car with no shell... and everyone's saying it's really complicated to fix and maintain but then someone built a shell for it, to hide the engine and electronics and now everybody knows how to fix and tweak it because it's hidden....
    Excuse me for a moment whilst I go and stroke my beard and try to work out what this means.

  • @MilesBellas
    @MilesBellas 3 months ago +1

    ComfyUI quantized Pyramid Flow with the Flux iterative upscaler next? 😊

    • @MilesBellas
      @MilesBellas 3 months ago

      via PI
      To upscale and add details to the output of Pyramid Flow in ComfyUI with Flux, you can use the "Iterative Upscale" workflow. Here's a step-by-step guide on how to do this:
      1) Open ComfyUI and select the "Iterative Upscale" workflow.
      2) Set the "Base Image" to the output of Pyramid Flow that you want to upscale.
      3) Choose a suitable upscaler model, such as LDSR or Lanczos, and set the "Upscale Factor" to the desired value.
      4) In the "Add Detail" section, select a model such as "DVV D8" or "Enhance1024" to add additional details to the upscaled image.
      5) Adjust the "Prompt Weight" and "Prompt Text" to fine-tune the added details.
      6) Click on "Run Workflow" to generate the upscaled and detailed image.
      7) You can also add a "Loop" step to iteratively upscale and add details multiple times for even higher resolution and detail.

  • @RedDragonGecko
    @RedDragonGecko 3 months ago +3

    This channel used to be good. Now it just promotes workflows locked behind a paywall.

    • @sven1858
      @sven1858 3 months ago +3

      @@RedDragonGecko That's not strictly true with this one. Yes, he has a Patreon, not disputing that; I'm not a paying member. However, this workflow is freely available. Suggest watching and listening to the video.

    • @NerdyRodent
      @NerdyRodent 3 months ago +2

      Nope. Huggingface has no paywall! Supporters packs are available for those who want to support, so the choice is yours! 😃

  • @a.akacic
    @a.akacic 3 months ago +1

    🤦‍♂

  • @mossom
    @mossom 3 months ago

    I didn't get the 'reset view' option without adding "--front-end-version Comfy-Org/ComfyUI_frontend@latest" to the end of the launch line in run_nvidia_gpu.bat, for anyone missing it.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      Interesting, as it simply showed up for me when I started as usual! I take it you already had the new workflow and menu beta on?

    • @mossom
      @mossom 3 months ago

      @@NerdyRodent Yes, I was confused as to why it wasn't there. Not sure if it was just my install.
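
For anyone following the tweak in that thread: on the Windows portable build the launch line lives in run_nvidia_gpu.bat, and the flag is appended to it. The line below is a sketch of the edited file; your existing flags may differ.

```bat
rem run_nvidia_gpu.bat - append the front-end flag to the existing launch line
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@latest
pause
```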