Prompting Pixels
  • 47 videos
  • 173,246 views
How to Use Flux GGUF Files in ComfyUI
#comfyui #aitools #flux
Flux requires a lot of VRAM. However, with GGUF files you can reduce the amount of VRAM required while still achieving good results.
This video demonstrates how to add a GGUF file to your workflow in ComfyUI (a short file-placement sketch follows this entry).
Time Stamps
Intro - 00:00
Installing the ComfyUI-GGUF Custom Node - 0:05
Downloading GGUF Files (Flux & T5XXL) - 0:20
Where to Place GGUF Files - 1:17
Setting Up The ComfyUI Workflow - 1:40
Reviewing VRAM Usage - 3:50
Reviewing Results for Flux Q2 GGUF - 5:18
Workflow:
promptingpixels.com/flux-gguf/
Resources:
ComfyUI GGUF Custom Node - github.com/city96/ComfyUI-GGUF
FLUX.1 Dev GGUF - huggingface.co/city96/FLUX.1-dev-gguf
FLUX.1 Schnell GGUF - huggingface.co/city96/FLUX.1-schnell-gguf
T5-...
1,456 views
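
A minimal file-placement sketch for the video above, assuming a default ComfyUI checkout with the ComfyUI-GGUF custom node installed; the file names are illustrative stand-ins for whichever quantization you download:

    # quantized Flux UNet GGUFs go alongside the other diffusion models
    mv flux1-dev-Q4_K_S.gguf ComfyUI/models/unet/
    # quantized T5-XXL text-encoder GGUFs go with the CLIP models
    mv t5xxl-encoder-Q4_K_S.gguf ComfyUI/models/clip/

In the workflow itself, the regular checkpoint loader is then swapped for the GGUF loader nodes that the custom node adds, which is what the video walks through.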

Videos

How to do Soft Inpainting in ComfyUI
4.2K views • 2 months ago
#comfyui #aitools #stablediffusion Soft inpainting edits an image on a per-pixel basis, producing much better results than traditional inpainting methods. Here's how to do soft inpainting in ComfyUI. Time Stamps Intro - 00:00 Explaining Soft Inpainting - 0:06 Setting up the Workflow - 0:28 Reviewing Final Results - 3:31 Workflow: promptingpixels.com/soft-inpainting-in-comfyui/ Image Source: u...
How to Change Clothes with Precision in ComfyUI (IPAdapter & Grounding Dino)
1.4K views • 3 months ago
#comfyui #aitools #stablediffusion By combining IPAdapter and Grounding Dino, you can quickly segment and apply styles to any portion of an image. Here's a demo of how to change a shirt using AI: Time Stamps Intro - 00:00 Overview of the Final Workflow - 0:04 Installing IPAdapter & Segment Anything Nodes - 0:47 Connecting IPAdapter Nodes - 1:35 Connecting Segment Anything Nodes - 2:57 Reviewing Fin...
Quickly Generate Prompts with LLMs in Automatic1111 WebUI or Forge
1.1K views • 5 months ago
#stablediffusion #generativeart #promptengineering Fine-tuned large language models are a terrific way to expand prompts. This video goes over a few of the options available (see the sketch below for the Ollama route). Intro: 0:00 Prompt Generator: 0:09 ChatGPT: 4:37 Ollama: 9:04 Extensions: GPT-2: github.com/imrayya/stable-diffusion-webui-Prompt_Generator/tree/master ChatGPT: github.com/hallatore/stable-diffusion-webui-chatgpt-utilities Another...
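
A quick sketch of the Ollama route mentioned in the description above; the model name is an illustrative assumption, and any locally pulled model works the same way:

    # pull a local model once, then ask it to expand a short idea into a full prompt
    ollama pull mistral
    ollama run mistral "Expand this into a detailed Stable Diffusion prompt: a lighthouse at dusk"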
Turn 2D Images into 3D Depth Animations (Parallax) With Automatic1111 WebUI & DepthMaps
2.1K views • 5 months ago
#stablediffusion #generativeart #parallaxeffect Making a 2D image come to life through the use of depth maps is fairly straightforward. This video demonstrates how to do this using Automatic1111 WebUI / Forge and a custom extension. Extension used: github.com/thygate/stable-diffusion-webui-depthmap-script Blog post: medium.com/@promptingpixels/creating-3d-parallax-type-images-with-automatic111...
Are Inpainting Models Worth Using? Yes, Here’s Why…
1.4K views • 5 months ago
#stablediffusion #generativeart #inpainting Inpainting models perform well when editing existing images. In addition to inpainting-specific checkpoints, ControlNet inpainting is also quite powerful. This video explores how both approaches work. Model Card on Hugging Face: huggingface.co/runwayml/stable-diffusion-inpainting How to make an inpainting model: github.com/AUTOMATIC1111/stable-diffusion-webui...
How to Create MEANINGFUL AI Art
406 views • 5 months ago
#generativeart #aiart #aiartwork AI art is cheap and easy to make, but it also creates an endless amount of digital clutter. Here's why the creative process cannot be overlooked. Time Stamps Intro - 00:00 Why: 1:32 Value: 2:11 Inspire: 3:03 Tools: 4:08 Refine: 4:49 Revise: 5:28 Important links: - 👾 Discord: discord.gg/wWBUndrdTK - 🌐 Website: promptingpixels.com/ - 📸 Instagram: promp...
INSTANTLY Bring Your Imagination to Life with SDXL Lightning
754 views • 6 months ago
#stablediffusion #generativeart SDXL Lightning was released by ByteDance and provides incredible results in 2-, 4-, or 8-step generations. This video highlights how SDXL Lightning performs, where you can get fine-tuned models, and how you can add it to your workflow. Time Stamps Intro - 00:00 How to Get Lightning Models: 0:40 Running SDXL Lightning: 1:19 Fine Tuned Models: 2:43 Model Card ...
Unlock INFINITE Fashion Possibilities with AI
528 views • 6 months ago
#characterdesign #characterart #charactercreation There are many ways you can quickly change a character's outfit using a diffusion model, including inpainting, LoRAs, ControlNets, and prompt modification. This video walks through each of these methods. Intro: 0:00 Inpainting: 0:07 LoRA: 1:21 ControlNet: 2:39 Prompt Modification: 4:31 Download Styles: promptingpixels.com/stable-diffusion-styles/ ...
OCD-Friendly Disk Space Management For Stable Diffusion
899 views • 6 months ago
#aitools #stablediffusion Stable Diffusion assets, including models, upscalers, ControlNets, and much more, can quickly fill up the disk space on your computer. This video goes over various methods you can use to streamline file management and save disk space (a symlink sketch follows this entry). Time Stamps Intro: 0:00 Native Solutions: 0:08 Symbolic Links: 1:51 Storage Location: 3:05 Outro: 3:40 Companion Article (Provides Addition...
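
A minimal sketch of the symbolic-link approach covered in the video, with illustrative paths; the idea is to keep one shared models folder and link it into each UI instead of duplicating multi-gigabyte checkpoints:

    # Linux/macOS: move or remove the original folder first, then link the shared one in its place
    ln -s /data/sd-models/checkpoints ~/ComfyUI/models/checkpoints
    # Windows (elevated Command Prompt): mklink /D creates a directory symlink
    mklink /D "C:\ComfyUI\models\checkpoints" "D:\sd-models\checkpoints"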
ComfyUI Workflows: How To Find, Manage, And Share Easily!
3.5K views • 7 months ago
#comfyui #aitools #stablediffusion Workflows allow you to be more productive within ComfyUI. This video shows you where to find workflows, how to save and load them, and how to manage them. Time Stamps Intro: 0:00 Finding Workflows: 0:11 Non-Traditional Ways to Find Workflows: 0:54 Saving / Loading Workflows: 1:27 Workflows for Power Users: 1:58 Outro: 3:07 Workflows promptingpixels.com/comfyui-workflows/...
Discover the Secret to Quick, Effective Image Prompts
499 views • 7 months ago
#aiart #aitools #automatic1111 Styles allow you to quickly build prompts to generate better images. This video demonstrates how to use styles and how to create new ones to add to your workflow. Download Styles: promptingpixels.com/stable-diffusion-styles/ Important links: - 👾 Discord: discord.gg/wWBUndrdTK - 🌐 Website: promptingpixels.com/ - 📸 Instagram: promptingpixel...
Inside Sora: OpenAI's Jaw-Dropping Text-to-Video Breakthrough
353 views • 7 months ago
#openai #aivideoart #aitools OpenAI announced Sora, its brand-new video model that produces unbelievable outputs from text prompts, images, and existing videos. Here are the initial reactions to the new model, a review of the outputs, and what this means for the future of media consumption. Important links: - 👾 Discord: discord.gg/wWBUndrdTK - 🌐 Website: promptingpixels.com/ - 📸 Instagram: inst...
Expand Your Horizons: Mastering Outpainting with ControlNet!
689 views • 7 months ago
Stable Cascade Just Announced! First look
3.6K views • 7 months ago
How to Fix Bad Faces Within ComfyUI: ADetailer Alternative
12K views • 7 months ago
Unlock the Power of Comfy UI - Manager Installation Guide!
617 views • 7 months ago
Stable Video Diffusion v1.1 First Look
3.6K views • 7 months ago
Simple Outpainting With ComfyUI
13K views • 7 months ago
How to Use Frame Interpolation in ComfyUI for Fluid Animations
7K views • 7 months ago
How to Use AnimateLCM in ComfyUI
4.9K views • 7 months ago
Quick and EASY Inpainting With ComfyUI
8K views • 7 months ago
Comparing 3 Upscaling Methods for AnimateDiff in ComfyUI
2.3K views • 7 months ago
How to Add ControlNet & AnimateDiff Together in ComfyUI
4.2K views • 7 months ago
Easy Face Swaps in ComfyUI with Reactor
10K views • 7 months ago
Txt2Vid Made Easy with ComfyUI & AnimateDiff
3.4K views • 7 months ago
How To Do img2img Within ComfyUI - Beginner's Guide
15K views • 7 months ago
How To Do "Hires Fix" In ComfyUI
13K views • 8 months ago
How to Install ComfyUI Mac (M1/M2/M3): Step-by-Step Guide
14K views • 8 months ago
Installing AI Horde on Windows: Step-by-Step Guide
824 views • 8 months ago

Comments

  • @JuanPabloZamudio-q8m
    @JuanPabloZamudio-q8m 1 day ago

    Thanks!!! I was going crazy because neither the Load Diffusion Model node nor the checkpoint loader was detecting the model. I'm new to ComfyUI.

  • @DschungelKatze
    @DschungelKatze 3 days ago

    Unfortunately this is nowhere near a real hires fix, but it'll do for now. Thanks for a great explanation. Update: the ComfyUI docs have a workflow for hires fix with a couple of extra steps.

  • @rocren6246
    @rocren6246 7 days ago

    If you, like me, ran into an "Allocation on device **" torch.OutOfMemoryError, go to your [SAMLoader] node and set device_mode to CPU.

  • @SylvainBergeon
    @SylvainBergeon 7 days ago

    Thanks, it took me only 30 minutes to get it working and outputting my first image.

  • @captainayaan
    @captainayaan 8 days ago

    Thanks very much for the tutorial, really helpful and easy to follow step by step!

  • @tmquan199
    @tmquan199 8 days ago

    I got this error, does anyone know how to fix it? TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype. Prompt executed in 100.55 seconds huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
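
The pasted warning already names one fix: set TOKENIZERS_PARALLELISM explicitly before launching. The TypeError itself says the MPS backend has no Float8_e4m3fn support, so on Apple Silicon a non-fp8 weight choice (fp16 weights or a GGUF quant, for example) is likely needed as well. A minimal sketch, assuming a standard ComfyUI checkout started with python main.py:

    # silence the tokenizers fork warning, as the message itself suggests
    export TOKENIZERS_PARALLELISM=false
    python main.py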

  • @5323h
    @5323h 8 days ago

    I'm used to generating dozens of small images first, selecting the satisfactory ones, using SD's PNG Info to read the generation info from the small images, and then using hires fix to enlarge them. How do I do that in ComfyUI?

  • @kajukaipira
    @kajukaipira 12 days ago

    Thanks, mate.

  • @SeanieinLombok
    @SeanieinLombok 12 days ago

    Any idea how to use Flux for outpainting with Fooocus, going from an irregular base image size to a fixed canvas edge? I.e., I have a bunch of cut-outs from other generations I've done. I messed up by placing a crazy frame that I now no longer want, so I batch-cut the middle part of the images out, and I want to expand them back to the base canvas size of 512 x 768, but I am stuck! All the outpaintings go to one side of a rectangle and by a fixed number of pixels... Plus none of the outpaintings will work with Flux (they can work with SDXL, and I have Fooocus inpainting, which I think would help...). Any pointers or a video would be really appreciated.

  • @Geffers58
    @Geffers58 12 days ago

    On your website you set your final denoise to 0.60, but this results in no changes at all! I see you have it at 1.0 here.

  • @DonClassifieds
    @DonClassifieds 13 days ago

    How do you install torch==2.3.1 torchvision==0.18.1?

  • @DonClassifieds
    @DonClassifieds 13 days ago

    :( /ai-art/stable-diffusion-webui/venv/bin/python" -m pip install torch==2.3.1 torchvision==0.18.1 Error code: 1
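
A hedged sketch of how that pinned install is usually done, reusing the venv path from the comment above; the CUDA index URL (cu121) is an assumption, so match it to your installed CUDA version or drop it on CPU/macOS:

    # upgrade pip inside the WebUI venv, then install the matching torch/torchvision pair
    "/ai-art/stable-diffusion-webui/venv/bin/python" -m pip install --upgrade pip
    "/ai-art/stable-diffusion-webui/venv/bin/python" -m pip install torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu121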

  • @susanguo7578
    @susanguo7578 13 days ago

    Thank you so much for your tutorial! It's really helpful.

  • @eucharistenjoyer
    @eucharistenjoyer 13 days ago

    Great video. Is your search bar a custom plugin? It seems different from the default one.

    • @PromptingPixels
      @PromptingPixels 13 days ago

      It's funny - I had to step away from ComfyUI for a month or two, then noticed it changed after running a git pull and it threw me off for a second. Haven't dug deeper into it - but perhaps related to this release from a few weeks ago: github.com/comfyanonymous/ComfyUI/releases/tag/v0.1.0

  • @Joejitsu-101
    @Joejitsu-101 14 days ago

    absolute genius. I'm a computer idiot and I still got it working. Thanks.

  • @phorestro
    @phorestro 15 days ago

    Very helpful, thanks! 155 frames in ~5 minutes on an RTX 4060 Ti 16 GB. Fantastic.

  • @tornadowatcher-my5ej
    @tornadowatcher-my5ej 15 days ago

    What folder do I put the models in for the Ultralytics detector?
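
For what it's worth, the usual Impact Pack layout (an assumption here, worth checking against its README) is a models/ultralytics folder split into bbox and segm subfolders; the detector file name below is illustrative:

    mkdir -p ComfyUI/models/ultralytics/bbox ComfyUI/models/ultralytics/segm
    mv face_yolov8m.pt ComfyUI/models/ultralytics/bbox/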

  • @hiurylurian6539
    @hiurylurian6539 15 days ago

    Why are you deleting my comment?

  • @edieleonhart9396
    @edieleonhart9396 16 days ago

    Thanks bro!

  • @HPMichalke
    @HPMichalke 17 days ago

    Thank you so much for your tutorial! That is a great starting point.

  • @Stef_frenchtouch
    @Stef_frenchtouch 17 days ago

    For Mac users: DrawThings is optimized for Mac. With Flux + ComfyUI it takes about 45 minutes for one image; with DrawThings, about 2 minutes.

  • @ianfinch294
    @ianfinch294 17 days ago

    Hi, I have a question: can I change the width and height in an image-to-image workflow?

  • @isaacslayer
    @isaacslayer 18 days ago

    Interesting video. I'm an Automatic1111 user and I would like to fix some images of video game characters using LoRAs. My question is: will FaceDetailer respect the character I used with LoRAs, or will it give me a totally different face? That's my doubt, and that's why I haven't let go of Automatic1111; ADetailer helps me a lot with that, but I'd like to do the same in ComfyUI and make more complex images here.

  • @ODINCODES
    @ODINCODES 20 days ago

    Thank you so much for the tutorial. The only problem is that it changes the face as well when adding the glasses.

    • @Bpmf-g3u
      @Bpmf-g3u 10 days ago

      You can work on prompts to generate more precise pictures. I found MimicPC helped me generate the images I wanted.

  • @Jeanjean-nq8lk
    @Jeanjean-nq8lk 20 days ago

    Thanks for your work. I tried to install "prompt generator" on Forge, but it doesn't appear in the menu. Any chance you have an answer?

  • @TentationAI
    @TentationAI 21 days ago

    Hi, in A1111 we can tweak settings like Schedule bias, Preservation strength, Transition contrast boost, etc., which preserve the inpainted area, but with the node in the video we have zero control over the soft inpainting. Is there a solution?

  • @idoshor4470
    @idoshor4470 22 days ago

    Hey, I tried installing ComfyUI_UltimateSDUpscale through the Manager, updating it, installing it manually through Git, and downloading the raw files and placing them in the correct folder, but all methods failed. The node is considered missing in ComfyUI and the installation fails. Does anyone else have this problem, maybe after a recent ComfyUI update or something? Thanks.
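
A hedged sketch of the manual route, assuming the usual custom_nodes layout; the --recursive flag matters here because this node pulls in the original Ultimate SD Upscale script as a submodule, and skipping it can leave the node half-installed:

    cd ComfyUI/custom_nodes
    git clone --recursive https://github.com/ssitu/ComfyUI_UltimateSDUpscale
    # restart ComfyUI afterwards so the node registers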

  • @SeanOdyssey
    @SeanOdyssey 22 days ago

    Thankz yow <3

  • @ImmortalShiro
    @ImmortalShiro 23 days ago

    Thanks for the quick and straightforward tutorial. I just want to ask: the node says FaceDetailer, but will it still work if I change the Ultralytics model to hand, person, etc., or anything other than face? TIA

  • @PlayGameToday
    @PlayGameToday 24 days ago

    How can I set the hires_steps param when upscaling? This video is useless for me...

  • @MrRandomnumbergenerator
    @MrRandomnumbergenerator 24 days ago

    amazing video

  • @freakguitargod
    @freakguitargod 26 days ago

    Quick and simple, I love it. Thank you so much for this!

  • @merodack3721
    @merodack3721 29 days ago

    I recently saw a video about a product photographer who implemented ComfyUI in his workflow: he basically takes a photo tethered to his PC, then ComfyUI automatically uses the photo, removes the background, and adds all the details for a "finished" image. I found this mind-blowing. Do you have any advice on where I can start with AI (in this regard) so I can eventually give ComfyUI a try? Your advice and time are very much appreciated.

  • @vijayeditzz-nt3lq
    @vijayeditzz-nt3lq 1 month ago

    How do I remove sunglasses from a face and swap in a face from a picture without sunglasses, so the eyes show? ReActor face swap isn't working properly on an image wearing sunglasses.

  • @faaf42
    @faaf42 1 month ago

    Thanks, clear and to the point.

  • @thebullhornstreamer4742
    @thebullhornstreamer4742 1 month ago

    My name is Shawn too, same spelling :)

  • @vivekkarumudi
    @vivekkarumudi 1 month ago

    Thanks a lot, that fixed mine.

  • @6mystique
    @6mystique 1 month ago

    Hey, thank you for this tutorial. However, I'm a complete beginner, and from the third minute I really don't know what you're talking about or what you're doing. Everything was so clear to me before, but later... you do something I completely can't follow. Ehh.

  • @seis6-r8u
    @seis6-r8u 1 month ago

    Great tutorial, thanks! Could you recommend a Mac setup comparable to a desktop PC with an RTX 4090?

  • @KlausMingo
    @KlausMingo 1 month ago

    Why do they call it a hi-res fix? It's not fixing anything, it's just normal upscaling.

  • @Mr_Mazed
    @Mr_Mazed 1 month ago

    Super simple tutorial, thanks.

  • @TheChyamp
    @TheChyamp 1 month ago

    Thanks. Are you going to make one for SD3?

    • @PromptingPixels
      @PromptingPixels 1 month ago

      Err, given how bad the reception to SD3 was, I think that needs to be held off until Stability fixes it (stability.ai/news/license-update). Instead, Flux, which was recently released, seems to be the much better successor, as it comes from many folks on the original Stable Diffusion team (github.com/black-forest-labs/flux).

    • @TheChyamp
      @TheChyamp 1 month ago

      @PromptingPixels absolutely correct. I actually switched to Flux a few days after. Night and day

  • @tosvus
    @tosvus 1 month ago

    Any way to use this with Flux? My workflow doesn't use KSampler but rather SamplerCustomAdvanced. I tried to route things based on the node names, but it seems to just ignore my input image and simply use the text prompt.

    • @PromptingPixels
      @PromptingPixels 1 month ago

      I've been away for a bit but just started taking a look at Flux this evening - super cool stuff! Still need to learn the basics before I'm ready to share anything here on the channel, but I suspect some new videos in the next couple of weeks or so. In the meantime, here's a thread that uses the same `Sampler Custom Advanced` node in an img2img flow that might help: www.reddit.com/r/StableDiffusion/comments/1eigdbk/img_2_img_with_flux/

    • @tosvus
      @tosvus 1 month ago

      @@PromptingPixels thank you!

  • @soundguy421
    @soundguy421 1 month ago

    WOW~~~~~~you made me wanna learn this one

  • @alicanbirey
    @alicanbirey 1 month ago

    Well done, big bro, I liked you.

  • @milivanilli
    @milivanilli 1 month ago

    Thanks, you are an angel!

  • @jippalippa
    @jippalippa 1 month ago

    Excellent tutorial; thanks! Generally speaking, have you found the experience on Mac stable enough? I'm asking because Automatic1111's implementation didn't always work properly on macOS (M1 Ultra).

    • @PromptingPixels
      @PromptingPixels 1 month ago

      Honestly, for just basic txt2img or img2img tasks, I find that a Mac can work just fine. However, for anything labor-intensive (such as controlnets+loras, animatediff, etc.), I think it's best to rely on a Windows or Linux machine with a dedicated GPU, as the Mac isn't nearly as performant. If you're having app issues, I recommend checking out WebUI Forge (github.com/lllyasviel/stable-diffusion-webui-forge) or DrawThings (drawthings.ai/), as they might be a bit more stable - especially the latter of the two.

  • @rudyNok
    @rudyNok 1 month ago

    How is it that whenever you hit Queue, the generation doesn't start from scratch? It seems like you always continue from the last generated image. How?

    • @PromptingPixels
      @PromptingPixels 1 month ago

      Is this a general ComfyUI question? If so, if there are no changes in the workflow then it won't render a new image/repeat processes. To prevent redundant processing, I always use a fixed seed value rather than random. Hopefully this response answers your question 😅

  • @FlowFidelity
    @FlowFidelity 1 month ago

    Helpful, what is your preferred inpainting model these days?

    • @PromptingPixels
      @PromptingPixels 1 month ago

      Really loving soft inpainting, as the results are more seamless. The checkpoint I use depends on the image being retouched (photographs use a realistic checkpoint, illustrations typically use a general or anime-based checkpoint, etc.).

  • @pavelgorovoy903
    @pavelgorovoy903 1 month ago

    Hi, I've noticed that I don't have an UltralyticsDetectorProvider node after installing the Impact Pack.