- Videos: 20
- Views: 68,976
Zuhaib R
Joined 3 Feb 2009
"I'm a freelancer with a background in video editing and photography, along with a degree in computer science. As I explore AI image generation, it feels like a new beginning, blending my creative skills with cutting-edge technology. I'll be sharing my struggles, discoveries, and insights from my ongoing journey into AI-based graphics."
Comfy-UI Workflow developer
AI influencer expert
__________________________________________________
For business-related queries: angelino00@gmail.com
I only work with AI image/video generation now
ComfyUI Flux: Refine Your AI Influencer
Take your AI influencer game to the next level with ComfyUI Flux! In this video, I'll show you how to refine your AI creations with advanced techniques, creating stunning and realistic characters effortlessly. 💡 Stay Updated: Subscribe for more tips on AI tools and workflows!
INSTALL COMFYUI: github.com/comfyanonymous/ComfyUI
MANAGER: github.com/ltdrdata/ComfyUI-Manager
Install ComfyUI Manager by cloning the repo into comfyui\custom_nodes
Update ComfyUI, and if some nodes can't load, go to "Install Missing Custom Nodes" in ComfyUI Manager
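For anyone who prefers scripting the clone step above, here is a minimal sketch; the ComfyUI install path is an assumption you should adjust to your own setup.

```python
# Minimal sketch: clone ComfyUI-Manager into ComfyUI's custom_nodes folder.
# Assumes git is on PATH and that "ComfyUI" below is your install directory.
import subprocess
from pathlib import Path

comfyui_dir = Path("ComfyUI")                      # adjust to your ComfyUI install path
custom_nodes = comfyui_dir / "custom_nodes"
custom_nodes.mkdir(parents=True, exist_ok=True)

subprocess.run(
    ["git", "clone", "https://github.com/ltdrdata/ComfyUI-Manager.git"],
    cwd=custom_nodes,
    check=True,
)
print("ComfyUI-Manager cloned; restart ComfyUI so the Manager menu appears.")
```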
Views: 2,602
Videos
ComfyUI: Flux Controlnet Upscaler
894 views · 2 months ago
In this video I explain the workflow for using the new ControlNet flux upscale, especially for enhancing AI influencer characters. MimicPC Website: www.mimicpc.com/?co-from=zuhaib Workflow Template Share link: home.mimicpc.com/app-image-share?key=a68d8476d3bd40399f73f5c3054d0571 Appsumo LTD Lifetime Gift for only $49! On sale for a limited time! appsumo.com/products/mimicpc/ MimicPc Discord Com...
ComfyUI: Style Changer Workflow
1.4K views · 2 months ago
Learn how to effortlessly change the style of any image using ComfyUI! In this quick tutorial, I'll walk you through a simple yet powerful workflow that allows you to transform images by just typing a new style prompt. Download the new Controlnet union model from the manager Workflow file: github.com/techzuhaib/WORKFLOWS/blob/main/style_changer.json Checkpoint: civitai.com/models/133005/juggern...
ComfyUI: Consistent Character Using Flux Model
15K views · 3 months ago
In this video, we explore how to create a super realistic and consistent character in ComfyUI using the new Flux model. Discover the techniques and tips for achieving lifelike images with natural details, ensuring your AI-generated character maintains coherence across different scenes. I'll be demonstrating some new Flux LoRAs to improve the results. Workflow: github.com/techzuhaib/WORKFLOWS/b...
ComfyUI: Extreme Realism With This SDXL Model!
1.8K views · 4 months ago
In this video, we explore the realism capabilities of the Analog Madness Quick-Gen model and demonstrate a streamlined, beginner-friendly workflow for generating highly realistic images. Additionally, we delve into the new face restoration model in ReActor Face Swap, highlighting how it significantly enhances facial textures. Model link: civitai.com/models/412376/analog-madness-sdxl-quickgen-rea...
ComfyUI: Outfit Changer Workflow | Using A Reference Image
4.3K views · 7 months ago
This video is about my attempt to create a dress-changer workflow in ComfyUI. Thanks for tuning in. I am a freelancer working in AI image generation, and currently I am obsessed with Stable Diffusion using ComfyUI. The workflow will be available on my GitHub page: github.com/techzuhaib/WORKFLOWS INSTALL COMFYUI: github.com/comfyanonymous/ComfyUI Update ComfyUI and if some nodes can't load go to...
ComfyUI: Let's improve the skin texture of our consistent character
7K views · 9 months ago
In this video I will show you some methods I use for improving the skin texture in ComfyUI. INSTALL COMFYUI: github.com/comfyanonymous/ComfyUI Update ComfyUI and if some nodes can't load go to install missing custom nodes in ComfyUI Manager MANAGER: github.com/ltdrdata/ComfyUI-Manager Install ComfyUI Manager by cloning the repo into comfyui\custom_nodes Update ComfyUI and if some nodes can't loa...
ComfyUI: Consistent Character In ComfyUI Part 3
2.7K views · 10 months ago
In this video, I'll try to improve my method of establishing a consistent character within ComfyUI. github.com/techzuhaib INSTALL COMFYUI: github.com/comfyanonymous/ComfyUI Update ComfyUI and if some nodes can't load go to install missing custom nodes in ComfyUI Manager MANAGER: github.com/ltdrdata/ComfyUI-Manager Install ComfyUI Manager by cloning the repo into comfyui\custom_nodes Update Comfy...
ComfyUI: Vid To Vid AnimateDIFF Workflow Part 2 | Inpaint AnimateDiff
3.3K views · 11 months ago
In this video I will show you how to inpaint animation videos in ComfyUI. #comfyui #ipadapter #stablediffusion #animatediff #animationmastery Link to workflow: github.com/techzuhaib/WORKFLOWS/blob/main/VidtoVid With Inpaint.json Inner Reflections Guide on CivitAI: civitai.com/articles/2379/guide-comfyui-animatediff-guideworkflows-including-prompt-scheduling-an-inner-reflections-guide Open Pos...
ComfyUI: Vid To Vid AnimateDIFF Workflow Part 1
7K views · 11 months ago
👍 If you found this tutorial helpful, give it a thumbs up, share it with your fellow creators, and hit the bell icon to stay updated on my latest content! Let's create amazing videos together! 🌟 INNER REFLECTIONS GUIDE ON CIVITAI: civitai.com/articles/2379/guide-comfyui-animatediff-guideworkflows-including-prompt-scheduling-an-inner-reflections-guide LINK TO WORKFLOW: civitai.com/api/download/a...
ComfyUI: Consistent Character In ComfyUI Part 2
4.8K views · 11 months ago
In this 2nd part of the video, I'll guide you through my method of establishing a uniform character within ComfyUI. WORKFLOW: github.com/techzuhaib/WORKFLOWS/blob/main/CONSISTENTFACEWITHREACTOR.json INSTALL COMFYUI: github.com/comfyanonymous/ComfyUI Update ComfyUI and if some nodes can't load go to install missing custom nodes in ComfyUI Manager MANAGER: github.com/ltdrdata/ComfyUI-Manager Inst...
ComfyUI Tutorial: Consistent Character In ComfyUI Part 1
11K views · 11 months ago
In this video, I'll guide you through my method of establishing a uniform character within ComfyUI. Initially, we'll leverage IPadapter to craft a distinctive facial representation, which will serve as the foundation for generating consistent facial images in our generations. WORKFLOW (UPDATED) github.com/techzuhaib/WORKFLOWS/blob/main/UNIQUE FACE UPDATED.json The face swap node group is discon...
"Outfit Anyone: The Ultimate Virtual Wardrobe Experience"
458 views · 1 year ago
"Step into the Future of Style with 'Outfit Anyone: The Ultimate Virtual Wardrobe Experience'! 🌟 Unleash the power of ultra-high-quality virtual try-on technology that transforms your fashion game.
ComfyUI Tutorial: Unique Images from Reference image using IP Adapter
4K views · 1 year ago
ComfyUI Tutorial: Unique Images from Reference image using IP Adapter
ComfyUI Tutorial | AnimateDIFF with IPadapter
1.4K views · 1 year ago
ComfyUI Tutorial | AnimateDIFF with IPadapter
I tried to connect IPADAPTER with SVD IN COMFYUI
306 views · 1 year ago
I tried to connect IPADAPTER with SVD IN COMFYUI
Optimizing Animated Diff Videos with LCM LoRa for Faster Rendering in ComfyUI
315 views · 1 year ago
Optimizing Animated Diff Videos with LCM LoRa for Faster Rendering in ComfyUI
Brother, you completed your Virtual University degree almost 10 years ago, right?
This is the second time I've given this upscaler a try, and it doesn't cut it for me. It's so slow: going from 1024x1024 to 1535x1535 takes three and a half minutes, and the results are not better than Ultimate SD Upscale. It seems to burn the image and it becomes whitish. I used your workflow and the Flux 8B version. Am I missing something 🤔😊
I'm getting this error every time, after doing everything and reinstalling everything: _pickle.UnpicklingError: invalid load key, '\x03'.
Anyone following this tutorial will notice that a lot of the nodes don't exist anymore. That's because when IPAdapter Plus was updated it broke a lot of workflows. It just makes it harder for people to jump into Stable Diffusion and ComfyUI at a later date. But here's the fix: in Google, search "Maintaining Legacy IPAdapter Nodes Alongside New Updates: A Step-by-Step Guide." There you will find a link to the old IPAdapter Plus nodes and a step-by-step guide on how to install them alongside the new IPAdapter Plus. The workflows will still be broken, because in order to use the old alongside the new, the class of the old nodes had to change. You'll have to replace those nodes with the ones whose class you changed. If you are just building the workflow along with the tutorial, you should not have a problem. You'll understand more of what's going on after reading the step-by-step guide. In case YouTube allows links (probably not, so just search as I said above), the guide is at www.reddit.com/r/comfyui/comments/1bov4xw/maintaining_legacy_ipadapter_nodes_alongside_new/
What are your PC specs? BTW, how can I connect the prompt to modify the result image as I want?
I keep getting this error at the last step: "Warning: torch.load doesn't support weights_only on this pytorch version, loading unsafely. !!! Exception during processing !!! unexpected EOF, expected 4827818 more bytes. The file might be corrupted." Looking online, people are saying to re-download GFPGANv1.3.pth in facerestore_models. That doesn't seem to work. What do you suggest?
Check your torch version; that's what it says.
Your PC GPU and RAM?
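As a rough way to act on the reply above, the sketch below prints the installed torch version and the on-disk size of the face-restore model, since the "unexpected EOF" error usually points to an incomplete download. The model path is only an example and not confirmed by the video.

```python
# Quick diagnostic for the "unexpected EOF ... file might be corrupted" error above.
# Prints the installed torch version and the on-disk size of the face-restore model.
from pathlib import Path
import torch

print("torch version:", torch.__version__)

# Example path - point it at wherever your GFPGANv1.3.pth actually lives.
model_path = Path("ComfyUI/models/facerestore_models/GFPGANv1.3.pth")
if not model_path.exists():
    print("Model file not found at", model_path)
else:
    size_mb = model_path.stat().st_size / 1024**2
    print(f"{model_path.name}: {size_mb:.1f} MB on disk")
    # A truncated download is the usual cause of "expected N more bytes": compare
    # this size against the size listed on the download page and re-download if smaller.
```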
It's wayyy easier to do this on glambase
is glambase free?
How do I create an AI avatar on the Flux website?
CFG 8.0 for a Flux model to fix the hand? I think that might be an error, haha
Yeah, I might have missed that, but it worked!
CLIPSeg: Input image size (352*352) doesn't match model (224*224)? Help
The only node I can't find is IPAdapterApplyEncoded. Help
Please post your workflow; the link says "file not found".
I could not fix it in the Manager (ReActor face swap node), no matter what I tried. I will install via Pinokio, and if that doesn't work I am going to try Forge. Edit: a fresh installation with Pinokio solved the issue.
Follow the instructions: github.com/Gourieff/comfyui-reactor-node
Which video card and CPU do you have?
RTX 3070, Ryzen 7
Thanks for the video, I learned a lot. But what is a "nobody face"? Can you kindly explain that?
It's just used to describe a face that doesn't exist in the real world: an AI-made face.
Hello, I'm very new to ComfyUI. I have a request, if it's possible: could you create a workflow and show us how to generate cartoon characters by feeding in an image of the cartoon together with a ControlNet pose, to finally generate an image where the cartoon stands in the pose from the ControlNet pose image?
Creating a video is very time-consuming. You can email me with your requirements.
Amazing tutorial and very well articulated. Thank you.
Glad it was helpful!
My friend, nice video, thank you. What PC specs do you need to run Flux locally? Currently my PC can handle only SD 1.5.
For Flux you will need an Nvidia GPU with at least 12 GB of VRAM, and 32 GB of RAM. But you can run it in the cloud for a few cents an hour on mimicpc.com; I have the links in the description.
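A small sketch (not from the video) for checking whether a machine meets those rough Flux guidelines; it assumes PyTorch and the psutil package are installed.

```python
# Rough sketch: check whether this machine meets the suggested Flux specs from the
# reply above (Nvidia GPU with ~12 GB VRAM, ~32 GB system RAM). Requires psutil.
import torch
import psutil

vram_gb = 0.0
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)} ({vram_gb:.1f} GB VRAM)")
else:
    print("No CUDA GPU detected.")

ram_gb = psutil.virtual_memory().total / 1024**3
print(f"System RAM: {ram_gb:.0f} GB")
print("Meets the rough Flux guideline:", vram_gb >= 12 and ram_gb >= 32)
```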
I get this error when I start your workflow: VAEDecode **Exception Type:** RuntimeError **Exception Message:** Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 128, 128] to have 4 channels, but got 16 channels instead. Can you tell me where I have to look to find the error?
Can you make sure you are loading the Flux VAE?
Hm, regarding the face_restore_model I only get 2 options, but not GPEN-BFR-512.onnx. How do I get it?
Update ComfyUI and ReActor
@@techzuhaib99 I did, and I only got GFPGAN v1.3 and 1.4. I will try to figure it out later, but I have no idea yet.
@@techzuhaib99 I have done all the updates, but nothing helps. I think this is the issue; this is what happens when I try to run the update: [SKIP] Downgrading pip package isn't allowed: albumentations (cur=1.4.18) [SKIP] Downgrading pip package isn't allowed: onnx (cur=1.17.0). But everything I try doesn't work. This should be my current version: [ReActor] - STATUS - Running v0.5.1-b2 in ComfyUI.
@@techzuhaib99 Solved. I guess it was the Facerestore CF (CodeFormer) node that I installed. Plus: to use this node, you need to download the face restoration model and face detection model from the "Install Models" menu.
Thanks, my friend, this was an awesome workflow. No doubt I will be using it to generate new images from scratch from now on, as I love how it's enhanced with the face swap feature. I no longer have to use tools like Rope or FaceFusion to swap faces. I did have one question, though: is there a way to incorporate image-to-image with this as well? I want to be able to load and use images I created elsewhere and take advantage of the face swap feature.
Thank you. Of course you can use other images for the face swap. Just load them with a Load Image node and feed them into ReActor.
ReActor just doesn't work for me. It's always the same problem, "import failed". I tried everything (updating Comfy, etc.) but I still have the same error, and it's the same for every swap node.
Install insightface
@@techzuhaib99 Can you please give more details on how to install it?
@@tungnguyenuc6415 Email your issue in detail to angelino00@gmail.com
@@techzuhaib99 What is the solution here?
Hi, I want to make a short video with 3 consistent characters in one image. Is that possible using this method? Also, the video has to be in 16:9 format. Please let me know; I've been looking for ages but nothing seems to work.
Yes, you can use ReActor face swap on them one by one.
Thanks man, I was just curious to find out: can ComfyUI be used if I wanted to do image-to-image and make the input person fat, change body weight, etc., using ControlNet? Any ideas?
Not ControlNet; you need inpainting for that.
@@techzuhaib99 But I want process-based output, without human user input, etc.
@@lalamax3d You can use auto-segmentation for that.
Do you teach?
??
@@techzuhaib99 How do I learn ComfyUI?
I need a tutorial for this tutorial :D
:) Can you be more specific?
excellent again <3
excellent <3
If we wanted to replace, let's say, the flooring in one house interior image with the flooring from another house's image, like you did with the faces here, how would we go about it?
This method will not work for that. It would need ControlNet and IPAdapter, depending on what exactly you are trying to do. You can refer to my latest style changer video to get the idea.
How many frames can you create each time? My max number of frames is usually 32, and I need to concat all the short MPEGs together.
I can go beyond 32 with vid2vid, but with text2vid only 16 frames can be fairly consistent.
There are no Reactor things here :)
??
@@techzuhaib99 At 4:45 you use "Reactor Face Swap", which I can't find in ComfyUI :/
Perfect. Very well explained. It worked perfectly! Thank you very much!
thank you so much bro you are truly amazing , looking forward for other control net features
More to come!
Is there a way to run a lora model this way?
I meant like a lora model trained on a custom face
You can load it the same way
@@techzuhaib99 I'm not too familiar with ComfyUI and image generation in general. Would I have to replace the Flux dev model with the LoRA model, or do I add another Load LoRA node?
@@zappist751 You have to add a LoRA model. What exactly are you trying to do?
@@techzuhaib99 I have a LoRA model trained on a specific celebrity.
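For illustration only, here is a hypothetical fragment of a ComfyUI API-format workflow, written as a Python dict, showing the idea in the reply above: the base Flux model loader stays, and a LoRA loader node is chained after it. The node IDs and the LoRA filename are made up, and the exact node names may differ between ComfyUI versions.

```python
# Hypothetical sketch of a workflow fragment: keep the base model, add a LoRA loader.
workflow_fragment = {
    "1": {  # base model loader (e.g. the Flux dev model) - kept, not replaced
        "class_type": "UNETLoader",
        "inputs": {"unet_name": "flux1-dev.safetensors", "weight_dtype": "default"},
    },
    "2": {  # the extra LoRA loader the reply refers to
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "lora_name": "my_custom_face.safetensors",  # your face-trained LoRA (made-up name)
            "strength_model": 1.0,
            "model": ["1", 0],  # takes the model output of node "1"
        },
    },
    # ...the sampler then takes its model input from node "2" instead of node "1".
}
print(workflow_fragment["2"]["class_type"])
```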
I'm getting this error: 'VAE' object has no attribute 'vae_dtype'
Make sure to download the right VAE for Flux.
amazing!
Where can I download the workflow file? Thanks
This workflow is outdated now, but you can use the same concept to build the new workflow
GGUF versions, which run on low RAM, are out. Will you make a video on them?
Coming up, with better solutions
Subscribed. 🎉
Hi, in the very first step, when "Load Diffusion Model" is used, I am not able to change unet_name; it always remains "undefined", and the same with weight_dtype, which stays "default". Please help.
Make sure to download the model first into the correct folder, ComfyUI/models/unet, and hit refresh again.
@@techzuhaib99 Hi, thanks a lot for replying. Can you please share a video of yours or of any other creator?
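A quick sketch of the check implied by the reply above: confirm the model file actually landed in ComfyUI/models/unet before hitting refresh. The install path is an assumption to adjust.

```python
# Small sketch: list Flux models in ComfyUI/models/unet so the Load Diffusion Model
# node has something to show after a refresh. The install path below is an assumption.
from pathlib import Path

comfyui_dir = Path("ComfyUI")  # adjust to your ComfyUI install path
unet_dir = comfyui_dir / "models" / "unet"

files = sorted(p.name for p in unet_dir.glob("*.safetensors")) if unet_dir.exists() else []
if files:
    print("Models the Load Diffusion Model node should list:", files)
else:
    print(f"No .safetensors files found in {unet_dir} - download the model there first.")
```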
Great value, subbed with notifications on. Looking forward to learning something in the next one.
Awesome, thank you!
Hello. I could not fix it in the Manager (ReActor face swap node). How do I do this using cmd? Please explain in more detail.
Make sure to download insightface; watch this video on how to download insightface: ruclips.net/video/vCCVxGtCyho/видео.html
@@techzuhaib99 Hi, I have the same issue. I followed the instructions in that video and I keep getting the same error.
Do you have a video on how to get Comfy working? I already have Comfy installed but it's not set up.
Not really; you can list your issues here and I'll check where I can help.
@@techzuhaib99 Just how to load a checkpoint to get it working. I get this error: "Prompt outputs failed validation CheckpointLoaderSimple: - Value not in list: ckpt_name: 'v1-5-pruned-emaonly.ckpt' not in []". Maybe I'm trying to use the wrong checkpoint or not doing it correctly, IDK.
@@freewheelburning8834 Ah, you just need to select, in the checkpoint loader, a checkpoint that you installed in models/checkpoints, if you have installed one there.
Great tutorial, thank you for sharing!
Thanks, quick question: why do all the faces look quite blurry?
Can you explain the issue more?
Great video and right to the point! Thanks! I can't get ReActor to work in Flux, though. The error is: local variable 'model' referenced before assignment
Have you used my workflow?
@@techzuhaib99 Yes, it has this error, and I just don't have time, so I was using FaceID instead.
How do I get the SamplerCustomAdvanced node to show the image progression as it's being generated? Mine doesn't show anything until it's output to the Preview Image node. Please respond.
Enable previews in the manager
Excellent! Thank you very much!
Awesome Video 🤝 Thank you bro