- Videos: 251
- Views: 196,393
Cognibuild AI - GET GOING FAST
USA
Joined Apr 24, 2024
This blog is not for the uber techie. This blog isn’t for the Comp-Sci major. This blog is for you.
I’m gonna assume a few things…
You’re fairly smart (or you wouldn’t be getting into this)
You’re new to a lot of this (or you wouldn’t be reading this)
You don’t totally know what you’re doing (or you wouldn’t be researching)
You’re persistent (or you wouldn’t have found this blog)
I’m moving up the ladder and I’m using the rungs of the fellows that made it to the next level. The truth is this: we’re getting into this in a second wave. And here’s the thing, the first wave left behind their footprint. We’re doing the same for the third.
The solutions here are not always elegant or fleshed out into a zappo-bam-wowey post, but they're what you need. And they're spoken in a more understandable language, which is what's awesome about AI anyway: we get to use words that make sense to us.
Let's climb the ladder together. Here are the rungs I've already climbed. See you at the top.
Google’s Notebook LLM Just Changed Everything! Speaking with Podcasts and AI Like Never Before
Google's Notebook LLM just got even better with a powerful new update that allows us to speak directly with our own recorded podcasts! In this video, we join a live podcast, seamlessly interjecting into the conversation to discuss how people are connecting with AI on emotional levels. After sharing our thoughts, and going back and forth with the AI personalities, the podcast continues as usual, showcasing the AI’s ability to interact in real-time and enhance the discussion.
The big race is on for interacting with AI LLMs, and Google has just taken it to the next level with this update.
#GoogleAI #NotebookLLM #AIInteraction #AIRevolution #PodcastAI #AIUpdate #ProductivityBoost #NextLevelAI ...
Views: 92
Videos
Hunyuan's GGUF Models Are a Game-Changer for Low VRAM GPUs! Step-by-Step Installation 🎥
Views: 6K · 4 hours ago
Good news! This new release revolutionizes the Hunyuan Video Generator with GGUF models designed for low-VRAM systems! Whether you're working with 12GB of VRAM or higher, you can now unlock creative possibilities with unprecedented ease. In this installation tutorial we cover installing ComfyUI, downloading the models, adding the needed nodes, and modifying the workflow. ✨ Key Features: 🎁 Low VRAM F...
You Won’t Believe How Realistic These Hunyuan AI-Generated Babies Are | 16GB VRAM Magic! 🤖✨
Views: 231 · 7 hours ago
Look at these beautiful AI-generated babies! Who would think something so lifelike could be created on just 16-24GB VRAM using Hunyuan's FP8/BF16 models? Each scene took just 1:30 to 5:00 minutes to process, with the video smoothly transitioning from FP8 to FP16 models as the frame count increases. It's incredible to see how far AI-generated video has come. Prepare to be amazed! 🤖✨ And you can d...
MagicQuill: ComfyUI Edition - Step-by-Step Installation
Views: 665 · 7 hours ago
In this step-by-step tutorial, I show you how to install the ComfyUI version of MagicQuill: the revolutionary open-source image editing software that’s changing everything. With just a few simple steps, you’ll be ready to dive into the next evolution and ease of inpainting technology. MagicQuill makes photo editing effortless, and getting started is just as easy! This guide walks you through th...
Rope-Next Faceswap: (The Brain that Wouldn't Die - "What's Locked Behind that Door?")
Views: 468 · 14 hours ago
This scene was modified / faceswapped in mere minutes using our one-click Rope-Next installer. And you can do it too! At home! Find Rope Next here: www.patreon.com/posts/introducing-rope-110571649 Discord: discord.gg/jRYeUbBt6P LinkedIn: www.linkedin.com/in/gjnave Patreon: www.patreon.com/cognibuild Reddit: www.reddit.com/user/FitContribution2946/ Website: www.cognibuild.ai Personal Music: www....
Google AI Studio: The Future of Multimodal AI-You Won’t Believe What I Made it Do!
Views: 1.3K · 14 hours ago
In this first look at Google AI Studio, I just barely scratch the surface of its powerful, multimodal capabilities. This tool isn't built for chatting; it's designed to run functions, making it a versatile resource for creators and developers. Watch as I demonstrate how it handles audio input (talking to the model), live video responses, and even video uploading and processing. There are countle...
Install Hunyuan in WSL (w/Sage) for Incredible Results! Step-by-step Installation!
Views: 901 · 16 hours ago
🔥 Ready to unlock the full potential of Hunyuan? In this tutorial, I show you how to install and set up Hunyuan in WSL for seamless access to the Sage Attention mechanism, which doubles generation capability! Whereas installing Sage on Windows requires jumping through a lot of hoops, using WSL lets you, with a single command, avoid the hassle of making drastic changes to your base Windows environ...
Hunyuan Video Generation: Step-by-Step ComfyUI Installation
Views: 4.5K · 1 day ago
Ready to dive into the BEST open-source AI video generation available? I show you step-by-step how to install and set up ComfyUI for Hunyuan, the most realistic open-source video generator yet! With quantized FP8 and FP16 models, you can create stunning videos on GPUs with just 16-24GB VRAM. Not only is the quality amazing but it can be done in minutes! This is the video generator you've been w...
Hunyuan Video Generator - You Won't Believe Santa's Big Night Out
Views: 291 · 1 day ago
🎅 Santa like you've never seen him before! Using the Hunyuan AI video generator, I created this hilarious and lifelike clip of Santa's big night out, all on just 16-24GB VRAM with FP8/BF16 models. Each scene took between 1:30 and 5:00 minutes to process, depending on the number of frames. As the video progresses, the frame count increases, transitioning from FP8 to FP16 models. See the future of...
How to Run Sana: Step-by-Step Installation Tutorial for 4K Image Creation
Views: 522 · 1 day ago
Unlock the power of Sana in this step-by-step guide! In this tutorial, I'll show you exactly how to set up and run the Sana framework on WSL to generate high-resolution, 4K-quality images, right on your laptop! 🚀 With Sana's advanced text-to-image capabilities, you can create stunning visuals in seconds, even with a 9GB GPU. We'll walk through the entire process: from setting up your Subsystem ...
LTX Video Sample
Views: 106 · 14 days ago
Here is a montage of just a few of the videos created with the excellent new LTX Video Generator. BLAZING fast speed. Each of these videos was made with 161-257 frames, and all took approximately 30 seconds. Incredible! The installation tutorial can be found here: ruclips.net/video/zN8l8ZUnoNc/видео.html If you just want to Get Going Fast, installers can be found here: www.patreon.com/posts/ltx-...
LTX Video Wipes out the Competition! Step-by-Step Install & Usage Tutorial
Views: 857 · 14 days ago
LTX is hands down the fastest open-source video generator out there right now! Generate 10-second videos in LESS THAN A MINUTE 🚀 Whether you're animating an image, crafting a video from text, or giving a fresh look to existing footage, LTX gets it done blazing fast. Here's what it can do: 💬 Text-to-Video - Create cinematic magic from words. ✨ Image-to-Video - Bring your images to life. 🔄 Video-t...
Command Prompt w/ Admin Rights from File Explorer - Quick Utility Script!
Views: 203 · 14 days ago
This is a quick tip to streamline your Windows Terminal experience with instant administrator access through a simple right-click! For those of us working with AI and machine learning, having elevated privileges is often crucial for installing packages, managing system resources, and running specialized software. No more having to type your way deep into your working directory - just right-clic...
Set Up Your AI Workstation in Minutes! Everything you Need in a Click
Views: 320 · 14 days ago
The fastest way to get your system AI Ready!! Install all the essentials for open-source AI development and content creation with a single click and without having to visit multiple websites. This includes NVIDIA drivers, CUDA, cuDNN, and essential software like Chrome, Git, Miniconda, FFmpeg, Audacity, OpenShot, GIMP, WinRAR, Python, Discord, K-Lite, and Notepad . It’s perfect for beginners wh...
The Jim Day Show - Why We Still Celebrate Thanksgiving
Views: 123 · 21 days ago
Rope-Next Faceswap: (Charlie Chaplin: Modern Times)
Views: 585 · 21 days ago
OmniParser Full Tutorial: Simplifying UI Parsing for Precise Automation
Views: 481 · 21 days ago
Heroes of the Faith: AI Brings Preachers Back to Proclaim the Gospel (Tozer, Lewis, Spurgeon)
Views: 495 · 28 days ago
Step-by-Step: Install MagicQuill and Transform Your Edits Today!
Views: 3.5K · 28 days ago
Workflow in Action: Master Lip-Syncing with Open-Source Tools!
Views: 858 · 28 days ago
Workflow in Action: Isolate and Perfect w/ MagicQuill, Rembg & GIMP !
Views: 496 · 1 month ago
MagicQuill Changes Everything! The Next Evolution of Inpaint Editing!
Views: 2K · 1 month ago
Rope-Next Faceswap: (Rod Serling: Prophetic Warning )
Views: 95 · 1 month ago
✨ KCPP & Flux - Step-by-Step Guide to Chatbot Magic with Flux Imaging!
Views: 353 · 1 month ago
VideoGen Pack Tutorial: Mastering AnimatedDiff, Stable Video, and ControlNet (Step-by-Step)
Views: 254 · 1 month ago
Mochi: Unlock Open-Source Video Creation with Simple Setup & Powerful Features!
Views: 1.2K · 1 month ago
SwarmUI Flux & VideoGen Pack (Mochi, AnimatedDiff, SVD)
Views: 190 · 1 month ago
Rope-Next Faceswap: (Bert Reynolds: "Bandit's Laugh")
Views: 73 · 1 month ago
Rope-Next Faceswap: (Hee-Haw: "Oh that's Good.. No that's bad")
Views: 937 · 1 month ago
Never Heard of the Clipping Tool? Here's What You’re Missing
Views: 126 · 1 month ago
I am glad that I got rid of Pinokio AI some time back. As you say, it can mess things up, but you taught me how to install AI apps correctly. And I agree about Microsoft Edge, I don't ever use it. 👍🏼
Anyone here use the Docker of F5? I'm having issues.
Do you stream here on youtube?
terrible. waste of time. downvoted.
I was able to follow these instructions and get a video working. However, I always get an error and a warning in the Python runtime output. Here is the warning: L:\aiedit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:212.) torch_tensor = torch.from_numpy(tensor.data) # mmap. And here is the error: clip missing: ['text_projection.weight']
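That UserWarning comes from handing PyTorch a read-only, mmap-backed NumPy array; it is generally harmless for inference, and a writable copy silences it. A minimal sketch of the idea (assuming only NumPy here; the actual fix would live inside the GGUF loader's `torch.from_numpy` call):

```python
import numpy as np

# Simulate the read-only array that mmap-loading a GGUF file produces
arr = np.arange(4, dtype=np.float32)
arr.flags.writeable = False

# torch.from_numpy(arr) on a non-writable array triggers the UserWarning;
# copying first yields a writable buffer PyTorch can safely wrap
safe = arr.copy()
print(safe.flags.writeable)  # True
```

The `clip missing: ['text_projection.weight']` line is a separate message, usually reported as a benign notice that the text encoder checkpoint lacks an optional weight.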
Exactly the reason I started learning ComfyUI the right way two months ago, instead of relying on Pinokio. This is so true, not to mention how much space it takes on your drive.
Hey, love your videos man, I really like you. There are two things I think you could improve on. No. 1: clean up the audio; it sounds like there's a war going on in the background. You figured out ComfyUI, so it shouldn't be hard for you to figure this one out. No. 2: clean up the act a little. I like your persona, but a tiny bit more professionalism would probably help the viewer follow along. Not saying become a robot, your personality is cool. You know your shit; maybe calm it down a little, things don't need to be sensational, it's really interesting anyway :)
18:12 How come this fresh instance of ComfyUI loads the previously used workflow? Kinda scary stuff
13:10 I just don't get it. You seem to mix up python's native venv and conda. What is the point of doing so?
In the past month I wiped my computer 5 times.
You must be doing something right: you explained it well enough for an old man of 67 to get it working with an RTX 3090 GPU (24GB VRAM). I was able to use the hunyuan-video-t2v-720p-BF16.gguf file for the Unet Loader (GGUF) (I have 64GB RAM). I got a 6-second vid at 640x400 at 15 fps (I had to change the tiling size to 16 and the overlap to 32). It looks close to the original, just not as many intricate lines on the clothes. I will put it on my yewspewed channel after I mess around with the settings more and see what I get.
Dude, that's awesome! You're blazing now! My new favorite app... just incredible.
@@cognibuild yewspewed erased my reply here where i told you about my channel and my video that i put shouts out and links for your video and channel - i guess because i put the name of my video in the message ? is that stupid or what ? - yewspewed CONSTANTLY wipes my messages let's see if this message persists or gets erased
@@cognibuild the name of my vid is ' First AI ComfyUI / Hunyuan Video '
Thank you for sharing, but the video sound volume is too low.
I noticed that too... I'll have to correct that next time. Turn up the volume though and it's OK.
♥ Thanks for sharing your wisdom brotha... I appreciate you walking the line to help people brand new, but staying straight to the point and insightful enough to help anyone
The only thing I don't like about it is them interrupting each other, because it gets annoying when they keep interrupting.
True... it's definitely a new technology. I'm excited for when they add new personalities.
Note: This model (HunYuan) can also generate still images by setting the video length to 1 (!checking 1st frame prior to lengthy generation!)
@@DM-dy6vn If you want to take me to the next level, you can add an output node that saves all of the images used in the video.
thanks for that info
Too fast. In the AI world, yesterday's stuff is old news
This is the dopest shit to me so far. Got to try this out myself. Not ganna lie, I’ve been sleeping on google’s AI work. They are really knocking this shit out of the park lately.
I gotta agree. It's hitting the next level. I love this stuff.
@@cognibuild it’s going to be a game changer when we get the ability to write them personas like Silly Tavern. I just imagine the AI’s being a DM in a DND game. Like imagine having the narrator of BG3 now being the narrator of a story like that. It really opens up a whole new realm of possibility, plus even more accessible as this tech develops. The creativity that’s going to come from all this stuff will be astonishing.
Can anyone tell me if this is worth trying to run on my build? 4070 Super, 64GB DDR4, i7 9700K CPU. I'm not sure how CPU-heavy it runs.
16GB right? Should totally work! You can also try the WSL install with Sage Attention that I mention in another video
@cognibuild well then I'll give it a shot. This weekend I think. Most likely just gonna sub to your patreon for that installer.
@@Appreciation-Community Awesome! Follow the instructions, but if you need help hit me up on Discord and I'll be glad to assist.
Good, your venv is a bit long to start with.
Any ideas to fix: "RuntimeError: "replication_pad3d_cuda" not implemented for 'BFloat16'" ?
That sounds like a workflow error... look for something on the left side of the workflow that says "bfloat" and change it to float16.
@@cognibuild a new comfy install solved it, now it has "pytorch version: 2.5.1+cu124", old install had "pytorch version: 2.1.1+cu121". Cuda was the problem...
@@q6g36 Ahh... you have CUDA 12?
@@cognibuild thats what the portable version of comfy comes with...
SageAttention delivered the speedup. However, at the same time, I experience extremely low read rates from the mounted SSD. I did some research, and it seems to be an old, actively discussed problem of WSL in general, and of WSL2 in particular.
yes.. the initial load in WSL is PAINFUL.. however, once you invest in the initial load the inference speed is excellent. I usually load it up and do 20-30 generations at a time and it pays off
@@cognibuild In my case of RTX 3090, the generation time was not halved, but still reduced by 25%. I guess, this older GPU is not as fit for fp8 as 4090 is. I checked the read rate of mounted drives vs. native. The mounted suck big time. Wondering if it would make sense to have models in the "native" Linux folders
@@DM-dy6vn There's still hope! Go check out the GGUF models that just released!
So in ruclips.net/video/ZBgfRlzZ7cw/видео.html, how to merge the files is not included and it cuts forward.
merge?
When you create the REG files for the context menu, make sure to use correct quotation marks: straight double quote ( " )
Thanks!
@@DM-dy6vn thank you. Is that missing from the code I shared?
@@cognibuild The REG code in your blog is using Left Double Quotation Mark (“) and Right Double Quotation Mark (”) which are perfectly preserved if copy-n-pasted into Notepad++. I guess, they are translated into (normal) Straight Double Quotes if pasted in a more simple text editor.
@@DM-dy6vn Hmm... let me go see if I can change the way it displays them. Thank you.
@@DM-dy6vn Seems the best I can do is leave a note to remove formatting :< It seems to be a restriction of my template; I'll have to look at switching that at some point. Again, thanks for pointing that out.
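In the meantime, anyone copying the .reg snippet from the blog could normalize the template's curly quotes back to the straight double quotes Registry Editor expects. A tiny sketch (`fix_reg_quotes` is just an illustrative name, and the snippet below is a made-up example, not the blog's actual code):

```python
# Map the curly quotes that blog/CMS templates substitute in
# back to the straight double quote (") that .reg files require.
def fix_reg_quotes(text: str) -> str:
    return text.replace("\u201c", '"').replace("\u201d", '"')

snippet = "\u201ccmd.exe /s /k pushd \u201d%V\u201d\u201d"
print(fix_reg_quotes(snippet))  # "cmd.exe /s /k pushd "%V""
```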
is there any image to video workflow for this?
No... however, a friend and I have been working on an improvement to LTX image-to-video that I should be releasing soon.
I hope i can make it work with 6gb vram.
I just ran LTX on an NVIDIA GTX 1060 6GB, so I don't know if the Hunyuan Video Generator is smaller and quicker, but I'm willing to give it a try. I love watching your videos, but I'm not working due to my mental and physical situation, so I can't access the other stuff on Patreon. It's really cool you can still help others that can't afford it. Oh, I forgot: with my card it took LTX an hour and 5 minutes to generate a video. I wish they would do LTX in GGUF. Thanks for all you do for us, you're awesome.
I'm glad to hear you're enjoying it. It really is a worthwhile hobby. Yeah, Hunyuan isn't going to work on 6GB, but there's a lot of other cool stuff you can do with images and even getting into chatbots. Let me know if you want help finding projects to work on :)
@@cognibuild You are right, it probably won't work, but it's worth a try, and if it does go through I'll let you know so you can tell your fanbase, because there are people like me that can't afford it at the moment, and I'm sure they'd be, like me, very grateful for people like you that are willing to help others. I wish the world was like us, how we all used to share and help others, like in the days we grew up in.
I am not able to get past installing conda. It keeps saying path error. No idea how to resolve it.
Hmm... weird. Post the actual error and I'll take a look. Come find me on Discord.
If you can install Triton and sage-attention, the workflow from kijai is still faster. It can generate 848x480, 97 frames within 8 minutes on a 4060 Ti 16GB.
Yes, it's awesome, huh?! I use the Sage workflow in WSL so I don't have to mess up my Windows build by changing stuff around.
I got the low VRAM workflow working last night, then decided to try the GGUF workflow today. I downloaded the hunyuan-video-t2v-720p-BF16 model and the GGUF workflow. This, as well as the low VRAM workflows, runs on my RTX 4070 Ti Super on Linux, but I am close to the VRAM limit. It's hit or miss if I get an out-of-memory error running either one, not unexpected. I did get a successful video generation with the same prompt for both workflows, and I think the GGUF model gives me better quality video.
That's awesome! And you can always try a lower quant as well and see how that works. Let me know the results!
@@cognibuild I can fairly reliably create 50 frame videos at 768*512 using the BF16 GGUF. I run out of memory at 100 frames. Generation takes about 270 seconds. The Q8 model is close but slightly worse. It's 15-20% faster, and a bit more likely to get a bad video. I had one that wasn't even close. Q5 is worse and I'm not impressed. Now that I tried these I'm not impressed with the low vram non-quantized model. I'm not getting good quality video at all, buildings and such are pretty distorted, fall foliage looks like psychedelic trees. I also had to reduce the size to 640x480 All from the perspective of a casual user learning about AI stuff and with far too many AI toys to play with.
@@davidwootton7355 Thank you for sharing your experiment feedback! I'm with you... the Q versions are interesting in that they can load very fast on my machine, but tbh my Sage WSL build works great and I have no need to move from it. These quants are better for people who can't get into the FP8/BF16 versions.
I can actually run the Q6 with my 3060 Laptop 6GB 512x288 @ 8 FPS 49 frames. 517 seconds. And the results are pretty good. I wonder how much more I can push this. I assume this is because of the shared memory? It doesn't seem like I should be able to run this on this card
That's exciting! It's because of the "quantized" nature of the different versions. Think of it like precision decimals, i.e. you have the number 1.23456789, which is pretty precise but takes up 9 slots. What if we made it 1.2345 instead? It's still roughly the same number, but not quite as precise, and it only holds 5 slots. Quants work the same way. 517 seconds still seems long to me, though. Try going down to Quant 5 and see if you can get the speed up! Let me know the results!
@@cognibuild Yeah, but I saw someone on an 8GB card in the comments not being able to run it, so I'm wondering why I can with only 6GB VRAM. I'm assuming it's because I'm on Windows and it shares some of the RAM with the GPU, but I'm not sure. I'll play with the other quants to try to find a good perf/quality ratio.
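The "precision slots" analogy above can be sketched in a few lines. This is only a toy rounding demo of the trade-off, not how GGUF actually packs weights into 4/5/6/8-bit blocks:

```python
# Toy illustration of quantization: keep fewer "slots" of precision
# per value, trading a little accuracy for a lot of memory.
def quantize(x: float, decimals: int) -> float:
    return round(x, decimals)

w = 1.23456789
print(quantize(w, 8))  # 1.23456789 -- full precision, 9 slots
print(quantize(w, 4))  # 1.2346     -- fewer slots, slightly less precise

# The error introduced is tiny relative to the value itself,
# which is why lower quants often still produce usable videos.
err = abs(w - quantize(w, 4))
print(err < 1e-4)      # True
```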
"don't worry about that you're going to definitely need that" - and proceeds to skip the reference to the video altogether lmao is this the video? ruclips.net/video/D_nbsiuMNcQ/видео.html
@@robertbencze8205 I'm not certain what you mean. LOL, did I say that in the video? You definitely need VS Build Tools installed... that video should lead you to a link that you can just type in your command prompt to download it.
If you have 8GB and get an OOM (out of memory) error: unload all loaded models, change the tile size in the VAE Decode (Tiled) node to something like tile_size 128 and overlap 32, and reduce the video size to something like 208x208 and the frame rate to 14. If it works, raise these numbers as you see fit. So it actually works with 8GB (it takes a while, but it's just for testing).
@@rogersnelson7483 That's good to know, but that's a bummer. Try dropping it down to even smaller dimensions (200x200) and taking the frames down to 35. I'd be interested to see if it works.
hey love your vids... check out the FastHunyuan model that's floating around.... I wasn't able to get it to work w/comfyUI but maybe you'll have better luck.
the link to download the workflow? I downloaded all the files but I don't see the workflow anywhere
What you have to do is drag the image on top of your ComfyUI screen. You can also right-click on the JSON file beneath it and choose Save As.
@@cognibuild What I can't find is the json file, I can't find it to download, can you put the link please?
@@marcosgrima2237 comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/hunyuan_video_text_to_video.json
@@cognibuild I downloaded it but it doesn't work, the screen goes black and nothing can be seen
It's definitely better than the vanilla workflow, but the low VRAM workflow seems to be better still since you can use sage attention with the torch compile and blockswap nodes which all reduce VRAM usage. I'm able to make 720x480 videos up to 377 frames with a 4080 although that length takes about 30 minutes. It may even work on 6 or 8 GB cards. I'm curious how long of a video a 4090 could get.
@@Nelwyn I'm with you where having a 4090, my sage attention in WSL build tends to work better. There's a distilled version that just came out yesterday as well and I'll try to get that up.... That might help your speed at the cost of quality
@@cognibuild I think it just comes down to frames and steps. If I disable the blockswap node then I get 49 frames in 2 minutes with 20 steps, but that's the limit of the vram. I've been settling on 225 frames in about 10 minutes using 20 double blocks as it seems like a good balance. I can't wait to get a 5090. Imagine what you could do with 32GB of VRAM!
@@Nelwyn Drools in 32GB... tbh, I want it to run bigger and better chatbots!!
@@cognibuild Same I love using 70B but it's SO slow.
@@Nelwyn I can get a 30B running at low quant... but at that point it's like, why bother.
This worked so much better than the earlier Hunyuan that needed the Sageattention, which never worked on my Windows system. This was pretty fast, 1248 seconds for 73 frames at 8 fps and 848x480 size. The only glitch was I had to reduce the tile size to 128 and overlap to 32. The video I got using your workflow with no modifications except the tiles, and switching the two nodes you switched in the video, was very high quality, like something from an AI site. It's finally possible to make high quality videos with a 3060 GPU, and in a reasonable time. Good work on explaining this and providing the workflow.
You can just keep reducing the tile size; it uses less and less VRAM and I can't tell the difference. What version of the model worked best on your 4080?
@@Baka_Oppai The tile size won't reduce more than to 128, there's only two settings available in the node, unless you can type ones in, I didn't try that. I also tried the Q8 model and it worked on my 3060 too by the way.
Comment in support of the best channel :) Thanks a lot for the info. I didn't know there were GGUF models available; I'll have to compare how it works with my 16GB.
@@Alex_Niko_Y Honestly, if your other install is already working really well, you're not going to gain much here. But if you need a speed boost, this might help.
And thanks to a Super Fan!!! :D <3
Cries in 6Gb of vram 😂
My way of using the higher models is to host a Shadow PC, so you don't have to spend a lot of money on VRAM; they already have all the PC parts built in. Although it costs $50 per month, it can be worth it if you have a big project and will use it for that entire month with the local programs you installed. The good part is that their PC specs include an NVIDIA card, so you don't have to worry if you have no video card, or only an AMD card. That's what I did last time when I used it for upscaling my images, because larger sizes take a while to upscale in the UI program. I guess the main idea is that the lower the model, the less capable it might be compared to the better versions, which is why you need a powerful computer to make this work, or to make exporting faster when doing photo editing or AI video generation.
The HunyuanLatentVideo node came up as missing, and there's nothing in Manager when you click Install Missing Comfy Nodes.
@@Chaz-x1i go to the video manager and update comfy. Close the server and start again
@@cognibuild Yup, that worked, thanks.
@@cognibuild Hey! What do you mean by video manager? Running into the same issue lol. Disregard, I just had to update comfy via the batch, custom manager update wasn't working. Thanks for the video!
@@loktar00 Glad to hear it! BTW, the video manager is a game changer with ComfyUI... here are installation directions. Real easy, and I walk through how to do it in the video: github.com/ltdrdata/ComfyUI-Manager
Hi, does this model provide img2video model ?
@@Heisenberg238 Nope, just incredibly real text-to-video.
Estimated release for an image to video version is in January
@@kingfirebone2000 thanks!
Fantastic news....one question...can this work with lora? thanks
You know... I'm uncertain. I'll keep my eyes open on the subreddits, and if you hear something as well please let me know!
Couldn't run Rope-Next. Got this error message:
E:\Rope-v.1.0.0>if exist "activate.bat" (call activate.bat )
venv3.10\Scripts\activate.bat file not found.
Already on 'main'
Your branch is up to date with 'origin/main'.
Traceback (most recent call last):
  File "E:\Rope-v.1.0.0\Rope.py", line 3, in <module>
    from rope import Coordinator
  File "E:\Rope-v.1.0.0\rope\Coordinator.py", line 4, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
Press any key to continue . . .
It's because your dependencies aren't being installed correctly. Run this system checker and see if Git, conda, and Python are installed: www.patreon.com/posts/automated-system-117200313
@@cognibuild I ran the checker and these are installed: Miniconda: Installed - Version conda 24.7.1 Git: Installed - Version git version 2.47.1.windows.1 Python: Installed - Version Python 3.12.4 CUDA: Installed - Version v10.1 CUDA: Installed - Version v11.8 CUDA: Installed - Version v9.0 Visual Studio Build Tools: Installed FFmpeg: Installed - ffmpeg version 6.0 What am I missing? Thanks!
@whatup2003 So the issue is that you need to downgrade your Python to 3.10.11. On a side note, you should probably get rid of CUDA 10.1 and 9.0 unless you know exactly why you have them (you should only have one CUDA installed).
I need to install it for my OmniGen installation process. Kindly help, thanks.
Simply follow the directions in the video and you will be fine.
Thank you so much for your help and for not rushing through your video; the program is brilliant. Do you have a link to one of your videos for a quick start for MagicQuill?
AssertionError: Torch not compiled with CUDA enabled. However, when I follow their solutions, installing CUDA, it returns the error that there are no NVIDIA drivers. Then I realized that it is not possible on Windows/AMD; I believe it is only possible on Linux/AMD (I cannot confirm, because I don't have Linux). Thanks again for the excellent tutorial.
Yes, the issue is that you are using an AMD card... the Torch and CUDA builds are for NVIDIA. I'm uncertain it will work, but try the install using a normal version of torch (without CUDA), and don't install the CUDA toolkit.
@@cognibuild But when using a normal version of torch, it gives the first error: AssertionError: Torch not compiled with CUDA enabled. I'll have to wait and see if something appears in the future for the AMD/Windows version. Thanks again for your attention and top-quality tutorial.
@@geniodestemido Then it probably means that it doesn't work without NVIDIA at all... you can still try the ComfyUI version.
Thanks!
Thank you so very much for your gift support! Super fan Weilandsmith! <3 I'll mention you in my next video :D
Does this work if I use thinkdiffusion?
I can't use nvcc --version or install the AI toolkit version 11.8 on my Windows 11 system, even after downgrading to CUDA version 11.8, and my torch Python script still says GPU: false. I'm running an RTX 3060 Ti.
ty🌺🌺