Yay! I got it to work. Thank you for such a detailed video. I spent half the day troubleshooting something stupid, but it sure feels good when it works!
Amen to that!
@@cognibuild Yeah, I think my video card is hating me about now.
@@wielandsmith 😂
When you create the REG files for the context menu, make sure to use correct quotation marks: straight double quote ( " )
Thanks!
@@DM-dy6vn thank you. Is that missing from the code I shared?
@@cognibuild The REG code in your blog is using the Left Double Quotation Mark (“) and Right Double Quotation Mark (”), which are perfectly preserved if copy-pasted into Notepad++. I guess they are translated into normal straight double quotes if pasted into a simpler text editor.
@@DM-dy6vn hmm.. let me go see if I can change the way it displays them. Thank you
@@DM-dy6vn seems the best I can do is leave a note to remove formatting :< seems to be a restriction of my template. I'll have to look at switching that at some point. Again, thanks for pointing that out
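For anyone copying context-menu code from a blog, here is a minimal sketch of what a well-formed REG file looks like with correct straight quotes. The key path and command below are hypothetical examples for illustration, not the exact entries from the blog; the point is that every quote must be a straight double quote ( " ), since curly quotes make regedit reject the file:

```reg
Windows Registry Editor Version 5.00

; Hypothetical example: adds an "Open with MyTool" entry to the folder
; context menu. Every quote here is a straight double quote ("), not a
; Left/Right Double Quotation Mark.
[HKEY_CLASSES_ROOT\Directory\shell\MyTool]
@="Open with MyTool"

[HKEY_CLASSES_ROOT\Directory\shell\MyTool\command]
@="\"C:\\Tools\\mytool.exe\" \"%1\""
```

If pasting from a formatted page, a quick sanity check is to search the file for “ or ” before double-clicking it.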
Great video! Thank you very much.
First generation took ~440 sec using main_device and ~330 sec using offload_device.
hmm.. what dimensions?
SageAttention delivered the speedup. However, at the same time, I experienced extremely low read rates from the mounted SSD. I did some research, and it seems to be an old, actively discussed problem with WSL in general, and WSL2 in particular
yes.. the initial load in WSL is PAINFUL.. however, once you invest in the initial load the inference speed is excellent. I usually load it up and do 20-30 generations at a time and it pays off
@@cognibuild In my case, with an RTX 3090, the generation time was not halved, but it was still reduced by 25%. I guess this older GPU is not as fit for fp8 as the 4090 is. I checked the read rate of mounted drives vs. native ones. The mounted ones suck big time. Wondering if it would make sense to keep the models in the "native" Linux folders
@@DM-dy6vn there's still hope! Go check out the GGUF models that just released!
Thanks so much I appreciate you 🔥🔥🔥
thanks! and i appreciate you
Nice job ! 👍
Thanks! 👍
Nice!
Thanks!
I have some questions, if you don't mind:
1. I have a dedicated AI PC with a 1TB SSD. How much space am I gonna need to complete this full task as I only have 160GB left and will need to clear out some stuff or install onto a second SSD if that is possible (or does WSL need to be on the system drive) ?
2. Could I simply install Ubuntu or Mint on a separate clean SSD on my PC (disconnecting the Windows SSD) and then install all the cuda, comfyui, hunyuan etc etc to achieve the same thing but with a totally native separate physical install dedicated to linux AI stuff going forward ?
3. Do I just wait a few days for yet another "better" AI install video from you (lol) on Windows ?
Thanks, I haven't watched the entire vid yet, so you may have answered some of the above anyway.
1) The Ubuntu distribution takes about 2GB, and the Hunyuan and ComfyUI installer about 30-40GB
2) ?
3) I don't think we're gonna get a better install.. the issue with Windows Sage is that you have to change too much stuff to make it work.. and then it breaks all our other installs
2) It will work. You might miss the Windows environment though. WSL is all about staying in Windows while using Linux
@@DM-dy6vn the good thing is that with the way the installers and directions above are given, you don't really have to interact with WSL other than simply clicking the run button
I don't know what I did wrong. I followed the instructions for installing WSL, but when I ran "nvcc --version" it gave me "nvcc: command not found." I don't know what I missed. I guess I'll try again. Does it matter that the WSL console is taking me to the root directory and not a user?
@@wielandsmith go to ChatGPT and ask it how to set the environment variables for CUDA in WSL. If that doesn't work, you can come get my WSL setup script from the Patreon site
@@cognibuild thanks. I've been chatting with it and we were going in circles. It actually ended up having to do with a windows update. I don't know what the conflict was. It wasn't even identifying the wsl, etc. I appreciate the one click but I really like knowing what is happening.
@@wielandsmith oh interesting...... Did you have to roll back an update you had done, or did you have to actually update?
@@cognibuild It just needed an OS update. I really don't know why it suddenly began working--and yes, I had restarted the computer multiple times before that, so it wasn't just a matter of a reboot. I was chatting with GPT and ran out of credits, so then I went over to Grok, which suggested maybe my software wasn't up to date. I thought it was a pretty dumb answer, but I tried everything else, and that worked. What I really love about your tutorial is now I have the wsl installed, I can just do all my installations with that. I've never used it before (no software background). I'm loving the way it works.
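For anyone else who hits "nvcc: command not found" even after the CUDA toolkit is installed in WSL, the usual cause is that the toolkit isn't on the shell's PATH. A sketch of the common fix, assuming the toolkit installed to the default /usr/local/cuda prefix (adjust the path to your CUDA version):

```shell
# Hypothetical sketch: put the CUDA toolkit on the PATH inside WSL.
# Assumes the default /usr/local/cuda install location.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Persist it for future shells:
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc

# After this, `nvcc --version` should print the compiler version.
```

If the directory /usr/local/cuda doesn't exist at all, the toolkit itself isn't installed yet and no PATH change will help.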
Is there any point for me to do this if I only have 6GB of VRAM? Can it still generate decent short and tiny videos?
@@leeweichoong unfortunately not. There's no chance, I'm sorry
ty🌺🌺
What kind of processing are you running on your eyes dude
haha.. i tried something and it caused probs LOL
Can it run in 12GB?
we tried and couldn't get it working. Did you?
@cognibuild nope, asking because I don't have a 16GB card, I have 12GB
@@377omkar yeah.. I'm sorry. We weren't able to get it working with 12GB :
Have you tried the workflow in the examples directory for low VRAM? It does something called block swapping, which reduces memory usage. It's set to 20 double blocks by default, which is the max. I use it on my 4080, and even though it's not necessary there, it further reduces memory usage, although it slows down inference. You can also just add the BlockSwap node and connect it to the model loader.
@@Nelwyn I saw it but didn't play with it. So it works nicely for you?
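For the curious: block swapping in these low-VRAM workflows presumably works by parking most of the model's double blocks in system RAM and pulling each one onto the GPU only while it runs, which is why it trades inference speed for memory. A toy Python sketch of the idea; plain dicts stand in for CPU and GPU memory, the block count of 20 matches the workflow default mentioned above, and everything else is made up for illustration:

```python
# Toy illustration of block swapping: at most one "double block" is
# resident on the (simulated) GPU at a time; the rest stay on the CPU.
NUM_BLOCKS = 20          # the workflow's default / maximum

cpu_store = {i: f"block_{i}_weights" for i in range(NUM_BLOCKS)}
gpu_store = {}           # what is currently "on the GPU"
peak_resident = 0

def run_block(i, activations):
    """Move block i to the GPU, 'run' it, then evict it back to the CPU."""
    global peak_resident
    gpu_store[i] = cpu_store.pop(i)      # host -> device transfer
    peak_resident = max(peak_resident, len(gpu_store))
    activations = activations + 1        # stand-in for the real compute
    cpu_store[i] = gpu_store.pop(i)      # device -> host eviction
    return activations

x = 0
for i in range(NUM_BLOCKS):
    x = run_block(i, x)

print(x, peak_resident)  # all 20 blocks ran, but only 1 was ever resident
```

The real node would do this with actual tensor device transfers, and each transfer costs time, which matches the slower inference observed above.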
so ruclips.net/video/ZBgfRlzZ7cw/видео.html — how to merge the files is not included and it cuts forward
merge?
Have you generated many of your own videos? The videos that are all over the internet are all the same, and they come from Hunyuan themselves. We need to see what it can really do, like real stuff. They say it is uncensored, so show us its limits as well; if blur is needed, do it. Just give us real generated videos, where it's not about installation but about the generated videos
I've generated all my own stuff. I'm very careful about posting NSFW stuff, but yes.. it is fully explicit
@@cognibuild Thanks for the quick response. So this model seems very interesting then, lots of artistic work and lighting can be done. If you have time, do a video comparing your generated stuff with Minimax AI and Kling. I looked through your list of videos, maybe I missed it. Great work man.