Isn't using the negatives in Chinese a problem for the SDXL version? Isn't that unfair?
I suggest giving it a go and running your own tests too! You’d be surprised how many other languages do things… 😉
Not convinced but it’s nice to have more options given recent fiasco at Stability. The game changer would be some adapter allowing new base model to use existing SD ecosystem, eg Lora, etc.
Fine tuning & loras are next 😊
You said that HunYuan couldn't do text, but it did do Hanzi (I'm not sure of the correctness - I'd have to run EN-CN OCR). Glad to see this model. Bilingual and multi-locale models coming out of China across many modalities have been skyrocketing in quality and quantity lately.
It does do text, and it does perform better when the English prompt is translated to Chinese by a high quality translation model first.
Of course, it doesn't do ENGLISH text very well, but who's counting?
thanks for making video on this
It's my pleasure
that outro song is fire 🤣
Oh nice! We need a greater variety of models like this one. I also managed to get it running in just 6GB VRAM - you don't need 11 now.
China No.1, thanks Rodent
I'm not able to run this on TensorArt - does anyone know a working flow?
Should probably be using stock SDXL to completely avoid wondering if a problem is with SDXL itself or not. :P (Even if those models do generate often better imagery in general.)
Additionally, while I don't know what your actual workflow is before editing the video, I'd probably want to generate 3-4 images for a prompt (or maybe cherry picking the best out of 6 or more?), since it's quite common for a generated image to miss some aspect of the prompt. I know this isn't a strictly scientific review, but if we're going to be pitting A vs B even in a fluff intro video like this, it's not really doing anyone favors for either one of them to be at the mercy of a random number generator.
(Maybe you do this behind the scenes and just edit that bit out -- that's fine, but I can't tell that from the end result! :D)
Anyway, cheers -- interesting seeing alternative models coming out.
I am sure you are the type that wants it in an X/Y grid too. Am I right?
Love your rigor and skepticism, but it's open source and you can totally test it once for yourself.
I hope you keep that same rigor and skepticism when you see other videos and ask for proof yourself.
HunYuan seems not to work on Windows - a wheel build error occurs.
Nice
According to this chart, HunYuan is supposed to be almost as good as Midjourney in terms of aesthetics and Dalle-3 is supposed to be even better than Midjourney...
The curator of this chart clearly needs glasses x-D
When I try to queue prompt, it gives me this error "DiffusersPipelineLoader:
- Value not in list: pipeline_folder_name: 'ckpts' not in [ ]". Any idea why, Mr. Nerdy Rodent?
I don't know for sure. When I had a similar problem, it was because my models were in a different directory than usual (I'd pointed the model path at my A1111 models), so maybe it's expecting something else there, or the ComfyUI default models path. I ended up installing ComfyUI on a different drive to use with models like SD3 (the model I had the similar problem with).
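If the node is looking in ComfyUI's default model folders, one possible fix (a sketch - the base_path here is a placeholder, and the layout follows the extra_model_paths.yaml.example that ships with ComfyUI) is to map your existing A1111 folders instead of moving files:

```yaml
# extra_model_paths.yaml, placed in the ComfyUI root directory
# (copy extra_model_paths.yaml.example and edit base_path)
a111:
    base_path: /path/to/stable-diffusion-webui/   # placeholder path
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
```

Restart ComfyUI after editing so the extra paths are picked up.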
Your maths is wrong: 2.3% better than SD3 (56.7) is 58.0041, not 59.
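For anyone wanting to verify, a quick check in Python (chart values as quoted in the comment above):

```python
sd3_score = 56.7                   # SD3 score as read off the chart
hunyuan_score = sd3_score * 1.023  # 2.3% relative improvement
print(round(hunyuan_score, 4))     # 58.0041
```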
Actually, this one might be better than the SD3 that we got. I should invest in a bigger GPU.
Well, can someone create safetensors versions of these? xD
Do it yourself - the tools are out there in Kohya with LyCORIS.
sadly it is only 1024x1024 :(
Asking for a friend, can Hunyuan do NSFW stuff?
This thing is too bloated, and my ComfyUI is set up to work in bfloat16 while this is set up for float32, so I get mat1/mat2 errors. When I go to 32-bit the model is slow - my 6.5 it/s is now 2 s/it. Not very good quality either.
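For anyone misreading the units: it/s and s/it are reciprocals, so the drop described above is bigger than it looks. A quick stdlib check, using the figures from the comment:

```python
# it/s and s/it are reciprocals: 2 s/it is really 0.5 it/s
before_it_per_s = 6.5                # speed in bfloat16, iterations/second
after_s_per_it = 2.0                 # speed in float32, seconds/iteration
after_it_per_s = 1 / after_s_per_it  # convert to iterations/second
slowdown = before_it_per_s / after_it_per_s
print(after_it_per_s, slowdown)      # 0.5 13.0
```

So that is roughly a 13x slowdown, not a ~3x one.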
Lots of handsome rodents, and a bunny? What happened to my badger? You promised (you didn't but I took it that way) 😜
The badger is currently chilling 🦡
I also tested it in a long video, but it really didn't convince me.
ruclips.net/video/0k4BeKNomss/видео.html
Definitely review the models you use - these tests showed they're too biased. There are too many model mixes whose prompt understanding is ruined: the base model contains ample knowledge, but it gets lost the more they train it without a good balance. Most mixes now throw everything in, and some models improve certain concepts while ruining others. In my tests using your prompts, I didn't have problems with bias toward women or terribly deformed hands.
The SD1.5 plague has arrived in SDXL, and it plays hell on those of us who train LoRAs for it. A LoRA might work on your mix, or might not, or might do weird stuff. Just too many mixes of mixes, trained on other mixes - a fiasco.
If you look closely, isn't every iteration of SD actually not true open source? Open weights, yes - but not open source, since the training data, scripts, and exact methodologies are not published.
I think for this reason, SAI always carefully refers to 'open release' and similar things regarding SD3, to avoid the term 'open source' as much as possible.
The amount of webscraped Horrendous Atrocities in Picture Form present in SD1.5 would be quite a pill to swallow. :- )
and SD3 still responds to webscraped metadata tags, so it's still there
Thanks Nerdy! 😊
No problem 😊
it's much better
HunYuan is absurdly slow - slower than anything that isn't Flux, by a lot, for unclear reasons.
Thanks for the video
Fascinating comparison.
These new models do look pretty nice! I do find it odd that we end up dedicating more and more GBs to text understanding and a seemingly smaller and smaller fraction of GBs to actual images... I wonder if this impacts how much we'd be able to finetune these models. I also wouldn't mind seeing some models go the other direction: terrible natural-language understanding but crammed with image data, like a PonyXXL...
Hello! Have you tried installing v1.2? I tried it today but couldn't get it to work. Everything was fine with version 1.0. ☹
Nope - SD3 has better architecture, but the safety training as well as the license are dogsh*. And you know these scores can be gamed. No way in hell DALL-E 3 beats Midjourney. I use all of them on a daily basis.
time to put my rtx 3060 12gb + 64gb ddr4-2400 ubuntu pc to use
One major drawback - UncomfyUI. :/
In what world is Dall-e better than Midjourney?
Can it do any nsfw, or it's heavy censored?
Whether it does NSFW is the first question any reviewer should ask and answer. Considering this is Chinese, the answer will be obvious though, won't it?
@@joechip4822 the level of censorship in any particular model is one of the most important aspects.
It's a non-problem. Sure, it cuts an arm off the model's integrity, but talented fine-tuners are able to add that back seamlessly.
My answer may not be entirely accurate, but it's subject to Chinese law, and NSFW content is prohibited.
@@suliao-lv6gj very probably true - but also a good reason not to use and support it in its original form
sir it's pronounced Hung-Yu-Whan, hunyuan
Thank you! I’m not good with anything other than British pronunciations 😅
If that's supposed to be a Chinese word, @nerdyrodent's pronunciation is closer.
But the name makes no sense to me anyway.
😃
According to steam hardware survey, about 1% has 4090 and 0.58% have 3090
132 million active users on Steam. If 1% of Steam gamers have a 4090, that would be 1.32 million people. More than the views this video will ever have. (I have a 3090 btw)
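Sanity-checking those survey percentages against the active-user figure (numbers taken from the comments above; plain stdlib arithmetic):

```python
steam_users = 132_000_000  # active Steam users, figure quoted above
share_4090 = 0.01          # ~1% with an RTX 4090, per the survey figure quoted
share_3090 = 0.0058        # ~0.58% with an RTX 3090
print(round(steam_users * share_4090))  # 1320000
print(round(steam_users * share_3090))  # 765600
```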
Dear Nerdy Rodent, could you please publish a link to your udio-generated outro?
I'd like to extend it to a full track.
I may do a short one day 😉
@@NerdyRodent Hope this day comes sooner than later.
I was hooked by the minimalistic yet vibing melody since your udio review.
Wait, I can't run ComfyUI in Linux?
Other way around - ComfyUI runs best in Linux 😉
Better than the unreleased weights of SD3? Huh, amazing that they can compare their model with a model that isn't finished training and definitively tell us theirs is better. What a time to be alive - technology is marvelous.
That closing credits song reminds me a lot of a British band from the early 2000s called Fat Truckers. You'd like them I think.
It would be interesting to compare it with an actually well-trained model, like Colossus Project XL 10 for example.
The song is a hit 😂
😎
Who would have guessed - a model that doesn't obey some self-imposed restriction obtains better results.
Problem, mate: my loader doesn't have VAE, CLIP, or model inputs, but instead "pipeline, autoencoder, scheduler"? How do I change it?
The HunYuan pipeline loader is detailed at github.com/Tencent/HunyuanDiT/tree/main/comfyui-hydit#hunyuan-pipeline-loader
SD3 ain't released yet, so how can he compare against something that doesn't even exist yet?
I imagine they used the api 😉
It doesn't exist? Lol
The SD3 API has been released.
@@j5545 "Stable Diffusion 3 (SD3) was announced by Stability AI as their most advanced text-to-image model to date. The early preview of SD3 was made available in February 2024, with the general release of a more accessible version, known as SD3 Medium, scheduled for June 12, 2024. This model aims to improve photorealism and prompt adherence, making it suitable for both consumer and business applications (Stability AI) (THE DECODER) (Decrypt)."
Where I live it's now the 9th of June, so it doesn't exist for me yet. You must come from the future.