This free AI video generator crushes everything

  • Published: Jan 14, 2025

Comments • 1.3K

  • @theAIsearch
    @theAIsearch  1 month ago +81

    Thanks to our sponsor Abacus AI. Try their ChatLLM platform here: chatllm.abacus.ai/?token=aisearch

    • @mostafamostafa-fi7kr
      @mostafamostafa-fi7kr 1 month ago

      Video to video won't work for me, I have an RTX 4080... it gives me a length error

    • @SiliconSouthShow
      @SiliconSouthShow 1 month ago +1

      that was the "How to Train Your Dragon" dragon, exactly.

    • @SCHOOLERstyle
      @SCHOOLERstyle 27 days ago +3

      🤣 LOL, I create better animations on my channel than any of this AI-generated JUNK!

    • @darkaurel9
      @darkaurel9 26 days ago

      Is it working on an Apple M3 Max with 128GB of RAM... can it work on a Mac?
      When I click on "try" it turns into a Chinese version... Any advice?

    • @NormalUser7392
      @NormalUser7392 23 days ago

      i hate spons-

  • @JustFor-dq5wc
    @JustFor-dq5wc 1 month ago +2215

    WTF is going on? The USA has closed-source, censored models and China is dropping open-source, uncensored models.

    • @RhysAndSuns
      @RhysAndSuns 1 month ago +120

      lies coming to the surface

    • @EzekielDeLaCroix
      @EzekielDeLaCroix 1 month ago

      Probably spyware. Trying to get people to use it so the Chinese can spy on you.

    • @armartin0003
      @armartin0003 1 month ago +216

      I wouldn't touch a Chinese model with a 10ft pole. I don't care how good it is, I know there's gotta be a catch.

    • @userrjlyj5760g
      @userrjlyj5760g 1 month ago

      @@armartin0003
      Lol... look who's talking! A brainwashed US individual wearing underwear that was made in China! 🤣😅

    • @4mIlr
      @4mIlr 1 month ago +2

      China: now who is the real evil?

  • @joehick8102
    @joehick8102 1 month ago +408

    As a person with a career in tech, I go into these tutorials knowing that I am gonna have to jump through hoops or technicalities that aren't mentioned. This went so gosh darn smoothly just following your instructions. Many thanks for this.

    • @RhysAndSuns
      @RhysAndSuns 1 month ago

      @@joehick8102 The more quickly you follow a tutorial, the less chance there is that some part of the process has been updated and broken it

    • @RDUBTutorial
      @RDUBTutorial 28 days ago +1

      Exactly my experience too: most of these tutorials, if more than a week or two old, won't work, as ComfyUI and basically everything gets updated very quickly while many of the dependent nodes / code are abandoned.

    • @majorhavoc9693
      @majorhavoc9693 25 days ago +5

      And you don't mind that the AI code you just installed on your PC is made in China? You should seriously reconsider.

    • @lianghao7128
      @lianghao7128 22 days ago

      @@majorhavoc9693 Open source means that you can check the code to see if there are malicious programs, genius.

    • @edphonez
      @edphonez 21 days ago

      Thank you, as a person of similar caliber that is nice to know.

  • @Razumen
    @Razumen 1 month ago +377

    2:54 What's really impressive here is that it actually changes the position of her chest to match her taking in a deep breath.

    • @santi23aparicioh
      @santi23aparicioh 27 days ago +14

      I was going to say the same, impressive!

  • @AntiPolarity
    @AntiPolarity 29 days ago +333

    11:58 "Does not look like a dragon". It's literally Toothless from "How to Train Your Dragon" with red eyes!!! I can't believe that someone does not know this character!

    • @StevenFarnell
      @StevenFarnell 25 days ago +54

      However, it still loses points because Toothless is a DreamWorks-style dragon, and the prompt was Disney/Pixar 🤣

    • @ceo_reuben
      @ceo_reuben 23 days ago +8

      at least he kinda recognized Po

    • @genin69
      @genin69 22 days ago +4

      yeah blows the mind

    • @genin69
      @genin69 22 days ago +10

      also the soldier scene at 16:00. Like, wtf, is this guy living in a cave, does he not know an FPS shooter when he sees one? I'm convinced this channel is some AI robot voice BS channel

    • @The1stGiant
      @The1stGiant 21 days ago +1

      Yeah, it would seem as though the AI model was trained on a lot of DreamWorks content

  • @ryshabh11
    @ryshabh11 20 days ago +26

    Thanks

  • @RikkTheGaijin
    @RikkTheGaijin 1 month ago +1347

    "crushes everything" and then proceeds to demonstrate how literally every other model is better.

    • @SecondLifeAround
      @SecondLifeAround 1 month ago +79

      LOL! Exactly what I was thinking!

    • @armondtanz
      @armondtanz 1 month ago +83

      crushes every open-source one?

    • @hdfsgervda
      @hdfsgervda 1 month ago

      The previous leader in open-source video was CogVideoX 1.5 from the Knowledge Engineering Group (KEG) & Data Mining group (THUDM) at Tsinghua University.
      This model from Tencent looks much better. And it doesn't matter that it's from China, since the model is a "safetensors" file, i.e. just the neural network weights, and the inference code is open source from whomever you want, like the GitHub user Kijai

    • @hdfsgervda
      @hdfsgervda 1 month ago

      Looks much better than the old open-source king, CogVideoX 1.5 5b

    • @LincolnWorld
      @LincolnWorld 1 month ago +60

      Using examples he says are his that are the exact same examples I saw other YT people use yesterday, because they are actually the examples on the website from the people who made the program. Looks like somebody is playing the money game. Look what YT has become.

  • @CheerfulNE
    @CheerfulNE 22 days ago +24

    When you said uncensored, well... I will use this for research purposes only.

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 month ago +82

    "Doesn't look like a dragon" that is literally Toothless from How to Train Your Dragon.

    • @alaskanman825
      @alaskanman825 9 days ago

      That's because this guy is actually an A.I. and it doesn't get the dragon description very well...

    • @5BReiningHorses
      @5BReiningHorses 5 days ago

      It's because Toothless does not look like a dragon.

  • @1mGotcha
    @1mGotcha 1 month ago +221

    00:40 I know what you're thinking 💀

    • @JAKEAVALON-rg8xm
      @JAKEAVALON-rg8xm 23 days ago +14

      I am just thinking about physics😂

    • @LilB.B
      @LilB.B 20 days ago +11

      @@HikingWithCooper proof?

    • @lecoup_
      @lecoup_ 19 days ago +2

      ok what the sigma

    • @amrazaq
      @amrazaq 19 days ago +7

      @@HikingWithCooper so what would be the current industry leader? Asking for a friend

    • @HikingWithCooper
      @HikingWithCooper 19 days ago +3

      @@amrazaq I suppose this one, for video. Still not great, but it's only 2024.

  • @276-
    @276- 1 month ago +79

    that's actually insane! the video model looks so unreal!

    • @GOD_AND_FLAT_EARTH_MUSIC_VIDEO
      @GOD_AND_FLAT_EARTH_MUSIC_VIDEO 1 month ago

      Fake bot paid by this channel, because there's no comment, only a like. Bro, just stop your propaganda with this ugly Chinese model, it's not free and not in 2K HD

    • @geekley
      @geekley 21 days ago

      I'm pretty sure the AI advancements in this decade will result in the end of video proof being accepted as legal evidence of anything (on its own).
      Like, there's gotta be a point where it's literally impossible to tell real and fake apart, even by experts.

  • @phkxv
    @phkxv 18 days ago +4

    9:28 I love the way the bottom left one just spits out all of the spaghetti

  • @mariopadilla1445
    @mariopadilla1445 1 month ago +70

    Ultimately this is also very, very early. I suspect by the end of January we will have multiple refined models that will run on different GPUs. The source has been out less than a week and we already have a first community build. I can't wait to see what happens over the coming weeks. I personally use AI for VFX and composites. I think the goal will be to have models running on 8-12GB GPUs and at 720p. There are plenty of video upscalers already available

    • @theAIsearch
      @theAIsearch  1 month ago +16

      yes, things move so fast! can't wait to see what updates we get next

    • @mariopadilla1445
      @mariopadilla1445 1 month ago +6

      @@theAIsearch I think the most logical step will be smaller, more specialized models, like we have with image generation. I'd like to see one specifically for photorealism, then for animation, then for 3D animation. Obviously there would be NSFW variants for those who ummm... need it lol. That being said, I'm really interested in the lipsync mode. That could be its own dedicated model.

    • @mariopadilla1445
      @mariopadilla1445 1 month ago +1

      @@theAIsearch Are you going to do a comparison of all the open-source models available?

    • @AGoodHeartedSoul
      @AGoodHeartedSoul 1 month ago +2

      God damn, 8-12 GB of VRAM for just 720p videos? I ain't doing anything with my RTX 3050 4GB VRAM

    • @mariopadilla1445
      @mariopadilla1445 1 month ago +1

      @ lol I mean I got my 3060 for $300. Honestly they are really fairly priced now. Mine's a 12GB. Plus you can upscale 720p easily with Topaz or Krea

  • @thays182
    @thays182 1 month ago +78

    ETA on image to video? This is the most important one, the one that changes everything.

    • @hdfsgervda
      @hdfsgervda 1 month ago

      Kijai's plugin for CogVideoX has image to video. CogVideoX, even 1.5 with 5B params, is much worse than this though

    • @prdsatx4467
      @prdsatx4467 1 month ago +4

      Agreed. I'm looking forward to this as well.

    • @Rucky888
      @Rucky888 19 days ago

      7 minutes to generate

    • @thays182
      @thays182 19 days ago +1

      @@Rucky888 7 minutes for what? Image to video has been released?

    • @lowkeynerd
      @lowkeynerd 17 days ago +2

      @@thays182 also wanting this bad.

  • @__Jah__
    @__Jah__ 1 month ago +174

    11:32 it does look like a dragon, just specifically Toothless from How to Train Your Dragon
    (Edit: timestamp was wrong)

    • @MuffShuh_PA
      @MuffShuh_PA 1 month ago +10

      Was about to write that ^^ exactly, and because "How to Train Your Dragon" is an animated movie I would say it has done this part pretty well, even if DreamWorks isn't Disney :D

    • @fatih.tavukcu
      @fatih.tavukcu 1 month ago +1

      Only much much scarier. Even scarier than all the other dragons. It was directly staring into my soul :|

    • @theAIsearch
      @theAIsearch  1 month ago +21

      True - thx for sharing! Admittedly, I have not watched it

    • @StrongRespect
      @StrongRespect 1 month ago +6

      @@theAIsearch Shame on you. Go watch it. But I only liked the first one

    • @julx97
      @julx97 1 month ago +2

      What is this time stamp? 😂😂

  • @todeilfungo
    @todeilfungo 21 days ago +7

    It looks like Kling is still way better, and it impressed me way more than the other AI.

  • @sisisisi1111
    @sisisisi1111 24 days ago +14

    That anime scene was insane. It really looks drawn, and especially the framerate is like animation, not like that "gacha game style" Kling look.
    That's really impressive

  • @sentinelah7
    @sentinelah7 1 month ago +8

    Another tricky prompt is to ask the AI to generate a turnaround video. Sometimes the face changes after the turn; or the model acts like an owl; or it seems the head is rotating but it only shows the back of the hair.

  • @WingsandBlades257
    @WingsandBlades257 20 days ago +4

    For a second I thought this was a suspicious advertisement because of the thumbnail 😭🙏

  • @Toxic2T
    @Toxic2T 21 days ago +5

    2025 is going to be a freaking wild year for AI and AI generated content.

  • @UmairAamir01
    @UmairAamir01 27 days ago +7

    21:24 lmao I wasn't expecting Po to appear as a result of this prompt 🤣

  • @joshflorence1998
    @joshflorence1998 1 month ago +35

    Umm... The title of the video says it "crushes everything", but it performs worse in almost every one of the Text-to-video generations...

    • @vanandsan27
      @vanandsan27 15 days ago +5

      Well, it was compared to non-open-source paid generators that are older and therefore more trained?

  • @JulianHarris
    @JulianHarris 1 month ago +8

    I think it’s useful for the audience to appreciate that if you did those prompts 3-5x you’d get very different results from them all.

  • @vrynstudios
    @vrynstudios 1 month ago +5

    Thanks for your hard work. Great video. In your opinion, in the long run, which would you suggest between Kling and Hailuo?

  • @animateclay
    @animateclay 1 month ago +152

    The requirements are.... "An NVIDIA GPU with CUDA support is required. We have tested V100 and A100 GPUs. Minimum: The minimum GPU memory required is 11GB. Recommended: We recommend using a GPU with 32GB of memory for better generation quality."

    • @user-on6uf6om7s
      @user-on6uf6om7s 1 month ago +13

      The estimates from the model creators tend to be pretty conservative. I've heard of people getting it working on 8GB with the right settings but expect to wait 10+ minutes for a single generation.

    • @KManAbout
      @KManAbout 1 month ago +6

      Cloud compute is a thing. You can rent a GPU like an H100 for 5 dollars an hour.

    • @armondtanz
      @armondtanz 1 month ago +2

      @@KManAbout how many generations will you get in 1 hour? 4 seconds like the premium ones out there? I've worked out you get around 100 vids for $30 per month

    • @hdfsgervda
      @hdfsgervda 1 month ago +1

      Both CogVideoX 1.5 5b and this can run on lower-end consumer GPUs with quantization and tiling. The quality may be worse and inference takes longer, but a 12GB GPU isn't that far out of reach for many people

    • @TPCDAZ
      @TPCDAZ 1 month ago +4

      @@KManAbout Then it's not free is it...
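
A quick way to check where a given card falls against the figures quoted in this thread is a few lines of standard PyTorch. This is a minimal sketch; the 11 GB / 32 GB thresholds are simply the numbers from the stated requirements above, not limits enforced by the software:

```python
# Check local VRAM against the requirements quoted above (11 GB min, 32 GB recommended).
# Standard PyTorch calls only; the thresholds are the quoted figures, not hard limits.
import torch

MIN_GB, RECOMMENDED_GB = 11, 32

if not torch.cuda.is_available():
    print("No CUDA-capable GPU detected; local generation will not work.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < MIN_GB:
        print("Below the stated minimum; expect failures or heavy quantization/offloading.")
    elif vram_gb < RECOMMENDED_GB:
        print("Meets the minimum; lower resolution, fp8, or fewer frames may be needed.")
    else:
        print("Meets the recommended spec.")
```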

  • @fredrickbambino
    @fredrickbambino 1 month ago +84

    Kling is still better by a long shot but for a free program, it's impressive.

    • @Psychopatz
      @Psychopatz 1 month ago +22

      the uncensored part is what hooks me up

    • @ADHDGamerdude
      @ADHDGamerdude 1 month ago +2

      ​@@Psychopatz have you tried it?

    • @AnonymerTraum
      @AnonymerTraum 1 month ago +1

      kling is censored, no boobies in kling

    • @RasmusSchultz
      @RasmusSchultz 1 month ago +4

      It's sometimes better, sometimes worse. I think a lot of it might come down to sheer randomness. Run the same prompt a few times and you might get something entirely different with either model. Comparing videos based on a single attempt is actually kind of silly - these models are very random and might create something you happen to either love or hate with each attempt.

    • @LagiohX3
      @LagiohX3 1 month ago +2

      its really impressive

  • @OSRSNuub
    @OSRSNuub 11 days ago

    13:30 this is what I found the most amazing out of it all. That looks like it was actually animated by a studio. Incredible.

  • @InfoRanker
    @InfoRanker 1 month ago +13

    Text to video, video to video but where's the image to video?

    • @lowkeynerd
      @lowkeynerd 17 days ago

      not out yet

    • @fredexdelivers
      @fredexdelivers 2 days ago

      Where is video to video? All I can find is text to video on the website.. and they’re only 5 seconds long

  • @Comenta-san
    @Comenta-san 1 month ago +5

    Holy moly, using this with 45GB of VRAM right now (on Lightning AI, an L40S GPU on the free tier)
    The quality is really next level. Now I'm getting interested in video generation; we're finally getting out of the weird "embryonic" phase, like when we had Stable Diffusion 1.4
    Also, free and uncensored? 😯 Wow

    • @Comenta-san
      @Comenta-san 1 month ago

      @@HikingWithCooper Yes, free tier. You download the model there, free up to 100GB of storage. You need to register with a phone number to get the free monthly credits

    • @theyellowishlimeace4735
      @theyellowishlimeace4735 14 days ago

      Yeah, I'm also interested in this Lightning AI thing. How does the free tier work?

  • @Scosar
    @Scosar 1 month ago +6

    I hope you'll make another tutorial on Hunyuan image-to-video after it's released. You always explain things really well.

  • @chariots8x230
    @chariots8x230 1 month ago +7

    I just need something that works well for AI filmmaking. So, it has to have a good ‘Image to Video’ tool, camera controls, a start frame & end frame feature, video to video, video inpainting, lip-syncing, the ability to have multiple consistent characters interacting with each other, etc.

  • @CHRIS-ELID
    @CHRIS-ELID 1 month ago +3

    Eureka moment for Avatar Music Video!!!
    Thank you Admin❤❤❤❤

  • @abelect1682
    @abelect1682 13 days ago +4

    How come 500k people agreed to watch a 39-minute ad

  • @manonamission2000
    @manonamission2000 1 month ago +28

    13 billion parameters, not too shabby

  • @Arick_Lee
    @Arick_Lee 10 days ago

    I think the more you can specifically correlate a new, abstract, innovative idea to existing ideas while providing differentiating features, the more likely it will be to produce results that match the picture in your mind.

    • @Arick_Lee
      @Arick_Lee 10 days ago

      Based on an understanding of what its foundation model was trained on. As with all communication, it is important to KNOW YOUR AUDIENCE. Understanding where they come from, their frame of mind, or the dataset the model was trained and weighted on, matters more than anything else. As it is with humans.

    • @Arick_Lee
      @Arick_Lee 10 days ago

      Until it sports an onboard universal "wild human" translator.

  • @MotionMg-admin
    @MotionMg-admin 1 month ago +8

    Hunyuan can work on Google Colab, or a free cloud source like the Kaggle environment; please try it on that

  • @TheBludgeoningEffect
    @TheBludgeoningEffect 18 days ago +1

    7:30 Kling AI is definitely the best out of what is shown here. Ultra-realistic details on the horse and environments, looks exactly like I'd expect from a Hollywood production; the only giveaway is the infinite sand/fog around the feet

  • @wowzande
    @wowzande 1 month ago +21

    Yo bro keep it up the video is fire!

    • @captaintroll7294
      @captaintroll7294 1 month ago +2

      CIA spying vs CCP spying, you can choose

    • @2MilyonTM
      @2MilyonTM 1 month ago

      @@captaintroll7294 I'mma take the CCP one

    • @ericm.3919
      @ericm.3919 14 days ago

      @@captaintroll7294 Why not both?

  • @lowkeynerd
    @lowkeynerd 17 days ago +2

    When do we think the image to video workflow will be released?

  • @__________________________6910
    @__________________________6910 1 month ago +4

    You know how the YouTube algorithm works, putting up a naughty thumbnail 😂

  • @nolimitchix
    @nolimitchix 27 days ago +2

    🎉 great review video🎉 you did a great job comparing these

  • @Hathathorne
    @Hathathorne 1 month ago +49

    Are we sure we want to run a Chinese-owned AI on our computers?

    • @HateBeinAlive
      @HateBeinAlive 22 days ago +10

      Yes

    • @SuperCocoKiller
      @SuperCocoKiller 21 days ago +6

      it's free for a reason, they can earn by selling your data :V

    • @sinfulgrace
      @sinfulgrace 21 days ago +12

      How's it any different from running Western Owned AI? You think your precious NSA isn't spying on your ass? XD How CUTE.

    • @SuperCocoKiller
      @SuperCocoKiller 21 days ago +1

      @@sinfulgrace One doesn't sell your ID and password to Russian kids, the other does.

    • @jaideepshekhar4621
      @jaideepshekhar4621 20 days ago +2

      As opposed to... ClosedAi? 😂😂😂

  • @SoloJetMan
    @SoloJetMan 1 month ago +8

    I died at the terracotta dancing lmfao

    • @ShifterBo1
      @ShifterBo1 1 month ago +1

      Where, time stamp?

    • @coltynstone-lamontagne
      @coltynstone-lamontagne 1 month ago

      ​@@ShifterBo1 I am only 5 minutes in and already saw it. Too busy to go back but you'll see it pretty early on

    • @encartauk
      @encartauk 29 days ago

      it was pretty camp too.

  • @davidr007
    @davidr007 1 month ago +26

    And this is free... that's just nuts! Will need to give this a try.

    • @majorhavoc9693
      @majorhavoc9693 25 days ago +10

      It's free because it's from China. Keep an eye on your bank account if you do any banking on that PC.

    • @davidr007
      @davidr007 24 days ago

      @@majorhavoc9693 Good point, that's why I use VMs

    • @vluhrs
      @vluhrs 21 days ago

      @@majorhavoc9693 it's open source...

    • @lowkeynerd
      @lowkeynerd 17 days ago

      @@majorhavoc9693 lmao.

  • @GUN2kify
    @GUN2kify 24 days ago +20

    11:55 -- actually it is the dragon from "How to Train Your Dragon", a DreamWorks movie universe.

  • @Arick_Lee
    @Arick_Lee 10 days ago

    In the soldier combat dip-and-swing prompt, I think if you look carefully at each result, you can see the average AI interpretation of "dip and swing". It shows up, order-of-execution-wise, towards the end of the clips

  • @tarelethridge8937
    @tarelethridge8937 1 month ago +14

    That looks like Toothless from How to Train Your Dragon, which was made by DreamWorks Animation. The others look like a generic dragon. So this does look like a dragon from a 3D animated film. Just not Pixar.

    • @unfunnyfailure
      @unfunnyfailure 1 month ago

      They did him dirty

    • @theAIsearch
      @theAIsearch  1 month ago +3

      True - thx for sharing! Admittedly, I have not watched it

  • @See_thru_it_all
    @See_thru_it_all 26 days ago +3

    What's most impressive to me here, at 2:42, is his chest when he inhales. 🤯

  • @AiifhcDkjdhff
    @AiifhcDkjdhff 4 days ago

    I have to say, the creativity of this video really deserves full marks!

  • @SecondLifeAround
    @SecondLifeAround 1 month ago +8

    Wow MiniMax is pretty amazing.

  • @SirPogsalotCreates
    @SirPogsalotCreates 16 days ago +1

    7:49 I think you might be underestimating just how many Pomeranian chefs are out there tbh

  • @anonsimpch.7344
    @anonsimpch.7344 1 month ago +2

    I noticed there is no Image to video after following all directions. Well, at least I made a video.

  • @thays182
    @thays182 11 days ago

    Hi there, when can you make an updated video with the enhance-a-video node, and IP2V (if it's working), to hold us over until I2V comes out? As always, great video!

  • @RDD87z
    @RDD87z 1 month ago +4

    The image-to-video (dancing) making the wrinkles in the dress appear as she moves is incredible.

  • @syoudipta
    @syoudipta 22 days ago

    I wasn't convinced by the terracotta warrior clip, I needed the anime girl clip as well to be sure 😂😂

  • @SiliconSouthShow
    @SiliconSouthShow 1 month ago +1

    that dragon is the "How to Train Your Dragon" dragon, exactly.

  • @dailydoseofshitpost751
    @dailydoseofshitpost751 1 month ago +14

    so basically every other AI model coming out every week "crushes" all the previous ones?

    • @hdfsgervda
      @hdfsgervda 1 month ago +4

      the new iPhone is the fastest iPhone ever LOL

    • @darmok072
      @darmok072 1 month ago +1

      Apparently it's a game changer and will blow your mind.

    • @vargskelethor
      @vargskelethor 1 month ago +3

      its been like that for years

  • @muumipahpa
    @muumipahpa 1 month ago +20

    No one: The camels: 0:12

    • @IN-pr3lw
      @IN-pr3lw 1 month ago +6

      no one: muumipahpa 4 hours ago:

    • @RazelolGaming
      @RazelolGaming 1 month ago +3

      No one: IN-pr3lw 18 hours ago:

    • @DivijHere
      @DivijHere 1 month ago +5

      wtf is going on here

    • @Nyx_z_
      @Nyx_z_ 29 days ago +5

      here on going is wft

    • @unkown110
      @unkown110 19 days ago +1

      wtf is going on here

  • @3DJapan
    @3DJapan 25 days ago +4

    Dragon clearly ripped off of How to Train Your Dragon.

  • @MrRandomPlays_1987
    @MrRandomPlays_1987 1 month ago

    Unbelievable, but when are we likely to also get the image-to-video feature with Kijai's version of Hunyuan?

  • @DivijHere
    @DivijHere 1 month ago +20

    HOW IS IT FREE???

    • @AaronFleshner
      @AaronFleshner 20 days ago +7

      Cause you spend $4k on the local computer you run it on. So it's "free"

  • @NoEgg4u
    @NoEgg4u 11 days ago

    How much VRAM do you need to create still images / photos?
    Can you provide an image, and have the tool change parts of it (such as a different face, or different skin tone / color, or different hair style, or different weight (make the person skinnier or heavier))?
    I would like to try this Hunyuan, free, open source software. I have a 4070 ti with 12 GB of VRAM.
    Based on our host's demonstrations, videos would be very short and somewhat low resolution, with only 12 GB of VRAM. But how about still images or photos?
    Also, once an image is created, can it be tweaked? For example, tell the tool to have the person smile, or wink, etc? Will each tweak require more and more VRAM?

  • @tech-utuber2219
    @tech-utuber2219 1 month ago +32

    Has anyone gone through all the code to verify that this is not some kind of Honey-pot/Trojan/Backdoor scheme? Why would a Chinese company just give this away? How does this benefit them?

    • @lxnd_
      @lxnd_ 1 month ago +7

      I mean what's the difference to a US company doing this? Maybe they just wanna show they're not falling behind and become as relevant in the AI industry as possible
      Or it's just a backdoor by the CCP 😂 guess we'll never know

    • @chibisayori20
      @chibisayori20 1 month ago +3

      oh it's definitely a backdoor alright, but you should probably be safe if you run it on a VM you don't care about, and in case it can leak through the host, maybe it'd only work on Windows hosts, so consider using a Linux host or something
      but I doubt this is a backdoor, like, why would it be? The least they'd do is sell your personal data

    • @thedesireguardian2470
      @thedesireguardian2470 1 month ago

      @@chibisayori20 conflicting statements

    • @marcomoreno6748
      @marcomoreno6748 1 month ago +4

      Why are people worried about "CCP backdoors" when it's OPEN SOURCE 😂 why do I get the feeling 99% of the online "techbro" fanbois have Z-E-R-O clue when it comes to basic software terms?

    • @marcomoreno6748
      @marcomoreno6748 1 month ago

      US companies do something: zermg! 🥴 😍 ❤️ Capitalism 🗽
      China does something: 😡 😡 😡 THEY MUST BE UP TO SOMETHING, THOSE DEVILS

  • @Delllatitude7490
    @Delllatitude7490 1 month ago

    Even AI learned how to pull crowds with thumbnails

  • @DepressedJellybean
    @DepressedJellybean 1 month ago +13

    Tried this on my 4GB VRAM laptop... It's as bad as it sounds

  • @galvinvoltag
    @galvinvoltag 19 days ago +2

    I am starting to think that some guy was so determined not to read any books that he invented AI text-to-image generators for the sole purpose of being able to directly watch the books and never ever read again.

  • @beachcomberfilms8615
    @beachcomberfilms8615 1 month ago +13

    The Pomeranian puppies generation did follow your prompt. One puppy is the sous chef teaching the others, who are not yet chefs, hence they do not have the chef's hat or uniform; they're new students. A case of garbage in, garbage out: you didn't specify they all needed to be wearing outfits.

  • @rapdog96
    @rapdog96 13 days ago +2

    11:56 it made the dragon Toothless from How to Train Your Dragon :O

  • @PolyscopeStudios
    @PolyscopeStudios 27 days ago +3

    Thanks for another great comparison and installation guide!

  • @menosproblemos6993
    @menosproblemos6993 15 days ago

    The dragon in "Princess runs away from dragon" is clearly based on Toothless from How to Train Your Dragon.

  • @High-Tech-Geek
    @High-Tech-Geek 1 month ago +3

    I can't even get Midjourney to produce a static image when I use the term "running away". It always renders subject 1 running towards subject 2, or both subjects running with their backs to the camera. I wonder if you could try different variants like "dragon chases princess" or "princess flees dragon" or something similar?

    • @NLPexperts
      @NLPexperts 1 month ago

      Of course, try variants, like "princess stands in front of dragon then runs towards camera". Try synonyms, like "fleeing" or "being chased by"

    • @barronhelmutschnitzelnazi2188
      @barronhelmutschnitzelnazi2188 1 month ago +2

      Things like putting in "frontal view shot" help

  • @Nehji_Hann
    @Nehji_Hann 13 days ago

    Very impressive indeed, though there are few clips shown here that don't at least have a bit of a "hmm... feels off" quality, with clear signs that it was at minimum assisted by AI.
    Passing most of these off as legit camera-recorded vids would be idiotic at best.
    But damn, this is insane and amazing

  • @Hegenbrecht
    @Hegenbrecht 1 month ago +5

    Downloaded both models. I cannot choose bf16 quantization in Comfy for the big model, only fp8. Both models generate without errors, but the video is black.

    • @Strakin
      @Strakin 1 month ago +4

      My Vids are black too.

    • @petejorative
      @petejorative 13 days ago

      @@Strakin did you figure out the problem?

    • @Strakin
      @Strakin 12 days ago

      @@petejorative No, sorry. Maybe something about the torch installation.

    • @vinegro4579
      @vinegro4579 8 days ago

      I can't even generate anything, it keeps saying it can't find the text encoder but idk wtf any of that means

  • @kalki-avatar
    @kalki-avatar 7 days ago

    Thank you 🙏 but I want to know how to make a movie, or a short film, with consistent characters?

  • @sour_lemon_zest
    @sour_lemon_zest 13 days ago +3

    27:51 I get missing node types, but more than just the one you got; mine says it's missing
    HyVideoModelLoader
    HyVideoSampler
    HyVideoVAELoader
    DownloadAndLoadHyVideoTextENcoder
    HyVideoTextEncode
    VHS_VideoCombine
    HyVideoDecode
    is there any way to fix this?

    • @Clark971
      @Clark971 12 days ago

      For me, it's because I didn't restart python main.py before refreshing. It didn't work like in the video, I had to re-run the command before refreshing

    • @WalintHUN
      @WalintHUN 11 days ago

      Same problem, except I could find and install VHS_VideoCombine; the others aren't in the Manager/custom nodes list

    • @WalintHUN
      @WalintHUN 11 days ago

      oh sh*t... :) run the .bat file as Admin, something couldn't update w/o rights...

    • @sour_lemon_zest
      @sour_lemon_zest 11 days ago

      @ ok I’ll try Ty

  • @marshalshaydi7985
    @marshalshaydi7985 1 month ago +1

    It looks like they used a very big amount of data from films, anime, and cartoons which Tencent can show in China as training data, so sometimes it literally shows fragments of animation from cartoons or anime in the results.

  • @zengrath
    @zengrath 1 month ago +4

    I tried 3 videos including the examples provided, exact same models and settings, and I get videos of only a black image and nothing else. Nvidia 4090 here. EDIT: Updating the ComfyUI bat file as well as from the manager didn't fix the issue, but running the update_comfyui_and_python_dependencies bat file fixed it for me. Others on GitHub said a fresh install also works, but I saw no reason to do a fresh install when I had installed ComfyUI only weeks ago.

    • @theAIsearch
      @theAIsearch  1 month ago

      looks like you need to update torch: github.com/kijai/ComfyUI-HunyuanVideoWrapper/issues/55

    • @sadshed4585
      @sadshed4585 1 month ago

      I'ma try this now, thanks. Glad to see I'm not the only one

    • @sadshed4585
      @sadshed4585 1 month ago

      it is fixed

    • @zengrath
      @zengrath 1 month ago +1

      @@HikingWithCooper Working great on my 4090. I'd still use the smaller sizes shown in this video to give you some room to generate videos a couple of seconds longer, but yeah, it blows my mind how well it's doing considering all the other local ones I tried in the past have been absolutely terrible. I'm not sure it's something I'll personally play with for long; it's kind of a cool thing to try out and then move on. But I have been impressed with what it has done so far with my prompts. It is still hit or miss of course: it may take a few minutes to generate a video at default settings, then maybe 25% of the time you get something fairly decent, 75% it's okay but with glitches, and like 5-10% of the time you get something really cool. I've only been playing with it for a few hours.

    • @DC_Adventures
      @DC_Adventures 16 days ago

      @@zengrath, can it work without internet? Also, I have OLD GPUs, can I RUN it with 2x 3070 Ti + 1x 3070? Thanks
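
The replies above (and the linked GitHub issue) point at an outdated PyTorch install as the usual cause of all-black outputs. A minimal diagnostic sketch, assuming it is run in the same Python environment ComfyUI uses; it only reports the torch build, its CUDA version, and bf16 support, which is usually enough to tell whether the dependencies-update step is needed:

```python
# Environment check for the black-video issue discussed in this thread.
# Assumption: this runs in the same Python environment as ComfyUI.
import torch

print("torch version:", torch.__version__)
print("compiled for CUDA:", torch.version.cuda)   # None means a CPU-only build
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("bf16 supported:", torch.cuda.is_bf16_supported())
```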

  • @propylaeen
    @propylaeen 25 days ago

    Super natural... maybe you need to adjust your empathy sensors...

  • @udance4ever
    @udance4ever 1 month ago +3

    @1:31 my mind is absolutely blown at the ability to use stick models to make a single image dance🤯

  • @christianstachl
    @christianstachl 19 days ago +1

    I don't care that much about that stuff...
    Uncensored!
    Well, I might give it a try 😅

  • @GS195
    @GS195 1 month ago +26

    Image to video, then we'll talk

    • @ytubeanon
      @ytubeanon 1 month ago +4

      isn't 1:39 doing that?

    • @thays182
      @thays182 1 month ago +2

      Not yet. At the end it says it's not released yet. Image to video was shown as an example, but it looks like we can't use it yet.

    • @GOD_AND_FLAT_EARTH_MUSIC_VIDEO
      @GOD_AND_FLAT_EARTH_MUSIC_VIDEO 1 month ago +1

      try Hotshot, it's in 2K HD and the best in a long time

    • @NostalgicMem0ries
      @NostalgicMem0ries 1 month ago +6

      I can't wait for my crush to do whatever I please :)))

    • @templarking6819
      @templarking6819 1 month ago

      That dude... @@NostalgicMem0ries

  • @Techduturfu
    @Techduturfu 1 month ago

    Great video! I do have one question, though: is it possible to segment the project and use just one of its features? I'm particularly interested in the image-to-animation/lip-sync functionality, as it seems like a great alternative to Hedra. Additionally, would focusing on just that feature make the model lighter? I’m not a developer, so I’m not sure if what I’m suggesting is feasible or even makes sense.

  • @badmovi
    @badmovi 1 month ago +4

    Aren't you supposed to have 60GB of VRAM to even get to 720p?

    • @pengiPenguin
      @pengiPenguin 1 month ago

      Watch the whole video

    • @TPCDAZ
      @TPCDAZ 1 month ago +2

      @@pengiPenguin We did. Anything less creates a 2-second, horrible-quality GIF, pretty much. So YES, you do need a beast of a GPU

    • @digitalmagicAR
      @digitalmagicAR 29 days ago

      @@HikingWithCooper thank you that is exactly the answer I was looking for

  • @Lorentz_Factor
    @Lorentz_Factor 25 days ago +1

    If you want to use local models like this, you need to do image-to-video: start with the text/image as you want it to appear, and then turn it into video using the AI. Don't expect it to do it start to finish... 13:22

  • @unfunnyfailure
    @unfunnyfailure 1 month ago +5

    11:32 They did Toothless dirty

  • @Wedgetail96
    @Wedgetail96 20 days ago

    It struggled with the soldier request. It looked like it pulled it straight from an old version of “Call of Duty”.

  • @markmuller7962
    @markmuller7962 1 month ago +4

    Amazing, let's hope this opens a new era of free and democratic models

  • @ducking...
    @ducking... 13 days ago

    12:25 maybe specify the placement of the dragon and the direction the princess runs

  • @ArthurGenius
    @ArthurGenius 1 month ago +3

    That moment when the model is so good the GPU can't handle it

    • @woritsez
      @woritsez 1 month ago

      that was my first thought, I had heard that for a local install you need some industrial-scale GPU to get results

  • @silverfiste
    @silverfiste 25 days ago

    I think you should have tried a tilt shift video

  • @ian2593
    @ian2593 1 month ago +3

    My 16GB of VRAM almost works but gets an OOM error on the final step. Any advice on how to tweak it a bit more to save on RAM/VRAM? [edit] Reducing the frames works. Should've watched the whole thing.
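
Reducing the frame count, as noted in the edit above, is the actual fix; clearing whatever a failed attempt left allocated can also help before retrying. A minimal sketch using only standard torch/gc calls; free_vram is a hypothetical helper name, not part of ComfyUI or Hunyuan:

```python
# Free leftover GPU memory between generation attempts after an OOM.
# free_vram() is a hypothetical helper, not a ComfyUI or Hunyuan API.
import gc
import torch

def free_vram() -> None:
    gc.collect()                  # drop lingering Python references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
        alloc = torch.cuda.memory_allocated() / 1024**3
        reserved = torch.cuda.memory_reserved() / 1024**3
        print(f"allocated: {alloc:.2f} GB, reserved: {reserved:.2f} GB")

free_vram()
```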

  • @MrRomanrin
    @MrRomanrin 21 days ago

    14:00 WOW !!!!
    THIS IS THE BEST ANIME STYLE !

  • @SickAcousticCovers
    @SickAcousticCovers 1 month ago +3

    I went to high school with a white guy named Will Smith. That wasn't him.

  • @maskedmischief932
    @maskedmischief932 20 days ago +1

    There were definitely a few impressive generations from Hunyuan... I'm not denying that! But in my opinion, overall, Kling still performed the best out of all of them. But thank you for taking all the time and presumably money to put this research together.
    If it's the only locally compatible generator out there, I may still try it!

  • @GiblikJovanovic
      @GiblikJovanovic 28 days ago +73

    it's really crazy how nobody is talking about the book Nifestixo The Hidden Path to Manifesting Financial Power, it changed my life

  • @pomitjoe2327
    @pomitjoe2327 17 days ago

    itadori and megumi sneaking in at the top right clip at 13:57

  • @7RStudios
    @7RStudios 1 month ago +17

    Ah yes. Free. Tencent. Nothing fishy here.

    • @monica46549841
      @monica46549841 1 month ago

      xd

    • @hdfsgervda
      @hdfsgervda 1 month ago +5

      It's good to be suspicious, but the models are "safetensor", which is a format that's merely neural network weights. The inference code is open-source. That said, people should never trust random ComfyUI extensions and their Python dependencies, so maybe being suspicious is smarter.

    • @dirremoire
      @dirremoire 1 month ago

      There isn't. Ignore at your own peril.

  • @StoneClone
    @StoneClone 21 days ago

    For text I have learned to use phrasing like "and on the wall there is written: Subscribe". Works with Flux, for example

  • @digiart-cgi-ai-9152
    @digiart-cgi-ai-9152 5 days ago

    Can this also do image to video? Thanks for the great tutorial, but without image to video, it's no good for me.

  • @vladrusu7478
    @vladrusu7478 1 month ago +13

    I'm 3 days into these open-source models and my mind still cannot comprehend how this is legal

    • @Ddotkay
      @Ddotkay 1 month ago +6

      why?

    • @youtibe2320
      @youtibe2320 1 month ago +4

      Why do you say that?

    • @aegisgfx
      @aegisgfx 29 days ago

      @@Ddotkay I'm more confused about what this technology is actually good for. I think all the hype about these techs is just that: hype, hype over nothing. This may end up going down in history as the ultimate vaporware

    • @TK_090
      @TK_090 17 days ago

      @@aegisgfx Wrong. Major disruption for advertising and marketing.