NewGenAI
  • Videos: 77
  • Views: 68,039
Can LTX-Video Create Stunning Text-to-Video on Low VRAM (6/8 GB)? Find Out Now!
LTX-Video
github.com/Lightricks/LTX-Video/
Installation guide
drive.google.com/file/d/18lEmS3tP1ZMeElYhEyctk8MhTv57yq27/view?usp=sharing
#AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #texttovideo #LTXVideo
0:00 Benchmark
0:09 Introduction
0:36 Installation on Windows
Views: 766

Videos

Allegro Quantized: Text-to-Video Model Now Runs on 8GB VRAM!
198 views · 7 hours ago
Allegro huggingface.co/rhymes-ai/Allegro Installation guide drive.google.com/file/d/1mNE1S6VKQKOtkYHn_XqLkPVuts4HSQE4/view #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #texttovideo #allegro 0:00 Introduction 0:31 Benchmark 2:0...
Can NVlabs SANA Generate 4096x4096 Images on Just 8GB VRAM? Let’s Find Out!
386 views · 12 hours ago
NVlabs SANA github.com/NVlabs/Sana Installation guide drive.google.com/file/d/1R6K_-vRen5BL-PXijnO8UzYQK7FG0cHh/view #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #texttoimage #sana #NvlabsSana 0:00 Introduction 0:33 Benchmark ...
Flux on 8GB VRAM? Witness the Magic of Lightning-Fast Image Generation using Nunchaku / SVDQuant
481 views · a day ago
Nunchaku / SVDQuant github.com/mit-han-lab/nunchaku Installation guide drive.google.com/file/d/1qtr00-PusMrbdNz5mBs7bCh_THg5VufG/view?usp=sharing #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #texttoimage #fluxdev #fluxschnell ...
Pyramid Flow: Lightning-Fast Video Generation from Text or Images - Only 8 GB VRAM Needed! Windows
690 views · 21 days ago
Pyramid Flow github.com/jy0205/Pyramid-Flow Updated files drive.google.com/file/d/1S_eh_TadJ1If26DTcmYdBvJdHdkBmQmK/view?usp=drive_link #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #texttovideo #imagetovideo #pyramidflow #text...
OmniGen: Transforming Multi-Modal Prompts into Stunning Visuals on 8GB VRAM
2.1K views · 28 days ago
OmniGen github.com/newgenai79/OmniGen/ Installation guide drive.google.com/file/d/17mFxfAj3JH0Wfr-Ouf618bufiPl03eKN/view #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology 0:00 Introduction 0:50 Benchmark on 8GB VRAM 1:05 Installation
The Beginner's Guide to Creating Your Own Talking-Head / Lip sync videos using EchoMimic
630 views · a month ago
Forge github.com/lllyasviel/stable-diffusion-webui-forge EchoMimic tutorial ruclips.net/video/WtHdvSSQlWo/видео.html Extract frames ffmpeg -i video.mp4 -vf fps=30 input\%d.png Combine frames after post-processing ffmpeg -framerate 30 -i %d.png -vcodec libx264 -crf 1 video.mp4 #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInA...
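For readability, the two ffmpeg commands from that description, as they would be typed in a Windows command prompt (the fps and CRF values are the ones quoted above; the input\ folder must already exist):

:: extract frames from the source clip at 30 fps into numbered PNGs
ffmpeg -i video.mp4 -vf fps=30 input\%d.png
:: reassemble the post-processed frames into an H.264 video
ffmpeg -framerate 30 -i %d.png -vcodec libx264 -crf 1 video.mp4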
Ctrl-X: Revolutionizing Text-to-Image Control Without Guidance
159 views · a month ago
Ctrl-X github.com/genforce/ctrl-x Installation guide drive.google.com/file/d/1KdxQkjWQaPvgBTS4YGBV3ewMUjL477E2/view?usp=drive_link #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #CtrlX #T2IGeneration #StructureControl #Appearanc...
CtrLoRA Explained: Next-Level Control for Your Text-to-Image Creations!
409 views · a month ago
CtrLoRA github.com/xyfJASON/ctrlora Installation guide drive.google.com/file/d/14fwXYLkbEcd1FHjOOPxMunpIkCW9zDTK/view?usp=sharing #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #CtrLoRA #ImageGeneration #EfficientAI #Controllabl...
Meissonic: Lightning-Fast 1B T2I Model for Jaw-Dropping 1024x1024 Images on Consumer GPUs!
318 views · a month ago
Meissonic github.com/viiika/Meissonic Installation guide drive.google.com/file/d/1qTiJm_4az_ud4rCKxM6xZFzTLkwDnFx6/view?usp=sharing Gradio WebUI drive.google.com/file/d/1cgFhMKpDicF-lUV8xzRDZMhemXQ49oEd/view?usp=sharing #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthro...
The BEST voice cloning app ever? Clone Any Voice with F5-TTS: The Most Accurate TTS Yet!
3.1K views · a month ago
F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching github.com/SWivid/F5-TTS Fix NUMPY package version pip install --force-reinstall -v "numpy==1.25.2" Quick installation guide 1. Clone and navigate inside the folder 2. Create virtual environment python -m venv venv 3. Activate virtual environment venv\scripts\activate 4. Install Wheel pip install wheel 5. Install require...
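A sketch of that quick installation guide as console commands, assuming a Windows command prompt; the truncated final step is presumed to install the repo's requirements, so verify the exact command against github.com/SWivid/F5-TTS before running:

git clone https://github.com/SWivid/F5-TTS
cd F5-TTS
python -m venv venv
venv\scripts\activate
pip install wheel
:: the description is cut off at "Install require..."; presumably the project's requirements are installed here
pip install -r requirements.txt
:: numpy pin from the description, with corrected pip syntax
pip install --force-reinstall -v "numpy==1.25.2"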
From Low to Pro: Frame Interpolation with REAL-Video-Enhancer on Windows
259 views · a month ago
REAL-Video-Enhancer github.com/TNTwise/REAL-Video-Enhancer #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #VideoEnhancer #FrameInterpolation #Upscaling #REALVideoEnhancer #VideoEditing #RIFEESRGAN #AIUpscaling #AiVideoInterpolat...
Think 8GB VRAM Can't Handle Controllable AI Generation? Naaaaaah! Introducing ControlNeXT SVD
2.2K views · a month ago
ControlNeXT github.com/dvlab-research/ControlNeXt/ ControlNeXt-SVD-v2 for Low VRAM systems (at least 8 GB VRAM) 8 GB shared github.com/newgenai79/ControlNeXt-SVD-v2 #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #ControlNeXT #AI...
Makeine Magic: Create Reels & Shorts from Just a Text Prompt!
125 views · a month ago
Makeine github.com/Kither12/Makeine Updated files for Windows drive.google.com/file/d/1hhqBADXnufZzbTfROl92dxv-6fDE9QSK/view?usp=sharing ImageMagick for Windows imagemagick.org/script/download.php#windows #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearc...
Deep Live Cam: Face Swaps for Live camera, Images, Videos, and Multiple Faces!
680 views · a month ago
Deep-Live-Cam github.com/hacksider/Deep-Live-Cam Fix for transparent window github.com/hacksider/Deep-Live-Cam/issues/668 #AI #StableDiffusion #TechInnovation #ArtificialIntelligence #DeepLearning #AIExploration #TechEnthusiast #CreativityInAI #StableAIHub #AICommunity #InnovationHub #TechBreakthroughs #AIResearch #futuristictechnology #DeepLiveCam #FaceSwap #RealTimeFaceSwap #ImageToVideo #Liv...
SadTalker: Audio-Driven Single Image Talking Face Animation on Windows
1K views · a month ago
OOTDiffusion: The Future of Virtual Try-ons with AI Fashion
801 views · 2 months ago
ResShift: Lightning-Fast Super-Resolution & Face Restoration
307 views · 2 months ago
Master Voice Cloning with CosyVoice: Multilingual AI for Realistic Speech Generation
811 views · 2 months ago
Unlock Emotions in Talking-head Videos with EDTalk
998 views · 3 months ago
AniTalker: Lightning-Fast Talking Head Animations with Unique Facial Motion Encoding
949 views · 3 months ago
Ultimate Vocal Remover: Effortless Vocal Extraction with Deep Neural Networks
205 views · 3 months ago
Make Backgrounds Disappear: Quick and Easy Transparent Background Tool | Powered by InSPyReNet
249 views · 3 months ago
AICoverGen: Create Song Covers with RVC v2 AI Voices!
528 views · 4 months ago
EchoMimic Magic: Audio and Landmarks Bring Portraits to Life! The BEST talking head generation app.
2.6K views · 4 months ago
How to Create Perfect Lipsync Videos with LipSick
504 views · 4 months ago
FSRT: AI-Powered Next-Gen Face Reenactment Technology
466 views · 4 months ago
LivePortrait: Create Hilarious Portrait Animations Effortlessly!
3.5K views · 4 months ago
MimicMotion: Revolutionizing Human Motion Videos
2.8K views · 4 months ago
Hallo: Breakthrough in Audio-Driven Portrait Animation
1.8K views · 4 months ago

Comments

  • @SarveshJoshi-q4k
    @SarveshJoshi-q4k an hour ago

    Hey, thanks for this. While running it on my system I'm encountering an error:
    Traceback (most recent call last):
      File "/home/sr/sarvesh/videoGen/LTXVideo/inference.py", line 369, in <module>
        main()
      File "/home/sr/sarvesh/videoGen/LTXVideo/inference.py", line 231, in main
        vae = load_vae(vae_dir)
      File "/home/sr/sarvesh/videoGen/LTXVideo/inference.py", line 39, in load_vae
        vae_state_dict = safetensors.torch.load_file(vae_ckpt_path)
      File "/home/sr/anaconda3/envs/LTXVideo/lib/python3.10/site-packages/safetensors/torch.py", line 313, in load_file
        with safe_open(filename, framework="pt", device=device) as f:
    safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
    Any workaround for this???

  • @Crisisdarkness
    @Crisisdarkness 9 hours ago

    Thanks friend, I will try this, I hope it will run with 6 GB of GPU, I will tell you later if it was possible. Your channel is valuable, glad I found you

  • @slim420-e8v
    @slim420-e8v 13 hours ago

    Does it work on AMD?

  • @Parthi97
    @Parthi97 14 hours ago

    Can I run this on a GTX 1050 with 4 GB VRAM, using t5xxl_fp8_e4m3fn.safetensors as the text encoder and ltx-video-2b-v0.9.safetensors as the checkpoint?

    • @StableAIHub
      @StableAIHub 5 hours ago

      You will have to make code changes to load FP8 model.

  • @ivoxx_
    @ivoxx_ 14 hours ago

    OMFG this is fast! I wish I could run in comfy!

    • @StableAIHub
      @StableAIHub 14 hours ago

      Yes, you can run it in ComfyUI; there is a workflow on their GitHub page: github.com/Lightricks/LTX-Video

  • @bause6182
    @bause6182 15 hours ago

    Can it run on comfyui ?

    • @StableAIHub
      @StableAIHub 15 hours ago

      This is the standalone installation. You can find the ComfyUI workflow in their GitHub repo.

    • @WileyHickok-sd6ov
      @WileyHickok-sd6ov 15 hours ago

      Yes

    • @StableAIHub
      @StableAIHub 14 hours ago

      Yes, you can run it in ComfyUI; there is a workflow on their GitHub page: github.com/Lightricks/LTX-Video

  • @stable-ai
    @stable-ai 19 hours ago

    Hi there, how do I do image-to-video with this? Thanks.

    • @StableAIHub
      @StableAIHub 19 hours ago

      I have not tested whether it works. Try: 1. Copy the image into the LTXVideo folder. 2. In config.yaml, set the image file name in input_image_path: "". 3. Update the prompt and other settings. Please let me know if it works; if not, I will have a look.
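      A minimal sketch of those steps as they might look on Windows — the key name input_image_path comes from the reply above, while my_image.png and the inference command are placeholders/assumptions to be checked against the LTX-Video repo:

      :: 1. copy the image into the LTXVideo folder
      copy my_image.png LTXVideo\
      :: 2. in config.yaml set (placeholder file name):
      ::    input_image_path: "my_image.png"
      ::    prompt: "your prompt here"
      :: 3. run inference as usual for this repo (assumed entry point)
      python inference.py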

    • @StableAIHub
      @StableAIHub 16 hours ago

      Tested, Verified I2V working.

  • @Crisisdarkness
    @Crisisdarkness a day ago

    You have a new subscriber; I'm glad to know this. Which of these two tools do you consider more efficient and better in results: Allegro or Pyramid Flow?

    • @StableAIHub
      @StableAIHub a day ago

      Thank you. Just wait another day or two; a new T2V tool is working on low VRAM. I am preparing a guide.

    • @Crisisdarkness
      @Crisisdarkness a day ago

      @@StableAIHub Thanks, friend, I will keep an eye out for that, since I'm interested in tools that require few resources. I have an RTX with only 6 GB of VRAM, but I have been able to do things with some of these tools; with LivePortrait I have been able to make videos (at 12 fps), but I have not yet tried generating videos any other way.

    • @StableAIHub
      @StableAIHub 20 hours ago

      @@Crisisdarkness LTX-Video made changes so it works on 6 GB too, provided you have at least 16 GB of shared memory available. Check it out and let me know if it works for you.

    • @Crisisdarkness
      @Crisisdarkness 10 hours ago

      @@StableAIHub There is one thing I don't understand: when you refer to having 16 GB shared, how can I do that? In the "Performance" tab I see 11 GB of shared GPU memory usage; could I increase it in some way?

    • @StableAIHub
      @StableAIHub 5 hours ago

      @@Crisisdarkness If it is working, then fine: that means enough memory is available to load the models. To increase shared memory your motherboard has to support it; search on Google and see what you can find.

  • @Crisisdarkness
    @Crisisdarkness a day ago

    I found out about this great news late; I thought that to generate videos locally I had to have a monster GPU. This is great, I'll try it. It seems this tool is efficient and could improve even more.

  • @Vanced2Dua
    @Vanced2Dua 2 days ago

    thanks

  • @HlebniyBu
    @HlebniyBu 2 days ago

    Doesn't work with the “Forge” version. The only way I have found to use a VAE with the correct model is as follows: give the VAE the same name as the model checkpoint.

  • @ivo2296
    @ivo2296 3 days ago

    Hi, I'm stuck: after I run python app.py and it fetches 10 files (100%), I don't see the 127.0.0.1:port line. Please help, thanks.

  • @taobabay
    @taobabay 3 days ago

    Nice as usual, keep going friend ❤

    • @StableAIHub
      @StableAIHub 3 days ago

      Thank you. If you choose to try it, please do post your VRAM and the processing time.

  • @ArtificialDevLabs
    @ArtificialDevLabs 4 days ago

    I followed your instructions exactly but it will not work; I even installed the latest version of Anaconda. ModuleNotFoundError: No module named 'gradio' / 'spaces' / 'torchvision' / 'transformers' / 'app' / 'app.sana_pipeline'

    • @trishul1979
      @trishul1979 4 days ago

      I also faced this issue. Solved it by launching a command prompt from within the cloned repo, activating the virtual environment (conda activate sana), and then pip install -e .
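      Spelled out as console commands, that fix would look roughly like this (the environment name sana matches the reply above; run from a command prompt opened inside the cloned repo):

      :: activate the conda environment created during installation
      conda activate sana
      :: install the repo itself into the environment so modules like app.sana_pipeline resolve
      pip install -e .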

    • @StableAIHub
      @StableAIHub 4 days ago

      Please try the steps as mentioned by @trishul1979 and let me know if you are still facing issues

    • @ArtificialDevLabs
      @ArtificialDevLabs 3 days ago

      @@StableAIHub It does not work for me on Windows 11. The installation asked about a gated model (gemini 2), and now, even though I followed every step correctly with a fresh, latest Anaconda installation, I am getting this error: "ImportError: DLL load failed while importing libtriton: A dynamic link library (DLL) initialization routine failed."

    • @playgnition3071
      @playgnition3071 3 days ago

      @@StableAIHub I have the same issue on win 11 : "ImportError: DLL load failed while importing libtriton: A dynamic link library (DLL) initialization routine failed."

    • @StableAIHub
      @StableAIHub 3 days ago

      @@playgnition3071 Surprising. I have been using these wheels for a long time now and they work each time. I suggest installing from the original repository, as they have removed the Triton dependency.

  • @ArtificialDevLabs
    @ArtificialDevLabs 4 days ago

    Where is the bat file mentioned in your instructions?

    • @StableAIHub
      @StableAIHub 4 days ago

      It's in github. Once you follow the steps you will see it.

    • @ArtificialDevLabs
      @ArtificialDevLabs 4 days ago

      @@StableAIHub Thank you :)

    • @StableAIHub
      @StableAIHub 4 days ago

      @@ArtificialDevLabs u r welcome

  • @ROKKor-hs8tg
    @ROKKor-hs8tg 4 days ago

    Does not run on a Colab T4 with a 16 GB card.

    • @StableAIHub
      @StableAIHub 4 days ago

      Sorry, this tutorial is for Windows, not for Colab.

  • @ROKKor-hs8tg
    @ROKKor-hs8tg 4 days ago

    When will version 0.6b be released and how can I use version 1.6b in Colab without problems?

    • @StableAIHub
      @StableAIHub 4 days ago

      They will release it in some time. I don't know about Colab.

  • @c-weed28
    @c-weed28 5 days ago

    Please, I'm interested in your technique for running NotebookLM.

    • @StableAIHub
      @StableAIHub 5 days ago

      Sorry I only know Windows installation.

  • @pollop-o6m
    @pollop-o6m 5 days ago

    Sana-0.6B?????? on colab t4

    • @StableAIHub
      @StableAIHub 5 days ago

      The 0.6B model is not released; only 1.6B is. I only know the Windows installation. You can check here: github.com/NVlabs/Sana/issues/33

  • @ROKKor-hs8tg
    @ROKKor-hs8tg 5 days ago

    It does not run on a Colab T4. How can I run it on a Colab T4 with Sana 0.6B?

    • @StableAIHub
      @StableAIHub 5 days ago

      The 0.6B model is not released; only 1.6B is. Please refer to github.com/NVlabs/Sana/issues/33

  • @manasdas2473
    @manasdas2473 5 days ago

    Running app_sana.py gives a module-not-found error: first gradio, then spaces, then torchvision, then app.sana_pipeline. How do I fix this?

    • @jyotishuniverse
      @jyotishuniverse 5 days ago

      Did you run conda activate sana? If there is still an issue: launch a command prompt from within the cloned repo, activate the virtual environment, and run pip install -e .

  • @ROKKor-hs8tg
    @ROKKor-hs8tg 5 days ago

    It does not run on a Colab T4.

    • @StableAIHub
      @StableAIHub 5 days ago

      Please check here github.com/NVlabs/Sana/issues/33

  • @istoriesscary
    @istoriesscary 5 days ago

    Does this work with hindi?

    • @StableAIHub
      @StableAIHub 5 days ago

      You can fine-tune it for Hindi. Refer to the Discussions tab on their GitHub.

  • @trishul1979
    @trishul1979 5 days ago

    When it was released everything was Linux-specific. Thanks for making it work on Windows; I will try it tomorrow.

    • @ArtificialDevLabs
      @ArtificialDevLabs 4 days ago

      With his instructions it does not work on Windows 11.

    • @trishul1979
      @trishul1979 3 days ago

      @@ArtificialDevLabs It is working fine. Just see the other comment you made, I followed the steps and it worked fine.

    • @ArtificialDevLabs
      @ArtificialDevLabs 3 days ago

      @@trishul1979 It does not work for me on Windows 11. The installation asked about a gated model (gemini 2), and now, even though I followed every step correctly with a fresh, latest Anaconda installation, I am getting this error: "ImportError: DLL load failed while importing libtriton: A dynamic link library (DLL) initialization routine failed."

  • @abujr101
    @abujr101 5 days ago

    Getting the error "need conda init first before activating conda" when trying to run conda activate omnigen.

    • @StableAIHub
      @StableAIHub 5 days ago

      Don't use PowerShell; use the Command Prompt.
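      If you do want to stay in PowerShell, one hedged alternative (not from the video, just a common conda workaround) is to initialize conda for that shell once and then reopen it:

      :: run once, then close and reopen PowerShell
      conda init powershell
      conda activate omnigen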

  • @ZorlacSkater
    @ZorlacSkater 9 days ago

    I got errors when following your exact steps. Turned out I was missing the Visual Studio Build Tool (Make sure to select C++ for Desktop when installing them, otherwise it will fail!)

  • @ZorlacSkater
    @ZorlacSkater 9 days ago

    Where to put the model?

    • @StableAIHub
      @StableAIHub 9 days ago

      The models are automatically downloaded depending on which app you launch, you don't have to do anything. I have covered this in the video.

  • @ZorlacSkater
    @ZorlacSkater 9 days ago

    I get "you need to call conda init first", but if I do, it gives me another error.

    • @StableAIHub
      @StableAIHub 9 days ago

      For which step are you getting this error?

    • @ZorlacSkater
      @ZorlacSkater 9 days ago

      @@StableAIHub For `conda activate omnigen`

  • @radmirshayakhmetov7836
    @radmirshayakhmetov7836 10 days ago

    Thank you very much for this video!

  • @Reinalexander71
    @Reinalexander71 10 days ago

    Mine said Error code: 2, RuntimeError: couldn't install torch. It was so close to finishing, only 3 GB left 😭

    • @StableAIHub
      @StableAIHub 10 days ago

      Use Forge github.com/lllyasviel/stable-diffusion-webui-forge

  • @DeltaNovum
    @DeltaNovum 12 days ago

    Damn dude this looks awesome! I was just looking for a way to do upscaling on some older tv shows and do offline interpolation on some 4k content, as svp RIFE won't do 4k in realtime properly. I'll be trying this out this weekend hopefully, and I'll let you know how easy it was to use, and how much I enjoyed it. Thank you for making something like this <3!

  • @jyotishuniverse
    @jyotishuniverse 13 days ago

    I actually listened to the conversation three times; it doesn't sound like TTS. How come there are expressions, laughs, etc.?

  • @jyotishuniverse
    @jyotishuniverse 13 days ago

    Did you record the conversation for each speaker separately? This looks really good.

  • @trishul1979
    @trishul1979 13 days ago

    How did you generate the conversation used in the installation guide? It is really amazing. Please make it a little slower next time.

    • @jyotishuniverse
      @jyotishuniverse 13 days ago

      Yeah, the conversation is lifelike. It doesn't sound like it was recorded separately.

    • @KimiMorgam
      @KimiMorgam 13 days ago

      This is Google NotebookLM.

    • @trishul1979
      @trishul1979 13 days ago

      @@KimiMorgam Please share the link

    • @KimiMorgam
      @KimiMorgam 12 days ago

      @@trishul1979 google for notebooklm google?

  • @nguyenhongduong2906
    @nguyenhongduong2906 14 days ago

    Awesome, thank you so much, this tutorial is so convenient and easy!

  • @content1
    @content1 14 days ago

    Hi, thanks for the tutorial. I don't have the webui_en file in the folder; where did you get it from?

    • @StableAIHub
      @StableAIHub 14 days ago

      It's in the video description. "Additional files"

    • @content1
      @content1 14 days ago

      @StableAIHub Thank you, I managed to install everything with some GPT help. By the way, after generating the audio, which is great, I press download but it creates an empty .wav file. Any ideas?

    • @StableAIHub
      @StableAIHub 14 days ago

      @@content1 For functional issues please post here github.com/FunAudioLLM/CosyVoice/issues

  • @jyotishuniverse
    @jyotishuniverse 16 days ago

    Very good tool.

  • @indecomsh
    @indecomsh 17 days ago

    Awesome.

  • @indecomsh
    @indecomsh 17 days ago

    Working fine on 12 GB VRAM and very fast too. Appreciate the guide.

  • @indecomsh
    @indecomsh 17 days ago

    Working fine. Appreciate the easy tutorial.

  • @trishul1979
    @trishul1979 18 days ago

    Thank you. Very easy guide.

  • @baseerfarooqui168
    @baseerfarooqui168 18 days ago

    What tool did you use for the lip-sync generation?

    • @StableAIHub
      @StableAIHub 18 days ago

      ruclips.net/video/iVy2bXPQNKY/видео.html

  • @ChikadorangFrog
    @ChikadorangFrog 18 days ago

    Something better than LivePortrait is coming: X-Portrait 2, ByteDance's AI lip-sync tool.

    • @StableAIHub
      @StableAIHub 18 days ago

      I failed to install X-Portrait and looking at the comments the output was very bad. Let's hope v2 is better. Please let me know when it's released.

  • @behrampatel4872
    @behrampatel4872 20 days ago

    As your videos progress I can see how your AI mascot lady gets better and better with each revision. The lips are moving more naturally in this one, and I'm sure the expressions will get more nuanced soon. So within a year you will automatically have an archive of footage showing how the mascot went from primitive to a fully fleshed-out AI human clone. Cheers. PS: Can you do a video on the recent Facebook Sapiens? b

    • @StableAIHub
      @StableAIHub 19 days ago

      I know, AI is progressing very fast. EchoMimic is the best in terms of skin textures and neck movement; I figured out that changing some settings produces even better results and did a video on that. Let me check github.com/facebookresearch/sapiens

  • @Michael-b7z8y
    @Michael-b7z8y 20 days ago

    use pinokio

  • @boykogeorgiev5092
    @boykogeorgiev5092 21 days ago

    Has anybody faced the "CUDA out of memory" problem, and have you set this environment variable to try to fix it: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512? Could the model be optimized somehow to work on 8 GB VRAM?
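    For reference, setting that variable on Windows before launching the app would look like this — a workaround sketch only, with no guarantee the model then fits in 8 GB:

    :: applies to the current Command Prompt session only
    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
    :: or persist it for future sessions
    setx PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:512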

    • @StableAIHub
      @StableAIHub 20 days ago

      The project has not been updated for a long time. Try OOTDiffusion; the video is posted on this channel.

  • @trishul1979
    @trishul1979 22 days ago

    Thank you

  • @ChikadorangFrog
    @ChikadorangFrog 23 days ago

    what image generation AI did you use at 0:43? It is really nice. The woman looks realistic

    • @StableAIHub
      @StableAIHub 22 days ago

      I don't remember; I'm using this image in all my videos. It might be ChilloutMix in Auto1111.

  • @MAHATHSOMINA
    @MAHATHSOMINA 23 days ago

    How do I install it with Python 3.9?

    • @StableAIHub
      @StableAIHub 23 days ago

      Have you tested that it works with Python 3.9? You can install Miniconda and then use, at Step 3: conda create -n ootdiffusion python==3.10 and, at Step 4: conda activate ootdiffusion. Everything else remains the same.
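      The same reply as plain commands (environment name and Python version exactly as given above; the remaining steps follow the video's guide unchanged):

      :: step 3 — create the environment with Python 3.10
      conda create -n ootdiffusion python==3.10
      :: step 4 — activate it, then continue with the rest of the guide
      conda activate ootdiffusion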

  • @ROKKor-hs8tg
    @ROKKor-hs8tg 23 days ago

    Can you provide a video on how to run inference with Nemotron 70B (Unsloth 4-bit) on 8 GB VRAM or on a Colab T4?

    • @StableAIHub
      @StableAIHub 22 days ago

      What is Nemotron? I have never heard of it. What is it used for?

    • @Gamatoto2038
      @Gamatoto2038 22 days ago

      @@StableAIHub Nemotron is a massive large language model developed by NVIDIA, and you can run it locally if you have enough RAM.

    • @StableAIHub
      @StableAIHub 22 days ago

      @@Gamatoto2038 I have not come across it so far, as I am focusing on SD only. I will see if there is any case study I can do around it.