AniPortrait - AI Audio-Driven Synthesis of Portrait Animations - Local Install!
- Published: 29 Mar 2024
- Using Python & AI to animate images is always something I find fun, and there's a new kid on the block - AniPortrait! Using the power of Stable Diffusion, this repo can create animated avatars with fairly minimal effort. There have been quite a few research advances over the years, and this brings us a step closer. Want to install some of the latest, cutting-edge AI research at home? Well now you can, as this will run on consumer hardware too 😃 This step-by-step tutorial will guide you through getting it running locally, meaning you too can be on the cutting edge! Crack that Anaconda terminal open once again to begin your journey…
Want to support the channel?
/ nerdyrodent
AniPortrait - github.com/Zejun-Yang/AniPort...
AniPortrait paper - arxiv.org/abs/2403.17694
Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
RVC WebUI - • RVC Web UI - FREE, Ope...
Ok, so hear me out.
This with SD3 models + Tavern AI text to speech with Llama 3 or Grok derived models.
2024 is shaping up to be a truly fun year.
It'd likely take a NASA quantum supercomputer to run Stable Diffusion image prompts, a moderate LLM, RVC/Applio, and then this, all at the same time. But yes, yes please.
@@jackrabbit1704 Not really. Anything above an RTX 5070 should be able to handle it comfortably, considering the Blackwell architecture and the improvements in AI efficiency we're seeing.
Here comes Peter cottontail. Hopping down the bunny trail. Hippity hoppity nerdy’s on his way! 🐭🐇
😀
@@NerdyRodent 🤘😉💕
@@NerdyRodent My suggestion is to try experimenting with Fourier transforms to eliminate the flickering in Stable Diffusion videos at the frequency level. Do you understand what I mean?
This is so cool!
Can’t wait for a few more papers down the line! 😉
I wonder if we'll ever get something like LucidSonicDreams but with SD, that would be incredible!
Yes!
Maybe one day ☹️
Fix those eyes to look directly into the camera and this is great!
there is software for that too
@@leavemealoneandgoaway What is that software? descript?
+1 points.
How do you use "InstructPix2Pix" & "SDXS" in ComfyUI?
Now this tool can create natural two-handed and mouth movements, with realistic performances.
How long can the output video be - 5, 10, or 30 minutes if we want?
Each video can be as long as you have the hardware for!
How's the inference time in audio-driven mode? Is it near real time?
Not even close 😉
@@NerdyRodent Thanks. Do you know of any talking-head tool with near-real-time inference? Something like D-ID real-time avatars
Where you can do your own custom stuff easily, locally and for free… not that I can think of 🫤
Strabismus attack
And is there any tool that can clone and train voices? Thanks
Do you mean like the example in this video?
Hi
We want a speaking rodent in the corner of your videos, ahaha
Ikr!
creepy
Now this is actually very cool. Must be great for people that want to do VTuber content but don't want to go through the whole rigmarole of setting one up.
Jesus Christ loves you 💙
He has a plan and a purpose for your life, plans to prosper you and not to harm you, plans to give you hope and a future.
Thank you for the information. I would like to ask if you know of any Colab or Kaggle notebook options for AniPortrait.