Honestly I would be perfectly happy hearing you speak Japanese and simply utilizing the captions for English subtitles. It would make me feel a more authentic connection to the content creator even if it's in another language. Japanese is quite nice to listen to even if you don't understand it.
Thanks for your opinion. From next time I'm gonna try making videos with only English subtitles instead of AI dubbing.
@@ai_lab_tutorial I'm your 1000th sub man, congrats. Also yeah, speak Japanese, it sounds cool :)
I love to hear Japanese with just captions
I, on the other hand, appreciate the English voice because I'm clicking around in another tab while listening to this!
I think they were targeting a wider audience. But again, subtitles would be good as well, without the extra effort. Thanks for the video!
DUDE! You literally skipped the most important, non-trivial point/feature at 4:08 to 4:10. That is one of the silliest things I've seen in the last several weeks) Thank you!
Imagine that you are a deep-diving instructor. You've given an outstanding lecture on the varieties of the seas... but you've freaking forgotten to explain how to turn on the oxygen))))
@@alkomedvedev Thanks for your comments. Are you talking about the Movie Editor function?
@@alkomedvedev so can you explain it please? 😅
I didn't notice any difference between the faces :D
He could have used drastically different faces, or to him they were drastic differences haha
Dude, the thing about Asians looking similar is true to me. I always thought it was a joke, but now I watch Asian content much more and ALL THE TIME I get confused. "Why is he doing that?", then I realize it's another new character wearing something similar. I personally like historical shows, and in historical shows they have officials and the like who wear the exact same outfit, and some of them really confuse me because two guys look so similar LOL
@@alexs2195 East Asia* 😂😂😂
Same shit here 😂😂😂
@@alexs2195 White dudes look similar to East Asians too*... vice versa
Help me. I did install it, but when I click Generate it doesn't even start loading or show any results or error. It's just stuck, and I can't even skip or interrupt.
same problem. any solution?
Plz let me know what error is showing on your terminal
@@ai_lab_tutorial I had the same issue as him, and there is no error or anything in the cmd, it just does nothing. I think it is likely running out of memory. I just used a different method with batch img2img, which works just the same but has a manual aspect to it (there's a rough frame-splitting/stitching sketch just below this thread).
@@nsl1117 Same problem on the PC version, not using Google Colab, RTX 3090. The tutorial doesn't work: no error or anything in the cmd, just stuck... sad... img2img works well though.
Same here, no reaction on Generate button.
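For the batch img2img workaround mentioned above (splitting the clip into frames, running them through the img2img Batch tab, then stitching the results back together), here is a minimal sketch of the manual part, assuming Python with OpenCV installed. The file names, folders, and fps handling are placeholders, not anything from the tutorial.

import cv2
import os

def extract_frames(video_path, out_dir):
    # dump every frame of the clip as a numbered PNG for the Batch tab
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{i:05d}.png"), frame)
        i += 1
    cap.release()
    return fps

def frames_to_video(frames_dir, out_path, fps):
    # stitch the processed frames back into an mp4 at the original frame rate
    names = sorted(f for f in os.listdir(frames_dir) if f.endswith(".png"))
    h, w = cv2.imread(os.path.join(frames_dir, names[0])).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for name in names:
        writer.write(cv2.imread(os.path.join(frames_dir, name)))
    writer.release()

# fps = extract_frames("dance.mp4", "frames_in")
# ...run frames_in through the img2img Batch tab with ReActor enabled, output to frames_out...
# frames_to_video("frames_out", "result.mp4", fps)

The frame rate is read from the source clip so the reassembled video keeps the original timing.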
Nice video, but what about videos with many faces? Any way to select the one you need to change?
Love ur side profile!
Thanks!
Ok!
After installing the extension and restarting SD, I don't have the mov2mov tab
Same. It might be broken now.
@@goatgoat0 Naa, I fixed it somehow
How? It doesn't work on new version of Stable Diffusion.
Can you do a video on installing an old version of A1111? The new version doesn't allow mov2mov.
The Mov2Mov tab doesn't appear on the new version of Stable Diffusion.
Why is the mov2mov extension already installed but the tab doesn't appear at the top of the main page?
same problem
@@oscarzap69 Use the old Stable Diffusion version 1.7.0
Hey is it still working? or do you have a better update on this?
I did everything perfectly but I can't see the mov2mov tab
Please show me how to find the saved file after it's completed, because when the video finished it didn't show up and I can't find it in Google Drive.
I already showed the download directory. Plz check 6:00.
If you don't see anything, it might have had some trouble making the video.
Cool, this is what I was looking for. AI is still so nerdy and not mainstream, it was hard to find info.
Not working with the latest update. Any guide on whether we can use an old version to make this possible?
I used a fork of it, but now that broke too. I had it working before, but the new update broke it again. Unable to fix it as of now, still looking for a solution.
Please give the link to download the video of the girl dancing and the image so I can test it out.
So sorry, I couldn't find this source video.
I want to swap the face, hair, and clothes in a video. Is it possible? How?
I'm running into an error here, cmd: PIL.UnidentifiedImageError: cannot identify image file
Is mov2mov actually working for you? I installed version 1.5 of auto1111 and it still doesn't work… right after installing, cmd shows errors in some .py files; it looks like it fails to read them. I'm about to give up. Do you know the best alternative to mov2mov?
It's possible that you haven't installed something else, some libraries or Python on C:
Thanks for the tutorial. How do you use the saved frames of the video for another face swap? Or do you always have to run the process again?
Could you please share a link to a Google Colab for the Stable Diffusion WebUI? I have an M1 and it is dramatically slow and has errors from time to time.
Bro, I downloaded the mov2mov plugin in my web UI, but it doesn't work. I click Generate and SD does nothing. Do you know why?
If this is a problem with A1111 v1.8, then you can try using an older A1111 version, like 1.7 or 1.6.1.
I also found a rough fix:
in folder "javascript", edit the file "ui.js"
in line 125, change:
gradioApp().getElementById(tabname + '_interrupting').style.display = showInterrupting ? "block" : "none";
To:
try{ gradioApp().getElementById(tabname + '_interrupting').style.display = showInterrupting ? 'block' : 'none'; } catch(e){}
Can we use this to change the background to a beautiful place?
This method can't change the background.
Does mov2mov still work? Why does it not load on my SD even though it is installed?
The link for Stable Diffusion, pls.
I never expected this method to take forever to render an 18-second 1080p clip. In FaceFusion it takes 3 mins tho.
It takes a long time to render because it uses Stable Diffusion. If you wanna use ReActor much faster, download it to local.
@@ai_lab_tutorial You mean because you're using Colab, not a local instance of Stable Diffusion with ReActor.
The resolution was too small to detect any notable change on the face.
Yeah, but I could notice the face difference, as an Asian haha
Please make a video on how to train a LoRA from image(s) and make a deepfake from that LoRA.
Thanks for sharing. Can you tell me what kind of GPU you are using?
I launched SD on Google Colab Pro, using a T4 GPU.
I just don't like how it only allows you to use 1 sample image; it comes out so bad. I wish you could use a LoRA model.
I'm gonna try using a LoRA model and show how it works someday.
Woow! Amazing!
Thx!!
Awww!😍😍😍😍 thank you for appreciating!!!!!!!!!!!
Is this better than deepfacelab?
The mov2mov tab doesn't appear, it doesn't work
How to automatically translate your voice to English in a video?
can u share your sd colab and how to use it? thx for the video
Plz check this video. I showed how to launch Stable Diffusion and the SD Colab.
ruclips.net/video/TVi2NhgAWRY/видео.html
@@ai_lab_tutorial thx a lot my friend!!
Does it work with Forge..?!
Can you add language translation to your videos? I think it will help everyone understand your video tutorials 😊
Thank you so much for your comment. You mean it would be helpful to make subtitles only, without AI dubbing?
Make a detailed video on how to install Stable Diffusion on PC
Okay mate! I've now uploaded a video on how to install Stable Diffusion locally. Check it out👀
ruclips.net/video/wzBjGXTtSrs/видео.htmlsi=2X3nmX1rLx3QFe4P&t=291
Does this work in Stable Diffusion XL?
I haven't tried this in SDXL... Plz try it.
Dear sir, please provide an email to talk about this tool.
Thx for the comment, but I only accept email for commercial use.
thx
Is it possible to change the hair colour with that? Also, amazing work :)
Thx for your video!! Why not do a video that explains how to faceswap photos with very high quality?
Go to img2img, then enable ReActor, then put in your photo, set denoise to 0 (if you want to change the face only), then hit Generate.
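If you would rather script that img2img + denoise 0 recipe than click through the UI, here is a rough sketch against the webui's built-in API, assuming it was started with the --api flag and is reachable on the default port. The file names are placeholders, and the ReActor settings are deliberately left out because their exact argument list depends on the extension version; as written this only round-trips the photo, the swap happens once the ReActor block is filled in.

import base64
import requests

# read the target photo and encode it for the API
with open("target_photo.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "",
    "denoising_strength": 0,  # 0 = don't let the sampler repaint the photo
    # ReActor would be enabled for this request under "alwayson_scripts";
    # copy its argument list from the ReActor README rather than guessing it here.
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("swapped.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))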
Wait, so you put a face over an AI-generated model? The one that's dancing isn't a real person at all?
Deepfakes are a scary thing these days.
Which GPU do you have? Can my 1050 Ti handle such a load of work?
Thank you for the tutorial, but ReActor is not giving realistic face results. I even tried ReActor on img2img. The face is very different from the original and is not realistic. Is there something I can do?
It doesn't work anymore
"i know what kinda man you are"
Awesome tutorial!
I run an RTX 4090 on Automatic1111 and it's like butter, fast and smooth... BUT... this Mov2Mov and ReActor nonsense took 25 mins to create a 9-second video, just to find out at the end it was just like his... THE SAME FREAKIN GIRL... what a waste of time. Stick this video and your Mov2Mov up ya trumpet
I can't see any differences kiddo.
is roop better?
Yes. ReActor is an improved version of roop.
I'll subscribe to you as the 46th subscriber @@ai_lab_tutorial
❤Hope there is a global AI law soon mandating that all public social platforms implement AI solutions to automatically remove indecent deepfake content.
Those who insist on releasing their blocked, censored deepfake videos
can request permission, with full understanding of heavy fines and/or the risk of jail terms.
Don't waste taxpayer money hiring people to do filtering and investigating, adding unnecessary wasteful processes and workload, or asking teachers to help with monitoring.
I want to make a flipbook animation video.
Why are you using old software.. you don't have ComfyUI??? Unbelievable...
ComfyUI is just a UI lol
you set the modifiers to 0, of course it will look like the original. you didnt want to show the abomination that comes when you set those sliders to full, eh?
Colab ?
Yeah
Wtf AI voice is this 😂😂😂
wow
😍😍😍
doesn't work
it did nothing...
This plugin no longer works and the author has stopped supporting it.
Deepfake video...... I think that was the title of his own creation here, and who was that talking lol
He used an AI voice, I guess.
RIP low end PCs
It's better to use Google Colab if you don't have a high-end PC.
Whoa, what's this, man?
5/5
Thanks!
your face is ai
Are you real
Deepfake channels are not monetized, so don't make such videos
All Chinese, Japanese, Korean, Nepali look the same 😂 If I missed any, let me know...