Make Art Talk - Wav2Lip Lip Sync Deepfake Google Collab Tutorial

  • Published: 12 Jan 2025

Comments • 649

  • @WhatMakeArt
    @WhatMakeArt  2 года назад +6

    Trippy Wav2Lip demo of paintings reading Alice in Wonderland - ruclips.net/video/ca9rcQYTIS0/видео.html
    Become a member
    ruclips.net/channel/UCmGXH-jy0o2CuhqtpxbaQgAjoin
    👍 Support on Patreon
    www.patreon.com/WhatMakeArt
    Tip Jar to Support the Channel
    paypal.me/pxqx?country.x=US&locale.x=en_US
    www.venmo.com/JimmyKuehnle
    Paypal: @pxqx
    Venmo: @jimmykuehnle

    • @halswift
      @halswift 2 года назад +1

      Wow! Great tutorial! I tried running it and it looks like the Google Colab gets tripped up on dependencies. I am getting a ton of red errors near the end right before the "Let's Try It". Even tried their updated version which wasn't any better.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      I would check back with the original authors of the Colab and their GitHub page; they have links to some drag and drop examples

    • @Artificial_
      @Artificial_ 2 года назад +1

      Wow this is Mind Blowing 🤯

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      🦜⛄👍

  • @BetterVersionByCreativeInsight
    @BetterVersionByCreativeInsight 3 года назад +38

    Oh my goodness. I just watched FIVE more different videos explaining Wav2lip lip-sync that are twice as long as yours and much more confusing. Your video is definitely the best video describing this procedure in a succinct, precise, concise and simple way. I am so thankful that I stumbled on your video first as it is definitely the clearest and simplest explanation that I have seen given. You really should have more subscribers as your explanations (as noted in comments) are the best.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Thanks for the feedback, hopefully you made some fun videos

    • @nehapatel6332
      @nehapatel6332 2 года назад +2

      @@WhatMakeArt I wish there was an even easier way to do all this...like a drag and drop.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      There is a drag and drop way. If you look back at the original GitHub repository, someone made a drag and drop version

    • @nehapatel6332
      @nehapatel6332 2 года назад

      @@WhatMakeArt what is it titled?

  • @govblokk
    @govblokk 3 года назад +18

    This video is so good, you deserve more subscribers

  • @greenoicmusic3848
    @greenoicmusic3848 4 года назад +52

    Mona Lisa looks scarier than ever!

  • @bomxacalaka2033
    @bomxacalaka2033 Год назад +1

    jeez, took me 30 mins getting all the updated/compatible libs, but it worked in the end, cheers.

    • @WhatMakeArt
      @WhatMakeArt  Год назад +1

      Yes, dependency hell can be frustrating, glad you got it to work

  • @wildh4rt
    @wildh4rt 4 года назад +18

    Very informative! I'm glad I did stick around to learn what's this about.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +2

      Glad you liked it, the researchers sure made an interesting lip sync algorithm

  • @Whacky1984
    @Whacky1984 Год назад +1

    "Using cuda for inference.
    Reading video frames...
    Number of frames available for inference: 1562
    (80, 4161)
    Length of mel chunks: 1556
    0% 0/13 [00:00

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Try with the sample MP4 and wav linked in the description to rule out a bad video file

  • @sheridanpickle429
    @sheridanpickle429 3 года назад +4

    even though u were doing it on mac os and im on windows 10 it was still the exact same and easy to follow unlike some tutorials which are completely different so gg for that :)

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Thanks for the feedback, good point, I should probably boot into Windows more often and record tutorials in Windows so people can see the different OS user interfaces

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You need to convert a single PNG image to an MP4 using a video editor. You can just stretch out the PNG for as long as you want. There are video converters online as well
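
      If FFmpeg is available (it is preinstalled on Colab, so this can run in a notebook cell, or locally without the leading "!"), a still image can also be looped into a short MP4 without a video editor. A minimal sketch, assuming the image is named face.png and you want a 10-second clip at 25 fps (file names and duration are placeholders):

        # loop one still image into a 10 s H.264 video; the scale filter keeps dimensions even for libx264
        !ffmpeg -loop 1 -i face.png -t 10 -r 25 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -pix_fmt yuv420p input_vid.mp4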

    • @sheridanpickle429
      @sheridanpickle429 3 года назад +1

      @@WhatMakeArt ive been expirmenting with a few different files i have on my computer and ive come across an odd problem. once the process was complete i looked in the results folder and saw tht there wasnt any file there. i looked back over the steps but couldnt find any error codes. do you have any idea what went wrong?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Remember, you have to download the output video from the browser

  • @yaseen1478
    @yaseen1478 4 года назад +3

    I wonder what's the point of downloading the model locally at 2:04 if everything is done on the cloud tho?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +1

      You need to have the model in your Google Drive. If everyone linked to the same Google Drive version of the model then it might get timed out.

  • @davidatkinson7354
    @davidatkinson7354 Год назад +4

    Hey! This used to work but I keep getting this fail TypeError: mel() takes 0 positional arguments but 2 positional arguments (and 3 keyword-only arguments) were given
    Any reason known why?

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      That is usually a comma or semicolon typo; reload the Colab and do a test with the sample files

    • @keerthanareshaboina
      @keerthanareshaboina 2 месяца назад

      hey, hii did you resolve it? please help if you did
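
      For readers still hitting this mel() TypeError: it often comes from the installed librosa version rather than a typo, since librosa.filters.mel() became keyword-only in librosa 0.10 while the Wav2Lip code calls it positionally. A hedged sketch of two workarounds (the exact version pin and the hp.* names are assumptions to check against the repo's audio.py):

        # Option 1: in a Colab cell, install an older librosa before running inference
        !pip install librosa==0.9.1

        # Option 2: edit the mel call in Wav2Lip's audio.py (which already imports librosa)
        # to pass keyword arguments, roughly like this; hp.* comes from the repo's hparams
        mel_basis = librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft,
                                        n_mels=hp.num_mels, fmin=hp.fmin, fmax=hp.fmax)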

  • @EnberFeres
    @EnberFeres 2 года назад +1

    It didn't give me the link that you show at 2:41, what do I do? Also how do I stop them from having access to my drive?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      The way Google mounts the drives has changed a bit; once you close out of the Colab session, no access remains

  • @BetterVersionByCreativeInsight
    @BetterVersionByCreativeInsight 3 года назад +5

    Although this was a very helpful video, I have spent over 30 hours trying to get it to work unsuccessfully. Most of my mistakes included things you mentioned, such as leaving no spaces in the labels for the audio and video. However, there are other problems that I could not easily solve even after watching your video several times in slow motion. These were:
    1. When I went to paste my Google code into the box to authorize me using it, it would not allow me to paste it. When I used the keys Control and V it caused an error until I realized I had to press them instantly together for a split second.
    2. As I could not get it to work, I downloaded Google Drive thinking that was my problem. I then discovered that Google Drive on my Lenovo computer does not make any distinction between small "l" and capital "L", so it indicated they were the same file. I had to uninstall Google Drive from my computer so that I had the correct files in the cloud.
    3. I observed that you moved your files "Kennedy" and "ai" over into the Wav2Lip folder, but then later I realized you "had to change" the file names as you inputted them. I thought this was a later option and realized I needed to change my file names to what they are in the program, "input_audio.wav" and "input_vid.mp4".
    In spite of all my eventual corrections, I am still not provided a result that I can download even though it appears to have processed it all the way through. So I will keep comparing your video of the computer language you show to figure out where I went wrong. I only mention this stuff in case a complete newbie like me who knows nothing about computer code has similar problems. Thanks.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +2

      When using Control and V to paste, you can press and hold Control as long as you want and then press V; you don't have to press them at the same instant. The same is true for using Control plus C to copy.
      Yes, case-insensitive file systems can cause problems when you need something to work on a case-sensitive system.
      The file naming could work better. If you go to the GitHub site, there are some other versions of the Colab notebook that other researchers have made. They may be a bit more user friendly. I recommend going to the original GitHub site linked in the description and exploring.
      Appreciate the feedback and hopefully you get a result that works.
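
      For reference, the cell where those file names matter is the final inference step. In the notebook shown in the video it looks roughly like the line below; exact paths differ between notebook revisions, so treat this as a sketch and match whatever your copy of the Colab already contains:

        # run lip sync on the renamed files sitting in sample_data (paths are illustrative)
        !cd Wav2Lip && python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face "../sample_data/input_vid.mp4" --audio "../sample_data/input_audio.wav"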

    • @RonalRomeroVergel
      @RonalRomeroVergel Год назад

      brother has u found any succesfull solution??? because i am struggling with the same mistakes i have been here for 3 days and no resultsss

    • @RonalRomeroVergel
      @RonalRomeroVergel Год назад

      can u try to repeat the tutorial bruh.... maybe something was wrong... plss@@WhatMakeArt

  • @TheStargraves
    @TheStargraves Год назад +2

    Thanks for this but my google drive is not giving me a code. The pop up just disappears. Can I pull it from anywhere else?

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      All the code is available on the original creators' GitHub page linked in the description

  • @ZIOJONES
    @ZIOJONES 2 года назад +3

    It seems it doesn't work anymore. I used this for years and I never had any problem. Now it doesn't work. It always says "cannot stat '/content/gdrive/MyDrive/Wav2Lip/wav2lip_gan.pth': No such file or directory" in STEP 3 of "Get the code and models" section.
    It seems something changed in the source code. It says there's no "wav2lip_gan.pth" in "Wav2Lip" folder but it should be in "Wav2lip" (with lower case L) folder.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      That's strange, maybe there's a simple typo that's being overlooked or the dependencies changed
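
      For anyone comparing against their own notebook: the step that fails here is the cell that copies the pretrained model from Drive into the repo's checkpoints folder. A rough sketch of that cell, assuming Drive is mounted at /content/gdrive and the model was uploaded to a Drive folder literally named Wav2Lip (the capitalization in the path must match your folder exactly):

        # confirm the model file is where the copy command expects it, then copy it
        !ls -l "/content/gdrive/MyDrive/Wav2Lip/"
        !cp -ri "/content/gdrive/MyDrive/Wav2Lip/wav2lip_gan.pth" /content/Wav2Lip/checkpoints/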

    • @ZIOJONES
      @ZIOJONES 2 года назад +1

      @@WhatMakeArt Are you the author of Wav2Lip or can you contact him? I need this to work :(

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      Not the author at all; their contact information is on the GitHub page linked in the description. On that GitHub page there are also some online GUIs that are drag and drop and work fine; I recommend trying those

    • @ZIOJONES
      @ZIOJONES 2 года назад +1

      @@WhatMakeArt Ok, Thank you so much.

  • @davidatkinson7354
    @davidatkinson7354 3 года назад +1

    Aw i keep making a mistake and don't know what I'm doing wrong! on the "now let's try" part!
    Using cuda for inference.
    Reading video frames...
    Number of frames available for inference: 1454
    (80, 4653)
    Length of mel chunks: 1451
    0% 0/12 [00:00

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      The video could be too high of resolution, or it could be too long, or there might not be a face in every single frame of the video
      Try it with the sample video linked in the description to see if there is something wrong with the setup to eliminate variables
      It should work with a simple wav file and an MP4 with a face detectable in each frame
      Sometimes a video editor will export a single frame at the end of the video and that makes it not work

    • @davidatkinson7354
      @davidatkinson7354 3 года назад +1

      @@WhatMakeArt oh man i got it! This is fantastic, so interesting. Amazing feature.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Woohoo

  • @dino-flix3644
    @dino-flix3644 3 года назад +1

    At 2:37, when you clicked it, how much time did it take to load the whole process where we have to type Y? Mine just keeps loading, please reply fast

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      It varies based on all the GPU use and which type of GPU you're assigned. If it takes a long time I recommend starting a new browser session and trying again

    • @dino-flix3644
      @dino-flix3644 3 года назад +1

      @@WhatMakeArt I did it I tried it again and again opening new page pls help me with this I need it

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Just like when your internet router malfunctions, the best plan is to restart your computer, try a new browser, log back into your Google account or a different Google account, and try again
      Frustrating sometimes...

    • @dino-flix3644
      @dino-flix3644 3 года назад +1

      @@WhatMakeArt ya really frustrating you explained very well but I am suffering with this problem but will try again

  • @studywhatever4063
    @studywhatever4063 Год назад +1

    Hi. Thanks so much for the tutorial. I'm getting stuck at 2:41 as there is no link there to click. Has this process changed since then??

    • @WhatMakeArt
      @WhatMakeArt  Год назад +1

      The process changes a bit over time; the best thing to do is check the original researchers' GitHub page that is linked in the description

    • @studywhatever4063
      @studywhatever4063 Год назад +1

      @@WhatMakeArt thanks!

  • @HowTo-jn9nt
    @HowTo-jn9nt 2 года назад +2

    The code worked once and it wouldn't work anymore in the "Now lets try!" section it keeps saying "Using cuda for inference.Reading video frames...^C" even tho every frame has a face and the names are correct please help thanks

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      That is frustrating when it doesn't work, did you try with the sample video and audio linked in the description?

    • @HowTo-jn9nt
      @HowTo-jn9nt 2 года назад +1

      @@WhatMakeArt thanks for the reply. I think i figured it out. I didn't have enough ram, but the thing is the first time i did it i used a 4 min video and it works

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      It kind of depends on how much RAM Google lets you use for that session, it can vary depending on how much your account has used the GPUs that month or recently, glad it worked

    • @HowTo-jn9nt
      @HowTo-jn9nt 2 года назад +2

      @@WhatMakeArt thanks!! (subscribed and liked)

  • @CampfireCrucifix
    @CampfireCrucifix Год назад +2

    For everyone that's getting the ^C error at the end of the output, here is what I did to fix the issue. I capped the length at 30 seconds and dropped the framerate from 60 to 30. My resolution was still 1080p. After that it actually started rendering.
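
    If you want to apply the same fix from the command line, a single FFmpeg pass (runnable in a Colab cell, or locally without the leading "!") can cap the length and drop the frame rate; file names and numbers here are placeholders:

      # trim to the first 30 seconds and re-encode at 30 fps; audio is dropped since the wav file drives the sync
      !ffmpeg -i input_vid.mp4 -t 30 -r 30 -c:v libx264 -pix_fmt yuv420p -an input_vid_30fps.mp4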

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Thanks for sharing those tips

    • @AdarshSingh-rm6er
      @AdarshSingh-rm6er Год назад +1

      I still can't get the output and get ^C error

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Try the example files linked in the description

  • @Jisoosthumbs
    @Jisoosthumbs 4 года назад +1

    how do you make the text reappear in the cell? I tried to fill in my google drive code but as soon as I ran that cell the text and bar disappeared within the cell

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +1

      You might need to make a new session in Google Colab. Go to Manage Sessions in the upper right and terminate all sessions. Close your browser tab. Then open up a new Colab page and you should be able to edit the text as you need to.

    • @Jisoosthumbs
      @Jisoosthumbs 4 года назад +1

      @@WhatMakeArt thanks :)

  • @theartifact1193
    @theartifact1193 3 года назад +10

    Don't you just love it when technology makes things so easy to use?

  • @WickJKR
    @WickJKR 4 года назад +2

    I really appreciate your video but was wondering if you could potentially inform me or help me with an issue I'm having. Is it possible to input 1080p footage and have the program work? Everytime I put in 1080 footage, the "Now lets try!" stops really early, giving me no result. It just spits out the usual stuff, and then "^C". I've done a couple of tests, where I put in 720p footage, it works, then I put in the same exact footage, just scaled to 1080p, and it no longer works. If you have found it to work with 1080p footage, please do tell me your secrets friend, should I be exporting the 1080p footage in some sort of way that the program can read it better? For reference I'm cutting and rendering the footage in Adobe Premiere, but I also have Handbrake to do any modifications if you are familar with that program. I'd appreciate just about any help you could have, I've been working on this on and off for about a week now and can't seem to crack it. Thanks.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Try running it through Media Encoder to export an H.264
      You can also try encoding it with FFmpeg
      It may be running out of available RAM at the larger size. Since you have it working at 720p you know there is a face in each frame, so that shouldn't be the problem.
      Try a shorter version of the video to avoid any out-of-memory problems.
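
      As a concrete example of that re-encode, an FFmpeg pass that scales 1080p footage down to 720p H.264 looks roughly like this (file names are placeholders; scale=-2:720 keeps the aspect ratio with an even width):

        !ffmpeg -i input_1080p.mp4 -vf scale=-2:720 -c:v libx264 -pix_fmt yuv420p -an input_720p.mp4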

    • @sasatoan9092
      @sasatoan9092 3 года назад +1

      @@WhatMakeArt can you explain how to do that? I'm always have bad resolution every result of video

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      The algorithm was trained on lower res footage so you might only get it to work with low res footage unless you retrain the model with the info on GitHub

  • @aretherelakerstotalkabout8911
    @aretherelakerstotalkabout8911 4 года назад +6

    when i go to download, no save as box pops up. Basically making the following steps incapable of doing

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      You just need to download the file to your computer to be able to put it in your Google Drive. Download it how you would save a video or any file from a web page. It could be under "Save page as" or just "Save". You could also try to download via Google Drive.

  • @jay-tbl
    @jay-tbl 3 года назад +4

    on the arrow at 3:11 that says !cp -ri it loads forever even though i have the correct file in that folder
    edit: i just started over and it worked huh

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Sometimes just starting over is all it takes just like unplugging your router

  • @BattleBladeWarrior
    @BattleBladeWarrior Год назад +1

    Holy cow dude!
    I think The Mona Lisa took the win here.
    It seems characters that are farther away from the camera, look a lot more convincing with deepfake technology. Characters closer to the screen, or with larger mouths tend to have more artifacting (if that's what its called?) For example, the presidents chin kept glitching out, and sometimes the lips seemed to fuze together for a frame or two. But overall this is amazing stuff.
    And the more this is done, I assume the more refined and better the software will get at doing this.

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      This is a much older technique now, new methods are much better and convincing

    • @harden1362
      @harden1362 7 месяцев назад

      @@WhatMakeArt where are the new techniques?

  • @peterelkhoury4624
    @peterelkhoury4624 Год назад +3

    Please how to fix this:
    TypeError: mel() takes 0 positional arguments but 2 positional arguments (and 3 keyword-only arguments) were given

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Likely missing a comma or a semicolon where you changed the code.
      Best thing to do is reload the page and then try it with the example audio and video files linked in the description to eliminate variables

    • @keerthanareshaboina
      @keerthanareshaboina 2 месяца назад

      hey hii did you solve it, please help me if you did

  • @brentmarquez4157
    @brentmarquez4157 Год назад +2

    Is there an updated version of how to do this today? Like a lot of other people trying this in the collab notebook throws tons of errors. If someone has an updated video of the gotchas and how to get around these, would be helpful.

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Yes, that would be great, I don't have time to make a new version of the original researcher's colab notebook now, but if a community member doesn't do it first, I will give it a go

  • @LaVaZ000
    @LaVaZ000 3 года назад +3

    What if I want to do another one? For me it just doesn't work.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Make sure you do a test with the sample footage provided in the description. Then make sure that your video has a face in every frame. It should work if you have an MP4 and a wav file

    • @LaVaZ000
      @LaVaZ000 3 года назад +2

      @@WhatMakeArt No worries mate, I'm sorry for the inconvenience. It appears as though my previous session wasn't terminated, my apologies.

  • @sethchristeverlastinggodki3819
    @sethchristeverlastinggodki3819 2 года назад +1

    MAN! BIG THANK YOU!!! THANK YOU SO MUCH! I made some mistakes to begin with but i got it wowrking!!! Thank you thank you thank yoU! I see you responding to everyone's questions. You're a legend!! While I'm here. Any idea on how to execute the tensorflow uninstall [Y enter] command??? I don't have a background in coding. I can't get past that stage in the new updated notebook... I don't know if the notebooks make any difference. i suppose not.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      When you get to that point after pressing the play button it will pause, then just press Y on your keyboard and then press the enter or return key
      Then you have to wait for a bit
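
      If the interactive prompt keeps getting stuck, pip's -y flag answers the confirmation automatically; a small sketch of a replacement cell, assuming that step is only uninstalling TensorFlow as in the notebook shown here:

        # uninstall without waiting for the Y/n prompt
        !pip uninstall -y tensorflow tensorflow-gpu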

  • @KGB95140
    @KGB95140 4 года назад +3

    Like always awesome video !
    Shame tho that the lips keep moving when the speaker doesn't speak when using a video, but do it correctly when using a picture...
    Hope they can enhance that part.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +3

      Yes, it works perfectly with a still mouth such as a still image. One way to make it work well is to find footage of someone where they're not talking much, and then it'll make their mouth open when the words happen in the wav file. Another option is to time your speaking to when they were originally talking and then the pauses will line up.

  • @electroswine
    @electroswine 3 года назад +1

    Hi thanks for this. I don't get a code when I link my google account and it doesn't display a URL to follow when I click the play button. It just pops up a window and I select my account. It doesn't seem to be linked afterwards as it can't find my Wav2Lip folders. Any thoughts?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      The best thing to do is reset the cache in your browser and make a new Colab session. Make sure you're logged into the correct Google account.
      You can reset the cache by holding Shift and then clicking the refresh button in Chrome
      You can also clear out all cookies and session IDs to make sure you have a fresh start, then it should work

  • @naimastef
    @naimastef 8 месяцев назад +1

    getting error : Could not find a version that satisfies the requirement opencv-python==4.1.0.25

    • @WhatMakeArt
      @WhatMakeArt  8 месяцев назад

      This is an older Google colab, check the original authors' GitHub page for any potential updates

  • @alem6757
    @alem6757 3 года назад +1

    hello it failed to run when i try to click the arrow at 4:23 and it says SyntaxError: Invalid syntax

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Try it again in the restarted browser with the example files to see if it works

    • @alem6757
      @alem6757 3 года назад

      @@WhatMakeArt yeah it worked

    • @alem6757
      @alem6757 3 года назад +1

      @@WhatMakeArt by the way does this works with picture?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You need to make an image still into an MP4 video and then you can use it on a picture

    • @alem6757
      @alem6757 3 года назад +1

      @@WhatMakeArt oh ok thanks

  • @BigVicMedia
    @BigVicMedia 2 года назад +1

    Good stuff! Here cause Corridor Crew mentioned this AI and featured your video 👍🏼

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Sweet, thanks for mentioning

  • @VertexMarketingAgency
    @VertexMarketingAgency 3 года назад +1

    How long can the video be? I think I'm having a time-out error when I try to use a 2-minute video. Is that possible?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Since there's only a certain amount of free processing time, best practice is to break your video up into parts and then link it back together afterwards. You can overlap the cut points so you get rendered video before and after each transition to make it more seamless
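
      A rough sketch of that split-and-rejoin workflow with FFmpeg (file names and the 60-second segment length are placeholders; you also need to cut the wav into matching pieces before running Wav2Lip on each part):

        # split the source video into ~60 second pieces (cuts land on keyframes, so they may shift slightly)
        !ffmpeg -i full_clip.mp4 -c copy -map 0 -f segment -segment_time 60 part_%03d.mp4
        # after lip syncing each piece, list the results and concatenate them back together
        !printf "file 'result_part_000.mp4'\nfile 'result_part_001.mp4'\n" > list.txt
        !ffmpeg -f concat -safe 0 -i list.txt -c copy joined_result.mp4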

  • @Franckie.G
    @Franckie.G 4 года назад +2

    Thank you it worked like a charm , still i don't know if the part where you allowed acces to your drive is safe so can you know how we desactivate this authorization please ?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +1

      I believe that access is revoked after the session is terminated
      You could also use a separate Google account that doesn't have your personal information to avoid any security issues

    • @Franckie.G
      @Franckie.G 4 года назад +1

      @@WhatMakeArt Thank you for your fast response and also for the quality of your content , thank to you I found a funny and easy way to make speak my 3D models =D

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      👍

  • @markzuckerberg3041
    @markzuckerberg3041 3 года назад +5

    I think this is a stupid question, but can I input .mp3 files instead of .wav files?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +2

      I believe Wav2Lip needs .wav files; you can use the open-source Audacity to convert any mp3s to wavs
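
      FFmpeg is another option if you'd rather not open Audacity; it is preinstalled on Colab, so the conversion can run in a notebook cell (file names are placeholders):

        # convert an mp3 to a 16-bit PCM wav
        !ffmpeg -i narration.mp3 -acodec pcm_s16le -ar 44100 input_audio.wav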

  • @sesarman
    @sesarman 2 года назад +1

    so does this alter the video or only sync your audio to match lip sync? Im curious if we can simply change the voice on an existing video without altering video, for instance on a cloned audio file. Does the audio file then have to be the exact lenght? how does the machine know where each word goes in sequence with the video? thanks.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      It is kind of magical. It just works

    • @sesarman
      @sesarman 2 года назад +1

      @@WhatMakeArt alright, still curious if there are any apps that don't alter the video 🙍

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Ah, I understand...
      I don't know of any program that swaps voices
      If you have a cloned audio file of a voice then just use that and it will update the video to match

  • @CRIMEWATCHE
    @CRIMEWATCHE 3 года назад +1

    Do we have to have a separate audio file? Because my audio is in my mp4 file, and my two files are saved as untitled project (1) and untitled project (2). So do I put untitled project (1) in the input video

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Your MP4 video file can have sound in it, but that sound won't have an effect on the lip syncing. If your sound is in another video, you need to export that sound and save it as a wav file

    • @CRIMEWATCHE
      @CRIMEWATCHE 3 года назад +1

      @@WhatMakeArt it doesnt let me export it as wav

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      If you use a video editor such as Adobe Premiere you can save the file as a wav.
      You could also extract the audio with FFmpeg and then edit it in Audacity. Both of those programs are free and open source and have instructions online.
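
      The FFmpeg route mentioned above is a one-liner; a sketch with placeholder file names:

        # drop the video stream and save the audio track as a wav
        !ffmpeg -i source_with_voice.mp4 -vn -acodec pcm_s16le -ar 44100 input_audio.wav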

  • @egoldbeatz2619
    @egoldbeatz2619 Год назад +2

    I have a error with result folder...erro 2 no such file or directory..../content/WavLip/result/voice.mp4

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      That error is usually a typo, I'm sure you have checked a couple times, but it is easy to miss a capitalization or other file path error

  • @BrianIvanCusuanto
    @BrianIvanCusuanto 4 года назад +2

    hey it's a great video. by the way, 1) my output video has been rotated 180 degrees. could you gimme some clue?
    2) and I've tried with self-record with my camera's phone, eventually the script's output told that something like "can't recognize the face". Why is that?
    thanks in advance.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      It needs to see a face in every frame. Sometimes when you export a video from an editor it puts an empty frame at the end, double check that.

  • @aritryamlbbytlorddarkstar6362
    @aritryamlbbytlorddarkstar6362 Год назад +1

    ouch too complex , but great idea , i hope someday we get the software in GUI version as drag and drop, i hope you come up with it soon! since then lets watch some podcast on jefferson!

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      There is a GUI version linked on the GitHub website of Wav2Lip

  • @tcoffa
    @tcoffa 4 года назад +1

    when I run the last cell in "get the code and models" section i get this error: "cp: cannot create regular file '/content/Wav2Lip/checkpoints/': No such file or directory" but I have already connected my drive to the colab and already named the folders in the drive "Wav2lip" and "Wav2Lip", any help?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Did you run all previous cells and put the pretrained model weights file in the folder in your Google Drive?
      There is likely some step overlooked or a timed-out remote file.
      It's not much help but I would start a new session with the original Colab from GitHub and try again.

  • @kidslife6501
    @kidslife6501 3 года назад +1

    What if I want to make another video a different day? The first one I made worked just fine, I saved the Google collab file on my drive. The next day when I tried to make another video, I added the files to the folder nad replaced the names on the Google collab file and it didn't work, now it says: "/bin/bash: line 0: cd: Wav2Lip: No such file or directory" on the second line in "Now let's try"

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Remember, each time that you start a new session you need to reconnect your Google Drive. That's where you click on the link and copy the code and paste it in. Your drive will only stay connected for a short amount of time so you need to reauthorize it. The easiest way is to restart your browser, go to the Colab page, start from the beginning, and do all the steps
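
      For reference, the reconnect step is a single cell; newer Colab builds show an account pop-up instead of the copy-and-paste code, but the call itself has stayed the same:

        # mount Google Drive for this session; files then appear under /content/gdrive/MyDrive
        from google.colab import drive
        drive.mount('/content/gdrive')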

  • @egoldbeatz2619
    @egoldbeatz2619 Год назад +1

    [Errno 2] no such file or directory sample data/ content/sample data...what can i do please

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Double check your filepaths for typos

  • @agent_w.
    @agent_w. Год назад +1

    this does not seem to be working anymore. i tried using this google collab to make new content and it will not generate a result anymore. what is going wrong? is this just a me thing? if you look at my past content, i have successfully done this before?

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Could be a problem with the new video file, try it with a file that you know worked in the past or with the example file from the description to eliminate all variables

    • @agent_w.
      @agent_w. Год назад +1

      @@WhatMakeArt I just tried it with video files I know have worked in the past, and have used in videos in the past and it sill will not work. It tells me "TypeError: mel() takes 0 positional arguments but 2 positional arguments (and 3 keyword-only arguments) were given."

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      That sounds like you have a missing semicolon or comma in your argument line for the video files or the parameters added
      I would start with a completely blank Colab page, get the original one from the GitHub page, then make sure you have no typos when you add your video files; if you miss a quote, a comma, or a semicolon, then you will have an error
      Frustrating when something unknown is causing it not to work

  • @zathaca6928
    @zathaca6928 4 года назад +1

    Stupid question but, I didn't quite understand what I need. I need one picture, and then one mp4 video of someone talking and then sync them? Can the mp4 video be only visual or does it also need an audio of someone speaking? TIA!

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      You just need an MP4 video of someone's face. There needs to be a face in every frame of the video. The person can be talking or it can be a video of a still image. Then you need a .wav audio file. You upload the video and the audio file and then the video will be lip synced to the audio file.

    • @zathaca6928
      @zathaca6928 4 года назад +1

      @@WhatMakeArt Thank you very much for your quick response. You wouldn't believe how helpful you are!

  • @LjHundred
    @LjHundred 2 года назад +1

    great video! Quick question though; I follow all the steps and I'm careful to make sure everything is done properly, but it only seems to generate a result whenever I try the more lo-res version? When I try the first version, I only get the message:
    "Using cuda for inference.
    Reading video frames...
    ^C"
    and nothing pops up in the folder. The same happens when I try using more padding. But when I try using resize_factor, then it goes through the entire process to generate a result. I've tried this by inputing different pictures (as .mp4 files), and audio, and this has been the case every time. Any idea what that could be?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      It might be running out of RAM on the GPU. Try the same MP4 file but just a few seconds long, and make sure there is a face in every frame; if it takes too long to process then the Google Colab will time out
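
      Two concrete knobs for that: shorten the clip, and use the --resize_factor flag that the inference script exposes to downscale frames before processing. A hedged sketch (paths are placeholders and should match your notebook):

        # resize_factor 2 halves the input resolution, which cuts GPU memory use considerably
        !cd Wav2Lip && python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face "../sample_data/input_vid.mp4" --audio "../sample_data/input_audio.wav" --resize_factor 2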

    • @LjHundred
      @LjHundred 2 года назад +1

      @@WhatMakeArt thanks! I’ll try that 😊

    • @LjHundred
      @LjHundred 2 года назад +1

      @@WhatMakeArt I can confirm that a shorter clip did indeed work, and also clips that were already lo-res, so I think you're correct about the RAM on the GPU

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Glad you figured it out, maybe you can stitch together shorter clips to have one big long clip

  • @JD-Media
    @JD-Media 11 месяцев назад +2

    This is the best tutorial so far. Although it doesn't work.

    • @WhatMakeArt
      @WhatMakeArt  11 месяцев назад

      Haven't tried wav2lip in a while, some of the dependencies may have changed since the original Collab notebook was published by the creators of wav2lip

  • @Nitwitz321
    @Nitwitz321 4 года назад +6

    Hey man! I make motion comics and saw your Wav2lip and wanted to ask if it could possibly work with that? 😊

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +2

      Wav2Lip works on videos with faces. If you have a motion comic that has a video file of the character then you can animate its face with Wav2Lip.
      If you don't have an animated character face, you could use the first order motion model to animate it. I'm making a tutorial on how to use the first order motion model to animate artwork and drawings.

    • @Nitwitz321
      @Nitwitz321 4 года назад +1

      @@WhatMakeArt Yeah i have it as a video but there's no movement. Great! Looking forward to seeing that man!

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +1

      Yeah it should work even if there's no movement as long as it's a video and it can see a face

    • @Nitwitz321
      @Nitwitz321 4 года назад +1

      @@WhatMakeArt Gotcha! Thanks man!

  • @Team_Maguire
    @Team_Maguire 4 года назад +2

    Thank you so much for this tutorial!

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Glad it helped, hope you made some fun lip sync videos 👍

    • @Team_Maguire
      @Team_Maguire 4 года назад

      @@WhatMakeArt haha yeah you are so good at explaining it. Keep it up!

    • @TobeysMaguire
      @TobeysMaguire 3 года назад

      @@Team_Maguire Hello Maguire

    • @Team_Maguire
      @Team_Maguire 3 года назад

      @@TobeysMaguire hi

    • @Team_Maguire
      @Team_Maguire 3 года назад

      @stuff lmao

  • @missasyan
    @missasyan 3 года назад +5

    Face not detected? Damnit, I guess the model doesn't work so well for anime characters...
    Nice video by the way, easy to understand and follow! I didn't run through any problems, so thanks!

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Maybe try shortening the clip. Also make sure that when you edited it, your video editor didn't add a single blank frame at the end of your clip. If there's just one frame in the entire video that doesn't have a face then it won't work; try it with a short version that you know has faces in every single frame.
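
      One way to check for a stray trailing frame is to count the frames FFmpeg actually decodes and compare that against what your editor reports (and against the "Number of frames available for inference" line the script prints). A sketch with a placeholder file name:

        # print the exact number of decoded video frames
        !ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 input_vid.mp4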

  • @walsh1
    @walsh1 2 года назад +1

    This is great, it's on a painting which is very photoreal but still a painting. Do you believe this would work on a puppet or an action figure as long as the face is visible

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      It seems to work on anything that has a face

  • @speculaBOND
    @speculaBOND Год назад +1

    is it safe to let something access all your google drive files? is there a way to not use the google collab space?

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      You can use a separate Google Drive account or you can install the code from the original researcher's available on their GitHub repository

  • @Bobbyjimbo2
    @Bobbyjimbo2 Год назад +1

    Keep getting: FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/wav2lip_gan.pth' how do you fix?

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Try the example files linked in the description and double check for typos

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Frustrating when it doesn't work, double check for typos

    • @Bobbyjimbo2
      @Bobbyjimbo2 Год назад

      @TurtlesSkull yeah I got it but it’s still mad low quality

  • @teztezza2202
    @teztezza2202 Год назад +1

    i`m only getting a read me file in my results folder... any suggestions please

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Did you run through all the steps? Did you try the sample audio and video files linked in the description?

  • @AmaroqStarwind
    @AmaroqStarwind 2 года назад +1

    You should try this out with Akira and all of its different dubs!

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      I am sure studios will start using similar technology to overdub video content

  • @ChanNaFUn
    @ChanNaFUn 4 года назад +1

    thank you very much! I am learning about it and I am so happy to find your video!

  • @josephdalcin1384
    @josephdalcin1384 2 года назад +1

    Where on the code is there a place to type in exactly what you want the model or picture to say for you??

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      You need to record your own wav file with the audio and then it will say that

  • @gloriapeek3131
    @gloriapeek3131 2 года назад +1

    I encountered an error, how can I get the required torch installed
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    kapre 0.3.7 requires tensorflow>=2.0.0, which is not installed.
    torchtext 0.12.0 requires torch==1.11.0, but you have torch 1.1.0 which is incompatible.
    torchaudio 0.11.0+cu113 requires torch==1.11.0, but you have torch 1.1.0 which is incompatible.
    tables 3.7.0 requires numpy>=1.19.0, but you have numpy 1.17.1 which is incompatible.
    pywavelets 1.3.0 requires numpy>=1.17.3, but you have numpy 1.17.1 which is incompatible.
    panel 0.12.1 requires tqdm>=4.48.0, but you have tqdm 4.45.0 which is incompatible.
    pandas 1.3.5 requires numpy>=1.17.3; platform_machine != "aarch64" and platform_machine != "arm64" and python_version < "3.10", but you have numpy 1.17.1 which is incompatible.
    kapre 0.3.7 requires librosa>=0.7.2, but you have librosa 0.7.0 which is incompatible.
    kapre 0.3.7 requires numpy>=1.18.5, but you have numpy 1.17.1 which is incompatible.
    jaxlib 0.3.2+cuda11.cudnn805 requires numpy>=1.19, but you have numpy 1.17.1 which is incompatible.
    jax 0.3.4 requires numpy>=1.19, but you have numpy 1.17.1 which is incompatible.
    datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
    albumentations 0.1.12 requires imgaug=0.2.5, but you have imgaug 0.2.9 which is incompatible.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Sometimes the environment gets messed up on Google Collab
      Best thing to do is just reset the browser and start a new session
      If that doesn't work, then try it with the example video and audio files linked in the description to eliminate variables of things that could be going wrong

  • @jjb2385
    @jjb2385 3 года назад +2

    Hey dude, would I be able to hire you for a less than 10 second edit involving manipulated lip movement like what's going on in this video? $$$

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Contact info is in the description, but the data sets used by the researchers don't allow commercial use

    • @jjb2385
      @jjb2385 3 года назад +1

      Forgive my continuous questions, I have Dyslexia so I want to make sure I follow. Would I need to ask a 3rd party for their lio moving services or would it be someone from your team? The edit itself is a non-profit, never for commercial, fan edit. :)

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Send an email to the address in description to discuss

  • @giovanabush2724
    @giovanabush2724 2 года назад +1

    please i made a mistake when trying this exercise on my own, How do i clear it and start all over again..... Please i need help

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      You can just click up in the top right where it says manage sessions and terminate all sessions
      Then close your browser and reopen it, and you can go to the original GitHub repository and open up the collab page again, then everything should be reset and you can start a new session

  • @issigri9395
    @issigri9395 3 года назад +2

    This is wonderful, thank you! They seem to have updated their Colab page and the new version does not match your amazing tutorial. After I paste a link to my uploaded audio, and upload a video, it always says "cannot find custom.mp3". I clicked the "show code" button where I can see this file path, but it's way too complicated for me. I have found their old version and I will try that with your tutorial but maybe you could do an updated video for the other version?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Yeah, the new collab version is in some ways easier and in some ways more confusing. I think overall it is easier
      That's a good idea to make a new video. I'll look into it

    • @issigri9395
      @issigri9395 3 года назад +3

      @@WhatMakeArt Thanks so much for the reply. I thought I was finally getting somewhere with their old colab, but after processing for 41 minutes and reaching 100%, it reported: FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/wav2lip_gan.pth'. Any ideas?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Double check that you have the proper filename and folder name with the correct capitalization
      Sometimes there is a simple typo that one overlooks, hopefully it works out

    • @MrBoxxxed
      @MrBoxxxed 2 года назад +1

      @@WhatMakeArt I have the same issue. Checked all filenames and directories.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      That's frustrating. Sometimes it just doesn't work, only advice I can give is to reset everything and try again with the example files provided into the description

  • @ic_beatz7411
    @ic_beatz7411 3 года назад +2

    Thank you. Really easy to repeat. I have a lot of fun with that)

  • @BearcatJamboree
    @BearcatJamboree 2 года назад +1

    This was really helpful. I wonder: 1) can you save the notebook to G-drive and keep using it without running the initial steps, and 2) is there a way to keep the notebook RAM from getting exceeded. Any idea from your experience?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад +1

      Yes, you can save a copy of the notebooks to your Google Drive and it'll keep most of your settings, you still have to reinitiate a session, but it can save a lot of time
      I think the only way to have more RAM access is to sign up for a pro Google Collab account

    • @BearcatJamboree
      @BearcatJamboree 2 года назад +1

      @@WhatMakeArt Thanks! Do you by any chance have a similiar video for Lip2Wav?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      No but there is updated research on GitHub from the original scientists,
      github.com/Rudrabha/Wav2Lip
      I'll have to look into making a video about it

  • @UnicornLaunching
    @UnicornLaunching 3 года назад +1

    Yup - followed every step and works like a charm. TERRIFYING.

  • @twinniedavee1928
    @twinniedavee1928 Год назад +2

    It shows no such file or directory. I followed all your steps. Points by points. What must I do? 😕

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      That's frustrating, are you using a tablet or mobile device? Sometimes those have issues
      If you go to the original researchers' GitHub page they have links to some drag and drop options

    • @lil_bills
      @lil_bills Год назад

      @@WhatMakeArt there is no such link and their most recent colab is broken. i believe they no longer support this product

  • @testrun-vu7bq
    @testrun-vu7bq Год назад +1

    Hey, I'm still having trouble with it and was wondering if I could just hire your services if available? Or maybe someone here in the comment section that can do it, for pay of course? all for entertainment nothing commercial on my end

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Try with the example video and audio linked in the video description to eliminate variables. If the example files work, then try shortening the files you are using, should eventually work

  • @kbproductionsgaming
    @kbproductionsgaming 4 года назад +1

    I'm getting this error message on the first cell of the "Now Lets try!" section, could you help please?
    /bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
    /bin/bash: -c: line 1: syntax error: unexpected end of file
    anscombe.json mnist_test.csv
    california_housing_test.csv mnist_train_small.csv
    california_housing_train.csv README.md
    All the names and file directories are correct.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Drive mounted? Double-checked that everything ran before? Changed the file name in both places? You could restart the session. Try with a different mp4

  • @Italianragazza17
    @Italianragazza17 2 года назад +1

    I’m confused - I need a file named wav2lip_gan.pth. What exactly is that?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      It is the weights for the AI model that the original researchers created; there is more info on their GitHub page

  • @molarvfx2240
    @molarvfx2240 3 года назад +2

    At the end on the last step, it says ../sample_data/hold: No such file or directory. Pls help. Thanks you.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      You need to make sure you don't have any typos in your file path

    • @molarvfx2240
      @molarvfx2240 3 года назад +1

      @@WhatMakeArt It still says the same thing.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Try it with the sample files included in the description

    • @molarvfx2240
      @molarvfx2240 3 года назад +2

      @@WhatMakeArt Well I wanted to make one with mines.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      That's just to make sure everything is working. If it works with the example files then something's wrong with your files. If it doesn't work with the example files then there's something wrong with the collab page

  • @ZinaidaKovrova
    @ZinaidaKovrova Год назад +1

    I have an error at first- does not ask for an authorization code from Google 🥶

  • @LemonJezus
    @LemonJezus 4 года назад +2

    i get the error in lets try it finishes at ^C and doesnt test the frames

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Make sure you have a face in every frame. Sometimes when you trim a video in video editing software you can have one blank frame at the end. Even this one frame at the end won't let it work. Also check to see if you have something covering the face in one of the frames.

    • @LemonJezus
      @LemonJezus 4 года назад +1

      @@WhatMakeArt can it like not recognize a face?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      It can detect faces but the faces have to be clear in the video. Also no frame in the video can be without a face.
      Also, check that all your file names are correct and there are no typos.

    • @LemonJezus
      @LemonJezus 4 года назад +1

      @@WhatMakeArt i changed it and this is what i got Length of mel chunks: 1887
      0% 0/15 [00:00

    • @LemonJezus
      @LemonJezus 4 года назад +1

      @@WhatMakeArt can it be too long? like if its a minute, is it too long?

  • @stevehunt3723
    @stevehunt3723 3 года назад +1

    can i ask a question, whilst you sink lips to wav or mp3 made files, how can we get celebs to say what we would like them to say?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You can use any video but just use it for fun and laughs, don't try to deceive people

    • @stevehunt3723
      @stevehunt3723 3 года назад +1

      What Make Art Yes understand that wasn’t my intention, just fun with friends etc, but I was still wondering how to get what I would like the video to say if you get what I mean , for example , Tom Cruise wishing my wife a happy birthday, hope your understanding what I’m trying to say, thank you.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Yes, I understand.
      You would need to find a video clip of the actor. Sometimes it works best with the actor's mouth not moving sometimes it works best with them talking. You can look at the audio waveform of the clip and time your speaking with when the actor is moving their mouth. You can use audio recorded by you and it will work fine to make the actor's lips move based on the words you say.
      If you want the audio in the actor's voice you need to have a large sample of their voice and use a generative adversarial network to create a voice print, but that is beyond the Wav2Lip technique.

    • @stevehunt3723
      @stevehunt3723 3 года назад +1

      What Make Art Thanks for information, very helpful indeed.

  • @anniewhereyougoo369
    @anniewhereyougoo369 2 года назад +1

    Thats GOLD ! Where do i record my text?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Use your phone or your computer to record the audio file that you want to use

  • @lalo346
    @lalo346 4 года назад +2

    How do I crop the face detection area from a picture so just one of any more faces is detected? Is a parameter there?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      I don't think there is a parameter. I would use a video editor to crop out the section you don't want. Then after Wav2Lip you can recombine the footage.
      Here is a face detection cropping tool on GitHub - github.com/1adrianb/face-alignment
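
      If a video editor isn't handy, FFmpeg's crop filter can isolate one face before running Wav2Lip; the width, height, and offsets below are placeholders for the region around the face you want to keep (crop=w:h:x:y):

        !ffmpeg -i two_faces.mp4 -vf "crop=640:720:0:100" -c:v libx264 -pix_fmt yuv420p -an one_face.mp4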

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      It's free, sometimes Google does great things but you are limited on a certain amount of GPU time per day based on the usage of the entire system

  • @anhtruong7741
    @anhtruong7741 3 года назад +3

    I love this work ! Thanks for sharing.

  • @amauryd
    @amauryd 3 года назад +2

    what does ValueError: --face argument must be a valid path to video/image file mean?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You might not have your Google Drive mounted correctly
      also, you likely have a typo in the path to the file or the name of the file in the try section

    • @amauryd
      @amauryd 3 года назад +1

      thanks for answering! so can i just refresh the site and do it again, trying to fix it?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Yes, just need to make sure file paths are exact

    • @amauryd
      @amauryd 3 года назад +1

      i did it again but it's stuck at the third step of 'Get the code and models' and it says _cp: overwrite '/content/Wav2Lip/checkpoints/wav2lip_gan.pth'?_

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Hard to say, sometimes it's best just to restart the browser and then really double check that all the files are in the right folders, and everything is spelled correctly.
      Frustrating but it'll eventually work if you follow the instructions exactly

  • @trimlyaustralia3108
    @trimlyaustralia3108 3 года назад +1

    Thanks. So If this is all run in the cloud, does one still need to DL python, update torch etc?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      It pulls the versions it needs, all requirements are listed on the authors' GitHub page

    • @trimlyaustralia3108
      @trimlyaustralia3108 3 года назад

      @@WhatMakeArt Great thanks. That's all squared away. Now it's not recognising the file paths I've set up. Arh, tech. :p

  • @TheGeniuschrist
    @TheGeniuschrist 3 года назад +1

    I don't think this works anymore. there's a "WARNING: Skipping tensorflow as it is not installed.
    WARNING: Skipping tensorflow-gpu as it is not installed." In the first segment of the 'get the prerequisites' section. I'm not having any luck getting it to work

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You need to press y and press enter and then it'll uninstall the Tensorflow

    • @TheGeniuschrist
      @TheGeniuschrist 3 года назад +1

      @@WhatMakeArt I'll try again. I got the more recent one to work but it only seems to do 15 second videos. Do you have a resource that will teach me how to do this on my own machine? Have I already overlooked it somehow?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      The creator of Wav2Lip has a GitHub repository with all the source code you need to get up and running on your own machine
      github.com/Rudrabha/Wav2Lip

    • @TheGeniuschrist
      @TheGeniuschrist 3 года назад +1

      @@WhatMakeArt Hey, I got it to work. Thanks so much for your help.

  • @__.
    @__. 3 года назад +1

    hey i have no error but i am not getting the video in result ???

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Test with the demo video to see if it works

  • @mycloudvip
    @mycloudvip 3 года назад +1

    What was the maximum video length in seconds or minutes that you tried? Thanks

  • @intim007
    @intim007 2 года назад +1

    Hi, I faced the same issue as the latest commentators below: cannot stat '/content/gdrive/MyDrive/Wav2Lip/wav2lip_gan.pth': No such file or directory" in STEP 3
    I just put all the files (audio, video and wav2lip-gan.pth) in the Wav2Lip folder with two capital letters. So there is nothing in the second folder with lower case at all. Also I found it obligatory to change the names of the files on the Let's Try stage from the Colab default to the names that the uploaded files have on your Google Drive. Making these changes I get a successful result.

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      The answer is in your comment ... you seem to have a dash "-" instead of an underscore "_" in the file name
      If you look at the error message it has an underscore
      I hope that fixes it

    • @intim007
      @intim007 2 года назад +1

      @@WhatMakeArt My files uploaded on the Google Drive have the text only, without - or _. The error on the stage Let's try was like "can't find the file" or like this so I just rename them on the Collab within this stage.
      BTW, thank you for the great tutorial!

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Glad you found a solution 👍

  • @gregdwyer9671
    @gregdwyer9671 3 года назад +1

    Hey man, I am getting the following
    Using cpu for inference.
    Traceback (most recent call last):
    File "inference.py", line 280, in
    main()
    File "inference.py", line 183, in main
    raise ValueError('--face argument must be a valid path to video/image file')
    ValueError: --face argument must be a valid path to video/image file

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      That usually means that you have a typo in your image file path. Make sure it's in the right place and you don't have any mix-ups with upper and lowercase letters. Try it with the sample video and audio file to see if it works

  • @myyoutubechannnel4
    @myyoutubechannnel4 3 года назад +1

    So it doesn't need a visual reference for the lip sync? It works out how to move the lips in the image from the audio file alone? How is that possible?

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      It uses machine learning and then synthesizes the mouth shapes. I did not create the process, but you can read more at the creator's GitHub page.
      github.com/Rudrabha/Wav2Lip

    • @myyoutubechannnel4
      @myyoutubechannnel4 3 года назад +1

      @@WhatMakeArt thanks!

  • @CRIMEWATCHE
    @CRIMEWATCHE 3 года назад +1

    Can you help me? Basically I learned how to do Wav2Lip, but when I do the deepfake, my image doesn't move with the video, just the mouth

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You need to use a video to make it move

    • @CRIMEWATCHE
      @CRIMEWATCHE 3 года назад +1

      @@WhatMakeArt I did, I used an MP4 for the image, an MP4 for the video, and a WAV for the audio

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Did your MP4 have a moving figure in it?

    • @CRIMEWATCHE
      @CRIMEWATCHE 3 года назад

      @@WhatMakeArt Only the video that was supposed to make the other video move, not the video of the image

  • @leodahvee
    @leodahvee 3 года назад +1

    If I were to somehow mess this up, would I be able to restart by using another Google account? Thank you for this!

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Yes, you can use Wav2Lip on another Google Colab account

    • @leodahvee
      @leodahvee 3 года назад +1

      @@WhatMakeArt Thank you!!!

  • @user-gg4uo5ne6g
    @user-gg4uo5ne6g 2 года назад +1

    Nice tutorial! But I'm hitting the issue "ValueError: Face not detected!". Help me

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      If there's even one frame that doesn't have a face in it, it won't work. Sometimes a video editor will put a blank frame at the beginning or end, and then it won't work. Make sure there's a face in every frame

    • @user-gg4uo5ne6g
      @user-gg4uo5ne6g 2 года назад +1

      @@WhatMakeArt I'm sure that my face is in every frame. How do I check if there is a blank frame at the end or the beginning?

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Look at it in a video editor, or try using just part of the clip to see if that works
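
      If you'd rather check programmatically than scrub through an editor, a rough sketch with OpenCV can flag frames where its built-in face detector finds nothing. This is a different detector than Wav2Lip uses, so treat misses only as a hint, not proof; the file name is a placeholder:

      import cv2

      # Flag frames where OpenCV's bundled Haar cascade finds no face.
      # NOT the detector Wav2Lip itself uses, so results are approximate.
      cap = cv2.VideoCapture("my_face_video.mp4")
      detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      frame_idx = 0
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              print("No face detected in frame", frame_idx)
          frame_idx += 1
      cap.release()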

    • @user-gg4uo5ne6g
      @user-gg4uo5ne6g 2 года назад

      @@WhatMakeArt I solved that issue but got another one: "RuntimeError: unexpected EOF, expected 4749057 more bytes. The file might be corrupted." Help me

  • @annahari610
    @annahari610 8 месяцев назад +1

    Can we make still images talk with Wav2Lip?
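
    Wav2Lip's inference script does accept a still image as the --face input; a hedged sketch, assuming the repo's documented flags (file names are placeholders):

    import subprocess

    # Sketch: lip-sync a still portrait image. The inference script accepts an
    # image for --face and holds it static for the length of the audio.
    # Paths and file names are placeholders.
    subprocess.run([
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", "portrait.jpg",
        "--audio", "my_speech.wav",
    ], check=True)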

  • @IrvanApriana
    @IrvanApriana Год назад +1

    ls: cannot access '/content/gdrive/MyDrive/Wav2Lip': No such file or directory

    • @WhatMakeArt
      @WhatMakeArt  Год назад

      Check for typos in the name of the folder you created in Google Drive

  • @pkay3399
    @pkay3399 2 года назад +1

    Thank you. For now, it generates no file in the Result folder

    • @WhatMakeArt
      @WhatMakeArt  2 года назад

      Did you refresh the folder?

    • @pkay3399
      @pkay3399 2 года назад

      @@WhatMakeArt Yes, trying to use the updated Colab notebook now. I think in the old one I got an OOM error. But thank you, this is a great tutorial

  • @infinty372
    @infinty372 4 года назад +4

    Might be a dumb question, but I'm guessing the .mp4 has to match the .wav length? Or if I put a video shorter than the audio length then the program will expand it?

    • @WhatMakeArt
      @WhatMakeArt  4 года назад +2

      They don't have to be the same length, but you have to have at least enough audio for your video.
      I haven't run an experiment recently, but I think it truncates your video if the audio goes silent.
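
      If you want to compare the two lengths before running, a small sketch using ffprobe (bundled with ffmpeg and already present on Colab) prints both durations; file names are placeholders:

      import subprocess

      # Print the durations of the video and audio inputs using ffprobe.
      def duration_seconds(path):
          out = subprocess.run(
              ["ffprobe", "-v", "error", "-show_entries", "format=duration",
               "-of", "default=noprint_wrappers=1:nokey=1", path],
              capture_output=True, text=True, check=True)
          return float(out.stdout.strip())

      print("video:", duration_seconds("my_face_video.mp4"), "seconds")
      print("audio:", duration_seconds("my_speech.wav"), "seconds")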

  • @vt9848
    @vt9848 3 года назад +1

    Hello all, I am facing trouble here. Can anyone please help me
    !cp -ri "/content/gdrive/MyDrive/Wav2lip/wav2lip_gan.pth" /content/Wav2Lip/checkpoints/
    error: cp: cannot stat '/content/gdrive/MyDrive/Wav2lip/wav2lip_gan.pth': No such file or directory

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You likely don't have your Google Drive mounted properly, and/or you have a typo in your file path. Make sure you check all the capital and lowercase letters and that everything matches
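
      As a sanity check, the mount call and a path test can go in their own cell; a sketch assuming the folder layout used in this tutorial:

      import os
      from google.colab import drive

      # Mount Google Drive in Colab, then confirm the checkpoint is where the
      # copy command expects it. The folder name follows this tutorial's layout.
      drive.mount('/content/gdrive')

      ckpt = "/content/gdrive/MyDrive/Wav2Lip/wav2lip_gan.pth"
      status = "found" if os.path.isfile(ckpt) else "MISSING (check folder name and capitalization)"
      print(ckpt, "->", status)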

  • @yassinezer8516
    @yassinezer8516 3 года назад +2

    I got this error: "--face argument must be a valid path to video/image file"

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      You probably have a typo in your file name or in the path to the video file, or it's not a proper video file

    • @trimlyaustralia3108
      @trimlyaustralia3108 3 года назад +1

      Me too. No typos. Mp4 and Wav. :/

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Try with the sample video linked in the description

  • @noalupart9372
    @noalupart9372 4 года назад +1

    pls help
    RuntimeError: unexpected EOF, expected 294091 more bytes. The file might be corrupted.
    terminate called after throwing an instance of 'c10::Error'

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Make sure there is a face in every frame, even the last frame. Maybe try trimming the video down to a shorter length. You could also lower the resolution of the video. Restarting the Colab session from scratch can also help

    • @noalupart9372
      @noalupart9372 4 года назад +1

      @@WhatMakeArt I have shortened the length to 1 second and lowered the resolution, but I still get the same error

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      It might be the type of MP4 that you're using; it's hitting an unexpected end of file. Try the MP4 linked in the description to see if that works, and if it does, then there's something wrong with your file.
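
      One extra thing worth ruling out, purely as a guess beyond what this thread covers: the error is raised while a file is being loaded, so an incompletely uploaded wav2lip_gan.pth can also produce it. A quick size check (the expected size is only approximate):

      import os

      # Print the checkpoint size; a partially uploaded file will be far smaller
      # than the full checkpoint (a few hundred MB -- an approximate figure).
      ckpt = "/content/gdrive/MyDrive/Wav2Lip/wav2lip_gan.pth"
      print(round(os.path.getsize(ckpt) / 1e6, 1), "MB")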

    • @noalupart9372
      @noalupart9372 4 года назад +1

      @@WhatMakeArt I changed to the MP4 in the description, but now I have this:
      FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/wav2lip_gan.pth'

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Make sure your Google Drive is mounted; it's up in the top steps. Also make sure you have that file in the correct folder, that the "l" in "wav2lip_gan.pth" is lowercase, and that the "Wav2Lip" folder where you put your MP4 and WAV files is capitalized

  • @jasonchen2357
    @jasonchen2357 3 года назад +2

    For the very last step, why does it say this:
    Using cuda for inference.
    Reading video frames...
    ^C
    Thanks!

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      It is reading the video frames and finding faces in each frame. It needs to find a face in each frame to apply the Wav2Lip transformations

    • @jasonchen2357
      @jasonchen2357 3 года назад +2

      @@WhatMakeArt I didn't expect you to reply so soon, thank you!

    • @WhatMakeArt
      @WhatMakeArt  3 года назад +1

      Post a link to any fun creations you make

    • @jasonchen2357
      @jasonchen2357 3 года назад +1

      @@WhatMakeArt It still doesn't work for me. Every frame definitely has a face in it, but it doesn't work.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      That's frustrating when it doesn't work.
      It's annoying, but usually the best thing to do is to start fresh with a restarted browser and reload all the components. Sometimes the GPU can time out on the Google Colab and then it won't work. Also, even though your video may seem like it has a face in every frame, sometimes the last frame exported from a video editor may not have a face.
      Try the example video and WAV file included in the description to see if those work. If they work with your setup, then there's something wrong with either your video or your sound. If they don't work, then there's something wrong with the Google Colab.
      Hope it works out
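
      If the run keeps stopping partway through frame reading on Colab (often a memory issue), one lever to try is shrinking the input; the inference script exposes a --resize_factor flag for this. A sketch with placeholder paths:

      import subprocess

      # Sketch: run inference with the input downscaled 2x via --resize_factor,
      # which lowers the memory needed for face detection on Colab.
      # Paths and file names are placeholders.
      subprocess.run([
          "python", "inference.py",
          "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
          "--face", "my_face_video.mp4",
          "--audio", "my_speech.wav",
          "--resize_factor", "2",
      ], check=True)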

  • @RiGz_Nz
    @RiGz_Nz Год назад +1

    Thanks mate!!! Much love from New Zealand, mah man... now I can try some Rick & Morty voiceovers, well that's the plan. Halfway into your video and it's very well explained, and you made it easy (though there's always one person: the easier you make it, the harder they find it hahaha)

  • @yeetpizza7452
    @yeetpizza7452 3 года назад +1

    When I download the file and try to run it, it says the file extension is wrong, but it is an MP4 file extension. Please help.

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Double check all your file paths for typos and make sure you're including the right file

    • @yeetpizza7452
      @yeetpizza7452 3 года назад +1

      @@WhatMakeArt Yes, everything is done correctly. When I open the file, the exact message I get is "This file isn't playable. That might be because the file type is unsupported, the file extension is incorrect, or the file is corrupt."

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Hmm, that is strange. Try it with the example files in the description to see if it works with those, make sure you're using a laptop or a desktop

    • @yeetpizza7452
      @yeetpizza7452 3 года назад

      @@WhatMakeArt I've tried using the Kennedy.mp4 and a custom WAV file; tomorrow I will try a different WAV file

    • @yeetpizza7452
      @yeetpizza7452 3 года назад +1

      Because the WAV file I used was converted from an MP3, but I don't know if that makes a difference
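
      A WAV converted from an MP3 can sometimes come out oddly encoded, so a cheap thing to try is re-encoding both inputs with ffmpeg (already installed on Colab) into plain, widely supported formats; a sketch with placeholder file names, where the 16 kHz mono audio setting is an assumption rather than a hard requirement:

      import subprocess

      # Re-encode the video to H.264 MP4 and the audio to 16-bit PCM WAV.
      # File names are placeholders -- adjust to your own files.
      subprocess.run(["ffmpeg", "-y", "-i", "input_video.mp4",
                      "-c:v", "libx264", "-pix_fmt", "yuv420p",
                      "clean_video.mp4"], check=True)
      subprocess.run(["ffmpeg", "-y", "-i", "input_audio.mp3",
                      "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le",
                      "clean_audio.wav"], check=True)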

  • @Isaac1539oblock
    @Isaac1539oblock 4 года назад +2

    Thanks for this, it was very helpful for me.

    • @WhatMakeArt
      @WhatMakeArt  4 года назад

      Great, post a link of the cool stuff you make

  • @stevenreynolds3911
    @stevenreynolds3911 3 года назад +1

    It doesn't seem to be working - it is giving file-not-found errors and so on - and that's with the correct filenames

    • @WhatMakeArt
      @WhatMakeArt  3 года назад

      Double check that you have your Drive correctly linked to the Colab and that you granted it access. There still might be a slight typo; it happens to me all the time that I think there are no typos, and then I triple-check and find one. There might be an extra space or something. Hopefully it works out