Face Tracking + Stable Diffusion img2img

  • Published: 15 Nov 2022
  • Trying motion tracking to get a heavy, consistent style.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    _________________________________________________________________________
    Corridor Crew:
    • We Put TOM HOLLAND int...
    How to install Stable Diffusion:
    • Installing Stable Diff...
    Midjourney Artstyle
    mega.nz/folder/Z0xS1BpI#S40xU...

Comments • 84

  • @themightythor1974 · a year ago · +3

    Dude, great work. I like the fact that the background characters are also staying the same.

  • @My123Tutorials · a year ago · +1

    Corridor's video also blew me away. Thanks for making a brief tutorial on this topic. :D

  • @Overlorddunham · a year ago

    Absolutely amazing!

  • @RussellKlimas · a year ago

    Fantastic my man. Really well done. Definitely gonna see how I can play with this.

  • @Red.Rabbit.Resistance · a year ago · +19

    Pro tip for editing zooming shots: render and edit them in reverse so the pixels generate more easily, then flip the result back before you put it in your video. (A minimal code sketch of this workflow follows this thread.)

    • @enigmatic_e · a year ago · +2

      Interesting. Will have to try that.

    • @lil-zeta · a year ago · +1

      I don’t understand what would cause this difference.

    • @Red.Rabbit.Resistance · a year ago · +2

      @@lil-zeta When you zoom out, the AI has to generate new information at the outside of the frame. It introduces new imagery based on the seed/noise, so even if you stabilize your seed, the new information/imagery will look choppy or "cursed".
      But if you do it in reverse, the AI expands on the existing image in the center and pulls information from the picture instead of having to generate anything new; it just stretches the pixels and generates a new image from its existing noise. It looks more organic this way.
      You can edit in reverse and flip it back to get the same effect but a more consistent image. Hope that makes sense; sorry, my English isn't the best.

    • @ethan2023 · a year ago · +1

      @@Red.Rabbit.Resistance you need to make a tutorial lol

    • @Red.Rabbit.Resistance · a year ago · +4

      @@ethan2023 That's a good idea. I am still trying to get my channel going, but putting up an obscure tutorial might be a start 😅
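A minimal sketch of the reverse-then-flip workflow described in this thread, in Python with OpenCV. The file names are placeholders, and the img2img pass on the reversed frames happens outside this script:

```python
# Reverse the clip so a zoom-out becomes a zoom-in, run img2img on the
# reversed frames, then reverse the stylized result back to normal order.
import cv2

def reverse_video(src_path: str, dst_path: str) -> None:
    """Write a frame-reversed copy of a video."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    height, width = frames[0].shape[:2]
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    for frame in reversed(frames):
        out.write(frame)
    out.release()

# 1) Reverse, 2) run img2img on the reversed frames, 3) reverse again.
reverse_video("zoom_out_shot.mp4", "zoom_in_shot.mp4")
```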

  • @a.s.h.5774 · a year ago · +2

    Cool channel and great job on Reddit! This stuff is so amazing to me (not a graphic designer). I just LOVE watching how fast this is advancing; keep shining light on my ignorance.

    • @enigmatic_e · a year ago

      Thank you. I appreciate you checking the channel out!

  • @bendyarms · a year ago · +1

    Sick results on this one!

  • @3oxisprimus848 · a year ago

    I really appreciate your work, bro!

  • @a.s.h.5774 · a year ago

    The Metroid ones seriously turned out so, so cool!

  • @400acresLLC · a year ago

    Dope af, definitely an inspiration to create daily.

  • @themightyflog · a year ago

    Awesome stuff man! Ready to make a movie!

  • @BryanKesler · a year ago

    Sick! Thanks man

  • @lithium534 · a year ago · +1

    Well done. Sick result.
    I tried it with a 3D animation render, but it jumped too much. I'm going to try it again with this in mind.

  • @RemonBerkers · a year ago

    That's nice, man.

  • @PabloMartinez-ut8on · a year ago

    Great, thanks!

  • @yajastudio8241 · a year ago · +8

    One way to reduce jitter might be to render out sped-up footage (like 3x to 4x), then use the AE Timewarp effect to blend the frames. You can timewarp it back to the desired speed/framerate, and AE will try to morph between the frames with Pixel Motion to make it smoother. It will introduce another layer of artifacting, but it all depends on what you're okay with, and it might not be that bad since the shapes on the subject are fairly consistent between frames. It might go crazy on the background, though. Just mess around with the Smoothness and Vector Detail attributes, and mask out any egregious areas with the original footage underneath, using the same Timewarp effect but with Mix Frames instead of Pixel Motion. I apologize for the long-winded comment if you've already tried this. (A rough code analogue of this kind of frame blending follows this thread.)
    Happy arting!

    • @enigmatic_e · a year ago · +1

      Thanks! Will have to try this!!!
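Timewarp itself is After Effects only, but here is a rough Python/OpenCV analogue of the motion-compensated frame blending it performs, using Farneback optical flow. This is a sketch of the general idea, not the actual Pixel Motion algorithm; the inputs are assumed to be consecutive, equal-sized BGR frames:

```python
# Estimate dense optical flow between two frames, then backward-sample
# the first frame half a step along the flow to approximate the
# in-between frame.
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward sampling is an approximation: the flow is defined on
    # frame_a's pixel grid, not the mid-frame's.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```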

  • @cowlevelcrypto2346 · a year ago

    Genius!

  • @Magnify. · a year ago

    Wow

  • @themidstream · a year ago

    Unique.

  • @Bukulmus · a year ago

    Nice work, definitely gonna fool around with this. Good excuse for me to go shooting again.

  • @michaelrichardson6125 · a year ago

    🔥🔥🔥

  • @spearcy · a year ago · +1

    Great tactic to know about for keeping renders of faces more consistent! I wonder how different those renders would have looked using the same seeds you used but without the tracking applied. Also, I would think that if you took a frame from a rendered batch and used it as a key in EbSynth, that would also create several consistent frames, especially if the face was tracked and that tracked set was the source of the original frames for EbSynth.

    • @enigmatic_e · a year ago

      Yes, I want to try combining this technique with EbSynth.

  • @TheSouthTownSheriff · a year ago

    Hey man! Your channel is incredibly inspiring, and you're experimenting with some groundbreaking techniques with regard to modern visual art styles.
    If I wanted to shoot a fairly active subject in motion (a dancing visual), would shooting at a high FPS (say, 60 or 120) with a high shutter speed significantly improve the smoothness and accuracy of my results in EbSynth? I would like to know the range of possibility here, as I don't prefer the high-FPS look and hope to produce a cinematic result (23.976 fps), but I am not sure if EbSynth is powerful enough to follow active subject movement at 23.976 fps.

    • @enigmatic_e · a year ago · +1

      Yeah, a lot of movement might make it difficult for EbSynth, but it may be possible with extra work. I did a video on EbSynth, check it out if you haven't. It shows how to use multiple keyframes: ruclips.net/video/DlHoRqLJxZY/видео.html

  • @lucasmartson7665 · a year ago

    The best, really badass, my friend!

  • @Kyus2001 · a year ago

    YT's new Android feature lets you zoom in on videos. I have to say the face is pretty detailed as a cartoon. Pretty cool!

  • @ConspireOfficial · a year ago

    I did some experiments like this by replacing heads and faces in post before img2img, with mixed results.
    A face tracker/denoiser would be perfect for faceswapping and locking onto a consistent style.
    I just dropped another img2img video; since I was using an already-edited music video, most of the shots were at the same distance from the face/head, so now I'm thinking that's why it kept such a consistent style.

  • @yutupedia7351 · a year ago

    Subscribed, very good videos!

  • @MrKikegraphics · a year ago

    Big fan of your videos, man. May I ask, would you mind sharing the Midjourney model you are using here? Thank you.

    • @enigmatic_e · a year ago · +1

      Thank you! Check description.

    • @MrKikegraphics · a year ago

      @@enigmatic_e Means a lot, man. Thank you so much. 🙏

  • @filmdon2475 · a year ago · +2

    Great video, was just searching for this! How would you apply the stabilized animation footage back onto the original, pre-stabilization video?

    • @enigmatic_e · a year ago

      You can create a null, attach both layers to the null, and move the null however you want. You can also move them all together into a new comp and make the adjustment that way. It really depends on what you're trying to accomplish after the AI footage is generated. (See the sketch after this thread for the same stabilize-and-invert idea in code.)

    • @evilcorp · a year ago

      @@enigmatic_e Super sick and exactly the thing I've been stuck on! Can you describe attaching to a null object once you pull your SD frames back into AE? We have the exact same outcome in mind, so if you would be so kind, it would be far better than any tut I could find.

    • @fiftytwoyenfilms · a year ago

      If you stabilize in Mocha AE, tracking the eyes/nose, you can apply it as a Power Pin effect and use that same Power Pin inverted on an adjustment layer above, so you can toggle it on/off as needed. Andrew Kramer also has a great ancient tutorial on reversing stabilization using the built-in AE tracker.

    • @evilcorp · a year ago

      @@fiftytwoyenfilms Yeah, I'm not paying for Mocha. @enigmatic_e is this not a native AE function?

    • @fiftytwoyenfilms · a year ago · +1

      @@evilcorp Mocha AE is built in! If you're just doing a position stabilize, you can pick-whip the position of the null object to the anchor point of the stabilized layer 👍🏻
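For anyone working outside After Effects, a minimal sketch of the same stabilize, stylize, un-stabilize idea in Python with OpenCV. It assumes you already have matching landmark points per frame from some face tracker, and stylize() stands in for the img2img pass; both are placeholders:

```python
# Stabilize the face, stylize the locked-off frame, then invert the
# transform so the stylized frame drops back into the original move.
import cv2

def stabilize_stylize_restore(frame, ref_pts, cur_pts, stylize):
    # ref_pts / cur_pts: float32 arrays of tracked (x, y) landmarks.
    # Similarity transform mapping the current landmarks onto the
    # reference ones (the "stabilize" step; the null/track in AE terms).
    M, _ = cv2.estimateAffinePartial2D(cur_pts, ref_pts)
    h, w = frame.shape[:2]
    stabilized = cv2.warpAffine(frame, M, (w, h))

    styled = stylize(stabilized)  # img2img on the locked-off face

    # Inverting the transform is the "reversing the stabilization" step.
    M_inv = cv2.invertAffineTransform(M)
    return cv2.warpAffine(styled, M_inv, (w, h))
```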

  • @iamYork_ · a year ago

    cool cool...

  • @iraklipkhovelishvili1252 · a year ago

    What model are you using, an inpainting version or a standard Dreambooth fine-tuned one?
    When I try img2img with my Dreambooth models I get very little consistency (unlike with inpainting checkpoints).

    • @enigmatic_e · a year ago

      Inpainting works great, but here I used the Midjourney checkpoint. Normally I wouldn't get this consistency, but with this technique I got good results.

  • @DVNT · a year ago

    Look into PYTTI: it has zero jitter and is made for animation. It is more difficult to master than SD, but you seem to come up with some interesting workarounds. Would love to see a tutorial on PYTTI.

    • @enigmatic_e · a year ago · +1

      I've heard of it but never tried it. Will look into it!

  • @gauravscreativecanvas · a year ago

    Sir, can we import the whole sequence? If yes, then how?

  • @furrrevayoung · a year ago

    Thanks for sharing! Greetings from Ukraine

  • @tatanburgos924 · a year ago · +1

    GREETINGS, BROTHER, I HAVE A QUESTION!!! Do you have to do something like prior training to achieve that result? With 1.5 and 2.0 my results don't come out even remotely similar to my face, which is a bit frustrating haha... I would appreciate any help with this... it just picks up the color palette, and in most cases it doesn't even produce anything similar to me, only abstract things... again, thank you very much, greetings.

    • @enigmatic_e · a year ago

      I still have version 1.5. Did you do the face tracking? Maybe your denoising strength is too high, or you need to adjust the prompt.

    • @tatanburgos924 · a year ago

      @@enigmatic_e First of all, thank you very much for your reply, brother!!! I have both versions installed; I don't know if they could be stepping on each other somehow... I will carefully review the face tracking... if you happen to have a specific tutorial on that, I would appreciate it... greetings from Colombia.

  • @aranbenjo-eb9eb · a year ago

    Hi, your videos are cool. I have a question about img2img:
    if I have a set of photos of a boy in 4 different poses, how can I get the same result using the prompt in img2img? Thank you.

    • @enigmatic_e · a year ago

      Definitely possible. My new video, coming soon, will talk about this.

  • @mzrendy8120 · a year ago

    Did you train a model for the face and style you were looking for?

    • @enigmatic_e · a year ago

      No, I didn't train anything; it was all in the prompts and the model, though I can't remember which one I used for this.

  • @returnofthestoic2837 · a year ago

    Where can I find this Midjourney model? Great video!

  • @Mrduirk · a year ago

    With the CFG scale, the more you add, the longer it takes? How is that parameter calculated?

    • @enigmatic_e · a year ago

      I don't think so. What makes it take longer are the steps and the size of the dimensions.

  • @vidvisual9622 · a year ago

    Do you have some favourite video artists making stuff with only AI images and Stable Diffusion? I'm looking for people to collab with on making music videos.

  • @tenchimuyogx339 · a year ago

    What is the Midjourney art style pack for?

    • @enigmatic_e · a year ago

      That's the style the AI will use. You put it into the models/Stable-diffusion folder.

  • @user-bo6yh3dc6c · a year ago

    Required system specs?

  • @walidflux · a year ago · +1

    Have you tried Flicker Free?

    • @enigmatic_e · a year ago

      Never heard of it. You have any samples of what it can do?

    • @walidflux · a year ago

      @@enigmatic_e It's a plugin for After Effects that removes flicker from your video.

    • @enigmatic_e · a year ago

      @@walidflux ahhh ok. I will have to try that!

  • @VFXkabilan · a year ago

    Hi, how do I install Stable Diffusion (2.0)?

    • @enigmatic_e · a year ago · +1

      I haven't done it. People say it's not good right now.

  • @locurasdebenjayfer · a year ago

    Error: '"upsample_bilinear2d_channels_last" not implemented for 'Half''. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \. Full error message is in your terminal/ cli. I get that message, any solution??

  • @user-bo6yh3dc6c · a year ago

    How do I install it?

    • @enigmatic_e · a year ago

      Link to the tutorial is in the description.

  • @themightyflog · a year ago · +1

    Are you using EbSynth?

  • @2in1mixed8 · a year ago

    Hi man,
    txt2img is working very well, but with img2img nothing happens. With "revolver ocelot, art by Yoji Shinkawa" it just loads the cartoon img and nothing happens. I just put in a test img, not one from AE.

    • @enigmatic_e · a year ago · +1

      Mess around with the denoising strength; bring it down. (A small scripting sketch of this parameter follows.)
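For anyone driving this from a script, a hedged sketch of passing a lower denoising strength through the AUTOMATIC1111 webui img2img API. This assumes the webui is running locally with the --api flag enabled; the prompt, file names, and values are just examples:

```python
# Lower denoising_strength keeps the output closer to the input frame,
# which is usually what you want for frame-by-frame video work.
import base64
import requests

with open("frame_0001.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "revolver ocelot, art by Yoji Shinkawa",
    "denoising_strength": 0.4,  # try roughly 0.3-0.5 for consistency
    "steps": 20,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
styled_b64 = resp.json()["images"][0]  # base64-encoded result image

with open("frame_0001_styled.png", "wb") as f:
    f.write(base64.b64decode(styled_b64))
```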

  • @UNKNOWNEdits69 · a year ago

    Please check your Insta DM... I want some help from you.