Stable Diffusion IMG2IMG animation settings Pt. 1. I bought a new GPU for THIS!!

  • Published: 27 Apr 2024
  • Got some great AI topics I will be diving into for the next couple of videos!
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    _________________________________________________________________________
    NerdyRodent:
    / nerdyrodent
    Aitrepreneur:
    / aitrepreneur
    Midjourney Artstyle
    mega.nz/folder/Z0xS1BpI#S40xU...
    Disco Diffusion Checkpoint:
    huggingface.co/sd-dreambooth-...
    Timecodes:
    00:00 Intro
    00:40 Purchased GPU
    01:36 Local SD recommendations
    02:35 Setting up images
    03:30 Working in the SD UI
    09:00 Generating Images
    09:53 Importing into AE

Comments • 281

  • @Aitrepreneur
    @Aitrepreneur 1 year ago +27

    Great video man! Nicely presented and the final result (especially the one you showed in your shorts video) looks fantastic! Well done! ;)

    • @enigmatic_e
      @enigmatic_e  1 year ago +7

      The legend himself. Thank you for sharing your knowledge with everyone and thank you for checking out my vid!

  • @magejoshplays
    @magejoshplays 1 year ago +1

    Thanks for all your work helping us figure these tools out, and I will love you forever for that Goosebumps reference!

  • @luandrakko4631
    @luandrakko4631 1 year ago +3

    Really interested to see the next video explaining how you got the final results refined. The frames are way more consistent in the second video.

  • @JoeMultimedia
    @JoeMultimedia 1 year ago +1

    Amazing! I love this special effect. Thanks a lot.

  • @ARTificialDreams
    @ARTificialDreams 1 year ago +2

    Great video! Great content!

  • @Copperpot5
    @Copperpot5 1 year ago +1

    Appreciate the tutorial - very cool results - nice that you credited back as well. Continued success!

  • @amitdas1967
    @amitdas1967 11 months ago

    Goosebumps was my life... thank you

  • @octimus2000
    @octimus2000 1 year ago +2

    I'm on the edge of my seat waiting for your next video. Really good job!
    I had never subscribed to anyone after just one video.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      🙏🏽🙏🏽🙏🏽🙏🏽

  • @robojobot77
    @robojobot77 1 year ago +1

    Dope video. Looking forward to more.

  • @PeterParnes
    @PeterParnes 1 year ago +1

    Very good introduction. Thank you!

  • @AscendantStoic
    @AscendantStoic 1 year ago

    Awesome work, truly inspiring ;)

  • @AdrianMagni
    @AdrianMagni 1 year ago +22

    Something I've found helps with the inconsistency is to render only a quarter of the frames, then run it through DAIN to smooth the motion. It also smooths out the transitions when elements pop in and out. (See the sketch after this thread.)

    • @qq373163143
      @qq373163143 1 year ago

      You mean 6 frames/s, if the video is 24 fps?
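
    A minimal sketch of the "quarter the frames" idea above (folder names and the 4x factor are assumptions for illustration; DAIN or a similar interpolator would then fill the dropped frames back in, e.g. 6 fps back up to 24 fps):

        import shutil
        from pathlib import Path

        SRC = Path("frames_full")      # hypothetical folder of all extracted frames
        DST = Path("frames_quarter")   # the subset actually sent through img2img
        DST.mkdir(exist_ok=True)

        for i, frame in enumerate(sorted(SRC.glob("*.png"))):
            if i % 4 == 0:             # keep every 4th frame (24 fps -> 6 fps)
                shutil.copy(frame, DST / frame.name)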

  • @shogaartx8643
    @shogaartx8643 1 year ago

    THANK YOU SO MUCH FOR THIS VIDEO

  • @jams2blues
    @jams2blues 1 year ago

    Go Goosebumps brother! 🤣🤣 Thanks for the amazing tutorial amigo

  • @kaizey
    @kaizey 1 year ago +94

    As someone who is studying graphic design, I am definitely scared. My school is COMPLETELY sleeping on this AI stuff, they're saying "there's always going to be a demand for real, human made stuff". But it's pretty obvious I've gotta think about what areas of visual communication have any longevity any more. Animation is changing, graphic design, logo design are all changing, even pro level photography and videography are becoming more and more accessible to "normal people" through AI work + phone cameras that can do the job a lot of the time.

    • @NameIsDoc
      @NameIsDoc 1 year ago +18

      As a professor of the subject...
      Art is always changing; it's the one profession that never sits still. Stagnation is death. Keep a thumb on the pulse of the world. That said...
      People claimed Photoshop would end photography, that 3D animation would kill 2D animation, and so much more.
      What's likely going to happen is that physical art will become more valuable, as well as 3D work. Some jobs will likely be lost, but artists will be able to integrate this into their workflow to speed up render passes.
      AI is great at getting weird concepts or producing variants of a nearly done piece. In 20 years I bet it'll be another tool in our belt to hammer out thumbnails or provide a render pass on our work.

    • @timmydotlife
      @timmydotlife 1 year ago +7

      See the AI as your pencil and the prompt as your canvas, and you have creativity as your best partner.

    • @synthdream
      @synthdream 1 year ago +11

      As someone from that world, the ship on graphic design job security sailed a long time ago. Photoshop did make people lose jobs; so will AI. This industry requires you to continuously learn the past and the future, and then they'll still pay you like trash. People will tell you physical art and human-made things will be what people want in the future, but just so you know, everyone, including themselves, knows they're full of it. Companies and people will continuously invest in finding ways to screw you over, even if it costs more than paying you a fair wage. Help from the government to aid your security won't arrive, and you'll have to stay continuously paranoid that something new will come out of nowhere and you won't be needed anymore.
      This is the truth. If it scares you, just know everyone else is also very scared. This is the reality of the world; all you can do is keep learning and hope tomorrow won't be the day your employers need to have "a talk".

    • @brexitgreens
      @brexitgreens 1 year ago +2

      @@synthdream In addition to everything you have said, there have already been too many very talented artists in the world for the actual demand - even before AI. Even very, very long before AI. Anyone still remember Van Gogh? 😂
      P.S. I've got a crush on your user image.

    • @OfficialGOD
      @OfficialGOD 1 year ago

      You still need to learn theory and last year's AI...

  • @duythinhnguyen6705
    @duythinhnguyen6705 1 year ago +1

    Worked smoothly, tysm

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Awesome! I see some people had issues; glad to hear it's working for others.

  • @digital_magic
    @digital_magic 1 year ago

    A very awesome video :-) Thanx Mate

  • @slickrick37
    @slickrick37 1 year ago

    Great stuff. Thanks!

  • @IGORTSARENKO
    @IGORTSARENKO 1 year ago

    Your videos are amazing. Please do more tests of img2img animation. Looks fantastic.

  • @aiculture6486
    @aiculture6486 1 year ago

    Sick tutorial thx!

  • @ethan-fel
    @ethan-fel 1 year ago +5

    I've been studying img2img/inpaint animation for a while now. With Krita and auto-sd-plugin, increasing the image size (base setting) really helps with consistency.

    • @f.scribbles9546
      @f.scribbles9546 1 year ago

      Do you know of or have any tutorials for doing this with Krita? I only just found out I could do animation with it, let alone AI stuff. That program is an absolute beast.

  • @synthoelectro
    @synthoelectro 1 year ago +2

    It's like the old '80s music videos that used animation, such as Take on Me by a-ha.

  • @maricii33
    @maricii33 1 year ago

    Thank you so much, this was so helpful 🙏

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      Wow, thank you for the Super Thanks!!! Your support means a lot!!

  • @gemini9775
    @gemini9775 1 year ago +1

    Interesting, thanks 😀

  • @clenzen9930
    @clenzen9930 1 year ago

    I liked your video, keep it up.

  • @mcgibs
    @mcgibs 1 year ago

    The flickering in the animation is still good for replicating the flip book effect like in the music video for Take On Me, for example.

  • @DJ-Illuminate
    @DJ-Illuminate 1 year ago +1

    OMG. I knew something was creating these animations. I did download Stable Diffusion but had no idea that I could batch images or what the sliders meant. I also heard people mention style as being a thing but had no idea how to add new styles. I wish Stable Diffusion had documentation on mouseover or links to FAQs from the software. Thank you!!!

  • @CanalDojogames
    @CanalDojogames 1 year ago

    Man, this is amazing! Honestly, you can do drawings or take videos and have AI do the complement for you. Like, I want to draw the sequence of poses of a character drawing his sword and attacking an enemy, but I don't want to draw all the details and colorize, so AI does it for me...
    Honestly, I love this setting because I can get the same pose as the photo. Now I want to do the inverse: take a photo of someone's back doing something and have AI create their "front". That will be a challenge for sure, hahahaha.
    Thank you so much.

  • @oli333
    @oli333 1 year ago

    Hey mate, don't know if this has already been said, but at about 10:28, the reason the video is a different length is that your imported sequence is interpreted at 30 fps and you've put it on a 24 fps timeline. You can fix that by right-clicking on the sequence you've imported, going to 'Interpret Footage', and changing it. Great video, amazing results.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Thank you! Yeah, someone mentioned this and now I always do it.

  • @rauld8355
    @rauld8355 1 year ago

    Just wow!... So anyone can make awesome content now!

  • @lioncrud9096
    @lioncrud9096 11 months ago

    I LOVED GOOSEBUMPS AS A KID!!

  • @mageenderman
    @mageenderman 1 year ago +16

    "I feel for those that animate cause this will be difficult to compete with"
    Animators have an advantage; don't act like they can't use these tools too.
    When digital artists use AI, I've seen insane results.
    I imagine the same would be true for animators.
    Sure, we can do impressive things with Stable Diffusion without having an animation or art background, but goddamn, the stuff people with those skills can do plus the AI toolset is wild.

    • @theunderdowners
      @theunderdowners 1 year ago

      Animators already have an advantage: most of the terms used in SD, and the tech to run it on, are staples for animators. I doubt they're crying too much; they've got some serious new toys coming too. This is just the first bucket of sand of a crystal palace. ruclips.net/video/EKJXI1xW4gw/видео.html

    • @marcthenarc868
      @marcthenarc868 1 year ago +3

      I agree. When MIDI and sampling came out, people thought that musicians were dead and anyone could write a score. The younger generations of musicians (30 years ago) lapped it up, and today you can still distinguish those with talent from those who dabble in it thinking it's all too easy. History is repeating itself in so many ways from that particular adventure.

    • @theunderdowners
      @theunderdowners 1 year ago

      @@marcthenarc868 What is happening now with A.I. art is beyond anything I thought possible in my lifetime when I was young; it has reignited an old passion. Basically: I wonder what this button does? WOW!

  • @spearcy
    @spearcy 1 year ago +1

    Nice! I also installed that yesterday using Aitrepreneur's video links and then tried it out a bit. It seemed to work well, but I need to open it later today to make sure it's using my GPU (RTX 3090) rather than my CPU. Maybe that's in the settings, but I just forgot to look for it. The scene I used was one of my daughter on a trail run. It kept spitting out images of her running away from the camera rather than towards the camera as in the original. So I moved the sliders all over the place to try to get that right, but then it would show two people on the trail. I can see it's gonna take some time to get it to do what I want... 😆

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      I feel you, man. It takes a while to get something to look right, and no one setting works for all circumstances.

    • @westingtyler2
      @westingtyler2 1 year ago

      Check out Royal Skies' video "Understanding Prompt Formats in 3 Minutes"; super helpful. I do 150 sample steps, use the Euler (not Euler a) sampler, get a good image first to use as a "base" for the rest of the attempts, and I put the artist names at the end. Sometimes I open an image in an image editor and reframe, add, or remove details, very roughly, with colors and shapes. Then I use that in img2img to generate new things.
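
      Restating that workflow as a settings sketch (the key names are illustrative, not any particular UI's API; the values follow the comment above):

          # Illustrative img2img settings; names are made up for clarity.
          settings = {
              "sampler": "Euler",        # deterministic Euler, not "Euler a"
              "steps": 150,
              "init_image": "base.png",  # a good "base" image reused for later attempts
              # subject and scene first, artist names at the end:
              "prompt": "runner on a forest trail, detailed, by <artist names>",
          }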

  • @chessmusictheory4644
    @chessmusictheory4644 1 year ago

    Excellent video. I did not realize that people were using this to create videos via img2img. I have been using Stable Diffusion for some time now to create the skins for my CGI characters, to great effect. Now, with this img2img video idea, I think I'll be exploring it to make the backgrounds for my CGI characters. You mentioned no one has created a smooth render so far by doing img2img on the frames? Perhaps you should try doing one at 60 frames per second, but keep in mind that you need an animation that was rendered at 60 fps, or to record a video at 60 fps; this may give you much better fluidity. There is another part of the UI you can play with that can give you more exacting images: down below there is a script section in img2img. Play with the X/Y script, add more steps, and lower the resolution, but no lower than 512x512, for speed and exactness; then you can upscale later with R-ESRGAN 4x+ Anime6B for super smooth images. Cheers.

  • @KentHambrock
    @KentHambrock 1 year ago +23

    For doing a combo with ebsynth that's similar to this one, I'd recommend filtering out the model and choosing a single background image that suits the style. If you process everything in ebsynth with a transparent background and add the background in your video editor, it should look a lot cleaner, even if the model's basic image keeps changing. You could also try finding a style you like and generating enough images with similar enough features that you can run reverse inference on them to train the model's basic image, helping ensure it generates the same basic model design each time, which should also help with flickering and inconsistencies. This is all stuff I've been meaning to try personally, but haven't had time. Really looking forward to seeing what you do with ebsynth and img2img combined. (A compositing sketch follows this thread.)

    • @800pieds
      @800pieds 1 year ago

      So you're saying you haven't tried it yet with a separate background?

    • @cwdoby
      @cwdoby 1 year ago +1

      This is exactly what I was thinking. Green-screen an actor and then try this. I think we can really smooth this out with just a few tricks. It's so damn fascinating where we are now with this technology.

    • @jamesdrummond187
      @jamesdrummond187 1 year ago

      Not sure if you are saying you can AI generate just the model and keep the background the same/untouched by the AI. I would REALLY like to see this if it is possible without a lot of post editing. I mean even if there was something that could provide a mask automatically would be nice for post production but on the fly would be even better. Would be cool to see streamers somehow use this technology. Somehow sync the output of AI and the content they are showing. Exciting times for individual creators.

    • @KentHambrock
      @KentHambrock 1 year ago

      @@jamesdrummond187 You would likely need to do the img2img like he does here, then rotoscope out the model. I haven't explored any methods of doing this automatically, but I'm sure some good methods exist to speed up the process. It's possible he could also request the img2img to replace the background with a green screen, though I haven't tested this so I'm not sure if it's possible or how reliable it would be.

    • @KentHambrock
      @KentHambrock 1 year ago

      @@800pieds I haven't had the time to try it at all, but I know ebsynth works best when the background is applied in post. It can get warped and distorted as the model moves, otherwise.
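
    A rough sketch of the fixed-background compositing idea from this thread (paths are placeholders; assumes the stylized frames come out with transparent backgrounds and match the background image's size):

        from pathlib import Path
        from PIL import Image

        background = Image.open("background.png").convert("RGBA")
        out_dir = Path("composited")
        out_dir.mkdir(exist_ok=True)

        # Paste each transparent-background frame over the one chosen background,
        # so only the foreground model changes from frame to frame.
        for frame_path in sorted(Path("frames_rgba").glob("*.png")):
            frame = Image.open(frame_path).convert("RGBA")
            Image.alpha_composite(background, frame).save(out_dir / frame_path.name)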

  • @JamesPound
    @JamesPound 1 year ago +1

    Welp, everything I want to know is going to be in the second video, I guess. I want to know how you got it looking smooth.

  • @billionaeris1183
    @billionaeris1183 1 year ago +28

    This is insane. Give it 5 years or less and I can see this being consistent and the animation industry changing, for better or worse.

    • @jagz888
      @jagz888 1 year ago +1

      It won't be able to produce anticipation, squash and stretch, or exaggeration (core animation principles) unless the reference material contains them, which isn't possible since we can't break real humans in footage. Also, hyper-dynamic camera angles like Attack on Titan would require absurd camera cranes, hundreds of thousands of dollars of equipment, and stunt teams to create reference footage for the AI; there's a reason Hollywood doesn't do Attack on Titan camera angles and action POV, it's just too difficult. The AI is only as good as the reference footage supplied. Animators will be fine; this is a fancy tweening tool for production.

    • @creativeleodaily
      @creativeleodaily 10 months ago +1

      It's already happening. Being an Indian with a 2D and 3D animation background, I talk to all my contacts in India, and many are already using and developing the AI further to improve the process.

  • @ivideogameboss
    @ivideogameboss 1 year ago

    I posted this comment on your other video. Your 13:12 animation just got shown live on stage at the Stable Diffusion presentation today. Look up "Stability Diffusion announcements" from Robert Scoble; it appears at 50:27. Congrats!

  • @top115
    @top115 1 year ago +2

    Temporal consistency is missing, but we have the previously generated image; it can't be too hard to diffuse in something very similar looking SOON (tm).

  • @lockswap
    @lockswap 1 year ago +1

    Idk if you've heard, but there is a script, img2img HD, that has long render times, but you can upscale the images and it re-renders the image to upscale the detail in the changes made. Future is here, fr.

  • @izac9382
    @izac9382 1 year ago +2

    FYI, you can make your animation match your original by making sure they are the same frame rate. When you import your jpg/png sequence: right-click > Interpret Footage > Main > change the frame rate to match your original > OK.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Oh yeah, that's right! I've done that in Premiere before. Thank you for the info!

  • @creativeleodaily
    @creativeleodaily 10 months ago

    Thanks for your other videos. I am running an AMD 2700X + GTX 1660 Super, and a half-HD image easily renders in 2 minutes. I did a test with footage that was 15 seconds long at 60 fps, halved the frames to 30 fps, and the conversion took almost ±13 hours; the quality came out awesome... (See the quick math after this thread.)

    • @creativeleodaily
      @creativeleodaily 10 months ago

      My recent Shorts video is my first ControlNet test; it was very basic. I didn't even know about the feature to restore faces, but my face still came out almost perfect, with very little flickering. Now I hope to convert some more shots, so I am very excited for the whole NMax moto vlog... 😅
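
    The quick math on the render above (assuming the stated 15 s at 60 fps, halved to 30 fps, at roughly 2 minutes per frame):

        frames = 15 * 60 // 2    # 15 s x 60 fps = 900 frames, halved -> 450
        hours = frames * 2 / 60  # ~2 min each -> 15 hours
        print(frames, hours)     # 450 15.0, in the ballpark of the ~13 h reported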

  • @ZeroCool22
    @ZeroCool22 1 year ago +1

    Also, a video on training Hypernetworks would be nice; it's kind of like Dreambooth, but some people say it's better.

  • @maricii33
    @maricii33 1 year ago

    Great video, thank you! Can you please share some of your opinions on copyright and commercial usage of this technology? I'm hoping to create a music video, and from what I've heard, AI tech has issues with copyright violations.

  • @ANGEL-fg4hv
    @ANGEL-fg4hv 11 months ago

    He said... "Works fine with a 3080".
    Such humble words.

    • @enigmatic_e
      @enigmatic_e  11 months ago

      😂 I was fairly new to all this when I made this video. I had no idea about GPUs.

  • @Amelia_PC
    @Amelia_PC 1 year ago +1

    0:01 Terrible anatomy. As an artist, it takes me some time to fix AI-generated anatomy issues, but it still saves time because of the colors. HOWEVER, img2img generates super cool environments based on my colored sketches, and it follows the perspective (if it's below 0.5 influence strength). I love it :)
    (Congrats on your new GPU! The RTX 3080 is great!)

  • @John_Krone
    @John_Krone 1 year ago

    Davinci Resolve generates frames in between to create smoother animations.

  • @samikshanchandrabiswas7243
    @samikshanchandrabiswas7243 1 year ago

    You're a fcking legend, dude, honestly.

  • @outboxfpv4360
    @outboxfpv4360 9 months ago

    Good one, but any advice on how to reduce the flickering and random animation?

  • @James-ip1tc
    @James-ip1tc 1 year ago

    Where do I find that program you mentioned that makes still images move? Thank you for your videos.

  • @iamYork_
    @iamYork_ 1 year ago +1

    Wow... You bought a brand new GPU just to run locally... Now I know you're serious about AI... haha... great video... keep it up, my friend...

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Yeah, that, and my old GPU was already giving me trouble, but let's pretend it was because of my passion for AI! 😂

    • @iamYork_
      @iamYork_ 1 year ago

      @@enigmatic_e It is for the AI!!! YouTube monetization coming soon to an enigmatic_e near you!!!

  • @juwu8
    @juwu8 1 year ago

    On TikTok, if I saw a very well done video, I removed the background, and the vibrations disappear in the image, even on the body.

  • @RagingGamingBear
    @RagingGamingBear 1 year ago +1

    Can you make a vid on how to make PC wallpapers? :D

  • @plachenko
    @plachenko 1 year ago +9

    11:15 This isn't a competition... this is a tool that enables a different form of creativity and encourages exploration of iterative ideas. Seeing it as competition forces everyone to lose. Great work on the result! I was impressed when I saw it a few days ago.

  • @subhankitbasu620
    @subhankitbasu620 1 year ago

    Corridor Crew just did an animation!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Just watched! That was very impressive!

  • @LegendD112
    @LegendD112 1 year ago

    Lol, good tutorial. How do you use Midjourney's style? Thank you!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Download it from the description and put it into the models-stablediffusion folder.

  • @ohmikegold
    @ohmikegold 1 year ago +2

    Where can I download the Disco Diffusion style? Nice video as always.

    • @AscendantStoic
      @AscendantStoic 1 year ago +1

      Check the Aitrepreneur video about styles; he shared both the Disco Diffusion and the Midjourney style models.

  • @sub-jec-tiv
    @sub-jec-tiv 1 year ago

    It will improve, wait a year or two. AI will be taking hella jobs.

  • @gxrsky
    @gxrsky 1 year ago +1

    Like for the Goosebumps!

  • @ceciliatabbi4218
    @ceciliatabbi4218 1 year ago

    Do you have a particular tutorial you could recommend on how to install Stable Diffusion? Thanks!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Yes! Here ruclips.net/video/vg8-NSbaWZI/видео.html

  • @movietrailersvfx1318
    @movietrailersvfx1318 1 year ago

    Great tut! Just found you. I could swear you wanted to say "first of all, the two settings you want to FUCK with" at 04:57.

  • @tayballtop
    @tayballtop 1 year ago

    Have you made a video on Stable Diffusion and Ebsynth yet?

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      No, not yet. I want to do that for my next video.

  • @Im_Derivative
    @Im_Derivative 1 year ago

    Add frame blending; it'll make it wayyyyyyyy smoother.
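
    A crude stand-in for frame blending (in an editor this is a timeline switch; this naive version just averages neighbouring frames, with paths assumed):

        from pathlib import Path
        from PIL import Image

        frames = sorted(Path("frames").glob("*.png"))
        out = Path("blended")
        out.mkdir(exist_ok=True)

        # Average each frame with the previous one to soften frame-to-frame flicker.
        for prev, cur in zip(frames, frames[1:]):
            a = Image.open(prev).convert("RGB")
            b = Image.open(cur).convert("RGB")
            Image.blend(a, b, 0.5).save(out / cur.name)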

  • @imshypleasebenicetome.5344
    @imshypleasebenicetome.5344 1 year ago

    My 1080 Ti runs the locally installed Stable Diffusion no problem. I didn't even know about the whole compatibility issue, lmayo.

  • @purposefully.verbose
    @purposefully.verbose 1 year ago

    Try 15 fps, doubled, with frame interpolation set to optical flow.
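
    One way to try that with ffmpeg's motion-compensated minterpolate filter (file names are placeholders; assumes ffmpeg is on PATH):

        import subprocess

        # Assemble 15 fps frames into a 30 fps clip, synthesizing the in-between
        # frames with motion-compensated (optical-flow-style) interpolation.
        subprocess.run([
            "ffmpeg", "-framerate", "15", "-i", "frames/%04d.png",
            "-vf", "minterpolate=fps=30:mi_mode=mci",
            "out_30fps.mp4",
        ], check=True)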

  • @TOPFACTSjames
    @TOPFACTSjames 1 year ago

    What are the RAM and GPU requirements for Stable Diffusion to work smoothly?

  • @RenderBenderProductions
    @RenderBenderProductions 1 year ago +1

    Do you have a video on how to install Disco Diffusion locally?

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      I don't. I tried it once but hit a roadblock, and couldn't find any answers since most people had moved on to Stable Diffusion.

    • @RenderBenderProductions
      @RenderBenderProductions 1 year ago

      @@enigmatic_e That's sad. I've been looking into trying to do that, but I can't find any way to really do it.

  • @kasialovska1206
    @kasialovska1206 8 months ago

    People, help:
    IndexError: list index out of range

  • @user-kl6pg9ie4z
    @user-kl6pg9ie4z 1 year ago +1

    Hello, what version of Stable Diffusion do you use in this tutorial? Thanks.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      It’s an older version. You can check out some of my newer videos for more current versions.

  • @tonystark4872
    @tonystark4872 1 year ago

    What version of Stable Diffusion are you using? Can you share the link?

  • @bustedd66
    @bustedd66 1 year ago

    My main issue is I would need a second system so I can work while it runs SD. Some animations I did took 5 or 10 hours, meaning if it was local I couldn't use the system while it was running, I'm assuming.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Yeah, I totally get that issue. The great thing about Google Colab is that it's not processing anything on your actual computer. Maybe there are alternatives; I'll definitely look into it, because I know this doesn't work for everybody.

    • @bustedd66
      @bustedd66 1 year ago +1

      @@enigmatic_e I am saving for a system next year just for AI. All I can say is that SD is awesome; I would hate to be stuck with DALL-E or MJ.

  • @weareeternal3230
    @weareeternal3230 11 months ago

    What SD model are you using?

  • @TruthSeekah
    @TruthSeekah 1 year ago

    Are you for hire? lol. I'd love to have a music video done in this style.

  • @evilra6552
    @evilra6552 1 year ago

    Can I apply this in the Colab version?

  • @spider853
    @spider853 1 year ago

    What repository are you using? Automatic?

  • @alexxx4434
    @alexxx4434 1 year ago

    I suppose we just need more precise seed control developed for smooth, consistent results.

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      Check my part two of this, I got some good consistent results! ruclips.net/video/xtFFKDgyJ7A/видео.html

  • @MatichekYoutube
    @MatichekYoutube 1 year ago +1

    Hey, how did you get the Midjourney ckpt??? If you can say, link it in the description please :)

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      It's in the description now.

    • @MatichekYoutube
      @MatichekYoutube 1 year ago

      @@enigmatic_e Oh, thank you. That is the set from that other tutorial, right? The guy you mentioned.

  • @scofield168
    @scofield168 1 year ago

    For the image you used at 05:57: the girl shut her eyes. How can we make sure img2img creates the same facial expression?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      There's a new thing called ControlNet that you can use with SD. It tracks the footage way better. Here's a video about it: ruclips.net/video/1SIAMGBrtWo/видео.html

  • @Wijnfles
    @Wijnfles 1 year ago

    Do you know why on Mac (on M1 Apple Silicon) you can't see any seed numbers? Nothing pops up. Where do I find these numbers?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Mmm, not sure. Sorry, I mainly use Windows.

  • @EricHKmedia
    @EricHKmedia 1 year ago

    I am using Google Colab and the path does not work; internal Stable Diffusion works. Do I need any settings for this case?

  • @ramilgr7467
    @ramilgr7467 1 year ago

    Where did you get such an interface? Thank you!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I have a tutorial on how to install it.

  • @jinyizhang455
    @jinyizhang455 1 year ago

    Hello, can I put the Midjourney Artstyle files on Google Drive?
    Thank you!

  • @zvit
    @zvit 1 year ago

    What dark mode extension or theme do you use to get those blue colors?

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      It's the appearance settings in my Edge browser.

  • @SouvikKarmakar1
    @SouvikKarmakar1 1 year ago

    Hi, where did you get the other models, like the Disco and Midjourney styles?

  • @Snafu2346
    @Snafu2346 1 year ago +1

    when I see the animation I think
    Take on meeee. Take on me
    Take meeee on. Take on me
    I'll beeeee gooone. Dooo Doo da dooo DOOOOOoooo

  • @Rivershield
    @Rivershield 1 year ago +1

    I'm wondering if AI could, for example, generate the in-between frames of an animation. Do you guys know of any that can do that?

    • @MEATHEADBooYA
      @MEATHEADBooYA 1 year ago

      Yeah, Nvidia, AMD, and Intel all started doing it with their latest graphics cards. :D

  • @craftywafty7869
    @craftywafty7869 1 year ago

    I thought it could generate animated frames on its own; so you still need a series of animated frames for it to copy?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Yeah, you need a sequence of images.

  • @Complaints-Department
    @Complaints-Department 1 year ago

    I'm a bit of a novice when it comes to this stuff and still trying to wrap my head around it. Just curious whether people think an RTX 3070 card would be sufficient to run these processes on a gaming laptop?

    • @fritt_wastaken
      @fritt_wastaken 1 year ago

      Basically, for quality, only video memory matters. You can get away with 4 GB at small resolutions, but higher resolutions tend to give a more detailed result. There's practically no upper limit on how far you can push this.

  • @Techonsapevole
    @Techonsapevole 1 year ago

    I hope AMD GPUs will also work with that.

  • @daciansolgen
    @daciansolgen 1 year ago

    Hm? I don't have batch img2img in my Stable Diffusion.

  • @Coltography
    @Coltography 1 year ago

    Where do you get the Midjourney ckpt model?

  • @SUNRIDERMR6
    @SUNRIDERMR6 1 year ago

    Hi, please help. I repeat your actions exactly, but after generation a black picture comes out. I looked on Reddit and followed the instructions to turn off the check, but it still gives a black picture. My picture is like yours, and the prompt too.

  • @Dah.Kaal.Of.Ktu-luum
    @Dah.Kaal.Of.Ktu-luum 1 year ago

    8:25 The results are amazing, dude!!! Great job, by the way!!! 😉👍... Is it necessary to use an RTX 3080 graphics card??? 🤔🤨😶

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I know Nvidia cards work, but I'm not sure about the minimum requirement. I think it may need to be 6 GB or 8 GB and higher.

  • @Dglinski2
    @Dglinski2 1 year ago

    Roughly how long are you waiting for generations with the new GPU? I'm thinking of running to Best Buy to grab a 3060 12 GB just to start playing around before I commit $1000+ on a better GPU.

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      It takes roughly 10-15 seconds per image. I haven't really timed it, but it doesn't feel very long.

    • @PiratePrinceEdward
      @PiratePrinceEdward 1 year ago +3

      Hey dude - I bought a 3060 ($400) and it works pretty well. It's obviously going to take longer than a 3080, but I have been able to do basically anything I want with it, no problem.

    • @Dglinski2
      @Dglinski2 1 year ago +1

      @@enigmatic_e @Mark Hall Thanks, y'all. Just picked up a 3060 and hope to have it set up by tonight :)

  • @EduardoAlves-gd4oo
    @EduardoAlves-gd4oo 1 year ago

    Does this run with a GTX 1660 Super, or do I need to upgrade my GPU?

  • @dan323609
    @dan323609 1 year ago

    Where can I download the Midjourney style in ckpt format?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Aitrepreneur has a link; I have a link to his channel in my description.

  • @okachobe1
    @okachobe1 1 year ago

    When I drag my height and width sliders around, it doesn't highlight anything on the image to show the correct resolution. Is there an option to toggle that on and off?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Interesting. Not sure why it’s not doing it. Might have to check settings or update?

    • @okachobe1
      @okachobe1 1 year ago

      @@enigmatic_e Yeah, I'll have to look around, I guess; I just recently got the program set up too.
      I'll take a look. Thanks for the fast reply!

  • @sadau16dec
    @sadau16dec 1 year ago

    Where can I get the Disco Diffusion checkpoint model? Can you please share it?

  • @jamesdrummond187
    @jamesdrummond187 1 year ago +2

    For those that are worried: people will still need graphic designers to actually use the AI software. Just as we moved from pen and paper to drawing on a computer to be faster, to me it is the same with using AI. As someone who cannot draw that well but loves art, I love that I can create art with AI. However, graphic designers still have way more knowledge of what makes things look "better", being able to modify that art manually and create a consistent style across a product line. Also, until AI can make videos that compete with higher-quality content, graphic designers will be needed, but maybe less to design the original content and more to refine the AI results. Art is kinda this way already, as we take images into our brains or look up concept art to base our art on. The problem is we are slow at generating our art, or at least mockups for others to sign off on. I would argue that more graphic designers might be needed in the future as demand for unique content grows and the ease of creating applications/websites for companies continues.

  • @PlayerGamesOtaku
    @PlayerGamesOtaku 1 year ago +1

    Hello, in Settings I don't have the screen that you show. In fact, I could not check whether I had Disco Diffusion. Could you help me?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      You have to install Stable Diffusion: ruclips.net/video/qmnXBx3PcuM/видео.html