NEXT-GEN NEW IMG2IMG In Stable Diffusion! This Is TRULY INCREDIBLE!

  • Published: 13 Feb 2023
  • ControlNet is a brand new neural network structure that, through the use of different specialized models, creates image maps (edges, depth, pose, and more) from any image and uses that information to transfer structure from one image to another, making it a really powerful option in the image-to-image tab inside Stable Diffusion! This gives you even more power and control over the final result than before. So in this video, I will show you how to install the extension and how to use the different models to get the best results possible! (A minimal programmatic sketch of the same idea appears just below.)
    Did you manage to install the extension? Let me know in the comments!
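
    For readers who prefer code to the webui: here is a minimal sketch of the same idea using Hugging Face's diffusers library. This is an illustration under stated assumptions, not the extension's own API; "input.png" and "output.png" are placeholder file names.

    ```python
    # Minimal sketch of the ControlNet idea via the diffusers library (the video
    # itself uses the sd-webui-controlnet extension; this is an equivalent
    # programmatic illustration, with "input.png" as a placeholder file name).
    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # 1. Preprocess: turn the source image into a control map (here, Canny edges).
    source = np.array(Image.open("input.png").convert("RGB"))
    edges = cv2.Canny(source, 100, 200)                   # single-channel edge map
    control = Image.fromarray(np.stack([edges] * 3, -1))  # 3-channel image for the pipeline

    # 2. Load a ControlNet trained on that map type, plus the base SD 1.5 weights.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # 3. Generate: the prompt restyles the image while the edge map pins the layout.
    result = pipe("a watercolor painting of a cottage", image=control).images[0]
    result.save("output.png")
    ```

    The preprocessor and the ControlNet checkpoint must match: the depth, pose, and scribble variants follow the same pattern, each with its own map type.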
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    🧠 My Second Channel THE MAKER LAIR: bit.ly/themakerlair
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    Miro board: miro.com/app/board/uXjVPnB9L2...
    Extension URL: github.com/Mikubill/sd-webui-...
    The models: huggingface.co/lllyasviel/Con...
    Special thanks to Royal Emperor:
    - BSM
    Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
    #stablediffusion #controlnet #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stablediffusion
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlaylist
    Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

Comments • 346

  • @NC17z
    @NC17z 1 year ago +13

    This extension is amazing! I'm having an absolute blast with it. It solves so many of my problems with matching the look of realistic photos to the image I'm feeding it and to my prompt. Thank you so much for what you do. You've been my first go-to on YouTube for weeks!

    • @peterbelanger4094
      @peterbelanger4094 1 year ago +1

      I'm having fun with the sketch preprocessor. Runs fine on my GPU (GTX 1060 6GB), not even using the --lowvram option.

  • @Rafael64_
    @Rafael64_ 1 year ago +8

    Not even a week goes by without a significant image-to-image enhancement. What a time to be alive!

  • @coda514
    @coda514 1 year ago +2

    Saw info about this on Reddit, I knew you would put out a how-to video so I waited to install. Glad I did, you did not disappoint. Sincerely, your loyal subject.

  • @Alex-dr6or
    @Alex-dr6or 1 year ago +7

    This is exactly what I need. Last night I was having fun with the blend option in Midjourney and wished SD had something similar. This video came at the perfect time.

  • @StiffPvtParts
    @StiffPvtParts 1 year ago +2

    Yet again, you have managed to blow my mind! Thank you for showing this amazing new functionality! Feels like these tools are getting more insane every single day.

  • @o.b.1904
    @o.b.1904 1 year ago +80

    The pose one looks great; you can pose a character in a 3D program and use it as a base.

    • @mactheo2574
      @mactheo2574 1 year ago +9

      What if you use your own body to pose?

    • @vielschreiberz
      @vielschreiberz 1 year ago +7

      Perhaps it will be useful with some simplified solutions like Daz 3D, or with a pose library.

    • @Amelia_PC
      @Amelia_PC 1 year ago +10

      Yup. I've been using Daz to help me with comic book character poses for years, and it takes only seconds to put a character in different poses (if a person says it takes longer than that, they're a newcomer or don't have much experience with 3D programs).

    • @pladselsker8340
      @pladselsker8340 1 year ago +4

      Yeah, the 3D software idea is actually a good one if you implement a decent inverse kinematics rig on the model. It can probably save time and be faster than a Google search for simple (or really specific and complex) poses. I was learning how to do that last night in Blender; I'm almost done with the model, and it's actually not too hard to make.
      You don't even have to render anything, by the way: you just take a screenshot when the angle and everything look okay, and then paste that into the webui. Works like a charm.

    • @sonydee33
      @sonydee33 1 year ago

      Exactly

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +41

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @MrArrmageddon
      @MrArrmageddon 1 year ago +1

      Do we know if these are safe? I don't even know how to scan pickle files anymore; I have avoided them for months. Amazing video, by the way. Thank you.

    • @joachim595
      @joachim595 1 year ago

      “Type cimdy”

    • @peterbelanger4094
      @peterbelanger4094 1 year ago +1

      👍👍👍👍👍👍👍 Great extension! Paused the video, got it all downloaded and installed before I finished watching. Runs fine on my GTX 1060 6GB, even without the --lowvram option; it's actually 10x faster without it. Only the xformers option is needed.

    • @MrArrmageddon
      @MrArrmageddon 1 year ago

      @@peterbelanger4094 If you can, Peter: I have an RTX 4080 16GB and I've never used xformers. Should I look into them, and if so, what purpose do they serve? lol. If you can't explain, that's fine.

    • @zwojack7285
      @zwojack7285 1 year ago

      What SD version are you using? I only have... 1.5, I think, and the extension only appears in txt2img, not in img2img.

  • @ysy69
    @ysy69 1 year ago

    Downloading the models now and will try. Thank you so much; this seems very powerful. In fact, I've been spending more time on img2img lately, and even in its current state it is fantastic... I can't imagine the possibilities with this new extension.

  • @gloxmusic74
    @gloxmusic74 1 year ago

    Installed it straight away and can honestly say I'm super impressed 👍👍👍

  • @dreamzdziner8484
    @dreamzdziner8484 1 year ago +2

    Wow. So exciting. Thank you dear Overlord 💪🙏🏽

  • @Unnaymed
    @Unnaymed 1 year ago +4

    It's epic: the power of Stable Diffusion has been upgraded! ❤️

  • @ristopaasivirta9770
    @ristopaasivirta9770 1 year ago +16

    My biggest complaint about SD has been the lack of control.
    To make comic books and the like, you need to be able to precisely control the pose of the characters.
    Gonna see how well this holds up.
    Thank you for the video!

  • @SteveWarner
    @SteveWarner 1 year ago

    Top notch training! Thanks for this comprehensive overview! Looking forward to testing this out!

  • @DjDiversant
    @DjDiversant 1 year ago +2

    Installed it just a couple of hours before the vid. Thx for the tut!

  • @A_Train
    @A_Train 1 year ago +6

    Thanks for being on the bleeding edge of this and imparting your knowledge to artists like me. My question is: what is the best way to use Stable Diffusion in Blender? I used a version like 3 months ago, and it already seems so outdated.

  • @toddzircher6168
    @toddzircher6168 1 year ago

    Thank you for the wonderful walkthrough of this new extension. I have a lot of 2D pose sheets/sketches from various artists, and I can totally see using them with ControlNet.

  • @upicks
    @upicks 1 year ago

    Simply amazing, thanks for the video! This makes img2img even better than I could have imagined.

  • @thebonuslvl7181
    @thebonuslvl7181 1 year ago

    Magic... so much more to come. Thank you for keeping us on the leading edge.

  • @kylehessling2679
    @kylehessling2679 1 year ago +1

    I've been wishing for this since day one of using SD! This is going to be so useful for generating versions of my graphic design work!

  • @purposefully.verbose
    @purposefully.verbose 1 year ago

    I saw people talking about this concept on several other channels, and they were all like "I hope this comes out for auto1111," and I'm all "it is out!" - and linked this video.
    Hopefully you get more subs.

  • @yeahbutcanyouredacted3417
    @yeahbutcanyouredacted3417 1 year ago

    Amazing tool -
    it solves a lot of the RNG for us, getting closer to the designs we're looking for.
    Ty again Aitrepreneur for helping get my home studio going.

  • @harambae117
    @harambae117 1 year ago

    This looks like a lot of fun and really useful for professional work. Thanks for sharing, dear AI overlord.

  • @cinemantics231
    @cinemantics231 1 year ago +2

    This just keeps getting better and better! Thanks for putting this together. Is there any way to merge two different images? Like take the pose from one image and implement it in the style or background of another?

    • @pastuh
      @pastuh 1 year ago

      It's called inpainting; just use the Photoshop plugin for this.
      Paint over (or place an image on) a different layer and click inpaint.

  • @Varchesis
    @Varchesis 1 year ago

    This is insanely great! Thanks for sharing this info.

  • @SnowSultan
    @SnowSultan 1 year ago +80

    If this works as well as it appears to, it is both game-changing and life-changing for artists like me who work in 3D but want more illustrated or toony results. I've waited 24 years to be able to make true 2D art with 3D methods. Even if it's not perfect yet, this gives me hope.

    • @muerrilla
      @muerrilla 1 year ago +2

      I'm playing around with the scribble model and I'm absolutely blown away!

    • @Smokeywillz
      @Smokeywillz 1 year ago +2

      Topaz Studio 2 was coming close, but THIS is next level.

    • @PriestessOfDada
      @PriestessOfDada 1 year ago +2

      I had the same thought. Makes me want to train my CC4 characters.

    • @SnowSultan
      @SnowSultan 1 year ago +3

      @@PriestessOfDada I've had decent luck using untextured DAZ figures as ControlNet pose references, but I don't know what the results would be if you trained a checkpoint or LoRA on a complete 3D character. If you can still get 2D or anime results from it... well, I'll have a lot of training to do. ;)

    • @mrhellinga9440
      @mrhellinga9440 1 year ago

      this is pure gold

  • @haidargzYT
    @haidargzYT 1 year ago +1

    Cool 😮 The AI community keeps surprising us every day.

  • @DOntTouCHmYPaNDa
    @DOntTouCHmYPaNDa 1 year ago +1

    Awesome, thanks for sharing! Btw, if you click the down arrow just to the right of the letters LFS, it downloads the models without opening another tab. Small but useful tip :)

  • @cybermad64
    @cybermad64 1 year ago

    Thanks a lot for sharing your Miro boards; those are exactly the tests I would run myself to understand how the system works. You're saving us a lot of investigation time! :)

  • @xnooknooknook
    @xnooknooknook 1 year ago +3

    I really like txt2img but img2img has been where I've spent most of my time. Scribble mode looks amazing! I need it in my life.

  • @user-bm9oy4gx2l
    @user-bm9oy4gx2l 1 year ago

    Thanks again for the good content! The pose thing looks interesting 👀

  • @GolpokothokRaktim
    @GolpokothokRaktim 1 year ago +1

    I recently experimented with BlueWillow and I'm really amazed by it. BlueWillow recently launched V2 with a brand new model upgrade; now I get better-quality images with more aesthetic outputs.

  • @EntendiEsaReferencia
    @EntendiEsaReferencia 1 year ago

    I've been waiting for the 1.5 depth model, and now it's here, and with a few friends 🤗🤗

  • @jivemuffin
    @jivemuffin 1 year ago

    Nice, comprehensive video -- and thanks for the Miro board in particular! Makes me think there's great potential for AI workflows in there. :)

  • @JohnVanderbeck
    @JohnVanderbeck 1 year ago

    I'm drooling at the thought of using the pose model!

  • @XaYaZaZa
    @XaYaZaZa 1 year ago +1

    My favorite YouTuber 🧡

  • @Einscrest
    @Einscrest 1 year ago +4

    Thanks for another great vid! I'm most interested in the OpenPose model, because with it you won't need to prompt the pose as much. It seems from the video that it can retain some details like clothing color, so the prompt still needs to ask for changes. Very interesting.
    Edit: Some more interesting things:
    - It can accept characters, provided the model knows them (Azur Lane characters, for example).
    - As in the video, it does eat up VRAM fast; my Linux box almost crashed one time lol.

  • @digitalkm
    @digitalkm 8 months ago

    Awesome, thank you!

  • @azmodel
    @azmodel 1 year ago

    Absolutely crazy. Thanks!

  • @SuperEpic-vb8nq
    @SuperEpic-vb8nq 1 year ago +4

    This is absolutely amazing. My only complaint is that it doesn't seem to work with batch img2img. If that gets working, this could easily solve the issue with Stable Diffusion videos where details tend to be "sticky" because the seed doesn't shift with the video. This could help stabilize them.
    Edit: after an update, it works with batch img2img and does exactly what I wanted. What a time to be alive!

    • @Ich.kack.mir.in.dieHos
      @Ich.kack.mir.in.dieHos 1 year ago

      Yo, do you make videos in Stable Diffusion? Because I do, and I'm interested in batch mode/animation with consistent characters and places. Can we connect on Insta?

  • @IlRincreTeam
    @IlRincreTeam 1 year ago +1

    This is VERY impressive

  • @rickardbengtsson
    @rickardbengtsson 1 year ago

    Great breakdown

  • @CrazyEditsCrazy
    @CrazyEditsCrazy 1 year ago +1

    this is awesome

  • @s.foudehi1419
    @s.foudehi1419 10 months ago

    This is truly next-level stuff; I'm glad I found this video. Has anyone already tried creating a depth map with ControlNet and then using it to create a 3D model in Blender? There are some good tutorials on here as well; you might want to check those out :)

  • @sebastianclarke2441
    @sebastianclarke2441 1 year ago

    Why have I only heard about this for the first time today!? Wow!!

  • @h8f8
    @h8f8 1 year ago

    Never knew you could type cmd in the address bar of the top directory... thank you so much.

  • @phatbuihong4014
    @phatbuihong4014 1 year ago

    Thank you so much.

  • @xd-vf1kx
    @xd-vf1kx 1 year ago

    so cool! I love ya!

  • @dovrob
    @dovrob 1 year ago

    thanks so much mate

  • @bryan98pa
    @bryan98pa 1 year ago

    Wooow, I like this new tool!!

  • @girasan
    @girasan 1 year ago

    thank you so much 🙂

  • @gohan4585
    @gohan4585 1 year ago

    Thank you sensei bro 🙏

  • @BenPhelps
    @BenPhelps 1 year ago +1

    Brilliant. Any tips for installing on an M1 Mac?

  • @flonixcorn
    @flonixcorn 1 year ago

    Very Nice!

  • @leafdriving
    @leafdriving 1 year ago +1

    Dear AI Overlord: thank you, as always, for being ahead of the pack and showing me something useful and amazing. In every example, your two images (input above and input below) are the same ~ what happens if they are different? (Naturally, I couldn't possibly just try it lol)

  • @oldaccountfornow1111
    @oldaccountfornow1111 1 year ago

    Big Thanks

  • @Irfarious
    @Irfarious 1 year ago

    I love the way you say "down below"

  • @unknowngodsimp7311
    @unknowngodsimp7311 1 year ago +1

    This is awesome, but I'm kind of confused about how the three "inputs" relate to each other. Perhaps this is me just not understanding img2img. Basically, my question is: how do the prompt, the image, and the second image (for the depth map) relate to each other in the resulting image? Doesn't this new extension mean we could (also) use only a prompt plus the depth-map image to generate an image? I would love an in-depth answer 🙏

  • @desu38
    @desu38 1 year ago

    Goddamn, the webui just keeps getting more and more powerful. 😯

  • @Rickbison
    @Rickbison 1 year ago

    I finished my last short with the old img2img. Downloading all the models now; let's see how this goes.

  • @Fingle
    @Fingle 1 year ago +1

    NO WAY THIS IS INSANE

  • @brokencreationlordmegatrol3037

    Ooooooo! Very useful

  • @StrongzGame
    @StrongzGame 1 year ago

    I need this video on a flash drive for reference forever.

  • @artfoolmonkey2866
    @artfoolmonkey2866 1 year ago

    Hi, and thanks for your amazing guides. I followed the steps, but for some reason I only have access to controlnet-m2m and can't find any other options. Are there any other prerequisites before installing the ControlNet extension, or something to configure?

  • @mikishomeonyoutube2116
    @mikishomeonyoutube2116 1 year ago

    This is TRULY INCREDIBLE!

  • @shongchen
    @shongchen 1 year ago +1

    Hello, I have a question: how do I find the "control Stable Diffusion with human pose" tab? Thank you for sharing.

  • @kuromiLayfe
    @kuromiLayfe 1 year ago +2

    Cannot wait for this to be an extension for txt2img too: prompt for a specific character, then add a scribble or preprocessed image to get the described character in the pose you want.

    • @BlackDragonBE
      @BlackDragonBE 1 year ago +1

      It already has this.

    • @BlackDragonBE
      @BlackDragonBE 1 year ago

      @@ClanBez Open the txt2img tab with the extension this video explains installed. At the bottom of the tab you can use ControlNet just like in img2img, including the scribble model. By providing a prompt and a scribble, you can generate images with lots of control. I suggest lowering the Weight to 0.25-0.5 to start with, as you can otherwise get some weird results depending on your drawing skills. Good luck. (A rough programmatic equivalent of this low-weight setup is sketched after this thread.)

    • @zwojack7285
      @zwojack7285 1 year ago

      For some reason it only shows up in txt2img for me lmao.
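
    For readers following along outside the webui: a minimal sketch of the scribble-plus-low-weight workflow described in the thread above, using the diffusers library. Treating controlnet_conditioning_scale as the rough equivalent of the webui's "Weight" slider is an assumption, and "my_scribble.png" is a placeholder for your own sketch.

    ```python
    # Scribble-conditioned txt2img: a rough sketch steers the composition and the
    # prompt does the rest. controlnet_conditioning_scale is assumed to play the
    # role of the webui "Weight" slider; model IDs are public Hub repos.
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    scribble = Image.open("my_scribble.png").convert("RGB")  # white lines on black
    image = pipe(
        "a knight in ornate armor, fantasy illustration",
        image=scribble,
        controlnet_conditioning_scale=0.35,  # ~ webui Weight 0.25-0.5: loose guidance
    ).images[0]
    image.save("knight.png")
    ```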

  • @MrOopsidaisy
    @MrOopsidaisy 1 year ago +1

    Are you able to create an updated installation video? I've been out of the loop with Stable Diffusion for a few months and feel lost with all the updates... :(

  • @AmirZaimMohdZaini
    @AmirZaimMohdZaini 1 year ago

    This feature is finally able to make a new image with the exact style of the original input picture.

  • @MarkHarris-bt4po
    @MarkHarris-bt4po 1 year ago +1

    Another useful video, cheers. I have a question you might know the answer to: I want to train some models (LoRAs, probably) to recognize designer clothing. It would be a bunch of items in a particular category, such as shirts: long sleeve/short sleeve/distressed, Nehru collar, etc. I'm not sure which method is best, or whether I should fully describe the source images or leave some parts out of the captions to get the best results. Can you make any recommendations?

    • @pladselsker8340
      @pladselsker8340 1 year ago +1

      I would suggest using around 50 to 200 images of whatever you're trying to generate and seeing how it does with such a dataset. Then iterate on it until you're happy with the LoRA you made.

  • @ryanp515
    @ryanp515 8 days ago

    This is cool. I was wondering: could this be used to make line art? That would be a time saver with poses, etc.

  • @YarivTawiliArt
    @YarivTawiliArt 1 year ago

    I will definitely try it. I wonder if it works with multiple characters.

  • @TheDkmariolink
    @TheDkmariolink 1 year ago

    OpenPose is a game changer. It would also be great for video input, for things such as dances or movement; is that doable? It seems like doing this in batch would take ages. Maybe this could be implemented within Deforum?

  • @JohnVanderbeck
    @JohnVanderbeck 1 year ago +5

    So, a few thoughts from playing with this. I zeroed in on the pose model specifically, because one thing I've been trying (and failing) to do for a long time is mix my own photography with generation, and having any sort of control over posing was nearly impossible. Until now!
    First off, this works in txt2img as well, so you can supply a pose reference image, write your normal txt2img prompts, and get a completely new generation in that pose. Mind fracking blown! That said, temper expectations for now: the pose estimation is not that accurate. This is rather surprising, given that 2D pose estimation has been a pretty well-solved problem for longer than SD has been popular, so I'm not sure what's up there.
    Still, it has already let me start making fusions of my studio photography with SD generations, and it is amazing! (A minimal sketch of this pose-transfer workflow appears after this thread.)

    • @ysy69
      @ysy69 1 year ago

      Are you saying this extension can also be used over in txt2img?

    • @JohnVanderbeck
      @JohnVanderbeck 1 year ago +2

      @@ysy69 Yes! That's mostly where I'm using it right now, actually. I take some of the models I've shot in the studio, bring the photo into SD's txt2img tab as the ControlNet pose input, and then use txt2img to generate a completely new person in roughly the same pose as the one I photographed.

    • @Seany06
      @Seany06 1 year ago

      @@ysy69 txt2img is where it works, not img2img

    • @Seany06
      @Seany06 1 year ago

      According to GitHub, it should be possible to control the skeleton directly with the OpenPose model, but Gradio isn't easy to work with. I'm sure in a few months we'll have a lot more control.
      These tools are insane so far!

    • @ysy69
      @ysy69 1 year ago

      @@JohnVanderbeck When you say model, you're referring to SD custom models, right, and not people as models? When you bring a photo into SD, that is img2img... in txt2img, one doesn't use an image as a reference, so I guess you meant img2img and then using the prompt to change it into a new person, correct?
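
    A minimal sketch of the pose-transfer workflow described in this thread, expressed with the diffusers and controlnet_aux libraries rather than the webui. The model IDs are public Hub repos, and "studio_photo.jpg" is a placeholder file name.

    ```python
    # Pose transfer: extract a skeleton from a reference photo, then let txt2img
    # invent a brand-new subject in that pose. This mirrors the thread above but
    # runs outside the webui, so treat it as an illustrative equivalent.
    import torch
    from PIL import Image
    from controlnet_aux import OpenposeDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    pose_map = openpose(Image.open("studio_photo.jpg"))  # stick-figure skeleton image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # Pure txt2img: only the skeleton carries over, none of the original pixels.
    image = pipe("portrait of an astronaut, studio lighting", image=pose_map).images[0]
    image.save("posed_astronaut.png")
    ```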

  • @kallamamran
    @kallamamran 1 year ago

    Finally! If you can say that about something that took less than 24 hours 😃

  • @Smiithrz
    @Smiithrz 1 year ago

    Thank you so much for making this video. I was just talking about how this stuff will solve a lot of the posing problems we have. Can't wait to try it with my model photography as reference 👏🏻

  • @cycla
    @cycla 1 year ago

    Amazing! But how do we use the batch option? If I wanted to generate 100 images, would we have to drag in the input images 100 times?

  • @mrrooter601
    @mrrooter601 1 year ago +1

    This is great; the hand one seemed to work for me, at least on one base image. It refuses to work at all with Waifu Diffusion 1.4e2, though.

  • @boythee4193
    @boythee4193 1 year ago

    I had to restart the whole thing to get the models part to show up, but it did work. Too late to test it out now, though :)

  • @luclenders
    @luclenders 1 year ago

    Great content! Thanks for all the work. If I may give one suggestion: I often find myself coming back to a video to check out the process again, but I noticed that this video, for example, doesn't mention ControlNet in the thumbnail or title. It's difficult to find a specific video among all your videos :) Thanks again.

  • @KingZero69
    @KingZero69 1 year ago

    amazing

  • @thays182
    @thays182 9 months ago +1

    Is there a way to use img2img and ControlNet to move an existing character, with clothing and style, into a new pose/position and have it still be the initial character? (Without a LoRA, only going off of one image?)

  • @AnimatingDreams
    @AnimatingDreams 1 year ago +2

    That's amazing! Will it work on Colab as well?

  • @shaman_ns
    @shaman_ns 1 year ago

    Does it work with embeddings? Would be cool to have consistent characters using this method.

  • @ThisOrThat13
    @ThisOrThat13 1 year ago

    Do the measurements at 2:24 always show like that? Can we take the OpenPose wire figure and make anything we can imagine?

  • @BobDolelol
    @BobDolelol 1 year ago +1

    Under 'Preprocessor' I have 'depth' but not 'Midas'. Is that a problem or can I use that one with the control_sd15_depth model?

  • @CrazyEditsCrazy
    @CrazyEditsCrazy 1 year ago

    Can we use this for batch?

  • @nic-ori
    @nic-ori 1 year ago

    Thanks.

  • @nathancanbereached
    @nathancanbereached 1 year ago

    I'd love to see a video/cartoon where you transition from one scene to another by slowly increasing the MiDaS weight. It would be a very trippy, dream-like way to transition to a totally different place.

  • @dayswdan
    @dayswdan 1 year ago

    Hi! Just wanted to ask if this is possible: we have multiple pre-made (not AI-generated) portraits, and I want to take an image of a customer's cat, dog, or other pet and transfer it, or use it as a head, onto my pre-made portraits.

  • @XavierCliment
    @XavierCliment 1 year ago

    Hello Robot, how do I colorize an old greyscale or black-and-white photo with ControlNet? Which options or parameters are best? Thanks.

  • @artoke84
    @artoke84 1 year ago

    Hi. I've been trying to figure this out since yesterday. The problem I'm having is enabling the Extensions tab. I ran git pull to make sure everything is updated, but that still didn't work. Do I need to enable it by adding something to the command line?

  • @Cneq
    @Cneq 1 year ago

    Man, ControlNet OpenPose + VR full-body tracking with 11 points can seriously open up some possibilities.

  • @OsakaHarker
    @OsakaHarker 1 year ago +8

    K, you forgot to copy the checkpoints into ControlNet/annotator/ckpts. If you do, the hand pose works amazingly well using the openpose_hand preprocessor and the openpose model. Thank you for this amazing video; this changes a lot about how we create images. (One way to fetch those checkpoints is sketched after this thread.)

    • @rmb6037
      @rmb6037 1 year ago

      Where are those? I don't see them on the GitHub page.

    • @OsakaHarker
      @OsakaHarker 1 year ago

      @@rmb6037 On the models link page, go back once into ControlNet and then enter the annotator/ckpts folder.

    • @ThisOrThat13
      @ThisOrThat13 1 year ago

      @@rmb6037 Look within "annotator", just above the models folder, then in ckpts for those files.

    • @ThisOrThat13
      @ThisOrThat13 1 year ago

      That is where I'm lost now. Where would those files (.pth & .pt) go? Just into the models folder with everything else?

    • @OsakaHarker
      @OsakaHarker 1 year ago

      @@ThisOrThat13 I noticed they were auto-downloading, but that wasn't working for me, so I put them all in \extensions\sd-webui-controlnet\annotator\ckpts.
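
    One programmatic way to do the manual download described in this thread, using the huggingface_hub client. The two OpenPose annotator filenames come from the lllyasviel/ControlNet repo; EXT_DIR is an assumed install path, so point it at your own webui checkout.

    ```python
    # Fetch the two OpenPose annotator checkpoints into the extension's
    # annotator/ckpts folder (the folder the thread above says they belong in).
    import shutil
    from pathlib import Path

    from huggingface_hub import hf_hub_download

    EXT_DIR = Path("stable-diffusion-webui/extensions/sd-webui-controlnet")
    ckpt_dir = EXT_DIR / "annotator" / "ckpts"
    ckpt_dir.mkdir(parents=True, exist_ok=True)

    for name in ("body_pose_model.pth", "hand_pose_model.pth"):
        # hf_hub_download returns a path inside the local HF cache ...
        cached = hf_hub_download("lllyasviel/ControlNet", f"annotator/ckpts/{name}")
        # ... which we copy next to the extension's annotators.
        shutil.copy(cached, ckpt_dir / name)
    ```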

  • @wjm123
    @wjm123 1 year ago

    Is there any way to use ControlNet to batch-process an img2img PNG sequence? I keep getting an error when trying to save anything past the first ControlNet image generated.

  • @TomiTom1234
    @TomiTom1234 1 year ago +1

    Something you could have covered in the video is inpainting with this method. I noticed on their site that you can inpaint a part of an image; I wish you had explained that.

  • @gilz33
    @gilz33 1 year ago +1

    Can I install this on my MacBook M1 Max?

  • @itsalwaysme123
    @itsalwaysme123 1 year ago

    There is a safetensors version of the models that takes up *significantly* less space! But other than that, golden.

  • @rickland1810
    @rickland1810 1 year ago

    Amazing videos! Thank you. Just a suggestion: maybe the part of your videos where you download the models should come first, so they can download while we do the other steps. I already know your videos, so I get ahead of this. But again, thank you.

  • @gabe22222
    @gabe22222 1 year ago

    nice!

  • @joeysteverston6636
    @joeysteverston6636 1 year ago

    How do you use scribble mode for coloring books?