CONVERT ANY IMAGE TO LINEART Using ControlNet! SO INCREDIBLY COOL!

  • Published: 14 Nov 2024

Comments • 189

  • @arvinds2300 · A year ago · +54

    Simple yet so effective. Controlnet is seriously magic.

  • @winkletter · A year ago · +26

    I find mixing DepthMap and Canny lets you specify how abstract you want it to be. Pure DepthMap looks like more illustrated vector line art, but adding Canny makes it more and more like a sketch.
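    In the WebUI, that mixing is done by enabling two ControlNet units and adjusting their relative weights. Conceptually, the balance behaves like a weighted blend of the two control signals; here is a toy numpy sketch of that idea (`blend_control_maps` is a hypothetical illustration, not how ControlNet combines conditioning internally):

    ```python
    import numpy as np

    def blend_control_maps(depth_map: np.ndarray, canny_map: np.ndarray,
                           canny_weight: float = 0.5) -> np.ndarray:
        """Weighted blend of two 8-bit grayscale control maps.

        canny_weight=0.0 -> pure depth (cleaner, vector-like lines),
        canny_weight=1.0 -> pure canny (more sketch-like detail).
        """
        depth = depth_map.astype(np.float32)
        canny = canny_map.astype(np.float32)
        mixed = (1.0 - canny_weight) * depth + canny_weight * canny
        return np.clip(mixed, 0, 255).astype(np.uint8)

    # Toy 2x2 example maps
    depth = np.array([[0, 255], [128, 64]], dtype=np.uint8)
    canny = np.array([[255, 0], [128, 192]], dtype=np.uint8)
    print(blend_control_maps(depth, canny, 0.5))
    ```

    Sliding `canny_weight` up corresponds to raising the Canny unit's weight in the Multi-ControlNet settings.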

  • @visualdestination · A year ago · +3

    SD came out and was amazing. Then dreambooth. Now Controlnet. Can't wait to see what's the next big leap.

  • @titanitis · 10 months ago · +2

    Would be awesome to have an updated edition of this video now, as there are so many new options with ComfyUI. Thank you for the video, Aitrepreneur!

  • @IceTeaEdwin · A year ago · +33

    This is exactly what artists are going to be using to speed up their workflow. Get a draft line art and work their style from there. Infinite drafts for people who struggle with sketch ideas.

  • @thanksfernuthin · A year ago · +7

    I need to test this of course but this might be another game changer. For someone with a little bit of artistic ability changing a line art image to what you want is A LOT easier than changing a photo. So I can do this, edit the line art and load it back into canny. Pretty cool.

  • @ryry9780 · A year ago · +1

    Just binged your entire playlist on ControlNet. That and Inpainting are truly like magic. Thank you so much!

  • @iamYork_ · A year ago · +2

    I haven't had much time to dabble with ControlNet, but one of my first thoughts was turning images into sketches, as opposed to everyone turning sketches into amazing generated art... Great job as always...

  • @廖秋华 · 7 months ago · +2

    I followed the same settings as the video tutorial. The checkpoint model and ControlNet are both set up, but the image I generated was still white, with no black-and-white lines.

  • @jackmyntan · A year ago · +5

    I think the models have changed, because I followed this video to the letter and all I get is very, very faint line drawings. I even took a screenshot of the example image used here and got exactly the same issue. There are more controls in the more recent iteration of ControlNet, but everything I try results in ultra-faint line images.

    • @Argentuza · A year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @hildalei7881 · A year ago

      I have the same problem. The line is not as clear as his.

  • @Snafu2346 · A year ago · +2

    I... I haven't learned the last 10 videos yet. I need a full-time job just to learn all these Stable Diffusion features.

  • @Semi-Cyclops · A year ago · +2

    Man, ControlNet is awesome; I use it to colorize my drawings.

    • @Semi-Cyclops · A year ago

      @Captain Reason I use the Canny model since it preserves the sketch, then I describe my character or scene. My sketch goes into ControlNet, and if I draw a rough sketch I add contrast. The Scribble model doesn't work well for me, at least; it creates its own thing from the sketch.

  • @MissChelle · A year ago · +1

    Wow, this is exactly what I’ve been trying to do for weeks! It looks so simple; however, I only have an iPad, so I need to do it in a web app. Any suggestions? ❤️🇦🇺

  • @GS-ef5ht · A year ago · +1

    Exactly what I was looking for, thank you!

  • @alexwsshin · A year ago · +2

    Wow, it is amazing! But I have a question here. My line art color is not black; it is very bright. Is there any way to make it black?

  • @CaritasGothKaraoke · A year ago · +9

    I am noticing you have a set seed. Is this the seed from the generated image before?
    If so, does that explain why this is much harder to get it to work well on existing images that were NOT generated in SD? Because I'm struggling to get something that doesn't look like a weird woodcut.

    • @MrMadmaggot · A year ago

      Dude, where did you download the DreamShaper model?

  • @online-tabletop · A year ago · +9

    Mine turns out quite light/grayish. The lines are also quite thin. Any tips?

    • @Argentuza · A year ago · +1

      Same here; there's no way I can obtain the same results! Why is this happening?

    • @archael18 · 7 months ago

      You can try using the same seed he does in his img2img tab or changing it to see which lineart style you prefer. Every seed will make a different lineart style.

  • @desu38 · A year ago · +2

    3:33 For that it's better to pick a "solid color" adjustment layer.

  • @vi6ddarkking · A year ago · +3

    So Instant Manga Panels? Nice!

  • @Aitrepreneur · A year ago · +11

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @thanksfernuthin · A year ago · +2

      Wow! This is not working for me at all! I get a barely recognizable blob even though the standard canny line art at the end is fine. So I switched to your DreamShaper model. No good. Then I gave it ACTUAL LINE ART and it still filled a bunch of the white areas in with black. I also removed negative prompts that might be making a problem. No good. Then all negs. No good. I'm either doing something wrong or there's some other variable that needs to be changed like clip skip or something else. If it's just me... ignore it. If you hear from others you might want to look into it.

    • @hugoruix_yt995 · A year ago

      @@thanksfernuthin It is working for me. Maybe try this LoRA with the prompt: /models/16014/anime-lineart-style (on civitai).
      Maybe it's a version issue or a negative-prompt issue.

    • @thanksfernuthin · A year ago · +2

      @@hugoruix_yt995 Thanks friend.

    • @nottyverseOfficial · A year ago

      Hey there, big fan of your videos. I got your channel recommendation from another YT channel, and I thank him a thousand times that I came here. I love all your videos and the way you simplify things so they're easy to understand ❤❤❤

    • @MarkKaminari00 · A year ago

      Hello humans? Lol

  • @sailcat662 · A year ago · +4

    Here's the negative prompt if anyone wants to copy-paste this:
    deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing limbs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry
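    The parentheses in that prompt are the WebUI's attention-weighting syntax: each nesting level of `( )` multiplies a token's emphasis by 1.1, and `(word:1.3)` sets an explicit weight. A small sketch of that convention (`paren_weight` is a hypothetical helper, not part of the WebUI):

    ```python
    def paren_weight(token: str) -> float:
        """Return the attention multiplier implied by A1111-style parens.

        Each nesting level of ( ) multiplies emphasis by 1.1, so
        ((disfigured)) -> 1.1 ** 2 = 1.21. An explicit (word:1.3)
        overrides the multiplier with 1.3.
        """
        depth = 0
        while token.startswith("(") and token.endswith(")"):
            token = token[1:-1]
            depth += 1
        if ":" in token and depth > 0:
            try:
                return float(token.rsplit(":", 1)[1])
            except ValueError:
                pass
        return round(1.1 ** depth, 4)

    print(paren_weight("((disfigured))"))      # 1.21
    print(paren_weight("(((mutation)))"))      # 1.331
    print(paren_weight("blurry"))              # 1.0
    ```

    So `(((duplicate)))` is pushed about 33% harder than a bare token; stacking too many parens is why heavily weighted negatives can distort results on other models.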

  • @sumitdubey6478 · A year ago · +1

    I really love your work. Can you please make a video on "how to train a LoRA on Google Colab"? Some of us have cards with only 4 GB VRAM. It would be really helpful.

  • @vaneaph · A year ago · +2

    This is way more effective than anything I have tried in Photoshop.

    • @krystiankrysti1396 · A year ago

      Which means you didn't really try it, because it's not as good as the video makes it out to be; he cherry-picked the example image.

    • @vaneaph · A year ago · +1

      @@krystiankrysti1396 Not sure what your point is here, but AI does not mean magic! You still need to edit the picture in Photoshop to ENHANCE the result to your liking.
      Using ControlNet indeed saves me a hell of a lot of time.
      (Don't forget, the burgers in the pictures NEVER look like what you really ordered!)

    • @krystiankrysti1396 · A year ago

      @@vaneaph Well, I got it working better with HED than Canny. It's just that if I were making new feature videos, I'd prepare a couple of examples to show more than one, so people can also see the failure cases.

  • @alexandrmalafeev7182 · A year ago

    Very nice technique, thank you! Also, you can tune Canny's low/high thresholds to control the lines and fills.
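    Those low/high values are the classic Canny hysteresis thresholds. A numpy-only sketch of just the thresholding step (the full Canny pipeline also does Gaussian smoothing, Sobel gradients, non-maximum suppression, and edge linking, all omitted here):

    ```python
    import numpy as np

    def double_threshold(grad: np.ndarray, low: float, high: float) -> np.ndarray:
        """Simplified core of Canny's hysteresis thresholding.

        Pixels with gradient magnitude >= `high` are strong edges (255);
        pixels between `low` and `high` are weak edges (128), which the
        full algorithm keeps only when linked to a strong edge; the rest
        is background (0). Raising low/high gives fewer, bolder lines.
        """
        out = np.zeros_like(grad, dtype=np.uint8)
        out[grad >= high] = 255
        out[(grad >= low) & (grad < high)] = 128
        return out

    grad = np.array([[10, 60, 200], [90, 150, 30]], dtype=float)
    print(double_threshold(grad, low=50, high=100))
    # Raising the thresholds drops the weak detail:
    print(double_threshold(grad, low=100, high=200))
    ```

    In the ControlNet UI, the same trade-off shows up as the Canny low/high threshold sliders: low thresholds capture texture and clutter, high thresholds keep only the dominant outlines.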

  • @mattmustarde5582 · A year ago · +5

    Any way to boost the contrast of the linework itself? I'm getting good details but the lines are near-white or very pale gray. Tried adding "high contrast" to my prompt but not much improvement.

    • @bustedd66 · A year ago

      I am getting the same thing.

    • @bustedd66 · A year ago

      Raise the denoising strength; I missed that step :)

    • @zendao7967 · A year ago · +3

      There's always photoshop.

    • @randomscandinavian6094 · A year ago · +2

      The model you use seems to affect the outcome. I haven't tried the one he is using. And of course the input image you choose. Luck may be a factor as well. All of my attempts so far have looked absolutely horrible and nothing like the example here. Fun technique but nothing that I could use for anything if the results are going to look this bad. Anyway, it was interesting but now on to something else.

    • @TheMediaMachine · A year ago

      I just save it, bring it into Photoshop, and use adjustment layers, i.e. contrast and curves. Until I get good with Stable Diffusion, I'm doing this for now. For colored lines, try a color adjustment layer, then set the blend mode to Screen.
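    The pale-lines fix the thread describes (besides raising denoising strength) is a simple levels remap: push the near-white grays toward black. A numpy sketch of that adjustment, equivalent in spirit to Photoshop's Levels (`punch_levels` and its default cut-offs are illustrative assumptions):

    ```python
    import numpy as np

    def punch_levels(img: np.ndarray, black: int = 180, white: int = 250) -> np.ndarray:
        """Stretch a pale grayscale lineart: anything darker than `black`
        becomes pure black, anything lighter than `white` becomes pure
        white, and values in between are rescaled across the full range."""
        out = (img.astype(np.float32) - black) / max(white - black, 1) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    # A pale lineart patch: faint gray lines on a near-white background
    pale = np.array([[245, 200, 120], [255, 230, 90]], dtype=np.uint8)
    print(punch_levels(pale))
    ```

    Applied to a whole image (e.g. loaded with Pillow and converted to a grayscale array), this turns washed-out strokes into solid black lines without touching the white paper.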

  • @NyaruSunako · A year ago

    I swear, when I try these, mine just doesn't want to listen to me, lol. Mine might be broken. Granted, I just use it to improve my workflow; even what I have at the moment is still strong, just not at the level shown here. Knowing this, I can make my line art even better by experimenting with different brushes. This makes testing out different line-art brushes fun and easy for me. Always enjoyable to see new stuff evolving; just so fascinating.

  • @nolanzor · A year ago · +13

    The negative prompt used makes a big difference; here it is for anyone who is struggling:
    (bad quality:1.2), (worst quality:1.2), deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry

    • @bobbyboe · A year ago

      For me, the negative prompt makes it even worse. I wonder if it's important to use the same model as he does.

    • @arinarici · A year ago

      You are the man.

  • @ssj3mohan · A year ago · +3

    Not working for me.

  • @coda514 · A year ago

    Amazing. Thank you. Sincerely, your loyal subject.

  • @loogatv · A year ago

    Thanks! I looked for a good way for hours and hours... and all I needed to do was a quick search on YouTube...

  • @sestep09 · A year ago · +2

    Can't get this to work; it just results in a changed, still-colored image. I followed it step by step and have triple-checked my settings, and I've only gotten it to work with one image, no others. They all just end up as changed images from the high denoising, and still colored.

  • @Botatoo-b9b · 10 months ago · +1

    What's this program called?!

  • @segunda_parte · A year ago

    Awesome Awesome Awesome!!!!!!!!!!!!! You are the BOSS!!!

  • @Knittely · A year ago

    Hey Aitrepreneur,
    thanks for this vid! I recently read about TensorRT to speed up image generation, but couldn't find a good guide on how to use it. Would you be willing to make a tutorial for it? (Or for other techniques to speed up image generation, if any.)

  • @ribertfranhanreagen9821 · A year ago

    Dang, using this with Illustrator will be a big time saver.

  • @swannschilling474 · A year ago

    My god, this is crazy good!!!!!!!!!! 😱😱😱

  • @amj2048 · A year ago

    So cool! Thanks for sharing!

  • @KolTregaskes · A year ago

    Another amazing tip, thank you.

  • @nierinmath7678 · A year ago

    I like it. Your vids are great

  • @coulterjb22 · A year ago

    Very helpful! I'm interested in creating vector art for my laser-engraving business, and this is the closest thing I've seen that helps. Anything else you might suggest?
    Thank you = subbed!

  • @flonixcorn · A year ago

    Great Video

  • @serjaoberranteiro4914 · A year ago · +2

    It doesn't work; I got a totally different result.

  • @jpgamestudio · A year ago

    WOW, great!

  • @brandonvanderheat · A year ago · +1

    Haven't tried this yet but this might make it easier to cut (some) images from their background. Convert original image to line-art. Put both the original image and line art into photoshop (or equivalent) and use the magic background eraser to delete the background from the line art layer. Select layer pixels and invert selection. Swap to the layer with the original color image, add feather, and delete.
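    The cut-out step the commenter describes can also be scripted. Assuming you already have a 0/255 background mask derived from the line-art layer, a numpy sketch that turns it into an alpha channel on the original color image (`cut_out` is a hypothetical helper, standing in for the Photoshop layer tricks):

    ```python
    import numpy as np

    def cut_out(original: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Attach an alpha channel to an HxWx3 RGB image from an HxW
        0/255 mask (e.g. the line-art layer after its background has
        been erased). Pixels where mask == 0 become fully transparent."""
        return np.dstack([original, mask.astype(np.uint8)])

    # Toy example: a flat gray 2x2 image, with the top-right and
    # bottom-left pixels masked out as background.
    img = np.full((2, 2, 3), 200, dtype=np.uint8)
    mask = np.array([[255, 0], [0, 255]], dtype=np.uint8)
    out = cut_out(img, mask)
    print(out.shape)       # (2, 2, 4)
    ```

    Saving `out` as a PNG (e.g. via Pillow's `Image.fromarray(out, "RGBA")`) gives the transparent-background cut-out; feathering the mask before stacking softens the edge the same way Photoshop's feather does.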

  • @stedocli6387 · A year ago

    way supercool!

  • @edmatrariel · A year ago · +1

    Is the reverse possible? Line art to painting?

    • @kevinscales · A year ago

      Sure, just put the line art into ControlNet and use Canny (txt2img), write a prompt, etc.
      Wait, does this make colorizing manga really easy? I never thought of that before.

  • @hildalei7881 · A year ago

    It looks great, but I followed your steps and it doesn't work anymore... Maybe it's because of the different versions of the WebUI and ControlNet.

  • @kiillabytez · A year ago

    So, it requires a WHITE background?
    I guess using it for comic book art is a little more involved, or is it?

  • @angelicafoster670 · A year ago

    Very cool. I'm trying to get a "one line art" drawing; do you happen to know how?

  • @Argentuza · A year ago

    What graphics card are you using? Thanks.

  • @kushis4ever · A year ago · +1

    Hi, I replicated the steps on an image, but it came out with blurred lines like brush marks, with no distinguishable outline. BTW, it took me nearly 4-5 minutes to generate on a MacBook Pro (i9, 32 GB RAM).

  • @r4nd0mth0_ghts5 · A year ago

    Is there any possibility of creating one-line-art forms using ControlNet? I hope the next version will be bundled with this feature.

  • @MaxKrovenOfficial · A year ago

    In theory, could we use this same method, with slight variations, to get full-color characters on white backgrounds, so we can then delete said background in Photoshop and thus have characters with transparent backgrounds?

  • @TesIaOfficial.on24 · A year ago

    Hey, I would like to use RunPod with your affiliate link.
    If I do seed traveling, I have to wait about 1-3 hours on my laptop. That's long ^^
    So one question: if I've found some good prompts with some good seeds,
    can I copy the prompts and seeds over to RunPod once I'm happy with them and just do the seed travel there?
    Will I get the exact same images this way?

  • @paulsheriff · A year ago

    Would there be a way to batch video frames like this?

  • @cahitsarsn5607 · A year ago

    Can the opposite be done? Sketch or line art to image?

  • @Isthereanyescape · A year ago

    I'm using Automatic1111 and installed ControlNet, but the Canny model isn't available. How come?

  • @fantastart2078 · A year ago

    Can you tell me what I have to install to use this?

  • @kamransayah · A year ago

    Hey K, what happened? Did they delete your video again?

  • @Niiwastaken · 4 months ago · +2

    It just seems to make a white image. I've triple-checked that I got every step right :/

    • @edsalad · A day ago

      I got this problem too.

  • @reeceyb505 · A year ago · +3

    Eh, if you use something like InstructPix2Pix to "make it lineart", it does it. So this kind of thing kind of already existed.

    • @R3V1Z3 · A year ago

      InstructPix2Pix "lineart" changes the image to a specific type of lineart style which loses some of the original image's structure. It works, it just has artistic character of its own.

  • @copyright24 · A year ago

    That looks amazing, but I have an issue: I recently installed ControlNet, and in the folder I have the model control_v11p_sd15_lineart, but it's not showing in the model list?

    • @klawsklaws · A year ago

      I had the same issue; I downloaded the control_sd15_canny.pth file and put it in the models folder.

  • @welovesummernz · A year ago · +1

    The title says any image; how can I apply this style to one of my own photos? Please.

    • @OliNorwell · A year ago

      Yeah, exactly. I tried with one of my own photos and it wasn't as good.

  • @MrMadmaggot · A year ago

    Where did you get that Canny model?

  • @andu896 · A year ago · +1

    I followed this tutorial to the letter, but all I get is random lines, which I assume is related to the denoising strength being so high. Can you try with a different model and see if this still works? Has anybody gotten it to work?

    • @Argentuza · A year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @sudhan129 · A year ago

      @@Argentuza Hi, I found only one link for dreamshaper_331BakedVae. It's on Hugging Face, but it doesn't seem to be a downloadable file. Where can I find a usable dreamshaper_331BakedVae file?

  • @maedeer5190 · A year ago · +1

    I keep getting a completely different image; can someone help me?

  • @tetsuooshima832 · A year ago · +2

    I found the first step unnecessary. What's the point of sending to img2img if you delete the whole prompt later on? Just start from img2img directly, then tweak any generation you have, or any picture really.

    • @TheDocPixel · A year ago

      Don't forget that it's good to have the seed

    • @tetsuooshima832 · A year ago · +1

      @@TheDocPixel I think the seed becomes irrelevant with a denoising strength of 0.95. Besides, if your source is AI-generated then the seed is in the metadata; if it's an image from somewhere else, there's no metadata = no seed. So I don't get your point here.
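    The metadata the reply mentions is the generation-parameters text that the A1111 WebUI embeds in the PNGs it saves (readable via the PNG Info tab). A small stdlib sketch for pulling the seed back out of such a parameters string (`extract_seed` is a hypothetical helper; the sample string's exact layout is an assumption about that format):

    ```python
    import re

    def extract_seed(parameters: str):
        """Pull the seed out of an A1111-style 'parameters' metadata
        string (embedded as a text chunk in WebUI-generated PNGs).
        Returns None when no seed is present."""
        m = re.search(r"\bSeed:\s*(\d+)", parameters)
        return int(m.group(1)) if m else None

    params = ("lineart drawing\n"
              "Negative prompt: blurry\n"
              "Steps: 20, Sampler: Euler a, CFG scale: 7, "
              "Seed: 1234567890, Size: 512x512")
    print(extract_seed(params))  # 1234567890
    ```

    For a photo from anywhere else, the chunk simply isn't there and the function returns None, which matches the reply's point: no metadata, no seed.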

  • @joywritr · A year ago

    Is keeping the denoising strength very low while inpainting with Only Masked the key to preventing it from trying to recreate the entire scene in the masked area? I've seen people keep it high and have that not happen, but it happens EVERY TIME I use a denoising strength more than .4 or so. Thanks in advance.

  • @iseahosbourne9064 · A year ago

    Hey K, my AI overlord, how do you use OpenPose for objects? Like, say I wanted to generate a spoon but have it mid-air at 90°?
    Also, does it work for animals?

  • @edwardwilliams2564 · 9 months ago

    Any idea how to do this in ComfyUI? Auto1111 is really slow.

  • @PlainsAyu · A year ago

    I don't have the guidance start in the settings; what's wrong with mine?

  • @方奕斯 · A year ago

    Hello, I hit an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
    Is there any way to solve this? Thanks

  • @grillodon · A year ago

    Everything is OK before the inpaint procedure. When I click Generate after all the settings and black paint on the face, the WebUI tells me: ValueError: Coordinate 'right' is less than 'left'

    • @grillodon · A year ago

      Solved. It was Firefox. But the inpaint "new detail" only works if I select Whole Picture.

  • @cheruthana005 · A year ago · +2

    Not working for me.

  • @global_ganesh · 3 months ago

    Which website?

  • @davidbecker4206 · A year ago · +2

    Tattoo artists: "Ohh, I hate AI art!" ... "Oh wait, this fits into my workflow quite well."

  • @vishalchouhan07 · A year ago

    Hey, I'm not able to achieve the quality of linework you achieve in this video. Is it a good idea to experiment with different models?

    • @Argentuza · A year ago · +1

      If you want to get the same results use the same model: dreamshaper_331BakedVae

  • @tails8806 · A year ago

    I only get a black image from the Canny model... any ideas?

  • @nemanjapetrovic4761 · A year ago

    I still get some color in my image when I try to turn it into a sketch; is there a fix for that?

  • @solomonkok1539 · A year ago

    Which app?

  • @goldenshark6272 · A year ago

    Please, how do I download ControlNet?!

  • @St.MichaelsXMother · A year ago

    How do I get ControlNet? Or is it a website?

  • @theStarterPlan · A year ago

    What does the seed value say?

  • @theStarterPlan · A year ago

    When I do it, I just get an error message (with no generated image) saying: AttributeError: ControlNet object has no attribute 'label_emb'. Does anybody have any idea what I could be doing wrong? Please help!

  • @Bra2ha1 · A year ago

    Where can I get this canny model?

  • @OtakuDoctor · A year ago

    I wonder why I only have one CFG scale, not start and end like you; my ControlNet should be up to date.
    Edit: nvm, it needed an update.

  • @HogwartsStudy · A year ago · +2

    And here I trained two embeddings all night long to do the same thing...

    • @Aitrepreneur · A year ago

      Ah.. well😅 sorry

    • @HogwartsStudy · A year ago

      @@Aitrepreneur no no, this will be excellent! Right after I get done with this Patrick Bateman scene...

    • @HogwartsStudy · A year ago

      @@Aitrepreneur I just tried to do this, and I don't have a guidance start slider, only weight and strength.

  • @dinchigo · A year ago

    Can anyone assist me? I've installed Stable Diffusion, but it gives me RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`. Not sure what to do, as my PC meets the necessary requirements.

  • @olgayuryevich1123 · 3 months ago · +1

    Great content, but why are you in such a rush?! Please slow down to make it a little easier to follow.

  • @takezosensei · A year ago · +9

    As a lineart artist, I am deeply saddened...

    • @psych18art · A year ago · +1

      Yeah

    • @nurikkulanbaev3628 · 2 months ago

      I'm a comic artist. Just looking for a way to shorten the amount of background tracing I have to do.

  • @krystiankrysti1396 · A year ago · +1

    Meh, this works like 0.5% of the time; mostly it doesn't work.

  • @ChroyonCreative · 11 months ago

    Mine always looks like grayscale or a fluffy model with shading. It's never lineart.

  • @proyectorealidad9904 · A year ago

    How can I do this in batch?
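    Besides the img2img Batch tab, the WebUI exposes an HTTP API when launched with the `--api` flag, which makes batching scriptable. A stdlib-only sketch (the endpoint follows the A1111 API; the prompt, denoising value, and the `run_batch` helper are illustrative assumptions, and the ControlNet unit configuration is omitted):

    ```python
    import base64
    import json
    import urllib.request
    from pathlib import Path

    API = "http://127.0.0.1:7860/sdapi/v1/img2img"  # WebUI started with --api

    def build_payload(image_path: Path, seed: int = -1) -> dict:
        """Build an img2img request body; field names follow the A1111
        API, while the specific settings mirror the video's recipe and
        are assumptions to tweak for your own images."""
        b64 = base64.b64encode(image_path.read_bytes()).decode()
        return {
            "init_images": [b64],
            "prompt": "lineart, monochrome",
            "denoising_strength": 0.95,
            "seed": seed,
        }

    def run_batch(folder: str) -> None:
        """POST every PNG in `folder` to the local WebUI, one by one."""
        for p in sorted(Path(folder).glob("*.png")):
            req = urllib.request.Request(
                API,
                data=json.dumps(build_payload(p)).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                print(p.name, resp.status)
    ```

    The response JSON contains base64-encoded result images that you would decode and save; for video frames, the same loop over an extracted frame folder applies.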

  • @MonologueMusicals · A year ago · +1

    Ain't working for me, chief.
    Edit: I figured it out; the denoising is key.

  • @apt13tpa · A year ago

    I don't know why, but this isn't working for me at all.

  • @sojoba3521 · A year ago

    Hi, do you do personal tutoring? I'd like to pay you for a private session

  • @dylangrove3214 · A year ago

    Has anyone tried this on a building/architecture photo?

  • @diegomaldonado7491 · A year ago

    Where is the link to this AI tool?

  • @aliuzun2220 · A year ago

    It'll improve, it'll improve.

  • @crosslive · A year ago

    You know what? People who sell this kind of thing on Facebook (there are a LOT) are not going to like this.