Upscale and Enhance with ADDED DETAIL to 4K + (Better than Topaz)

  • Published: 27 Dec 2023
  • Similar to Krea and Magnific but offline using Stable Diffusion. Just follow these steps and enhance a low resolution image better than you ever thought possible.
    If you need to install Controlnet and Automatic1111, please check my video and written descriptions here:
    www.hallett-ai.com/getting-st...
    Links from the Video #
    Checkpoint Model: civitai.com/models/132632/epi...
    Upscale Model Database: openmodeldb.info/
    Personal Links #
    Website: hallettvisual.com/
    Website AI for Architecture: www.hallett-ai.com/
    Instagram: / hallettvisual
    Facebook: / hallettvisual
    Linkedin: / matthew-hallett-041a3881

Comments • 43

  • @oamph
    @oamph 3 months ago

    Thank you Matt for this tutorial! You're a blessing! ❤

    • @matthallettai
      @matthallettai  3 months ago

      Wow, that's a nice thing to say, thank you!

  • @stryik3r
    @stryik3r 5 months ago +3

    Exactly what I was looking for! Thanks Matt for this awesome tutorial. Can't wait to try it out!!😊

    • @matthallettai
      @matthallettai  5 months ago

      Glad it was helpful! I'm going to make a full tutorial today for the website, which will cover more challenging images.

  • @marcoyaca
    @marcoyaca 15 days ago

    Genius! Thanks for such a good video

  • @danielmorgado1613
    @danielmorgado1613 2 months ago +1

    Hey, Matt. I hope you're doing well! Thanks for sharing this; the time you invested in these tutorials is very valuable. I'll definitely buy some of your courses. Everything is very well explained, and it's clear that you've invested a lot of time testing and understanding all this new tech. Thanks again!!

    • @matthallettai
      @matthallettai  1 month ago +1

      Wow, thanks Daniel! That's so kind and thoughtful, someone might think you're a bot I paid to write reviews ;)

    • @danielmorgado1613
      @danielmorgado1613 1 month ago

      @@matthallettai I'm human! I swear! haha

  • @jankvis
    @jankvis 3 months ago +1

    Very useful, thx ;)

  • @Knightstrikes
    @Knightstrikes 2 months ago

    @Matt Hallett Visual
    Hi Matt, fantastic tutorial; it will come in handy in the future for sure. I was wondering if there were any updates on scaling with SDXL models? Specifically, upscaling line art to make "image ready print files". Topaz is an expensive option, but I was wondering if any further improvements have occurred since this tutorial for Stable Diffusion... Your help in this matter is greatly appreciated.
    God bless and thank you!

    • @matthallettai
      @matthallettai  1 month ago

      I'm currently working on an SDXL solution, as are the rest of us AI nerds, but a tile resample ControlNet for SDXL hasn't been created yet (for reasons beyond my pay grade), so this is still one of the best methods. SDXL is best for overall composition, but for fine detail it really doesn't matter, even after all these months of training and experimenting.

    • @Knightstrikes
      @Knightstrikes 1 month ago

      @@matthallettai
      Thanks Matt for taking the time to respond. I look forward to a resolution on the matter. From what I remember, this tutorial doesn't just make the photo larger; it also enhances the quality of the pixels and the rendition. Do you see a fix sometime soon? Imagine a face that has a weird eye: instead of inpainting the eye, as you upscale with AI, the AI is smart enough to see that those pixels look weird and redraws and fixes them.
      I saw a tutorial by the Photoshop guy called Pixel Perfect or something like that, which leads me to believe we might get close to this soon.
      Take care and God bless all your endeavors and contributions to the community.

  • @mixocg
    @mixocg 2 months ago

    Hey Matt, thanks for the video! Which GPU are you using in this tutorial? I have a 1080 8GB and it's so slow; planning to upgrade in the near future.

    • @matthallettai
      @matthallettai  1 month ago

      I'm using a 4090. I use it mostly for GPU rendering. The 1080 is pretty old now. I've tested it on my old 2080 Ti and it's pretty good for SD 1.5. I'm sure you could find those used for cheap. Look for a GPU with as many CUDA cores as you can afford and at least 12GB of VRAM. You can also try Runpod or other online GPU rental services. Try sticking to 512 or 768 resolutions when upscaling, and sampling steps at 20.

  • @llgabb
    @llgabb 3 months ago

    Could you make one of these with SDXL? I'm thinking of some way to test people's SDXL upscale models on archviz images.

    • @matthallettai
      @matthallettai  3 months ago +1

      There's no tile resample ControlNet for SDXL! Technically there are 3, but two are for anime (isn't everything?) and the other doesn't work the same way with Ultimate. I've tried every which way, but there's no solution yet, I'm afraid. You can try doing everything the same with an XL model and not use Tile Resample. I've had some success, but you have to use huge overlaps to hide the seams.
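
      The huge-overlap trick can be illustrated with plain tile math. Below is a hypothetical helper (not part of any extension or of Ultimate SD Upscale's actual code) that computes where overlapping tiles should start along one image edge; the idea is the same one tiled upscalers use to hide seams:

```python
def tile_starts(length, tile, overlap):
    """Start offsets for tiles of size `tile` covering `length` pixels,
    sharing at least `overlap` pixels with each neighbour. A bigger
    overlap gives more blending area to hide seams, at the cost of
    processing more tiles."""
    if tile >= length:
        return [0]  # one tile already covers the whole edge
    step = tile - overlap
    starts = list(range(0, length - tile, step))
    starts.append(length - tile)  # last tile sits flush with the edge
    return starts

# A 4096 px edge, 1024 px tiles, a generous 256 px overlap:
print(tile_starts(4096, 1024, 256))  # [0, 768, 1536, 2304, 3072]
```

      Raising the overlap shrinks the step between tiles, so each seam falls inside a wide region that both neighbouring tiles rendered, which is what makes it blendable.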

  • @rothauspils123
    @rothauspils123 4 months ago +2

    Hello, this method works great, but hands get absolutely destroyed. Any way to use this but get better results on hands?

    • @matthallettai
      @matthallettai  4 months ago

      Good question. Hands and faces have always had their own challenges. There are additional extensions like ADetailer that add a post-process fix to faces and, I think now, hands as well. It's a whole other problem to solve, which I usually do after upscaling in Photoshop using the Photoshop plugin. It's its own process, unfortunately.

    • @rothauspils123
      @rothauspils123 4 months ago

      @@matthallettai Hello, thank you for the reply! I've replaced the crooked hands in Photoshop using their AI. I wonder, have you tried this method using SDXL? How important is it to use the model you specified here, or can it be replaced with any other model?

  • @orlandorodriguez1549
    @orlandorodriguez1549 3 months ago +1

    Hey Matt, any advice on doing this to a self-portrait? I used ReActor and I have a face on the original image, but it seems to keep changing the facial features.

    • @matthallettai
      @matthallettai  3 months ago +1

      Message me on Facebook or Instagram. Or email. Not sure what you're asking but it sounds interesting. Or would you prefer a Discord server? Curious what the latest messaging methods are.

    • @orlandorodriguez1549
      @orlandorodriguez1549 3 months ago

      @@matthallettai ok going to send you a message on Instagram now

    • @blender_wiki
      @blender_wiki 2 months ago +1

      For portraits, use a line art or canny CNet to force the face features; if that is not enough, you can also use a Face ID or a full-face IP-Adapter.

    • @orlandorodriguez1549
      @orlandorodriguez1549 2 months ago

      @@blender_wiki thank you. I’ll give this a try

  • @balotellidonna1167
    @balotellidonna1167 5 months ago

    Can you compare it with Magnific's results? It seems Magnific is always better, but it's so expensive.

    • @matthallettai
      @matthallettai  5 months ago +2

      That was the goal. I didn't want to call them out specifically, but I used their examples in my testing. I was able to get close, but in some cases their results were better and I couldn't duplicate the results from 1K to 4K. However, on their 1K to 2K examples, this method matches the results.

  • @Arthuur9
    @Arthuur9 2 months ago

    I have an RTX 3060 with 12 GB VRAM; is it enough to upscale using your method? Or would it take a lot of time? I haven't tested it yet.

    • @matthallettai
      @matthallettai  2 months ago +1

      It should, but try SD Forge. It has better VRAM management built in. It's on GitHub and has a clean installer. All other steps apply; it's the same as A1111.

  • @gianlucadecicco5921
    @gianlucadecicco5921 2 months ago

    I'm doing the exact same process step by step, but my final image ends up looking kind of cartoonish... Do you have any idea why this could be?

    • @matthallettai
      @matthallettai  2 months ago

      Hard to guess, but it's probably steps (in the case where it looks too flat) or the sampling method. To debug, and any time you get results you're not expecting, it's best to go through each of the main settings, set them to defaults, and then only adjust the denoise. No ControlNet; that comes after you get results and need to hold the image closer to your original. Hold Shift and refresh the browser page, refresh the models, and make sure you have the model you're expecting. Then set sampling steps to 30+, Sampling Method = DPM++ 2M SDE Karras (my default), and CFG = 7, or the numbers I'm suggesting. Then, finally, adjust the Denoise. Denoise makes the biggest difference when using img2img: beyond 0.7 without ControlNet you will have almost nothing of the original; 0.2 = no change; 0.44 is typically a good number to view results without crazy changes. The higher the value, the more change. OK, I hope that helps!
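
      Those denoise numbers can be sketched in a few lines. This is a conceptual toy, not A1111's actual sampler code: real schedulers weight the mix by their noise schedule rather than linearly, and `img2img_setup` is a hypothetical helper. But the knob behaves as described: denoise controls both how much noise is mixed into the source and roughly how many sampling steps actually run on it.

```python
import random

def img2img_setup(pixels, steps, denoise, seed=0):
    """Toy model of img2img "Denoising strength" (simplified).
    denoise=0.0 leaves the source untouched and runs no steps;
    denoise=1.0 is pure noise, so nothing of the original survives."""
    rng = random.Random(seed)
    # Mix noise into the source image (linear mix, for illustration only).
    noised = [(1.0 - denoise) * p + denoise * rng.gauss(0.0, 1.0)
              for p in pixels]
    # Fewer sampling steps run at low denoise, keeping output close to the source.
    steps_to_run = round(steps * denoise)
    return noised, steps_to_run

# At the suggested 0.44 with 30 sampling steps:
_, n = img2img_setup([0.5] * 4, steps=30, denoise=0.44)
print(n)  # 13 of the 30 steps actually sample
```

      This is why 0.2 barely changes anything while 0.7+ without ControlNet leaves almost nothing of the original: both the starting latent and the step count scale with the same value.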

  • @ignaciocasotto5315
    @ignaciocasotto5315 1 month ago

    Hi!! Amazing tutorial and results. I also work on real estate visualization at the "Almost Real Viz" studio. I have an RTX 3070, 128 GB RAM and a 5950X; with the same size image as yours and the same settings/parameters, it takes my computer 24 hours to finish the process... Do you know what I could be doing wrong, or does it just take that long because of my computer?

    • @ignaciocasotto5315
      @ignaciocasotto5315 1 month ago

      I had something wrong in the SD configuration, but it's still working too slowly; it takes 2 hours for me...

    • @matthallettai
      @matthallettai  1 month ago +1

      It's not your computer; all the processing is done on the GPU. Go back to basics and check all your settings. Things that can make it slow are sampling steps and resolution. Try using the "NEAREST" upscaler for testing, since that is the fastest. Run all of this on an SSD if you can... That's all I can think of for now.

  • @renanarchviz
    @renanarchviz 2 months ago

    How do I do this process in ComfyUI?

    • @matthallettai
      @matthallettai  2 months ago

      I don't use Comfy often enough to be able to write an upscale process that's similar to what's presented here. I mostly use other users' workflows when I need to accomplish something that's only available in Comfy. You can download workflows for Comfy from dedicated sites; there are 3 links in the Comfy Manager.

  • @twilightfilms9436
    @twilightfilms9436 3 months ago +1

    Can you try it with hair? A person with hair? Because Krea and Magnific are unparalleled when it comes to creating realistic hair out of nothing... thanks!

    • @matthallettai
      @matthallettai  3 months ago

      I try to stay away from people, since most AI videos on YouTube cover faces. The process works the same; give it a go!

  • @blender_wiki
    @blender_wiki 2 months ago +1

    In 2024, "better than Topaz" is really not a reference. 🤷🏿‍♀️

    • @matthallettai
      @matthallettai  2 months ago

      I was going to reference it against Magnific.ai but didn't achieve the same results in all cases using their web examples. Besides Krea, do you have upscaling sites or methods I can check out?

  • @EnhancedMusicVideos
    @EnhancedMusicVideos 5 months ago

    It does a nice job, but generating new HQ content instead of enhancing the real thing is just AI-generated stuff...

    • @matthallettai
      @matthallettai  5 months ago +1

      The only way to generate this without AI is to re-render the exact same file it was produced from, but at a higher resolution. I just happened to use a rendering in this example; it could be a low-res photograph.
      Any type of upscaling still uses machine-learning models, so call it all AI if you want, but it's creating something from nothing either way.