Super Realistic Pictures with RealVisXL

  • Published: 11 Sep 2023
  • In this video, we will review the new SDXL fine-tuned model, RealVis XL!
    Although still in beta, the new model generates super realistic pictures; the quality is way above the previous Realistic Vision SD 1.5. Let's test it together!
    🤩 Get 20% more credit on Diffusion Hub using the PROMO CODE: LAURA20
    💬 Social Media:
    [Discord] / discord
    [Patreon] patreon.com/IntelligentArt?ut...
    [Instagram] / lacarnevali
    [TikTok] www.tiktok.com/@lacarnevali?i...
    ____________________________________________________________________
    🤙🏻 Learn More:
    / membership
    / lauracarnevali
    📌 Links:
    DiffusionHub (try it for FREE, copy the full link): diffusionhub.io?fpr=laura17
    RealVis XL Model: civitai.com/models/139562/rea...
    ADetailer: github.com/Bing-su/adetailer
    00:07 Introduction to RealVis
    00:30 Diffusion Hub
    01:00 What model and VAE to use
    02:39 Generate the first realistic picture
    03:38 Generate the same picture using different seeds
    04:13 Generate the same picture with different aspect ratios (portrait and landscape)
    05:19 Generate a realistic landscape
    05:51 Generate a portrait
    06:52 ADetailer to improve faces/hands
    #aiart #stablediffusion #generativeart #stabilityai #stablediffusiontutorial
    #sdxl #diffusionhub

Comments • 21

  • @jamesbriggs 8 months ago +1

    Super useful thanks :)

  • @twri128 8 months ago +4

    @LaCarnevali First, thanks for producing excellent videos on Stable Diffusion! Issues with face details will most often be fixed when upscaling, as you say, but they can be reduced if you stick to a resolution close to 1024^2 pixels. The latent image is 1/8th of the pixel image, and sometimes that simply isn't enough for a small eye or mouth to generate properly. These resolutions are also mentioned in the paper you are referencing; Stability AI has provided the community with a subset of those in the table in the paper:
    1024x1024 (1:1), 1152x896 (1.29:1), 896x1152 (0.78:1), 1216x832 (1.46:1), 832x1216 (0.68:1), 1344x768 (1.75:1), 768x1344 (0.57:1), 1536x640 (2.4:1), 640x1536 (0.42:1)
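    The bucket list above can be sanity-checked in a few lines of Python: every listed SDXL resolution keeps both sides divisible by 64, keeps the total pixel count close to 1024^2, and maps to a latent that is 1/8 of each side:

    ```python
    # SDXL resolution buckets from the comment above.
    buckets = [
        (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
        (1344, 768), (768, 1344), (1536, 640), (640, 1536),
    ]

    for w, h in buckets:
        assert w % 64 == 0 and h % 64 == 0            # VAE/UNet-friendly sizes
        assert abs(w * h - 1024 * 1024) <= 0.1 * 1024 * 1024  # ~1 Mpix total
        latent_w, latent_h = w // 8, h // 8           # latent is 1/8 per side
        print(f"{w}x{h}  ratio {w / h:.2f}:1  latent {latent_w}x{latent_h}")
    ```

    This also makes the face-detail point concrete: at 832x1216 the latent is only 104x152, so a face occupying a small fraction of the frame gets very few latent pixels to work with.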

  • @Vestu 8 months ago +1

    Love your channel and your Italian accent 😊

  • @mostafamohamed-jk5jk 8 months ago +3

    The accent ❤❤❤❤

  • @kevinehsani3358 8 months ago

    Thanks for the video. I take it I can download the checkpoint and VAE to my Stable Diffusion install and try it there? I also have some recent problems when I use ControlNet and was wondering if you are getting them too, and whether the updates are causing them. Everything I have is the latest. I get this error no matter which control type I use in ControlNet (1.1.14). For example, for "tile" with model "control_v11f1e_sd15_tile [a371b31b]" I get the same error as others: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)". Thanks for any feedback

    • @LaCarnevali 8 months ago

      Hi! ControlNet for SDXL is not yet integrated within A1111 - you should try ComfyUI if you want to try it :)

    • @kevinehsani3358 8 months ago

      @@LaCarnevali I used it perfectly fine for weeks
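The shape error quoted in this thread is consistent with mixing model families: SDXL encodes a prompt as 77 tokens of 2048 dimensions each, while an SD 1.5 ControlNet's cross-attention projection expects 768-dimensional embeddings. The dimensions come from the error message itself; the family-mixing interpretation is an assumption. A minimal sketch of why the multiplication fails:

```python
def matmul_shape(a, b):
    """Return the result shape of a matrix product a @ b,
    raising the same error string PyTorch reports on a mismatch."""
    (m, k1), (k2, n) = a, b
    if k1 != k2:
        raise RuntimeError(
            f"mat1 and mat2 shapes cannot be multiplied ({m}x{k1} and {k2}x{n})"
        )
    return (m, n)

# SDXL prompt embeddings: 77 tokens x 2048 dims per token.
# SD 1.5 ControlNet projection weight: maps 768 dims -> 320.
try:
    matmul_shape((77, 2048), (768, 320))
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)

# Pairing like with like works: an SD 1.5 text encoder emits 768 dims.
print(matmul_shape((77, 768), (768, 320)))  # (77, 320)
```

So the practical fix is to pair an SD 1.5 ControlNet with an SD 1.5 checkpoint, or an SDXL ControlNet with an SDXL checkpoint, rather than a version issue in the extension itself.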

  • @ekkamailax 5 months ago

    Is it possible to fine tune this model using the same techniques as your previous tutorial?

    • @LaCarnevali 5 months ago

      Yes, you can - training will be slower, so you'll want to use a GPU, and you'll need to apply some minor adjustments, like ticking the SDXL model option.
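For context, the "tick for SDXL" adjustment corresponds to using the SDXL variants of the training tooling. With kohya's sd-scripts, for example, LoRA training switches from `train_network.py` to `sdxl_train_network.py`. The sketch below is hypothetical: script name and flags are based on the public kohya-ss/sd-scripts repo, and all paths and hyperparameters are placeholders to adapt to your setup.

```shell
# Hypothetical kohya-ss/sd-scripts invocation for LoRA training on an
# SDXL checkpoint such as RealVisXL. Paths are placeholders.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path=/models/RealVisXL.safetensors \
  --train_data_dir=/data/my_subject \
  --output_dir=/output/lora \
  --network_module=networks.lora \
  --resolution=1024,1024 \
  --mixed_precision=fp16
```

Note the 1024x1024 resolution: SDXL is trained around ~1 Mpix images, which is one reason training is slower than with SD 1.5.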

  • @AliKhan-vt3uk 8 months ago

    ComfyUI not working on Google Colab. Can you please make a video on that? How can I use ComfyUI with any other cloud? Complete guide 😢

    • @LaCarnevali 8 months ago

      If you're running on Windows, you can install it locally:
      ruclips.net/video/sIkbDhhC5iY/видео.html
      Will have a look into Colab :)

  • @DrOrion 8 months ago

    Do one on hand fixing please.

  • @LouisGedo 8 months ago

    👋

  • @user-io8gh4yi4s 6 months ago

    Please make a video on Kohya LoRA training for a face on Mac Apple Silicon, Laura

    • @LaCarnevali 6 months ago

      I have a video, but you cannot train on a Mac; you'll need to use an external GPU, e.g., Colab/ThinkDiffusion/DiffusionHub

  • @wtfchoir 7 months ago

    Why are you super cute?

  • @asphoarkimete9500 5 months ago

    Congratulations, you look great. I am currently using this model:

        from diffusers import StableDiffusionPipeline

        model_id = "SG161222/Realistic_Vision_V6.0_B1_noVAE"
        # Initialize the Stable Diffusion pipeline for image generation.
        pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=token)
        pipe.to(device)

    Do you think the RealVisXL model is newer and better than "Realistic_Vision_V6.0_B1_noVAE"? I saw your video about the custom-trained model; how can I use a Hypernetwork with "StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=token)"?