Reliberate Model is INSANELY GOOD - Stable Diffusion A1111 Model

  • Published: May 28, 2024
  • The Reliberate Model is insanely good. It's hosted on CivitAI. This Stable Diffusion model works with A1111, Vlad Diffusion, Invoke and more, and it can create extreme detail in upscaled images. In this video I show you how to get the most out of it. I combine it with the LowRA LoRA and the Detail Tweaker LoRA to get even more detail out of the upscale. We also use the SD Upscale script with the UltraSharp model to get the best and highest detail from this model.
    #### links from the Video ####
    Detail Tweaker Tutorial: • Detail Tweaker Lora - ...
    Ultrasharp Tutorial: • ULTRA SHARP Upscale! -...
    Reliberate Model: civitai.com/models/79754/reli...
    LowRA Lora: civitai.com/models/48139/lowra
    Detail Tweaker Lora: civitai.com/models/58390/deta...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
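
To make the workflow from the description concrete, here is a minimal Python sketch of the same combination (Reliberate checkpoint plus the LowRA and Detail Tweaker LoRAs) driven through the A1111 web UI API. It assumes A1111 is running locally with the --api flag and that the checkpoint and LoRA files are already installed; the filenames, LoRA weights and sampler settings below are illustrative guesses, not the exact values used in the video.

```python
# Minimal sketch, assuming a local A1111 instance started with --api.
# Checkpoint/LoRA filenames and weights are assumptions; adjust to your install.
import base64, requests

payload = {
    "prompt": ("photo of a woman, detailed skin, natural light, sharp focus "
               "<lora:LowRA:0.6> <lora:add_detail:1>"),   # LoRA weights are guesses
    "negative_prompt": "blurry, lowres, bad anatomy, deformed hands",
    "sampler_name": "DDIM",          # sampler discussed in the video; change to taste
    "steps": 30,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    "restore_faces": True,
    "override_settings": {"sd_model_checkpoint": "reliberate_v10"},  # assumed filename
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("reliberate_test.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```

The `<lora:name:weight>` tags in the prompt are standard A1111 syntax; lowering the LowRA or add_detail weight trades off darkness and fine detail in the result.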

Comments • 232

  • @mingtech5670
    @mingtech5670 11 months ago

    Thank you, Olivio. What you do here is valuable. You have just the right amount of enthusiasm and technical ease to be a great person for getting this info out.

  • @cat-star5403
    @cat-star5403 11 months ago +1

    Trying out this model and am very impressed. In many cases it does a better job than my old favorite, Realistic Vision 2.0. Thanks for bringing this model to our attention.

  • @pedro3000
    @pedro3000 11 months ago +10

    Thanks for another great video, bro! Pro tip: on CivitAI, if you see an image you want to "borrow", just drag it to the "Process Image" tab in SD (I am using Vlad). If the prompts are in the metadata, they will come with it. Cheers!
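
For anyone who prefers to inspect that metadata outside the UI, here is a minimal sketch (Python with Pillow; the filename is hypothetical) that reads the same embedded generation parameters the PNG Info / Process Image tab shows, assuming the uploader didn't strip them.

```python
# Minimal sketch: A1111 stores generation settings in a PNG text chunk named
# "parameters". Many images on CivitAI have this stripped, so check for None.
from PIL import Image

img = Image.open("borrowed_from_civitai.png")   # hypothetical filename
params = img.info.get("parameters")

if params:
    print(params)        # prompt, negative prompt, sampler, seed, etc.
else:
    print("No embedded generation parameters found in this image.")
```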

  • @ThoughtFission
    @ThoughtFission 11 months ago

    Loving your videos. Making the complex easy to understand. Thanks!

  • @XpucT
    @XpucT 11 months ago +11

    Great review 🤟

    • @aearone
      @aearone 11 months ago +3

      Here it is, the well-deserved recognition! :)

  • @Toasty27
    @Toasty27 11 months ago +6

    This has become my favorite model to use very quickly since I discovered it! Ty for the vid!

    • @OlivioSarikas
      @OlivioSarikas 11 months ago +2

      Awesome. Same here. It also creates more playful results than most other models.

  • @heinzerbrew
    @heinzerbrew 11 months ago

    You have helped me so much since I started watching your videos

  • @Macieks300
    @Macieks300 11 months ago +34

    Realistic Vision V3.0 came out today; that seems like an even bigger deal, as V2.0 was one of my favs.

    • @Devalocka
      @Devalocka 11 months ago +2

      Thanks for the heads up. Getting it asap!

    • @Showbiz_CH
      @Showbiz_CH 10 months ago +4

      @@Devalocka Realistic Vision 4.0 is out now!

    • @marcelkuiper5474
      @marcelkuiper5474 10 months ago +1

      @@Showbiz_CH lol, the humanity

    • @anujpartihar
      @anujpartihar 9 months ago +2

      @@Devalocka Realistic Vision v5 has been released!

  • @IDJOT
    @IDJOT 11 months ago

    amazing work as always my friend

  • @JohnWeland
    @JohnWeland 10 months ago

    I have a server rack with a few boxes in my basement. I need to get something like this set up locally. This is awesome!

  • @76abbath
    @76abbath 11 months ago

    Thanks Olivio for this new video!

  • @Elwaves2925
    @Elwaves2925 11 months ago +57

    Reliberate is great, but I've been loving aZovyaPhotoreal V2 recently with the Heun sampler, and it's giving really great results.

    • @sub-jec-tiv
      @sub-jec-tiv 11 months ago

      I love it too

    • @felipeitsui
      @felipeitsui 11 months ago +1

      Zovya is too biased; it always renders the same single girl.

    • @remghoost
      @remghoost 11 months ago +5

      @@felipeitsui You might look into the roop extension for A1111. Put any face you want on her that way.

    • @wykydytron
      @wykydytron 11 months ago +2

      @@felipeitsui Then prompt better: use nationality prompts, prompts that specify the look, and face LoRAs. Generally, models generate the same face if you are too vague with your prompts; they will go for whatever has the highest weight internally, since you didn't say precisely what you want.

    • @Elwaves2925
      @Elwaves2925 11 months ago +1

      @@felipeitsui Not if you know how to prompt. The only times I've gotten the same person are when I've deliberately written the prompt to get the same person. If you keep your prompt simple, vague and worded the same, you probably will get the same person. That's how it works. 🙂

  • @felipeitsui
    @felipeitsui 11 months ago +20

    I have 900 GB of checkpoints and my favorite so far is AWPortrait. No one really talks about it, but it's the most realistic model I've ever tried across many different styles.

    • @Elwaves2925
      @Elwaves2925 11 months ago

      Not heard of that one, I'll check it out. I know that feeling of having a lot of checkpoints. I don't have anywhere near as many as you do, but they still need pruning back.

    • @pedro3000
      @pedro3000 11 months ago +2

      Not everyone is just making waifus Bro ! 🤣🤣🤣

    • @asmr_reviews
      @asmr_reviews 10 months ago

      Sounds like something I'm trying to do 😂 Have any video suggestions? I'm a super beginner.

  • @LikoKZN
    @LikoKZN 11 months ago

    Thank you for your work!

  • @weirdscix
    @weirdscix 11 months ago +3

    I tend to use the ControlNet tile resample model along with Ultimate SD Upscale to resize, then just feed it back into img2img and run it again if I need it larger still.
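
A rough sketch of one pass of that workflow through the A1111 API is below. It assumes A1111 is running with --api and the ControlNet extension is installed; the ControlNet model name/hash and the image-key name vary by install ("input_image" in older ControlNet builds), and the Ultimate SD Upscale script is left out because its API arguments are positional and version-dependent.

```python
# Minimal sketch of one img2img pass with the ControlNet tile model, roughly the
# workflow described above. Filenames and the ControlNet model name are assumptions.
import base64, requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

src = b64("lowres_render.png")   # hypothetical input image

payload = {
    "init_images": [src],
    "prompt": "photo, detailed skin, sharp focus",
    "denoising_strength": 0.35,          # keep low so the upscale stays faithful
    "width": 1024,
    "height": 1536,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "image": src,                 # "input_image" on older ControlNet builds
                "module": "tile_resample",
                "model": "control_v11f1e_sd15_tile",   # assumed model name
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
with open("upscaled_pass1.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```

Feeding the saved result back in as the next init image repeats the "run it again if I need it larger still" step.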

  • @DigitalNeutrinos
    @DigitalNeutrinos 11 months ago

    Amazing, great video

  • @zorilov_ai
    @zorilov_ai 11 months ago

    It took you some time 😁 Usually I use Euler, but I will try DDIM. I'm using add_detail at 1; anything higher than 1.2 is already too much. But I apply it to WarpFusion mostly. Thanks for the tutorial 🖐

  • @jvtosiartist
    @jvtosiartist 11 months ago

    thanks for the video!!

  • @johnhayne
    @johnhayne 11 months ago +2

    New to this! Mind blowing if I am looking at what I think I am looking at. Can you please provide a link to a basic intro to this software/process? MUCH appreciated!

  • @wykydytron
    @wykydytron 11 months ago +2

    I use DDIM for everything lately. I don't know what changed, but it seems to work best since the last major A1111 update for some reason. Also, I would strongly recommend using ADetailer instead of face restore, which gives terrible results most of the time. As for the upscaler, I suggest Tiled Diffusion; it gives the best results out of all the methods I've tested and is very fast.

  • @phantasyphotography3813
    @phantasyphotography3813 11 months ago +6

    Realistic Vision V2.0 has been my go-to for a while. I'll check this one out for sure.

    • @nightcall4668
      @nightcall4668 11 months ago +2

      They uploaded V3.0 today.

  • @ernesto.iglesias
    @ernesto.iglesias 11 months ago +1

    Cool video. Will all these LoRAs work with SDXL too? If not, how can I know which ones will or won't?

  • @ahminlaffet3555
    @ahminlaffet3555 11 months ago +1

    DDIM seems to be great for inpainting especially

  • @ex0stasis72
    @ex0stasis72 10 months ago

    I recommend checking out "C3" (named because its original model is a merge of "Colorful", "Clarity", and "Consistency").

  • @audiogus2651
    @audiogus2651 11 months ago

    mmm, quite nice for architecture too!

  • @AndyShrestha
    @AndyShrestha 10 months ago

    Love this skin. Wish there was a LoRA model just for the skin texture.

  • @Julia-mm2pv
    @Julia-mm2pv 5 months ago +2

    Where can I find those models now that they are not available on CivitAI anymore?

  • @sb6934
    @sb6934 11 months ago

    Thanks!

  • @FutonGama
    @FutonGama 11 months ago +5

    Olivio, let me ask: what is your trick to avoid NSFW on livestreams? Do you use only negative prompts, or some other things? Nice video btw.

  • @kevinmillan5952
    @kevinmillan5952 11 months ago

    3:36 he did it again! The G!

  • @micbab-vg2mu
    @micbab-vg2mu 10 months ago

    Great!

  • @Rjacket
    @Rjacket 6 months ago +2

    @OlivioSarikas Do you have Reliberate v1.0 available to download anywhere?

  • @architect1580
    @architect1580 10 months ago

    Awesome!!!!!

  • @idanis5948
    @idanis5948 11 months ago

    Is there a way to quickly apply the trigger words needed for LoRAs, like there is when you apply a negative prompt? Or do we just make like 20 different styles then?

  • @kaokong7226
    @kaokong7226 10 months ago

    man, these prompt artists are insane.

  • @alexgoiadev
    @alexgoiadev 10 months ago

    As usual... it would be nice to share a link to those usual videos.

  • @zephilde
    @zephilde 11 months ago

    Hi Olivio!
    I never understood what the role of the sampler is and what the differences are.
    Of course I can render an X/Y/Z plot to check the differences, but I would like to understand more about why and when to use which sampler...
    Can you make a video on that, or maybe a live?
    Thx a lot

  • @HypnotizeInstantly
    @HypnotizeInstantly 11 months ago

    I like your shirt amigo!

  • @Pico52
    @Pico52 11 months ago +4

    Me: Ok, time to clean house and prune out all the models I no longer use to free up space and make it simpler to find what I do want.
    Actual Me: Ok, time to add Reliberate and LowRA.

    • @JohnVanderbeck
      @JohnVanderbeck 11 months ago +1

      Reliberate == Deliberate. So there's some spring cleaning for you :)

  • @thepseudowhite
    @thepseudowhite 11 months ago

    ^^! Great work ^^! :3

  • @Leadstar1985
    @Leadstar1985 11 months ago +5

    Where's the redhead? Khachatur will punish you for that.

  • @ivanrolim
    @ivanrolim 11 months ago +1

    SD Upscale? What happened to *Ultimate SD Upscale*, which you used to choose? Did SD Upscale end up better?

  • @mooncryptowow
    @mooncryptowow 11 months ago +1

    You should try out "Lunar Diffusion 1.28" . I'd love to hear your thoughts on it :)

  • @opos4202
    @opos4202 11 months ago +17

    There is one more I haven't seen anyone cover yet: epiCRealism. In my tests it was always better at realistic images than Reliberate. I would love to see you cover this one as well. Always great videos!

    • @ryansetiawan2202
      @ryansetiawan2202 11 months ago

      In your opinion, how would you rank these models by their capability of producing realistic images?
      CyberRealistic, Deliberate, epiCRealism, ChilloutMix

    • @Pico52
      @Pico52 11 months ago +1

      I came to the comment section to say this as well. I've tried many others, most of them being the ones mentioned in the comments here, but epiCRealism is just better.

    • @Pico52
      @Pico52 11 months ago +3

      @@ryansetiawan2202 Imo, epiC > Cyber > Delib > ChillOut, but they aren't poor quality by any means.

    • @opos4202
      @opos4202 11 months ago +1

      @@Pico52 I agree! I would put them in the same order.
      However, I rarely stick with just one checkpoint. Usually I test my prompts with different combinations of checkpoints and sampling methods to find which result I like the most.
      The X/Y/Z plot is my favourite tool by far :D
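
For reference, the same kind of comparison can be scripted without the X/Y/Z plot UI. The sketch below is not the built-in script; it just loops a fixed-seed prompt over a few checkpoints and samplers through the A1111 API (assumes --api; the checkpoint filenames are assumptions).

```python
# Minimal sketch: a manual checkpoint x sampler grid with a fixed seed,
# so only the model and sampler change between images.
import base64, requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"
checkpoints = ["epicrealism", "cyberrealistic", "reliberate_v10"]   # assumed filenames
samplers = ["DDIM", "Euler a", "DPM++ 2M Karras"]

for ckpt in checkpoints:
    for sampler in samplers:
        payload = {
            "prompt": "portrait photo of a man, natural light",
            "seed": 1234,                      # fixed seed for a fair comparison
            "steps": 25,
            "sampler_name": sampler,
            "override_settings": {"sd_model_checkpoint": ckpt},
        }
        r = requests.post(URL, json=payload, timeout=600)
        r.raise_for_status()
        name = f"{ckpt}_{sampler.replace(' ', '-')}.png"
        with open(name, "wb") as f:
            f.write(base64.b64decode(r.json()["images"][0]))
```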

    • @ryansetiawan2202
      @ryansetiawan2202 11 months ago +2

      @@Pico52 thx a lot for the answer.. appreciate it.. 🤝

  • @billmelater6470
    @billmelater6470 8 months ago

    Is there any progress on hands and feet? It seems like models and embeddings still don't do too well with them. Arm proportions get strange at times too.

  • @6Eternal9
    @6Eternal9 11 months ago +2

    My fav realistic models are CyberRealistic, epiCRealism and Realistic Vision.

    • @Neolisk
      @Neolisk 10 months ago

      +1 for epicrealism.

  • @mateuszskodowski3615
    @mateuszskodowski3615 11 months ago

    In the description of the model there is information that, among other things, it is not allowed to sell images created with it. Does this also mean that you can't use this model to create assets for your own commercial projects (e.g. in your own comic or game)?

  • @redbrainart
    @redbrainart 11 months ago

    nice

  • @YourBeatzSupport
    @YourBeatzSupport 9 months ago

    Hey, great video. Before I research the whole internet, I'll ask here. I've got a good model on Tensor.Art, but every picture is a bit different. Now I'm searching for a method to create a model in Tensor.Art so that it will be the same face every time.
    Or a good free piece of software where I can create a model which doesn't change.

  • @atlanteum
    @atlanteum 11 months ago +7

    Hey, Mr. Olivio... thanks for another great video. I had not heard of Reliberate, so thank you for the info! Just FYI, though... Euler [Leonhard Euler, the Swiss mathematician/ engineer/ astronomer/ and much, much more] is actually pronounced "oil-er" rather than "yule-er". So... now we know!

    • @Mocorn
      @Mocorn 11 months ago +4

      Do you also pronounce French words with perfect pronunciation? What about words of Asian heritage?
      Euler can be pronounced several ways. It all depends on nationality, geography etc. Let's not bog down content creators with silly stuff like this.

    • @atlanteum
      @atlanteum 11 months ago +3

      ​@@Mocorn It's a man's name and it has an actual pronunciation. Yes, I do speak a little French and a little Cantonese - none of it perfectly, but I do make an effort to be respectful of other languages. Mainly, I brought up the pronunciation in this case because the term Euler appears throughout a number of CG packages, such as the Euler Filter in Maya's graph editor. If I mention it to Olivio - who is kind enough to put all these wonderful videos together and share them with the rest of us - it is done thoughtfully and considerately, to help inform him in the same way he generously informs us. My guess is that if someone mispronounced your name, you'd correct them - as well you should. Where is the harm in that?

    • @zorilov_ai
      @zorilov_ai 11 months ago +1

      Yeah, it is the worst claim 😄

    • @Mocorn
      @Mocorn 11 months ago +1

      @@atlanteum People mispronounce my name all the time, actually, and I do not correct them because that is how the name is pronounced in this country. Where I was born, the name is pronounced differently. Interestingly, my name is pronounced a third way if you look at the country of origin, which is only an hour away by flight.
      So, three different ways to pronounce my name; which one is the correct one!?

    • @atlanteum
      @atlanteum 11 months ago +1

      @@Mocorn It doesn't matter - your name is not Euler.

  • @Digital_Architects
    @Digital_Architects 11 months ago +12

    Hi @Olivio, can you show us how to make architectural renders more realistic please 😇🙌🏻

    • @manticoraLN-p2p-bitcoin
      @manticoraLN-p2p-bitcoin 11 months ago

      Yeah... I'm looking for something like that. Sadly I didn't find anything helpful related to architecture.

  • @geneoverride3725
    @geneoverride3725 11 months ago +2

    It would be great if you could provide links to the negative embeddings too! It would be easier for viewers to click and download them.

  • @Kuresuto
    @Kuresuto 11 months ago +3

    I've been trying to do something and I can't seem to manage it, so I'm here asking for help. Is it possible to create a character turnaround by using 2 different ControlNet models, one that for example becomes the reference character and the second the poses? I created a character I really like, but by using img2img, and of course I can't simply recreate it in txt2img; whenever I try with any settings, any combinations (and even without poses, with only a reference image), I can't do it :/
    I found people trying to do the same on Reddit 6 months ago but with no follow-up. Do you know any way with the new reference-only control that can make it happen?

    • @SantoValentino
      @SantoValentino 11 months ago

      You can probably recreate it in txt2img by copying the settings from the PNG Info tab after placing your character image in the PNG Info tab. Maybe?
      I have a character, a woman I love, but she was an accident, so I had only three pictures. I trained her face as a LoRA and it came out perfectly. Now I insert her LoRA anytime I want. I didn't think it would work, but it did.
      If you only have one face, I would try making a LoRA from it. Who knows.

    • @Kuresuto
      @Kuresuto 11 months ago

      @@SantoValentino If it was a character made completely from txt2img it would work, but since it's from img2img from another image I had created, the parameters aren't accurate to those in the info tab.

    • @SantoValentino
      @SantoValentino 11 months ago

      @@Kuresuto this is where the fun begins

  • @AadilDar
    @AadilDar 10 months ago

    Help...! How can I keep the detail when swapping a face with the "Roop" extension?
    Whenever I try to swap a face, it renders a clean face.

  • @eskim21
    @eskim21 11 months ago

    Can this be used with OpenPose? Also, can we make a full-body image with this model?

  • @speaktruthtopower3222
    @speaktruthtopower3222 11 months ago +1

    4:39 How did you do these sample images? By hand or is there a plugin to do this?

  • @tuck9
    @tuck9 10 months ago

    I'm new to this stuff. Is there particular software needed? I keep seeing things about models, checkpoints, etc., but I never come across what program everyone's using to generate with these models.

    • @kensaiix
      @kensaiix 10 months ago +1

      The "software" is called *Stable Diffusion*, and it is commonly described as a picture-generating artificial intelligence (AI for short).

  • @prozacgod
    @prozacgod 11 months ago +2

    I've been defaulting my portrait renders to 640x800 resolution instead of something like 512x768. Just a slight bump; it very rarely doubles up a head, and I feel the bump is worth it.

    • @fabiano8888
      @fabiano8888 10 months ago

      Great! I will try that. Thanks for the tip!

  • @moniqued8932
    @moniqued8932 10 months ago

    Is it also possible in Midjourney?

  • @ashtyler8
    @ashtyler8 11 months ago

    Hi @Olivio Sarikas, or anyone who can help me. Thanks for your videos, excellent stuff! When I go to my Scripts section, there are many scripts missing for me compared to yours, and the SD Upscale script is also missing. Can you please let me know why?

  • @Gg_system
    @Gg_system 11 months ago

    "Am I perfection" is a solid base model; have you used it??

  • @CabrioDriving
    @CabrioDriving 11 months ago

    Can you show how to work with tiles to make big resolution AI images?

  • @DuduMaroja
    @DuduMaroja 11 months ago

    Hi, I've been using Stable Diffusion since the beginning, but I've been studying hard for some exams, and after a few months it's like I don't know anything anymore... LoRA? Checkpoints? CLIP?? I'll need a summary of what is happening...

  • @hogstarful
    @hogstarful 11 months ago

    Is there any model that I can use to create illustrations and art for T-shirts?

  • @fingerprint8479
    @fingerprint8479 11 months ago

    Hi, great tutorial. I am new to Stable Diffusion. What is Automatic1111 for? Have you got any tutorial on how to install and use it?
    Thanks

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 11 months ago +1

      Automatic1111 is a tool (a web UI) for using Stable Diffusion. There are a couple of videos explaining how to install it. Search for one that's not too outdated.

    • @trickydicky8488
      @trickydicky8488 11 months ago

      ruclips.net/video/3cvP7yJotUM/видео.html

  • @nanigh2913
    @nanigh2913 11 months ago

    How can I batch process photos for inpainting?

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 6 months ago +3

    Dead link for the model... error 404.

  • @abrogard142
    @abrogard142 10 months ago

    I stumbled into this and have very little idea what it is about. It reworks your pics and makes them better? So why didn't he show a before and after? What's so great about it?

  • @rschena
    @rschena 11 months ago

    @OlivioSarikas Do you have a video where you teach how to create an RPG or NPC character with the face of a friend of yours, or from a photo that already exists? How could I do this?

  • @manticoraLN-p2p-bitcoin
    @manticoraLN-p2p-bitcoin 11 months ago +6

    As a long-time ChaiNNer user, I'm surprised you've never made a video about it... It's a must for graphic designers like me, especially for batch processes... free of out-of-VRAM errors.
    Also, I'm having much better results using SD Upscale than Ultimate Upscale.

    • @The_Daily_Meow
      @The_Daily_Meow 11 months ago +1

      ChaiNNer is amazing. Ultimate Upscale is really bad. SD Upscale is really all you need. If you know how to use it, of course

    • @relaxation_ambience
      @relaxation_ambience 11 months ago

      @@The_Daily_Meow Both SD Upscale and Ultimate Upscale are bad. You can test your images in PS: add a curves layer and push it to the extreme, and you will see that your images are in tiles (if you can't see it without the curves layer). Only the Tiled Diffusion method works, but at larger scales it loses details. Still, it has no tiles.
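
The Photoshop curves trick described above can be approximated in a couple of lines of Python with Pillow: push the contrast to an extreme and any brightness steps left at tile borders become easy to spot. A minimal sketch, with placeholder filenames:

```python
# Minimal sketch of the "extreme curves layer" seam check, using Pillow instead
# of Photoshop. If the upscaler left tile seams, a grid becomes visible.
from PIL import Image, ImageEnhance

img = Image.open("upscaled_result.png").convert("RGB")   # placeholder filename
extreme = ImageEnhance.Contrast(img).enhance(8.0)         # deliberately far too much contrast
extreme.save("seam_check.png")                            # inspect this for a visible tile grid
```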

    • @relaxation_ambience
      @relaxation_ambience 11 months ago +1

      @edu_machado Correct me if I'm wrong: ChaiNNer is the same type of upscaler as Topaz Gigapixel (except that in ChaiNNer you can load multiple upscaling models)? If so, I don't see the point in comparing, as it acts totally differently from Auto1111. In Auto1111 you can add extra small details with denoise and sampling steps. Is that possible in ChaiNNer? There is also another piece of software called Upscayl where you can also load your own upscaling models. But again, it's not the same as Auto1111: you just upscale and can't add extra small details.

    • @The_Daily_Meow
      @The_Daily_Meow 11 months ago

      @@relaxation_ambience No. It doesn't lose details but adds them when you go in smaller steps. I said "if you know how to use it, of course"; otherwise, it's bad.

    • @manticoraLN-p2p-bitcoin
      @manticoraLN-p2p-bitcoin 11 months ago

      @@relaxation_ambience Topaz also uses AI to upscale, but I believe they use a proprietary model. With ChaiNNer you have the freedom to use different models... so yes, you're right. I believe it's an obsolete app now (Topaz), but they have another app for video that will stay relevant for some time IMO.

  • @4538304544
    @4538304544 9 months ago

    When I use SD Upscale, I don't have the "4x-UltraSharp" option.

  • @robertpaulson4960
    @robertpaulson4960 11 months ago

    Another one just released is epiCRealism Pure Evolution V3. Probably the best I've used so far.

  • @cameriqueTV
    @cameriqueTV 10 months ago

    I'm getting a blue tint on faces with "Restore Faces" on. It appears on the final render frame. I've reloaded SD/A1111, and it still happens.

  • @interestedinstuff1499
    @interestedinstuff1499 11 months ago

    On CivitAI, I noticed that if I right-click and save the image, then load it into PNG Info, it loads all the prompt data when I send it to txt2img. Saves a lot of copying and pasting. You can delete the image afterward if you want. I don't, since I want to see if I can replicate what the artist did.

    • @JohnVanderbeck
      @JohnVanderbeck 11 months ago +1

      How is that easier? You are just doing the same thing, only with an image, but with the extra step of having to save it and then browse for it, etc. Whereas with copy/paste you just click two buttons.

  • @2f4r4u4
    @2f4r4u4 10 months ago

    Hi, I would like to know if it's possible to create bulk images of more than one person. For example, I want to generate 10 different people with 5 images of each. If someone knows how to do this, feel free to answer.

  • @allenraysales
    @allenraysales 3 months ago

    Is there an updated version of this? It looks like it's no longer available. Can someone send over the working link?

  • @049ajeetrawat3
    @049ajeetrawat3 11 months ago

    Is there any model like Midjourney?

  • @Termonia
    @Termonia 11 months ago +1

    ChaiNNer? What's that? If you're making a tutorial about it: SUBSCRIBED right now. Hoping for that video; liked and shared. Thank you.

  • @user-rp2tq3ep8t
    @user-rp2tq3ep8t 11 months ago +1

    3:17 Wait, am I hallucinating, or does the title on the paper say "Уже который год во всём мире п*здец" (roughly, "yet another year and the whole world is still f*cked")? 🤣🤣🤣

  • @vl6736
    @vl6736 11 months ago +3

    I've been trying to get a character LoRA from one SD generation. I made an image with a 5x6 grid of OpenPose head poses to use with ControlNet. That way I got 30 head poses of the exact same person, enough to train a LoRA. However, some of the 30 images missed detail (and were shot from the front instead of the back of the head - the standard problem). I think what you show in this video can help with some of my challenges here.

    • @OlivioSarikas
      @OlivioSarikas 11 months ago

      that sounds like a really interesting idea :)

  • @leojyh
    @leojyh 10 months ago

    If the face in the picture wears glasses (or sunglasses), Roop will erase the glasses and then replace the face in the video, but it does not erase them cleanly.
    Roop cannot choose to keep the glasses (or sunglasses) in the video. Can this problem be solved?

  • @darkman237
    @darkman237 11 months ago

    Will it work offline when downloaded?

  • @user-rp3om5je7g
    @user-rp3om5je7g 11 months ago

    Olivio, you are the best... Please help: I just learned about this AI image-generating medium and am already blown away. I just tried hypernetwork training and got a beautiful image, but not a photo-realistic one. May I know if it is possible to combine this amazing model with my hypernetwork-trained models? I'd really appreciate your advice, thanks.

    • @ShawnStevensNZ
      @ShawnStevensNZ 10 months ago

      Try Midjourney; it's simple and has a lot more creative ability out of the box. You will get photorealism straight away by just saying "I want it photorealistic". It has its limits, but they all do. Enjoy.

  • @RaptorJesus.
    @RaptorJesus. 9 months ago

    3:34 someone definitely added more weight ;)

  • @JohnVanderbeck
    @JohnVanderbeck 11 months ago +4

    FYI "Euler" is pronounced "Oiler"

  • @randomgameplay523
    @randomgameplay523 1 month ago

    I get a 404 error for the model link; searching for it on CivitAI also gives no results.

  • @moayadamoor7925
    @moayadamoor7925 10 months ago

    Guys, is there a tutorial on how to use this model? Is it some kind of program that I can download and use, or how is it done? I have downloaded the files; what's next? I would appreciate your help!

  • @gkbasil
    @gkbasil 11 months ago +1

    Greetings to ХрисТ from a subscriber!

  • @ElLindoClan
    @ElLindoClan 11 months ago

    I'm following the same steps exactly, but when generating it always makes drastic changes to the final result, like adding faces, extra fingers, deforming the existing ones; the image never stays the same.

    • @sheedee2
      @sheedee2 10 months ago

      Same here... but no one explains why this happens or how to fix it 🤪

    • @ElLindoClan
      @ElLindoClan 10 months ago

      @@sheedee2 I discovered that it's related to the prompts. Make sure it doesn't have any prompts related to descriptions, just ones related to render quality.

  • @ParvathyKapoor
    @ParvathyKapoor 11 months ago

    wow

  • @fixelheimer3726
    @fixelheimer3726 11 months ago

    Isn't this the model where someone commented that it's just a renamed Deliberate model? That's why I didn't download it.

    • @weirdscix
      @weirdscix 11 months ago

      Well, it's by the same author, so there would be no point in renaming his own model.

    • @fixelheimer3726
      @fixelheimer3726 11 months ago

      @@weirdscix I will try it for myself... that person claimed to get the same results with the same seed, etc.

    • @DarkStoorM_
      @DarkStoorM_ 11 months ago +1

      The person who claimed this model is the same is just plain wrong; I assume it's the one that posted an X/Y/Z plot. You can clearly see on his plot that the images are different. The guy just didn't read the model description :) People expect every model to give completely new images, but they don't understand that new versions won't necessarily give something new, especially when the author continues the model's training or merges two versions of the same model that differ in style.
      The model author even explicitly stated that he separated his model versions, as I guess he didn't want to put a checkpoint merge (or continued-training merge) in the same model post for some reason. After plotting both Reliberate and Deliberate v2 (also Realistic Vision v2 and RunDiffusion for a test), Reliberate seems to give similar results with less exposure for most prompts.

  • @1Know1tHurts
    @1Know1tHurts 11 months ago +2

    Olivio, it's interesting that you mentioned the Restore Faces feature. The author of this model hates restoring faces this way and says he can even tell when others use it. He is an awesome, smart guy who definitely knows what he is doing.

    • @anonymousmuskox1893
      @anonymousmuskox1893 11 months ago

      Does that creator have a YouTube channel or something?

    • @1Know1tHurts
      @1Know1tHurts 11 months ago +1

      @@anonymousmuskox1893 Yes, he has a YT channel, but it is in Russian. The channel's name is ХрисТ (copy my text, because it is in Cyrillic even though it looks like Latin letters).

    • @SavelevTema
      @SavelevTema 11 months ago +1

      @@1Know1tHurts You're wrong; his channel name is actually written in Latin letters.

  • @LokiDWolf
    @LokiDWolf 10 months ago +3

    I love this but it's still too "cartoony" for me. Still, this is all fascinating! Thanks for the video!

  • @Macieks300
    @Macieks300 11 months ago +1

    4:56 btw are you sure Euler is pronounced like that? I thought it's pronounced the same as the mathematician it is named after.

  • @ravnOne65
    @ravnOne65 11 months ago

    Now we are talking

  • @toomanius
    @toomanius 10 months ago

    3:15 Is the headline also generated? It translates from Russian as something like "the world has been f*cked up year after year". Is it deliberate? 🤔

    • @OlivioSarikas
      @OlivioSarikas 10 months ago

      I think they made a joke on their page.

  • @user-hc3bl2ji8s
    @user-hc3bl2ji8s 11 months ago

    Could you please advise how to create a picture of a smoking woman? I always have an issue with the hands and the cigarette.

    • @OlivioSarikas
      @OlivioSarikas 11 months ago +1

      Have you tried ControlNet or photobashing?

  • @pavlosslavinskyi4871
    @pavlosslavinskyi4871 11 months ago

    Imagine training DreamBooth using this model, or merging with it.

  • @sebastianszwarc4162
    @sebastianszwarc4162 8 months ago

    These images are almost too perfect; they look like photos from the best digital camera, heavily edited. Can this model be used to generate images that appear realistic but are of lower quality, so that they resemble photos taken with a medium-quality mobile phone?

  • @Hassan_Omer
    @Hassan_Omer 10 months ago

    I just saw the video thumbnail in my feed and came here. So is it basically a tool to convert AI-generated faces into realistic faces?