Realistic Vision 5.1 - This is CRAZY GOOD!!!

  • Published: 7 Jun 2024
  • Realistic Vision 5.1 is the best model for creating stunning photos with AI. Learn the A1111 settings and best prompts to get amazing results. This Stable Diffusion model from Civitai is free to download. Realistic Vision specializes in photographic AI images.
    #### Links from my Video ####
    Realistic Vision 5.1 Download civitai.com/models/4201
    Prompt + Negative Embeddings: docs.google.com/document/d/1e...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
  • Hobby

Comments • 167

  • @guilhermecastro3671
    @guilhermecastro3671 10 месяцев назад +17

    Olivio, your videos are amazing: the quality, the effort you put in, your charisma. Please never stop, you're a blessing to the AI community.

  • @ayrengreber5738
    @ayrengreber5738 10 месяцев назад +59

    It's nice that SD 1.5 still gets attention.

    • @BoRysunki
      @BoRysunki 10 месяцев назад +19

      You bet it does! XL, although really accurate, is still missing full ControlNet and extension support for now, as well as the nice range of LoRAs and embeddings you can choose from with 1.5.

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 10 месяцев назад +8

      @@BoRysunki But that's gonna change in about 1 or 2 months, especially after people come back from summer vacation.

    • @BobDolelol
      @BobDolelol 10 месяцев назад +14

      I still prefer 1.5 over XL in all honesty. XL seems to be missing something. It’s fun, but I’m not really mind blown over it 🤷🏼‍♂️

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 10 месяцев назад +6

      @@BobDolelol It's missing customization. It will come eventually. I remember when I first used the SD1.4 model. Back then, it was missing pretty much everything, but it was better than nothing. Then came SD1.5, then Dreambooth, then extensions, and then SD became great.

    • @ShiroKage009
      @ShiroKage009 10 месяцев назад +4

      @@BoRysunki XL is developing super fast as we speak. The models and LORAs are being released very rapidly.

  • @nalisten
    @nalisten 10 месяцев назад

    Thanks Olivio This is extremely Helpful 🙏🏾🙏🏾🙏🏾

  • @gwcstudio
    @gwcstudio 10 месяцев назад +41

    Here is a trick you may want to try: ask Midjourney or SD to create a certain pose against a blank background, then hand it to the online OpenPose engine to infer the pose, tweak or rotate it, and then render with a ControlNet (see the sketch after this thread).

    • @brockly7916
      @brockly7916 10 месяцев назад +1

      No need, just add lines and a circle in GIMP or Photoshop, or draw it yourself... You can also add images of characters on blank backgrounds, which is far easier than posing a 3D model...

    • @plejra
      @plejra 9 месяцев назад

      It's very interesting. Can you tell us more about this technique? Especially about the last steps.

    • @cryptojedii
      @cryptojedii 6 месяцев назад

      @@plejra Yes...just do it!
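
The last step described at the top of this thread, rendering the tweaked pose with a ControlNet, can be scripted with the diffusers library. A minimal sketch, assuming the lllyasviel/sd-controlnet-openpose ControlNet and the SG161222/Realistic_Vision_V5.1_noVAE mirror of the checkpoint (the repo names, file names and prompt are assumptions, not settings from the video):

```python
# Minimal sketch: render a prompt constrained by an OpenPose skeleton image.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Pose image exported from the OpenPose editor (placeholder path).
pose = load_image("pose_skeleton.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # assumed Hugging Face mirror of the model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="RAW photo, woman dancing in a studio, soft light",
    negative_prompt="deformed, blurry, cartoon, bad anatomy",
    image=pose,                   # the skeleton guides the pose
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("posed_render.png")
```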

  • @ZeroIQ2
    @ZeroIQ2 10 месяцев назад +1

    Thanks for this!
    I was looking at some AI images I made almost a year ago (late August 2022) and it's so crazy to see how far AI images have come in such a short time.

  • @DiffusionHub
    @DiffusionHub 10 месяцев назад +4

    Even after SDXL was released, Realistic Vision results seem incredible. Thank you for the video!

    • @Elwaves2925
      @Elwaves2925 10 месяцев назад +2

      I've just said this in another comment but for me right now, SDXL is better for non-humans and RV5.1 for humans. Obviously, RV has the benefit of greater training, loras and so on which SDXL doesn't, not yet.

  • @micbab-vg2mu
    @micbab-vg2mu 10 месяцев назад

    Thank you for the video - I have to test this new model.

  • @andrewheisler3842
    @andrewheisler3842 9 месяцев назад

    Works really well in comfyui as well.

  • @brootalbap
    @brootalbap 10 месяцев назад +22

    Olivio, you don't need the clickbait titles. You and your videos are better than this.

    • @nuejidub
      @nuejidub 10 месяцев назад +5

      i love his videos, i hate his titles

    • @nathanlewis42
      @nathanlewis42 9 месяцев назад

      Google analytics could be telling him that he does need them.

  • @themanfrommars5488
    @themanfrommars5488 10 месяцев назад

    We like you. Very informative

  • @mosalahmosah5633
    @mosalahmosah5633 10 месяцев назад +2

    Hey Olivio, just wanted to say a quick thanks for the awesome content you've been putting out on Mid-journey and stable diffusion AI. It's been super helpful for someone like me who's new to this topic. Keep up the great work!
    As a newcomer to those topics, I've been trying to find the very first video that covers the installation and usage of stable diffusion. However, it seems that this information is spread across numerous live streams, with the earliest one being the 161st episode, and each video being over an hour long. It's quite challenging to go through all of them step by step to reach the current state of knowledge.
    So, I was wondering if you could create a guide or perhaps a new video that highlights the key topics and the best models to work with. It would be immensely helpful if you could also provide references or links to where we can find more detailed information about each model.
    Thank you in advance for your consideration, and I look forward to your response.

  • @hplovecraftmacncheese
    @hplovecraftmacncheese 9 месяцев назад +1

    I like using this one and also CyberRealistic 3.3

  • @shorerocks
    @shorerocks 10 месяцев назад +2

    Without your guidance, this would be much too complicated for me. Thx for doing this, Olivio.

  • @phizc
    @phizc 10 месяцев назад +20

    2:16
    ENSD (eta noise seed delta) is irrelevant for quality. It's only required if you want to replicate an image that was generated with it set to something other than default.
    E.g., you find a cool image on CivitAI and you want to slightly alter the prompt. If they used ENSD 31337 in their settings, you must too; otherwise it's like using a different seed.
    If you're generating your own images, leave it at default. You won't get _better_ images with 31337, just *different*.

    • @4.0.4
      @4.0.4 10 месяцев назад

      Specifically, NovelAI used (uses?) that specific seed delta, and it was required for anons to replicate NovelAI results when the model first leaked.

    • @phizc
      @phizc 10 месяцев назад

      @@4.0.4 And it's become part of the mythos of the cargo cult that has sprung up 😄, and many users use it without knowing why.
      If I understand it correctly, it's kinda the same with clip skip 2. Their models were trained in a way that required it, and since the leak it has infected other models that have it in their merge history. To a degree, anyway. I haven't used any model that breaks with clip skip 1. At least clip skip actually does something other than changing the seed. If I understand it correctly, it makes the prompt less specific at higher values (see the sketch after this thread).
      E.g. "actress" -> "woman" -> "person"

    • @makers_lab
      @makers_lab 9 месяцев назад +1

      @@phizc People tend to copy rather than explore. As well as the standard values, I like pushing clip skip as high as 6 or 7 to see what comes out; while it deviates more from the prompt, it can end up in a new groove that merits exploring.
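
For anyone curious what the clip-skip setting discussed above does in practice, recent versions of the diffusers library expose it as a clip_skip argument, so the same seed can be compared at different values. A minimal sketch; the model repo name is an assumption, and diffusers counts the skipped layers differently from A1111:

```python
# Minimal sketch: same prompt and seed, rendered with and without clip skip.
# Note: clip_skip=1 in diffusers means "use the penultimate CLIP layer",
# which corresponds roughly to A1111's "Clip skip: 2"; None is the default.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # assumed Hugging Face mirror of the model
    torch_dtype=torch.float16,
).to("cuda")

prompt = "RAW photo, portrait of an actress, film grain"
for skip in (None, 1):
    generator = torch.Generator("cuda").manual_seed(12345)  # fixed seed for a fair comparison
    image = pipe(prompt, clip_skip=skip, generator=generator,
                 num_inference_steps=25).images[0]
    image.save(f"clip_skip_{skip}.png")
```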

  • @Xinkgs
    @Xinkgs 9 месяцев назад

    The future is going to be these models coming to life. I have no doubt about it

  • @aaronknight1009
    @aaronknight1009 10 месяцев назад

    Nice work, what's even more amazing are the AI virtual friends like Vicki Verona and AlexisIvyEdge

  • @emiliowhetteckey2244
    @emiliowhetteckey2244 9 месяцев назад

    Thanks!

    • @OlivioSarikas
      @OlivioSarikas  9 месяцев назад

      thank you very much for your support

  • @flat-eric
    @flat-eric 10 месяцев назад +2

    I like Realistic Vision and epiCRealism very much. I even like epiCRealism a little more, but it doesn't work with ControlNet's reference_only technique, so I mostly switched to Realistic Vision. For upscaling I prefer to use ControlNet tile together with Ultimate SD Upscale at a denoise of 0.15, and after that I often upscale again with either Lanczos or Remacri.
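
The final Lanczos pass mentioned above is easy to reproduce outside the UI. A minimal sketch with Pillow (file names are placeholders); Remacri is an ESRGAN-style upscaler model and needs its own runner, so it is not shown here:

```python
# Minimal sketch: a plain 2x Lanczos resize as a last pass after the tiled SD upscale.
from PIL import Image

img = Image.open("sd_upscaled.png")       # result of ControlNet tile + Ultimate SD Upscale
w, h = img.size
final = img.resize((w * 2, h * 2), Image.Resampling.LANCZOS)  # Image.LANCZOS on older Pillow
final.save("final_lanczos.png")
```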

  • @manticoraLN-p2p-bitcoin
    @manticoraLN-p2p-bitcoin 10 месяцев назад +8

    I was having some fun with SDXL today, but the truth is, at least for now, if you need a lot of control over the final result, A1111 with ControlNet and good models is still the way to go.

    • @Elwaves2925
      @Elwaves2925 10 месяцев назад +2

      So far I've found SDXL is better for non-human generations but SD1.5 is better for humans. Obviously that will likely change and it's because of the models and loras available to SD1.5, which you mention.

    • @manticoraLN-p2p-bitcoin
      @manticoraLN-p2p-bitcoin 10 месяцев назад +2

      @@Elwaves2925 Now that you mention it... Yesterday I made some images, painting style, of 1800s ship battles. It was amazing.

    • @Elwaves2925
      @Elwaves2925 10 месяцев назад +1

      @@manticoraLN-p2p-bitcoin Nice idea for a prompt, I'll be trying that myself. Cheers.

  •  10 месяцев назад +2

    My preferred models: Photon, Realistic Vision 5.1, RunDiffusion, Reliberate, DreamShaper, and a mix of them during upscaling :)

  • @hleet
    @hleet 10 месяцев назад

    Very nice. Wondering if it would be much better with SDXL 😮

  • @BAIGirls
    @BAIGirls 8 месяцев назад

    Love your videos, so informative. I changed from Realistic Vision to Westmix recently. Both are very good. One thing I find all these models still have trouble with is fingers. Being a programmer myself, I wonder if it is possible for SD to detect that a subject is humanoid and not exceed five fingers. It would reduce the need for photoshopping xD

  • @ParvathyKapoor
    @ParvathyKapoor 10 месяцев назад

    One of the best models.

  • @gu9838
    @gu9838 8 месяцев назад +1

    I don't think real modeling is going away anytime soon, as AI still has some issues and AI-to-video still has a ways to go. But for me, as a photographer in real life, it helps me better visualize the types of photos I want to do.

  • @NeoFlashXx
    @NeoFlashXx 9 месяцев назад +1

    I love this checkpoint and constantly use it... but we're not there yet. I'd say we'll be able to replace models when image-gen AI can do more complex poses, like breakdance poses. I tried DWPose with the editor (from your other video, which is an amazing tool) and I'm still getting nothing close to real complex poses.

  • @tomb.3808
    @tomb.3808 10 месяцев назад

    Thanks Olivio! I'm following your instructions for img2img upscaling with the SD Upscale script and using the same settings as you, but for each upscaled tile it generates a completely new picture. How can I fix that?

  • @ayrengreber5738
    @ayrengreber5738 10 месяцев назад +1

    Excited to try this out!

  • @captnrobin275
    @captnrobin275 8 месяцев назад

    6:22 You send the image to img2img, set up the SD upscaler, add the add-details LoRA, but which denoising level do you set before generating?

  • @plejra
    @plejra 9 месяцев назад +1

    Can you explain the VAE and no-VAE model variations, with a use-case demo?

  • @japrogramer
    @japrogramer 10 месяцев назад +1

    Influencers can't compete from a branding perspective. It's game over for them.

  • @johnmcaleer6917
    @johnmcaleer6917 10 месяцев назад +4

    I use this model and find it very, very good. My main criticism of most of these realistic models is the yellow colour cast they seem to produce; they nearly always need colour correction in Photoshop. A minor gripe, but something I've noticed... Nice work as always.

    • @southcoastinventors6583
      @southcoastinventors6583 10 месяцев назад +2

      Can you imagine how much you would have to pay if you wanted IRL versions of these models $$$

    • @polystormstudio
      @polystormstudio 9 месяцев назад +1

      @@southcoastinventors6583 would it cost more than the time it would take an AI artist to get the model to wear an accurate representation of the client's new line of clothing, if that's even possible? And if the client wants it to be blown up to a billboard, would a 20K upscaled version be as photorealistic as a photograph?

  • @rolandv.6663
    @rolandv.6663 10 месяцев назад +2

    The little typo ("photgraphy", 3:57) leads me to a question: is there a way to see which parts of the prompt are really used/understood and which need to be rewritten to work?

  • @malcolmreynolds4099
    @malcolmreynolds4099 10 месяцев назад

    great video, i still hope that you will take my Serenity model for a spin some day :)

  • @iljabuinitski9745
    @iljabuinitski9745 9 месяцев назад

    When I do exactly what's shown in the video, should the result be one-to-one, or can it still differ? In my case it is almost the same, but, for example, the lights in the background are missing and the facial expression is not totally the same, though the model itself is pretty similar. Is that normal?

  • @phillipberenz4284
    @phillipberenz4284 10 месяцев назад +1

    I'd like to use this to generate a model and then train a LoRA on that model so I consistently output the same-looking person. Can you tell me the best way to go about this?

  • @AlgorithmInstituteofBR
    @AlgorithmInstituteofBR 10 месяцев назад +2

    I love how you always show Black women love!!

    • @southcoastinventors6583
      @southcoastinventors6583 10 месяцев назад +1

      No such thing as black or white women

    • @masterkc
      @masterkc 10 месяцев назад

      @southcoastinventors6583 LOL, don't be delusional. There are black women, white women, and Asian women, and there's absolutely nothing wrong with that. We are all different and beautiful in our own ways.

    • @southcoastinventors6583
      @southcoastinventors6583 10 месяцев назад

      @@masterkc Our primary pigment is not capable of reflecting or absorbing all the visible spectrum hence my previous statement

  • @ronaldtaylor9279
    @ronaldtaylor9279 10 месяцев назад

    How can I train the model that you just showed me to be able to attach my artwork to print-on-demand clothing, backpacks, hats, and other items through an artwork website called Printful?

  • @saverioc2929
    @saverioc2929 9 месяцев назад

    Can you please talk about how to get A1111 as a Mac user? What is Realistic Vision in relation to A1111? I'm dumb, I don't know anything about this. Are there any fees? I've used Midjourney, Leonardo, and BlueWillow, but this seems more complex. Can you please unpack it for me, Olivio??

  • @zacharysherry2910
    @zacharysherry2910 9 месяцев назад

    It's scary good

  • @HerrSchobert
    @HerrSchobert 10 месяцев назад +1

    Morning, colleague,
    I love your style, your enthusiasm and most of your videos. I do NOT love this headline 🙂
    Just as Chatty and his colleagues are not going to replace programmers (a professional spends about 10% of the "programming" time on writing code; the rest is mostly understanding what the client actually meant when they asked for a "make it awesome" button), these models will not replace "models" (pun intended).
    If you have ever worked with a (professional) photo model, you will notice IMMEDIATELY, right off the bat, what an intergalactic difference there is between that and any of the "generative AI" tools. There simply isn't a way to really compare the two 😛
    That said, sure, obviously, these tools are shifting jobs around. Just like the industrial revolution did. We all have to constantly learn new tricks. But I bet a keg of self-brewed ale that "models" won't be out of business, just as actresses won't be, just because of these boring, always-the-same-looking stylizers ^_^

  • @MatthewEverhart
    @MatthewEverhart 9 месяцев назад

    I guess I'm dumb. Where do you put the 4x-Ultrasharp Upscaler????

  • @gohan2091
    @gohan2091 10 месяцев назад +1

    Why use SD Upscaler script and not the ultimate SD upscaler script?

  • @Philmad
    @Philmad 9 месяцев назад

    Hi Olivio, another cool video! One question that has haunted me for a while: is there a trick to make the "person" always look at the "camera", something similar to Nvidia's Broadcast functionality?

    • @OlivioSarikas
      @OlivioSarikas  9 месяцев назад

      try "looking at camera" or "looking at viewer" in the prompt

  • @NamikMamedov
    @NamikMamedov 9 месяцев назад

    How can we set this dark theme?

  • @agelessstranger964
    @agelessstranger964 10 месяцев назад

    I am getting terrible results with or without Restore faces. I am using the suggested prompt and the top suggested negative prompt.

  • @michaelbayes802
    @michaelbayes802 10 месяцев назад +3

    Agreed, SDXL still is not better than 1.5 for controlled portraits. I use epiCRealism v5.

  • @KaizokuPim
    @KaizokuPim 10 месяцев назад

    I keep getting double people in my output. What am I doing wrong?

  • @dreamscapeyoutube
    @dreamscapeyoutube 9 месяцев назад

    It’s very nice review of RV Olivio 👍 if you don’t have good gpu like I did, I made a Google colab for RV 5.1 if that helps you

  • @lassebauer
    @lassebauer 10 месяцев назад +1

    Is there an online version of SDXL 1.0 that allows you to add these different models, or do you have to download and do all the setup yourself?

    • @BorSam
      @BorSam 10 месяцев назад

      Leonardo AI has the SDXL model there.

    • @lassebauer
      @lassebauer 10 месяцев назад

      @@BorSam Thanks for the suggestion. I signed up, but they don't even have SDXL 1.0, let alone Realistic Vision, as far as I can tell...

  • @robotbobby7354
    @robotbobby7354 10 месяцев назад +2

    I switch between this model and epiCRealism. Some feel Realistic Vision v4 is better than v5.

  • @sparetent
    @sparetent 9 месяцев назад

    It doesn't say WHERE to install the upscaler :(

  • @AntonRellik1123
    @AntonRellik1123 9 месяцев назад

    How do I add the VAE, clip skip setting, and noise multiplier at the top?

  • @XiaoHui3103
    @XiaoHui3103 9 месяцев назад

    Hello, I'm hoping the next video can include subtitles so I can watch it on my phone; without an uploaded subtitle file, auto-translate won't start.

  • @nefwaenre
    @nefwaenre 10 месяцев назад

    How do I get the hires fix option for SD? This model looks amazing, I wanna try!!

    • @DiffusionHub
      @DiffusionHub 10 месяцев назад

      It should come out of the box in the latest A1111.

  • @richardglady3009
    @richardglady3009 10 месяцев назад

    Beautiful models…striking actors watch out. The creation process…way over my head (that’s a reflection on my ignorance). Thank you.

  • @JoshMcCann-qq8wb
    @JoshMcCann-qq8wb 9 месяцев назад

    I don't get your prompt: analog photography, and then DSLR. They are opposites.

  • @mAur3lius
    @mAur3lius 10 месяцев назад +1

    Why is there also an inpaint model? Can the inpaint one be used for both the initial render and inpainting, or do I use the normal one to render and then switch to the inpaint one for that?

    • @DiffusionHub
      @DiffusionHub 10 месяцев назад

      Yes, you can use Realistic Vision with inpaint and you can use it as an initial render too :)

  • @scottgust9709
    @scottgust9709 9 месяцев назад

    I wonder what this guys real living room looks like ;)

  • @igniteparth_
    @igniteparth_ 10 месяцев назад

    Please make more videos on Deforum for making AI videos.

  • @therookiesplaybook
    @therookiesplaybook 10 месяцев назад +1

    You can just remove the finger with the remove tool or content-aware fill it.

  • @maxmillion4216
    @maxmillion4216 10 месяцев назад

    This all sounds so complex. I'll stick to taking photos with my camera.

  • @modimihir
    @modimihir 10 месяцев назад

    What's that editing software that looks like Photoshop?

    • @modimihir
      @modimihir 10 месяцев назад

      nvm found it, Affinity Photo in case anyone else is wondering

  • @mindlessfreak2000
    @mindlessfreak2000 10 месяцев назад +1

    SD Upscale seems to make the photo's metadata disappear, so I end up losing my prompts. Does anyone have any ideas on how to fix that? I've been adding it back in manually since I noticed it was happening (see the sketch after this thread).

    • @wrOngplan3t
      @wrOngplan3t 10 месяцев назад

      How do you add it back though?

    • @Elwaves2925
      @Elwaves2925 10 месяцев назад

      @@wrOngplan3t I'm curious about that too. I have the same issue but with Photoshop.
      I wonder if IrfanView can do it as I know it can read the metadata?

    • @wrOngplan3t
      @wrOngplan3t 10 месяцев назад

      @@Elwaves2925 Apparently a program called Tweakpng (haven't looked into it yet)
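
The manual step mentioned at the top of this thread, putting the prompt back after upscaling, can be scripted. A minimal sketch with Pillow, assuming the A1111 convention of storing the prompt and settings in a PNG text chunk named "parameters" (file names are placeholders):

```python
# Minimal sketch: copy the "parameters" text chunk (prompt + settings) from the
# original PNG into the upscaled PNG that lost it.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

original = Image.open("original.png")      # still carries its generation metadata
upscaled = Image.open("upscaled.png")      # lost the metadata during upscaling

params = original.text.get("parameters")   # A1111 writes the prompt and settings here
meta = PngInfo()
if params:
    meta.add_text("parameters", params)

upscaled.save("upscaled_with_prompt.png", pnginfo=meta)
```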

  • @lonniehensley661
    @lonniehensley661 9 месяцев назад

    Ad on YouTube: "Happiness this summer is a new Chevy." My response: HAPPINESS IS NOT HAVING ALL THE OVERPRICED PAYMENTS OF A NEW CHEVY THAT DEPRECIATES IN VALUE AS SOON AS YOU DRIVE IT OFF THE LOT!! LOL

  • @mamandapanda185
    @mamandapanda185 10 месяцев назад

    dystopic. the wave's going wall up and crash out.

  • @telifsiz
    @telifsiz 10 месяцев назад +1

    Hi, I have 2 questions, bro.
    1. 4060 Ti 16GB 128-bit GDDR6 vs 4070 12GB 192-bit GDDR6X: which one should I choose for Stable Diffusion?
    2. I have a 1660 Ti. Can I use Realistic Vision 5.1?
    Thank you, bro.

    • @wrOngplan3t
      @wrOngplan3t 10 месяцев назад

      More memory is better, but I don't know the other differences between the 4070 and 4060 Ti well enough to justify any "advice". Maybe you also game, for example, so there could be other factors to consider. Or you could consider an AMD GPU: AMD's best, the RX 7900 XTX, has 24 GB, and in my country it's around the same price as a 4070 with 12 GB... I got a 4080 at the end of last year; had I known about all this AI / SD stuff that's been happening lately, I'd probably have gone AMD. It's got 16 GB, and I sometimes get out-of-memory errors (not often, though).
      The 1660 Ti's 6 GB of memory should be enough (4 GB is the minimum afaik), but you'll need to enable some memory-saving options (sketched after this thread); I haven't really looked into that, as I skimmed over that section of Stable Diffusion IIRC. But since you already have the card, it's free to try and see how it goes.

    • @DiffusionHub
      @DiffusionHub 10 месяцев назад +1

      go for more VRAM

    • @Elwaves2925
      @Elwaves2925 10 месяцев назад +1

      You might notice a bit of a speed difference, but it'll be negligible and soon you won't even notice it. So, as others have said, go for more VRAM. Earlier this year I faced a similar choice, went for more VRAM over speed, and it was the right decision.
      If you're capable of running one of the Stable Diffusion UIs on a 1660 Ti (A1111, Comfy, etc.), then you should be able to run RV5.1. The best way to know is to try it.

    • @telifsiz
      @telifsiz 10 месяцев назад

      @@DiffusionHub But GDDR6X vs GDDR6, is that important? And the 128-bit vs 192-bit bus, and the CUDA cores; the 4070 has more of all of that. Should I still get the 4060 16 GB?

    • @thebrokenglasskids5196
      @thebrokenglasskids5196 10 месяцев назад

      @@telifsiz When it comes to AI generation, VRAM is king above anything else.
      No matter what else the card is capable of, it's game over once you run out of VRAM.
      If you plan on doing any training, that can happen quicker than you might think. Always go with the card that has the most VRAM. It's all that matters when it comes to AI.
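
The memory-saving options mentioned above look roughly like this when driving Stable Diffusion through the diffusers library (a minimal sketch; the model repo name is an assumption). In A1111 the rough equivalents are the --medvram and --lowvram launch flags:

```python
# Minimal sketch: common VRAM-saving settings for a 6-8 GB card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # assumed Hugging Face mirror of the model
    torch_dtype=torch.float16,               # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()              # compute attention in slices instead of all at once
pipe.enable_model_cpu_offload()              # keep idle sub-models in system RAM

image = pipe("RAW photo, portrait, soft window light",
             num_inference_steps=25).images[0]
image.save("low_vram_test.png")
```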

  • @Steamrick
    @Steamrick 10 месяцев назад +1

    I'm not big into realistic models. Instead, I've been really enjoying the SXZ Luma model. It's got a semi-realistic look with strong lighting effects that I really like.

  • @Prelmable
    @Prelmable 10 месяцев назад +6

    The worst part of RV5.1 is that all female eyebrows are the same exact eyebrow. Sometimes darker, sometimes lighter, but always the same shape. Once you've seen it, you can't unsee it.

    • @ccl1195
      @ccl1195 10 месяцев назад +6

      Thank you- it gets worse. RV is polluted by the appearance of one specific woman. She has thick eyebrows, generally green or teal piercing eyes, thick generally auburn hair, and a very recognizable nose with a pronounced ridge, terminating in a sharp pinch, and then fanning out into a large triangular bulb. Once you see it, you will see how this likeness has leaked out into absolutely *tons* of your RV results.
      This is not limited to RV- it's in other models too, probably due to loads and loads of merges. I've tried to talk about this but very few people have recognized it yet. But yes- once you see it, you cannot un-see it, and a lot of the novelty wears off.

    • @southcoastinventors6583
      @southcoastinventors6583 10 месяцев назад +2

      Is that really what people are looking at

    • @Prelmable
      @Prelmable 10 месяцев назад +3

      @@ccl1195 Right. I've often had to force darker eyes via prompt, because otherwise they became too light. After working with different models for a while, you realize how inflexible all the available models are in the end. Not bad to get inspired, but in any case still more toy than tool.

    • @ccl1195
      @ccl1195 10 месяцев назад +3

      @@Prelmable Thanks. I agree. I can see you are a knowledgeable user, so I'll complain a little. The most valuable use for me at this point is slowly refining or re-imagining my own actual artwork, in very small sequential steps. Trying to spin prompts into gold is a fantasy that falls by the wayside at some point if you start paying attention and learn a little bit about the technical side of the tools. I mean, not 100%, but I'm sure you get my meaning.
      I was pretty offended to watch Emad Mostaque in some interviews recently with his big s***-eating grin, hyping his company's tech with some empty platitudes about how it's going to change the world. He was like Edward Norton's CEO character from the movie Glass Onion. Why am I being so harsh? Because while this tech absolutely is changing the world right now, what he's conveniently leaving out is how much more these models actually *fail* than get things right. And if I'm putting my science hat on for a moment, this is really darn important if we're going to let this tech do anything important for us.

    • @pedrogorilla483
      @pedrogorilla483 10 месяцев назад +2

      You can always automatically change faces or any other part of the image with a different prompt, loras, settings or even a different model. If you care about this level of detail, a one-click generation is not enough. Even in real photography you don’t get the perfect image that you’re looking for with only one click.

  • @thewebstylist
    @thewebstylist 10 месяцев назад

    I’ll see how these awesome prompts work in MidJourney 🎉

  • @0A01amir
    @0A01amir 10 месяцев назад

    Yep, it is superb and Chloe is my new Girlfriend.

  • @DejayClayton
    @DejayClayton 10 месяцев назад

    Somehow, the model is only 1.99GB now. Also, "elegant french woman" seems to be Gal Gadot

  • @iKNuDDeL
    @iKNuDDeL 10 месяцев назад

    The channel seems very RV-based. Why not test Juggernaut, epiCRealism or AbsoluteReality?

  • @rando6836
    @rando6836 10 месяцев назад

    I would just inpaint the hands.

  • @dreamlit8500
    @dreamlit8500 10 месяцев назад +2

    When they get these AI models to wear custom clothing from brands or startup brands, it's over.

  • @kittyinasock
    @kittyinasock 10 месяцев назад

    I'll believe that when I see the first AI version of Sports Illustrated's Swimsuit Edition. :)

  • @aguyfromnothere
    @aguyfromnothere 9 месяцев назад

    Unless they need to hold your product....

  • @theunboxingroom
    @theunboxingroom 8 месяцев назад +2

    I overuse this.

  • @ith83
    @ith83 10 месяцев назад

    Thanks!

    • @OlivioSarikas
      @OlivioSarikas  9 месяцев назад

      Thank you very much for your support :)

  • @allPhoto2008
    @allPhoto2008 9 месяцев назад

    I personally think CyberRealistic is a much better model than even this 5.1.

  • @GS195
    @GS195 10 месяцев назад

    "Replaces human models?"
    Careful how you say that. People are already raging against possibly being replaced by AI.

  • @Hypersniper05
    @Hypersniper05 10 месяцев назад +1

    it looks so real that it looks fake 🤣

    • @wrOngplan3t
      @wrOngplan3t 10 месяцев назад

      So, It's gone so far over the uncanny valley that it ended up on the uncanny mountain? 😂

  • @WifeWantsAWizard
    @WifeWantsAWizard 10 месяцев назад +2

    (1:10) Guys, this is insanity. If you don't use the negative prompts, you'll get the same results. There is no AI that deliberately thinks to itself, "I was going to mess with this guy by giving him a dehydrated model with jpeg artifacts, but since he used both of those in the negative box I guess I'll do it right". Think it through. The prompt boxes are just gibberish that adds "noise" to the generation engine and only the first 100 characters of the "positive" prompt define subject unless there's a background void--like in bad-bullseye photography.

  • @gregs2649
    @gregs2649 10 месяцев назад

    Nobody needs models now, when you can use a free AI model you trained yourself for your business purposes.

  • @klarion
    @klarion 10 месяцев назад

    Nice... Working on putting people out of business. It already happened to musicians.. keep working.

  • @MONTY-YTNOM
    @MONTY-YTNOM 10 месяцев назад +1

    Just too complicated.

    • @holotape
      @holotape 10 месяцев назад

      What part are you stuck at?

  • @Ashleyapples
    @Ashleyapples 9 месяцев назад

    Still, I like Photon better.

  • @futurepresident2598
    @futurepresident2598 7 месяцев назад

    Yeah, Realistic Vision still can't handle fingers and toes. epiCRealism is so much better.

    • @stormdesertstrike
      @stormdesertstrike 5 месяцев назад

      Merge Realistic Vision and EpicRealism and you will get the best images
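
A straight weighted merge like the one suggested above only takes a few lines. A minimal sketch using safetensors; the file names and the 50/50 ratio are assumptions, and A1111's Checkpoint Merger tab does the same thing with a slider:

```python
# Minimal sketch: weighted average of two SD 1.5 checkpoints (a simple merge).
from safetensors.torch import load_file, save_file

a = load_file("realisticVisionV51.safetensors")   # placeholder file names
b = load_file("epicrealism.safetensors")
alpha = 0.5                                        # 0.0 = all Realistic Vision, 1.0 = all epiCRealism

merged = {}
for key, tensor in a.items():
    if key in b and b[key].shape == tensor.shape:
        merged[key] = (1 - alpha) * tensor + alpha * b[key]
    else:
        merged[key] = tensor                       # keep the first model's weights where keys differ

save_file(merged, "rv_epic_merge_0.5.safetensors")
```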

  • @D0UCHEBAGGINS
    @D0UCHEBAGGINS 10 месяцев назад

    So what are we gonna do when AI replaces every single job in every industry in existence?

  • @jeremysomers2239
    @jeremysomers2239 9 месяцев назад

    wow, epic - though buddy you have to stop just doing hot women with these realistic ones...

  • @sneedtube
    @sneedtube 10 месяцев назад +1

    There's still that dull plastic skin effect, not impressed

  • @bruhmoment23123
    @bruhmoment23123 10 месяцев назад +1

    0:13 WOMEN NO I WANT MEN

    • @southcoastinventors6583
      @southcoastinventors6583 10 месяцев назад

      More importantly where are the thicck women

    • @bruhmoment23123
      @bruhmoment23123 10 месяцев назад

      @@southcoastinventors6583 NO BUFF MEN

    • @CanadaBlue85
      @CanadaBlue85 10 месяцев назад

      🤮@@bruhmoment23123

    • @wrOngplan3t
      @wrOngplan3t 10 месяцев назад

      @@southcoastinventors6583 Weight slider LoRA, IIRC.

    • @muggzzzzz
      @muggzzzzz 10 месяцев назад

      @@southcoastinventors6583 Overweight women are not normal. Are you a pervert?

  • @sheedee2
    @sheedee2 10 месяцев назад +2

    I have said it many times before and I will say it again...
    All these "realistic" models are the same... the only thing they are doing is adding a higher number to the model name.
    This is not progress.
    Progress is the day that AI does not produce mutated limbs, extra heads, bad hands, etc. As long as that doesn't happen, these models don't mean anything 🥸

  • @l.x6
    @l.x6 10 месяцев назад

    booooobbbbaaaaaaaaa