LATENT Tricks - Amazing ways to use ComfyUI

  • Published: 19 Mar 2023
  • Here are amazing ways to use ComfyUI. This node-based UI can do a lot more than you might think. Latent images especially can be used in very creative ways. You can inject prompt changes. You can combine latent images into new results. You can stop render steps and finish the rendering after you have changed the prompt, sampler, and settings. A world of possibilities.
    #### Links from the Video ####
    Join my Discord: / discord
    ComfyUI Projects ZIP: drive.google.com/file/d/1MnLn...
    ComfyUI Install Guide: • ComfyUI - Node Based S...
    Support my Channel:
    / @oliviosarikas
    Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
    How to get started with Midjourney: • Midjourney AI - FIRST ...
    Midjourney Settings explained: • Midjourney Settings Ex...
    Best Midjourney Resources: • 😍 Midjourney BEST Reso...
    Make better Midjourney Prompts: • Make BETTER Prompts - ...
    My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
    My Affinity Photo Creative Packs: gumroad.com/sarikasat
    My Patreon Page: / sarikas
    All my Social Media Accounts: linktr.ee/oliviotutorials

Comments • 168

  • @DJVARAO · 1 year ago +12

    Man, you are a wizard. This is a very advanced use of SD.

  • @bjornskivids · 11 months ago +4

    Ok, this is awesome. You inspired me to make a 4-sampler comparison-bench which lets me get 4 example pics from one prompt when exploring different engines. It makes sampler/settings comparisons simple and I can crank out sample pics at a blistering pace now. Thank you :)

  • @andresz1606 · 9 months ago +3

    This video is now #1 in my ComfyUI playlist. Your explanation at 17:50 of the LatentComposite node inputs (samples_to, samples_from) is priceless, as is the rest of the video. Looking forward to asking some questions on your Discord channel.

  • @jorgeantao28 · 1 year ago +20

    This is an amazing tool for professional artists. The level of detail you can achieve reminds me of Photoshop... AI art is not a threat to artists, but rather a complement to their work.

  • @JimmyGunawan · 1 year ago

    Great tutorial on ComfyUI! Thanks Olivio~ I just started using this today; reloading the "workflow" really helps with efficiency.

  • @mrjonlor · 1 year ago +18

    Very cool! I’ve been playing with latent composition in ComfyUI for the past couple days. It gets really fun when you start mixing different art styles within the same image. You start getting some really wild effects!

    • @OlivioSarikas · 1 year ago +3

      Thank you. That's a great idea too. I was thinking about using different models in the same image, but then thought that might be too complex for this video

  • @___x__x_r___xa__x_____f______ · 1 year ago +2

    Found this particular course super inspiring. Makes me keen to experiment

  • @lovol2 · 1 year ago +1

    Okay I'm convinced, I will be trying this out, fantastic demo

  • @AllYouWantAndMore · 1 year ago

    I asked for examples, and you delivered. Thank you.

  • @LICHTVII · 9 months ago

    Thank you! Hard to find a no-BS explanation of what does what; this helps a lot!

  • @andrewstraeker4020 · 1 year ago

    Thank you for your excellent explanations. I especially appreciate your excellent English, which is understandable even for non-native speakers.😸
    Every time I watch your videos, I want to run and experiment. New ideas and possibilities every time. 😺👍👍👍

  • @MrGTAmodsgerman · 1 year ago +14

    The node system can make things complicated, but it really empowers the potential of a lot of stuff. And seeing it used for AI pictures gives them more meaning and control, so the result could be considered artistic again, since with ComfyUI the human takes a huge amount of control.

    • @KyleandPrieteni · 1 year ago +1

      YES, have you seen the custom nodes on Civitai? They are nuts, and you get even more control.

    • @MrGTAmodsgerman · 1 year ago +1

      ​@@KyleandPrieteni Actually no, i haven't. Thanks for the info.

  • @Dizek · 1 year ago

    Wow, discovered Comfy just recently, but it is more than it looks. You can even feed the same prompt to all the available samplers to test which ones work best with the style you are going for.

  • @workflowinmind · 1 year ago +12

    Great examples. In the first one you should pipe the primary latent into the subsequent ones, as you are over-stepping at each image (in your example, the last image has all the previous steps).

    • @bonecast6294 · 7 months ago

      Could you possibly explain it in more detail or provide a node setup? Is his node setup not correct?

  • @caiubymenezesdacosta5711 · 1 year ago

    Amazing, I will try it this weekend. As always, thanks for sharing with us.

  • @remzouzz · 1 year ago +3

    Amazing video! Could you also make a more in-depth video on how to install and use ControlNets in ComfyUI?

  • @METALSKINMETAL · 1 year ago

    Excellent, thanks so much for this video!

  • @JonathanScruggs · 1 year ago +3

    The more I play with it, the more I'm convinced that this is the most powerful UI to Stable Diffusion there is.

    • @user-hz4fz5qy7l · 10 months ago

      The moment Houdini MLOPs is updated to be able to use LoRAs and LyCORIS, it's going to be the most powerful.

  • @Spartan117KC · 7 months ago

    Great video as always, Olivio. I have one question: you say at 9:28 that "you can do all of this in just one go". Were you referring to the 4x upscale with less detail that you had already mentioned, or to another way to do the latent upscale workflow with better results and fewer steps?

  • @ColePatterson-mw2gy · 7 months ago +1

    Whoa! Jeez! This looks complicated. All I searched for was how to use prompt weights. I can handle anything: algebra, calculus, etc., but when it comes to node editors, I check out ASAP.

  • @CrimsonDX · 10 months ago

    That last example was insane O_O

  • @stephancam91 · 1 year ago +10

    Awesome video - very educational - thank you! I've been meaning to get ComfyUI installed - just have to find the time. (I swear, I'm having to update my AI skills weekly - it's nearly as time consuming as keeping up with Unreal Engine, lol).

    • @OlivioSarikas · 1 year ago +3

      Thank you very much. ComfyUI is a blast to play with. This will suck up your hours like nothing 😅

    • @jeremykothe2847 · 1 year ago +1

      The good news is it's easy to install. The "bad" news is that it really needs more functionality to be useful, but it has a lot of promise if it's extended. If they managed to get the community to write nodes for them....

    • @Mimeniia · 1 year ago +2

      Waaaaaaay easier and quicker than Auto1111 to install...but a bit more intimidating to use on an advanced level.

    • @stephancam91 · 1 year ago

      @@Mimeniia Thanks so much. I'm used to using node based programs (DaVinci Resolve + Red Shift). Hopefully, I'll be able to pick it up quickly! Just a matter of finding the time.

  • @rsunghun · 10 months ago

    you are so smart and amazing!

  • @wolfganggriewatz3522 · 10 months ago

    I love it.
    Do you have plans for more of this?

  • @panzerswineflu · 1 year ago

    In a sea of AI videos I started skimming through, this one got my subscribe. Now if only I had a rig to play with this stuff.

  • @digitalfly73 · 9 months ago

    Amazing!

  • @TSCspeedruns · 1 year ago +1

    ComfyUI is amazing, I love it

  • @enriqueicm7341 · 5 months ago

    It was useful!

  • @rakly3473 · 6 months ago

    This UI needs some Factorio influence, it's so chaotic!

  • @silentwindyou · 1 year ago

    This method seems similar to a sequence of [from:to:when] prompts in the WebUI, with the steps added up and an image output added after each prompt's custom steps finish. Nice process!

    • @Mirko_ai · 1 year ago

      Never heard about that in the WebUI. Is this possible? o.o

    • @silentwindyou · 1 year ago

      @@Mirko_ai Because [from:to:when] is also applied in latent space, the same logic applies, but by default the WebUI outputs the result from the last step only, not after each [from:to:when] prompt.
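
    For reference, A1111's prompt editing syntax is [from:to:when], where "when" is either a step number or a fraction of the total steps. Two illustrative examples (hypothetical prompts, not from the video):

        [a photo of a cat:a photo of a dog:10]   -> "cat" for the first 10 steps, then "dog"
        [a photo of a cat:a photo of a dog:0.5]  -> switches halfway through the total steps

    The chained-KSampler setup in the video achieves a similar staged change, but also lets you save an image after each prompt segment.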

  • @alexlindgren1 · 6 months ago

    Nice one! I'm wondering if it's possible to use ComfyUI to change the tint of an image. Let's say I have an image of a living room, and I want to change the tint of the floor in the living room based on an image I have of another floor; how would you do that?

  • @Kyvarus · 1 year ago +6

    The only thing I wish Comfy had is the ability to sequentially take frames from a video in order to use them as an OpenPose mask for each generation over time. Video generation would be amazing.

    • @Dizek · 1 year ago

      I'm new, but can you select a folder of images? You could pre-split the images and use them.

    • @Kyvarus · 1 year ago +1

      @@Dizek There is no way within ComfyUI to control the selection of images in sequential order, which means you can only have a static reference image; no one has bothered to program in a way for us to load multiple images from a folder in order yet. Honestly, if I get the time this week I'll throw the script together. The add-ons for ComfyUI are very powerful, and it's likely not a big issue. The main issue is that we need the end-of-image-generation event to call the next image to load, which will require someone to learn the API for the software.
      So even if you have some pre-split images in a folder, there is no way to call the next image in the folder by index.

    • @anuroop345 · 10 months ago

      @@Kyvarus We can save the workflow in API format, then use a Python script to input images in sequence, save the output images, and later combine them.

    • @Kyvarus · 9 months ago

      @@anuroop345 I'd never heard of using the saved workflow files as an API format for Python scripts, but that sounds really quite nice. Something along the lines of: "Break the loaded video down into input frames, standardize the input frame size, decide the FPS of the final render, pick an appropriate number of frames, load up the workflow API, enter the input picture, model selection, LoRAs, prompt, etc., and run per image in a for loop over the number of images. Recompile an MP4 from the folder's image sequence; done?" I guess this could also be used to compile OpenPose videos from standardized characters acting in natural video, which would be great, allowing more natural posing without the artifacts other ControlNet types produce over video.
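
    A minimal sketch of the approach described in this thread, assuming a local ComfyUI server on its default port (8188), a workflow exported via "Save (API Format)", and frames already pre-split into ComfyUI's input folder. The LoadImage node id ("10") is hypothetical; look it up in your own workflow_api.json:

        import json
        import urllib.request
        from pathlib import Path

        API_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI endpoint
        LOAD_IMAGE_NODE = "10"                    # hypothetical id; check your workflow_api.json

        workflow = json.loads(Path("workflow_api.json").read_text())

        # Frames pre-split from the video (e.g. with ffmpeg) into ComfyUI's input folder
        frames = sorted(Path("ComfyUI/input").glob("frame_*.png"))

        for frame in frames:
            # Point the LoadImage node at the next frame, then queue the job
            workflow[LOAD_IMAGE_NODE]["inputs"]["image"] = frame.name
            payload = json.dumps({"prompt": workflow}).encode("utf-8")
            req = urllib.request.Request(API_URL, data=payload,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

        # ComfyUI works through the queue in order; the saved outputs can then
        # be recombined into a video, again e.g. with ffmpeg.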

  • @petec737 · 6 months ago +2

    The latent upscaler is not adding more details as you mentioned; it's using the nearest pixels to double the size (as you picked), similar to how you'd resize an image in Photoshop. The KSampler is what adds more details. That's a confusion I see many people making. For best quality you don't upscale the latent; you upscale the image with the UpscaleModelLoader, then pass it through the KSampler.

    • @bobbyboe · 5 months ago

      I wonder, then, what latent upscaling is useful for?
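
    To make the two comments above concrete: a minimal sketch, assuming a Stable Diffusion 1.x latent of shape [batch, 4, height/8, width/8], of what the latent upscale step itself does - pure interpolation. Any new detail comes only from the KSampler that re-denoises the enlarged latent afterwards, which is also why the result can drift from the original:

        import torch
        import torch.nn.functional as F

        latent = torch.randn(1, 4, 64, 64)  # stand-in for the latent of a 512x512 image
        bigger = F.interpolate(latent, scale_factor=2.0, mode="nearest")
        print(bigger.shape)                 # torch.Size([1, 4, 128, 128]) -> a 1024x1024 image
        # No detail has been added yet; a KSampler pass over this enlarged
        # latent is what "invents" the extra detail.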

  • @pratikkalani6289 · 1 year ago

    I love ComfyUI; this has so many use cases. I'm a VFX compositor by profession, so I'm very comfortable with node-based UIs (I work in Nuke). I wanted to know: if we want to use ComfyUI as a backend for a website, can I run it on a serverless GPU?

  • @HN-br1ud · 1 year ago

    I enjoyed watching this ~ thank you! ^^

  • @roroororo7088 · 1 year ago

    I like videos about this UI. Can you do examples of clothes changing, please? (It's harder and inpaint-like, but more friendly to use.)

  • @darmok072 · 1 year ago +1

    How did you keep the image consistent when you did the latent upscale? When I try your wiring, the face of the upscaled image is quite different.

  • @NoPr0gress · 10 months ago

    thx

  • @MishaJAX_TADC4 · 1 year ago

    @OlivioSarikas Hi, can you explain: when I use Latent Upscale, my smaller image is converted to a different image. Do you have any idea how to fix it, or is there something wrong with what I'm doing?

  • @MAKdotCZ · 9 months ago

    Hi Olivio, I wanted to ask if you could give me some advice. I have been using SD AUTOMATIC1111 so far and now I am trying ComfyUI.
    My question: is there any way to push a prompt and settings to ComfyUI from the images generated by SD A1111?
    In SD A1111 I use PNG Info and then send to txt2img. Is there a similar way to do this in ComfyUI, but from an image I generated in SD A1111?
    Thank you very much, MAK

  • @lisavento7474 · 4 months ago

    ANYTHING YET to fix wonky faces in DALL-E crowds? I have groups of monsters! I've tried prompts like "asymmetrical, detailed faces" and it did a little better, but I have perfect images except for the crowds in the background that I need to fix.

  • @ryanhowell4492 · 1 year ago

    Cool Tools

  • @amva3455 · 1 year ago

    Is it possible to train my own custom models with ComfyUI, like DreamBooth? Or is it just for generating images?

  • @DemonPlasma · 9 months ago

    Where do I get the RealESRGAN upscaler models?

  • @LeKhang98 · 1 year ago

    Awesome channel. I have 2 questions, please help:
    - Is there any way to import real-life images of objects (such as cloth, a watch, a hat, a knife, etc.) into SD?
    - Do you know how to keep these objects consistent? I know about making consistent characters, but that works for the face and hair only, while I want to know how to apply it to objects. (Example: one knife with multiple different girls and different poses.)

    • @OlivioSarikas · 1 year ago +1

      Thank you :)
      - Yes, you can do that in ComfyUI with the image loader
      - If you want a model that is trained on an object, you would need to create a LoRA or DreamBooth model

    • @krz9000 · 1 year ago +1

      Create a LoRA of the thing you want to bring into your shot.

    • @LeKhang98 · 1 year ago

      @@OlivioSarikas ​ @Chris Hofmann Thank you. I'm not sure if it can work with clothes, though. I have some t-shirts and pants with logos, letters, or images on the front. Depending on the pose of different characters, the t-shirt, pants, and their images will change accordingly. That's why I'm hesitant to learn how to use AI tools since I don't know if I could do it or if I should just hire a professional photographer and model to do it the traditional way. Anyway, I do believe that in the near future, everyone should be able to do it easily. This is so scary & exciting.

  • @kennylex · 1 year ago +2

    I see that you use things like "RAW Photo", "8k uhd DSLR" and "High quality", which I often say are useless prompts that do not do what folk think they will do. RAW is just uncompressed data that can later be converted, so you do not want that in an image, since it gives flat colors; what folk often want is a style like "Portrait Photo", which is often a color setting in cameras. BUT!
    My idea is: can you use the nodes to make side-by-side images where "RAW photo" is compared with an image that does not have that prompt, or replaces it with other prompts like "Portrait photo", "warm colors" and "natural color range"? With nodes you can make sure you get the same seed, and the results are made at almost the same time.
    And when you write "high quality", what do you want? The AI cannot make higher graphical quality than it is capable of, but I guess it changes something, since so many use that prompt. So could you do some tests to see what the most popular prompts actually do, like whether "Trending on Artstation" is better than "Trending on Flickr" or "Trending on e621"?
    Edit: A tip for all: rather than writing "African woman", use a nationality like "Kenyan woman" to get that nice skin tone and great-looking females. If you pick nations further south you get a rounder face on males that can give a rather cool look; nations in northern Africa have a lighter skin tone and often an Arabic or ancient Roman look.

  • @PaulFidika · 8 months ago

    Olivio woke up this morning and chose violence lol

  • @MaximusProxi · 1 year ago

    Hey Olivio, hope your new PC is up and running now!

    • @OlivioSarikas · 1 year ago

      yes, it is. It really was the USB-Stick that was needed. Didn't connect the front RGB yet though ;)

    • @MaximusProxi · 1 year ago

      Glad to hear! Enjoy the faster rendering :)

  • @DezorianGuy · 1 year ago +3

    I appreciate your work, but can you make a video in which you share the basic working process - I literally mean a step-by-step guide? In your 2 released videos about ComfyUI I barely understood what you were talking about or which nodes are connected to which (it looks like spaghetti world to me).
    If you could just create single projects from the start.

    • @OlivioSarikas · 1 year ago +1

      Hm... that could be an interesting idea. In the meantime, the best way to go about this is to look at A1111 and compare the individual parts to the nodes in ComfyUI, because they are often similar or the same. The Empty Latent Image, for example, is simply the size setting you have in A1111. And the KSampler is just the render settings in A1111, but with some more options in there.

    • @DezorianGuy · 1 year ago

      @@OlivioSarikas I finally managed to replicate your project now; it was a bit confusing at first. Do those checkpoint files one can choose from provide different art styles?

    • @lovol2 · 1 year ago

      I think if you've not used Automatic1111 before looking at this, your head will explode!
      It will be worth the time and effort to install Automatic1111; then you will be familiar with all of the terms he is using here, and also see the power in all the mess and chaos of the little lines flying all over the place.

  • @benjamininkorea7016 · 1 year ago

    Having a lineup of beautiful girls of different races like this is going to make me fall in love about 10 times per hour I think. Fantastic work as always!

    • @OlivioSarikas · 1 year ago

      Thank you very much. Yes this is great to show the beauty of different ethnicities :)

  • @BrandinoTheFilipino · 5 months ago

    Where can I get the Deliberate v2 model?

  • @paulopma · 1 year ago

    How do you resize the SaveImage nodes?

  • @miasik1000 · 1 year ago

    Is there a way to set the upscale factor? 1.5, 2, ...

  • @void2258 · 1 year ago

    Any way to make this variable? I ask because a well-known issue with this kind of repetition is accidental forgotten/mistaken edit breakage. When you have to edit in a bunch of different places, you can either forget one or more, or make one or more mistakes between them and break the symmetry. Being able to feed it "raw portrait...of a X year old Y woman..." and write the rest of the prompt one time would make this more easily handled. Also, in theory, you could produce the latent WITHOUT the X and Y filled in and add those at each stage, feeding them all from a single latent instead of chaining, though I'm not sure that would work. Similar to the second thing you did, but more automatic.
    I am speaking from a coder's perspective and am not sure if any of this is sensible.

    • @OlivioSarikas · 1 year ago

      ComfyUI is still in early development. Most nodes need more inputs/outputs, and more nodes are needed. So for now things are rather complex, and you need duplicate nodes for every new step you want to do, instead of being able to route things through the same node several times. I'm not sure how you imagine combining different latent images without the x/y setting: if the latent image you provide is smaller, it will stick to the top left corner; if it is not smaller, it will simply replace the latent image you put it on top of. So it needs to be smaller, as there is no latent image weight that can be used to mix the strength, and no mask to mask it out - that would be a different process (the one I showed before).

  • @matthewjmiller07 · 8 months ago

    How can I set up these same flows?

  • @im5341 · 9 months ago

    5:30 I used the same flow, but instead of KSampler I put KSampler Advanced at the second and third stages. 1st KSampler: steps: 12 | 2nd KSampler Advanced: start_at_step: 12, steps: 20 | 3rd KSampler Advanced: start_at_step: 20, steps: 30
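
    Spelled out, each stage's start_at_step equals the step the previous stage ended on, so denoising resumes instead of restarting from noise. A hedged sketch of those settings as data (illustrative only, not actual ComfyUI API code):

        stages = [
            {"sampler": "KSampler",         "steps": 12},                       # denoise steps 0-12
            {"sampler": "KSamplerAdvanced", "start_at_step": 12, "steps": 20},  # resume 12-20
            {"sampler": "KSamplerAdvanced", "start_at_step": 20, "steps": 30},  # resume 20-30
        ]
        # The prompt, model, or settings can change between stages without
        # starting over from pure noise - the trick shown at 5:30.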

  • @ajartaslife · 1 year ago

    Can ComfyUI batch img2img for animation?

  • @jeffg4686 · 3 months ago

    Trying to understand what a latent consists of for a previous image.
    Like, I can see that somehow it's still using the seed or something.
    Is the seed itself stored in the latent or something?
    Any thoughts?
    Update: never mind on this, actually. I see that it likely just holds that as part of the "graph", and the next one has access to it because it's part of the branch that led up to it (guessing).

  • @VisualWebsolutions · 1 year ago

    :) looks familiar :D

  • @maadmaat · 1 year ago +1

    I love this UI.
    Can you also do batch processing and use scripts with this already?
    Creating animations with this workflow would be really convenient.

    • @OlivioSarikas · 1 year ago +1

      Thank you. Not yet, unless you build a series of nodes. I really hope batch processing and looping are coming soon.

  • @beardedbhais4637 · 1 year ago

    Is there a way to add Face restoration to it?

  • @Ibian666 · 1 year ago

    How is this different than just rendering the same image with a single word changed? What's the benefit?

  • @toixco1798 · 1 year ago

    it's the best UI, but I don't think its creator is the kind of person to seriously maintain it, and I think he did it more for fun or curiosity before surely moving on

  • @arnaudcaplier7909 · 1 year ago

    Hi @OlivioSarikas, let me share what I think: I have been working in the domain of creative intelligence (originally CNN-based) since 2017, and your insights are solving problems that I have been facing for years... just crazy stuff ❤‍🔥, you are an absolute genius!
    Great respect for your work. Thank you for the insane value you share with us 🙏

  • @teslainvestah5003 · 11 months ago

    pixel upscale: the upscaler knows that it's upscaling white rounded rectangles.
    latent upscale: the upscaler knows that it's upscaling teeth.

  • @Avalon19511 · 1 year ago

    Olivio, a question: how would I go about putting my face on an image without training (besides Photoshop, of course), or is training the only way?

    • @OlivioSarikas · 1 year ago

      Why not do the LoRA training? It's very easy and fast.

    • @Avalon19511 · 1 year ago

      @@OlivioSarikas Does A1111 recognize image links like midjourney?

  • @mickeytjr3067 · 1 year ago

    One of the things I read in the tutorial is that "bad hands" doesn't work, while (hands) in the negative will remove bad hands.

  • @chinico68 · 1 year ago

    Will it run on Mac??

  • @benjamininkorea7016 · 1 year ago

    I have a question-- in A1111, I can inpaint masked only. I like this, because I can inpaint on a huge image (4K) and get a small detail added but it doesn't explode my GPU.
    Can you think of any way to do this in ComfyUI?

    • @OlivioSarikas · 1 year ago

      I'm not sure if comfyui has "mask-only" inpainting yet.

    • @Max-sq4li · 1 year ago

      You can do it in Auto1111 with the "only masked" feature.

    • @OlivioSarikas · 1 year ago +1

      @@Max-sq4li That's what he said, but the question was how to do it in ComfyUI.

    • @benjamininkorea7016 · 1 year ago

      @@OlivioSarikas Since I watched this video and started using ComfyUI more, I figured you'd have to make the mask in Photoshop (or something) anyway, so probably wouldn't be worth it until they can integrate a UI mask painter.
      So I tried working with a 4K image and using the slice tool in Photoshop instead of a mask, and just exporting the exact section I want to work on. Then I can inpaint what I want, but with the full benefit of the entire render area.
      Working on just a face in 1024x1024 makes things look so amazing, and the output image snaps perfectly back into place in Photoshop. At that resolution, I can redo each eye, or even parts of the eye, with very high accuracy.

  • @Vestu · 7 months ago

    I love how your ComfyUI setup is not overly OCD but a "controlled noodle chaos" like mine :)

  • @linhsdfsdfsdfds4947 · 1 year ago

    Can you share this workflow?

  • @mb0133 · 1 year ago

    Have you figured out how to redirect the models folder to your existing Automatic1111 model folder? That's way too many GB of duplicate files.

    • @benjaminmiddaugh2729 · 1 year ago

      I don't remember what Windows calls it, but the Linux term you want is "symlink." You can make a virtual file or folder that points to an existing one (a "soft" link), or you can make it so the same file/folder exists in multiple places at once (a "hard" link; soft links are usually what you want, though).
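
    A minimal sketch of the soft-link approach in Python; the paths are examples, and on Windows creating symlinks requires admin rights or Developer Mode (the shell equivalents are mklink /D in cmd and ln -s on Linux/macOS):

        import os

        a1111_models = r"C:\stable-diffusion-webui\models\Stable-diffusion"  # example path
        comfy_models = r"C:\ComfyUI\models\checkpoints"                      # example path

        os.rmdir(comfy_models)  # only works if empty; move any models out first
        os.symlink(a1111_models, comfy_models, target_is_directory=True)
        # ComfyUI now reads checkpoints straight from the A1111 folder,
        # with no duplicated gigabytes on disk.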

  • @Silversith · 1 year ago

    The latent upscale randomized the output too much from the original for me, especially if it's a full-body picture. I've output the latent upscale before sending it through the model again, and it basically just reduces the quality more before reprocessing it. I ended up just passing it through the model twice to upscale it.

    • @Silversith · 1 year ago

      Tomorrow I'm gonna try tweaking the code a bit or including some custom nodes to pass the seed from one to the next so it stays consistent and does a proper resize fix

    • @OlivioSarikas · 1 year ago

      In latent upscale you have different upscale methods. Give them a try and see if that changes your result to what you need.

    • @Silversith · 1 year ago

      @@OlivioSarikas I submitted a pull request that passes the seed value through to the next sampler. Seems to work well 🙂

    • @Dizek · 1 year ago

      @@OlivioSarikas Or better, create separate nodes with all the available upscale methods and try them all at once.

  • @digwillhachi · 1 year ago

    Not sure what I'm doing wrong, as I can only generate one image; the others don't generate 🤷🏻‍♂

  • @animelover5093 · 1 year ago

    sigh .. not available on Mac at the moment : ((

  • @redregar2522 · 9 months ago

    For the 4 girls example, I have the issue that the face of the first image is always messed up (the rest of the images are fine). Anyone have an idea, or the same issue?

    • @OlivioSarikas · 9 months ago +1

      might be because you render it low res. If you upscale it, it should be fine. Or try more steps on the first image, or try a loop on the first image to render it twice

  • @dax_prime1053 · 1 year ago

    This looks ridiculously complex and intimidating.

  • @dxnxz53 · 19 days ago

    You're the best, man!

  • @LouisGedo · 1 year ago

    👋

  • @MisakaMikotoLuv · 1 year ago

    tfw you accidentally put the bad tags into the positive input

  • @GiggaVega · 1 year ago

    Hey Olivio, this was an interesting tool, but I really don't like the layout; it's too all over the place. Sorry to spam you, but I tagged you in a video I just uploaded to YouTube about why I don't feel real artists have anything to worry about regarding AI art replacing them. Feel free to leave your thoughts on that topic. Maybe a future video?
    Cheers from Canada, bro.

  • @OlivioSarikas · 1 year ago +1

    #### Links from the Video ####
    Join my Discord: discord.gg/XKAk7GUzAW
    ComfyUI Projects ZIP: drive.google.com/file/d/1MnLnP9-a0Pif7CZHXrFo-pAettc7KAM3/view?usp=share_link
    ComfyUI Install Guide: ruclips.net/video/vUTV85D51yk/видео.html

  • @dsamh · 1 year ago

    Olivio, try Bantu, or Somali, or another specific culture or people rather than referring to races by color. It gives much better results.

  • @blisterfingers8169 · 1 year ago +1

    So there's no tools for organizing the nodes yet, I take it? xD

    • @OlivioSarikas · 1 year ago

      Not sure what you mean by that. You can move them and group them if you want.

    • @jurandfantom · 1 year ago +1

      Even a simple spread-out would help. I think that distinguishes a person who has worked with node-based systems from those without experience.
      But I see you have a midpoint to spread the connector, so it's not that bad.

    • @blisterfingers8169 · 1 year ago

      @@OlivioSarikas I use node systems like this a ton and I've never seen a messier example. No big deal, just makes me assume the organisation tools aren't quite there yet.

  • @sdjkwoo · 9 months ago

    24,000 STEPS? MY PC STARTED FLYING, IS THAT NORMAL??

  • @zengrath · 1 year ago +1

    Ugh, another app that works on Nvidia only, or CPU only. My 7900 XTX would really like to try some of these new things.

    • @scriptingkata6923 · 1 year ago

      Why should new stuff be using AMD, lol?

    • @jeremykothe2847 · 1 year ago +2

      When you bought your 7900 xtx, were you aware that nvidia cards were the only ones supported by ML?

    • @zengrath · 1 year ago

      @@jeremykothe2847 Everything I read when doing my research indicated it also worked with AMD, at least on Linux, with support coming to Windows. Even if that wasn't the case, I still wouldn't support Nvidia, given how they are treating their business partners the same way Apple does these days, forcing them to say only good things or withholding review samples (which they have already done over and over), not to mention the things they are doing to their manufacturing partners as well. However, what I didn't know before buying the card is that the 7900 XTX doesn't work even on Linux, and it appears AMD could be months away or more from updating ROCm for RDNA3. All the AMD fanboys acted like it wasn't an issue at all. I've even had long arguments with AMD users claiming I just don't know what I am doing, yet I've spoken with several developers now, trying to see if they can walk me through getting their stuff working on AMD on Linux, and sadly they confirm we have to wait. At least on Windows a program called Shark is making incredible strides in doing various tasks like image generation and even language models, and hopefully it's only a short time before most common features work and can compete with platforms that only support Nvidia. But it makes me wonder: if they can do it, why can't others, and why do others continue to use only protocols that support Nvidia, when anytime something comes out on more open platforms for AMD, Nvidia users can also use it with no issue? How is it fair that AMD consumers can't touch products made exclusively for Nvidia, but Nvidia users can go the other way? It's the same stuff with Steam's Index/Oculus vs Meta: Meta buys up all the major VR devs, kills the VR market by segmenting it to death, and lied when they bought the crowd-sourced open Oculus tech, saying they would keep it open and not require Facebook accounts, but they did anyway, and the Kickstarter backers can't do anything about it now; Facebook has too much money and can do whatever they want. Yet when games come out on Steam only, people with Meta or any other headset can come to Steam and play with no issue. It's incredibly unfair, and the only reason this keeps happening over and over again is that the public allows it. And it's the public's fault when these horrific companies end up forming monopolies and taking over the world one day, as described in most sci-fi novels.

    • @GyroO7 · 1 year ago +4

      Sell it and buy an Nvidia one.
      AMD is useless for anything other than gaming (and even there it has poor ray tracing and no DLSS).

    • @zengrath · 1 year ago

      @@GyroO7 Not true at all. I really hate fanboys on both sides who lie. You're no different than Republicans and Democrats who fight over bullshit and constantly lie and twist facts. I have been using ray tracing with no issue, and AMD doesn't have DLSS, they have FSR, which works very well, with FSR 3.0 coming soon that will work very similarly to DLSS as well. And I get to enjoy the fact that I am not part of the crowd ignorantly supporting the hateful practices of Nvidia. I was an Nvidia fan for about 20 years until what they have done in just the past few years; clearly you haven't been keeping up. Let me guess: you probably also love Facebook/Meta and love Apple products too. You like companies who tell you how to think and how to use their products, and if you don't like it, they tell you you're stupid, and any reviewers who don't praise them like gods get put on their ban lists.

  • @kallamamran · 1 year ago

    More like UnComfyUI ;)

  • @michaelphilps · 9 months ago

    Jawohl! (Yes indeed!)

  • @str84wardAction · 1 year ago

    This is way too advanced to process what's going on here.

  • @blacksage81 · 9 months ago +1

    Yeah, it isn't easy to get Black people by calling it that way. I've found that using chocolate- or mocha-colored skin and other brown colors will get the skin; in my limited testing, the darker colors help the characters gain more African features.

  • @spider853 · 1 year ago

    I don't really understand how LatentComposite works without a mask.

    • @OlivioSarikas · 1 year ago +1

      It combines the two noises, and since they are both still just noise, they can melt into a final image in the later render. However, because the noise of your character has a different background, you will often see that the background around the character differs a bit from the background of the rest of the image.

    • @spider853 · 1 year ago

      @@OlivioSarikas I see, it's kind of an average; it would benefit from a mask.
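
    A minimal sketch of what a mask-free latent composite plausibly amounts to (assumed behavior for illustration, not ComfyUI's actual source): the smaller latent is pasted over the larger one at an x/y offset with no blending, which is why the surroundings of the pasted region can drift from the base image's background, as described above:

        import torch

        def latent_composite(samples_to: torch.Tensor,
                             samples_from: torch.Tensor,
                             x: int, y: int) -> torch.Tensor:
            """x/y are pixel offsets; SD latents are 8x smaller than the image."""
            out = samples_to.clone()
            x, y = x // 8, y // 8
            h = min(samples_from.shape[2], out.shape[2] - y)
            w = min(samples_from.shape[3], out.shape[3] - x)
            # Hard paste - no mask, no weighted averaging
            out[:, :, y:y + h, x:x + w] = samples_from[:, :, :h, :w]
            return out

        base = torch.randn(1, 4, 64, 64)   # latent of a 512x512 base image
        patch = torch.randn(1, 4, 32, 32)  # latent of a 256x256 character
        combined = latent_composite(base, patch, x=128, y=64)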

  • @arturabizgeldin9890 · 8 months ago

    I'll tell you what: you're a natural born tutor!

  • @jasonl2860 · 1 year ago +1

    Seems like img2img; what is the difference? Thanks.

  • @Noum77 · 1 year ago

    This is too complicated

    • @OlivioSarikas · 1 year ago

      You take the things I showed in the video and simplify them. Start by just rendering a simple AI image with a prompt, and then you can add things to that.

  • @akratlapidus2390 · 1 year ago

    In Midjourney you won't be able to show a black woman, because the word "black" is banned. It's one of the reasons why I pay so much attention to your advice about Stable Diffusion. Thanks!

    • @hfycentral · 1 year ago

      That's not entirely true. I use it all the time.

    • @OlivioSarikas · 1 year ago +1

      Stop spreading misinformation. I just tried "black woman --v 5" and it worked perfectly.

  • @Kaiya134 · 1 year ago

    No disrespect to your work, but the concept itself is just sickening. These pictures are basically a window into the future of webcam filters. Our life is rapidly becoming a digital shitshow.

  • @user-kt7uz9xc5m · 1 year ago

    They can do all these changes to videos too, right? Changing faces, emotions, etc. 😂 In the CIA etc., as media wars.

  • @user-kt7uz9xc5m · 1 year ago

    Can you load another picture, not connected to cyberpunk, let's say a "Fatima Diame" photo, and make a kind of 50% correlation so your character changes in some rational way - becoming a Black woman athlete with a fantastic body, but in a cyberpunk look?

  • @user-kt7uz9xc5m · 1 year ago

    That is why Putin always looks so unhappy on YouTube 😂

  • @nikolesfrances1532 · 5 months ago

    What's your Discord?