Photoshop is DEAD?! Can ComfyUI Layer Diffusion unseat the champ?!
- Published: 28 May 2024
- 🔥 Prepare to be amazed as this tutorial dives deep into the secrets of an optimized photoreal workflow, an updated lightning optimization, the newest Layered Diffusion techniques, and layer effects just like Photoshop! And then, the burning question: Is Photoshop dead? 💀 Join us as we figure it out together...
* Support Quality Content: www.paypal.com/cgi-bin/webscr...
* 1 on 1 Personalized AI Training / Support Session (Web Launch 15% Off!): www.grockster.com/grocksterwe...
* Top 10 SDXL Model Leaderboard - docs.google.com/spreadsheets/...
RESOURCES
Workflow and Install Instructions - civitai.com/models/338073/tut...
KEY STABLE DIFFUSION COMFYUI TOPICS DISCUSSED
* Updated Lightning Optimized Workflow
* RealVis Photo Realistic Workflow
* Layered Diffusion (Layer Transformation)
* Layer Effects (Drop Shadow, Glow, Levels, New Text Overlay, Color Grading)
* Face Detailer discussion
* High Res Fix Script
* SD Ultimate Upscale
* Discord - / discord
#ComfyUI #DesignRevolution #PhotoshopIsDead #LayerDiffusion #InnovationInProgress #photoshop #ai @sebastiankamph @OlivioSarikas @MattVidPro @NerdyRodent #training #youtubeislife, #subscriber, #youtubeguru, #youtubecontent, #newvideo, #subscribers, #youtubevideo, #youtub, #youtuber, #youtubevideos #stablediffusion #women
Incredibly helpful, thank you 😊
That's awesome! So glad it's helpful.
Great content, thanks for sharing this. Would LOVE to see more content related to photo bashing: taking real photos and integrating them into comfyui workflow.
So glad! I'm not sure if you saw this tutorial (ruclips.net/video/kbPM4YnZOoA/видео.html) or this one (ruclips.net/video/S6PBOvofYZo/видео.html)? Also, curious if there's a particular situation you're thinking about?
I've seen the first one (from 5 months ago), not the second one though so I'll have to check that out too. Essentially I'd love to see a start to finish tutorial of starting with 2 or more photos of real people (or photo-realistic AI generated people), and then successfully integrating them into a dynamic scene (skydiving/dancing/whatever). The main thing is the faces need to stay the same from start to finish. And then handling the usual issues with lighting, scale, etc.
So many tutorials using ipadapter/faceid/etc show how to start with a face but by the time they've integrated it into the scene the face has changed quite a bit. I've been able to use Reactor and external apps (e.g. - FaceFusion) with some success to achieve good results, but I'd like to have a soup to nuts comfyui workflow that speeds up the process. Essentially, automated photo bashing to the extent that that's possible.🙂
This is awesome, and I've added it to the list for a future video!
Great video! I just tried out the same RealVisXL but my results (even with the same prompt) are very undercooked at 4 steps. Is there anything your workflow does differently between the checkpoint loader and the first image? To get a similar result with CFG 1.35, I either had to go up in steps (6-8) or go down to 0.8 CFG to make 4 steps look OK.
Oh I see. The high-res fix cleaned it up a lot. It's probably not meant to "finish the job" but the result is what counts lol.
Yes exactly! I'm going to have a future video to walk through a new multi-stage high-resolution workflow soon.
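The multi-stage high-resolution idea mentioned above boils down to rendering small, then upscaling in stages while re-denoising a little less each pass, so late stages polish rather than repaint. A minimal Python sketch of such a stage schedule (the resolutions, factor, and denoise values here are illustrative assumptions, not settings from the video):

```python
def highres_stages(base=1024, target=4096, factor=2.0,
                   first_denoise=0.5, decay=0.6):
    """Plan a multi-stage upscale: each stage grows resolution by
    `factor` (capped at `target`) and re-denoises a little less than
    the last, so later passes only clean up instead of repainting."""
    stages, size, denoise = [], base, first_denoise
    while size < target:
        size = min(int(size * factor), target)
        stages.append((size, round(denoise, 3)))
        denoise *= decay
    return stages

print(highres_stages())  # [(2048, 0.5), (4096, 0.3)]
```

In a real workflow, each (resolution, denoise) pair would drive one upscale-plus-sampler pass.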
I was already doing layered compositions with automatic masking from seg preprocessors and mask subtraction.
This stable layer set of nodes looks really handy.
I also didn't know I could send images as layers to canvas!
Thanks for all these pointers.
Nice work! So glad it was helpful!
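The mask-subtraction step mentioned above (carving one segmentation mask out of another) is just boolean array math. A minimal numpy sketch, assuming 0-255 uint8 masks of the kind most seg preprocessors emit:

```python
import numpy as np

def subtract_masks(mask_a, mask_b, threshold=128):
    """Return mask_a with mask_b's region carved out.
    Inputs are HxW uint8 masks (0..255); output is uint8 0/255."""
    a = mask_a >= threshold            # boolean foreground of A
    b = mask_b >= threshold            # boolean foreground of B
    return (a & ~b).astype(np.uint8) * 255

# Toy example: a "person" mask minus a "face" mask leaves the body.
person = np.zeros((4, 4), np.uint8); person[1:3, 1:3] = 255
face   = np.zeros((4, 4), np.uint8); face[1, 1] = 255
body_only = subtract_masks(person, face)
```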
To me this video felt like a stream-of-consciousness, produced by pointing at nodes in rapid succession rather than presenting the main concepts with an educational taxonomy. What I believe to be the primary topic wasn't clarified until the 16m mark. In my opinion this content, as presented, is better represented by the workflow itself. I appreciate you sharing this information, but I'm also selective about how I spend my time; I would prefer a more organized presentation in the future.
I really appreciate this detailed feedback! I've found that by talking through the end output alongside the concepts, while reinforcing different techniques and "gotchas" along the way, people have found this to be valuable. I completely see your point, though, about being more direct with the learning objective and hitting particular areas. I'll definitely try my best to be better about your time, so thank you so much!
This is better than all the A.I. text to speech crap being posted.
@@terellr23 Thanks!
First of all... LOVE this... gonna try it shortly... but first... How did you get your system resources to show up in the manager box??
Thanks so much! Cryss monitor is what you're looking for... vid tutorial here - ruclips.net/video/TFfKE3Jyy-w/видео.html
@@GrocksterRox YES!!! That was it... got it in there! I love it! Thank you!!! Now I don't have to keep a window open to my server as well!
So glad!
Great video! In my opinion, as a graphic designer and decades long user of Photoshop, AI is an amazingly powerful tool. But, no, I don't think PS or those who know how to use it will be out of the job any time soon. We must learn to work in harmony!
Agreed on all thoughts and there's a purpose and tool for everything :) Thanks for commenting!
@@GrocksterRox 100%
LayerDiffusion in Comfy changed the image A LOT when I tried it the day before yesterday, so it's unusable in that state for me.
Understandable - I've used it more as a compositing step and then used image-to-image as the finalizer/polisher.
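Using a Layer Diffusion RGBA output as a compositing layer before an img2img polish pass comes down to standard alpha-over blending. A minimal numpy sketch, assuming straight (non-premultiplied) alpha with float values in [0, 1]:

```python
import numpy as np

def alpha_over(fg_rgba, bg_rgb):
    """Composite a straight-alpha RGBA foreground over an RGB background.
    Arrays are float32 in [0, 1]; returns the blended RGB image."""
    rgb, a = fg_rgba[..., :3], fg_rgba[..., 3:4]
    return rgb * a + bg_rgb * (1.0 - a)

# Toy example: a 50%-opaque red layer over a white background.
fg = np.zeros((2, 2, 4), np.float32)
fg[..., 0] = 1.0   # solid red layer...
fg[..., 3] = 0.5   # ...at 50% opacity
bg = np.ones((2, 2, 3), np.float32)
out = alpha_over(fg, bg)  # every pixel becomes [1.0, 0.5, 0.5]
```

The blended result would then be fed into the img2img pass to unify lighting and edges.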
Fantastic tutorial. I like this channel and hope it will soon get a lot more subscribers. I wanted to know if colour grading is possible with img2img?
I'm attempting to transfer the colour grading from one of my many reference photographs to another. There are a lot of extensions available for ComfyUI, so I'm wondering what the best method is for this.
(Colour grading: shadows, highlights, and other colour tones.)
Similar to Photoshop, we may add layers, alter colours, apply makeup, and more. Not a particularly interesting composition, but a tidy edit, and I wondered if I could do the same editing on other pictures. I want the subject to be replicated without alteration; I simply want the backdrop editing, including hue, saturation, and other retouching factors.
Whether or not we can pull it off is my primary concern, and also how to adjust the overall edit to fit the subject and background, as the background affects colour tone, reflection, and shadow. Keep up the amazing work, and I'm excited for the next video.
Thank you so much! Feel free to spread the Grockster all over the place :) Yes, you can definitely do a lot of color grading with several techniques (video tutorial here - ruclips.net/video/7P9bvrWE8-o/видео.html). With makeup, etc., you can do that with inpainting, and there are even ways to connect Comfy to Photoshop/Krita as well (though I believe it's limited to Stable Diffusion 1.5 models right now)
@@GrocksterRox thanks a lot, would you mind suggesting nodes to extract color grade from an existing image?
I would check out GetColorTone from the same library
Thank you.
You're welcome!
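A color-grade transfer of the kind discussed above can be approximated with a Reinhard-style statistics match: shift each channel of the target image to the reference's mean and standard deviation. A hedged numpy sketch (done in RGB for brevity; Reinhard's original method works in Lab space, and this is not necessarily how GetColorTone is implemented):

```python
import numpy as np

def match_color_stats(image, reference):
    """Make `image` adopt `reference`'s per-channel mean and std.
    Both are HxWx3 float arrays in [0, 1]."""
    out = np.empty_like(image)
    for c in range(3):
        src, ref = image[..., c], reference[..., c]
        std = src.std() or 1.0  # guard against divide-by-zero on flat channels
        out[..., c] = (src - src.mean()) / std * ref.std() + ref.mean()
    return np.clip(out, 0.0, 1.0)

# Toy example with random "photos" standing in for real images.
img = np.random.default_rng(0).random((8, 8, 3))
ref = np.random.default_rng(1).random((8, 8, 3)) * 0.5 + 0.25
out = match_color_stats(img, ref)
```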
I've got an error when executing the layer diffusion decode: "Error occurred when executing LayeredDiffusionDecode:
UNetMidBlock2D.__init__() got an unexpected keyword argument 'attn_groups'" Anyone? thx.
Just to make sure, do you still have the LayerDiffuse Encode node hooked up through Use Everywhere? It sounds like the encode isn't happening, so the decode is complaining that it's not set up properly.
@@GrocksterRox I don't know what you mean by "hooked up into the use everywhere". I've got the layer diffuse decode connected to the latent from the KSampler and the image coming from a VAE decode... Also, I don't have a layer diffuse encode installed, I only have decodes... Could that be the problem? Please help... thx.
So there are some Use Everywhere nodes (nodes that auto-wire model, CLIP, etc. and beam them anywhere you need them). If they got disconnected, that may be a possible cause - here's a quick video tutorial on how to use them: ruclips.net/video/hGnLhDe0ceE/видео.html. Otherwise you could connect to my Discord server and we can chat a bit more about it in detail: discord.gg/CeHwGDgK
Very useful video
Thanks a lot, I'm so glad it was helpful for you!
Hey Grockster, you seem very knowledgeable in ComfyUI. I have a use case I want to discuss with you. Do you think we could speak over a Zoom call to see if you want to collaborate on that idea?
Sure always happy to chat, you can find me on my discord server - discord.gg/r8Nag8He
Photoshop is not dead - it has some built-in AI tools now that are not too bad.
Yup, for sure - the interface is also still much sleeker/more efficient than Comfy's. I think down the road the question will depend on new custom nodes that make layer management and effect persistence easier; that may tilt the balance...
I want to teach gpt 4 this tool so I will be just as good as you. Has anyone tried doing this yet?
Yup - there are some custom nodes that let you connect via API to GPT (you need to have a paid account though). If you want to do it locally, you can use several LLM chatbots within Comfy as well (video tutorial here - ruclips.net/video/oZY4Iem5Oz4/видео.html)
@@GrocksterRox Nice, thank you for the fast response.
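Under the hood, wiring a GPT-style endpoint into a workflow is just an authenticated JSON POST, which is what those custom nodes wrap. A standard-library sketch of assembling such a request (the URL is a placeholder and the header/body shape follows the common OpenAI-compatible chat format; adjust both for your actual provider):

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint

def build_chat_request(prompt, api_key, model="gpt-4"):
    """Assemble the HTTP request a chat-completion call would send."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_chat_request("Describe this ComfyUI node.", api_key="sk-...")
# urllib.request.urlopen(req) would perform the actual network call
```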
I don't really think this is a Photoshop killer. Just by looking at the workflow...
I appreciate that - it's always great to evaluate as new platforms come out, and for those in the AI space, Comfy definitely hits the sweet spot. So I guess it comes down to the use, the media, and the goal for a particular piece of media.
@@GrocksterRox you can't compare ps to comfy, these are two completely different things. although it makes a stronger video title, like someone else already mentioned: clickbait 🧡
I fully respect your opinion and respectfully disagree, as a user who previously used PS 95% of the time for all media and now may use it 5% for a final touch-up (if at all). Again, everyone's experiences and skill levels in Comfy will warrant different uses, so I don't think it's yet at a point where it will replace the platform for the mainstream, but it's always worth a checkpoint in time to see where things stand. Thank you again for your honesty!
@@GrocksterRox as a photoshop user myself (for a very long time) i don't really see how comfy can replace the most basic functions you can do in ps with ease but that may change someday. and yes i see that comfy's great for automating certain things. just wanted to say that you can't compare the two. 💕
Too much craziness on the screen. SD needs to move toward a more conventional program mode, like Photoshop.
Great thoughts! Agreed, there's definitely some benefit to the "wide open fields" that Comfy's interface provides, but at the same time when you just need to drill into a particular type of image, having a simpler interface can make image creation quicker.
lmao clickbait title.
Granted, I'm a little biased, but I respectfully disagree. In the video we walk in detail through several of the key features that Comfy can do as well as, if not better than, Photoshop, but at the same time, the solid interface that Photoshop has brings a strong advantage... Either way, I tried to keep it inquisitive with a learning perspective.
@@GrocksterRox Photoshop doesn't need a heavy GPU to do the same thing.
That's a great point, thanks.