It was totally worth the wait for the dad joke in the end lol
Kudos on the new and more intricate intros.
Glad you like em :)
Thanks!
Wow, thank you kindly!
damn dude!! love that these have become mini movies, keep up the great work 🤜🤛
Wow, thanks man! Also good to hear from you again 😊🌟 Hope you're well
So good to see you again :) It has been a while. I love your content.
Thank you, very kind! :)
Very Photoshop-type technique. Stable Diffusion and Comfy seem to be expanding more and more into editing images rather than just focusing on rendering. The real holy grail, I reckon, continues to be using resources more efficiently and effectively without being overly taxing. It'd be good to see steady development around producing accurate fingers and digits, better matching of prompt language to image, maintaining consistent image quality, and large batch generation.
Thank you for this video and especially for that intro! Got moves, man )
Love the bleach bypass look for the intro... very Spielberg (during his Private Ryan/War of the Worlds/Minority Report era) 🤣. It fit the subject matter very well 👏
Glad you enjoyed it! Testing some new things 😊
Thank you for this, very helpful. Tell me, is there a way to use a portrait image with a landscape aspect ratio?
Love your videos! Super helpful. Are you using an AI denoising tool for your audio? It sounds more muddy than your earlier videos... Harder to hear you now!
Thank you for the feedback. I have been changing settings and plugins over time. Also a mic, but that was a long time ago.
I thought it was incredible, but whenever I run it I get an error in KSampler. Do you know what could be happening? Thanks!
Depends on the error, any detail?
@@sebastiankamph Of course! It turns pink around the edges and then numerous errors appear! But the first is this:
"Error occurred when executing KSampler:
Given groups=1, weight of size [1280, 2560, 3, 3], expected input[2, 1920, 16, 16] to have 2560 channels, but got 1920 channels instead"
@@WesleyPaulinoCoelho Did you try an SD1.5 checkpoint? SDXL seems not to work.
@@simongotz8126 that was it! Thanks!
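For context on that channel-mismatch error: IC-Light patches an SD1.5 UNet, whose convolution weights expect a specific input channel count, while an SDXL checkpoint feeds latents with a different channel count, so the layer's validation fails. A simplified pure-Python sketch of that check (mimicking what a Conv2d does internally; the shapes are taken from the error message above):

```python
def conv_channel_check(weight_shape, input_shape, groups=1):
    """Simplified mimic of the input-channel validation a Conv2d performs."""
    out_ch, in_ch_per_group, kh, kw = weight_shape
    expected = in_ch_per_group * groups
    if input_shape[1] != expected:
        raise ValueError(
            f"Given groups={groups}, weight of size {list(weight_shape)}, "
            f"expected input{list(input_shape)} to have {expected} channels, "
            f"but got {input_shape[1]} channels instead"
        )

# The SD1.5 layer weight vs. the mismatched input from the error above
try:
    conv_channel_check((1280, 2560, 3, 3), (2, 1920, 16, 16))
except ValueError as err:
    print(err)
```

With a matching SD1.5 checkpoint the channel counts line up and the check passes, which is why switching checkpoints fixed it.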
Thank you for the video. When I tried to pre-load a photo to change lighting, it seems the process changes the face of the portrait in the output image quite a bit. Any way to control that?
As someone who did AI relighting in the past for full-body pics, I tackled it in a way where I just kept the AI-generated body and then masked in the original face with color correction layers… Of course, that's not a 100% guarantee it will look good; it depends on how drastic the lighting change is. Mine wasn't that drastic.
@@pressed0n615 thanks!
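That face-masking approach can be sketched roughly like this, with NumPy arrays standing in for the relit output and the original photo (the image size and face box are made-up placeholders; a real pipeline would use a feathered mask from a selection or segmentation model, plus color correction, rather than a hard box):

```python
import numpy as np

relit = np.full((512, 512, 3), 128, dtype=np.uint8)     # stand-in for the AI-relit output
original = np.full((512, 512, 3), 255, dtype=np.uint8)  # stand-in for the source photo

# Boolean mask marking the face region (placeholder box: rows 100-280, cols 180-330)
mask = np.zeros((512, 512), dtype=bool)
mask[100:280, 180:330] = True

# Keep the relit body, paste the original face pixels back in
composite = relit.copy()
composite[mask] = original[mask]
```

A soft (feathered) mask would blend the seam instead of the hard edge this boolean mask produces.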
The intro was golden. 😂😂😂
That's really cool, thanks for sharing!
where can I find all the lighting types?
Great videos, thanks! I am wondering if this is possible with full-body photos? I've tried it but faces get distorted a lot. Thanks in advance!
This only works for SD1.5, right? I cannot find any info on SDXL, not in any issue of the linked repos. And in the node manager in comfy the base is 1.5 for all 3 models. So, what did you put under workflow for SDXL in the video description, something you didn't demonstrate in the video?
Currently using it with 1.5
@@sebastiankamph Have you tried it with SDXL, does it even work with it? You published a link to XL flow in the description, that's why I'm asking.
Excellent work. I don't know English, but this is amazing.
Hi from Brazilian content creator!
Hello there my fellow colleague! 😊🌟
Do you make workflows? I'd like to find someone who can make a tutorial video. I want a workflow that can handle multiple image + text inputs just like this, but goes a bit further: one input for face, one for body, one for outfit, one for background, one for lighting, plus an area for inpainting, a selector for outpainting, and an image-to-video stage. The one workflow to rule them all kind of deal.
Sounds expensive!
You're so right all that's left is to be wrong, which is correct, right? 🙂
Nice video. It can be a bit hit or miss with standard prompting.
How about doing this with animatediff? Or vid2vid use cases?
There are few to no tutorials on ControlNet SDXL models, especially for A1111. Like, there are a couple of Tile XL models, but they have very limited instructions and seem to work differently.
The world of AI changes so much, I think you could redo a lot of videos using newer models and newer techniques... do them specifically for a UI like Forge, Comfy, or so...
Except it doesn't change for A1111 or Forge because no one is creating extensions for them to use the new tools. Everything is for ComfyUI first, because it's where the tools are being developed (mostly). SAI uses ComfyUI almost exclusively for model training.
How about instead of doing a one-off tutorial, you do a small project (like creating a 1-minute movie, a couple-page comic book, or a V-tuber bot) and record the process as a tutorial playlist? I'm personally interested in the comic book.
This is awesome! So handy for the type of work I do :)
Does it work with non-human subjects? (lets say a building or a product?)
It does, yes!
It's brutal!
Now for SDXL please. (I know, there are just 1.5 models, no?🤨)
But this is old, using img2img. You were the one who showed the video on how to do it.
This one has a brand new model attached to it, making it easier to use. And it stays more consistent and works great with any image.
@@sebastiankamph Just tried it. Results are good, but still missing the manual positioning of light when using img2img. Good results though! Thanks for sharing.
LETSSSSS GO !!!!!!
If only it was available for SDXL.
AT LAST!!! (There's a closed-source relighting solution now that's a Blender plugin. It lets you replace a video background with an HDRI.)
what an intro
Very kind :)
nice
🌟
It's beyond frustrating that people release tools for models that have been deprecated for more than a year.
Interesting you should say that. The ic-light model was released 16 days ago. Do you mean there is an earlier model it was built on that's now old?
Gives out free editing assets, but doesn't know how to balance the audio from the left and right channel 😅
kidding gonna have to give these a shot, I always love freebies
👋