Thanks, as always, there is a lot of knowledge shared in these videos, like how to get depth in different ways at 5:20 (I wish it was shown in the video as text tip popup notes, so when we need to find them again it would be easy to spot them just by hovering the mouse over the timeline)... Also, another suggestion: you don't show what you actually did to go from the generated image to the workflow (I know what you did, but a lot of people can't figure it out). It's very important for beginners that the videos are easy to follow (if you are using keyboard shortcuts, it's always good to show them on screen)...
Keep up the good work, we love everyone on the Invoke team.
28:40 The way I usually make "optional" features is by adjusting their strengths to zero. The feature gets ignored, although unfortunately the relevant models still load, so it will use resources. But I think it's a useful hack for now.
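Why that hack works, roughly: a ControlNet's output residual is scaled by the strength before being added to the denoiser's features, so a strength of zero contributes nothing to the result, even though the control model still has to be loaded and run to produce the residual. A minimal sketch of that idea (function and variable names are illustrative, not Invoke's internals):

```python
import torch

def apply_control(unet_features: torch.Tensor,
                  control_residual: torch.Tensor,
                  strength: float) -> torch.Tensor:
    # The control residual is scaled by the strength before being added,
    # so strength == 0.0 leaves the features unchanged -- but the control
    # model still had to run to produce the residual, which is why it
    # still consumes memory and compute.
    return unet_features + strength * control_residual
```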
49:50 I'm sorry, you mentioned it here, before I finished watching.
This is exactly what I was waiting for... I am sure you made this video just for me ;-) Great...
Great tutorial!!! Can you make a tutorial showing how to turn a photo into a cartoon character?
You might find what you need here: ruclips.net/video/eMR2Um8DGCc/видео.html
Is Flux 1 available on Invoke?
Flux 1 is available through workflows in a release candidate at the moment for the Community Edition. Text-to-image and I2I will be available soon on the Professional Edition. We will have full LoRA and ControlNet support soon as well.
github.com/invoke-ai/InvokeAI/releases/tag/v4.2.9rc1, or check the #new-releases channel in our Discord for the most up-to-date info.
One last thing: I didn't know the node you used, Tile ControlNet. What exactly does it do, and what's the difference from an initial-image (img2img) process? What are the advantages of the tile node? Until now we have only used img2img for this.
Tile ControlNet breaks down the image into smaller tiles, allowing for better detail and structure preservation, making it ideal for tasks like upscaling or precise editing. Traditional image-to-image offers more creative freedom but may lose some details, especially at higher denoising strengths. Use Tile ControlNet when you need to maintain accuracy, and traditional i2i when you want more artistic flexibility.
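To make the tiling idea concrete, here is a minimal Python sketch of splitting an image into overlapping tiles and pasting them back together, assuming Pillow is installed. The tile size and overlap are illustrative values, not Invoke's actual implementation:

```python
from PIL import Image

def split_into_tiles(image, tile_size=512, overlap=64):
    """Yield (box, tile) pairs covering the image with overlapping tiles.

    The overlap gives the model shared context at tile borders, which
    helps preserve structure when each tile is processed on its own.
    """
    width, height = image.size
    step = tile_size - overlap
    for top in range(0, height, step):
        for left in range(0, width, step):
            box = (left, top,
                   min(left + tile_size, width),
                   min(top + tile_size, height))
            yield box, image.crop(box)

def reassemble(tiles, size):
    """Paste (processed) tiles back onto a blank canvas."""
    canvas = Image.new("RGB", size)
    for box, tile in tiles:
        canvas.paste(tile, (box[0], box[1]))
    return canvas

src = Image.open("input.png").convert("RGB")
tiles = [(box, tile) for box, tile in split_into_tiles(src)]
# ...each tile would be run through the model here before reassembly...
out = reassemble(tiles, src.size)
out.save("output.png")
```

Real tile-based pipelines typically blend results across the overlap rather than simply pasting, to avoid visible seams.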
Where can we find the workflows now?
By the way, the more complex workflow mimics to an extent how we approach creating architectural renderings in our company.
The big difference is that, compared to the example, we work with rather giant architectural concepts like skyscrapers, big landscapes, stadiums, or airports.
In such cases, the resolution and also the training material for AIs don't yet fully meet our needs. It seems the AI has seen millions of small houses, but only a few thousand large architectural images.
Especially in small details, the AI struggles to get the result correct. Maybe when Flux is integrated, and if Schnell is good enough, this will improve. Or it may even be worth renting Flux Dev.
Nevertheless, very helpful.
Just posted the workflows in the video description, in a Google Drive folder.
👋 hi