"...and now she is pissed". Never had a better introduction for another useful comfyui node😂 Appreciate your work and your entertaining videos. I like your effective and pragmatic way of explanation. Thanks.
A WORKFLOW I DOWNLOADED AND IT ACTUALLY WORKS! OMFG! You don't understand how rare that is. As a pro artist but A python noob, you don't understand how many of my list that is at least a screen long of workflows that just don't work and I don't understand why. And I have spent hours and hours and usually give up Thank you.
Unsampler is an insane option that I can only begin to imagine its potential. thanks for shining lights on all these unsung heroes. the channel remains my favorite by a long shot
'and now she's pissed' cracked me up.....Your vids continue to impress and your knowledge of such a new subject is amazing....Love your explanations and subject choices..Wonderful stuff again..
You have read my mind! I've been searching for more information and usage videos and tuts for all of these nodes that are bundled in packages, that other YTers suggest to install, but use only one or 2 of them. Please continue on with these easy, to the point videos for advanced users. WE NEED THEM!
Wow! Using the noise in this fashion really makes it so much nicer than image to image. I've done some really great enhancements of some old 1.5 generations that kept the look of the old but dramatically increased the details with the newer SDXL models. I've never had an upscale do something this nice and not change the image. Can't wait to see what you've got planned next. Your videos are amazing! I'd love to see you tackle a workflow that is geared towards reusing a character, face, clothes and in multiple poses!
Dear Matteo, I became your absolute fan 🎉 your videos and projects (ip-adapter) are generous and abundant. Every your product is valuable but understandable in the same time. Thank you very much, Please keep creating ❤
I was playing with the unsampler, and went (total 20 steps) - unsampler(5 steps) -> advanced sampler (5 steps -> 10 steps) -> advanced sampler +add noise (10->20) and it produces really good variations. I can even supply it with a new prompt at the last step and it's really really good at integrating it and keeping consistency
The best comfyUI tutorials hands down, the amount of info, small tips, real experience that you show in these videos is unmatched and highly appreciated. Keep it up and of course Thanks for sharing!
These videos are fantastic, I'm learning many new techniques and you've introduced me to loads of new nodes. Can't wait to see the new IPAdapter you mentioned.
These videos are so consistently useful, thanks for taking the time! Even on subjects that you'd think are "solved" like image variations, the fine control can be a real asset when you're looking to generate something specific.
Love your videos man, they're a joy to watch. And I like how you keep your examples relatively simple and straight to the point, no unnecessary fluff :)
Wow, great demonstration! I have been playing around with combing noises for a bit now and I still learned a lot! I’m going to take what I’ve learned here and play around with all the different type of noise formats.
📝 Summary of Key Points: The speaker discusses various techniques for creating small variations on an image using the SDXL workflow. They suggest adding low-weight tokens or random numbers to slightly change the image. The concept of "horror negatives" is introduced, where negative prompts with words like "horror" or "zombie" are used to achieve a clean result. Conditioning comcat is explained as a way to change the style or details of an image while keeping the same composition. Conditioning combine is also discussed for achieving more mutation in the image. The use of IP adapter is explored to guide the composition of the image, using different reference images to achieve different styles. The unsampler node from the confi noise extension is shown as a technique to modify an existing image by removing noise until it reaches the original noise at the first step of generation. Creating a batch of images with little differences is demonstrated using fixed base noise and the slurp latent node. The strength of the noise can be adjusted, and a new batch of similar images can be generated by changing the seed in the noise generator. 💡 Additional Insights and Observations: 💬 "There is no one-size-fits-all solution" - The speaker emphasizes that different techniques may work better for different images and prompts. 📊 No specific data or statistics were mentioned in the video. 🌐 The video provides practical examples and demonstrations to support the techniques discussed. 📣 Concluding Remarks: The video provides a comprehensive overview of techniques for creating image variations using the SDXL workflow. From simple tricks like adding tokens or random numbers to more advanced techniques like conditioning comcat and using IP adapter, the speaker demonstrates practical examples and offers valuable insights for achieving desired image variations. Generated using Talkbud (Browser Extension)
Thanks a lot. Your tutorials are great ! Perfectly explained and going to the details which are really hard to find out without the technical insights. Keep up the great work!
That's pretty amazing! I am kinda new to all this AI thing and still learning a lot, but this video really opened my eyes on how to get started and make even more amazing stuff. Keep those videos coming as it seems you really know your stuff! Subscribed!
Very Nice, really! Very useful, thank you. If i can give you a suggestion would be for a vídeo about dynamic composition using automatic masks. Example: generate a subject, cut it with automatic masking (Sam?) and paste it over a generate background and then a second pass to fix The composition and then generate variations of the background for the same subject or vice versa.
This and Scott's are the coolest AI art channels. Kudos! are these workflows available somewhere for reverse engineering? I tried to follow along but it's hard to keep track of everything that's going on.
I love your videos, they are the best! I want to generate keyframes and then interpolate them to create a realistic video in the end without any time constraints. Can you advise me on how I can apply your approaches to create the consistent frames, which you show in this video or other videos? For example, a dog plays with a ball in the garden. The dog must run and be in different positions in each frame, the camera does not move. How to specify the position of the dog and the ball in each keyframe?
Hello can anyone help me with this Error. Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 150, 150] to have 4 channels, but got 16 channels instead This part should be the workflow from Unsampler 09:30
@latentvision Could you explain how you created start_at_step primitive (to control both unsampler and kSampler inputs) with just one click and the correct naming? Is this some custom nodes magic? And as an idea for future videos - could you share how you debug the content of different nodes (maskPreivew and PreviewImage aside) with int/bool/etc values in them?
I'm slowly absorbing these valuable insights, my favourite Comfy channel. At the beginning of 'light conditioning' I wasn't getting subtle changes they were drastic until I tried other seeds. Some worked for subtle changes while some did not. Unless I'm mistaken this light conditioning may be seed dependent. Just wondering if some seeds you tried weren't "subtle friendly"?
sometimes it's hard to see them but there's always a difference. Try to use the "enhance difference" node from the Comfy_Essentials extension. Yes, some seeds will show more difference than others, but it's completely random.
My appologies, I was just about to edit my post to say my wiring to each text box was not from both "text_g and text_l". It's now all working fine and looks exactly as yours with the subtle results achieved. I'll also play with the extension as suggested, thank you for the tips.
I noticed you never re-adjusted the values for width/Height on the ClipTextEncode nodes after you switched to the Unsampler demo. Even tho you started working with a different latent size. Was that just an oversight? It didn't seem to make a difference, your images still looked GREAT! I was just curious, I ended up using a node template for SDXL with primitives set up to quickly adjust the values to 4x the latent size as you suggested. Thank you so much for all your teachings! You've helped me GREATLY!
yeah I noticed after I posted the video. the size conditioning doesn't make much difference, it's more of a refinement, so it's not crucial, but yeah in this case it's an oversight
Would conditioning concat be the same as something like Automatic1111's blend function or is it something different? Love these videos, thanks! Also: "a hint of Klimt" had me chuckling.
Matteo, I wish you would explore latent upscaling and show us some useful possibilities for getting high frequency details most effectively through step iterative upscaling and though other more esoteric modes such as block weights etc. And how to best leverage specialised upscale models such as SkinDiff etc
@@latentvision right, what you just showed us! that is a great idea. I will try it now. Love this community ! right, what you just showed us! that is a great idea. I will try it now. Love this community !
I have a unique way of creating characters in midjourney. I'd like to use it as an ipadapter and pose it but I never get any good results. (very detailed, grotesque cartoon style) The goal it to be able to create a character sheet so I can animate it. Have you seen a way to do something like this?
I'd need to see the pictures. Technically it's possible, you probably need a checkpoint or a lora with a close style and depends on the kind of result and fidelity you are after.
This is really cool however I have a question. In The Video you set "end at step" to 0 and it keeps the structure of the loaded image. When I set it to 0 it just uses nothing of my loaded image and just goes by the prompt.. And that's what I thought the whole thing was, to go backwards in an image and then load from there so to say.. By setting it to zero don't you tell the workflow to ignore the loaded image ?
@@latentvision that only works with SDXL models right? Is there an alternative with other models (e.g dreamshaper) or for those you would simply use CLIPTextEncode?
I tried this with a few images. I'm getting back a similar image, but not the same ones as the original. What am I doing wrong? Mostly the background is different, while the subject stays more or less the same (some little differences in attire).
Hello I have 2 issues Repeat Latent Batch gives exactly 2 same images. And: Working with Get Sigma it shows this error : Error occurred when executing BNK_GetSigma: 'SDXL' object has no attribute 'get model_object'
@@latentvision unfortunately no. The error is still there. Also with the ksampler Variation with noise injection. I tried with juggernaut sdxl checkpoint and sd_xl_base 1.0 checkpoint. Same issue with 'get _model_object
@@latentvision would it help to delete comfy at all and install it again so maybe like these the error goes away! Because a lot of updates didn't help at all. Its crazy
Ive watched this video many times trying to use one of this methods to fake an "unstable" animation. Animatediff evolved so quickly that it seems imposible now to make each frame in a different style...... can u make a video on how to make a video with animatediff where Ipadapter keeps the identity of the main subject but the rest of the composition changes style in each frame? have in mind that scheduled prompots are not a solution here. It would be very difficult to write a prompt for each frame.
Thanks for the explanations! Super helpful! Now, Im a bit confused of the width and height of your TextencodeSDXL - it is huge! How come it goes so fast on your workflow, when for me it takes more than 5min with a 4090 ?
I just have to laugh... I wanted to use some of the ideans from this work flow and started a new flow and started building my flow and almost immediately got stuck on on the pos and neg nodes... took me awhile to figure out that the nodes are called primitiveNode... so i added that but it looked nothing like yours.. tried different things... then I though to just copy paste the node to my new on... nope.. no text area to type.... How did you create those primitiveNode nodes to have string out and multiline text area? BTW I am totally enjoying my self watching and learning from your videos. ;O)
How are you getting consistently good images? The moment I change anything in my prompts the image goes crazy. This is nowhere close to my experiences.
dude unsampler is sick! I love that you're showing how some of these other nodes work and not just ipadapter, thanks!
"...and now she is pissed".
Never had a better introduction for another useful comfyui node😂 Appreciate your work and your entertaining videos. I like your effective and pragmatic way of explanation.
Thanks.
Wow. Another great video, so much info and all clearly explained. Your mastery of ComfyUI is impressive.
Hahaha, "and now she's pissed". I would never miss a lesson with such teacher 🙂 Every time I watch something from you I have new ideas, thank you.
Saying this channel is the best ComfyUI resource on YT is an understatement . Thank you Matteo, please keep up the amazing work!
A WORKFLOW I DOWNLOADED AND IT ACTUALLY WORKS! OMFG!
You don't understand how rare that is. As a pro artist but a Python noob, you don't understand how many workflows on my list, at least a screen long, just don't work, and I don't understand why. I have spent hours and hours and usually give up. Thank you.
Unsampler is an insane option; I can only begin to imagine its potential. Thanks for shining a light on all these unsung heroes. The channel remains my favorite by a long shot.
The unsampler blew my mind! It's amazing all the possibilities available with ComfyUI. Thanks for the tutorial!
absolutely love watching these work sessions. ❤🔥💡💪
'and now she's pissed' cracked me up.....Your vids continue to impress and your knowledge of such a new subject is amazing....Love your explanations and subject choices..Wonderful stuff again..
You have read my mind! I've been searching for more information and usage videos and tuts for all of these nodes that are bundled in packages, that other YTers suggest to install, but use only one or 2 of them. Please continue on with these easy, to the point videos for advanced users. WE NEED THEM!
Wow! Using the noise in this fashion really makes it so much nicer than image to image. I've done some really great enhancements of some old 1.5 generations that kept the look of the old but dramatically increased the details with the newer SDXL models. I've never had an upscale do something this nice and not change the image. Can't wait to see what you've got planned next. Your videos are amazing! I'd love to see you tackle a workflow that is geared towards reusing a character, face, clothes and in multiple poses!
Dear Matteo, I became your absolute fan 🎉 your videos and projects (ip-adapter) are generous and abundant. Every product of yours is valuable yet understandable at the same time. Thank you very much, please keep creating ❤
I was playing with the unsampler and went (total 20 steps): unsampler (5 steps) -> advanced sampler (5 -> 10 steps) -> advanced sampler + add noise (10 -> 20), and it produces really good variations. I can even supply it with a new prompt at the last step, and it's really, really good at integrating it while keeping consistency.
I guess this is a situation like: give a man a fish and you feed him for a day. Teach him how to fish and you feed him for a lifetime 😄
@@latentvision Give the man the seed for the fish image and he'll have variations for a lifetime...
Short, to the point, and absolutely jam packed with information. Great video.
The best comfyUI tutorials hands down, the amount of info, small tips, real experience that you show in these videos is unmatched and highly appreciated. Keep it up and of course Thanks for sharing!
Wow! You could have made 10 videos with this content. Respect
not much to say other than thank you very much, great videos. I'm about to explore your whole channel; you definitely just won a new regular viewer
These videos are fantastic, I'm learning many new techniques and you've introduced me to loads of new nodes. Can't wait to see the new IPAdapter you mentioned.
These videos are so consistently useful, thanks for taking the time! Even on subjects that you'd think are "solved" like image variations, the fine control can be a real asset when you're looking to generate something specific.
Love your videos man, they're a joy to watch. And I like how you keep your examples relatively simple and straight to the point, no unnecessary fluff :)
Applying this in my workflow immediately. Very useful. Thanks!
Found you after the release of ipadapter, your skills in comfy are amazing. Watching all your videos.
So many useful nugets of information. Taking control of the generative image is fascinating. Thank you.❤
Wow, great demonstration!
I have been playing around with combining noises for a bit now and I still learned a lot!
I'm going to take what I've learned here and play around with all the different types of noise.
📝 Summary of Key Points:
The speaker discusses various techniques for creating small variations on an image using the SDXL workflow. They suggest adding low-weight tokens or random numbers to slightly change the image.
The concept of "horror negatives" is introduced, where negative prompts with words like "horror" or "zombie" are used to achieve a clean result.
Conditioning concat is explained as a way to change the style or details of an image while keeping the same composition. Conditioning combine is also discussed for achieving more mutation in the image.
The use of IP adapter is explored to guide the composition of the image, using different reference images to achieve different styles.
The unsampler node from the ComfyUI_Noise extension is shown as a technique to modify an existing image by removing noise until it reaches the original noise at the first step of generation.
Creating a batch of images with little differences is demonstrated using fixed base noise and the slerp latent node. The strength of the noise can be adjusted, and a new batch of similar images can be generated by changing the seed in the noise generator.
💡 Additional Insights and Observations:
💬 "There is no one-size-fits-all solution" - The speaker emphasizes that different techniques may work better for different images and prompts.
📊 No specific data or statistics were mentioned in the video.
🌐 The video provides practical examples and demonstrations to support the techniques discussed.
📣 Concluding Remarks:
The video provides a comprehensive overview of techniques for creating image variations using the SDXL workflow. From simple tricks like adding tokens or random numbers to more advanced techniques like conditioning concat and using IP adapter, the speaker demonstrates practical examples and offers valuable insights for achieving desired image variations.
Generated using Talkbud (Browser Extension)
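The latent slerp mentioned in the summary (spherical linear interpolation between a fixed base noise and a per-seed variation noise) can be sketched in plain NumPy. This is a generic illustration of the math, not the actual node's implementation; the tensor shape and strength value are assumptions.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    # spherical linear interpolation between two noise tensors of equal shape;
    # assumes roughly equal norms, which holds for Gaussian noise
    a_flat, b_flat = a.ravel(), b.ravel()
    dot = np.dot(a_flat, b_flat) / (np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(dot, -1.0, 1.0))  # angle between the noise vectors
    so = np.sin(omega)
    if so < eps:                                # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    out = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# fixed base noise shared by the whole batch, plus a per-seed variation noise
rng = np.random.default_rng(42)
base = rng.standard_normal((4, 128, 128))      # hypothetical SDXL-style latent
variation = rng.standard_normal((4, 128, 128))
subtle = slerp(0.15, base, variation)          # small t => small variation
```

Keeping `base` fixed and regenerating only `variation` with a new seed yields a batch of near-identical images, which matches the behavior the summary describes.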
INSANE!
My favourite one was the unsampler method. I think I need to play with it very soon!
Thanks again for everything you do!
That was a fantastic video. I want to leave work and go home and experiment.
I agree with everyone here your content is so valuable, thank you for all you do Matteo!
Thanks a lot. Your tutorials are great ! Perfectly explained and going to the details which are really hard to find out without the technical insights. Keep up the great work!
You are making me fall in love with ComfyUI
that was the indent ^___^
mindblowing! ty for the workflow! i'll try it for myself
Mateo, YOU are the god! Thank you so much for sharing all your knowledge with us!
I have to say that so far I found all of your videos really useful. I would like some AnimateDiff tutorials.
I am always grateful to hear amazing and moving lectures.
REALLY looking forward to the seeing your process for the logo animation as well!
As always this was a fantastic tutorial. Thank you!
This video is amazing! I learned so much today! 👍
That's pretty amazing! I am kinda new to all this AI thing and still learning a lot, but this video really opened my eyes on how to get started and make even more amazing stuff. Keep those videos coming as it seems you really know your stuff! Subscribed!
Excellent. Thank you for sharing this type of tutorials
You have my full attention Maestro Latente!!! Please create a discord community!! ❤️🇲🇽❤️
Your approach is very creative and very easy to understand. Thanks for the video!
In the last 16 mins, I have learned more than I had in the last few months... great video, great knowledge... ate you an AI scientist?
LOL yeah and it was delicious 😆🍰
hahaha *are
Absolute master class. Thanks for these tutorials.
These videos are excellent! Thank you
incredible. thank you
Great examples with good explanations
Amazing as usual Mateo. Gracias !
Just adding a number to the prompt to get a variation is true ZEN - simple but effective 😊
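The trick above (appending a meaningless random token so the text embedding shifts slightly) can be sketched in a few lines of Python. The `jitter_prompt` helper is a hypothetical name for illustration.

```python
import random

def jitter_prompt(prompt, seed=None):
    # append a throwaway random token; it adds almost no meaning to the
    # conditioning but nudges the text embedding enough for a small variation
    rng = random.Random(seed)
    return f"{prompt}, {rng.randint(0, 99999)}"

# four prompt variants that should produce near-identical images
variants = [jitter_prompt("a portrait of a woman, oil painting", seed=i)
            for i in range(4)]
```

Passing a seed keeps the jitter reproducible, mirroring how a fixed seed in a primitive node keeps a ComfyUI run repeatable.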
This is a great essentials video! Thanks Matteo. Not sure if everyone thinks inpainting is lame, though 😂😂😂
You are amazing.. This is the best video I've ever seen...
mind blowing, great job!
This video is gold.
Very nice, really! Very useful, thank you. If I can give you a suggestion, it would be for a video about dynamic composition using automatic masks. Example: generate a subject, cut it out with automatic masking (SAM?), paste it over a generated background, then a second pass to fix the composition, and then generate variations of the background for the same subject, or vice versa.
So much useful information! Thanks!
This and Scott's are the coolest AI art channels. Kudos! are these workflows available somewhere for reverse engineering? I tried to follow along but it's hard to keep track of everything that's going on.
check the video description, I usually put a few in there
@@latentvision thanks man
Pure gold! Thank you!
Thank you, thank you, thank you!
Fascinating! Is this something like the Noise Inversion feature in A1111?
Thank you Matteo for the great content. Could you advise which node/extension used in the clip to convert noise into input?
you mean the unsampler? it's comfyui_noise
@@latentvision the step at ruclips.net/video/Ev44xkbnbeQ/видео.htmlsi=cWiy-uDpeQelusMM&t=58 and 1:00 noise_seed node I can't find in base comfy
@@chornsokun that's just a primitive. convert the seed to an input and you can connect a primitive to it
I love your videos, they are the best! I want to generate keyframes and then interpolate them to create a realistic video in the end without any time constraints.
Can you advise me on how I can apply your approaches to create the consistent frames, which you show in this video or other videos? For example, a dog plays with a ball in the garden. The dog must run and be in different positions in each frame, the camera does not move. How to specify the position of the dog and the ball in each keyframe?
what you are asking is pretty complicated, it can't really be explained in a YT comment
@@latentvision it would be good if you can teach us in another video. btw you are amazing Matteo!
Hello can anyone help me with this Error.
Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 150, 150] to have 4 channels, but got 16 channels instead
This part should be the workflow from Unsampler 09:30
@latentvision Could you explain how you created start_at_step primitive (to control both unsampler and kSampler inputs) with just one click and the correct naming? Is this some custom nodes magic? And as an idea for future videos - could you share how you debug the content of different nodes (maskPreivew and PreviewImage aside) with int/bool/etc values in them?
double click on the input little dot 😄
thank you, this is really great!
Great video, I have a question: what are text_g and text_l in CLIP Text Encode? Thanks
Great stuff! Thanks!
I'm slowly absorbing these valuable insights, my favourite Comfy channel. At the beginning of 'light conditioning' I wasn't getting subtle changes they were drastic until I tried other seeds. Some worked for subtle changes while some did not. Unless I'm mistaken this light conditioning may be seed dependent. Just wondering if some seeds you tried weren't "subtle friendly"?
sometimes it's hard to see them but there's always a difference. Try to use the "enhance difference" node from the Comfy_Essentials extension. Yes, some seeds will show more difference than others, but it's completely random.
My apologies, I was just about to edit my post to say my wiring to each text box was not from both "text_g and text_l". It's now all working fine and looks exactly like yours, with the subtle results achieved. I'll also play with the extension as suggested, thank you for the tips.
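What a difference-enhancing node does, conceptually, can be approximated in plain NumPy: amplify the per-pixel difference so near-identical variations become easy to compare. This is an assumption about the technique, not the actual Comfy_Essentials implementation; `enhance_difference` and `gain` are hypothetical names.

```python
import numpy as np

def enhance_difference(img_a, img_b, gain=4.0):
    # amplify the per-pixel absolute difference between two images in [0, 1]
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return np.clip(diff * gain, 0.0, 1.0)

a = np.full((8, 8, 3), 0.5)          # flat mid-grey test image
b = a.copy()
b[0, 0] += 0.05                      # a barely visible change in one pixel
vis = enhance_difference(a, b)       # the change becomes clearly visible
```

Regions where two variations are identical come out black, so even "subtle-friendly" seeds reveal exactly where the generation changed.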
Can you go over all comfy nodes, I’ve learned more watching your videos than any other resource! Thanks
I started doing that, but it's a bit boring...
Maybe to make them but not to watch, I'm enjoying the content!@@latentvision
These tutorials are great!!
I noticed you never re-adjusted the values for width/height on the ClipTextEncode nodes after you switched to the Unsampler demo, even though you started working with a different latent size. Was that just an oversight? It didn't seem to make a difference, your images still looked GREAT! I was just curious; I ended up using a node template for SDXL with primitives set up to quickly adjust the values to 4x the latent size as you suggested. Thank you so much for all your teachings! You've helped me GREATLY!
yeah I noticed after I posted the video. the size conditioning doesn't make much difference, it's more of a refinement, so it's not crucial, but yeah in this case it's an oversight
great stuff as always
Another excellent tutorial. ❤
Would conditioning concat be the same as something like Automatic1111's blend function or is it something different?
Love these videos, thanks!
Also: "a hint of Klimt" had me chuckling.
no, blend is another option. The node is called conditioning average.
Matteo, I wish you would explore latent upscaling and show us some useful possibilities for getting high-frequency details most effectively through step-iterative upscaling and through other more esoteric modes such as block weights, etc. And how to best leverage specialised upscale models such as SkinDiff, etc.
yeah working with noise to increase details is in the pipeline :)
@@latentvision right, what you just showed us! that is a great idea. I will try it now. Love this community!
I have a unique way of creating characters in midjourney. I'd like to use it as an ipadapter and pose it but I never get any good results. (very detailed, grotesque cartoon style)
The goal is to be able to create a character sheet so I can animate it.
Have you seen a way to do something like this?
I'd need to see the pictures. Technically it's possible, you probably need a checkpoint or a lora with a close style and depends on the kind of result and fidelity you are after.
Do you have a discord so I can send you the images?@@latentvision
This is really cool, however I have a question. In the video you set "end at step" to 0 and it keeps the structure of the loaded image. When I set it to 0 it just uses nothing of my loaded image and goes by the prompt alone. And that's what I thought the whole thing was: to go backwards in an image and then continue from there, so to say. By setting it to zero, don't you tell the workflow to ignore the loaded image?
can you explain more detail why you're using the CLIPTextEncodeSDXL and not just CLIPTextEncode? Is that important to this workflow?
no, it's not essential. As I mentioned at the very beginning CLIPTextEncodeSDXL generally gives slightly sharper details
@@latentvision that only works with SDXL models right? Is there an alternative with other models (e.g dreamshaper) or for those you would simply use CLIPTextEncode?
Why hasn't my SDXL node got green pins on it? Also, my positive and negative prompts have conditioning, not string :(
The Unsampler node is not working
(import failed) it shows after Downloading
comfy made a breaking upgrade, the nodes need to be updated. I believe the unsampler should be fine now
I tried this with a few images. I'm getting back a similar image, but not the same ones as the original. What am I doing wrong? Mostly the background is different, while the subject stays more or less the same (some little differences in attire).
hard to say, it was an "old" workflow so it might be just a matter of updated checkpoints or different version of some library
@@latentvision Ah, I see. No worries. I'll keep trying. Hopefully I'll figure it out. :)
What is sigma in comfy (or SD)? What it means, or it does?
roughly it is the current progress in the generation. you can compare it to a sigma start/end to know where you are in the image generation
@@latentvision oh i get it, thx
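To make sigma concrete: samplers typically follow a decreasing noise schedule, and comparing the current sigma against the start/end values tells you how far along generation is. Below is a minimal sketch of the widely used Karras schedule in plain NumPy; the default sigma range is an assumption, roughly matching common SD values.

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    # noise levels descending from sigma_max (pure noise) to sigma_min
    # (almost clean), spaced as in the Karras et al. schedule
    ramp = np.linspace(0, 1, n)
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    sigmas = (max_inv + ramp * (min_inv - max_inv)) ** rho
    return np.append(sigmas, 0.0)    # samplers conventionally end at sigma = 0

sigmas = karras_sigmas(20)
# early steps: large sigma, broad composition; late steps: small sigma, fine detail
```

This is also why start_at_step/end_at_step splits work: each stage simply covers a contiguous slice of this sigma ladder.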
so cool! thank you
Hello
I have 2 issues
Repeat Latent Batch gives exactly the same 2 images.
And:
Working with Get Sigma it shows this error :
Error occurred when executing BNK_GetSigma:
'SDXL' object has no attribute 'get_model_object'
you probably just need to upgrade comfy
@@latentvision unfortunately no. The error is still there. Also with the KSampler Variation with noise injection.
I tried with the Juggernaut SDXL checkpoint and the sd_xl_base 1.0 checkpoint. Same issue with 'get_model_object'
I have the same problem
@@latentvision would it help to delete Comfy entirely and install it again, so that maybe the error goes away?
Because a lot of updates didn't help at all. It's crazy
I've watched this video many times trying to use one of these methods to fake an "unstable" animation. AnimateDiff evolved so quickly that it seems impossible now to make each frame in a different style... Can you make a video on how to make a video with AnimateDiff where IPAdapter keeps the identity of the main subject but the rest of the composition changes style in each frame? Bear in mind that scheduled prompts are not a solution here; it would be very difficult to write a prompt for each frame.
For whatever reason if I put the int to 0 I get nothing and the closer I get to the sample steps (30 in this example) the more the image comes in.
great work 👏
thank you ❤
Thanks for the explanations! Super helpful! Now, I'm a bit confused by the width and height of your TextEncodeSDXL: it is huge! How come it goes so fast in your workflow, when for me it takes more than 5 min with a 4090?
Thank You. Great Job.
Hi Matteo, what GPU do you use? Thanks
Total ❤
Awesome!
Wonderful
Am I the only one who can't open "pastebin" links? Does anyone know what I'm doing wrong?)
seems to be working for me... I'll find a better location for all the workflows soon
@@latentvision Sorry for the trouble) Previously I got around this problem just by going to your github page, but I couldn't find them there this time(
I just have to laugh... I wanted to use some of the ideas from this workflow, started a new flow, and almost immediately got stuck on the pos and neg nodes... It took me a while to figure out that the nodes are called PrimitiveNode... so I added that, but it looked nothing like yours. I tried different things... then I thought to just copy-paste the node to my new flow... nope, no text area to type in... How did you create those PrimitiveNode nodes to have a string output and a multiline text area? BTW I am totally enjoying myself watching and learning from your videos. ;O)
you're the best
awesome!
wow💡
I didn't find "Unsampler"
it's linked in the video description
My "UnSampler" module shows "undefined"@@latentvision
How are you getting consistently good images? The moment I change anything in my prompts the image goes crazy. This is nowhere close to my experiences.