Huge thanks to everyone for watching! Now would be a great time to leave a like and maybe even subscribe if you're enjoying my content :)
Of course, I love your content ❤
🐟 Been watching for over 2 years!
hi
@@LouisGedo Yo!
@@ThymeHere Love to hear it! Thank you so much for being a loyal viewer, it means the world to hear that!
The Flux.1 Depth model doesn't estimate the depth from an existing image; it's the other way around: it generates an image that matches the depth map it's given as input. They're using another (probably third-party) model to generate the depth map from the input image. (A rough sketch of this two-stage flow follows the replies below.)
Thank you for the insight!
I wish everyone would just third-party everyone.
Wtf are you talking about? The Flux Depth model does exactly that: it estimates the depth from an existing image... That's what the whole blog post is about... That's what the demo is showing... And it's licensed under the "Flux Dev License"; what third party are you talking about? Did you even read the blog post or watch the video? You may have the slowest brain in existence, dude.
Looking forward to your RUclips channel mate! 😅
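For anyone curious, here is a minimal sketch of the two-stage flow described above, assuming the Hugging Face diffusers FluxControlPipeline and a Depth Anything model as the stage-one estimator. The model IDs, argument names, and settings are my assumptions for illustration, not BFL's reference code; check the current diffusers docs before relying on them.

```python
# Stage 1: a separate estimator turns the input photo into a depth map.
# Stage 2: FLUX.1 Depth generates a NEW image that conforms to that map.
# (Model IDs and arguments are assumptions; verify against current docs.)
import torch
from transformers import pipeline
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

source = load_image("input.jpg")

# Stage 1: depth estimation with a third-party model (Depth Anything V2 here).
depth_estimator = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)
depth_map = depth_estimator(source)["depth"].convert("RGB")

# Stage 2: generate an image whose geometry matches the depth map.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
).to("cuda")
result = pipe(
    prompt="a chrome robot posed to match the photo, cinematic lighting",
    control_image=depth_map,
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]
result.save("output.png")
```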
Character consistency is the holy grail of AI image generation. OpenAI seemed to have stumbled onto the answer with their talk of making ChatGPT "multi-modal" last spring, but as far as I can tell, they never released it and haven't mentioned it since.
Yes. IMO it's a crime against humanity.
Wym? ChatGPT is already multimodal... it's been multimodal since GPT-4 Vision.
@@fast_harmonic_psychedelic Image generation by GPT-4o directly...
14:51 that would be a good thumbnail.
Love watching your videos man! Keep up the good work bro
Appreciate the kind words!!
Matt, great video once again. I can't believe you've crossed 270k subs. I started following you when you had around 20-30k (maybe even 15k), when AI was really starting to kick off, and I subbed to you and Dave Shapiro along with some other great creators. I remember thinking this guy is gonna boom in popularity, but dang, I didn't think it'd be THAT fast. Just wanna say mad respect, and it's well deserved: you're a no-nonsense, just-fun content creator who's very informative, and I always look forward to your notifications during my busy work days. Thanks again, and here's to you eventually hitting 500k-1mil one day. 🍻
Flux did MUCH better with your hair than Ideogram. Excited to play!
About the character consistency: I think they meant that the output image won't match the initial input image you give it, but if you generate four outputs from the same input image, those will all be consistent with each other, just not with the original input.
Thanks for the thorough breakdown video, this looks amazing.
Loving the new models! I'm pretty sure you'd get better masking/inpainting results if you broke your prompts up: get your moon-surface background the way you like, then add your green alien, then the Earth in space, instead of one-shot prompting it (a rough sketch of this workflow follows the reply below)...
Probably true. Getting way too used to lazy prompting because of Ideogram 😅
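To make the step-by-step idea concrete, here is a rough sketch of that workflow using diffusers' FluxFillPipeline. The mask files, model ID, and settings are placeholders I've assumed for illustration; treat this as a sketch, not a recipe.

```python
# One masked edit per pass instead of a single one-shot prompt.
# (Mask files and settings are placeholders.)
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

def fill(image, mask_path, prompt):
    # White regions of the mask are regenerated; black regions are preserved.
    return pipe(
        prompt=prompt,
        image=image,
        mask_image=load_image(mask_path),
        guidance_scale=30.0,
        num_inference_steps=50,
    ).images[0]

img = load_image("base.png")
img = fill(img, "mask_ground.png", "cratered moon surface, fine grey dust")
img = fill(img, "mask_alien.png", "a small green alien standing upright")
img = fill(img, "mask_sky.png", "the Earth hanging in a black starry sky")
img.save("final.png")
```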
Can you please make an install guide for ComfyUI with the correct workflow?
Canny is useful in conjunction with depth for finding the boundaries of objects. Depth works well for figuring out distance from the camera, but it isn't very good at differentiating objects at a similar depth, like two people holding hands, so it's best if you can use both... which seems like it's not an option here?
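As a toy illustration of why the two signals complement each other: OpenCV's Canny picks up the boundary between two subjects even where a depth map assigns them nearly identical values. The cv2 calls are standard; the depth model ID is an assumption for illustration.

```python
# Extract both control signals from the same photo to compare them.
import cv2
import numpy as np
from transformers import pipeline

img = cv2.imread("two_people.jpg")

# Edge map: object boundaries, regardless of distance from the camera.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
cv2.imwrite("canny.png", edges)

# Depth map: distance from the camera, but nearly flat across two
# subjects standing at the same depth (e.g., holding hands).
depth_estimator = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)
depth_map = np.array(depth_estimator("two_people.jpg")["depth"])
cv2.imwrite("depth.png", depth_map)
```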
Matt, what's the ring you wear? Looking at the RingConn Gen 2 myself, as there's no monthly sub cost.
It's actually an Oura Ring 3. Not a fan of paying for subs, so might have to look into your option 😂
@MattVidPro Thanks for the reply. Yeah, I was looking at the new Oura, but the new RingConn is thinner and has a slimmer profile. I believe it doesn't have the VO2 max function of the Oura, though. Depends what you want it for!
So if this is new, how was it possible to use Canny/depth etc. with Flux until now? Was that someone else programming it, and now it's BFL? Is it a lot better than the old workflow?
I'm just wondering when FLUX 1.1 [dev] is coming.
Draw over that jacket, then prompt "denim jacket two sizes too small" and "tie with a different pattern" where the mask is.
17:46 - 45 year old Matt
18:16 - 40 year old Matt
18:26 - 35 year old Matt
Hi, new at this. How do I use the new tools in Freepik?
Wow, I just turned my rubber duck into Matt, but it scared me, so I'm changing it back. Great video.
Does it get expensive for a designer who uses it a lot via the API?
Looks promising
Maybe create a comparison between websites offering FLUX Pro?
Let us know when we can download these things onto our own computers. Ty!
I'm still going to use DALL-E, mostly because it processes the longest prompts.
This is good news :) I wonder what the requirements are and how quickly lllyasviel will add support for them.
14:40 Two Guys One Lemon 🤠🍋🤠
Ya know, they came out with the first-ever Flux model back in 1985, with the Flux Capacitor... =P
Fun! Please install it and show us.
At this point it is all about video generation.
Image generation is so last year.
I wanna see them add in LoRAs as well as face swapping in Flux.
I thought they were going to make everything open source, but if not, then I'm not in.
There is never a mention of Midjourney on this channel. Seems like you and MJ don't gel... 😅
I don't know, Photoshop Generative Fill seems better and easier.
I use that too. It's not great for highly specific prompts, but I find it very useful for smoothing things out and filling in the cracks, so to speak.
Flux _can be stopped_: not enough credits. Before that, I asked it to create an image of Batman fighting the Joker and got: *NSFW content detected in image. Try running it again, or try a different prompt.* It is dreadful.
Touch the grass, me friend.
RAW = huge
always the new one
People are going to use you as a template, just a warning.
@1:57
It's Black Forest Labs, not Flux.
Bravo
Your prompts are wrong. I think your brain is not working; you need some proper sleep.
Flux? More like Flex.
First
I'll spank u ❤
💀 @@shinrinyokumusic1808
Third
Fourth
We will never know what is real and what is made up ever again.
You talk too much, man. Too much yapping; get to the point. You don't have to describe everything you do and what we can already see.