Check out my Interview w/ Adobe about Premiere/Sora and More: ruclips.net/video/sWXUQkX6JDI/видео.html
👋
00:02 Adobe unveils new Firefly V3 model with enhanced features.
01:35 Firefly now has a style reference feature.
03:06 Adobe showcases impressive AI updates and image generation capabilities.
04:33 Adobe Firefly 3 update brings structure references and style variations for better image generation.
06:03 New tools for generative fill and background removal in Photoshop.
07:39 Exciting updates in Adobe Firefly & Photoshop AI.
09:10 Adobe Max sneaks feature new video upscaling and enhancement technology.
10:49 New Adobe features showcased with potential use cases.
For someone who started with a box camera in the '30s and then updated years later to a Kodak Brownie Hawkeye, this has certainly been an exponential ride. But it keeps me young, at age 92, and that's good!
Great stuff. What I'm really hoping for is the ability for AI in photoshop to recognise different elements in an image, separate them into different layers and use gen-fill to build the parts of those elements that were occluded by others. Would be amazingly handy for animation and VFX.
Totally-- I mean, you more or less can do that currently, it just takes a good amount of elbow grease and layer management to pull it off. Plus, a bunch of lucky rolls with Gen-fill. It'll be interesting to see if Adobe takes an LLM approach, where you can "chat" with Photoshop to describe what you need done.
That was something Dall-e 3 heavily implied it would be able to do, but failed to deliver on it.
@@TheoreticallyMedia I'd be interested to see a workflow video on that. Anything that saves time manually selecting and rebuilding. Select subject works well with clearly defined things (people against backgrounds) but not so great at selecting elements from more abstract images. I'd love it to be able to do that (choose this half a triangle behind this sphere and rebuild it). I'd also like the more LLM approach you mention. I use Midjourney by default but ChatGPT's conversational ability is much nicer to use - not that it translates into better images than midjourney.
@@cinevisionaryfilm you might want to check in with Midjourney if you haven't in awhile. The prompt model has gotten a lot better and is moving more toward a narrative style. You still can't "chat" with it, but you can now direct it in a more conversational tone.
@@TheoreticallyMedia I'll give it a go. I got upgraded to the website from the portal - which is much more user-friendly. The style ref stuff is great for consistency. Still struggles to do certain angles - low angle close ups, over shoulder shots, etc. but it's getting better. I'd love a zoom in as well as a zoom out.
@@cinevisionaryfilm You actually can do that by combining Fooocus with GIMP, or with a complex ComfyUI workflow. The quality of the images generated by the AI models that both Fooocus and ComfyUI use is higher than Firefly's.
Thanks Tim, great updates as always, you're my favorite morning show!
Appreciate it!! This one went up pretty early, as I was fighting against UK time to get it up! But, it made for a great excuse to have an extra cup of coffee!
Did they fix the issue with gen fill where if you make a large selection it makes lower resolution than the original image?
Unsure, but I think there should be an improvement, considering it is an updated model. I never do the PS Beta thing, just because I've had some bad experiences w/ it, so I'll wait to see how it looks when the full update comes out.
@@TheoreticallyMedia Just tested it. The resolution does seem to be lower than the source resolution if it is above a certain level, or if the expansion area is larger.
best breakdown i've seen of the new toys. thank you!
Appreciate it!
What attention have they directed towards improving the resolution of generative fill?
Not sure-- they didn't mention. In my previous interview w/ Adobe (linked in description), I did ask about resolutions on the video side when you generate, but that was a full-on "no comment."
So, I guess we'll see when it drops.
LOVING the reference to Papyrus Bold! 😂😂😂
What to do for a sequel? GO BOLD!! Haha, so great!
1. Do Adobe products support upscaling images similar to Magnific?
2. I must have missed discussions on Structure Reference. What is that exactly? I still don't grok it.
Thanks Héctor!
Hector is the BEST! Haha. So, we haven’t seen a creative upscaler like Magnific out of Adobe yet. You can normal upscale, but nothing quite like Magnific.
Structure reference seems to be a bit of Pose reference, and overall scene reference. Almost like image to Image, but taking it really seriously. Unlike a lot of other I2I models which just give you a vibe of your input image.
A lot of people say that Firefly imagery is of a lower standard than the likes of Midjourney... but Firefly photo imagery looks far more realistic IMO. Midjourney, as striking as it is, just has that 'blatantly AI' kind of look to it.
There was a bit in the video that I cut (mostly because it felt like I was rambling) during the Rainbow Bear section that echoes your point.
MJ would have crushed the Rainbow Bear prompt, because those aesthetics are what MJ is good at. It isn’t where FF’s strengths are. FF does do great with assets/images you might need for a commercial campaign though.
But now I can't have the generative fill interact with MY IMAGE by going into quick mask, filling with a percentage of gray, leaving quick mask, putting in my prompt, and watching the generative fill INTERACT with MY IMAGE. I SURE HOPE ADOBE DOES NOT TAKE AWAY THIS FEATURE, as they have with the Beta version.
Oh, that’s not good. I haven’t played with the beta yet, but I will 100% let them know about that when I talk with them again. That seems like a pretty big oversight. Hopefully, it’s just due to the beta!
Will some of those be available in Photoshop Elements 2025?
I'm not a fan of monthly fees, and I've got no use for the full Photoshop. So Elements would be plenty for my minuscule usage!
I'll put that on a list to ask them! They have Sensei in Elements, so I can't help but think at least some of these features will be ported over! But, I'll confirm w/ them next time I get a chance!
Awesome additions and highlight vid Tim brah! 🎉
Thanks so much man! Not a party until you show up in the comments!
great coverage as usual. thanks for your great channel.
1000%! Thank YOU!
What do you think...
AI models should have a feature that allows us to generate a specific image using existing images we have, and we just want to incorporate them into the generation with subtle changes.
For example, if I have my own product, I would provide the product image to the AI model and instruct it to take this product image and create a creative commercial promotional post!!
It would be great feature...!
That's moving into the area of training a model. Which, you can do-- just not in Photoshop yet. I presume at some point it'll come though! I'm going to have a video on training models coming up pretty soon, so keep an eye out for that!
But we'd have to train a model for just a single image! It would be very time consuming...
That 'RotAIte' feature will be great for product ads. Highly requested AI feature.
RotAI? AIRoto? RotoScape?
You need to work at Adobe in the “name stuff” department!
I don’t know if that department actually exists. But it should!
You’re hired!
The hand on the guitar isn’t ’remarkably accurate’. Fingers aren’t shaped like that, and if he is fretting an A minor chord his fingers are in an odd place: his little finger would be dropping down so the fretboard could be seen, not over the strings. Not sure AI is quite up to specific chord positions on instruments yet.
Yeah, y'know, as I was looking at it a little later, it is a bit off. Kind of a fake A minor? Technically his middle and ring fingers should be up a fret-- but still, that's the closest I've seen to accurate fretting thus far.
Still, it's a lot better than that "what chord is this" meme that pops up on every guitar forum!
I do love that meme!
Thanks as always kind sir!
I think for this I'm most excited about that video upscaling method. I wonder if we will ever get that open sourced.
Yeah, if that gets rolled into Premiere, it'll be great-- considering I think the other major player in that area is Topaz, which does cost a pretty penny.
Always on the lookout for a good Open Source version as well! As soon as I find one, I'll let you know! (There was one I covered awhile back that would let you upscale per video. I'll see if I can find that!)
Thanks so much for this. Loved the Papyrus reference.
Haha, file that one under "If you know, you know!"
I'll be impressed when they implement all of these things into after effects.
Did you see the interview I did in the previous video? I'm not sure they're really stressing AE right now. I think the priority is PR.
I used AI to generate my mascot for my channel. Is there a tool that can take that character and produce new images from it for different thumbnails etc?
Is the generative fill still low resolution?
Not sure, haven't played with it yet-- HOPEFULLY not though. I'm sure it's gotten better with an updated model.
Could be cool for generating reference photos to do actual canvas paintings from. Helps to realize what’s in one’s creative brain, then put real paint to canvas.
That’s a great way of looking at it. Kind of like a mood board or inspiration generator.
Lots of different approaches to take with this new technology!
Great video, but why the music bed?
Thanks so much! The music is something that I pop in from time to time just because I enjoy making it. Funny enough, with all the AI content of the channel, it actually isn't AI Generated, but rather made the old fashioned way.
Experimenting back and forth with adding it in-- sometimes I feel it just adds a bit of movement to the video, or helps fill out the monotony of just me speaking.
Still playing around with best uses-- next video I might just try it out for a few segments as opposed to through the whole thing.
Awesomeness as usual Tim! As for the lady who is fantasizing "leaving it all to go to Venezuela to be wit Hector", I think she is wanting to leave it all for Cassandra and opening up a fish store by Enrique's and Tobias' candle boutique.
:) Love your humour!
In terms of hands and fingers, Firefly did a great job vs Midjourney in my last work. This update is really great, but in terms of composition Midjourney keeps being the master, in my experience. Love your channel! 👍🏼🙌🏼
How did Adobe come to choose Firefly as the name of this AI generator?
Hector is a renowned finger surgeon...
The greatest comment this channel has ever gotten. Right here. Chicken Dinners for life!
It doesn’t seem great at following prompts. That elephant definitely wasn’t in a rainforest. That’s the savanna.
That's a good call. I wonder if it was the call for Giraffes that tripped it up. Admittedly, that's a bad prompt they chose for an example-- but, you called it! And as the first comment: You are the winner of the Chicken Dinner!
I don’t care so much about image creation. But greatly improving low resolution images and better clone stamping are incredibly impactful features for me. Just saw a SpaceX launch in Florida and only had my 24-105 and not my 100-400. It would be really nice to greatly sharpen up the moon.
You might want to try Topaz? That seems to be a pretty rock solid upscaler that doesn't add any additional hallucinations or AI creativity to the original image.
PS/LR has a "Super Upscaler"-- but, I've found the results to be fairly middling. Still, might want to also give that a shot.
I compared those, and by far Topaz Gigapixel is the best.
perfect timing
Thank you!! Bit of a hustle to get it up, but I think...I'm the first video on Firefly v3? I need some sort of YouTube Achievement badge!
and all just trained in our Midjourney images, YAY
Haha, I'll point out that I called that WAYYYY back in March of last year (It's a terrible video! ruclips.net/video/s5sBhfU_ujk/видео.htmlsi=BY0htByDOl-Curfl)
Personally, I don't take any issue with it, and I generally think the whole issue with training data is going to fade out-- considering that a number of other countries (Japan, and I believe India) have already OK'd it. Not saying it makes anything right, but in order to stay competitive, it's just a reality that training data will eventually need to be swept under the rug.
@@TheoreticallyMedia im ditching everything for local SD, at least the community is nice and not focus on profit over others works
And Midjourney was trained from?
@@tonon_AI fair. But by the looks of it, post SD3, we might not be looking at open source from Stability. That said, we've got quite awhile to go before SD4, so I'm sure some other major open source model will come along. Fingers Crossed!
@@eliezerjimenezsanchez8556 Initially the same training data that all the other models were trained on. But that goes back to like, MJ v1 and 2. Since then, no one outside MJ really knows what their data set is. Plenty speculate, but no one actually knows.
Hector, Anne Hathaway and Papyrus Bold! HA! Made my, well, few very happy minutes, and that's worth it! Great content. Thank you for sharing. I can't believe MJ isn't hitting guitar chords better. What, I'm tired.
Thanks again. 💯
Still don’t get how the darn thing works
From a technology standpoint? I believe the answer is “Magic Elves”
…I mean, it’s the only logical answer.
Love your stuff a lot Tim, but you better tell them there are no elephants in the Amazonian rain forest
Haha, my guess is it was "Let the Intern Prompt" Day!
I've yet to see an AI image that accurately generates a piano keyboard layout!!
same! I haven't counted all 88 keys, but I have yet to see the correct ordering of the black keys. Have you ever seen the Janko Keyboard? All AI Pianos seem to be trained on that layout!
@@TheoreticallyMedia No... I actually don't know much AI... I do use photoshop and since I just learned about Firefly, I am curious to see how I can make use of it if at all for my piano video projects... I'll look it up! thanks!
Why did Sora disappear!!
That's more on the Premiere side. Hasn't actually dropped yet though!
Adobe still not at the top of AI but good to see they are progressing. The video upscaling looks really good.
They kicked off slow to be sure-- but I think it was necessary on their part to tread lightly. Had Firefly V.1 kicked off with full canvas generation, the majority of the user base would have flipped out. "Photoshop wants to be Midjourney" would be the headline of every YouTube video.
But, they've eased in, and for the most part GenAI has been broadly accepted. I think we're going to see a big throttle up from them.
(On the other side: I do actually applaud their efforts on the AI Content ID system. That's an important aspect of this weird future we're headed into)
Wow-wow, coming out of the gate with rapid-fire dad jokes 🤘😂🔥 May have to re-watch to actually focus on the information 😅
Haha, I was having a little extra fun in this one! Got a good night's sleep and had an extra cup of coffee before hitting record on this one!
@TheoreticallyMedia ...and the dadest follow up 🤘🤘🤘😂 Remember when you were 8, would fall off of a tree, walk it off and keep playing? And now - a shoulder joint hurts for life, 'cos you slept wrong? Bet Hector drinks coffee because he wants to, not because he needs to! 😉
Very Sharp
Much appreciated! This was a fun one to cover! Adobe was pretty slow going at the start, but it looks like the engine is roaring up. Really jazzed to see Firefly starting to look good!
The aesthetic is like Dalle
I'll be the first to admit, I don't love it-- but, I will say that they're honing in with each update. I think FF's real strength is going to be with "stock" type images that will require a lot of post manipulation. For more imaginative stuff, I think MJ or Leo is going to be the tool.
@@TheoreticallyMedia I’m a big mj user. I was hoping they would partner up, but I don’t think it’s a path for them. Bummer!
On the guitar, it's also very hard to get them to animate anything resembling strumming the guitar without morphing fingers. Drums are even worse; someone should teach the AI how to play musical instruments. 😁
Haha, I did have a drummer joke in there, but I cut it out because I felt I was getting too hammy!
@@TheoreticallyMedia 😁
Wait a second, guitar players are super super super picky... and no pause or anything after saying that. Nothing?!? I know, I shouldn't FRET about such a thing though.
Ahhhhhh, how’d I miss that pun?!!! Ok, I’ll turn in my dad-joke card until I can prove myself worthy once again!
I’ll try to “pickup” my game!
Sooooo….whats a sub of PS cost these days now? First born child? Left foot?
Haha, PS alone I think is 19 bucks, but you can also bundle it with LR. I think the full package (all apps) is $59?
02:00 - I have no faith in AI if it's prepared to accept that elephants, giraffes and baobabs live in an Amazonian forest.
Haha, I maintain it was “let the intern prompt” day.
But yeah, it clearly ignored the rainforest keyword.
Yeah, that Papyrus sequel was pretty bold ಡ ͜ ʖ ಡ
An elephant in the Amazon Rainforest? AI really? Who taught you that?
Again, AI is just making up stuff. The prompt was a rainforest, but it showed a desert, as that's just easier. The biggest shortfall is just trying to get AI to actually follow the prompt if it contains something that is not common: it's fine if you want young women with large chests, but forget Komodo dragons. The referencing is either too strong or too creative, and again it does not accurately follow prompts. Let's hope they solve this and don't just make more wild flashy images.
Agreed. I’m curious to see how well the style/structure reference will work. It is still dependent, at the end of the day, on if the model knows what a Komodo dragon is.
Sidenote: speaking of which, I once ran an image through a creative upscale and it turned a dead tree into a Komodo dragon. Not at all what I was looking for, but I’ll admit, it looked REALLY cool!
😊
Papyrus 😬
BOLD!!
Hec. . .tor! 😂
The most interesting man in Venezuela!
Papyrus XDD 😁
Papyrus BOLD!
You still can't use it for adult stuff, which makes it useless for fetish and nude art photographers.
Early days-- but I totally understand the validity of NSFW Adult content as an art form. Interestingly, Midjourney seems to be on the forefront there, as they've been discussing and testing how to implement adult content for users-- but obviously, it can be a tricky slope that can easily land you on the front page of every major news organization if you aren't careful.
Adobe, being the massive/public company they are, aren't going to be the forerunners in that area.
Thanks, the intro went too far for me
How so?
Since when have elephants lived in an Amazonian forest? Rubbish in, rubbish out 😆
Haha, yeah-- it's interesting they went with that as the example prompt. They called out giraffes as well. Maybe they let the intern prompt that one?
@@TheoreticallyMedia maybe chatgpt wrote it
Pity it's so woke it's unusable
Fakery rules. Sad times
AI takes away all skill and the fun of learning a software… it will make a large amount of people even dumber as everything will be generated and gets automated. To me AI makes no sense.
I don't think that's Adobe's aim here. I think you'll have a toolset that allows for some AI management, but you'll always have "standard" PS under it. For example, the adjustment brush thing-- that's more a Lightroom feature, and something that could already be done in PS, it just took a few extra steps to get there.
Some people will stop at the "basic" level, and most of that imagery will be fine. Just fine. But those that go the extra 20% with manual adjustments and really know PS inside and out? Those are the folks who will be getting client work or creating work that actually sells.