has anyone addressed the copyrightability of these AI-generated images? Until the courts settle that, I would be wary of producing any kind of product with these programs
Yes, it has been ruled on. AI-generated images are not copyrightable. The next step will be class action lawsuits demanding restitution from plagiarists. Justice is coming for all the thievery done here by stupid people thinking they can get away with robbing artists. It's gonna be bloody.
Need to see it. The Photoshop AI stuff looked amazing in the demos and is crushingly underwhelming as it is, and that is on static images. I can't help but feel that these are very, very controlled examples. Look at the items on the table. They are basically custom made with tracking codes to make them easier to track in 3D.
"underwhelmed" is the exact emotion i had from the PS generative functions... and if it's bad in PS... then...AE? yeah good luck... it's like the marketers never considered we'd try to use this stuff in real production environments...
@@DamianHanley239 I think it’s a first step. There isn’t really a great pipeline from AI devs to Full Production, so…for better or worse. Adobe is the only company with the muscle to build that bridge. It’ll be a messy bridge, with lots of potholes, and will collapse at least once…meanwhile we will always pay the toll- but, at least the bridge will be built and the normalization of AI in production can begin.
@@TheoreticallyMedia yeah i mean... i know you're right... it's gonna take time... but the dang marketing they use makes it look amazing!... i guess that's the function of marketing though :)
the reason the footage of the purse and cup is "wonky", is because they are trying to make a map of the objects, getting all the different angles, etc. It's pretty standard to move the camera like this when "scanning" something.
I'm actually interested to see more of the generate fill in after effects. That was impressive. That would be really handy for any slight or medium changes for video you want to edit.
00:27 🎨 Project Primrose creates flexible textiles that display content, allowing patterns to change and even animate, controlled by body movement.
01:22 🎥 Project Fast Fill (Gen Fill for video) enables seamless integration of elements into videos with impressive motion tracking and light adaptation.
02:58 🧳 Fast Fill is ideal for subtle enhancements like removing small elements or fixing continuity errors, rather than dramatic changes.
03:12 🌆 Project Scene Change effortlessly combines subjects and backgrounds from different videos, offering stable and impressive compositing abilities.
04:48 📸 Project Scene Change even generates contextual shadows on subjects for added realism.
05:15 🎭 Project Posable allows users to pose AI-generated characters and associate props for contextual interactions, offering a simplified 3D character manipulation tool.
07:07 🏠 Adobe Illustrator now includes generative fill, enabling the creation of 3D objects from generated vector images for use in animations.
That Illustrator 3-D function looks like a simple extrusion. Notice how the chimney spans the full depth of the house. I can see it saving some time, but there would still be significant editing required, so I don't know that it's a game changer yet.
I think it’s great for doing little 2.5d animations. Maybe not world changing, but really cool. It does make me wonder if full 3d isn’t far off. Personally I think we’re a few months (or maybe weeks) from seeing really amazing text to 3d.
Wow. This is doing a better job faster than me using planar tracking in after effects and projecting the photoshop paintout. It would take a lot of effort to extract and match the lighting and movement in a believable way. So many hours of pixel-fucking. Now it's just doing it in minutes with just a few clicks. Quite impressive.
It’s pretty nuts. And the thing I keep saying: this is the bad version. It only gets better from here. I’ll say on the illustrator front: the demo made the gens look super fast, that was actually NOT the case in my real world time. I mean, it was like, 50 seconds. So, y’know; still way faster than manually doing it…haha
While as an amateur I love this, I also feel like this is such a slap in the face for people who have put years into learning how to edit stuff like this, all to be rewarded with a 1-click solution
I get your point-- but I also tend to think that the majority of people using this will still be those people who have put years in. If you're looking for an "easy" button, there are much better solutions than Adobe products, which-- no matter how easy they are-- still have a pretty steep learning curve. Like, I think once AI hits After Effects? It's still going to be an intimidating program to learn on Day 1. If anything, I just think that AI in Adobe offers some quicker workflows, but you still have to know what you're doing. Take that Scene Change Example with the guy in the Jungle Temple-- Yeah, it "worked"-- but it wasn't that impressive, because it still needed to be lit correctly-- and needed a color pass...and, a few hundred other things. But that's my point: None of this is a "one button" solution.
When AI came out with stable diffusion : NOOO IT IS STEALING ART AND CREATIVITY. When ADOBE created AI model from Adobe stock contents(provided by millions of individuals) : YEEE, AI IS FUTURE, I LOVE AI, I WILL GIVE MY 1 KIDNEY AS "CREDIT POINTS"
Haha, yeah I’m not at all a fan of the credit system. I talk about that a bit in the “no hype” look at Firefly 2. I’ll say this to Adobe’s credit: by putting it into industry standard software, we’re on the road to industry standard usage. Which, might be good or bad, but at least it’ll normalize the use of AI, which to be honest is still kind of a fringe technology. My greater concern is that Adobe becomes THE name for creative AI, and that’s never a good thing either.
Should I go to work today? With "Everything Changed", how will I learn to function in the world again? I need to know, what still works and what doesn't.
You can not show up. If your boss asks where you are, just send them a screenshot of this thumbnail. They'll leave the office as well...and on and on it will go, until.... Well, I guess it'll be a global day off. I think we could all use it.
I'm less worried about Adobe when it comes to having a mental breakdown about AI. For me, it is more about all the angles happening at once. Images, video, 3d, voices, music, LLMs, robots, etc. Consider that GPT-4 was done training before ChatGPT was even made public, so what do they have now? MidJourney said they just finished sourcing their video training data. Gemini is coming in probably less than a month. The AI Explained channel was just showing advancements in artificially generated video for training cars and robots, and their last video covered the discovery that pooling lots of disparate robot training data gives better results, even beyond what the original data was collected for. 2024 is going to be pretty crazy I think.
I looked it up, in the beginning, you'll get so many credits but when that has been used up, you have to pay for it as a standalone app. No idea how much that will be.
Like a thief looking forward to the next heist? You are morally bankrupt; perhaps you should check yourself for psychopathy? Doesn't that sound interesting, and explain your little AI thievery excitement and coverage?
I see a lot of YouTubers claiming that Adobe is "stunning the world" with their recent release, but it's barely just incrementally improved over past technologies. EDIT: I mean, I guess it's improvement enough over the current state of the art for commercial tools.
To be fair: I haven’t been wow’d by Firefly. If you watch my last video, I think that’s pretty clear. But: the stuff that was announced here is pretty epic. The dress? That’s something we didn’t see coming. And while video Inpainting and Nerf/splatting isn’t new, incorporating it into the Adobe line is basically a straight line to a professional workflow. I mean, we’re all sort of “in it” and I think that makes it easy to forget that the majority of the world has no idea what’s happening in our little corner of the room. For better or worse, Adobe just put it all on a stage.
100% You can kinda tell in my last video, I was a little underwhelmed. I try not to manufacture hype, so...yeah, I don't know if that reads or not? This one on the other hand: The hype is real for me!
Oh, which book? If you're into Sci-Fi Noir, I just finished Titanium Noir by Nick Harkaway. It's a fun read, very-- y'know Blade Runner-esque. Sort of more revolves around Longevity rather than AI. Breezy easy read though!
@@TheoreticallyMedia "The Dark Forest," by Cixin Liu (the sequel to "The Three Body Problem). The fabrics also respond to the mood of the wearer. Thanks for the recommendation!
After being underwhelmed by Firefly, I have to say, this looks immense. The MAX stuff, especially the mapping (walking behind the coffee cup) is a game changer. Literally WOW! Midjourney need to get 3d out and do something special with it FAST, or they're about to get left behind.
I was (and have been) pretty underwhelmed by Firefly as well. But that’s ok, if Adobe has the rest of this on the horizon! And I should say, I don’t mind the Firefly model in Gen-Fill, it has actually done pretty well for me there, but whole image generation is just not doing it for me.
i think adobe is just being careful at the moment, but eventually they will crush everyone in the ai space like every big corpo. smaller ai companies are going to have to find a specific niche
Yes Firefly was underwhelming but I think you need to put it in context. Adobe is a large company with a very established user base, that has expectations and deadlines. On top of that there is no doubt a large part of their user base that is actively against generative AI. The comparison to that is the extremely active open source world of generative AI. Comparing the two Firefly was always going to look tame, but I strongly feel in the long run Firefly will do some pretty amazing things as Adobe ramps up, and it will do them in a way that I can use professionally with less trepidation and uncertainty than the current systems.
@@JohnVanderbeck Compete or die. Welcome to the world of business. Who cares what pressures they have? They may well catch up, take over and integrate. In the meantime, cash is pouring into its competitors.
My mind is spinning uncontrollably, lol. I am one that continuously brainstorms 24/7, always thinking of ideas, stories, etc. Now, with this technology my mind will def melt in the near future - You'll find me at the nearest asylum. But seriously, this is all happening fast - and with tech like this hitting the streets we will just see an increase in speed with new ai tech. Adobe has def gone all in with AI in recent months and they seem to be traveling faster than most right now.
I think they’re on a bit of a buying frenzy. I’m pretty sure somewhere in the archives of the channel, the video Inpainting was in paper format (I’ll have to dig around for that)- Adobe had that coin purse to buy their way to the top spot- and to be honest, who else could/should it be? On the plus side, it’ll be good to have all these tools under one roof, Adobe has always been good about interplay between their products. But yeah, these are INSANE times!
It is such a game changer. If you take a look at the previous video I did on Firefly (the No Hype one) I’ve got a section toward the back that talks about Premiere and some of the other cool video projects they have in the pipeline.
2:43 "You can see where it is....the color of the trees..." To be fair, generally speaking (especially in this day and age), on horizon shots (or nearly horizon shots), items in the foreground appear darker than in the background, don't they? Did it take this into account or was it happenstance?
I'll have to go back and look, but wasn't it a lighter hue? The thing that I'm wondering is if, like Gen-Fill, it needed a larger sample size to draw from-- Those joggers were pretty small in the frame. That said, I still say that no one would ever notice the difference anyhow. Well, maybe unless you were one of the joggers!! "Mitch, I could've SWORN we were jogging past that video shoot, right?" "Dude....look at the trees....what is happening here?!"
The end results of a lot of "fast removes" videos I've seen end up with the subject(s) travelling through a completely abandoned space. It gives me "I am legend" post apocalyptic vibes.
Ughhhhh, you’re right! The green screen billboards in baseball drive me insane to begin with. And I presume the MLB will be the first to adopt this, as they seem intent on doing everything they can to destroy baseball.
@@TheoreticallyMedia Definitely. Plus since batters and pitchers spend so much time just standing there on screen it will be easier for the technology to keep up. (Unless it's Elly De La Cruz)
Wife told me I was nuts because, I believe, we will be able to ask AI to show us a never-before-seen movie of our choice using custom parameters... whenever we want. This will happen in my lifetime.
1000% will. The weird part about it is how it'll sneak up on us. It won't be tomorrow, where we're like "Netflix, Make me a Batman Movie starring me." It'll be this gradual rollout of stuff that only kinda half works-- until all of a sudden, it does-- but it isn't that shocking. I was thinking about that the other day when I told my watch to set a reminder that instantly showed up on my calendar. Then, hopping back to the first time I tried voice dictation in...like, Windows 98? I mean...that did NOT work at all. Even when we first got Siri/Cortana/Alexa-- constant complaints about how "dumb it was." No one complains anymore...but no one marvels at it either!
@@TheoreticallyMedia You're telling me. I used DragonSpeak, and it was brutal. Now I can just Windows+H and instantly voicetype in any program. All the Home Hubs are pretty dumb now compared with Chat GPT and Bard, which can answer questions directly instead of "I found this web site," and it's only been a few years since they were introduced. I, for one, welcome our future AI overlords.
I said the same thing, but I imagine prompting it to recreate any movie you want with a hilarious and random ending that cuts the plot short right in the middle of the film, or even 5 minutes in.
They will include it now with their pay-to-use software service because they need their users to beta test it; then, when it's developed into a new product, they will lock it behind an option or paywall. Adobe and Autodesk can both take a hike. I wouldn't give them a dollar.
Better content? Just another tool, still needs the imagination and direction from someone to create something. They probably thought art was done with when the camera obscura found its way into art.
Don't crush my kaiju battle dreams! 🤣 This is amazing. I've already been experimenting with using gen fill and other AI tools built into current gen of Adobe software and this stuff is all just next level. There is a Blender -> AE plugin that allows for some pretty wild camera translation stuff. Mixing all these tools together is gonna blow minds with what it can do for amateur and small budget filmmakers. Incredible. Thanks for the breakdown!
I’d never crush a Kaiju dream! Not even with an 80ft lizard foot! Yeah, I was just saying: when Gen Fill hits AE? Things are going to be nuts. I don’t even know what that’s going to look like! Some interesting flexes going on here though. DubDubDub is going to wipe away Elevenlabs’ translation feature, and while Posable is just kind of SD ControlNet, the fact that Adobe has Mixamo is a pretty big game changer on that front. Adobe is showing it's the 800lb gorilla of creative AI. Does that make it a Kaiju?
I always thought the real areas where this kind of thing would shine is in giant scenarios or size morphs like this ❤❤❤❤❤❤ We could have whole adventure series with big/small villains and heroes etc.❤❤❤❤
I had a whole pitch about that once. Basically “it’s a small world” but in reverse: everyone in the world shrinks down to Smurf size. And what would happen then? It was back in that era of Lost and Heroes- so I thought it was a good hook. But, like Lost and Heroes, I couldn’t figure out what the actual reason for it happening was!
@@TheoreticallyMedia Ya, when I was a tiny child 🤔 there was a TV series “land of the giants” where the premise was an alien planet with giant people and little people. There’s Ginormica in Pixar’s stuff. But a series with adventures on a planet of giants was my first introduction at an early age. There’s the “drink me” of Alice in Wonderland, the littles, Thumbelina (who I wished I was as a kid at one point, 😂) and so Many others. But to actually do it right would definitely require CGI/AI. There’s a book by Tabitha King the wife of Stephen King called “Small World” I think, with a diabolical theme of course, considering the source 😂. But would be able to be made into a movie now with the new technology. So there definitely a lot of this kind of theme out there. It definitely touches a deep archetypal realm, perhaps because we spent many millions of years much smaller in our ancestral heritage. Like the tiny furry creatures we used to be in trees during the time of the dinosaurs. Anyway thanks and take care!! Love your channel!! ❤️👍🏻
Oh, I didn’t even think about that. Haha, Adobe being Adobe, there’s a good chance it’ll crash during a covert mission. “Private Jones! We are in a night mission! Why is your camo colored beach flamingo?!”
Don't forget: the more software can do, and the easier it gets, the more (and more realistic) fakes can be made that are intended to do "bad," by people who are "bad" (momentarily overwhelmed by trauma or anxiety, "not a good person" - it's way more complicated than written here). Also: we are all at fault for "bad" people, we all can change it, and hopefully we are all on the way to doing it.
It is. It’s a very 2.5d animation thing, but still handy. I do think we’ll be seeing some major strides coming out of text to 3d models in the next few months. Or…maybe even weeks…
At some point! The AI showcase didn’t really perform the way I was hoping it would, but I’m thinking about starting a second channel to focus on community films! Stay tuned!
Nothing Adobe could ever do would make my jaw drop. Well maybe one thing, stop with the subscription based model and get your fingers out of my bank account every month.
Indeed. I’m surprised they haven’t had one yet. Topaz seems to be going with this bizarre $300 + a subscription price model? I still can’t make heads or tails out of their marketing copy. But I do wonder if that new pricing structure is indicative of a heavy hitter entering the space.
Thanks Sway! I’m so fired up about this incoming era of AI! This one did have me thinking what we’ll see in the After Effects front at some point, Like, what is that even going to look like?!
You said it, man. After not using any paid (nor pirated lol) Adobe products for more than 10 years (this includes AE) - They reeled me back in with PS a few days ago. They're doing a great job with community building, listening to creators, and innovation. It kind of reminds me of the old Apple days. I might do the switch or use their tools more than I thought. @@TheoreticallyMedia
It is a real golden age. Seriously, I wake up every day and think: “what is possibly coming down the pike today? And like, every day: it is something amazing.
Poseable is what excites me the most. It is a problem we've been struggling with for some time in generative AI, and while there are techniques and solutions, I feel it isn't a solved problem yet. If nothing else the workflow needs to be easier.
Totally agree! I'm really jazzed about that as well-- I mean, I have that homebrew version, but it is a total workaround. I'll be happy once there is a dedicated application for posing and generating!
@@TheoreticallyMedia Yeah I have several homebrewed workflows as well. Some work better than others, but the whole space feels like it has a LOT of room for improvement. One of the things I have been having fun doing lately is taking my own photography work, removing the real models, and trying to replace them with AI generated characters in similar poses. And anything outside a very boring pose almost immediately falls apart. If you want to do senior yearbook photos, family albums, or celeb headshots then the systems we have now are clunky but work. But if you want to actually express yourself with creative posing like I do in my work, you are out of luck. Excited to see what Poseable will bring to this and if it is more powerful in understanding poses. The major problem with the Stable Diffusion ControlNet workflows is that it has no concept of depth to the OpenPose skeleton. So it doesn't really understand a hand being in front versus behind for example.
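The depth problem described above comes down to the OpenPose skeleton format itself: each joint is just a 2D screen position plus a confidence score, with no z coordinate. A minimal sketch (hypothetical keypoint data, not the actual ControlNet tensor layout) shows why "hand in front" and "hand behind" become indistinguishable once projected:

```python
# OpenPose-style keypoints carry only (x, y, confidence) per joint.
# Two physically different poses -- a wrist in FRONT of the torso vs.
# BEHIND it -- can project to the exact same 2D skeleton, so a
# depth-unaware conditioning model cannot tell them apart.

def project_to_openpose(joints_3d):
    """Drop the z (depth) coordinate; keep a fixed confidence of 1.0."""
    return [(x, y, 1.0) for (x, y, z) in joints_3d]

# Hypothetical wrist positions (x, y, z): identical screen position,
# opposite depth relative to a torso plane at z = 0.
hand_in_front = [(0.5, 0.4, +0.2)]
hand_behind   = [(0.5, 0.4, -0.2)]

print(project_to_openpose(hand_in_front) == project_to_openpose(hand_behind))
# -> True: both collapse to the same 2D skeleton
```

This is why depth-aware conditioning (e.g. pairing OpenPose with a depth map) is often layered on top in practice.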
It’s there, kinda? If you check out my Firefly video, I’ve got a section on Premiere. The TL;DW: they have text to storyboard, and then text to animatic (which is video, let’s face it) and that was announced back in April. I think they’re waiting for that tech to get more rock solid before deploying.
this is just a modified version of controlnet, which local installers have been using for months. just install your own version of SDXL and use the many model variations and UIs. you'll have MUCH greater creative capacity and no censorship (speaking of censorship, i'm sure you'll delete this). i have the new version of creative suite, and the limits and strangleholds it imposes on users are just slightly less annoying than all the new and wonderful errors i get when trying to do daily tasks with PS.
I don't ever delete comments, just FYI. Well, I should say, I haven't yet-- I'm sure someone at some point will come in and say something REALLY stupid and I'll do it-- but thus far, I haven't. I totally agree with you by the way: Pose Control is pretty much just ControlNet-- And Scene Change is NERFs and Gsplats. I have seen Video Inpainting-- In fact, an early version of it is in one of my videos. I'm presuming Adobe just bought them. To all that, I'll say: You and I, and many others that are in this space, have seen this tech. But we are a VERY small group of people compared to the vast majority of Adobe users. And, most of those users don't want to mess around with installing SD and messing with Loras. They just want to open PS or AE and work. That's where the gamechanger is. And I think in a lot of ways, creative AI needs Adobe to standardize. The one plus side to all this is that it normalizes GenAI in professional workspaces-- which is a good thing. It's also Adobe....meaning it's also a bad thing. I've got a 20 year love/hate relationship with that company, so I get where you're coming from.
@@TheoreticallyMedia wow. that was refreshing! i'm generally lambasted and booted for saying this kind of thing. - but you're right, we're likely the fringe. happily subscribed to a reasonable content creator :)
So I guess with Project Primrose, the designs are projected onto the dress in real-time in the video and the dress itself is not one big wearable display. That would be interesting hardware, not interesting AI. Adobe is not a hardware company. However, I had to make my own conclusion about that. So you, and perhaps also Adobe, left out the most important thing: What Project Primrose actually does.
So, from reading the documentation, I think it's actually a material that is sewn into the fabric-- so not projection. That said, I totally agree with you: No way Adobe is actually going into production on this. I think this is just a nifty R&D project for them. A "We thought we Could, so We Did" kind of situation. That said, I could see them partnering with someone down the line and licensing the tech out. Some super high end clothing company with a "powered by Adobe" ad campaign. But that's just speculation on my part...
I honestly don't understand this weird obsession with sterilizing the world around people in images and video. Obviously, fixing things like mics in shot or what have you is useful, and actually helps those managing continuity on shoots actually do more than just say "hey we need spend a bunch of money to reshoot this", but focusing so much of this on how it can be used to make it look like you were in this strange liminal space devoid of other life is... odd to me.
That's a really good point. I don't know, maybe something to do with a collective subconscious realization that we never get a chance to be "alone" anymore? Like-- we sort of desire some feeling of space? It's a really interesting rabbit hole you've got there...haha, not sure how far into it I want to delve!
So, I’m not the hugest fan of Firefly- if you see the video I did yesterday, I think you can kinda tell. BUT: the stuff in this video? That’s the real power of Adobe, and I don’t think anyone can compete on that front.
@@TheoreticallyMedia i'll admit the applications it has in video editing are pretty good, even surpassing what NUWA seems to be promising/capable of. For static image gen on a single image/frame/etc though it's pretty weak, though I'm sure if they keep up in the AI race they can improve over time too.
@@xbon1 the thing Adobe has going for it is the massive user base. It’s something that I’m really critical of as well- since the last thing I want is a homogenized AI look, if there is a “king of the hill” platform
I kinda wonder! I miss Poser, and I can only presume someone snatched up that licence. One of my favorite eras of computer art was the combo of Poser and Bryce3D!
@@TheoreticallyMedia... And Kai's Goo. WTF was that? Gamification (you had to use tools in certain ways to unlock the next Goo) of really cool tools is a fail. But Krause's fractal imaging software (forget the name) was unrivaled. Still is.
To be sincere, Adobe is always making hype of something we've already seen before. I tested Generative Photoshop and the results were pretty bad. And the Adobe subscription is always complicated. I'd rather go back to my 3D and Mage Stable Diffusion workflow. Thanks for the video!
Haha, I actually get some OK results from GenFill in Photoshop, if I'm not leaning on it too hard. Like, smaller things I think it's fine with-- but going with a full gen image (ala Firefly) just doesn't do it for me. And the Adobe sub is complicated by design I think!
Oh, I’m sure it will justify their $2 increase to the various plans. Nothing with Adobe is ever free. At least now they separate out the beta versions, instead of just releasing broken tools that we have to deal with….so, thanks?
Yeah, I hope not. But I can see it. I don’t think they ever figured out what to do with Mixamo to begin with. The Poseable pitch was probably something like “hey, remember that thing we have in the closet from a few years back? I think we can use it!” Or, they build Poseable and just kind of forget about Mixamo. I’ve seen that happen with them too. Like, it just sits there like a forgotten toy. But, at least we can still play with it!
So, I had a pal that worked for World of Warcraft at the height of their popularity. I think it was something like 12 million users paying $12/mo, so 12 million x 12, every month. Adobe has a smaller user base, but a higher cost…so, yeah. That’s a lot of money.
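The back-of-the-envelope math in that WoW comparison works out like this (using the commenter's rough figures, which are ballpark numbers, not audited ones):

```python
# Rough subscription-revenue math from the comment above:
# ~12 million subscribers at ~$12/month (commenter's estimates).
subscribers = 12_000_000
price_per_month = 12  # USD

monthly_revenue = subscribers * price_per_month
annual_revenue = monthly_revenue * 12

print(f"${monthly_revenue:,}/month")  # $144,000,000/month
print(f"${annual_revenue:,}/year")    # $1,728,000,000/year
```

So even at the lower per-seat price, that peak-WoW user base threw off well over a billion dollars a year, which is the scale Adobe's pricier subscriptions are playing at.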
Could you imagine if that was the in the presentation? Just, Rorschach walks out and says: "You've got it wrong, I'm not locked in here with you. You're locked in here with me..." And then the livestream goes black.
Yeah it probably is harder actually. They probably re-wrote Photoshop to get the web version going, but the desktop version has so much history in it, it probably isn't worth it for them to re-write at that level.
Haha. I actually broke a cardinal rule of mine, in that I updated to the latest OS to run these new features. I generally like to wait at least 6 months. And that goes for any OS. Thus far, things seem…mostly stable? Although some plugins on the music side are a little sluggish. That’s pretty common for virtual instruments though.
Uhhh. Ok. But always curious: how is this clickbait? Is it the title? People always get bent out of shape when I use the “Everything Just Changed” title, but I do save it for when I genuinely think things did change. Adobe introducing this level of AI on their products is pretty big. Now, if you don’t like that they’re doing it, that’s a whole other thing.
Wonder what they'll discontinue next. Higher costs, fewer features, stagnant development on the apps that faithful users have paid for for years. Burning through credits while AI generates stuff you can't use. Subscriptions and digital delivery killed improvement and innovation. And deadlines. Great for shareholders, I guess.
Yeah, I’ll say that I really despise that whole credit system. There isn’t much worse than trying to generate something while keeping an eye on a ticking clock. Oh, but I’m sure a higher paid tier of unlimited is in the pipeline…
Call me when they have an AI robot who can look and act exactly like me and go do my job. Just so long as he can't come back and replace me at home, too. OMG. What have I done!????
I mean, yes and no. We've been able to do things like this for quite some time-- it just cost a LOT of time and money. I get what you're saying though: now that it's cheap and accessible, we'll see a lot more of it.
If you watch the Firefly 2 video I did before this, you’ll see that I’m fairly critical of their pricing structure. What they really need is an à la carte menu, where we can choose PS, PR, AE plus whatever else and pay per app. Not these weird bundles that they choose.
I went back and forth on that! Is that the correct style? That the tie is not supposed to show under a one button jacket? I mean, I don’t know how to tie a full Windsor knot, so I’m legitimately asking.
Wow, they're almost keeping up with stuff "2 minute papers" has presented a few years ago! Pretty impressive for a lazy-assed company resting on their laurels from 30 years ago
Everything Just Changed!! 😕 oh My!!! Will I be able to live like I used to? How is my life going to change? How will society cope, and keep running the world the same as it used to be? What do I need to do different to cope with this change? 😖 Oh, the horror!
Haha, totally accurate. It’s at least 4 versions before we get to something that works. And generally, by that time they also break a tool that we all constantly use!
Technology being used for this, and not all being pooled into automating the mining of the resources, is why humanity is going to fail at this and go extinct. You need to be responsible and you're just not being.
@@TheoreticallyMedia thanks for concurring. Unfortunately, brilliant-minded idiots think everything is solved with the dumb AI they created. I refuse to be subdued by it.
Granted. But this year’s tech demo is next year’s product. Of these, I think most will go to product. Maybe not the Dress/fabric thing, since that’s a WHOLE other product world that Adobe shouldn’t enter. But I can see them partnering with like, Supreme to make a (stupidly overpriced) one off. But the other stuff is already right on the cusp of product. Video Inpainting is a must for them, and Scene Change is just Adobe Branded NERFs/GSplats. I think we’ll see most of this at next year’s Max, but…it’ll also be late, in that those of us in the space will have already seen it and used it.
@@TheoreticallyMedia Yeah, I don't think they can afford to not release most of that stuff, considering all the competition out there. Even the DubDubDub is what ElevenLabs just released and neither is doing the mouth movements that HeyGen is.
Adobe Firefly 2 video is here: ruclips.net/video/MJXOpdl4r7w/видео.html
👋
has anyone addressed the copyright ability of these AI generated images? until the courts settle that I would be wary to produce any kind of product with these programs
Yes, it has been ruled on. AI-generated images are not copyrightable. The next step will be class-action lawsuits demanding restitution from plagiarists. Justice is coming for all the thievery done here by stupid people thinking they can get away with robbing artists. It's gonna be bloody.
All those empty chairs in the Adobe office from developers replaced with AI?
Haha, that’s a good burn. It’s either that or it’s 3am and this dude hasn’t left the office in weeks.
Need to see it. The Photoshop AI stuff looked amazing in the demos and is crushingly underwhelming as is, and that's on static images. I can't help but feel that these are very, very controlled examples. Look at the items on the table. They are basically custom-made with tracking codes to make them easier to track in 3D.
Those “designs” on the cup and bag looked a little suspicious to me as well!
neural filters are trash, not useful at all
"underwhelmed" is the exact emotion i had from the PS generative functions... and if it's bad in PS... then...AE? yeah good luck... it's like the marketers never considered we'd try to use this stuff in real production environments...
@@DamianHanley239 I think it’s a first step. There isn’t really a great pipeline from AI devs to Full Production, so…for better or worse. Adobe is the only company with the muscle to build that bridge.
It’ll be a messy bridge, with lots of potholes, and will collapse at least once…meanwhile we will always pay the toll- but, at least the bridge will be built and the normalization of AI in production can begin.
@@TheoreticallyMedia yeah i mean... i know you're right... it's gonna take time... but the dang marketing they use makes it look amazing!... i guess that's the function of marketing though :)
the reason the footage of the purse and cup is "wonky", is because they are trying to make a map of the objects, getting all the different angles, etc. It's pretty standard to move the camera like this when "scanning" something.
Yup, although is it just me, or did they also speed that video up? It looked time ramped to me.
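To make the "scanning" point above concrete, here's a toy sketch (purely illustrative — the function name and figures are made up, and this is not Adobe's actual capture pipeline) of the kind of orbital camera path you sweep around an object to collect all the angles a 3D reconstruction needs:

```python
import math

def orbit_camera_positions(radius, height, n_views):
    """Generate camera positions circling an object at the origin,
    the way you'd sweep a phone around a cup or purse when scanning it.
    More views from more angles -> better 3D reconstruction."""
    positions = []
    for i in range(n_views):
        theta = 2 * math.pi * i / n_views  # evenly spaced around the circle
        positions.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          height))
    return positions

# e.g. 36 views = one shot every 10 degrees around the object
views = orbit_camera_positions(radius=0.5, height=0.3, n_views=36)
print(len(views))  # 36
```

That deliberate, even coverage is why hand-captured scan footage often looks "wonky" when played back as ordinary video.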
I'm actually interested to see more of the generate fill in after effects. That was impressive. That would be really handy for any slight or medium changes for video you want to edit.
00:27 🎨 Project Primrose creates flexible textiles that display content, allowing patterns to change and even animate, controlled by body movement.
01:22 🎥 Project Fast Fill (Gen fill for video) enables seamless integration of elements into videos with impressive motion tracking and light adaptation.
02:58 🧳 Fast Fill is ideal for subtle enhancements like removing small elements or fixing continuity errors, rather than dramatic changes.
03:12 🌆 Project Scene Change effortlessly combines subjects and backgrounds from different videos, offering stable and impressive compositing abilities.
04:48 📸 Project Scene Change even generates contextual shadows on subjects for added realism.
05:15 🎭 Project Posable allows users to pose AI-generated characters and associate props for contextual interactions, offering a simplified 3D character manipulation tool.
07:07 🏠 Adobe Illustrator now includes generative fill, enabling the creation of 3D objects from generated vector images for use in animations.
That Illustrator 3-D function looks like a simple extrusion. Notice how the chimney spans the full depth of the house. I can see it saving some time, but there would still be significant editing required, so I don't know that it's a game changer yet.
I think it’s great for doing little 2.5D animations. Maybe not world-changing, but really cool. It does make me wonder if full 3D isn’t far off.
Personally I think we’re a few months (or maybe weeks) from seeing really amazing text to 3d.
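The "simple extrusion" described above can be sketched in a few lines (hypothetical helper, not Illustrator's actual code): every vertex of the 2D outline gets pushed back by the same fixed depth, which is exactly why the chimney spans the full depth of the house.

```python
def extrude_2d_outline(outline_2d, depth):
    """Naive 2.5D extrusion: duplicate the 2D outline at z=0 and z=depth.
    Every feature in the outline (chimney included) gets the same depth,
    since there's no per-part geometry -- hence the full-depth chimney."""
    front = [(x, y, 0.0) for x, y in outline_2d]
    back = [(x, y, depth) for x, y in outline_2d]
    return front + back

# A house-with-chimney silhouette as a 2D polygon (made-up coordinates)
house = [(0, 0), (4, 0), (4, 3), (3, 3), (3, 4), (2.5, 4), (2, 3.5), (0, 3)]
mesh = extrude_2d_outline(house, depth=2.0)
print(len(mesh))  # 16 vertices: 8 front + 8 back
```

True 3D object creation would instead need separate geometry per part (a shallow chimney on a deep house body), which is the editing work that still remains.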
Wow. This is doing a better job faster than me using planar tracking in after effects and projecting the photoshop paintout. It would take a lot of effort to extract and match the lighting and movement in a believable way. So many hours of pixel-fucking. Now it's just doing it in minutes with just a few clicks. Quite impressive.
It’s pretty nuts. And the thing I keep saying: this is the bad version. It only gets better from here.
I’ll say on the illustrator front: the demo made the gens look super fast, that was actually NOT the case in my real world time.
I mean, it was like, 50 seconds. So, y’know; still way faster than manually doing it…haha
Dude, I hear you. Projecting in AE is always troublesome for me - I end up in Blender, where I'm slower, but at least I don't pull my hair out. haha.
While as an amateur I love this, I also feel like this is such a slap in the face for people who have put years into learning how to edit stuff like this, all to be rewarded with a 1-click solution
I get your point-- but I also tend to think that the majority of people using this will still be those people who have put years in. If you're looking for an "easy" button, there are much better solutions than Adobe products, which-- no matter how easy they are-- still have a pretty steep learning curve.
Like, I think once AI hits After Effects? It's still going to be an intimidating program to learn on Day 1.
If anything, I just think that AI in Adobe offers some quicker workflows, but you still have to know what you're doing. Take that Scene Change Example with the guy in the Jungle Temple-- Yeah, it "worked"-- but it wasn't that impressive, because it still needed to be lit correctly-- and needed a color pass...and, a few hundred other things.
But that's my point: None of this is a "one button" solution.
When AI came out with stable diffusion : NOOO IT IS STEALING ART AND CREATIVITY.
When ADOBE created AI model from Adobe stock contents(provided by millions of individuals) : YEEE, AI IS FUTURE, I LOVE AI, I WILL GIVE MY 1 KIDNEY AS "CREDIT POINTS"
Haha, yeah I’m not at all a fan of the credit system. I talk about that a bit in the “no hype” look at Firefly 2.
I’ll say this to Adobe’s credit: by putting it into industry standard software, we’re on the road to industry standard usage. Which, might be good or bad, but at least it’ll normalize the use of AI, which to be honest is still kind of a fringe technology.
My greater concern is that Adobe becomes THE name for creative AI, and that’s never a good thing either.
Should I go to work today? With "Everything Changed", how will I learn to function in the world again? I need to know, what still works and what doesn't.
You can not show up. If your boss asks where you are, just send them a screenshot of this thumbnail. They'll leave the office as well...and on and on it will go, until....
Well, I guess it'll be a global day off. I think we could all use it.
I'm less worried about Adobe when it comes to having a mental breakdown about AI. For me, it is more about all the angles happening at once. Images, video, 3d, voices, music, LLMs, robots, etc. Consider that GPT-4 was done training before ChatGPT was even made public, so what do they have now? MidJourney said they just finished sourcing their video training data. Gemini is coming in probably less than a month. AI Explained channel was just showing advancements in artificially generated video for training cars and robots, and last video about discoveries of using lots of disparate robot training data gives a better result even than what the original data was for. 2024 is going to be pretty crazy I think.
Creative Suite is either going to get a lot more expensive, or it won't include all of these AI tools.
I looked it up: in the beginning you'll get a certain number of credits, and when those are used up, you have to pay for it as a standalone app. No idea how much that will be.
Basically... "Try before you buy". :)
Whoopee
Thanks for the presentation; very impressive how fast AI is moving in the media environment.
From what I’m hearing, these last few months of the year are going to get even crazier! Super excited!
Like a thief looking forward to the next heist? You are morally bankrupt; perhaps you should check yourself for psychopathy? Doesn't that sound interesting, and explain your little AI-thievery excitement and coverage?
Always learning something new about AI. Thanks for the content
Thanks so much for watching and dropping the comment!
I see a lot of RUclipsrs claiming that Adobe is "stunning the world" with their recent release, but it's barely an incremental improvement over past technologies. EDIT: I mean, I guess it's improvement enough over the current state of the art for commercial tools.
To be fair: I haven’t been wow’d by Firefly. If you watch my last video, I think that’s pretty clear.
But: the stuff that was announced here is pretty epic. The dress? That’s something we didn’t see coming. And while video Inpainting and Nerf/splatting isn’t new, incorporating it into the Adobe line is basically a straight line to a professional workflow.
I mean, we’re all sort of “in it” and I think that makes it easy to forget that the majority of the world has no idea what’s happening in our little corner of the room.
For better or worse, Adobe just put it all on a stage.
This is what was hoped for yesterday! The demos were surreal 🤯
100% You can kinda tell in my last video, I was a little underwhelmed. I try not to manufacture hype, so...yeah, I don't know if that reads or not? This one on the other hand: The hype is real for me!
@@TheoreticallyMediaYes, totally got that vibe! Excited to see how it all develops 👀
Just a few minutes ago, I was reading a passage in a science fiction novel where people in a futuristic setting wear animated clothing.
Oh, which book? If you're into Sci-Fi Noir, I just finished Titanium Noir by Nick Harkaway. It's a fun read, very-- y'know Blade Runner-esque. Sort of more revolves around Longevity rather than AI. Breezy easy read though!
@@TheoreticallyMedia "The Dark Forest," by Cixin Liu (the sequel to "The Three Body Problem). The fabrics also respond to the mood of the wearer. Thanks for the recommendation!
After being underwhelmed by Firefly, I have to say, this looks immense. The MAX stuff, especially the mapping (walking behind the coffee cup) is a game changer. Literally WOW! Midjourney need to get 3d out and do something special with it FAST, or they're about to get left behind.
I was (and have been) pretty underwhelmed by Firefly as well. But that’s ok, if Adobe has the rest of this on the horizon!
And I should say, I don’t mind the Firefly model in Gen-Fill, it has actually done pretty well for me there, but whole image generation is just not doing it for me.
i think adobe is just being careful at the moment, but eventually they will crush everyone in the ai space like every big corpo. smaller ai companies are going to have to find a specific niche
Yes Firefly was underwhelming but I think you need to put it in context. Adobe is a large company with a very established user base, that has expectations and deadlines. On top of that there is no doubt a large part of their user base that is actively against generative AI. The comparison to that is the extremely active open source world of generative AI. Comparing the two Firefly was always going to look tame, but I strongly feel in the long run Firefly will do some pretty amazing things as Adobe ramps up, and it will do them in a way that I can use professionally with less trepidation and uncertainty than the current systems.
@@JohnVanderbeck Compete or die. Welcome to the world of business. Who cares what pressures they have? They may well catch up, take over, and integrate. In the meantime, cash is pouring into its competitors.
@@icvideoservices or they will buy the start ups.
My mind is spinning uncontrollably, lol. I'm one of those who brainstorms 24/7, always thinking of ideas, stories, etc. Now, with this technology, my mind will definitely melt in the near future - you'll find me at the nearest asylum. But seriously, this is all happening fast, and with tech like this hitting the streets we'll just see the pace of new AI tech increase. Adobe has definitely gone all in on AI in recent months, and they seem to be moving faster than most right now.
I think they’re on a bit of a buying frenzy. I’m pretty sure somewhere in the archives of the channel, the video Inpainting was in paper format (I’ll have to dig around for that)-
Adobe had that coin purse to buy their way to the top spot- and to be honest, who else could/should it be?
On the plus side, it’ll be good to have all these tools under one roof, Adobe has always been good about interplay between their products.
But yeah, these are INSANE times!
Same bro! I feel like I’m going crazy with ideas 🤯
GENERATIVE fill now in video...very impressive Adobe.
Love it.
It is such a game changer. If you take a look at the previous video I did on Firefly (the No Hype one) I’ve got a section toward the back that talks about Premiere and some of the other cool video projects they have in the pipeline.
@TheoreticallyMedia thank u... we'll check it out.
A game changer for sure. Exciting times for the Media industry ahead. 😁
WHOA!!! Now this is what I was hoping for in terms of innovation. Thanks for sharing this with us.
100%! Yesterday was…well, it was ok. I’m not the biggest Firefly fan. But today? This is the forward thinking stuff that blows me away!
2:43 "You can see where it is....the color of the trees..."
To be fair, generally speaking (especially in this day and age), on horizon shots (or nearly horizon shots), items in the foreground appear darker than in the background, don't they? Did it take this into account or was it happenstance?
I'll have to go back and look, but wasn't it a lighter hue? The thing I'm wondering is whether, like Gen-Fill, it needed a larger sample size to draw from - those joggers were pretty small in the frame.
That said, I still say that no one would ever notice the difference anyhow. Well, maybe unless you were one of the joggers!!
"Mitch, I could SWORN we were jogging past that video shoot, right?"
"Dude....look at the trees....what is happening here?!"
@@TheoreticallyMedia lol
The end results of a lot of "fast removes" videos I've seen end up with the subject(s) travelling through a completely abandoned space. It gives me "I am legend" post apocalyptic vibes.
Finally the tools to fix everything in post
Oh, god...you're right. Production will never get anything right again!! This is going to make for some very sloppy directors!
Sports uniforms will be billboards changing after each play.
That you can turn off with a monthly subscription.
Ughhhhh, you’re right! The green screen billboards in baseball drive me insane to begin with.
And I presume the MLB will be the first to adopt this, as they seem intent on doing everything they can to destroy baseball.
@@TheoreticallyMedia Definitely.
Plus since batters and pitchers spend so much time just standing there on screen it will be easier for the technology to keep up.
(Unless it's Elly De La Cruz)
My first thought wasn't Katniss's dress, but Rorschach's mask!
Ooooooohhhhh that is AWESOME!
Wife told me I was nuts because I believe we will be able to ask AI to show us a never-before-seen movie of our choice, using custom parameters... whenever we want. This will happen in my lifetime.
1000% will. The weird part is how it'll sneak up on us. It won't be tomorrow, where we're like "Netflix, make me a Batman movie starring me." It'll be a gradual rollout of stuff that only kinda half works - until all of a sudden, it does - but by then it isn't that shocking.
I was thinking about that the other day when I told my watch to set a reminder that instantly showed up on my calendar. Then, hopping back to the first time I tried voice dictation in... like, Windows 98? I mean... that did NOT work at all. Even when we first got Siri/Cortana/Alexa - constant complaints about how "dumb" it was.
No one complains anymore...but no one marvels at it either!
I said the same thing, and that I would be starring in the film.
@@TheoreticallyMedia You're telling me. I used DragonSpeak, and it was brutal. Now I can just Windows+H and instantly voicetype in any program.
All the Home Hubs are pretty dumb now compared with Chat GPT and Bard, which can answer questions directly instead of "I found this web site," and it's only been a few years since they were introduced.
I, for one, welcome our future AI overlords.
I said the same thing but i imagine prompting it to recreate any movie you want with a hilarious and random ending that cuts the plot short right in the middle of the film or even 5 minutes in.
In today's world, we should be able to figure out whose thinking ability is limited.
They will include it now with their pay-to-use software service because they need their users to beta test it. Then, when it's developed into a new product, they will lock it behind an option or a paywall. Adobe and Autodesk can both take a hike; I wouldn't give them a dollar.
It's awesome, but what does that mean for creative companies? Job cuts?
Better content? Just another tool, still needs the imagination and direction from someone to create something. They probably thought art was done with when the camera obscura found its way into art.
Finally I can become the king of FLAVORTOWN
Haha. “A heart attack for you, a clogged artery for you! Exploding cholesterol for you!” Everyone wins in Flavortown!!
Don't crush my kaiju battle dreams! 🤣
This is amazing. I've already been experimenting with using gen fill and other AI tools built into current gen of Adobe software and this stuff is all just next level. There is a Blender -> AE plugin that allows for some pretty wild camera translation stuff. Mixing all these tools together is gonna blow minds with what it can do for amateur and small budget filmmakers. Incredible.
Thanks for the breakdown!
I’d never crush a Kaiju dream! Not even with an 80ft lizard foot!
Yeah, I was just saying: when Gen Fill hits AE? Things are going to be nuts. I don’t even know what that’s going to look like!
Some interesting flexes going on here though. DubDubDub is going to wipe away ElevenLabs' translation feature, and while Posable is just kind of SD ControlNet, the fact that Adobe has Mixamo is a pretty big game changer on that front.
Adobe is showing it's the 800lb gorilla of creative AI.
Does that make it a Kaiju?
I always thought the real areas where this kind of thing would shine is in giant scenarios or size morphs like this ❤❤❤❤❤❤ We could have whole adventure series with big/small villains and heroes etc.❤❤❤❤
I had a whole pitch about that once. Basically “it’s a small world” but in reverse: everyone in the world shrinks down to Smurf size. And what would happen then?
It was back in that era of Lost and Heroes- so I thought it was a good hook. But, like Lost and Heroes, I couldn’t figure out what the actual reason for it happening was!
@@TheoreticallyMedia Ya, when I was a tiny child 🤔 there was a TV series, "Land of the Giants," where the premise was an alien planet with giant people and little people. There's Ginormica in DreamWorks' Monsters vs. Aliens. But a series with adventures on a planet of giants was my first introduction at an early age. There's the "drink me" of Alice in Wonderland, the Littles, Thumbelina (who I wished I was as a kid at one point, 😂), and so many others. But to actually do it right would definitely require CGI/AI. There's a book by Tabitha King, the wife of Stephen King, called "Small World" I think, with a diabolical theme of course, considering the source 😂. It could be made into a movie now with the new technology. So there's definitely a lot of this kind of theme out there. It definitely touches a deep archetypal realm, perhaps because we spent many millions of years much smaller in our ancestral heritage - like the tiny furry creatures we used to be in the trees during the time of the dinosaurs. Anyway, thanks and take care!! Love your channel!! ❤️👍🏻
gen fill in illustrator is incredible
I seriously don’t understand why this isn’t the big news! It’s incredible!
Futuristic camouflage for soldiers.
Oh, I didn’t even think about that. Haha, Adobe being Adobe, there’s a good chance it’ll crash during a covert mission.
“Private Jones! We are on a night mission! Why is your camo beach-flamingo colored?!”
I noticed that you made a reference to Dad's Army, @TheoreticallyMedia. That made me chuckle!
Oh! I thought 'Phast Phil' was the name of the guy in the shot!!!
I know this is frightening, but this could be the beginning of the rise of 2D animation
Might have a video on that soon!
Imagine running Doom on a dress.
Doom.
On a dress.
What does that first one have to do with Adobe? Meaning, unless you make a magic dress, how does this apply to you?
Don't forget: the more software can do, and the easier it gets, the more numerous and more realistic the fakes will be that are intended to do "bad" and are made by people who are "bad" (momentarily overwhelmed by trauma or anxiety, "not a good person") (it's way more complicated than written here).
Also: we are all at fault for "bad" people, we all can change it, and hopefully we are all on the way to doing it.
Unfortunately, the Illustrator 3D effect looks like a basic extrude, not a true object creation.
It is. It’s a very 2.5d animation thing, but still handy. I do think we’ll be seeing some major strides coming out of text to 3d models in the next few months. Or…maybe even weeks…
Thanks! hey is 1945 AI still going to be shared, it means a lot to me that you like it
At some point! The AI showcase didn’t really perform the way I was hoping it would, but I’m thinking about starting a second channel to focus on community films! Stay tuned!
Thanks for the information!!
Nothing Adobe could ever do would make my jaw drop. Well maybe one thing, stop with the subscription based model and get your fingers out of my bank account every month.
I’ve been griping about that since CS6.
At one point, I had one machine that still ran CS6…it was like a little black box of freedom!
I'm so FUCKING excited for Fast Fill. When does this release in beta?
Not soon enough!!!
UI looks a hell lot like Dear ImGui! Ah Adobe is actually on the list of sponsors! Nice!
It seems they also introduced a video upscaler. Topaz may get some real competition soon ;)
Indeed. I’m surprised they haven’t had one yet. Topaz seems to be going with this bizarre $300 + a subscription price model? I still can’t make heads or tails out of their marketing copy. But I do wonder if that new pricing structure is indicative of a heavy hitter entering the space.
@@TheoreticallyMedia yeah, you are right. My thoughts exactly. :)
Good job Tim!
Thank you!!
Great summary Tim!
Thanks Sway! I’m so fired up about this incoming era of AI! This one did have me thinking about what we’ll see on the After Effects front at some point.
Like, what is that even going to look like?!
You said it, man. After not using any paid (nor pirated lol) Adobe products for more than 10 years (this includes AE) - They reeled me back in with PS a few days ago.
They're doing a great job with community building, listening to creators, and innovation. It kind of reminds me of the old Apple days.
I might do the switch or use their tools more than I thought. @@TheoreticallyMedia
Amazing! I wonder what you can use it for.
Man, at this point? Anything you can imagine!
I think that's the point: we need to be more than just extremely careful, considering what I can imagine.
Thanks for this one! 😊
1000%!
What a time to be alive
It is a real golden age. Seriously, I wake up every day and think: “what is possibly coming down the pike today?
And like, every day: it is something amazing.
Thanks!
Oh wow! Thank you so much for that!! Very much appreciated!
Poseable is what excites me the most. It's a problem we've been struggling with for some time in generative AI, and while there are techniques and solutions, I feel it isn't a solved problem yet. If nothing else, the workflow needs to be easier.
Totally agree! I'm really jazzed about that as well-- I mean, I have that homebrew version, but it is a total workaround. I'll be happy once there is a dedicated application for posing and generating!
@@TheoreticallyMedia Yeah I have several homebrewed workflows as well. Some work better than others, but the whole space feels like it has a LOT of room for improvement. One of the things I have been having fun doing lately is taking my own photography work, removing the real models, and trying to replace them with AI generated characters in similar poses. And anything outside a very boring pose almost immediately falls apart. If you want to do senior yearbook photos, family albums, or celeb headshots then the systems we have now are clunky but work. But if you want to actually express yourself with creative posing like I do in my work, you are out of luck. Excited to see what Poseable will bring to this and if it is more powerful in understanding poses. The major problem with the Stable Diffusion ControlNet workflows is that it has no concept of depth to the OpenPose skeleton. So it doesn't really understand a hand being in front versus behind for example.
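The depth-ambiguity point above can be shown numerically: two different 3D hand positions (in front of vs. behind the torso) can project to the exact same 2D OpenPose-style keypoint, so a 2D skeleton alone can't distinguish them. This is a toy pinhole-projection illustration, not actual ControlNet code, and the coordinates are made up.

```python
def project_to_2d(point_3d, focal=1.0):
    """Simple pinhole projection: (x, y, z) -> (f*x/z, f*y/z).
    This flat (x, y) pair is all a 2D keypoint map retains of the pose."""
    x, y, z = point_3d
    return (focal * x / z, focal * y / z)

# Two hand positions on the same camera ray: one near, one twice as far
hand_in_front = (0.5, 0.2, 2.0)
hand_behind = (1.0, 0.4, 4.0)  # twice the distance, twice the offset

print(project_to_2d(hand_in_front))  # (0.25, 0.1)
print(project_to_2d(hand_behind))   # (0.25, 0.1) -- identical keypoint
```

Since both poses yield identical keypoints, any conditioning built only on the 2D skeleton has to guess which arrangement was meant — which is why "hand in front vs. behind" falls apart.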
Great review
Primrose was the sister of Katniss in the Hunger Games... So it's not coincidental
ahhhh, the odds are in your favor! good catch!
"We don't need no stinking human designers" Seems like a stab in the back to their customer base
Eh, I think we're always going to need designers. Mostly so that bosses can yell at them, and blame them when clients get mad. Sigh...
Really lovely playing on your channel by the way!
:-)@@TheoreticallyMedia
the news was pretty good, but what about text-to-video in Adobe?
It’s there, kinda? If you check out my Firefly video, I’ve got a section on Premiere. The TL;DW: they have text to storyboard, and then text to animatic (which is video, let’s face it) and that was announced back in April.
I think they’re waiting for that tech to get more rock solid before deploying.
Awesome. Thanks.
100%!! Thank you for watching!
this is just a modified version of controlnet, which local installers have been using for months.
just install your own version of SDXL and use the many model variations and UIs. you'll have MUCH greater creative capacity and no censorship (speaking of censorship, i'm sure you'll delete this).
i have the new version of creative suite, and the limits and strangleholds it imposes on users are just slightly less annoying than all the new and wonderful errors i get when trying to do daily tasks with PS.
I don't ever delete comments, just FYI. Well, I should say, I haven't yet - I'm sure someone at some point will come in and say something REALLY stupid and I'll do it - but thus far, I haven't.
I totally agree with you, by the way: Pose Control is pretty much just ControlNet - and Scene Change is NeRFs and GSplats. I have seen video inpainting before - in fact, an early version of it is in one of my videos. I'm presuming Adobe just bought them.
To all that, I'll say: You and I, and many others that are in this space, have seen this tech. But we are a VERY small group of people compared to the vast majority of Adobe users. And, most of those users don't want to mess around with installing SD and messing with Loras. They just want to open PS or AE and work. That's where the gamechanger is.
And I think in a lot of ways, creative AI needs Adobe to standardize. The one plus side to all this is that it normalizes GenAI in professional workspaces-- which is a good thing.
It's also Adobe....meaning it's also a bad thing. I've got a 20 year love/hate relationship with that company, so I get where you're coming from.
@@TheoreticallyMedia wow. that was refreshing!
i'm generally lambasted and booted for saying this kind of thing. - but you're right, we're likely the fringe.
happily subscribed to a reasonable content creator :)
One question though are any of these tools available yet ?
So I guess with Project Primrose, the designs are projected onto the dress in real-time in the video and the dress itself is not one big wearable display. That would be interesting hardware, not interesting AI. Adobe is not a hardware company. However, I had to make my own conclusion about that. So you, and perhaps also Adobe, left out the most important thing: What Project Primrose actually does.
So, from reading the documentation, I think it's actually a material that is sewn into the fabric - so not projection. That said, I totally agree with you: no way Adobe is actually going into production on this. I think this is just a nifty R&D project for them. A "We thought we could, so we did" kind of situation.
That said, I could see them partnering with someone down the line and licensing the tech out. Some super high end clothing company with a "powered by Adobe" ad campaign. But that's just speculation on my part...
When Adobe makes an impression.... The world must be upside down -
Haha, well, we’ll know things have gone completely sideways if you ever get an email that says “Adobe has issued you a refund.”
Could a mask be made from the dress fabric?
Apparently it is a mesh that goes in fabric. So, yes. A mask, a couch, a wall tapestry…
The possibilities are kind of mind boggling.
I honestly don't understand this weird obsession with sterilizing the world around people in images and video. Obviously, fixing things like mics in shot is useful, and it helps those managing continuity on shoots do more than just say "hey, we need to spend a bunch of money to reshoot this," but focusing so much of this on how it can be used to make it look like you were in some strange liminal space devoid of other life is... odd to me.
That's a really good point. I don't know, maybe something to do with a collective subconscious realization that we never get a chance to be "alone" anymore?
Like-- we sort of desire some feeling of space?
It's a really interesting rabbit hole you've got there...haha, not sure how far into it I want to delve!
Crazy!
We live in a real golden age!
You can now fix the final season of Game of Thrones, haha
Adobe's gen AI released a generation behind... DALL-E 3 has no competition right now.
So, I’m not the hugest fan of Firefly- if you see the video I did yesterday, I think you can kinda tell.
BUT: the stuff in this video? That’s the real power of Adobe, and I don’t think anyone can compete on that front.
@@TheoreticallyMedia I'll admit the applications it has in video editing are pretty good - it even surpasses what NUWA seems to be promising/capable of. For static image gen on a single image/frame/etc., though, it's pretty weak, though I'm sure if they keep up in the AI race they can improve over time too.
@@xbon1 the thing Adobe has going for it is the massive user base. It’s something that I’m really critical of as well- since the last thing I want is a homogenized AI look, if there is a “king of the hill” platform
Is Poseable the old Poser interface?
I kinda wonder! I miss Poser, and I can only presume someone snatched up that licence. One of my favorite eras of computer art was the combo of Poser and Bryce3D!
@@TheoreticallyMedia... And Kai's Goo. WTF was that? Gamification (you had to use tools in certain ways to unlock the next Goo) of really cool tools is a fail. But Krause's fractal imaging software (forget the name) was unrivaled. Still is.
To be honest, Adobe is always hyping something we've already seen before. I tested generative Photoshop and the results were pretty bad. And the Adobe subscription is always complicated.
I'd rather go back to my 3D and Mage Stable Diffusion workflow.
Thanks for the video!
Haha, I actually get some OK results from GenFill in Photoshop, if I'm not leaning on it too hard. Like, smaller things I think it's fine with-- but going with a full gen image (ala Firefly) just doesn't do it for me.
And the Adobe sub is complicated by design I think!
For web, results are okay but for hi-res images, generative fill is a no go.
If everything changed why is my bank account still empty?
Oh, I didn’t say it was going to change for the better!
Yeah sure, mind-blowing. Adobe will start charging consumers for every feature they have showcased.
Oh, I’m sure it will justify their $2 increase to the various plans.
Nothing with Adobe is ever free.
At least now they separate out the beta versions, instead of just releasing broken tools that we have to deal with….so, thanks?
@@TheoreticallyMedia I agree exactly like what they are doing with generative fill.
Jan Brady?
Currently this video is at 90k views and you are the first person to catch that! I’ve been waiting for you!!
"Project Poseable" = kill off Mixamo. Then charge for Project Poseable.
Yeah, I hope not. But I can see it. I don’t think they ever figured out what to do with Mixamo to begin with. The Poseable pitch was probably something like “hey, remember that thing we have in the closet from a few years back? I think we can use it!”
Or, they build Poseable and just kind of forget about Mixamo. I’ve seen that happen with them too. Like, it just sits there like a forgotten toy. But, at least we can still play with it!
Nice to see Adobe has been using our billions in subscription fees for something.
So, I had a pal that worked for World of Warcraft at the height of their popularity. I think it was something like 12 million users paying $12/mo.
12 million x 12. A month.
Adobe has a smaller user base, but a higher cost…so, yeah. That’s a lot of money.
This is sorcery!
The only channel I immediately click on and like
Oh man. Thank you so much!
rorschach mask!
Could you imagine if that was the in the presentation? Just, Rorschach walks out and says: "You've got it wrong, I'm not locked in here with you. You're locked in here with me..."
And then the livestream goes black.
wake me up when they make an ai uv unwrapper
And still Photoshop does not fully support 32-bit mode. I guess it must be much harder than developing AI stuff.
Yeah it probably is harder actually. They probably re-wrote Photoshop to get the web version going, but the desktop version has so much history in it, it probably isn't worth it for them to re-write at that level.
Meanwhile Apple this year: Updated all our Softwares to work with iPhone 15 Pro 😴
Haha. I actually broke a cardinal rule of mine, in that I updated to the latest OS to run these new features. I generally like to wait at least 6 months. And that goes for any OS.
Thus far, things seem…mostly stable? Although some plugins on the music side are a little sluggish. That’s pretty common for virtual instruments though.
and you just got downvoted and "do not recommend"-ed for clickbait... what a world we live in!
Uhhh. Ok. But always curious: how is this clickbait? Is it the title? People always get bent out of shape when I use the “Everything Just Changed” title, but I do save it for when I genuinely think things did change.
Adobe introducing this level of AI on their products is pretty big.
Now, if you don’t like that they’re doing it, that’s a whole other thing.
Great vid
Wonder what they'll discontinue next.
Higher costs, fewer features, stagnant development on the apps that faithful users have paid for for years.
Burning through credits while AI generates stuff you can't use.
Subscriptions and digital delivery killed improvement and innovation. And deadlines.
Great for shareholders, I guess.
Yeah, I’ll say that I really despise that whole credit system. There isn’t much worse than trying to generate something while keeping an eye on a ticking clock.
Oh, but I’m sure a higher paid tier of unlimited is in the pipeline…
@@TheoreticallyMedia I wouldn't be surprised - or, if "new features" in Firefly gen 2 will create that tier.
If everyone can be a graphic artist, no one is.
Call me when they have an AI robot who can look and act exactly like me and go do my job. Just so long as he can't come back and replace me at home, too. OMG. What have I done!????
Haha. And then because RobotYou is a copy of you, it’ll want its own robot as well!
And on and on it will go!
@@TheoreticallyMedia Maybe we already ARE robots, programmed to believe we are not. Because if we knew the truth, we'd go insane!
great:)
Thank you!!
The power over video manipulation these days is getting, frankly, worrying.
I mean, yes and no. We've been able to do things like this for quite some time-- it just cost a LOT of time and money. I get what you're saying though: now that it's cheap and accessible, we'll see a lot more of it.
@@TheoreticallyMedia yup, misinformation and scams are going to be far more sophisticated and easily performed in the future.
Did it agree to pay an overpriced subscription fee?
If you watch the Firefly 2 video I did before this, you’ll see that I’m fairly critical of their pricing structure.
What they really need is an ala carte menu. Where we can choose PS, PR, AE plus whatever else and pay per app. Not these weird bundles that they choose.
I see only half a tie ? Oops!
I went back and forth on that! Is that the correct style? That the tie is not supposed to show under a one button jacket?
I mean, I don’t know how to tie a full Windsor knot, so I’m legitimately asking.
Wow, they're almost keeping up with stuff "2 minute papers" has presented a few years ago!
Pretty impressive for a lazy-assed company resting on their laurels from 30 years ago
Everything Just Changed!! 😕 oh My!!! Will I be able to live like I used to? How is my life going to change? How will society cope, and keep running the world the same as it used to be? What do I need to do different to cope with this change? 😖 Oh, the horror!
Haha, yeeeeah, I knew I was gonna get some grief on that title! Ehhhh, worth it!
"Ai Creative" is the biggest fucking oxymoron I've ever heard.
You're very easily pleased.
Adobe cherry-picks the F out of their demos. Everything they've shown over the years never works half as well in practice for normal users. Lol
Haha, totally accurate. It’s at least 4 versions before we get to something that works. And generally, by that time they also break a tool that we all constantly use!
I’ve tried Firefly and must say it is shitty. It only looks good in the selected examples in their adverts.
If you look at the video I did before this one (Firefly No Hype), well- I mostly agree with you. It isn’t great at all.
Technology being used for this, instead of being pooled into automating the mining of resources, is why humanity is going to fail at this and go extinct. You need to be responsible, and you're just not being.
Stupid world we live in.
I mean...you aren't wrong.
@@TheoreticallyMedia thanks for concurring. Unfortunately, brilliant-minded idiots think everything is solved with the dumb AI they created. I refuse to be subdued by it.
Say goodbye to reality!
Yeah.... tech demos are just that... tech demos...
Granted. But this year’s tech demo is next year’s product.
Of these, I think most will go to product. Maybe not the Dress/fabric thing, since that’s a WHOLE other product world that Adobe shouldn’t enter. But I can see them partnering with like, Supreme to make a (stupidly overpriced) one off.
But the other stuff is already right on the cusp of product. Video Inpainting is a must for them, and Scene Change is just Adobe Branded NERFs/GSplats.
I think we’ll see most of this at next year’s Max, but…it’ll also be late, in that those of us in the space will have already seen it and used it.
@@TheoreticallyMedia Yeah, I don't think they can afford to not release most of that stuff, considering all the competition out there. Even the DubDubDub is what ElevenLabs just released and neither is doing the mouth movements that HeyGen is.
VFX artists are gonna be out of business soon