"Potential" is the key word here. Potential is great for raking in investor money. But what all these machine learning tools have in common is basically NOISE. Noisy inputs creates noisy results, the real world is very noisy, and I'd bet these tools aren't going to create a clean result even if you gave it perfectly sharp CG renders to analyze. ML creates a hot noisy mess, they try to smooth it out by filtering and blending, then re-fit to the noise through various algorithms, this approach is good for getting in the ball park, no further. All these things are old news in the CG industry, the only thing that's newish here is that companies like this create an automated pipeline with no human intervention. The only way to make use of these results is to put a human eyeball in between every step of that pipeline and correct for the noise and refine the results, which is basically what VFX artists do for a living, outside of the creative decisions.
Most of these AI tech bros are grifters. Create a half-baked, broken AI app, get shills to scream about it online, raise the value of the broken app, then sell it to a clueless investor or lock it behind a paywall. If they were really trying to create an industry-standard tool, they would work on it tirelessly day and night while consulting VFX artists for their input. But no. They release the tools to grifting YouTubers who shill about them without ever knowing how VFX is done, yelling that this is the end for VFX artists. It's as if other people being VFX artists was a big hindrance to their existence. I wish one of them would try to replace a Transformer in one of those chaotic Michael Bay shots, with all the camera shake and pyro going off.
You know what, this reminds me of those cheesy photo editors that add filters. People thought they would replace Photoshop and professional retouching too, but that never happened. Same here: sure, this might work for some low-budget or indie YouTube stuff, but not on a real film or show.
AI mocap will not replace optical mocap or roto/track/matchmove. Just look at the feet; it's the same with every AI mocap system. Optical mocap is the only way for high-level animation.
The problem with a lot of people in the VFX/AI crossover of the Venn diagram is that they're much too technical for their own good. While a piece of software like this may work for some shots in the next 5-10 years, maybe even be good enough to give a first pass on most shots, how will it handle the strange sequences that DoPs and directors will throw at it? How will it handle fast camera motion? Strange rack focus? Exaggerated, crazy lens distortion? These are all non-technical, creative decisions made in film and TV all the time to give interesting visual results, and that VFX artists have to work around all the time. It's irresponsible to claim that VFX is dead and that AI tools such as this are already worthwhile replacements. We're humans, helping to bring the vision of other humans to life, to be enjoyed by other humans. No AI can ever replicate or replace that. The VFX industry is already in a race to the bottom, creating many of the issues we face in the industry to this day. The vendors are crabs in a bucket and will try to replace us where they can. We must fight against it.
Sir, I request you to review Pika Labs. There are claims that these tools will bring the VFX business in-house, and that we will no longer need separate VFX studios.
Hello. Nice video. I totally agree with you about AI at this stage, but to me it's quite obvious it can't replace serious CG work, and also that if you want to use it you must plan your footage to be "easier": no complicated movements or body positions with overlapping characters that can confuse the AI. It's made for simple things, as you say. But if you take some care creating your shots, the results can be much better. Using it with green-screen footage, for example, lets you replace a character easily, and for a hobbyist that can be a great thing. I think you used footage that showed all of the AI's limits, so you didn't use it "correctly" for its real purpose. Their advertising is not completely honest, but IMO it's obvious it's just advertising. I think its biggest issue is that it doesn't track the camera correctly in Blender; if it did, you could adjust the character position in the animation (you can do that even now) and create much better VFX, but if the camera doesn't work correctly, it's completely useless. In other words, you can't use the same workflow you would use in other cases; you have to plan the whole creation process around the tool's possibilities. As you said, good for YouTube or social, not for professional work.
AI is not good for all VFX tasks. For now. This company's hype is due to their marketing. Other companies are also working on every aspect of this, and I'm pretty sure they'll get there soon. Like every other disruptive technology in history: bad at first, but it improves over time, will be very helpful along the way, and will then change an entire industry. 🤷♂️
@HugosDesk But it was free before, when I used it. I'll show you what I made later today. The worst thing is they've now made it fully paid, fooling gullible people. Very bad news; a greedy, senseless team.
I completely disagree with you. Advertise it as previs?? I'm so happy I found this software. If it were previs, I would've ignored it, and so would thousands of other people. You don't know what type of shots to use. You purposely gave it the hardest shots for AI tracking and model animation. I tested it with APPROPRIATE shots and it was wonderful. You gave 5% props to the software and 95% shit on it. Just don't use it, and don't spread bad advertising for CGI progress in regard to this software.
I would say good luck with convincing a client to use 'APPROPRIATE' shots. I've worked in the VFX industry for 23 years, and I've never had a client that would be so helpful. The software needs to handle any shot, as advertised on their website. Anyway, I completely respect your opinion. Thanks for watching.
@@HugosDesk I don't think it was ever an option for a VFX sup or a studio to tell a client that something cannot be done because their software only works with APPROPRIATE SHOTS. No AI in the world will be able to account for the million possible angles that can be shot and give pixel-perfect results on the first pass. AI cannot replace compositors completely!
What are you talking about? All my videos have a lot of light and they are filmed on a cinema camera in Log with 15 stops of range. It's professional footage. It's not graded if that is what you mean, but grading is done after VFX
@@juko8794 The videos are in 6K and they are very sharp. Of course, watching on YouTube (this was a live stream) won't do it justice. Not sure why you are defending this software; it should work on any footage regardless. It's not a good product, and they should not make the claims they do in the promotional material. It's weak software, not ready for production or consumer work. It's a beta with too much hype.
I am so happy that someone with your authority has finally exposed this software for what it really is: just a decent attempt at automation, but literally NOTHING more. They are not even close. This won't be useful in production for a decade to come. Nice for hobbyists.
Nice review and honest opinion. There are phrases that never get old: "never trust sales talk!".
Thanks so much for watching. Absolutely agree, too much hype. Hopefully it stabilises soon.
We need more videos like that. This stupid hype has to finally be exposed with proper testing and real-world examples. Everyone who actually tests AI comes to the same conclusion very quickly. But almost nobody wants to talk about it publicly for some reason. Why?! Thanks, Hugo!
Thanks for watching. I always try my best to be constructive and honest in my reviews. That's the only way for me.
Just Google how to use the tool correctly and you will quickly find out that even ILM is using it. Jesus, the YT kids... So tired...
Working in VFX/post/motion since 2001 here. At 15:03, this WDS proposal would only be honest if it stated: "Money grab, unless you're looking for a low-quality IG reel to amuse your friends." Huge thanks, Hugo, for this very detailed clip that makes us VFX people proud of human work. AI is a partial tool at the end of the day.
PS: I just smiled at the end of the chapter when you said almost the same thing about social networks' content.
I'm currently figuring out a stable workflow for Move AI for some cutscene productions and had the same issues. My colleague was wondering why it worked when we tested it, but when I tried to figure out better workflows, it wasn't working. The thing is, getting it to the point where it barely works is not production-ready. We hire actors, rent a place to shoot, take full days, and then it has to work as reliably as possible.

The issue was that I wanted to do a good calibration first and then do the takes. But when you do the calibration first, it ends up in another session, so your takes are in a session without a calibration, lol. I figured out that you could change the server ID before uploading the takes so I could get everything into the same session, but then another problem occurred. One of the iPhones disconnected for whatever reason, and when it reconnected, the camera order had changed. So my super-duper calibration didn't work for the takes, and assigning the camera order manually isn't a feature yet. :D

That's the main issue with AI tools right now. They are made for people who wouldn't figure it out themselves anyway, so there is no option to fix problems on your own and be safe while shooting. For now, the way to go is to always calibrate before shooting the takes, and to do it more than once in case a calibration fails. Then do the takes and hope that one of the calibrations works so you don't have to do it all over again. I feel so bad going to the shoot next week with this fragile workflow. I hope they will soon make it at least a bit more solid for professional work, and not just funny tests for TikTok.
Finally! A grounded review of Wonder Dynamics for VFX. A lot of hype on this one, and you are right: their marketing is doing us VFX artists no favors by giving the impression that it really is a "one-click" solution, like a lot of people think working with computers is. Huge potential, but so far a lot of false marketing. I also hope this tech develops soon and gives us more freedom to work on the creative part and less on the technical side.
Plot twist: there are thousands of Indian VFX guys working overtime to deliver those shots as soon as possible.
Thank you for this video and for thoroughly playing around with the possibilities Wonder Dynamics Studio offers so far. I was really curious, and I also don't see artists being replaced by these kinds of results. The way the sales team presented this software sounded so crazy, it's no wonder artists started to wonder whether they should even begin a career in the VFX industry now (or start considering shifting to another industry).
Indeed, balancing out the front page and talking more about VFX for social media platforms or previz sounds much closer to what could be implemented quickly than wiping out the high-end CGI process.
Thank you again for the work, Hugo! You're an amazing inspiration to many. And by the way, about "all that ranting"... it's really just labeling what we all see in the shots. So thanks again for this video and for being so open about speaking honestly.
So glad you did this review. I found it very strange when I imported the scene into Maya and saw the camera at the origin. The sliding in the animation is crazy, and cleaning up the animation was a killer; it would have been way better to animate manually. Yes, the time spent in the animation pipeline would be a little more than the upload, create, download, and cleanup process in the WD pipeline, but the results would be solid. I still feel AI tools are good, but some tools move the artist away from creative control of the output, which is a big mistake.
Cordless tools for a carpenter are fantastic but the carpenter still has total creative control.
Haha, I'm happy there are people like you talking about this situation.
I'm so glad you posted this, because as much as I love these kinds of tools, the hordes of videos saying "VFX IS OVER" are such clickbait garbage it's insufferable. Visual effects will never be over; it will just become more and more sophisticated. Even if everything becomes point-and-click, all that means is that creators get more time to iterate and create better imagery/storytelling. The irony is that we're already pointing and clicking, lol! It will just allow everyone to go deeper with whatever it is they are doing.
Thank you. You have saved us time to go through the same issues.
See you in a few years after a few updates :)
Saved you $100! 🤣
@@HugosDesk Not only me :D Thanks to you, they lost profits by tens of thousands of dollars :D
If the camera isn't even tracked, how is this usable?
I was in the closed beta for wonder studio and my experience was very similar to yours. I handed it clips that I thought would be easy for it to handle. Simple camera movement, clear character movement, very little overlap, etc. and it produced 'close', but ultimately unusable and barely salvageable results.
I kind of wish that they would have hyper focused on one aspect of VFX like camera tracking and worked hard to get that to a good place instead of trying to do everything in one go. Like if they made an AI that could get a solid track from most footage without much correction from the user that would be awesome. Same goes for mocap or cleanplating. The issue is that we just aren't there yet and I question whether these tools will ever be good enough and if we are just in another tech hype cycle.
Tools like this seem to work best when they focus very intently on doing one single thing well rather than trying to be a suite. If they put 100% of their effort into just the mocap/tracking part, or the auto-roto part, or the clean-plate generation part, it might spit out something truly useful as a piece of a finished project.
Alternatively, instead of trying to use ML to replace human jobs, we should go back to using it to generate surreal imagery that would be very difficult to produce any other way: Deep Dream, those Stable Diffusion videos where they scroll smoothly through the vector space to blend from image to image, and everything else that looks eldritch and otherworldly. Frankly, it's a more interesting use anyway.
No, let them do the suite. Perfection can take time; when people do ambitious things, they sometimes take a while and don't come out perfect out of the gate. The endgame is that everyone can make VFX. Fifteen years ago, professional high-quality photos from a phone were a joke, a scam, and phones should have just focused on communication. Look at them now: total suites.
Plus, AI is still new. It does not live up to the hype now, but one day it probably will.
This stuff has been around for years; I saw it at SIGGRAPH way back. The problem before was mainly compute. AI scaled with Nvidia GPU tech, but Nvidia is capping consumer cards' VRAM at 24 GB. Apple Silicon will be interesting because of its unified memory for the GPU.
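To put that VRAM cap in perspective, here is a rough back-of-the-envelope sketch of what model weights alone cost in memory (illustrative numbers only, not vendor specs, and real inference also needs activations and caches on top):

```python
# Rough estimate of GPU memory needed just to hold model weights.
# Illustrative only: real inference also needs activations, caches, etc.

def weights_gib(n_params_billion: float, bytes_per_param: int) -> float:
    """Memory in GiB for the raw weights of a model."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

# A hypothetical 7B-parameter model in fp16 (2 bytes/param):
print(round(weights_gib(7, 2), 1))   # ~13 GiB: fits under a 24 GB cap
# A hypothetical 70B-parameter model in fp16:
print(round(weights_gib(70, 2), 1))  # ~130 GiB: far beyond any consumer card
```

That gap is why a fixed consumer VRAM ceiling matters, and why unified-memory architectures are interesting for these workloads.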
I’ve been looking at AI roto and clean-plate generation for years: image segmentation AI. Right now it’s hit and miss. It’s faster to finish the job manually, but you can get a rough roto fast when it works. Once this does work, a lot of artists will need to move on to higher-level work.
you're awesome, helped me a lot in determining my workflow. many thanks
Thank you very much for the very interesting video. I just saw OpenAI's Sora and the quality of video it produces... I so much wish to (finally) switch careers to full-time VFX from teaching (I currently teach music and computing, including VFX, at a public primary school in London), but all this AI hype makes me wonder whether VFX will be a secure job in the future. Thank you very much for your input; it really is encouraging.
The biggest mistake people make is thinking 'AI will replace everything' - when in fact, it will be just like Houdini - a 'link' in the chain which can have its uses, where appropriate.
I watched the whole video very carefully, then played with the tool you mentioned. From what I could tell, the hosted tool you show uses an OpenPose detector plus a masking tool like COCO segmentation (which provides the clean plate by doing generative infill), some IPAdapter-type influence for clothing/character looks, a KSampler for a first pass, an upscaler (likely in pixel space), and a second KSampler for compositing things "cleanly" and giving the images a bit of atmosphere (likely with the denoise set super low).
Long story short, it's three worlds away from the professionalism you expect as a pro. However, the advances in the FLOSS space are very exciting. There's nothing in Wonder Studio I cannot already do today with ComfyUI and a solid understanding of diffusion.
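For what it's worth, the stage order I'm guessing at above can be sketched as a plain chain of passes. Every function here is a stub standing in for a real diffusion node; the names and ordering are my speculation about their pipeline, not their actual code:

```python
# Hypothetical reconstruction of the hosted pipeline as a chain of passes.
# Each stage is a stub that just records what the real node would produce.

def detect_pose(frame):     return {**frame, "pose": "openpose-skeleton"}
def clean_plate(frame):     return {**frame, "plate": "generative-infill"}
def apply_character(frame): return {**frame, "character": "ip-adapter-look"}
def first_sample(frame):    return {**frame, "render": "ksampler-pass-1"}
def upscale(frame):         return {**frame, "res": "upscaled"}
def final_sample(frame):    # low denoise: blend and polish, don't repaint
    return {**frame, "comp": "ksampler-pass-2-low-denoise"}

PIPELINE = [detect_pose, clean_plate, apply_character,
            first_sample, upscale, final_sample]

def process(frame):
    for stage in PIPELINE:
        frame = stage(frame)
    return frame

result = process({"src": "plate.png"})
print(sorted(result))  # shows which outputs each stage contributed
```

The point of laying it out this way is that each stage is an independent, replaceable link, which is exactly why a comparable chain can be rebuilt in ComfyUI.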
I would love to see a follow-up based on recent advancements, such as motion LoRA training on consumer hardware, or the big "tracking" (or lack thereof) issue you point out, which is being addressed in a paper that came out last week.
Thank you for your thoughtful video.
I’m eagerly anticipating the results from the PNG multi-passes. 😂
40:52 / previs. Interesting video
To celebrate springtime my Nuke compositing courses are 60% Off until the 1st of April. This is the biggest discount ever! Don’t miss this offer. Use the promo code SPRINGSALE during checkout: hugosdesk.myshopify.com/discount/SPRINGSALE
Two courses are available to buy:
Nuke Compositing course:
👉Access to 150 classes (already available, extra classes dropping soon)
👉More classes soon!
👉50+ hours of training
The Workshops course (pre-order):
👉Real Production Workshops (some classes already available. More classes dropping in 2024)
All courses include:
👉Student license path available (paid separately)
👉Showreel material (200GB+)
👉Private Discord (2000 students)
👉Fully endorsed by @TheFoundryTeam
I hope you can join our amazing community of 2000+ students.
The problem is that many believe this really works in a production setting. It may evolve and one day even work, though I have my doubts: each shot is different, with millions of possibilities, and covering all of that seems impossible for now.
Once these artificial intelligence techniques are consolidated, they end up being implemented in the software we normally use. Until then, I think we are still a little far from that happening.
I see this more as TikTok or Instagram filters than as something for actual production. It's impressive, but it's still far from working properly. Thank you for your honest review, the first I found.
THANK YOU so much for an honest assessment of this software before I dedicate time to it.
great video bro!
I agree, Hugo. I haven't used Wonder Dynamics Studio yet, but I use Runway quite a bit to do quick rotos for me. Sometimes it's amazing! Other times, when I think it's going to do a good job, it fails miserably. I kind of feel the roto tool is worse now than it was a year ago. Their inpaint tool is completely useless: it looks great when you're building keyframes, but is a hot, morphy mess when it renders!
Like you said, Hugo, I think AI is going to give us tools that will make VFX work a bit easier, the same way every new tech before it did. AI and VFX will fuse and create very cool new tools. I do think VFX artists need to learn how these things work, and maybe how they are trained and coded. I can see a point in the future where creating an artistic piece is equal to creating the AI model or tool itself. It's not happening tomorrow, but sooner than we think is my guess.
To my eye, the only thing that they've done well is reconstructing the lighting (probs the clean plate wrapped to HDRI but whatevs)
If their AI could identify camera and scene information, then split up the tasks like a VFX artist would and allow users to re-run the parts of the process that fail, maybe this would be useful.
Much needed video 👍
Yeah… it’s a special stream. Your hair, though… very special today… like it… one and only Mr Hugo. Thanks for the podcast!
Thank you too!
They should not stop being ambitious; it can take time. But I agree it's bad optics, and the CGI hate is ridiculous.
Great video!
A lot of hype with AI, but what is really usable in everyday life right now? That is the real question. When you raise it, a good half or even two thirds of AI tools turn out not to be usable today; they are just paid toys.
A lot of companies are just too pushy on their marketing.
That decent dose of truth that we all need... thanks for the info!
I think you need a really good camera if you want a good result. I have a Sony FX3 and I am going to test this.
Well, I used a very good camera. This footage was filmed on a Blackmagic URSA Mini Pro G2 with professional cine lenses, in RAW at 4.6K. Not sure the camera would make any difference; their software is just not ready yet! I also loaded footage filmed on a RED at 8K and got the same BS results.
The benchmark test would be to replicate the shots in their promo.
Ok, so this software only works on the shots they provide? Shouldn't it work on any shot? At least that was what they said on the website!
@@HugosDesk Of course it should work on any shot, but it would be interesting to see if it actually produced the results they showcased, or if there was something else going on in there, such as a full production workflow, which of course would mean their claims were completely fraudulent. Nice review btw, you just saved me hundreds of dollars.
@@hdtvnz Thanks. A lot will surely happen now since they were just bought by Autodesk, so I'm assuming it will find its way into Maya somehow!
But sir, isn't it scary even at this stage? Is it going to replace artists in the coming decade, or will it not?
"Potential" is the key word here. Potential is great for raking in investor money. But what all these machine learning tools have in common is basically NOISE. Noisy inputs creates noisy results, the real world is very noisy, and I'd bet these tools aren't going to create a clean result even if you gave it perfectly sharp CG renders to analyze. ML creates a hot noisy mess, they try to smooth it out by filtering and blending, then re-fit to the noise through various algorithms, this approach is good for getting in the ball park, no further. All these things are old news in the CG industry, the only thing that's newish here is that companies like this create an automated pipeline with no human intervention. The only way to make use of these results is to put a human eyeball in between every step of that pipeline and correct for the noise and refine the results, which is basically what VFX artists do for a living, outside of the creative decisions.
Most of these AI tech bros are grifters. Create a half-baked, broken AI app, get shills to scream about it online, raise the value of the broken app, then sell it to a clueless investor or lock it behind a paywall. If they were really trying to create an industry-standard tool, they would work on it tirelessly day and night while consulting VFX artists for their input. But no. They release the tools to grifting YouTubers who shill about them, never knowing how VFX is done, while yelling that this is the end for VFX artists. It's like other people being VFX artists was a big hindrance to their existence. I wish one of them would try to replace a Transformer in one of those chaotic Michael Bay shots, with all the camera shake and pyro going off.
You know what, this reminds me of those cheesy photo editors that add filters. People thought they would replace Photoshop and professional retouching too, but that never happened. Same here: sure, this might work for some low-budget or indie YouTube stuff, but it won't work on a real film or show.
AI mocap will not replace optical mocap or roto/track/match work. Just look at the feet in every AI mocap system. Optical mocap is the only way for high-level animation.
The problem with a lot of people in the VFX/AI cross-over of the venn diagram is they're much too technical for their own good.
While a piece of software like this may work for some shots in the next 5-10 years, maybe even be good enough to give a first pass on most shots, how will it handle strange sequences that DOPs and Directors will throw at it?
How will it handle fast camera motion? How will it handle a strange rack focus? How will it handle exaggerated, crazy lens distortion?
These are all non-technical, creative decisions made in film and TV all the time to produce interesting visual results, and VFX artists constantly have to work around them.
It's irresponsible to claim that VFX is dead and that AI tools such as this are already worthwhile replacements. We're humans, helping to bring the vision of other humans to life, to be enjoyed by other humans. No AI can ever replicate or replace that.
The VFX industry is already in a race-to-the-bottom, creating many of the issues we face in the industry to this day. The vendors are crabs in a bucket and will try to replace us where they can. We must fight against it.
Sir, I request you to review Pika Labs. There are claims that these tools will bring the VFX business in-house, so we will no longer need separate VFX studios.
Pika is going to be the exact same thing. There are underlying technical limitations which are being worked on by researchers, but we're a while away.
Hello. Nice video. I totally agree with you about AI at this stage, but for me it is quite obvious it can’t replace serious CG work, and also that if you want to use it you must make your footage “easier”: no complicated movements or body positions with overlapping characters that can confuse the AI. It’s made for simple things, as you say. But if you take some care when creating your shots, the results can be much better. Using it with green-screen footage, for example, lets you replace a character easily, and for a hobbyist that can be a great thing. I think you used footage that showed all the AI's limits, so you didn't use it “correctly” for its real purpose. Their advertising is not completely honest, but IMO it’s obvious it’s just advertising. I think its biggest issue is that it doesn't track the camera correctly in Blender; if it did, you could adjust the character's position in the animation (you can do that even now) and create much better VFX, but if the camera doesn't work correctly, it’s completely useless. In other words, you can’t use the same workflow you would use elsewhere; you should plan the whole creation process around the tool's possibilities. As you said, good for YouTube or social media, not for professional work.
BEWARE OF WONDER - I had a contract through the end of this year and now they deny me access unless I upgrade
44:22......yeah, it's not you.
Hahaha 🤣
"blender scene" lmao. Not a red flag at all.
Is it the spirit of Méliès? Maybe in ten years, in the perfect AI world, we'll try to reconstruct these glitches. Thank you for this great review!
What about Sora?
AI is not good for all VFX tasks. For now. This company's hype is down to their marketing.
Other companies are also working on every aspect of this, and I'm pretty sure they'll get there soon. Like every other disruptive technology in history: bad at first, but it improves over time, is very helpful along the way, and then changes an entire industry. 🤷♂️
I think the thing about this software is that it runs on something called “hype”. Sorry, but it’s purely and totally crap.
At least it's good for a funny video where the person wants to hide behind the screen?
❤❤❤
❤🤘
Stupiiiid, Wonder Studio is not free now.
It wasn’t free when I reviewed it either! I paid £100 for one month at the time!
@HugosDesk But it was free before, when I used it. I'll show you what I made later today. The worst thing is they've fooled people and made it fully paid now. Very bad news. Greedy, senseless team.
Yea, this is a gimmick.
I completely disagree with you. Advertise it as pre-vis?? I'm so happy I found this software. If it had been sold as pre-vis, I would've ignored it and so would thousands of other people. You don't know what type of shots to use; you purposely gave it the hardest shots for AI tracking and model animation. I tested it with APPROPRIATE shots and it was wonderful. You gave the software 5% props and shat on it the other 95%. Just don't use it, and don't spread bad advertising against CGI progress in regards to this software.
I would say good luck with convincing a client to use 'APPROPRIATE' shots. I've worked in the VFX industry for 23 years, and I've never had a client that would be so helpful. The software needs to handle any shot, as advertised on their website. Anyway, I completely respect your opinion. Thanks for watching.
@@HugosDesk I don't think it was ever an option for a VFX sup or a studio to tell a client that something cannot be done because their software only works with APPROPRIATE SHOTS. No AI in the world will be able to account for the million possible angles that can be shot and give pixel-perfect results on the first pass. AI cannot completely replace compositors!
Toy
Bro, the videos that you uploaded were not adequately lit...
What are you talking about? All my videos have plenty of light, and they are filmed on a cinema camera in Log with 15 stops of dynamic range. It's professional footage. It's not graded, if that's what you mean, but grading is done after VFX.
Look at your footage. The contrast between subject and background is low, and the videos are not sharp either. @@HugosDesk
@@juko8794 The videos are in 6K and they are very sharp. Of course, watching on YouTube (this was a live stream) won't do them justice. Not sure why you are defending this software; it should work on any footage regardless. It's not a good product, and they should not make claims like those in their promotional material. It's weak software, not ready for production or consumer work. It's a beta with too much hype.
I am so happy that someone with your authority has finally exposed this software for what it really is: a decent attempt at automation, but literally NOTHING more. They are not even close. This won't be useful in production for a decade to come. Nice for hobbyists.