Your work, exploration, and ideas are always brilliant.
🥰
Most useful overview so far. Thx Vincent.
As much as I love these AI tools for their ease of use, 3D technology itself is here to stay; it is even used by AI in autonomous cars, for example.
Also, AI is just a cool term for machine learning, which is a prediction algorithm. And the predictions are gambling every time: even if you fix the seed, a slight prompt change will produce a completely different variation.
On the other hand, 3D is an exact technique for simulating how real objects and light work. No hallucinations, total control. So I think it just can't be replaced, at least in this area, because they are totally different paradigms of content creation.
Each technology has its use case, and 3D is here to stay and evolve alongside the machine learning "pixel morphing algorithms". I think the other question is how much automation there will be in 5 or even 10 years.
Beautifully cottony, I love your work!
I think you nailed it at the end. It feels like gambling. If I have to produce something for a client, I'd rather not rely on a gambling machine... I'd still love to know how it went with the liquid stuff. How many attempts did you need there? Could you make variations of the ones you liked? Or were they also just one-time happy accidents?
@@frankwerner923 For now there is no way to do variations, so it was just trying out many different prompt directions.
What I've found while experimenting with all these AI tools is that the main issue is repeatability. As you said, usually it's just happy accidents, happy accidents that can't be replicated exactly. So in a professional workflow I don't see how client feedback can be applied using these tools.
Actually, if you have complete control over the model, as with the "open source" ones (Stable Diffusion, Flux, etc.) that you can install on your own computer, you can use a fixed seed number so it generates exactly the same image every time you run it.
But even then, a slight change in the prompt will create a totally different variation.
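To illustrate the fixed-seed point above: with a locally installed model you can pin the random seed and get the identical image back for the same prompt and settings, while even a small prompt change shifts the result entirely. A minimal sketch, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint (the model name and prompts are just placeholders, not a recommendation):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a locally runnable checkpoint (example model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(prompt: str, seed: int = 42):
    # A fresh generator with the same seed makes the run reproducible:
    # same prompt + same seed + same settings -> the same image.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, generator=generator, num_inference_steps=30).images[0]

# Identical calls reproduce the exact same image.
a = generate("flowing liquid chrome, studio lighting")
b = generate("flowing liquid chrome, studio lighting")

# But a small prompt tweak with the same seed can land on a completely
# different composition, which is the repeatability problem described above.
c = generate("flowing liquid chrome, soft daylight")
```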
2:42 They're probably similar to what you would have simulated because Runway's probably been trained on your own videos (amongst others).
Great video Vincent! As you said, it is still a toy at this point, but considering the speed of development and progress, I think within 5 years the game will be over. 3D will always be needed for particular tasks, though, I believe.
Try Kling Pro out; the new 1.5 is dope, and the start/end-frame capability of the prior 1.0 version is pretty good. The free tier is very slow, very often. Just straight-up image-to-video can produce great results.
Can it make 3D models, or a rotating 3D-model video from a 2D image, so you can try to make a high-quality 3D asset from it for Unreal?
no
Interesting
Will it replace the Arnold renderer or any other rendering software?
@@sarvar8795 Perhaps, but I have no prediction on that 😅
This is great and all, but the whole Runway part of this work reduces you to a text writer, or a surface-level art director at best. This isn't fun to do; it's just getting to the end result as fast and cheaply as possible, at the expense of real artistry, while compromising on the quality that you're very clearly capable of. Relying on happy accidents to get anywhere is going backwards.
Totally agree with that, but overall I think it is good to keep your eyes open and see what is currently possible.
Hey Vincent, just wanted to check that you know Runway AI is trained on copyrighted content that they don't have any rights to use, so it's basically stealing from artists who spend their whole lives developing their skills and trying to make a living from them...
Yeah… Adobe already jumped that obstacle, dude. I'm not saying that we should just look on and do nothing if copyright is an issue, but if you think it will be a real problem for the development of the technology, you are “hallucinating”.
Of course I know; my own work is in those training sets:
haveibeentrained.com/
@@vincentschwenk The oneness with and acceptance of the flow of progress is refreshing. Learn or lose.
Prompting is key. Be way more specific. Just imagine a client telling you “Just do a rendering of flowing hair”. There is too much room for interpretation. And AI “interprets” by taking the statistical median.
Give it two years and most 3D artists will be using AI to build 3D objects and animate everything. It'll still need a 3D artist to operate it, just in a totally different way of working. This doesn't just go for 3D; it goes for a lot of digital workflows.
It will replace the renderer. The people who should be worried are folks at Redshift and Arnold.
It's still going to take artists to use it to its fullest potential, because unique work will always take visual vocabulary and creative judgment. I'm a 3D artist who's been using image-gen tools for the past two years.
There’s nothing fascinating about this. The results reek of a typical, plastic-like AI look. I’m not impressed. You can’t even compare this to your work, which is highly detailed.
Even someone with no CG experience could tell that it’s human-made.