I'm in 2023 and watching this video, I'm still learning these techniques 11 years ago, It's amazing
this is some serious compositing.
WOW! I'm a novice compositor to NukeX and seeing this demo on Deep Compositing just opened a whole new chapter in using this amazing creative tool. THANKS!!!
Hi @sleek1978. Part 1 of 2: Deep is different to a standard z-depth. A z-depth gives you one depth sample at a particular pixel: the first item the ray hits as it is cast into the scene.
A deep image gives you multiple samples (colour values) per pixel, going back in depth. So, for example, if you have two objects, one in front of the other at a particular pixel, you'll get two samples: one for the object in front at a particular depth, and one for the object behind at its own depth.
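The difference the reply describes can be sketched in a few lines of plain Python. This is a toy illustration, not Nuke code or the OpenEXR layout; the sample values are invented:

```python
# A flat z-depth stores one value per pixel: only the nearest surface survives.
flat_zdepth = 4.0

# A deep pixel stores a list of (depth, premultiplied RGBA) samples,
# going back in depth (hypothetical values, sorted near to far).
deep_pixel = [
    (4.0, (0.8, 0.1, 0.1, 0.5)),  # semi-transparent object in front
    (9.0, (0.1, 0.1, 0.9, 1.0)),  # opaque object behind it
]

# With the flat z-depth, the second surface is lost; with deep data both
# surfaces remain individually addressable by their depth.
depths = [z for z, _ in deep_pixel]
print(depths)  # [4.0, 9.0]
```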
Thanks for the walk-through. It was awesome to hear from someone in a feature film, someone from a professional production facility. I came here not knowing what deep compositing was. Now I feel like I have a pretty good understanding of what it is, and what the benefits are. I even understand somewhat the workflow.
Deep is basically the merging of the 2-D and 3-D world, letting us treat 2-D data in a 3-D way, including viewing a rudimentary 3-D preview of the scene without actually having to integrate any 3-D elements. Pretty awesome.
One thing I have heard about deep is that it takes a ton of hard drive space. But look at what becomes available: the compositor doesn't really have to know a ton about 3-D, yet they can work with many 3-D elements in the scene and add rotoscoping and holdouts in the correct 3-D position. Freaking awesome.
It would have been nice if you had turned on proxy mode so that the rendering was much faster for this walk-through. But thank you so much for sharing.
I have to imagine a ton of time went into rotoscoping there. I have to wonder whether just creating the entire scene in 3-D wouldn't have been just as quick.
I had a rough idea of what deep comp was but now it is pristine clear... very useful indeed... and go for that exploding bananas
This is a really good explanation of how to use deep in compositing. I would like to get a real go with deep on my next show. Thanks for sharing, mate!
I think the most impressive part is getting the damn fur to play well with all the advanced compositing passes.
That is really mind blowing. I'm just starting to learn nuke at school and I can't wait to get a little more experienced with it. Thank you very much for this video, I love to have this insight into such a big movie production.
that thing makes AE look like the iMovie of Compositing.
ae is for kids
@sleek1978 Part 2 of 2, continued: this means you maintain control over how this data is dealt with right up to the comp stage. Say, for example, you decide you don't want the object in front: you can simply mask it out (using a mask for a particular depth range), and you'll be able to see the object behind.
Of course, as with all things cutting edge, there is a trade-off: in this case, your source files are bigger, but the extra control may make it worthwhile, depending on how you work.
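The depth-range mask described in Part 2 can be sketched as a filter over deep samples. This is a conceptual toy, not Nuke's actual API; the function name and sample values are made up:

```python
def mask_depth_range(samples, z_near, z_far):
    """Drop deep samples whose depth falls inside [z_near, z_far],
    revealing whatever lies behind them (toy sketch, not Nuke's API)."""
    return [(z, rgba) for z, rgba in samples if not (z_near <= z <= z_far)]

# Two samples at one pixel: an object in front and one behind.
pixel = [(4.0, (0.8, 0.1, 0.1, 1.0)), (9.0, (0.1, 0.1, 0.9, 1.0))]

# Masking out the 3.0-5.0 depth range removes the front object,
# and the object behind becomes visible.
print(mask_depth_range(pixel, 3.0, 5.0))
```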
Very generous to share some of your vfx secrets... Great to see!
We asked Robin and got this response: "Yes we had match moves for the cars, but they were pretty simple. Edge detail was not as good as we needed it to be, so we had to combine the matchmove deep with roto shapes (turned to deep). Also, there was no need to model every type of car on that bridge, so animation would have used a lot of stand in cars for similar vehicles. This added to the lack of detail."
thx for the introduction. really awesome stuff
The deep nodes are going to change the way of rotoscoping and compositing so much. Very useful tutorial!!
So deep compositing is like an alpha channel for pixel depth in the image. Kinda cool !
2 part reply from Robin: "If animation on an ape changed we had to re-run reflection and shadow passes, but they were very fast compared to doing an actual ape. We could render each shadow/reflection for each ape separately and deep merge them together. We only rendered a handful of shadow passes for one animation change." continued below...
Great tutorial and very informative. The only bit of advice I'd like to offer is that you should figure out a compression method that retains the detail and legibility of the text within the video. Even in 720p, none of the onscreen text in the tutorial (names of nodes, menu text, etc.) is legible.
Thanks. It's rare to see in detail how pros work on a blockbuster, and how the director and production people plan things like stereoscopic work, dangerous stunts, explosions, subtle background changes for period films, and colour grading. Compositing is valuable, but it feels like today's movies are more and more 3D compositing and less and less real shooting.
Omg im studying VFX and i think this is soooo cool :) can't wait to get my hands on it
This is excellent! Why haven't I heard of this!
Surely it's better to have full 3D data in that scene than to roto the cars? I can see the benefits of proxy 3D data for compositing efficiency in future projects using this feature. Awesome stuff.
@TheFoundryChannel thank you very much ...very helpful answer
I don't know why but i watched the whole thing
very interesting! thanx for sharing!
Nice Info..Thanks man
"an explosion of bah-nah-nahs" lolz
Dammit. Time to learn Nukex
Deep compositing is actually also available in standard Nuke, not just NukeX.
What's the next step in future compositing?
Doopth passes?
I think we already know the answer is yes.
My mind is blown.
this was useful tutorial , i wish if you can tell us about the fog that in the distance
That makes a lot of sense. So I would imagine you used roto for the silhouette and the rendered deep data for depth culling within the car? E.g. the roof being further back than the hood.
I have a gut feeling that it was not so sophisticated, just plain roto with single depth for the cars.
Can we get the footage so that we can try and learn…. Plz! 🙏🏻
Uouuu what an amazing experience it would be if it were possible for us to gain access to this shot for us to try Our compositing
Rui Pedro Sousa That's my all time dream, to have access to big movie productions' assets and be able to play with them.
Hi there! Awesome video! One question, though: at 35:50 you show the complete matte of the cars, which we assume is the result of all the rotos after being placed at the correct depth on cards. But it looks like there is depth within each car. I am assuming there was some kind of multiplication of the rotos with a rough depth pass from the simple CG car geometry? And if that is the case, how do you work it out without the depth from the proxy geo, which is not always standard in smaller productions?
I don't even use Nuke, but wow, what a level of control you have with the depth. I still don't get how you got a z-pass of the cars when they're not digital footage.
All the footage is digital, meaning not analog, but you probably meant that the cars are not 3D models.
What about car movement and car bonnet/roof denting when the monkeys are jumping on them?
Whats with this video quality. Can't you do 1080?
Great explanation of how to use deep data properly. When I first heard of it, I thought it would only be useful for volumetric effects like smoke and clouds. But this video really shows how useful automatic matting is.
But in what context were the apes animated? Were there no low-res blocked-out cars in that scene? And if so, could you not ask your CG dept. to render a deep image of those? Would you still need to do the roto then? Could you not deep-crop out certain parts for CC then?
I think DeepCrop works with a bounding box, znear and zfar. Roto has more flexibility for the matte shape.
@Shaun Fontaine Blender does compositing; I just stated I like freeware, I'm not comparing it.
cool channel. subbing.
nice
Hollander: I'm sure you're not using a PC. But since I have one, I would like to know how much RAM I should have installed to do what you're doing in the video. Great presentation. Thanks.
12-16 GB should be enough for 1080p at a standard 25 FPS.
Thanks!
Does it work with bears too? :D Just kidding. I have a question, though: how do you integrate separate lighting elements into this workflow? Say I have rendered my reflections, diffuse, GI, etc. into different channels of the deep EXR. How can I shuffle them, process them, and add them back together?
Thank you :)
HAHA
:)
I think one of the ways is to render the deep and the regular EXR separately, then use a DeepRecolor to combine them.
I wonder why you'd mess with depth and so on if you can basically create low-poly versions of the cars and then have a node that masks out everything behind that low-poly mesh. Much less data needs to be stored. Furthermore, I think an algorithm could be built that automatically creates low-poly meshes by analysing the motion of pixels.
If you have 3D data available, sure, but that is not always the case. Also, some 2D artists might feel more comfortable with 2D processes rather than learning complete 3D workflows, I think.
What RAM and CPU are you using, please?
I have a Pentium III and it freezes.
how did u add the reflections?
My guess would be that they are rendered separately out of Maya and he just comps them on top.
It must be a reflection pass rendered from the 3D software; then in Nuke you apply the merge operation "plus". Keep in mind that lighting passes in colour (RGB) usually get "plus", while black-and-white passes, for example ambient occlusion, usually get "multiply". Hope that helps. Best of luck with your studies.
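The plus/multiply rule of thumb in that reply can be sketched numerically. This is a toy sketch, not a Nuke node graph; per-channel floats stand in for whole images, and the function name and values are invented:

```python
def comp_passes(diffuse, reflection, gi, ao):
    """Combine render passes per the usual rule of thumb:
    colour passes (diffuse, reflection, GI) are summed ("plus"),
    then a black-and-white occlusion pass scales the result ("multiply")."""
    summed = [d + r + g for d, r, g in zip(diffuse, reflection, gi)]
    return [c * ao for c in summed]

# One RGB pixel per pass; ao is a single grey value for that pixel.
rgb = comp_passes(diffuse=[0.4, 0.3, 0.2],
                  reflection=[0.1, 0.1, 0.1],
                  gi=[0.05, 0.05, 0.05],
                  ao=0.8)
print(rgb)  # roughly [0.44, 0.36, 0.28]
```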
@hulllewis0817
"How do you load mpeg,or avchd files into nuke"
The best way is to export your video to an EXR sequence. Nuke is not an encoder/decoder application. I wish they hadn't added the ability to import video at all, because now people (not saying you) complain that it doesn't read their video's codec or doesn't work well enough.
The topic is really good, but hard to follow.
It would help if you gave an introduction to the whole structure and set-up before going into the details.
+Jason Chen It's not hard to follow at all; clearly deep compositing for a feature film is not a topic for beginners.
oBLACKIECHANoo Could be
Nice!
One question: when you select a file in the DeepRead, which file is it? I mean, is it the same as the RGBA or a different one? Basically, is deep a separate render element, or is Nuke transforming an RGBA into deep? I hope you can understand me...
Thanks in advance!!
DeepRead takes a rendered deep EXR. It does not convert a traditional 2D EXR into a deep image; you can use DeepFromImage for that, although the result will not be entirely the same as a deep EXR rendered from the start.
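As a rough mental model of why the converted result differs (this is a conceptual toy, not Nuke's implementation, and the function name is made up): converting a flat image to deep can only give each pixel one sample, at that pixel's single z-depth, whereas a rendered deep EXR may carry several samples per pixel for surfaces hidden behind the front one.

```python
def deep_from_image(rgba, z):
    """Toy model of a flat-to-deep conversion: the pixel becomes a deep
    pixel with exactly one sample at depth z. A true rendered deep EXR
    could hold several samples here (hidden surfaces, transparency)."""
    return [(z, rgba)]

# The converted pixel knows nothing about what was behind the front surface.
print(deep_from_image((0.2, 0.4, 0.6, 1.0), 7.5))
```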
Does flame have something similar as this? I'd be interest to know :)
@sleek1978
"so the "Deep" node is actually Z-depth ...right?"
Z-depth-zilla is more like it. As far as I understand it you can think of it in these ways :
- It will store depth data for something that's completely behind something else.
- A regular z-depth image is a 2D image - Deep is a 3D z-depth image.
- Imagine you have a different z-depth image for every millimeter (or pixel) of distance from the camera.
If you drink every time he says deep, you will be most definitely smashed :)
so the "Deep" node is actually Z-depth ...right ? or is it something else ? im new to Nuke and i really appreciate the answer
It is a series of Z-depth blades (slices) making up the deep image. You have multiple samples per pixel as opposed to z-depth which has one sample per pixel, the closest object to the camera (smallest depth).
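To turn those per-pixel slices back into an ordinary flat image, the samples can be composited near to far with the standard premultiplied "over" operation. A minimal sketch, assuming premultiplied RGBA samples and an invented function name (Nuke does this internally, e.g. when converting deep data back to a 2D image):

```python
def flatten_deep(samples):
    """Collapse deep samples into one flat RGBA value by compositing
    near-to-far with premultiplied 'over' (toy sketch)."""
    out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
    for _, (r, g, b, a) in sorted(samples):  # sort by depth, nearest first
        keep = 1.0 - out_a  # how much the background still shows through
        out_rgb = [c + s * keep for c, s in zip(out_rgb, (r, g, b))]
        out_a += a * keep
    return out_rgb + [out_a]

# A half-transparent red sample in front of an opaque blue one.
flat = flatten_deep([(9.0, (0.0, 0.0, 1.0, 1.0)),
                     (4.0, (0.5, 0.0, 0.0, 0.5))])
print(flat)  # roughly [0.5, 0.0, 0.5, 1.0]
```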
360p ?((
All the special effects in the world and a multimillion dollar budget and the studio put a commercial vehicle license plate on a car, the black Nissan Maxima, 6Z76299 CA.
Good eye
What is the size of a frame with Deep data?
It can be easily up to 150MB depending on the complexity of the frame, I would think.
If you're going down that path, you'll be better off trying to bargain a seat for Autodesk Flame.
the foundry nuke :)
@Shaun Fontaine I am not a professional, so that's okay.
Hello sir, I want to know how to download Nuke passes files online. Please reply, sir.
Yeah, I'd rather stick with AE. It will still get the job done nicely.
Can anybody explain the process of making this kind of film? This is all I know:
3D modeling -> rigging -> texturing -> rendering -> export to Nuke -> adding environment -> compositing -> final?
Please add the missing parts or correct me if I am wrong.
+esp heroz final = rendering haha
+esp heroz final = editing -> colour grading -> tweaking everything again -> rendering -> DONE
I am not sure it requires 3D modelling; the point of deep, in my opinion, is that you do not need the 3D models. Otherwise, you could just use those directly.
Hi, NukeNoob here.
I have a scene where the 3D object I have imported needs to pass behind an object in the video (e.g passing behind a lamppost).
How would I go about doing this? would I have to mask/rotoscope like in this tutorial?
If you have true 3D data, you may not need to Roto (and Deep).
Still confused about what 'deep' is.
It uses a depth channel to calculate which pixel should be drawn in front of which other pixel.
Really? So tell me about the advantages this software has over AE.
AE is mostly for CG effects, and really isn't for 3D editing like this software.
If it doesn't have a tail, it's not a monkey, even if it has a monkey shape. If it doesn't have a tail it's not a monkey, and if it's not a monkey, it's an ape.
Nuke is far more flexible than AE. AE is very clumsy on non-trivial tasks: you lose track of which layer something is on, and you often need a lot of error-prone copy-paste in AE compared to Nuke.
Uhm, you realize Blender is not a fourth as advanced as Nuke, right?
Blender only has EXTREMELY basic compositing.
This goes deep! Thanks for the excellent explanation. Treat yourself to a banana :)
so i mean something like 3d masks
OMG...deep deep deep deep deep deep deep deep deep deep deep deep deep deep deep
Horrorgraphy Very hard to follow; sorry, it just annoys me.
What's the name of the program you use?
Nuke.
Can anybody tell me what that software is called?
Nuke.
The compositing capabilities of Cinema 4D are limited compared to Nuke, LengendaryKidd.
I stay with Blender
Oh geez, this was 12 years ago; surely Adobe has something similar by n... HAHAHAHAHA
@bradchodges cars aren't that hard. Human faces are.
wut ?