CAN I JUST TAKE A MOMENT TO APPRECIATE HOW GOOD THIS SCENE LOOKS
The thing most scenes lack is the micro details: the small litter on the streets, the flyers, all the little bits and pieces. You have that down to perfection; that image looks like it came right out of a camera.
much appreciated! :)
Yeah, I would buy it just to admire the work in the 3D viewport, I wouldn't even use it for a render lol
@@statixvfx1793 And how to add a vector or rgba in the sequence node?
THATS WHAT IM TALKING ABOUT
1000% the most underrated video I have seen.
This went way over my head, but I hope to really understand the concepts you lay out here. The flickering is driving me mad. Thank you for this video!!!
Very useful video. It's nice that you showed the results of each method!
Brilliant! Love your Fusion videos, happy to see you doing Blender stuff!
Can't thank you enough for this masterpiece. Your help is really appreciated.
Super Image Denoiser does all of this with just 2 clicks and makes use of the vector pass for temporal denoising, so I just wanna put that out there. But huge thanks for showing how it's actually done.
Hi, can you please tell me how to get this Super Image Denoiser? Is it free?
Fantastic, invaluable tutorial. Would be amazing if you could do more Blender compositing tutorials now that it is properly coming into its own. For instance, color matching for live-action/CG plates, and quality DOF using the depth pass and custom optics (I cannot find any good lens blurs that don't look like crappy CG anywhere). Just saying. It would be a godsend for VFX people.
Thank you for this in-depth tutorial! Beautiful scene. I'm still struggling with flicker even after using your technique. You can see it in your render as well at 7:52 if you look at the windows on the second floor of the building on the right. The only way that I've seen to get rid of this is using something like Neat Video to post-process with their tool. Do you have any new recommendations?
This looks more like z-fighting to me than flickering due to noise.
I can't seem to render an EXR sequence that includes RGBA... I'm using the OpenEXR Multilayer format with RGBA selected in the output properties. I don't see an option to turn on RGBA in the layer properties. Thanks for your help. Mike
This is a common problem with Blender tutorials on YouTube. The author is using a very high-resolution screen, edits nodes and types values in a small portion of the screen, and then uploads the video in 1080p. This makes the text of the actually important parts of the video blurry and difficult to read. We need either zooming in on the node-setup portion of the screen (not showing the whole screen) or a 4K upload resolution so the text doesn't look fuzzy.
@@redhootoboemonger4328 i'm not sure that the feedback is directed to you but good on you for learning I guess
Thanks for the feedback, I understand the issue. However, most of my videos focus on the actual concepts behind things rather than being a step-by-step how-to guide. The exact values are often not needed if you grok the concepts. I would always encourage playing around.
I definitely agree. My biggest issue with most tutorials is when certain crucial details are omitted, sped through, or somewhat skipped over, whether intentionally or not. In the case of things being difficult to read, especially for Blender, any tutorial should zoom into the nodes and use a resolution higher than 1080p if possible. What helped me here, though, was simply pausing at the multi-pass denoise part of the video and then stepping frame by frame in the YouTube desktop player to see how the setup actually works (the , key steps back a frame and the . key steps forward a frame).
One problem I am having is that whenever I use some form of multi-pass denoising in the compositing tab, my emissive materials no longer show properly, and I can't seem to find any fix for this online.
Thank you for the detailed explaination, this is very useful.👍👍👍
This is wildly informative. Such a cool technique. Thanks for sharing!
Btw, now one of the new features of blender 3.1 is temporal denoising via optix :)
At 1:49 the node setup isn’t working and it looks just like what you have. It isn’t showing anything at all in the viewport and I get “no render output node”
Wow!!!! This is MAGIC
Thanks for sharing, it really saves a lot of time on trial and error.
Thank you very much!
I really appreciate Blender content on your channel. :)
Why are you plugging the non-noisy image into the denoise node?
I feel like the median thing was completely left out of the explanation. How do you build the node sequence?
It's in the files on Gumroad, or you can pause and transcribe the node setup. But it's just a median between 3 frames using max and min functions, like you would if you were to calculate it manually. I do try to focus more on the overall idea and on showcasing possibilities rather than exact step-by-step instructions.
Thanks for this video, it's been very helpful. I'm wondering if there is any documentation that could help me better understand the last section on median denoising.
Broooothheerrr thankk uu soo much 😩😩😩😩 you saved my life ❤
Works perfectly. Beats upping the samples to 500 to counter the effects of the built in denoiser.
You are amazing! I have to try this with moire in live action footage as well :D
And how do you add a vector or RGBA input to the sequence node?
Export the sequence as a multilayer EXR with the vector pass enabled. Also have RGBA selected in the output.
This is brilliant. I wished you had denoised the resulting temporal-denoise render just to see how it differs from straight up denoising each frame.
Then I realized the noise actually looks like real noise from camera footage, which makes the scene more believable.
thumbs up
👍
Thanks, the real power comes from balancing both temporal and spatial denoising and then re-graining where needed. This is especially true for any work where you have to integrate CG into plates.
Extremely useful tips. Thank you.
Thanks!
Thank you for this video, how does the median denoising work?
You gotta buy his product, or try to figure it out yourself, I'm afraid
I've been using this workflow on various renders for many years, I just do it in Nuke or Fusion with Neat Video's denoiser. But thanks for putting that up. I just like to do local corrections where things flicker too much, etc. The Blender compositor could also be a solution with some multi-frame denoiser.
The Neat Video plugin is a mess, sometimes it doesn't even work.
@@grigorescustelian6012 Absolutely not. It's been working perfectly here for more than 10 years on every installation we have. It's one of the best plugins we've ever purchased. Can't say it has ever failed. Maybe you should contact their support, they are very nice and respond quickly.
I can follow every part of the video but one. How are you getting an RGBa socket on the image node? I've tried everything I can think of in ver 3.3. I'm rendering to multilayer exr, full float, dwab lossless, with rgba checked in the output. I have my motion blur off and vector checked in the passes. I've tried so many variations and I never get an RGBa socket, I'm always left with the combined socket, the alpha socket, and the vector socket.
Hi, I have a rather simple question: for the temporal denoising you use the RGBA data as well as the vector data, but in my data export tab there is no RGBA option to check. The one at the top is called "Combined", and I suppose it's the same? But if I follow your steps I can't replicate the displace effect; I don't know if that's due to the Combined/RGBA difference or something else...
Hi @statixvfx1793, thanks a lot for this very valuable process! I saw that RenderMan uses 7 frames for temporal denoising. Is there a way to temporally denoise with more images?
Thanks for the video. How do you get to the stage of seeing the EXR file in the compositing window, when you open it and start the segment on temporal denoising?
You need to render out the frames to an image sequence (exr) first, then bring it back into the compositor for the temporal stuff to work. If you just want the spatial denoising you can render and composite directly.
What kind of input provides RGBA and vector for the EXR file?
Thanks for the tutorial, this is really helpful!
For the temporal de-noising I don't have the vector pin on my image sequence nodes (there is just an alpha and a depth) so my median image comes out with the 3 images not aligned properly. How would I go about adding the vector pin to fix this?
Same problem :(
Me too
Xandizandi commented:
"Export the sequence as a multilayer EXR with the vector pass enabled. Also have RGBA selected in the output."
Also turn off motion blur, or you won't get any vector information.
You need to add an input to your "File Output" node within the compositor and name it "Vector". Blender will know to pass the vector channel to that pin so that it's accessible later in the compositor.
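In script form, the reply above corresponds roughly to the following. This is a hedged sketch, not a tested setup: it assumes the default "Render Layers" node name and that the Vector pass is enabled on the view layer.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# File Output node writing a multilayer EXR
out = tree.nodes.new('CompositorNodeOutputFile')
out.format.file_format = 'OPEN_EXR_MULTILAYER'

# Add a named input slot; the slot name becomes the channel name in the EXR
out.file_slots.new('Vector')

# Wire the Vector pass into it (assumes the default "Render Layers" node exists)
rl = tree.nodes['Render Layers']
tree.links.new(rl.outputs['Vector'], out.inputs['Vector'])
```

This only runs inside Blender, so treat it as a configuration sketch rather than a drop-in script.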
@xandizandi2271 I'm on Blender 3.5 and missing the vector pass in the sequence node, even though I've rendered the EXR with the vector pass enabled. I can see the individual vector pass in the Blender compositor viewer node as well as in After Effects, so I know it's being rendered. Any idea why this output is missing? EDIT (FIXED): I needed to add Vector to the File Output node and re-render, and Blender knew to pass the vector map through to that output.
Hey dude, do you have a step-by-step video for this???
Because when I add the EXR sequence images, the node doesn't show the depth value (the viewer node doesn't show a Z value either), so I started off wrong 😅 (and yes, I checked the Z box on the view layer).
Very nice bro, but can you explain what you're doing in the node group to take the median of 3 frames? And do we denoise the image before taking the average of the medians of the images? Can you answer me please?!
I tend to do spatial denoise first (the multipass way), and then in comp (mostly Nuke & Fusion) is where I do temporal denoising. Either by average or median.
I don't think the AI denoiser would work with noise that's combined and warped by the temporal approach. Better to do spatial denoising first, and then temporal on the passes where it's flickering the most.
The median group is just the median function, using maxes and mins to find the median between 3 values.
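As a sketch of what that min/max trick computes, here is the same per-pixel median of three frames in NumPy. This is a hedged illustration, not the actual node group; the `median3` name and the tiny example arrays are my own assumptions.

```python
import numpy as np

def median3(a, b, c):
    """Per-pixel median of three frames, built only from min/max,
    the same way Blender math nodes can express it."""
    lo = np.minimum(a, b)   # smaller of the first two frames
    hi = np.maximum(a, b)   # larger of the first two frames
    # clamp the third frame between lo and hi: the clamped value is the median
    return np.maximum(lo, np.minimum(hi, c))

# three "frames" (here just one-pixel arrays for illustration)
f1, f2, f3 = np.array([0.9]), np.array([0.1]), np.array([0.5])
print(median3(f1, f2, f3))  # the middle value, 0.5
```

The same expression works on full HxWx4 RGBA arrays, which is why three Min/Max math nodes are all the node group needs.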
@@statixvfx1793 Thanks bro, can you share a screenshot of the node group please?
This video is incredibly helpful and was exactly what I was looking for!!! Thanks so much for this and I just subscribed. I'm about to watch your Fusion/Resolve denoise video as well as those would be the two ways that I would go about denoising an animation (since a single frame is easy). One question though. Did you just include the median/average of 3 consecutive frames in the composite node network, and then just keyframe the "frame" parameter to change the frame number of the exr nodes to get the final export?
Oh wait! nevermind I just noticed you put the denoising setup on your gumroad, so i'm going to go buy that and support since your video was so helpful.
How can I render the video file from the temporal denoising method? I'm new to blender. I get into the compositing screen, can see the whole "video" from there but I don't know how to turn it into an actual video file without having to take hours to re-render everything again, which sounds kind of pointless to me, since the files are already rendered in there all denoised.
Anybody know why I can't see the vector output in the compositor? I can't find anything on the forums and am so confused... I only get Combined and Alpha. Vector is enabled, plus experimental mode and developer extras, but I wanted to compare this to the built-in temporal denoise... lol. I know this isn't a forum, but if any of y'all know how to help, I'd appreciate it.
Which method would you use? Temporal or multi-pass denoise? Or both? Or is it dependent on the scene?
Ideally both, but its highly dependent on the shot. Like mentioned in the video, hair/fur and transparencies can cause issues and would have to be solved slightly differently.
wait, so for the temporal method you have to do that manually for every 3 frames?
You do it once for the whole sequence. But you need access to the frames before and after. If Blender had a timeoffset node it would be easier. But once you've set it up like this it works for any sequence and number of frames.
I see it works quite well on a still sequence of multilayer EXRs, but where would you add a final motion-blur vector pass if, say, you did NOT have motion blur enabled on the initial renders? Is it added before the adds and multiplies, or do you take the mid-frame vector pass and add that at the end?
Now that I have finished creating the node, do I have to convert it to an image again, or can I convert it directly to a video? Please reply.
Do you want to explain the median denoise? What's in the node group?
hey, how many samples did you use ?
The node group can be figured out if you are really clever!!! You just have to know where to look. That hint is really misleading, but that's all I'm giving you!
How do you render an EXR with RGBA and vector?
Does it work on Blender 3.5? I need it for this version.
Is the Denoising on your Gumroad also compatible with Blender 3.3 as well? thx
What are your PC specs?
Would this render method lack the fine details because it's missing the Normal and Albedo passes?
No, not if you first use the regular Intel denoiser WITH normals and albedo, then do temporal (a median retains even more sharpness than an average).
@@statixvfx1793 ah okay cool. Did you do that here or no?
Thanks for your video... I haven't used Blender since 2.93, and now with 3.3 I've realized that in the compositor they have simplified the Denoise node setup (I mean the Render Layers node). Now it's just a matter of connecting the normal and the albedo (after activating the Denoising Data pass first!!). The noisy image and many of the other passes that used to come out no longer exist.
Is that right, or am I missing something? Thank you!
The second one is not working: black image.
This is so hard for me to understand. So you get rid of the noise even without a denoiser, just with the median of 3 noisy frames?
Yes, a median (or an average) of 3 frames is essentially one image with 3x the number of samples, thus reducing the amount of noise. Hope that makes sense :)
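A quick NumPy sketch of why this works (the frame size and noise level are made-up numbers): averaging N independent noisy renders of the same frame divides the noise standard deviation by roughly sqrt(N), which is the same effect as rendering with more samples.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)          # a flat "ground truth" frame
# three renders of the same frame, each with independent per-pixel noise
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(3)]

avg = sum(frames) / 3.0                 # temporal average of the 3 frames

noise_single = np.std(frames[0] - clean)  # roughly 0.1
noise_avg = np.std(avg - clean)           # roughly 0.1 / sqrt(3)
print(noise_single, noise_avg)
```

The median behaves similarly on static pixels but rejects outliers (fireflies) better than the average, at the cost of not being a linear operation.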
Great tutorial! One question: after doing the temporal denoising, how do I save the composited image sequence? I could save the composited frames one by one but there must be a way to save the entire sequence automatically, right?
Just open the EXR sequences in another project, remove all passes except the compositor, then set the render frames to match your sequence.
If you have 240 frames in the exr, set the render to start at frame 2 and end at 239.
It should render out into whatever format you like. I rendered my little test out in avi and it works perfectly.
This is great. But rendering vector pass requires you to turn off motion blur. Is there way to temporal denoise with motion blur?
You can always render a render pass/scene/view layer with motion blur disabled and override all the scene materials with a really simple one (basically skipping the lighting step) as a separate render. We've used this technique on features, where we always render util passes separately anyway to get things like P-world, motion vectors and various aux passes. That way you can set the sample count super low, as you're only interested in the first few samples anyway since you're not calculating any lighting.
@@statixvfx1793 Thank you for replying. But i tried it and it didn't work. The image with motion blur is very different from the one without. The pixels are all in different places. Applying vector pass rendered without motion blur onto image with motion blur results in many artifacts, especially on objects rotating at high speed.
@@aaronguo5128 Unfortunately, when it comes to extreme motion blur and complex transforms, this way of doing temporal denoising will not work.
There are other tricks you can do, like creating a matte based on the motion vector (a speed matte) to separate out the temporal denoising for the less extreme parts of the image, and using oflow or displacing the vectors with themselves to "smear" the extreme motion blur out. You can also run a median filter on the blurriest bits with the same matte, etc.
At the end of the day, there's a lot of small things you can do, but it mostly comes down to a shot-by-shot basis at this point.
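A speed matte like the one described is just the per-pixel motion-vector magnitude mapped into 0-1. Here is a hedged NumPy sketch; the `speed_matte` name and the threshold/softness values are arbitrary assumptions, not anything from the video.

```python
import numpy as np

def speed_matte(vectors, threshold=4.0, softness=2.0):
    """1.0 where motion is slow (safe to denoise temporally),
    falling off to 0.0 where motion exceeds the threshold."""
    speed = np.linalg.norm(vectors, axis=-1)  # per-pixel vector length
    return np.clip(1.0 - (speed - threshold) / softness, 0.0, 1.0)

# a 1x2 "image": one static pixel, one fast-moving pixel
vecs = np.array([[[0.0, 0.0], [10.0, 10.0]]])
print(speed_matte(vecs))  # static pixel -> 1.0, fast pixel -> 0.0
```

In the compositor, the equivalent would be a Vector Math length node feeding a Map Range, and the result used to mix between the temporally denoised and the original frames.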
@@statixvfx1793 I ended up using Neat Video XD. It loses some sharpness and detail but since I'm not aiming for the highest production quality it's acceptable. Thanks a lot for your information.
Really great video! How did you learn all of that? Do you work professionally with Blender?
Hi Damien, yes I do. Film VFX. I went from Houdini to Blender for general VFX stuff. It's great.
@@statixvfx1793 Oh nice! Film VFX is the industry I want to work in! I'm currently working in the archviz industry. I tried to redo the compositing setup from when you do the multi-pass denoising, but I wasn't able to reproduce it.
Yesterday was the first time I did an animation, and I got some horrible noise in the darker areas. I used the simple pass denoising.
My Blender crashes after I attempt to cancel the rendering :(
thanks for the tutorial sir
Are you using EXR because the quality is lossless?
Yes, always render EXRs.
@@statixvfx1793 What EXR format do you use? And what compression method? I find when I render in EXR the file sizes start getting huge. If I use it for an animation, I'd quickly fill up a hard drive.
@@Layston Mostly DWAB, which is lossy compression. I use it for almost every pass, except when you need high bit depth or Cryptomatte. Crypto doesn't work with it; use ZIP 16-bit or something else.
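For reference, the equivalent output settings can be set from Python inside Blender roughly like this. This is a hedged configuration sketch; the exact codec enum names vary between Blender versions, so check the UI dropdowns against it.

```python
import bpy

s = bpy.context.scene.render.image_settings
s.file_format = 'OPEN_EXR_MULTILAYER'  # one EXR file containing all passes
s.color_mode = 'RGBA'                  # include the alpha channel
s.color_depth = '16'                   # half float, fine for beauty passes
s.exr_codec = 'DWAA'                   # lossy DWA; switch to 'ZIP' for cryptomatte/data passes
```

Half-float DWA compression keeps animation sequences manageable on disk while staying visually lossless for beauty passes.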
this is super awesome!
I have a question though... how does this influence render times?
I (like most, I assume) have just been using the built-in denoising... and OptiX is much faster than Open Image, but Open Image gives a much cleaner result...
Obviously, the simpler the composite the faster the result, but is this composite denoise faster than the built-in denoising, or does it give better results? (Hopefully both, but I highly doubt it lol)
super great video, thank you for sharing!
It definitely adds to render/compositing times, but in Blender 3.0 and 3.1 they've upgraded the OIDN library, so it should be significantly faster.
That said, it's still faster than rendering with more samples, so I would consider the added denoising time to be negligible :)
@@statixvfx1793 awesome thanks!
In this little journey I started down, I found an addon called Super Image Denoiser, or SID. It seems like it’s a big node group that kind of has these features built into, including interpolated de noising, which I thought was neat.
Have you heard of this before?
@@FinalMotion No, I'm not familiar with that tool. But this technique has been used in film VFX for at least 12-13 years. It's a fairly well-known workflow.
It's weird when people "productize" workflows like that.
@@statixvfx1793 SID is free...
Wow, so useful
3:50 ….. legit IS a camera. So absurd
cool
I NEED A REFUND CUZ YO SHI DONT WORK GANG WTF
@@ClipTalks5 What version of Blender are you using? The setup on Gumroad was built for Blender 2.93 and works up to 4.1. This is stated on the Gumroad page too. Blender changed how the compositor works in 4.2, and I haven't updated the example on Gumroad to support it yet.
But the technique works, and the sample file works fine in previous Blender versions.