At around the 3:10 mark of the video. I am asking just in case. Do you put your viewport in rendered shading preview while rendering the scene? I learned this the hard way 5 years ago: I remember running out of memory on my 1070 when rendering a heavy scene with Cycles. I noticed I had set the viewport samples and render samples to the same value, 4096. I was curious why it was able to use 6GB of VRAM in viewport shading but not during the render. Turns out that while I was rendering the scene, both the viewport and the render window were using VRAM at the same time. I decided to switch my viewport to the image editor layout, and it started rendering each frame in under 15 minutes instead of throwing an "out of memory" error. Nowadays, even on a 3090, I always turn on [temporary editor > image editor] in the settings and hit ctrl+space in one of the windows for the render (so the other windows are inactive while I am rendering). In my case it has improved my renders 3x to 4x ever since I became aware that viewport shading also uses VRAM even while rendering. Frame 1: 18s vs 1min09s (with viewport shading in the background) for the same scene and settings from 5 years ago. Not sure if your case is the same as mine. Hope this info could help someone.
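If anyone wants to script that instead of clicking around before every render, here's a rough sketch for Blender's Python console (property names are from recent Blender versions and may differ in older ones):

import bpy

# Switch every 3D viewport out of Rendered shading before starting the render,
# so the viewport preview isn't competing with the render for VRAM.
for window in bpy.context.window_manager.windows:
    for area in window.screen.areas:
        if area.type == 'VIEW_3D':
            area.spaces.active.shading.type = 'SOLID'

# Then kick off the animation render as usual.
bpy.ops.render.render(animation=True)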
Great video, waiting to see this exact comparison for Unreal Engine 5.1 with a scene with really heavy 8K textures that push the VRAM. Also BE AWARE that the 4000 series cards don't support NVLink, in case someone is thinking of getting two (well, you physically won't be able to fit two anyway). This is why two A6000s or two 3090s might be a better bet for those looking to stack another card later down the line for more performance.
Thank you for this, Sir. I'd like to see the Part 2. Eventually, if you can get them, I'm interested in how the 4080 cards will do with pro apps. I animate, so fast rendering isn't that important to me, but the Omniverse benefits with Unreal integration are very tempting. I already know I'll have to upgrade my 3060ti soon, and it's extremely refreshing to hear an artist's view on what to choose. I like daydreaming of AMD competing here, but it really doesn't seem realistic. What do you think?
@@CaptainScorpio24 So far, I enjoy it. If you're only animating with well designed rigs in Maya's viewport 2.0, it's good enough, but anything beyond that might want more power. Currently, I'm attending Animation Mentor, which uses very efficient rigs. The 3060ti paired with a Ryzen 5600x allows me to view my animation in Maya's viewport 2.0 with lights and textures. Of course, a playblast is more accurate, but the viewport plays well enough as you tackle notes. Once I graduate AM, I'll want to explore more projects similar to what Sir is tackling on his channel. As he pointed out in the video, a 3070 couldn't handle rendering some files. That said, you only need GPU-only rendering for faster render times. If you're limited on budget, a 3060ti will do for a while, and it games pretty well with ray tracing at 1080p. GPU prices are going to go down soon, as the 4080 cards release next month and AMD introduces its new generation. So, if you can wait, I recommend holding out for a 3070 or 3080, or a less expensive 3060ti. Also, if I didn't care about ray tracing and Nvidia's Omniverse, I would have tried AMD. For the same price, they have more power and memory. That said, I love ray tracing, so I don't regret the purchase.
Late to the party, but AMD has been making a lot of updates and it's been getting better; they're also finally adding hardware ray tracing with 3.5 in Blender.
@@rVox-Dei Yes, I'm very interested in RDNA 3's improvements. They're getting Ray Tracing sorted, but people say CUDA still has a vast pro advantage. So, the general thought is that, for a 24gb card, a 3090 will still be a better choice than a 7900xtx, but we have yet to see if that's the case. I have a close eye on the 4080 (costs about the same as 3090ti), but I'm concerned about the 16gb vs 24 for rendering large scenes.
Great video ... well done! Honestly it is very hard to find a good video on real card performance ... I mean a 3DMark or Cinebench score is nice, but it doesn't say anything, heck even the ones where they show gameplay are ridiculous. What you did is amazing as it provides some real world context as to what to expect when you fire up a render. And you're right about renderfarms. I am not doing high end stuff (mostly 3D work for marketing clients who don't have any content), but even within that context a renderfarm is too expensive. I mean, in one of my last projects I rendered a 250 frame clip of a doorknob with a slowly panning camera ... even with optimizing and compromising it cost €1 per frame ... times 5 variations ... not really doable. For the time being I switched to Redshift as it renders much faster with no fireflies, although since it's a biased engine the GI and such looks a bit flaky ... but if and when some money comes in I will definitely build a desktop system with this beast of a card in it. What kind of CPU are you running? Intel based (Xeon or i9) or a Ryzen? Again, great video!!!
What do you think of the results so far?? Would you get use out of these speeds? And anything you'd like to see me cover next time? :)
Well ... I want to do more complex sculpting in Blender, and learn UE5 faster, since my GPU is too slow for me to have any patience learning it :/. So... I guess it's a yes. Also, I was looking into Substance Painter, but I have many materials in Blender, so baking textures quicker would be good!
If you complete the test again I'd be interested in seeing how it handles particles for special effects.
I would love to see a review on how far the HIP backend has come. Can't wait for Radeon 7000, because Nvidia basically has a monopoly in Blender rn.
Of course this is a giant leap in perf gen-to-gen, and it is worth it for someone who needs to double their speeds right away. Thank you so much Wade for the video, by the way. But the value proposition that render farms provide makes one rethink before making the purchase, especially if you have to upgrade the rest of your gear like the PSU and motherboard. That's the main point for me, especially when a 3090 still provides one of the fastest viewport and rendering speeds.
On a side note, and I think this is so important, 4090 owners can't double their VRAM with a second GPU the way 3090/ti owners can, which is worth considering if you're already pushing the limits of 24GB of video RAM. And this alone makes me wonder whether we can expect something next year that will have NVLink and slightly faster speeds. It's really exciting that we can now take super fast rendering for granted and how cheap it has become. Thanks again Wade, it was an exciting and complete video.
Would be interesting to see the 4090 vs multiple 3090s combined in the same machine, and some more info on power consumption comparisons when working on big projects.
Probably the most useful of the huge influx of 4090 videos happening right now...cuz all we ever get are stats, and catering to the gamers. Thanks for putting one out for the creatives!
I got bored of all the reviews talking only about games, and that's just a waste of GPU, games don't need that much power, the real use for a GPU is in things like this
@@Oscar4u69 if they are to play at 4k 100fps + it's not really a waste, especially since atm it's the only gpu that can do so natively with modern titles
@@ishiddddd4783 but there are not many great games to play that suit this performance right now. I would rather play gmod than any of the stuff made for 11-12 year olds that's currently being marketed as "4K max settings".
@@Anti-FreedomD.P.R.ofSouthKorea k, but that's you and gmod runs in 4k with almost a decade old hardware
How is it useful when no Radeon GPUs are included?
We work on Disney and Marvel films at Pinewood Studios. After doing similar tests, we changed our RTX 3090 cards over to 4090's. We stopped buying Quadro cards years ago.
Great video - subscribed - thank you!
Why did you stop buying Quadros?
@@nyahbinghiman5984 muchhhhhhhh dollars 😂
tell your boss to make actual good movies
good job g, i dont usually watch films or shi but i appreciate yall being able to make all that realistic even like 10 20 years ago
Really cool that you got access to these cards like Digital Foundry / Gamers Nexus / et cetera. Those guys aren't focusing on the artist tools like you are, so it's really nice to see that aspect explored here! Definitely interested in Unreal / Houdini / Davinci/Fusion.
nvidia is desperate to sell this scam card, they'll send cards to anyone for a few bucks so they can lie to people
@@n00buo 4080 12gb is scam card, 4090 is beast
@@RazielXT 3080 is the best card so far, 4000 series can't compete with Ampere they're just heaters for tards with money
@@n00buo 4090 seems to be a pretty decent deal
@@user9267 HAHHAHA nvidia got bots for youtube comments, they know.
Yes please for the Part 2. I would especially like to see how it compares in Blender when you have a lot of hair and subsurface scattering! And maybe also with a giant pile of grass and other vegetation.
All hands up for more Blender testing!
Note: if you're using Cycles (as opposed to Eevee) it's *all* raytraced. It's a path tracing renderer which means that every sample for every pixel has been raytraced, and has therefore gone through the RTX hardware pipeline - the only difference with reflective surfaces is how coherent the rays are. To test non-raytracing performance you'd need to use Eevee.
Note: scene layers are awesome, you can render Cycles and Eevee together
Ray tracing and path tracing are two different things.
@@Pixel_FX They are two related things - one is a subset of the other. The important bit is that if you have an RTX GPU and Cycles is configured to make use of it via OptiX for hardware acceleration, then the Cycles path samples are being calculated on the GPU's RT cores. And since everything Cycles does is path samples, then _everything_ is rendered with RT cores.
This is unlike most video games, which must run in real time and therefore only use RT when they have to, like for reflections and such, and never use RT to do path tracing because it's still too demanding and slow for real time.
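Side note for anyone who wants to make sure Cycles is actually on the OptiX/RT-core path for their own tests, something like this in Blender's Python console should do it (a sketch based on the Blender 3.x API; the exact property names may differ in other versions):

import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # use the RT cores via OptiX instead of plain CUDA
prefs.get_devices()                   # refresh the device list
for dev in prefs.devices:
    dev.use = (dev.type == "OPTIX")   # enable the OptiX-capable GPU(s)

bpy.context.scene.cycles.device = "GPU"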
I misspoke - I was talking about the shaders not being reflection / refraction-heavy in that scene. The scene didn't require much complex calculation compared to something like the Maya render later in the video
@@SirWade Diffuse lighting is *more* complex to calculate than specular reflections. As @KillahMate is trying to explain, this is how path tracing works - a diffuse surface is really just a crap-ton of reflections, all from different directions, and the colour is averaged over hundreds of samples until it becomes smooth. Whereas a specular shader reflects all the rays in more or less the same angle so it turns into a clean result much more quickly. You are getting confused with game rendering terminology. Path tracing is *all* reflections. 100%. It doesn't matter how many specular surfaces there are. And every single one of those rays is calculated using the RT cores.
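A toy way to see that in plain Python/NumPy (nothing to do with Cycles internals, the "environment" and numbers are made up purely for illustration): averaging light from random directions, like a diffuse surface does, stays noisy until you throw lots of samples at it, while a mirror needs only the single reflected direction.

import numpy as np

rng = np.random.default_rng(0)

def env_light(d):
    # a made-up environment: a bright "sun" plus a dim ambient sky
    sun = np.array([0.3, 0.9, 0.3]) / np.linalg.norm([0.3, 0.9, 0.3])
    return 5.0 * max(np.dot(d, sun), 0.0) ** 50 + 0.2

def random_hemisphere_dir():
    # uniform direction on the upper hemisphere (z up)
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    v[2] = abs(v[2])
    return v

for samples in (4, 64, 1024):
    # diffuse: average light gathered from random directions -> noisy until
    # the sample count gets high
    diffuse = np.mean([env_light(random_hemisphere_dir()) for _ in range(samples)])
    print(f"{samples:5d} samples  diffuse estimate = {diffuse:.3f}")

# specular: a mirror only needs the one reflected direction, so a single
# "sample" already gives the final answer
view = np.array([0.0, -0.7, -0.7]); view /= np.linalg.norm(view)
normal = np.array([0.0, 0.0, 1.0])
reflected = view - 2 * np.dot(view, normal) * normal
print("mirror estimate =", round(env_light(reflected), 3))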
The "Why Vram Matters" chapter is like the hidden gem of this video. Glad Paul recommended your channel.
I would love to see a part 2 with unreal and houdini.
Great video. Picked up the RTX 4090 today and it is incredible at hardware rendering in Arnold, Blender, Keyshot etc. Games are fun but this GPU is excellent for content creation.
How did you get it?
@@SW-fh7he Walked into Microcenter on launch day. They had plenty. They're sold out now but they should have more shortly. Apparently Nvidia is sending out links to Geforce Experience users that will allow them to easily order a 4090 from Best Buy without having to deal with the bots that are slamming best buy right now.
I'm gonna pick up mine today and build a whole new computer for Daz Studio, because I'm a character designer.
Also the psu... :/ and the wattage is hugee!!!
No 4090 will last as long as a Quadro (an RTX A6000, for example) under the kind of stressful daily rendering workloads I put them through. That's one of the main reasons I was afraid to buy one.
Finally, an in-depth analysis of 4090 performance for 3d workflow!!! Could you pretty pleeeease (as you mentioned earlier in the video) do a separate video for unreal engine? Thanks!
Yes, Unreal and an even further look into Blender, like different types of viewport settings and final renders at 4K, would be amazing!
Thats what im looking for as well.. thnxx mate
That's why I got the 12 GB RTX 3060 instead of the 8 GB RTX 3070, a tad slower, but cheaper, and it handles larger scenes!
Great video!
I got a GTX 680 with 4 GB instead of 2, and it served me for a few more years than it would have otherwise, as many programs eventually had 4 GB as a bare minimum.
I bought RTX3060 12GB instead of RTX4060 8GB for 3Ds Max.
Finally some real world tests that really show why an animator would spend so much on a card like this. That 3 minute short test is the best I've seen that no other YouTubers seem to understand. Thank you!
only this guy can say, "mom I need 4090 for homework".
Quickest way to get your parents to take you out of animation school and put you in a real university
@@cryogenicheart2019 after graduating from an animation university, I found more interest in tech and PCs, not animation :)
Great you're back, can't wait to see more of your animations.
I think another angle for looking at this is efficiency. If your computer is computing a lot, that adds to the power bill fast. Just yesterday I saw a review where they tried to find the best efficiency by slightly underclocking (about 150 MHz) and undervolting, and they got the power consumption down almost to 3090 levels (roughly from 400W to 300W) with only a few percent of performance lost. That would mean you basically double your efficiency in that case. So there's another cost factor that can make the investment worthwhile even quicker.
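Rough back-of-the-envelope on what that means per frame - every number below is an assumption for illustration (a 3090 at ~350W taking roughly twice as long per frame, vs an undervolted 4090 at ~300W that's ~5% slower than stock), not a measurement:

uv_4090_watts, gpu_3090_watts = 300, 350
frame_s_4090_uv = 60 * 1.05          # assume a 60s stock frame, +5% when undervolted
frame_s_3090 = 60 * 2.0              # assume the 3090 takes about twice as long

wh_per_frame_4090 = uv_4090_watts * frame_s_4090_uv / 3600
wh_per_frame_3090 = gpu_3090_watts * frame_s_3090 / 3600

print(f"undervolted 4090: {wh_per_frame_4090:.1f} Wh/frame")
print(f"3090:             {wh_per_frame_3090:.1f} Wh/frame")
print(f"efficiency ratio: ~{wh_per_frame_3090 / wh_per_frame_4090:.1f}x")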
It's good to see someone making benchmarks on creative programs rather than games. Thanks!
As an artist working with a 1080 Ti in the current year, I don't even fully grasp the number of creative decisions I could make with this card if I were able to afford it. I agree with the statement about the gaming community. The conversation about the 40 series ends the same way it does every year: more frames equals better performance and a better gaming experience. That's it. For the creative community it means time budgets can be allocated differently. When you mentioned the difference of 9 hours, man, 9 hours for sound design or post processing in general can make a huge difference.
Really good review. First time I check out the channel, thanks for sharing.
Yes please, make a part 2. I am a freelancer who uses Maya/C4D/Houdini, and I render mostly with Redshift. I have my own 5-system mini farm stocked with 30xx GPUs. But a big part of my time is spent on simulations. Only a few situations exist where GPUs can speed up simulations, but I would very much like to see some of these run on a 4090.
Great video as always, keep up the good work.
You're absolutely right! Thank you for the video. For us, who use these cards for work, these cards are INSANE!
Part 2 please, would be really cool if you could include 4080 and 7900 XTX, and even better if you could add video production and other creative applications!
I am so glad you talked about an actual animation rendering benchmark. I have gotten into countless arguments with people about how a single still frame tells me nothing about how a video card will perform for my needs as an artist. All that tells me is how well that video card will render that one single frame with the most optimal settings. It tells me basically nothing. People just DON'T understand that the settings used for one frame might not be great for the next frame. So one frame might render in 30sec, but the next might render in 1min30sec, and the frame after that might take 5min. So that single-frame benchmark is utterly useless. So thank you. I feel vindicated. lol.
Also, on the note of render farms, Blender has a fairly popular one called SheepIt where you can use your own hardware to earn time on the render farm for your own projects.
Imagine trying to tell people that rendering scenes has nothing to do with actual viewport performance while you're working on creating that scene. But people love to see simplifications and numbers, just to think they know something and made the right choice, even if they only use it to play Minecraft.
@@carlesv7219 For real. It's like, I can deal with 10fps in the viewport; what I need is shorter render times to help me iterate faster. lol
There's a lot more to consider here, especially for the average 3D modelling/animation enthusiast. If you look at your results from the animated frames of the Sprite Fright production files, you can clearly see it's not "double the performance"; if anything it's barely 20 percent more with a 4090 than with a 3090. So if you're the average hobbyist it might not be such a huge deal to miss out on 20 percent of the performance; it's certainly NOT twice the performance. And that brings me to another thing - cost - these cards in Sweden, where I live, cost around 26K SEK, which translates to 2321 USD. Most of us who bought the 3090 for around 2000 USD might not be THAT motivated to junk our cards and pay an extra 2.3K to get the difference. In a professional STUDIO setting I totally get the value; even a 5 percent difference can make or break some larger budgets with time constraints. You also have to realize that DLSS isn't used everywhere; that's more an interpolation thing, like those used in older television sets to draw the frames in between two extremes or rendered images.
So in short - I don't think you will notice much difference when working in the Blender Cycles viewport, rotating and inspecting the scene. The biggest major upgrade was in fact from the 1080 Ti to the 3090, where you went from choppy, slow movements to relatively real time. From 3090 to 4090, that difference is not as HUGE as you make it sound here.
Also, in Blender, animation files (especially rigged ones) are very CPU bound, and here it's actually better to have a faster CPU.
You're always the go to for usable info on graphics cards. Everyone else is gaming
I'm an animation student and I have a 3080 Ti. I just finished my first ever shader render in Maya. It was just 150 frames, but it smoked my M1 Mac Mini. Once my workloads start getting super heavy later on, I'll definitely upgrade, but I should be good for now. Also my CPU is a Ryzen 9 5900X with 32 gigs of DDR4 RAM. I'm sure I'll be upgrading to either 64 or 128 gigs of RAM in the near future.
I got the RTX S 5100
CUDA cores: 41,984
Boost clock: 2.3GHz
Memory: 128GB GDDR6X
Memory bus: 3760-bit
Memory bandwidth: 2036GBps
RT cores: 264 (3rd-gen)
Tensor cores: 1042 (3rd-gen)
NVLink SLI: No
PCIe: Gen 5
HDMI: 2.1
HDCP: 2.3
Display connectors: 2x HDMI 2.1, 4x DisplayPort 1.4
Length: 15.3 inches
Width: 6.0 inches
Height: 4-slot
Maximum GPU temp: 102
Graphics card power: 460W
Recommended power supply: 1000W
Power connectors: 5x 8-pin (with supplied 26-pin adapter)
Thank you for making this video. I had purchased a 4090 thinking that maybe I had overspent, but as a 3D visualiser, you have helped me justify with a smile that I made a great investment.
Finally someone showcasing what this card is actually meant for! Thanks
FINALLY, this is what I was looking for, not reviews that talk about video games and stuff. 3D is my thing.
i'd love to see the performance differences between all the 40XX cards
Very interested in video export from unreal engine 5 using the movie render queue in a raytracing heavy scene. I have a 3090 right now and it does pretty good but the 4090 looks like it leaps ahead by quite a bit. Thanks for putting this video together.
do you regret buying 3090?
I have two 3090s, so this card is moot for me. I can't pack two of these in the case to double up; no motherboard supports two because of how thick they are. Gonna stay with my NVLinked 2x 3090s and skip this generation, unless I'm able to watercool them.
@@chillsoft there exists a watercooled version of the 4090
Thanks! finally a review for creators. ...and yeah! Please a part two!!! It would be great to see how it performs in different render engines and different 3D software.
VRAM was the reason I went with a 3060 instead of a 3070: whenever I would need to waste tens of hours using CPU+RAM as a fallback, it wouldn't be worth it for the few tens of minutes saved with the faster 3070. It was very hard to justify replacing a broken GPU and being voluntarily fleeced with an almost $1k 3060 back at the height of the cryptomining craze, but seeing how fast viewport rendering is with a 4090, I'm tempted to bite the bullet once currency exchange rates stabilize.
This review was everything I wanted from a creator perspective but never got from the usual outlets. Thanks!
As someone dabbling a lot with Blender and doing some freelance work, your graphs were very helpful, whether brute-forced or RT native. Great explanations! Subbed.
Love the video format, the information in the video. This is exactly what I will send people whenever I have to explain why VRAM matters.
Love the focus on creators - not enough of that. You're the man Sir Wade!
These kind of results are the selling point for me. It's really a great value when it performs this well in work and play.
In a lot of workflows the artist is the limiting factor, not the hardware - even in LookDev. The real benefit will be in lighting and final rendering (if you are not working on complex shots that won't fit into your VRAM anyway).
(Btw, if you are talking about profitability, the power draw is an important factor. So what's the min (idle/desktop), avg (working in the viewport), and max (rendering) power consumption? What's the mix (desktop, viewport, render) in real world scenarios? How much does it draw in an example project per day? Compared to other cards? And how much does the room temp rise (believe me, it does)? How much power is needed for the AC to cool it? These factors are also important to calculate your costs and to figure out what's the best solution for you.)
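A quick sketch of that per-day cost math in Python - every wattage, hour count, and electricity price below is a made-up placeholder, not a measured value, so swap in your own numbers:

# (watts, hours per working day) for each mode -- placeholder assumptions
mix = {
    "idle/desktop": (80, 2),
    "viewport":     (250, 5),
    "rendering":    (450, 3),
}
price_per_kwh = 0.35   # e.g. EUR/kWh, adjust to your local rate

kwh_per_day = sum(watts * hours for watts, hours in mix.values()) / 1000
print(f"{kwh_per_day:.2f} kWh/day  ->  {kwh_per_day * price_per_kwh:.2f} per day")
print(f"~{kwh_per_day * price_per_kwh * 22:.0f} per month (22 working days)")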
3:25 that's one of the reasons why I bought the 12GB model of the 3060, so that I wouldn't have to worry about running out of memory anymore (as I sometimes did with my 4GB 1050 Ti)
Wow, someone else who went from a 1050 to a 3060, nice
I do mostly CAD and 3D scanning and modeling (you don't need that much VRAM very often for that). But renders take a lot.
It's a shame I only upgraded from a 1060 6GB to a 3070 8GB. I was really mad that there wasn't any 3070 12GB model in existence, and there was a big difference going up to the 3080 12GB.
@@macksnotcool I've had a 750 Ti for years, then a friend gave me his old 1050 Ti which I used for a few months before upgrading. worth every cent.
@@wachocs16 yeah, that's why the 3060 was so appealing to me, it was reasonably priced and had more VRAM than a 3070 (which was actually quite expensive)
@@Im_Ninooo I too went from a 1060 6GB to an RTX 3070 8GB.
I wanted a 3080, but it was power hungry and expensive too during the mining boom.
My Cooler Master V650 watt 80 Plus Gold could only handle up to a 3070. 😭
The fun part is when my friends and I went into Micro Center and each of us walked out with a 4090, and people cursed at us for being scalpers. Little did they know we're just building our own rendering stations :( . This card is a blessing.
Part 2 for Cinema 4D and Redshift would be awesome for the 4090. I am thinking about replacing my 3090.
Finally, a video benchmarking the RTX 4090 in animation and rendering, hopefully this dude gets more views and subscribers because he honestly deserves it.
1 year ago I switched from a 1080 to a 3080. It blew my mind when I realized, I'm able to move a fully rigged (human) character of 300.000 faces in the cycles viewport (with denoising) in the middle of a "Kids Bedroom Scene" with A LOT of stuff in it.
Sure at a couple FPS, but a couple years ago this was just simply unthinkable and fully impossible.
This is the first serious content creator review, not just for YouTubers
I like that you showed a bit about how it affects more than just rendering. I would like to see something similar showing how CPUs and RAM speed could impact sculpting performance and other features, in Blender.
RAM speed doesn't really matter as much for long renders as it does for gaming
All WRX systems use 2666 MHz
This is really helpful, thank you! Everyone else just concentrates on games, and that's not what we need.
I’m a 3d artist and more power is always useful. Faster rendering is the key to managing workload. It would be a no brainer for me to want this card.
Great video. Super awesome to see the 4090 from an artist point of view.
Because of the large number of gamers (probably), I've always had issues with the famous top tech YouTubers not telling me the performance uplift in the software I use. Finally!
PS: thanks to Paul from Newegg for linking this video :P
Man, before I watch the video, I just wanted to say you are GORGEOUS! GOSH!!!
Glad someone finally addressed VRAM and rendering. Everyone is always talking about gaming. The render time is a big thing, but I need the most VRAM I can get, and Nvidia throttles that on most models.
I'm a 2nd year 3D animation student. As much as I'd love to upgrade my 3080 Ti to a 4090, my classes aren't anywhere near advanced enough in complexity or size yet to warrant the cost. At this point, I figure by the time I'm actually doing grad classes, the 5090 will be out, and from what I've seen it's supposed to be even twice as fast as the 4090, so I figured I'd wait till then. I have it paired with a Ryzen 5900X, and as far as I've seen so far, we mostly use Maya. I'm sure we're not that far from it now. Great video, thank you. A part 2 to this video would be awesome!
Hands down one of the most comprehensive and useful reviews / deep dives on the 4090. Subbed, Liked and please do a part 2 on C4D!
Great video. The algorithm surfaced this for me! Just wanted to speculate on when you mentioned the 3090 taking 45 mins per frame: the 3000-series FE cards are notorious for having really bad thermal pads and poor cooler alignment in the early batches (2020). I recently bought a 3090 FE and noticed the memory temperatures hitting thermal throttle (110C+). Because GDDR6 and 6X have error-correcting components, if the memory heats up too much it can start to trip over itself and create errors that will slow down its performance. Using quality thermal pads and trying to improve the cooler's seating has increased the thermal headroom on my memory, and it's running at a nice cool 92C max (my alignment might not have been perfect, as some other people reported 88C max with the most memory-intensive applications... mining).
Thank you so much for including the A6000; so many YouTubers don't even include the A6000 in 3D rendering comparisons, given that its VRAM is the same as the 4090's.
Results are mind blowing !!
Would love a part two testing maybe EmberGen, and some Houdini and Karma GPU on it !!
Please, part 2! I'm looking at this card specifically for Blender and Unreal Engine :D Thanks so much for the content!
I cannot wait to try this card in Blender but for now the closest I got to a 4090 was by downloading a 3D model of it then zooming in on the details. XD
I would be interested in a Houdini 4090 benchmark with mantra and karma.
Yesterday I ordered a new PC, equipped with a 4090. Eye-watering cost, here in the UK. This video helped calm my jitters. I buy a new machine every 3-4 years. This is the first time I've bought chiefly for Blender performance, rather than gaming.
Just came from a 1080 Ti and got a 3090 Ti, both of which the company paid for, so I'm happy for now. And the 4090 Ti is right around the corner, not to mention the 5000 series. Good video!
The speeds are insane.
I only have a gtx 1060 6gb and rendering is very slow on that thing. Seeing the speed of 3070 was a huge eye opener, let alone the 4090!
Now with the 4090 abandoning NVLink, I'm seriously considering skipping the 40 series and waiting for the 50 series, in the hope they restore it. In the meantime I will seriously try to get my hands on two 3090s and add them, so I can take advantage of the 24+24=48GB with NVLink.
Imo I avoid Quadros since I'm not a studio owner or anything. As an artist I saw a lot of value in the RTX 2080s I had, and now in the 3090s I'm planning on; 48GB with NVLink is more than enough for my budget to make me happy and load my scenes and/or projects. I'm not getting worked up over whether it will render in 12h instead of 8h.
As long as I can improve my workflow at a reasonable cost, I'm happy with it. As I mentioned above, I invested in a good motherboard and a nice CPU that I believe serves 80% of the projects a Blender artist needs. So in the future I can add any 30- or 40-series GPUs, but I definitely won't spend 1600-2000 euros (European prices) for a 4090 that gives me 1.7x a 3090 and still leaves me stuck with 24GB.
For the same money I can get 2x 3090s; the Asus here costs 1200 euros incl. VAT, and excluding VAT that's 867€ x 2 = 1730€, let's say roughly. Add an NVLink bridge at 125€ and that tops it out at around 1900€.
I hope I don't sound arrogant or biased, but I can't see myself spending enormous amounts on Quadros.
Farm rendering is still too expensive unless you're running a studio with lots of clients. If you're a solo artist, I don't see it as a solution for the time being. Maybe later there will be more competition in the market and prices will become more reachable, yes.
Finally, someone who's not getting fooled by a shiny object. As you said, you can buy two 3090s for the price of one 4090. I myself have a 3090 and am working towards my second one. My biggest disappointment with the 4090 is that I was expecting it to be 32GB, not 24GB.
Fantastic video! Would definitely like to see some more.
Can't wait to see the 4060 & 4070 and their performance per power consumption
Man, thanks for being one of the very few to test for creators. Maybe do another that also shows benchmarks for editing software like DaVinci, Premiere and FCP.
Cheers from Nigeria
A youtuber named Eposvox has a video that might be what you're looking for. I hope that helps.
@@CreatorChaz yeah thanks i saw his one before sir wade posted his. It would be good to get more people doing these benchmarks so we can compare i guess.
@@otegadamagic Yeah, It's kinda rough finding non-gaming benchmarks sometimes. I hope more people pop up in the space.
@@CreatorChaz yeah apparently NVidia cares more about gamers than content creators. No wonder they mainly sent test units to gamers for review
The 16k image test: That image has plenty of raytracing. Reflection is actually the lightest form of raytracing. Illumination and shading radiosity ("global illumination") is one of the heaviest things for raytracing, which that scene has plenty of, and basically the whole scene is raytraced and quite heavily.
6:37 Now i wanna Play Super Mario 64 so badly! Thanks!
I figured that the 4090 was going to sell out at that price for this exact reason: professionals.
Thanks! This is the best benchmark for 3d artists.
I love the comparison to render farms ... i think a lot of people wanna know if they should invest in the hardware themselves! Very helpful! THANKS SIR
This is such a helpful video. I'm in the midst of thinking if I should upgrade to a 3090 from a 3060 or just take a big leap to a 4090. Plus coming from someone that not only knows about pc specs but also does 3D themselves is a lot more reliable than just to watch some random benchmark videos. Thank you for making this video. Cheers!
What did you end up doing
@@hman6159 nothing yet 😆 still saving HAHAHA
I think there also needs to be a discussion about the power consumption of a machine that uses the 4090. Off the top of my head, I believe it ranges from around 350 watts up to 600W, which is like running a handheld vacuum cleaner while you render, and half that at idle.
Needed this review. Very good content
I'd love to see a part 2!
Most of us in the industry know that we don't render shots as a whole but rather in parts (render passes) and then comp them, but this vid shows the GPU's power for when you don't have time to do that and just render it in one go, or for quickly checking what the render would look like. Would be cool if you could add a scene with SSS and volumes in it, as those are some of the most expensive/time-consuming things to render. All things aside, this vid is very informative! :D
no its a stupid vid. he rendered in 16K resolution which favours the 4090. No one noticed?
@@amanda.collaud Yea, I noticed the 16k render. Just saying, coz currently Sir Wade renders the shaders at a baseline where the 4090 can render quickly at high resolution. I don't think it's a stupid vid but rather a more thorough and specific one. I'm an artist and I work in the industry, and I'm curious what the result would be.
5:05 When GPUs run out of VRAM, they can sometimes use system RAM (out-of-core rendering), but then it slows down a lot. Maybe that's what happened.
Personally, for Blender we are skipping the 40 series. Two NVLinked 3090s give you slightly faster speeds and a combined 48GB of VRAM, so that is what we have been going with.
Great video, that's what I call quality content! Thank you.
It would be nice to see tests like this on new CPUs too, like the AMD AM5 lineup and Intel 13th Gen.
Super helpful as I'm considering purchasing a 4090 or 4080. Thanks so much!
This video was off the charts
This is a great video! You explained all this very well.
@Sir Wade Neistadt - Thank you for sharing. I like the A6000, but is it unfair to compare it to a 4090? When you look at the A6000 at 300 watts and the 4090 at 450/600 watts, what kind of results would you get IF the A6000 had that kind of power? That would let you compare the cards almost apples to apples, up to 24GB of VRAM.
Your thoughts?
It would also be interesting to do a calculation based on the power consumption for an animated render, which would give an indication of how much air conditioning would be needed if there were a render farm of these guys.
Also, with these sorts of tests, the OptiX denoiser would be a better choice than Open Image Denoise. Yes, Open Image Denoise is a better denoiser, but it's CPU-bound, not GPU-bound, which skews the results.
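For reference, a minimal Blender Python sketch of switching the final render to the OptiX denoiser; it assumes Cycles on an RTX-class GPU with OptiX available, and uses the Blender 3.x property names:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Denoise the final render on the GPU with OptiX instead of the
# CPU-based Open Image Denoise, so the benchmark stays GPU-bound.
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPTIX'
```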
Just built my very first PC with a 3060 less than half a year ago 😅 I do VFX with Maya, Nuke, Houdini, and Unreal.
Ugh. I was really hoping that the card wasn't that good. Nvidia's been getting a little too bold lately, and I was more than ready to write this card off. But I was wondering what made you test at 16k instead of 4k? Either way, I love the video and hope you make a part 2! Thanks Sir Wade!
I mostly just wanted to push the GPUs further to see if the performance scaled - I did a 4K / 16K test 2 years ago and figured I'd just stick with it :P Glad you liked it!
@@SirWade that favours the 4090 ...
As a professional using Blender, it's a no-brainer: the RTX 4090 is worth buying, even if you already have a 3090 or 3090 Ti. If you work with it, you will see a return on your investment quickly. Of course, it depends where you live and how much you can make with your work. Do your math.
I remember when the Titan X Pascal was the best in class and I was very happy back then with 2 or 3 cards... Today a 4090 does the job 10 to 14 times faster than a Titan X Pascal: 450 W vs 3.5 kW (14x Titan X). It's incredible progress.
Great video! Please do take a look at Unreal and an even deeper look into Blender; different viewport settings and final render times at 4K would be amazing!
Your correction is only half correct.
If Blender was swapping memory to system RAM, there's no way it would take that long. However, if it was swapping to the SSD, then yes. I don't know what your setup was. Usually, VRAM issues are easily avoidable if you render with tiling. It takes longer depending on the speed of your SSD, but since the GPU doesn't have to store everything in VRAM, you can render almost anything.
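As a reference point, a minimal sketch of the Cycles tiling settings (Blender 3.x Python property names assumed) that keep peak VRAM in check for huge output resolutions:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Render the frame in tiles so the GPU never holds the full 16K
# framebuffer at once; smaller tiles lower peak VRAM but add overhead.
scene.cycles.use_auto_tile = True
scene.cycles.tile_size = 1024  # try 512 or 256 if you still run out of memory
```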
Glad I found you with this. I hardly ever see 3D artists benchmark these cards, and I haven't done it in years now, even after graduating with a degree and everything, as there's no work in Florida for it.
would love to see a part 2 - specifically rendering in unreal engine 5
Please make a part 2! This was an amazing video. 🙂
Yeeees, I wanted this video so much, thanks!
At around the 3:10 mark of the video. I'm asking just in case.
Do you put your viewport in rendered shading preview while rendering the scene?
I learned this the hard way 5 years ago; I remember I ran out of memory on my 1070 when rendering a heavy scene with Cycles.
I noticed that I had set the viewport samples and the render samples to the same value, 4096.
I was curious why it was able to use 6GB of VRAM in viewport shading but not in the render.
Turns out, when I was rendering the scene, the viewport and the render window were each using VRAM at the same time.
I decided to put my viewport in the image editor layout, and it started rendering each frame in under 15 minutes instead of giving an "out of memory" error.
Nowadays, even when using a 3090...
I always turn on [Temporary Editors > Image Editor] in the settings and Ctrl+Space one of the windows for the render (so that the other windows are inactive while I am rendering).
In my case it has improved my renders 3x to 4x ever since I became aware that viewport shading also uses VRAM even while I am rendering.
Frame 1: 18s vs 1min 09s (with viewport shading in the background) for the same scene and settings from 5 years ago.
Not sure if your case is the same as mine. Hope this info helps someone.
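In the same spirit, here is a small sketch (Blender 3.x Python API assumed) that kicks every 3D Viewport out of Rendered preview before a final render, so the viewport's interactive render session isn't holding VRAM at the same time:

```python
import bpy

# Switch any 3D Viewport that is in Rendered preview back to Solid
# shading, freeing the VRAM its interactive render session was using,
# before starting the final render.
for window in bpy.context.window_manager.windows:
    for area in window.screen.areas:
        if area.type == 'VIEW_3D':
            for space in area.spaces:
                if space.type == 'VIEW_3D' and space.shading.type == 'RENDERED':
                    space.shading.type = 'SOLID'
```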
The beard looks really good on you!
Great video, waiting to see this exact comparison for Unreal Engine 5.1 with a scene with really heavy 8K textures that push the VRAM. Also be aware that the 4000-series cards don't support NVLink, in case someone is thinking of getting two (well, you physically won't be able to fit two anyway). This might be why two A6000s or two 3090s could be a better bet for anyone looking to stack another card later down the line for more performance.
Honestly, this is some of the best content I have watched. Thanks for posting :D
This is incredible and so thorough.
Thank you for this, Sir. I'd like to see the Part 2. Eventually, if you can get them, I'm interested in how the 4080 cards will do with pro apps. I animate, so fast rendering isn't that important to me, but the Omniverse benefits with Unreal integration are very tempting. I already know I'll have to upgrade my 3060ti soon, it's extremely refreshing to hear an artist's view on what to choose. I like daydreaming of AMD competing here, but it really doesn't seem realistic. What do you think?
How's your 3060 Ti working?
@@CaptainScorpio24 So far, I enjoy it. If you're only animating with well-designed rigs in Maya's Viewport 2.0, it's good enough, but anything beyond that might want more power. Currently, I'm attending Animation Mentor, which uses very efficient rigs. The 3060 Ti paired with a Ryzen 5600X lets me view my animation in Maya's Viewport 2.0 with lights and textures. Of course, a playblast is more accurate, but the viewport plays well enough as you tackle notes. Once I graduate AM, I'll want to explore more projects similar to what Sir Wade is tackling on his channel. As he pointed out in the video, a 3070 couldn't handle rendering some files. That said, you only need GPU-only rendering for faster render times. If you're limited on budget, a 3060 Ti will do for a while, and it games pretty well with ray tracing at 1080p. GPU prices are going to go down soon, as the 4080 cards release next month and AMD introduces its new generation. So, if you can wait, I recommend holding out for a 3070 or 3080, or a less expensive 3060 Ti. Also, if I didn't care about ray tracing and Nvidia's Omniverse, I would have tried AMD. For the same price, they have more power and memory. That said, I love ray tracing, so I don't regret the purchase.
Late to the party, but AMD has been making a lot of updates and it's been getting better; they're also finally adding hardware ray tracing in Blender 3.5.
@@rVox-Dei Yes, I'm very interested in RDNA 3's improvements. They're getting Ray Tracing sorted, but people say CUDA still has a vast pro advantage. So, the general thought is that, for a 24gb card, a 3090 will still be a better choice than a 7900xtx, but we have yet to see if that's the case. I have a close eye on the 4080 (costs about the same as 3090ti), but I'm concerned about the 16gb vs 24 for rendering large scenes.
Great video ... well done!
Honestly it is very hard to find a good video on real card performance ... I mean a 3DMark or Cinebench score is nice, but it doesn't say anything, heck even the ones where they show gameplay are ridiculous.
What you did is amazing as it provides some real world context as to what to expect when you fire up a render.
And you're right about renderfarms.
I am not doing high end stuff (mostly 3D stuff for marketing clients who don't have any content), but even within that context a renderfarm is too expensive.
I mean, in one of my last projects I rendered a 250-frame clip of a doorknob with a slowly panning camera ... even with optimizing and compromising it cost €1 per frame ... times 5 variations, so €1,250 in total ... not really doable.
For the time being I switched to Redshift, as it renders much faster and has no fireflies. The GI and such can look a bit flaky since it's a biased engine, but if and when some money comes in I will definitely build a desktop system with this beast of a card in it.
What kind of CPU are you running?
Intel-based (Xeon or i9) or a Ryzen?
Again, great video!!!