V-Ray | CPU or GPU - Which one can render FASTER?

  • Published: 26 Dec 2024

Comments • 110

  • @JonasNoell  11 months ago +3

    ✅ Check out my Patreon for all my scene files, bonus videos, a whole course on car rendering, or just to support this channel 🙂
    patreon.com/JonasNoell

  • @nawrasryhan  11 months ago +19

    Interesting video as always, thanks for sharing!
    A few notes to consider though:
    1. When you render with CPU it is preferred to use Bucket mode to get the best performance, while for GPU it is the opposite.
    We could then use the noise limit to compare render time instead of quality, which would be a more accurate indication imo.
    2. GPU will always use BF for the primary GI engine, so what you are changing in the UI is the secondary engine. You are therefore now using BF+BF instead of BF+LC, which will make the rendering slower.
    Also, by using BF for the secondary engine you are limiting the GI bounces to 3 instead of the 200 that the Light Cache uses by default.
    An additional tip is to use the VRayColor2Bump map to get a roughly similar bump intensity between CPU and GPU.
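The bounce-cap point above (3 BF bounces vs the Light Cache's default 200) can be sketched with a little arithmetic: with an average surface albedo a, each extra GI bounce contributes roughly a^k of the direct energy, so capping bounces discards the tail of a geometric series. The albedo value and bounce counts below are illustrative assumptions, not measurements of either engine.

```python
def gi_energy(albedo, bounces):
    """Total light energy after `bounces` indirect GI bounces,
    with the direct contribution normalized to 1."""
    return sum(albedo ** k for k in range(bounces + 1))

a = 0.7                      # assumed average albedo of a bright interior
short = gi_energy(a, 3)      # BF-style bounce cap
long_ = gi_energy(a, 200)    # Light Cache-style cap, close to 1 / (1 - a)
loss = 1 - short / long_     # fraction of indirect light lost to the cap
```

With these numbers roughly a quarter of the total light is lost to the 3-bounce cap, which is one reason a BF+BF render can come out visibly darker than BF+LC in bright interiors.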

    • @JonasNoell  11 months ago +5

      Hi, thanks for the input. Much appreciated! :-)
      1: The reason I went for Progressive is that I can just set a fixed render time and then compare noise levels afterwards. I found that much more practical than trying to match noise levels through Bucket and then seeing what renders faster, because either time or quality has to be fixed to compare. And matching noise between GPU and CPU is close to impossible, as the settings translate very differently and the results differ a lot. Or how would you approach this?
      2. I actually meant that I used BF for secondary rays in CPU mode, which is why I also use the same settings on GPU. So both render with BF + BF. If that wasn't clear, sorry for the confusion :-)
      Color2Bump is a good tip; it would be a good habit to use that by default in the future to enable easy switching.

    • @nawrasryhan  11 months ago +2

      @@JonasNoell Ah okay, I've watched the video again and I see that you mentioned the Light Cache in the GPU section. I had the impression you were talking about the primary engine, but now I understand, sorry for that.
      As for the quality, I get your point, and yes, there will be a difference in quality at the same noise threshold. But we can minimize that if we set a low value to make the rendering quality high enough to neglect the difference. It is just an idea in case you are planning to make a part 2 :))
      I see many people complaining about GPU VRAM and comparing it to RAM usage by the CPU, which is very different, as the GPU can render huge scenes with "just" 24 GB of VRAM.
      That being said, I personally still use CPU because of the GPU limitations (missing and unsupported features) on one side, and because the hardware I have, on the other hand, is more CPU-based for now.

    • @belousovarchitect  3 months ago

      Tell me, if working with a GPU leads to more difficulty configuring all the materials and less accuracy, and you choose the CPU anyway, then why not use Corona Renderer, which is easier? In what way is V-Ray CPU better?

  • @bartekmuczyn  11 months ago +6

    As an archviz artist let me tell you - I've been using a 3990X as a workstation and a 3970X as a render node for almost 3 years. Being able to render a 4K image in production quality in under 15 minutes is priceless. Time is money.

    • @JonasNoell  10 months ago

      Yeah, investing in good hardware is never a bad idea. Nothing worse than being held back by bad tools.

    • @feanorfs6313  9 months ago

      Totally agree. I use a 3990 + 2990, both with 128 GB - and THAT is significant. I just can't imagine worrying about RAM when rendering all my scattered trees and grass :)

  • @jangrashei1752  9 months ago +3

    V-Ray vs Redshift would be interesting, in a more complex scene.

  • @VR6DAMIT  11 months ago +4

    GPU rendering speed is impressive. I just wish it gave the exact same result as the CPU engine, but as Chaos themselves say, they are essentially two different render engines.

    • @JonasNoell  11 months ago

      Yeah, I still think it's impressive how close they are. Not sure how that applies to engines that offer switching between CPU and GPU - do they give a pixel-perfect identical result? (Cycles, Arnold)

    • @MrMadvillan  10 months ago

      This is sort of the holy grail. Ideally you'd run your GPU for look dev and CPU buckets for finals, but even Arnold, which is supposed to be 1:1, still has a subpar GPU experience.

  • @kenzorman  11 months ago +4

    With quick renders, geometry and texture load time is more significant. I feel like GPU is generally quicker but load times are slower. However, on my system the GPU can fall over on big scenes when it runs out of VRAM. I'd like to see these tests on a really big scene with lots of textures... I'm looking at a big system upgrade, so I'm working out where to spend my money.

    • @Soluna_y2k  11 months ago

      I fully agree! I'd love to see this comparison on a huge environment scene with trees, grass, etc.

    • @JonasNoell  11 months ago +4

      Yeah, as stated at the end of the video, the examples are simplistic in comparison to real production scenes. I can try to experiment with a more complicated test, though so far my scenes are normally set up very simply for tutorial purposes :-)

    • @Double_Vision  11 months ago

      I render a lot of extremely large environments - think over 60 km wide, with deformation on the terrain. Add instanced geometry and heavy hero assets on top of that, and it's all CPU. GPU is fine for V-Ray motion graphics work with two 2080 Ti cards, though.

  • @emf321  11 months ago +3

    The other reason I have not yet switched over to GPU rendering: I have 10 years' worth of projects spent working on and perfecting all my V-Ray CPU materials. These materials have many nodes and parameters, and they look different with GPU rendering. For example, sometimes a bump of "60" in CPU V-Ray looks like a bump of "6" in GPU V-Ray, and you have to adjust the value to match the look. I don't have time to troubleshoot and convert hundreds of materials node by node, parameter by parameter, to get similar-looking V-Ray GPU materials. Unless V-Ray GPU materials look and work EXACTLY like V-Ray CPU materials, I'll never switch to GPU rendering.
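If the mismatch really is close to a constant factor per slot (the "60 looks like 6" example above), a batch conversion could in principle be scripted rather than done node by node. The sketch below is hypothetical: the 0.1 factor and the plain material dicts are stand-ins for whatever your DCC's scripting API actually exposes, and the factor would need calibrating per material type.

```python
BUMP_SCALE_CPU_TO_GPU = 0.1   # assumed factor, not an official conversion

def convert_bump_for_gpu(materials, scale=BUMP_SCALE_CPU_TO_GPU):
    """Return copies of the materials with bump amounts rescaled for GPU."""
    converted = []
    for mat in materials:
        mat = dict(mat)                # copy so the CPU originals stay intact
        if "bump_amount" in mat:
            mat["bump_amount"] *= scale
        converted.append(mat)
    return converted

library = [{"name": "car_paint", "bump_amount": 60.0},
           {"name": "glass"}]          # no bump slot: left untouched
gpu_library = convert_bump_for_gpu(library)
```

Keeping the CPU library untouched and writing the rescaled copies out as a separate GPU variant would preserve the ability to switch back at any time.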

    • @JonasNoell  11 months ago

      Yeah, I can totally understand that. It would definitely also be a factor for me to consider.
      I think their intention was a 1:1 match, but in reality that may not be entirely achievable. What is also a big factor is that you can use 99% the same workflow no matter whether you use GPU or CPU. If you switched to another GPU renderer, you would need to completely redo all your materials from scratch, including all of the scene setup/illumination.
      Being able to reach 95% the same result while using completely different engines and hardware is still very remarkable in my view.

    • @bgtubber  11 months ago +2

      Very good point. If Chaos want people to use the GPU engine more, I think they should create a two-way converter: CPU <-> GPU. I don't think it would be impossible, no?

    • @belousovarchitect  3 months ago

      And can you tell me what else you have to edit for the GPU?

  • @MauroSanna  11 months ago +2

    Whenever I have compared CPU vs GPU rendering with V-Ray, I have always found that the GPU output is missing lots of detail (especially displacement, bump and refraction).
    Yes, it is smoother, but the sacrifice in detail isn't good for production.
    When it comes to other GPU renderers (Octane and Redshift, for example), the situation is very different. Those engines are able to preserve all the detail and still manage to be faster and smoother.

    • @JonasNoell  11 months ago

      Interesting, I will give that a try and see. I'm quite new to GPU rendering, so haven't really tried it out in all possible scenarios yet.

    • @MauroSanna  11 months ago

      @@JonasNoell
      You can tell from the tyre details in your car scene. Look at the sidewall as an example.
      Also, the refraction in the headlamp is not as accurate as on the CPU.

    • @JonasNoell  11 months ago +2

      @@MauroSanna Ok, I guess that is true. I was mainly looking for noise, but indeed there is less detail in there. Though I would have to investigate whether that is just an issue with shader translation between CPU and GPU, or whether it is simply not possible to reach the same level of detail. Mind that the scene was set up on CPU, so everything was tweaked to look correct there.
      So maybe GPU in general needs a higher bump amount to reach the same intensity as on CPU, for example. Would have to check that out.

    • @MauroSanna  11 months ago

      @@JonasNoell
      That would make perfect sense.
      Though, during my tests I was under the impression that V-Ray GPU was facing some limitations due to a few missing features compared to the CPU version.
      Also, I should mention that we are using V-Ray for Maya at work, although I found that I could do everything you can do with V-Ray for Max by using the plug-in shader nodes whenever certain features weren't available natively.

  • @L3nny666  11 months ago +2

    The only reason I'm not using GPU is that there are still some things not supported - nodes etc. Also, I have never worked on a scene that consumed less than 40 GB of RAM, so I don't know how that translates to VRAM. The 3990X can be bought for 2700€ at times nowadays...

    • @nawrasryhan  11 months ago +3

      40 GB of RAM is not the same as 40 GB of VRAM.
      You can render huge scenes on GPU with just 24 GB of VRAM.
      The limitation is there, of course, but it's not as low as people usually expect.

    • @JonasNoell  11 months ago +2

      Never a scene that consumed LESS than 40 GB of RAM? What type of scenes do you normally work on?
      For the 3990X I guess you mean second-hand, or? New ones, at least here in Germany, go for around 4800€, so they are now more expensive than when I got mine 2 years ago. Thanks, inflation :-)

    • @L3nny666  11 months ago +2

      @@JonasNoell I do fairly complex archviz scenes. Due to time limitations it's often faster to brute-force it than to optimize for lower RAM usage. I could use lower-res textures, V-Ray proxies and optimized geometry, but why should I? I also never render images less than 4000 px wide, so my renders can get up to 20 megapixels.

    • @Ricardo-de9ju  10 months ago

      What kinds of nodes and important features are not supported? Nobody talks about that, right?

    • @feanorfs6313  9 months ago

      @@Ricardo-de9ju There is a long list of unsupported features on the official site.

  • @iannolisgeorgianimis4925  11 months ago +1

    You had my curiosity, now you have my attention. What kind of motherboard would you recommend - or are you currently using - for a dual-GPU setup? I'm thinking of jumping into NVIDIA Omniverse or Vantage.

    • @JonasNoell  10 months ago +2

      Hi, my mainboard is an ASUS PRIME TRX40-Pro.
      I run a dual-GPU setup on it and it seems to work flawlessly so far.

  • @salamburhan8617  11 months ago +1

    This comparison is like a dream for a content creator > thanks

  • @bgtubber  11 months ago +1

    My personal experience with the latest version of V-Ray GPU is very mixed. GPU rendering is fast on my RTX 3090, but once you start working on real-world projects which are a bit more complex than a few objects on a plain background, the IPR update speed becomes very sluggish. I often wait up to 5-10 seconds for the IPR to update even a minor change in the scene such as changing the diffuse color of a material or moving an object. My CPU renders a bit slower, but at least I don't wait seconds between scene changes. Due to this, using CPU ends up faster (for me) since IPR with CPU is much more responsive to scene changes.

    • @JonasNoell  11 months ago +1

      You probably have to set the IPR settings correctly for your GPU: support.chaos.com/hc/en-us/articles/19851352329873-Using-interactive-rendering-in-V-Ray-GPU
      I also had very bad IPR performance initially, but this helped me a lot.

    • @bgtubber  11 months ago

      ​@@JonasNoell Thanks. I normally leave IPR settings at their defaults. They are good enough for most cases, unless you're doing something very niche. Also, the IPR settings normally influence the IPR rendering speed, not the frequency of IPR updates. Hopefully Chaos can tackle this problem in the new version of V-Ray GPU.

    • @JonasNoell  11 months ago

      @@bgtubber For me the defaults worked terribly on my 4090, even in this super basic scene. It was updating and rendering extremely slowly. So I would definitely try to adjust them according to the doc and see if that fixes it.

    • @bgtubber  11 months ago

      @@JonasNoell Ok, I'll check it out in more detail, thanks!

  • @scpk2246  8 months ago +1

    Hi Jonas, can you make a video explaining the BEST V-RAY GPU SETTINGS?
    Also:
    - GPU+CPU settings
    - TEST settings
    - MEDIUM QUALITY settings
    - HIGH QUALITY settings
    Because I don't have a starting point for GPU settings - I have no idea about the quality of the settings.

  • @belousovarchitect  3 months ago

    Tell me, if working with a GPU leads to more difficulty configuring all the materials and less accuracy, and you choose the CPU anyway, then why not use Corona Renderer, which is easier? In what way is V-Ray CPU better?

  • @petarivic6528  10 months ago

    If you can, do a part 2 with complex scenes. It is clear GPU is better for simple shaders, but as can be seen from the headlight of the car, CPU seems to produce more accurate results.

    • @JonasNoell  10 months ago +1

      Yeah, I plan to do that on a more complicated scene in order to get a more complete comparison.

  • @belousovarchitect  3 months ago

    And can you tell me what else you have to edit for the GPU?

  • @emf321  11 months ago +1

    CPU still renders more accurately and with finer detail. Just by adding the NVIDIA denoiser render element you can use your GPU to clear up the noise, giving you accuracy and speed. (CPU is not limited by VRAM, only RAM.) Also, with GPU rendering you have to wait longer for load times, since your entire scene has to be loaded into VRAM, while with CPU rendering, rendering starts immediately, since your scene is already loaded in RAM. This makes a huge difference when you do many quick renders.

    • @JonasNoell  11 months ago +1

      Ok, interesting. I have to test that out on more complex scenes. So far all the scenes I have tested were simple scenes without many resources. If for complex scenes you really have to spend a long time waiting for the renderer to even start, that would be an important factor that could easily become very annoying...

    • @emf321  11 months ago

      @@JonasNoell The VRayDenoiser does a great job - not the default V-Ray denoiser, but the NVIDIA AI denoiser. For most materials it correctly removes the GI noise, not the material noise. There are still a few cases, like a complex fabric material, which get smoothed when they shouldn't, but it's still worth it, especially for the many preview renders required, where you can get instant general results.

    • @belousovarchitect  3 months ago

      Tell me, if working with a GPU leads to more difficulty configuring all the materials and less accuracy, and you choose the CPU anyway, then why not use Corona Renderer, which is easier? In what way is V-Ray CPU better?

    • @emf321  3 months ago

      @@belousovarchitect Why must I learn Corona Renderer if I already understand V-Ray perfectly... which has more features anyway? Why must I spend an eternity converting thousands of perfectly fine V-Ray CPU materials into Corona materials?

    • @belousovarchitect  3 months ago

      @@emf321
      And what do you think is better - V-Ray or Corona? And why? I can't decide (

  • @ammaralammouri1270  11 months ago

    Thanks for sharing. I think this video could become a series of comparisons for different scenes and different scenarios.

    • @JonasNoell  11 months ago +3

      Yeah, I was also thinking that would be an option. I should run additional tests with different and more complex scenes to get a bigger picture.

    • @ammaralammouri1270  11 months ago

      @@JonasNoell Exactly 🫡

  • @AForsyth  11 months ago

    Thank you a lot! I'm very interested in your findings and further exploration of this area - be it VFX work with volumes, exterior or interior scenes - and how well V-Ray GPU performs. I hear V-Ray GPU lags behind FStorm and Redshift in terms of speed, but it's just hearsay, so it'd be great to have some confirmation from a reliable source. Apparently, Vantage is quickly becoming a very powerful complementary tool to V-Ray CPU rendering for animations, UE-style scene exploration and near-instant feedback.

    • @JonasNoell  10 months ago +3

      I compared V-Ray with Redshift and FStorm in simpler scenes. There my result was that FStorm was the fastest, then V-Ray, and Redshift last.
      You can check out a video of that here: www.jonasnoell.com/downloads/vray/fstorm_redshift_vray_v002.mp4

  • @Mr.Bumhaniya  11 months ago

    Meanwhile, me watching this on an i7 4770 and a 1660 Ti with tears in my eyes, trying to learn 3D. Is this the same car from your Udemy course? I thought about purchasing it but noticed the last update on the course was a year ago.

    • @JonasNoell  10 months ago

      Yeah, the same one as in the Udemy course. The course is finished, it doesn't need to be updated anymore :-)

  • @ProvVFX  7 months ago

    Would like to see you do a comparison between Arnold 7.3 and V-Ray in the future.

    • @JonasNoell  6 months ago +1

      Let's see - I have never used Arnold so far and have only ever heard it is super slow :-)

    • @ProvVFX  6 months ago

      @@JonasNoell Yes, it's very slow haha. I use an older version at times (I don't know which Arnold version, but it's the one in Max 2019) and what I end up doing is brute-force GI and clean up in post. That's the only way to make it go faster lol

  • @almostdead9567  10 months ago

    Would it be possible to get a comparison of laptop vs desktop performance? Like timings and stuff?

    • @JonasNoell  10 months ago

      For laptops I think GPU is the only viable option, as high-core-count laptop CPUs don't really exist. You can get quite beefy GPUs, though. So on a laptop, definitely go for GPU rendering...

    • @vmafarah9473  10 months ago +1

      @@JonasNoell
      You can get a 16 P-core Ryzen 7945HX,
      while Intel has laptops with 8 P-cores and 16 E-cores.
      Laptops also come with the 4080 desktop chip, called the 4090m.

  • @MrCorris  10 months ago

    Bump in CPU doesn't translate to GPU the same way, so bump will look quite different between the two with the same settings.

    • @JonasNoell  10 months ago

      How about going through a Color2Bump node?

    • @MrCorris  10 months ago

      @@JonasNoell Honestly, if you're looking at switching to GPU-based rendering, I'd go straight to Vantage now. It's really very good 👍

  • @Itsyesfahad  11 months ago +1

    I gave up on GPUs, man. Nothing like rendering on CPU for final production quality.

    • @JonasNoell  11 months ago

      Ok, interesting - what was lacking for you?

    • @Itsyesfahad  11 months ago

      @JonasNoell The end result is different. CPU is way more accurate, BUT! If you don't care, anything works just fine, aside from the hardware efficiency when using CPU.

  • @valeriogranito  11 months ago

    I don't follow. CPU or GPU: which one renders faster? Then you are setting up the two scenes so that they render in the same time?? Let them both do their job, then compare time, and then quality too. Or am I missing something?

    • @JonasNoell  11 months ago

      Oh ok, I understand why that may be misleading, but I perhaps wrongly assumed the answer would be self-explanatory: normally, to compare which one renders faster, you would set the same output quality and then compare the time needed for both engines to achieve it. In practice, though, that is very difficult, because the noise levels don't really compare between the two engines, so it is very hard to reach the exact same output quality.
      So the easier way is to reverse the process and use progressive sampling, where you give both engines the exact same time to refine the picture as much as possible and then see which one can output the cleaner image. Once you compare the results and see which one has higher quality / less noise, you can safely assume that it renders faster (to reach the same quality) than the other engine :-)
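The fixed-time methodology can be sketched numerically. Assuming the usual Monte Carlo convergence rate (noise falling off as one over the square root of the sample count), the per-sample costs and base noise levels below are made-up illustrations, not measurements of either engine.

```python
import math

def noise_after(time_budget, time_per_sample, base_noise):
    """Noise left after rendering for a fixed time, assuming
    noise = base_noise / sqrt(samples)."""
    samples = max(1, int(time_budget / time_per_sample))
    return base_noise / math.sqrt(samples)

budget = 60.0  # same wall-clock budget for both engines, in seconds
cpu_noise = noise_after(budget, time_per_sample=2.0e-3, base_noise=0.8)
gpu_noise = noise_after(budget, time_per_sample=0.4e-3, base_noise=1.1)

# Lower noise at equal time means faster at equal quality; since noise
# scales with 1/sqrt(time), the implied speed ratio is the squared noise ratio:
speed_ratio = (cpu_noise / gpu_noise) ** 2
```

The engine with the cleaner image at the shared time budget is the faster one, and squaring the noise ratio even gives an estimate of how much faster it would be at equal quality.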

  • @supaflykai  11 months ago

    Cool video. I'm wondering whether the results would be as significant if you used bucket rendering instead. In my tests with V-Ray GPU/CPU, I found that with bucket rendering there wasn't actually much of a time difference, and the CPU was generally on top in terms of actual render quality. The GPU did render faster, but it loaded the scene much more slowly, and the end result was a similar total time. Bucket rendering, at least for CPU, is much faster than progressive.

    • @JonasNoell  11 months ago +1

      I also prefer bucket, but it makes things very hard to compare, as you can't just set the same render time for both engines - with bucket you can't define the time. So you would have to eyeball comparable noise levels between both engines and then compare the render times. In my testing that was quite difficult to achieve, as they have very different noise patterns, and even using the same settings will lead to different outcomes. So the progressive way is easier to compare: you just set the time and then see which has more or less noise.

  • @Ricardo-de9ju  10 months ago

    I still couldn't find the right GPU renderer. I tried RS, which is pretty good, but not even close to V-Ray's node shaders.

    • @JonasNoell  10 months ago

      Oh ok? I thought Redshift was the "closest" to V-Ray compared to the other native GPU renderers.

    • @Ricardo-de9ju  10 months ago

      @@JonasNoell I don't think so; there are still a lot of basic things missing that should be prioritized. Because it became a native tool in Cinema 4D, it had to be very well integrated.

    • @JonasNoell  10 months ago

      @@Ricardo-de9ju Yeah, but it was initially developed independently for multiple platforms before it was acquired by Maxon. Since then it has probably become a bit more focused on C4D :)

  • @ss54dk  10 months ago

    GPU cannot bake PBR textures, so it can't be used for industry workstation or gaming workflows.

  • @SerhatK.  11 months ago

    Are you considering switching to Corona Renderer?

    • @JonasNoell  11 months ago

      Hmm, I haven't really found a compelling reason to change yet - what can Corona do better?

    • @SerhatK.  11 months ago

      @@JonasNoell After an 8-year V-Ray adventure, I switched to Corona. The images look more realistic. Am I the only one who thinks this way?

    • @JonasNoell  11 months ago

      @@SerhatK. I have heard other people mention similar things. I will try it out. For me, V-Ray offers the most complete feature set of any renderer, and its VFB is unmatched by any other renderer. It has its quirks, but so far I have always found a way around them. I have tried Corona a few times and I also loved it! There is even a Corona vs. V-Ray video on my channel, though it is outdated now... :-)

  • @MrMadvillan  10 months ago

    It would be interesting to see a CPU vs GPU test on an environment with lots of assets.

    • @JonasNoell  10 months ago

      Yeah, right - will take that into account for future content.

  • @AbdullahRamzan-ci2cj  11 months ago

    Very nice video. One thing I want to ask about V-Ray that nobody has answered yet: I have used Redshift, Cycles, Arnold and Corona, and all of them give realistic results on the same scene with a single light. But V-Ray looks raw, off, unreal, fake, sharp and dark, like a PS2 game. I always have to change the post settings, apply some filmic filters and beauty settings, and then spend hours in After Effects on render elements, and it still looks bad compared to all of the other render engines I just mentioned, which need no extra work at all. I have never got a realistic result with V-Ray. How do you get a realistic render in V-Ray? Can you tell me, bro? It's been years now and I am tired. I know it's a ray-traced, biased engine, but it always feels like an archviz render, never soft - Redshift is also a biased ray-traced engine, but it still gives a very soft, lively result.

    • @JonasNoell  11 months ago

      Hi, you can check out my "Basic lighting for beginners" tutorial, which you can find on my channel. You have to set up tone mapping yourself in V-Ray, while other renderers do this step by default in the background.

    • @AbdullahRamzan-ci2cj  11 months ago

      okay thanks bro@@JonasNoell

  • @markpeters6430  11 months ago

    In any given situation, GPU will always be significantly faster than CPU, but something this video doesn't touch on, and that seemingly no one ever mentions, is memory usage. The biggest, most expensive consumer GPUs only have 24 GB of memory, which severely limits the size of the scenes you can create. If you want more VRAM, it's going to cost you exponentially more. The scenes I create already take up around 40-50 GB of RAM. Two RTX 4090 GPUs for a combined 48 GB of VRAM would cost a lot more than my 3970X with 256 GB of RAM.
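Whether a given scene fits in 24 GB can be estimated with back-of-envelope arithmetic before committing to GPU. All constants below are rough illustrative assumptions; real V-Ray GPU memory use also includes acceleration structures, frame buffers and (with the out-of-core option) paging behavior.

```python
GB = 1024 ** 3

def scene_bytes(n_triangles, texture_pixels,
                bytes_per_tri=64, bytes_per_texel=4):
    """Crude footprint estimate: geometry plus uncompressed RGBA8 textures."""
    return n_triangles * bytes_per_tri + texture_pixels * bytes_per_texel

estimate = scene_bytes(
    n_triangles=50_000_000,            # heavy archviz-style geometry
    texture_pixels=80 * 4096 * 4096,   # roughly eighty 4K texture maps
)
fits_24gb = estimate < 24 * GB         # about 8.6 GB here, so it fits
```

Even a fairly heavy scene by these rough numbers lands well under 24 GB, which matches the earlier comment that the VRAM ceiling is higher than raw RAM figures suggest, though 40-50 GB scenes can still blow past it.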

    • @JonasNoell  11 months ago +1

      You are right - the memory limitation is of course an important topic that I should have mentioned. If your average scene easily takes up more than 50 GB of memory, then right now CPU would be the better option. Or investigate whether there are ways to reduce the memory usage (instances, scattering, proxies, etc.). I also saw there is an (experimental) out-of-core option in the settings that could help in situations where memory is short. Could be worth a try.
      The great thing with V-Ray is that you can (in theory) just switch between both engines if one starts to cause issues...

    • @matiasbenavidesdigitalvisu9511  11 months ago

      There are NVIDIA "industrial" GPUs that have 48 GB each (RTX A6000), but they cost a lot more.

    • @markpeters6430  11 months ago

      @@matiasbenavidesdigitalvisu9511 Yeah, those were the ones I was referring to when I said they cost exponentially more. They're like 4 times the price of a 4090, if I'm not mistaken. They're preposterously expensive.

    • @JonasNoell  11 months ago

      @markpeters6430 Because one is marketed to a customer group with less money (GeForce: "Gamers") and one to a customer group with deeper pockets ("Professionals") 😀 Nonetheless, if your main concern is RAM usage, then GPU is not the best option right now. In my day-to-day work my scenes would only very, very rarely require more than 24 GB of memory. So everyone's use case is different.

    • @markpeters6430  11 months ago +1

      @@JonasNoell Yup.
      Also, I'm a hobbyist. I mostly just use 3ds Max as a creative outlet. I'm not too worried about doing things efficiently or optimised. I just create stuff for myself and because I enjoy it.

  • @Rammahkhalid  11 months ago

    Aside from the comparison, you are the best at managing the viewport and windows 😂😂😅

  • @mukeshsuthar3627  3 months ago

    My PC restarts when I press the render button - a normal 3ds Max V-Ray render from your free online tutorials, a normal bathroom scene with 4 lights and tile materials.
    I have an Intel i7 7th gen,
    16 GB x 2 RAM,
    no graphics card,
    a 512 GB SSD and a Deepcool air cooler,
    a 1 TB HDD,
    and a 750 W power supply.
    It has 4 cores and 8 threads.
    If I add an RTX 3060 GPU, will it all be good?

  • @juanromero-fi2cf  11 months ago

    After rendering on GPU for years, I really will not go back to CPU... only for simulation.

    • @JonasNoell  11 months ago

      Compared to many other comments, it's refreshing to hear that some people also switch to GPU and don't regret it. Can I ask what area you work in? I have the feeling most people who complain that GPU is not suitable for their tasks work in archviz, where you regularly have to work with scenes that have huge amounts of textures, geometry or other resources. So I would like to know what projects you use it for.

    • @juanromero-fi2cf  11 months ago +1

      @@JonasNoell VFX, fluids and also set extension. Tools like Unreal, Redshift and even Chaos Vantage are game changers.

    • @JonasNoell  11 months ago

      @@juanromero-fi2cf Thx! Do you experience unreasonably long loading times for textures and resources while using GPU in your line of work? A lot of comments seem to mention that.

    • @juanromero-fi2cf  11 months ago

      @@JonasNoell Not really - even big scenes take like 60 secs to 2 mins to load on GPU, but it compensates with massive speed in rendering.

  • @jaylee2656  11 months ago

    9:15 I can see the chair on the left side is weird 🤣🤣🤣🤣🤣🤣

    • @JonasNoell  11 months ago +1

      You are not supposed to look there :-)

    • @jaylee2656  11 months ago

      @@JonasNoell 🤣🤣🤣

  • @marcusrwalker  11 months ago +1

    Yeah, V-Ray GPU is useless for anything other than a shader ball.

    • @JonasNoell  11 months ago +1

      Could you elaborate a bit on that? :-)

    • @bgtubber  11 months ago +1

      @@JonasNoell IPR updates become very sluggish on even moderately complex scenes. Exteriors with lots of scattered (Forest Pack) geometry and displacement? Forget it. Yes, it will normally render faster than CPU, but look dev in real-world projects is painfully sloooow. I'm using a Threadripper 3970X 32-core, 64 GB RAM and an RTX 3090 24 GB.

    • @marcusrwalker  11 months ago +1

      @@JonasNoell I'm running an RTX 6000 and a 3990X, with Maya V-Ray and Houdini V-Ray. Most of the general shading nodes don't work on GPU, like fractal noise and even VRayDistance. Reflections/refractions are always different, and there are long GPU loading times before rendering. CPU on both Maya and Houdini gives consistent results and always supports all the latest nodes. V-Ray GPU is always way behind and not even useful for previews, as it doesn't represent the scene accurately. This is for high-end work.

  • @DjDime92  3 months ago

    GPU is much faster than CPU if the GPU is good, but I have unexplainable problems with GPU rendering in V-Ray 5, such as water ignoring 80% of the noise I set, some lights' reflections randomly getting pixelated, and the GPU literally ignoring opacity on some plant leaves. As a perfectionist I can only say FUCK NVIDIA GeForce RTX 3060. I will use it for dickhead clients, and CPU (AMD Ryzen 5 5600X, 6 cores), which is like 20x slower, for my portfolio. Either way, I'm quite annoyed by the unexplainable difference, and FUCK this era of technology, I hate it. Excuse my language; it's rather hard being stupid and I just can't find any solution to this...