I tried making UE5's NANITE in BLENDER!

  • Published: 17 Jan 2025

Comments • 175

  • @jamescombridgeart
    @jamescombridgeart  2 месяца назад +29

    Thanks for hanging out folks! This quick node setup is definitely just for fun and not a 1:1 nanite (relax, I'm not that smart!). Someone has also rightly pointed out that this method is only preferable when using lots of unique meshes. For any repeated meshes, using linked objects (Alt+D) is still way more performance friendly! And texture and UV friendly! So take with a grain of salt and follow other optimization best practices! I hope blender eventually has some kind of native functionality like nanite to help with viewport performance though!
    Do check out a recent vid made by 'stache' on scene optimisation. It's excellent!

    • @spiralingspiral72
      @spiralingspiral72 2 месяца назад +2

      the best solution would have to be a decimate node built into geometry nodes itself

    • @IIIDemon
      @IIIDemon 2 месяца назад +8

      yeah calling this "nanite" is a bit over the top, but gets the idea across.
      what nanite actually does involves a bunch more pre-calculation. they break models into chunks and swap the chunks, with clean 'seams' between the mesh chunks. it avoids the jitter that comes from most progressive decimation LoD techniques, and takes care of the processing problem that was slowing you down. i have no idea how you would implement that in blender, but it's probably more than geometry nodes can handle.
      this is a pretty cool basic progressive decimation technique though. good video.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +2

      @@IIIDemon yeah, fair call. I might have got a bit overexcited by the concept 😅

  • @musikdoktor
    @musikdoktor 2 месяца назад +266

    This is not Nanite, it's just dynamic LOD.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +49

      True! Just a fun experiment 🤓

    • @wonderboy75
      @wonderboy75 2 месяца назад +2

      Was going to say…

    • @MarcoCapelli74
      @MarcoCapelli74 2 месяца назад +10

      He did add [Kind Of] in the title.

    • @wonderboy75
      @wonderboy75 2 месяца назад +16

      @@musikdoktor While it's a fun experiment, I wonder if it actually improves performance, or whether it requires more processing to run this dynamically?

    • @nathanfarley6250
      @nathanfarley6250 2 месяца назад +13

      That's literally what nanite is

  • @wndr0
    @wndr0 2 месяца назад +37

    I don't know if someone already said this, but at 4:00 you can use the “Active Camera” node and plug it into the Object Info node's object socket, so it will always work with whichever camera you're using

  • @bean_mhm
    @bean_mhm 2 месяца назад +38

    Also, you should probably use 1 / distance to get a more "logarithmic" falloff, because the difference between distances of 100 and 101 matters way less than between 1 and 2
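
    A minimal sketch of that 1/distance idea in plain Python (the names and constants are illustrative, not from the video):

    def merge_threshold(distance, near=1.0, max_threshold=0.2):
        # Map camera distance to a Merge by Distance threshold with a
        # 1/distance falloff: detail changes quickly near the camera and
        # flattens out far away, unlike a linear Map Range.
        if distance <= near:
            return 0.0                # keep full detail up close
        detail = near / distance      # 1.0 at 'near', approaches 0 far away
        return max_threshold * (1.0 - detail)

    With these assumed numbers a distance of 2 already gives half the maximum threshold, while going from 100 to 101 barely changes it.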

  • @RQ321
    @RQ321 2 месяца назад +25

    For the problem of the modifier recalculating decimation for every asset in the scene, you could probably create your own LODs (Levels of Detail) in a collection, and then use geometry nodes to decide which one to display based on distance. This would be similar to traditional LODs in a game engine rather than a smooth transition; game engines traditionally use roughly 4-5 levels of decimation, but you can decide for yourself how many you actually need and place them in a collection.
    This way you don't recalculate the decimation for all assets on the fly. It's already precomputed, and the geometry nodes just decide which LOD to display at what distance.
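
    A rough sketch of that idea in plain Python, with hypothetical distance thresholds (in geometry nodes this would drive a switch between the precomputed LOD meshes):

    def pick_lod(distance, thresholds=(5.0, 15.0, 40.0, 100.0)):
        # thresholds[i] is the camera distance at which LOD i is swapped
        # for LOD i+1 (made-up values; tune per asset).
        for lod, limit in enumerate(thresholds):
            if distance < limit:
                return lod
        return len(thresholds)  # coarsest LOD beyond the last threshold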

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Huh! That's a cool idea. I'm sure there'd be a way to switch between them!

    • @P9_STUDIO
      @P9_STUDIO 2 месяца назад +1

      Cool... I wonder if that switch from one LOD to another would be "very" noticeable. Anyway, I would love to see something like that on a scene full of meshes.

  • @globglob3D
    @globglob3D 2 месяца назад +15

    Cool idea but the execution has a few issues.
    1 - You do not need this weird setup with Geometry Proximity and the Cube. What you want is the distance between the camera and the object origin, so just calculate that: take two Object Info nodes, use the Self Object node and the camera as inputs, and get the Distance between the two positions with a Vector Math node.
    2 - There is an Active Camera node for the Object Info input, so there's no need to specify your camera object in the node group.
    3 - This is not very useful at the moment because it gets laggy: every time you move the camera, geometry nodes recompute every object from its full 1-million polycount down to the currently needed polycount. To make it more useful I'd first bake a few LODs (all of this can be done procedurally in geometry nodes on frame 1) and cache them. Then, when calculating the merge distance, I'd retrieve the closest cached LOD that has a higher density than what's required and merge by distance from that LOD instead of from the high poly. I'd also add steps so it doesn't recalculate for tiny movements but only every meter, for example (and I wouldn't step it linearly but exponentially as the camera gets closer to the object; see the sketch below).
    More on the Geometry Proximity and the Cube:
    What your setup is doing is looking at the Cube and asking "What's the closest face on the Cube to the camera location?", or "Which face has the shortest distance to that camera?". Then you retrieve that distance and use it to Merge by Distance. That's what Geometry Proximity does: it retrieves the closest mesh component. I hope it's obvious why this is not ideal.
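
    The exponential stepping mentioned in point 3, sketched in plain Python (the helper name and constants are assumptions, not from the video):

    import math

    def stepped_distance(distance, near=1.0, base=2.0):
        # Snap the camera distance to powers of 'base', so the merge threshold
        # only updates when the camera crosses a step boundary rather than on
        # every tiny move. Steps are fine near the object, coarse far away.
        if distance <= near:
            return near
        step = math.floor(math.log(distance / near, base))
        return near * base ** step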

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Great notes! Thanks for taking the time to respond! I'd love to see a better version of my crude attempt, that's for sure! Do let me know if you find anything cool!

  • @kolupsy
    @kolupsy 2 месяца назад +11

    This is Nanite with a huge asterisk. If you render out an animation, I wonder if this would even be faster than using persistent data rendering, aside from the bad performance while moving the camera. Also, one of the Nanite traits you could have easily replicated is that it targets a specific geometry density on screen, not some arbitrary decimation based on linear distance to the camera.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +3

      Very true! I may have got over excited. If you figure out a method to do what you described I'd love to know! 🙏

  • @ravenweikel5974
    @ravenweikel5974 2 месяца назад +4

    This is better than nanite, you FOOL! You’ve already WON!

  • @dzmigo2649
    @dzmigo2649 2 месяца назад +28

    I was wondering why Blender doesn't have Nanite or Lumen techniques. These could be incredibly helpful for reducing render times. I'm definitely going to use this! Thanks a lot for sharing this technique!

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +4

      Thanks! I hope they do some day! This is definitely a crude workaround in comparison. Eevee is getting closer and closer to being realtime and behaving like Lumen though. So at least that I can see being a reality!

    • @pczone7641
      @pczone7641 2 месяца назад +15

      It's not a realtime program, so accuracy and quality are more important than speed! Nanite is just a fancy automated LOD system, and Lumen is just a real-time lighting system; there's no benefit to them in a 3D tool like Blender, especially Lumen

    • @Stimes
      @Stimes 2 месяца назад +5

      It could land soon once the Vulkan backend is fully implemented in eevee-next :) See what Jeroen Bakker is doing with meshlets (geometry streaming) for Vulkan

    • @Nevil_Tan
      @Nevil_Tan 2 месяца назад +6

      We have a group working with Unreal in our studio. They use Nanite only for viewport optimization and disable it for the final render because it causes a lot of unwanted flicker and small jumps on mesh parts. Nanite is a great tool for gaming, but it's not good for animation.

    • @johnnymartinez8668
      @johnnymartinez8668 2 месяца назад +3

      Might be because Epic has millions to spend on development, but not sure 🤔

  • @bean_mhm
    @bean_mhm 2 месяца назад +3

    1:30 btw you could just add a Position node and an Object Info node (with the object being the camera). The Object Info node has a Location output, so just add a Vector Math node set to Distance and plug in the position and the camera's location.

    • @anonymousanonymous2284
      @anonymousanonymous2284 2 месяца назад

      That would cause the output to be a field, and the merge-by-distance node only accepts single values in the distance input. He could take the length of the Location output though.

    • @bean_mhm
      @bean_mhm 2 месяца назад

      @anonymousanonymous2284 then you could add another Object Info set to self (this object) and use its location, instead of the Position node.

    • @anonymousanonymous2284
      @anonymousanonymous2284 2 месяца назад

      @@bean_mhm That's one way to do it. Assuming that both of the Object Info nodes are set to Original, it should be functionally identical to plugging the Location output of an Object Info node set to Relative into a Vector Math node using the Length operator. I just have a tendency to use fewer nodes if there are two ways of doing things, even though I don't think it makes any difference as far as performance goes.

    • @bean_mhm
      @bean_mhm 2 месяца назад

      @@anonymousanonymous2284 good point

  • @MaxChe
    @MaxChe 2 месяца назад +10

    Congratulations, you have reinvented LODs
    Good experiment, either way it's a handy way to optimize, thanks!

  • @alediazofficial2562
    @alediazofficial2562 2 месяца назад +8

    Excellent work mate! I had a feeling something like this ought to be possible within Blender. It isn't really the same as Nanite, but yeah, kind of in principle. Automatically adjusting polygon count based on distance from the camera is a blessing for those of us who have done it manually. I'm still new to geometry nodes, but I have seen enough to understand Blender's full potential hasn't been tapped into just yet! Keep it going 🙏🏽

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +2

      Yeah definitely not a 1:1 transfer lol. Just for fun! I love making fun little tools like this that can speed up my workflow 😊. Thanks for stopping by!

    • @alediazofficial2562
      @alediazofficial2562 2 месяца назад +1

      @ Well in the end it did prove to make a difference, especially in rendering time. That 2 second difference is exponential. When one is attempting to render hundreds/over a thousand frames, it does add up to some time saved! Subbed 🙏🏽

  • @ENORMOUS26
    @ENORMOUS26 2 месяца назад +1

    That's a clever idea. Will it work with animations as well? If the camera recalculates on every frame there might be an issue, though I think it won't if we set a custom range. What are your thoughts on that?

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Yeah I definitely made it with static cam or still frames in mind, and using instances is probably still way more performance friendly lol

    • @ENORMOUS26
      @ENORMOUS26 2 месяца назад

      ​@@jamescombridgeart I see

  • @sander-wit
    @sander-wit 2 месяца назад +2

    To copy the geo nodes modifier to all objects you want to decimate, you can also select all the objects and use the 'Copy to Selected' button, hidden behind the small arrow next to the modifier name. This way the existing modifiers on the objects are kept. You can also expose the camera selection in the modifier panel by dragging the camera selection in the geo nodes editor to the group input, and do the same for the map range max value to make it easier to use. You can later change the value for multiple objects by selecting them and holding Alt before clicking the value to change it, which applies it to all selected objects.

  • @merseyviking
    @merseyviking 2 месяца назад +2

    The value of the decimate node isn't an arbitrary [0-1]; it's a distance between vertices in scene units. To improve this a bit, calculate the size of a pixel for each mesh (based on the centroid, or some other metric), and use that as the input to the decimate node. That way you can say "merge vertices that are about _n_ pixels apart".
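
    A sketch of that pixel-size calculation in plain Python, assuming a simple perspective camera; the FOV and resolution defaults are illustrative rather than read from a real scene:

    import math

    def pixel_world_size(distance, fov_y_deg=40.0, res_y=1080):
        # World-space size covered by one pixel at 'distance': the visible
        # height is 2 * distance * tan(fov_y / 2), divided by the vertical
        # resolution.
        fov_y = math.radians(fov_y_deg)
        return 2.0 * distance * math.tan(fov_y / 2.0) / res_y

    def merge_distance_in_pixels(distance, n_pixels=2.0):
        # Merge vertices that are roughly n_pixels apart on screen.
        return n_pixels * pixel_world_size(distance)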

  • @molbertan
    @molbertan 2 месяца назад +7

    This is very interesting, but unfortunately it doesn't work with instances. That is, instead of having 6 Alt+D copies of the wall (120k polygons in total), we will have 6 copies with 600k+ polygons. This solution may be suitable for one very high poly object, but not for a set of objects, because with this method we only increase the number of polygons with each new copy

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +3

      Hey great point, you're absolutely right, this method would only be preferable when using Unique meshes. Thanks for pointing this out 👍

  • @ak-gi3eu
    @ak-gi3eu 2 месяца назад +1

    Can you do a new way of culling? 🤷

  • @kaizu4914
    @kaizu4914 2 месяца назад +3

    I have one question: can I set the input object to 'Active Camera'?
    (Input - Scene - Active Camera)

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +2

      Maybe!? If you figure something out let me know! Not sure if geo nodes has something native for that!

    • @kaizu4914
      @kaizu4914 2 месяца назад +1

      @@jamescombridgeart Update: I tested it myself and the Active Camera node works!

  • @thereisonlyonefreeman
    @thereisonlyonefreeman 2 месяца назад +1

    tnx for the tutorial dude super useful

  • @CaioFelipekingpotato
    @CaioFelipekingpotato 2 месяца назад +2

    Genius! Thank you

  • @greentokyo
    @greentokyo 2 месяца назад +1

    Really cool and clever!

  • @SiCk949
    @SiCk949 2 месяца назад +1

    One question: why not use the "Camera Data" node (it has a distance value) instead of Object Info?
    I think this node refers to the active camera, so it would be more "out of the box" functionality.
    But I'm sure I'm missing something.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      No you aren't! I had no idea it existed to be perfectly honest. A few folks have pointed that out. Always something to learn!

    • @SiCk949
      @SiCk949 2 месяца назад

      @@jamescombridgeart I checked and "Camera Data" appears only in the Shader editor, not in geometry nodes (I don't know much more, as I'm not that knowledgeable).
      Don't know if it's possible or not!

  • @Whalester
    @Whalester 2 месяца назад +2

    But does it cause flickering in animation, or longer per-frame computation times, because of the geometry nodes updating every frame?

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Not sure to be perfectly honest! I haven't road tested that much, and I'm sure there are better ways to optimise besides this experiment!

  • @hugoantunesartwithblender
    @hugoantunesartwithblender 2 месяца назад +1

    Really great! And what about textures? Because textures can occupy much more memory than meshes

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Yeah this is a pretty crude experiment. A handy feature in Blender: somewhere in the render tab you can set the maximum texture resolution for the whole scene. Great for bringing down memory usage if you have a bunch of unnecessary 4k+ textures

  • @uhmmm_uhhh
    @uhmmm_uhhh 2 месяца назад +1

    This is cool. I wonder if there’s a way to decimate a mesh based on its size in the camera, that way you wouldn’t have to fine tune this for every mesh.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      That would be cool. Someone smarter than me, get on it!! 😂

  • @ScorgeRudess
    @ScorgeRudess 2 месяца назад +1

    This is amazing

  • @akuunreach
    @akuunreach 2 месяца назад +1

    @jamescombridgeart To get closer to nanite, you might try some way of adding more geometry as you get closer.
    Based on my understanding, nanite has "unlimited" geometric detail.
    While you'll likely be limited if you can't figure out a way to implement everything, it would be a bit closer than the dynamic LOD you have now.
    A very nice first attempt.

  • @brandosbucket
    @brandosbucket Месяц назад

    Is there a node that displays the result from another node, like a numerical value, rather than just anonymously piping it through?

    • @jamescombridgeart
      @jamescombridgeart  Месяц назад

      @@brandosbucket not that I'm aware of, but that's not saying much 😉. Besides checking the vertex count in the viewport stats, that is.

  • @pravinrenders
    @pravinrenders 2 месяца назад

    This is gold, thanks for sharing it for free. Thanks a lot to YouTubers like you :D

  • @ramilgr7467
    @ramilgr7467 2 месяца назад +1

    What to do with textures and UVs? Thanks!!!

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Ah sorry friend. Since I mainly make still frames for concept art, I hadn't considered many aspects of this node! It's pretty destructive unfortunately

  • @TristanNLD
    @TristanNLD 2 месяца назад +1

    Well done man

  • @phalhappy8612
    @phalhappy8612 2 месяца назад

    Can we do that with instanced objects? For example, we instance trees on the landscape and want to 'nanite' the trees.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Not sure! There have been some great suggestions in the comments already though! Could be worth looking into

    • @Nevil_Tan
      @Nevil_Tan 2 месяца назад

      For instanced objects it's better not to make various versions, because an instanced mesh is sent to the render engine once and the engine reuses the same data over and over.

    • @hugoantunesartwithblender
      @hugoantunesartwithblender 2 месяца назад

      For a forest it's better to use the billboard technique

  • @iratewatcher
    @iratewatcher 2 месяца назад +1

    it's just wow

  • @pravinrenders
    @pravinrenders 2 месяца назад

    When I plug min and max into the group input to access them from the modifier panel, it doesn't work the same. Why is that?

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Yeah I ran into this too. It's because when you drag it into the modifier, it takes those values on a per object/modifier basis. Whereas if you leave them in the geo nodes area, they stay dynamic / more global. Either would work technically, it just depends on your needs

  • @Tertion
    @Tertion 2 месяца назад +1

    I wonder how well this would work in UPBGE ..... ?

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      No idea, probably not very well I imagine lol. This was a very crude experiment made mainly with rendering still frames or a static camera in mind

  • @zarioka
    @zarioka 2 месяца назад +1

    I think every 3D software should have a Nanite-style function since UE5

    • @pansitostyle
      @pansitostyle 2 месяца назад +1

      afaik Nanite isn't open for everyone to use

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Totally!

    • @zarioka
      @zarioka 2 месяца назад

      @@pansitostyle it really should, because it would boost game character design even on low-spec PCs

  • @NonoGG
    @NonoGG 2 месяца назад +1

    Maybe use a simple plane for the modifier with a collection input. Hide the collection in the scene and in renders, and let the geo nodes rebuild the whole collection with dynamic LODs. That way you could link your camera easily. Then you should consider reducing geometry that's not in the viewport. At least add a baking node and bake it when there is camera movement.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Interesting approach! If you do end up experimenting with it let me know how it goes!

  • @HenrisKas
    @HenrisKas 2 месяца назад

    We need an animation test. For now, for a static frame / no camera movement it's cool, but Nanite's power is about animation / camera movement and real-time geo, or am I wrong? Either way, cool tutorial and good creative thinking.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Cheers. Yes you're right. I mainly use blender for still frames for concept art, so for me it has a niche use case. Not sure about other use cases. Definitely feel free to experiment and let me know how it goes!

  • @ДанилЧірва
    @ДанилЧірва 2 месяца назад

    So it slows down mesh calculation but potentially reduces memory consumption. It would be interesting to know if there is flickering in animation

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Possibly, I haven't tried an animation with it. Being mainly a concept artist I'm more of a single image guy! I imagine if you dialed it in to look good at the right distance it'd work ok 🤔

  • @federicolanzarotta9187
    @federicolanzarotta9187 14 дней назад

    Man I love this, but a 10% to 15% performance gain is not small or irrelevant, especially for the kind of scenes in which you'd need to use dynamic LOD

  • @Beryesa.
    @Beryesa. 2 месяца назад

    Definitely earned a deserved sub!

  • @Maxwell_Nexten
    @Maxwell_Nexten Месяц назад

    thanks!

  • @darkdoc
    @darkdoc 2 месяца назад

    this is so helpful, thank you

  • @BaumAuto77320
    @BaumAuto77320 2 месяца назад

    I think this is THE BEST tutorial! The only thing that kept getting in my way when building big scenes was the amount of lag, even in the viewport. Thanks to your tutorial that shouldn't be an issue anymore.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      This was more of a fun experiment really! I would highly recommend you look up a recent video made by the channel 'stache' on scene optimisation if you're having trouble with performance. It's recent, and very comprehensive!

    • @BaumAuto77320
      @BaumAuto77320 2 месяца назад

      @@jamescombridgeart thanks

  • @apatsa_basiteni
    @apatsa_basiteni 2 месяца назад

    Amazing thanks for this trick.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Cheers! Definitely not a replacement for instancing, but it was fun to figure this out either way 😊

  • @handdrawnbink
    @handdrawnbink 2 месяца назад

    Thank you so much for this 🙏🏻

  • @gswhitedevil6631
    @gswhitedevil6631 2 месяца назад +3

    Isn't creating an instance of that object a better option? 🤷

  • @CasualRandom50
    @CasualRandom50 2 месяца назад

    this would be the bomb for the game engine godot

  • @yoskkdkdk
    @yoskkdkdk 2 месяца назад

    there’s no gain in nanite without hardware support for it

  • @LudvikKoutnyArt
    @LudvikKoutnyArt 2 месяца назад +3

    It's not usable in any way. Modern viewports can handle a few dozen million triangles easily. And once you start exceeding that and the framerate starts dropping, using your merge by distance setup is going to cost a LOT more performance to calculate than the large triangle count. And also memory, since there will now be multiple permutations of mesh topology, and those meshes therefore can't be instanced. I know it's just for fun, but even within the context of fun it doesn't make much sense :)

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      True! Better to apply it on large scenes, if you do opt to use it at all.

  • @GameDevGeeks
    @GameDevGeeks 2 месяца назад

    this is fucking brilliant

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Haha thanks. I'm sure there's more comprehensive and powerful ways to leverage this setup. Have fun!

  • @antiimperialista
    @antiimperialista 2 месяца назад

    what do you mean by decimating?

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      It works similar to the 'decimate' modifier - so I found it natural to use the same terminology!

    • @antiimperialista
      @antiimperialista 2 месяца назад

      @@jamescombridgeart i see what you mean now thanks for the quick response much respect

  • @MammanAnimations
    @MammanAnimations 2 месяца назад

    Bro u just made a game changer

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Haha I don't know about that! instancing is probably still a better option depending on your needs ;) But I had fun figuring this out regardless!

  • @meshgen
    @meshgen Месяц назад

    jiggery-pokery

  • @kaizu4914
    @kaizu4914 2 месяца назад

    So Nanite basically auto-creates LODs instead of you manually making LODs

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Basically! There's some contention amongst devs I've spoken to over whether or not Nanite is properly optimized for this purpose in Unreal - but that's the concept of it, yes. Pretty cool though - especially if you hate making LODs 😅

    • @kaizu4914
      @kaizu4914 2 месяца назад +1

      @ I hear in some cases LODs increase file size and reduce performance, so Nanite can be an alternative method.
      Still, it takes some power for the decimation process in exchange for a reduced memory load.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Yeah, potentially some up-front cost in exchange for render efficiency. Same for LODs I suppose: an up-front cost (or storage cost, in a game's case) for better performance

  • @AllExistence
    @AllExistence Месяц назад

    Merging actually takes more time than rendering.

  • @SanOcelotl
    @SanOcelotl 2 месяца назад +1

    Not really the same as Nanite, but it's a cool setup

  • @xanzuls
    @xanzuls 2 месяца назад +3

    This is very cool, and yeah, it's not a 1:1 recreation of how Nanite works under the hood, but the idea of Nanite sums up as dynamic decimation based on distance from the camera, and that's what you did, kinda. Based on the official Nanite documentation, though, I think you could remake it much closer to how Nanite actually works. It would still be limited to a geonodes modifier and wouldn't change how Blender renders polygons underneath (unless you change Blender's source code), but it could still be a very useful tool for specific purposes.
    This is what I found in the official Nanite doc:
    "During import: meshes are analyzed and broken down into hierarchical clusters of triangle groups.
    During rendering: clusters are swapped on the fly at varying levels of detail based on the camera view, and connect perfectly without cracks to neighboring clusters within the same object. Data is streamed in on demand so that only visible detail needs to reside in memory. Nanite runs in its own rendering pass that completely bypasses traditional draw calls. Visualization modes can be used to inspect the Nanite pipeline."
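
    As a rough illustration of the quoted idea (clusters swapped based on how much detail is actually visible on screen), here is a plain-Python sketch; the per-LOD error values and camera numbers are hypothetical, and this is not Epic's actual algorithm:

    import math

    def projected_error_px(world_error, distance, fov_y_deg=40.0, res_y=1080):
        # Approximate how many pixels a cluster's simplification error covers
        # at a given camera distance (simple pinhole camera model).
        pixels_per_unit = res_y / (2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0))
        return world_error * pixels_per_unit

    def pick_cluster_lod(lod_errors, distance, max_error_px=1.0):
        # lod_errors: precomputed world-space error per LOD, ordered fine -> coarse.
        # Pick the coarsest level whose error still projects to under ~1 pixel.
        chosen = 0
        for lod, err in enumerate(lod_errors):
            if projected_error_px(err, distance) <= max_error_px:
                chosen = lod
        return chosen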

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +1

      Interesting, that explains why it's way more performant than this crude version haha

  • @TR-707
    @TR-707 2 месяца назад +1

    Tim Sweeney HATES this man - find out why in this video

  • @timeforsleepzzz
    @timeforsleepzzz 2 месяца назад +1

    this is so cool!

  • @juancarlospaco
    @juancarlospaco 2 месяца назад

    I did this decades ago with the BGE in Blender; it's just too slow. Also, Nanite works like voxels but on the GPU: if you lower the quality in Nanite you see the literal voxels in real time, and a photoscanned rock looks like a Minecraft rock.

  • @SillyandgoofyAnim8or
    @SillyandgoofyAnim8or 2 месяца назад

    decimate modifier

  • @niloytesla
    @niloytesla 2 месяца назад

    👌

  • @nikko3d
    @nikko3d 2 месяца назад

  • @emmanueliyege244
    @emmanueliyege244 Месяц назад

    You are a blender unreal engine bridge.

  • @SUVO_RAW
    @SUVO_RAW 2 месяца назад

    The purpose of nanite is to speed up work. The purpose of this sh&T is to choke your PC fast. lol

  • @shmuelisrl
    @shmuelisrl 2 месяца назад

    this isn't nanite, obviously.

  • @SuperSertyuio
    @SuperSertyuio 2 месяца назад

    So basically you remade microdisplacement!
    P.S: No, I am actually wrong! This would complement microdisplacement really well, since microdisplacement only adds more detail to close meshes but does not decimate far meshes. Also, microdisplacement relies on a displacement map!

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Yeah I get what you mean! Same same but different 😁

  • @rocksquared
    @rocksquared 2 месяца назад +3

    That's absolutely nothing like Nanite... it's just a basic decimator trick that can't cope with real-world needs.
    How would it cope with a 500-million-poly scene? It wouldn't, because it isn't designed to work with the high-density data where the optimization would actually make sense.
    If you render an animated version of the rock with the camera moving, how stable would it look frame to frame? It would be so jagged it'd be unwatchable.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад +2

      @@rocksquared you're 100% right. It's nowhere close to what proper nanite would be capable of, that's way above my pay grade 😅. In terms of a render, since it's offline rendering it wouldn't be noticeable in the final. Although with super dense scenes I'd always recommend optimising and applying modifiers tailored to the shot's needs.
      Sorry if I got your hopes up only to let you down friend! Maybe I shall tweak the title to avoid a similar experience for others

    • @syedsafisalam6799
      @syedsafisalam6799 2 месяца назад +3

      @@jamescombridgeart You replied with such sweetness and humility. I loved your video. Thanks for the video, and keep posting these types of tips and tricks for Blender, and also for UE5 if possible ❤

  • @ViTfilm
    @ViTfilm 2 месяца назад +1

    Millennials invent LODs from the 90s

  • @Metarig
    @Metarig 2 месяца назад

    It's fun to try, but I think it's misleading. This isn't Nanite, and it's not really useful; we could replicate it in other software, and it's not even new - we could do this decades ago. But what's the point? You lose UVs, and the frame rate takes a hit because constantly merging by distance affects performance more than using high-poly geometry. Plus, the results aren't predictable either.

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Yeah I'm starting to realise this! Thank you for the feedback. ♥️

  • @realhamza2001
    @realhamza2001 2 месяца назад +5

    I know this is unrelated but give the Quran a read, also I've always wondered if this is possible

    • @jamescombridgeart
      @jamescombridgeart  2 месяца назад

      Me too! I hope they make something more official like nanite eventually that speeds up scenes in real time

    • @poopiecon1489
      @poopiecon1489 2 месяца назад +1

      Why does this make you bring up the Quran??

    • @parasjain9134
      @parasjain9134 2 месяца назад +1

      Don't bring religion into this moron

    • @satyamanu2211
      @satyamanu2211 2 месяца назад +14

      Why do people bring religion into every single thing

    • @poopiecon1489
      @poopiecon1489 2 месяца назад

      @satyamanu2211 I don't understand...