Nanite | Inside Unreal

  • Published: 18 Dec 2024

Comments • 391

  • @DamielBE
    @DamielBE 3 years ago +236

    That short at the beginning was brilliant! It's rare to see horror in space without going all Dead Space or Alien. Really well done!

  • @333studios.
    @333studios. 3 years ago +452

    That intro film was legit scary.

    • @simongravel7407
      @simongravel7407 3 years ago +18

      Pretty good visual effects for a seemingly low-budget production! All the CGI was rendered in Unreal Editor???
      Edit: www.unrealengine.com/en-US/spotlights/behind-the-lens-of-the-short-film-challenge-australia

    • @333studios.
      @333studios. 3 years ago +12

      @@simongravel7407 If I'm not mistaken, everything was done in-engine. Such an amazing experience to work in. Makes creating art feel real, and your ideas can just come alive without having to go to school for years.

    • @Rotimusic
      @Rotimusic 3 years ago +4

      @@333studios. I can't tell whether the human in this short film is CGI or real...?

    • @333studios.
      @333studios. 3 years ago +10

      @@Rotimusic Looks like he was shot for real against a greenscreen and composited into the scene, like filmmakers do! It looks amazing 😍

    • @kokoze
      @kokoze 3 years ago +10

      It was cool but the ending was hella goofy if you ask me

  • @PamirTea
    @PamirTea 3 years ago +73

    Talk starts at 8:00

  • @NashMuhandes
    @NashMuhandes 3 years ago +89

    I'd like to see a full film made out of that short at the start. That was amazing!

    • @kippesolo8941
      @kippesolo8941 3 years ago +1

      for unreal

    • @prism37
      @prism37 3 years ago +1

      it was kinda scary ngl

    • @rose348
      @rose348 3 years ago +1

      It's been a while since horror legitimately gave me the creeps like that

    • @SreenikethanI
      @SreenikethanI 3 years ago

      @@kippesolo8941 bruh

  • @UnderfundedScientist
    @UnderfundedScientist 3 years ago +58

    Absolutely legendary, so excited for full release

  • @CanadianReset
    @CanadianReset 3 years ago +22

    This was great! Very well explained and easy to understand. I'd love to see a part 2 where Brian goes deeper on how Nanite works.

  • @Synkhan
    @Synkhan 3 years ago +12

    Honestly thought 36:00 was a high-res photo... then the camera started moving later. I am honestly mind-blown by the level of detail achieved. Now all that's left is physics and foliage.

    • @gulanhem9495
      @gulanhem9495 3 years ago

      Yes, it's unbelievable.

  • @grantdillon3420
    @grantdillon3420 3 years ago +34

    Amazing, amazing. The level of brilliance here, working from first principles to develop a generalized, scalable graphics engine, is just genius. This is a quantum leap for real-time graphics, and likely represents the final prototype for completely photorealistic real-time imagery.

    • @P_Luke_12345
      @P_Luke_12345 3 years ago

      Yeah. You can basically photogrammetrize your neighbourhood -> clean up the models -> place on a level -> welcome to your virtual neighbourhood. No more bothering with LOD models. Just have one version and you're set.

    • @grantdillon3420
      @grantdillon3420 3 years ago

      @@P_Luke_12345 will be interesting for AR whenever that technology gets to a point where you can just wear it around anywhere

    • @teranyan
      @teranyan 3 years ago

      Raytracing is better than this and is becoming a reality soon in terms of real time cg. While this tech is brilliant, imo it came too late because it will soon be obsolete compared to RT anyways.

    • @jojolafrite90
      @jojolafrite90 3 years ago

      Final? Even now there's room to improve; they said it themselves, like better compression.

    • @teranyan
      @teranyan 3 years ago

      @Ceki Nanite is not related to lighting, but raytracing is. Nanite solves the geometry resolution problem while RT solves both the geometry and the lighting accuracy problem. So RT is just a better solution

  • @MrTravoltino
    @MrTravoltino 3 years ago +68

    Euclideon with their "unlimited detail" just quietly crying in the corner...

    • @shayhan6227
      @shayhan6227 3 years ago +8

      I never understood what Euclideon was trying to do. Were they trying to create a sort of point-cloud system compressed into a hash function?

    • @KillahMate
      @KillahMate 3 years ago +8

      @@shayhan6227 As far as I can tell they were mostly trying to defraud the Australian government and various private investors. But as for what they were _saying_ they're doing, if you parse the muddy marketing language (because they never provided anything close to a technical breakdown) it _looks_ like some kind of virtualized voxel system with a custom ray-marched renderer. So funnily enough quite similar to what UE5 is doing, except with voxels instead of polygons and signed distance fields.

    • @chrisf8021
      @chrisf8021 8 months ago +2

      @@shayhan6227 They achieved what Nanite achieved a decade earlier and were subjected to worse hostility than UE5 has been. While the application of their concept and achievement couldn't be realised because of the form it took (point cloud data rather than triangles), their pioneering efforts were well ahead of their time and poorly appreciated by the rabble. Solving the geometry problem through efficiency rather than brute force, as everyone else was doing, was visionary, and we should all be thankful Nanite clones their original concept.

    •  2 months ago +1

      @@KillahMate Love when people know technical 3D stuff. I never dived into this, staying a web developer at most :-D

  • @Cadmeus
    @Cadmeus 3 years ago +35

    As a hobbyist, raising a glass right now to not fussing around with baking normal maps! It's a dream come true, I'm sure I'm not the only one who found this an annoying technical hurdle when learning the 3d asset pipeline.

    • @chancemcclendon3906
      @chancemcclendon3906 3 years ago +1

      It's not tragic once you get good at it, and it can be really satisfying when you get it to look good, but thinking about not having to do it anymore makes me happy.

    • @mgodoi3891
      @mgodoi3891 3 years ago +8

      @@chancemcclendon3906 It's tragic for me and many more, I believe. Every time I retopologized a model or created/baked a lightmap, I felt that I was throwing away precious time of my life. Now we can focus more on telling a story. The only thing that actually matters.

    • @shimmentakezo1196
      @shimmentakezo1196 3 years ago

      @@mgodoi3891 I feel the same; it is a pain in the brain and I never manage to do it. Then texturing in Substance or Mixer is impossible since I have no curvature, no normals. Also creating geometry is difficult, redoing the process for LODs and collision. Because of all that, I never managed to create a full asset, despite having spent years learning and practicing. I hope Nanite will help me, but I'm not sure lol. I'm trying to sculpt an asset, but I can't understand how to use sculpting to make a non-organic asset; there is zero precision in what I'm doing.
      And what about animating an object with 50 million triangles?

    • @shimmentakezo1196
      @shimmentakezo1196 3 years ago

      @@kidmosey But getting decent geometry is a thing in itself!

    • @mojojojo6292
      @mojojojo6292 3 years ago

      @@mgodoi3891 You could have just used the real time renderer in UE4. Obviously it wasn't as good as lumen but it was still good and allowed real time workflows without annoying light maps or baking times. If you have an RTX card you could even use the RT lighting instead of lightmaps. Unless you are releasing a project that needs to maximize performance (and static lighting quality) I wouldn't be wasting my time with light maps and baking. Learn the real time lighting workflow. It's far quicker for messing around while learning other aspects and seeing instant results. Obviously it can't do global illumination but you can somewhat fake that. Lumen solves that problem.

  • @massivechafe
    @massivechafe 3 years ago +48

    This really reminds me of that idea from Euclideon. (infinite detail real-time engine) though this is so much better lmao

    • @simongravel7407
      @simongravel7407 3 years ago +4

      Voxels are cool but f* em'. ;)

    • @velocityra
      @velocityra 3 years ago +7

      The difference with Euclideon is that this is done by people who actually know what they're talking about.

    • @aladdin8623
      @aladdin8623 3 years ago +16

      Actually, at 55:50 Brian says they considered voxels as well to virtualize geometry, but for the sake of computer-graphics workflows and compatibility with artists' existing assets and tools, they stuck with triangles.
      That compromise is OK, but it also undermines a true revolution in computer graphics, which voxels pose with their many advantages over polygons. Some of them are deforming, destructing, and bending of voxels, and proper physics calculation in general, which you can't do exactly with static meshes in Nanite.
      Also, UE5 already borrows some advantages of voxels, at least to some extent, by using voxel cone tracing and signed distance field global illumination in Lumen. And Nanite seems to use signed distance fields as well. It is exciting how Unreal Engine and other engines are going to develop further. My guess for the market is that after further graphical improvements near photorealism, customers are going to demand much more realistic physics. Read the many critical customer comments and even shitstorms on popular sports games and their developers. They seem to be fed up with the nth iteration of graphically slightly improved sports games every year, while the players' movements still look artificial and unsatisfying.

    • @alan83251
      @alan83251 3 years ago

      @@aladdin8623 Is there any reason the current implementation of nanite precludes voxels from being used as well, for example to represent volumetric objects and generate triangle- based and nanite-enabled meshes where the voxels say a particular material should be?

    • @aladdin8623
      @aladdin8623 3 years ago +2

      @@alan83251 In short, I don't know :) I can only guess here. Since Nanite also uses signed distance fields, which are related to voxels, mixing in volumetric objects might be even easier. In general it should definitely be possible to combine polygons and voxels, as we have already seen in several games and applications in the past. And this is good news, and important for a possible transition or coexistence at least.
      Among other notable caveats of combining the two techniques, the light-leaking problem should be mentioned. Walls made out of polygons have to be thicker than 10 cm to avoid it. This and other drawbacks can be read about in the Lumen documentation. If my eyes are not mistaken, light leakage becomes visible in the UE5 "Valley of the Ancient" demo, for example. Every time the female character Echo lifts her energy ball behind her back for an energy strike, the emitted light from that ball shouldn't reach her legs, because her body is in the way. But it does. If her body were made out of voxels, there might be proper solutions to prevent light leakage; for example, one could place light-blocking voxels under her skin.
      Another promising approach for preventing light leakage is based on the aforementioned SDFs. To learn more about that, look up SDFGI as used in Godot 4, which was awarded a grant by Epic, by the way. Lumen seems to use that partially already, among other GI solutions.
      When it comes to SDFs in general and raymarching on them, I highly recommend the work of Inigo Quilez. What he achieves with just a bunch of algorithms in that regard is truly remarkable and amazing.
      ruclips.net/video/Jf9MlYtkJM0/видео.html
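The SDF ray-marching idea discussed in this thread can be sketched in a few lines of Python. This is a generic sphere-tracing illustration in the style popularized by Inigo Quilez; the scene (a single sphere) and all function names are assumptions for the example, not Lumen's or Godot's actual implementation:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t      # hit: distance travelled along the ray
        t += d            # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None           # miss

# Ray from the origin along +z toward a unit sphere centered at z = 5
hit = raymarch((0, 0, 0), (0, 0, 1), lambda p: sphere_sdf(p, (0, 0, 5), 1.0))
```

The key property is that each step is guaranteed not to overshoot any surface, which is why SDFs make raymarching cheap compared with brute-force stepping.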

  • @theishiopian68
    @theishiopian68 3 years ago +14

    "It looks like rocks because there are"
    The funny part is that this is so profound from a gamedev perspective

  • @oholimoli
    @oholimoli 3 years ago +5

    Brian should get a Nobel Prize for his work!

  • @needsmoretacos4807
    @needsmoretacos4807 9 months ago

    Thank you so much for this and your SIGGRAPH presentation a few years back! These two videos/papers along with the UE5 documentation have been a lifesaver for me and my company! I am the TA responsible for leading the deep dives, documentation, and pipeline best practices for Nanite across our entire company, and without this and the SIGGRAPH presentation I'd be much further back in my deeper understanding of the concepts, the logic, and our own group tests.

  • @randallcromer66
    @randallcromer66 3 years ago +5

    Unreal 5 is going to change everything, because it's the single biggest graphics-engine advancement ever. Welcome to the future of graphics; it's looking bright for all content creators. Thanks to everyone who worked on Unreal 5, you guys are freaking awesome.

  • @RONI-EtailaxiA
    @RONI-EtailaxiA 3 years ago +19

    1:31:39 WTF, that really shows how good Nanite is

    • @QUANT_PAPA
      @QUANT_PAPA 3 years ago +4

      yeah, that was the real juice

  • @o2cokolja
    @o2cokolja 3 years ago +45

    And it's all for FREE! How crazy is that?

  • @TruthIsKey369
    @TruthIsKey369 3 years ago +3

    Euclideon was first with Unlimited Detail, but you took it to the next level, Epic Games. Well done!

  • @RONI-EtailaxiA
    @RONI-EtailaxiA 3 years ago +11

    First time I've watched a full intro film; it really grabbed my attention. Good job, man!

  • @chucktrier
    @chucktrier 3 years ago +2

    I love that you call 128-tri patches "clusters"; it brings to mind the Evans talk. I am blown away, and your amazing work is really inspiring. And I am sold on you using your own rasterizer.

  • @LNYuiko
    @LNYuiko 3 years ago +12

    What would be amazing to see is procedural generation of landscapes with Nanite. A level design artist could simply enter the parameters: what kind of environment, biome, topography, paths or roads if any, etc. Then the engine using Nanite and the Quixel library will generate a landscape which the artist could then just fine tune and make ready for production. This tech is already being tested in 3d renderers through use of algorithms. Hopefully one day something like it will be in the engine.

    • @Naru1305
      @Naru1305 3 years ago +6

      You could build something like that today using Houdini and Houdini Engine and then let Nanite do its magic I think. It would need a fairly complex HDA but nothing too hard if you build it up step by step.

  • @jonas_dero
    @jonas_dero 3 years ago

    The technology is amazing, that's for sure. It doesn't seem like normal baking will be a thing of the past, simply because a baked map still packs much more detail than, say, a 2-million-poly mesh. I'm pretty sure Megascans high-poly assets are not 'source scans' (which I assume are closer to the 50-100M-poly range). Additional microdetail can be extracted from 16k albedo with good results. Also, good luck UVing those assets with millions and millions of polys if you ever want to do any map editing, which is impossible without custom UVs. It seems like we will continue to decimate and bake for the time being, but simply being able to use more high-poly assets with Nanite is really a great improvement! Props to these geniuses!

  • @tk5887
    @tk5887 3 years ago +1

    Thank you for this conversation. I am witnessing the gentlemen who are at the tip of the spear. Awesome to see the stewards are intelligent and powerful at their craft.

  • @ClowdyHowdy
    @ClowdyHowdy 3 years ago +1

    I'm not into game design and I don't know a lot about it, but I'm still fascinated by this enough to watch all of it. Like some sort of automated and optimized LOD system that scales like crazy.

  • @FaithfulMC
    @FaithfulMC 3 years ago

    This is revolutionary, the comments talk about the intro, but the rest of the video is UNREAL!!!

  • @Ned7567
    @Ned7567 3 years ago

    Man, that short was incredible!

  • @CaptainSchlockler
    @CaptainSchlockler 3 years ago +6

    This is going to be even more insane when GPU vendors start implementing hardware to support some of the new techniques.

  • @jerobarraco
    @jerobarraco a year ago

    2:08:09 got me thinking. Masking is mostly an optimization for the old general way of drawing triangles (since you can make an arbitrarily shaped hole in a plane formed by just one triangle).
    But with Nanite you can just make the hole with triangles. It may or may not be more work for the artists, but in theory it should behave like a masked surface, and I don't think it will be much slower, since Nanite is in theory a relatively fixed cost per screen size (and materials).
    I'd prefer they don't add that support if it's going to lower performance and you can work around it with actual geometry.

  • @jojolafrite90
    @jojolafrite90 3 years ago +1

    This is quite literally game changing.

  • @pochcalpadlos
    @pochcalpadlos 3 years ago +2

    Triangles! Triangles everywhere!

  • @zubetto85
    @zubetto85 3 years ago +1

    As far as I understand, all this "magic" is possible mainly (if not entirely) due to the new data structure and associated algorithms, rather than new hardware features. If so, this is such an amazing and remarkable invention as, for example, quicksort!!!

  • @PlusEqual
    @PlusEqual 3 years ago

    Intro film got my attention, good work!

  • @gabocavallaro
    @gabocavallaro 3 years ago +3

    So amazing!! thanks for the deep explanation!!

  • @ThisIsNotWhatItLooksLik
    @ThisIsNotWhatItLooksLik 3 years ago +9

    Is Nanite patented technology or will other studios like Blender be able to replicate it now that they know it is possible?

    • @ThisIsNotWhatItLooksLik
      @ThisIsNotWhatItLooksLik 3 years ago +1

      @Scotland Dobson That is reassuring. It will be interesting to see how long others take to get similar results. Tim Sweeney mentioned that it was not easy.

    • @kazioo2
      @kazioo2 3 years ago

      It doesn't make much sense for tools like Blender to use tech like Nanite. Its trade-off is sacrificing flexibility and editability (super important for Blender; Nanite meshes are cooked before they can be rendered) for increased real-time performance (not that important there).

    • @ThisIsNotWhatItLooksLik
      @ThisIsNotWhatItLooksLik 3 years ago

      @@kazioo2 The baking to Nanite would be done just before rendering, to keep editing flexibility. UE5 seemed to import as Nanite really fast. When you render an animation that takes 10+ minutes per frame it should make sense, not to mention renders that take 10+ hours. It would also make 4k+ rendering possible cost/time-wise.

  • @MidnightSt
    @MidnightSt 3 years ago +1

    When I first saw people demonstrating and using Nanite, in the very first "what is Nanite and how to use it" videos that got out there, I was annoyed, because none of them were really able to explain how it works; they just gave some weird claims about it that made it seem like impossible magic.
    Now that I've seen this whole GDC talk (because that's basically what it is)... now I understand.
    And yeah, it basically IS magic. Amazing work. Amazing piece of tech.

  • @saish24
    @saish24 3 years ago

    From 40:09 is what we all actually want to hear, LOUD AND CLEAR!! You process only what you SEE!!

  • @RM_VFX
    @RM_VFX 3 years ago

    As a film environment artist, I've been flirting with UE4 for a few years now. Seeing the same type of lighting tech and mesh quality in UE5 we've had in film for over a decade finally has me diving in for real. Finally feels like a serious filmmaking tool.

  • @CarlosBaraza
    @CarlosBaraza 3 years ago

    2:22:04 why don't you add a label to the editor with the final production size of the asset?

  • @juniorcudjoe8471
    @juniorcudjoe8471 3 years ago +1

    7:24 where is this project from 🤔🤔🤔🤔🤔🤔🤔

  • @outlander234
    @outlander234 3 years ago +1

    So is there a feature that can cull all the overlapping geometry once you are satisfied with the placements (something akin to light baking, but a baking of the geometry) to optimize it, or do you have to do it in a modeling program?

  • @hewhoisme4343
    @hewhoisme4343 3 years ago +2

    Wouldn't it be cool if we could use eye trackers combined with Nanite? That could further increase performance on lower-power systems by rendering tris the viewer isn't looking at, at a lower detail level.

  • @sirpanek3263
    @sirpanek3263 3 years ago +1

    finally, next gen for real. the photogrammetry is very cool

  • @RainMan52
    @RainMan52 3 years ago +1

    Thanks for that early morning jumpscare...
    Don't need that coffee anymore :D

  • @berthein5476
    @berthein5476 3 years ago

    Bravo to the people who made that short film! I'd love to see some funding going their way to turn it into, like, a 30-minute thing. Keep the claustrophobic and legit scary vibe, just expand everything a bit; add one or two more scenes before and after.

  • @jars23
    @jars23 3 years ago +1

    I'm 30 minutes into this video, still waiting for the definitive moment when I realize on a basic level how Nanite works. It took me years and some experimentation in 3DS Max before I saw with my own eyes how Normal Maps differed from Bump Maps. I'm excited to realize how virtual geometry works.

    • @ramakarl
      @ramakarl 3 years ago +1

      Take a very detailed mesh with millions of tris. Now break it into tiny patches of, say, 1000 triangles each. For each patch, simplify it several times beforehand (reduce from 1000 down to 400, 200, or 100 tris). Save it all to disk. So you basically have LODs, but for each patch of the full mesh. Now, during rendering, for each patch you select the detail level that best covers its pixels and dynamically load it from disk. Render using a custom system specialized for very tiny tris.

    • @jars23
      @jars23 3 years ago

      @@ramakarl Thank you! I'm beginning to be able to visualize it! Especially when you said it's basically LOD but for each patch of the full mesh. But what do you mean by 'select a detail level that just covers the pixels best and dynamic load from disk'?
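The "select a detail level that just covers the pixels best" step in the explanation above can be illustrated with a toy Python sketch. All names and the pixels-per-triangle heuristic here are assumptions for illustration; Nanite's real selection is driven by screen-space error of hierarchical cluster groups, not this simplified estimate:

```python
import math

def projected_size_px(bounds_radius, distance, fov_y, screen_height_px):
    """Approximate on-screen height in pixels of a bounding sphere."""
    if distance <= 0:
        return float("inf")
    angular = 2.0 * math.atan(bounds_radius / distance)  # angle subtended by the sphere
    return angular / fov_y * screen_height_px

def pick_lod(cluster_lods, bounds_radius, distance, fov_y, screen_height_px,
             target_px_per_tri=1.0):
    """Choose the coarsest LOD whose triangles still cover at most ~1 pixel each.

    cluster_lods: triangle counts available for one patch, e.g. [1000, 400, 200, 100].
    """
    size_px = projected_size_px(bounds_radius, distance, fov_y, screen_height_px)
    for tris in sorted(cluster_lods):          # try the coarsest LOD (fewest tris) first
        px_per_tri = (size_px * size_px) / tris  # crude pixels-covered-per-triangle estimate
        if px_per_tri <= target_px_per_tri:
            return tris
    return max(cluster_lods)                   # up close, fall back to the finest LOD
```

For a distant patch the coarse 100-tri version already hits roughly one triangle per pixel, while the same patch viewed up close falls back to the full 1000-tri version; that distance-dependent choice, made independently per patch, is the "LODs per patch" idea.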

  • @peremoyaserra2749
    @peremoyaserra2749 3 years ago

    Does what he says at 59:30 mean that with Nanite, in order for the engine to make a single draw call per parent material, it's no longer necessary to make a Blueprint of the mesh?

  • @sundaygraphix9648
    @sundaygraphix9648 3 years ago +2

    Will you be able to transfer projects built in early access to 1.0 when it arrives?

  • @Jsfilmz
    @Jsfilmz 3 years ago +5

    There it is! I was looking for this since it cut off yesterday. Can we download this level?

    • @UnchartedWorlds
      @UnchartedWorlds 3 years ago +1

      Good 👌

    • @Jsfilmz
      @Jsfilmz 3 years ago

      @@UnchartedWorlds Whoa! I think I know you.

    • @UnchartedWorlds
      @UnchartedWorlds 3 years ago +1

      @@Jsfilmz I found my dawg here, I leave a msg 👊😁

    • @hashemieada4846
      @hashemieada4846 3 years ago +1

      And I'm a big fan of your work 👏👏

    • @Jsfilmz
      @Jsfilmz 3 years ago

      @@UnchartedWorlds can you tell these guys to release this environment too or what

  • @Moctop
    @Moctop 3 years ago +2

    The Nanite data size was said to be 16.14 GB, but the slide said 6.14 GB.

  • @TheMostGreedyAlgorithm
    @TheMostGreedyAlgorithm 3 years ago +2

    Maybe, to reduce overdraw in cases with many close slices, UE5 needs a way to bake a heightmap of the lowest visible heights at all points, so you can just cull everything beneath it?

    • @DMEGC
      @DMEGC 3 years ago

      Wouldn't it be possible to "cut" the assets in storage so they only keep the parts that are shown in any part of the project? This wouldn't be so beneficial in terms of reducing overdraw, but it would have the additional advantage of reducing storage space.
      After you finish the project, you make Nanite look for the parts of an asset that are shown at any moment (even if you use that asset multiple times) and make a version of that asset with basically no detail in the parts that aren't used at any time whatsoever, and keep those as your assets. Make it a post-process, basically.
      You could probably merge these two systems and end up with assets that are completely flat on the hidden surface, even at the cost of replicating some parts of assets.
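The bake-a-floor idea proposed in this thread could be prototyped roughly as below. This is a toy Python sketch under strong assumptions (a top-down grid of minimum surface heights, axis-aligned bounding boxes, and hypothetical names like `bake_min_height`); it is a commenter's suggestion, not an actual UE5 feature:

```python
def bake_min_height(surface_points, cell=1.0):
    """Bake, per (x, y) grid cell, the lowest z at which visible surface was seen."""
    grid = {}
    for x, y, z in surface_points:
        key = (int(x // cell), int(y // cell))
        grid[key] = min(grid.get(key, float("inf")), z)
    return grid

def is_fully_buried(instance_aabb, grid, cell=1.0):
    """True if the instance's box lies entirely below the baked floor in every cell it spans."""
    (min_x, min_y, min_z), (max_x, max_y, max_z) = instance_aabb
    for ix in range(int(min_x // cell), int(max_x // cell) + 1):
        for iy in range(int(min_y // cell), int(max_y // cell) + 1):
            floor_z = grid.get((ix, iy))
            if floor_z is None or max_z >= floor_z:
                return False   # pokes above (or outside) the baked floor somewhere
    return True                # never visible from above: a candidate for culling
```

Instances flagged by `is_fully_buried` could then be dropped at cook time, which matches the thread's goal of removing hidden overlapping geometry without touching the artist's working scene.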

  • @simmzzzz
    @simmzzzz 2 years ago

    unbelievable technology
    Absolute game-changer

  • @stephanebourez
    @stephanebourez 3 years ago

    Awesome, as usual. Thank you guys! Thank you.

  • @Dnightartist
    @Dnightartist 3 years ago +1

    Absolutely AMAZING 👏🏾👏🏾👏🏾

  • @knessing7681
    @knessing7681 3 years ago +2

    Can anyone correct me if I'm wrong (haven't tried UE5 yet), but according to the documentation, you can't paint decals or vertex paint on meshes that have Nanite enabled, right? Has anyone tried vertex painting or painting decals onto Nanite-enabled meshes?

  • @mentaltelepathy24
    @mentaltelepathy24 3 years ago

    That short film was sick; I want a full-on feature film

  • @elizabethivy1337
    @elizabethivy1337 3 years ago

    I'm looking forward to seeing how people take advantage of nanite to improve assets that used to rely on transparency maps for performance. I bet we'll have much more convincing plants!

  • @DavesChaoticBrain
    @DavesChaoticBrain 3 years ago

    Okay, how the heck do I enable the stats readout on the right side of the screen they're using at 1:29:30? I've been searching for that for over an hour now!

  • @perschistence2651
    @perschistence2651 3 years ago +3

    I see a problem in triangle density when you combine Nanite with your temporal upsampling. The picture doesn't look as good as it should, because at the lower internal resolution fewer triangles seem to be rendered than the final, upsampled picture needs. Nanite should optionally render more triangles when temporal upsampling is active. The same should be true for DLSS.

    • @tharaxis1474
      @tharaxis1474 3 years ago +1

      That wouldn't make sense, from what I understand: the upscaling mechanism is still using a lower-resolution base image as its source, so rendering triangles smaller than the pixels in the base image would just be wasting draws. The expectation is that the upscaler then "resolves" what the higher-resolution image should look like from those lower-resolution pixels. DLSS in particular does this through machine learning.

    • @perschistence2651
      @perschistence2651 3 years ago +1

      @@tharaxis1474 DLSS 1.0 used ML upscaling, but DLSS 2.0 uses the temporal component to gather more samples for the final image, so it doesn't have to "dream up" more detail; it is really rendered over time. It's done via a jitter of the sampling coordinates from frame to frame, similar to how UE5 does it. Therefore, the final image really "contains" far more samples than the internal resolution suggests. Even though the internal resolution is lower, through the jitter of the sampling coordinates it often gathers even more samples than native rendering would. Therefore, the geometric detail of the internal resolution is not enough, because it is going to be resolved to a much higher output resolution with true sample information.

  • @mauricegandenberger4179
    @mauricegandenberger4179 2 years ago

    Was everything in the intro made in UE5, especially Commander Diaz, or was he filmed as a real person in front of a UE background scene?

  • @ExotiqBeautii
    @ExotiqBeautii 3 years ago +5

    *THANKS FOR POSTING GUYS! ABOUT TO SIT DOWN AND WATCH THIS AS SOON AS I HAVE FREE TIME TONIGHT*

  • @thezyreick4289
    @thezyreick4289 3 years ago

    What about using some form of boolean-cut method as a type of 'bake', where the assets you are adding can make boolean cuts into other assets, like Boxcutter does in Blender? Then, when you are set on how it looks and like the result, you compile or bake it to make those boolean cuts final and remove any geometry that isn't visible and is instead hidden under the meshes of other objects. This may not fully get rid of all of them, but it would easily remove over 90% of the hidden mesh in a way that benefits Nanite the most, removing all the problematic geometry in one fell swoop without being destructive during the design process, while allowing artists full freedom in design.

  • @heidenburg5445
    @heidenburg5445 3 years ago

    Thank you Brian!!! modelling is exponentially more fun without having to worry about retopo and all that nerd stuff.

  • @cryptocurrentsea7842
    @cryptocurrentsea7842 3 years ago

    Unreal! Can't wait to get my hands on UE5! I am a metaverse builder specializing in gamified spaces. I'm saving up for a new computer just for building with UE5 in the metaverse. This is gonna be wild.

  • @SonictheHedgehogInRealLife
    @SonictheHedgehogInRealLife 2 years ago

    With the overdraw issue, wouldn't something like the boolean modifier in Blender work, where if you combine the two assets that make up the ground it just deletes the interior geometry? Or maybe: if the geometry is not being affected by light, don't draw it, because it's underground.

  • @arashairshiraz1046
    @arashairshiraz1046 3 years ago

    Nanite is a big revolution in the history of science and beauty.
    It's like we have CGI-quality games at last; ray tracing and Nanite are gonna boom

  • @YYYValentine
    @YYYValentine 3 years ago

    Does it do some raycasting for occlusion culling? When a polygon is smaller than a pixel, do you draw just a point, or project the full triangle (with edges etc.)?

  • @salihsendil
    @salihsendil 2 years ago

    Hello!
    I tried to get an indentation effect for my material on the floor, but I realized there is no displacement input on the material output node. How can I get this effect in UE5?
    Thanks

  • @KARLOSPCgame
    @KARLOSPCgame 3 years ago

    Is there a limit you can put on the total number of triangles?
    Like on the Nintendo Switch, for example: set a limit, and Nanite scales down in the background so the game can prioritize the player model (which can't use Nanite) and run at a smooth 60 fps

  • @Vynzent
    @Vynzent 3 years ago

    So, Nanite could be used for blades of grass, right? Assuming they are static? So, like, a game could have regular grass in areas the player will walk around in (for physics or whatever), but distant grass could be Nanite instead of just disappearing from view? And I don't mean the grass being swapped out; I mean areas the player won't be walking in.

  • @ladiesman2048
    @ladiesman2048 3 years ago

    What's that weird flickering cloud at 1:32:31!?

  • @GoraGames
    @GoraGames 3 years ago

    Thank you guys

  • @AriusTigger
    @AriusTigger 3 years ago

    What RTX video card are you using? I got a 3050 but sometimes it lags, especially on Epic mode... thanks in advance

  • @tobiyokageyama6449
    @tobiyokageyama6449 3 years ago

    That intro was scary, dude. I'm freaking alone in my room 😂

  • @AlexandruJalea
    @AlexandruJalea 3 years ago +7

    That short at the start was Unreal.
    I know, shameless 😂🤣😂🤣

  • @almagulmenlibayeva5225
    @almagulmenlibayeva5225 3 years ago

    Great Stream! Thanks

  • @blackpanther6389
    @blackpanther6389 3 years ago

    Interesting. Not sure how many of you have read Euclideon's patent, but some of the slides and their explanations sound very similar to what is explained in the patent, hmmm. I've spent quite a bit of time getting to the level of understanding I'm at, and there's still so much to scratch at, but it's kind of salty yet refreshing to know that someone else has done what Euclideon demonstrated, except more inclusively. Not knocking it; I'm just saying that I'll follow wherever the technology is accessible, and Unreal Engine seems to be what I'll be using!

  • @ThisIsNotWhatItLooksLik
    @ThisIsNotWhatItLooksLik 3 years ago +1

    If you stream in 4K, people with 1080p displays will see the individual polygons much better, since the compression won't mush them as much.

  • @ipadize
    @ipadize 2 years ago

    I'm confused: was the space scene all in-game Unreal Engine 5 footage?

  • @rafapiasek6981
    @rafapiasek6981 3 years ago

    Great stream! I would love to see a more technical explanation of how it works. I would also like to see something similar explaining Lumen.

  • @danielanggara5633
    @danielanggara5633 3 years ago +1

    Just curious, what GPU are you guys using?

  • @preston748159263
    @preston748159263 3 years ago

    Can i download 53:00 somewhere?

  • @annakquinn7084
    @annakquinn7084 3 years ago +2

    My ancestors told me about the legend of when people could buy GPUs at fair prices. They were even available for purchase.

    • @juliana.2120
      @juliana.2120 3 years ago

      @Rashad Foux Bitcoin entered the chat...

  • @trevorjenkins1524
    @trevorjenkins1524 3 years ago

    Does Nanite work on collision geometry as well? This may have been answered but I missed it.

  • @Rydn
    @Rydn 1 month ago

    Does anyone know if there's a video from Unreal that goes into detail on how virtual texturing was implemented?

  • @jazzlehazzle
    @jazzlehazzle 3 years ago

    Heh. Moab. Broke my back there hiking. Amazing place. :)

  • @qui573
    @qui573 3 years ago +1

    What was the source of Underwater Fauna scene?

  • @3rdGen-Media
    @3rdGen-Media 1 year ago

    Take a shot every time Chance says "not an artist"

  • @hewhoisme4343
    @hewhoisme4343 3 years ago

    Does anyone know if the first demo is available? Or is it just the valley of the ancients?

  • @jojolafrite90
    @jojolafrite90 3 years ago

    This is amazing; it puts everything else in perspective. I hope it forces other studios that use their own engines to either adopt this engine or put money into research and development.

  • @quosswimblik4489
    @quosswimblik4489 3 years ago

    Future triangle surfaces will have special types of depth mapping alongside special texture mapping, which will be more like pattern mapping. Each tetrahedron made from triangle bodies could have 2D surface control that governs how the truly 3D stuff inside works. All this will be generated at low accuracy and dynamically sampled, ready for high-accuracy/detail upscaling. Because a lot of the detail will live in lower-dimensional depth-map data, there will be more 3D data efficiency. Because of the pattern/texture 3D surface mapping, you will eventually be able to zoom in on things like skin and see the wrinkles working. Using AI to control the sampling of a frame minimizes what needs calculating and maximizes upscaling detail efficiency, so you can do more with less. I personally believe that to match a high-quality video of real life at 210p 15-bit, all you need is PS3-level computation, but with a more optimal instruction set, far more efficient/complex software, and a lot of pre-baking. You would need PS4-level processing for 300p 18-bit and PS5-level computation for 340p 18-bit.
    Final thought:
    We obviously are not making the most efficient use of our systems. The real question is, as problem complexity increases, what is the optimal use of avoidance and exclusion, how close to real are you prepared to work, and how much slack will common horsepower allow you in the future when trying to get close to 4D real-looking/playing games?

  • @RC-1290
    @RC-1290 3 years ago

    A large part of this video was interesting. But it would really benefit from timestamps. [Edit] Thanks for the timestamps!

  • @peremoyaserra2749
    @peremoyaserra2749 3 years ago +1

    1:07:35 Did I understand correctly? Does Nanite rasterize small triangles on the CPU?! (edit: it doesn't)

    • @kazedcat
      @kazedcat 3 years ago +1

      No, they use the GPU, but instead of using the fixed-function rasterizer engine, they use the programmable compute units of the GPU.

    • @peremoyaserra2749
      @peremoyaserra2749 3 years ago

      @@kazedcat Oh! That makes a lot more sense, thank you! To make sure: all this pipeline customization, early culling, etc. is possible thanks to DX12's Mesh Shading, right?

    • @kazedcat
      @kazedcat 3 years ago

      pere moya serra They are doing a custom shader on hierarchical clusters of triangles. Each cluster contains 128 triangles and has a bounding box that is used for collision, culling, and LOD swapping. The primary primitive they are processing is a cluster.

  • @dextermorgan5397
    @dextermorgan5397 2 years ago

    I know there is Lumen and now there's Nanite, but what is the thing that came in between called? I don't remember :(

  • @berthein5476
    @berthein5476 3 years ago

    I really want that OG demo to play around with it

  • @maoribloodtraitor
    @maoribloodtraitor 2 years ago

    Amazing achievement, well done

  • @LivingTheDream77
    @LivingTheDream77 3 years ago +1

    Is UE5 using shader model 6.0 or above 6.0 ?

  • @jars23
    @jars23 2 years ago

    Here's hoping that Nanite will somehow be implementable on last-gen (PS4, Xbox One) consoles for one last wave of games before the PS5 and XBSX become more available and we inevitably make the big jump to current gen.

  • @marshal487
    @marshal487 3 years ago +1

    Thank You Epic 😘😍❤️

  • @bushgreen260
    @bushgreen260 3 years ago +2

    *Does nanite render more triangles the higher the resolution?*

    • @XJoukov
      @XJoukov 3 years ago +1

      Since it's tied to the resolution, yes, but you would only see a quality upgrade matching the resolution :)

  • @harborned
    @harborned 3 years ago

    Great talk!! Some huge talent at Epic!