Wow, NVIDIA’s Rendering, But 10X Faster!

  • Published: Sep 1, 2023
  • ❤️ Check out Fully Connected by Weights & Biases: wandb.me/papers
    📝 The paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering" is available here:
    repo-sam.inria...
    Unofficial implementation: jatentaki.gith...
    Community showcase links:
    / 1693755522636206494
    / divesh-naidoo-48809934...
    / huguesbruyere_gaussian...
    jo...
    / 1694322101744738706
    / 1692346451668636100
    My latest paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: / twominutepapers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Twitter: / twominutepapers
    Web: cg.tuwien.ac.a...
  • Science

Comments • 291

  • @zyxyuv1650
    @zyxyuv1650 11 months ago +542

    This paper is the actual goat not just the goat

    • @zaidlacksalastname4905
      @zaidlacksalastname4905 11 months ago +14

      My man is the "-est" in goat not enough

    • @beri4138
      @beri4138 11 months ago +2

      This paper is the new tupac

    • @PrinceWesterburg
      @PrinceWesterburg 11 months ago +5

      'Goat' is such poor English; it should be 'GAT', but of course that would be to say there will never be better, so it's better to say it's the 'Greatest Algorithm Yet!' Yeah, 'GAY', that'd suit millennials better.

    • @man-from-2058
      @man-from-2058 11 months ago +2

      This is the paper of all time

    • @kristoferkrus
      @kristoferkrus 11 months ago

      What's the difference?

  • @cocccix
    @cocccix 11 months ago +625

    I really love when the most amazing papers don't use neural networks

    • @OrangeC7
      @OrangeC7 11 months ago +103

      For as cool and magical as neural networks are, there's just so much more you can do when you actually understand what the algorithm is doing

    • @jensenraylight8011
      @jensenraylight8011 11 months ago +47

      Yes, because people are tired of results that are inconsistent, daydreamed ("hallucinated"), gaslighting, or just blatantly wrong.
      This, on the other hand, produces much more consistent results and could be improved further.

    • @Rubysh88
      @Rubysh88 11 months ago +17

      Yeah, neural networks/ai are great but are being used for everything now… even when it’s unnecessary or a good ol’ algorithm would be enough.

    • @onlyeyeno
      @onlyeyeno 11 months ago

      @@jensenraylight8011 ... Not saying that people are not tired of the things you enumerate. But I would like to point out that neither neural networks nor other similar "methods/algorithms", nor their creators, are "gaslighting" or "lying" to anyone by creating their "methods/algorithms".
      Rather, it's the people using the results of these "methods/algorithms" who might be using them for that purpose. But that doesn't tie this "negative activity" to any inherent quality or property of the "methods/algorithms" used.
      Rather, the "methods/algorithms" used are only chosen for their perceived efficacy and/or the quality of their results. That being the case, if we create better algorithms "by hand", then those will be used instead for all the "negative activities" you just mentioned.
      E.g. if I choose to use a "program" to create a realistic picture of a well-known politician in the act of a "heinous act", and I then use the resulting image to spread misinformation, it makes NO DIFFERENCE WHATSOEVER whether the "program" I used is built on an "A.I.-based" or a "completely hand-crafted" algorithm. Imho the "underlying origin" of the "method/algorithm" is a total non sequitur in this context. It's simply the "perceived quality" of the results that will dictate the choice, and if the "hand-crafted algorithms" are "better" then those will be used, for good and ill purposes alike.
      Best regards.

    • @cocccix
      @cocccix 11 months ago

      Wtf stop this scam shit in my comment section

  • @darrylkid210
    @darrylkid210 11 months ago +118

    This is why explainability in AI is so important. Let AI do the tedious exploration of the solution space, have it explain what it's doing, and then we can work with what it found and modify the solution however we want.

  • @mikehibbett3301
    @mikehibbett3301 11 months ago +64

    I love your use of the phrase "just one more paper down the line". So true.

    • @someman7
      @someman7 11 months ago

      But this one side-steps and leapfrogs all the AI ones?

  • @ENDESGA
    @ENDESGA 11 months ago +314

    Can't believe it has no neural components! Just proof that we humans still have the advantage, but one day that may change 😅

    • @uku4171
      @uku4171 11 months ago +50

      It's human-made tools either way.

    • @AlvaroALorite
      @AlvaroALorite 11 months ago +2

      How does that prove it?

    • @uku4171
      @uku4171 11 months ago

      @@AlvaroALorite Because it's superior to the neural net models.

    • @michaelleue7594
      @michaelleue7594 11 months ago +50

      I mean, it's still an evolution of NeRF. They just figured out what was in the black-box reasoning that NeRF was doing. This is the real strength of neural networks: not offloading reasoning onto machines, but creating alternative paths toward conclusions that can then be studied and developed into theory.

    • @spx0
      @spx0 11 months ago +4

      @@michaelleue7594 Great view

  • @Something9008
    @Something9008 11 months ago +63

    Would love to experience this level of detail in VR.

    • @mho...
      @mho... 11 months ago +4

      give it some years more^^

    • @AdiusOmega
      @AdiusOmega 11 months ago +9

      It'll happen in your lifetime, guaranteed. The technology improvements are still exponential, which is crazy to me.

    • @minheritance
      @minheritance 10 months ago

      I give it 5 years @@AdiusOmega

    • @Hyperlooper
      @Hyperlooper 10 months ago

      @@minheritance There's a Gaussian plugin for Unreal now, so this could be done today

  • @Wobbothe3rd
    @Wobbothe3rd 11 months ago +49

    This is even more amazing than the average mind-blowing breakthroughs on this channel.

  • @M4rt1nX
    @M4rt1nX 11 months ago +39

    Those thin structures were impossible to even scan a few years ago. What a time to be alive and be able to witness the advances that we achieve just by pushing further.

    • @himan12345678
      @himan12345678 11 months ago +3

      Similar to removing noise from astrophotography, we've been able to recover fine details/structures from videos that didn't appear to have them based on just the raw resolution. Just because you or the researchers in these fields are ignorant of something doesn't mean it hasn't existed for many years. I've improved open NeRF models by adding in steps to include Gaussian/Eulerian amplification/magnification. It's really a simple fix and should have been picked up years ago. It baffles me that it's been such a problem for so long that they just can't seem to figure it out.

  • @hemant5718
    @hemant5718 11 months ago +180

    This paper is just lit. Imagine it being used in future games. Crazy times ahead, everyone.

    • @PuppetMasterdaath144
      @PuppetMasterdaath144 11 months ago +1

      BOOM!

    • @lawsen3719
      @lawsen3719 11 months ago +1

      2 decades at least

    • @Andytlp
      @Andytlp 11 months ago +12

      @@lawsen3719 2 years, you mean, whenever it can be rendered in real time.

    • @jenkem4464
      @jenkem4464 11 months ago +1

      @@Andytlp Real-time is 30-60 fps. They showed 135 fps here.

    • @Andytlp
      @Andytlp 11 months ago

      @@jenkem4464 It would be one way to get hyper-realistic graphics. FPS seems good, and it runs in real time, huh. What are the requirements, though?

  • @bashergamer_
    @bashergamer_ 11 months ago +5

    It’s pretty impressive that we get better speed without sacrificing quality. Amazing work!

  • @psiga
    @psiga 11 months ago +8

    Sublime! The amount of progress that's been made in just a year or two feels _unreal._ Looks like the Singularity is right on-schedule!

  • @Metazolid
    @Metazolid 11 months ago +17

    This is actually really impressive, also glad to see how much potential there still is to be gained from non-AI research. I feel like most recent papers were largely based on machine learning.

  • @0sliter0
    @0sliter0 11 months ago +10

    "quite a bit of memory" is a nice and understandable value

    • @vitordelima
      @vitordelima 11 months ago

      At least it seems to be very compressible.

  • @Jeal0usJelly
    @Jeal0usJelly 11 months ago +12

    5:10 Can't that be fixed somehow by combining it with ray tracing, as the next step of rendering a scene?
    It's incredible. I never thought things would advance so quickly; I'm starting to believe we might see actual AR and VR games before the end of the decade even.

    • @Doubel
      @Doubel 11 months ago

      @@Faizan29353 keyword "actual" - stable, immersive media that looks (practically) indistinguishable from reality

  • @JorgetePanete
    @JorgetePanete 11 months ago +10

    No longer "It's NeRF or nothing" 😮‍💨

  • @Nick_With_A_Stick
    @Nick_With_A_Stick 10 months ago

    It has some very minor issues I can see, like blurriness and bad aliasing at corners, but it looks 100x better than traditional rendering did 10 years ago. And it took 10 years, so I can imagine anti-aliasing, anisotropic filtering, and multisampling can make this WAY better than it already is. The GTA clip was so impressive.

  • @adamfilipowicz9260
    @adamfilipowicz9260 11 months ago +31

    Love the endless advancement, hope this new technique can output 3D mesh data

    • @MetalGearMk3
      @MetalGearMk3 11 months ago +5

      Currently it cannot, but maybe in the near future; it will be a game changer.

    • @nodelayfordays8083
      @nodelayfordays8083 11 months ago +7

      Does it have to?

    • @syntheticpatience6872
      @syntheticpatience6872 11 months ago +6

      I hope videogames start using this tech instead of meshes, or a mix of both 🤔

    • @grahamthomas9319
      @grahamthomas9319 11 months ago +1

      It’s an interesting question whether this replaces mesh objects. I could see it going in a hybrid direction.

    • @vitordelima
      @vitordelima 11 months ago +1

      The regular NeRF can be converted to meshes via methods such as marching cubes, so I guess the same can be done with this one. Marching cubes is terrible for anything that isn't a solid, smooth object, but there are probably other methods.
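
A minimal sketch of the marching-cubes idea mentioned in that comment, assuming you have already sampled a density/opacity field on a regular grid (the sphere stand-in, grid size, and use of scikit-image are my own illustrative choices, not anything from the paper):

```python
import numpy as np
from skimage import measure  # scikit-image's marching cubes implementation

# Hypothetical density grid: sample a density field on a regular lattice.
# Here a sphere stands in for a field you would query from a NeRF/3DGS model.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
density = 1.0 - np.sqrt(x**2 + y**2 + z**2)  # positive inside the unit sphere

# Extract a triangle mesh at a chosen iso-level of the density field.
verts, faces, normals, values = measure.marching_cubes(density, level=0.0)
print(verts.shape, faces.shape)  # vertex positions and triangle indices
```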

  • @shApYT
    @shApYT 11 months ago +6

    This is ground breaking! This is what I subbed to this channel for.

  • @titusfx
    @titusfx 11 months ago +28

    🎯 Key Takeaways for quick navigation:
    00:53 🏞️ New technique for real-time rendering of virtual worlds promises over 10x faster rendering than previous methods, addressing thin-structure challenges.
    01:52 🎮 The new technique offers both faster rendering and higher-quality results compared to NVIDIA's Instant NeRF technique, surprising with its superior performance.
    02:49 🖥️ This breakthrough method is not a NeRF variant and doesn't rely on neural networks; it's a handcrafted computer graphics technique with innovative ideas.
    03:14 🌊 The technique uses a 3D Gaussian splatting approach to represent objects as a sum of waves, enabling efficient rendering on 2D screens while conserving computation around solid objects.
    04:40 🎨 The algorithm focuses on scene primitives rather than pixels, drawing from a long-standing concept in computer graphics, resulting in a fast yet high-quality rendering solution.
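
As a rough illustration of the "sum of smooth primitives, blended onto a 2D screen" idea summarized above, here is a toy sketch of splatting 2D Gaussians with front-to-back alpha blending. This is my own simplification, not the paper's renderer: it skips the 3D-to-2D projection, depth sorting, and the tile-based CUDA rasterizer the actual method uses.

```python
import numpy as np

def splat(means, covs, colors, opacities, H=128, W=128):
    """Toy front-to-back alpha blending of 2D Gaussians onto an image.
    means: (N,2) pixel centers, covs: (N,2,2), colors: (N,3), opacities: (N,)."""
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2) pixel coords
    img = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))                         # light not yet absorbed
    for mu, cov, col, op in zip(means, covs, colors, opacities):
        d = pix - mu
        inv = np.linalg.inv(cov)
        # Gaussian falloff exp(-0.5 * d^T Sigma^{-1} d), evaluated per pixel
        mahal = np.einsum('hwi,ij,hwj->hw', d, inv, d)
        alpha = op * np.exp(-0.5 * mahal)
        img += (transmittance * alpha)[..., None] * col     # accumulate color
        transmittance *= (1.0 - alpha)                      # attenuate what remains
    return img

# Two overlapping blobs as a smoke test
img = splat(means=np.array([[40.0, 60.0], [80.0, 70.0]]),
            covs=np.array([np.diag([60.0, 20.0]), np.diag([25.0, 25.0])]),
            colors=np.array([[1.0, 0.2, 0.2], [0.2, 0.4, 1.0]]),
            opacities=np.array([0.8, 0.9]))
print(img.shape)  # (128, 128, 3)
```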

  • @villagerjj
    @villagerjj 10 months ago

    People sometimes think neural networks are just brains, and to an extent, you can call them that. But what they really are is function replicators, and depending on the function you are replicating, it may or may not be faster to just write the function.
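
A tiny, hand-rolled illustration of the "function replicator" point above (my own toy example, not from the video): a one-hidden-layer network learns to approximate sin(x), even though simply calling np.sin would be faster and exact if you already know the function.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)  # the "function" the network will try to replicate

# One hidden layer with tanh activations, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                       # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)       # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

test = np.array([[0.5]])
print(np.tanh(test @ W1 + b1) @ W2 + b2, np.sin(test))  # learned vs. exact
```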

  • @berzanmikaili
    @berzanmikaili 10 months ago +1

    Honestly, their approach is so mind-blowingly creative it's insane

  • @21EC
    @21EC 11 months ago +1

    5:41 - My mind is blown... this is the most incredible graphics tech I've ever seen, and by far the most photorealistic and convincing fully realistic virtual dog I've ever seen! What the... it's so incredible that it's hard to believe such tech can even exist.

  • @GoldBearanimationsYT
    @GoldBearanimationsYT 11 months ago +6

    I was impressed a year ago and now wow

  • @paxtoncargill4661
    @paxtoncargill4661 10 months ago

    3DGS is so intuitive when you look at it; I'm surprised we didn't think of this in the 90s

  • @simoncowell1029
    @simoncowell1029 11 months ago +4

    Somebody needs to use this in VR asap, and make us a nice YouTube video.

  • @justluke0001
    @justluke0001 11 months ago +3

    Does anybody else think that the Professor used an AI voice clone to do the sponsor segment? To me it seemed less expressive, and I think there were some audible artifacts.

  • @snailedlt
    @snailedlt 11 months ago +6

    NVIDIA is popping off!
    So incredible to see more and more incredible papers from NVIDIA, Google's DeepMind, OpenAI, Adobe and more!
    What a time to be alive!

  • @TudorIrimescu
    @TudorIrimescu 11 months ago +2

    My jaw dropped when you said the word "algorithm". Insanity.

  • @Ninth_Penumbra
    @Ninth_Penumbra 11 months ago +3

    Fascinating work.
    Imagine integrating both High Resolution Photography (for perfect details), Video (for motion, 3D perspectives) & Lidar (for comprehensive depth perception) with this kind of neural network processing to render real life spaces for games/movies/etc., as well as to teach AIs about 3D spaces.

  • @jhunt5578
    @jhunt5578 11 months ago +2

    Incredible detail. Almost indistinguishable from real images at points.

  • @ludologian
    @ludologian 11 months ago +5

    This proves that you don't need over-engineered techniques for certain tasks... also, LEARN FROM GAME DEVS: some specialize in hacks, e.g. demoscenes, where 3D is implemented on systems that don't even have a GPU!

  • @IM2awsme
    @IM2awsme 11 months ago +1

    Mixing this with photogrammetry is going to be insane. I've been using an AI that practically does that already, using images generated through MidJourney

  • @aresaurelian
    @aresaurelian 11 months ago

    Thank you. I approve of this. Much gratitude. Now add a simple transform vector as we swirl and jerk it around so it looks like it has mass.

  • @TNMPlayer
    @TNMPlayer 11 months ago

    Ever since the Corridor video I've been waiting for more news and coverage on NeRFs. I'm happy to see it coming from you.

  • @davidanderson5310
    @davidanderson5310 11 months ago +2

    "Two" Minute Papers' video is longer than the original author's presentation.

  • @jmalmsten
    @jmalmsten 11 months ago +4

    Whenever I see these results I go, great. But how do I use it in a workflow to add stuff to it or add these volumes to a normal mesh and shader render so I can modify the result?

  • @EvilStreaks
    @EvilStreaks 11 months ago +1

    Most exciting report I've seen in ages. Dawn of the next level.

  • @DavenH
    @DavenH 11 months ago +3

    Some feedback: can you use a more objective metric than FPS? If it's FPS on a supercomputer GPU cluster, I don't know how real-time that is. But if it's a stock CPU's onboard graphics chip, that's much different.
    On the paper - stunning results... can't wait to see the impact of this.

    • @KyriosHeptagrammaton
      @KyriosHeptagrammaton 11 months ago

      I think it popped up on screen that it was on an A6000, which, as I understand it, is kinda supercomputer level.

  • @SimplestUsername
    @SimplestUsername 11 months ago +2

    5:15 _"These first, unoptimized versions, consume quite a bit of memory"_
    Like RAM? Exactly how much memory are we talking?

    • @corr2143
      @corr2143 11 months ago +3

      It's almost never mentioned in his videos, unfortunately. Love the channel, but I wish these factors were not just glossed over as an afterthought so often.

  • @walthodgson5780
    @walthodgson5780 11 months ago

    That dog looked exactly like my dog that passed a few years back.

  • @catfree
    @catfree 11 months ago

    I just love how there are still smart people who don't use AI and instead optimize things

  • @bnckatona
    @bnckatona 11 months ago +2

    Not to be rude or anything, but did you use an AI to generate the voiceover for this video?

  • @Felixxenon
    @Felixxenon 11 months ago +5

    That settles it. We do actually live in a simulation.

  • @epicthief
    @epicthief 11 months ago +1

    What a day to be alive; humanity's crafty coding beats out a neural network

  • @comtronic
    @comtronic 11 months ago +2

    I have shown this channel to some people and they are 100% sure the narrator's voice is generated by an AI 🤖😅😅

    • @Doubel
      @Doubel 11 months ago

      Or he's just Austrian and has a different cadence when speaking English... lol

  • @JN-qj9gf
    @JN-qj9gf 11 months ago +2

    Can someone please explain this to me like I'm an idiot audience member watching a sci-fi movie?
    What is the source they're using to create the 3D scenes? A collection of photos? Or is it a 3D model created with this technology applied over the top? What sort of hardware does it rely on for rendering?

  • @yannrolland4193
    @yannrolland4193 11 months ago +7

    Your voice sounds really robotic on this video, did you try a voice generated by AI or is it just the jump cuts?
    Very nice video btw :)

    • @uku4171
      @uku4171 11 months ago

      I think it's just his voice lol

    • @itsamiserio4824
      @itsamiserio4824 11 months ago +1

      Yes, it does sound like TTS AI, which has been used on the channel for a while

  • @IceMetalPunk
    @IceMetalPunk 11 months ago +4

    Given the speed and clarity of these, I wonder: can language embeddings work with these, the way LERF handles it for NeRFs? And how much overhead, in terms of time, would that add? Because if we can do 135 fps real-time radiance fields with full language embedding, we can make robots that fully understand everything about their surroundings from just real-time camera information... which is a huge step forward in AI-brained robots.

  • @ardonnie
    @ardonnie 11 months ago

    Need to create a large database of scenes and objects along with detailed descriptions, that way we can train models to generate new scenes based on user input.

  • @tomdchi12
    @tomdchi12 11 months ago +1

    Can these techniques output a geometric mesh like photogrammetry or do they only output 2D images?

  • @alexmattheis
    @alexmattheis 11 months ago

    What a time to be alive! It is absolutely amazing! 😉 👍

  • @Sutanreyu
    @Sutanreyu 11 months ago +1

    This is actually really incredible.

  • @blengi
    @blengi 11 months ago +3

    Mr 2 minute papers' voice has an unnatural repetitive rhythm about it in parts....

  • @imaUFO672
    @imaUFO672 11 months ago

    This is just so beyond impressive and I can’t wait to see this tech used in video games

  • @pile_of_kyle
    @pile_of_kyle 11 months ago +1

    Am I the only one here who can't comprehend how the 3D Gaussian thing works? How is it rendering a 3D world using "simple computer vision techniques"?

    • @kylebowles9820
      @kylebowles9820 11 months ago

      I haven't read the paper, so this is just speculation, but the Gaussian is a good building block: like a voxel, but fundamentally smooth. They have well-understood properties and are highly suitable for linearization (e.g., as in a Kalman filter), so you can combine and project them easily.
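
A small sketch of why Gaussians "combine and project easily", as the comment above speculates (my own toy numbers; a real splatting renderer uses the Jacobian of the camera projection in place of the simple matrix J here): under a linear map J, a Gaussian with mean mu and covariance Sigma stays Gaussian, with mean J·mu and covariance J·Sigma·Jᵀ.

```python
import numpy as np

mu = np.array([0.2, -0.1, 3.0])        # 3D center of a Gaussian primitive
Sigma = np.diag([0.04, 0.01, 0.09])    # 3D covariance (axis-aligned here)

J = np.array([[1.0, 0.0, 0.0],         # crude "projection": drop the depth axis
              [0.0, 1.0, 0.0]])

mu_2d = J @ mu                         # projected 2D center
Sigma_2d = J @ Sigma @ J.T             # projected 2x2 covariance

print(mu_2d, Sigma_2d)
```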

  • @jamesr141
    @jamesr141 11 months ago +7

    Please talk normally. You're ruining your own channel. 10/10 content. 2/10 syncopated narration. Finish...your....sentences....and phrases.....according to......everyday speech.... This also....applies.....to inflection. Thank you. I hope you will sort it out.

  • @lancemarchetti8673
    @lancemarchetti8673 11 months ago +2

    Mind blowing development!

  • @ManishBhartiya
    @ManishBhartiya 11 months ago

    What a time to be alive!

  • @IceHacks
    @IceHacks 11 months ago +1

    Curious how animations would be handled.

  • @SandroMedia
    @SandroMedia 10 months ago

    Man, imagine this in real time streaming!!!

  • @TheAnimeSounds
    @TheAnimeSounds 11 months ago +1

    Just another excuse for why cards costing over $1500 can't render games at a stable 60 Hz on max settings, because of those supposedly "faster working" technologies.

  • @sergeycataev2867
    @sergeycataev2867 11 months ago +2

    Is the voice over generated too😆?

  • @nightm4re.
    @nightm4re. 11 months ago +2

    I keep coming back to your videos for the groundbreaking scientific discoveries presented.
    What really puts me off tho is the pacing, pronunciation, and rhythm of your narration.
    Hope you'll be able to train your speaking flow to make these otherwise great videos more enjoyable.

  • @Kaylin9087
    @Kaylin9087 11 months ago

    That's...
    That's incredible.
    This is the next step of the evolutionary path of technology.

  • @dxnxz53
    @dxnxz53 11 months ago

    what a time to be alive!!!

  • @Secretgeek2012
    @Secretgeek2012 11 months ago

    That's my dog!
    Well, not actually my dog, but it is identical to mine!
    Maybe she's got a secret hobby. 😊

  • @edislucky
    @edislucky 11 months ago +1

    Soo... use Google Maps as your input, and this creates a 3D world for you to play in?

  • @_spartan11796
    @_spartan11796 11 months ago +2

    Incredible stuff!

  • @amiraliasgari_
    @amiraliasgari_ 11 months ago

    Creating a virtual world may solve some problems but creates a huge number of new problems at the same time. I really don't understand why you are so excited.

  • @mho...
    @mho... 11 months ago

    so if i get that right 🤔
    they "just" ignore the 3d part & only "smooth out" the output picture before the screen shows it?!
    cool idea indeed!

  • @joegran
    @joegran 11 months ago

    Fellow scholars... I did not hold onto my papers for this one

  • @grzesiektg
    @grzesiektg 11 months ago +1

    Hey, was the voice for this episode done by AI?

  •  11 months ago

    Saw the video of the games looking "realistic" and got the reminder again that realism today is not what we see in reality, but rather what we see in the movies or through a cheap camera that is struggling with the white balance.

  • @lordseptomus441
    @lordseptomus441 11 months ago

    videogames in 6 years are gonna look amazing

  • @t1460bb
    @t1460bb 11 months ago

    Impressive! Amazing! What a time to be alive!

  • @harryp.6847
    @harryp.6847 11 months ago

    Hey guys, I have a question. I am a land surveyor and I want to utilize this technology. Can I simply upload a video into the "software" and get a 3D model as output? Would this be a potential use case for this study? (Sorry if it's a dumb question.) If so, is this years away for the end user, or can I actually make it happen today with a graphics card and some code compiled? Sorry if it's an obvious question. I'd appreciate it if one of you could send me in the right direction, because this would be a game changer in my profession! Thanks.

  • @maxwoodcutting8764
    @maxwoodcutting8764 11 months ago

    This is the type of tech that both fascinates me and also makes me depressed as it is clearly going to replace my dream job

  • @tiandutoit7287
    @tiandutoit7287 11 months ago

    I would like to see a paper on 3Blue1Brown's video on the topic of the light through a barber pole, that would be impressive.

  • @theunseen010
    @theunseen010 11 months ago

    Now this one gets me excited

  • @caboose22320
    @caboose22320 11 months ago

    this is gonna make for some insane horror games

  • @jacejunk
    @jacejunk 11 months ago

    It was relatively easy to create a Gaussian splat model from my own video.

  • @spaceowl5957
    @spaceowl5957 10 months ago

    I'm not understanding what this is about, like what are the inputs and outputs to the algorithms discussed?

  • @paskie66070
    @paskie66070 11 months ago +3

    His voice just gave up 💀

  • @moahammad1mohammad
    @moahammad1mohammad 11 months ago

    Hey, just so we know, some fancy new vr headsets with depth sensors are coming soon...
    Building virtual worlds with just a glance? I bet money on it

  • @kein2009
    @kein2009 11 months ago

    wow! Amazing advancements. Thank you very much for sharing.

  • @m2-x-n253
    @m2-x-n253 11 months ago

    I mean, these were thoughts that we theorized would take a longer time, but here we are... WOW... it is finally here.

  • @jannchavez9257
    @jannchavez9257 11 months ago

    Soon this channel will be called Two Minutes Philosophy, since I would question my reality soon.

  • @uku4171
    @uku4171 11 months ago +1

    Are there any practical use-cases for radiance field rendering? Seems like there are so many papers on it, but they are all there for the sake of just being there? I'd get it if it made 3D models (meshes), but as I understand, it doesn't.

    • @PHIplaytesting
      @PHIplaytesting 11 months ago +5

      How practical is a photograph? Well, now we have 3D photographs. With the modern developments in virtual reality in particular I think we will find ourselves able to experience "photos" much more immersively in the near future. It could change how films are shot. It could even be used in making truly photorealistic videogames. It's really up to the creative people of the world how far we can take advancements in media technology like this.

    • @IceMetalPunk
      @IceMetalPunk 11 months ago +2

      It makes a full 3D rendering of the scene, which gives computers the ability to understand 3D geometry from 2D inputs. Off the top of my head, this gives a few immediate use cases:
      1. It allows you to take a few photos of a location, then set up and experiment with virtual cameras in the scene in post, which could be invaluable for video/film creation. Especially to get shots that would be impractical or even impossible to do with physical cameras.
      2. LERFs embed semantic information into a NeRF scene, but I think a similar technique may work for Splatting as well. This allows natural-language queries to return segmentation within the 3D scene. To put that into perspective, if we could do this in real time (currently, LERFs take 45 minutes to generate a single scene, but I don't know how much faster it'd be applied to a Splat scene), a robot could use a single camera, take a brief look around its environment, and in real-time it could understand not only the full 3D geometry around it (for things like motion planning, etc.) but also the semantic nature of *everything around it.* So imagine telling a robot that's never seen your living room before "please clean my silver-framed movie poster", and after just a few seconds of looking around the room, it immediately is able to navigate all your furniture and obstacles, make its way to the poster (which it also autonomously recognized the nature and location of), and start cleaning it. It's Rosie from the Jetsons levels of robotic comprehension and autonomy.

  • @TampaCEO
    @TampaCEO 11 months ago

    "What a time to be alive!"

  • @blackoes
    @blackoes 11 months ago

    these renders look so real

  • @guepardiez
    @guepardiez 11 months ago

    If the model creates a 3D structure from a few images or a video, surely it won't have enough information about all the details in the scene? What happens then? Does it make up details?

  • @andrewclanton3520
    @andrewclanton3520 11 months ago +3

    Are you saying you're making real life, faster than real life?!

    • @beri4138
      @beri4138 11 months ago +9

      @@rst7190 No I live in my head with the voices

    • @cireciao792
      @cireciao792 11 months ago

      @@beri4138 relatable

    • @uku4171
      @uku4171 11 months ago +3

      @@beri4138 they won't stop

  • @subspark
    @subspark 11 months ago

    More videos on papers like this please doc. ✍🏽

  • @TakumiJoyconBoyz
    @TakumiJoyconBoyz 11 months ago

    I think this will be great for things like using VR in Google Maps Street View but I don't think it will be applied to things like gaming or movies.

  • @Sergeeeek
    @Sergeeeek 11 months ago +1

    Is the voice AI generated in this video? It sounds really unnatural, I'll use subtitles instead.

  • @hazzag458
    @hazzag458 11 months ago +3

    Is the dude who talks an AI? Because his voice has a pattern

  • @iutu8235
    @iutu8235 11 months ago +3

    This is an AI voice

  • @UAM-
    @UAM- 11 months ago +1

    This is a game changer

  • @centenarium
    @centenarium 11 months ago

    Now that (!) is one of the times to be alive.

  • @gulagwarlord
    @gulagwarlord 11 months ago

    San Francisco and LA are so dangerous now... you'd be far safer driving through the streets virtually than IRL.