Gaussian Splatting! The next big thing in 3D!

  • Published: 17 Aug 2024

Comments • 375

  • @IRWBRW964
    @IRWBRW964 11 months ago +447

    3D Gaussian Splatting is actually not a NeRF technology, as there is no neural network; the splats are directly optimized through rasterization, rather than the ray-marching-like volume rendering of NeRFs. (See the sketch after this thread.)

    • @spyral00
      @spyral00 11 months ago +12

      Looks like it's a new way to display point clouds, am I wrong?
      Still amazing and I have to try it!

    • @JB-fh1bb
      @JB-fh1bb 11 months ago +9

      @@spyral00 Right? I thought this Gaussian splatting technique was a new way to present the point data generated by NeRF

    • @malfattio2894
      @malfattio2894 11 months ago

      Wow, it looks really damn good considering

    • @Blox117
      @Blox117 11 months ago +2

      so it will be faster too

    • @WWG1-WGA
      @WWG1-WGA 10 months ago

      That means we can play even more with the neurons
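
    A minimal sketch of the distinction drawn at the top of this thread, under my own assumptions: a toy 1D scene in PyTorch with additive blending, rather than the paper's depth-sorted alpha blending and tile-based CUDA rasterizer. The point is that the splat parameters are plain tensors optimized directly against a photometric loss; no neural network appears anywhere.

    ```python
    import torch

    torch.manual_seed(0)
    xs = torch.linspace(0.0, 1.0, 256)                  # pixel coordinates of a 1D "image"
    target = ((xs > 0.3) & (xs < 0.6)).float()          # ground-truth image to reproduce

    n = 16
    mu = torch.rand(n, requires_grad=True)              # splat centers
    log_sigma = torch.full((n,), -3.0, requires_grad=True)  # splat widths (log-space)
    amp = torch.zeros(n, requires_grad=True)            # splat intensities

    opt = torch.optim.Adam([mu, log_sigma, amp], lr=0.02)
    for step in range(2000):
        sigma = log_sigma.exp()
        # "rasterize": evaluate every Gaussian at every pixel, blend additively
        basis = torch.exp(-0.5 * ((xs[None, :] - mu[:, None]) / sigma[:, None]) ** 2)
        rendered = (amp[:, None] * basis).sum(dim=0)
        loss = ((rendered - target) ** 2).mean()        # photometric loss vs. the target
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final loss: {loss.item():.5f}")             # the splats now approximate the image
    ```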

  • @jimj2683
    @jimj2683 11 months ago +468

    Imagine Google Street View built with this. It could then be used in a GTA-type game with the entire world.

    • @ariwirahadi8838
      @ariwirahadi8838 11 months ago +40

      You forget about Flight Simulator.. it is generated from real map data

    • @valcaron
      @valcaron 11 months ago +42

      Grand Theft Auto Frankfurt
      GTA but everything's all blurry.

    • @michaking3734
      @michaking3734 11 months ago +13

      I bet in the next 20-30 years

    • @florianschmoldt8659
      @florianschmoldt8659 11 months ago +26

      There is no good way to use splatting with interactive light and shadow, or with animation. All the lighting is baked in together with the color information. So I guess this tech won't make it into gaming.

    • @strawberriesandcum
      @strawberriesandcum 11 months ago +2

      @@valcaron and most of it is missing

  • @crs11becausecrs10wastaken
    @crs11becausecrs10wastaken 11 months ago +82

    If scanning software is actually capturing and rendering details as fine as leaves of plants, without all of the artifacts, then that is absolutely mind-blowing.

  • @filipewnunes
    @filipewnunes 11 months ago +109

    I spent lots and lots of hours in my life unwrapping UVs and correcting meshes to use in my archviz projects.
    The amount of development in this field is insane. And we are in the first days of this.
    What a time to be alive.

    • @OlliHuttunen78
      @OlliHuttunen78  11 months ago +14

      My thoughts exactly. Many things are changing very fast now. Although this does not yet create anything from scratch: for these NeRFs you still need something existing, which is turned into 3D by taking pictures of the real world. Traditional modeling certainly still has its place when creating something new.

    • @captainflimflam
      @captainflimflam 11 months ago +13

      I got that reference! 😉

    • @loleq2137
      @loleq2137 11 months ago +8

      Ah, a fellow Scholar!

    • @MagicPlants
      @MagicPlants 11 months ago +1

      well said!

    • @nekosan01
      @nekosan01 10 months ago

      Photogrammetry is very old; I don't know why you only know this marketing stuff and enjoy it when it's much worse than RealityCapture and other apps, which don't require an expensive video card. You can also import into sculpting software to fix the mesh and project UVs very easily, unlike this garbage.

  • @4Gehe2
    @4Gehe2 11 months ago +11

    Ok, I did a quick read of the paper. This is a clever thing, but keep in mind that it doesn't preserve details so much as make them up.
    (The explaining bit is in chapters 5.1 and 5.2 and figs. 2, 3, and 4 of the paper.) Basically you reconstruct the environment not by analysis but by statistically informed guesses. You then analyze whether each guess was too big or too small by referencing the result against the original data. If the guess was too small, you duplicate it near the point; if the guess was too big, you divide it in two. So if you need to estimate a curve, instead of actually solving the curve you keep guessing its shape, and the duplication and division of guesses makes you approach the solution faster. But it is important to keep in mind that you don't actually get THE SOLUTION; you get an approximation of the solution based on guesses.
    Basically this is the way you can do square roots and cube roots in your head to 2-3 decimals, by estimating upper and lower bounds and iterating (for those who don't know: to estimate the square root of 6, you can calculate in your head that 2x2 is 4 and 3x3 is 9, so the solution is between those; then 2.5x2.5 gives 6.25, which is too much, so the solution must be less than that; 2.25x2.25 gives 5.0625, which is too little... and so on. You will never practically reach the true value of 2.449489743, because you only go to 3 decimals, but let's be honest, a 0.04% error is more than enough). See the sketch after this comment.
    To simplify a bit: imagine you are sculpting with clay and want to replicate a shape, but you can only add or remove lumps instead of shaping it with your hands. If you have too much material, you cut away half of the amount you know to be excess. If you added too little clay, you add another same-sized lump. And you keep repeating this until you get a close enough approximation of the thing you are replicating.
    What is important to keep in mind is the limitations of this. You can't replicate things accurately, for the simple reason that if you lack information on details you can't just guess them! Your data resolution doesn't increase. You only actually know the data points you gathered. So for historical, scientific, or engineering purposes you will not get any extra information (and I hope people realize this before they try to use details from this in a court of law or something); you really can't know anything more from this than you can get from just looking at the frames as pictures.
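
    A minimal sketch of the bracketing arithmetic described above, in Python (my illustration of the comment's analogy, not code from the paper):

    ```python
    def approx_sqrt(x, tol=1e-3):
        """Estimate sqrt(x) by repeatedly halving a bracket, never solving exactly."""
        lo, hi = 0.0, max(1.0, x)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mid * mid > x:
                hi = mid          # guess too big: keep the lower half
            else:
                lo = mid          # guess too small: keep the upper half
        return (lo + hi) / 2

    print(approx_sqrt(6))         # ~2.449 vs. the exact 2.449489743...
    ```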

  • @JimmyNuisance
    @JimmyNuisance 11 months ago +33

    I fell in love with splat engines when I spent time in Dreams on the PSVR. It's fantastic for creatives, it makes it very easy to make new and unseen surfaces.

    • @kazioo2
      @kazioo2 10 months ago

      That renderer went through so many changes and iterations (also after they made public explanations) that I'm not really sure how much of typical splatting is still used there. There is a lot of conflicting information about it.

  • @linuxmill
    @linuxmill 11 months ago +70

    Gaussian splatting has been around for many years; I used it in the late 90's. It's a method of generating implicit functions, which can then be contoured. (See the sketch after this thread.)

    • @MatteoMi
      @MatteoMi 11 months ago +15

      I'm not a specialist, but I suppose this is similar to VR, which has also been around since the 80s, but the tech wasn't mature enough. I mean, maybe.

    • @EmileChill
      @EmileChill 11 months ago +2

      @linuxmill I used Autodesk 123D Catch, which isn't available anymore. I believe it was the same kind of technique, but I'm not 100% sure.

    • @danielalorbi
      @danielalorbi 11 months ago +19

      Yup, the new thing here is using it to render radiance fields in real time

    • @EmileChill
      @EmileChill 11 months ago +2

      @@danielalorbi That's incredible!

    • @stephanedubedat5538
      @stephanedubedat5538 11 months ago

      While the technique is not new, its application to NeRFs is.
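
    A rough sketch of the older idea @linuxmill describes, under my own assumptions rather than any specific 90s implementation: splat a Gaussian kernel around each scattered point to build an implicit scalar field, then contour it at an iso-value.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((20, 2))                      # scattered sample points
    gx, gy = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))

    field = np.zeros_like(gx)
    for px, py in pts:                             # "splat" one Gaussian per point
        field += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * 0.05 ** 2))

    inside = field > 0.5                           # contouring = iso-surfacing this field
    print(f"{inside.mean():.2%} of the grid lies inside the iso-contour")
    ```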

  • @Oho_o
    @Oho_o 11 months ago +1

    Those Gaussian splats look like galaxies in space at 2:07 .. ;O

  • @TheCebulon
    @TheCebulon 11 months ago +7

    The whole time, I thought I was watching videos and kept wondering about the 3D. 🤣
    Then it hit me: These ARE 3D renders.
    Absolutely stunning.

  • @patjackmanesq
    @patjackmanesq 11 months ago +11

    2.7k subs is a ridiculously low amount for such quality videos! Great work, brother

  • @wozniakowski1217
    @wozniakowski1217 11 months ago +9

    I feel like those galaxy-like ellipses with feathered edges are THE new polygons and soon this rendering method will replace them, especially in the gaming industry. What a time to be alive

    • @spyral00
      @spyral00 11 months ago +9

      That depends. Can they support animation, rigging, fluids, etc?
      Voxels are great but they still aren't the norm... Maybe it's just another great tool on the shelf.

    • @bricaaron3978
      @bricaaron3978 11 months ago +1

      I would say the gaming industry would be the last area to use this method. It looks like this is a method of rendering; it has nothing to do with the generation and manipulation of 3D data.

    • @spyral00
      @spyral00 11 months ago

      @@bricaaron3978 True... I would love to see a game made with NeRF point clouds rendered with this, though.

    • @bricaaron3978
      @bricaaron3978 11 months ago

      @@spyral00 Can you, in a few sentences, explain why NERF point clouds are different from any other point cloud so that I don't have to research it, lol?

    • @spyral00
      @spyral00 11 months ago

      @@bricaaron3978 NeRF is an algorithm that generates 3D models (or point clouds) from 2D photos, using neural nets. Pretty amazing stuff, but quite complex and not yet widely used. This technique seems to be just a way to display the results nicely, if I understand correctly. In theory one could make game environments using photos + NeRF as input and this to render them; pretty sure it'd look amazing.

  • @Ironside451
    @Ironside451 11 months ago +3

    Reminds me of that moment in Star Trek Into Darkness when they are looking at security footage and are able to move around inside the footage just like this.

  • @TorQueMoD
    @TorQueMoD 11 months ago +31

    Great video! The RTX 3070 has 8GB of VRAM though, not 4. I'm super excited to see where NeRF will take us in another 5 years! It's a boon for indie developers who don't have the time or budget to create high-quality assets.

    • @stash.
      @stash. 11 months ago

      It varies, I have the 6GB 3070 model.
      Edit: turns out I had the 8GB version, not the 6GB as I mentioned earlier.

    • @GrandHighGamer
      @GrandHighGamer 11 months ago

      @@stash. 4GB would still be incredibly low (and 8GB is already pitiful for a card that cost around $800), to the point where it wouldn't make sense for it to exist at all. At that point a 3060 would be both cheaper and potentially have 4x the memory. I'd imagine this was just a mistake.

    • @esaedvik
      @esaedvik 11 months ago +1

      @@GrandHighGamer 8GB is perfectly fine for the use cases of 1080-1440p gaming.

  • @thenerfguru
    @thenerfguru 11 months ago +1

    Thanks for the shout-out! You can now view the scene in the Nerfstudio viewer, which unlocks smooth animation renders.

    • @OlliHuttunen78
      @OlliHuttunen78  11 months ago

      Yes. I just noticed your new video about it. Have to try it. Thanks Jonathan!

  • @o0oo888oo0o
    @o0oo888oo0o 11 months ago +1

    Great, the best videos about this NeRF niche that I've found so far. Keep it up!

  • @stash.
    @stash. 11 months ago +1

    Bringing old family photos to life will be a huge market boom

  • @8eck
    @8eck 11 months ago +2

    I remember when I first tried NeRFs. Since then, they have evolved to insane quality!

  • @ChronoWrinkle
    @ChronoWrinkle 11 months ago +1

    Hot damn, it should be possible to extract depth, normals, and glossiness from such a capture. This is insane!

  • @vadimkozlov3228
    @vadimkozlov3228 8 months ago +2

    Fantastic and very professional YouTube channel. Appreciate your work.

  • @drekenproductions
    @drekenproductions 11 months ago +1

    thanks for linking to the nerf guru. could come in handy some day if i decide to try this!

  • @jamesleetrigg
    @jamesleetrigg 11 months ago +14

    If you watch Two Minute Papers, there's a new radiance field technique that is over 10 times as fast and better quality, so look forward to seeing this in VR/AR.

    • @Barnaclebeard
      @Barnaclebeard 11 months ago +21

      Can't stand to watch TMP anymore. It's nothing but paid content and catchphrases. I sure would love a channel like the old TMP.

    • @primenumberbuster404
      @primenumberbuster404 11 months ago +9

      ​@@Barnaclebeard fr 😢 Many of those papers are actually not even peer reviewed.

    • @Barnaclebeard
      @Barnaclebeard 11 months ago +8

      @@primenumberbuster404 And it's exceedingly rare that there is any analysis or insight beyond, "imagine what it can do two papers down the road!" anymore.

    • @Summanis
      @Summanis 11 months ago +1

      Both this video and the TMP one are on the same paper.

  • @Dartheomus
    @Dartheomus 11 months ago

    My mom walked in the room and asked what the hell I was doing. I told her to just relax. I'm gaussian splatting.

  • @BlenderDaily
    @BlenderDaily 11 months ago +1

    so exciting! thanks for the explanation:)

  • @damsen978
    @damsen978 11 months ago

    This is literally what will follow photographs and images in general: captured moments of your family and friends that you can see in full 3D. Now we need a device that captures these automatically at the click of a button.

  • @LaVerite-Gaming
    @LaVerite-Gaming 11 months ago +1

    It's beautiful that the first image I ever saw rendered this way is a Captain Haddock figurine ❤

  • @TheABSRDST
    @TheABSRDST 11 months ago

    I'm convinced that this is how our vision works irl

  • @romanograsnick
    @romanograsnick 10 months ago

    Astonishing achievements, that is great! I hope this may lead set builders to make more models which can be scanned and recreated in 3D space, keeping these sculpting jobs relevant. Thanks!

  • @DailyFrankPeter
    @DailyFrankPeter 11 months ago +1

    All we need now is a scanner in every phone for taking those selfie point clouds and we'll be in the world of tomorrow.

  • @The-Filter
    @The-Filter 11 months ago

    Man, thank you for this video! That stuff is really next gen! Wow! And top-notch presentation! Very relaxing and informative!

  • @Inception1338
    @Inception1338 11 months ago

    One more time, Gauss shows the world who is the king of mathematics.

  • @talis1063
    @talis1063 11 months ago +10

    I'm deeply uncomfortable with how fast everything is moving right now. Feels like anything you touch could become obsolete in months.

    • @flameofthephoenix8395
      @flameofthephoenix8395 11 months ago +1

      Except for farming.

    • @ChainsawGutsFuck
      @ChainsawGutsFuck 11 months ago

      @@flameofthephoenix8395 Or water. Or oxygen. Or physical existence.

    • @flameofthephoenix8395
      @flameofthephoenix8395 11 months ago

      @@ChainsawGutsFuck I figured he was talking about careers.

    • @Sc0pee
      @Sc0pee 11 months ago

      If you mean traditional 3D modelling for gaming/movies or 3D printing, then no, at least not for the foreseeable future, because this technique doesn't produce mesh models, which are a requirement in games and movies for dynamic lighting, animation, surfacing, interactivity, etc. And it also requires you to have the object you want in real life to work with.

  • @HandleBar3D
    @HandleBar3D 11 months ago +3

    This is gonna be huge in real estate, once it’s a streamlined app on both ends.

  • @3dvolution
    @3dvolution 11 months ago

    It's getting better and better, that's an impressive method, thanks for sharing ;)

  • @MonsterJuiced
    @MonsterJuiced 11 months ago +9

    This is fascinating! I hope there's going to be some kind of support for Blender/Unreal/Unity soon; I would love to play with this.

    • @Jackpadgett-gh8ht
      @Jackpadgett-gh8ht 10 months ago

      There is support for it! Volinga AI, search it up.

  • @luketimothy
    @luketimothy 10 months ago

    Just imagine a machine that can generate point clouds around itself at a rate of 60 per second, and a technique like this that can render that point cloud at the same 60 per second rate. Truly 3D video. Would be amazing.

  • @chosenideahandle
    @chosenideahandle 11 months ago

    Terve Olli! Another Finn with an awesome YouTube channel (I'm not including myself 😁)! Thanks for keeping us up to date on what is going on with this cutting-edge stuff.

  • @marco1941
    @marco1941 11 months ago

    Wow, now we’ll see really interesting development in video game production and of course in the results.

  • @pan6593
    @pan6593 11 months ago

    Great summary, insight and practical example - thanks!

  • @EBDeveloper
    @EBDeveloper 11 months ago

    Glad I found your channel ;) .. nice to meet you Olli

  • @fontenbleau
    @fontenbleau 11 months ago +1

    Technology from the Minority Report movie, shown 20 years ago; that's how long it takes to make.

  • @MommysGoodPuppy
    @MommysGoodPuppy 11 months ago

    Yesss, I can't wait for this to be utilized in VR. I assume we could render absolutely insane detail in real time, for simulating reality or having big-budget CGI movie visuals in games.

  • @renko9067
    @renko9067 10 months ago

    This is basically how the actual visual field works. Overlays of sensations, sounds, and smells complete the illusion of subject/object. It is the zero dimension quantum wave field.
    The scene ‘moves’ in relation to the ‘eyes’ of an apparent subject.

  • @michaelvicente5365
    @michaelvicente5365 11 months ago

    ohhh thanks for explaining, I saw a couple things on twitter and was wondering what this gaussian splatting was about!

  • @jimmyf2618
    @jimmyf2618 11 months ago +1

    This reminds me of the old "Unlimited Detail" video promising infinite rendering.

  • @lordofthe6string
    @lordofthe6string 11 months ago

    This is so freaking cool, I hope one day I can make a game using this tech.

  • @domovoi_0
    @domovoi_0 11 months ago

    Incredible.
    Love and blessings!

  • @Neura1net
    @Neura1net 11 months ago +1

    Very cool. Thank you

  • @Eddygeek18
    @Eddygeek18 11 months ago +5

    The next step is getting it working with animations and physics, and then you have a new game rendering method. I have always felt mesh rendering is limited and have been waiting for a new method such as this. Hope it's the one this time, since there have been quite a few duds in the past.

    • @0ooTheMAXXoo0
      @0ooTheMAXXoo0 11 months ago

      Apparently Dreams (2020) on PS4 uses this technique.

    • @Tattlebot
      @Tattlebot 11 months ago +2

      Games consistently refuse to use new technologies, because teams don't have faith in leadership, and don't have the skills. Games are getting less featureful and interactive. Talented writers are negligible. The result is an oversupply of unsophisticated chew toys. No incentive to upgrade from 5700 XT type cards.

    • @catsnorkel
      @catsnorkel 11 months ago +1

      Until this method can produce poly models that can properly fit into a pipeline, I really don't see this being widely used in either the games or film industries, but I can see it being used a lot in archvis for example.

    • @Eddygeek18
      @Eddygeek18 11 months ago

      @@catsnorkel I know what you mean: GPUs are designed for polygons and engines have very specific mechanisms for them, but I don't think it would take too much to modify existing software to make efficient use of the GPU for this technology. Both use techniques the hardware is capable of, so if invested in, I don't think it would take Unity or Unreal much more time to integrate the tech into their engines compared with poly-based rendering pipelines. Since it uses a scattering-field type of rendering, it shouldn't be much different.

    • @catsnorkel
      @catsnorkel 11 months ago +2

      @@Eddygeek18 Thing is, this technique does not support dynamic lighting, and isn't even built in a way that could be modified to support it. Same with animation, surfacing, interactivity, etc. It is a really cool idea to render directly from point cloud data like this, skipping most of the render pipeline; however, the parts that are skipped over are **where the game happens**.

  • @GraveUypo
    @GraveUypo 11 months ago

    These are so good that you can probably use screenshots of these models to make 3D models with old photogrammetry software.

  • @GeekyGami
    @GeekyGami 11 months ago +1

    This point cloud technology is much older than 2020.
    It has been tried for a decade at this point, on and off.

  • @joelmulder
    @joelmulder 11 months ago

    Once video games and 3D software rendering engines start to use this… Oh boy, that’s gonna be something else

  • @MaxSMoke777
    @MaxSMoke777 11 months ago +3

    It's a cute way to make use of point clouds. I'm certain it'll be handy for MRIs and CT scans, but it's nowhere near as useful as an actual 3D model. You couldn't use it for video game models or 3D printing. It could be extremely useful for real-time point-cloud video conferencing, since it's so fast.

    • @catsnorkel
      @catsnorkel 11 months ago

      agreed. it will probably find a few niche use cases for certain effects that are layered on top of a traditional poly-based render pipeline, but it's not going to completely take over, probably ever. This is a technology developed for visualisation, and not really suitable for games or film.

  • @icegiant1000
    @icegiant1000 11 months ago

    How long before micro drones are just buzzing up and down our bike paths, sidewalks, streets and so on, grabbing HQ images, beaming them to the cloud, and by the end of the day you can do a virtual walkthrough of the local fair, or the car dealership, or a garage sale on the other side of town, or the crowd at a football game. Only thing stopping us is CPU power and storage, and that is getting solved fast. Exciting times! P.S.- How long before people stay home, and just send out their micro drones, and view everything in VR at home. A lot safer than getting mugged.

  • @metatechnocrat
    @metatechnocrat 11 months ago

    Well one thing it'll be useful for is helping me examine images for clues to hunt down replicants.

  • @Datdus92
    @Datdus92 11 months ago

    You could walk in your memories with VR!

  • @MilesBellas
    @MilesBellas 11 months ago +1

    The entire VFX industry is under massive disruptive growth that now prioritizes INDIVIDUALS.....
    ....a huge paradigm shift.

  • @endrevarga5111
    @endrevarga5111 10 months ago

    Idea!
    1. Make a low-poly 3D scene in Blender. It's a 3D skeleton. Use colors as object IDs.
    2. Using a fast real-time OpenGL engine, quick-render a few hundred images, placing the camera at different locations as if photographing a real scene for 3DGS creation. Distributing the cameras should be easy using Geometry Nodes (see the sketch after this list).
    3. Using these images, use Runway-ML or ControlNet etc. to re-skin them according to a prompt. If possible, use one reference image to ensure consistency.
    4. Feed the re-skinned images to the 3DGS creation process to create a 3DGS scene.
    Et voilà, an AI-generated 3D virtual reality is converted to 3DGS.
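
    A hedged sketch of step 2, using plain bpy instead of Geometry Nodes; the camera count, radius, and the "SceneRoot" aim target are my assumptions, not part of the original idea.

    ```python
    import bpy
    import math

    count, radius = 100, 8.0
    target = bpy.data.objects.get("SceneRoot")     # hypothetical object at the scene center
    for i in range(count):
        # golden-angle spiral gives roughly even coverage of a sphere
        z = 1.0 - 2.0 * (i + 0.5) / count
        r = math.sqrt(1.0 - z * z)
        theta = i * math.pi * (3.0 - math.sqrt(5.0))

        cam_data = bpy.data.cameras.new(f"ScanCam{i}")
        cam = bpy.data.objects.new(f"ScanCam{i}", cam_data)
        cam.location = (radius * r * math.cos(theta),
                        radius * r * math.sin(theta),
                        radius * z)
        bpy.context.scene.collection.objects.link(cam)

        track = cam.constraints.new(type='TRACK_TO')   # aim each camera at the target
        track.target = target
        track.track_axis = 'TRACK_NEGATIVE_Z'
        track.up_axis = 'UP_Y'
    ```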

  • @MartinNebelong
    @MartinNebelong 11 months ago

    Great overview and certainly exciting times! 😊

  • @tristanjohn
    @tristanjohn 11 months ago

    Absolutely phenomenal!

  • @imsethtwo
    @imsethtwo 11 months ago

    A solution to the floating artifacts would be to just make procedural volumetric fog and use it to your advantage 😎

  • @triplea657aaa
    @triplea657aaa 11 months ago

    Gauss strikes again!

  • @taureanwooley
    @taureanwooley 11 months ago

    Perforated disc layering at one point, with Bézier curve translations and HDR data mining ...

  • @i2c_jason
    @i2c_jason 11 months ago +1

    It seems like there is a divergence in 3D modeling as AI comes online... the artistic 3D formats with no geometrical accuracy seem to be leading, but when will we get 100% geometrically correct AI output, such as STEP files? STEP files are extremely complex to create and parse, so will this be 5-10 years out before we get such a thing as an AI output?

  • @NecroViolator
    @NecroViolator 11 months ago +1

    I remember an Australian company making infinite graphics with something similar. They made games and other stuff.
    Can't remember the name, but it was many years ago. :(

  • @wolfzert
    @wolfzert 11 months ago

    Woow, how nice, one more point to keep moving forward with AI

  • @MotMovie
    @MotMovie 11 months ago

    Good stuff mate. Very interesting indeed, and great to see such an in-depth look into things with self-made examples. As a side note, the music is a bit big for this; I mean, it's not a cure for cancer (just yet), so perhaps go a bit easier on the "Life will win again, there will be a beautiful tomorrow" soundtrack :p . Anyhow, cheers, will be back for more.

  • @mauricedevries7154
    @mauricedevries7154 11 months ago

    The moment that we realize that we are already living in a simulation..

  • @tonygardner4077
    @tonygardner4077 11 months ago

    liked and subscribed ... hi from New Zealand

  • @lolmao500
    @lolmao500 11 months ago

    Apparently the next gen of graphics cards will all have a neural network chip on them.

  • @Jakeuh
    @Jakeuh 11 months ago

    This will be able to be processed in real time at some point in the future. Just soak that in and think about Apple Vision's "memory" playback.

  • @GauravSharma-gt2gp
    @GauravSharma-gt2gp 11 months ago +1

    If this is so amazing, then why did 360 videos fail to gain popularity?

  • @afti03
    @afti03 10 months ago

    Fascinating! could you make a video on what would be the most relevant use cases for this type of technology?

  • @helper_bot
    @helper_bot 11 months ago

    exciting news!

  • @ziomalZparafii
    @ziomalZparafii 11 months ago

    Closer and closer to Esper from Blade Runner.

  • @costiqueR
    @costiqueR 11 months ago

    I tell you this: it is a game changer for the industry...

    • @catsnorkel
      @catsnorkel 11 months ago

      depends on the industry though. Archvis yes, absolutely.
      Games and film, it will only really have a minor impact since it isn't really geared towards those use cases.

  • @Felenari
    @Felenari 11 months ago

    Good watch. Subscribe earned. Haddock is one of my faves.

  • @ukyo6195
    @ukyo6195 10 months ago +1

    Looks like we have found Kim Jong-un's secret YT channel.

  • @soscilogical1904
    @soscilogical1904 11 months ago

    What's the file size of a NeRF vs. 3D files of similar quality? Do scenes load faster into VRAM?

  • @ayymang2157
    @ayymang2157 11 months ago

    She Next on my Big Thing till my Gaussian Splat

  • @eekseye666
    @eekseye666 11 months ago

    Oh, I love your content! I should have subscribed the last time I came across your channel. I didn't, but I'm doing it now! )

  • @gamiensrule
    @gamiensrule 11 months ago

    Amazes me. There's an old Star Trek TNG episode where the computer rebuilds part of a girl's face that's hidden behind someone else, and even as a kid I thought to myself, "they'd have to be able to somehow measure light and use it to reconstruct the hidden areas"...
    And now we're basically doing it, right?

    • @bricaaron3978
      @bricaaron3978 11 months ago +1

      What do you mean "measure light"? If it's essentially a photograph, then the only light available is what you see --- what actually entered the camera lens.
      Also, keep in mind that _any_ method that does something like this is not showing something that's _real._ It's fake. In other words, let's say a computer _did_ have the "light to measure", it _did_ have enough information to render the missing part of the girl's face in the picture. It's still not showing what happened, what _was,_ but merely what _might have been._ How do you know that a fly didn't land on that part of her face? How do you know exactly how the wind was blowing her hair on that side of her head? Etc., etc.
      This is the case whether it's photographs, movies, or video games. Such technology can never magically make an image more _accurate._ Anything that it adds to the picture is 100% fake.

    • @gamiensrule
      @gamiensrule 11 months ago

      @@bricaaron3978 I'm talking about (very generally speaking) the way you can take a spherical 360 photograph and use it to light a 3d scene. It's a very very minor version of what they did in the episode, but it occurred to me that if you had enough knowledge about how light reacts with substances and itself, you could potentially measure bouncing light and even construct what is likely hidden based on machine learning. We're a long way from it, but things are clearly going that direction.
      For instance, measuring a discoloration (like a shadow or some odd smudge of blue light) on the side of someone's face, with all the other information taken into account from the entire image (or better yet, several) and you could theoretically "guess" what is making the discoloration on the side of her face using AI.

    • @bricaaron3978
      @bricaaron3978 11 months ago

      @@gamiensrule *"I'm talking about (very generally speaking) the way you can take a spherical 360 photograph and use it to light a 3d scene."*
      Light and matter are two completely different things.
      *"...and you could theoretically "guess" what is making the discoloration on the side of her face using AI."*
      Yes, exactly what I explained. Whether it is a human being manually guessing, as I have done myself using Photoshop, or it is a computer algorithm --- it is nothing more than fake information that attempts to make an image look acceptable, or "real". The final result has nothing to do with reality. It's 100% fake.

    • @gamiensrule
      @gamiensrule 11 months ago

      @@bricaaron3978 We agree, you're just convinced for some reason that I'm trying to say something else.

    • @bricaaron3978
      @bricaaron3978 11 months ago

      @@gamiensrule Well, that's alright then!

  • @punktachtneun9743
    @punktachtneun9743 11 months ago

    How did you achieve the out-of-focus effect in the Tintin figure?

  • @XRCADIA
    @XRCADIA 11 months ago

    Great video man, thanks for sharing

  • @Moshugaani
    @Moshugaani 11 months ago

    I wonder if the high demand for VRAM could be circumvented by using some other memory to compensate, like normal RAM or part of your SSD?

  • @MarinusMakesStuff
    @MarinusMakesStuff 11 months ago +2

    Awesome!!! Though, for me, all that matters is getting a correct mesh and I couldn't care less about textures personally. I hope the mesh generation will soon also make leaps like this :)

    • @joonglegamer9898
      @joonglegamer9898 11 months ago

      Yeah, you're spot on. This is not new; there might be new elements to it, which is great, but I won't bat an eye until they come up with a perfect, easy-to-seam, seamless UV-mapping model. We still have to make our models animatable, relying on low poly to get the most out of the CPU/GPU in any setup. So yeah, until then we can keep dreaming; it hasn't happened in 40+ years.

  • @zdspider6778
    @zdspider6778 11 months ago +4

    5:13 3070 has 8GB VRAM, not 4. Same with 3070Ti. Nvidia should be ashamed of themselves. 16GB should have been the default by now.

  • @ShahabEslamian
    @ShahabEslamian 11 months ago

    So I get a video of my room with my phone's camera and this software turns it into 3D?

  • @DavidKohout
    @DavidKohout 11 months ago

    This really makes me feel like living in the future.

    • @DavidKohout
      @DavidKohout 11 months ago

      This just confirms that we're living in the best times, from the start of the phone technology to this.

  • @liliangimenez4461
    @liliangimenez4461 11 months ago +1

    How big are the files used to render the scene? Could this be used as a light field video format?

  • @baxtardboy
    @baxtardboy 11 months ago

    How long before we can strap on a headset and actually go inside our old photograph albums?

  • @wasdfg662
    @wasdfg662 10 months ago

    Is it possible to export the results of Gaussian splatting as 3D files/meshes?

  • @GfcgamerOrgon
    @GfcgamerOrgon 11 months ago +3

    I can see Stable Diffusion going in that direction someday. I remember games that utilized cubemaps; there is already a virtual enterprise that uses a form of point clouds instead of polygons. It would be wonderful to train such data into three-dimensional Stable Diffusion models.

    • @Savigo.
      @Savigo. 11 months ago +4

      Cubemaps are literally planar images projected into a cube.

    • @GfcgamerOrgon
      @GfcgamerOrgon 11 months ago

      @@Savigo. Yes! It's where it all started! Like Lunacy / The Secrets of Da Vinci (2006) and others. Now with this, instead of ordinary cubemaps we can overlap those images in very realistic static-light games without the performance loss of raytracing, and even begin to train AI on these, so we can create interactive images to tell interactive stories!

  • @yurygaltykhin6271
    @yurygaltykhin6271 11 months ago

    I am pretty sure that this (or a similar) tech, in conjunction with the development of neural engines, will finally lead to the creation of fully immersive, high-definition virtual worlds that are very cheap to produce. This is a mixed blessing to me because, in the near future, it will be impossible to distinguish artificial media from legitimate "real" images and videos. My bet is that soon we will see a legislative trend toward compulsory disclosure of the origin of an image or video when publishing, first for the mass media and later for any publications, including social media for the general public. Nevertheless, in a few years from now, I expect to see new video games that will make the games made with Unreal Engine 5 look as unrealistic as, idk, Doom 2.

  • @foxy2348
    @foxy2348 11 months ago

    amazing. How is this rendered? In what program?

  • @striangle
    @striangle 11 months ago

    absolutely amazing technology! super excited to see where the future takes us. thanks for sharing! ..side question - what is the music track on this video?

  • @EveBatStudios
    @EveBatStudios 11 months ago

    I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update of Imagine 3D. Fingers crossed; that would be insane.

  • @manzell
    @manzell 11 months ago

    Nerfies! What a time to be alive!

  • @chukukaogude5894
    @chukukaogude5894 11 months ago

    Now we put this together with VR or XR or whatever they call it, and boom. We can recreate old images and animate them.

  • @JorgetePanete
    @JorgetePanete 11 months ago

    Please clarify that it isn't NeRF in the video

  • @ferencszabo3504
    @ferencszabo3504 11 months ago +1

    Can it produce a 3D mesh? Like an OBJ file?

    • @OlliHuttunen78
      @OlliHuttunen78  11 months ago

      Unfortunately no. These are like volume models. In the Luma AI service it is possible to convert NeRF models to mesh, but it will lose all these qualities like transparency and reflections.