2 minute papers meet 30 second papers. What a time to be alive!
60 second papers
This is much more enjoyable than that other Indian-sounding ass. At least I don't have to watch at 2x speed to keep from falling asleep, or skip the first half because it's just padding that everyone already knows about
52.1 second papers
52.477 second papers actually
Hello Scholar
this has to be the best advertisement for a technology i wont directly interact with in the near future that i have seen
Lmao exactly
GAUSSIAN SPLATTING 2: ELECTRIC BOOGALOO
Starring Jason Statham: featuring Jason Statham.
💀
@@shApYT Jason Splatham
i still don't understand this joke , even after a month
gaussian bukkake
"but why does this matter, isn't it just a 3D video"
JUST A 3D VIDEO, thats ground breaking.
huh
i love how short and to the point this video is, snappy editing and all
big time. what a banger
designed for zoomers that have low iq
It's indeed quite brilliant in its making!
bill wurtz!
It's too quick for my brain to process.
Anyone else think it was jerma in the thumbnail?
Yeah lolll
Thank you. I'm not insane
*_-the ceo of 💀_*
"It's all open source" means it will get real development from people interested in the field and the many uses of the technology, rather than being tied to the agenda of corporations, who often open up a new field only to block further development, doing the minimum required to get it patented and maximize profits
A bit late, but you probably never looked at the CS research community before. Nearly all student/professor papers come with a source code repo, and a lot of corporate research does as well. Go through a NeurIPS archive and you can see hundreds of repos. Nearly all of the recent AI breakthroughs are based on an OPEN paper published by someone at Google (the transformer paper), which ironically has Google stressed now, because they didn't get the opportunity to profit from their own invention before the competitors did.
FOSS is the only way out of the impending dystopian technocracy hellscape. Change my mind.
@@MacroAggressor agreed until they somehow make it illegal
@@PrivateCCC I totally agree that the powers that be (and more importantly, their donors... Gates, Bezos, Google, etc) have a vested interest in undermining FOSS. As with most things, the best way to make it infeasible for them to steal a Right from the people, is for a critical mass of people to exercise that Right.
@@MacroAggressor corporations routinely steal the labor of FOSS developers and exploit them all the time.
Loving the rapid improvement of tech and AI recently
i believe next year will be very interesting
Also love the rapid movement of this dude's video lol
About one of the few things that still get me excited in the world we are living in.
Oh, yeah, it's great that ONE A$$hole can now drown RUclips with TENS OF THOUSANDS of videos all made by an AI, and make MILLIONS OF DOLLARS in the process. Great for this already broken society. Great to have billions of videos in our media that will keep us from seeing any real content. And nothing new will ever be made anymore.
And the absolute stagnation of societal development means every bit of technology only sharpens the ridiculous irrationality of modernity and capitalism. If you were aware at all, it should be immensely worrying.
jerma took the edibles. again.
Bro, I work on planes for a living; I have no idea what you're even talking about. I have superficial general knowledge and no reason to be here. It's like a swimmer accidentally walking into a calligraphy course and going "yeah, I've written once". That's me. But I like watching your goofy and unique videos that explain concepts I never knew existed or mattered.
This is truly amazing. And also very simple; I am surprised we are only seeing this now. Tech for small networks has been mature for a while
A minute of my time well spent on a subject I know nothing about, good video
Wow super quick and condensed recap in a funny way. Nice
Everything is a function
They're a function, you're a function, I'm a function!
Functional programming? More like unfunctional programming
dysfunctional @@youtubehandlesux
my life serves no function 😎😎🤯
in Category theory - Maxwell, probably
Wow what an interesting time to be alive...gonna lurk in the discord.
Bill Wurtz sends his compliments 🤣
Jokes aside, this is exciting stuff!
I LOVE YOUR CONTENT
I love your fast paced comedic and informative approach to these things, 10/10 :)
I really wish that this video was edited for an audience other than people with a fried attention span because it seems interesting.
If you got the attention span, you can do the research yourself 👀
bro really just fit a whole essay into a minute
This feels so much like a Bill Wurtz video I love it
A fun potential use for this would be a game where the landscapes and assets aren’t actual objects but props made by artists, kind of similar to how games like Hylics use claymation
So basically, we already developed the software for holographic imagery before the hardware made it there?
yes.
technology!
@@realcoolguy123 ruclips.net/video/Fc1P-AEaEp8/видео.html
Ever since software has existed, it has outpaced hardware. The first computer programs were written out entirely on paper before computers that could run them even existed
@@dsdy1205 yes it always used to exist a level above. But this adds an additional dimension. That is certainly some progress
I also find it epic how 2D photography existed before 2D rendering was a thing, but 3D rendering was already a thing ages before 3D photography became a thing
Direct Mapping: Each splat directly corresponds to a point (or set of points) in the mesh. As the splat undergoes any transformation, the mesh point follows suit. If the splat moves to the left, the point moves to the left. If the splat grows in size, perhaps the point-mesh around that splat could expand or become denser.
Fluidity in Motion: Imagine the splats as buoys floating on water. As they move, the net (or mesh) attached to them adjusts and moves with them. This creates a fluid, dynamic motion in the mesh that's directly influenced by the movement and behavior of the splats.
Temporal Dynamics: If the splat data is changing over time (like in a sequence or animation), the point-mesh would continuously adjust and evolve frame by frame, creating a moving, dynamic representation.
Overlapping and Merging: If two splats come close or overlap, the corresponding points in the mesh might also come closer, merge, or influence each other in some manner, depending on the design and algorithm in place.
Interactivity: This approach also opens doors for interactivity. A user (like a designer or animator) could directly manipulate a splat and see real-time adjustments in the corresponding mesh, providing a tactile and intuitive way to shape and design 3D structures or animations.
In essence, by directly linking the behavior of the splats to the motion and structure of a point-mesh, you can create a dynamic, responsive, and potentially more intuitive system for mesh generation and manipulation. It's a concept that harmoniously blends data representation with visual aesthetics and user interaction.
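The "Direct Mapping" idea above can be sketched in a few lines. This is a hypothetical toy, not any existing library's API: each splat stores its mesh patch as scale-normalized offsets from its center, so translating or growing the splat carries the attached points along, exactly as described.

```python
import numpy as np

class Splat:
    """Toy splat that drags an attached mesh patch along with its transforms."""

    def __init__(self, center, scale):
        self.center = np.asarray(center, dtype=float)
        self.scale = float(scale)
        self._offsets = np.empty((0, 3))   # patch points relative to center
        self.patch = np.empty((0, 3))      # patch points in world space

    def attach_patch(self, points):
        # Store offsets normalized by the current scale, so later
        # translations and rescalings of the splat move the patch too.
        points = np.asarray(points, dtype=float)
        self._offsets = (points - self.center) / self.scale
        self._update()

    def translate(self, delta):
        # "If the splat moves to the left, the point moves to the left."
        self.center = self.center + np.asarray(delta, dtype=float)
        self._update()

    def rescale(self, factor):
        # "If the splat grows in size, the point-mesh around it expands."
        self.scale *= factor
        self._update()

    def _update(self):
        self.patch = self.center + self._offsets * self.scale

s = Splat(center=[0.0, 0.0, 0.0], scale=1.0)
s.attach_patch([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
s.translate([1.0, 0.0, 0.0])   # patch follows the splat
s.rescale(2.0)                 # patch expands around the new center
print(s.patch)
```

Temporal dynamics then falls out naturally: call `translate`/`rescale` once per frame and the patch evolves with the splat.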
You rock. Thank you for this!
thanks chatgpt
More botsplaining doesn't make this tech more impressive, you know. New developments in Global Illumination would be far more useful than this.
@@youtubehandlesux Well, be the change you want to see in the world
I'm pretty sure this is just bones in current 3D rendering.
i'm in love with this video style
The first Gaussian Splatting that I ever produced? You're drinking it, right now!
It took 50 seconds to drop my jaw and make me think "holy cow."
That's incredible! Can't wait to see what the future will bring!
This tech is developing insanely fast! This feels like a major turning point in computer graphics: finally, photorealistic graphics. Not almost photoreal; it's literally constructed from photos.
It's the first tech that allows for realistic foliage in a 3D scene (in my opinion). I hope that someone finds a way to effectively use it in games.
AFAIK this does not allow any kind of reflection, though. Anything other than rough, evenly lit surfaces won't change realistically depending on the point of view. Maybe there is a way to combine this method and traditional ones.
@@ondrej_hrdina NeRFs do allow for accurate reflections, but only of what was visible to the camera in the training data. Photogrammetry can't do reflections at all
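The view-dependent effects mentioned in this thread are typically baked in with low-order spherical harmonics (SH): each point stores a few SH coefficients per color channel, which are evaluated for the current viewing direction. A minimal sketch, using the standard real SH basis constants for degrees 0-1 (the coefficient values below are made-up example data, not from any trained scene):

```python
import numpy as np

SH_C0 = 0.28209479177387814   # degree-0 real SH basis constant
SH_C1 = 0.4886025119029199    # degree-1 real SH basis constant

def sh_color(coeffs, view_dir):
    """coeffs: (4, 3) SH coefficients per RGB channel; view_dir: unit 3-vector.
    Returns an RGB color that changes with the viewing direction."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # (3,) RGB

coeffs = np.array([
    [1.0, 0.8, 0.6],   # degree-0 term: base color seen from any direction
    [0.0, 0.0, 0.0],
    [0.3, 0.3, 0.3],   # degree-1 z term: brighter when viewed from +z
    [0.0, 0.0, 0.0],
])
front = sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
side = sh_color(coeffs, np.array([1.0, 0.0, 0.0]))
print(front, side)
```

This is why such reflections only cover what the training cameras saw: the coefficients are fit to the observed views, so unobserved directions just get a smooth extrapolation.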
Until people can finally feel how lifeless all this is.
Remember back in 2023 when we all thought this was sooo photo-realistic?
Short and to the point!!! Thank you!! 💖
Love this format!
your narration is genius, it's non stop, has comic breaks and I loved that idea of using high pass filter to make a second voice. I don't even know what gaussian splatting is, i'm just watching for the fascinating presentation
It’s inspired by bill wurtz. Still really cool
If you think this is good you should watch "The History of the Entire World, I Guess" by Bill Wurtz. It's like this, but More.
Gaussian is neat! So photogrammetry is the process of taking a giant chunk of images, throwing them into the AI, and getting back a 3D scan of the object via dark AI sorcery. Problem is, actual 3D objects are complex, so you're trading size and performance for quality, and if you're just displaying something, it's overkill. NeRF took that and made a process that just builds a giant point cloud and fudges the details, and you get a photorealistic 3D environment with way less work. Unfortunately, it has to exist in its own weird little container to work. Gaussian splatting takes less processing to do the same thing, but can run in a nice lightweight container that's compatible with other things.
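The core rendering idea behind the splatting side of that comparison can be shown in miniature: instead of meshing a surface, blend colored Gaussians directly onto an image. This 2D toy is only the flavor of the technique; the real method uses anisotropic 3D covariances, depth sorting, and alpha compositing.

```python
import numpy as np

def splat(image, center, sigma, color, weight=1.0):
    """Additively blend one isotropic 2D Gaussian of a given color onto image."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    g = weight * np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian footprint
    image += g[..., None] * np.asarray(color, dtype=float)
    return image

img = np.zeros((64, 64, 3))
splat(img, center=(20, 20), sigma=5.0, color=(1.0, 0.2, 0.2))  # reddish splat
splat(img, center=(40, 40), sigma=8.0, color=(0.2, 0.2, 1.0))  # bluish splat
print(img[20, 20], img[40, 40])  # each location is dominated by its nearest splat
```

Scale this to millions of splats with learned positions, shapes, and colors, and you get the lightweight, rasterizer-friendly representation the comment describes.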
Extremely well said, you managed to describe it perfectly. I began this video relaxed and amused thinking it was just another meme video, but I was physically tense in awe by the end. What a brilliant storyteller this man is.
super cool! super curious to use these techniques in unreal
Every second I watched this I feel like I’m learning something I regret knowing, but I can’t pause because I’m too curious about what’s next so it’s just a cycle until the video ends and I feel empty inside
Damn, that looks really cool! Wonder if this will possibly also have use cases for video editing or some experimental art with motion blurs or glitch effects.
Corridor's about to go hard with this
now THAT is how you advertise an awesome piece of tech. I've recently become interested in 3D splatting and NeRFs.
That’s my favourite kind of editing
I love the style of the video!
This makes me feel like with enough 360 degree cameras for fidelity we can get 3d videos which can be paused, rewound and slowed down while also being able to move around in the 3d space and look at entirely different parts. Ideally as simply as recording a video with multiple perspectives.
I can't seem to find it anymore, but years back there was a recreation of some serious explosion for analysis, extrapolated from all the footage of the event and the position of each camera. It feels like we could do that serious work with just a few phones in a room, at the click of a button, in minutes.
love that 1 month of Quick change
DAYUM THAT SOUNDS REALLY USEFUL
Love the Bill Wurtz style.
Yeah, I was thinking of Bill Wurtz.
great format, did not waste time and is clear
This is the Vertor man.
Thanks for the update! Very cool
i like the trap ad libs grraa-ta-ta-ta
What a cool video. 50 seconds and I know everything i needed about this! Love it, thanks🙏
What is splatting, then?
*you could make a religion outta this*
Finally we will be able to pick a pov for our taste
I have a playlist called “cool stuff” this is definitely going in that playlist 😂
Yo that’s actually great for computer vision
Scary, but really impressive algebruh
well this fascinating stuff just went over my head haha
This is how a Two Minute Papers and bill wurtz collab would look like
hahahahah yes
temporal splatting. bring it on.
I can already see those CSI scenes becoming an actual real thing😂
incredible TWO DEE Gaussian splattering
This technology could be the pinnacle of computing. We've gotten to the point where we can bring all our fantasies to life, getting closer to a 'perfect world', at least in VR.
intriguing!
i have no idea what i just watched but i'm not complaining
Super nice video!
Imagine how much further along humanity would be if every exciting new scientific paper were condensed into a 30 second clip anyone can understand / be intrigued by to learn more.
My Apple Vision Pro need this technology!
That rainbow pattern looks like something I created once while messing with color tiling, where the RGB of a tile's neighbors was slightly different in some random way.
Incomprehensible
top level content. 10/10
I appreciate the Bill Wurtz inspiration
Dude just spawned, dropped a one minute paper, and yeeted.
"No time to explain, Morty!"
What I gathered from this is that, animated p*rn will now be much easier to create. Just lovely.
I thought the thumbnail was Jerma…
this feels like one of those random Bill Wurtz videos that are really fucking fast
i don't know who you are, or what you're talking about, but this video made me happy
you deserve this sub
this video reminded me of bill wurtzs editing style. pretty good stuff
it's basically the difference between keyframed animation and frame by frame animation, assume each frame is an entirely new scene or just a slightly edited version of the last scene
Loki 2 spaghettification
It makes me think of the video surveillance footage in the movie Star Trek; Into Darkness, where you can pan around the footage in realtime.
nice editing
Bruh my ADHD is vibing to this shit
The hard part for this is getting training data without any good 360 cameras
This feels like a Bill Wurtz video
imagine your doctor making a 3D model of your organs and checking your insides using VR xDD
someone reinvented bill wurtz.
thumbnail looks like jermas evil sibling
"you've heard of gaussian splatting"
me: uh huh totally i definitely have
I thought the thumbnail was a jerma green screen clip
the only reason I clicked on this
not disappointed?
This looks like with a little improvement it could be used in 3D camera tracking and 3D scanning from footage. Looks similar to a PositionToPoints node in Nuke from the data of a 3D render position pass
that sounds amazing
My gaussian just splatted all over the place!
I would love to be part of such discord groups where We're solving crazy ideas. Please add
beautiful
history of tech, i guess
Amazing subscribed
This video feels like ADHD when muted
I need a longer video
this is lit
What would the camera setup be to make this 4D Gaussian splatting? Like a reverse 360 camera pointed at you?
The way this video is formatted is an excellent example of why it's more useful to write out important ideas in prose
I'm not sure I understand the distinction you're drawing. As opposed to what, writing important ideas in poetry?
yeah. poetry is for cringe essays on youtube @@garshtoshteles
this editing gives me bill wurtz vibes
I want the VR capability that this could provide more than anything