I realized how the simulation was cheating once I saw the dice. At the end, there was a die with a 6 on top and a 1 on the side (which never happens on a fair die), so it seems they forgot to swap all the faces. That's how I realized they probably just change one face of the sim to fit the chosen result.
Sorry, why is this significant, unusual or important? What will it be used for? It's just making the computer do what we want, like a program, no? "make them all land at 6" (of course in code). Serious answers please ❤
It makes anything look like it's not programmed, like a self-fulfilling prophecy. Imagine GTA or any sandbox game: a stiff dead body looks programmed; a ragdoll makes a dead body look simulated. This will make a dead body look like a real dead body, not some random ragdoll pose.
I thought this worked without cheating: just considering every possible outcome of a random experiment and computing exactly the one that leads to the outcome we want.
@0:50 you suggest the results are random. I believe that for this technique to work, the physics engine needs to be 'deterministic', meaning that if you run the simulation the results look random, but if you re-run the same simulation you get exactly the same random-looking result. So it's not truly random. ruclips.net/video/9IULfQH7E90/видео.html
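As a toy illustration of that determinism requirement (a hypothetical stand-in, not the paper's actual engine): seed the physics RNG, and the "random-looking" run becomes exactly reproducible.

```python
import random

def run_sim(seed, steps=100):
    """Toy 'physics' driven by a seeded PRNG: the result looks random,
    but the same seed always reproduces it exactly."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.uniform(-1.0, 1.0)  # stand-in for one physics step
    return state

# Re-running with the same seed gives the identical "random" outcome,
# which is what lets you bake an animation and then re-dress it.
assert run_sim(7) == run_sim(7)
```

This is why the technique operates on a pre-baked trajectory: the motion is fixed and repeatable, and only appearances are edited on top of it.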
So fantasy/sci-fi media/games/shows could finally fix the giant CGI turtle textures in The Rising of the Shield Hero. The CGI degraded by two decades whenever the turtle was in a scene.
Could be a lot of applications in video games, with controllable simulations and all.
Not really; this doesn't work on live simulations, only on pre-baked animations.
Bit difficult given the requirement for a fixed viewpoint. I reckon it's more useful in film, e.g. a realistic-looking building collapse that just happens to avoid any debris landing on the protagonist.
@@monawoka97 Not even that. It effectively doesn't change the animation; it just swaps around objects/materials at a convenient time, where objects are overlapping or colliding and can thus be used for a sleight of hand. It's a cool technique, but not exactly useful or groundbreaking. I think I need to read the paper to seriously understand the excitement; maybe there are some cool tricks regarding occlusion and parallelism of the problem solving.
No more ragdoll dead bodies in GTA
Yeah, unfortunately it feels like the most useful application there would be manipulating player psychology, for instance with loot box mechanics.
I forgot that this is a papers channel and not an AI channel, great to see things outside that space once in a while.
It's good to see a complete and reliable new technology, which can't be said for a lot of AI stuff which gets a bit exhausting
Give me some rendering and light transport content 😭
This the content that made the channel popular to begin with, the AI stuff become quite tiresome honestly.
The reverse engineered physical simulation to match a sound sample is the real magic.
I honestly love that this isn't AI. Sure, AI is cool and impressive, but there is something artisanal about just using "normal" math to achieve this effect.
AI is normal Math
@@animation-recapped You know exactly what they meant lol, no need to get pedantic over this
@jorgezombie78 Not afraid, but it's much easier to reverse engineer and debug "normal" code to understand why a decision is made (with neural networks, it is almost impossible). This implied certainty of "artisanal" systems gives them significant weight over AI in this respect.
@@Vorexia to be quite frank I don’t know what he means. He is completely wrong for assuming any type of AI development doesn’t have basic fundamental math. Would you mind explaining to me what he means because I’m lost lol
@@animation-recappedAI is a type of prediction tool, and on top of that it is hard to understand the model because it uses millions or billions of parameters.
Compare that to the equations you used back in high school to calculate the trajectory of a projectile: they have an exact answer and will always be correct if you can account for all of the conditions. Errors might occur from measuring the inputs, but then you can at least fine-tune the models and know the output will be controlled. With limited information, or with shortcuts to the formulas to improve real-time calculation performance on millions of particles with complex interactions, these intentional models also become predictive, but they are much easier to fine-tune by "hand", as each of their components is well understood.
The three-body problem is a classic example of how we need predictive tools that are still based on rather simple models, with tweaking to account for measurable errors. AI will not be able to accomplish such prediction outside of using those same non-black-box models itself. "Black box" meaning it is impossible to understand the meaning of the model, which is what AI models are considered to be.
When I first saw the simulations I thought the physics was being slightly altered in an unnoticeable way, but upon hearing that it's just changing the appearance of objects I was amazed. For an algorithm that just changes appearances when it's hard to see, I'm not surprised it runs almost in real time without much effort.
I actually came across a paper some years ago which was doing exactly that -- I think they were subtly tweaking the forces over the entire physics simulation to get to the desired state, though, naturally, their method wasn't quite as fast as this (it performed some sort of search over the possible evolutions of the simulation). I can't remember the title of that paper, unfortunately.
It's the appearance *and* the paths.
@@IceMetalPunk "...instead of using motion control to estimate plausible motions for objects with fixed appearances, it can be more efficient to estimate plausible appearances for objects with fixed motions. Consequently, our method, ViCMA (Visual Control of Multibody Animations), is simple and practical. It carries out no complicated optimization or sampling, and, in fact, it only requires the animation trajectory to be generated once. The cost of applying the remainder of the algorithm is negligible." - research.nvidia.com/labs/prl/vicma/2023ViCMA.pdf
@@IceMetalPunk "instead of using motion control to estimate plausible motions for objects with fixed appearances, it can be more efficient to estimate plausible appearances for objects with fixed motions." - the paper. Also the paper name ViCMA (*Visual* Control of Multibody Animations)
This will honestly be great for animators who want realistic physics but also want control over certain keyframes
Except not really because the technique is entirely smoke and mirrors. It uses camera occlusion to replace things with other things. In the case of the balls it's just using the motion blur to do it and is really obvious if you look frame by frame.
If the illusion works, it wouldn't matter
@@AurrenTV and why do you think that wouldn't be useful to animators? A scaled down version of this has been used by animators for decades.
@@AurrenTV For the non-animator audience who won't be frame peeking, it is good enough.
If it’s not realtime, you just bake the physics and color the balls on the last keyframe. That’s a very old trick. I was confused until he got to the point.
~"All of these techniques are completely handcrafted. No AI anywhere. This is powered by pure human ingenuity. WHAT A TIME TO BE ALIVE!"
That's a sentence I never thought I would hear on this channel about two years ago. How times have changed.
7:36
I remember him saying this a few years ago.
i felt that "What a time to be alive"! So nice to sub to a channel excited about this stuff
You guys realize he feels the same regardless of whether it's AI or handcrafted? It's all cool tech to get excited over.
Just 5 more years and I will buy a graphics card lolol
relatable
Come on, software optimisation phase of tech: let's make the best of what we have through ingenious tricks and whatnot, not just brute-force performance with hardware optimisation and "more and smaller". But I suppose software updates aren't as profitable, so who knows how fast it will go.
I am glad that NV knows that artists need not just beauty, but artistic control over it.
I know how to do such a trick end-to-end, but here is a new way, I suppose.
The trick is:
1. drop cards randomly.
2. switch sides for end positions to form text
3. switch sides for start positions to form text
4. if end state and start state do not match, switch state in the middle.
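A minimal sketch of steps 2-4 in Python (the data model, where each card records which face index is up at the first and last frame of the baked drop, is made up for illustration):

```python
def plan_card_materials(cards, start_pattern, end_pattern):
    """Paint the face visible at the start with the start-text pixel,
    the face visible at the end with the end-text pixel, and flag a
    mid-animation swap when one face would have to show both."""
    plan = []
    for card, want_start, want_end in zip(cards, start_pattern, end_pattern):
        faces = {card["up_at_start"]: want_start}
        if card["up_at_end"] != card["up_at_start"]:
            # The card flipped, so the other face is free for the end text.
            faces[card["up_at_end"]] = want_end
            mid_swap = False
        else:
            # Same face visible at start and end: if the two targets
            # differ, swap the material mid-animation while occluded.
            mid_swap = want_start != want_end
        plan.append({"faces": faces, "mid_swap": mid_swap})
    return plan
```

For a card that flips, the start and end textures simply live on opposite faces; only the non-flipping, mismatched cards need the step-4 swap.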
So now I will look at what NV made.
Okay, so NV did not invent anything new as an idea. But the mechanic is tailored well.
0:40 me explaining how my algorithm works
😭😭😭
You sir had me reading papers. It is really inspiring seeing these implementations.
I saw the other day a video of the guy who created ipAdapter by implementing someone else's papers.
What a time to be alive!!!!!
This is probably one of the most fun things I get to do, read papers and see what can I create with them
Nice to see a non AI episode ... for a change...
brings me back to what the channel was like 4-5 years ago
The Hitchhiker's Infinite Improbability Drive😄. And it would be even more crazy if the pink balls ended at the top.
😄🤣
At 2:11 one can see the magic at work: around the 28th ball (counting from right to left), a pink one on the third row up changes from pink to lavender.
Two years ago your content inspired me to go into computer science. Thank you!
Glad to see some non AI content.
This is super interesting with the obvious limitation that it forces a preferred visual frame with occlusion rendering.
I went through the same process as when dumbfounded by a magic trick. First is awe ("Oh, coool!"), then confusion ("wait a minute... How?"), then anger ("How THE HECK is this doneee???"), then enlightenment ("Ohhhh, ok, that makes a lot of sense") to finally back to awe. It's such a simple concept but with really interesting implications and applications.
I don't get it, though... there's already a simple way to do this: run the simulation once with the balls, then when they're at the end state, set their colors to look like whatever you want the end to look like. Then 'rewind' and run the simulation again (with the 'new' colors in place) and it occurs without any magic: a valid simulation that will reliably produce the end result.
I guess it'd get slightly more complicated with something like the dice, but it doesn't seem worth making all this to solve it...
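That bake-then-recolor trick fits in a few lines, assuming a deterministic simulator (the toy random walk below is a stand-in for the physics, not anything from the paper):

```python
import random
import statistics

def bake(seed, n_balls=10, n_steps=50):
    """Deterministic toy bake: returns each ball's final x position."""
    rng = random.Random(seed)
    xs = [0.0] * n_balls
    for _ in range(n_steps):
        xs = [x + rng.uniform(-1.0, 1.0) for x in xs]
    return xs

# 1. Run the simulation once and inspect the final state.
final = bake(seed=42)
# 2. Color each ball according to where it ENDED UP, so the last frame
#    shows the target picture (left half pink, right half blue, say).
mid = statistics.median(final)
colors = ["pink" if x <= mid else "blue" for x in final]
# 3. "Rewind": re-running the deterministic bake reproduces the exact
#    same trajectories, now wearing the colors that form the target.
assert bake(seed=42) == final
```

This handles an end-state constraint for free; the harder cases are when the start state is constrained too, or when an object must change appearance mid-flight, which is where occlusion-based swapping comes in.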
Good to see you back with papers related to 3D graphics, amid the mayhem of AI muckery.
If you slow down the video to 0.25 speed at 1:06 you can see that a few of the balls change colour to match the placement. The ball in the centre right turns from blue to pink.
This is how online gambling sites work
It's not. They don't cheat with visible visualisations; they cheat in the background where it cannot be detected in any way. This kind of animation can be pixel-peeped rather easily to detect the cheating, which is not something they'd want.
@@anteshell That's obvious. Still, you can't help but think of those right away...
Also elections
@@getsideways7257 Also not in elections. Votes are not counted based on computer simulations, much less on fixed animations such as seen in this video.
Really doesn't, and anteshell is also wrong. Online gambling is massively regulated, with every single game being subject to audit, etc.
If we say there's a 95.01% return and after millions of games it turns out to be just 95%, we would get fined... so no one's giving up billions in profit to steal your $1.
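A back-of-the-envelope version of such an audit (the payout table is made up for illustration): simulate many games and compare the empirical return against the declared one.

```python
import random

def empirical_rtp(payout_table, trials=200_000, seed=1):
    """Estimate return-to-player per 1-unit stake by Monte Carlo.
    `payout_table` is a hypothetical list of (probability, payout) rows
    whose probabilities sum to 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        r = rng.random()
        cumulative = 0.0
        for prob, payout in payout_table:
            cumulative += prob
            if r < cumulative:
                total += payout
                break
    return total / trials

# Declared 95% return: win 1.9 units half the time, nothing otherwise.
# empirical_rtp([(0.5, 1.9), (0.5, 0.0)]) should land very close to 0.95.
```

With enough trials, the sampling error shrinks well below the 0.01% discrepancy in the example above, which is why audits over millions of games can catch it.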
@1:40 YES! XD
Thank you Dr. Amazing as usual!
There is a very easy way to achieve this: just bake the simulation and apply the texture after.
2:07 Why is the 1 next to the 6 on the dice :D They should be on opposite sides.
C'mon Dave!! 😂
Does it bug anyone else that the dice pips aren't on the right sides?
2:10 Watched at 25% speed, and the purple balls that get left behind turn blue at some point... so yeah, they don't make sudden "moves", but they do gradually colour-shift. Magic!
There's a flaw in the logic of the system. Specifically, when the dice fall (at 2:05, for example), the system is seemingly told to take advantage of occlusion to change the top face of the die to a six and make sure that no non-top face displays a six. The flaw is that the face opposite the six on a die is a one. So we shouldn't be able to see any ones if we can see the six on the same die...but we do.
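That constraint is easy to state in code. A sketch of the check the commenters are applying (standard Western dice, opposite faces summing to seven):

```python
# On a standard die, opposite faces sum to 7: (1,6), (2,5), (3,4).
OPPOSITE = {face: 7 - face for face in range(1, 7)}

def visible_pair_possible(face_a, face_b):
    """Two faces of the SAME die can both be visible only if they are
    adjacent, i.e. neither the same face nor opposite faces."""
    return face_a != face_b and OPPOSITE[face_a] != face_b

# The flaw described above: a 6 and a 1 visible on one die is impossible.
assert not visible_pair_possible(6, 1)
assert visible_pair_possible(6, 3)
```

A per-die swap would need to relabel all six faces consistently, not just the top one, to pass this check.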
I'd love to see an expansion on this where they figure out how to measure the degree of visibility or alteration which is necessary for certain tricks to be pulled off. This is impressive but imagine how much more impressive it could get if we approach the limit of believability instead of hugely overcompensating.
If using a camera, figuring out whether something is visible has been solved for quite a while. Most rendering programs do not draw certain elements, to save performance.
For the balls, I think they just calculate the points of highest velocity, acceleration and/or jerk to choose when to perform the swap.
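That heuristic (the commenter's guess, not the paper's actual visibility score) could look like this: finite-difference the baked trajectory and swap at the frame where jerk peaks, when the eye is least likely to notice.

```python
def best_swap_frame(positions, dt=1.0):
    """Pick a swap moment from a baked trajectory: the frame where the
    jerk (third derivative of position) is largest. `positions` is a
    list of (x, y) samples, one per frame; the returned index is offset
    by the three differencing steps."""
    def diff(samples):
        return [tuple((b - a) / dt for a, b in zip(p, q))
                for p, q in zip(samples, samples[1:])]
    velocity = diff(positions)
    acceleration = diff(velocity)
    jerk = diff(acceleration)
    magnitudes = [sum(c * c for c in j) ** 0.5 for j in jerk]
    return magnitudes.index(max(magnitudes))
```

For a ball falling and then bouncing, the peak lands at the bounce, which is also where motion blur is strongest.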
Those dice were all loaded - opposite faces should add up to 7!🤔😀
I was able to spot the changes in colors in the pachinko ball simulation. I am sure others did as well. It is still a good trick, but not imperceptible.
Yeah, I did too. Went back, put it at 0.25x and just saw balls changing colors if they were too far away from where they should have been.
Our simulations have been so successful we've lapped reality and come back out the other side.
I'm working on a game with a forced camera angle, could make for some amazing results!
I never would have guessed they literally cheat during the simulation. I would have expected them to simply do some reverse physics or something: set up the final condition and find the initial condition to reach it.
621 balls and 621 bunnies? Oh my what a stimulating simulation indeed.
9 * 69, yes.
I love your videos man!
You are too kind, thank you so much! 🙌📜
I was really hoping that this was trying to exploit chaotic systems. Like, there are nearly infinitely many ways cards could fall and arrange themselves on the ground, so one of those possibilities is likely close to our desired image. So I was hoping that instead of simulating physics, it would "tweak" physics so that the desired outcome was guaranteed to happen, while remaining as visually consistent as possible to a person. That would be insane.
You should do a video on the RandNLA/RandBLAS paper. It's a set of new algorithms that can do fundamental linear algebra on huge matrices 1000x faster on paper (closer to 20x in practice, but 20x faster is still insane). Things like least squares, for example.
The basic idea is that for certain computations you can get an arbitrarily close approximation to the real answer by doing the math on a much smaller matrix made of a random subset of elements from the original matrix, instead of computing the whole thing.
So they trade exactness for speed, and notably the tradeoff is configurable so you can tune it to be "good enough" for your specific application.
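The row-sampling idea can be shown with a toy least-squares fit (uniform sampling here; real RandNLA sketches are smarter, e.g. leverage-score or randomized-transform based):

```python
import random

def lstsq_slope(xs, ys):
    """Exact least-squares slope for the no-intercept model y = m*x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def sketched_slope(xs, ys, sample_frac=0.05, seed=0):
    """Sketch-and-solve: solve the same problem on a random subset of
    rows, trading a little accuracy for a large speedup on huge systems."""
    rng = random.Random(seed)
    k = max(1, int(sample_frac * len(xs)))
    idx = rng.sample(range(len(xs)), k)
    return lstsq_slope([xs[i] for i in idx], [ys[i] for i in idx])
```

On noise-free data both return the same slope; with noise, the sketched answer is approximate, and `sample_frac` is the configurable exactness-for-speed dial the comment mentions.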
felt like something was off in the movement, couldn't pinpoint exactly what it was though. it's like ai interpolation when you don't slow it down
That would be awesome if implemented in simulations created in After Effects, Cinema 4D and similar special-effects software, as it would mean having much more control over the outcome of some of the simulated effects.
This technique has been seen by many more than 100 people now! Thank you!
The minute I watched this I remembered I did this exact same trick in After Effects years ago. That was inspired by when I was at school doing calculus: whenever I was stuck I would work backwards from the answer and just stitch it to where I was stuck and hope the middle jump wasn't noticeable :)
Is this similar tech to the one used by nvidia based on the earth clone model to predict the weather?
Would this mean the closer our model to reality, the closer we are to predicting the future?
Holy crap!! This plus time travel can change everything!! Perfection awaits!
Thank you for introducing this interesting research as always, sir!
Man, I love how they figured out what the sound looked like. I am quite blind, because I can never see sound, sadly.
It's good to see that classic simulation papers haven't been entirely eaten by AI.
Was I the only one who immediately saw the pachinko balls changing color mid-bounce? I know that the visibility function is never favorable in a sim where everything is visible all the time, but the color changes happening during motion blur turned them into multicolor streaks!
When he said he watched specific balls closely, I thought he was saying there were no replacement shenanigans. And the video of the simulated vehicle also made me think it could help the simulated vehicle go from point A to point B. But now I'm disappointed. I thought this was going to be a crazy physics simulation that took the beginning and end state, generated in-between states, and worked backwards and forwards deciding chaotic variables like bounce direction and wind as it went, something no human would be able to tell is changing. And the technique was going to be some powerful algorithm that did this trial and error more efficiently than past methods. Like the next best thing after an infinite improbability drive. Now that would be amazing.
You don't need AI for this; you can simply write some code that changes the object's color, texture, or look in some way as soon as it touches the ground to achieve the same effect.
Oh, wow, I was expecting this to just be some improvement over that, uh, what was it called? Differentiable simulations? That one from a while ago where they manipulated the rounding errors and stuff to subtly steer a simulation towards a desired result, stirring two liquids into an image or doing trick shots with bouncing objects and stuff like that.
I'm confused; you can do this already with simulation programs?
Nvm that sound thing was insane
What a time to be AI!
Honestly, it almost seems like this video is totally AI-generated. The script is way too excited considering what the whitepaper actually contains in terms of knowledge and useful systems: basically just a handful of expressions for identifying suitable places to swap around the balls, and sure, as soon as you have pre-baked this "score" for your pre-baked animation, it is very easy to change the end result without doing a lot of calculations... To me it seems like both the script and the voice are AI-generated on this channel these days.
Cool.. but what is the use case? In what application would this be useful?
Motion graphics
Watching this, I thought about special effects companies finally being able to control a simulation without just having to hope that the next generation looks better.
Wait wait, the one where they input sounds and it outputs a video is NOT AI?
Ok, now we need a new round of trickshot contests from Corridor Crew (an animator tries to simulate a trickshot faster than another guy doing it for real)
this is basically simulating Poincaré recurrence time^^
I knew I'd seen you talk about these simulations before. I just didn't remember it had been such a long time ago...
The small music video made me LOL
5:09 That's my favorite job interview discussion. When a big company asks, "How many programmers can we throw at this project to get it done faster?" my answer is: "It doesn't take nine women one month to have a baby. It takes one woman nine months."
it takes 9 men to get a woman pregnant faster. More men, more probability.
just try explaining that to MBA bros
Controllable Laplace's demon, damn.
Except that the dots on the die don't match reality. Opposite die faces, added up, equal seven. 6-1, 5-2, 4-3, etc. Opposite a six, the face carries a one. But I saw a one on a face ninety degrees from a six, so I at least know they aren't faithfully accurate dice.
Ummmm, at 2:11 there is a ball that changes color from cyan to green. THAT SEEMS A BIT SUS TO ME
Yes, that's the trick
Yeah, I saw it before the explanation, lol @@rvens8885
Peppers? WE DONT HAVE ANY PEPPERS!
I really love the handcrafted papers. AI is all great, but we also need new algorithms we can really control, not black boxes that just do something.
it's crazy how this paper got so few views
I seriously thought the trick was taking a start and end, then generating plausible frames in-between them until complete, sort of like how stable diffusion works with noise.
I saw some people on yt doing stuff like that in their simulations. YT short from Pezzza's Work channel is in my head when i look at this 😊
I realized how the simulation was cheating once I saw the dice.
At the end, there was a die with a 6 on top and a 1 on the side (which never happens on fair dice); it seems they forgot to swap all the faces. That's how I realized they probably just change one face of the sim to fit the chosen result.
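For anyone wanting to verify the rule these comments rely on: opposite faces of a standard die sum to 7, so 1 sits directly opposite 6 and can never appear on a face 90 degrees from it. A tiny sanity check:

```python
# On a standard die, opposite faces sum to 7 (1-6, 2-5, 3-4).
# So a face showing 1 can never be adjacent (90 degrees) to a face
# showing 6: they sit on opposite sides of the cube.

def opposite(face):
    return 7 - face

def can_be_adjacent(a, b):
    """Two simultaneously visible faces of one die must be neither
    the same face nor opposite faces."""
    return a != b and opposite(a) != b

print(can_be_adjacent(6, 1))  # False: the swapped die is impossible
print(can_be_adjacent(6, 2))  # True: a 2 next to a 6 is fine
```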
didn't realise that poker involved dice. what a time to be alive!
I miss when this channel was mostly physics videos, the AI was interesting for 2 or 3 videos, but now only every 4th or 5th video isn't about AI
Sorry, why is this significant, unusual or important? What will it be used for? It's just making the computer do what we want, like a program, no? "Make them all land on 6" (of course in code). Serious answers please ❤
it makes anything look like it's not programmed.
like a self-fulfilling prophecy.
imagine GTA or any sandbox game.
a stiff dead body looks programmed.
a ragdoll makes a dead body look simulated.
this will make a dead body look like a real dead body, not some random ragdoll pose.
@@begobolehsjwjangan2359 Gotcha, that explains it somewhat. Thank you :)
Someone just invented the final destination for AI agents 🤣
Sounds familiar. Didn't someone already do a simulation where things can go in any direction when they're not observed?
1:48 umm, I see some ones; if all the dice landed six side up, we shouldn't see any ones.
Very impressive and even better that this doesn't use AI. +1 for human ingenuity.
Hiding in occlusion and chaos, so if a tree falls where nobody's around, it indeed didn't make a sound and perhaps even never happened
I can do that too but with deterministic math, no AI or recoloring needed
I thought this worked without cheating. Just considering every possible outcome in a random experiment and computing exactly the possibility that leads to the outcome we want.
This solves Quantum physics
@0:50 you suggest the results are random. I believe that for this technique to work, the physics engine needs to be deterministic, meaning the results look random, but if you re-run the same simulation, you get exactly the same random-looking result. So it's not truly random.
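What this comment describes, "random-looking but repeatable", is exactly how a seeded pseudo-random number generator behaves. A minimal illustration (the toy `run_sim` is my own sketch, not the paper's engine):

```python
import random

# A "random" simulation that is fully deterministic: re-running with
# the same seed reproduces the exact same random-looking trajectory.

def run_sim(seed, steps=100):
    rng = random.Random(seed)        # private RNG makes the sim repeatable
    x = 0.0
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)  # "random" kick each step
    return x

a = run_sim(seed=42)
b = run_sim(seed=42)
c = run_sim(seed=43)
print(a == b)   # True: same seed, identical result
print(a == c)   # False (almost surely): different seed, different run
```

This is why the swap trick can be pre-planned at all: the engine can replay the baked run exactly and know in advance where every collision and occlusion will be.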
ruclips.net/video/9IULfQH7E90/видео.html
Awesome video, as always! I'm glad the channel is not 100% AI now.
Now scientists can get the outcome right of every experiment!
So fantasy/sci-fi media/games/shows could finally fix the giant CGI turtle textures in The Rising of the Shield Hero. The CGI degraded by two decades whenever the turtle was in the scene.
Fated simulations
Casino: so the trick I've been using has now been revealed.
I think I have seen the ball switching colors in an ad
what an incredible video!
This algorithm was created just to stress Captain Disillusion out
What a time to be alive!
every time I read Nvidia somewhere, I ask myself if it is patented somehow
what if this is how quantum particles work in a unit to create reality
I don't fall for the "I guess it takes hours and hours" anymore 😂
06:49
6:46 - Holy shit!
I can see new online casinos with "controllable simulations", not rigged at all.
Got really disappointed when you essentially said the simulation was a lie.
The sound thing is actually useful though.
Some of the cubes show a one on the side. That's not possible with a six on top.
Really cool stuff!