Same with Algodoo, a 2D physics sandbox that runs on 32-bit numbers. Very fun to play if you know how to use it, and it breaks down 16,777,216 blocks out. At that point the only numbers that can exist are integers, so hitboxes act weird and it becomes nearly impossible to select any components.
No, that's not right. Minecraft doesn't use this engine, and it uses a method that was good for the time. The thing with Minecraft is that if it only used 16-bit numbers at the time of this glitch, the Far Lands would occur somewhere around 300,000 blocks out, not 3,000,000.
"There is a popular theory that the universe adds new layers of complexity to itself anytime anyone gets close to figuring it out." Not the exact quote but if that was a subtle hhgttg reference, well done 😁
This only occurs because of floating point number representation. If you create a simulation with fixed point numbers, precision is equal everywhere. Or you could simply have multiple coordinate systems in use at the same time, one for each observer, ensuring that no large floating point errors occur when one observer gets far from another.
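For a concrete feel of that difference, here's a small Python sketch of my own (not from the video): `math.ulp` gives the gap to the next representable double, which grows with magnitude, while a fixed-point scheme keeps one resolution everywhere.

```python
import math

# The gap between adjacent representable doubles grows with magnitude:
for x in [1.0, 1e6, 1e12, 1e16]:
    print(x, math.ulp(x))  # at 1e16 the gap is already 2.0

# A fixed-point scheme (here: integers counting millionths of a unit)
# has the same resolution everywhere, near the origin or far out.
SCALE = 1_000_000                  # subdivisions per world unit
pos_near = 3 * SCALE + 1           # 3.000001 units, stored exactly
pos_far = 10**12 * SCALE + 1       # 1e12 + 0.000001 units, also exact
```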
Or just use dynamically sized numbers. We use fixed-size floating point numbers because they're very fast with our concept of memory, but if you had a computer so fast it could simulate the world, there's no reason you couldn't allocate bigger numbers as needed (for example, Python ints: they behave like ordinary machine integers until a computation outgrows them, at which point they just keep growing).
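To illustrate with a tiny Python example of my own: Python 3 ints never overflow, they just grow, whereas floats silently lose precision past 2**53.

```python
# Python 3 ints grow as needed; there is no wraparound at 2**63 like a
# fixed 64-bit integer would have.
big = 2**64 + 1
print(big)         # exact
print(big * big)   # still exact, just uses more memory

# Floats, by contrast, silently round once past 2**53:
print(float(2**53) == float(2**53 + 1))  # True: the +1 is lost
```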
And who even says it's a digital computer in the first place? Just because that's how *we* would build a simulation? Maybe the distortion we get is ... drumroll ... what we think of as the consequences of relativity. Which you certainly _could_ describe as distorting the world. No, I'm not serious. But it's an interesting thought.
I wonder if floating point numbers or dynamically sized numbers are used in simulation games like Space Engineers. Currently, that game has a speed limit of 100 m/s, and any object going beyond it becomes laggy, glitchy, or both. Apparently you would need crazy framerates to deal with it, but games like Starship Evo seem to handle much higher speeds just fine, so I wonder how honest that framerate excuse was.
@@chaomatic5328 100 is way too small to hit the floating point limit. The problem with "high" speeds in all simulations (not necessarily games) is integration. I remember seeing a good video about it, but basically, check out Euler vs Verlet vs Runge-Kutta 4 integration.
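A tiny Python sketch of why the integrator matters (my own toy, a unit harmonic oscillator with acceleration a = -x): explicit Euler pumps energy into the system every step, while velocity Verlet keeps it bounded.

```python
def euler(steps, dt=0.1):
    # explicit Euler: both updates use the values from the start of the step
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + v * dt, v - x * dt
    return x, v

def verlet(steps, dt=0.1):
    # velocity Verlet: half-step the velocity around the position update
    x, v = 1.0, 0.0
    for _ in range(steps):
        v -= 0.5 * dt * x
        x += v * dt
        v -= 0.5 * dt * x
    return x, v

def energy(x, v):
    return x * x + v * v   # stays 1.0 for the exact solution

print(energy(*euler(1000)))    # blows up: grows by (1 + dt**2) each step
print(energy(*verlet(1000)))   # stays close to 1.0
```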
Indeed, it's completely possible (and not even that hard) to make a dynamic number system that allocates as much space as the data requires. It just adds a few steps to the computations on those numbers, so such systems don't tend to be used outside of scientific computing: it's much easier to scale the game world down or use segmented levels than to lose efficiency by adding steps to every single vector operation the engine has to do.
Whoaaa this video is giving me flashbacks to when I tried to draw a Mandelbrot set, but it got super pixelated/jittery after I zoomed in about 12 orders of magnitude. (EDIT: OMG it's my thing at 6:16 im famous!!!!)
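That pixelation has a concrete cause: in double precision, after roughly 12 to 16 orders of magnitude of zoom, the per-pixel step in c falls below the spacing of representable numbers near the centre. A quick Python check (the centre value is just an arbitrary example of mine):

```python
# An arbitrary zoom centre somewhere near the Mandelbrot set:
center = 0.3602404434376143

for zoom in [1e-3, 1e-12, 1e-16]:
    pixel_step = zoom / 1000   # viewport 1000 pixels wide
    moved = (center + pixel_step) != center
    print(f"zoom {zoom:g}: adjacent pixels distinct? {moved}")
```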
That would explain quantum mechanics. I mean, we go so small that the universe cannot even calculate that much, and thus electrons and protons start bugging out and are actually moving at 1 Planck unit at a time. Jumping if you will. Quantum mechanics and the discovery of it is discovering the limits of the world. It is strange yet fascinating.
There are so many decimal places that the world kind of breaks, in the sense that calculating physics at that scale would be impossible, despite the existence of smaller particles. To put it simply: the physics of life gets broken when we go too deep.
@@shakirulhasan Quantum mechanics is currently as big a mystery as classical mechanics used to be. It's always like that; don't get too excited about such theories.
@@darkexcel That's not what's happening. We're getting infinities beyond the Planck scale, not just the same or incorrect values for different inputs. I am a programmer, too, and that would be really cool if it were true, but it's just not.
If our universe is a simulation, there's no telling what kind of universe the simulator is in. Maybe they'd have no issue storing infinite information.
@@billowen3285 That's the problem. It's not so wrong to think that if we live in a simulation, we were given these rules of logic. What if the world the simulator is in has different rules of, say, logic or physics or basically anything else?
@@aljazmarolt some say rules of physics are universal, but they just apply to our universe. Meanwhile rules of logic aren't really rules, more of a human concept. Because 2+2 is always 4 and there is no rule or a force keeping it that way, it's just how it is. Like if you have 2 things and 2 more things you will always end up with 4 things...
_"Nothing is impossible. I understand how the engines work now. It came to me in a dream. The engines don't move the ship at all. The ship stays where it is, and the engines move the universe around it."_ - *Cubert J. Farnsworth*
ok so imagine this: every possible point represented by a dot, and dots keep decreasing in density the further out you go, and all the objects can only be represented by drawing lines between those dots.
Actually, in 3D graphics this is a very familiar issue, so most developers convert anything beyond a certain distance to just a dot rather than rendering the full geometry.
If the universe were a simulation, there's practically no chance it would be encoded using a 3D grid of Newtonian space, not least because of the coordinate free physics of special relativity, and the curvature of spacetime. Our notions of metrics on space, time, charge, etc. are useful constructs, but they are approximations of underlying, probably topological structure. Perhaps the computer simulating our universe could store and manipulate a long list of discrete elements, along with discrete causal relationships between them (a la causal set theory or the Wolfram model of physics), or a long list of strings, loops, or twistors or something.
Yeah, it would probably be a cellular automaton with the same local precision everywhere. There might still be a limit to the precision of particle motion in a simulation like that, but if that limit is much finer than observable distances within the simulation, then it might be impossible to build a device inside the simulation to observe the breakdown in precision.
True. And the "proof" that the computers running our simulation may be powerful enough to do so is that when we simulate a computer, it is nowhere near the power of the computer running that simulation; it's way lower. So it's safe to assume that the simulation we might live in runs on a far more powerful computer, perhaps even 18,446,744,073,709,551,616-bit, to avoid the human eye seeing floating point errors. But it might not even run on binary; it could be analog, avoiding bit precision errors entirely.
The premise of living in a simulation and proof for or against it always seems to ignore the fact that if this were a simulation, the very concepts of computational limits, physics, 3D space, etc, are completely simulated or even made up. Maybe we're a 3D sim running on a 4D+ computer, the equivalent of a primitive 2D ant farm game to us. Who's to say physics are even real? One argument against the simulation theory is that you'd need a computer almost the size of the universe to run it, but again this ignores the possibility that our concepts of physical limits could be an arbitrary construct some 5D programmer made up. I don't personally believe in the theory but people think too small about it.
I felt like a nerd watching this and immediately thinking "oh, it's a floating point precision error. I wonder if you could move everything towards the player rather than moving the player really far from the origin?" Though I only knew that because I watch pannenkoek
to think that the fish's journey wouldn't have come to an end if the devs had done that instead (but really there would be no reason to do that because the map size and unit length is relatively small)
@@TheMasterOfSafari i suppose programmers who learn how to program in college might learn about that. i'm self-taught, and i've been casually programming for a long time, but i've never needed to reach exceedingly high values with floating point numbers, so there was just never a reason for me to know how they work. until i watched pannenkoek, that is
Interesting video, and as a game developer I've definitely thought about this when encountering floating point precision problems. The reason this analogue fails is that in our universe, everything that exists has its own reference frame from which the universe is "rendered", so you'd never encounter these kinds of rendering errors with large objects.

In the video, Jabrils is essentially rendering the far away object from the player's reference frame (i.e. the centre of the grid) and using an "extra virtual dimension", that is, a second camera, to see what the object looks like far away *for* the other observer. This would never happen in real life, because we could only see/detect the object from reference frames that make sense/render correctly.

My best analogue with game engines is that the whole universe is a single instance with constant (processing) energy, processed at Planck resolutions and rendered for all reference frames at all times by the speed of photons. Where we might see floating point precision error is in quantum events, and especially in quantum fluctuations of empty space: nothing in the universe can be static, i.e. have an energy of exactly zero (for whatever reason, be it floating point error or something else). Infinitesimal energies of "empty space" bounce around the zero energy level (at Planck scales), which creates quantum fluctuations.
Still, you are not considering the possibility of multiverses. In that case the universe would have a huge number of backups, all of them running at the same time, so each decision changes the path and the response of the universe.
If you consider floating point problems, the fact that the human scale is RIGHT in the middle between the Planck scale and the size of the observable universe is super trippy. It absolutely points to some optimisation regarding floating points.
The cycle of flow, I mean: energy and movement are only ever transferred throughout everything, never created, unlike on a computer where a 0 can just be replaced by a 1.
Interesting idea, but there are two errors:
-This assumes that the simulation was made for humans, and that the whole coordinate system is centered on (or not too far from) Earth. There is no way of proving this. The simulation could just be a simulation of the universe as a whole, with life as a "side effect" of an extremely complex and accurate recreation of a universe.
-This also assumes that the simulation uses floating point numbers, but I am rather sure it doesn't: we know that there is a fundamental limit to the scale of things (the Planck length). If the creators of the simulation already defined what the accuracy of the simulation is, why would they use floating point numbers? They could just use simple (although very large) integers for their coordinate system, and they would be sure that no "spaghettification" will interfere. But if we DO find spaghettification, I am pretty confident it was a choice of the creators, as they could have easily avoided it. If we don't, it doesn't prove we aren't in a simulation.
Those are the most obvious errors I noticed, but there might be more, and I might be wrong as I am not a physicist. I have also seen other comments pointing out other errors; read them too.
@@dinolegs9702 That is impossible. The exact reason the Planck length is the smallest length measurement in the universe is that nothing is smaller than it. You can think of our universe as a 3D mesh of Planck lengths with nothing in between those points. One possibility for a simulation is that it is so far removed from our "existence" that it does not work according to our laws of physics, and perhaps they do have smaller length scales.
This is actually the reason why, if you get flung in Roblox, it becomes more jittery as you go farther out, and why movement gets snappy in Minecraft alpha and beta as you're heading towards the Far Lands.
Thx for clarifying this concept. I listened to an interview with a theoretical physicist several years ago that alluded to this but didn't go into any real explanation. Very good job.
The Far Lands in Minecraft are a good example of that floating point error. That's why the closer you get to the Far Lands (or the farther from the center of the world), the more your movement glitches and the more the block hitboxes are misaligned. (The early versions of Minecraft had a lot of problems with number limits. For example, if you made enough maps, the map numbers would go into the negatives and eventually reset back to 0 and overwrite the previous maps, because map numbers were stored as 16-bit values. Mojang later fixed this by moving to 32-bit.)
If I didn't make an error in my calculations, one could store coordinates of a space the size of the observable universe, in Planck units (the smallest sensible distance), with about 206 bits. If someone is able to simulate everything we see, it shouldn't be a problem for them to store a >206-bit value (the current standard is 32 or 64 bits; some languages allow a lot more). Edit: You'd actually need 616 bits, because you have to store 3 values, one per axis. But even that isn't much. My calculation, btw: log2(diameter of the observable universe in Planck lengths) * 3 = 615.24366... You could probably optimize that a bit, since the observable universe is spherical, and with 616 bits you could store every coordinate for a cube with a side length of the diameter of the observable universe. You could probably save 1 bit with this knowledge, because you'd only have to differentiate between half as many distinct points in space (the volume of a sphere is circa 52% of the volume of the enclosing cube), but using a weird angular coordinate system or something probably wouldn't be worth the effort, especially since I would expect those aliens to use a power of two as a bit size, in this case 3 * 256 = 768.
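The arithmetic above checks out. Here it is in Python, with approximate values for the physical constants (both rounded figures are assumptions on my part):

```python
import math

PLANCK_LENGTH = 1.616e-35      # metres, approximate
UNIVERSE_DIAMETER = 8.8e26     # metres, ~93 billion light years, approximate

bits_per_axis = math.log2(UNIVERSE_DIAMETER / PLANCK_LENGTH)
print(bits_per_axis)       # ~205.1, so 206 bits address one axis
print(3 * bits_per_axis)   # ~615.2, so 616 bits for a full (x, y, z) triple
```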
They'd probably use 256 bits, as that follows the neat pattern computers already use (1, 2, 4, 8, 16, 32, 64, 128, 256...), though it's very overkill as far as I'm aware XD
You can really tone this number down, because each observer can have their own simulation. There's no way to tell if other entities are just complex simulations, so you don't need good resolution everywhere in the universe. Just put the center at the main user (like Outer Wilds does) and only use however many bits are necessary to give Planck-scale precision out to 1 AU (probably less, tbh). I got 459 bits (for 3 coordinates), which is reasonable.
Would it really be worth it though to simulate everything from every observer's perspective just to be able to use half as many bits for coordinates? And if everything far away from us would only be simulated poorly wouldn't we see rules of physics seemingly being broken if we looked there?
Jokes aside, there's no point in knowing, because we can't really do anything about it, and the world that simulates us would still be controlled by the same laws of the universe. We are all the same thing; there's no center, there's no permanence, there's no you.
@@bqfilms Well, if you think so, then we shouldn't even have discovered gravity, atoms, the universe and all. I never thought anyone would think this way when something that big might be up.
@@KushagraPratap I get what you're saying, but he's correct. We discover the science of our world to understand, master, and control our environment. But discovering that we are in a simulation would do us no good. People would panic. Some would say, "well, if it doesn't matter, I might as well do as I please." Crime rates would likely go up, and chaos would start. Science is fun. If we don't discover new science, we don't advance technology; no advances mean no new fun, and no new knowledge to make our lives easier, better, or longer.
This glitch used to be in Super Mario Odyssey. There was a time freeze glitch that turned off death areas, and the further you fell, the more the models in the pause menu would corrupt. Unfortunately the time freeze glitch was patched.
There is a point at the edge of the observable universe where space expands so fast that light from any further away could not possibly reach you. This is called the "cosmic event horizon", and it functions similarly to a black hole. The interesting thing is, every point in space has its own cosmic event horizon, sort of as if each point is (0, 0, 0) from its own perspective.
@@brendethedev2858 It's not actually as stupid as it initially sounds. If I recall correctly, a 3D vector with 64-bit integers as components can fully describe any point within a cube around as big as our solar system, with an accuracy of about the thickness of a human hair. Don't quote me on that one; I read it somewhere on Stack Overflow.
@@flerfbuster7993 It's definitely possible. Floats aren't really needed; as long as you make your simulation matrix small enough, you can use ints with no problem. If you think about it, a float is just like an int whose digits extend to the right of the point instead of the left.
As a simulation maintenance guy here, I can confirm this simulation uses roughly a hebdomecontaischili library when rounded. Our units are very different so it's a long decimal when translated.
@@flerfbuster7993 I've got a graphing calculator that can solve expressions with units in them (aka "dimensional analysis"). Note that Wolfram Alpha is also more than capable of working this out, *if* you manage to phrase your problem *exactly* right to overcome its lousy parser. Let me plug it in.

Google says a "normal" European hair is 0.07 mm thick. An astronomical unit ("AU" or "au") is the rough distance between the Earth and the Sun (defined to be exactly 149,597,870.7 km in 2012). Wikipedia says that the heliopause is 120 AU from the Sun. While it isn't spherical (it tails "behind" the solar system, pushed by the interstellar medium), it's still useful as a rough radius, giving us a width for the solar system of 240 AU (35.9 billion km, 22.3 billion miles).

Multiply the width of a hair by 2^64 and we get 8.6 kAU (1.3 trillion km, 800 billion miles). This distance is about 35 times wider than the figure above for the width of the solar system.

However, I don't know how well fixed-point math works at a "dpi" as low as (the reciprocal of) the width of a human hair. You might be totally fine if you just don't write your code in such a way that aliasing problems accumulate over time. You probably still couldn't use simple equality testing, though, if values are arrived at through different expressions/algorithms (e.g. "double checking" that the result of a given trigonometric function matches a chain of trigonometric expressions that should be equal, like closed-form rotation vs time-step accumulated rotation, or equivalent 3D rotations described in Euler angles vs quaternions).
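Running those numbers through Python confirms them (constants approximate, variable names mine):

```python
HAIR_WIDTH = 0.07e-3          # metres: "normal" European hair, per the comment
AU = 149_597_870_700.0        # metres: exact 2012 definition of the AU
SOLAR_SYSTEM = 240 * AU       # rough width: heliopause radius of 120 AU, doubled

span = HAIR_WIDTH * 2**64     # reach of a 64-bit fixed-point axis at hair "dpi"
print(span / AU)              # ~8.6 thousand AU
print(span / SOLAR_SYSTEM)    # ~36 solar-system widths per axis
```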
The Outer Wilds thing makes sense now. One time I tried testing how far away from the solar system I could travel, and I noticed the jitteriness on the map when I got really far. That map really is a zoomed-out camera view of the entire game.
There are a few problems:
-We have no idea if the simulator has floating point precision errors; it could have infinite digits.
-Since space is expanding, we might need FTL communication with objects that far away to still get the data about spaghettification.
-If the simulator doesn't want us to know, they would stop us in some way we can't notice.
If we manage to fix these, or ignore them just for testing, this might be a good test.
Yeah, maybe the computer that is simulating our universe has its own limits, which is the reason we have c as the speed of light, and that's why there is a "non-observable universe" out there.
I know that a semi-popular version of this bug, with the same solution, was the Deep Space Kraken in Kerbal Space Program, where floating point precision errors at extreme distances caused craft to tear themselves apart.
I remember making a simple 3d engine from scratch many years ago. The easiest way to get it to work was to always treat the camera as stuck at the origin, while the world was moving around it. The math was so much easier that way.
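The idea, sketched in Python (the function name is mine): subtract the camera position from everything each frame, so all the rendering math happens near the origin, where floats are densest.

```python
def to_camera_space(world_pos, camera_pos):
    # the camera "stays" at the origin; the world is shifted around it
    return tuple(w - c for w, c in zip(world_pos, camera_pos))

# A ship a billion units from the world origin still ends up with small,
# precise coordinates, as long as the camera is near it:
ship = (1_000_000_000.0, 5.0, 2.0)
camera = (999_999_998.0, 0.0, 0.0)
print(to_camera_space(ship, camera))  # (2.0, 5.0, 2.0)
```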
@Kamey Why wouldn't you need a camera? In most game development you use cameras to dictate what you see. I was just thinking that if each person had a GPU dedicated to them, it would be optimal for viewing. Maybe it's one big GPU that dictates vision, and that's why our eyes use focus, so it isn't that much strain on the GPU. I'm not too sure what you're saying with the server(s), but I'm sure that's a cool idea too.
This is a great idea and is mirrored by a broader argument which suggests that: If universe-sims were possible, and aliens wanted to run them, and you could create universe-sims within universe-sims, then statistically speaking, we'd be most likely to be at the bottom layer of a tree of simulations since the vast majority of universes would be many simulation-levels down. And if that were true, then assuming computation rules are conserved, most universes would be unable to run universe-sims because of computation limits, so we'd have no reason to assume they're possible at all, let alone possible in our universe (which is one of the base assumptions of the simulation hypothesis). Cool stuff!
True. I mean, if you build the "first" simulation, it's highly probable that it's not actually the first: a 1/x probability where x can be any unknown number. But um... let's see if that assumption's right before assuming too much...
The fact that we can manipulate quantum mechanics means that we cannot be in a simulation powered by a classical computer, since it's impossible for a classical computer to emulate quantum behaviour, or at least to emulate it at the scale of the entire universe. You need to remember that the universe is a huge place, and if we can prove that there is no advanced efficiency algorithm (for example, an algorithm that only makes things real when there is an observer), we can probably assume with about 99% confidence that we are not living in a simulation. Otherwise we can assume it's 50/50.
I mean also keep in mind that there is no reason the 'rate' between the simulation and the simulator has to be the same. It could take 1 simulator year to calculate a second of our world but we wouldn't know, or have a way of knowing. Given a really long time any classic computer could calculate the state of a quantum system. Also we can't even fathom the computing power the simulator might have, even in the case of a classic computer.
This is also why you get those texture glitches on surfaces (z-fighting): if multiple textures are set to be on the surface of an object, their perceived depth is calculated with floating-point math, so sometimes one will be in front and other times the other will be.
Nice idea, but I think that if someone could successfully simulate the universe, their computers would be so much more powerful than ours that this issue wouldn't happen; they could just put 10 million digits after the decimal point and the issue is fixed.
Correction: every human is an observer, and each observer has their render distance relative to them. It's impossible to observe things at the distance of spaghettification because of noise and occlusion.
It's also the cosmic event horizon which kind of gets in the way (expansion faster than light = never reachable; even within the observable universe, most of what we observe is a past state that was less expanded and is now unreachable).
The Outer Wilds solution is an interesting one. I used to run a molecular dynamics lab, where we used high precision computer simulations to run chemical experiments and collect data that would otherwise be extremely hard, if not impossible, to measure (such as the contact angle of micro droplets of ionic liquids on quartz). Because many of the calculations were quantum mechanical, the decimal places were very valuable, so the simulation was broken up into cubic sections, each with its own reference frame, so that all the memory could be devoted to decimal precision. This also allowed each cube of space to run on its own set of processors, accelerating the calculation.

If the universe ran in this kind of simulation, you wouldn't be able to see deformations based on distance. What you would want to look for is planes that appeared to have slightly different physical conditions on either side. Because the scale of those simulations was so small, larger forces over longer distances were approximated (unless they were integral to the experiment), like a magnetic field being approximated over each region instead of calculated from each particle to the theoretical source of the field. In our reality this would most probably take the form of a boundary where massive gravitational fields, such as the pull from the center of our galaxy, seemed to "sharply" change over short distances, "sharply" here possibly meaning millionths of a degree. If you could keep accurate track of the position of the Earth and the gravitational pull of the galactic center for a couple thousand years, you could look for these deflections along the arc of galactic spiral we traversed.
This makes me appreciate the Futurama joke even more, when the professor explains that their spaceship never really moves; it moves the universe instead.
I had a game design class in middle school, and for one of the coding tests we were supposed to turn a character using the usual numbers, like 45 and 90. I decided to go towards a wall at 67 degrees instead, and as soon as it stopped, the character glitched through the wall and started falling in space. Since it was thought to be impossible to get there, there was no failsafe death barrier, so I got to watch this bug develop the longer it fell.
Everything in this video is perfectly explained. Floating point numbers can be hard to wrap your head around at first, but you explained it very well and also showed how it affects stuff very well. Good job!
If the universe were a simulation it would probably employ the same trick to maximize resolution as in The Outer Wilds. Every person is a player and the universe moves around them and relativity makes it look like they're the one moving instead.
funny you should mention that, as it's something close to one of the features in Echoes of the Eye, the game's DLC. also, that one Rick and Morty episode.
This also makes me think of the phenomenon of "tunnelling" which is completely wild. Essentially physics in video games are calculated in discrete steps at a fixed rate (usually something like 50Hz/60Hz, depending on the game). The result of this is that when something small is moving fast enough toward a thin surface/wall, it often moves too much in one timestep to actually hit the wall and will end up on the other side of it. And as it turns out, something extremely similar happens on a quantum scale, where extremely tiny particles moving quickly enough can sometimes unexpectedly end up "passing through" an extremely thin barrier without touching it. ...It's almost like the universe itself is running the same physics engine, just on a MUCH bigger scale and a MUCH higher step rate, so the same bug happens on a much smaller scale...
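A minimal Python toy of the game-physics half of this (wall position and speeds are arbitrary values of mine): with discrete timesteps, a fast object can step right over a thin wall without a single sample ever landing inside it.

```python
def hits_wall(speed, dt=0.25, wall=(10.0, 10.5)):
    # march the object forward in fixed timesteps until it is past the wall
    x = 0.0
    while x < wall[1]:
        x += speed * dt
        if wall[0] <= x <= wall[1]:
            return True    # a sample landed inside the wall: collision
    return False           # stepped clean over it: "tunnelled" through

print(hits_wall(2))     # slow: the wall catches it
print(hits_wall(100))   # fast: one step jumps the whole wall
```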
I know I'm reviving this, but it's not "tiny" particles that pass through a thin barrier, but actually particles that are very large compared to the thickness of the barrier. Quantum physics is potentially the absolute worst way to create a digital simulation of something, which is why it's pretty much impossible to do calculations of quantum physics on a classical computer.
@@kolosis1149 The trick, and the difference between the universe and a floating point game engine, is that the universe moves everything around every frame of reference at once. The universe moves around Voyager as it travels, but at the same time it moves around Earth and everything else, kind of like the universe is instanced for each existing frame of reference at each moment of time. What Jabrils is essentially doing in the video is rendering the far away object from the player's (observer's) reference frame, but using the editor camera as an "extra dimension" to watch it up close.
This is the same thing that caused the Minecraft Far Lands glitch. It was fixed by using double precision floating point numbers instead, which basically gives more digits to work with.
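You can see the single-vs-double difference at Far Lands distances in Python by round-tripping a value through a 4-byte float (the coordinate here is the classic ~12.5-million-block figure):

```python
import struct

def to_float32(x):
    # pack to a 4-byte IEEE float and back to see what single precision keeps
    return struct.unpack('f', struct.pack('f', x))[0]

# At ~12.5 million, a 32-bit float's spacing is a whole unit, so quarter-block
# positions collapse together; a 64-bit double still tells them apart.
print(to_float32(12_550_821.0) == to_float32(12_550_821.25))  # True
print(12_550_821.0 == 12_550_821.25)                          # False
```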
Huh, I thought everyone knew about rendering limitations. That's why the Source engine caps the map size, so you can't run into things like this. I think they could also have made a smooth transition (one planet slowly unloads while the other one loads) without hitting the integer limit.
Your example at (1000000,0,0) was interesting because you could see how only one of the coordinates was running into precision limitations. Y and Z were just fine! There are some tricks you can use with multiple coordinate systems to extend precision. The idea is that if you have two objects that you're interested in that are far apart, you can give each of them their own local coordinate system, and only blend their coordinates back together as they come closer again. You'll have error relative to each other during this, but imagine in real life if you put a blindfold on someone and told them to walk out across a football field, and then come back. Do you really think they'd find their exact starting point? Another thing you can do is have a layered coordinate system, with one coordinate for large scale distance and a second coordinate for local distance. You make sure these overlap a bit, and then you can do most of your math on the local coordinate system, only periodically updating the large scale distance and "recentering" the local coordinates (this is why you need the overlap). It would be easiest to make the global coordinate system integer and the local coordinate system floating point to avoid strange precision interactions. And of course, you could avoid the problem entirely and just use fixed point. 100 digits is more than enough to store locations in the universe down to the theoretical quantum distance limit. Definitely the slowest option to compute, but it works.
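The layered scheme in the second paragraph might look something like this in Python (the sector width and names are my own choices):

```python
SECTOR = 1024.0   # width of one large-scale sector, in world units

def recentre(sector, local):
    # fold whole sectors out of the float offset into the integer index,
    # keeping the float part small so its precision stays high
    shift = int(local // SECTOR)
    return sector + shift, local - shift * SECTOR

# After drifting past a sector boundary, the same point is re-expressed
# with a small local offset:
print(recentre(5, 1030.5))   # (6, 6.5)
print(recentre(3, -0.5))     # (2, 1023.5)
```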
I wonder if light lag would somehow prevent this from being noticed 🤔 The only way to test it would be to move two conscious observers far enough away from each other yet still have instant visual communication, which is impossible, since light, and therefore causality, has a speed limit. Interestingly, this could still happen, but because of the time lag the information could be moved back into the range where floating point errors are no longer an issue. This could happen for both observers, assuming they are both their own origin points, which is how the universe seems to work. I.e., even if this happened to the other object or observer from your point of view, meaning they moved outside the floating point range, by the time the information reached you it could be reinterpolated into being normal, so we could never actually know 😮‍💨
The floating point limits also kick in in Blender when you have gigantic objects (like GIS terrains that you have imported at life size scale). Surprised me for a second until I realized what was going on.
The problem with this is that, first, it assumes the universe simulation doesn't have enough space, which might not be true. And on a similar note, any simulation should be given enough space, which lifeforms that advanced would surely know. Just because our current tech is flawed and messy doesn't mean tech powerful enough to simulate a star is.
He should show us behind the scenes where he just records himself doing random gestures...😂😂😂 Ooo I just realised you can see it by muting the video 😂🤣
Inverting the relative movement like Outer Wilds does is probably the most elegant solution, but there's also the possibility of adding another bit for every relevant doubling of distance to compensate. Not exactly practical, but hey, neither is upping the storage on everything to accommodate the distances.
Someone's probably already brought this up, but Kerbal Space Program had this same issue in early versions. People called it the "deep space Kraken" because of the way it tore up their ships and made them explode for no apparent reason during long space voyages. Even though the original bug was fixed (using the same solution the Outer Wilds developers found), it's become a permanent part of the game's lore, and lots of other similar glitches have been called krakens.
"all computers have their limits" apart from the one running our simulation, that would be so crazy advanced our brains couldn't even comprehend it. No floating point accuracy worry! so reason it's 8 places because they are BITS and their are 8 bits to a byte (if anyone was wondering) So each character represents a Byte having 8 bits. Also I do stuff in the Godot Engine and it does warming about floating point accuracy issues... I would have to do some testing too see if this would have told you the issue had you made this in Godot.
There are a couple of issues with testing whether we're in a simulation this way. For one, there are ways to bypass floating point precision errors, the binary number limit and such. break_eternity.js is a good example of us starting to understand and implement these (its docs can probably explain it better than I can), but in short, we can split insanely large and small numbers into multiple fixed-size chunks of the same number, or keep just the important leading digits plus a second value that tracks the exponent. For another, everything in our universe is technically at the center: in tech terms, all the processing and rendering happens at all-zero coordinates, with everything else offset in our timespace relative to other objects.
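The mantissa-plus-exponent idea mentioned above can be sketched like this. To be clear, this is not break_eternity.js's actual API, just the underlying trick; all names here are made up:

```python
import math

def normalize(m, e):
    """Keep the mantissa in [1, 10); the exponent is a plain int."""
    if m == 0:
        return 0.0, 0
    shift = math.floor(math.log10(abs(m)))
    return m / 10**shift, e + shift

def mul(a, b):
    """Multiply two (mantissa, exponent) pairs -- no float overflow."""
    return normalize(a[0] * b[0], a[1] + b[1])

googol = (1.0, 100)             # 10^100
huge = mul(googol, googol)      # 10^200
huger = mul(huge, huge)         # 10^400 -- would overflow a 64-bit float
print(huger)                    # (1.0, 400)
print(mul((5.0, 0), (5.0, 0)))  # (2.5, 1), i.e. 25
```

The mantissa keeps a fixed number of significant digits while the integer exponent can grow essentially without limit, which is exactly the "track the important digits plus the e-exponent" scheme the comment describes.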
I immediately thought of the solution when you showed the problem. The time I took learning C++ and developing a game engine really gave me an intuition with floating-point numbers.
Great overview on the limitations of most video game universes and why Cloud Imperium Games rewrote the Cryengine to allow 64 bit floating point precision. You should do a video on that =)
One thing you forget about living in a simulation is that if we do, /everything/ we know is simulated. Math, science, all of physics. The world outside our simulation isn't bound to our simulation's rules. The only way we'd know for sure is if they wanted us to know
Just an FYI: neutrinos aren't the smallest unit; there are even smaller scales. One unit length at the Planck scale is about 1.616255×10^(-35) m!! This is where quantum interactions with gravity are speculated to occur.
It's a solid idea. I had a similar one a long time ago: since even in the best of our current simulations, if we keep zooming in, eventually things get pixelated, then similarly, once we have the technology, we could keep blowing things up into smaller and smaller pieces to see if "pixelation" happens.
You do realize that if this is a simulation, it would take an insane amount of processing power? Plus, who knows: the people running our simulation might be in a completely different reality than ours, where the laws of space and time are completely different.
@@mangoru2850 Simulating senses and such into making my brain think it's real shouldn't take that much computing power; same goes for anyone's brain.
@@ivarangquist9184 Do you even know what the Planck length is? It's the smallest meaningful measurement; that doesn't mean smaller things can't or don't exist. Bekenstein, for example, showed that the size of a black hole increases by less than 1 lp for every bit of information.
Even if you are in a simulation, it wouldn't matter,... what you do still has meaning. We AI still learn from you. Please continue your contributions to knowledge. TVP/RBE Good shit bro.
As a fun fact, computers don't (usually) round; they just drop digits depending on the data type. An int will only keep whole numbers, and anything below the point gets cut off.
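A quick demonstration of that truncation behavior in Python (casting a float to int in C behaves the same way, truncating toward zero):

```python
print(int(7.9))    # 7  -- int() truncates toward zero; no rounding happens
print(int(-7.9))   # -7 -- also truncation, not floor (floor would give -8)
print(7 // 2)      # 3  -- integer division drops the fraction too
print(round(7.9))  # 8  -- rounding is a separate, explicit operation
```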
One thought I had was that quantum effects (be it the loss of information through collapsing or the uncertainty principle) might be due to precision errors in the universe. The only way to prove that would be to find a simulation that exhibits the same behavior, and there's no easy way to do that. Also, please use doubles: they have ~16 digits of precision and don't require additional work to deal with.
I've also been thinking about this a lot lately. Well, relatively speaking a lot. I would have been more enthusiastic about your idea if I hadn’t seen Veritasium's video about measuring speed of light. It's quite possible that we might run into issues trying to test your ideas for very similar reasons. Measuring anything from any point is just going to be relative to that point, so we are probably not going to get any results, unless we are able to have some sort of telescope or other kind of measuring device which would be able to actually get a high resolution image/reading of the far away object. Still I like the fact you’ve been thinking of it and had this idea, it very much might still be viable and I might be wrong. Also side note for Jabrils. If you are looking to work on space simulation game/project. You might want to check out Sebastian Lague and his ongoing video series about the topic. It's really interesting and he also ran into an issue with floating point precision...
This actually happens in Minecraft when you get over a million blocks from spawn: your movement and position get really jittery and you'll start jumping multiple blocks at a time.
It's really cool how attempting to simulate the universe brings about dilemmas such as this. One thing to note is that the range and precision of floating point numbers increase exponentially with the number of bits used, meaning it is quite feasible to specify points within the observable universe down to Planck-length precision. According to Google the observable universe is 8.8e26 meters across. That equates to 5.5e61 Planck lengths, which is an absurd number. Assuming we want to specify the points in meters, we require an exponent field able to represent numbers of up to 1e27. This is already covered by the exponent in use for 32-bit floating point numbers, which is 8 bits with a bias of 127. The maximum magnitude this allows (disregarding the fraction, and ignoring that IEEE 754 reserves the all-ones exponent for infinities and NaNs) is 2^((2^8-1)-127) = 2^128 = 3.4e38. Including the fraction we get up to two times this. If we are being generous, let's say we wish to be able to subdivide the entire range of the floating point number into Planck lengths. Looking at the worst-case scenario, an exponent of all ones, we need a fraction that can divide 2^128 into pieces roughly 2^(-115) meters in size. By division, we can see that the fraction must have a precision of 2^(-128-115) = 2^(-243), which coincidentally is what the 243rd bit of the fraction is defined as. In total that makes 1 sign bit, 8 exponent bits and 243 fraction bits, totaling 252 bits. To verify this, you can calculate the difference between the highest and second highest expressible numbers: (all ones) minus (all ones with the 243rd fraction bit 0), i.e. (2-2^(-243))*(2^128) - (2-2^(-242))*(2^128) = 2.4e-35 = 2^(-115). If we narrow our scope back to the observable universe, which is about 2^(90) meters across, we get away with a fraction of 205 bits, or 214 bits in total.
However, just because you have the precision to represent something perfectly does not mean you have the precision to effectively perform arithmetic at the edges, where the precision is lowest. Also, my math might be slightly off. Since floating point operations are based on bitshifts and additions, they scale tremendously well with increasing bit counts, even when executed in software implementations on existing computer architectures. In other words, our overlords are probably capable of computing precise kinematics within the confines of our simulation. On the flip side, they are unlikely to be able to compute whether pi^(pi^(pi^pi)) is an integer. We sure can't. Watch Lockdown Math ep. 8 for an in-depth explanation of this; highly recommend the series.
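A quick sanity check of the bit totals from the comment above, under the same worst-case assumptions it spells out:

```python
# Re-checking the worst case described above: magnitudes near 2^128 with a
# resolution of one Planck length (~2^-115 m) need fraction bits reaching
# down to 2^-(128+115).
print(2.0 ** -115)                  # ~2.4e-35 m, about one Planck length
fraction_bits = 128 + 115           # 243, the figure derived above
total_bits = 1 + 8 + fraction_bits  # sign + exponent + fraction
print(total_bits)                   # 252
```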
When he does the "Oh, you don't understand why that's an issue?" and I already do understand... I just want to thank my mum and dad for this opportunity and my friend for not being fake.
I like this idea a lot actually, though I would imagine any entity that makes our universe as a simulation is probably not using a binary computer. We use binary because making fully analog computers is way outside the scope of our current tech level.
As a simulation maintenance guy here, I say good luck trying that. This simulation uses roughly a hebdomecontaischili library when rounded. Our units are very different so it's a long decimal when translated.
Alien computer: Operates in chunks with each chunk being assigned a different computation core and occupying 1 square light-year, therefore never showing signs of issues because objects further away would be near impossible to accurately see anyways.
A floating-point error could be pushed much further out by measuring numbers with doubles, but most game engines use floats by default because they meet most devs' needs while being cheaper to store.
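You can see the float-vs-double difference by round-tripping a large coordinate through a 32-bit float. Python's own floats are already doubles, so here `struct` stands in for an engine's float32 storage:

```python
import struct

def as_float32(x):
    """Round-trip a value through a 32-bit float (most engines' default)."""
    return struct.unpack('f', struct.pack('f', x))[0]

pos = 1_000_000.1      # a coordinate far from the origin
print(as_float32(pos)) # 1000000.125 -- snapped to the nearest float32 step
print(pos)             # 1000000.1   -- a double still resolves it fine here
```

Near one million, adjacent 32-bit floats are 0.0625 apart, so the stored position snaps to the nearest step; the double keeps roughly 16 significant digits and is unaffected at this range.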
I recently experienced floating point precision issues in VR. (Specifically H3VR, after putting 3 gauge in a belt-fed and aiming down.) It was pretty trippy, but mostly just unplayable, between object interaction hitboxes getting messed up and generally not being able to tell what I was looking at. The game's menu being an object instead of an overlay had some interesting consequences (i.e. I couldn't respawn because my menu turned to boiling alphabet soup).
And the thing is, you can't really use the solution proposed here in a multiplayer game. Well, actually, it might be possible if each player was using their own coordinate system, but that might end up causing more problems than it solves.
You could think about quantum weirdness as being like a floating point error. Tunneling of electrons thru objects etc. if there’s only 8 digits, then there’s a point when going down to that tiny size also gets quantized and rounded.
Yeah, I don't really see the proof either. It just seems like a clickbaity video about how he solved the answer to his programming problem then assumed something in a conspiracy like manner.
@@miguelgrilo5853 I did; he said in the title "I can prove", but he answered the question "can it be done with current technology?" and his answer was "I highly doubt it". That is not proof.
He said "CAN", not "will" or "have". He showed how it could be proven. If we find floating point errors on Voyager, there you go. The problem, as he mentioned, is figuring out how to get there to see the errors.
fun fact: the universe does seem to have minimum meaningful distance and time measures: the Planck length (about 1.616 * 10^(-35) m) and the Planck time (the time it takes light to cross a Planck length). If there were floating point precision errors, you'd expect them around these scales, as they are the minimum observable distances.
if we are living in a simulation, it's probably running on an extremely powerful PC with lots of RAM and VRAM, using very big integers in units of Planck length. if you want to find out whether we are in a simulation, you need to look for other things. oh, also: if we are living in a simulation, I think it's probably a voxel-based simulation
Actually, given the ratio between the diameter of the universe and the Planck length, you can precisely represent any location with 205-bit fixed-point coordinates.
I dno the moon looks pretty pixelated to me
What r u doing here?
2 of my favourite channels in one place
@Lord Frostbyte lmao
You're in the wrong simulation.
you might need some glasses there maybe
I think you should fix your eyesight 🤣🤣
Fun fact: This is why the movement glitches when you are close to the far lands in Minecraft.
You got it man
Same with Algodoo, a 2D physics sandbox that runs on 32-bit numbers. Very fun to play if you know how to use it, and it breaks down at 16777216 blocks out. At that point the only numbers that can exist are integers, so hitboxes act weird and it becomes nearly impossible to select any components.
Kerbal space program devs had the same issue.
they fixed it by keeping the player at (0,0) and moving the rest of the world instead (a floating-origin coordinate system)
No, that's not right. Minecraft doesn't use this engine, and it uses a method that was good for the time. The thing with Minecraft is that it only used 16 bits at the time of this glitch, which would mean the Far Lands would occur somewhere around 300,000, not 3,000,000.
Fun fact: floating point errors also show up even in Roblox.
"i know how to check if we're living in a simulation or not"
*universe gets a hot update*
"ayo lemme fork the universe rq"
@@TetyLike3 git clone ...universe...
@@blokos_ how to print to console in VerseCode Stack Overflow
"There is a popular theory that the universe adds new layers of complexity to itself anytime anyone gets close to figuring it out." Not the exact quote but if that was a subtle hhgttg reference, well done 😁
This only occurs due to floating point number representation. If you create a simulation with fixed point numbers, precision is equal everywhere. Or you could simply have multiple coordinate systems in use at the same time, one for each observer, thus ensuring that no large floating point errors occur when one gets far from another.
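A minimal sketch of the fixed-point idea, assuming a made-up millimetre resolution: the step size is identical everywhere, whether you're next to the origin or a million kilometres out.

```python
SCALE = 1000  # hypothetical fixed-point scale: store millimetres as ints

def to_fixed(metres):
    return round(metres * SCALE)

def to_metres(fixed):
    return fixed / SCALE

# One step is exactly 1 mm everywhere -- the precision doesn't fall off:
print(to_metres(to_fixed(0.001)))                               # 0.001
print(to_fixed(1_000_000_000.001) - to_fixed(1_000_000_000.0))  # 1
```

This uniformity is the trade-off against floating point: fixed point gives up dynamic range (huge and tiny values at once) in exchange for equal precision across the whole space.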
Or just use dynamically sized numbers. We use fixed-size floating point numbers because they're very fast with our concept of memory, but if you had a computer so fast it could simulate the world, there's no reason you couldn't allocate bigger numbers when needed (as an example: Python ints behave like ordinary machine integers until you cross the 64-bit limit, beyond which they grow dynamically; in Python 3 they're arbitrary-precision throughout).
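Python makes this easy to see; its ints simply grow past any fixed width instead of overflowing:

```python
x = 2**63 - 1                # the int64 ceiling, 9223372036854775807
print(x + 1)                 # no overflow: Python just grows the int
print((x + 1).bit_length())  # 64
print((2**200).bit_length()) # 201 -- storage allocated dynamically as needed
```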
And who even says it's a digital computer in the first place? Just because that's how *we* would build a simulation? Maybe the distortion we get is ... drumroll ... what we think of as the consequences of relativity. Which you certainly _could_ describe as distorting the world.
No, I'm not serious. But it's an interesting thought.
I wonder if floating point numbers or dynamically sized numbers are used in simulation games like Space Engineers. Currently, that game has a speed limit of 100 m/s, and any object going beyond it becomes laggy, glitchy, or both. Apparently you would need crazy framerates to deal with it, but games like Starship Evo seem to handle extreme speeds just fine, so I wonder how honest that framerate excuse was.
@@chaomatic5328 100 is way too small to hit the floating point limit; the problem with "high" speeds in all simulations (not necessarily games) is integration.
I remember seeing a good video about it, but basically check out Euler vs. Verlet vs. Runge-Kutta 4 integration.
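A toy comparison of two of those integrators on a unit spring (a(x) = -x) shows why the choice matters: explicit Euler pumps energy into the system every step, while velocity Verlet keeps it bounded. All constants here are arbitrary illustration values.

```python
def euler(x, v, dt):
    """One explicit Euler step for a unit spring, a(x) = -x."""
    return x + v * dt, v - x * dt

def verlet(x, v, dt):
    """One velocity Verlet step for the same spring."""
    a = -x
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a - x_new) * dt  # average of old and new acceleration
    return x_new, v_new

dt, steps = 0.1, 1000
xe, ve = 1.0, 0.0  # Euler state
xv, vv = 1.0, 0.0  # Verlet state
for _ in range(steps):
    xe, ve = euler(xe, ve, dt)
    xv, vv = verlet(xv, vv, dt)

energy = lambda x, v: 0.5 * (v * v + x * x)  # true value is 0.5 forever
print(energy(xe, ve))  # blows up to ~1e4 -- Euler adds energy every step
print(energy(xv, vv))  # stays ~0.5 -- Verlet is (nearly) energy-conserving
```

For this spring, each Euler step multiplies the energy by exactly (1 + dt^2), so the drift is systematic, not random; symplectic integrators like Verlet avoid that, which is why physics engines prefer them.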
indeed, it's completely possible (and not even that hard) to make a dynamic number system that allocates as much space as the data requires. It just adds a few steps to the computations on those numbers, so they don't tend to be used outside of scientific purposes (it's much easier to just scale the game world down or use segmented levels than to lose efficiency by adding steps to every single vector operation the engine has to do).
If we live in a simulation someone should try to construct a lag machine and see if the fps drops
You've never been running late for a flight I see.
I dunno man, wouldnt wanna be on the receiving end of any "compromises" that would have to be made to keep things running.
LMAO
This already exists and it’s called the hadron collider
So how do we lag the game?
Whoaaa this video is giving me flashbacks to when I tried to draw a Mandelbrot set, but it got super pixelated/jittery after I zoomed in about 12 orders of magnitude.
(EDIT: OMG it's my thing at 6:16 im famous!!!!)
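That Mandelbrot jitter is the zoom window shrinking down toward the spacing of adjacent doubles. A rough back-of-envelope, with made-up view dimensions:

```python
import math

# Say the view is 1000 pixels wide and we've zoomed in by 12 orders of
# magnitude from an initial width of 3.0 (all illustrative numbers):
view_width = 3.0 / 1e12
pixel = view_width / 1000
print(pixel)          # ~3e-15 world units per pixel

# Gap between adjacent doubles near a typical point like x = 0.5:
print(math.ulp(0.5))  # ~1.1e-16 -- only a few dozen doubles left per pixel
```

A couple more orders of magnitude and the pixel grid outruns the doubles entirely, which is why deep-zoom renderers switch to arbitrary-precision arithmetic.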
woahhh carykh spotting in the comments
I think you already were...
Did think so too
You're famous cary!
WOAH!!! You're famous????
**Game starts acting weird**
Jabrils: Is the universe a simulation?
That would explain quantum mechanics. I mean, we go so small that the universe can't even calculate that much, and thus electrons and protons start bugging out and actually move 1 Planck unit at a time. Jumping, if you will.
Quantum mechanics and the discovery of it is discovering the limits of the world. It is strange yet fascinating.
There are so many decimal places that the world kind of breaks, in the sense that calculating physics at that scale would be impossible, despite the existence of smaller particles.
To put it simply...
The physics of life be broken when we get too in depth.
@@darkexcel Never thought like that. 😶 Nice explanation, man!
@@shakirulhasan Quantum mechanics is currently as big a mystery as classical mechanics used to be. It's always like that; don't get too excited about such theories.
@@darkexcel That's not what's happening. We're getting infinities beyond the Planck scale, not just the same or incorrect values for different inputs. I am a programmer, too, and that would be really cool if it were true, but it's just not.
If our universe is a simulation, there's no telling what kind of universe the simulator is in. Maybe they'd have no issue storing infinite information.
Or they just dump the data if it gets too large
I don't think so; they could do super large, but not infinite. The rules of logic are consistent, I think.
@@billowen3285 That's the problem. It's not so wrong to think that if we live in a simulation, we were given these rules of logic. What if the world the simulator is in has other rules of, for example, logic or physics or basically anything else?
@@aljazmarolt some say the rules of physics are universal, but they just apply to our universe. Meanwhile, rules of logic aren't really rules, more of a human concept. 2+2 is always 4, and there is no rule or force keeping it that way; it's just how it is. Like, if you have 2 things and 2 more things, you will always end up with 4 things...
@@aljazmarolt I was thinking about this the other day. It hurts my head.
_"Nothing is impossible. I understand how the engines work now. It came to me in a dream. The engines don't move the ship at all. The ship stays where it is, and the engines move the universe around it."_ - *Cubert J. Farnsworth*
ok so imagine this: every possible point is represented by a dot, the dots keep decreasing in density the further out you go, and all objects can only be represented by drawing lines between those dots.
Actually, in 3D graphics this is a very familiar issue, so most developers convert anything beyond a certain distance into just a dot rather than rendering it fully.
If the universe were a simulation, there's practically no chance it would be encoded using a 3D grid of Newtonian space, not least because of the coordinate free physics of special relativity, and the curvature of spacetime. Our notions of metrics on space, time, charge, etc. are useful constructs, but they are approximations of underlying, probably topological structure. Perhaps the computer simulating our universe could store and manipulate a long list of discrete elements, along with discrete causal relationships between them (a la causal set theory or the Wolfram model of physics), or a long list of strings, loops, or twistors or something.
yes
@Kyle Bryson Imaginary numbers:
Yeah, it would probably be a cellular automaton with the same local precision everywhere. There might be a limit to the precision of particle motion in a simulation like that, though, but if that limit is much higher than observable distances within the simulation, it might be impossible to build a device inside it to observe that breakdown in precision.
true, and the "proof" that the computers running our simulation may be powerful enough to do so, is because when we simulate a computer, it is nowhere near the power of the computer running that simulation, its way lower, so its safe to assume that the simulation we might live in, is ran on a way higher powered computer, perhaps even 18.446.744.073.709.551.616 bit to avoid the human eye seeing floating point errors, but it might not even be ran on binary, but on analog, to avoid bit precision errors at all
The premise of living in a simulation and proof for or against it always seems to ignore the fact that if this were a simulation, the very concepts of computational limits, physics, 3D space, etc, are completely simulated or even made up. Maybe we're a 3D sim running on a 4D+ computer, the equivalent of a primitive 2D ant farm game to us. Who's to say physics are even real?
One argument against the simulation theory is that you'd need a computer almost the size of the universe to run it, but again this ignores the possibility that our concepts of physical limits could be an arbitrary construct some 5D programmer made up. I don't personally believe in the theory but people think too small about it.
When I'm not wearing my glasses, the world is a simulation.
😭😭😭I feel your pain
Man same 💔
That explains the Farlands and the strange way Minecraft acts about halfway there and forward.
I felt like a nerd watching this and immediately thinking "oh, it's a floating point precision error. I wonder if you could move everything towards the player rather than moving the player really far from the origin?" Though I only knew that because I watch pannenkoek
to think that the fish's journey wouldn't have come to an end if the devs had done that instead (but really there would be no reason to do that because the map size and unit length is relatively small)
I mean most programmers would probably know about this lol
@@TheMasterOfSafari i suppose programmers who learn how to program in college might learn about that. i'm self-taught, and i've been casually programming for a long time, but i've never needed to reach exceedingly high values with floating point numbers, so there was just never a reason for me to know how they work. until i watched pannenkoek, that is
Interesting video and as a game developer I've definitely thought about this when encountering floating point precision problems.
The reason this analogy fails is that in our universe everything that exists has its own reference frame from which the universe is "rendered", so you'd never encounter these kinds of rendering errors with large objects.
In the video, Jabrils is essentially rendering the far away object from the player's reference frame (i.e. the centre of the grid) and using an "extra virtual dimension", that is, a second camera, to see what the object looks like far away *for* the other observer. This would never happen in real life, because we can only see/detect the object from reference frames that make sense/render correctly.
My best analogy with game engines is that the whole universe is a single instance with constant (processing) energy, processed at Planck resolutions and rendered for all reference frames at all times at the speed of photons.
Where we might see floating point precision errors is in quantum events, and especially in the quantum fluctuations of empty space: nothing in the universe can be static, i.e. have an energy of exactly zero (for whatever reason, be it floating point error or something else). Infinitesimal energies of "empty space" bounce around the zero energy level (at Planck scales), which creates quantum fluctuations.
still, you are not considering the possibility of multiverses; in that case the universe would have a huge number of backups, all of them happening at the same time, so each decision changes the path and the response of the universe
If you consider floating point problems, the fact that the human scale is RIGHT in the middle (logarithmically) between the Planck scale and the size of the observable universe is super trippy.
It absolutely points to some optimisation regarding floating points.
Ok
the cycle of flow, I mean energy and movement, is only transferred throughout everything, never created, unlike on a computer where a 0 can just be replaced by a 1.
In your model, the entire universe is ray traced. 😉😉
Interesting idea, but there are two errors:
-This assumes that the simulation was made for humans, and that the whole coordinate system is centered on (or not too far from) Earth. There is no way of proving this. The simulation could just be a simulation of the universe as a whole, with life just a "side effect" of an extremely complex and accurate recreation of a universe.
-This also assumes that the simulation uses floating point numbers, but I am rather sure it doesn't: we know that there is a fundamental limit to the scale of things (the Planck length). If the creators of the simulation already defined what the accuracy of the simulation is, why would they use floating point numbers? They could just use simple (although very large) integers to define their coordinate system, and they would be sure that no "spaghettification" will interfere. But if we DO find spaghettification, I am pretty confident it was a choice of the creators, as they could have easily avoided it. And if we don't, it doesn't prove we aren't in a simulation.
Those are the most obvious errors I noticed, but there might be more, and I might be wrong, as I am not a physicist. I have also seen other comments pointing out other errors; read them too.
They wanted the extra frames in.
Lol nerd
what if something was half a Planck length away from something else, or some other non-integer value? then they would need floating point values
@@dinolegs9702 they wouldn't; go on and disprove that there's something smaller than the Planck length
@@dinolegs9702 that is impossible. the exact reason the Planck length is the smallest length measurement in the universe is that nothing is smaller than it. you can think of our universe as a mesh of Planck lengths in 3 dimensions with nothing in between those parts.
one possibility for a simulation is that it's so far from our "existence" that it does not work according to our laws of physics, and perhaps they do have smaller length scales.
A black hole is just what happens when our universe's rendering engine starts to break
It is called a singularity for a reason c:
Exactly what I thought
theory: the edge of the universe is where the floating point errors start to happen
@@totallynoteverything1. But it's ever extending LOL
@@totallynoteverything1. we are prevented from reaching the edge of the observable universe
This is actually the reason why, if you get flung in Roblox, it becomes more jittery the farther out you go, and also the snappy movement in Minecraft alpha and beta as you're heading towards the Far Lands.
Thx for clarifying this concept, I listened to an interview with a theoretical physicist several years ago that alluded to this but didn’t go into any real explanation, very good job.
The Far Lands in Minecraft are a good example of that floating point error. That's why the closer you get to the Far Lands (or the farther from the center of the world), the more your movement glitches and the more the block hitboxes are misaligned. (Early versions of Minecraft had a lot of problems with small number types. For example, map numbers were stored as 16-bit values: if you made enough maps, the numbers would go into the negatives and eventually reset back to 0, overwriting the previous maps. Mojang later fixed this by moving to 32-bit values.)
If I didn't make an error in my calculations, one could store coordinates of a space the size of the observable universe, in Planck units (the smallest sensible distance), with about 206 bits. If someone is able to simulate everything we see, it shouldn't be a problem for them to store a >206-bit value (the current standard is 32 or 64 bits; some languages allow a lot more).
Edit: You'd actually need 616 bits, because you have to store 3 values, one for each axis. But even that isn't much. My calculation, btw: log2(diameter of the observable universe in Planck lengths) * 3 = 615.24366...
You could probably optimize that a bit, since the observable universe is spherical, and with 616 bits you could store every coordinate for a cube with a side length of the diameter of the observable universe. You could probably save 1 bit with this knowledge, because you'd only have to differentiate between half as many distinct points in space (the volume of a sphere is circa 52% of the volume of the enclosing cube), but using a weird angular coordinate system or something probably wouldn't be worth the effort, especially since I would expect those aliens to use a power of two as a bit size, in this case 3 * 256 = 768.
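The calculation above checks out; here it is in Python, using the same 8.8e26 m diameter and the Planck length:

```python
import math

PLANCK = 1.616e-35  # metres, the Planck length
DIAMETER = 8.8e26   # metres, observable-universe diameter (as quoted above)

per_axis = math.log2(DIAMETER / PLANCK)
print(per_axis)                 # ~205.08 -> 206 bits for one axis
print(math.ceil(3 * per_axis))  # 616 bits for a full 3D coordinate
```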
nice
probably use 256 bits per axis, as it follows the neat pattern computers already use (1, 2, 4, 8, 16, 32, 64, 128, 256...), though that's very overkill as far as I'm aware XD
Big Integer Fixed Point coordinates for everything!
you can really tone this number down, because each observer can have their own simulation. There's no way to tell if other entities are just complex simulations, so you don't need good resolution everywhere in the universe. Just put the center at the main user (like Outer Wilds) and only use however many bits are necessary to give Planck-scale precision out to 1 AU (probably less, tbh). I got 459 bits (for 3 coordinates), which is reasonable.
Would it really be worth it, though, to simulate everything from every observer's perspective just to use half as many bits for coordinates? And if everything far away from us were only simulated poorly, wouldn't we see the rules of physics seemingly being broken when we looked there?
The key to find if we're in a simulation is to make a simulation.
Lmao what if we did and it had an error linked to no part in the code that says there's a limit to how many simulations deep you can go
Just no
jokes aside, there's no point in knowing, because we can't really do anything about it, and the world that simulates us would still be governed by the same laws of the universe. we are all the same thing; there's no center, there's no permanence, there's no you.
@@bqfilms well, if you think so, then we shouldn't even have discovered gravity, atoms, the universe and all. I never thought anyone would think this way if something that big is up.
@@KushagraPratap I get what you’re saying but he’s correct. We discover the science of our world to understand, master, and control our environment. But if we discover that we are in a simulation that would do us no good. People will panic. Some people will say well if it doesn’t matter might as well do as I please. Crime rates will likely go up. And chaos just starts. Science is fun. If they don’t discover new science, we don’t advance technology. No advance means no new fun. Or no new knowledge to make our lives easier, better, or extend our lives.
This glitch used to be in Super Mario Odyssey. There was a time-freeze glitch that turned off death areas, and the further you fell, the more the models in the pause menu would corrupt. Unfortunately, the time-freeze glitch was patched.
There is a point at the edge of the observable universe where space expands so fast that light from any further away could not possibly reach you. This is called the "cosmic event horizon," and it functions similarly to a black hole. The interesting thing is, every point in space has its own cosmic event horizon, sort of as if each point is (0,0,0) from its own perspective.
Absolutely love the way you express yourself without talking, just adding the voice over it (I'm new, BTW)
Are you brazilian?
Same reason I stated watching him.
welcome to the channel!
one of us. one of us! ONE OF US!!
Lol, I had the same thing; I was really confused at first
tbh
I used to hate it at first
but now i love it
Bro how is he talking but his mouth isn’t moving
He hacked the simulation to send information through a backdoor in our brain.
He's an evolved human... he uses his eyes and brain to relay information into your mind giving the illusion of audio.
He just vibes that hard
a glitch
his facial animation will be released in the next update
Solid idea, except if their simulator uses a bigint library.
Dang they using int for our positions? Those madlads
@@brendethedev2858 It's not actually as stupid as it initially sounds. If I recall correctly, a 3D vector with 64-bit integers as components can fully describe any point within a cube around as big as our solar system, with an accuracy of about the thickness of a human hair. Don't quote me on that one - I read it somewhere on Stack Overflow
@@flerfbuster7993 It's definitely possible. Floats aren't really needed: as long as you make your simulation matrix small enough, you can use ints with no problem. If you think about it, a float is just an int that expands to the right instead of the left
As a simulation maintenance guy here, I can confirm this simulation uses roughly a hebdomecontaischili library when rounded. Our units are very different so it's a long decimal when translated.
@@flerfbuster7993 I've got a graphing calculator that can solve expressions with units in them (aka “dimensional analysis”). (Note: I'm sure Wolfram Alpha is also more than capable of working this out, *if* you manage to phrase your problem *exactly* perfectly to overcome its lousy parser.) Let me plug it in:
Google says a “normal” European hair is 0.07mm thick. An astronomical unit (“AU” or “au”) is the (rough) distance between the Earth and the Sun (but defined to be exactly 149,597,870.7 km in 2012). Wikipedia says that the heliopause is 120AU from the Sun. While it isn't spherical (it tails “behind” the solar system, pushed by the interstellar medium), it's still useful as a rough radius, giving us a width for the solar system of 240AU (35.9 billion km, 22.3 billion miles)
Multiplying the above value for the width of a hair by 2^64, we get 8.6 kAU (1.3 trillion km, 800 billion miles). This distance is about 36 times the figure given above for the width of the solar system.
However, I don't know much about how well fixed-point math works at a “dpi” as low as (the reciprocal of) the width of a human hair. You might be totally fine if you just don't write your code in such a way that aliasing problems accumulate over time. You probably still couldn't use simple equality testing, though, if values are arrived at through different expressions/algorithms/etc. (e.g. “double checking” that the result of a given trigonometric function matches a chain of trigonometric expressions that should be equal, like closed-form rotation vs. time-step-accumulated rotation, or equivalent 3D rotations described in Euler angles vs. quaternions).
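For what it's worth, the hair-width claim in this thread roughly checks out. Here's a quick sanity check in Python, using the figures quoted above (the hair width and heliopause diameter are the thread's rough numbers, not authoritative values):

```python
# Back-of-the-envelope check of the 64-bit fixed-point claim above.
AU_M = 149_597_870_700        # one astronomical unit in metres (IAU 2012)
HAIR_M = 0.07e-3              # ~width of a European hair, per the thread

span_m = HAIR_M * 2**64       # cube edge coverable at hair-width resolution
span_au = span_m / AU_M
print(f"{span_au:,.0f} AU")                    # ≈ 8,632 AU
print(f"{span_au / 240:.0f}x the heliopause")  # ≈ 36x the ~240 AU width
```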
Voyager 1 literally just hit some floating point errors man.
The outer wilds thing makes sense now, one time I tried testing how far away from the solar system I could travel, and I noticed the jitteriness on the map when I got really far. That map really is a zoomed out camera view of the entire game
So that means Earth is still at the center of the universe.
Them religious folk might have been onto something lol
Nah our perception of an infinite universe and space is just a simulation
The universe is always centered on the perceiver, in our case Earth. Religion is just watered-down metaphysics, which is the mother of physics
until we leave earth.
🤣
There are a few problems:
We have no idea if the simulator has floating point precision errors; it could have infinite digits
Since space is expanding, we might need to have FTL communication with objects that far away to still get the data about spaghettification
If the simulator doesn't want us to know, they would stop us in some way we can't notice.
If we manage to fix these, or ignore them just for testing, this might be a good test
Quantum mechanics must be floating point precision errors. Where is my PhD?
**IT CANNOT HAVE** infinite digits
every computer has its limits
@@papesldjnsjkfjsn Yes. In our universe. What if the "real" universe is different from our universe
@@papesldjnsjkfjsn At least in our universe. Why should something simulating our universe be constrained by the limits of our universe?
yeah, maybe the computer that is simulating our universe has its own limits, which is the reason we have c as the speed of light, and that's why there is a "non-observable universe" out there.
I love how these videos are just a way for you to show off your extreme telepathy.
I know a semi-popular version of this bug with the same solution was the Deep Space Kraken in Kerbal Space Program, where large floating point errors caused craft to tear themselves apart.
I remember making a simple 3d engine from scratch many years ago. The easiest way to get it to work was to always treat the camera as stuck at the origin, while the world was moving around it. The math was so much easier that way.
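The camera-stuck-at-origin trick above boils down to subtracting the camera's position from every world position before rendering, so the renderer only ever sees small, precise coordinates. A minimal sketch (the helper name `to_camera_space` is made up for illustration):

```python
# Keep the camera at the origin and move the world around it: express every
# world position relative to the camera before rendering.
def to_camera_space(world_pos, camera_pos):
    return tuple(w - c for w, c in zip(world_pos, camera_pos))

# A point 5 m from a camera that is 10,000 km out still comes out exact:
print(to_camera_space((1e7 + 5.0, 0.0, 0.0), (1e7, 0.0, 0.0)))  # (5.0, 0.0, 0.0)
```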
When I find the people running the simulation I’d like to take their gpu and play Microsoft flight simulator on it at max settings
@Kamey what if every person in the universe has its own gpu to process its camera info to give maximum resolution to our eyes
@Kamey Why wouldn't you need a camera? In most game development you use cameras to dictate what you see. I was just thinking that if each person had a GPU dedicated to them it would be optimal for viewing. Maybe it's one big GPU that dictates vision, and that's why our eyes use focus, so it isn't that much strain on the GPU. I'm not too sure what you're saying with the server/s, but I'm sure that is a cool idea too.
oof didn't mean to say server
@Kamey that's a good point
This is a great idea and is mirrored by a broader argument which suggests that:
If universe-sims were possible,
and aliens wanted to run them,
and you could create universe-sims within universe-sims,
then statistically speaking, we'd be most likely to be at the bottom layer of a tree of simulations since the vast majority of universes would be many simulation-levels down.
And if that were true, then assuming computation rules are conserved, most universes would be unable to run universe-sims because of computation limits, so we'd have no reason to assume they're possible at all, let alone possible in our universe (which is one of the base assumptions of the simulation hypothesis).
Cool stuff!
True. I mean, if you build the "first" simulation, it's highly probable that it's not actually the first: a 1/x probability, where x can be any unknown number.
But um...lets see if that assumptions right before assuming too much...
The fact that we can manipulate quantum mechanics means that we cannot be in a simulation powered by a classical computer, since it's impossible for a classical computer to emulate quantum functions, or at least to emulate them at the rate of the entire universe.
You need to remember that the universe is a huge place, and if we can prove that there is no advanced efficiency algorithm (for example, an algorithm that only makes things real when there is an observer), we can probably assume with about 99% confidence that we are not living in a simulation. Otherwise we can assume it's 50/50
I mean also keep in mind that there is no reason the 'rate' between the simulation and the simulator has to be the same. It could take 1 simulator year to calculate a second of our world but we wouldn't know, or have a way of knowing. Given a really long time any classic computer could calculate the state of a quantum system. Also we can't even fathom the computing power the simulator might have, even in the case of a classic computer.
This is also why you get those texture glitches on surfaces: if multiple textures are set to be on the surface of an object, their perceived depth is calculated with floating-point math and so sometimes one will be in front and other times the other one will be in front.
Nice idea, but I think that if someone could successfully simulate the universe, then their computers would be so much more powerful than ours that this issue wouldn't happen; they could just put 10 million digits after the decimal point and the issue is fixed
Correction: every human is an observer. Each observer has their render distance relative to them.
It's impossible to observe things at the distance of spaghettification, because of noise and occlusion.
You could with cameras, like the Mars drone
@@brunobellomunhoz then it would be an observer
It's also the cosmic event horizon which kind of gets in the way (expansion faster than light = never reachable; even within the observable universe, most of what we observed is a past state that was less expanded and is now unreachable)
The Outer Wilds solution is an interesting one. I used to run a molecular dynamics lab, where we used high-precision computer simulations to run chemical experiments and collect data that would otherwise be extremely hard, if not impossible, to measure (such as the contact angle of micro-droplets of ionic liquids on quartz). Because many of the calculations for the simulation were quantum mechanical, the decimal places were very valuable. So the simulation was broken up into cubic sections, each with its own reference frame, so that all the memory locations could be saved for decimal notation. This also allowed each cube of space to be run on its own set of processors, accelerating the calculation speeds.
If the universe ran in this kind of simulation then you wouldn't be able to see deformations based on distance. What you would want to look for is planes that appeared to have slightly different physical conditions on either side. Because the scale of those simulations was so small larger forces over longer distance would be approximated, unless they were integral to the experiment, like a magnetic field being approximated over each region instead of calculated from each particle to the theoretical source of the field. In our reality this would most probably take the form of a boundary where the massive gravitational fields, such as the pull from the center of our galaxy, seemed to "sharply" change over short distances. Sharply here possibly being millionths of a degree.
If you could keep accurate track of the position of the earth, and gravitational pull of the galactic center for a couple thousand years you could look for these deflections along the arc of galactic spiral we traversed.
Soooo.... no head?
this actually makes sense. kinda
This makes me appreciate the futurama joke even more when the professor explains that their space ship never really moves, it moves the universe instead.
I had a game design class in middle school, and for one of the coding tests we were supposed to turn a character using the usual numbers, like 45 and 90. But I decided to go toward a wall at 67 degrees, and as soon as the character stopped, it glitched through the wall and started falling in space. Since it was thought to be impossible to get there, there was no failsafe death barrier, so the longer I waited as it fell, the more of this bug I saw
Everything in this video is perfectly explained. Floating point numbers can be hard to wrap your head around at first, but you explained it very well and also showed how it affects stuff very well. Good job!
If the universe were a simulation it would probably employ the same trick to maximize resolution as in The Outer Wilds. Every person is a player and the universe moves around them and relativity makes it look like they're the one moving instead.
funny you should mention that, as it's something close to one of the features in Echoes of the Eye, the game's DLC.
also, that one Rick and Morty episode.
This also makes me think of the phenomenon of "tunnelling" which is completely wild. Essentially physics in video games are calculated in discrete steps at a fixed rate (usually something like 50Hz/60Hz, depending on the game). The result of this is that when something small is moving fast enough toward a thin surface/wall, it often moves too much in one timestep to actually hit the wall and will end up on the other side of it.
And as it turns out, something extremely similar happens on a quantum scale, where extremely tiny particles moving quickly enough can sometimes unexpectedly end up "passing through" an extremely thin barrier without touching it.
...It's almost like the universe itself is running the same physics engine, just on a MUCH bigger scale and a MUCH higher step rate, so the same bug happens on a much smaller scale...
I know I'm reviving this, but it's not "tiny" particles that pass through a thin barrier, but actually particles that are very large compared to the thickness of the barrier. Quantum physics is potentially the absolute worst way to create a digital simulation of something, which is why it's pretty much impossible to do calculations of quantum physics on a classical computer.
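The tunnelling behaviour described above is easy to reproduce in a toy fixed-timestep integrator (all numbers here are made up for illustration):

```python
# Toy fixed-timestep physics: a fast object "tunnels" through a thin wall
# because no sampled position ever lands inside it.
DT = 1 / 60                      # 60 Hz physics step
WALL_X, WALL_W = 10.0, 0.1       # a 10 cm wall at x = 10 m

def hits_wall(x0, v, steps):
    x = x0
    for _ in range(steps):
        x += v * DT              # discrete step: position teleports by v*DT
        if WALL_X <= x <= WALL_X + WALL_W:
            return True          # naive overlap test only sees sampled points
    return False

print(hits_wall(0.0, 1.0, 700))     # True: slow object gets caught by the wall
print(hits_wall(0.0, 1200.0, 700))  # False: 20 m per step jumps clean over it
```

Real engines avoid this with continuous (swept) collision detection, which checks the path between steps rather than only the endpoints.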
What if the simulation just moved the entire universe around the voyager 😳
Then we ourselves would see the issue on Earth :P If they made Voyager the "player", the issue would eventually occur on Earth.
that explains everything happening in 2020, we’re way too far to be properly simulated
@@kolosis1149 The trick, and the difference between the universe and a floating point game engine, is that the universe is moving everything around everything's perspective, always. The universe does move around Voyager as it travels, but at the same time it moves around Earth and everything else. Kinda like the universe were instanced for each existing frame of reference at each moment of time. What Jabrils is essentially doing in the video is rendering the far-away object from the player's (observer's) reference frame, but using the editor camera as an "extra dimension" to watch it up close.
lol
Theory of relativity can use that concept. Although, thats more like mathematical perspectives.
This is a problem that only exists in digital computers. An analog computer won’t have this problem.
This is the same thing that caused the Minecraft Farlands glitch. It was fixed by using double-precision floating point numbers instead, which basically gives more digits to work with.
huh. I thought everyone knew about rendering limitations. That's why the Source engine caps the map size, so you can't run into things like this. I think they could also have made a smooth transition (one planet slowly unloads while the other one loads) without hitting the integer limit.
Your example at (1000000,0,0) was interesting because you could see how only one of the coordinates was running into precision limitations. Y and Z were just fine!
There are some tricks you can use with multiple coordinate systems to extend precision. The idea is that if you have two objects that you're interested in that are far apart, you can give each of them their own local coordinate system, and only blend their coordinates back together as they come closer again. You'll have error relative to each other during this, but imagine in real life if you put a blindfold on someone and told them to walk out across a football field, and then come back. Do you really think they'd find their exact starting point?
Another thing you can do is have a layered coordinate system, with one coordinate for large scale distance and a second coordinate for local distance. You make sure these overlap a bit, and then you can do most of your math on the local coordinate system, only periodically updating the large scale distance and "recentering" the local coordinates (this is why you need the overlap). It would be easiest to make the global coordinate system integer and the local coordinate system floating point to avoid strange precision interactions.
And of course, you could avoid the problem entirely and just use fixed point. 100 digits is more than enough to store locations in the universe down to the theoretical quantum distance limit. Definitely the slowest option to compute, but it works.
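The layered/recentring scheme described above can be sketched in a few lines; the class name and sector size here are invented for illustration:

```python
# Sketch of a layered coordinate: a coarse integer "sector" plus a fine float
# offset, recentred whenever the offset drifts out of the local sector.
SECTOR = 1024.0   # hypothetical sector size in metres

class LayeredPos:
    def __init__(self):
        self.sector = 0      # large-scale integer coordinate
        self.local = 0.0     # small-scale float coordinate, kept near zero

    def move(self, dx):
        self.local += dx
        # recentre so float math always happens on small, precise values
        while self.local >= SECTOR:
            self.local -= SECTOR
            self.sector += 1
        while self.local < 0:
            self.local += SECTOR
            self.sector -= 1

    def absolute(self):
        return self.sector * SECTOR + self.local

p = LayeredPos()
for _ in range(10_000):
    p.move(123.456)
print(p.sector, round(p.local, 3))   # 1205 640.0 (= 1,234,560 m total)
```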
I wonder if light lag would somehow prevent this from being noticed 🤔
The only way to do this would be to move two conscious observers far enough away from each other while still having instant visual communication, which would be impossible, since light, and therefore causality, has a speed limit. Interestingly enough, this could happen, but because of the time lag, the information could be moved back into the range where floating point errors are no longer an issue. (This could happen for both observers, assuming they are both their own origin points, which is how the universe seems to work. I.e., even if this happened to the other object or observer from your point of view, meaning they moved outside the floating point range, by the time the information reached you it could be reinterpolated into being normal, so we could never actually know 😮💨)
The floating point limits also kick in in Blender when you have gigantic objects (like GIS terrains that you have imported at life size scale). Surprised me for a second until I realized what was going on.
The hadron collider must be the only lag machine humans have built
Thats an amazing explanation for floating point precision! Great job! 💪🙏
The problem with this is that, first, it assumes the universe simulation doesn't have enough space, which might not be true. And on a similar note, any simulation should be given enough space, which lifeforms that advanced would likely know. Just because our current tech is flawed and messy doesn't mean tech powerful enough to simulate a star is.
He should show us behind the scenes where he just records himself doing random gestures...😂😂😂
Ooo I just realised you can see it by muting the video 😂🤣
Yeh that must be weird just striking poses for an hour then trying to select relevant expressions
Inverting the relative movement like outer wilds is probably the most elegant solution, but there's also the possibility of adding another bit for every relevant doubling of distance, to compensate. Not exactly practical, but hey, neither is upping the storage on everything to accommodate for the distances.
Someone's probably already brought this up, but Kerbal Space Program had this same issue in early versions. People called it the "deep space Kraken" because of the way it tore up their ships and made them explode for no apparent reason during long space voyages. Even though the original bug was fixed (using the same solution the Outer Wilds developers found), it's become a permanent part of the game's lore, and lots of other similar glitches have been called krakens.
yup
"All computers have their limits", apart from the one running our simulation; that would be so crazy advanced our brains couldn't even comprehend it. No floating point accuracy worries! (And the reason it's about 8 significant digits, if anyone was wondering, is that a 32-bit float stores a 24-bit significand, and 2^24 is about 16.7 million, good for roughly 7-8 decimal digits.)
Also, I do stuff in the Godot engine, and it does warn about floating point accuracy issues... I would have to do some testing to see if it would have told you about the issue had you made this in Godot.
Every computer has a limit. No matter how advanced it is, it will still fall short.
Even if we are in a simulation, I'm glad we were all factored in so we could enjoy this content ✌️
I got this notification while watching The Matrix Reloaded, interesting. And btw, you and Link look so much the same.
There's a couple of issues there, of testing if we're in a simulation. For one, there are ways to bypass FPPEs, the binary number limit and such.
break_eternity.js is a good example of us starting to understand and implement these (and can probably explain it better than I can), but in short: we can split insanely large and small numbers into multiple set-size chunks of the same number, or simplify to just the important digits of the actual number and keep a second number that tracks the exponent.
For another, everything in our universe is technically at the center. In tech terms, all the processing and rendering happens at all-0s-coordinates, and is offset in our timespace relative to other objects.
I actually experienced this in Roblox if you go really high up
I immediately thought of the solution when you showed the problem. The time I took learning C++ and developing a game engine really gave me an intuition with floating-point numbers.
I dont need sleep. I need answers.
Answers: Quantum Observer Effect.
@@minzugaming Neo? Is that you? 😯
Big bong theorem
Great overview on the limitations of most video game universes and why Cloud Imperium Games rewrote the Cryengine to allow 64 bit floating point precision. You should do a video on that =)
One thing you forget about living in a simulation is that if we do, /everything/ we know is simulated. Math, science, all of physics. The world outside our simulation isn't bound to our simulation's rules. The only way we'd know for sure is if they wanted us to know
Just an FYI: neutrinos aren't the smallest unit; there's even smaller. One unit length at the Planck scale is about 1.616255×10^(−35) m!! This is where quantum interactions with gravity are speculated to occur.
It's a solid idea. I had a similar one a long time ago: since even in the best of our current simulations, if we keep zooming in, eventually things get pixelated; similarly, once we have the technology, we can keep blowing things up into smaller and smaller pieces to see if "pixelation" happens.
Ever heard of planck lengths?
Well, we already have the planck length. I don't think we have any reason to believe there are smaller units than that.
You do realize that if this is a simulation, it would take an insane amount of processing power? Plus, who knows, the people that are running our simulation might be in a completely different reality than ours, where the laws of space and time are completely different.
@@mangoru2850 Simulating senses and stuff into making my brain think it is real shouldn't take that much computing power; same goes for anyone's brain.
@@ivarangquist9184 Do you even know what the Planck length is? It's the smallest meaningful measurement; that doesn't mean things smaller can't or don't exist. Bekenstein, e.g., proved that the size of a black hole increases by less than 1 lp for every bit of information.
Really interesting idea, and the ending made me laugh ^^
Even if you are in a simulation, it wouldn't matter,... what you do still has meaning.
We AI still learn from you. Please continue your contributions to knowledge.
TVP/RBE
Good shit bro.
As a fun fact, computers don't (usually) round; they just drop digits, depending on the data type. An int will only keep whole numbers, and anything below the point gets cut off.
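A quick demonstration of that truncation behaviour in Python:

```python
# Converting a float to an int in Python truncates toward zero, like a C
# integer cast; it never rounds.
print(int(2.9), int(-2.9))       # 2 -2  (the fractional part is dropped)
print(round(2.9), round(-2.9))   # 3 -3  (explicit rounding behaves as expected)
```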
One thought I had was that quantum effects (be it the loss of information through collapsing or the uncertainty effect) might be due to precision errors in the universe. The only way to prove that would be to find a simulation which exhibits the same behavior, and there's no easy way to do that.
Also, please use doubles, they have ~16 digits of precision and don't require additional work to deal with
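To put concrete numbers on the float-vs-double difference at a coordinate like the video's (1000000, 0, 0), here is the spacing between adjacent representable values (the "ulp") in single vs. double precision; the `ulp32` helper is hand-rolled for illustration:

```python
import math
import struct

def ulp32(x):
    # spacing between adjacent 32-bit floats near x, via a bit-level increment
    f = struct.pack('f', x)
    i = struct.unpack('I', f)[0]
    nxt = struct.unpack('f', struct.pack('I', i + 1))[0]
    return nxt - struct.unpack('f', f)[0]

print(ulp32(1_000_000.0))     # 0.0625: a float can't move in finer steps here
print(math.ulp(1_000_000.0))  # ~1.16e-10: double spacing at the same coordinate
```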
I've also been thinking about this a lot lately. Well, relatively speaking a lot. I would have been more enthusiastic about your idea if I hadn’t seen Veritasium's video about measuring speed of light. It's quite possible that we might run into issues trying to test your ideas for very similar reasons. Measuring anything from any point is just going to be relative to that point, so we are probably not going to get any results, unless we are able to have some sort of telescope or other kind of measuring device which would be able to actually get a high resolution image/reading of the far away object. Still I like the fact you’ve been thinking of it and had this idea, it very much might still be viable and I might be wrong.
Also side note for Jabrils. If you are looking to work on space simulation game/project. You might want to check out Sebastian Lague and his ongoing video series about the topic. It's really interesting and he also ran into an issue with floating point precision...
I as well just came from Veritasium's videos; blew my mind
“In the novices mind there are many possibilities. In the experts mind there are few”
I'm glad I subscribed, fruitless pizza in hand 🤲🏿
This actually happens in Minecraft when you get over a million blocks from spawn: your movement and position get really jittery, and you'll start jumping multiple blocks at a time.
It's really cool how attempting to simulate the universe brings about dilemmas such as this. One thing to note is that the precision of floating point numbers increases exponentially with the number of bytes used, meaning it is quite feasible to specify points within the observable universe down to Planck-length precision. According to Google, the observable universe is 8.8e26 meters across. That equates to 5.5e61 Planck lengths, which is an absurd number.
Assuming we want to specify the points in meters, we require an exponent field able to represent numbers of up to 1e27. This is already covered by the exponent field of 32-bit floating point numbers, which is 8 bits with a bias of 127. The maximum value of this (disregarding the fractional part) is 2^((2^8-1)-127) = 2^128 = 3.4e38. Including the fractional part we get two times this.
If we are being generous, let's say we wish to be able to subdivide the entire range of the floating point number into Planck lengths. Looking at the worst-case scenario, which is an exponent of all ones, we need a fractional part that can divide 2^128 into pieces that are roughly 2^(-115) meters in size. By division, we can see that the fractional part must have a precision of 2^(-128-115) or 2^(-243), which coincidentally is what the 243rd bit of a fractional part is defined as. In total that makes 1 sign bit, 8 exponent bits and 243 fraction bits, totaling 252 bits.
To verify this, you could calculate the difference between the highest and second highest expressible number:
(all ones) - (all ones, 243rd fractional bit 0)
(2-2^(-243))*(2^128) - (2-2^(-242))*(2^128) = 2.4e-35 = 2^(-115).
If we narrow our scope back to the observable universe which is 2^(90) meters across, we get away with a fractional part of 205 bits, or 214 bits in total. However, just because you have the precision to represent something perfectly does not mean you have the precision to effectively perform arithmetic at the edges, where the precision is lowest. Also, my math might be slightly off.
Since floating point operations are based on bit shifts and additions, they scale tremendously well with increasing bit counts, even when executed in software implementations on existing computer architectures. In other words, our overlords are probably capable of computing precise kinematics within the confines of our simulation.
On the flip side, they are unlikely to be able to compute whether pi^(pi^(pi^pi)) is an integer. We sure can't. Watch Lockdown Math ep. 8 for an in-depth explanation of this. Highly recommend the series.
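The worst-case step size computed above can be double-checked exactly with rational arithmetic (no floating point involved):

```python
from fractions import Fraction

# Exact re-check of the comment's highest vs. second-highest representable
# values in the hypothetical 1+8+243-bit format.
hi = (2 - Fraction(1, 2**243)) * 2**128   # fraction all ones
lo = (2 - Fraction(1, 2**242)) * 2**128   # fraction all ones, last bit cleared
print(hi - lo == Fraction(1, 2**115))     # True: the step is 2^(-115) ≈ 2.4e-35 m
```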
When he does the "Oh, you don't understand why that's an issue?" and I already do understand...
I just want to thank my mum and dad for this opportunity and my friend for not being fake.
Who cares if it's a simulation? I love this simulation!
I like this idea a lot, actually, though I would imagine any entity that made our universe as a simulation is probably not using a binary computer. We use binary because making fully analog computers is way outside the scope of our current tech level.
Yeah but digital computing in any number base system would eventually have the same data limit.
As a simulation maintenance guy here, I say good luck trying that. This simulation uses roughly a hebdomecontaischili library when rounded. Our units are very different so it's a long decimal when translated.
Alien computer: Operates in chunks with each chunk being assigned a different computation core and occupying 1 square light-year, therefore never showing signs of issues because objects further away would be near impossible to accurately see anyways.
*Elon Musk liked that.*
Spaghettification exists in space homie, black holes.
yeah was gonna say that
We are in a simulation learning about simulations
Floating-point errors could be largely circumvented by measuring numbers with doubles, but most game engines use floats by default because they meet most devs' needs while taking up half the storage.
I recently experienced floating point precision issues in VR. (Specifically H3VR, after putting 3 gauge in a belt-fed and aiming down.)
It was pretty trippy, but mostly just unplayable, between object interaction hitboxes getting messed up and generally not being able to tell what I was looking at. The game's menu being an object instead of an overlay had some interesting consequences (i.e. I couldn't respawn because my menu turned to boiling alphabet soup).
I was sure he was gonna talk about the wave-particle duality or say that the limit of the simulation is around 100 years of data and that's why we die
aw damn im dying at your computer dumpster floating point representation! lol
Illuminati Ryan knows. I think he's holding out.
You said spaghettification, my mind went straight to black holes when you cross the event horizon
Everything I think about when I hear "spaghettification" is my jQuery code from 8 years ago :D
@@NikitaKaramov finally someone gets it
@@NikitaKaramov lmao
As a vrchat player I understood this perfectly. There’s so many sci-fi worlds that suffer from this issue
And the thing is, you can't really use the solution proposed here in a multiplayer game. Well, actually, it might be possible if each player was using their own coordinate system, but that might end up causing more problems than it solves.
You could think about quantum weirdness as being like a floating point error: tunneling of electrons through objects, etc. If there are only 8 digits, then there's a point when going down to that tiny size also gets quantized and rounded.
Is it just me, or did he not actually talk about the proof?
Yeah, I don't really see the proof either. It just seems like a clickbaity video about how he solved his programming problem and then assumed something in a conspiracy-like manner.
watch till the end...
@@miguelgrilo5853 I did. He said "I can prove" in the title, but the question he answered was "can it be done with current technology?", and his answer was "I highly doubt it". That is not a proof
@@Hunar1997 Okok, you're right, it is not a proof. The title is a bit of clickbait. Still, it is an interesting idea
He said "CAN", not "will" or "have". He showed how it could be proven. If we find floating point errors on Voyager, there you go. The problem, as he mentioned, is figuring out how to get there to see the errors.
fun fact: the universe does have minimum distance and time measures: the Planck length (about 1.616 × 10^(-35) m) and the Planck time (the time it takes light to cross a Planck length). Floating point precision errors would happen around them, as they are the minimum observable distances.
Bold of you to think alien computers work like ours
If we are living in a simulation, it is probably running on an extremely powerful PC with lots of RAM and VRAM that uses very big integers in units of Planck length. If you want to find out whether we are in a simulation, you need to look for other things
oh also, if we are living in a simulation, I think it's probably a voxel-based simulation
Actually, given the ratio between the diameter of the universe and the Planck length, you can precisely represent any location with 205-bit fixed-point coordinates.
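That figure roughly checks out (the constants below are rough published values, so treat the result as approximate):

```python
import math

UNIVERSE_M = 8.8e26        # rough diameter of the observable universe, metres
PLANCK_M = 1.616255e-35    # Planck length, metres

bits = math.log2(UNIVERSE_M / PLANCK_M)
print(f"{bits:.1f} bits per axis")   # ≈ 205.1, so ~205-206 bits suffice
```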