I love how he calculated the size of the universe in Planck lengths so casually. 10^122 is clearly an unimaginably large number, yet since it can be expressed so concisely it's also unimaginably minuscule compared to these other numbers that cannot even be expressed via recursive arrow notation
well, we have to remember that the way our positional numbering system works, combined with the way we trivialized powers, makes it look really small, but if we instead used a prime factorization system to write our numbers, 10¹²² would be as much of a hassle to write as TREE(g(64)), if not more so
You can fit a googol (10^100) Planck-scale particles in one square inch of space. Meanwhile you would need another 100 QUINTILLION universes' worth of subatomic material just to represent the number googol (10^100).
10^122 is big, but easily expressible arithmetically. Graham's number can't even be expressed that way. You can raise a billion to the power of a billion, to the power of a billion, and so on, and you could keep writing that for the rest of your life and still not make a dent in Graham's number.
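A toy sketch of Knuth's up-arrow recursion makes the growth concrete (Python; the `arrow` helper is a hypothetical illustration, and only tiny inputs terminate - g(1) = 3↑↑↑↑3 is already hopeless for any computer):

```python
def arrow(a, n, b):
    """Knuth's a {n arrows} b, defined recursively; n = 1 is plain a**b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3 = 27
print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
```

Even arrow(3, 3, 3) = 3^^^3, a power tower of about 7.6 trillion threes, is far beyond any physical computation, which is the point of the comments above.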
Because you can just memorise the size of the universe in Planck lengths or atoms and convert, since it's so important in this case, but never in your life will you think to memorise 52+70=122
I think you guys are missing something, and that's the possibility that the universe might be infinite in both space and time. Tony only talked about our observable universe and the things that could be stored in it. However, the real universe almost certainly continues beyond that. So, if the universe goes on infinitely, or even arbitrarily, far from us, then any number short of infinity could be realised in terms of distance. That means there is a point that is TREE(Graham's number) metres away from us somewhere in the universe. The same thing stands if our universe lasts forever in the future (or even the past), which means there will be a time that is TREE(Graham's number) years in the future (or past) from this moment. There would be a lot more permutations where this number would be realised that involve a lot more exotic and speculative physics with multiverses, dimensions and all that, but I won't get into that.
I think it's kind of cool that we've compressed numbers so efficiently that we can talk about stuff like TREE(g(64)) which is so unimaginably more massive than anything the universe could ever contain... and yet it takes about 11 bytes to write it down.
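The byte count is trivial to check (Python; plain ASCII encoding assumed):

```python
# The expression "TREE(g(64))" really is 11 bytes in ASCII.
name = "TREE(g(64))"
print(len(name.encode("ascii")))  # 11
```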
That's true, but we only have to define them once. Meaning that the more we use a function, the smaller the average space it uses up. If we use it enough, the function's storage space spread out across all of its uses is functionally nil, meaning my point stands.
Gamesaucer Sure, but some functions are incompressible, meaning that for low values of n, they exceed R[R(2^10^122 + 1)], where R(n) is Rayo's function. Their second order logic formulas don't exist and are not expressible with all the storage space in the universe.
Some functions are indeed incompressible. However, that holds true for any compression method. There will be things you can't compress with it, because you're representing a certain amount of data in less than that amount, meaning that some of it will be left behind along the way. That much is inevitable. But what I find highly interesting is just how much we can compress certain, select things. We lose a lot of granularity when we talk about numbers of that size, but to me that doesn't matter much. The thing that's special to me is just how small we can make some of them. And "some" is proportionally basically zero when we're going so huge, but to us, it's still a massive amount of things.
Gamesaucer Well, there is always a way to compress a function in a higher order logic. If it is impossible to compress a function to be expressible with a certain number of symbols in a certain order of logic, then you go to a higher order logic and define the number as the number such that it takes more symbols than were available in the previous order. But in every order of logic, incompressible functions exist. And going beyond first-order logic pretty much renders the concept meaningless and intractable.
That should be 2^(10^122) then, since 10^122 is the number of bits you can store in the universe, and 2^(10^122) is the largest possible number you can store with that many bits.
Daniel Bamberger that's what I was thinking. Even if we round that figure way up and just call it 3^^^3, that still wouldn't be close to g_1, let alone Graham's number for example.
That's similar to what I was thinking. It's not just the amount of Planck volumes that counts...it's the number of ways they can be arranged. A mind boggling number, but even that number is less than g(1)....never mind g(64).
the problem with that number is that a Planck length is not a physical measure of space; writing this out, I suppose the very presence or absence of something physical would have to be used as the way to store information at this scale
@@mustangtel9265 "not just the amount of Planck volumes" - it's the number of Planck areas, actually, not Planck volumes. Not that it matters much, but the amount of information that can be stored in the universe depends on the surface area, not the volume.
@@fuseteam A Planck length is a physical measure. It's expressed in metres (roughly 10^-35 of them). Going below the Planck length is what doesn't make sense "physically", in as much as you would not be able to distinguish two points in space that are distant less than a Planck length.
I can see the struggle Tony has while explaining what de Sitter space is. After 5:50 he wanted to say "de Sitter" several times but stopped each time; it's both funny and somewhat frustrating for him
Reminds me of when I realised that the number of decimal places of Pi needed to measure the diameter of the visible universe in Planck lengths was smaller than the number of places we have already calculated.
Why do you need Pi to measure something? Numbers can only be used to calculate quantities, not to measure them. Measuring is done by comparing a physical value with a unit.
@@Universal_Craftsman You measure a quantity (like the perimeter) which you then use in conjunction with pi to compute the diameter... or vice versa. I think that's what OP is saying
These rabbit holes of numbers just fill me with awe. He is literally thinking about how you would have to tinker with the laws of the universe JUST in order to be able to think about bigger numbers. The fact that I listened to that, that it got into my mind, is beautiful. This is the purest form of curiosity I have encountered - people invented the model that maths is, then tried really hard to make an efficient way of describing it (i.e. explored it), and now they are pushing the limits. And yet again they just explore how to most efficiently push them, just so they can see the next boundary and push it. An endless pit of possibilities that cannot even be imagined, yet are perfectly described. Just because we are curious what lies beyond in a model we invented. My eyes are watering at the thought of the beauty of human curiosity.
I love big number videos, thanks for this! I do want to disagree though with the latter half of your video, on the nominal reality of TREE(Graham's), as your argument seems to ignore combinatorics - a simple 3x3x3 Rubik's cube has 43 quintillion possible combinations, for a mere volumetric cost of twenty-seven small cubes. I feel it would be tough to convince me of the unreality of those combinations, too, as I can use a very simple algorithm to access any one I want in a few seconds. I would love to see a comparison of TREE(g(64)) to the universe's *possibility* space, especially as Everett's interpretation of QM asserts the reality of that space.
Yeah, the number of permutations in puzzles like big Rubik's Cubes (V-Cube 6 or bigger) is already far bigger than the number of elementary particles in the Observable Universe. :)
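The 43-quintillion figure for the 3x3x3 comes from a standard counting argument, easy to reproduce (Python sketch):

```python
from math import factorial

# 8 corners can be permuted and twisted, 12 edges permuted and flipped;
# one corner twist and one edge flip are forced by the others (hence 3^7
# and 2^11), and permutation parity links corners to edges (hence // 2).
corners = factorial(8) * 3 ** 7
edges = factorial(12) * 2 ** 11
states = corners * edges // 2
print(states)  # 43252003274489856000, about 4.3 * 10^19
```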
If I had to guess, I would think that the universe's possibility space, although very large, would still be minuscule compared to even Graham's Number or TREE(3), let alone TREE(G). And the reason is because you are starting with such a small base number to work with (Planck volumes in the universe). To go from G1 to G2, you are starting with a base of G1, which is already way past any "universal" numbers like 10^76 or 10^122. Just my thoughts on the matter.
Valkhiya That's pretty convincing, but it seems to me that that is only addressing the information limit of a universe, not necessarily the entire possibility tree. My understanding is that in MW, each particle in the entire universe generates a new universe for each possible state as it evolves through each possible moment, which seems to me like it's doing some kind of sequence climbing? The size and complexity of that structure would be vastly larger than the mere potential information limit - most of it would be redundant copies - but it would still be real, at least from some perspectives.
So here’s a bound theory question (I think): is ~10^122 the point where numbers flip from ones with physical properties (or a physical number) to numbers that can only be conceived conceptually? (Is this a sort of numerical event horizon?)
A more accurate calculation gives an answer of ~3.73*10^124 bits of data storage for the observable universe. That means the largest actual number it would be possible to store in our universe would be around 10^10^124. Notably this is larger than a googolplex which is "only" 10^10^100.
I think he left out a step in his calculation of the biggest possible number that can exist. 10^122 is simply the SIZE of the biggest number, not the biggest number itself. If each of those Planck units can store one bit of data, then the actual "biggest number" is 2^(10^122), which is quite a bit bigger, but still much smaller than Graham's number.
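The jump from "size" to "value" is just a base change, quick to check (Python; the 10^122-bit capacity is taken from the video's estimate):

```python
from math import log10

bits = 10 ** 122            # the video's estimate of the universe's bit capacity
# A bits-long binary number has about bits * log10(2) decimal digits,
# so 2^(10^122) has ~3 * 10^121 digits -- itself unwritable, of course.
decimal_digits = bits * log10(2)
print(f"{decimal_digits:.3e}")  # 3.010e+121
```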
I think the storage of a black hole has been determined to be proportional to the surface area of the black hole, not (as I expected) the volume. So he is (as I understand it) calculating the area of the universe rather than its volume; possibly that could be clarified in a follow-up video.
I think it's interesting, however, to think about the fact that we can imagine that number. Not in the sense that I think of all the digits of something like Graham's number, but that I have a way to get there using the g(n) function. Think about Mersenne Primes (2^n - 1). The largest one we know is something that takes an entire book to write out, but I can store a version of it that doesn't take up much space in the form of 2^n - 1. I imagine g(n) could be stored the same way, by storing it literally. The full number itself isn't stored, but the meaning is still there because my brain knows what g(n) does with n. So technically, the universe CAN store a version of g(n). Same could go for TREE(n). I know what TREE means, so I can derive the meaning of the number from that.
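The Mersenne example is easy to make concrete: you can get the digit count of 2^p - 1 from the compressed form alone, without ever expanding it (Python; p is the exponent of the largest known Mersenne prime at the time of writing):

```python
from math import floor, log10

p = 82_589_933
# 2^p - 1 has floor(p * log10(2)) + 1 decimal digits -- no need to
# materialise the ~25-million-digit number itself.
digits = floor(p * log10(2)) + 1
print(digits)  # 24862048
```

The same trick is the whole point of the comment: the short formula carries the meaning, and expanding it is optional.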
you can definitely store a compressed version of a number, because that's what we've done by defining the functions. But by arguing whether it "exists" you want that amount of stuff to happen: for 25 to "exist" you want a 5x5 arrangement of /something/ to be possible. And in order to create a mechanism that could physically /DO/ the function growth of Graham's number and the like, you would at a minimum need to be able to physically hold the final amount of /stuff/
I just about followed what Tony was saying about the hypothetical data limit of the universe but this was glossed over as a fun after-thought to show, by comparison, how ludicrously tiny it is compared to TREE(Graham's Number). Any chance you could make a video (whether it's Numberphile, Computerphile or Sixty Symbols or even a mega crossover event for all three!) that takes a little more time to lead us through that estimation (or a slightly more precise estimation) in more detail please?
Either works, SCG(3), SSCG(3), SCG(13)... The growth rate is farther beyond TREE than anything we could imagine. TREE(TREE(TREE(...TREE(3)...))) nested TREE(3) times is nothing; there's really no way of representing SCG(3) in terms of TREE(3) short of linking it to SCG itself.
Here's a number: The number of possible organisations of all fundamental particles in the universe, within a space the volume of the current universe, where each particle can be placed on one of any of the intersections in a three dimensional grid with all lines one Planck length apart, filling the universe, ignoring physical laws (I.e. quarks can be separated from each other, particles can overlap etc) with no two particles being placed on the same intersection. Obviously still endlessly smaller than Graham's number, but something that may be interesting for someone more qualified than me to look into (and make a shorter definition for).
That's basically the Poincaré recurrence time (an upper bound to explore all those possibilities, then return arbitrarily close to your original state). It's some ungodly number
@@pierrecurie I hadn't noticed that. How similar would these two numbers be? They can't be exactly the same, can they? Recurrence time models random motion, while my number simply counts states.
@@alansmithee419 If you look at the proof for the existence of recurrence time, it basically amounts to counting/measuring the states. The result is an upper bound, so "actual" recurrence times are typically much smaller (eg simple harmonic oscillator).
hakk bak - de Sitter space. Tony describes it earlier in the video - it's basically the universe we have without gravity or matter. Slightly less basically, it's a hyperboloid embedded in 5-dimensional Minkowski space.
I'd like to point out that this only refers to our observable universe. The unobservable universe may be infinitely large (we don't know) in which case Tree(G64) suddenly becomes the tiny and minuscule one :)
10^122 is our universe’s cap, but 11!! from the Rayo’s number video is about 6×10^286,078,170. Our world is so small that a number represented by 4 small symbols fits our entire universe 50+ times
@@ionrubyyy 11!! = 39,916,800!, which is 6.16726073584544404020555366840519023143521568039372872... × 10^286078170. When you plug 11!! into Wolfram Alpha, you get 10,395; however, if you plug in ((11)!)! you get what I got. I meant the enormous number. I don’t understand what Wolfram Alpha is doing when there are no parentheses in the expression.
@@ionrubyyy If you plug 11!! into Wolfram Alpha, you get 10,395 - that's because Wolfram Alpha reads !! as the double factorial, 11×9×7×5×3×1, not as a repeated factorial. What the Rayo's number video means is (11!)! = 39,916,800!, which is the number I previously stated; watch the Numberphile video on Rayo's number and they'll confirm it. Try it on the calculator on your phone: on an iPhone, open the calculator, turn it to landscape mode, type 11, then press the x! button on the left and you should see 39,916,800. Press it again and you will get an error because the result is too big. If you type the expression ((11)!)! into Wolfram Alpha, you will get the answer I stated initially. Hope this helps.
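The two readings are easy to compare side by side (Python; the digit count of (11!)! comes from Stirling's approximation, since the number itself is far too big to compute):

```python
from math import factorial, prod, log10, floor, pi, e

# Wolfram Alpha's reading: !! is the double factorial 11*9*7*5*3*1.
print(prod(range(11, 0, -2)))   # 10395

# The Rayo-video reading: (11!)! = 39916800!.
n = factorial(11)
print(n)                        # 39916800
# Stirling: log10(n!) ~ n*log10(n/e) + log10(2*pi*n)/2, giving the length
digits = floor(n * log10(n / e) + log10(2 * pi * n) / 2) + 1
print(digits)                   # 286078171 digits, matching ~6.17*10^286078170
```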
10^122 is a huge number. But have a look at the volume of the universe in Planck units. I've calculated it to around 8.45x10^184 and seen other places give 4.65×10^185. The observable universe is 8.8x10^26 metres across, giving a volume of 4x10^80 m^3. The Planck length is 1.616255x10^-35 m, so a Planck volume is 4.22x10^-105 m^3. 4x10^80 divided by 4.22x10^-105 is 9.48x10^184.
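The arithmetic is easy to reproduce (Python; same rough inputs as the comment):

```python
from math import pi

diameter = 8.8e26                    # observable universe, metres
radius = diameter / 2
volume = (4 / 3) * pi * radius ** 3  # ~3.6e80 m^3 (the comment rounds to 4e80)
planck_length = 1.616255e-35         # metres
cells = volume / planck_length ** 3
print(f"{cells:.2e}")                # 8.45e+184 Planck volumes
```

The gap between 8.45x10^184 and 9.48x10^184 is just the rounding of the volume to 4x10^80.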
The largest number calculated for a physical application is the Poincaré recurrence time, which is something like 10^10^10^10^10, or roughly 10^^5.
If I recall correctly, if you wanted to write a proof that TREE(3) is finite using finite arithmetic, it would itself require an absurdly massive proof consisting of billions and billions of digits and symbols.
@@Michoss9 that's only for trying to prove it with finite algebra or something. He said there's a different approach that we've done already because we do know it's finite
Mike Wagner TREE(n) can be proven to be finite for all n in transfinite arithmetic, but not in finite arithmetic. However, for each individual value n = k, a theorem stating that TREE(k) is finite exists in finite arithmetic, but this proof would be impossible to complete; it would take "too long" in a rigorous sense.
So for data: take all atoms and make it base two - some property of each atom defines 0 or 1 - but also take every Planck second from the beginning of time to the end as a digit. That gives a limit on what data you could possibly store, ever, when read correctly.
The Planck constant needs to change? Seems convenient to me that the molar Planck constant is roughly proportional to the error on several of our current observations. I am also aware of a few theorists working on compactified spacetime, I would consider adding that to the list.
But it comes out of things like the universe observably not being flooded in ultra high energy photons. There's some wiggle room in there but not orders of magnitude.
@@RobertSzasz I don't claim to have all the answers, but I'm pretty sure inflation proposes those in the early universe? Compressing time allows them to exist today too, from a very different perspective. I would agree that QM is consistent with our limit on apparent information, but it's interesting to me that this ties together everything on the list: Scale invariance provides a ruleset compatible to QM in which dualities allow for weak fields capable of encoding additional data, followed by the expansion of the observable universe. None of that shows that the universe is actually infinite, but it looks to me like we're at least on our way. The No-hiding theorem seems to be in conflict with our increase in apparent information for a noncompact finite universe.
There's a presumption in this video that is not accurate: the universe may not be finite (I think the weak consensus among cosmologists at this point is that it is probably infinite, given that we've yet to detect any curvature; a finite curved universe is the best current alternative). The OBSERVABLE universe has a few different definitions depending on what parameters you want to tweak, but for even the largest definition of observable universe, there's no possible way that anything physical, or even information-theoretic tied to the physical, has a scale remotely approaching TREE(g64). But given, for example, an eternal inflation model, there could be TREE(g64) universes within the inflationary spacetime fabric, trivially. What's more interesting, in an infinite universe, would be whether there are TREE(g64) DISTINCT things. That is, are there that many things (any things) that are not repetitions of previous states? That is a really interesting question, and I don't think there's anything like a consensus on that.
I was wondering a similar thing. But I have a feeling that the rate at which the number of new inflating universes grows might be slower than the rate at which Graham's number grows. So we might get to an amount of information in the entire multiverse of the order of, or larger than, g(64), maybe even as large as TREE(3), but still smaller than TREE(g64).
Also, if you really want to talk about physics here, the consensus is that it is not meaningful to talk about the universe beyond the observable universe. For scientific purposes, it does not exist. We can make hypotheses all the time, but a hypothesis is not science.
@@angelmendez-rivera351 I'm not sure how you would measure consensus otherwise. There are few if any universal agreements, but consensus can range from tenuous understanding to near universal acceptance. All consensus means is that there is a general agreement. There might be hundreds of people who disagree out of thousands or just two. As for outside of the observable universe, we're talking about mapping information theory to physical scale, not the soundness of any particular theory of cosmology. For example, the number of branes in an eternal inflation model would probably be infinite. What I think is interesting is that it's easy to exceed the scale of TREE(n) but not to match it. What I mean by that is that you can say the natural numbers have a higher cardinality (or is that ordinality, I always forget) than any finite value like TREE of any natural number, but to find some finite relationship in any system that's that large is nigh impossible outside of the actual definition of the TREE function.
Space-time curves infinitely within the singularity, so, moving from outside the singularity towards it, you will at some point reach a place where the space-time curvature can be measured as TREE(g64)
Isn't it more beautiful that in order for a simple abstraction like number to be comprehensible, we must imagine so much more than we can ever use? In this case, we have to compare two numbers (storage capacity of our universe vs. storage requirements for TREE(Graham)), one of which exists while the other "doesn't". Seems to me like a simple equivocation here, likely over "exist", but possibly over "number".
Well, the same argument of "inadequate capacity" can be made for 3^^^3 or f(5,5). Both are way bigger than anything physical, and much simpler to understand than g(64) or TREE(3). I suspect (actually I firmly believe) that the equivocation is about "exist", not about "number".
I'm so glad you made this video.. It's exactly where my mind went with these large numbers as well. But I ponder this as well: Could the number be applied to physically existent probabilities? Such as the probability of our universe existing in its current state? Which is either exactly 1.0 (in a philosophical sense) or 1 over a denominator of something like a permutation of the number of possible elements with all the places those elements could exist (in a simplified sense - obviously there are a lot of different ways to approach that problem). But are there probabilities of the universe which would have a denominator bigger than tree(3) or tree(graham's number) ?
Going by the analogy of a hard drive that's capped on data, I think this kind of massive number can exist. While of course, as you say, there's no way to 'store' the entire number given the entire universe as storage space, it could be 'streamed' in from some hypothetical exterior source. In the same way a video doesn't have to exist on our hard drive for us to view it and/or for it to exist (much like this YouTube video), if such a thing existed, it could be inspected tiny chunk by tiny chunk, and I'd argue that would exist.
Yeah well, but you'll never have the whole thing like that, and you'll never know the whole thing like that. Since you have to delete parts to make room for new parts. And the deleted parts are gone then. So the whole thing won't exist at any given moment in that limited space, and nothing in that space will ever know the whole thing.
Ippikiryu Even if you could transfer a universe's worth of data per Planck time, you still couldn't do it. The universe will experience heat death before that. Even time itself would probably stop existing.
Great ideas, but the final approximation feels sort of weird, because we can easily write a one with 122 zeroes after it, so it is representable. The 10^122 figure makes more sense if we talk about countable or measurable things in the universe, assuming we can find a clever way to make the measures not continuous, or if so, not dependent on some other values (so it's fixed no matter what units we use?). But what is the biggest number we can represent? Well, what we mean by "represent" should be more precisely defined, or else one could argue TREE(3) is a representation of TREE(3), or that an algorithm that would eventually arrive at the value TREE(3) represents TREE(3). Using the very specific definition of "the number must be written without any operations, in decimal digits", the result of 10^10^122 should be pretty hard to represent, even if a single particle was being used for each digit, which is barely valid to the definition. And if you really try to ease the definition by, say, allowing numbers in any base of digits, or allowing operations, or allowing particles in different states (that's a thing, right?), or the different places a particle can be, etc., then maybe there hasn't been anything we've used that we can't represent
Perplaxus 10^122 is the binary size of the biggest number that can be stored in the universe, not the number itself. *But what is the biggest number we can represent?* There is no such biggest number, since you can make arbitrary notation to represent arbitrarily large finite quantities, and that does not even account for transfinite quantities, of which there is no largest representable member. So it is not a sensible question. Instead, the question that makes sense is the largest number that can be stored in the universe, which is what was addressed in the video.
10^122 is the number of bits that fit into the universe, but you could reasonably ask about the number of permutations of bits, and call it something physical. So 10^122 factorial is the much larger interesting number. Still way smaller than Graham's number or anything else..
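Stirling's approximation pins down how much bigger (10^122)! is, working with logs only (Python sketch):

```python
from math import log10, e

N = 1e122
# Stirling: log10(N!) ~ N * log10(N / e); the correction terms are
# utterly negligible at this scale.
log10_factorial = N * (log10(N) - log10(e))
print(f"{log10_factorial:.3e}")  # 1.216e+124, i.e. (10^122)! ~ 10^(1.2 * 10^124)
```

Still microscopic next to even g(1), as the comment says.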
Depends on if you're writing out a number or actually using it to quantify physical objects. If you're writing out the number, then yes, the information limit is the logarithm of the number, but if you are expressing the number in terms of physical objects, then the information limit represents the number itself.
I wonder if that is impossible. If we consider space-time with a bundle structure in it, sure, there are only so many Planck-length units in the observable universe. But within each cell, is the field bounded in terms of energy, or whatever the essential way of measuring it is? Maybe it is true that in the Milky Way there's an upper bound for how much energy a cell can contain. But one assumption in general relativity is that the average mass increases with the radius you consider, so that doesn't seem to have an upper bound. Still, you need to consider the growth rate of that, which makes it sound unlikely though.
This is somewhat misleading, the 10^122 is approximately the number of qbits on our horizon, but we can still conduct some operations on numbers with alternate representations in space proportional to the representation size rather than the full binary expansion, so, for many purposes, numbers _much_ larger than 2^(10^122) exist.
10^122 is the total amount of information in the universe, not the biggest number. In order to have the biggest possible number that can fit in our universe you need to have a number that is 10^122 bits long (or 2^(10^122)). Which is, while finite, a whole lot bigger. Also, if our universe actually had such a number defined, there would be nothing in our universe left over that could observe it.
So we can have all the Tree(Graham Number)↑↑...(Tree(Graham's Numbers)...↑↑Tree(Graham's Number) amounts of data we want if we can just find enough dark energy? Has anyone tried fracking space yet?
but you will never live long enough to find 'enough' dark matter; even if everyone lived a billion years, it would still be 0.00% of the process of collecting enough dark matter to store g64
@@fakestory1753 Our universe cannot handle g(1) in terms of computing and storing it, and g(2) in terms of writing it down with arrow notation, never mind g(5)!
No, we’d actually need LESS dark energy, because dark energy drives the expansion of our universe, which creates the cosmic horizon. An infinite steady state universe is ideal, but it’s not the one we live in :(
Please follow this video with a history lesson of Archimedes' "The Sand Reckoner" (if you haven't discussed that yet), it would tie it all up so neatly!
I’ve always felt like the largest number that has any real basis in our universe would be the number of permutations of where every subatomic particle could be across every Planck volume of the universe. Which I think would be the factorial of the number you calculated, but I could be wrong. I wonder if it gets close to TREE(Graham)
He assumes that the fourth dimension, time, is finite. If the universe goes forever into the future, TREE(3) units of Planck time will have passed. Another potential counterpoint: if you believe in the Everettian interpretation of quantum mechanics, the quantity of parallel universes could exceed TREE(3) (or any other finite number). I understand I am pushing the definition of existence here, but I found this interesting anyway.
We do not need to know the data itself. We only need the metadata of the metadata of the metadata, on and on, for some very high finite number of steps. Then we can get an idea of it without running into too many problems. This is a combo of the physicist in me and the power-crazed SCP fan in me talking together in favor of this, as I really want it to exist. Imagine light going at TREE(g(c)), c being the current two-way speed of light. Or imagine a singular tree that is TREE(g(10))-dimensional, with a huge coverage. This would be kind of the end of the finite universe as we know it. We need more info in our universe, even more densely packed, so that more excitement can happen.
I only have an A level in physics, so correct me if I'm wrong, but when he says "our universe" he means the observable universe, not anything beyond that, and the size of that is limited by the speed of light, no?
So if I'm understanding that, TREE(Graham's number) is even larger than the number of possible states for the observable universe? Edit: nvm, watched the rest. Lesson learnt.
@@unfetteredparacosmian Wouldn't it just be the factorial of the number of states? I.e. if he said it was 10^144, the possibility space would be (10^144)!
This video makes a poor assumption. It assumes that the universe is finite. That is possibly true, but the standard model of cosmology assumes a universe that is infinite in extent. In that case, Graham's number compared to the infinite size of the universe is insignificant.
Whether the universe is infinite or not is irrelevant, all that matters is the observable universe. Everything else is unreachable and thus meaningless to this thought experiment
What if we used a Conway's Game of Life universe? You could have a machine that takes TREE(g64) gliders moving from top to bottom which then spits out a glider to the left for each one it reads. So in a sense this machine is counting to TREE(g64). Is it enough to say the machine understands the number, by counting to it? We could also have a machine that spits out different gliders that represent the digits of TREE(g64). Is it enough now to say the machine comprehends the number, by being able to write its digits? We could design much more complicated neural networks that take in TREE(g64) gliders and apply very high-level reasoning to it (this is possible because the Game of Life is Turing-complete). I think whatever definition you have of intelligent life understanding a number, the Game of Life universe can meet it. So in that sense I would say TREE(g64) is "real".
I have to disagree with the calculation at the end. If I write down on a piece of paper a number with 200 digits (which is indeed possible because there exist pens and paper in our universe), then I could have done that in 10^200 ways, so I already have stored more information. Dividing the observable universe into a total number of M smallest units, there can still be a "thing" at every spot, so I think a correct (non-optimal) upper bound would be N^M, where N is the total number of "things". Now how many "things" are there?
No, we can't have TREE(Graham's number) of things or even just Graham's number of things in the universe. However, that doesn't mean these numbers don't "exist" in the universe, depending on what you mean by "exists". When you think of the number 10 for instance, do you picture 10 things, or just the Arabic numerals, "1","0"? This is a more efficient way of conveying the meaning of a number than counting it. Alternatively, you could say that 10 is equal to the set, {{{{{{{{{{{}}}}}}}}}}}, but we don't need to just use the successor function to describe any number. Graham's number "exists" in the same way that Pi exists. You can't write out all the digits of pi in decimal, nor can you measure an ideal circle in the first place, because ideal shapes don't exist. However, Pi exists because we can create an algorithm which generates pi. The same idea goes for Graham's Number, or TREE(3), or TREE(Graham's Number). You can define an algorithm that generates Graham's number (by recursion) or the TREE function (by brute force probably). The only difference is that a computer generating TREE(Graham) will eventually halt (after a few eternities), and a computer generating Pi will never halt. This brings me to the point I wanted to get at. The interesting thing about large numbers that we have defined, like Graham's number is that despite being extremely large, they have extremely LOW information entropy. Apparently a Turing machine with only ~37 bits of input can calculate it. But the most amazing thing is that the vast, vast majority of positive integers less than Graham's actually don't exist, i.e. cannot be meaningfully expressed in any way that will fit in the universe. The entropy of the universe, 10^122, is related to the biggest number of things that could possibly exist in the universe. However, the number of numbers that could exist in any universe similar to ours is equivalent to the number of microstates, not the entropy. 
If S = 10^122 (in units of Boltzmann's constant k, so S/k = ln(microstates)), then the universe has roughly 10^(4×10^121) possible microstates, the number of ways that all the particles and energies in the universe could be oriented. In theory, each of these could represent a different number. Since 10^(4×10^121) is far less than Graham's number (it's even less than G1), most integers lower than Graham's number cannot be represented in any way whatsoever.
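A quick way to check the order of magnitude implied by S = k·ln(Ω): this is only a sketch, assuming the entropy figure S/k = 10^122 from the video.

```python
import math

# If the universe's entropy is S = 10^122 in units of Boltzmann's constant k,
# then S/k = ln(microstates), so the microstate count is Omega = e^(10^122).
# We can't evaluate that directly, but converting the exponent to base 10:
log10_microstates = 1e122 / math.log(10)   # ~4.34e121

# So Omega is about 10^(4.3 * 10^121): astronomically huge, yet still
# nothing next to G1, never mind Graham's number or TREE(3).
```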
Sure, there aren't enough bits in the (observable) universe; however, one can write down on paper, in a finite and very compact way, the rules that yield this number.
That's all conceptual in order to represent a hypothesis or "what if", but that is a far cry from a physical medium with which to represent it tangibly.
@@Teck_1015 Two rules and the number 3 are enough to define TREE(3). There could very well be three kinds of seeds in the world, as well as a forest that grows by these rules.
Ξενοφώντας Σούλης Sure, but the fact that the rules exist is not relevant. Computer scientists don't exactly care about whether a quantity can be compressed (it always can be if you go to a high enough order of logic). Calling that "expressible" means you don't understand the definition of "expressible."
@@angelmendez-rivera351 I never spoke about computer scientists, only pure mathematics. And there is a way to describe this number in under 10 minutes (Numberphile has done exactly that).
Ξενοφώντας Σούλης Describing and expressing are not the same thing. But whatever. I'm not going to waste my time explaining such a basic difference to people on YouTube. It's not what degrees are for. Believe what you want. Have a nice day.
So if the universe that we see is 10^122 m, that's 10^119 km, or about 6.214×10^118 miles. You made it bigger than the biggest number you can fit in it. Did I understand that right?
You can also use up-arrow notation with complex numbers, but the limit seems to be two arrows, at least with current math. This took some time and paper, but I calculated what (1+i)^^3 is: cos((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4)pi/4+(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)ln(sqrt(2)))sqrt(2)^((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4))*e^(-(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)pi/4)+sin((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4)pi/4+(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)ln(sqrt(2)))sqrt(2)^((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4))*e^(-(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)pi/4)i Probably the hardest and funniest mathy thing I have done yet
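For anyone who wants to sanity-check that expression numerically: a sketch assuming the usual right-associative tetration convention, (1+i)^^3 = (1+i)^((1+i)^(1+i)), with principal branches throughout (which is what Python's complex `**` uses).

```python
z = 1 + 1j

# Tetration is evaluated top-down, so (1+i)^^3 = (1+i)^((1+i)^(1+i)).
# Python's complex power uses the principal branch of the logarithm.
result = z ** (z ** z)

print(result)  # roughly 0.636 + 0.282i
```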
Wait a minute, aren't there plenty of processes that undergo combinatorial explosion that nonetheless do describe something about the universe? It wouldn't be too much of a stretch to apply something like Ramsey theory to particles and forces. Though I don't know if you could approach TREE(g(64)), you certainly could contrive a question about the universe that would lead to Ackermann-type growth.
Kelly Stratton 10^10^10^10^10 is an upper bound to the Poincaré recurrence time of what in physics is called an "empty, vast universe," which represents a universe bubble several orders of magnitude larger than our own observable universe. That is to say, if we consider the amount of time it would take for the universe to reset itself and traverse every single possible quantum microstate, then 10^10^10^10^10 Planck units of time is an upper bound. This number must obviously be bigger than the number of possible microstates of the universe, which is in itself bigger than the biggest possible number that can be encoded in the universe.
I believe the calculation was for the observable universe. Taking into account the largest estimated positive curvature of space-time, the minimum value is probably closer to 10^130... still a really, really small number. I wonder if someone could estimate the largest energy density the universe could have in order to hold at least Graham's number of information? How would such a number be expressed?
Deeper question here: what's the largest number that can be expressed in symbolic logic in the universe? For example, printing in 10 or 12 point font with plenty of empty page space, the symbolic derivation of g(64) could probably fit on a single printed page. This could be coded in binary and expressed in "on" and "off" Planck units. In this sense Graham's number would take up very little space. Could TREE(3) be coded in a similar way? How would you even properly conceptualize this way of fitting the biggest possible number into the universe? Another way of thinking about it: the concept of TREE(g(64)) "fits" in a single human brain (sort of). With a perfectly efficient brain the size of the entire universe, what's the largest number/concept that could be conceived?
Chris Sekely There is no such largest number, since arbitrary symbolic notation can be used to express arbitrarily large numbers. We can go to arbitrarily high orders of logic to compress expressions as much as we want. More concretely, I can define a function F(n) such that it is equal to the smallest number not representable in x-order logic with n symbols. Then let n be the number of Planck volumes in the universe, and I have expressed a large number. Normally, you would want to use the label F_x to specify the order of logic of the function. With this method, there is such a largest number. However, I can circumvent this entirely by simply creating new symbols. Symbolic logic sets no restriction on what symbols I can use, so long as they are part of the language. I can expand the language and add new symbols arbitrarily, which allows me to not need to label the function, but rather just use a new symbol for a higher-order logic function. Then any limitations would come from the limit of possible symbols I can use. As far as I understand, though, there is no limit: for any symbol that exists, I can make a new symbol from it. Okay, I suppose you may be able to come up with such a limitation on symbols. But I'm already a step ahead of you: I can define a function such that there is a number not expressible in this type of notation, and I can do so in symbolic logic. In fact, I can define the function F(n) as the smallest number not expressible in n symbols in any lower order of logic with new symbols. And so on. You may have to consider transfinite orders of logic, and so on, but you can always go one order higher and form a compression defined explicitly from the limits of the previous orders.
Yes. Although numbers like this are so enormous that it makes no difference. The number of ways you could permutate the Planck volumes in the universe is nothing compared to G1, which is nothing compared to G64, which is nothing compared to TREE(3)
I think you're looking at this wrong: it's not about how many bits of data the universe can store but how many states the universe can be in at any one time. Think of it this way: there are 54 cards in a deck. This means, let's say, we can only store 54 bits of data in a single deck of cards. However, it also means we can have up to 54! possible states in a single deck of cards (assuming that every card is unique). According to this logic, the largest number the universe could store would be (10^110)!
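The deck-of-cards gap can be made concrete with a few lines of Python; a sketch using the comment's own framing (one bit per card vs. counting orderings):

```python
import math

cards = 54
storage_bits = cards               # one bit per card, as the comment assumes
orderings = math.factorial(cards)  # distinct shuffles of the deck

# 54! is about 2.3 * 10^71, which is equivalent to roughly 237 bits of
# information -- far more than the 54 bits you get by reading each card
# as a single on/off bit.
bits_per_shuffle = math.log2(orderings)
```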
Planck's length is the smallest physically meaningful length in the universe, according to quantum physics. It's approximately \(1.616229(38) \times 10^{-35}\) meters, derived from fundamental physical constants such as the speed of light, Planck's constant, and the gravitational constant. The volume of one cubic Planck length is the cube of that, about \(4.22 \times 10^{-105}\) cubic meters. This is an incredibly tiny volume, highlighting the scale at which quantum effects become significant. To calculate how many cubic Planck lengths would fit into the observable universe, we first need the approximate volume of the observable universe. It is estimated to have a radius of about 46.5 billion light-years, which gives a volume of roughly \(3.6 \times 10^{80}\) cubic meters. Now, to find out how many cubic Planck lengths would fit into the observable universe, we divide the volume of the observable universe by the volume of one cubic Planck length. \[ \frac{3.6 \times 10^{80} \, \text{cubic meters}}{4.22 \times 10^{-105} \, \text{cubic meters}} \] The result is approximately \(8.5 \times 10^{184}\). So, about \(8.5 \times 10^{184}\) cubic Planck lengths would fit into the observable universe.
Somewhere in a very, very, very, very, very, very distant future : "We had to hack the multiverse and kick off a few new ones for extra storage space, but we can now show you the entirety of TREE(3) in this brand new museum."
If I can use a notation system to write out a number, though, doesn't that make it real? He gave 10^122 as the largest real number, but he didn't even write it out; he used a shorthand (i.e., an exponent). I can use Bowers' Array Notation to write Generalplex, {10,10,10,(10,10,10,10)}, a number that is incomprehensibly larger than Graham's number (though of course still nowhere near TREE(3)). I'm using a notation system to write this number, one much more complicated and powerful than exponents, but it's a well-defined number that is computable.
There are multiple roads to Rome, yet it is just one city. A leaf can fall to the ground in an unimaginable, yet not infinite, number of ways, yet the leaf and the air it travels through consist of a relatively small number of particles. Sure, perhaps it isn't possible to have TREE(Graham) as a number of particles (not considering an infinite number of multiverses), but would it otherwise be possible to still imagine the number through such a thought experiment? In that case the number, I reckon, exists. But... perhaps I simply don't grasp the sheer size of that thing ;)
The number of possible "ways" of something happening is wholly encompassed by Tony's calculation. That's sort of what they're getting at with the whole "information limit" thing. It's all just in that little 10^122. There seem to be a few competing schools of thought in the other comments that want bigger estimates for stuff like combinatorics or whatever- I have no idea if any of them are based in truth, but even then people seem to agree that the biggest number you can reasonably wring out of the observable universe is something like 10^10^10^10^10. I forget how many 10s there are supposed to be, but that's irrelevant. You could stack 10s all day and you wouldn't get any closer to Graham's number.
if you need to use this number for something, imagine that ever since the Big Bang, in every smallest possible unit of time, alternate universes were created, and they keep being created (not just until the present day, but until black holes evaporate), and from each alternate universe many more alternate universes are created... and then count up the Planck volumes for every combination of atoms or whatever particles they could contain, all together. that would maybe be even bigger than TREE(Graham's number)
The amount of data that the universe contains may "not be that big", but if we consider the possible permutations of all of this data then we can make it much bigger. Obviously it still wouldn't compete with any of these numbers, though.
10^122 is the biggest number of "something countable" you can potentially have in the observable universe. That does not mean bigger numbers don't exist. Numbers are a concept; they don't have to have objects you can count in order to reach them. Questioning the existence of such big numbers is like questioning the existence of irrational or transcendental numbers. If a number required infinitely many steps to reach its exact value, it couldn't exist, since you can't ever make infinitely many steps. You can't even make more than 10^122 steps, right?
I love how he calculated the size of the universe in Planck lengths so casually. 10^122 is clearly an unimaginably large number, yet since it can be expressed so concisely, it's also unimaginably minuscule compared to these other numbers that cannot even be expressed via recursive arrow notation
well, we have to remember that our positional numbering system, combined with the way we trivialized powers, makes it look real small; but if we instead used a prime factorization system to write our numbers, 10¹²² would be as much of a hassle to write as TREE(g(64)), if not more so
You can fit a googol (10^100) Planck volumes in one cubic inch of space. Yet you would need another 100 QUINTILLION universes' worth of subatomic material just to represent the number googol (10^100) with physical objects.
10^122 is big, but easily expressible arithmetically. Graham's number can't even be expressed that way. You can take a billion to the power of a billion to the power of a billion, and so on, and you can keep writing that for the rest of your life and still not make a dent in Graham's number.
of course, they can be expressed concisely. for example, the number featured in this video can be expressed as TREE(g(64)).
@@sehr.geheim Huh? How so?
You see the number 10^122 and think: "wow, that's tiny, absolutely minuscule".
I was thinking the same. :)
Is that a quote or was that just your own thought? In any case, I know I was thinking something similar
and then you remember 10^100 is a googol
@@fuseteam But still a very small number. :)
@@martinh2783 indeed xD
googol _is_ the first of large numbers with a name :p
Love that he approximates in seconds the size of the universe in Planck lengths but has to ask what 70 + 52 is
Because you can just memorise the size of the universe in Planck lengths or atoms and convert, since it's so important in this case. But never in your life will you think to memorise 52+70=122.
I think you guys are missing something, and that's the possibility that the universe might be infinite in both space and time. Tony only talked about our observable universe and the things that could be stored in it. However, the real universe almost certainly continues beyond that. So, if the universe goes on infinitely, or even arbitrarily, far from us, then any number short of infinity could be realised in terms of distance. That means there is a point that is TREE(Graham's number) metres away from us somewhere in the universe. The same thing stands if our universe lasts forever in the future (or even the past), which means there will be a time that is TREE(Graham's number) years in the future (or past) from this moment. There would be a lot more permutations where this number would be realised that involve more exotic and speculative physics with multiverses, dimensions and all that, but I won't get into that.
I think it's kind of cool that we've compressed numbers so efficiently that we can talk about stuff like TREE(g(64)) which is so unimaginably more massive than anything the universe could ever contain... and yet it takes about 11 bytes to write it down.
But what really matters is the definition of those functions which surely takes at least several hundred characters
That's true, but we only have to define them once. Meaning that the more we use a function, the smaller the average space it uses up. If we use it enough, the function's storage space spread out across all of its uses is functionally nil, meaning my point stands.
Gamesaucer Sure, but some functions are incompressible, meaning that for low values of n, they exceed R[R(2^10^122 + 1)], where R(n) is Rayo's function. Their second order logic formulas don't exist and are not expressible with all the storage space in the universe.
Some functions are indeed incompressible. However, that holds true for any compression method. There will be things you can't compress with it, because you're representing a certain amount of data in less than that amount, meaning that some of it will be left behind along the way. That much is inevitable. But what I find highly interesting is just how much we can compress certain, select things. We lose a lot of granularity when we talk about numbers of that size, but to me that doesn't matter much. The thing that's special to me is just how small we can make some of them. And "some" is proportionally basically zero when we're going so huge, but to us, it's still a massive amount of things.
Gamesaucer Well, there is always a way to compress a function in a higher order of logic. If it is impossible to compress a function to be expressible with a certain number of symbols in a certain order of logic, then you go to a higher order of logic and define the number as the one that takes more symbols than are available in the previous order. But in every order of logic, incompressible functions exist. And going beyond first-order logic pretty much renders the concept meaningless and intractable.
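The point above about definitions being short can be made concrete: here's a sketch of Knuth's up-arrow recursion in Python. The whole definition is a few lines, even though evaluating it for anything but tiny inputs is hopeless.

```python
def arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b, by recursion."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # a {n arrows} 0 = 1 by convention
    return arrow(a, n - 1, arrow(a, n, b - 1))

# arrow(3, 4, 3) is already g(1) of Graham's sequence, hopelessly beyond
# physical evaluation; only tiny inputs are actually computable:
print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
```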
Out of all the videos on the entire internet, this is the last one I expected to make a brexit reference.
number of seconds to brexit: TREE (G64)+1
@@dAvrilthebear Number of extensions the UK will have to ask for.
what about that one video where the guy pours milk into a jar of coins
Kinda pissed me off too
I missed it, what was the reference?
Love this, this is the pub chat after filming a numberphile video and sinking a few pints
“Universe says no.”
I read that in the Carol Beer (little Britain) voice
*You've broken maths human, stop that*
The universe should just download more energy
Time for Chuck Norris to get on his bike.
I know a great website it can go to to download more ra--download more surface area, I mean
EA is charging too much
Needs more ram
Nice
That should be 2^(10^122) then, since 10^122 is the number of bits you can store in the universe, and 2^(10^122) is the largest possible number you can store with that many bits.
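A quick size check on that claim, assuming the 10^122-bit figure from the video: we can't evaluate 2^(10^122), but logarithms tell us how many decimal digits it would have.

```python
import math

bits = 1e122                        # claimed information capacity of the universe
# The largest integer writable with that many bits is 2^(10^122) - 1.
# Its length in ordinary decimal notation:
log10_value = bits * math.log10(2)  # ~3.0e121 digits

# So 2^(10^122) is about 10^(3 * 10^121): huge, but still nothing next
# to g_1, let alone Graham's number or TREE(3).
```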
Daniel Bamberger that's what I was thinking. Even if we round that figure way up and just call it 3^^^3, that still wouldn't be close to g_1, let alone Graham's number for example.
That's similar to what I was thinking. It's not just the amount of Planck volumes that counts...it's the number of ways they can be arranged. A mind boggling number, but even that number is less than g(1)....never mind g(64).
the problem with that number is that a Planck length is not physical, it's a measure of space. Writing this out, I suppose the very presence or absence of something physical would be used as a way to store information at this scale
@@mustangtel9265 "not just the amount of Planck volumes" - it's the number of Planck areas, actually, not Planck volumes. Not that it matters much, but the amount of information that can be stored in the universe depends on the surface area, not the volume.
@@fuseteam A Planck length is a physical measure. It's expressed in metres (roughly 10^-35 of them). Going below the Planck length is what doesn't make sense "physically", in as much as you would not be able to distinguish two points in space that are distant less than a Planck length.
"Allocate more information to the simulation
- Why ?
- The Sim I wanted to be a mathematician is back at it with trees and that graham guy"
*Sim Universe has stopped working* 🎆
@@DreckbobBratpfanne we'd finally get a break
for a while
Anyway, are you guys still alive?
@@tacitozetticci9308 yep xD
I can see the struggle Tony has while explaining what De Sitter space is
After 5:50 he wanted to say De Sitter several times but stopped each time, it's both funny and somewhat frustrating for him
De baby sitter is sexy!
What is that?
Reminds me of when I realised that the number of decimal places of Pi needed to measure the diameter of the visible universe in Planck lengths is smaller than the number of digits we have already calculated.
Reckless Roges with 30 you’re only a blood cell off. You don’t need a lot of digits.
Why do you need Pi to measure something? Numbers can only be used to calculate quantities, not to measure them. Measuring is done by comparing a physical value with a unit.
@@Universal_Craftsman 2Pi is used to calculate planck units, since it describes a particular property of free space.
@@VineFynn Yes, you calculate it, you don't measure it. Sorry, but I couldn't resist being picky there.
@@Universal_Craftsman You measure a quantity (like perimeter) which you then use in conjunction with pi to measure the diameter... or vice versa.. I think that's what OP is saying
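A rough version of the estimate in that comment, as a sketch (the diameter and Planck length here are approximate round figures): with n digits of pi, the error on a circumference is about diameter × 10^-n, so Planck-scale accuracy needs about log10(diameter / Planck length) digits.

```python
import math

diameter_m = 8.8e26    # observable universe diameter, metres (approximate)
planck_m = 1.616e-35   # Planck length, metres

# Planck-length accuracy on the universe's circumference needs roughly
# log10(diameter / Planck length) digits of pi:
digits_needed = math.ceil(math.log10(diameter_m / planck_m))  # ~62
```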
"TREE is daddy" - Tony
🤣🤣🤣
Tony likes daddies.
These rabbit holes of numbers just fill me with awe. He is literally thinking about how you would tinker with the laws of the universe JUST in order to be able to think about bigger numbers. The fact that I listened to that, and that it got into my mind, is beautiful.
This is the purest form of curiosity i have encountered - people invented the model maths is, then tried really hard to make an efficient way of describing it(I.e. explored it) and now they are pushing the limits. And yet again they just explore how to most efficiently push them, just so they can see the next boundary and push it. An endless pit of possibilities that can not be even imagined, yet are perfectly described. Just because we are curious what lies beyond in a model we invented. My eyes are watering at the thought of the beauty of human curiosity.
Very true. That's so cool about mathematics. :)
I love big number videos, thanks for this! I do want to disagree, though, with the latter half of your video on the nominal reality of TREE(Graham's), as your argument seems to ignore combinatorics: a simple 3x3x3 Rubik's cube has 43 quintillion possible combinations, for a mere volumetric cost of twenty-seven small cubes. I feel it would be tough to convince me of the unreality of those combinations, too, as I can use a very simple algorithm to access any one I want in a few seconds. I would love to see a comparison of TREE(g(64)) to the universe's *possibility* space, especially as Everett's interpretation of QM asserts the reality of that space.
Yeah, The number of permutations in puzzles like big Rubik's Cubes (V-Cube 6 or bigger) is already far bigger than the number of elementary particles in the Observable Universe. :)
@@Valkhiya,
So the biggest number applicable in the (observable) universe is like 1000^^20, IIRC?
If I had to guess, I would think that the universe's possibility space, although very large, would still be minuscule compared to even Graham's number or TREE(3), let alone TREE(G). And the reason is that you are starting with such a small base number to work with (Planck volumes in the universe). To go from G1 to G2, you start with a base of G1, which is already way past any "universal" numbers like 10^76 or 10^122. Just my thoughts on the matter.
@@gregorycarlson9139, do you mean the Observable Universe or the Whole Universe?
Valkhiya That's pretty convincing, but it seems to me that that is only addressing the information limit of a universe, not necessarily the entire possibility tree. My understanding is that in MW, each particle in the entire universe generates a new universe for each possible state as it evolves through each possible moment, which seems to me like it's doing some kind of sequence climbing? The size and complexity of that structure would be vastly larger than the mere potential information limit (most of it would be redundant copies), but it would still be real, at least from some perspectives.
So here’s a bound theory question (I think): is ~10^122 the point where numbers flip from ones with physical properties (or a physical number) to numbers that can only be conceived conceptually?
(Is this a sort of numerical event horizon?)
A more accurate calculation gives an answer of ~3.73*10^124 bits of data storage for the observable universe.
That means the largest actual number it would be possible to store in our universe would be around 10^10^124. Notably this is larger than a googolplex which is "only" 10^10^100.
Googolplex is 10^(10^100), otherwise it will give a different number
I would LOVE to see @Numberphile do a video on SSCG(3) or SCG(3). These numbers destroy TREE(3) in terms of size!
Please! PLEASE!!!
boring
Thanks so much for making these videos, and for supporting a great cause. I truly appreciate your work, and the work of the people you talk with.
Fascinating, incredible, and I don't know why we love this kind of subject!
I think he left out a step in his calculation of the biggest possible number that can exist. 10^122 is simply the SIZE of the biggest number, not the biggest number itself. If each of those Planck units can store one bit of data, then the actual "biggest number" is 2^(10^122), which is quite a bit bigger, but still much smaller than Graham's number.
I think the storage of a black hole has been determined to be the surface area of the black hole, not (as I expected) the volume. So he is, as I understand it, calculating the area of the universe rather than its volume (possibly that could be clarified in a follow-up video).
He never claimed that was the largest number to store. He said this number expresses a cap on the data in terms of the area of the universe.
I think it's interesting, however, to think about the fact that we can imagine that number. Not in the sense that I think of all the digits of something like Graham's number, but that I have a way to get there using the g(n) function.
Think about Mersenne Primes (2^n - 1). The largest one we know is something that takes an entire book to write out, but I can store a version of it that doesn't take up much space in the form of 2^n - 1. I imagine g(n) could be stored the same way, by storing it literally. The full number itself isn't stored, but the meaning is still there because my brain knows what g(n) does with n. So technically, the universe CAN store a version of g(n). Same could go for TREE(n). I know what TREE means, so I can derive the meaning of the number from that.
It's like compression.
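The Mersenne example above is easy to quantify; a sketch using the largest known Mersenne prime (2^82589933 - 1, found in 2018), whose size we can get from logarithms without ever expanding it:

```python
import math

# The exponent of the largest known Mersenne prime:
p = 82589933

# Number of decimal digits of 2^p - 1 (subtracting 1 doesn't change it):
digit_count = math.floor(p * math.log10(2)) + 1

# digit_count is 24,862,048 -- about 25 MB of decimal digits, "compressed"
# into the short expression 2^82589933 - 1.
```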
you can definitely store a compressed version of a number. cause thats what we've done by defining the functions. but by arguing if it "exists" you want that amount of stuff to happen.
like for 25 to "exist" you want a 5x5 arrangement of /something/ to be possible.
but creating a mechanism that could physically /DO/ the function growth of Graham and the like would require, at a minimum, being able to physically hold the final amount of /stuff/
Kolmogorov Complexity is the theory of randomness/compression. Delving into 101 concepts of it will discuss these things and formalize them.
Josh Duewall Sure, but that is an abuse of language. These compressions are not what computer scientists actually call "storing a number."
more often than not you gotta unzip a file before u can use it
This man has made a gallery for his children in his office.
I’ve thought about this video for a very long time so I’m really glad you did it
I just about followed what Tony was saying about the hypothetical data limit of the universe but this was glossed over as a fun after-thought to show, by comparison, how ludicrously tiny it is compared to TREE(Graham's Number). Any chance you could make a video (whether it's Numberphile, Computerphile or Sixty Symbols or even a mega crossover event for all three!) that takes a little more time to lead us through that estimation (or a slightly more precise estimation) in more detail please?
SCG(3) laughs at TREE(TREE(TREE(3))) in the showers.
I think you mean SCG(13)? Also i only heard it being bigger than TREE(3)
Either works, SCG(3), SSCG(3), SCG(13)... The growth rate is farther beyond TREE than anything we could imagine. TREE(TREE(TREE(...TREE(3)...))) iterated TREE(3) times is nothing; there's really no way of representing SCG(3) in terms of TREE(3) short of linking it to SCG itself.
KalOrtPor oh ok
ok? who cares?
Here's a number:
The number of possible organisations of all fundamental particles in the universe, within a space the volume of the current universe, where each particle can be placed on one of any of the intersections in a three dimensional grid with all lines one Planck length apart, filling the universe, ignoring physical laws (I.e. quarks can be separated from each other, particles can overlap etc) with no two particles being placed on the same intersection.
Obviously still endlessly smaller than Graham's number, but something that may be interesting for someone more qualified than me to look into (and make a shorter definition for).
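A rough log-scale version of the count proposed in that comment, as a sketch with round figures (~10^80 particles, ~10^185 Planck-grid intersections): since slots vastly outnumber particles, the number of injective placements is well approximated by (10^185)^(10^80).

```python
# Particles placed on distinct intersections of a Planck-spaced grid,
# ignoring physics. With ~10^185 slots and ~10^80 particles, the count of
# arrangements is approximately (10^185)^(10^80), so in base-10 logs:
log10_slots = 185
particles = 1e80
log10_arrangements = particles * log10_slots   # ~1.85e82

# About 10^(1.85 * 10^82): enormous, yet (as the comment says) still
# endlessly smaller than even g(1), let alone Graham's number.
```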
That's basically Poincare recurrence time (upper bound to explore all those possibilities, then return arbitrarily close to your original state).
It's some ungodly number
@@pierrecurie I hadn't noticed that. How similar would these two numbers be? They can't be exactly the same, can they? Recurrence time models random motion, while my number simply measures states.
@@alansmithee419 If you look at the proof for the existence of recurrence time, it basically amounts to counting/measuring the states. The result is an upper bound, so "actual" recurrence times are typically much smaller (eg simple harmonic oscillator).
Converting it to binary information bits, it should be 2^(10^122)... which is still ridiculously small??
5:50 the Entropy of the De Sitter... what was professor going to say before he dumbed it down?
hakk bak - de Sitter space. Tony describes it earlier in the video - it's basically the universe we have without gravity or matter. Slightly less basically, it's a 4-sphere in Minkowski space.
@@Tevildo Which video? Link?
@@slendeaway7730 This video, starting at 4:17.
@@Tevildo Ok I think I understand now. It's the universe but things won't collapse on themselves or quantumly fluctuate out of existence or whatever.
I'd like to point out that this only refers to our observable universe. The unobservable universe may be infinitely large (we don't know) in which case Tree(G64) suddenly becomes the tiny and minuscule one :)
Exactly what I was thinking. It could also be that the total universe is TREE(3) Planck units across.
10^122 is our universe’s cap, but (11!)! from the Rayo’s number video is about 6×10^(286,078,170). Our world is so small that a number represented by 4 small symbols exceeds what fits in our entire universe many times over
11!! (eleven double-factorial) is 10395.
@@ionrubyyy 11!! = 39,916,800! which is 6.16726073584544404020555366840519023143521568039372872... × 10^286078170
When you plug 11!! into Wolfram Alpha, you get 10,395; however, if you plug in ((11)!)! you get what I got. I implied the enormous number. I don’t understand what Wolfram Alpha is doing when there are no parentheses in the expression.
@@ionrubyyy If you plug in 11!! into wolfram alpha, you get 10,395. However, that is incorrect. 11!! = 39,916,800! which is that number I previously stated. Watch the Numberphile video on RAYO's number and they'll confirm it. Try it on your calculator on your phone. If you have an iPhone, open the calculator, turn it to landscape mode, type the number 11, then find the button on the left side that says x! Press that button. You should see 39,916,800. Press it again, and you will get an error because it's too big. I do not know what Wolfram Alpha is computing when you type in 11!!. However, if you type into Wolfram Alpha the expression ((11)!)!, you will get the answer I stated initially. Hope this helps.
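The two readings in this thread can be checked in a few lines of Python; a sketch using `lgamma` to size the iterated factorial without ever computing it:

```python
import math

# Standard convention: n!! is the double factorial (skip every other term),
# which is what Wolfram Alpha computes.
double_fact = 11 * 9 * 7 * 5 * 3 * 1        # 11!! = 10395

# The Rayo-video reading is the iterated factorial (11!)! = 39916800!.
# Far too big to evaluate directly, but lgamma(n+1) = ln(n!) gives its size:
inner = math.factorial(11)                  # 39,916,800
log10_iterated = math.lgamma(inner + 1) / math.log(10)

# log10_iterated is about 286,078,170.8, i.e. (11!)! ~ 6.2 * 10^286,078,170,
# matching the figure quoted in the thread.
```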
Mindblowing!
Is this the reason why WinRar never expire?
10^122 is a huge number. But have a look at the volume of the universe in Planck units: I've calculated it to around 8.45x10^184, and seen other places give 4.65×10^185. The observable universe is 8.8x10^26 meters across, giving a volume of about 3.6x10^80 m^3 (often rounded up to 4x10^80). The Planck length is 1.616255x10^-35 m, so a Planck volume is 4.22x10^-105 m^3. The rounded 4x10^80 divided by 4.22x10^-105 m^3 is 9.48x10^184; without the rounding it's 8.45x10^184.
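That calculation reproduces in a few lines; a sketch with the same constants (all values fit comfortably below the ~1.8e308 float limit):

```python
import math

diameter = 8.8e26                                  # observable universe, metres
radius = diameter / 2
volume_universe = (4 / 3) * math.pi * radius ** 3  # ~3.6e80 m^3

planck_length = 1.616255e-35
planck_volume = planck_length ** 3                 # ~4.22e-105 m^3

count = volume_universe / planck_volume            # ~8.45e184 Planck volumes
```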
2:26 keyword *'our'*
Unlisted video hype.
@MichaelKingsfordGray Come again?
@MichaelKingsfordGray Ohhh ok
Bye
what's that?
MichealKingsfordGray Oh I see. You’re trying to start a feud.
The largest number calculated for a physical application is the Poincaré recurrence time, which is like...
10^10^10^10^10 or something. Or 10^^5, something like that.
Would he mind doing a proof of why TREE(3) has to be finite? Or is it easily generalizable, even to TREE(n)?
If I recall correctly, a proof that TREE(3) is finite using finite arithmetic would itself be absurdly massive, consisting of billions and billions of digits and symbols.
@@Michoss9 that's only for trying to prove it with finite algebra or something. He said there's a different approach that we've done already because we do know it's finite
Michoss9 TREE(3), not tree(3). tree(3) is a different but related quantity.
Mike Wagner TREE(n) can be proven to be finite for all n in transfinite arithmetic, but not with finite arithmetic. However, for each individual value n = k, a theorem stating that TREE(k) is finite exists in finite arithmetic; the proof is just impossible to complete, in that it would take "too long" in a rigorous sense.
@@angelmendez-rivera351 yeah, that's what I was trying to get at. I would like to see him go through this transfinite arithmetic proof.
String theorists: we think there's 10^{500} possible versions of string theory! That's clearly way too many!
TREE(3): You are little baby
True Hahaha. Mathematicians use by far the biggest numbers. :)
An amazing extra video of the main one.😊 I wonder if the Continuum Hypothesis will one day be done by Numberphile.😊
eww weeb
Barto Game Club,
I’m only a nerd with a ginormous interest in science & mathematics.😂
@@erik-ic3tp what is the difference?
Barto Game Club, None, actually.😂
So for data: take all atoms and make it base two, so some property of each atom defines 0 or 1, but also take every Planck second from the beginning to the end of time as a digit.
That gives a limit on what data you could possibly ever store, when read correctly.
This is DEEP.
I think having USBs with TREE(Graham's number) on them is more likely than £350M for the NHS.
Great video :-)
One is a financial impossibility, the other is a physical one. Choose your poison.
The funny thing is that Planck wanted to let his helping constant h (h for helping) run to zero...
The Planck constant needs to change? Seems convenient to me that the molar Planck constant is roughly proportional to the error on several of our current observations. I am also aware of a few theorists working on compactified spacetime, I would consider adding that to the list.
But it comes out of things like the universe observably not being flooded in ultra high energy photons. There's some wiggle room in there but not orders of magnitude.
@@RobertSzasz I don't claim to have all the answers, but I'm pretty sure inflation proposes those in the early universe? Compressing time allows them to exist today too, from a very different perspective. I would agree that QM is consistent with our limit on apparent information, but it's interesting to me that this ties together everything on the list: Scale invariance provides a ruleset compatible to QM in which dualities allow for weak fields capable of encoding additional data, followed by the expansion of the observable universe. None of that shows that the universe is actually infinite, but it looks to me like we're at least on our way. The No-hiding theorem seems to be in conflict with our increase in apparent information for a noncompact finite universe.
There's a presumption in this video that is not accurate: the universe may not be finite (I think the weak consensus among cosmologists at this point is that it is probably infinite, given that we've yet to detect any curvature, which is the best current alternative). The OBSERVABLE universe has a few different definitions depending on what parameters you want to tweak, but even for the largest definition of observable universe, there's no possible way that anything physical, or even information-theoretic tied to the physical, has a scale remotely approaching TREE(g64). But given, for example, an eternal inflation model, there could trivially be TREE(g64) universes within the inflationary spacetime fabric. What's more interesting, in an infinite universe, would be whether there are TREE(g64) DISTINCT things. That is, are there that many things (any things) that are not repetitions of previous states? That is a really interesting question, and I don't think there's anything like a consensus on it.
I was wondering a similar thing. But I have a feeling that the rate at which the number of new inflating universes grows might be slower than the rate graham's number grows. So we might get to an amount of information in the entire multiverse of the order or larger than g(64), maybe even as large as TREE(3), but still smaller than TREE(g64).
A weak consensus is not a consensus at all.
Also, if you really want to talk about physics here, the consensus is that it is not meaningful to talk about the universe beyond the observable universe. For scientific purposes, it does not exist. We can make hypotheses all the time, but a hypothesis is not science.
@@angelmendez-rivera351 I'm not sure how you would measure consensus otherwise. There are few if any universal agreements, but consensus can range from tenuous understanding to near universal acceptance. All consensus means is that there is a general agreement. There might be hundreds of people who disagree out of thousands or just two. As for outside of the observable universe, we're talking about mapping information theory to physical scale, not the soundness of any particular theory of cosmology. For example, the number of branes in an eternal inflation model would probably be infinite. What I think is interesting is that it's easy to exceed the scale of TREE(n) but not to match it. What I mean by that is that you can say the natural numbers have a higher cardinality (or is that ordinality, I always forget) than any finite value like TREE of any natural number, but to find some finite relationship in any system that's that large is nigh impossible outside of the actual definition of the TREE function.
@@danielpress6152 eternal inflation generally presumes an infinite universe as the substrate for the inflating bubble universes (branes).
Space-time curvature grows without bound approaching the singularity, so, moving from outside the singularity towards it, you will at some point reach a point where the space-time curvature can be measured as TREE(g64).
Isn't it more beautiful that in order for a simple abstraction like number to be comprehensible, we must imagine so much more than we can ever use?
In this case, we have to compare two numbers (storage capacity of our universe vs. storage requirements for TREE(Graham)), one of which exists while the other "doesn't". Seems to me like a simple equivocation here, likely over "exist", but possibly over "number".
Well, the same argument of "inadequate capacity" can be made for 3^^^3 or f(5,5). Both are way bigger than anything physical, and much simpler to understand than g(64) or TREE(3). I suspect (actually I firmly believe) that the equivocation is about "exist", not about "number".
@@dlevi67 In mathematics, "exists" = "can be defined". (For the most part, though there are such a thing as undefinable real numbers...)
With a formalistic approach to math you just imagine the number as the way it is defined. Not hard at all
I'm so glad you made this video.. It's exactly where my mind went with these large numbers as well. But I ponder this as well:
Could the number be applied to physically existent probabilities? Such as the probability of our universe existing in its current state? Which is either exactly 1.0 (in a philosophical sense) or 1 over a denominator of something like a permutation of the number of possible elements with all the places those elements could exist (in a simplified sense - obviously there are a lot of different ways to approach that problem).
But are there probabilities of the universe which would have a denominator bigger than tree(3) or tree(graham's number) ?
Going by the analogy of a hard drive that's capped on data, I think this kind of massive number can exist. While of course, as you say, there's no way to 'store' the entire number given the entire universe as storage space, it could be 'streamed' in from some hypothetical exterior source. In the same way a video doesn't have to exist on our hard drive for us to view it and/or to exist (much like this YouTube video), if such a thing existed, it could be inspected tiny chunk by tiny chunk, and I'd argue that would exist.
Yeah well, but you'll never have the whole thing like that, and you'll never know the whole thing like that. Since you have to delete parts to make room for new parts. And the deleted parts are gone then. So the whole thing won't exist at any given moment in that limited space, and nothing in that space will ever know the whole thing.
Ippikiryu Even if you could transfer a universe's worth of data in one Planck time, you still couldn't do it.
The universe will experience heat death before that.
Even time itself would probably stop existing.
Great ideas, but the final approximation feels sort of weird, because we can easily write a one with 122 zeroes after it, so it is representable.
The 10^122 figure makes more sense if we talk about countable or measurable things in the universe, assuming we can find a clever way to make the measures not continuous, or if so, not dependent on some other values (so it's fixed no matter what units we use?).
But what is the biggest number we can represent? Well, what we mean by "represent" should be defined more precisely, or else one could argue that "TREE(3)" is a representation of TREE(3), or that an algorithm that would eventually arrive at the value TREE(3) represents TREE(3).
Using the very specific definition "the number must be written without any operations, in decimal digits", the result of 10^10^122 should be pretty hard to represent, even if a single particle were used for each digit, which barely complies with the definition.
And if you really ease the definition by, say, allowing numbers in any base, or allowing operations, or allowing particles in different states (that's a thing, right?), or the different places a particle can be, etc., then maybe there hasn't been anything we used that we can't represent
Perplaxus 10^122 is the binary size of the biggest number that can be stored in the universe, not the number itself.
*But what is the biggest number we can represent?*
There is no such biggest number, since you can make arbitrary notation to represent arbitrarily large finite quantities, and that does not even account for transfinite quantities, of which there is no largest representable member. So it is not a sensible question. Instead, the question that makes sense is the largest number that can be stored in the universe, which is what was addressed in the video.
10^122 is the number of bits that fit into the universe, but you could reasonably ask about the number of permutations of those bits, and call it something physical. So (10^122)! is the much larger interesting number. Still way smaller than Graham's number or anything else here.
So if the number of bits of data is 10^122, isn't the largest possible representable number 2^10^122?
Yes more or less
I would call that the amount of possible states. Representing a number does not need bits. For example "TREE(3)" represents a number.
It depends on your language L.
@@SmileyMPV Not in this sense, that's the whole point of this video.
Depends on if you're writing out a number or actually using it to quantify physical objects. If you're writing out the number, then yes, the information limit is the logarithm of the number, but if you are expressing the number in terms of physical objects, then the information limit represents the number itself.
I wonder if that is impossible. If we consider space-time with a bundle structure in it, sure, there are only so many Planck-length units in the observable universe. But within each cell, is the field bounded in terms of energy, or whatever the essential way of measuring it is? Maybe it is true that in the Milky Way there's an upper bound on how much energy a cell can contain. But one assumption in general relativity is that the average mass increases with the radius you consider, so that doesn't seem to have an upper bound. Still, you'd need to consider the growth rate of that, which makes it sound unlikely.
This is somewhat misleading: the 10^122 is approximately the number of qubits on our horizon, but we can still conduct some operations on numbers with alternate representations in space proportional to the representation size rather than the full binary expansion, so, for many purposes, numbers _much_ larger than 2^(10^122) exist.
10^122 is the total amount of information in the universe, not the biggest number. In order to have the biggest possible number that can fit in our universe you need to have a number that is 10^122 bits long (or 2^(10^122)). Which is, while finite, a whole lot bigger. Also, if our universe actually had such a number defined, there would be nothing in our universe left over that could observe it.
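The distinction the last few comments are drawing (the number of bits versus the value those bits can encode) is easy to sanity-check; a sketch, taking the thread's 10^122-bit figure at face value:

```python
import math

# n bits can represent the integers 0 .. 2**n - 1
n = 8
assert (1 << n) - 1 == 255

# Decimal length of 2**(10**122): log10(2**k) = k * log10(2)
bits_in_universe = 1e122            # figure quoted in the thread
digits = bits_in_universe * math.log10(2)
print(f"2^(10^122) has ~{digits:.2e} decimal digits")   # ~3.01e121
```

So a 10^122-bit register can hold a value whose decimal expansion is itself about 3×10^121 digits long, vastly more than the register's bit count suggests at first glance.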
So we can have all the TREE(Graham's Number)↑↑...↑↑TREE(Graham's Number) amounts of data we want if we can just find enough dark energy? Has anyone tried fracking space yet?
but you will never live long enough to find 'enough' dark matter
even if everyone lived a billion years, it would still be 0.00% of the process of collecting enough dark matter to store g64
even if you got g63 people to work for g63 years, you'd still be nowhere close to g64
and our universe can't even handle g5 already
@@fakestory1753 Our universe cannot handle g(1) in terms of computing and storing it, and g(2) in terms of writing it down with arrow notation, never mind g(5)!
No, we’d actually need LESS dark energy, because dark energy drives the expansion of our universe, which creates the cosmic horizon. An infinite steady state universe is ideal, but it’s not the one we live in :(
@@dlevi67 Our universe cannot handle 3^^^3 already. Let alone G1 (4 arrows).
TREE(G64!)
would be in-freakin-crazy-sane
Please follow this video with a history lesson of Archimedes' "The Sand Reckoner" (if you haven't discussed that yet), it would tie it all up so neatly!
I’ve always felt like the largest number with any real basis in our universe would be the number of permutations that every subatomic particle could be in, over every Planck volume of the universe. I think that would be the factorial of the number you calculated, but I could be wrong. I wonder if it gets close to TREE(Graham)
I would guarantee it wouldn’t even touch tree(graham)
I was thinking about that for a while too, but I had to come to the conclusion that it's not even a single bit close.
TREE() has just made my week.
He assumes that the fourth dimension, time, is finite. If the universe goes on forever into the future, TREE(3) units of Planck time will have passed.
Another potential counterpoint, if you believe in the Everettian interpretation of quantum mechanics, the quantity of parallel universes could exceed TREE(3), (or any other cardinal number).
I understand I am pushing the definition of existence here, but I found this interesting anyways.
Not really: en.wikipedia.org/wiki/Poincar%C3%A9_recurrence_theorem
Well the universe won't run forever
We do not need to know the data itself. We only need the metadata of the metadata of the metadata, on and on, for a very large but finite number of steps. Then we can get an idea of it without running into too many problems. This is a combo of the physicist in me and the power-crazed SCP fan in me talking together in favour of this, as I really want it to exist. Imagine light going at TREE(g(c)), c being the current two-way speed of light. Or imagine a single tree that is TREE(g(10))-dimensional, with huge coverage. This would be kind of the end of the finite universe as we know it. We need more info in our universe, even more densely packed, so that more excitement can happen.
I only have an A-level in physics, so correct me if I'm wrong, but when he says "our universe" he means the observable universe, not anything beyond that, and the size of that is limited by the speed of light, no?
So if I'm understanding that, TREE(Graham's number) is even larger than the number of possible states for the observable universe? Edit: nvm, watched the rest. Lesson learnt.
Just Graham's number is stupidly large, way bigger than the number of possible states for the observable universe.
The universe's possibility space is on the order of 10^10^343
@@unfetteredparacosmian Poincare recurrence is way bigger than that.
@@unfetteredparacosmian Wouldn't it just be the factorial of the number of states? I.e. if he said it was 10^144, the possibility space would be (10^144)!
What about thinking not only about the storage capacity, but the possible permutations of that storage capacity? What would that number be?
This video makes a poor assumption: it assumes that the universe is finite. That is possibly true, but the standard model of cosmology assumes a universe that is infinite in extent. In that case, Graham's number is insignificant compared to the infinite size of the universe.
Whether the universe is infinite or not is irrelevant, all that matters is the observable universe. Everything else is unreachable and thus meaningless to this thought experiment
Was this video filmed in 2012... or do you not change your calendar...?
brexit wasn't a thing in 2012, so I presume not
The world ended in 2012 remember?
@@deciMae Possible evidence of time travel?
@@adlsfreund, 😆 yeah.
The number of ways of arranging 10^122 unique items is (10^122)! still tiny compared to even g(1).
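That claim is checkable without ever computing the factorial: Stirling's approximation, via the log-gamma function, gives the order of magnitude of (10^122)!; a sketch:

```python
import math

# log10((10**122)!) via the log-gamma function (Stirling under the hood):
# lgamma(n + 1) = ln(n!), and dividing by ln(10) converts to log base 10
n = 1e122
log10_factorial = math.lgamma(n + 1) / math.log(10)
print(f"(10^122)! is about 10^({log10_factorial:.3e})")   # ~10^(1.2e124)
```

So (10^122)! is about 10^(1.2×10^124): a huge number, but its exponent still fits comfortably in scientific notation, which is nowhere near true of g(1) = 3↑↑↑↑3.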
Brilliant video, truly amazing. (:
What if we used a Conway's Game of Life universe? You could have a machine that takes TREE(g64) gliders moving from top to bottom which then spits out a glider to the left for each one it reads. So in a sense this machine is counting to TREE(g64). Is it enough to say the machine understands the number, by counting to it? We could also have a machine that spits out different gliders that represent the digits of TREE(g64). Is it enough now to say the machine comprehends the number, by being able to write its digits? We could design much more complicated neural networks that take in TREE(g64) gliders and apply very high-level reasoning to it (this is possible because the Game of Life is Turing-complete). I think whatever definition you have of intelligent life understanding a number, the Game of Life universe can meet it. So in that sense I would say TREE(g64) is "real".
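The comment's premise is easy to verify in miniature: a minimal Game of Life step function (a sketch; the cell coordinates are the standard glider) confirms that the glider really does carry itself across the grid, which is the primitive any such counting machine would be built from:

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of (row, col) live cells."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is live next generation with exactly 3 neighbours,
    # or 2 neighbours if it is already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 generations the glider reappears shifted one cell down-right
assert cells == {(r + 1, c + 1) for r, c in glider}
```

Because the Game of Life is Turing-complete, anything computable in principle, including reading off TREE(g64) gliders, can be built from patterns like this; the catch is exactly the one the video raises, namely that no such grid fits in our universe.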
Why 1/H0^2 and not 1/H0^3, if we are talking about volume? The same goes for the Planck length.
Shouldn't the correct formula be (H0 * Lp)^(-3) ~ 10^183?
I have to disagree with the calculation at the end. If I write down on a piece of paper a number with 200 digits (which is certainly possible, because pens and paper exist in our universe), then I could have done that in 10^200 ways, so I have already stored more information. Dividing the observable universe into a total of M smallest units, there can still be a "thing" at every spot, so I think a correct (non-optimal) upper bound would be N^M, where N is the total number of "things". Now how many "things" are there?
No, we can't have TREE(Graham's number) of things or even just Graham's number of things in the universe. However, that doesn't mean these numbers don't "exist" in the universe, depending on what you mean by "exists". When you think of the number 10 for instance, do you picture 10 things, or just the Arabic numerals, "1","0"? This is a more efficient way of conveying the meaning of a number than counting it. Alternatively, you could say that 10 is equal to the set, {{{{{{{{{{{}}}}}}}}}}}, but we don't need to just use the successor function to describe any number.
Graham's number "exists" in the same way that Pi exists. You can't write out all the digits of pi in decimal, nor can you measure an ideal circle in the first place, because ideal shapes don't exist. However, Pi exists because we can create an algorithm which generates pi. The same idea goes for Graham's Number, or TREE(3), or TREE(Graham's Number). You can define an algorithm that generates Graham's number (by recursion) or the TREE function (by brute force probably). The only difference is that a computer generating TREE(Graham) will eventually halt (after a few eternities), and a computer generating Pi will never halt.
This brings me to the point I wanted to get at. The interesting thing about large numbers that we have defined, like Graham's number is that despite being extremely large, they have extremely LOW information entropy. Apparently a Turing machine with only ~37 bits of input can calculate it.
But the most amazing thing is that the vast, vast majority of positive integers less than Graham's actually don't exist, i.e. cannot be meaningfully expressed in any way that will fit in the universe. The entropy of the universe, 10^122, is related to the biggest number of things that could possibly exist in the universe. However, the number of numbers that could exist in any universe similar to ours is equivalent to the number of microstates, not the entropy.
If S = 10^122 = k*ln( microstates), where k is Boltzmann's constant, then the universe has roughly 10^10^144 possible microstates, the number of ways that all the particles and energies in the universe could be oriented. In theory, each of these could represent a different number. Since 10^10^144 is less than Graham's number (it's even less than G1), then most integers lower than it cannot be represented in any way whatsoever.
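The "extremely low information entropy" point above can be made concrete: a few lines of Python compute the final digits of Graham's number, a number that cannot itself be stored anywhere, using the standard trick of reducing power-tower exponents modulo Euler's totient (adding φ(m) back, since the true exponents are astronomically large). This is a sketch; the tower heights and moduli are kept small enough to check that the digits have stabilised:

```python
def phi(n):
    """Euler's totient by trial division (fine for small moduli)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(height, m):
    """(3^3^...^3, a tower of `height` threes) mod m."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    t = phi(m)
    # The true exponent is enormous, so reduce it mod phi(m)
    # and add phi(m) back to keep the reduction valid
    return pow(3, tower_mod(height - 1, t) + t, m)

# The trailing digits freeze as the tower grows; Graham's number is a
# (vastly taller) tower of threes, so it shares these frozen digits.
assert tower_mod(10, 1000) == tower_mod(11, 1000)
print(tower_mod(10, 1000))   # the last three digits of Graham's number
```

A short totient function plus a short recursion pins down digits of a number with no physical representation, which is exactly what "low Kolmogorov complexity" means here.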
Sure, there aren't enough bits in the (observable) universe, however one can write down on paper, in a finite and very compact way, the rules that yield this number
That's all conceptual in order to represent a hypothesis or "what if", but that is a far cry from a physical medium with which to represent it tangibly.
@@Teck_1015 Two rules and the number 3 are enough to define TREE(3). There could very well be three kinds of seeds in the world, as well as a forest that grows by these rules.
Ξενοφώντας Σούλης Sure, but that the rules exist is not relevant. Computer scientists don't exactly care about whether the compressed version of a quantity can be compressed (it always can be if you go a high enough order of logic.) Calling that "expressible" means you don't understand the definition of "expressible."
@@angelmendez-rivera351 I never spoke about computer scientists, only pure mathematics. And there is a way to describe this number in under 10 minutes (Numberphile has done exactly that).
Ξενοφώντας Σούλης Describing and expressing are not the same thing. But whatever. I'm not going to waste my time explaining such a basic difference to people on YouTube. It's not what degrees are for. Believe what you want. Have a nice day.
I still remember the time when I first learned about a number called a trillion and it blew my mind, and here we are now.
So if the universe that we see were 10^122 m across, that would be 10^119 km, or about 6.214×10^118 miles.
You made the universe bigger than the biggest number you can fit in it.
Did I understand that right?
You can also use up-arrow notation with complex numbers, but the limit seems to be two arrows, at least with current math.
This took some time and paper, but I calculated what (1+i)^^3 is:
cos((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4)pi/4+(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)ln(sqrt(2)))sqrt(2)^((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4))*e^(-(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)pi/4)+sin((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4)pi/4+(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)ln(sqrt(2)))sqrt(2)^((cos(ln(sqrt(2)))-sin(ln(sqrt(2))))e^(-pi/4))*e^(-(cos(ln(sqrt(2)))+sin(ln(sqrt(2))))e^(-pi/4)pi/4)i
Probably the hardest and funniest mathy thing I have done yet
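The closed form above can be cross-checked numerically: tetration is right-associative, so (1+i)^^3 = (1+i)^((1+i)^(1+i)), and Python's complex power uses the principal branch, the same convention the hand calculation assumes. A sketch:

```python
z = 1 + 1j
result = z ** (z ** z)   # (1+i)^^3, evaluated on the principal branch
print(result)            # approximately 0.636 + 0.282j
```

The magnitude stays below 1, which is why the two-arrow tower of 1+i doesn't blow up the way real-number tetration does.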
Wait a minute, aren't there plenty of processes that undergo combinatorial explosion that nonetheless do describe something about the universe? It wouldn't be too much of a stretch to apply something like Ramsey theory to particles and forces. Though I don't know if you could approach TREE(g(64)), you could certainly contrive a question about the universe that leads to Ackermann-type growth.
Kelly Stratton and probably related to probability
None of these combinatorial quantities exceed 10^10^10^10^10. So, no.
@@angelmendez-rivera351 Could you expand on this?
Kelly Stratton 10^10^10^10^10 is an upper bound on the Poincaré recurrence time of what in physics is called an "empty, vast universe," which represents a universe bubble several orders of magnitude larger than our own observable universe. That is to say, if we consider the amount of time it would take for the universe to reset itself and traverse every single possible quantum microstate, then 10^10^10^10^10 Planck units of time is an upper bound. This number must obviously be bigger than the number of possible microstates of the universe, which is in itself bigger than the biggest possible number that can be encoded in the universe.
So my question back to you would be (assuming you are correct, for argument's sake): does TREE(Graham's number) = infinity? (Or any transfinite number?)
Tony: explains impossibly large numbers
Also Tony: "What's 70 + 52?"
I believe the calculation was for the observable universe. Taking into account the largest estimated positive curvature of space-time, the minimum value is probably closer to 10^130 ... still a really, really small number. I wonder if someone could estimate what the largest energy density in the universe would have to be in order to hold at least Graham's number of information? How would such a number be expressed?
Deeper question here: what's the largest number that can be expressed in symbolic logic in the universe? For example, printing in 10 or 12 point font with plenty of empty page space, the symbolic derivation of g(64) could probably fit on a single printed page. This could be coded in binary and expressed in "on" and "off" Planck units. In this sense Graham's number would take up very little space. Could TREE(3) be coded in a similar way? How would you even properly conceptualize this way of fitting the biggest possible number into the universe? Another way of thinking about this: the concept of TREE(g(64)) "fits" in a single human brain (sort of). With a perfectly efficient brain the size of the entire universe, what's the largest number/concept that could be conceived?
Chris Sekely There is no such largest number, since arbitrary symbolic notation can be used to express arbitrarily large numbers. We can go to arbitrarily high orders of logic to compress expressions as much as we want.
More concretely, I can define a function F(n) such that this is equal to the smallest number not representable in x-order logic with n symbols. Then let n be the number of Planck volumes in the universe and I have expressed a large number. Normally, you would want to use the label F_x to specify the order of logic of the function. With this method, there is such a largest number. However, I can circumvent this entirely by simply creating new symbols. Symbolic logic sets no restriction on what symbols I can use, so long as they are part of the language. I can arbitrarily expand the language and add new symbols arbitrarily, which allows me to not need to label the function, but rather just use a new symbol for a higher order logic function. Then any limitations would come from the limit of possible symbols I can use. As far as I understand, though, there is no limit. For any symbol that exists, I can make a new symbol from it.
Okay, I suppose you may be able to come up with such a limitation on symbols. But I'm already a step ahead of you: I can define a function such that there is a number not expressible in this type of notation, and I can do so in symbolic logic. In fact, I can define the function F(n) as the smallest number not expressible in n symbols in any lower order of logic with new symbols. And so on. You may have to consider transfinite orders of logic, and so on, but you can always go one order higher and form a compression defined explicitly from the limits of the previous orders.
If you count different permutations of things you can get much higher numbers (like the number possible chess configurations for instance)
Yes. Although numbers like this are so enormous that it makes no difference. The number of ways you could permutate the Planck volumes in the universe is nothing compared to G1, which is nothing compared to G64, which is nothing compared to TREE(3)
I think you're looking at this wrong: it's not about how many bits of data the universe can store, but how many states the universe can be in at any one time. Think of it this way: there are 54 cards in a deck (with jokers). That means, say, we can only store 54 bits of data in a single deck of cards. However, it also means we can have up to 54! possible states in a single deck (assuming every card is unique). According to this logic, the largest number the universe could store would be (10^110)!
So the observable universe is said to have 10^80 particles. Thus there are x = (10^80)! ways to arrange those particles. This is on the 10↑↑3 scale.
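Whether (10^80)! really lands near the 10↑↑3 rung of the power-tower ladder can be checked with log-gamma; a sketch: its logarithm comes out near 8×10^81, so the factorial sits above 10↑↑3 = 10^(10^10) but far below 10↑↑4.

```python
import math

# log10((10**80)!) via the log-gamma function
n = 1e80
log10_fact = math.lgamma(n + 1) / math.log(10)
print(f"(10^80)! is about 10^({log10_fact:.2e})")   # ~10^(8.0e81)

# 10^^3 = 10^(10^10), so compare exponents directly
assert log10_fact > 1e10   # bigger than 10^^3
```

In other words, permuting every particle gets you one rung past 10↑↑3 on the tower scale, and no amount of such permuting climbs toward Graham-type numbers.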
Planck's length is the smallest possible meaningful length in the universe, according to quantum physics. It's approximately \(1.616229(38) \times 10^{-35}\) meters. It's derived from fundamental physical constants such as the speed of light, Planck's constant, and the gravitational constant.
The volume of one cubic Planck length would be about \(4.22 \times 10^{-105}\) cubic meters. This is an incredibly tiny volume, highlighting the scale at which quantum effects become significant.
To calculate how many cubic Planck lengths would fit into the observable universe, we first need the approximate volume of the observable universe.
The observable universe is estimated to have a radius of about 46.5 billion light-years, which translates to a volume of roughly \(3.6 \times 10^{80}\) cubic meters.
Now, to find out how many cubic Planck lengths fit into the observable universe, we divide the volume of the observable universe by the volume of one cubic Planck length:
\[ \frac{3.6 \times 10^{80} \, \text{cubic meters}}{4.22 \times 10^{-105} \, \text{cubic meters}} \]
The result is approximately \(8.5 \times 10^{184}\).
So, about \(8.5 \times 10^{184}\) cubic Planck lengths would fit into the observable universe.
Somewhere in a very, very, very, very, very, very distant future: "We had to hack the multiverse and kick off a few new ones for extra storage space, but we can now show you the entirety of TREE(3) in this brand new museum."
rotwang2000 Actually, the universe will just reset itself before this happens, making this never happen.
@@angelmendez-rivera351, why? He thinks of a scenario with a Godlike-being beyond the Universe (on a Multiversal level).
10^122 permuted is a little bigger.
Meaning every bit interacting with every other bit: (10^122)!, I suppose? Or maybe more?
If I can use a notation system to write out a number, though, doesn't that make it real? He wrote 10^122 as the largest real number, but he didn't even write it out. He used a shorthand (i.e, exponent). I can use Bowers' Array Notation to write Generalplex, {10,10,10,(10,10,10,10)}, a number that is incomprehensibly larger than Graham's Number (though of course still nowhere near TREE(3)). I'm using a notation system to write this number, which is much more complicated and powerful than exponents, but it's a well defined number that is computable.
There are multiple roads to Rome, yet it is just one city. A leaf can fall to the ground in an unimaginable, yet not infinite, number of ways, yet the leaf and the air it travels through consist of a relatively small number of particles. Sure, perhaps it isn't possible to have TREE(Graham) as a number of particles (not counting an infinite number of multiverses), but would it otherwise be possible to imagine the number through such a thought experiment? In that case the number, I reckon, exists. But... perhaps I simply don't grasp the sheer size of that thing ;)
The number of possible "ways" of something happening is wholly encompassed by Tony's calculation. That's sort of what they're getting at with the whole "information limit" thing. It's all just in that little 10^122. There seem to be a few competing schools of thought in the other comments that want bigger estimates for stuff like combinatorics or whatever. I have no idea if any of them are based in fact, but even then people seem to agree that the biggest number you can reasonably wring out of the observable universe is something like 10^10^10^10^10. I forget how many 10s there are supposed to be, but that's irrelevant. You could stack 10s all day and you wouldn't get any closer to Graham's number.
@@andrew_cunningham got it, I think ;). thanks for explaining!
Can TREE(g64(3)) be written in Conway chained-arrow notation?
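Partly answering this: chained-arrow expressions are defined by three rewrite rules, which can be transcribed directly (a sketch with my own function name; only tiny chains are actually evaluable):

```python
def chain(c):
    """Evaluate a Conway chained-arrow expression given as a list of ints."""
    if len(c) == 1:
        return c[0]                 # a lone p is just p
    if len(c) == 2:
        return c[0] ** c[1]         # p -> q  =  p^q
    *x, p, q = c
    if q == 1:
        return chain(x + [p])       # X -> p -> 1  =  X -> p
    if p == 1:
        return chain(x)             # X -> 1 -> q  =  X
    # X -> p -> q  =  X -> (X -> (p-1) -> q) -> (q-1)
    return chain(x + [chain(x + [p - 1, q]), q - 1])

print(chain([2, 2, 2]))  # 4
print(chain([3, 3, 2]))  # 3^^3 = 7625597484987
```

As for the question itself: as far as I know, chained arrows (even with the chain length iterated) grow far more slowly than the TREE function, so no chained-arrow expression meaningfully captures TREE(3), let alone TREE of Graham's number.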
If you need a use for this number, imagine that ever since the Big Bang, alternate universes have been created at every smallest possible unit of time, and keep being created not just until the present day but until the black holes evaporate, with each alternate universe spawning many more alternate universes... then count up the Planck volumes for every combination of atoms (or whatever particles) they could all contain, put together. That might even be bigger than TREE(Graham's number).
The amount of data that the universe contains may "not be that big", but if we consider the possible permutations of all of this data then we can make it much bigger. Obviously it still wouldn't compete with any of these numbers, though.
10^122 is the biggest number of "something countable" you can potentially have in the observable universe. That doesn't mean bigger numbers don't exist. Numbers are a concept; they don't need objects you can count in order to reach them.
Questioning the existence of such big numbers is like questioning the existence of irrational or transcendental numbers. By that logic, if you need infinitely many steps to reach a number's exact value, the number can't exist, since you can never take infinitely many steps. You can't even take more than 10^122 steps, right?
This naive extension of Rayo's number is quite big:
Rayo(SCG(13))
Rayo's number itself is Rayo(100).
Thanks for saving 'like' # 3 * 127 for me ... it may sound a bit ... odd ... but this makes me happy :)
I love how their calendar is still 2012.
Wasn't there a video describing how we know that TREE(3) is finite? I swear there was one but can't find it.