A Finite Game of Infinite Rounds

  • Published: Oct 2, 2024

Comments • 190

  • @cocoscacao6102
    @cocoscacao6102 2 years ago +1043

    You take the blue cotton ball, the story ends... You wake up in your bed and believe... whatever you want to believe. You take the red cotton ball, you stay in wonderland, and burn your processor.

    • @geckoram6286
      @geckoram6286 1 year ago +25

      this is gold

    • @shigekax
      @shigekax 1 year ago +24

      I feel like this is exactly what happens with the square variant: you either draw the blue ball first, or spend your life with red

    • @Jivvi
      @Jivvi 1 year ago +5

      @@geckoram6286 No, it's blue.

    • @inrevenant
      @inrevenant 1 year ago +15

      As the processor burns, the whole world turns red, darker and darker, until it's pitch black. You can't see anything for a while. But you begin to feel... Like you're jostling somewhat, and it's getting stronger...
      You realize your eyes are closed, so you open them, and the world begins to come into focus as your eyes are getting used to the overcast day in a snowy mountainside, where a horse is pulling the cart you're sitting in, along with a handful of others, and one of them turns to you to say:
      > Hey, you. You're finally awake. You were trying to cross the border, right? Walked right into that imperial ambush, same as us, and that thief over there ...

    • @StarLink149
      @StarLink149 1 year ago +1

      Thank god I woke up in my bed, I thought I'd wake up in Skyrim.

  • @curtmack
    @curtmack 1 year ago +461

    In Magic: the Gathering, it's fairly common that combining a number of card effects allows you to repeat a process an unlimited number of times. Because of this, the Magic Comprehensive Rules - a 200+ page document, rigorously defining every piece of the game and how they interact - define a process for shortcutting loops. Essentially, the player demonstrates the loop once, then says something like "I repeat that a billion times." Their opponent, then, has the opportunity to accept the shortcut as-is, or interrupt the loop at some point with their own effect.
    The loop shortcut rules also provide for naming a desired end state instead. For example, if one loop gives you one surplus mana, and another loop lets you spend five mana to create three 1/1 tokens, you could just say, "I repeat these two loops until I have a billion tokens," without having to do any further calculation. However, to be able to do this, it must be clear that your desired end state will definitely happen in some finite number of attempts. In particular, no part of the loop can be random. This restriction isn't without controversy, and there are known card combinations that would be playable in decks if you were allowed to shortcut random processes with a tail probability of 0.
    Seeing examples like this really helps cement why the Magic rules team made that decision. Infinities are weird; play with them at your own peril.

    • @alejrandom6592
      @alejrandom6592 1 year ago +5

      Amazing shit here

    • @EebstertheGreat
      @EebstertheGreat 1 year ago +32

      I feel like if you can demonstrate to a judge's satisfaction that you will be able to achieve a given game state in finitely many moves with a probability of exactly 100%, you should be able to take the shortcut, even if it's possible that you could fail infinitely many times (with 0 probability). Expected value doesn't really come into play, since we are talking about the probability of reaching _one specific state,_ not the expected state. For instance, if some loop allows me to get infinitely many coin flips with no side effects, it seems reasonable that I could say something like "I flip until I get ten heads" or something, since that will almost-surely happen eventually, and forcing people to go through the motions for hundreds or even thousands of flips is just a waste of time.
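
The "flip until ten heads" example above is easy to check numerically: the number of flips needed is negative-binomially distributed with mean 10/(1/2) = 20, so the loop terminates almost surely and, in practice, quickly. A minimal sketch in Python (illustrative only, not taken from the actual rules):

```python
import random

def flips_until_heads(target=10, seed=None):
    """Flip a fair coin until `target` heads have appeared; return the flip count."""
    rng = random.Random(seed)
    heads = flips = 0
    while heads < target:
        flips += 1
        if rng.random() < 0.5:
            heads += 1
    return flips

# Average over many trials; the expected value is target / (1/2) = 20 flips.
trials = [flips_until_heads(10, seed=i) for i in range(10_000)]
print(sum(trials) / len(trials))  # close to 20
```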

    • @danielroder830
      @danielroder830 1 year ago +14

      Magic: the Gathering can actually be a computer. I don't know the specifics, but you can use it to add numbers represented as tokens, basically.

    • @EebstertheGreat
      @EebstertheGreat 1 year ago +39

      ​@@danielroder830 The paper by Churchill, Baderman, and Herrick creates a Turing machine by using green x/x tokens to represent the state of the tape at position -(x-2) and white x/x tokens to represent the state at position +(x-2). The creature type of each token corresponds to a symbol on the tape. So a 4/4 green Aetherborn token means the symbol A is in cell -2, and a 3/3 white Faerie token means the symbol F is in cell +1. (There are no 1/1 tokens in this construction, and at most one 2/2 token.) Their construction uses 18 total symbols, represented by types in alphabetical order from Aetherborn to Sliver.
      Copies of Rotlung Reanimator are created, and Artificial Evolution and Glamerdye are used to change the type and color words in their text. Each copy encodes a rule for the Turing machine. For instance, if you change the text of Rotlung Reanimator to read "Whenever Rotlung Reanimator or another _Aetherborn_ dies, create a 2/2 _green_ _Basilisk_ creature token," it encodes the rule, "when reading symbol 1 in state A, write symbol 2 and move right." Changing the creature types changes the symbols. And changing the color from green to white changes the direction from right to left.
      Reading a symbol is accomplished by forcing Alice to cast Infest, which gives all creatures -2/-2 until end of turn. This kills a 2/2 token, triggering the effect of one copy of Rotlung Reanimator. A different copy will trigger depending on the creature type of that token, i.e. the symbol being read. This creates a new token of the appropriate type (the write step). Then movement left and right is accomplished with Cleansing Beam and Soul Snuffers. Each player controls a Vigor, so the Cleansing Beam actually gives the tokens of one color +2/+2 instead of dealing 2 damage. So this combo effectively buffs all green tokens by +1/+1 while debuffing all white ones by -1/-1, or vice-versa, thus moving left or right.
      As a final note, there are actually _two_ Rotlung Reanimators for each symbol, one for state A and one for state B. All of them have Cloaks of Invisibility, and only 18 are phased in at once. Switching between states is accomplished by forcing them all to phase in/out.
      Each turn, Alice must cast a card due to Bob's Wild Evocation in play, and every decision turns out to be forced. A Wheel of Sun and Moon allows her to recycle her remaining cards. Bob meanwhile has no cards in hand or deck and a Recycle in play, stopping him from getting decked. Both players have a Blazing Archon, so no one can attack. And Alice's only land remains tapped due to Choke. At any given time, Alice has only one token (and her other creatures are not valid targets), so her plays are always forced to be on that one. Whenever Bob creates a new token, Alice's Illusory Gains takes control of it and relinquishes control of the previous token, ensuring Alice always has exactly one.
      The complete details are quite complicated and can be found here: arxiv.org/pdf/1904.09828.pdf. I didn't mention the Xathrid Necromancers for instance, which are similar to Rotlung Reanimators and are distinguished from them in order to control flow from state A to B.
      The idea is that in the setup, many voluntary actions are taken to create any desired tape and instructions for a (2,18) Turing Machine. Once the setup is complete, a sequence of only mandatory actions executes the machine. If the machine eventually halts, then the game will end, with Alice winning. If it doesn't halt, the game will never end. By the rules of magic, such a forced infinite loop results in a draw. Therefore, if the machine halts, Alice wins, and if it doesn't halt, she draws. This means in order to decide who has won a given game, you need to solve the halting problem for the corresponding Turing machine. This shows that in general, the outcome of a game of Magic is undecidable--no general algorithm exists that can always (accurately) decide, even when only mandatory actions remain. It also shows that any computation can be embedded in a game of MTG by turning it into a Turing machine that performs that computation.

    • @zacharybarbanell1064
      @zacharybarbanell1064 1 year ago +9

      The particular point of controversy is not just that you can't shortcut it - since, in the relevant state, it's not really possible to shortcut it - but that attempting to play it out without shortcutting it is considered stalling under the MTR (not the CR).

  • @Paul71H
    @Paul71H 1 year ago +139

    I was pretty sure from very early in the video that you were headed toward the infinite sum 1 + 1/2 + 1/3 + 1/4 + 1/5 + . . . (or something very similar), which diverges to infinity, but which goes to infinity very, very slowly. But even though I knew pretty much where you were going, the journey was very entertaining and informative. (And the journey went on a bit further too, which was cool.)

    • @YerpyMoose
      @YerpyMoose 1 year ago +1

      harmonic series divergence 😊
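
To put numbers on how slowly that series diverges: the partial sum H_n is approximately ln(n) plus the Euler-Mascheroni constant 0.577..., so it takes hundreds of millions of terms just to pass 20. A quick sketch:

```python
import math

def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# H_n grows like ln(n) + 0.5772... (Euler-Mascheroni constant), i.e. very slowly.
for n in (10, 1_000, 100_000):
    print(n, harmonic(n), math.log(n) + 0.5772156649)
```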

  • @donaldhobson8873
    @donaldhobson8873 2 years ago +130

    1 round of this game, 9 rounds of thumb twiddling, 1 round of this game, 89 rounds of thumb twiddling, 1 round of this game, 899 rounds of thumb twiddling. So the rounds where you do anything are rounds 1, 10, 100, ..., and on those you draw the red or blue ball as usual. Infinite expected value for the number of digits.

    • @ozargaman6148
      @ozargaman6148 1 year ago

      Thumb twiddling?

    • @donaldhobson8873
      @donaldhobson8873 1 year ago +18

      @@ozargaman6148 In computer science, it's a no-op: a step that doesn't do anything except waste time.

    • @ozargaman6148
      @ozargaman6148 1 year ago +1

      @@donaldhobson8873 oh so basically the red ball?

    • @alejrandom6592
      @alejrandom6592 1 year ago +13

      @@ozargaman6148 No, it's more like you take out a ball and do nothing, no matter if it's blue or red

    • @ozargaman6148
      @ozargaman6148 1 year ago

      @@alejrandom6592 but it's the equivalent of a red ball in this game, it acts the same

  • @SmallerRidley
    @SmallerRidley 1 year ago +69

    Makes me appreciate Minimize and Double Team not allowing for infinite evasion boosts.

  • @joseville
    @joseville 2 years ago +86

    Everything about this video is great, but the sound design on this video is amazing! I don't think I've seen this done with math/STEM videos before. The sound really emphasized the weirdness/bizarreness. It gave me a similar vibe to the game Braid.

    • @ttmfndng201
      @ttmfndng201 1 year ago

      If I have to be honest, I found it quite jarring. Like, it's just not pleasant to listen to

    • @a52productions
      @a52productions 1 year ago +1

      i really love the few math videos ive found with music tailored to the video (this is a great example ruclips.net/video/ZYj4NkeGPdM/видео.html), and i think the whole tone scales used in this video fit the theme so well!!

  • @erint9283
    @erint9283 1 year ago +54

    This is my favourite of the SoME2 videos I've seen so far. When it showed the expected value of rounds increasing in the python program I immediately thought, oh, it's got to be 1/2+1/3+1/4...! Even though I didn't know exactly how the maths would work out. Same enjoyment when you're reading an Agatha Christie and you get an idea who the killer is even though you can't say precisely how they did it. Great job!!

  • @mathscharts
    @mathscharts 2 years ago +37

    An excellent explanation, and a deep problem. It is somewhat similar to the St. Petersburg paradox of Bernoulli, where you play a game that also has an infinite expected payout. Yet, due to the ever-decreasing probability of a large payout, no one would be willing to pay a large sum of money to play the game. In economics, this was traditionally used as an argument against simply computing expected values and in favour of expected utility theory, which itself has its own problems.

  • @SoaringMoon
    @SoaringMoon 1 year ago +24

    In python, printing the output is very very slow. Simply removing the print function for each round makes the run time nearly instant.

    • @ArthurKhazbs
      @ArthurKhazbs 3 months ago +1

      Or buffering the output and then flushing it after the computation completes

    • @doctorwhouse3881
      @doctorwhouse3881 3 months ago

      I/O in general is slow, even if not interacting with the disk directly, in basically every language.
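
For anyone who wants to try it, here is a minimal version of the simulation with no per-round printing, assuming the rules implied by the probabilities in these comments (round n draws from n red balls plus 1 blue; red adds another red, blue ends the game):

```python
import random

def play_game(rng):
    """One game: in round n the urn holds n red balls and 1 blue ball."""
    reds = 1
    rounds = 0
    while True:
        rounds += 1
        if rng.randrange(reds + 1) == 0:  # the single blue ball was drawn
            return rounds
        reds += 1                          # red drawn: add another red ball

rng = random.Random(0)
results = [play_game(rng) for _ in range(100_000)]
# P(game ends in round n) = 1 / (n * (n + 1)), so half of all games end in round 1.
print(sum(r == 1 for r in results) / len(results))
```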

  • @chalkchalkson5639
    @chalkchalkson5639 1 year ago +16

    Challenge question solution:
    The number of (base 10) digits in the number n is 1 + floor(log10(n)). The probability stays the same, obviously. First let's show that the sum converges:
    1. the terms in the sum are positive, so it's enough to show that a sum of greater terms converges
    2. [1 + floor(log10(n))] / [n(n+1)] ≤ (1 + log10(n)) / n², and the sum of (1 + log10(n)) / n² converges (e.g. by the integral test)

    • @yashpermalla3494
      @yashpermalla3494 1 year ago

      it's funny how just "brute forcing" the number of digits in the series is easier to solve than using the log trick for digits
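
Brute-forcing the partial sums backs this up: with digits(n) = 1 + ⌊log10(n)⌋, the sum Σ digits(n)/(n(n+1)) settles just below 10/9 ≈ 1.111, the exact value derived elsewhere in these comments. A quick check:

```python
def expected_digits(terms):
    """Partial sum of digits(n) / (n * (n + 1)) for n = 1 .. terms."""
    total = 0.0
    for n in range(1, terms + 1):
        digits = len(str(n))  # equals 1 + floor(log10(n)), without float issues
        total += digits / (n * (n + 1))
    return total

e = expected_digits(10**6)
print(e)  # ≈ 1.1111, i.e. 10/9
```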

  • @Iamg1004
    @Iamg1004 2 years ago +4

    Sup dude! Nevermind the video, which was excellent, the audio (Quality, voiceover, and sounds/music) was one of the best I've personally seen. Great all around! Congrats.

  • @timolson4809
    @timolson4809 7 months ago +6

    Here’s a way to do the challenge:
    The probability of the ball being found in n tries or less is n/(n+1). You can subtract out probabilities from before. Digits wise you’re interested in the cases where n = 9, 99, 999, etc. So you get:
    9/10 + 2*(99/100-9/10) + 3*(999/1000-99/100)…etc.
    This simplifies to 9/10 + 2*9/100 + 3*9/1000 …etc. or the sum from n=1 to infinity of 9n/10^n. The 9 can be factored out, so you now have 9 Σ n/10^n. Rewrite this sum as starting with n=0: 9*(1/10)*Σ[n*(1/10)^(n-1)]. Inside the summation is a version of d[x^n]/dx with x=1/10. So you can rearrange the expression (limits from 0 to infinity):
    (9/10)*d[Σ(x^n)]/dx evaluated at x=1/10. The inside of the summation is a geometric series that converges (since x = 1/10 < 1) to 1/(1-x), so its derivative is 1/(1-x)². Evaluating: (9/10) * 1/(1 - 1/10)² = (9/10)*(100/81) = 10/9.

  • @shminge7779
    @shminge7779 1 year ago +13

    This reminds me of my favorite problem like this, that I still don't have a satisfactory answer for: take a die labeled 0 to 5 and roll it. Now take that many 0-5 dice, and roll them. Now take the sum of all those dice, and roll that many, etc. Is it safe to assume that at some point you'll roll all 0s and finish the game, or can you go on forever?

    • @rubenquirozmarnef8119
      @rubenquirozmarnef8119 1 year ago +5

      I'm pretty sure it can go on forever. As soon as a single high digit shows, the probability of finishing the game scales as 1/6^(m*n), where m is some increasing value and n is the trial number. This goes to zero incredibly fast, so the probability of never ending is quite large. It's very similar to the case where you add a cube number. If you replace your condition with a weaker one, namely always add a single die if you didn't throw all 0's, the chance of finishing scales as 1/6^n, which goes to zero almost immediately

    • @aimericduchemin7061
      @aimericduchemin7061 1 year ago

      This seems to be pretty similar to the Galton-Watson process (en.m.wikipedia.org/wiki/Galton%E2%80%93Watson_process), so since each die has on average 2.5 "children", the probability of extinction as described in the article is less than one, making it possible to go on forever.
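
The extinction probability of such a branching process is the smallest fixed point of the offspring generating function; here each die produces 0-5 "children" with probability 1/6 each, so f(s) = (1 + s + s² + s³ + s⁴ + s⁵)/6, and since the mean 2.5 exceeds 1, the fixed point lies below 1. A sketch of the standard fixed-point iteration (function names are my own):

```python
def extinction_probability(offspring_probs, iterations=200):
    """Iterate q <- f(q) starting from 0, where f is the probability generating
    function f(s) = sum_k p_k * s^k; the limit is the extinction probability."""
    q = 0.0
    for _ in range(iterations):
        q = sum(p * q**k for k, p in enumerate(offspring_probs))
    return q

# A 0-5 die: offspring counts 0..5, each with probability 1/6 (mean 2.5).
q = extinction_probability([1 / 6] * 6)
print(q)  # ≈ 0.21: the game ends with probability about 21%, otherwise it runs forever
```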

  • @wyboo2019
    @wyboo2019 1 year ago +8

    so if you programmed in the example where you add red balls up to a square, there's a 50/50 chance the program would never terminate? It's weird to think about

  • @piguyalamode164
    @piguyalamode164 2 years ago +4

    The expected number of digits is about log(n), so using the fact that p(n rounds) = 1/(n(n+1)) we see that
    E = sum log(n)/(n(n+1))
    This does converge: log(n)/(n(n+1)) < log(n)/n², and the sum of log(n)/n² converges by the integral test, so E is finite.

    • @1zl541
      @1zl541 1 year ago +5

      The log approx is actually harder than working with the exact number of digits here.
      We always have 1 digit, if n>=10 we add a second, if n>=100 we add a third, etc., so we have
      1 + Pr(n>=10) + Pr(n>=100) + ... = 1 + 1/10 + 1/100 + ... = 1/(1-1/10) = 10/9

    • @television9233
      @television9233 1 year ago

      It does converge! I think you might be accidentally taking the integral at the wrong bounds, it should be the integral from n=1 to infinity. (you might be accidentally using 0 instead of 1?)

    • @chalkchalkson5639
      @chalkchalkson5639 1 year ago

      You forgot to round the log for the number of digits and to add 1. 13 has 2 digits = 1+floor(lg(13)), not lg(13)=1.114...
      but as @@1zl541 pointed out, splitting the sum into a double sum over number of digits and number with that number of digits allows you to find it exactly

  • @marius9139
    @marius9139 1 year ago +5

    should the calculation for the average number of rounds at 4:33 not be 1*p(1)+2*p(2)+3*p(3)?

  • @landsgevaer
    @landsgevaer 1 year ago +6

    The funny and unique thing with Cauchy is that the mean of N samples has *the same distribution* as a single sample!
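
This is easy to see numerically: drawing standard Cauchy samples via tan(π(U − ½)) (the inverse-CDF method) and tracking the running mean, the mean keeps jumping as rare huge samples land, while the sample median stays near 0. A sketch:

```python
import math
import random

rng = random.Random(42)

def cauchy_sample():
    """Standard Cauchy via inverse CDF: tan(pi * (U - 1/2)) for U ~ Uniform(0,1)."""
    return math.tan(math.pi * (rng.random() - 0.5))

samples = [cauchy_sample() for _ in range(100_000)]

running_means = []
total = 0.0
for i, x in enumerate(samples, 1):
    total += x
    if i % 20_000 == 0:
        running_means.append(total / i)

med = sorted(samples)[len(samples) // 2]
print(running_means)  # wanders instead of converging
print(med)            # the median, by contrast, sits near 0
```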

  • @hamc9477
    @hamc9477 1 year ago +4

    Could you do a video on the "diminishing die" problem? I.e. if you roll k on an n-sided die, that face disappears and is no longer available to roll. What is the expected value of an n-sided diminishing die? What about 2 such dice? What about rolling r such n-sided dice? What about rolling n,m diminishing dice? I tried to work it out once but the maths got too hard for me, and my shoddy Python code I made to simulate it didn't really work

    • @kaleclack8618
      @kaleclack8618 3 months ago

      Is it not just (n+1)/2, since each side will be rolled exactly once? Or do you mean the expected number of rolls to remove all n faces?

  • @AzureLazuline
    @AzureLazuline 1 year ago +8

    I thought back to this video because I've been playing a game where dice have different effects on the sides, like attacking enemies, blocking damage, and so on. One of the rare effects is "gain N rerolls" and I was trying to calculate the probability of stuff when that's involved, which is tricky! If one of the sides is +1 reroll then it's basically like there's just the 5 sides left, but when N gets higher it's hard to figure out the exact stats... but then again, that usually means you're clearly winning anyway. 😄 When the total N across all sides is exactly 6 then I don't think there's an expected value but it's not infinite (unless it's exactly 1 per side), but I'm not quite sure - it's close to equivalent to asking how long it takes for a random walk to go negative. Total N >6 is easy though, it means there's a good chance of going infinite!

  • @robvdm
    @robvdm 1 year ago +1

    If you want something that describes “typical” behavior like the mean, but works for very pathological random variables, like this one, you can always use the median.

  • @ToXaNe1
    @ToXaNe1 2 years ago +5

    imagine one day someone runs this and hits some silicon lottery level game that just continues for hours and hours

  • @archivewarrior8535
    @archivewarrior8535 1 year ago +1

    Here’s my attempt at the ending challenge. (SPOILERS) (LONG)
    I got 10/9. Now here’s (most of) my work:
    (0) To preface, when I use log(), I mean the base 10 logarithm. Also, as used here, the floor function, ⌊x⌋, returns the greatest integer less than or equal to x.
    (1) First, let’s define a function D(n), for positive integers n, that returns the number of digits in n. I don’t know how to describe the process of figuring out what that function is, but it ends up being D(n) = ⌊log n⌋ + 1. A sanity check can be done by evaluating the function around powers of 10.
    (2) Next, the probability of terminating at round n is P(n) = 1/[n(n + 1)], as shown in the video.
    (3) The expected number of digits, then, is E = Σ[n=1,∞] (P(n)*D(n)), or E = Σ[n=1,∞] (⌊log n⌋ + 1)/[n(n +1)]. After checking that this converges (I used a comparison with the series of log(n)/n^2 and the integral test), this can be split into two sums, with terms ⌊log n⌋/[n(n + 1)] and 1/[n(n + 1)], which can be evaluated separately and combined at the end.
    (4) Before that, though, it will be useful to find the partial fraction decomposition of 1/[n(n + 1)]. We set it equal to A/n + B/(n + 1). Using the cover-up test, we find that A = 1 and B = -1. Thus, 1/[n(n + 1)] = 1/n - 1/(n + 1).
    (5) Now we can evaluate Σ[n=1,∞] 1/[n(n + 1)]. We can plug in the partial fraction decomposition we just found to get Σ[n=1,∞] (1/n - 1/(n + 1)). Expanding yields (1/1 - 1/2) + (1/2 - 1/3) + (1/3 - 1/4) + …, which we can now see is a telescoping sum. All but one of the terms cancel, leaving just 1/1, and so Σ[n=1,∞] 1/[n(n + 1)] = 1.
    (6) Next, let’s look at Σ[n=1,∞] ⌊log n⌋/[n(n + 1)], which we will call S. This series looks kinda scary, so let’s get it into a more workable form. We can start by using the partial fraction decomposition of 1/[n(n + 1)]: S = Σ[n=1,∞] ⌊log n⌋*[1/n - 1/(n + 1)]. Distributing the floor stuff and pulling the result apart into two sums yields S = Σ[n=1,∞] ⌊log n⌋/n - Σ[n=1,∞] ⌊log n⌋/(n + 1). Re-indexing the second sum from n to n-1 gives S = Σ[n=1,∞] ⌊log n⌋/n - Σ[n=2,∞] ⌊log(n - 1)⌋/n. We can make the indices match by taking the first term out of the first sum, but since that term is zero, we are left with S = Σ[n=2,∞] ⌊log n⌋/n - Σ[n=2,∞] ⌊log(n - 1)⌋/n. Finally, we can combine the sums into S = Σ[n=2,∞] (⌊log n⌋ - ⌊log(n - 1)⌋)/n.
    (7) It may look like the sum is worse now, but we can show that the expression ⌊log n⌋ - ⌊log(n - 1)⌋ has some helpful properties.
    (8) First, let k be a natural number or 0, let n be a natural number greater than 1, and constrain n to the interval [10^k,10^(k+1)), so 10^k ≤ n < 10^(k+1). Notice that, since the interval n is in is bounded by two consecutive powers of 10 and open on the right, 10^k is the only power of 10 on that interval. Notice also that since log is a monotonic increasing function, log(n) ≥ log(10^k) = k, and log(n) < log[10^(k+1)] = k + 1. Thus, k ≤ log(n) < k + 1, and so ⌊log n⌋ = k.
    (9) Now, there are two cases: n = 10^k and n ≠ 10^k.
    (10) Case 1: let n = 10^k. Note that log(n - 1) = log(10^k - 1) < log(10^k) = k. I won’t show why, but log(n - 1) ≥ k - 1. Since k - 1 ≤ log(n - 1) < k, ⌊log(n - 1)⌋ = k - 1. Thus, if n = 10^k, ⌊log n⌋ - ⌊log(n - 1)⌋ = k - (k - 1) = 1.
    (11) Case 2: let n ≠ 10^k. Note that since 10^k is the only power of 10 on the constraining interval for n, n is not a power of 10. Note also that n is now on the interval [10^k + 1,10^(k+1)), and so n - 1 is on the interval [10^k,10^(k+1) - 1). Since this is a sub-interval of [10^k,10^(k+1)), by a similar argument to before, ⌊log(n - 1)⌋ = k. Thus, if n ≠ 10^k, ⌊log n⌋ - ⌊log(n - 1)⌋ = k - k = 0.
    (12) To summarize paragraphs 8-11, the value of ⌊log n⌋ - ⌊log(n - 1)⌋ is 1 if and only if n is a power of 10, and is 0 otherwise.
    (13) With this information, the sum S = Σ[n=2,∞] (⌊log n⌋ - ⌊log(n - 1)⌋)/n is fairly easy to calculate. We can expand it and fill in values for the floor expression: S = 0/2 + 0/3 + … + 0/9 + 1/10 + 0/11 + … + 0/99 + 1/100 + 0/101 + … + 0/999 + 1/1000 + …. Removing the zero terms gives S = 1/10 + 1/100 + 1/1000 + …, which is a geometric series with first term a = 1/10 and common ratio r = 1/10. Since |r| < 1, we can use the formula for a geometric series: S = a/(1 - r) = (1/10)/[1 - (1/10)] = 1/(10 - 1) = 1/9. Thus, Σ[n=1,∞] ⌊log n⌋/[n(n + 1)] = 1/9.
    (14) To summarize, we found an expression for the expected number of digits in our score as E = Σ[n=1,∞] ⌊log n⌋/[n(n + 1)] + Σ[n=1,∞] 1/[n(n + 1)]. We then found that Σ[n=1,∞] 1/[n(n + 1)] = 1 and Σ[n=1,∞] ⌊log n⌋/[n(n + 1)] = 1/9.
    (15) Now, we can finish it out: E = Σ[n=1,∞] ⌊log n⌋/[n(n + 1)] + Σ[n=1,∞] 1/[n(n + 1)] = 1/9 + 1 = 10/9.
    Thank you for reading all the way through (or for skipping to the end, I suppose). Sorry for formatting or mistakes; I typed this on a phone screen and I can’t see much of the screen at once. Feel free to add to this or point out mistakes!
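
The telescoping result in steps (6)-(13) is easy to double-check numerically: the partial sums of Σ ⌊log n⌋/(n(n+1)) do creep up to 1/9, and adding the value 1 from step (5) gives 10/9. A quick check, using len(str(n)) - 1 for ⌊log n⌋ to avoid floating-point edge cases at powers of 10:

```python
# floor(log10(n)) computed as len(str(n)) - 1 to dodge float rounding at powers of 10
S = sum((len(str(n)) - 1) / (n * (n + 1)) for n in range(1, 10**6))
print(S)      # ≈ 1/9 = 0.1111...
print(S + 1)  # ≈ 10/9, the expected number of digits
```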

  • @cheshire1
    @cheshire1 2 years ago +12

    My answer for the expected value of digits:
    10/9

  • @kenhaley4
    @kenhaley4 1 year ago

    Wow! I thought I understood probability. This is amazing!

  • @charliezard64
    @charliezard64 2 months ago

    As soon as you explained the premise of the game, I knew the harmonic series was gonna be problematic. Then I realized this was all gonna lead to the harmonic mean.

  • @latenter1428571
    @latenter1428571 1 year ago +1

    First challenge problem:
    P(1 digit) = (1/1)*(1/2) + (1/2)*(1/3) + (1/3)*(1/4) + ... + (1/9)*(1/10) = 1 - 1/2 + 1/2 - 1/3 + ... + 1/9 - 1/10 = 1 - 1/10 = 9/10
    P(2 digits) = (1/10)*(1/11) + ... (1/99)*(1/100) = 1/10 - 1/100 = 9/100 = 9/(10^2)
    ...
    P(n digits) = 9/(10^n)
    So E(number of digits) = 9 * sum_from_n=1_to_inf [ n/(10^n) ] = 9 * (1/9 * (1+ 1/9)) = 10/9

  • @smorcrux426
    @smorcrux426 2 years ago +3

    Amazing video! I watched a ton of SoME2 videos during the whole exposition, but now I'm going back and watching the videos I didn't get to see, and I really love all of them!

  • @JacksonBockus
    @JacksonBockus 1 year ago +3

    I get a surprising amount of joy from the harmonic series and the weird, unintuitive results that derive from it

  • @norbertvalterkalocsai6584
    @norbertvalterkalocsai6584 1 year ago +4

    i just love how the music is matched to the video

  • @rhettorical
    @rhettorical 1 year ago +2

    I had a teacher explain to me once that if "math is the language of the universe", that means that, like spoken language, there are things for which we don't have words for, as well as things for which words do not convey the full meaning. When facing a problem that doesn't make any reasonable sense, like a problem that's simultaneously finite and infinite, you've hit that limitation of mathematical "language". It's the kind of problem that led to the development of mathematical concepts we take for granted today, from irrational numbers to calculus to the very concept of zero. Or, more relevant to this video, infinity.

  • @kikones34
    @kikones34 1 year ago +2

    I don't have time to work it out right now, but regarding the question at the end, I got the feeling that the expected value for the number of digits converges because it is the logarithm of the expected value, and the series 1/2 + 1/3 + 1/4 + 1/5... diverges at a logarithmic rate, as such, when applying the logarithm it perfectly cancels the growth, making it a constant.

  • @danielius9156
    @danielius9156 1 year ago +4

    I ran this simulation in C# with 10,000,000,000 games and got an error that the ball amount exceeded int32 which is 2,147,483,647. Well that was unexpected, or was it?

    • @user-semenar
      @user-semenar 3 months ago

      It was, since the probability of going over the limit in a given game is 1/2,147,483,648, which is very likely to happen at least once if you repeat that 10,000,000,000 times.
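
The arithmetic supports this: for this game P(lasting at least k rounds) = 1/k, so a single game exceeds the int32 limit with probability about 1/2³¹, and over 10¹⁰ games roughly 4.7 overflows are expected. A back-of-envelope check (the ball count is within one of the round count, so 1/k is a close approximation):

```python
games = 10_000_000_000
limit = 2**31                    # int32 overflow threshold
p_overflow = 1 / limit           # P(one game reaches at least `limit` rounds)

expected_overflows = games * p_overflow
p_at_least_one = 1 - (1 - p_overflow) ** games
print(expected_overflows)  # ≈ 4.66 overflowing games expected
print(p_at_least_one)      # ≈ 0.99
```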

  • @MogaTange
    @MogaTange 1 year ago +2

    Me who’s spent the last 13.8 billion years playing with cotton balls and can now detect the minute chemical differences in dyes and slight radiation differences between blue and red light.

  • @IonfraX
    @IonfraX 2 years ago +1

    Really great video! Thanks for covering this. One question : do you know any literature or references where people have found "new possibilities / patterns to explore or problems to solve", in order to tackle this contradiction b/w infinite round expectation vs. 0 probability of infinite rounds?

    • @josephnewton
      @josephnewton  2 years ago +1

      I don't know any good references off the top of my head, but the Cauchy distribution is probably the best-studied example of one of these random variables, so that's a good avenue to look into. A neat way of showing it doesn't follow the central limit theorem (the reason most variables approach their expected value) is that the average of a number of independent Cauchy-distributed variables also has the Cauchy distribution. Apparently you can prove this succinctly with characteristic functions, but I haven't looked into that.

    • @cheshire1
      @cheshire1 2 years ago +1

      I don't know any literature on this, but one question that I would like to explore is: if the expected value is infinite, how fast does the average tend to infinity as we increase sample size?
      The contradiction you mention is not actually one, it's just counterintuitive. The way to deal with that is to work with it until your intuition catches up.

  • @Boringpenguin
    @Boringpenguin 2 years ago +10

    For the challenge problem, if anyone would like some hints:
    Hint 1: Try partial fraction decomposition on 1/(n*(n+1)), you will see there are some telescoping going on
    Hint 2: Try to differentiate the infinite geometric series together with its closed form, something similar will probably come up at the very end :)

    • @AlphaCarinae
      @AlphaCarinae 1 year ago +2

      Ah yes, very helpful hints... they mean absolutely nothing to me but I'm sure they're helpful.

  • @alansmithee419
    @alansmithee419 1 year ago +1

    The most inaccurate part of this video is the claim that getting blue balled means you finished.

  • @PaulMurrayCanberra
    @PaulMurrayCanberra 1 year ago +1

    So, what's the expected number of rounds if you play g games?

  • @zlodevil426
    @zlodevil426 1 year ago +1

    What do you even mean by “how many digits the expected value has”?

  • @JTtheking134
    @JTtheking134 3 months ago +1

    I think you miswrote at 6:02; I'm pretty sure it's supposed to be "E = 1 * P(1) + 2 * P(2) + 3 * P(3) + ...".

    • @josephnewton
      @josephnewton  3 months ago +1

      Good catch, that was a typo. I’ve added a correction to the video description

  • @all_things_random3713
    @all_things_random3713 1 year ago +1

    Every round add an infinite amount of red balls 🧠

  • @alejrandom6592
    @alejrandom6592 1 year ago +1

    Mistake at 4:32 and 6:02 in the arguments of P

  • @legendgames128
    @legendgames128 3 months ago

    The expected number of digits in the number of rounds can be calculated by the sum from n=1 to infinity, with each term equal to ceiling(log(n+1))/(n(n+1)).
    The denominator came from the probability calculation, and the log(n+1) came from replacing n with the number of digits it has, rounding up to get a whole number.
    Running this through Desmos, replacing infinity with x, at x=10^6, the sum seems to have converged to 1.111, which looks suspiciously similar to 10/9.

  • @danielfernandes1010
    @danielfernandes1010 2 years ago +2

    I love this! You kept the video interesting throughout. Would love to see more from you!

  • @johnchessant3012
    @johnchessant3012 2 years ago +2

    Great video! One thing I'm wondering about is the simulation you showed at the start, where the averages for 10^3, 10^4, 10^5, 10^6 trials seemed to increase logarithmically. However, when you do more runs of those sample sizes the averages fluctuate wildly so I'm not sure how best to formalize this pattern, if it exists at all. Is there a fast analytic way to calculate the median or mode of the average of N i.i.d. samples from this distribution, P(X = k) = 1/k(k+1)? (Obviously the mean would be infinite.)

    • @josephnewton
      @josephnewton  2 years ago +3

      I couldn't see an easy way to find the median or mode, however I came up with another idea which was to artificially limit the number of rounds based on the probability that all games are below a certain number of rounds. I added those details to the video description, and it does seem to be logarithmic! My guess is that the entire distribution gradually shifts upward logarithmically (only approximately, since it's a discrete distribution - maybe there's a continuous distribution that does the same shift but more smoothly?). I should say though that there's some sampling bias in the video, because I reran the simulations until I got good looking results (although I wasn't necessarily expecting a logarithmic dependence). In practice the averages are much less clean!

    • @piguyalamode164
      @piguyalamode164 2 years ago +3

      Given that the expected value is the harmonic series, and it is known that the limit as n->infty of (1+1/2+1/3+...+1/n)-log(n) converges to a fixed constant, the logarithmic increase is not surprising (I would guess it is simply because the additional samples "give more opportunity" to reach further into the game, i.e. further into the less likely parts of the series, i.e. add terms)
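      That fixed constant is the Euler-Mascheroni constant γ ≈ 0.5772; a quick sketch to watch H_n - ln(n) settle down (the helper name is mine):

```python
import math

def harmonic_minus_log(n: int) -> float:
    # H_n - ln(n) converges to the Euler-Mascheroni constant γ ≈ 0.57721...
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

for n in (10, 1000, 100000):
    print(n, harmonic_minus_log(n))
```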

  • @rikschaaf
    @rikschaaf 1 year ago

    If expected value for the number of digits in the number of rounds is finite, how can it be that the expected number of rounds is infinite, rather than a number that has that amount of digits?
    You'd think that knowing the expected number of digits in the number of rounds at the very least creates bounds on the expected number of rounds. (if it were to have 5 digits, you'd think that the expected number of rounds would have to be between 10000 and 99999).
    I think your statement conveys some kind of paradox, because any number with a finite number of digits is itself finite. In base 10, every number with n digits, where n is a finite positive integer, is a number between 10^(n-1) and 10^n - 1.

  • @lzllzzl
    @lzllzzl 3 months ago

    An interesting fact about the Polya urn process described in the video is that it's equivalent to the following process: Pick a random p uniformly from [0, 1], then output a sample from a geometric distribution of rate p.
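    In case anyone wants to check this numerically, here's a simulation sketch of both processes (function names are mine); under the claimed equivalence, both should give P(1 round) ≈ 1/2:

```python
import random

def urn_game() -> int:
    # Urn process: round k has 1 blue and k red balls; stop on blue
    red, rounds = 1, 1
    while random.randint(0, red) != 0:  # index 0 stands for the blue ball
        red += 1
        rounds += 1
    return rounds

def mixed_geometric() -> int:
    # Pick p uniformly from (0, 1], then count trials until the first success
    p = 1.0 - random.random()  # random() is in [0, 1), so p is in (0, 1]
    k = 1
    while random.random() >= p:
        k += 1
    return k

trials = 20000
for sampler in (urn_game, mixed_geometric):
    ones = sum(sampler() == 1 for _ in range(trials))
    print(sampler.__name__, ones / trials)  # both ≈ 0.5
```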

  • @WilliamWizer
    @WilliamWizer 1 year ago

    The number 1 can't be prime. Every composite number has one, and only one, prime factorization. If 1 were prime, every composite number would have infinitely many prime factorizations, since n*1 = n.
    Also, 1 has half as many divisors as any prime: any prime X has 4 divisors (1, -1, X and -X), but 1 only has 2 divisors (1 and -1).
    This stays true even if you only consider natural numbers: 1 will have only one divisor, and primes will have 2 divisors (1 and X).

  • @berryesseen
    @berryesseen 3 months ago

    The mean of iid random variables converges to the expected value only if the expected value is finite. The mean of iid Cauchy random variables is again Cauchy distributed; you can show it using the characteristic function. This video is a good example of why a simulation by itself doesn't prove anything. You have to prove the convergence if you want to claim it.

  • @haroldp.sadwood1181
    @haroldp.sadwood1181 1 year ago

    9:15 This revelation astounded me, so I wrote a program to try and simulate this. It basically calculates what percentage of games end finitely. Here's the code:
    import random

    def quadratic_increase(num_trials: int) -> float:
        num_red_balls = 1
        num_finite_games = 0
        num_turns = 1
        drawn_ball = 1
        for _ in range(num_trials):
            while drawn_ball > 0:
                drawn_ball = random.randint(0, num_red_balls)
                num_turns += 1
                num_red_balls = num_turns**2 - 1

                # if the number of red balls exceeds 1,000,000, we can assume
                # that the blue ball is never getting picked
                if num_red_balls > 1000000:
                    num_finite_games -= 1  # cancels the += 1 below
                    break
            num_finite_games += 1
            num_red_balls = 1
            num_turns = 1
            drawn_ball = 1

        return num_finite_games / num_trials
    As I increase the number of trials (I went up to 1,000,000), the number converges to 0.75. This isn't the 0.5 number demonstrated in the video, but that's because I kept the first round in, giving a 50% chance to pick the blue ball on the first round and win immediately. This obviously halves the final limit result, making it 1/4, which means there's a 1 in 4 chance that the game continues infinitely. It's such an interesting concept, as I thought the game would be guaranteed to end at some point given infinite trials, but math is math.

  • @television9233
    @television9233 1 year ago

    Great video! I like the accent.
    I think the expected number of digits is between π^2/(12 log(10)) and log(2) + π^2/(12 log(10)),
    which is between 0.357193 and 1.05034.
    I can't get the exact number because the number of digits of any number N is exactly floor(log_10(N)) + 1, but the floor function breaks my integral calculator when I try to compute (floor(log_10(n)) + 1)/(n(n+1)), and it just can't solve the integral to infinity, so I used the approximation that floor(x) is bounded between x - 1 and x.

    • @josephnewton
      @josephnewton  1 year ago +2

      Not quite, the expected value doesn’t have to be an integer. For example, if you roll a 12-sided die what’s the expected value for the number of digits? There’s 3 possible 2-digit numbers, so it’s 1+3/12, or 1.25.
      For the original question, I’d recommend writing the expected value as a sum rather than an integral, and see if you can split the sum up into easier parts to remove the logarithms. Don’t always rely on computers and algorithms!

    • @alansmithee419
      @alansmithee419 1 year ago

      Another note: use your knowledge to analyse your answers!
      How could the expected number of digits be less than one? It can't. Every number has at least one digit.
      So clearly your lower bound was found through some kind of error.

  • @lorenzovillani3102
    @lorenzovillani3102 3 months ago

    For the Cauchy variable, the problem is that it has an undefined mean and infinite variance, so neither the central limit theorem nor the law of large numbers applies: the empirical average does not get arbitrarily close to any fixed value as N goes to infinity.

  • @hayksargsyan4416
    @hayksargsyan4416 3 months ago +1

    If anyone might see this comment. I wonder if removing some of the games with most turns could make the true expected value pop out? It seems like those very long games are what make the expected value so unstable that it just diverges. But such games are rare too.

    • @josephnewton
      @josephnewton  3 months ago +1

      Sure, you could add a cutoff at some number N so that any games past length N are counted as only having length N. If you do this, then the expected value is approximately the natural logarithm of N. Unfortunately this depends heavily on the choice of N and can still be arbitrarily large; there isn’t a choice of N that gives a “canonical true value” for the expectation.
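      To make that concrete: with a cutoff at N, the truncated expectation telescopes to the harmonic number H_N ≈ ln(N) + 0.577, which a few lines of Python can confirm (the helper name is mine):

```python
import math

def truncated_expectation(cap: int) -> float:
    # E[min(X, cap)] for P(X = k) = 1/(k(k+1));
    # games longer than cap are counted as having length cap
    total = sum(k / (k * (k + 1)) for k in range(1, cap))
    total += cap * (1.0 / cap)  # P(X >= cap) telescopes to 1/cap
    return total

for cap in (10, 1000, 100000):
    print(cap, truncated_expectation(cap), math.log(cap))
```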

    • @hayksargsyan4416
      @hayksargsyan4416 3 months ago

      @@josephnewton wow that's so fascinating. Though I'm still not comfortable with the idea that this is just an edge case of statistics. It seems like introducing more constraints is of no help. I wonder what happens when higher abstractions/generalizations are introduced (if there are any)? I can't exactly define what type of abstractions, but just as in other fields, things often display more properties when generalized in some way, which brings more insight than "we just tend to avoid this sort of loophole".

    • @josephnewton
      @josephnewton  3 months ago

      Well, the most well-studied generalisation of expected value is ‘moments’: the first moment is expected value, the second is related to standard deviation or variance, and then there’s higher moments beyond that. So you can sort all random variables into spaces based on how many moments exist, with well-behaved cases like the normal distribution having all moments and examples like this one having none. There’s a huge amount of theory behind recovering a probability function from its moments, in a similar manner to Fourier series and the like, but it’s mostly focussed on the most well-behaved case. I’d guess that more generally you could do something with fractional moments, but that’s outside my knowledge…

  • @klafbang
    @klafbang 8 months ago +5

    Outcomes with probability 0 can happen. If I ask you to pick a random real number, the probability of you picking a natural number is |N| / |R| = 0.

    • @mnm1273
      @mnm1273 3 months ago +1

      Picking a genuinely random real number is impossible, because you can only express it with a finite number of digits.
      If you could, the number that came out wouldn't be natural, because at some point the natural number's pattern would break.

  • @yanickpalmers
    @yanickpalmers 2 years ago +1

    amazing video, you should have wayy more subscribers!

  • @mihaleben6051
    @mihaleben6051 1 year ago

    0:57 well that would be 1/(n(n+1)),
    where n is the turn number and the number of red balls.

  • @FelanLP
    @FelanLP 1 year ago

    I don't get why it's 2/3 in round two. It's still just 1 blue ball.

  • @otter502
    @otter502 1 year ago

    If you try to calculate whether you will finish the game, you are guaranteed to if you have infinite tries

  • @AlbertoSaracco
    @AlbertoSaracco 2 years ago +1

    Cute! Very nice exposition!

  • @Ubeogesh
    @Ubeogesh 1 year ago

    This problem seems like a great way to generate random numbers

    • @kodirovsshik
      @kodirovsshik 1 year ago

      Are you sure about that?

    • @Ubeogesh
      @Ubeogesh 1 year ago

      @@kodirovsshik no

    • @Ubeogesh
      @Ubeogesh 1 year ago

      @@kodirovsshik but the same problem gives a different result each time you run it

    • @kodirovsshik
      @kodirovsshik 1 year ago

      @@Ubeogesh The same thing can be said about many different random simulations. But the main problem here is that you would need a random number generator to perform a simulation like this in the first place. Another problem is that numbers you get via simulations like this are not gonna be uniformly distributed which makes them useless unless you need this specific distribution ( P(k)=1/k(k+1) ), in which case there are better ways to do this
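      One such better way, if I'm not mistaken: since P(X >= n) = 1/n for this distribution, inverse-transform sampling needs only a single uniform draw per sample (the function name is mine):

```python
import random

def sample_rounds() -> int:
    # P(X >= n) = 1/n, so X = floor(1/U) for U uniform on (0, 1]
    u = 1.0 - random.random()  # random() is in [0, 1), so u is in (0, 1]
    return int(1.0 / u)
```

      This gives P(X = k) = 1/k - 1/(k+1) = 1/(k(k+1)) exactly, matching the game's distribution without simulating any draws.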

  • @sirlight-ljij
    @sirlight-ljij 1 year ago

    I give you two games, with the setup exact same as yours.
    Game 1: If you draw a red ball on turn 1, add TREE(1)=1 new ball into your bag. On turn 2, add TREE(2)=3 red balls into the bag. On turn 3, add TREE(3) red balls into the bag...
    Game 2: If you draw a red ball on turn 1=TREE(1), add a new red ball into the bag. If you draw a red ball on turn 2, do nothing and just keep drawing balls. If you draw a red ball on turn 3=TREE(2), add a second red ball. A third red ball will be added on turn TREE(3)

  • @TheRealMarauder
    @TheRealMarauder 1 year ago

    Since in this game the probability of not having ended the game by the end of round n is 1/(n+1), you could make the probability of not having ended the game be 1/(log(n)+2); then the expected value of the number of digits is 1/3+1/4+1/5+...
    As for a reasonable ball game that samples 1/(log(n)+2) - 1/(log(n-1)+2)? I have no idea for a game that samples a logarithm. Maybe some approximation of the area under 1/x could help? Something that looks like "play independent copies of the first game some number of times, and if the first k games combined end up being longer than the next m combined, end the game", maybe.

  • @zachbills8112
    @zachbills8112 1 year ago

    For the digits problem, it isn't any more work to calculate the moment generating function than just the expectation itself. The mgf is then 9*exp(t)/(10-exp(t)). From here the expected number of digits is 10/9 and the standard deviation is sqrt(10)/9.
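    For anyone wanting to sanity-check those values: since P(X >= n) = 1/n, the digit count D satisfies P(D = d) = P(X >= 10^(d-1)) - P(X >= 10^d) = 9/10^d, a geometric distribution, and the moments follow directly:

```python
# D = number of digits of the round count; P(D = d) = 9 / 10**d for d >= 1
mean = sum(d * 9 / 10**d for d in range(1, 200))
second_moment = sum(d * d * 9 / 10**d for d in range(1, 200))
sd = (second_moment - mean**2) ** 0.5

print(mean)  # ≈ 1.1111... = 10/9
print(sd)    # ≈ 0.3514... = sqrt(10)/9
```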

  • @kartofel24724
    @kartofel24724 11 months ago

    Ah yes, my favorite variable. "balls"

  • @Robinsonero
    @Robinsonero 1 year ago +2

    Great work!

  • @FatedHandJonathon
    @FatedHandJonathon 1 year ago

    I’m interested by the way the average number of rounds played by the simulation increased very close to logarithmically. Is the expected average number of rounds per game after n games finite?

  • @Unchained_Alice
    @Unchained_Alice 1 year ago

    It is possible to flip a coin forever and never get heads. There is a 0% chance of it happening but it's still possible (0% chance doesn't mean impossible). There is a > 0% chance of doing it in a finite time so it's clearly possible

  • @dawidziomalify
    @dawidziomalify 1 year ago

    So anyone came up with solution or nah?

  • @Fine_Mouche
    @Fine_Mouche 1 year ago

    8:28 : "longest : 4606259" does it mean somewhere in the list of numbers 1, 2 ,136, 3, 1, 2, 3,1 ,1 ,5, 2, 10, 1, 4, .... there is a 4606259 ?

    • @mikaeus468
      @mikaeus468 1 year ago

      That's how I understood it

  • @CreamusG
    @CreamusG 1 year ago

    I mean, this whole thing reminds me of a few other things in mathematics. When I think of a way to intuitively see how an average can increase despite every trial being independent, I think of looking at the median value of the data.
    That led me to think about a video I watched showcasing the "just one more" paradox, in which you're taking a gamble which has an average gain but is not favorable, since the vast majority of the time you will lose.
    Then I thought about random walks: how a 2D random walk will always return to its starting point given an infinite number of steps, but a 3D random walk only has a chance to. It's really at that point where these concepts go over my head. Especially with the infinite pigeonhole principle. Also the puzzle where you add numbered balls to a box 1 by 1 and remove the square roots of the number you just added; even though you never remove more balls than you add, if you do it an infinite number of times the box will be empty.
    Math is crazy.

  • @alikaperdue
    @alikaperdue 1 year ago

    @11:26 - the moment you ran out of red balls.
    You should have just held one red ball in your hand and pretended to put it in while palming it, so that it can be seen coming back up like you're adding a new one. Better to flash it where we expect it and cover it when moving between.

  • @joaozin003
    @joaozin003 1 year ago

    The infinite series there is the harmonic series

  • @bridgeplayer
    @bridgeplayer 2 years ago +1

    great submission joseph

  • @G4naD
    @G4naD 1 year ago

    nice visualisation of the improper integrals from 1 to infinity of the functions 1/x and 1/x^2

  • @qwertz12345654321
    @qwertz12345654321 1 year ago

    Saying the game always ends is mathematically incorrect, though. There is mathematical terminology for games like this: the game ends with probability 1

    • @haroldp.sadwood1181
      @haroldp.sadwood1181 1 year ago

      That's what makes the example that converges to 1/2 rather than 0 so fascinating to me. I would have thought that even that game, where the number of red balls increases quadratically, would be guaranteed to end, but no, there's apparently a 50% chance that the game continues forever. Would you say the game ends with a probability of 1/2? It feels like, if you were really given INFINITE games, you'd hit that 1 in (absurdly big number) at some point, but really, if you have INFINITE games, you will be trying to hit a 1 in infinity, which is impossible.

  • @OMGitshimitis
    @OMGitshimitis Год назад

    If I were to take the 1000-game situation, run it 1000 times, and plot each average value on a graph, what would that graph look like?

  • @theeraphatsunthornwit6266
    @theeraphatsunthornwit6266 1 year ago

    now I've got a mild existential crisis
    even if there is a little chance to draw a blue ball in the last question, I can still have half a chance to stay in limbo forever?

  • @Jivvi
    @Jivvi 1 year ago

    My conclusion based on this information is that you have at least 4 blue balls and 26 red balls.

  • @pafnutiytheartist
    @pafnutiytheartist 1 year ago

    This kinda breaks me. The expected number of digits in the answer is finite, all answers are finite. Yet their mean is infinite.

  • @ThatSkiFreak
    @ThatSkiFreak 1 year ago

    cool

  • @teogm
    @teogm 1 year ago

    5:42 I tried to get the general formula for the probability of reaching round n without watching, and the solution I reached is P(n) = [n-1]/[(r+1)!]. I wonder if it is correct and, if so, why it differs from the one in the video

    • @mnm1273
      @mnm1273 1 year ago +2

      It's (n-1)!/(n+1)! = 1/(n(n+1)). You've forgotten the other factors up top.

  • @Zero-4793
    @Zero-4793 2 years ago +4

    10:40 I think the avg is 0, it just has a high standard deviation thanks to the heavy tails. The results are still relatively close to 0 given the infinite range and the mass in the tails. Average still 0, just greater variation

    • @josephnewton
      @josephnewton  2 years ago +18

      Keep in mind that each output from the program at 10:40 is already 1 million samples averaged, and if you take the 10 outputs shown in the video and average them you get about -1.1 which still isn't any closer to zero. It's correct to say zero is the mode of the distribution, but both the mean and standard deviation are undefined (although the standard deviation could be defined to be infinity, while the mean is just undefined). This article explains the algebra more thoroughly: en.wikipedia.org/wiki/Cauchy_distribution#Explanation_of_undefined_moments
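      This behaviour is easy to reproduce: the mean of n standard Cauchy samples is itself standard Cauchy, so averaging never tightens the spread. A small sketch (function names are mine), sampling via the inverse CDF:

```python
import math
import random

def cauchy_sample() -> float:
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

def sample_mean(n: int) -> float:
    return sum(cauchy_sample() for _ in range(n)) / n

# Means of 100,000 draws still scatter on the same scale as single draws
print([round(sample_mean(100_000), 2) for _ in range(5)])
```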

    • @alansmithee419
      @alansmithee419 1 year ago +2

      It seemed that way to me as well so I looked it up. Turns out the Cauchy distribution just has an undefined average and variance.
      This is a mathematical fact that has been proven. The video maker did not just throw this in on a whim without good reason (though I do think it could've been made clearer that this was the case).

  • @willie333b
    @willie333b 1 year ago

    Cool

  • @Ugnaski
    @Ugnaski 1 year ago

    What's the background music called?

  • @Sara-cq6gt
    @Sara-cq6gt 2 years ago +1

    Great video😊

  • @ake_lindblom
    @ake_lindblom 2 years ago +1

    Brilliant!

  • @agnishom
    @agnishom 2 years ago

    Traditionally, this bag should have been called an urn

  • @robotv56
    @robotv56 1 year ago

    If only you could play this cotton ball game with each step taking half as long as the last step. If you did this, and the first step took one second, the game would always end in under 2 seconds.

  • @pedroth3
    @pedroth3 11 months ago

    Amazing, really well done! Great idea and explanation.

  • @정준영-i7r
    @정준영-i7r 1 year ago

    Then..
    What's the expected value of the average number of rounds when you play n games, as a function of n?

  • @dragonrock8467
    @dragonrock8467 1 year ago

    Do more probability videos ❤

  • @hughcaldwell1034
    @hughcaldwell1034 1 year ago

    Nice video and an interesting problem. My one sticking point, which isn't limited to this video, is the supposed inevitability of drawing blue/flipping heads/etc. The argument is always that, since the probability of never flipping heads goes to zero, a heads-free sequence is impossible, and I don't buy that. The probability of getting any given infinite sequence goes to zero, but if we conclude from that that all sequences are impossible, then we have a problem.
    Admittedly probability isn't my area, but in other areas of maths, something being impossible means that its definition contains a contradiction. There is nothing inconsistent about a heads-free sequence.

    • @mnm1273
      @mnm1273 1 year ago

      When you go to infinity, the probability of any definable infinite sequence goes to zero. This means all we're left with is infinite chaos with no discernible pattern. Given that you can't even write down an example of that, it's clear they're not really to be treated as sequences.

    • @Unchained_Alice
      @Unchained_Alice 1 year ago +1

      You are right. I studied probability theory as an undergraduate, and a 0% chance doesn't mean impossible. This is especially apparent when something is continuous: any one particular outcome will have a 0% chance of happening.
      This is also true for any sequence of infinite flips. Every one has an equal chance of happening, namely a 0% chance. Yet there must be a sequence if you do it for eternity.
      So yes, there is a 0% chance of it happening but it can happen

  • @kosmobold
    @kosmobold 10 months ago

    dude i love the music in this

  • @gabes6108
    @gabes6108 1 year ago

    What is the median number of rounds of the first cotton ball game?

    • @btf_flotsam478
      @btf_flotsam478 1 year ago

      There's a 50/50 chance it ends after one turn, you do the rest.

  • @mihaleben6051
    @mihaleben6051 1 year ago

    Ah yes, math.

  • @strikeemblem2886
    @strikeemblem2886 2 years ago

    I wished you'd supplement your explanation at 7:12-11:02 not with code and mention of a heavy-tailed RV, but by actually delving into the "fiddly" bits: e.g. the distinction between 7:25 vs 7:55, that P(eventually draw a blue) and 1 - lim P(no blues for n rounds) are not the same thing. (Otherwise, great work.)

    • @AssemblyWizard
      @AssemblyWizard 2 years ago

      Can you elaborate please? (or give search terms)

    • @strikeemblem2886
      @strikeemblem2886 2 years ago +1

      @@AssemblyWizard Sure. Let's define the event A_n = {on round n, draw a blue}. Then we have {eventually draw a blue} = union_{n=1 to infty} A_n. Meanwhile, using de Morgan's laws, we have that the complement of {no blues for n-1 rounds} = union_{k=1 to n-1} A_k. So on one hand, we have lim 1-P(no blues for n-1 rounds) = lim P(union_{k=1 to n-1} A_k) ... and on the other hand we have P(union_{n=1 to infty} A_n) ... they are equal because of the continuity of P along monotone sequences of events: continuity from below for the increasing unions, or equivalently continuity from above for the shrinking complements. (This is a basic but key result in probability theory.)
      .
      I was hoping for the above paragraph. In other words, in 7:08 - 9:34 I would have preferred (1) a careful writing down of the events that we are interested in, and (2) justifying the interchange of limits and infinite unions by using the "basic result".

  • @blackeyefatlip7045
    @blackeyefatlip7045 1 year ago

    Actually, the program has a particularly interesting flaw: it can only go up to a certain number of balls. This matters because it will inevitably overflow thanks to the maximum value a variable can store. Eventually it will overflow to 0 or 1 and it will then either crash or simply "choose" from the supposedly 1 ball in the bag. This is why the program will always terminate and won't be truly valid data for this game.

    • @costa_marco
      @costa_marco 1 year ago

      The program, as written, is correct. Python uses arbitrary-precision integers, which can grow to absurd values. But a simple for loop from 0 to 100 million takes about 4 seconds on my computer, so overflowing a 32-bit integer would take about 120 seconds (again, on my computer). If Python used normal 64-bit integers, this simple loop would take 16000 years to run... Cheers!

    • @blackeyefatlip7045
      @blackeyefatlip7045 1 year ago +1

      @@costa_marco Today I learned something new about Python. Thanks for the info

  • @alejrandom6592
    @alejrandom6592 1 year ago

    Man this is crazy!

  • @travisporco
    @travisporco 1 year ago

    very nice!

  • @phaZZi6461
    @phaZZi6461 1 year ago

    i love this riddle

  • @fullfungo
    @fullfungo 2 years ago

    Nice👍