Eddie suggested that I ask the keen among you the following nice question (first watch the video): How many different ways are there to rearrange a conditionally convergent series to get the sum π? Yes, of course, infinitely many. The real question is whether there are countably or uncountably infinitely many ways.
It should be uncountably infinite: if you make a table listing rearrangements that add to π, you should be able to do the same thing as in Cantor's diagonal argument to show that you have not listed every single one. (Edit: I only just realised it said sum to π and not sum to any number)
@@Josh-dj1bl You change the structure by permuting infinitely many terms, which doesn't guarantee the sum stays the same. And there are already uncountably many real numbers that must be covered by those permutations. My intuition is that it is countably infinite.
It's uncountable because for any integer n, there are 2ⁿ ways to pick the first n terms (each either positive or negative), and the remaining terms can then be filled in by the over/under method.
This demonstrates something non-math people don't get: infinity is full of trap doors, subtleties, and other frustrations. The early infinity theorists like Cantor nearly lost their minds over this kind of thing.
I would not exactly agree. Sometimes it works the opposite way, and being a non-mathematician helps you not get lost in math abstractions, or, if you wish, not fall into the traps, which are still purely mathematical.
come down from that horse, friend. I have never had an aptitude for maths, I learn much better from visualizations than still reference frames. i.e. plotting by hand vs watching a video of the plot. hell, one does not even require numbers to demonstrate the nuances of infinity, just a line with evenly spaced marks, and a little imagination.
This is less a problem of infinity, and more a problem of simplifications, and particularly of failing to keep track of your simplifications and their implications. There's probably a derivative of units that should be devised to keep track of such things (or perhaps it's already within some mixture of calculus's limits and the concept behind "big O notation").
I think the problem with infinity is the assumption that it can be accurately described in a finite world. Sure, we can make pretty good approximations of how it might behave, but they're just that: approximations.
Takes time to do, but is definitely worth it. Also, this did become a lot easier since YouTube now allows us to input the full script and then creates proper subtitles automatically from that :)
An Italian Math and CS Senior Lecturer here. I just want to share that, as it happened to the Mathologer, when my professor did the Riemann rearrangement theorem in Real Analysis 1, as a freshman I was totally upset and amazed by this counterintuitive result. Congrats to @Mathologer, whose videos always make us see the things we know from new and interesting perspectives.
It's interesting to look at the patterns of positive & negative terms when rearranging to Pi. The first 10 terms on the positive side are: 13, 35, 58, 81, 104, 127, 151, 174, 197, 220. If you look at the differences between terms, you get: 22, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 24, 23, 23, 23, 23, 23, 23, 23, 24 ... You get similar almost repeating patterns when your target is e, or the golden ratio. I think that what's going on here is that ln(23) is close to Pi, so we are very close to the fixed 23 positive 1 negative ratio.
I can see 22/7 (ish). 🤯. (Edit: on second thoughts, 23+1/7 … which then generalises to continued fractions approximations to e^π, if you want to follow that rabbit-hole.)
Very well spotted. The reason for this is that Gelfond's number is approximately equal to 23. It turns out that if an arrangement of our series has the sum pi, then the ratio of the number of positive to negative terms in the finite partial sums of the series converges to Gelfond's number. This is just one step up from what I said about us being able to get arbitrarily close to pi by turning truncations of the decimal expansion of Gelfond's number into fractions. Similarly for other target numbers. For example, to predict what the repeating pattern for e is, you just have to calculate e^e :)
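To see this ratio emerge numerically, here is a small Python sketch (my own illustration, not code from the video) of the greedy over/under rearrangement of 1 - 1 + 1/2 - 1/2 + ... aimed at pi. The ratio of positive to negative term counts settles right on Gelfond's number e^pi:

```python
import math

# Greedy over/under rearrangement of 1 - 1 + 1/2 - 1/2 + ...
# targeting pi: take the next unused positive term 1/k while the
# running total is below pi, otherwise the next unused negative
# term -1/k.
target = math.pi
total = 0.0
pos = neg = 0  # positive / negative terms used so far

for _ in range(1_000_000):
    if total < target:
        pos += 1
        total += 1.0 / pos
    else:
        neg += 1
        total -= 1.0 / neg

print(pos / neg)          # close to 23.14
print(math.exp(math.pi))  # Gelfond's number, 23.140692...
```

The first run of positive terms has length 13, just as in the video, and in the long run there are about 23.14 positive terms per negative one.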
@@andrewkepert923 When doing path representations of continued fractions of square roots, I noticed that sqrt(13) is the first where the integer part does not unite with the repeating period. For example, the rabbit holes of sqrt(n^2+1) for 2, 5 and 10 are [1; 2], [2; 4] and [3; 6]. Using < for L and > for R for nicer visuals, and starting with < for the whole number 1: < for sqrt(2). Likewise >>>>>, but for sqrt(13) the integer part and the repeating period of length 10 don't combine into a repeating string. Instead, when combined and arranged in the length of the repeating period, there's a bit of a turn at the 3rd digit: >>>>>> >>>>>> >>>>>> etc. After 13, 29 and 41 had the same property. Haven't looked further so far, just thought it worth mentioning, as 13 is an interesting "turning number" in many ways.
Is this the answer to Eddie's question? @@Mathologer Somebody pointed out there that we first need to know the value to find the possible expansion series. For every finite approximation we have a countable number of expansion possibilities; multiply that by the N approximations and it is still countable. Or zero, if none of them approximates pi :) For sure the infinite solution is computationally hard.
There's actually a sort-of-explanation for why e^π is roughly π+20. If you take the sum of (8πk^2-2)e^(-πk^2), it ends up being exactly 1 (using some Jacobi theta function identities). The first term is by far the largest, so that gives (8π-2)e^(-π)≈1, or e^π≈8π-2. Then using the estimate π≈22/7, we get e^π≈π+(7π-2)≈π+20.
@@Mathologer I wouldn't be surprised if it was already published somewhere, but I haven't been able to find it anywhere. I was working on some problems involving modular forms and I tried differentiating the theta function identity θ(-1/τ)=√(τ/i)*θ(τ). That gave a similar identity for the power series Σk^2 e^(πik^2τ). It turned out that setting τ=i allowed one to find the exact value of that sum.
@@Mathologer I don't know if it's new, but it's certainly not well known. To quote the Wolfram MathWorld article "Almost Integer": "This curious near-identity was apparently noticed almost simultaneously around 1988 by N. J. A. Sloane, J. H. Conway, and S. Plouffe, but no satisfying explanation as to "why" e^π-π≈20 is true has yet been discovered."
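For the curious, the exact theta-function sum and the resulting near-identity e^π ≈ 8π - 2 from a few comments up are easy to check numerically (this is only a sanity check, not a proof):

```python
import math

# The sum over k >= 1 of (8*pi*k^2 - 2) * exp(-pi*k^2) equals 1
# exactly (via the Jacobi theta identity discussed above); the
# terms die off so fast that 20 of them exhaust double precision.
s = sum((8 * math.pi * k * k - 2) * math.exp(-math.pi * k * k)
        for k in range(1, 21))
print(s)  # 1.0 up to floating-point error

# Keeping only the dominant k = 1 term gives e^pi ≈ 8*pi - 2:
print(8 * math.pi - 2)    # about 23.13
print(math.exp(math.pi))  # about 23.14
```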
As soon as you moved the negative fractions below the top line, my first instinct was "Wait...isn't the top part 'outpacing' the bottom part?" Then I lost confidence when you collapsed them, lining up all pos and neg, lol. I was like "but, but, but...." Anyway, I love that stuff!
Yeah, infinite sets make sophistry an easy task, because standard logic dictates that there are no greater or lesser infinities. Problems like this one, among others, prove that this is not the case; you just have to add to the infinity in a different direction.
@@wolvenedge6214 Yeah, I realized as he went along that the user is choosing more terms in a certain direction to ENSURE arrival at predetermined sum. This still leaves my intuition feeling that if the chosen number is positive, then the sum MUST contain more positive numbers than negative ones...then again...even that can be shown to be untrue, cuz one could arbitrarily choose more (but very small) negatives, and fewer (but very large), to arrive at the same number. Finally, however, the systematic "rule" that Mr Polster used, isn't arbitrary! I still feel the infinite positive set is larger than the negative set! BAH!
@@ABruckner8 You got me all confused since I thought that your first instinct is right. But I kept thinking and was wondering if you meant, that the positive direction would grow infinitely. Then I thought some more and I realized (hopefully that's right), that when you cancel the terms with each other what is left is an infinite series that converges.
It is always a pleasure to watch your vids. Not only because these are great educational videos, but also because your voice and wordings make them even better
I think it's easy to get distracted by the fact that there is a matching negative for every positive term in the sequence. A similar paradox makes it more intuitive what's wrong with rearranging terms. ∞ = 1 + 1 + 1 +... ∞ = (2 - 1) + (2 - 1) + (2 - 1) +... ∞ = (1 + 1 - 1) + (1 + 1 - 1) +... ∞ = 1 + 1 - 1 + 1 + 1 - 1 +... Then we can pull out positive and negative terms. 1 + 1 + 1... - 1 - 1 - 1... So every +1 is canceled by a - 1. You can even create a mapping from the nth positive 1 to the n*2 negative term, so every positive term has a negative to cancel it. This, to me, intuitively shows why you can't add infinite sums by rearranging terms. You need to look at how it grows as you add terms.
I had the same thought, but I wasn't sure if it was a valid comparison since 1+1-1+1+1-1+1+1-1... is divergent and the sum in the video isn't. I don't really *get* infinite sums tbh.
An infinite series has a sum if the sequence of partial sums converges to a number. If there is such a number, then this number is the sum of our series. If no such number exists, the series does not have a sum. This is the official definition of the sum of an infinite series. And so let’s consider the sequences of partial sums of these two series. For 1 - 1 + 1 - 1 + 1 - 1 + ... the sequence of partial sums is 1, 0, 1, 0, 1, 0, 1, .... It alternates between 0 and 1, never settling down; and so as far as mathematics is concerned this series does not have a sum (at least to start with; see the discussion of supersums in some of my other videos) For 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ..., the sequence of partial sums is 1, 0, 1/2, 0, 1/3, 0, 1/4, .... This sequence of partial sums converges to 0 and so this series has the sum 0 :)
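A tiny Python sketch (my own) of this definition in action, printing the partial sums of both series:

```python
# A series has a sum exactly if its sequence of partial sums
# converges; this helper just collects the partial sums.
def partial_sums(terms):
    total = 0.0
    sums = []
    for t in terms:
        total += t
        sums.append(total)
    return sums

# 1 - 1 + 1 - 1 + ... : partial sums oscillate, so no sum.
print(partial_sums([(-1) ** k for k in range(10)]))
# [1.0, 0.0, 1.0, 0.0, ...]

# 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ... : partial sums converge to 0.
terms = []
for k in range(1, 6):
    terms += [1.0 / k, -1.0 / k]
print(partial_sums(terms))
# [1.0, 0.0, 0.5, 0.0, 0.333..., 0.0]
```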
with the last two videos this channel has outdone itself. I have seen and re-watched them several times and as an amateur and enthusiast I believe that they are the two best calculus lessons I have attended. So illuminating and profound, they hold together all those details that leave one confused in a school course and which here instead receive the right attention and are explained with incredible ease. Bravo! ❤
In case you are wondering, the notification for this video worked for me. With the crazy YouTube algorithms many creators are talking about these days, we need these notifications.
Your videos have a tremendous impact on me, making me wanna attend your lectures at Monash and enjoy the rest of my life doing the kind of maths you're reciting to us in every single dope video of yours!
@@42carlos don't get too worked up mate. Produce yourself a new daughter, and get her into maths quickly. And then, when people beg for her number, you don't have to share, and then they make more maths daughters, ad infinitum. This will spawn a world of young girls with a passion for mathematics.
@@42carlos I said it because it's easy to say, but that doesn't necessarily equate to easy to do. Let's see. If a woman wants a daughter, she just has to give a man some bombastic side eyes and bring him home for the night. We can't replicate that with the same level of efficiency I don't doubt. Adoption maybe? Then you don't even have to do production. But then you've got to get a baby, or some passions will already be built in. Hmmm. On second thoughts, maybe it would be easier to get her number. Or just become a maths teacher, find out which girls have the greatest passion for maths, get their numbers, share some of the more interesting maths videos you can find, and wait for reciprocation.
This video was basically the final week of my very first analysis course at uni, and you explained it brilliantly. Maths is one of those things that never makes sense the first time, but then becomes crystal clear the second time. One extra thing that could have been in this video was a bit more on why the positive and negative terms of a conditionally convergent series sum to infinity, because it’s not obvious in general unlike the other key fact about them tending to zero. *Edit* Thinking about it a second time, I’m not sure if you could do that without a full mathematical proof, and it’s at least well-known for the harmonic series, so maybe it was best left unexplained.
Yeah, it is obvious. Think about it: if the positive part and the negative part both had finite sums, the series would converge absolutely. If exactly one of them were infinite, the partial sums of any ordering would increase (or decrease) without bound, since they consist of a bounded contribution from one side plus an unbounded one from the other. So for a conditionally convergent series the only remaining option is that both parts are infinite.
Fantastic stuff!!! Actually I found a pattern in your videos. With each new video, the length of your channel's supporter list at the end grows enough to conclude that the length of subsequent videos approaches infinity!
If you use surreal numbers you can make the paradox "disappear". This summation is an infinite set of games which has a total game value of ON (infinite moves for Left Player) + OFF (infinite moves for Right Player) better known as DUD (Deathless Universal Draw). The winning move is not to play, because there is no winning move. Surreals make infinity easy and clean!
Every so often I think to myself, "It sure has been a while since Mathologer put out a new video..." and it seems that more often than not, you come through with something lovely in a day or two :)
Would you please aim to think of this again next Thursday? I have no plans for that night and a cool new Mathologer video would fit in very well indeed 😅
Reminds me of something i looked into once: the random harmonic series. That is, the harmonic series with the sign of each term chosen by a coin flip. The resulting series converges almost surely, and it turns out it has some neat properties as a random variable.
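A quick simulation (my own sketch; the seed and term counts are arbitrary choices) shows the almost-sure convergence: once you are far out, the partial sums barely move.

```python
import random

# Random harmonic series: sum of s_n / n with each sign s_n a fair
# coin flip. It converges almost surely; every run gives a
# different random limit.
random.seed(1)

N = 200_000
partial = 0.0
checkpoint = 0.0
for n in range(1, N + 1):
    partial += random.choice((-1.0, 1.0)) / n
    if n == N // 2:
        checkpoint = partial  # partial sum halfway through

print(partial)                    # one sample of the random limit
print(abs(partial - checkpoint))  # tail movement: tiny
```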
Although there are plenty of great math channels on youtube nowadays, your videos are the best at conveying a sense of discovery, which makes me interested in mathematics in the first place. Keep them coming :) @@Mathologer
Thank you mathologer. You are a great fountain of knowledge. And genius, in your ability to provide visual explanations. You have a gift--that you know this stuff--and a great gift that you teach it to us for free.
Michael Penn had a great video a week ago about the alternating harmonic series and a proof that the rearrangement with repeating blocks of m positive and n negative terms sums to ln(2) + 1/2 ln(m/n).
Will check it out at some point. Probably replicates the standard approach to this problem as outlined on the wiki page on the Riemann rearrangement theorem?
@@Mathologer Actually he does it by converting the series into a limit of partial sums. Then after rearranging, you can get partial sums that can be expressed as a harmonic number. Adding and subtracting some specific logarithms to each partial sum, you can construct the Euler-Mascheroni constant times some coefficient. These coefficients times the Euler-Mascheroni constant for each partial sum cancel each other out, leaving him with only the logarithms he added in, which in their limit become the formula mentioned in my previous comment.
Yes, I think these two videos complement each other very well. This one gives a good visual understanding of it and Michael Penn's does the algebra and makes it rigorous with a few interesting insights and results. It's at ruclips.net/video/5lR3y1bTFZ8/видео.htmlsi=pQZoiJsSX4W1SrTO
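For anyone who wants to check the ln(2) + 1/2·ln(m/n) formula numerically, here is a small Python sketch (my own, not code from either video) that rearranges the alternating harmonic series into repeating blocks of m positive and n negative terms:

```python
import math

# Rearrange the alternating harmonic series into repeating blocks
# of m positive (odd-denominator) terms and n negative
# (even-denominator) terms, and compare the partial sums with
# ln(2) + (1/2) * ln(m/n).
def rearranged_sum(m, n, blocks):
    total = 0.0
    p = q = 0  # next odd / even reciprocal to use
    for _ in range(blocks):
        for _ in range(m):
            p += 1
            total += 1.0 / (2 * p - 1)
        for _ in range(n):
            q += 1
            total -= 1.0 / (2 * q)
    return total

results = {}
for m, n in [(1, 1), (2, 1), (1, 4)]:
    results[(m, n)] = rearranged_sum(m, n, 100_000)
    print(m, n, results[(m, n)], math.log(2) + 0.5 * math.log(m / n))
```

With m = n = 1 this is just the usual order and gives ln 2; with one positive per four negatives the formula predicts (and the computation confirms) a sum of 0.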
I learn more from this channel than from my years of Calculus 1, 2, and 3. If I had watched these kinds of videos before my classes, I would have understood those classes much better.
Your videos are amazing! This is the first one I've watched in a year or so, and I'm just as amazed by this one as by your earlier ones. I learned two important things from this video. First, I learned a very intuitive visual proof that the alternating harmonic series converges to ln(2). Second, I learned another very intuitive proof of Riemann's rearrangement theorem, which I never even knew how to prove before! As always, excellent job!
One has to be careful when dealing with infinite series, as not all "infinities" are equal. By taking m positive terms from one infinity and subtracting n negative terms from the other, you no longer have a one-to-one correspondence between the terms of these two infinities, so, as you pointed out, the series can be made to converge to any number you choose.
Not a mathematician, but watch a lot of your stuff, and find it fascinating. It just seems on some level, that because you don't use an equal number of terms, you're cheating. That it's not just a series, but a series accompanied by another rule that says how many terms you can use. Anyway, it was fascinating.
I'm so happy to have seen your video on anti-squish shapes before this one! That first proof was a real beauty, which I would not have been able to fully appreciate otherwise.
I remember the first and last time I watched you, like 10 or 11 years ago; I avoided you because it was so hard for me to watch and understand math videos in English (Spanish is my first language). Now I'm very happy because I can watch, understand and learn from you.
I effectively have the math education of a 12 year old. I can't understand even the most basic algebraic... well, anything. These videos entertain me. Please never stop making them.
This is a really cool explanation of natural logarithms. I remember learning about regular logs and natural logs in algebra 2 but we never really learned what they were or how they came about so to learn this is very cool. Really tempted to get a refresh on logs now lol
If you have qm positive terms and qn < qm negative, you obtain 1/(qn+1) + ... + 1/(qm) (the first qn reciprocals cancelled ). Squish by q to make them 1/q wide each. Stretch (multiply) by q to obtain the heights 1/(n+1/q),...,1/m. The reciprocals n+1/q, ..., m are evenly spaced between n and m, giving you the area under 1/x as q approaches infinity.
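A numerical illustration of this Riemann-sum argument (my own sketch; the values of m, n and q are arbitrary):

```python
import math

# After the first q*n reciprocals cancel, the remaining block
# 1/(q*n+1) + ... + 1/(q*m) is a Riemann sum for the area under 1/x
# between n and m, so it approaches ln(m/n) as q grows.
m, n = 3, 2
for q in (10, 100, 10_000):
    s = sum(1.0 / k for k in range(q * n + 1, q * m + 1))
    print(q, s, math.log(m / n))
```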
Edit: Added proof and precise statement.

Clearly, this alternating over- and undershooting works for any two sequences with the properties that you mentioned, but when you presented the nice pattern of m | n -> ln(m/n) for two harmonic series, I had the thought that you should be able to get any limit x by employing a sequence of positive rationals (m_k / n_k)_k converging to e^x.

Definition: For natural numbers m and n, let m | n denote the series from the video, i.e. sum_{k=1} (sum_{i=(k-1)m+1}^{km} 1/i) - (sum_{i=(k-1)n+1}^{kn} 1/i). For two sequences of natural numbers (m_k)_k and (n_k)_k, let (m_k)_k | (n_k)_k denote the series sum_{k=1} (sum_{i = (sum_{j=1}^{k-1} m_j) + 1}^{sum_{j=1}^k m_j} 1/i) - (sum_{i = (sum_{j=1}^{k-1} n_j) + 1}^{sum_{j=1}^k n_j} 1/i).

Statement: Given two sequences of natural numbers (m_k)_k and (n_k)_k such that (m_k/n_k)_k converges to y, their series (m_k)_k | (n_k)_k converges to ln(y).

Proof: Let m/n be a positive rational number not equal to y and consider the corresponding m | n series. By convergence of m_k / n_k, there is a natural number N such that | m_k / n_k - y | < | m/n - y | for all k >= N. Rearranging finitely many terms keeps the limit, and we can do that to m | n so that it matches the first N terms of the (m_k)_k | (n_k)_k series. If m/n is larger than y, then the rest of the terms of the rearranged m | n series are larger than those of (m_k)_k | (n_k)_k, thus bounding it from above; similarly from below if m/n is less than y. Thus (m_k)_k | (n_k)_k is convergent with limit less than ln(m/n) for all m/n larger than y, and larger than ln(m/n) for all m/n less than y, meaning that the limit must be ln(y).
@@Mathologer I wonder if it generalizes to other conditionally convergent series. Here I notice that ln of course comes in because the integral of 1/x is ln(x). On second thought, the behavior of 1/x under scaling of x is quite essential for these limits, so it's probably not very straightforward.
I am always so surprised about how you find these topics and incredible visual and pretty proofs, even of facts I already know and know how to prove (not in a pretty way but rather more technical proofs).
Infinite math is the first subject where you can go really wrong if you’re not careful. In arithmetic and basic algebra, you learn what you can do. Avoiding dividing by 0 is the first hint of this world, but it’s momentary. But calculus is all about avoiding the paradoxes of infinity. It’s the equivalent of moving from a steep hike to mountain scrambling. Then when you get to subjects like algebraic geometry and Lie algebras, the abstractions become so complicated they can become incomprehensible to those not experienced in the field.
An alternative to the iterative method of finding the exact rearrangement for Pi (or any other number). Suppose we have some rearrangement of the positive and negative harmonic sequences (1+1/2+... -1-1/2-...). Consider the first T terms. Denote M(T) the number of positive elements and N(T) the number of negative elements. Naturally M(T)+N(T)=T. In the video you used regular rearrangements, based on fixed parameters m,n, and found that SUM --> ln(m/n). This is true in general. SUM --> ln(M(T)/N(T)). Testing your regular rearrangements. Denote t = m+n, T = at+b, then M(T) = am+b, N(T)=an. M(T)/N(T) --> m/n. You could choose any sequence M(T) such that M(T)/N(T) --> e^Pi. Or M(T) ~ T/(1+e^-Pi)
Another way to think about this sum is that you need to group the expanded form into groups of 3. -- The rationale for this is that the (1/2 - 1) etc. are inseparable. By shifting the terms across multiple groups you are not accounting for the adjusted denominator. I.e. 1/1 + (1/2 - 1/1) = 1/2 -- the 1/1 terms are alike and cancellable 1/3 + (1/4 - 1/2) = 4/12 + (3/12 - 6/12) = 1/12 1/5 + (1/6 - 1/3) = 6/30 + (5/30 - 10/30) = 1/30 etc. It looks like this generalizes to 1/x + (1/(x+1) - 1/((x+1)/2)) = 1/(x*(x+1)), but I'm not currently sure how to prove that. This gives the sequence 1/2 + 1/12 + 1/30 + ... which is cleary between 0.5 and 1 (the first term is 0.5 and the other terms get exponentially smaller, so the other terms cannot sum to 0.5).
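The generalisation above does hold: for odd x we have 1/((x+1)/2) = 2/(x+1), so each group collapses to 1/x - 1/(x+1) = 1/(x*(x+1)). A quick check (my own sketch), which also shows the regrouped series 1/2 + 1/12 + 1/30 + ... approaching ln 2, the sum from the video:

```python
import math
from fractions import Fraction

# Check the grouping identity from the comment above exactly:
# 1/x + 1/(x+1) - 1/((x+1)/2) = 1/(x*(x+1)) for odd x,
# using 1/((x+1)/2) = 2/(x+1).
for x in range(1, 100, 2):
    lhs = Fraction(1, x) + Fraction(1, x + 1) - Fraction(2, x + 1)
    assert lhs == Fraction(1, x * (x + 1))

# The regrouped series is sum over k of 1/((2k-1)*2k), which is the
# alternating harmonic series with consecutive pairs combined:
s = sum(1.0 / ((2 * k - 1) * 2 * k) for k in range(1, 100_001))
print(s)  # approaches ln 2 = 0.693...
```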
For me, the explanation is the pattern... Although every term EVENTUALLY cancels out, until they do, a sum exists. I have noticed this on a huge number of infinite series... It's like filling and draining a sink... It never goes dry.
Rearranging terms can affect the formulation when you're not defining the amount you're using properly. 1 + (1/2) - 1 = 1/2 ( 1/3) + (1/4) - (1/2) = 1/6 (1/5) + (1/6) - (1/3) = 1/30 The top 2 terms can approach infinity faster than the bottom term. If we were to write it out, we would actually get; Sum_{1}^{infinity}(1/k) - Sum_{1}^{infinity/2} (1/k) This means the numerator would still be increasing for a half-infinite amount of times while the denominator has reached it's goal. According to my maths, a half-infinite can be expressed by (-1)!/2 = (-1)(-2)(-3)!/(2) = (-1)²(-3)! = (-3)! so I'll be using this term ahead. We can say our denominator has reached the point of -(1/(-3)!) when our numerator has hit the point of 1/(-1)!, or 1(0) If we continue to add to the denominator until we get to a full (one) infinite, we get -(1/((-3)! + 1)) - (1/((-3)! + 2)) ... = Until we get to... -(1/ 1/((-3)!) + (-3)!) = -(1/(-1)!) = -1(0) This would finally cancel out our numerator properly. Of course, this would give us a ton of expansion, but the same thing happens in the numerator and it all cancels out. If we don't add the rest of the terms into the denominator, our numerator has an additional Fraction of an Infinite amount of Zeroes to different powers. These infinitesimals combine to ln(2) over the course of the the half-infinite summation. Basically if you add 1(-3)! A half infinite amount of times, you get, (-3)! × (1/(-3)!) = 1. If the denominator slowly increases to (-1)! Along the way, it can't quite reach 1. 1 / ((-3)! + 1) = 1 / (1/2(0) + 1) = 1 / ((1 + 2(0)) / 2(0)) = 2(0) / (2(0) + 1) Divide both sides by 2 = 1(0) / (1(0) + 1/2) = 2(0) 1 / ((-3)! + 2) = 1 / ((1/(2(0))) + 2) = 1 / (((1 + 4(0)) / 2(0)) = 2(0) / (4(0) + 1) Multiply by 2 (1/2) × (4(0) / (4(0) + 1) Sacrifice blood (1/2) × ((1(0)) / (1/4)) (1/2) × 4(0) = 2(0) 1 / ((-3)! 
+ 3) = 1 / ((1 + 6(0)) / 2(0)) = 2(0) / (1 + 6(0)) Times 3 = (1/3) × (1(0) / (1/6)) = (1/3) × 6(0) = 2(0) But eventually it'll reach halfway to (-1)! Which is ((-1)! + (-3)!) / 2 = ((1/0) + (1/2(0)) / 2 = (3/2(0)) / 2 = 3/4(0) = (3)(-1)!/4 1/0 - 3/4(0) = 1/4(0) Already at this point we can see that (((1/4(0)) × 2(0)) + (1/4(0) × (3(0)/4) / 2 < 1/2(0) × 2(0) = 0.5 + 3/32 < 1 = 0.59375 The rate it decreases also suggests ln(2) could be in range (I'm just not doing the calculations). We just need to remember that a lot of it will be close to the point where it's equal to 1(0) and our average will favor 1, increasing from 0.59375 to ln(2) as we continue to calculate.
@Mathologer Am I right in saying this? Since area of (1/x) is lnx and if bounds go from 0 to 2, we get ln2 as area. Notice that the graph of (1/x) is symmetric about y=x. Initially we are finding the area under the graph starting a x=1 and going to right. If we reflect the graph about y=x, then the area we found from x=1 to x = infinity is exactly superimposed on area of (1/x) which is lnx from x=0 to x=2. This means that this infinite sum is really = ln2.
Since the sum of the prime reciprocals also diverges incredibly slowly and primes are arbitrarily large, the same process can be applied to get an alternating sum of prime reciprocals to converge to any real number. Fun stuff!
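A rough Python sketch of that process (my own illustration; the target 1.0, the sieve bound and the step count are arbitrary choices). Since the prime reciprocal sum diverges extremely slowly, a large target would need astronomically many primes, but a small one is quick:

```python
import math

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(range(i * i, n + 1, i))
    return [i for i in range(n + 1) if sieve[i]]

# Greedy over/under rearrangement of the signed prime reciprocals
# (+1/2, +1/3, +1/5, ... and -1/2, -1/3, -1/5, ...) aimed at 1.0.
primes = primes_up_to(2_000_000)
target = 1.0
total = 0.0
pos = neg = 0  # index of next unused positive / negative prime

for _ in range(50_000):
    if total < target:
        total += 1.0 / primes[pos]
        pos += 1
    else:
        total -= 1.0 / primes[neg]
        neg += 1

print(total)  # hovers right around the target
print(pos, neg)
```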
Wow, you're actually awesome, dude... I envy anybody who got to have you as a teacher. Nobody has the right to expect the awesomeness that you provide as a teacher.
Using a short Java program I wrote, it looks like once you get past the initial 13 terms to overshoot pi and subtract, it takes between 8 and 9 additional terms to overshoot pi again, and then you subtract one term. Additionally, it alternates adding 8 and 9 terms before you subtract one term (you always only need to subtract 1 term), but eventually there's two 9-term intervals and the pattern continues. What is interesting is that the number of "intervals of intervals", aka the number of times this alternation takes place before you get two 9's, varies initially but settles on the pattern 37, 39, 39, 39, then back to 37. However, every now and then there's four 39's before going back to 37. It looks like this pattern has a pattern as well. I suspect these inner patterns continue on due to the irrationality of pi and there is never a "straightening out" of the pattern. Edit: I am trying out different values for what you want the series to converge to and it's pretty interesting. First of all, for every number you only ever need to subtract one term to get the current value to go below the desired value. For every integer you try, the number of terms before it overshoots generates a pattern with little deviations in it. I'm thinking the patterns of the little deviations have their own patterns, and this extends infinitely. Is anyone aware of someone coming across this and looking into it more? I'm thinking it would be fascinating to generate a "3d graph" of desired values vs how the series behaves, but I need to get to my computer to try this out. I'll let you guys know what I find.
15:25 - A fun thing about this strategy that you didn't mention is that you will always only ever need one negative term following each sequence of positive terms to bring you back down below pi. I'll leave proof of this as an exercise for the reader.
Very well spotted. I actually did not mention this on purpose because by the time I use negative terms for the second time I was already setting up the algorithm to be applicable to any target number :)
You are very well acquainted with your snakes. I'm still in the process of learning how to tame and manipulate them. I hope I can become a great snake charmer as you are someday.❤
Your “every number” demonstration provides a mapping, akin to the Minkowski Question Mark Function, between the reals and binary fractions. Begin with 0. and, with your chosen irrational value, write as many 1s as there are positive fractions before overshooting. Then write 0s for the negative fractions before undershooting. So, for example, ln(2) would map to 0.101010101…. In base 10, this is 2/3.
dude! at the end of the video, the green part's formula for area was given, but it was the area under the curve 1/x from 2 to some mystery number near 4! what was that number near 4‽
I like math. This is a very well known and important problem: the question of when a series can be summed. It took decades, if not centuries, to answer. A series is summable in this strong sense if the sum is finite and does not depend on the order of the terms. For series with positive terms the answer is obvious. But if the sign changes constantly and the terms are sufficiently large, any result can be achieved.
For any finite number of terms, adding up both summations isn't the same as adding up two terms of one summation and one term of the other. Therefore you can't expect the combined summation to cancel out to 0 for an arbitrary number of terms.
Marvelous! I'm pretty sure that using the strategy of approximating a number like at min. 16:30 we create a kind of base numeration system! In that, pi = 13, 21, 58 ...
A) If both the negative and positive sums are finite, the series is absolutely convergent (resists rearrangement).
B) If only the negative or the positive sum is infinite, every reordering blows up to (neg./pos.) infinity.
C) If L was the limit and a>0, there would have to be a last partial sum outside the interval (L-a/2, L+a/2). But that means all later terms must be in (-a,a). Since a is arbitrary, the terms must get (and stay) arbitrarily small. If they don't, the partial sums keep "jumping away too far" from any potential limit.
That's everything that can happen if there is at least one reordering of the series that converges. Otherwise you also have to take things like 1-1+1-1+ ... into consideration :)
Someone send this video to Brady of Numberphile. I would not be surprised if he rearranges the positive and negative terms of the alternating harmonic series and still ends up with his one and only favorite -1/12. That is why I love Mathologer. Not only do you know what you are talking about, but you explain it in a way that is both mathematically rigorous and easy to understand.
Hello Mr. Polster. Thank you for the videos. I really like your style! The only problem I have with your videos is probably (?) uncompressed audio. Some parts are really quiet, and the next moment it's frighteningly loud. I'm not an audio engineer, so I might be wrong, but it would be more comfortable for listeners if you applied a compressor to the audio. Then it would be easier to find an appropriate volume to watch your awesome videos.
Both the audio files of me speaking and the music that I am working with are .wav files to start with. Things get recompressed when I bundle everything together in Premiere, and I am sure that RUclips does some more recompressing. I personally and my proofreaders don't experience any issues with the audio, and I also only very rarely get anybody commenting on audio issues in the comments. One exception is this early video ruclips.net/video/jcKRGpMiVTw/видео.html. Are you using headphones?
@@Mathologer Got it! I'm listening through my phone's mono speaker. It's around 6 years old, maybe that's the problem. I'll try it with the headphones.
Another one I like is 1+2+2+2+2+2... You can rewrite it as 1+2+(3-1)+(4-2)+(5-3)..., which feels like it could be interpreted as equalling zero. It's the nice idea of the difference between using "x=infinity" and letting x approach infinity (getting arbitrarily large). Very nice problems :)
An important difference here is that the partial sums of your example do not converge to a single number. Your series is divergent whereas the one in the video is convergent :)
At 3:00, what sticks out to me immediately is that the negatives are cancelling out earlier positive terms, and there are always going to be positive terms that aren't cancelled-out. It's a bit like the dodgy guys from Numberphile trying to tell us a divergent sum totals to -1/12.
Pretty much as I say at this point, there is a conflict between the mathematically sanctioned definition of the sum of an infinite series (= the limit of the finite partial sums) and what we'd expect the sum to be based on the pairs of terms cancelling out. The main message of this video, apart from the nice visual proofs at the beginning, is that the sums of infinite series don't necessarily commute. But that does not mean that these sums are not useful or don't make sense :) I guess the next question is: why don't we define the sum to be zero when terms cancel, right? Well, that is because for most infinite series the terms don't cancel in pairs, so our definition would not apply to most infinite series and therefore wouldn't be very useful.
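The "limit of the finite partial sums" definition is easy to watch in action for the series from the video. A minimal Python sketch (the helper name `partial_sum` is mine):

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
# The "sum" of the series is, by definition, the limit of these partial sums.
def partial_sum(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, partial_sum(n))
print("ln(2) =", math.log(2))
```

The printed partial sums creep toward ln(2) ≈ 0.6931, exactly as the definition demands.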
??? What video did you watch? The whole point of the visualisation at the beginning of this video is to show you a proof off the beaten track that the partial sums (= the areas of the snakes) converge to ln(2) :)
Eddie suggested that I ask the keen among you the following nice question (first watch the video): How many different ways are there to rearrange a conditionally convergent series to get the sum π? Yes, of course, infinitely many. The real question is whether there are countably infinitely many or uncountably infinitely many ways.
That sounds like it would be related to the amount of ways you can permute an infinite list, which sounds uncountable to me.
It should be uncountably infinite: if you list the rearrangements that add to π in a table, you should be able to do the same thing as in Cantor's diagonal argument to show that you have not listed every single way.
(Edit: I only just realised it said sum to π and not sum to any number)
@@Josh-dj1bl You change the structure by permuting infinitely many terms, and that doesn't guarantee the sum stays the same. And there are already uncountably many real numbers that must be covered by those permutations. My intuition is that it is countably infinite.
It's uncountable because for any arbitrarily large integer n, there are 2ⁿ ways to pick the first n terms (either positive or negative) and then the remaining terms are fully determined by the over/under method.
@@ckq Doesn't that mean it should be countable? 2ⁿ should be countable.
This demonstrates something non-math people don't get: infinity is full of trap doors, subtleties, and other frustrations. The early infinity theorists like Cantor nearly lost their minds over this kind of thing.
Yes, poor Cantor :(
I would not exactly agree. Sometimes it works the opposite way, and being a non-mathematician helps you not get lost in math abstractions, or, if you wish, not fall into the traps, which are still purely mathematical.
come down from that horse, friend. I have never had an aptitude for maths, I learn much better from visualizations than still reference frames. i.e. plotting by hand vs watching a video of the plot. hell, one does not even require numbers to demonstrate the nuances of infinity, just a line with evenly spaced marks, and a little imagination.
This is less a problem of infinity, and more a problem of simplifications, and particularly of failing to keep track of your simplifications and their implications. There's probably a derivative of units that should be devised to keep track of such things (or perhaps it's already within some mixture of calculus's limits and the concept behind "big O notation").
I think the problem with infinity is the assumption that it can be accurately described in a finite world
Sure, we can make pretty good approximations of how it might behave, but they're just that: approximations
I really love how there's subtitles for every video since I'm still learning English
Thanks for the great content
Takes time to do, but is definitely worth it. Also, this did become a lot easier since RUclips now allows us to input the full script and then creates proper subtitles automatically from that :)
@@Mathologer Oh, I didn't realize that was a thing, that's pretty cool!
An Italian math and CS senior lecturer here. Just want to share that, as happened to the Mathologer, when my professor did the Riemann rearrangement theorem in Real Analysis I, I, as a freshman, was totally upset and amazed by this counterintuitive result. Congrats to @Mathologer, whose videos always make us see the things we know from new and interesting perspectives.
That's great :)
It's interesting to look at the patterns of positive & negative terms when rearranging to Pi. The first 10 terms on the positive side are: 13, 35, 58, 81, 104, 127, 151, 174, 197, 220. If you look at the differences between terms, you get:
22,
23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 24,
23, 23, 23, 23, 23, 23, 23, 24 ...
You get similar almost repeating patterns when your target is e, or the golden ratio.
I think that what's going on here is that ln(23) is close to Pi, so we are very close to the fixed 23 positive 1 negative ratio.
I’m not a mathematician but I was thinking of a question right in line with this.
I can see 22/7 (ish). 🤯. (Edit: on second thoughts, 23+1/7 … which then generalises to continued fractions approximations to e^π, if you want to follow that rabbit-hole.)
Very well spotted. The reason for this is that Gelfond's number is approximately equal to 23. It turns out that if an arrangement of our series has the sum pi, then the ratio of the number of positive to negative terms in the finite partial sums of the series converges to Gelfond's number. This is just one step up from what I said about us being able to get arbitrarily close to pi by turning truncations of the decimal expansion of Gelfond's number into fractions. Similarly for other target numbers. For example, to predict what the repeating pattern for e is, you just have to calculate e^e :)
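For anyone who wants to reproduce these run lengths, here is a quick sketch of the over/under construction, assuming the video's ±1/k version of the series (variable names are mine):

```python
import math

TARGET = math.pi
total, pos, neg = 0.0, 1, 1   # next positive term is +1/pos, next negative is -1/neg
runs, run = [], 0             # how many positive terms precede each negative term
for _ in range(200000):
    if total <= TARGET:       # under the target: add the next positive term
        total += 1.0 / pos
        pos += 1
        run += 1
    else:                     # over the target: subtract the next negative term
        total -= 1.0 / neg
        neg += 1
        runs.append(run)
        run = 0

print(runs[:9])               # first run is 13, then runs of 22-24, as listed above
print((pos - 1) / (neg - 1))  # ratio of positive to negative terms -> e^pi ≈ 23.14
```

The first runs come out as 13, 22, 23, ... matching the cumulative counts 13, 35, 58, ... quoted in the thread, and the ratio of counts indeed settles near Gelfond's number e^π.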
@@andrewkepert923 When doing path representations of continued fractions of square roots, I noticed that sqrt(13) is the first where the integer part does not unite into repeating period.
For example, the rabbit holes of sqrt(n^2+1) for 2, 5 and 10 are [1; 2], [2; 4] and [3; 6]. Using < for L and > for R for nicer visuals, and starting with < for the whole number 1, < for sqrt(2). Likewise >>>>>, the integer part and the repeating period with length 10 don't combine into a repeating string.
Instead, when combined and arranged in the length of repeating period, there's a bit turn at 3rd digit.
>>>>>>
>>>>>>
>>>>>>
etc.
After 13, 29 and 41 had the same property. Haven't looked further so far, just thought worth mentioning, as 13 is an interesting "turning number" in many ways.
Is this the answer to Eddie's question? @@Mathologer Somebody pointed out there that we first need to know the value to find the possible expansion series. For every finite approximation we have a countable number of expansion possibilities; multiply that by countably many approximations and it is still countable. Or zero, if none of them approximates pi :) For sure the infinite solution is computationally hard.
There's actually a sort-of-explanation for why e^π is roughly π+20. If you take the sum of (8πk^2-2)e^(-πk^2), it ends up being exactly 1 (using some Jacobi theta function identities). The first term is by far the largest, so that gives (8π-2)e^(-π)≈1, or e^π≈8π-2. Then using the estimate π≈22/7, we get e^π≈π+(7π-2)≈π+20.
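Both steps of this estimate are easy to sanity-check numerically; nothing in this sketch goes beyond the identities quoted in the comment:

```python
import math

# Theta-identity check: the sum of (8*pi*k^2 - 2) * e^(-pi*k^2) should be exactly 1.
total = sum((8 * math.pi * k * k - 2) * math.exp(-math.pi * k * k) for k in range(1, 20))
print(total)

# Keeping only the (dominant) k=1 term gives e^pi ≈ 8*pi - 2,
# and with pi ≈ 22/7 that turns into the famous e^pi ≈ pi + 20.
print(math.exp(math.pi), 8 * math.pi - 2, math.pi + 20)
```

The full sum comes out as 1 to machine precision, while e^π and π + 20 agree to about three decimal places.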
That is a very interesting comment. Is this something you noticed before?
@@Mathologer I wouldn't be surprised if it was already published somewhere, but I haven't been able to find it anywhere. I was working on some problems involving modular forms and I tried differentiating the theta function identity
θ(-1/τ)=√(τ/i)*θ(τ). That gave a similar identity for the power series Σk^2 e^(πik^2τ). It turned out that setting τ=i allowed one to find the exact value of that sum.
@@MathFromAlphaToOmega That's great. Learned something new :)
@@Mathologer I don't know if it's new, but it's certainly not well known. To quote the Wolfram MathWorld article "Almost Integer":
"This curious near-identity was apparently noticed almost simultaneously around 1988 by N. J. A. Sloane, J. H. Conway, and S. Plouffe, but no satisfying explanation as to "why" e^π-π≈20 is true has yet been discovered."
I don't know if this means anything but congrats for finding this fact and you just got a new subscriber! Let's blow your channel up everyone!
As soon as you moved the negative fractions below the top line, my first instinct was "Wait...isn't the top part 'outpacing' the bottom part?" Then I lost confidence when you collapsed them, lining up all pos and neg, lol. I was like "but, but, but...." Anyway, I love that stuff!
Yeah, infinite sets make sophistry an easy task, because standard logic dictates that there are no greater or lesser infinities.
Problems like this one, among others, prove that this is not the case; you just have to add to the infinity in a different direction.
@@wolvenedge6214 Yeah, I realized as he went along that the user is choosing more terms in a certain direction to ENSURE arrival at predetermined sum. This still leaves my intuition feeling that if the chosen number is positive, then the sum MUST contain more positive numbers than negative ones...then again...even that can be shown to be untrue, cuz one could arbitrarily choose more (but very small) negatives, and fewer (but very large), to arrive at the same number. Finally, however, the systematic "rule" that Mr Polster used, isn't arbitrary! I still feel the infinite positive set is larger than the negative set! BAH!
I had the same reaction
@@ABruckner8 You got me all confused since I thought that your first instinct is right. But I kept thinking and was wondering if you meant, that the positive direction would grow infinitely. Then I thought some more and I realized (hopefully that's right), that when you cancel the terms with each other what is left is an infinite series that converges.
Infinity welcomes careful drivers 😉
It is always a pleasure to watch your vids. Not only because these are great educational videos, but also because your voice and wordings make them even better
I think it's easy to get distracted by the fact that there is a matching negative for every positive term in the sequence. A similar paradox makes it more intuitive what's wrong with rearranging terms.
∞ = 1 + 1 + 1 +...
∞ = (2 - 1) + (2 - 1) + (2 - 1) +...
∞ = (1 + 1 - 1) + (1 + 1 - 1) +...
∞ = 1 + 1 - 1 + 1 + 1 - 1 +...
Then we can pull out positive and negative terms.
1 + 1 + 1...
- 1 - 1 - 1...
So every +1 is canceled by a - 1.
You can even create a mapping from the nth positive 1 to the n*2 negative term, so every positive term has a negative to cancel it.
This, to me, intuitively shows why you can't add infinite sums by rearranging terms. You need to look at how it grows as you add terms.
Infinite sums are always a process.
This clearly proves that 1+1+1+1+1+1+...=0.
I had the same thought, but I wasn't sure if it was a valid comparison since 1+1-1+1+1-1+1+1-1... is divergent and the sum in the video isn't. I don't really *get* infinite sums tbh.
Thank you. To me the mistake was obvious but hard to put into words.
An infinite series has a sum if the sequence of partial sums converges to a number. If there is such a number, then this number is the sum of our series. If no such number exists, the series does not have a sum. This is the official definition of the sum of an infinite series.
And so let’s consider the sequences of partial sums of these two series.
For 1 - 1 + 1 - 1 + 1 - 1 + ... the sequence of partial sums is 1, 0, 1, 0, 1, 0, 1, .... It alternates between 0 and 1, never settling down; and so as far as mathematics is concerned this series does not have a sum (at least to start with; see the discussion of supersums in some of my other videos)
For 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ..., the sequence of partial sums is 1, 0, 1/2, 0, 1/3, 0, 1/4, .... This sequence of partial sums converges to 0 and so this series has the sum 0 :)
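The two sequences of partial sums can be generated in a couple of lines (Python sketch):

```python
# Partial sums of 1 - 1 + 1 - 1 + ... alternate between 1 and 0 forever: no sum.
grandi, s = [], 0
for k in range(8):
    s += 1 if k % 2 == 0 else -1
    grandi.append(s)
print(grandi)  # [1, 0, 1, 0, 1, 0, 1, 0]

# Partial sums of 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ... converge to 0: the sum is 0.
cancel, s = [], 0.0
for k in range(1, 5):
    s += 1 / k
    cancel.append(s)
    s -= 1 / k
    cancel.append(s)
print(cancel)  # 1, 0, 1/2, 0, 1/3, 0, 1/4, 0
```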
with the last two videos this channel has outdone itself. I have seen and re-watched them several times and as an amateur and enthusiast I believe that they are the two best calculus lessons I have attended. So illuminating and profound, they hold together all those details that leave one confused in a school course and which here instead receive the right attention and are explained with incredible ease. Bravo! ❤
Glad you like them!
In case you are wondering, the notification for this video worked for me. With the crazy RUclips algorithms many creators are talking about these days, we need these notifications.
Well, it's a relief that this works for at least some of the regulars :)
This channel has brought me intellectual ecstasy for years
That's great :)
Relax my guy
well said for me.
Bro 💀
Aaaand SUBSCRIBED!!!
Squeezing and stretching the snake, that sounds like lots of fun, and the result is quite beautiful indeed.
Yes, a nice little discovery. Watch my last video on visual logarithms to find out where the idea for this visualisation came from :)
You've shown me the true beauty in math. Your videos are truly intellectually stimulating
Your videos have a tremendous impact on me, making me want to attend your lectures at Monash and enjoy the rest of my life with the kind of maths you're reciting to us in every single dope video of yours!
Well if you ever happen to be in Melbourne, drop by my office :)
My daughter (ninth grade) just sent me this video with the message, “this is so interesting!”
Thank you for the proud dad moment, Mathologer!
That is awesome!
Bro whats her number 😭
@@42carlos don't get too worked up mate. Produce yourself a new daughter, and get her into maths quickly. And then, when people beg for her number, you don't have to share, and then they make more maths daughters, ad infinitum. This will spawn a world of young girls with a passion for mathematics.
@@xinpingdonohoe3978 >produce yourself a new daughter
That's the whole point, mate
@@42carlos I said it because it's easy to say, but that doesn't necessarily equate to easy to do.
Let's see. If a woman wants a daughter, she just has to give a man some bombastic side eyes and bring him home for the night.
We can't replicate that with the same level of efficiency I don't doubt.
Adoption maybe? Then you don't even have to do production. But then you've got to get a baby, or some passions will already be built in.
Hmmm.
On second thoughts, maybe it would be easier to get her number. Or just become a maths teacher, find out which girls have the greatest passion for maths, get their numbers, share some of the more interesting maths videos you can find, and wait for reciprocation.
This video was basically the final week of my very first analysis course at uni, and you explained it brilliantly. Maths is one of those things that never makes sense the first time, but then becomes crystal clear the second time.
One extra thing that could have been in this video was a bit more on why the positive and negative terms of a conditionally convergent series sum to infinity, because it’s not obvious in general unlike the other key fact about them tending to zero. *Edit* Thinking about it a second time, I’m not sure if you could do that without a full mathematical proof, and it’s at least well-known for the harmonic series, so maybe it was best left unexplained.
That's great :)
Isn't it because if one of the two didn't, the other would overwhelm it?
Yeah, it is obvious. Think about it.
Come up with any example. Say positive terms (1) or (1/k) and negative terms (0) or (1/2^k). Type them into a calculator in any order.
The partial sums increase without bound because they have a contribution of some negative partial sum ≤ L and some positive partial sum → ∞.
And obviously if both are finite then it converges absolutely. So the remaining option is both are infinite.
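For the alternating harmonic series, at least, the divergence of the positive part can be watched happening, very slowly. A sketch (the ½·ln(n) + 0.98 comparison is the standard asymptotic estimate):

```python
import math

# Positive part of the alternating harmonic series: 1 + 1/3 + 1/5 + ...
# It grows roughly like (1/2)*ln(n) + 0.98: without bound, but incredibly slowly.
def odd_part(n):
    return sum(1 / (2 * k - 1) for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, odd_part(n), 0.5 * math.log(n) + 0.98)
```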
Fantastic stuff!!!
Actually I found a pattern in your videos.
With each new video, the length of your channel's supporter list at the end grows enough to conclude that the length of subsequent videos approaches infinity!
I wish :)
If you use surreal numbers you can make the paradox "disappear". This summation is an infinite set of games which has a total game value of ON (infinite moves for Left Player) + OFF (infinite moves for Right Player) better known as DUD (Deathless Universal Draw). The winning move is not to play, because there is no winning move. Surreals make infinity easy and clean!
There is something weirdly relaxing and also beautiful watching the animations and the number somehow forming! ❤
Every so often I think to myself, "It sure has been a while since Mathologer put out a new video..." and it seems that more often than not, you come through with something lovely in a day or two :)
Would you please aim to think of this again next Thursday? I have no plans for that night and a cool new Mathologer video would fit in very well indeed 😅
9/10 is my birthday and I consider this video a lovely gift. Thank you very much, Mr. Polster!
Happy birthday Alexandre :)
Reminds me of something i looked into once: the random harmonic series. That is, the harmonic series with the sign of each term chosen by a coin flip. The resulting series converges almost surely, and it turns out it has some neat properties as a random variable.
Yes, I also once read an article about this. It may even be in my to do folder :)
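Out of curiosity, a tiny Monte Carlo sketch of that random harmonic series (the seed, term count and sample count are arbitrary choices of mine, not from any article):

```python
import random

random.seed(0)

# One sample of the random harmonic series: sum of +-1/k with fair coin-flip signs.
def sample(n_terms):
    return sum(random.choice((1, -1)) / k for k in range(1, n_terms + 1))

samples = [sample(10000) for _ in range(200)]
print(min(samples), max(samples))   # individual sums land all over a range
print(sum(samples) / len(samples))  # but the average sits near 0 by symmetry
```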
this guy is such a legit lecturer thanks alot
I actually teach maths at a university in Australia :)
Awesome! Your visual sum of ln(2) @22:04 can also be generalized: ln(m) = \sum_{i=n}^{m*n} 1/i for large n
Exactly :)
Your visual demonstrations are great and make very complex ideas simple to grasp.
Glad you think so :)
Banger video as always!
Sure hope so :)
Although there are plenty of great math channels on youtube nowadays, your videos are the best at conveying a sense of discovery, which makes me interested in mathematics in the first place. Keep them coming :) @@Mathologer
Thank you mathologer. You are a great fountain of knowledge. And genius, in your ability to provide visual explanations.
You have a gift--that you know this stuff--and a great gift that you teach it to us for free.
So nice of you :)
Michael Penn had a great video a week ago about the alternating harmonic series and a proof that any rearrangement with a repeating pattern of m positive and n negative terms sums to ln(2) + 1/2 ln(m/n)
Will check it out at some point. It probably replicates the standard approach to this problem as outlined on the wiki page on the Riemann rearrangement theorem?
@@Mathologer Actually he does it by converting the series into a limit of partial sums. Then after rearranging, you can get partial sums that can be expressed as a harmonic number. Adding and subtracting some specific logarithms to each partial sum, you can construct the Euler-Mascheroni constant times some coefficient. These coefficients times the Euler-Mascheroni constant for each partial sum cancel each other out, leaving him with only the logarithms he added in, which in their limit become the formula mentioned in my previous comment.
Yes, I think these two videos complement each other very well. This one gives a good visual understanding of it and Michael Penn's does the algebra and makes it rigorous with a few interesting insights and results.
It's at ruclips.net/video/5lR3y1bTFZ8/видео.htmlsi=pQZoiJsSX4W1SrTO
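The ln(2) + ½·ln(m/n) formula from this thread is easy to test numerically. A sketch (the block counts and helper name are mine):

```python
import math

# Rearrange the alternating harmonic series into repeating blocks of m positive
# terms (odd reciprocals) followed by n negative terms (even reciprocals).
# The claimed sum is ln(2) + (1/2)*ln(m/n).
def rearranged_sum(m, n, blocks):
    total, p, q = 0.0, 0, 0
    for _ in range(blocks):
        for _ in range(m):
            p += 1
            total += 1 / (2 * p - 1)
        for _ in range(n):
            q += 1
            total -= 1 / (2 * q)
    return total

for m, n in ((1, 1), (2, 1), (1, 4)):
    print(m, n, rearranged_sum(m, n, 100000), math.log(2) + 0.5 * math.log(m / n))
```

Note the fun special case m=1, n=4: the formula predicts ln(2) − ln(2) = 0, and the numerics agree.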
Ah, approximately 0.7, the number that shows up all the time when you deal with things like logarithms and roots.
That ln(2) trick is really something special. Thanks for showing it.
Yes, was really happy when I noticed that trick :)
sir, this was a marvellous throwback to your previous video. ❤
Yes, I stumbled across the idea for this visualisation while playing around with squishing and stretching in the last video :)
I learn more from this channel than from my years of Calculus 1, 2, and 3. If I had watched these kinds of videos before my classes, I would have understood them much better.
That's great. Mission accomplished :)
Great video. I missed the -1/12 in the end somehow 🙂
Your videos are amazing! This is the first one I've watched in a year or so, and I'm just as amazed by this one as by your earlier ones. I learned two important things from this video. First, I learned a very intuitive visual proof that the alternating harmonic series converges to ln(2). Second, I learned another very intuitive proof of Riemann's rearrangement theorem, which I never even knew how to prove before! As always, excellent job!
That's great. Mission accomplished as far as you are concerned :)
These animations were better than all my calculus teachers. Congrats on this awesome job
Glad you enjoyed these animations :)
15TH! Thanks for the deep math videos Mr Mathologer!
One has to be careful when dealing with infinite series, as not all "infinities" are equal. By taking m positive terms from one infinity and subtracting n negative terms from the other, you no longer have a one-to-one correspondence between the terms of these two infinities, so, as you pointed out, the difference can be made to converge to any number you choose.
Not a mathematician, but watch a lot of your stuff, and find it fascinating. It just seems on some level, that because you don't use an equal number of terms, you're cheating. That it's not just a series, but a series accompanied by another rule that says how many terms you can use. Anyway, it was fascinating.
Let A = Infinity 😂
I'm so happy to have seen your video on anti-squish shapes before this one! That first proof was a real beauty, which I would not have been able to fully appreciate otherwise
Being a squish and stretch master definitely helps :)
Hope this is good
But it always will be good
Of course it will be
Very interesting, this video expanded my knowledge on how infinite sums behave, thank you!
I remember the first and last time watching you, like 10 or 11 years ago, avoiding you because it was so hard for me to watch and understand math videos in English (Spanish is my first language). Now I'm very happy because I can watch, understand and learn from you.
Welcome back :)
I effectively have the math education of a 12 year old. I can't understand even the most basic algebraic... well, anything.
These videos entertain me. Please never stop making them.
That's great.
This is a really cool explanation of natural logarithms. I remember learning about regular logs and natural logs in algebra 2 but we never really learned what they were or how they came about so to learn this is very cool. Really tempted to get a refresh on logs now lol
Yes, very beautiful and really not many people know about this. Hopefully this video will make a difference in this respect :)
If you have qm positive terms and qn < qm negative, you obtain 1/(qn+1) + ... + 1/(qm) (the first qn reciprocals cancelled ). Squish by q to make them 1/q wide each. Stretch (multiply) by q to obtain the heights 1/(n+1/q),...,1/m. The reciprocals n+1/q, ..., m are evenly spaced between n and m, giving you the area under 1/x as q approaches infinity.
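The limit described here can be checked numerically for the ±1/k series: m positives, then n negatives, repeated, heads for ln(m/n). A quick sketch (names are mine):

```python
import math

# m of the next positive reciprocals, then n of the next negative reciprocals,
# repeating. After many rounds the partial sums approach ln(m/n), matching the
# squish-and-stretch area argument.
def m_bar_n(m, n, rounds):
    total, p, q = 0.0, 0, 0
    for _ in range(rounds):
        for _ in range(m):
            p += 1
            total += 1 / p
        for _ in range(n):
            q += 1
            total -= 1 / q
    return total

for m, n in ((2, 1), (3, 2), (23, 1)):
    print(m, n, m_bar_n(m, n, 100000), math.log(m / n))
```

The 23 | 1 case lands near ln(23) ≈ 3.135, which is why that ratio shows up when the target is π.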
2:30 makes sense: even if you take the original, it would be 1-1/2+1/3-1/4..., which becomes 1/2+1/12+..., so it will be over half but under 0.75
Beautiful as always 😍
Edit: Added proof and precise statement.
Clearly, this alternating over- and undershooting works for any two sequences with the properties that you mentioned, but when you presented the nice pattern of m | n -> ln(m/n) for two harmonic series, I had the thought that you should be able to get any limit x by employing a sequence of positive rationals (m_k / n_k)_k converging to e^x:
Definition: For natural numbers m and n, let m | n denote the series from the video, i.e.
sum_{k=1}^{infinity}
(sum_{i=(k-1)m+1}^{km} 1/i)
- (sum_{i=(k-1)n+1}^{kn} 1/i)
For two sequences of natural numbers (m_k)_k and (n_k)_k, let (m_k)_k | (n_k)_k denote the series
sum_{k=1}^{infinity}
(sum_{i = (sum_{j=1}^{k-1} m_j) + 1}^{sum_{j=1}^k m_j} 1/i)
- (sum_{i = (sum_{j=1}^{k-1} n_j) + 1}^{sum_{j=1}^k n_j} 1/i)
Statement: Given two sequences of natural numbers (m_k)_k and (n_k)_k such that (m_k/n_k)_k converges to y, their series (m_k)_k | (n_k)_k converges to ln(y).
Proof: Let m/n be a positive rational number not equal to y and consider the corresponding m | n series. By convergence of m_k / n_k, there is a natural number N such that | m_k / n_k - y | < | m/n - y | for all k >= N. Rearranging finitely many terms keeps the limit, and we can do that to m | n so that it matches the first N terms of the (m_k)_k | (n_k)_k series. If m/n is larger than y, then the rest of the terms of the rearranged m | n series are larger than those of (m_k)_k | (n_k)_k and thus bound it from above; similarly from below if m/n is less than y. Thus (m_k)_k | (n_k)_k is convergent with limit less than ln(m/n) for all m/n larger than y and larger than ln(m/n) for all m/n less than y, meaning that the limit must be ln(y).
Well spotted. That is correct :)
@@Mathologer I wonder if it generalizes to other conditionally convergent series. Here I notice that ln of course comes in because int 1/x = ln(x). On second thought, the behavior of 1/x with regard to scaling x is quite essential for these limits, so it's probably not very straightforward.
I am always amazed by your videos. Thank you.
Glad you like them!
This was an absolute banger. I can see so many uses for this. Thanks!
Mathologer - champion of infinite recursive convergent fractional series. Bravo!
I am always so surprised about how you find these topics and incredible visual and pretty proofs, even of facts I already know and know how to prove (not in a pretty way but rather more technical proofs).
:)
Infinite math is the first subject where you can go really wrong if you’re not careful. In arithmetic and basic algebra, you learn what you can do. Avoiding dividing by 0 is the first hint of this world, but it’s momentary. But calculus is all about avoiding the paradoxes of infinity. It’s the equivalent of moving from a steep hike to mountain scrambling. Then when you get to subjects like algebraic geometry and Lie algebras, the abstractions become so complicated they can become incomprehensible to those not experienced in the field.
I really like the infinite series type videos you make. Those are my favorite. 💙
Also some of my favourites :)
So beautiful! It's like the curve version of infinite fractions summing to previous fraction.
1/2+1/4+1/8.....1
1/3+1/9+1/27....1/2
etc
you always seem to have the best t-shirts
An alternative to the iterative method of finding the exact rearrangement for Pi (or any other number). Suppose we have some rearrangement of the positive and negative harmonic sequences (1+1/2+... -1-1/2-...). Consider the first T terms. Denote M(T) the number of positive elements and N(T) the number of negative elements. Naturally M(T)+N(T)=T. In the video you used regular rearrangements, based on fixed parameters m,n, and found that SUM --> ln(m/n). This is true in general. SUM --> ln(M(T)/N(T)). Testing your regular rearrangements. Denote t = m+n, T = at+b, then M(T) = am+b, N(T)=an. M(T)/N(T) --> m/n. You could choose any sequence M(T) such that M(T)/N(T) --> e^Pi. Or M(T) ~ T/(1+e^-Pi)
Nothing unusual, just a typical awesome video from Mathologer
Another way to think about this sum is that you need to group the expanded form into groups of 3. -- The rationale for this is that the (1/2 - 1) etc. are inseparable. By shifting the terms across multiple groups you are not accounting for the adjusted denominator. I.e.
1/1 + (1/2 - 1/1) = 1/2 -- the 1/1 terms are alike and cancellable
1/3 + (1/4 - 1/2) = 4/12 + (3/12 - 6/12) = 1/12
1/5 + (1/6 - 1/3) = 6/30 + (5/30 - 10/30) = 1/30
etc.
It looks like this generalizes to 1/x + (1/(x+1) - 1/((x+1)/2)) = 1/(x*(x+1)), but I'm not currently sure how to prove that.
This gives the sequence 1/2 + 1/12 + 1/30 + ..., which is clearly between 0.5 and 1 (the first term is 0.5, and the later terms shrink like 1/(x*(x+1)), so quickly that they cannot sum to another 0.5).
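For odd x the identity is quick to prove: 1/((x+1)/2) is just 2/(x+1), so each group collapses to 1/x + 1/(x+1) − 2/(x+1) = 1/x − 1/(x+1) = 1/(x*(x+1)). An exact check with Python's fractions:

```python
from fractions import Fraction

# Each group 1/x + (1/(x+1) - 1/((x+1)/2)) with odd x telescopes to 1/(x*(x+1)).
def group(x):
    return Fraction(1, x) + Fraction(1, x + 1) - Fraction(2, x + 1)

for x in (1, 3, 5, 7):
    print(x, group(x))   # 1/2, 1/12, 1/30, 1/56
```

And since 1/(x*(x+1)) = 1/x − 1/(x+1), summing the groups over odd x just telescopes back to 1 − 1/2 + 1/3 − 1/4 + ..., so the grouped series adds up to ln(2) ≈ 0.693, which indeed sits between 0.5 and 1.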
Always a pleasure to watch your videos!
Glad you like them!
Thanks!
Love your content! Educational, entertaining
I love when he smiles. It's a mix of a nervous and a sarcastic smile.
For me, the explanation is the pattern... Although every term EVENTUALLY cancels out, until they do, a sum exists. I have noticed this on a huge number of infinite series... It's like filling and draining a sink... It never goes dry.
Rearranging terms can affect the formulation when you're not defining the amount you're using properly.
1 + (1/2) - 1 = 1/2
( 1/3) + (1/4) - (1/2) = 1/6
(1/5) + (1/6) - (1/3) = 1/30
The top 2 terms can approach infinity faster than the bottom term.
If we were to write it out, we would actually get;
Sum_{1}^{infinity}(1/k) - Sum_{1}^{infinity/2} (1/k)
This means the numerator would still be increasing for a half-infinite amount of times while the denominator has reached its goal. According to my maths, a half-infinite can be expressed by (-1)!/2 = (-1)(-2)(-3)!/(2) = (-1)²(-3)! = (-3)! so I'll be using this term ahead.
We can say our denominator has reached the point of -(1/(-3)!) when our numerator has hit the point of 1/(-1)!, or 1(0)
If we continue to add to the denominator until we get to a full (one) infinite, we get -(1/((-3)! + 1)) - (1/((-3)! + 2)) ... =
Until we get to...
-(1/ 1/((-3)!) + (-3)!) = -(1/(-1)!) = -1(0)
This would finally cancel out our numerator properly. Of course, this would give us a ton of expansion, but the same thing happens in the numerator and it all cancels out. If we don't add the rest of the terms into the denominator, our numerator has an additional Fraction of an Infinite amount of Zeroes to different powers. These infinitesimals combine to ln(2) over the course of the the half-infinite summation.
Basically if you add 1(-3)! A half infinite amount of times, you get, (-3)! × (1/(-3)!) = 1. If the denominator slowly increases to (-1)! Along the way, it can't quite reach 1.
1 / ((-3)! + 1)
= 1 / (1/2(0) + 1)
= 1 / ((1 + 2(0)) / 2(0))
= 2(0) / (2(0) + 1)
Divide both sides by 2
= 1(0) / (1(0) + 1/2)
= 2(0)
1 / ((-3)! + 2)
= 1 / ((1/(2(0))) + 2)
= 1 / (((1 + 4(0)) / 2(0))
= 2(0) / (4(0) + 1)
Multiply by 2
(1/2) × (4(0) / (4(0) + 1)
Sacrifice blood
(1/2) × ((1(0)) / (1/4))
(1/2) × 4(0) = 2(0)
1 / ((-3)! + 3)
= 1 / ((1 + 6(0)) / 2(0))
= 2(0) / (1 + 6(0))
Times 3
= (1/3) × (1(0) / (1/6))
= (1/3) × 6(0)
= 2(0)
But eventually it'll reach halfway to (-1)! Which is ((-1)! + (-3)!) / 2
= ((1/0) + (1/2(0)) / 2
= (3/2(0)) / 2
= 3/4(0)
= (3)(-1)!/4
1/0 - 3/4(0) = 1/4(0)
Already at this point we can see that ((1/4(0)) × 2(0)) + ((1/4(0)) × (3(0)/4)) / 2 < (1/2(0)) × 2(0)
= 0.5 + 3/32 < 1
= 0.59375
The rate it decreases also suggests ln(2) could be in range (I'm just not doing the calculations). We just need to remember that a lot of it will be close to the point where it's equal to 1(0) and our average will favor 1, increasing from 0.59375 to ln(2) as we continue to calculate.
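For what it's worth, the rigorous core of the "full infinity minus a half-infinity" idea above is the finite identity H(2n) - H(n) = 1 - 1/2 + 1/3 - ... - 1/(2n), whose limit is ln(2). A quick numerical check (my own sketch, not part of the original comment):

```python
import math

def H(n):
    """n-th harmonic number: 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# H(2n) - H(n) equals the alternating partial sum 1 - 1/2 + ... - 1/(2n),
# the finite stand-in for "a full infinity of terms minus a half-infinity".
for n in (10, 100, 10_000):
    print(n, H(2 * n) - H(n))  # approaches ln(2) ≈ 0.693147...
```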
@Mathologer Am I right in saying this? Since the area under (1/x) is ln x, if the bounds go from 0 to 2, we get ln 2 as the area. Notice that the graph of (1/x) is symmetric about y = x. Initially we are finding the area under the graph starting at x = 1 and going to the right. If we reflect the graph about y = x, then the area we found from x = 1 to x = infinity is exactly superimposed on the area of (1/x), which is ln x, from x = 0 to x = 2. This means that this infinite sum is really = ln 2.
Finally NEW video! Thanks for sharing!
The piano at the end was nice. Real piano too; I could hear the pedals moving.
Since the sum of the prime reciprocals also diverges incredibly slowly and primes are arbitrarily large, the same process can be applied to get an alternating sum of prime reciprocals to converge to any real number.
Fun stuff!
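This can be checked numerically. A minimal sketch (my own; the target value 0.1 and the split of the primes into alternating positive and negative terms are arbitrary choices for illustration): greedily rearrange the alternating prime-reciprocal series 1/2 - 1/3 + 1/5 - 1/7 + ... toward a target, using unused positive terms while below it and unused negative terms while above it.

```python
from itertools import islice

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine at this small scale)."""
    found = []
    n = 2
    while True:
        is_prime = True
        for p in found:
            if p * p > n:
                break
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            found.append(n)
            yield n
        n += 1

def rearranged_prime_series(target, steps):
    """Over/under rearrangement: positive terms are reciprocals of the
    odd-indexed primes (2, 5, 11, ...), negative terms reciprocals of the
    even-indexed ones (3, 7, 13, ...), taken greedily toward `target`."""
    pos = islice(primes(), 0, None, 2)
    neg = islice(primes(), 1, None, 2)
    s = 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / next(pos)
        else:
            s -= 1.0 / next(neg)
    return s

print(rearranged_prime_series(0.1, 3000))  # hovers very close to 0.1
```

Convergence is glacial (the prime reciprocal sum grows like ln ln n), but once past the first few crossings the greedy sum stays within the size of the last term that crossed the target.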
Wow, you're actually awesome, dude... I envy anybody who got to have you as a teacher. Nobody has the right to expect the awesomeness that you provide as a teacher.
If we look at (1 + (1/2)x^a + (1/3)x^(2a) + … + (1/(n+1))x^(na)) as n → ∞,
this series = -ln(1 - x^a)/x^a: -1
Using a short Java program I wrote, it looks like once you get past the initial 13 terms to overshoot pi and subtract, it takes between 8 and 9 additional terms to overshoot pi again, and then you subtract one term.
Additionally, it alternates between adding 8 and 9 terms before you subtract one term (you always only need to subtract one term), but eventually there are two 9-term intervals, and the pattern continues.
What is interesting is that the number of "intervals of intervals", aka the number of times this alternation takes place before you get two 9s, varies initially but settles on the pattern 37, 39, 39, 39, then back to 37. However, every now and then there are four 39s before going back to 37. It looks like this pattern has a pattern as well. I suspect these inner patterns continue on due to the irrationality of pi and there is never a "straightening out" of the pattern.
Edit:
I am trying out different values for what you want the series to converge to, and it's pretty interesting. First of all, for every number you only ever need to subtract one term to get the current value back below the desired value. For every integer you try, the number of terms before it overshoots generates a pattern with little deviations in it. I'm thinking the patterns of the little deviations have their own patterns, and this extends infinitely.
Is anyone aware of someone coming across this and looking into it more? I'm thinking it would be fascinating to generate a "3d graph" of desired values vs how the series behaves but I need to get to my computer to try this out. I'll let you guys know what I find
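I can't reproduce the exact 8/9 counts without knowing which series and signs that program used, but the greedy over/under scheme itself is easy to sketch. Assuming the alternating harmonic series as in the video (positives 1, 1/3, 1/5, ...; negatives 1/2, 1/4, ...; the function name is mine), something like:

```python
import math

def greedy_rearrangement(target, blocks):
    """Add positive terms (odd reciprocals) until the partial sum first
    exceeds `target`, then subtract negative terms (even reciprocals)
    until it drops back below, and repeat for `blocks` rounds."""
    pos_den, neg_den = 1, 2   # next odd / even denominators to use
    s = 0.0
    pos_counts, neg_counts = [], []
    for _ in range(blocks):
        p = 0
        while s <= target:
            s += 1.0 / pos_den
            pos_den += 2
            p += 1
        n = 0
        while s > target:
            s -= 1.0 / neg_den
            neg_den += 2
            n += 1
        pos_counts.append(p)
        neg_counts.append(n)
    return s, pos_counts, neg_counts

s, pos_counts, neg_counts = greedy_rearrangement(math.pi, 200)
print(pos_counts[:8])    # lengths of the runs of positive terms
print(max(neg_counts))   # 1: a single negative term always suffices here
print(abs(s - math.pi))  # the partial sums close in on pi
```

The block lengths and their higher-order patterns (the 37/39-type structure described above) can then be studied by inspecting pos_counts for whatever target you like.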
Please don't say "the pattern continues" when the target is pi ;)
@@HagenvonEitzen is right, I think you need to qualify it with "up to n terms"
15:25 - A fun thing about this strategy that you didn't mention is that you will always only ever need one negative term following each sequence of positive terms to bring you back down below pi.
I'll leave proof of this as an exercise for the reader.
Very well spotted. I actually did not mention this on purpose because by the time I used negative terms for the second time I was already setting up the algorithm to be applicable to any target number :)
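For anyone attempting the exercise, here is one possible sketch (my reasoning, not the video's argument): the overshoot after a run of positive terms is smaller than the last positive term used, and positives are consumed far faster than negatives.

```latex
% The partial sum first exceeds \pi on adding the term 1/(2k-1), so the
% overshoot is bounded by that term:
s - \pi \;<\; \frac{1}{2k-1}.
% For target \pi the algorithm uses roughly e^{2(\pi-\ln 2)} = e^{2\pi}/4
% \approx 134 positive terms per negative term, so the index k of the last
% positive term used far exceeds the index b of the next negative term:
\frac{1}{2k-1} \;<\; \frac{1}{2b}.
% Hence subtracting the single negative term 1/(2b) already lands below \pi.
```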
Another great video. Love these. Sorry I was late, I usually watch them on Sunday but I was making a Lego set. :)
You are very well acquainted with your snakes. I'm still in the process of learning how to tame and manipulate them. I hope I can become a great snake charmer as you are someday.❤
I think the best demo of infinite series weirdness is:
1 = 1 + 0 + 0 + 0 …
= 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) …
=?= (1 + (-1)) + (1 + (-1)) + (1 + (-1)) + …
= 0 + 0 + 0 + … = 0
Gotta be super careful which basic arithmetic properties you use in infinite series!
Bit of an extra subtlety:
An infinite series has a sum if the sequence of partial sums converges to a number. If there is such a number, then this number is the sum of our series. If no such number exists, the series does not have a sum. This is the official definition of the sum of an infinite series.
And so let’s consider the sequences of partial sums of these two series.
For 1 - 1 + 1 - 1 + 1 - 1 + ... the sequence of partial sums is 1, 0, 1, 0, 1, 0, 1, .... It alternates between 0 and 1, never settling down; and so as far as mathematics is concerned this series does not have a sum (at least to start with; see the discussion of supersums in some of my other videos)
For 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ..., the sequence of partial sums is 1, 0, 1/2, 0, 1/3, 0, 1/4, .... This sequence of partial sums converges to 0 and so this series has the sum 0.
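The two sequences of partial sums are easy to generate and compare; a small illustrative sketch:

```python
from itertools import islice

def partial_sums(terms):
    """Yield the running partial sums of an infinite term generator."""
    s = 0.0
    for t in terms:
        s += t
        yield s

def grandi():
    # 1 - 1 + 1 - 1 + ...
    sign = 1
    while True:
        yield sign
        sign = -sign

def telescoping():
    # 1 - 1 + 1/2 - 1/2 + 1/3 - 1/3 + ...
    n = 1
    while True:
        yield 1.0 / n
        yield -1.0 / n
        n += 1

print(list(islice(partial_sums(grandi()), 8)))       # 1, 0, 1, 0, ... never settles
print(list(islice(partial_sums(telescoping()), 8)))  # 1, 0, 1/2, 0, 1/3, 0, ... -> 0
```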
The return of the squish-and-stretch! Love it!
Amazing as usual
So beautiful explanation of calculus problems 💛
Glad you think so!
Your “every number” demonstration provides a mapping, akin to the Minkowski Question Mark Function, between the reals and binary fractions. Begin with "0." and, with your chosen irrational value, write as many 1s as there are positive fractions before overshooting. Then write 0s for the negative fractions before undershooting. So, for example, ln(2) would map to 0.101010101…. As a fraction, this is 2/3.
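The bit string is easy to compute: run the over/under rearrangement of 1 - 1/2 + 1/3 - ... toward the chosen value and record a 1 for each positive term and a 0 for each negative term. A quick sketch (the Question-Mark-Function connection is the commenter's observation; the code only does the bit recording):

```python
import math

def greedy_bits(target, nbits):
    """Greedy over/under rearrangement of 1 - 1/2 + 1/3 - ... toward
    `target`; emit '1' for each positive (odd-reciprocal) term used and
    '0' for each negative (even-reciprocal) term."""
    pos_den, neg_den, s = 1, 2, 0.0
    bits = []
    while len(bits) < nbits:
        if s <= target:
            s += 1.0 / pos_den
            pos_den += 2
            bits.append("1")
        else:
            s -= 1.0 / neg_den
            neg_den += 2
            bits.append("0")
    return "".join(bits)

bits = greedy_bits(math.log(2), 12)
print(bits)                  # "101010101010", i.e. binary 0.101010...
print(int(bits, 2) / 2**12)  # close to 2/3
```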
I laughed out loud at "rectangle snake charmer". Love this channel.
I don't understand a single word, because I suck at math. But I like this man's accent and voice, so I keep watching.
The snake visuals were so good!
I'm addicted to your informative videos . 💌 from India .
Glad you like them!
0±0=0 & 0×0=0 but 0÷0≠0.
The moment you use infinity, something breaks. XD
dude! at the end of the video, the green part's formula for area was given, but it was the area under the curve 1/x from 2 to some mystery number near 4! what was that number near 4‽
It's supposed to be exactly 4 :)
I like math. This is a very well known and important problem. The question is when a sequence can be summed.
It has taken decades, if not centuries, to answer this question.
A sequence is summable if the sum does not depend on the order of the sequence and if the sum is finite.
For positive components, the answer is obvious. But if the sign changes constantly and the components are sufficiently large, any result can be achieved.
For any finite number of terms, adding up both summations isn't the same as adding up two terms of one summation and one term of another. Therefore you can't expect the combined summation to cancel out to 0 for an arbitrary number of sums.
Marvelous! I'm pretty sure that using the strategy of approximating a number, like at around 16:30, we create a base numeration system, in which pi = 13, 21, 58 ...
A) If both negative and positive sum are finite, the sequence is absolutely convergent (resists rearrangement)
B) If only the negative or positive sum is infinite, every reordering blows up to (neg./pos.) infinity
C) If L were the limit and a > 0, there would have to be a last partial sum outside the interval (L - a/2, L + a/2). But that means all later terms must be in (-a, a). Since a is arbitrary, the terms must get (and stay) arbitrarily small. If they don't, the partial sums keep "jumping away too far" from any potential limit.
That's everything that can happen if there is at least one reordering of the series that converges. Otherwise you also have to take things like 1 - 1 + 1 - 1 + ... into consideration :)
Yeah case C is actually supposed to illustrate the last possible failure case. I added a sentence to clarify.
Someone send this video to Brady of Numberphile. I would not be surprised if he rearranged the positive and negative terms of the alternating harmonic series to get his one and only favorite -1/12. That is why I love Mathologer. Not only do you know what you are talking about, but you explain it in a way that is both mathematically rigorous and easy to understand.
Hello Mr. Polster. Thank you for the videos. I really like your style! The only problem I have with your videos is probably (?) uncompressed audio. Some parts are really quiet, the next moment it's frighteningly loud. I'm not an audio engineer, so I might be wrong, but it would be more comfortable for listeners if you applied a compressor to the audio. Then it will be easier to find an appropriate sound volume to watch your awesome videos.
Both the audio files of me speaking and the music that I am working with are .wav files to start with. Things get recompressed when I bundle everything together in Premiere, and I am sure that RUclips does some more recompressing. I personally and my proofreaders don't experience any issues with the audio, and I also only very rarely get anybody commenting on audio issues in the comments. One exception is this early video ruclips.net/video/jcKRGpMiVTw/видео.html. Are you using headphones?
@@Mathologer Got it! I'm listening through my phone's mono speaker. It's around 6 years old, maybe that's the problem. I'll try it with the headphones.
Another one i like is
1+2+2+2+2+2...
You can rewrite it as
1+2+(3-1)+(4-2)+(5-3)...
Which feels like you could interpret as equalling zero.
It's the nice idea of the difference between using "x = infinity" and letting x approach infinity (getting arbitrarily large).
Very nice problems :)
An important difference here is that the partial sums of your example do not converge to a single number. Your series is divergent whereas the one in the video is convergent :)
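Writing out the partial sums makes the difference visible: summed strictly left to right, both versions of that series diverge, and the "equals zero" reading only appears if you pair terms across the whole infinite tail. A small sketch:

```python
from itertools import islice

def original():
    # 1 + 2 + 2 + 2 + ...
    yield 1
    while True:
        yield 2

def rewritten():
    # 1 + 2 + (3 - 1) + (4 - 2) + (5 - 3) + ..., with each bracket
    # expanded into its two terms and summed strictly left to right
    yield 1
    yield 2
    n = 3
    while True:
        yield n
        yield -(n - 2)
        n += 1

def partial_sums(terms):
    """Yield the running partial sums of an infinite term generator."""
    s = 0
    for t in terms:
        s += t
        yield s

print(list(islice(partial_sums(original()), 6)))   # 1, 3, 5, 7, 9, 11
print(list(islice(partial_sums(rewritten()), 6)))  # 1, 3, 6, 5, 9, 7
```

Both sequences of partial sums march off to infinity, so neither writing of the series has a sum under the definition above.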
Great demonstration!!
At 3:00, what sticks out to me immediately is that the negatives are cancelling out earlier positive terms, and there are always going to be positive terms that aren't cancelled-out. It's a bit like the dodgy guys from Numberphile trying to tell us a divergent sum totals to -1/12.
Pretty much as I say at this point, there is a conflict between the mathematically sanctioned definition for calculating the sum of an infinite series (= limit of the partial finite sums), vs. what we'd expect the sum to be based on the pairs of terms cancelling out. The main message from this video, apart from the nice visual proofs at the beginning, is that the sums of infinite series don't necessarily commute. But that does not mean that these sums are not useful or don't make sense :) I guess the next question is why don't we define the sum to be zero when terms cancel, right? Well, that is because for most infinite series terms don't cancel in pairs, and so our definition would not apply to most infinite series and so wouldn't be very useful.
Hey man! You should have shown why the alternating sum converges to ln(2). That proof is just amazing.
??? What video did you watch? The whole point of the visualisation at the beginning of this video is to show you a proof off the beaten track that the partial sums (= the areas of the snakes) converge to ln(2) :)
Love your work! Perhaps you could try mathologerising the Lévy-Steinitz rearrangement theorem.
We'll see :)
Superb. The visuals really help.