At 10:38 we have |\sum_{k = ?}^{?} a_{\tau(k)}|.
I can't quite follow the logic of this step. More concretely, I do not understand how indexing the sum with k works here.
My understanding is: we have n \geq N so that the sum \sum_{k=1}^n a_{\tau(k)} includes all sequence members a_1, ..., a_{N_1 - 1}.
Now I am thinking of an example where {\tau(1), ..., \tau(n)} contains {1, 2, ..., N_1 - 1} and {N_1 + 3, N_1 + 5}. Let's say that \tau maps N_1+3 to be the first element of the reordered series (i.e., \tau(1) = N_1+3), then it maps 1 to be the second index (i.e., \tau(2) = 1), then it maps N_1+5 to be the third index (i.e., \tau(3) = N_1 + 5), and then maps 2, 3, ..., N_1-1 to be the fourth, fifth, ..., (N_1+1)-th elements of the reordered series (i.e., \tau(4) = 2, \tau(5) = 3, ..., \tau(N_1 + 1) = N_1 - 1).
If we then look at the difference |\sum_{k=1}^{N_1 - 1} a_k - \sum_{k=1}^n a_{\tau(k)}|, we would be left with | - (a_{N_1+3} + a_{N_1 + 5})| = | - (a_{\tau(1)} + a_{\tau(3)})|. If I am not wrong, this leftover sum cannot be expressed by a sum indexed contiguously in k, since | - \sum_{k=1}^3 a_{\tau(k)}| would include the case k = 2, which is not part of the leftover sum (i.e., | - \sum_{k=1}^3 a_{\tau(k)}| = | - (a_{N_1+3} + a_1 + a_{N_1+5})|).
I hope my reasoning makes sense. I realise that it is somewhat convoluted, but I hope my point comes across. Thanks to anyone with a helpful answer in advance.
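A small numerical sketch of the scenario above may help make the indexing concrete. The concrete values here (N_1 = 5, n = N_1 + 1, and the terms a_k = (-1)^k / k) are hypothetical choices for illustration, not taken from the video; the point is that the leftover terms are indexed by the *set* {k ≤ n : τ(k) ≥ N_1}, not by a contiguous range, and the proof only ever bounds them by the absolute tail:

```python
# Numerical sketch of the question's scenario, with hypothetical concrete
# values: N_1 = 5, n = N_1 + 1 = 6, and a_k = (-1)^k / k as example terms.
N1 = 5
n = N1 + 1

def a(k):
    return (-1) ** k / k

# The permutation from the question: tau(1) = N1+3, tau(2) = 1,
# tau(3) = N1+5, then tau(4), ..., tau(N1+1) hit 2, ..., N1-1 in order.
tau = {1: N1 + 3, 2: 1, 3: N1 + 5}
for j in range(4, N1 + 2):
    tau[j] = j - 2

lhs = sum(a(k) for k in range(1, N1))          # \sum_{k=1}^{N1-1} a_k
rhs = sum(a(tau[k]) for k in range(1, n + 1))  # \sum_{k=1}^{n} a_{tau(k)}
diff = lhs - rhs

# The leftover terms are exactly those with tau(k) >= N1: not a
# contiguous range of k, but still a finite index SET we can sum over.
leftover = -sum(a(tau[k]) for k in range(1, n + 1) if tau[k] >= N1)
print(abs(diff - leftover) < 1e-12)  # True: diff = -(a_{N1+3} + a_{N1+5})

# The proof never needs a contiguous k-range; it only needs the
# triangle-inequality bound by the absolute tail starting at N1.
tail_bound = sum(abs(a(j)) for j in range(N1, N1 + 6))
print(abs(diff) <= tail_bound)       # True
```

The last bound is the triangle-inequality step the proof actually uses, and it never requires the leftover indices to be consecutive.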
Hi sir, at 11:53, to prove that \sum_{k=1}^\infty a_k equals \sum_{k=1}^\infty a_{\tau(k)}, it seems you showed there is an N such that |\sum_{k=1}^n a_k - \sum_{k=1}^n a_{\tau(k)}| < \epsilon for all n > N, meaning the partial sums of a_k and a_{\tau(k)} are very similar from that point on. However, that remnant, while less than \epsilon, is not zero. How does that prove \sum a_k = \sum a_{\tau(k)}?
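The step being asked about is the standard "smaller than every ε" argument. A sketch in LaTeX, using the a_k and τ notation from the thread (the threshold name N(ε) is an assumption of mine, not notation from the video):

```latex
\text{For every } \varepsilon > 0 \text{ there is an } N(\varepsilon) \text{ such that }
\Bigl|\sum_{k=1}^{n} a_k - \sum_{k=1}^{n} a_{\tau(k)}\Bigr| < \varepsilon
\quad \text{for all } n \ge N(\varepsilon).
% Letting n -> infinity (both partial-sum sequences converge) gives
\Bigl|\sum_{k=1}^{\infty} a_k - \sum_{k=1}^{\infty} a_{\tau(k)}\Bigr| \le \varepsilon.
% This holds for every positive epsilon, and the only nonnegative real
% number that is <= every positive epsilon is 0, so the two limits agree.
```

So the remnant need not vanish for any fixed n; it is the quantifier over ε that forces the difference of the limits to be exactly zero.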
This video is a perfect pedagogical example of why it can be very harmful to think of series as being some type of "summation" of infinitely many objects. Such an interpretation is misleading, and it makes the idea of absolute convergence versus conditional convergence seem paradoxical, when in reality the idea is intuitive once series are understood correctly. Students need to remember: we use Sigma notation to denote the informal concept of series, but this is just a special notational convention (which is understandably confusing) that definitely does not denote summation. It denotes the limit of a sequence, which is defined as the output of a linear operator applied to another sequence, and said linear operator can be interpreted as a specific pseudo-inverse of the forward difference operator on sequences. As such, understanding this as an example of linear operators acting on vector spaces, rather than as a generalization of summation, is much healthier when it comes to mathematical intuition.
Understanding the topic to be about limits of sequences, and about the partial summation operator, which is implicitly related to the forward difference operator, makes the theorem in the video that much more obvious in retrospect. Consider, for simplicity, an arbitrary sequence f : N -> X, where (X, d) is a metric space. Consider the sequence s[f] defined by (s[f])(0) = f(0) and (s[f])(n + 1) = f(n + 1) + (s[f])(n). The sequence s[f] is called the series of f. What does this reveal? It reveals that the primitive concept in the theory is this linear operator s, which acts on every sequence in X. We apply the linear operator s to convergent sequences to produce new sequences, and then study under what conditions convergence is preserved when s is applied. Of course, this was already discussed earlier in the video series when the topic was introduced, but it is important that people watching be reminded of this fundamental idea, that this is what series actually are. Once this is understood, the subject of this video is quite simple: it boils down to understanding that, even if s[f] converges for a given f, s[f°g], for a bijection g : N -> N (bijections are how the concept of re-ordering is made rigorous), may not converge (here, ° denotes composition of functions), and if it does converge, lim s[f] and lim s[f°g] need not agree. This is the key intuition behind the theorem, and it is much easier to understand, because there is no fallacious association with the concept of summation that tricks the student into thinking that permutations must preserve the value of convergence. Of course, if the topic is presented to a student in this manner, it will be obvious to most students that s[f] and s[f°g] converging to different numbers is entirely unsurprising.
f and f°g are completely different sequences, and so s[f] and s[f°g] should be expected to be completely different sequences, and so they should be expected to have completely different convergence properties.
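The operator picture in the comment above can be sketched in a few lines of code. This is a minimal illustration, not anything from the video: f is the alternating harmonic sequence, g is a bijection taking one positive term followed by two negative terms, and s is the partial-summation operator; s[f] and s[f°g] then visibly converge to different limits (ln 2 versus (ln 2)/2, the classical rearrangement example).

```python
import math

# Minimal sketch of the operator view: s, f, g are the names used in the
# comment above, not notation from the video.
def s(f, n):
    """Partial-sum operator: (s[f])(n) = f(0) + f(1) + ... + f(n)."""
    total = 0.0
    for k in range(n + 1):
        total += f(k)
    return total

def f(n):
    """Alternating harmonic terms: 1 - 1/2 + 1/3 - ...  (limit: ln 2)."""
    return (-1) ** n / (n + 1)

def g(n):
    """Bijection N -> N realizing the rearrangement
    'one positive term, then two negative terms'."""
    block, pos = divmod(n, 3)
    if pos == 0:
        return 2 * block             # positive terms sit at even indices
    return 4 * block + 2 * pos - 1   # negative terms sit at odd indices

n = 3 * 10**4 - 1                    # 10**4 complete blocks
print(s(f, n))                       # ≈ 0.6931  (ln 2)
print(s(lambda k: f(g(k)), n))       # ≈ 0.3466  ((ln 2)/2)
```

Since g is a bijection, f°g uses exactly the same terms as f, yet s[f°g] is simply a different sequence with a different limit, which is the whole point.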
For anyone reading this in the future: a linear operator is a function acting on other functions, or other objects. An example of a linear operator is a matrix.
@@rthurw No. A linear operator is simply a function acting between two vector spaces, satisfying the property of linearity, meaning that f(α·u + β·v) = α·f(u) + β·f(v).
@@angelmendez-rivera351 My interpretation of a series is, as you explained, that of being a summation of infinitely many objects. I never thought that the idea of absolute convergence vs conditional convergence was paradoxical. Taking absolute values does indeed change the series, and the transformation is not even bijective; there is IMO no paradox between these two. The paradoxical part is the one where reordering the terms changes the limit. But this is probably because I have this "wrong" idea of what a series should be.
You said that sigma notation is a "special notational convention" and does not denote summation. Is this to be understood literally like that? I think that is incredibly interesting/important. I am studying CS and Physics at a supposedly "elite" university and this has never been explained or clarified to us.
I also find many of your other points very interesting; could you share sources for learning/reading about analysis that you recommend? I am looking forward to truly understanding what's going on.
@Mr Fl0v They are effectively synonymous; there is no real distinction between the two, unless an author chooses to make one.
Two (minor) corrections...
At 8:47 (to be mathematically correct), the inequality should read "≤ ε" instead of "< ε". The same holds at 11:28.
Yes, you are right. It does not matter here but ≤ would have been the correct symbol. Or one introduces epsilon prime earlier to get the strict inequality.
Really nice explanation, thanks!
Really neat! Well done :D
Great video, thanks!
A question: at 9:58, it seems to me that it should be "n=N". Am I missing something?
It is correct with n ≥ N because we want to have the superset inclusion in the next step.
Thanks for the answer! I see the point now, I was making a rather silly mistake 😅
What a great explanation. However, when (if I understand properly) you used {1, 2, ..., N1-1} ⊆ {t(1), ..., t(n)} and drew the sets, shouldn't the images (the t(k)) be on the left-hand side? Because you have {1, 2, 3, ..., N, ...} on the left (in the proof, at the end).
Thanks! I don't understand your question completely. We cover all elements with the images of tau; that is what is happening at the end.
Good video!
Please make videos on advanced Real Analysis.
Thank you