Wow, that’s an even nicer way of formulating it than on my video, great job! :)
I just watched your video on Gamma Zeta Integral, and I found it very educational. I personally think your approach is beautiful; besides, you have covered a more general case. I love your enthusiasm throughout the video! =)
I'm not sure which one of you has worse English, but I'll call it a tie.
@@nicholasr79 You don't need to be an ass.
@@griffisme4833 I'm sorry that you're triggered by simple truths. How about you go fucc yourself, buddy?
@@nicholasr79 Well, you clearly speak every existing language perfectly. Could we have a conversation in Seri?
The way he sounds so excited makes this video 10x better.
I appreciate the transparency regarding dead ends and the necessity of perseverance and stamina in taking on mathematical problems such as these.
You know you've explained it well when a maths noob can understand most of what you said, great video!!
Thank you for solving this, I had been trying to solve a similar integral for the past 7 years. I used your method and I finally completed the damn integral after 7 years.
My pleasure! I'm glad the explanation helped you out in your endeavor. =)
At that point you should just Google the solution no?
Which IIT, bro?!
Genuine application of gamma function.
you mean beta function
i like the way how pi comes out of nowhere
3Blue1Brown has a great video that proves it using circles, so it's not like the pi is random
@김주원 I like the way how, as a native speaker, I'm still full of bad grammar
@@friedkeenan well, Euler proved this originally with the Taylor series of the sin function, and sin has always had a lot to do with circles.
Not out of nowhere; it comes from rotations and circles
well i know not much about math so it seems to me as if it comes outta nowhere lol
W H E R E I S T H I S P I C O M I N G F R O M ?
It is quite complicated but 3Blue1Brown has a video trying to explain it through a geometric model.
Bollibompa I’m pretty sure he did that for the Basel problem not this integral
Check out my channel btw
Pi is EVERYWHERE.
Gamma FUNction! :D
Omg, how does he manage to see those kinds of relations?
Certainly impressive
I especially love these integration puzzles. Keep up the amazing work, and thank you for your videos :)
An absolutely superb way to get to the solution, with all the necessary inter-step justification explained properly.
Wheeler: duel monster player and mathematician.
Hello. :)
The first thing I thought when I saw the integral was Bernoulli.
There is another beautiful way to solve it in my opinion.
If one knows that the integrand is the generating function of the Bernoulli numbers the solution follows immediately from some basic integral properties. :)
Details:
Integral 0 to inf
x^(2n-1) / ( e^(ax) - 1) dx
= beta(n) / (4n) * (2 pi / a)^(2n)
Where beta(n) = (-1)^(n+1) * bernoulli( 2n )
Using n=1, a=1 => beta(1) = bernoulli(2) = 1/6
gives pi^2 over 6. :)
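For anyone who wants to sanity-check that formula numerically, here is a minimal sketch (assuming the mpmath library is available; lhs/rhs are just illustrative helper names of my own choosing):

```python
# Minimal numerical check of the Bernoulli-number formula above
# (a sketch; assumes mpmath is installed).
from mpmath import mp, quad, inf, exp, bernoulli, pi

mp.dps = 30  # working precision

def lhs(n, a):
    # integral from 0 to infinity of x^(2n-1) / (e^(a x) - 1)
    return quad(lambda x: x**(2*n - 1) / (exp(a*x) - 1), [0, inf])

def rhs(n, a):
    beta = (-1)**(n + 1) * bernoulli(2*n)   # beta(n) = (-1)^(n+1) * B_{2n}
    return beta / (4*n) * (2*pi/a)**(2*n)

for n, a in [(1, 1), (2, 1), (1, 3)]:
    print(n, a, lhs(n, a), rhs(n, a))
# (n, a) = (1, 1) reproduces pi^2/6 ≈ 1.6449...
```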
Most impressive part of this is how quickly and neatly he is drawing with a click-and-drag mouse.
Awesome vid. One note: to avoid integration by parts, you can evaluate integral from 0 to ∞ of (xe^(-xn) dx) by differentiation under the integral as follows:
I(n) = int 0 to ∞ of -e^(-xn) dx
I'(n) = int 0 to ∞ of xe^(-xn) dx after differentiating with respect to n, which is what we want
However, we can evaluate I(n) easily
I(n) = int 0 to ∞ of -e^(-xn) dx = e^(-∞n)/n-e^(-0n)/n = -1/n
I'(n) = 1/n^2
So we deduce that int 0 to ∞ of xe^(-xn) dx = 1/n^2, and basically all we had to do was integrate -e^(-xn)
Excellent suggestion!
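If anyone wants to see that differentiation-under-the-integral step checked symbolically, here is a minimal sketch (assuming the sympy library is available):

```python
# I(n) = -1/n, so I'(n) = 1/n^2, matching the direct integral
# (a sketch; assumes sympy is installed).
import sympy as sp

x, n = sp.symbols('x n', positive=True)

I = sp.integrate(-sp.exp(-x*n), (x, 0, sp.oo))        # I(n)  = -1/n
dI = sp.diff(I, n)                                     # I'(n) = 1/n^2
direct = sp.integrate(x*sp.exp(-x*n), (x, 0, sp.oo))   # the integral we want

print(I, dI, direct)
assert sp.simplify(dI - direct) == 0   # both equal 1/n^2
```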
I was grinning from ear to ear by the end of this video. Thanks!
It is a very clean process, totally clear for the viewer. Concerning the final steps, when deciding between integration by parts and the Gamma function, one could also consider the Laplace transform of f(x) = x, which can be read off directly with no modifications needed, in contrast to the Gamma-function option. The Laplace transform is (I think) known by everyone who has taken a first course on ODEs, so it should also be part of the "toolbox" available when solving integrals like this one.
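For anyone curious, a minimal sketch of that Laplace-transform shortcut (assuming sympy is available):

```python
# The Laplace transform of t is 1/s^2, which is exactly the inner
# integral with s = n (a sketch; assumes sympy is installed).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.laplace_transform(t, t, s, noconds=True)
print(F)   # 1/s**2
```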
You can also obtain the π^2/6 by taking the Fourier series of x on the interval (-π, π) and applying Parseval's identity.
I didn't know this. Will look it up, thanks
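Assuming the intended route is Parseval's identity for f(x) = x on (−π, π), where the sine coefficients are b_n = 2(−1)^(n+1)/n, here is a small numerical sketch in plain Python:

```python
# Parseval: (1/pi) * integral of x^2 over (-pi, pi) = sum of b_n^2,
# i.e. 2*pi^2/3 = sum 4/n^2, hence sum 1/n^2 = pi^2/6.
# (a sketch; only the standard math module is needed)
import math

lhs = 2 * math.pi**2 / 3
rhs = sum((2 / k)**2 for k in range(1, 200_000))
print(lhs, rhs)                  # rhs approaches lhs
print(math.pi**2 / 6, rhs / 4)   # hence sum 1/k^2 -> pi^2/6
```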
This is legitimately the greatest video I've ever seen
I do not understand one thing: how is |r| < 1 guaranteed here?
7:54 I'm not sure about this fact... Uniform convergence only allows to exchange sum/integral on a segment like [a,b], not an infinite set. In such case, you would need dominated convergence.
In this case, Monotone convergence should work fine.
Yeah monotone convergence settles it in this case. You can always swap summation and integration if everything is non-negative.
Nice job x2
Utterly delightful
Simple and clear presentation of the topics. wow !!
Amazing! I always have trouble with remembering all of these infinite series and their answers, but either way, I enjoyed this video to the fullest!
You should know that with this video you bring others a little nearer to the solution of the eternal prime number problem. Thank you.
Nice integral, but such great complications!! We can do it more easily:
x/(exp(x)-1) = x*exp(-x)/(1-exp(-x)); then let u = exp(-x), and the integral becomes the integral from 0 to 1 of -ln(u)/(1-u) du.
Knowing 1/(1-u) = 1 + u + u^2 + u^3 + ..., the integral becomes minus the sum from n=0 to infinity of the integral from 0 to 1 of u^n*ln(u) du.
Integration by parts and it's done: the sum from n=0 to infinity of 1/(n+1)^2, equal to pi^2/6.
Come on, it's the very same technique he used. He just wanted to pass through the gamma function definition for the sake of mentioning something extra which could be of interest to the viewers!
Fares BERARMA i was thinking the same
True but using the Gamma Function is more practical in my opinion because tougher Integrals (most with non elementary anti derivatives) incorporate incomplete/complete Gamma functions and this is a cool gateway of viewing things
What do you mean by exp?
@@--_9623 exp(x) is just e^x
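For anyone who wants to check that substitution route numerically, a minimal sketch (assuming mpmath is installed):

```python
# -integral of ln(u)/(1-u) over (0,1), the partial sums of 1/(n+1)^2,
# and pi^2/6 should all agree (a sketch; assumes mpmath is installed).
from mpmath import mp, quad, log, pi

mp.dps = 25
integral = quad(lambda u: -log(u)/(1 - u), [0, 1])
partial = sum(1/(k + 1)**2 for k in range(100_000))
print(integral, partial, pi**2/6)
# the partial sum lags only by its truncation error (about 1e-5 here)
```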
So beautiful!!!!! Just love it
What a beautiful way of explaining something. Thank you :)
Ingenious approach. Brilliantly done
Great video for a greater integral. Thanks for your work.
Very appropriate title! :)
That was amazing
Actually you can't refer to uniform convergence at all, because the sum does NOT converge uniformly. Here's why: for the sake of contradiction assume that the sum whose general term is exp(-nx) converges uniformly on R+. That would mean, by Cauchy criterion, that for a suitable N (chosen sufficiently large) we should have that the sup for x belonging to R+ of the modulus of the sum of the N-th,N+1-th,...N+p-th term would be smaller than a fixed quantity, say 1/2, FOR EVERY NONNEGATIVE INTEGER p>=0. This clearly implies that the general term of the series must actually converge to zero UNIFORMLY, but that's not the case, because FOR EVERY N, we have lim exp(-nx)=1 for x->0+. So the series does NOT converge uniformly, and the theorem couldn't be applied a priori anyway
You are absolutely right. When I was filming the video, I mistakenly thought e^(-nx) only had to be uniformly convergent on (0, inf) for the theorem to apply; however, a uniform convergence at the endpoint(s) is also necessary (in our case, as you pointed out, our sequence of functions doesn't satisfy this at x = 0). I sincerely apologize for the error. The Monotone Convergence should have been used to justify the exchange. Thank you for notifying me! =)
@@LetsSolveMathProblems You're welcome. I honestly don't know if my other comment is on point, because I watched the video again and only realized the second time that you actually referred to Taylor series AND TO GEOMETRIC ONES when investigating the uniform convergence, so you approached it the right way, even if you claimed a result for this kind of series (the geometric series of functions) that doesn't hold in general: you have uniform convergence if the general term is bounded above by a positive real number less than 1, but the discussion in the other comment made it clear that's not the case here at all (read in particular the limit argument, where I show that the general term does not converge to zero uniformly). Maybe that was the result you had in mind, which is a corollary of Weierstrass' total convergence criterion and ultimately of Cauchy's criterion, but it can't be applied in this case for the reasons I just wrote. Anyway, I really appreciated how quickly you answered. Thanks for your reply; I think I'll watch other videos of yours at a later date. Till next time ;-)
thanks a lot for revising my concepts again
Wasn’t too familiar with the Gamma function, but partial integration (integration by parts) did just fine too to get 1/n^2.
Pay attention when you justify exchanging the order of summation and integration by referring to the uniform convergence of the series; it is far better to use the monotone convergence theorem studied in measure theory and Lebesgue theory, as this example shows: take the sequence of functions chi{[0,n]}/n, where chi{•} is the characteristic function of the set written between the brackets. It is easy to see that this sequence converges uniformly to the zero function, yet its integral over R+ (or R) is constantly 1, so the uniform-convergence result cannot be applied. The reason is that the theorem carries another assumption, namely that the set involved must have finite measure; otherwise situations like the previous example can occur. Since in our case you are integrating over R+, which has infinite measure, I would avoid using the result about uniformly convergent series of functions. Instead I would note that the series consists only of nonnegative functions, so the partial sums are monotonically increasing, and the monotone convergence theorem can then be applied to justify exchanging summation and integration.
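To make the monotone-convergence picture concrete for this particular problem, here is a small numerical sketch (assuming numpy and scipy are available; S_N is just an illustrative name): the partial sums S_N(x) = x·e^(−x) + ... + x·e^(−Nx) increase pointwise to x/(e^x − 1), and their integrals increase to π²/6.

```python
# Integrals of the increasing partial sums approach pi^2/6 from below
# (a sketch; assumes numpy and scipy are installed).
import numpy as np
from scipy.integrate import quad

def S_N(x, N):
    # N-th partial sum of x*e^(-x) + x*e^(-2x) + ... + x*e^(-Nx)
    return sum(x * np.exp(-k * x) for k in range(1, N + 1))

for N in (1, 2, 5, 20, 100):
    val, _err = quad(S_N, 0, np.inf, args=(N,))
    print(N, val)              # equals 1 + 1/4 + ... + 1/N^2
print("target:", np.pi**2 / 6)
```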
Thank you for the video keep up the good work brotha man
7:57 You don’t even need uniform convergence in this case. Everything is non-negative so it follows by monotone convergence theorem.
I never passed pre-calculus in high school haha but I love watching your videos!
I'm glad you enjoy my videos! Without a doubt, commenters like you make my day.
I do remark that high school math classes, generally speaking, cannot accurately gauge your true mathematical ability or potential. Tests based on memorization and blindly following a step-by-step procedure cannot possibly measure the thrill you experience when your creativity intermixes with an interesting problem to light up an elegant solution, nor can such tests successfully predict your love and passion for mathematics, without which learning mathematics often becomes a fruitless endeavor. =)
pi^2/6 again? I can't escape that value. I keep running into it over and over. Something about the zeta function, but it shows up in the trigamma function and the 2nd(?) polylog when evaluated at 1 and it pops up all the time in integrals
The gamma function step at the end seemed a bit convoluted to me. While I probably couldn't have gotten that far into the solution by myself, once there, it seems like the easiest thing to do is to rewrite the integrand as -d/dn (exp[-xn]). Solving the integral is then trivial, and so is redifferentiating afterwards. But I'm not a mathematician, so maybe I am missing something.
I do not see how writing the integrand as -d/dn(exp[-xn]) would help us; after all, we should write the integrand as derivative with respect to x (NOT with respect to n) if we wish to proceed using Fundamental Theorem of Calculus. Perhaps I am missing something as well. I would appreciate it if you could elaborate on your method a little bit more. Thank you for commenting! =)
LetsSolveMathProblems
Int_0^inf x*e^{-xn}
= Int_0^inf -d/dn e^{-xn}
= -d/dn Int_0^inf e^{-xn}
= d/dn e^{-xn}/n |_0^inf
= d/dn (e^{-inf} - e^{0})/n
= -d/dn 1/n
= 1/n^2
LetsSolveMathProblems
I'm an aspiring physicist, so I'm maybe not being as careful as I should be. But I believe this is one of Feynman's tricks for integration
Your argument looks beautiful! It is a fine alternative for gamma function approach. Thank you for sharing it! =)
@@SynysterKezia Bravo. I will remember this trick
The starting integral is f(2), where f(s) = ζ(s)Γ(s), which equals the integral from 0 to infinity of (x^(s-1))/(e^x - 1) dx.
Nice job
I clicked the like button and it showed 2800 likes!! I love how many ways you are trying to evaluate that integral 😘
The part where you use the geometric series a/(1-r) is problematic because at the lower limit of the integral, exp(-x) is equal to one, which is not permitted by the very condition of the radius of convergence.
At 8:00, uniform convergence is a good argument for switching sum and integral, but actually the switch always holds if the terms are non-negative (Beppo Levi's theorem or Tonelli's theorem) :D
Amazing solution... Thanks
whoa, letssolvemathproblems fails along the way?!?! this is unprecedented! instant like and you put me in awe, lol. btw, is there something else i can call u by instead of letssolvemathproblems, perhaps your first name?
My first name is Michael. Feel free to call me by LetsSolve, LetsSolveMathProblems, or by my first name. =)
I'm glad you enjoyed the video, Jeff Wolfshire!
Isn't that the generating function of the Bernoulli numbers?
Use the Monotone Convergence Theorem rather than uniform convergence: monotone convergence is much easier to show here, and it follows because the sum is a sum of positive quantities.
I have a small trick for this integral of x * exp(-xn): you can write the integrand as minus d/dn of exp(-xn), then pull -d/dn out of the integral. Then just calculate the integral of exp(-xn) from 0 to infinity, which is easy: it's just 1/n. Then let -d/dn act on 1/n and you have 1/n^2.
❤ yes
Actually it is the method of differentiation under the integral sign, given by Leibniz.
Using it r times we can integrate (x^r) e^(-nx) over [0, infinity], which gives r!/n^(r+1).
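Here is a small symbolic check of that r-fold claim (a sketch; assumes sympy is installed):

```python
# integral of x^r * e^(-n x) over [0, inf) equals r!/n^(r+1)
# (a sketch; assumes sympy is installed).
import sympy as sp

x, n = sp.symbols('x n', positive=True)

for r in range(5):
    lhs = sp.integrate(x**r * sp.exp(-n*x), (x, 0, sp.oo))
    rhs = sp.factorial(r) / n**(r + 1)
    assert sp.simplify(lhs - rhs) == 0
print("checked r = 0..4")
```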
I have used Lebesgue theory (a majorant) to justify exchanging the sum and the integral: x[(exp(-(N+1)x) - 1)/(exp(-x) - 1)] ≤ x[exp(-x) + 1 - 1] (for N = 2), where N is the index of the partial sum S_N.
(Non-negative Borel-measurable functions guarantee the existence of the Lebesgue integrals.)
I know I am too late (but I hope you see this, LSMP), but actually this is a direct problem if one is aware of the relation between the zeta and gamma functions:
the integral of (x^(s-1))/((e^x)-1) from 0 to inf = zeta(s) * gamma(s),
which in this case, with s=2, gives zeta(2)*gamma(2) = (pi^2/6)*1! = pi^2/6.
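A quick numerical confirmation of that zeta–gamma identity (a sketch; assumes mpmath is installed):

```python
# integral of x^(s-1)/(e^x - 1) over (0, inf) versus zeta(s)*gamma(s)
# (a sketch; assumes mpmath is installed).
from mpmath import mp, quad, inf, exp, zeta, gamma

mp.dps = 25
for s in (2, 3, 4, 5.5):
    val = quad(lambda x, s=s: x**(s - 1) / (exp(x) - 1), [0, inf])
    print(s, val, zeta(s) * gamma(s))
# s = 2 gives zeta(2)*gamma(2) = (pi^2/6)*1! = pi^2/6
```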
That was amazing. Very clear and understandable, thank you.
How can you use the infinite GP sum formula if a is not constant??
This is so awesome...it's going on my favorites list
Brilliant!
Isn't this a particular case of the Bose–Einstein integral?
Nice. And we are done!
Isn’t that the generating function for Bernoulli numbers?
This series converges uniformly for all x>0?? (7:50) I can't understand this step.. ㅜㅜ
@Lomk how can I check the uniform convergence of f??
@Lomk which theorem should I use?
It may not be correct. Instead you may interchange them by the monotone convergence theorem.
thank you very much, you are amazing
Uniformly converge? What are the cases where the sum doesn’t uniformly converge?? Or does it just mean it doesn’t diverge?
Man! This is love! ❤❤❤❤❤❤
I was bothered by 7'10". Doesn't a have to be a constant rather than a function of r?
Elliott Manley a and r are both constants, so it doesn't matter if they are both expressed as functions of something else, the relationship still holds.
For each value of x between 0 and infinity, a and r are constants. Since the definite integral is basically the limit of a Riemann sum, in which we examine each value of x one by one, we can treat a and r as constants.
Thank you.
Absolutely astonishing.
Beautiful
Man, you didn't lie when you wrote "a journey of integration" :)
You should do some problems from the Calculus/Analysis section of the Berkeley Math Tournament
Or let e^(-x)=y; then the integral becomes the integral from y=0 to 1 of -ln(y)/(1-y) dy. Use the series expansion for the denominator and you will get the sum from n=1 to inf of 1/n^2, which is zeta(2), which is pi^2/6.
Do you script your videos or just load up your recording software and start talking?
Fantastic. Sir, let me know how we got the result that pi^2/6 equals the summation of 1/n^2?
Why is uniform convergence required for switching the integration and summation?
Please read my comment. It answers your question properly in the right way :)
Bernoulli numbers ?
Hey Michael (let's solve math problems) I am a big fan....
I think an easier way to do this integral would be (reference from the new integral at 7:04) doing a u substitution here.... Take u=1-(e^-x)
Then you get du=xe^-x
From here you get the integral to be integral from 0 to infinity of 1/u du
This is ln|u| from 0 to infinity
Which is ln|1-e^-x| from 0 to infinity
Please let me know if I am right.... And if there is an issue please elaborate on that.... I am not very advanced in calculus and thus would like to get a fresh opinion..... If it turns out it is true perhaps you can create another video depicting this method......
If u = 1-e^(-x), you get du/dx = e^(-x), not xe^(-x).
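A one-line symbolic check of that correction (a sketch; assumes sympy is installed):

```python
# d/dx (1 - e^(-x)) = e^(-x), not x*e^(-x)
import sympy as sp

x = sp.symbols('x')
print(sp.diff(1 - sp.exp(-x), x))   # prints exp(-x)
```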
Tau by 12 you mean?
Which software are you using?
Great stuff
Bravo!!!
Integral(0,infinity, 1/floor(x^2)) also. Why are these equal?
WaaaaaW❤❤❤❤❤
6:17 How can there be a greater-than-or-equal sign between 0 and x, if e^(-0) = 1 while the series needs |r| < 1?
That's exactly what I was going to ask before watching your comment.
Actually, I've just figured out the resolution to this issue. Dealing with integrals, we have to care not about the values at the endpoints but about the limits at those points. Since we approach from 0+ (because the integral goes from 0 to infinity), e^(-x) is less than 1.
excellent explanation
Beautiful!
Nice approach but we could also show that this integral is ζ(2)×Γ(2)
What if I solve the indefinite integral and then plug in 0 and infinity? I don't seem to be able to plug in 0 and infinity; there are just indeterminate forms... I was able to solve the definite integral, though. Please reply.
Impressive, man. You rock.
This is absolutely incredible
Wheeler you say?
Wonderful! It's a very difficult problem, but I learned a lot. To think of going from 1/n^2 to a/(1-r)! I myself initially thought it would be reduced to the Taylor expansion of e^x.
Extraordinary
Holy cow. This was a ride. I kinda knew we needed to squeeze out the Basel problem, but I wouldn't have had the slightest clue how to get there. But let's be real: would you have been able to solve that thing without knowing it's pi^2/6? Knowing the endgame told us we had to squeeze out the Basel problem; how much more of a nightmare would it be without that endgame?
And maybe a crazy idea, but could you show a proof that the sum of all the squared reciprocals is pi^2/6?
Wikipedia does this very well.
en.wikipedia.org/wiki/Basel_problem
avengers endgame confirmed
Wonderful
Is this related to sinh and cosh?
Can this integral be evaluated if the upper limit were less than infinity? Let's say, up to Y, creating a function of Y?
no
@@aca792. I guess my question was ambiguous. I think the integral would converge and could be approximated via a Taylor series. But I was really asking whether it can be evaluated in terms of elementary functions.
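To make this concrete: the truncated integral is not elementary, but it can be written with the dilogarithm. Here is a numerical sketch (assuming mpmath is installed; the closed form below is my own term-by-term derivation, so treat it as an informal check rather than something from the video):

```python
# Truncated integral of x/(e^x - 1) from 0 to Y versus the dilogarithm
# closed form pi^2/6 + Y*ln(1 - e^(-Y)) - Li_2(e^(-Y)).
# (a sketch; assumes mpmath is installed)
from mpmath import mp, quad, exp, log, polylog, pi

mp.dps = 25

def truncated(Y):
    return quad(lambda x: x / (exp(x) - 1), [0, Y])

def closed_form(Y):
    return pi**2/6 + Y*log(1 - exp(-Y)) - polylog(2, exp(-Y))

for Y in (1, 2, 5, 10):
    print(Y, truncated(Y), closed_form(Y))
# both columns agree and tend to pi^2/6 as Y grows
```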
5:21 summation from 0 to infinity 😁😁
Nice video. But there is a minor thing that bothered me. When we replaced the integrand with the infinite geometric sum, we assumed r to be smaller than one. Strictly speaking, this is only true when x is bigger than zero. The geometric series evaluates to 0 for x = 0 (a = 0, r = 1), while the integrand is equal to 1 at x = 0. Can we neglect this fact, because changing the integrand at one point does not alter the value of the integral?
Yes - we can just redefine the integrand to be 0 at x = 0.
Where can I find a proof that explains when I can swap infinite sums for integrals and vice versa?
Probably Fubini’s theorem
Well done.
I love Euler 😊
If you’re a physicist you might recognize this easily :)