sometimes there is no good place to stop but you have to stop anyway
Nope, not today
Will it ever be a good place to stop again 😢😢😢
Do you know why not?
@@xinpingdonohoe3978 The place to stop got stuck at a local maximum.
sad
😟
An infinite sum means that there's no good place to stop I suppose.
this 😂
Engineer: *Calculate the first term of the infinite sum* "Ok, that's a good place to stop"
The simplest approach: expand the integrand as a power series in e^(-x) and interchange the integration with the sum; you get an easy integral. The result is the sum of 1/n^2, which is a very well-known result. There is a minus sign, so the integral is equal to -(π^2)/6.
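A quick numerical sanity check of that approach, as a rough Python sketch (my own; it assumes the mpmath library, which isn't mentioned above):

```python
# Compare direct quadrature of the integral with the term-by-term value
# -sum_{n>=1} 1/n^2 = -pi^2/6. The log singularity at x = 0 is integrable;
# splitting the interval at 1 just helps the quadrature routine.
import mpmath as mp

mp.mp.dps = 25

direct   = mp.quad(lambda x: mp.log(1 - mp.exp(-x)), [0, 1, mp.inf])
termwise = -mp.nsum(lambda n: 1/n**2, [1, mp.inf])   # each -e^(-nx)/n integrates to -1/n^2

print(direct)           # ≈ -1.644934066848...
print(termwise)         # same value
print(-mp.pi**2 / 6)    # same value
```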
I would substitute for the argument of ln.
Then I could expand with a geometric series,
change the order of summation and integration, then integrate by parts.
Finally, compare the product and series for sin(x)/x.
Yes, when he's going to expand an integrand as a series anyway, he might as well do it directly, since that works.
@@holyshit922 -- You need grouping symbols for the argument: sin(x)/x.
@@forcelifeforceNit-picky for no reason
Note that Michael Penn does 1) an integral representation of the logarithm, 2) an integration order exchange [he didn't justify why that's allowed, but that's OK in a problem-solving setting], 3) a power series expansion via the geometric series, 4) an exchange of the order of the sum and integral (again not justified, but that's OK), 5) an integration, and 6) a well-known sum. The approach proposed by MrFtriana is indeed simpler: 1) a power series expansion (well known; not fully justified, as the power series expansion is not well defined at x=0), 2) an exchange of the order of the sum and integral (not justified), 3) an integration, 4) a well-known sum. So, two steps fewer.
I prefer this: u=e^(-x). Then expand ln(1-u) as a power series, switch the sum and the integral, and then it's easy.
I'm eager to know whether that was a good place to stop or not
Maths 505 just did this integral with x = i theta, shout out to him. :)
Clarification: M505 did the integral from 0 to x of ln(1 - e^(i theta)) d theta, and he proved an interesting result. Professor Penn skipped some steps when he "moved" the geometric sum outside of the double integral. But we're gucci here since |ty| < 1.
For me, it seems more intuitive to convert the integrand into a "power series" in e^(-nx), which somewhat directly shows the connection to the Basel problem.
Maths 505 recently did a similar one involving a Clausen function.
I never heard of Clausen functions before.
unstoppable professor.
It is simple: use ln(1-x) = -Σ x^k/k, which yields a quick integration leading to the Basel problem.
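A small symbolic sketch of that term-by-term route (my own illustration; it assumes sympy, which the comment doesn't mention):

```python
# Integrate the k-th term of ln(1 - e^(-x)) = -sum_{k>=1} e^(-kx)/k over (0, inf),
# then sum the results to land on the Basel problem.
import sympy as sp

x = sp.symbols('x', positive=True)
k = sp.symbols('k', positive=True, integer=True)

term = sp.integrate(-sp.exp(-k*x)/k, (x, 0, sp.oo))
print(term)                                   # -1/k**2
print(sp.summation(term, (k, 1, sp.oo)))      # -pi**2/6
```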
The T-Shirt with the chalk stains rocks! 🤩
A direct change of variable u=exp(-x) leads to the integral of log(1-u)/u for u from 0 to 1, and after that either you know that this well-known integral is equal to -Pi^2/6, or you expand log(1-u).
Great problem and an excellent demonstration of reaching its solution.
Okay, I now love how ln(1-e^x) looks. It's its own inverse, and its shape fills some sort of void in my heart that's never been filled before.
Given problem:
I = int_0^inf ln(1-e^-x) dx
First, substitute u=e^-x, du = -e^-x dx = -u dx. x=0 gives u=1, and x=inf gives u=0.
I = int_0^1 ln(1-u)/u du.
Insert the Taylor series expansion ln(1-u) = -sum_{k=1}^inf u^k / k:
I = -sum_{k=1}^inf 1/k int_0^1 u^k / u du
= -sum_{k=1}^inf 1/k int_0^1 u^(k-1) du
= -sum_{k=1}^inf 1/k [u^k / k]_0^1
= -sum_{k=1}^inf 1/k²
= -ζ(2)
= -π²/6
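A quick numerical check of the substitution step (my sketch, again assuming mpmath):

```python
# The u = e^(-x) substitution should leave the value unchanged,
# and both forms should equal -pi^2/6.
import mpmath as mp

before = mp.quad(lambda x: mp.log(1 - mp.exp(-x)), [0, 1, mp.inf])
after  = mp.quad(lambda u: mp.log(1 - u) / u, [0, 1])
print(before, after, -mp.pi**2 / 6)   # all ≈ -1.644934...
```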
If one uses the series expansion of ln(1-y), then takes y=exp(-x) and integrates terms of the form exp(-k*x) from 0 to inf,
you immediately get -(sum of 1/k^2, k=1 to inf) = -π^2/6.
I'll keep watching this for infinity, because there was no stop... help!
I had a hunch using u = 1 - exp(-x)... du = exp(-x) dx = (1 - u) dx
dx = du / (1-u)
So it becomes
int[0 to 1] (ln u) / (1-u) du
Good thing I remembered the generating functions trick
1/(1-x₀) = 1+x₀+x₀²+... = sum x₀ⁿ
integrating.... from say... 0 to x₁
-ln(1-x₁) = x₁+x₁²/2+x₁³/3+... = sum x₁ⁿ/n
dividing by x₁ then integrating again from 0 to x
integral [0 to x] (-ln(1-x₁)/x₁) dx₁ = x+x²/2²+x³/3²+...= sum xⁿ/n²
if x = 1
integral [0 to 1] (-ln(1-x₁)/x₁) dx₁ = 1+1/2²+1/3²+...= sum 1/n² = 𝜋²/6
The difference between this integral and the one I did is the substitution u = 1-x₁ (plus an overall minus sign), so the original integral comes out to -π²/6.
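The "integrate the geometric series twice" identity used above is the dilogarithm in disguise; here is a small numerical sketch of it at a sample point (my own illustration, assuming mpmath):

```python
# integral_0^X (-ln(1-t)/t) dt = sum_{n>=1} X^n / n^2 = Li_2(X);
# check all three at the arbitrary sample point X = 0.5.
import mpmath as mp

X = mp.mpf('0.5')
by_quad   = mp.quad(lambda t: -mp.log(1 - t) / t, [0, X])
by_series = mp.nsum(lambda n: X**n / n**2, [1, mp.inf])
print(by_quad, by_series, mp.polylog(2, X))   # all ≈ 0.5822405...
```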
Can't stop, addicted to the shindig
is that feynman's trick in disguise?
And that still was a good place to stop
so where should i take a good place to stop?
2:07 Did you just wind up explaining Feynman's technique?
I have never seen anything like this. I'm not sure it's for real. I'll come back to it.
When backflip? 👀
Hi Michael, may I request you to make a refresher for zeroth integral and first integral as mentioned at 1:30?
N I C E
Good video.
Good
Something I've only just realized; is this not sort of similar to Feynman's trick of differentiating under the integral sign?
You can also proceed by using the series expansion of the logarithm at the first step, and you get to the same answer.
-zeta(2) = -(pi^2)/6
Yes, Timmy, very good.
3:49 Maybe t=1-ye^-x could work better.
The next frontier: solving that with Feynman's trick
Technically equivalent to the zeroth integral that Michael likes to use.
At 2:07 it's pretty similar already
No flex, but I did this in my head in 2 seconds as follows: ln(1-e^(-x)) is just minus the sum from n=1 to infinity of e^(-nx)/n; switch the order of summation and integration and you just get minus the sum of 1/n^2.
Is double integration your favorite method?
I wonder what the slowest-growing divergent formula is.
Can we talk about dilogarithms? Like Li2(1)
🔥
I wouldn't know how to justify the trick with the geometric series. That equation is only usable if the absolute value of u is strictly less than 1. The bounds of our integral are 0 to 1, so I don't know if this is allowed.
Nice 😮
Now, I wonder: are the hand prints on the t-shirt intentional, or not?
on the interval x>0, 0 < e^(-x) < 1
To my surprise, this function is mirror symmetric with respect to y = - x
Just reaching out for local insight on zeroth and first integrals. What do those mean in this context?
I googled, and a zeroth integral was described as: the integral of zero is a constant, since the derivative of a constant is zero.
And that looked a bit like a circular argument to me at this time of day - of course, most zeros look like a circular arrangement anyway.
A 'zeroth' integral is just evaluating a function at two points.
Google interpreted your question as "what is the integral of 0" which is just a constant of course.
A 0th integral means that there is not actually an integral, but you try to pretend there is one and go backwards, making two bounds and differentiating the function to get an integral that evaluates to the original problem. Usually this is done when the original problem has something that can interact with an integral (most likely just another integral like in this video) and there isn't really much to do besides making a 0th integral. In most cases you try to swap the integrals and evaluate the original one first.
I think this is the same as Feynman's technique but slightly altered to be more concise
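A concrete way to see the "zeroth integral" idea for this particular integrand, as a numerical sketch (my own illustration, assuming mpmath; the video's exact parametrization may differ):

```python
# Pretend ln(1 - e^(-x)) is already the output of an integral:
# f(1) - f(0) = integral_0^1 f'(y) dy, with f(y) = ln(1 - y*e^(-x)) and f(0) = 0.
# Check the identity at the arbitrary sample point x = 2.
import mpmath as mp

x = mp.mpf(2)
lhs = mp.log(1 - mp.exp(-x))
rhs = mp.quad(lambda y: -mp.exp(-x) / (1 - y * mp.exp(-x)), [0, 1])
print(lhs, rhs)   # both ≈ -0.14541...
```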
@@not_vinkami wow - that is great!
Online stuff is excellent, but nothing beats a knowledgeable human
Thank you again my friend!
@@TheEternalVortex42 So there is nothing inbetween?
Hi,
That being said, I would be surprised if Michael decided to remove it, because now it's set in stone, on the channel's banner.
Man, just use the Taylor series for the logarithm
I = \int_0^\infty ln(1 - e^{-x}) dx
=> let u = 1 - e^{-x}, then du = (1-u) dx
=> I = \int_0^1 ln(u)/(1-u) du = \sum_{n=0}^\infty \int_0^1 u^n ln(u) du
= \sum_{n=0}^\infty [ln(u) u^{n+1}/(n+1)]_0^1 - \sum_{n=0}^\infty \int_0^1 u^n/(n+1) du   (by parts)
= 0 - \sum_{n=0}^\infty \int_0^1 u^n/(n+1) du   (the boundary term vanishes: ln(1) = 0 at u = 1, and u^{n+1} ln(u) -> 0 as u -> 0)
= - \sum_{n=0}^\infty 1/(n+1)^2 = -\pi^2/6
Only a geometric sum, no fancy steps.
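The one integral that derivation leans on can be spot-checked numerically (my own sketch, assuming mpmath):

```python
# integral_0^1 u^n ln(u) du = -1/(n+1)^2; check it for an arbitrary exponent.
import mpmath as mp

n = 3   # arbitrary example exponent
print(mp.quad(lambda u: u**n * mp.log(u), [0, 1]))   # ≈ -0.0625
print(-1 / (n + 1)**2)                               # -0.0625
```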
You couldn't have done this in a worse way. Just write the series expansion of ln(1-u) for u=e^(-x) and it's done. And I don't understand why you love so much to turn things into multiple integrals when you're actually using Feynman's technique. It looks like you love to overcomplicate things. You're really smart and knowledgeable, no doubt about it.