I would just add that the nth root of anything that *does not depend on n* tends to 1 as n goes to infinity, simply because for any finite a > 0 not depending on n, a^(1/n) behaves like a^(1/∞) = a^0 = 1 in the limit. Just a small thing.
Alternate solution: the term in the limit can be written as (1/n * 2/n * 3/n * ... * n/n)^(1/n). Taking the log of both sides, we get log(L) = lim (1/n) * sum(log(i/n)) for i = 1 to n. This is just the integral of log(x) from 0 to 1, which equals -1, and hence the limit L = exp(-1).
It's cool that you can get an intuition of where the limit might be. For instance, with this limit I reasoned that n! is less than n^n for sufficiently large n. If you replace the n! in the limit with n^n, you get one; replacing that n^n with something smaller, i.e. n!, must then make the limit less than one. So before I even started watching the video I knew roughly where the answer should be.
By using Stirling's approximation, n! = E(n) * n^n * e^(-n) * sqrt(2*pi*n), where E(n) -> 1 as n -> infty. So just plug that in and you get E(n)^(1/n) * (1/e) * (2*pi*n)^(1/(2n)) -> 1/e
At 11:30 he claims that it is pretty easy to show by induction that (n!)^(1/n)/n is decreasing. I'm not able to do it, neither by induction nor otherwise. Can someone show me how to prove that?
So this first proof works with the natural logarithm. What if we used the log with base two? All the steps work the same, and at the end we have log_2(L) = 1 and then L = 2??
Natural logs with Stirling's approximation work as well: ln((n!)^(1/n)/n) = (1/n)ln(n!) - ln(n). (This holds by ln(x^a) = a*ln(x) and ln(x/y) = ln(x) - ln(y).) By Stirling's approximation, ln(n!) ≈ n*ln(n) - n for large n (which definitely applies here). Thus we get (n/n)ln(n) - n/n - ln(n) = -1. Undoing the log, we get lim((n!)^(1/n)/n) = e^(-1) = 1/e. (I'm sure this 1-minute method wouldn't apply unless Stirling's approximation is justified on the same reasoning as in the video, but I'm a lazy physicist. I accept this lack of rigor.)
Didn't know how to go beyond that, since I didn't know the second given, but IMO an easier way to see that a limit exists is recognizing that, since it's bounded by zero from below, it's also bounded by 1 from above: n! is strictly less than n^n, so (n^n)^(1/n)/n = n/n = 1 > (n!)^(1/n)/n > 0. You could also see that it's actually bounded by 1/2, due to the arithmetic-geometric mean inequality: (n!)^(1/n) is the geometric mean of the natural numbers up to n, whose arithmetic mean is (n+1)/2, so (n!)^(1/n)/n is strictly less than (n+1)/(2n) for all n > 1, and as n goes to infinity, (n+1)/(2n) goes to 1/2. That's not really necessary since you already know it's bounded, but it's a nice sanity check.
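That AM-GM bound is easy to check numerically; here's a quick Python sketch of my own (the helper name `nth_root_factorial_over_n` is just for illustration):

```python
import math

def nth_root_factorial_over_n(n):
    # (n!)^(1/n) / n, computed via logs to avoid overflow for large n
    log_geometric_mean = sum(math.log(k) for k in range(1, n + 1)) / n
    return math.exp(log_geometric_mean) / n

# (n!)^(1/n) is the geometric mean of 1..n, and AM-GM bounds it by the
# arithmetic mean (n+1)/2, so (n!)^(1/n)/n < (n+1)/(2n) for n > 1.
for n in (2, 10, 100, 1000):
    assert nth_root_factorial_over_n(n) < (n + 1) / (2 * n)
```

For large n the computed value drifts down toward 1/e ≈ 0.368, comfortably below the 1/2 bound.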
Actually, if you want to apply your second theorem, you still need to show that lim (a_n)^(1/n) exists; so currently we only know that if (n!)^(1/n)/n were to converge, it would be to 1/e, isn't it? (Probably you can show it is monotone and bounded to ensure convergence.)
Wait... in the proof of the 2nd tool, do you get a problem if L = 0? If you then multiply an even number of inequalities (to make the example simple: 2), you get (-eps)^2 < a_(N+2)/a_N < (eps)^2, which is obviously wrong for all eps
Well, the first tool you prove can be used as a definition for e, on which e^x and ln x are based, but for the sake of understanding I will let it slip
Minor little edge-case mistake in the argument about a(n+1)/a(n) -> L implying a(n)^(1/n) -> L. It's possible that L = 0, in which case the claim that "all the parts here are positive, so we maintain the direction of the inequalities" fails: all those "L - epsilon" values are strictly negative when L = 0. It's trivial to correct... just use 0 < a(n+1)/a(n) (via all a(k)'s positive) in the list of inequalities in the case when L = 0, do all the rest the same way, and it works out fine. At the end you have: 0 < a(n+1)^(1/(n+1)) < (0 + epsilon) * [a(N) / (0 + epsilon)^N]^(1/(n+1))
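To see the L = 0 case concretely, here's a small numerical sketch of mine (not from the video) using a_n = 1/n!:

```python
import math

# For a_n = 1/n!, the ratio a_(n+1)/a_n = 1/(n+1) tends to L = 0,
# and the corrected argument says the root a_n^(1/n) should also tend to 0.
def ratio(n):
    return 1.0 / (n + 1)  # (1/(n+1)!) / (1/n!) simplifies to 1/(n+1)

def root(n):
    # (1/n!)^(1/n) = exp(-(1/n) * ln(n!)), via a log-sum to avoid overflow
    return math.exp(-sum(math.log(k) for k in range(1, n + 1)) / n)

assert ratio(10_000) < 1e-3
assert root(2000) < root(1000) < 0.01  # decreasing toward 0
```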
Isn't that the definition of e? Hence it doesn't need to be proven. Though I've heard of people starting with the Taylor series of e^x as a definition and then proving the usual "definition", but that seems so backwards to me.
Typically in a real analysis course, one defines the whole concept of real number exponentiation using e^x = 1+x+x^2/2 +... . Considering he's a professor who is well aware of mathematical rigor, it doesn't surprise me that he accepts this as a definition instead of the standard limit taught to high schoolers
@@EssentialsOfMath I always avoided that definition as I was used to the other one and all the proofs that have anything to do with e came from there for me but seeing how much accepted that definition is I think I'll have to "reprogram" myself.
@@malawigw again, you have to think about what it means to take a real number to a real number exponent. It doesn't make sense. You can define natural number exponentiation in the standard way a^n = a*a*...*a, rational number exponentiation using roots, but when it comes to real number exponentiation you would either have to take a limit of rational exponents or use the Taylor series as starting point. Walter Rudin takes the latter approach in his texts on analysis
Can we do it this way? Let K be the limit. If we take ln of both sides we have (with n going to infinity) ln(K)= ln [lim (n!/n^n)^(1/n)]=lim (1/n *ln(n!/n^n) = lim 1/n * (sum from i=1 to i=n of ln(i/n)) = integral from 0 to 1 of ln(x) =-1 So ln(K)=-1 K=e^(-1)=1/e
I see other comments discussing the validity of assuming e is defined as the base of the natural log during the proof for the first tool. I am still confused, can you not by the same logic begin by taking (instead of ln of both sides) the log base 5 of both sides? Continuing the proof will be identical, thereby proving the limit = 5 (or any base you choose).
10:03 Is there a math error in the last 2 lines he wrote? The denominator of the left inequality expression should be (L - epsilon), not (L + epsilon). Am I missing something?
Method 1 (limit of a sum), Method 2 (Stirling approximation), Method 3 (Cauchy's second theorem), Method 4 (Stolz-Cesàro theorem), Method 5 (Paul havers formula)... Comment and let me know if you get some other methods 😊👍
Hey Michael, I know it's pretty obvious but exactly why is the limit in the case of a discrete variable equivalent to a continuous variable case? It's possible that a function has some properties evaluated at integers but not on other real numbers.
Actually, the limit of an explicit sequence, say f(n), isn't always the same as the limit of f(x). For example, lim cos(2pi*x) as x->infty doesn't exist, whereas lim cos(2pi*n) = 1. More generally, suppose that you have a sequence. Graphically, this sequence is a set of points. Through those points can pass an infinity of different curves. Some of those functions might converge and others not. In other words, a sequence doesn't contain enough information to tell the behavior of a function that contains all points of the sequence. However, if f has a limit as x->infty, then so does f(n). It's possible to prove it (you might want to check this out: math.feld.cvut.cz/mt/txta/2/txe3aa2e.htm )
You just need to show that a_2 < a_1 and then the general case for all n > 1 (so a_(n+1) < a_n). That makes it monotonically decreasing, and since a_n > 0, it's bounded below by zero, so the limit exists.
Great explanation. I have a doubt about the proof of the second tool. You say lim as n tends to infinity of (a_N / (L + e)^N)^(1/(n+1)) = 1. How do you prove that? Thanks.
Because N is some fixed number, but n tends to infinity, and the nth root of any fixed number tends to 1. If you want a proof that a^(1/n) (the nth root of a) tends to 1, you can look at this video ruclips.net/video/9dvfqvqVU5g/видео.html ; it is in Russian, but I think it is easy to understand.
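For what it's worth, that fact is easy to verify numerically (a throwaway sketch of mine):

```python
import math

# For any fixed a > 0, a^(1/n) = exp(ln(a)/n) -> exp(0) = 1 as n grows,
# regardless of whether a is tiny or huge.
def nth_root(a, n):
    return a ** (1.0 / n)

for a in (0.001, 7.0, 1e9):
    assert abs(nth_root(a, 100_000) - 1.0) < 1e-3
```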
Why would you need to prove that lim(1+1/n)^n =e, if this is a definition of 'e'?. The formula for ln'(x) used in applying l'Hôpital rule is actually a consequence of this definition.
The rearranging of (L-e)^(n+1-N) to (L-e)^(n+1)/(L+e)^N needs at least some explanation, or was it a mistake? Because he makes the same rearrangement with the other part, (L+e)^(n+1-N) = (L+e)^(n+1)/(L-e)^N, and then corrects - to +, so it is (L+e)^(n+1)/(L+e)^N. Of course later it goes to 1, so it doesn't matter.
I wish you were one of my university lecturer's, your explanations are so clear.
But isn't this high school level maths?
@@namansingla2975 no, epsilon-N and epsilon delta is taught usually in first year of college (if you're doing engineering)
@@namansingla2975 He is very fast, which is university style, but the problems he is solving here are more practically oriented (as you call it in mathematics), whereas at university you have to learn a lot more theory, which means theorems and lemmata.
@@namansingla2975 Evaluating limits for converging series using the ratio test and root test was taught in my first year of university (I'm not sure if there's a different curriculum order in the US?). He would need to slow down for high school level.
Ultimately I like how he explains every little step; many of my university lecturers skipped steps they deemed trivial or unimportant, which every now and then lost people in the lecture hall. At that point, most people just copied down what was being written on the chalkboard so that they had it for their notes, rather than actually understanding what was going on. The clearer the explanations, the clearer each logical step is, which makes it easier to understand. As a maths teacher now, I make sure that every little step is explained (relative to the ability of the class I'm teaching) to minimise students getting lost on a problem and keep them engaged. That's why I think Michael Penn is great, as he keeps your attention for the whole video.
I wish he were my professor because he's such eye candy :/
I don't know if anybody else feels like this, but your videos are awesome for someone with some formal education in math, but no interest in the minutiae. You always seem to find the right amount of rigor to satisfy my need for completeness, but you never descend into a spiral of proving everything down to the axioms of set theory. It keeps your videos easy to follow while not hand-waving too much away, which is quite impressive. I do enjoy me an hour long Mathologer masterclass now and then, but after a long work day, I need something less intense. Thanks for doing these videos!
I respectfully disagree on this video. Don't you think he should've done it without these tricks, which I don't see how anyone could or would derive at all?
@@leif1075 Tastes differ. You may feel differently than I do about this. Doesn't matter to me :)
The natural log of the limit is just equal to the definite integral of ln(x) from x=0 to 1 (by a Riemann Sum). The value of the integral is -1 and hence the limit will be 1/e.
This can be solved without using the two results that were proved, by assuming the limit equals L and taking the natural logarithm of both sides (as done in the proof of the first result).
ln(L) = lim ln((n!/n^n)^(1/n))
By incorporating the n^n in the denominator into the logarithm of n!, a sum is obtained whose limit is a Riemann sum, as shown below.
ln(L) = lim (1/n) * (ln(1/n) + ln(2/n) + ... + ln(n/n)) as n -> inf
Now the RHS is the integral of ln(x) from 0 to 1; in fact, the form seen in the RHS is the very definition of that integral. The equation above is equivalent to writing:
ln(L) = ∫ ln(x) dx (definite integral from 0 to 1)
Evaluating this integral, we obtain -1 as its value.
∴ ln(L) = -1, or L = 1/e
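The derivation above can be sanity-checked numerically; a quick sketch of my own showing the Riemann sum converging to -1:

```python
import math

# (1/n) * sum of ln(k/n) for k = 1..n is a right-endpoint Riemann sum
# for the improper integral of ln(x) on [0, 1], whose value is -1.
def riemann_sum(n):
    return sum(math.log(k / n) for k in range(1, n + 1)) / n

assert abs(riemann_sum(10_000) + 1) < 0.01                      # sum near -1
assert abs(math.exp(riemann_sum(10_000)) - 1 / math.e) < 0.01   # so L near 1/e
```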
You must be a JEE boi as well 😂
Is not the integral of ln(x) from 0 to 1 an improper integral? How do you show it is -1?
∫(0->1) ln(x) dx
=[1×ln(1)-1]-lim(t->0+)[tln(t)-t]
=-1-lim(t->0+)[tln(t)-t]
However, lim(t->0+)[tln(t)-t]=lim(t->0+)[ln(t)-1]/(1/t)
Use L'Hopital's rule, then you'll find it to be 0
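A numerical companion to the computation above (my own check), confirming both that t*ln(t) vanishes and that the integral comes out to -1:

```python
import math

# The antiderivative of ln(x) is x*ln(x) - x; the boundary term at 0
# vanishes because t*ln(t) -> 0 as t -> 0+.
for t in (1e-3, 1e-6, 1e-9):
    assert abs(t * math.log(t)) < 0.01  # t*ln(t) shrinks toward 0

integral = (1 * math.log(1) - 1) - 0  # upper boundary minus vanishing lower one
assert integral == -1
```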
Great! ruclips.net/video/89d5f8WUf1Y/видео.html
That integral trick is sick! Well done!
The first "tool" is what I was taught as the definition of e.
If you start with the idea that e^x is defined to be the function whose value equals its slope (id est: y = e^x defined by y' = y), then deriving the power series expansion of e^x is pretty simple. It is then a straightforward exercise (using binomial expansion) to show that the limit as N goes to infinity of (1+x/N)^N is exactly the power series of e^x. The limit definition is often used as it provides a simple way to motivate the study of exponentiation, such as continuously compounding interest rates.
@@Hiltok I was taught that the definition of e was as the limit as n tends to infinity of (1+1/n)^n. From that, I was introduced to the function exp(x) as a power series and it was proven that exp(1)=e.
@@mathunt1130 me too, I guess it just depends on your initial definition of e.
There are several properties you could take as the definition of e, and with any of them you can prove the others. So it's completely up to preference which one to take as the definition.
The high school definition of e I learned was the limit. Later on at uni, one of my professors told me that the definition of e is "the real number x such that [ int(1/t)dt from 1 to x ] = 1", because the natural log has historically been thought of as the integral.
Using Stirling's formula we have n!=(n^n)*(e^(-n))*sqrt(2*pi*n)
so the expression will be [n * e^(-1) * (2*pi*n)^(1/(2n))] / n, which goes to e^(-1)
You can't write equality, that just holds for the limit
But nice idea
@@cerwe8861 isn't it equal when n approaches infinity?
@@cerwe8861 for very large n it holds the equality
@@kizyzo1348 yeah, thats what i said, when n approaches infinity these things are equal.
Thats why i think this method works.
But instead of "equal" you write ~, which means asymptotically equal: the quotient of the two sides tends to 1 as n→∞
yeah but the Stirling formula is not trivial to prove, imo it's kind of overkill to use this to get an elementary limit like this one.
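Even without a proof of Stirling, one can compare the Stirling-based shortcut with a direct log-sum computation numerically (a sketch of mine, not a proof):

```python
import math

def direct(n):
    # (n!)^(1/n) / n computed from first principles via a log-sum
    return math.exp(sum(math.log(k) for k in range(1, n + 1)) / n) / n

def via_stirling(n):
    # n! ~ sqrt(2*pi*n) * n^n * e^(-n)  =>  (n!)^(1/n)/n ~ (2*pi*n)^(1/(2n)) / e
    return (2 * math.pi * n) ** (1 / (2 * n)) / math.e

n = 500
assert abs(direct(n) - via_stirling(n)) < 1e-3   # the two agree closely
assert abs(direct(n) - 1 / math.e) < 0.01        # and both are near 1/e
```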
At 8:35, how does the rewrite of (L-epsilon)^(n+1-N) = (L-epsilon)^(n+1)/(L + epsilon)^N hold? Shouldn't it be (L-epsilon)^(n+1)/(L - epsilon)^N?
Yes it should be minus instead of the plus
While you are correct about that, consider that the inequality that is being used is still true. Id est: dividing by (L+epsilon)^N instead of (L-epsilon)^N makes that smallest part of the inequality even smaller, so no harm was done to the outcome.
How do you know L-epsilon is positive for all epsilon?
@@thomaskim5394 the epsilon is chosen such as it satisfies what you are saying
@@youssefelmarjou762 Should any epsilon work? The definition of limit requires any epsilon.
You are my favourite teacher on RUclips because your explanation is best ... thank you very much for your amazing videos.
Thanks!
I especially liked building the product at 6:27.
Almost completely understandable for me without stopping the video too often - good job, Michael!
I love the fact that you prove the tools you're using are true before proceeding. 🥰🥰🥰
Thanks for explaining everything step by step, and proving the tools beforehand.
So cool
In addition , we may use Stirling’s formula or use the sandwich theorem in the following inequality
For large n, we have
sqrt(2*pi) * n^(n+1/2) * e^(-n) < n! < n^(n+1/2) * e^(1-n)
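That sandwich checks out numerically for moderate n (a quick sketch of mine):

```python
import math

# Sandwich: sqrt(2*pi) * n^(n + 1/2) * e^(-n) < n! < n^(n + 1/2) * e^(1 - n)
for n in (2, 5, 10, 20):
    lower = math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)
    upper = n ** (n + 0.5) * math.exp(1 - n)
    assert lower < math.factorial(n) < upper
```

Taking nth roots and dividing by n then squeezes (n!)^(1/n)/n between two quantities that both tend to 1/e.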
I started to follow this channel when we were about 8'000 and now we are 5 times more. Good Work Prof. Penn
But if you use the derivative of ln(x), you use the fact that e equals the limit of (1+1/n)^n
I believe the definition of ln(x) is the function whose derivative is 1/x. So taking the derivative of ln(x) is using the definition, and the limit "definition" of e is proven. The fundamental definition of e is the value x that makes ln(x) = 1. At least that is my understanding.
Another way to demonstrate that is using Newton's Binomial
But you must first prove other logarithm rules, such as bringing the exponent down
Then he needed to prove the limit exists first BEFORE letting L = the limit and taking ln(L).
@@filipbaciak4514 You can do that from the integral
"You can't do this unless you have a continuous variable..." so we change the name of n to x ! :-D :-D :-D !
Or you can be lazy and know Stirling's formula: log(n!) = n log(n) - n + O( log(n) ). Rewrite n-th root of n! as exp( log(n!) / n) = exp( log(n) - 1 + O( log(n)/n ) )
Therefore the expression inside the limit becomes (1/e) * (n/n) * exp ( O( log(n) / n ) )
Take the limit as n -> infinity, the exponential part goes to exp(0) = 1, the n/n is uniformly 1, and thus the limit is 1/e.
Of course, proving the Stirling formula just for this question is a bit of an overkill to say the least LOL
I'm almost sure you can get Stirling's formula right from the fact in the video. Hope he will in one of the next videos.
Getting Stirling's ln-formula (i.e. without the constant) is fairly easy. ln(n!) is essentially the sum of ln(i) with i from 1 to n, you can frame this between the integral of ln(t) over 0 to n, and the same integral over 1 to n+1, that gives you
n ln n - n ≤ ln(n!) ≤ (n+1) ln(n+1) - n = n ln n - n + ln(n+1) + n ln(1+1/n) ≤ n ln n - n + ln(n+1) + 1
so ln(n!) - n ln n + n = O(ln n).
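Those integral-comparison bounds can be spot-checked with `math.lgamma` (which gives ln(n!) as lgamma(n+1)); a sketch of mine:

```python
import math

# n*ln(n) - n <= ln(n!) <= (n+1)*ln(n+1) - n, from comparing the sum of
# ln(i) with integrals of ln(t) over [0, n] and [1, n+1] respectively.
for n in (1, 2, 10, 100, 1000):
    log_fact = math.lgamma(n + 1)  # ln(n!)
    lower = n * math.log(n) - n
    upper = (n + 1) * math.log(n + 1) - n
    assert lower <= log_fact <= upper
    # the gap is only O(ln n), which vanishes after dividing by n
    assert log_fact - lower <= math.log(n + 1) + 1
```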
Converting into integral is easier 🙄
I wish I could like more than once! Super enjoyable video. I'm always happy to see epsilon show up in a limit like this.
This is a really cool way of evaluating this limit! I had to do this limit in my Real Analysis course, and I first calculated the improper integral from 0 to 1 of ln(x) to be -1, and then used the limit definition of the integral on that interval to show that the ln of that limit is the improper integral. Exponentiating both sides and using the continuity of e^x left me with the limit evaluated to be e^-1.
IDK if anybody has mentioned this, but you can just use Stirling approx as well; i.e., n! ~ n^{n}e^{-n}, with approximation getting better as n -> \infty.
I liked how you proved the limit involving the exponential only using differentiability of logarithm. This could bê used to explain why ln is the natural logarithm, but not any other logarithms functions.
Is there a (theorem) name for the second 'tool'? In the proof of the theorem (10:19) I don't understand how these limits tend to 1. He's referring to something that was proved earlier in his class...
There are the constants inside the roots; the limit of (const)^(1/n) is 1 - it's a known fact
For the natural log of L one has to notice that we are interchanging function and limit, so here we use the continuity of ln :)
We can use the Riemann integral: exp(lim (1/n) * sum(ln(1 - k/n), k = 0 to n-1)) = exp(integral of ln(1-x) dx from 0 to 1) = exp(-1).
Great video as always, though there is a hidden assumption that L is different from 0, and therefore there is an epsilon such that L minus epsilon is greater than zero.
It's not a major issue, since you can regard it as a special case and use 0 as your lower bound, but it would be nice for it to be addressed so it's easier to follow.
Keep up the good work!
huh, you can also sub in stirling's approximation for n! early on and it very quickly reduces to e^-1 * lim n-> inf (2*pi*n)^(1/2n) and that limits to 1
how do you get this? I don't understand how to use Stirling here and get rid of the square root of 2pi. Thanks!
@@gemacabero6482 apply Stirling and you get lim n-> inf (2*pi*n)^(1/(2n)) * 1/e
you can factor the 1/e out and then deal with the inner limit.
the limit as n-> inf of (k*n)^(b/n) = 1 for any k and b, so the 2s and pis don't matter. You could prove this to yourself by converting to a continuous variable, taking the log and using L'Hopital's rule, or by using the second tool provided.
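A quick numerical illustration of that (k*n)^(b/n) -> 1 claim (my own sketch):

```python
import math

# (k*n)^(b/n) = exp(b * ln(k*n) / n), and ln(k*n)/n -> 0 for any fixed
# k > 0 and b, so the whole expression tends to exp(0) = 1.
for k, b in ((2 * math.pi, 0.5), (1000.0, 3.0)):
    value = (k * 1_000_000) ** (b / 1_000_000)
    assert abs(value - 1.0) < 1e-4
```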
I have some issues with your second tool. First, L is not guaranteed to be positive (consider a_n = 1/n!), so I feel that the argument is invalid in that case; we should instead consider absolute values, and then it works. Furthermore, if L is positive, I think we should restrict to 0 < epsilon < L.
Thanks for the nice video. At 6:50, if L = 0, then L - epsilon < 0.
You could also use Stirling's approximation (not as rigorous, but quicker) to find the same thing :)
I would say that it's not rigorous at all, it wouldn't even be a proof! That's just hand-waving
9:54 I am not satisfied with the proof. The nth root of a number only equals 1 when you take the limit. But if you are to apply a limit to one part of an equation, then you apply it to all parts of the equation. So what we've proven is that L - ε < lim as n approaches infinity of (a_(n+1))^(1/(n+1)) < L + ε. So we've taken the limit but the expression is still within epsilon of L and not equal to L exactly. Note that epsilon is not zero because n is not connected to epsilon in any way during the proof. I'm not sure if making ε < 1/n is enough to correct the proof, because the squeeze theorem (to my knowledge) is only used on non-strict inequalities, not strict inequalities. Perhaps using a non-strict inequality at the beginning of the proof will fix all the problems, but are we even allowed to do that?
Channels like yours and flammable maths remind me what mathematics is supposed to look like
At 8:55, shouldn't the coefficient of a_N on the left-hand side of a_(n+1) be (L-epsilon)^(n+1)/(L-epsilon)^N?
you can also use Cesaro's lemma
I know a trick for factorials under an nth root: the nth root of n! is asymptotically equal to n/e, turning the limit into lim((n/e)/n) = 1/e
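The asymptotic claim (n!)^(1/n) ~ n/e means the ratio of the two sides tends to 1, which is easy to watch numerically (a sketch of mine using `math.lgamma`):

```python
import math

def ratio(n):
    # ((n!)^(1/n)) / (n/e), with ln(n!) taken from lgamma to avoid overflow
    nth_root_of_factorial = math.exp(math.lgamma(n + 1) / n)
    return nth_root_of_factorial / (n / math.e)

# the ratio should close in on 1 as n grows
assert abs(ratio(10_000) - 1.0) < 1e-3
assert abs(ratio(10_000) - 1.0) < abs(ratio(100) - 1.0)
```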
Looks related to Stirling's formula. Also, the expression under limit is a geometric mean of 1/n, 2/n, ..., n/n.
Wow, I saw that proof that lim (1+1/n)^n = e for the first time ever. I learned it at school, but there we got the formula via difference quotients, and we only detected that there must be a limit by checking with a calculator. I had never seen this proof from the 'other' direction. Very amazing :-)
I love maths, but I don't understand how people can do this kind of mathematical analysis. It's an excellent video :D
Good Place To Sharpen Tools at 1:04
Good Place To Start at 11:50
That first limit is a definition of e though.
It is not a definition
I believe the definition of e is it is the value x that makes ln(x) = 1. ln is the function whose derivative is 1/x.
you could take that as a definition or not. You can just define e as exp(1), and define exp as the only differentiable real function y that satisfies y'=y and y(0)=1 for example. In which case e=lim(1+1/n)^n is something you have to prove, not a definition. Another way to define exp is as the only differentiable real function f such that f(x+y)=f(x)f(y) and f'(0)=1. Or via its power series, or as the inverse function of ln, which can itself be defined as the antiderivative of x->1/x that evaluates to 0 at x=1.
There are so many alternative definitions of e, exp, ln and so on.
Michael uses Spivak's Calculus book and there, e is defined as the base of a log for which the derivative is 1/x. It's probably the most rigorous book on Calculus there is.
A small technical thing: taking the nth root of L-epsilon isn't a valid step if L-epsilon is negative (either because L is 0, or epsilon > L).
The second tool approach is interesting and not trivial.
I think before multiplying the inequalities, we should take a sequence of small epsilons, e.g. epsilon = 1/n, and then continue.
if L=0, then we have a squeeze inequality 0
Perhaps you can also see it as a Riemann sum: (n!)^(1/n)/n = exp((1/n) * sum(k=1 to n) ln(k/n)). As n goes to infinity, (n!)^(1/n)/n -> exp(integral of ln(x) from 0 to 1) = e^(-1) = 1/e.
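The Riemann-sum identity above can be checked numerically. A minimal Python sketch (mine, not from the video); I use `math.lgamma` to compute ln(n!) without overflow, and the helper names are my own:

```python
import math

# (1/n) * sum ln(k/n) is a Riemann sum for the integral of ln(x)
# on (0, 1], which equals -1.
def log_riemann(n):
    return sum(math.log(k / n) for k in range(1, n + 1)) / n

# (n!)^(1/n)/n computed via logs: ln(n!) = lgamma(n+1)
def sequence(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

print(log_riemann(10**4))  # close to -1
print(sequence(10**4))     # close to 1/e
```

Note that log_riemann(n) is exactly ln(sequence(n)), since (1/n) * sum ln(k/n) = (1/n) ln(n!) - ln(n), which is why the integral argument pins down the limit.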
You can use Stirling's approximation
I would just add that the nth root of anything that *does not depend on n* tends to 1 as n goes to infinity, simply because the limit of a^(1/n) (for any fixed finite a > 0 not depending on n) behaves like a^(1/infinity) = a^0 = 1. Just a small thing.
You are a special professor of maths. Thanks so much.
Alternate solution: the term in the limit can be written as (1/n * 2/n * 3/n * ... * n/n)^(1/n). Taking logs on both sides we get log(L) = (1/n) * Sigma(log(i/n)) for i = 1 to n. This is just the integral of log(x) from 0 to 1, which equals -1, and hence the limit L = exp(-1).
We would have used Stirling's approximation;
that is much easier,
^_^
Need more videos on limits please
You can prove the second tool with the Cesàro theorem
It's cool that you can get an intuition of where the limit might be. For instance, with this limit I reasoned that n! is less than n^n for sufficiently large n. If you replace the n! in the limit with n^n, you get one; so replacing the n^n with something less than that, i.e. n!, consequently makes the limit less than one. So before I even started watching the video I knew roughly where the answer should be.
Really lovely stuff. That tool equaling e was my school definition of e!!!
That sister sequence convergence theorem is awesome.
Professor, you can use Stirling's approximation; it's much easier.
An even faster solution can be given by Stirling's asymptotic approximation of n!
By using Stirling's approximation, n! = E(n) * n^n * e^(-n) * sqrt(2*pi*n), where E(n) -> 1 as n -> infinity. Just plug that in and you get E(n)^(1/n) * (1/e) * (2*pi*n)^(1/(2n)) -> 1/e.
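The Stirling route above can be sanity-checked in a few lines of Python (my own sketch; the helper names `exact` and `stirling` are mine, and `math.lgamma` is used to avoid overflowing n!):

```python
import math

# Exact value of (n!)^(1/n)/n via logs: ln(n!) = lgamma(n+1)
def exact(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

# Stirling's prediction: (2*pi*n)^(1/(2n)) / e, whose first factor -> 1
def stirling(n):
    return (2 * math.pi * n) ** (1 / (2 * n)) / math.e

for n in [10, 100, 10000]:
    print(n, exact(n), stirling(n))
```

Both columns converge to each other and to 1/e, with the Stirling column agreeing to many digits already for moderate n.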
At 08:22 I think the denominator should be (L-ε)^N
Can it be that lim a_(n+1)/a_n = 0 as n goes to infinity? This case seems problematic for the proof of the second tool.
No, because he already assumes that both limits exist and are finite
@ 8:36 The denominator should have L-epsilon, not L+epsilon.
At 11:30 he claims that it is pretty easy to show by induction that (n!)^(1/n)/n is decreasing. I'm not able to do it, neither by induction nor otherwise. Can someone show me how to prove that?
Why can’t we use any base for the first tool. Would it change anything if we used base10?
So this first proof works with the natural logarithm. What if we used the log with base two? All the steps work the same, and at the end we have log_2(L) = 1 and then L = 2??
Natural logs with Stirling's approximation work as well: ln((n!)^(1/n)/n) = (1/n)*ln(n!) - ln(n). (This holds by ln(x^a) = a*ln(x) and ln(x/y) = ln(x) - ln(y).) By Stirling's approximation, ln(n!) ≈ n*ln(n) - n for large n (which definitely applies here). Thus we get (n/n)*ln(n) - n/n - ln(n) = -1. Exponentiating, we recover lim (n!)^(1/n)/n = e^(-1) = 1/e. (I'm sure this 1-minute method wouldn't apply unless Stirling's approximation is justified on the same reasoning as in the video, but I'm a lazy physicist. I accept this lack of rigor.)
I think my real and complex analysis textbooks assumed I knew the second tool you used. That would have been so useful
I didn't know how to go beyond that, since I didn't know the second tool, but IMO an easier way to see that a limit exists is recognizing that, since it's bounded by zero from below, it's also bounded by 1 from above: n! is strictly less than n^n, so (n^n)^(1/n)/n = n/n = 1 > (n!)^(1/n)/n > 0. You could also see that it's actually bounded by 1/2, due to the arithmetic-geometric mean inequality: (n!)^(1/n) is the geometric mean of the natural numbers up to n, and their arithmetic mean is (n+1)/2, so (n!)^(1/n)/n is strictly less than (n+1)/(2n) for all n > 1, and as n goes to infinity (n+1)/(2n) goes to 1/2. That's not really necessary since you already know it's bounded, but it's a nice sanity check.
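The AM-GM bound described above is easy to verify numerically. A small Python sketch (mine; the helper `a(n)` computes (n!)^(1/n)/n via `math.lgamma` to avoid overflow):

```python
import math

# a(n) = (n!)^(1/n)/n; by AM-GM, the geometric mean (n!)^(1/n) of 1..n
# is at most the arithmetic mean (n+1)/2, so a(n) < (n+1)/(2n) for n > 1.
def a(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

for n in [2, 10, 100, 1000]:
    bound = (n + 1) / (2 * n)
    print(n, a(n), bound, a(n) < bound)
```

The bound (n+1)/(2n) tends to 1/2, comfortably above the true limit 1/e ≈ 0.368.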
This would've been easier, if we had converted it into an integral , from limit of sum, after taking natural log on both sides.
Actually, if you want to apply your second theorem, you still need to show that lim (a_n)^(1/n) exists; so currently we only know that if ((n!)^(1/n))/n were to converge, the limit would be 1/e, isn't it? (Probably you can show it is monotone and bounded to ensure convergence.)
Cauchy's second theorem on limit
I was taught a very different proof for the 2nd tool, using AM-GM inequalities if my memory isn't deceiving me.
Wait... in the proof of the 2nd tool, do you get a problem if L = 0? If you then multiply an even number of the inequalities (to make the example simple: 2), you get
(-eps)^2 < a_{N+2}/a_{N} < (eps)^2
which is obviously wrong for all eps, since (-eps)^2 = (eps)^2
Just manually plugging in a few values for n on a calculator I see this limit converges slowly.
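That slow convergence is easy to see without a hand calculator. A quick Python sketch (mine; `a(n)` computes (n!)^(1/n)/n via logs, and the stated error rate follows from Stirling's formula):

```python
import math

# a(n) = (n!)^(1/n)/n approaches 1/e slowly: by Stirling,
# a(n) ≈ (1/e) * (1 + ln(2*pi*n)/(2n)), so the error decays like ln(n)/n.
def a(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

for n in [10, 100, 1000, 100000]:
    print(n, a(n), a(n) - 1 / math.e)
```

Even at n = 1000 the value is still noticeably above 1/e, matching the slow convergence seen on a calculator.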
2:30 circular definition??
I have left an upvote to remind myself that I also left a comment (usually in the form of code):
*Limit[(n!)^(1/n)/n , n -> Infinity]*
Well, the first tool you prove can be used as a definition for e, on which e^x and ln x are based, but for the sake of understanding I will let it slip
You can also say that lim n->∞ ((n!)^(1/n))/n =exp( ∫(0 to 1) lnx dx) = exp(-1) = 1/e
Minor little edge-case mistake in the argument about a(n+1)/a(n) -> L implying a(n)^(1/n) -> L.
It's possible that L = 0, in which case your claim that "all the parts here are positive, so we maintain the direction of the inequalities" fails.
All those "L - epsilon" values are strictly negative when L = 0.
It's trivial to correct... just use 0 < a(n+1)/a(n) (via all a(k)'s positive) in your list of inequalities in the case when L = 0, and do all the rest the same way, and it works out fine.
At the end we have:
0 < a(n+1)^(1/(n+1)) < (0 + epsilon) [ a(N) / (0 + epsilon)^N ]^(1/(n+1))
0
Isn't that the definition of e, hence it doesn't need to be proven? Though I've heard of people starting with the Taylor series of e^x as a definition and then proving the usual "definition", but that seems so backwards to me.
Typically in a real analysis course, one defines the whole concept of real number exponentiation using e^x = 1+x+x^2/2 +... . Considering he's a professor who is well aware of mathematical rigor, it doesn't surprise me that he accepts this as a definition instead of the standard limit taught to high schoolers
There are so many "definitions" of e. For example, a = e is the unique number such that d(a^x)/dx = a^x, or d(log_a(x))/dx = 1/x, or int 1/t dt from 1 to a = 1
@@EssentialsOfMath I use lim h -> 0 (e^h - 1)/h = 1 as the definition of e
@@EssentialsOfMath I always avoided that definition as I was used to the other one and all the proofs that have anything to do with e came from there for me but seeing how much accepted that definition is I think I'll have to "reprogram" myself.
@@malawigw again, you have to think about what it means to take a real number to a real number exponent. It doesn't make sense. You can define natural number exponentiation in the standard way a^n = a*a*...*a, rational number exponentiation using roots, but when it comes to real number exponentiation you would either have to take a limit of rational exponents or use the Taylor series as starting point. Walter Rudin takes the latter approach in his texts on analysis
Isn't there another way to turn (n!/n^n)^(1/n) into an integral?
Can we do it this way?
Let K be the limit. If we take ln of both sides we have (with n going to infinity)
ln(K) = ln[lim (n!/n^n)^(1/n)] = lim (1/n)*ln(n!/n^n) = lim (1/n)*(sum from i=1 to n of ln(i/n)) = integral from 0 to 1 of ln(x) dx = -1
So
ln(K)=-1
K=e^(-1)=1/e
I see other comments discussing the validity of assuming e is defined as the base of the natural log during the proof for the first tool. I am still confused, can you not by the same logic begin by taking (instead of ln of both sides) the log base 5 of both sides? Continuing the proof will be identical, thereby proving the limit = 5 (or any base you choose).
The derivative of log_5(x) is not 1/x; that's only true for ln(x) (which is one reason to call it "natural")
10:03 is there a math error in the last 2 lines he wrote?
the denominator of the left inequality expression should be (L - epsilon)
not (L + epsilon).
Am I missing something?
No problem here.
He just didn't explain that (L-e)^(n+1)/(L+e)^N < (L-e)^(n+1)/(L-e)^N, so the lower bound still holds after weakening the denominator to (L+e)^N.
Same for the other inequality.
@@lecureilfou9254 thanks
The Stirling formula also solves this limit (very quickly), and the proof of this formula is interesting
When you said (n!/n^n)^(1/n) is equal to this a_(n+1)/a_n form, shouldn't you prove that the limit of (n!/n^n)^(1/n) exists before assuming that equality?
He mentioned that but left it to us to prove. It is a bounded, monotone decreasing sequence, and therefore it has a limit.
How do you just take the (n+1)th root of the inequalities? What if n+1 is even?
This seems almost obvious with stirling's approximation.
Stirling's approximation would really help; it makes it a one-liner
Could have used the natural log and then a Riemann sum
Method 1 (limit of a sum), method 2 (Stirling approximation), method 3 (Cauchy's 2nd theorem), method 4 (Stolz-Cesàro theorem), method 5 (Paul havers formula)... Comment and let me know if you get some other methods 😊👍
Good job. But I think there is an error in the inequalities: the (L+e)^N terms are factors. But this doesn't change the result. Congratulations!
You should have mentioned that ln of a lim equals the lim of ln, because ln is continuous: F(lim x_n) = lim F(x_n).
A nice simple way to review/learn a limit trick
Hey Michael, I know it's pretty obvious but exactly why is the limit in the case of a discrete variable equivalent to a continuous variable case? It's possible that a function has some properties evaluated at integers but not on other real numbers.
And how to show that this is decreasing?
Actually, the limit of an explicit sequence, say f(n), isn't always the same as the limit of f(x). For example, lim cos(2pi*x) as x->infty doesn't exist, whereas lim cos(2pi*n) = 1. More generally, suppose that you have a sequence. Graphically, this sequence is a set of points. Through those points can pass an infinity of different curves. Some of those functions might converge and others not. In other words, a sequence doesn't contain enough information to tell the behavior of a function that contains all points of the sequence.
However, if f has a limit as x->infty, then so does f(n). It's possible to prove it
(you might want to check this out: math.feld.cvut.cz/mt/txta/2/txe3aa2e.htm )
11:39 How can you prove that it is decreasing with induction? I find it kinda difficult
You just need to show that a_2 < a_1 and then show the general case for all n > 1 (so a_(n+1) < a_n). That makes it monotone decreasing, and since a_n > 0 it's bounded below by zero, so the limit exists.
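This is only a numerical sketch, not the induction proof asked for, but it does confirm the monotonicity claim for small n (my own code; `a(n)` computes (n!)^(1/n)/n via `math.lgamma`):

```python
import math

# Check that a_n = (n!)^(1/n)/n is strictly decreasing for n = 1..49.
def a(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

vals = [a(n) for n in range(1, 50)]
print(all(x > y for x, y in zip(vals, vals[1:])))  # True: strictly decreasing
```

Together with the lower bound a_n > 0, monotone convergence then gives existence of the limit.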
First, (L-\epsilon)^N should be in the denominator of the L.H.S. Second, what then about the (n+1)st root limit if L = 0 and N is odd?
a_n is positive, so a_(n+1)/a_n is positive, so you can change the L.H.S. to 0 almost at the beginning.
Great explanation. I have a doubt about the proof of the second tool. You say lim as n tends to infinity of (a_N/(L+e)^N)^(1/(n+1)) = 1. How do you prove that? Thanks.
Because N is some fixed number, but n tends to infinity, and the nth root of any fixed positive number tends to 1. If you want a proof that a^(1/n) (nth root of a) tends to 1, you can look at this video ruclips.net/video/9dvfqvqVU5g/видео.html ; it is in Russian, but I think it is easy to understand.
I used an integration method
An easier but rigorous way for this limit is the Stirling approximation for n!
I guess this will be easier using the definition of the definite integral.
Take the log of the limit on both sides and we get log L = integral of log(x) from 0 to 1 = -1.
So L = 1/e
Great as always
Why would you need to prove that lim (1+1/n)^n = e, if this is a definition of e? The formula for ln'(x) used in applying l'Hôpital's rule is actually a consequence of this definition.
The rearranging of (L-e)^(n+1-N) to (L-e)^(n+1)/(L+e)^N needs at least some explanation, or was it a mistake? He makes the same rearrangement with the other part, (L+e)^(n+1-N) = (L+e)^(n+1)/(L-e)^N, and then corrects - to +, so it is (L+e)^(n+1)/(L+e)^N. Of course it later goes to 1, so it doesn't matter.