This limit actually breaks several naive l'Hôpital limit calculators, which blindly take derivatives of numerator and denominator until they succeed, as it requires an ungodly number of steps to converge.
As someone unfamiliar with this channel I was really hoping for a discussion on when the approximation is bad in real world scenarios. I was not prepared for an entire video of just pure contextless math.
Michael - I was really confused with "when" in the title and "where" on the chalkboard. When I read the title I thought: Oh! Michael is just going to find, using his own rule of thumb, when the error between sin(x) and x is ... [enter Michael's rule of thumb]. Then I opened the video, saw "where" and the 1/sin(x) - 1/x, and was like: what is going on here? :D
So (x - x^3/6)^2 = x^2 - x^4/3 + ... (let's ignore the x^6 term). Of course you can factor x^2 (3 - x^2)/3. Then the difference becomes (1/x^2)(3/(3 - x^2) - 1). A bit of simplifying leaves you with 1/(3 - x^2), and of course taking the limit gives 1/3. Moral of the story: don't use linear approximations if you're gonna square them -- squaring x + epsilon gives x^2 + 2*epsilon*x, and you'll miss that cross term -- and be super extra careful if you're sticking it in a function that blows up like 1/x
It's really interesting how the given approximation sin(x) ~= x for x ~= 0 conflicts with the given fact lim(x->0) sin(x)/x = 1. If you think about it, they're effectively saying the same thing. x/x = 1 is just an identity, so if sin(x)/x -> 1 as x approaches 0, that can only be true if sin(x) ~= x as x approaches 0. In fact, take those places where you implement the "limit fact" to get the true answer: you get the same results if you implement the approximation instead, because they seem to simply be different ways of writing the same thing. I've been out of school for a while, so I don't remember the Taylor series well enough to do your exercise, but I think the main difference between the two is that the latter is presented in a way which can't be used as liberally, eliminating these certain situations where it doesn't work.
If: |A - B| < E, then: |1/A - 1/B| < E/|AB| So in general, "approximately equal" near the singularity is a bit dangerous. If E < E'|AB| for some other E' that is also small, then suddenly you're in business again. You can multiply to get: |1/A^2 - 1/B^2| < (E/|AB|)*|1/A + 1/B|, in which case you need E to be even smaller in order to get that the end result must be small.
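The bound in this comment is easy to sanity-check numerically; here is a quick sketch (the values A and B are arbitrary illustrative choices), noting that for the first difference the bound comes from an exact identity:

```python
# |1/A - 1/B| = |A - B| / |A*B| exactly, so |A - B| < E gives |1/A - 1/B| < E/|A*B|.
A, B = 0.1000, 0.1001          # two nearby values (arbitrary choice)
lhs = abs(1 / A - 1 / B)
rhs = abs(A - B) / abs(A * B)
print(lhs, rhs)                # agree up to floating-point rounding
```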
Calculating values at x= 1/2^n, Excel gets closest to the correct limit at 1/(2^12). Then rounding errors start to take it away from the correct limit at higher values of n -- reporting zero at n=24.
Wait, what? The correct limit for the 1st one is 0 and the correct limit for the 2nd one is 1/3, NOT 1/(2^12), so how in the impossible hell did you get that in Excel?!?!?
@@megauser8512 I didn't say the limit is 1/(2^12). The value of the expression in the second half of the video (near the 7:00 time stamp, for which we want the limit) is "close" to 1/3 when x = 1/(2^12). It is farther away from 1/3 when x = 1/(2^13) -- in Excel, because it starts making important rounding errors at smaller x-values of the form x= 1/2^n.
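The same experiment is easy to reproduce outside Excel; here is a minimal double-precision sketch in Python of evaluating the second expression at x = 1/2^n (the particular n values sampled are arbitrary):

```python
import math

def g(x):
    # (x^2 - sin^2 x) / (x^2 * sin^2 x); the true limit as x -> 0 is 1/3
    return (x * x - math.sin(x) ** 2) / (x * x * math.sin(x) ** 2)

for n in (4, 8, 12, 20, 28):
    print(n, g(2.0 ** -n))
```

At moderate n the value sits close to 1/3; once x is small enough that x^2 times machine epsilon is comparable to the true difference x^4/3, the subtraction x^2 - sin^2 x loses its significant digits and the computed values drift away, much as the Excel run described above does.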
For higher powers of sin(x) and x than 2, e.g. 1/sin³x - 1/x³, the limit goes to infinity. So interesting that order 1 is zero, order 2 is non-zero but finite, order >2 is infinite. Why does only the second order have a finite, non-zero limit?
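A numeric sketch supports the cubed case: with sin x = x - x³/6 + ..., the numerator x³ - sin³x starts at x⁵/2 while the denominator x³sin³x starts at x⁶, so 1/sin³x - 1/x³ should blow up like 1/(2x). A quick check (sample points chosen arbitrarily):

```python
import math

def h(x):
    # 1/sin^3(x) - 1/x^3, expected to grow like 1/(2x) as x -> 0+
    return 1.0 / math.sin(x) ** 3 - 1.0 / x ** 3

for x in (0.1, 0.01, 0.001):
    print(x, h(x), x * h(x))   # x*h(x) should settle near 1/2
```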
L'Hôpital's rule is just a shortcut for expanding the numerator and denominator of a ratio in Taylor series and then comparing the first non-vanishing terms. The number of times that you apply it indicates the leading order of the expansion (so if the numerator and denominator start at x^3 you will apply L'Hôpital's 3 times). The problem is that sin x ~ x is terminated after the O(x) term, so applying L'Hôpital's 3 or more times will not give you the right limit, since you are ignoring the x^3 term
Usually, cos x = 1 accompanies sin x = x. However, there are cases where cos x = 1 - x²/2 must be used, no matter how small x is. (I'm thinking of the harmonic oscillator potential energy and similar.) I did one physics problem where using sin x = x - x³/6 was an essential step, again regardless how small x was. Usually, if a cancellation gives zero with the approximation, reconsider the problem with the next-order term.
Hey, I thought it was a common fact that you can take the reciprocal of an approximation only if the limit is not 0. I.e. a(x) ~ b(x) is equivalent to 1/a(x) ~ 1/b(x) only if neither a(x) nor 1/a(x) tends to 0 (and of course the same holds for b(x))
Isn't sin(x) ~ x for small x saying the same thing as limit as x->0 sin(x)/x = 1 ? So in effect you used the same approximation in both methods of evaluation (for the squared version).
Even though I get the idea that approximations will inevitably fail, I cannot explain why. Still, the idea that small differences may accumulate reminds me of the time when somebody explained to me why 1^infinity is indeterminate. The number 1 in the expression is not the constant 1, but the value 1 achieved as the limit of some non-constant expression.
At the beginning of watching these videos I thought the formula lim f/g = lim f'/g' was the "log beetle" rule, and then I realized it was l'Hôpital's rule - and I am French 😂 The American accent is really fun
@@faithnfire4769 Yes, that is true. I was not aware of this some time ago, but as in France we already have several accents, I cannot imagine how many exist in the USA
For people who want to do it via L'Hôpital's rule: you need to go to the 4th derivative, so it's not terrible, but it's a lot of work. The result clearly is the same.
A red flag for both approximation usages is infinity minus infinity: 1/sin x is infinite under this approximation. By the way, the more times you need to apply L'Hôpital's rule, the more accurate an approximation you'll need to get the right answer.
Your method for the 1st was wrong... once you distribute limits, you can't bring them back together. So in lim x->0 (1/sin x - 1/x), if you distribute it, it becomes lim x->0 (1/sin x) - lim x->0 (1/x), and that gives infinity - infinity; you can't combine them
Isn’t the problem that the limits as written are indeterminate (infinity - infinity) forms, so using the approximation is just about guaranteed to break down unless you can rearrange them into something more stable?
Of course it is. Remember your math teacher when s/he told you that arithmetic is defined for numbers? Infinity is not a number. As soon as you have something that is not a number in an arithmetic expression you are in undefined territory. A mild form of saying "all hell breaks loose" or "bad things will happen". Besides, never ever start solving a problem using an approximation if other more solid methods exist. If at the end of a problem you run into something no known method solves, then use it with caution. Take it as a safety parachute in case everything else fails.
In the video you mentioned that you found this problem on MSE, and that the link could be found in the description. However, I checked the description and I couldn’t find it there. Did I miss it?
I'm bothered by the fact that lim x->0 (sin(x)/x)^a = 1 for all a, so rearranging this suggests sin(x)^a ~ x^a as x -> 0. I do understand the next step is not legal as the divisor is 0, but nonetheless I am upset that taking the inverse of both sides is no longer equivalent
That's insanely informative, thanks - never knew this sort of thing would happen. I'm still surprised by the end result. What does all this mean? How will I carry on?!?!
Isn't saying lim x-->0 sin x / x = 1 equivalent to saying that sin x = x while x tends to 0? If so, how can we utilize the sin x / x = 1 limit to get our answer?
lim_x->0(sin x/x) = 1 can be proven with the squeeze theorem (and some geometry). Its validity is not in question. The question is whether this means you can substitute x for sin x in any limit where x -> 0 The answer is no.
I think my hold up is that the notion of the limit of a product being the product of limits is generally not true. Here, that happens to work, but it seems it's kind of just as arbitrary as assuming that the sin small angle approximation works. If anyone here has more of a background with real analysis and wants to give a brief explanation of why this product/limit argument is true I'm very curious.
I'm no expert, but this is what I see. He knew the limit of two of the parts existed. And he found by the end that the limit for the third part existed. So he thus concluded that the original limit also existed. Is this a valid assumption?
I would think that the fact that d sin(x)/dx = 1 while d sin²(x)/dx = 0 at x = 0 would indicate that the approximation for the former wouldn't work for the latter.
Even in light of an approximation of the form f(x) ~ g(x) ~ 0 for x close to 0, you typically would NOT expect that limit as x --> 0 of F(f(x)) - F(g(x)) would simplify to zero when the encapsulating function F is not continuous at 0. So is the first limit working out here a fluke, or is there a deeper, nifty result that lets you swap limits through discontinuous functions sometimes?
Hey hey, don't you dare attack the 1st fundamental Theorem of Engineering... It works well enough that planes only crash sometimes.
Don't you mean 2nd Fundamental Theorem of Engineering? The 1st is π = e = 3
Planes crash when accountants overrule engineers.
Reminds me of that video about engineering professors. There was that line: "e is 3, pi is 3, 4 is 3"
@@NavyBlueMan True, although there is also the famous 0th Theorem, "≈ = =", that allows for all other theorems.
So I think that you built yourself a strawman here. The approach you took is very much a low-brow approach by assuming it's equal when it's only approximate. Expand in terms of power series and you're fine.
"Those rules were being appropriately bent" - quote of the day.
Maths is all about how to bend rules appropriately.
😂😂
That's why such approximations usually come with some "+O(x^n)" term (here that would be O(x^3)) to indicate to what order the approximation holds.
But the approximation fails at order x^2, not x^3.
I'm unfamiliar with this notation for estimating approximation accuracy, but it looks similar to the "big-O" notation for estimating algorithmic complexity class from CS which I'm more familiar with. Is this similarity more than incidental? Where can I read more about this?
@@JoeTaber These are called "Landau symbols".
@@xCorvus7x He's saying that sin(x) ~ x + O(x^3), so when taking the limit 1/(sin(x)) - 1/x, we get the limit of O(x^3)/(x^2 + O(x^4)), which simplifies to O(x^3)/O(x^2), and the limit is equal to 0. When we take the limit 1/(sin(x))^2 - 1/x^2, we get the limit of (x^2 - (x + O(x^3))^2) / (x^2 * (x + O(x^3))^2), which simplifies to O(x^4)/O(x^4) = O(1). Therefore, the limit value is a constant in this case.
@@sweetcornwhiskey Still, none of the approximations of sine employed here have an O(x^2).
Besides, how does O(x^3)/(x^2 + O(x^4)) simplify to O(x^3)/O(x^2)?
When including just the next term of the Maclaurin series of sin(x), this is easy:
sin x = x - x^3/3! + higher order terms, so sin^2 x = x^2 - 2x^4/3! + higher order terms
Therefore x^2 - sin^2x = 2x^4/3! + h.o.t. and x^2 sin^2x = x^4 + h.o.t.
So in the limit where x goes to 0 their ratio is = 2/3! =1/3.
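A quick numerical check of these two leading-order claims (the sample point x = 0.01 is arbitrary, just "small"):

```python
import math

x = 0.01
num = x * x - math.sin(x) ** 2       # should be ~ 2x^4/3! = x^4/3
den = x * x * math.sin(x) ** 2       # should be ~ x^4
print(num / (x ** 4 / 3))            # near 1
print(den / x ** 4)                  # near 1
print(num / den)                     # near 1/3
```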
Let's use sin x = x + O(x³); then the first limit becomes (x - sin x)/(x sin x) = -O(x³)/O(x²) -> 0, while the second limit (x² - sin²x)/(x² sin²x) = -O(x⁴)/O(x⁴) -> O(1); the approximation can't give an answer in that case.
Using sin(x)=x-x^3/6 + O(x^5) should be enough for the second one.
That’s probably the answer that best marries the intuition of why the first-order approximation *should* fail with the precise mathematics of why it does.
Before watching let me share how we were taught in physics in our first year at uni (you guys would call it the freshman year, but East of the Atlantic we keep the term Fresher for the first few weeks only, not the entire year)
Those demanding rigour (or rigor) should look away now.
From either the Taylor series or the fact that sin is an odd function we know that x^2 is going to be irrelevant.
The x^1 term is the approximation we are looking at so we can't throw away x^1.
Therefore our test of validity "must be" x^3.
As a physicist therefore I would take the approximation to hold whenever x^3 is smaller than the absolute size of the accuracy we want.
But again as a physicist I usually think in terms of proportional error = x^3 / x^1 = x^2
So I would limit the validity to x^2 < desired relative error, or
10·x^2 < desired percentage error
And that, for a physicist, is a "good enough" place to stop.
Good enough is enough for a physicist. Math(s) tutors may differ on that...
PS after watching I see that Michael was on a totally different tack to me. A symptom of how we physicists (ab)use maths ;)
Intuitive explanation of what goes wrong: We have that 1/sin(x) - 1/x goes to 0. So graphically, the graph of 1/sin x and the graph of 1/x get closer and closer together. However, they also both go further and further away from the x-axis. Which means that squaring each term is going to magnify the distance between them, and it will magnify the distance more and more as the graphs grow away from the x-axis. Turns out that this basically perfectly counterbalances the fact that the original graphs get closer and closer together, and ultimately, the distance goes to 1/3.
Very true. I would also mention why the formal naive derivation is wrong. The skipped step is what messes up the conclusion - lim (1/sin x - 1/x) = lim(1/sin x) - lim (1/x) = lim(1/x) - lim (1/x) ≠ lim(1/x - 1/x). The last step does not produce an equality, because lim(1/x) is infinite, and if you subtract infinity from it, the answer is undefined.
Precisely the same as with 0/0 - that is undefined because any number could satisfy the defining equality (a/b = c is the same as b*c = a). Infinities are tricky things.
We considered only the first term of the Taylor expansion for the approximation of sin(x) while we were working with that quantity squared. In this case this approximation is not valid because we are neglecting non-negligible quantities. A similar result should appear if we take the same limit to the fourth power considering only the first two terms of the Taylor expansion
The limit doesn't converge for the fourth power; it shoots up to infinity
@@anuragjuyal7614 i did not do the computation, i was talking about a more general case. Surely i did not expect that this limit did not converge if raised to the fourth power, that's fun
"In this case this approximation is not valid because we are neglecting non-neglectible quantities" This argumentation is circular.
@@thomasullmann7447 how so?
@@shayboual1892"the limit doesn't work because we are ignoring terms that would make the limit work, because otherwise it doesn't work"
If we write sin(x) = x - eps(x), we know that the limit of eps(x)/x^n is 0 for n < 3 and 1/6 for n = 3.
The 1st limit becomes (after removing higher terms) eps(x)/x^2 -> 0.
The 2nd limit becomes (after removing higher terms) 2x*eps(x)/x^4 -> 1/3.
Similarly, one can show that the difference of cubes diverges.
Removing higher terms is possible inside the limit. Essentially, it boils down to the fact that for any positive delta and m > n, there exist small enough x such that |x^m| < delta·|x^n|.
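These eps(x) limits are easy to confirm numerically; a small sketch (the point x = 0.01 is chosen arbitrarily):

```python
import math

def eps(x):
    # defined by sin(x) = x - eps(x)
    return x - math.sin(x)

x = 0.01
print(eps(x) / x ** 3)           # approaches 1/6
print(eps(x) / x ** 2)           # approaches 0, so the 1st limit is 0
print(2 * x * eps(x) / x ** 4)   # approaches 2 * (1/6) = 1/3, the 2nd limit
```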
It reminds me of some approximations used by engineering students:
Sin x=x for 0
Also 𝒈 = π² (which sort of was meant to be so)
@@allozovsky after all the Earth is a sphere (within 5% error 😂)
@@rafaelgcpp except in Texas school boards where it's a disk. Or disc, I forget which.
Teachers told me to use 22/7 for pi with relatively high accuracy and 3 for quick approximations.
Hi,
The point is that you are not allowed to add equivalent functions, that's all.
For fun:
3:19 : "ok, good",
4:39 : "ok, nice",
8:33 : "my question to you" (copyright Oskar van Deventer).
Hi professor.
It's good to see you're healing from the surgery 👍😊
After watching the video I entered the second limit into Wolfram Alpha Pro and asked for the step by step solution. The output involved multiple applications of L'Hopital's Rule and of the product rule, as well as elaborate manipulations of the very complicated intermediate equations. Your approach is vastly better.
But his approach requires justifying a lot of limit splitting. I'm not sure if it will be less rigorous than L'H rule 🤷♀️
It is not very hard using L'Hopitals actually, Wolfram Alpha just brute forces it. It only requires one instance of the L'Hopital's if you know the basic limits:
as x → 0, sinx/x = 1 and (1 - cosx)/x² = 1/2
Take,
lim x → 0 (x²-sin²x)/(x²sin²x) = (1+sinx/x) • (x²/sin²x) • (x-sinx)/x³
= 2•1•(x-sinx)/x³
Taking L'Hopitals,
= 2•1•(1-cosx)/(3x²)
= 1/3
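Each factor in that split can be checked numerically; a minimal sketch (evaluation point arbitrary):

```python
import math

x = 1e-3
f1 = 1 + math.sin(x) / x           # limit 2
f2 = x * x / math.sin(x) ** 2      # limit 1
f3 = (x - math.sin(x)) / x ** 3    # limit 1/6, the lone L'Hopital step
print(f1, f2, f3, f1 * f2 * f3)    # product near 1/3
```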
That's why I hate this approximation, it's too intuitive and too "we see that's true" type of thing
thats not true at all, you dweeb
This approximation is good and correct and useful. But you MUST be aware of the 3rd order error and check that it doesn’t contribute anything. The second example was deliberately constructed to expose the 3rd order.
The correct way to do it is to use Taylor series approximation of (sin(x))^2 so we don’t drop any meaningful quadratic terms. And those are meaningful in that case because we already have 1/x^2 in the limit. If we do that properly, everything will work out nicely
Is there also a quick rule for determining how many terms are meaningful?
No, it doesn’t matter if you expand sinx or sin² x since the first term still becomes x² and you need to use the next term for both
sin x ≈ x is a favourite topic of mine; during my classes on nonlinear vibrations we come to the point of the oscillation of the pendulum. At small angular displacement sin x ≈ x holds very accurately, and this can be checked experimentally with the Foucault pendulum at the entrance of our university. On the other hand, when the next higher term, i.e. -(1/6)x³, is included, we get into the mystery of the Duffing equation and then eventually the parametrically forced vibrations. So this approximation enables us to open new paths to the different topics of nonlinear dynamics.
I don't remember what it's called, but a more useful generalization of L'Hopital's Rule is to convert your quantity into a single fraction, then generate Taylor Series for the numerator and denominator, which will give you a more useful and specific picture of the limit behavior. So for example, 1/sinx - 1/x becomes (x-sinx)/(x*sinx), and the numerator and denominator separately expand to ((x^3)/6 - (x^5)/120 + ...) / (x^2 - (x^4)/6 + ...). Now if you just cancel x^2 out of the top and bottom, you can see that the expression behaves like x/6 as x->0, which is more specific than just saying it approaches 0. I hope this is useful to someone!
You really should do these things with Taylor series. sin x / x = 1 - x^2/6 + O(x^4), so (sin x / x)^2 = 1 - x^2/3 + O(x^4), so (x/sin x)^2 = 1 + x^2/3 + O(x^4). Then the limit is that of 1/x^2 ( (x/sin x)^2 - 1) = 1/x^2 (1 + x^2/3 + O(x^4) - 1) = 1/3 + O(x^2), which goes to 1/3. This kind of method works even when you don't have trick factorizations available.
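This Taylor route is easy to check numerically as well; a quick sketch (the shrinking sample points are arbitrary):

```python
import math

def taylor_form(x):
    # (1/x^2) * ((x/sin x)^2 - 1), which the expansion says is 1/3 + O(x^2)
    return ((x / math.sin(x)) ** 2 - 1) / (x * x)

for x in (0.1, 0.01, 0.001):
    print(x, taylor_form(x))   # creeping toward 1/3
```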
To find the second limit in a faster way, you could factorize it using a^2 - b^2, and you would have the 1st limit and use L'Hôpital on the other part.
But the 1st limit is zero and using L'Hopital on the 2nd limit gives infinity, so the entire limit is still in indeterminate form.
When we were taught to use these Taylor series approximations at uni, our teacher always said to go up to the highest polynomial power that appears on its own, and to always use small-o notation. Maybe that's the reason we didn't run into these kinds of problems. Idk, I'm just an engineer. But if I had just used the first polynomial approximation in this case, my teacher would have killed me.
What does it mean to go up to the highest polynomial power? Can you clarify please?
@@momomtjiddu3920 he probably means as many taylor terms as there are powers. I'm looking at a different comment that confirms this, saying that taking a limit to the power n while neglecting the relevant terms, basically truncates any meaning from it, and invalidates the point of approximating, but wouldn't know enough on my own to tell you more.
@@milanstevic8424 Yeah, I'm an EEng and we do taylor series approximations all the time. Basically approximating the "power" of the existing function based on its shape is important. So 1/x should never be approximated with one term, that would be just silly.
@@milanstevic8424 thanks, brother
9:23 Michael acknowledging the classic meme 😂
Wait, what meme?
@@FyneappleJuice There is no meme.
@@FyneappleJuice good place to stop..
The way I think of it: we basically multiplied the numerator and denominator of the sin term by x squared. Distributing the limit will turn the sin term into the x term, but then we have two different limits with infinity in them. That's why it went wrong. The same goes for the previous question, even though the answer came out correct.
Great point about why to be careful with approximations! Is sin(x) ~= x for small x used often in limit proofs? I’ve seen it often in calculations, and sometimes in physics proofs.
The problem is that the order of the Taylor expansion is important.
In the first example, sin(x) = x just happened to be correct up to the third order, which is sufficient.
sin(x)^2 = x^2 is only correct up to O(x^3).
That is not sufficient. But other than working it out the long way, I don't know any way of figuring out the sufficient order.
It’s already been said, but:
sin(x) = x - (x^3)/6 + o(x^4), where o(x^4) collects all terms negligible compared to x^4.
Squaring (the cross terms involving o(x^4) are themselves o(x^4)):
[sin(x)]^2 = x^2 - 2·x·(x^3)/6 + (x^6)/36 + o(x^4)
[sin(x)]^2 = x^2 - (x^4)/3 + o(x^4)
So by moving x^2 to the other side and flipping signs we obtain:
x^2 - [sin(x)]^2 = (x^4)/3 + o(x^4)
So we have the numerator expression.
Now we need to find the denominator.
We come back to the result found earlier,
[sin(x)]^2 = x^2 - (x^4)/3 + o(x^4),
and multiply both sides by x^2:
x^2·[sin(x)]^2 = x^4 - (x^6)/3 + x^2·o(x^4) = x^4 + o(x^4)
Finally:
[numerator]/[denominator] = [(x^4)/3 + o(x^4)] / [x^4 + o(x^4)] = [1/3 + o(x^4)/x^4] / [1 + o(x^4)/x^4]
Taking the limit as x -> 0, both o(x^4)/x^4 terms go to zero, leaving us with:
[1/3]/[1] = 1/3
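The series algebra in this comment can be checked with exact rational coefficients (my own sketch; the truncation order N and the helper `mul` are mine):

```python
# Multiply truncated power series with exact coefficients and check that
# sin(x)^2 = x^2 - x^4/3 + higher order, as claimed above.
from fractions import Fraction

N = 6  # keep coefficients of x^0 .. x^5

def mul(p, q):
    """Multiply two coefficient lists, truncating at order N."""
    r = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

# sin(x) = x - x^3/6 + x^5/120 - ...
sin_series = [Fraction(0), Fraction(1), Fraction(0),
              Fraction(-1, 6), Fraction(0), Fraction(1, 120)]
sin_sq = mul(sin_series, sin_series)
print(sin_sq[2], sin_sq[4])  # coefficients of x^2 and x^4: 1 and -1/3
```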
Laurent series! 《The shortest path between two truths in the real domain passes through the complex domain》.
I tried to follow the Laurent series but I found it too complex
General rule - any time it looks like you're dividing by something close to zero, be ready for weirdness.
expand sin(x) to higher order x+x^3/3!+... until you get some nontrivial terms when you have this kind of cancellation, will give you the right result.
This limit actually breaks several naive l'hospital limit calculators, which blindly take derivatives of numerator and denominator until they succeed, as it requires an ungodly amount of steps to converge
As someone unfamiliar with this channel I was really hoping for a discussion on when the approximation is bad in real world scenarios. I was not prepared for an entire video of just pure contextless math.
Michael - I was really confused by "when" in the title and "where" on the chalkboard. When I read the title I thought: Oh! Michael is just going to find, using his own rule of thumb, when the error between sin(x) and x is ... [enter Michael's rule of thumb]. Then I opened the video, saw "where" and the 1/sin(x) - 1/x, and was like, what is going on here? :D
Why not simply expand sin(x) as its power series?
So (x - x^3 /6)^2 = x^2 - x^4 /3 + ... (let's ignore x^6 term)
Of course you can factor x^2 (3 - x^2) / 3
Then the difference becomes 1/x^2 (3/(3-x^2) - 1)
A bit of simplifying leaves you with 1/(3-x^2), and of course taking the limit gives 1/3
Moral of the story: don't use linear approximations if you're going to square them -- you'll miss the cross term in (x + epsilon)^2 = x^2 + 2*epsilon*x + epsilon^2 -- and be super extra careful if you're sticking them into a function that blows up like 1/x
It's really interesting how the given approximation:
sin(x) ~= x | x ~= 0
conflicts with the given fact:
lim(x->0) sin(x)/x = 1
If you think about it, they're effectively saying the same thing.
x/x = 1, that's just an identity. So, if sin(x)/x -> 1 as x approaches 0, that can only be true if sin(x) ~= x as x approaches 0.
In fact, take those places where he used the "limit fact" to get the true answer. You get the same results if you use the approximation instead, because they seem to be just different ways of writing the same thing. I've been out of school for a while, so I don't remember the Taylor series well enough to do your exercise, but I think the main difference is that the latter is presented in a way that can't be used as liberally, which rules out these situations where it doesn't work.
If:
|A - B| < E, then:
|1/A - 1/B| < E/|AB|
So in general, "approximately equal" near the singularity is a bit dangerous. If E < E'|AB| for some other E' that is also small, then suddenly you're in business again.
You can multiply to get:
|1/A^2 - 1/B^2| < (E/|AB|)*|1/A + 1/B|, in which case you need E to be even smaller in order to get that the end result must be small.
In general, X^n - Y^n = (X-Y) sum_(k=0,n-1) (X^(n-k-1) Y^k)
X=1/A, Y=1/B for this example
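The first bound in this comment is easy to probe numerically (my own sketch; the sampling ranges are arbitrary choices):

```python
# Check |1/A - 1/B| <= |A - B| / |A*B| on random pairs. In fact equality
# holds here; the point is that the |A*B| in the denominator inflates the
# bound when A and B are near the singularity at 0.
import random

random.seed(0)
for _ in range(1000):
    A = random.uniform(1e-3, 1.0)
    B = A + random.uniform(-1e-4, 1e-4)  # B stays well away from 0
    E = abs(A - B)
    lhs = abs(1 / A - 1 / B)
    assert lhs <= E / abs(A * B) + 1e-12  # small slack for float roundoff
print("bound held in all trials")
```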
Very good! Thank you!
Calculating values at x= 1/2^n, Excel gets closest to the correct limit at 1/(2^12). Then rounding errors start to take it away from the correct limit at higher values of n -- reporting zero at n=24.
Wait, what? The correct limit for the 1st one is 0 and the correct limit for the 2nd one is 1/3, NOT 1/(2^12), so how in the impossible hell did you get that in Excel?!?!?
@@megauser8512 I didn't say the limit is 1/(2^12). The value of the expression in the second half of the video (near the 7:00 time stamp, for which we want the limit) is "close" to 1/3 when x = 1/(2^12). It is farther away from 1/3 when x = 1/(2^13) -- in Excel, because it starts making important rounding errors at smaller x-values of the form x= 1/2^n.
@@artsmith1347 Ok, sorry for my misinterpretation of your comment.
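The Excel behaviour described in this thread is easy to reproduce with ordinary double-precision floats (my own sketch; the choice of exponents n is mine). As x shrinks, 1/sin(x)^2 and 1/x^2 agree in more and more leading digits, so their difference loses precision:

```python
# Evaluate 1/sin(x)^2 - 1/x^2 at x = 2^-n: accurate near n = 12,
# increasingly polluted by cancellation for large n.
import math

def diff(x):
    return 1 / math.sin(x) ** 2 - 1 / x ** 2

for n in (4, 12, 20, 26):
    print(n, diff(2.0 ** -n))
```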
Why would there be any expectation of it being a good approximation, except in the neighborhood of zero?
For higher powers of sin(x) and x than 2, e.g. 1/sin^3(x) - 1/x^3, the limit goes to infinity. So interesting that order 1 gives zero, order 2 is non-zero but finite, and order >2 is infinite. Why does only the second order have a finite, non-zero limit?
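This pattern shows up clearly in a numerical experiment (my own sketch; x = 1e-3 is an arbitrary small sample point). The leading behaviours are x/6 for n = 1, the constant 1/3 for n = 2, and 1/(2x) for n = 3:

```python
# Evaluate 1/sin(x)^n - 1/x^n for n = 1, 2, 3 at one small x.
import math

def gap(x, n):
    return 1 / math.sin(x) ** n - 1 / x ** n

x = 1e-3
for n in (1, 2, 3):
    print(n, gap(x, n))  # ~0, ~1/3, and ~1/(2x) = 500 respectively
```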
L'Hôpital's rule is just a shortcut for expanding the numerator and denominator of a ratio in Taylor series and then comparing the first non-vanishing terms. The number of times you apply it indicates the leading order of the expansion (so if the numerator and denominator start at x^3, you will apply L'Hôpital's three times). The problem is that sin x ~ x is terminated after the O(x) term, so applying L'Hôpital's three or more times will not give you the right limit, since you are ignoring the x^3 term.
I used to assign this for extra credit homework for Calculus 2.
Usually, cos x = 1 accompanies sin x = x. However, there are cases where cos x = 1 - x²/2 must be used, no matter how small x is. (I'm thinking of the harmonic oscillator potential energy and similar.) I did one physics problem where using sin x = x - x³/6 was an essential step, again regardless how small x was.
Usually, if a cancellation gives zero with the approximation, reconsider the problem with the next-order term.
You have now shown that you are a real mathematician!
Very good hint!
It feels like the reason why it breaks is because the order of the composition matters.
f(g) is not g(f)
To say (sinx)^2 = x^2, then (sinx)^n = x^n
0:31 where's the math stackexchange link?
Hey, I thought it was a common fact that you can take the reciprocal of an approximation only if the limit is not 0. I.e., a(x) ~ b(x) is equivalent to 1/a(x) ~ 1/b(x) only if neither a(x) nor 1/a(x) tends to 0 (and of course the same goes for b(x)).
Oh, those frisky powers!
Thank you, professor!
Isn't sin(x) ~ x for small x saying the same thing as limit as x->0 sin(x)/x = 1 ?
So in effect you used the same approximation in both methods of evaluation (for the squared version).
No; the fact that lim x->0 sin(x)/x = 1 can be proved formally using the squeeze theorem.
it is a very interesting example
Even though I get the idea that approximations will inevitably fail, I cannot explain why. Still, the idea that small differences may accumulate reminds me of the time when somebody explained to me why 1^infinity is indeterminate. The number 1 in the expression is not the constant 1, but the value 1 achieved as the limit of some non-constant expression.
Magnificent thanks a lot 🙏
is he glancing at his notes, or is there a physicist he's holding hostage?
LOL!
You had to take derivatives twice in the first problem. That's a big hint that you should keep the cubic term in order to avoid problems.
A very nice example showing that your approximations must be consistent.
A squared sine is a cosine if you use the identity sin^2(x) = (1 - cos(2x))/2; it's much easier to see it won't work if you use trig identities.
Nice work
At the beginning of watching these videos I thought the formula lim f/g = lim f'/g' was the "log beetle" rule, and then I realized it was L'Hôpital's rule... and I am French 😂 The American accent is really fun.
That rule has got to have a million pronunciations over here. Every high school math teacher in the nation has their own.
@@faithnfire4769 Yes, that's true. I wasn't aware of this until recently, but since in France we already have several accents, I can't imagine how many there are in the USA.
as a physics major, this is not going to stop me
I don't see anything wrong... 0=1/3, well, approximately.
LOL!
For people who want to do it via L'Hôpital's rule: you need to go to the 4th derivative, so it's not terrible, but it's a lot of work. The result is of course the same.
Why not apply the product rule at the first step only, since you have already calculated part of the product in the previous step?
A red flag for both approximation usages is infinity minus infinity: 1/sin x goes to infinity with this approximation.
By the way, the more times you need to apply L'Hôpital's rule, the more accurate an approximation you'll need to get the right answer.
Which rules were you referring to as being bent, but appropriately bent, at about 8:39?
Sir, the fundamental limit lim(x->0) sin(x)/x = 1 is a special form of the approximation sin(x) ~ x (for small x).
Your method for the 1st was wrong... once you distribute limits, you can't bring them back together. So in lim x->0 (1/sin x - 1/x),
if you distribute it, you get lim x->0 (1/sin x) - lim x->0 (1/x), and that gives infinity - infinity; you can't combine them.
Isn’t the problem that the limits as written are indeterminate (infinity - infinity) forms, so using the approximation is just about guaranteed to break down unless you can rearrange them into something more stable?
Yes!
Of course it is. Remember your math teacher when s/he told you that arithmetic is defined for numbers? Infinity is not a number. As soon as you have something that is not a number in an arithmetic expression you are in undefined territory. A mild form of saying "all hell breaks loose" or "bad things will happen".
Besides, never ever start solving a problem using an approximation if other more solid methods exist. If at the end of a problem you run into something no known method solves, then use it with caution. Take it as a safety parachute in case everything else fails.
sin x = x + remainder term... there is always an error term when using Taylor series, and you cannot easily make it zero.
In the video you mentioned that you found this problem on MSE, and that the link could be found in the description. However, I checked the description and I couldn’t find it there. Did I miss it?
Introduce them to Mr. Taylor, Michael.
I’m bothered by the fact that lim x->0 (sin(x)/x)^a = 1 for all a, so rearranging this suggests sin(x)^a ~ x^a as x -> 0. I do understand the next step is not legal, as the divisor is 0, but nonetheless I am upset that taking the reciprocal of both sides is no longer an equivalence.
If you want to solve it like that you have to first prove that limit of product is equal to product of limits
Thundental Theorems of Engineering:
1. sin(x) = x
2. pi^2 = g
2 does not apply in the US where g=32
Looks like you are approximating your spelling too
That's insanely informative, thanks. I never knew this sort of thing would happen. I'm still surprised by the end result. What does all this mean? How will I carry on?!?!
i have no idea what sin even means but this video sure does make me feel like i shouldn't have gone for computer science
Isn't saying lim x->0 sin(x)/x = 1 equivalent to saying that sin x ~ x as x tends to 0? If so, how can we use the sin(x)/x -> 1 limit to get our answer?
The problem was that we were dividing a positive number by zero, which in the limit becomes infinity.
It's entirely possible to prove that the limit of sin(x)/x as x approaches 0 is 1 without using the Taylor series for sin(x).
lim_x->0(sin x/x) = 1 can be proven with the squeeze theorem (and some geometry). Its validity is not in question. The question is whether this means you can substitute x for sin x in any limit where x -> 0
The answer is no.
@@ZipplyZane Exactly!
It works when you use sin x = x - x^3/6.
Probably because squaring also squares the error?
Michael, you forgot to link the post 😅
hi can u plz solve this exercice for all integers x,y>=0 find x,y such 3x²=5^y +2
Awesome!
2:22 I don't think this is a type 0/0 form again. Plugging in 0, you get 0/1 = 0.
xcos(x) goes to 0, you read it wrong I think
How come the approximation would work to solve lim sinx/x ?
I thought the reason was that we are not allowed to cancel infinities to give 0.
I think my hold up is that the notion of the limit of a product being the product of limits is generally not true. Here, that happens to work, but it seems it's kind of just as arbitrary as assuming that the sin small angle approximation works.
If anyone here has more of a background with real analysis and wants to give a brief explanation of why this product/limit argument is true I'm very curious.
The limit of a product is the product of the limits so long as the individual limits all exist. Michael took this for granted but it worked out anyway
@@CraigNull so in your opinion, is assuming this just as arbitrary as assuming the sin small angle approximation is valid?
@@CraigNull He did mention that this only applies when the individual limits exist at 7:18.
I'm no expert, but this is what I see. He knew the limit of two of the parts existed. And he found by the end that the limit for the third part existed. So he thus concluded that the original limit also existed.
Is this a valid assumption?
@@phamnguyenductin I didn't know that applies generally. I see now, thank you.
Just include the higher order terms using for example big O notation and you should be fine.
How many times can you apply L'Hôpital's rule like that? Can you just keep going until you find a determinate form?
But isn't sin x = x just a slightly dressed-up version of sin θ/θ -> 1? Because sin θ = (sin θ/θ)·θ -> 1·θ = θ, so sin θ ≈ θ.
You are good
It seems to me that in the denominator when you expressed x^2 as x^4/x^2, when x is 0, you multiplied by 0/0. I am guessing this is the problem.
Inverse does weird things to power series from my explanation.
The xth root for a to the (x+1)th root for a is really odd as a graph as a changes
But isn't that "fact" also based on the approximation?
what about 1/sin^3 x - 1/x^3 ? 1/sin^n x-1/x^n?
I would think that the fact that dsin(x)/dx=1 while dsin(x)^2/dx=0 when x=0 would indicate that the approximation for the former wouldn't work for the latter.
The world is wrong, let it be true. Let summer come. Rudolph
Even in light of an approximation of the form f(x) ~ g(x) ~ 0 for x close to 0, you typically would NOT expect that limit as x --> 0 of F(f(x)) - F(g(x)) would simplify to zero when the encapsulating function F is not continuous at 0. So is the first limit working out here a fluke, or is there a deeper, nifty result that lets you swap limits through discontinuous functions sometimes?
Can you split the limits like he does there, where he makes the one limit into a multiplication of three limits? That sounds sketchy
It's valid, assuming the three separate limits exist afaik
Most advanced physics knowledge
Makes no sense to take a Taylor approximation and plug that into a nonlinear function. (Just saw video, and apologies to others making same point.)
Good video