the best ever approach to this derivative
- Published: 18 Sep 2024
Pretending not to know lots of known things in order to learn unknown things ☺
13:37 leet
I haven't seen this method, and I appreciate it, but my favorite method is using implicit differentiation. It's pretty elegant and requires no explicit limit-taking.
We want to compute the expression d/dx e^ln(x) two different ways.
Computing it explicitly, we have d/dx e^ln(x) = d/dx x = 1.
Computing it with chain rule gives d/dx e^ln(x) = e^ln(x) d/dx ln(x) = x * d/dx ln(x).
Then 1 = x * d/dx ln(x), solving gives d/dx ln(x) = 1/x.
Then use log rules and linearity of derivative to generalize to log_b.
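The implicit-differentiation identity above can be spot-checked numerically; a minimal sketch, where the sample points and the step size h are arbitrary choices:

```python
import math

# Check numerically that x * d/dx ln(x) = 1, which is the
# implicit-differentiation identity from the comment above.
def central_diff(f, x, h=1e-6):
    """Symmetric finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

checks = []
for x in [0.5, 1.0, 2.0, 10.0]:
    lhs = x * central_diff(math.log, x)  # x * (d/dx ln x)
    checks.append(abs(lhs - 1.0) < 1e-6)
```

Central differences are accurate to roughly h², so the tolerance of 1e-6 is comfortably met at these points.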
But you need to define e such that d/dx e^x = e^x, and this video is basically proving the existence of e.
@@user-po2oi5yq7q Some books introduce e^x first and then e. I think Apostol does it in the following order: ln(x) is defined as the integral from 1 to x of 1/t. This is increasing because 1/t is positive so you can define e^x as its inverse. Then e is what you get when you plug in 1. Although this is a little bit weird, doing it this way has the advantage that you can get far very efficiently. You prove ln(a*b) = ln(a) + ln(b) via an integration by parts calculation, by the way.
Whoops! I misremembered. You actually just split up the integral and do substitution. Sorry about that.
@@supasugaman If you define ln(x) as ∫₁ˣ 1/t dt, you will get d/dx ln(x) = 1/x by the definition, which is what you are trying to prove.
@@user-po2oi5yq7q Oh, yeah, you're right. maxwibert's way is still not reasonable if you do it that way. You would use maxwibert's way if you defined e^x using power series, for example. It would make sense for a book to do this if it were motivating e^x as the solution to y' = y.
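The Apostol-style construction described above can be sketched numerically. This is a rough sketch only, assuming a midpoint-rule quadrature with 20000 panels is accurate enough and that the bisection bracket [2, 3] is a valid starting interval:

```python
# Define L(x) = ∫₁ˣ dt/t by quadrature, then recover "e" as the x
# with L(x) = 1, exactly as in the Apostol-style construction above.
def L(x, n=20000):
    """Midpoint-rule approximation of the integral of 1/t from 1 to x."""
    h = (x - 1.0) / n
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(n))

# L is increasing (since 1/t > 0), so bisect for L(x) = 1 on [2, 3].
lo, hi = 2.0, 3.0
for _ in range(40):
    mid = (lo + hi) / 2
    if L(mid) < 1.0:
        lo = mid
    else:
        hi = mid
e_approx = (lo + hi) / 2  # ≈ 2.71828...

# The log rule ln(a*b) = ln(a) + ln(b) falls out of the same definition.
prod_check = abs(L(6.0) - (L(2.0) + L(3.0))) < 1e-6
```

The last line is the substitution argument mentioned in the follow-up comment: splitting ∫₁^(ab) into ∫₁^a + ∫_a^(ab) and substituting u = t/a gives the product rule for L.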
Excellent video! I like using the definition of the derivative to actually determine a specific derivative. Plus I like the manipulations that lead to the final result.
You had me until one of the last steps, when you just defined that limit or sum to be Euler's number. I suppose this is much better than most videos, where you lose me at step 2 or 3. Signed, an engineer :-)
I remember attending Spivak's lectures for 7th-8th graders at MSU back in 2019; I really liked them.
When backflip? 🤸
thank you, sire
It is possible: cosh x = (exp(x) + exp(-x))/2
Using this, e is defined unnaturally. Like pi, we can define e naturally.
Indeed, calculus by spivak, I would argue, is one of the best books ever written.
Another way is by writing b^y = x. So (e^(ln b))^y = x, i.e. e^(y ln b) = x. Differentiating gives e^(y ln b) · ln b · y' = 1.
So b^y · ln b · y' = 1,
so y' = 1/(x ln b).
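A numerical spot-check of that result, y' = 1/(x ln b); a sketch only, where the base b = 5 and the sample points are arbitrary choices:

```python
import math

# For y = log_b(x), the derivation above gives y' = 1 / (x * ln b).
# Compare a finite-difference derivative against that closed form.
b = 5.0

def log_b(x):
    return math.log(x, b)

h = 1e-6
results = []
for x in [0.3, 1.0, 4.0, 20.0]:
    numeric = (log_b(x + h) - log_b(x - h)) / (2 * h)
    exact = 1.0 / (x * math.log(b))
    results.append(abs(numeric - exact) < 1e-6)
```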
I'm unsure how you were able to make the leap from (n choose k)·(1/n^k) to the whole chain (1 - 1/n)(1 - 2/n)...(1 - (k-1)/n). Your explanation made sense, but I'm still unsure how these values arise; a few intermediate steps would help.
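The missing steps: C(n,k) = n(n-1)···(n-k+1)/k!, and dividing each of the k factors n, n-1, ..., n-(k-1) by one of the k copies of n in n^k gives 1 · (1 - 1/n) ··· (1 - (k-1)/n). A brute-force check of that identity (a sketch; the ranges of n and k are arbitrary):

```python
from math import comb, factorial

# Verify: C(n,k) / n^k == (1/k!) * (1 - 1/n)(1 - 2/n)...(1 - (k-1)/n).
def lhs(n, k):
    return comb(n, k) / n**k

def rhs(n, k):
    prod = 1.0
    for i in range(1, k):          # factors (1 - 1/n) ... (1 - (k-1)/n)
        prod *= 1.0 - i / n
    return prod / factorial(k)

agree = all(
    abs(lhs(n, k) - rhs(n, k)) < 1e-12
    for n in range(1, 30)
    for k in range(0, n + 1)
)
```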
Use implicit diff
Question: does it matter that there is an x term in the definition of h? Can we assume that x by itself is constant while delta-x and h approach 0? To put it another way: h contains delta x and x, and they look like they might be related, are they?
(I thought for a few minutes and I'm 95% confident they are, but not 100%)
since the original limit was defined in terms of ∆x it was implicit that x was constant, so defining h in terms of both means it only depends on ∆x (think for instance if you have a limit on x and then substitute u = 2x - the limit still depends implicitly on x, and not on the "2", since that's a constant)
@@okra_ thank you!
At 3:44, what is the justification for bringing the power of 1/h inside the log argument?
It always was part of the log argument. Here he just wrote it more clearly
@@skylardeslypere9909 Ah yes, thank you. Had to rewatch to see where it came from.
We proved that e exists but when did we prove that log(e) = 1?
I was thinking the same thing.
I would assume that we implicitly defined the natural log to be the log base e.
@@allanjmcpherson Yes, but try defining the limit to be d and see what happens.
@@jamesfortune243 Well, then you'll just have to show that the "d" you defined is actually exactly 2.718281828..., aka "e".
And ln is just log base e, where this "e" should be defined exactly the same as the limit of aₙ, aka your "d", so everything should be fine there, I think.
@@jamesfortune243 Would you get log_d(d) / (x · log_d(b)) ⇒ 1/(x · log_d(b)), with log_d(b) instead of ln(b)?
But e is not naturally defined. We should use the hyperbola: define cosh x or sec(θ) to define e.
How can you define e with cosh or sec?
Now instead of x generalize for f(x)
9:30 how can we prove this inequality?
And what's the name of this inequality?
Edit : Thanks for helping to everyone in such a short time :)
1/k! = 1/k · 1/(k-1) · ... · 1/2 · 1/1 ≤ 1/2 · 1/2 · ... · 1/2 · 1/1 = 1/2^(k-1). There are only k-1 factors of 1/2 in the exponent because we bound each of the k-1 factors 1/2, 1/3, ..., 1/k above by 1/2 and leave the factor 1/1 alone.
The easiest way to notice this is to consider the quotient 2^(k-1)/k! and note that it approaches 0 as k → ∞, since k! grows much faster than the exponential function.
That quotient argument is the limit comparison test, and the k! term is referred to as 'dominant'.
Because 1/k! is just 1/1 · 1/2 · 1/3 · ... · 1/k, you are multiplying k numbers whose denominators get larger and larger, while 2 · (1/2 · 1/2 · ... · 1/2), with k factors of 1/2, multiplies the same count of factors whose denominators stay the same. So the factorial's denominator grows at least as fast, which makes 1/k! the smaller number overall.
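Both explanations above can be verified directly; a sketch, where the cutoff of 50 terms is an arbitrary choice:

```python
from math import factorial

# Check the bound from 9:30: 1/k! <= 1/2^(k-1) for k >= 1, i.e.
# k! >= 2^(k-1), and the resulting geometric cap on sum 1/k!.
bound_holds = all(factorial(k) >= 2**(k - 1) for k in range(1, 50))

# Partial sum of 1/k! (this approaches e) versus the geometric bound
# 1 + sum_{k>=1} 1/2^(k-1), which stays below 3.
partial = sum(1.0 / factorial(k) for k in range(0, 50))
geometric_cap = 1.0 + sum(1.0 / 2**(k - 1) for k in range(1, 50))
```

This is exactly why the sequence in the video is bounded above, which (together with monotonicity) gives the existence of the limit defining e.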
@@stefanalecu9532 thx
There is a way to get it in like a minute
This is really the best teacher on RUclips
You think this proof is yours? Pity whoever raised you.
bro used 13 minutes on something a beginner should be able to solve with basic log identity/property
logb(x)=(ln x)/(ln b)
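A quick check of that change-of-base identity; a sketch, with the bases and sample points as arbitrary choices:

```python
import math

# The identity invoked above: log_b(x) = ln(x) / ln(b).  By linearity,
# d/dx log_b(x) = (1/ln b) * d/dx ln(x) = 1/(x ln b) then follows.
identity_ok = all(
    abs(math.log(x, b) - math.log(x) / math.log(b)) < 1e-12
    for b in [2.0, 10.0, 0.5]
    for x in [0.1, 1.0, 7.0, 100.0]
)
```

Note that this shortcut still presupposes the derivative of ln(x), which is the point the replies below make.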
Of course he knows the standard approach, you absolute genius; he writes at the top of the board "a nice way", not "the only way".
I believe he is arguing that this way should not be considered "nice"
The point is that in this problem you assume you don't know the result of (ln x)'. You can see him mention that when he defines the number e.
He is showing how to derive the differential of log_b(x) from first principles. That is, without making any assumptions about already knowing the differential of ln(x). For an introductory exercise on calculus, that's a reasonable approach and demonstrates some of the underlying relationships between power series and functions like exponentials.
Why don't you geniuses try to define log_b(x) for all x > 0 without using ln x or e^x. You really need these concepts to define the function to begin with. In general b^x is defined as e^(x ln b) and log_b(x) is its inverse function, which boils down to ln x / ln b.
Uh, log_b x = ln x / ln b, so its derivative is 1/(x ln b). But yeah, let's use complicated arguments to find the limit of (1 + 1/n)^n, all the while ignoring that n might not be an integer.
This approach is an independent way to obtain the derivative of ln(x), prior to even having a concept of e.
If you have a function of a real argument f(x), the limit of that function as x goes to infinity is the same as the limit of the sequence f(n) for integer n. This is pretty easy to show.
If the student is beginning calculus, it's useful to show how the derivative defined as a limit gives rise to d/dx(lnx) = 1/x, rather than just stating it.
Since f(x) = (1 + 1/x)^x is a continuous function, it should be obvious that its limit as x→∞ is the same as the limit as n→∞ of the sequence (1 + 1/n)^n where n ∈ ℕ.
@@RexxSchneider You're assuming the limit exists already. If you don't know that, then for all you know the limit exists as n goes to infinity but not as all x goes to infinity.
@@Xeroxias If you don't have a concept of e then you can define ln(x) as the indefinite integral of 1/x but then the derivative is 1/x by the FTC. And as I mentioned to Rexx, you have to already know the limit exists to use the argument in your second paragraph.
@@michaelz2270 You're right, I remembered the caveat later. In this case, of course, monotonicity comes in clutch once again. As for e, I think the method in the video is the most reasonable way to proceed in a world where that limit is unknown.
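A sketch of the point this thread settles on: the integer sequence (1 + 1/n)^n and the real-variable function (1 + 1/x)^x approach the same value. The sample points are arbitrary, and this only illustrates the claim rather than proving it (existence of the limit still needs monotonicity plus the boundedness argument):

```python
import math

# Integer-index sequence and real-argument function, side by side.
seq_vals = [(1 + 1 / n) ** n for n in [10, 100, 10_000, 1_000_000]]
fn_vals = [(1 + 1 / x) ** x for x in [10.5, 100.5, 10_000.5, 1_000_000.5]]

# Both tails sit close to e, and the integer sequence is increasing.
seq_err = abs(seq_vals[-1] - math.e)
fn_err = abs(fn_vals[-1] - math.e)
```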