
Stirling's Incredible Approximation // Gamma Functions, Gaussians, and Laplace's Method

  • Published: 7 Aug 2024
  • We prove Stirling's Formula that approximates n! using Laplace's Method.
    ►Get my favorite, free calculator app for your phone or tablet: MAPLE CALCULATOR: www.maplesoft.com/products/ma...
    ►Check out MAPLE LEARN for your browser to make beautiful graphs and much more: www.maplesoft.com/products/le...
    ►Check out Maplesoft's YouTube channel: / @maplesoft
    In this video we prove Stirling's Approximation Formula, that n! is approximately sqrt(2 pi n)(n/e)^n in the limit as n goes to infinity. Our proof has three main ingredients. First, we use the Gamma Function definition of n! as a specific improper integral. We then manipulate this integral and apply a Taylor approximation to turn the exponent into a quadratic, a trick called Laplace's Method. Finally this becomes a Gaussian integral, and with a change of variables we get Stirling's Formula. (A short numerical sanity check of the approximation appears at the end of this description.)
    Check out my MATH MERCH line in collaboration with Beautiful Equations
    ►www.beautifulequation.com/pag...
    Related Info:
    ►My video comparing n! and n^n: • The Hierarchy of Big F...
    ►Proof that the Gamma Function gives factorials: • Intro to the Laplace T...
    ►The Gaussian Integral: • The Gaussian Integral ...
    ►More reading on Laplace's Method: www.math.utah.edu/~davar/trans...
    0:00 Stirling's Formula
    0:55 Stirling's Formula graphically
    2:25 The Gamma Function
    3:09 Taylor Approximations
    5:01 The Gaussian Integral
    5:33 Laplace's Method
    COURSE PLAYLISTS:
    ►DISCRETE MATH: • Discrete Math (Full Co...
    ►LINEAR ALGEBRA: • Linear Algebra (Full C...
    ►CALCULUS I: • Calculus I (Limits, De...
    ► CALCULUS II: • Calculus II (Integrati...
    ►MULTIVARIABLE CALCULUS (Calc III): • Calculus III: Multivar...
    ►VECTOR CALCULUS (Calc IV) • Calculus IV: Vector Ca...
    ►DIFFERENTIAL EQUATIONS: • Ordinary Differential ...
    ►LAPLACE TRANSFORM: • Laplace Transforms and...
    ►GAME THEORY: • Game Theory
    OTHER PLAYLISTS:
    ► Learning Math Series
    • 5 Tips To Make Math Pr...
    ►Cool Math Series:
    • Cool Math Series
    BECOME A MEMBER:
    ►Join: / @drtrefor
    MATH BOOKS & MERCH I LOVE:
    ► My Amazon Affiliate Shop: www.amazon.com/shop/treforbazett
    SOCIALS:
    ►Twitter (math based): / treforbazett
    ►Instagram (photography based): / treforphotography
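As a quick numerical sanity check of the approximation described above, here is a minimal Python sketch (not from the video; the helper name `stirling` and the test values of n are illustrative only):

```python
# Compare n! with Stirling's approximation sqrt(2*pi*n) * (n/e)^n.
# The ratio should tend to 1 as n grows.
import math

def stirling(n: int) -> float:
    """Leading-order Stirling approximation of n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in [1, 2, 5, 10, 20, 50, 100]:
    exact = math.factorial(n)
    print(f"n={n:3d}  ratio approx/exact = {stirling(n) / exact:.6f}")
```

The ratio starts at about 0.92 for n = 1 and climbs toward 1, consistent with the graphical comparison in the video.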

Comments • 155

  • @mathflipped
    @mathflipped 2 years ago +203

    Stirling's formula was one of my favorites when I was in high school because it was often useful for math olympiad problems. Great video, Trefor!

    • @DrTrefor
      @DrTrefor  2 years ago +24

      Thanks! Ya I love both that it incorporates a bunch of cool math to prove it, but also can be used in so many cool things.

    • @alexsere3061
      @alexsere3061 2 years ago +7

      Question, I participated in math olympiads from 2014 to 2020 and nobody even suggested it to me. What kind of problems did you use it for? The only use I can think of is for upper bounds in combinatoric problems

    • @mathflipped
      @mathflipped 2 years ago +1

      @@alexsere3061 This was 30+ years ago, so I don't remember specific problems when I used this formula.

    • @alexsere3061
      @alexsere3061 2 years ago +2

      @@mathflipped I meant more generally what kind of problems (algebra, combinatorics, number theory), but I understand that it was a long time ago, thanks for the response

    • @Kurtlane
      @Kurtlane 2 years ago +2

      You did this in high school? Wow!

  • @Sarah-tl8cd
    @Sarah-tl8cd 2 years ago +46

    This channel is gold. Started watching for linear algebra help and now I find myself binge watching in my free time.

  • @MyOneFiftiethOfADollar
    @MyOneFiftiethOfADollar 2 years ago +65

    When I first learned about the Taylor series approximation of differentiable functions, it was presented in a dry way, devoid of the linear fit, quadratic fit, cubic fit, etc. Thanks for taking the time to present the graphical side of things rather than just diving into the derivation!

    • @idjles
      @idjles 2 years ago +4

      So sad you didn't get this taught to you by someone with passion. Taylor's formula is simply mind-blowing. All my childhood I wondered how my calculator did square roots, sin, cos, log etc., and when I finally learnt Taylor's formula I realized how a calculator worked, and realized I could now do any calculation on paper that a calculator could do.

    • @userAndix
      @userAndix 2 years ago +1

      @@idjles Yes, this would be a way to implement these functions. But after I read a bit into implementing sin as a function, it seems they often go with lookup tables, because the Taylor approximation is really good for smaller values, but for bigger values the precision needed makes the computational cost not worth it. That's only what I read, but it still sounds very reasonable to me.

  • @jimschneider799
    @jimschneider799 2 years ago +18

    Why is this not taught in school? I had always assumed that Stirling had approximated ln(n!) as the integral of ln(x) dx from 1 to n (which gives ln(n!) =~ n ln(n) - n + 1 + R(n) for some remainder function R(n), and that implies n! =~ (n/e)^n * e^(1+R(n))), and that the remainder term in that approximation was about R(n) =~ ln(2 pi n)/2 -1. This makes a lot more sense, and it explains where the square root of pi comes from.

    • @honourabledoctoredwinmoria3126
      @honourabledoctoredwinmoria3126 1 month ago

      I think De Moivre did indeed use what you did, namely approximating ln(n!) as the integral, because he did not report what R(n) was, and then Stirling later worked it out from approximating the gamma function.
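For reference, the integral estimate described in the comment above works out as follows (a sketch of the commenter's reasoning, not a step taken in the video):

```latex
\int_1^n \ln x \, dx = \bigl[\, x\ln x - x \,\bigr]_1^n = n\ln n - n + 1
\quad\Longrightarrow\quad
\ln(n!) \approx n\ln n - n + 1 + R(n)
\quad\Longrightarrow\quad
n! \approx e\left(\frac{n}{e}\right)^{\!n} e^{R(n)},
```

and Stirling's formula corresponds to the remainder R(n) ≈ ½ ln(2πn) − 1, which is where the √(2πn) factor hides.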

  • @jorgesaxon3781
    @jorgesaxon3781 2 years ago +9

    I have found a gold mine! Almost all math channels are either completely monotone, gloss over important parts, or upload once in a blue moon, yet your channel is the perfect combination of enthusiasm, beautiful maths, and consistency! Keep up the amazing work!

  • @dohyun031
    @dohyun031 2 years ago +6

    Hello! This is an Analysis 2 student using Rudin PMA! Thank you for your awesome video! It really inspired me to go through Rudin's explanation of Stirling's formula!

  • @ytashu33
    @ytashu33 2 years ago +14

    Great video! Loved your style, not getting bogged down in the details while still doing justice to the underlying concepts is hard. Your approach of explanations coupled with graphical analysis, hits the spot just in that Goldilocks zone. Keep it up!

    • @DrTrefor
      @DrTrefor  2 years ago +2

      Much appreciated!

  • @HansBezemer
    @HansBezemer 2 years ago +11

    I've done MANY implementations of the Gamma function (including one of Ramanujan - from his last notebook). But one of the more interesting ones in this context is the one from Cristinel Mortici: "A substantially improvement of the Stirling formula", to appear in: Applied Mathematics Letters. Received date: 30 August 2009.
    n! ~ sqrt(2*pi*n)(n/e + 1/(12*e*n))^n
    This algorithm gives about 7 good digits, but becomes more inaccurate close to zero.

    • @andrewharrison8436
      @andrewharrison8436 1 year ago

      That's crazy: pi, n, e, yes, the original seems to come from nowhere, but "12"? Sorry, inelegant, so not acceptable.
      ... but it works.
      Upvoted, and thanks.
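A minimal Python check of the Mortici-style formula quoted in this thread against plain Stirling (function names and test values are illustrative, not from the paper):

```python
# n! ~ sqrt(2*pi*n) * (n/e + 1/(12*e*n))^n  (the variant quoted above)
# versus the plain Stirling formula.
import math

def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def mortici(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e + 1 / (12 * math.e * n)) ** n

for n in [1, 2, 5, 10, 20]:
    exact = math.factorial(n)
    print(f"n={n:2d}  Stirling rel. err = {abs(stirling(n) - exact) / exact:.1e}   "
          f"Mortici rel. err = {abs(mortici(n) - exact) / exact:.1e}")
```

Even at small n the extra 1/(12en) term buys several additional correct digits, consistent with the "about 7 good digits" remark above.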

  • @alexsere3061
    @alexsere3061 2 years ago +1

    This summer I struggled with a probability book and was planning on diving into some of the identities used; it seems YouTube read my mind. Just subscribed, awesome video

  • @maurocruz1824
    @maurocruz1824 2 years ago +3

    WHAT A BEAUTIFUL VIDEO! A lot of concepts together.

    • @DrTrefor
      @DrTrefor  2 years ago +1

      Thank you so much!

  • @virabhadra2
    @virabhadra2 2 years ago +3

    Nice video! Always loved the formulae connecting e and π together.
    I also think it would be proper to say this is an approximation of the Gamma function, which is one of the generalisations of the factorial.

  • @ckq
    @ckq 10 months ago +4

    I recently realized Stirling's approximation is actually kinda simple to derive by considering the peak of a Poisson distribution and approximating it as a normal.
    I thought it required the integral in the gamma function, but it's much more intuitive using the Poisson distribution with mean n (and variance n), which peaks at (n/e)ⁿ/n!

    • @ckq
      @ckq 10 months ago

      Then applying the central limit theorem we have (n/e)ⁿ/n! ≈ 1/√(2πn)
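Spelled out, the Poisson route sketched in this thread (the commenter's argument written as equations, not the derivation used in the video):

```latex
X \sim \operatorname{Poisson}(n):\qquad
\Pr(X = n) = \frac{e^{-n} n^{n}}{n!}
\;\approx\; \frac{1}{\sqrt{2\pi n}}
\quad \text{(normal approximation, mean } n,\ \text{variance } n)
\quad\Longrightarrow\quad
n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^{\!n}.
```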

  • @primmanet
    @primmanet 2 years ago

    Great video! Many thanks!

  • @FazlayElahi
    @FazlayElahi 2 years ago +4

    This man is criminally underrated.... Such a great teacher and explainer he is... Love from Bangladesh 🇧🇩

    • @NaN_000
      @NaN_000 1 year ago

      💀 what criminally???

  • @emet744
    @emet744 1 year ago +1

    Excellent video. I love how no detail is left out. I read that John Wallis actually discovered the formula first and Stirling's only contribution was the constant Sqrt(2 Pi). It should have been named for Wallis.

  • @debasiskar4662
    @debasiskar4662 1 year ago +1

    Just Excellent.

  • @firasnizam
    @firasnizam 2 years ago

    coool, thanks for sharing

  • @nelsonpalraj
    @nelsonpalraj 2 years ago +1

    Thank you professor

  • @cycklist
    @cycklist 2 years ago +3

    I already love you for saying zed and not zee ❤️❤️❤️❤️

  • @randellrussell2400
    @randellrussell2400 2 years ago

    Dude this problem was so hard. Can't wait to see more.

  • @leviathoncannon
    @leviathoncannon 2 years ago +2

    I was thinking: we can justify the Taylor approximation because the maximum is at z=1 and, since the function is negative, the contribution to the integral will be concentrated in an interval around z=1. The exponential function will quickly drop to zero to the left and to the right of z=1, especially for large n.

  • @michaeljburt
    @michaeljburt 2 years ago +9

    Great video. You mentioned that we would discuss why taking the Gaussian integral from 0 to infinity doesn't matter; maybe I missed it, but I didn't hear why. Shouldn't the Gaussian integral from 0 to infinity be root(pi)/2? How can we justify omitting this factor of 1/2 from the final result?

    • @DrTrefor
      @DrTrefor  2 years ago +7

      Yes, but our integrand goes to zero super fast away from the point we approximate around, so anything outside of that region is not relevant

    • @MichaelRothwell1
      @MichaelRothwell1 2 years ago +5

      Because the peak of our Gaussian is at 1 (and not 0), starting the integral at 0 is not cutting the integral in half, it is only cutting off a tail, which rapidly gets smaller as n increases.
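A small numerical illustration of the replies above (a sketch; it measures how much of the Gaussian integral of e^(-n(z-1)^2/2) sits below z = 0 using the standard normal CDF):

```python
# The Gaussian that appears in the proof is centered at z = 1, so integrating
# from 0 instead of -infinity only discards the tail below z = 0, whose
# relative size is Phi(-sqrt(n)) for the standard normal CDF Phi.
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for n in [1, 5, 10, 20]:
    full = math.sqrt(2 * math.pi / n)        # integral over all real z
    kept = full * normal_cdf(math.sqrt(n))   # integral over z >= 0
    print(f"n={n:2d}  neglected fraction = {1 - kept / full:.3e}")
```

Already by n = 20 the omitted tail is a few parts per million of the whole integral, and it shrinks roughly like e^(-n/2).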

  • @andrewharrison8436
    @andrewharrison8436 2 years ago +1

    This is a crazy bit of maths - a 13-minute video that gives the structure for deriving Stirling's formula - that's got to be a rewatch in segments.
    Links to 4 videos that I now must watch (smooth salesman that you are!).
    Time to dust off a spreadsheet and play with this myself.

  • @visualgebra
    @visualgebra 2 years ago +1

    Awesome

  • @ominollo
    @ominollo 8 months ago +1

    Great Stirling’s approximation 🙂

  • @billclinton6040
    @billclinton6040 2 years ago +8

    I have had a lot of calculus, and I have never before seen the technique of simplifying an integral using a Taylor Series expansion. Very cool.

    • @faei
      @faei 2 years ago +2

      As always in math, it is a little bit more complicated than that; in fact, the proof was just a draft of a proof.
      The proof only shows that n! is more or less the Stirling formula in a really vague sense.
      For the proof to be complete, you need a few more steps:
      - show that exp(ln z - z) is well approximated by exp(-1 - 1/2 (z - 1)^2) (not obvious! it can happen that f ~ g but not exp f ~ exp g, see f = 1 + x, g = x at infinity)
      - show that the integral of exp(ln z - z) is well approximated by the integral of exp(-1 - 1/2 (z - 1)^2) (still not trivial)
      Then it's done (maybe).
      Or you could use the generalisation of this method (Laplace's method), but it's way harder I think.
      It's definitely a cool trick but a rather hard one for an undergraduate
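For reference, the general statement behind the "Laplace's method" route mentioned above (standard form; a unique interior maximum z₀ with f''(z₀) < 0 is assumed):

```latex
\int_a^b e^{\,n f(z)}\,dz \;\sim\; e^{\,n f(z_0)} \sqrt{\frac{2\pi}{n\,\lvert f''(z_0)\rvert}}
\qquad (n \to \infty).
\qquad\text{With } f(z) = \ln z - z,\; z_0 = 1,\; f(1) = -1,\; f''(1) = -1:
\qquad
n! = n^{\,n+1} \int_0^\infty e^{\,n(\ln z - z)}\,dz
\;\sim\; n^{\,n+1} e^{-n} \sqrt{\frac{2\pi}{n}}
= \sqrt{2\pi n}\left(\frac{n}{e}\right)^{\!n}.
```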

  • @Mayhem_77100
    @Mayhem_77100 1 year ago +2

    Hi, by redoing the proof, I've just noticed that at the end, when you use the Gaussian integral, you took sqrt(π/a) where a = n/2. But this is valid for the integral going from -♾️ to +♾️, and the original integral goes from 0 to +♾️, and is therefore half of the complete Gaussian integral, which adds a 1/2 factor that is not present. Did I make a mistake?

  • @ffggddss
    @ffggddss 2 years ago +4

    Thanks loads for this wonderful demonstration! Approximating the factorial function has been a pet interest of mine for (mumbledy-mumble) decades.
    In college, I started with the simple idea that if you take the logarithm, so that you're dealing with ln(x!), you could represent it graphically by plotting y = ln(x), and drawing boxes of height ∆y = ln(n), for integer n, and width ∆x = 1, centered at each successive integer.
    So (neglecting the n=1 box, which is 0 height anyway), you have a box from x = 1½ to 2½ with ∆y = ln2; from x = 2½ to 3½ with ∆y = ln3; etc.
    Now ln(n!) = the area of all the boxes from 1½ to n+½ (log of a product is just the sum of the logs!), which is approximately the area under ln(x) for the same x-interval:
    ln(n!) ≈ ∫[1½, n+½] ln(x) dx = x(ln(x) - 1) | [1½, n+½]
    So n! ≈ e^(that integral)
    Then I wanted to see how well this approximation worked for various values of n, and this having been in the Dark Ages, when there weren't hand-held electronic calculators, let alone personal computers, I got a fellow student who knew how to write and submit programs for the campus mainframe, to code this formula and print out results showing factorial, the above approximation, and the ratio of the two, for various values of n.
    It turned out that the fractional error tended to (6+)% for large n. Not bad. But it left me wondering what that limiting ratio of 1.06... was, mathematically.
    Then I learned about Stirling's Series for n!, the leading term of which is Stirling's Formula.
    I dimly recall the derivation as having involved the "method of steepest descent;" but I've since forgotten how that worked.
    More recently I've poked around with some other, better approximations. Two very good ones, having excellent error-minimization-to-complexity, are:
    x! ≈ √(2π) (X/e)^X
    x! ≈ [24X/(24X + 1)] √(2π) (X/e)^X
    where X = x + ½
    Fred
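A minimal Python check of the two X = x + ½ approximations quoted at the end of the comment above, against the exact factorial at integer x (helper names and test values are illustrative):

```python
# Fred's two approximations with X = x + 1/2:
#   x! ≈ sqrt(2*pi) * (X/e)^X
#   x! ≈ [24X/(24X + 1)] * sqrt(2*pi) * (X/e)^X
import math

def approx1(x):
    X = x + 0.5
    return math.sqrt(2 * math.pi) * (X / math.e) ** X

def approx2(x):
    X = x + 0.5
    return (24 * X / (24 * X + 1)) * math.sqrt(2 * math.pi) * (X / math.e) ** X

for x in [1, 2, 5, 10, 20]:
    exact = math.factorial(x)
    print(f"x={x:2d}  approx1 rel. err = {abs(approx1(x) - exact) / exact:.1e}   "
          f"approx2 rel. err = {abs(approx2(x) - exact) / exact:.1e}")
```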

  • @davidgillies620
    @davidgillies620 2 years ago +2

    Add in the single extra multiplicative term 1 + 1/(12n) (which comes from a further application of Laplace) and it is good to eighty parts per million by the time you get to 6!.

  • @foppifokounoe5561
    @foppifokounoe5561 1 year ago

    Thank you, sir. Please, is there a formula to calculate factorials of complex numbers? Because the gamma function evaluates factorials of negative numbers

  • @VisuallyExplained
    @VisuallyExplained 2 years ago +5

    It's nice to see a combination of a great math explanation with pertinent numerical simulations. Nicely done!

  • @adw1z
    @adw1z 8 months ago

    Not just the first-order formula, but we derived the whole Stirling asymptotic series in my Asymptotic Methods course - it's a very difficult course, but extremely fun at the same time!

  • @Materialyprestrojarov
    @Materialyprestrojarov 2 years ago +1

    Thank you so much for the video. Can I ask what software you use to animate the equation transitions? They look very clean and professional. I would love to use it for my channel. Is it just PowerPoint? Many thanks :)

    • @DrTrefor
      @DrTrefor  2 years ago +1

      Yup just PowerPoint lol

  • @Jamesdaman69
    @Jamesdaman69 2 years ago +2

    I figured out the analytic continuation of stirlings formula in the domain of |x|

    • @Jamesdaman69
      @Jamesdaman69 2 years ago +1

      It’s actually a really easy formula.
      It’s 1/(x+1)+x(1-g) where g is the Euler-Mascheroni constant.

  • @carlsloane7198
    @carlsloane7198 2 years ago

    Dear Dr Bazett: I was a little surprised that you didn't mention the contribution of de Moivre, who established the first approximation to factorial n in the third edition of his "Doctrine of Chances" (1733). Stirling's principal contribution was determining the constant 1/2 root pi. What is now known as Burnside's formula is now considered due to Stirling.
    Sincerely,
    Carl Sloane, London, ON

  • @scraps7624
    @scraps7624 2 years ago +3

    I remember using this in my statistical mechanics course back in my day as an undergrad. I loved this explanation!

  • @cubuffs2028
    @cubuffs2028 3 months ago

    The laplace method link claims "It can be shown that for such a function (pictured, but not given specific conditions), all
    of the main contribution to the integral comes from the x values near x0, say, (1 − ε)x0 ≤ x ≤
    (1 + ε)x0." any suggestions on where to find a rigorous proof of this?

  • @jann5s___
    @jann5s___ 2 years ago +3

    Isn't Maple included in MathWorks MATLAB now? Is this a spin-off of the old Maple IP?

    • @DrTrefor
      @DrTrefor  2 years ago +2

      It’s the same Maplesoft company so the engine is the same, but put into a student friendly product

  • @p.j.882
    @p.j.882 2 years ago

    Can it be expressed as an erf function?

  • @rogerkearns8094
    @rogerkearns8094 2 years ago

    Is there any such approximation to the Riemann Zeta Function?

  • @Lolwutdesu9000
    @Lolwutdesu9000 2 years ago

    Wait, so what was the point of making the substitution x = nz? The formula was pretty much identical save the constant outside, but I don't see why that would matter, since we're focusing on the integrand.

  • @jimcurrie
    @jimcurrie 2 years ago

    What is gamma(1/3)? Can this formula be used to determine the value of gamma(1/3)?

  • @berryesseen
    @berryesseen 1 year ago

    If Stirling's approximation is not good enough for you, you can always bring in more terms from the Taylor series expansion. This way, you will get sqrt(2 pi n) (n/e)^n (1 + 1/(12n) + 1/(288n^2) + O(1/n^3)). I had used this to approximate the entropy of the binomial distribution (n, p) for large n. It gets quite good. I used it in an information theory journal paper.
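A small sketch of the corrected expansion mentioned above, with the 1/(12n) and 1/(288n²) terms included (test values are illustrative):

```python
# n! ≈ sqrt(2*pi*n) * (n/e)^n * (1 + 1/(12n) + 1/(288n^2) + O(1/n^3))
import math

def stirling_series(n, terms=3):
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    correction = 1.0
    if terms >= 2:
        correction += 1 / (12 * n)
    if terms >= 3:
        correction += 1 / (288 * n ** 2)
    return base * correction

for n in [2, 5, 10]:
    exact = math.factorial(n)
    print(f"n={n:2d}  1-term rel. err = {abs(stirling_series(n, 1) - exact) / exact:.1e}   "
          f"3-term rel. err = {abs(stirling_series(n, 3) - exact) / exact:.1e}")
```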

  • @theimmux3034
    @theimmux3034 1 year ago +1

    how does this work when the series used for ln x converges only for 0 < x

    • @birefringent2851
      @birefringent2851 1 year ago

      Because in the limit of large n, the exponential inside the integral rapidly decreases away from x = 1. You aren't replacing ln x - x with its full Taylor series, just the constant and quadratic terms, and as long as this approximation is good close to x = 1, it doesn't matter what happens when x-1 is not small

  • @philippschuler7155
    @philippschuler7155 5 months ago

    At 7:51, why is it written as the first derivative for (-1/z^2)?

  • @shubhajitkar6928
    @shubhajitkar6928 1 month ago

    In the last line, where you use the Gaussian integral, is that okay? I'm getting √2 in the denominator😢

  • @mastershooter64
    @mastershooter64 2 years ago +2

    oh what a nice coincidence, Professor Ghrist just released a video that had Stirling's approximation

    • @DrTrefor
      @DrTrefor  2 years ago +2

      Oh cool, love his channel!

  • @crimfan
    @crimfan 2 years ago +4

    Whenever pi shows up in probability distributions, you can bet Stirling's approximation is lurking nearby.

    • @DrTrefor
      @DrTrefor  2 years ago +4

      Lol yup. Very connected to the Poisson distribution for instance

    • @crimfan
      @crimfan 2 years ago +1

      @@DrTrefor Many discrete distributions have close connections to continuous distributions, such as the Poisson and gamma. The first derivation of the Gaussian by de Moivre was to use a Stirling approximation for the binomial. Very accurate as you say.

  • @carlosgordilloolivera6994
    @carlosgordilloolivera6994 1 year ago

    What about the integration limit z=0 in the change of variable y=z-1? Is there an error function missing?

    • @birefringent2851
      @birefringent2851 1 year ago

      the exponential decreases so rapidly that the part that was left out during the shift of integration limits is totally insignificant

  • @rv706
    @rv706 2 years ago

    What's the approximation order for n-> \infty i.e. o(?)

  • @MrKoteha
    @MrKoteha 6 months ago

    The approximation is best at around x = π^(4/13), where the difference between the functions is very close to sin(π/4) / 10

  • @arturorenteria4943
    @arturorenteria4943 2 years ago

    Is there a way to solve for n? n = ?

  • @nutronstar45
    @nutronstar45 1 year ago

    thank you for providing evidence that pi is always followed by a 2

  • @DavidFMayerPhD
    @DavidFMayerPhD 7 months ago

    You might mention the relationship to the Dirac Delta Function.

  • @inthebackwiththerabbish
    @inthebackwiththerabbish 2 years ago +1

    love u

  • @KamiMountainMan
    @KamiMountainMan 2 years ago +1

    From a computational time-complexity standpoint, what is the point of this approximation?

    • @KamiMountainMan
      @KamiMountainMan 2 years ago +1

      Isn't it O(n) multiplications either way?

    • @douglasstrother6584
      @douglasstrother6584 2 years ago +1

      @@KamiMountainMan In Statistical Mechanics, computing ln(n!) is common; in those cases, n ~ Avogadro's Number.

  • @frozenkingfrozenking6989
    @frozenkingfrozenking6989 1 year ago +1

    Good good

  • @Jack_Callcott_AU
    @Jack_Callcott_AU 2 years ago

    It really is a mind-blowing formula. I think it says something deep about the structure of our number systems. What are e and Pi doing there?

    • @tomctutor
      @tomctutor 2 years ago +2

      It's no surprise that where 'e' appears, 'π' pops up as well, because of Euler's e^(iπ) = -1. This gives π as a precise mathematical function of 'e' and vice versa.
      Sort of inverse functions of each other, in a sense.

    • @Jack_Callcott_AU
      @Jack_Callcott_AU 2 years ago

      @@tomctutor Thanks for the reply. Best wishes.

  • @NaN_000
    @NaN_000 1 year ago

    How is the gamma function derived?

  • @TheNameOfJesus
    @TheNameOfJesus 2 years ago +1

    Only 14,205 views after 14 months? That's a shame. It's on par with some of the other math channel videos.

  • @jaimeduncan6167
    @jaimeduncan6167 1 year ago +1

    That it's never worse than 92% precise is just fantastic.

  • @Froncusiek
    @Froncusiek 2 years ago +1

    Cool video, really liked the graphical interpretation. I think tho you should’ve mentioned the radius of convergence for the Taylor series, since it’s usually a pretty important constraint for this particular approximation technique. What do you think?

  • @stirlingstark1983
    @stirlingstark1983 2 years ago +1

    Wait what?
    I don't remember creating this?

  • @richardgu6646
    @richardgu6646 2 years ago +1

    Great Video! Around 7:58, I think the last line should be the second derivative of f(z), for Taylor approximation.

  • @bhaveshsonalkar
    @bhaveshsonalkar 2 years ago +2

    Nice... It's amazing how simple things can be used to construct something powerful!

  • @eggtimer2
    @eggtimer2 2 years ago

    Why change x to nz?

  • @billcook4768
    @billcook4768 2 years ago +1

    Yet another formula that would be cleaner using tau instead of pi :)

  • @MultiPaulinator
    @MultiPaulinator 2 years ago

    So, this means that
    Floor(n*Log_10(n/e)+Log_10(Sqrt(2πn)))+1 should do a perfectly fine job giving me its number of digits. Nice. No loops.
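A quick check of the digit-count idea above (a sketch; for an n where log10(n!) landed extremely close to an integer, the approximation could in principle be off by one digit):

```python
# Number of decimal digits of n!, estimated from log10 of Stirling's formula
# and compared against the exact count.
import math

def digits_via_stirling(n):
    log10_fact = n * math.log10(n / math.e) + math.log10(math.sqrt(2 * math.pi * n))
    return math.floor(log10_fact) + 1

for n in [5, 10, 50, 100, 1000]:
    exact_digits = len(str(math.factorial(n)))
    print(f"n={n:5d}  via Stirling: {digits_via_stirling(n):5d}   exact: {exact_digits:5d}")
```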

  • @tensorix
    @tensorix 2 years ago +1

    Typo at 7:50: it says f'(z) but it's the second derivative. Everything is still clear though

    • @DrTrefor
      @DrTrefor  2 years ago +2

      oh good catch!

  • @porcospino289
    @porcospino289 2 years ago +1

    Yes, it is of course cool maths with a capital k. Dr. Bazett should learn, though, that the "g" in "analogous" is hard, not pronounced like a "j" in English.

    • @AubreyBarnard
      @AubreyBarnard 27 days ago +1

      And this is the first time I've heard "Gauss" as "Gowz".

  • @TomasLKarlik
    @TomasLKarlik 2 years ago +1

    The videos are really nice! I just want to ask you to look into mic gain and/or compressor filtering. The overloading makes it pretty difficult to listen to 🙂

  • @aashsyed1277
    @aashsyed1277 2 years ago +1

    Is this the 2nd video in googology?

    • @DrTrefor
      @DrTrefor  2 years ago +1

      Sort of, I don’t know that either of these functions is really quite big enough to be considered googology

  • @geeache1891
    @geeache1891 2 years ago +1

    It would be nice if the approximation were a simpler function than the original.

    • @douglasstrother6584
      @douglasstrother6584 2 years ago +1

      In Statistical Mechanics, computing ln(n!) is common. In that context, n ~ Avogadro's Number.

  • @nrpbrown
    @nrpbrown 2 years ago +1

    Damn thats pretty cool

  • @syedmdabid7191
    @syedmdabid7191 2 months ago

    But this formula says that the factorial of A NEGATIVE NUMBER would be IMAGINARY.
    But we know 1/(-1)! = 0,
    so (-1)! = infinity

  • @bernhardbauer5301
    @bernhardbauer5301 2 years ago +1

    This is not crazy.
    This is not a crazy relationship.
    This is math.

  • @andrewharrison8436
    @andrewharrison8436 2 years ago +1

    (n+1)! is n!(n+1), so is Stirling's formula consistent with this? (Well, it had better be, but how?)
    To get an approximation to (n+1)!, either apply Stirling's formula to n+1, or apply it to n and then multiply by n+1.
    Well, to cut a long story short, lots of things cancel and you get that e is approx (1+1/n)^(n+.5).
    Let me get nostalgic here and remember my compound interest from 48 years ago, ahh yes:
    (1+1/n)^n is interest of 100% compounded n times a year
    (1+1/n)^(n+1) is where the interest is payable at the beginning of each interval - oops, edited to add - that it includes the payment at the beginning of the next year - so my memory of formulae isn't as good as I hoped.
    So as n gets large the two values get closer to each other and converge to e. That's rather satisfying.

    • @andrewharrison8436
      @andrewharrison8436 2 years ago

      Probably bad etiquette to reply to my own comment, but I am enjoying the maths.
      I compared e to (1+1/n)^n and (1+1/n)^(n+.5); as n grows both tend towards e, however that extra .5 means the approximation gets better rather faster.
      The errors are in opposite directions, but just looking at magnitude:
      when n=1 the error is 1/6
      n=17, the first formula has 100 times the error of the second
      n=167, the advantage is 1000.
      This makes sense: e is the result of continuous growth.
      (1+1/n)^n is discrete growth at the end of intervals of size 1/n.
      (1+1/n)^(n+.5) shifts the discrete growth to the middle of each interval, so it is a better approximation to the continuous growth we are after.

    • @andrewharrison8436
      @andrewharrison8436 2 years ago

      To show my working, let S(n) be Stirling's approximation to n!.
      We would like n+1 = S(n+1)/S(n) + x where x is a small adjustment
      1 + x/(n+1) = S(n+1)/((n+1)S(n))
      Now use the formula
      1 + x/(n+1) = ((n+1)/e)^(n+1) sqrt(2 pi (n+1)) / ((n+1)(n/e)^n sqrt(2 pi n))
      splitting the exponent into n and 1 on the top and cancelling sqrt(2 pi):
      = {((n+1)/e)^n / (n/e)^n} {(n+1)/e} / (n+1) {sqrt(n+1)/sqrt(n)}
      Consolidating the ^n terms, cancelling the (n+1) in the middle, and multiplying through by e:
      e + e.x/(n+1) = ((n+1)/n)^n sqrt((n+1)/n)
      Finally we make the sqrt just an extra .5 on the exponent and rewrite (n+1)/n as 1+1/n to give:
      e + e.x/(n+1) = (1+1/n)^(n+.5)
      But really my initial question was about n+1 compared with S(n+1)/S(n),
      so we need x such that:
      x = (n+1)/e {(1+1/n)^(n+.5) - e}
      Interesting, does this decrease? The curly-bracket term decreases; does it do so faster than (n+1) increases? Empirically the answer is yes.
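A small numerical check of the identity worked out in this thread, S(n+1)/((n+1) S(n)) = (1+1/n)^(n+1/2)/e, and of the claim that the extra ½ in the exponent speeds up the convergence to e (test values are illustrative):

```python
# (1 + 1/n)^n and (1 + 1/n)^(n + 1/2) both tend to e, but the shifted
# exponent converges noticeably faster, as observed in the thread above.
import math

for n in [1, 10, 100, 1000]:
    plain = (1 + 1 / n) ** n
    shifted = (1 + 1 / n) ** (n + 0.5)
    print(f"n={n:4d}  |e - (1+1/n)^n| = {abs(math.e - plain):.2e}   "
          f"|e - (1+1/n)^(n+1/2)| = {abs(math.e - shifted):.2e}")
```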

  • @Nik-qh7cq
    @Nik-qh7cq 2 years ago +1

    Damn, i have serious suspicions you have cameras in my math class

  • @scottmiller2591
    @scottmiller2591 1 year ago

    The link to Laplace's Method - how much do you have to hate your readers to use that color scheme?

  • @aashsyed1277
    @aashsyed1277 2 years ago

    It's Taylor polynomial

  • @SlimThrull
    @SlimThrull 2 years ago +2

    And suddenly pi appears with no circles in sight.

    • @MattMcIrvin
      @MattMcIrvin 2 years ago +3

      Where it actually comes from is quite weird--it's from that integral of a Gaussian, but the integral of a Gaussian gets it because the way to make it tractable is to square it, so it's an integral over two variables, and then transform that to polar coordinates, and the integrand turns out to have circular symmetry (because of the Pythagorean theorem). So that's where the circle is lurking.

    • @SlimThrull
      @SlimThrull 2 years ago +1

      @@MattMcIrvin Ahhh, just a very well hidden circle. Thanks.
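Written out, the "well hidden circle" described above is the standard polar-coordinate evaluation of the Gaussian integral:

```latex
I^{2} = \left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)^{\!2}
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^{2}+y^{2})}\,dx\,dy
= \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^{2}}\, r\,dr\,d\theta
= 2\pi \cdot \tfrac{1}{2} = \pi
\quad\Longrightarrow\quad I = \sqrt{\pi}.
```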

  • @bernardlemaitre4701
    @bernardlemaitre4701 2 years ago

    .... approximatively at (1 - 1/12n + o(1/n) ) ..... With 1 + 2 + 3 + 4 +.... = 1/12....

  • @muskyoxes
    @muskyoxes 2 years ago

    Before watching this video, I could justify everything in Stirling's approximation except the 2 pi, and I wondered "is that really the best constant? Would 6.28 be better or worse? How do we know?" And after watching this, I still don't get it.
    ln z - z is basically linear for large z, so it seems like a uniquely bad idea to approximate it with a _quadratic_ centered at _small_ z. And that's where your 2 pi comes from. So I'm unsettled.

    • @faei
      @faei 2 years ago +2

      It depends on the definition of what a good approximation is. Here, the definition used is that f(n) ~ g(n) if f(n)/g(n) -> 1 when n goes to infinity.
      The video proved that n! ~ sqrt(2 pi) sqrt(n) (n/e)^n (small technical details are missing, but it's not a math course).
      So, you can see that if you replace sqrt(2 pi) by sqrt(6.28), you will have
      n! / [sqrt(6.28) sqrt(n) (n/e)^n] -> sqrt(2 pi)/sqrt(6.28) ≠ 1,
      so we do NOT have n! ~ [sqrt(6.28) sqrt(n) (n/e)^n].
      In fact, any other constant will fail this test.
      The method comes from the following intuition:
      if f(x) > f(y) then e^f(x) > e^f(y) and (e^f(y) / e^f(x))^n -> 0 when n -> infinity.
      So, the majority of the weight of the integral of e^(n f(x)) dx will be around the maximum of f when n goes to infinity.
      Thus, using an approximation around the maximum is the way to go
      (a linear approximation is not used because it is too rough for our needs; it just doesn't work, because it doesn't keep information about the maximum of f).

  • @xyz.ijk.
    @xyz.ijk. 3 months ago

    Excellent video. I doubt you'll see this comment, but it would be great if you didn't use colors so close together, for those of us who are colorblind.

  • @matiestrella510
    @matiestrella510 2 years ago +1

    Why is doctor strange teaching math now ?

  • @isakwatz11
    @isakwatz11 2 years ago +4

    Even as a pure mathematician, you gotta love a great approximation!

    • @DrTrefor
      @DrTrefor  2 years ago +2

      Isn't it just so cool?!

  • @Mmatt69
    @Mmatt69 2 years ago +1

    I'm searching for Veritasium's comment.. :)

  • @mohamedridamahir1960
    @mohamedridamahir1960 2 years ago +1

    why oh why didn't I know you sooner

  • @aweebthatlovesmath4220
    @aweebthatlovesmath4220 2 years ago

    I was expecting less approximation 😞

  • @romanmoskalenko22
    @romanmoskalenko22 2 years ago

    Why has the exponential expression been replaced by a quadratic approximation? This approximation is good only in some (not very large) neighborhood of z=1. The integral is taken from zero to infinity..

    • @romanmoskalenko22
      @romanmoskalenko22 2 years ago

      Even the "almost equal" sign seems too crude..

    • @DrTrefor
      @DrTrefor  2 years ago

      It is true it is only a good approximation nearby. But if you plot the Gaussian, as n gets large it becomes an extremely steep spike and is extremely close to zero anywhere that isn't nearby. I still haven't proven the method, but it isn't unreasonable that it works out this way :D

    • @romanmoskalenko22
      @romanmoskalenko22 2 years ago

      @@DrTrefor thanks =)

    • @alexl3231
      @alexl3231 2 years ago

      Also, in the final formula, for large n (even with 3, 4, 5, 6...), the most "important" part is clearly nⁿ, then 1/eⁿ, and then the rest that comes from the integral/Taylor approximation

  • @othmanaljbory3649
    @othmanaljbory3649 2 years ago +1

    😘😘💚❤️❤️❤️

  • @alysdexia
    @alysdexia 2 years ago

    little bit: pick one; not nice < niais < nescius := not-skilled but you are; thick -> coarse; thin -> fine; large -> great; away from -> froward

  • @cauchyschwarz3295
    @cauchyschwarz3295 2 years ago +1

    Unlike 3b1b, the way you explain math makes it seem like an empty husk of algebraic tricks without meaning.
    It honestly makes me a bit sad.

  • @hctibsseltraeheleda9041
    @hctibsseltraeheleda9041 1 year ago

    If you understood even a single word of this, hit me with a slipper.

  • @otisarmyalso
    @otisarmyalso 10 months ago

    Nothing burger

  • @thegreatestdemon1288
    @thegreatestdemon1288 11 months ago

    The so-called Taylor series was already invented by Hindu mathematicians way before the Europeans

  • @Halleluyah83
    @Halleluyah83 2 years ago

    Blaa-blaa-blaa!!!