an excruciatingly deep dive into the power sum.

  • Published: 13 Jan 2025

Comments • 149

  • @Nikolas_Davis
    @Nikolas_Davis 2 years ago +132

    Me, as a physicist, watching Michael "play fast and loose" with notation:
    -Good, good... embrace the dark side...

  • @omeragam8628
    @omeragam8628 2 years ago +74

    Wow, I never thought about plugging an operator into a generating function, that's crazy!

    • @kristianwichmann9996
      @kristianwichmann9996 2 years ago +20

      Physicists everywhere: Of course you can! :D

    • @Debg91
      @Debg91 2 years ago +9

      @@kristianwichmann9996 as a theoretical physicist with an interest in pure mathematics, my thought is: "I see no reason why this shouldn't work, but nevertheless I would like to see a rigorous explanation"

    • @JJ-tp2lc
      @JJ-tp2lc 2 years ago

      If you think that is crazy, just watch Supware's video on operational calculus

    • @thatdude_93
      @thatdude_93 2 months ago

      You should look into functional analysis, in particular the functional calculus, if you're interested in the math behind it

  • @jkid1134
    @jkid1134 2 years ago +27

    Very excited to plunge into this. In high school, I did the first 10 power sums using matrix reductions, all on one sheet of graph paper with a LOT of erasing, and all on some shaky assumption of existence. So let's see what happens when instead of pencil and paper, we take a Penn to it!

  • @leif_p
    @leif_p 2 years ago +45

    3:49 There is a small mistake: the first sum should have been reindexed to start at m=0 to apply the Cauchy product formula. That changes the r to an r+1 in the formula at 6:10.

    • @Cofausto
      @Cofausto 2 years ago +8

      Even more: the final answer should start at k=1, otherwise there is a division by 0!

    • @krisbrandenberger544
      @krisbrandenberger544 2 years ago +1

      Yes, and that means the inner sum will be 0 if r is greater than or equal to 1, not just greater than 1.

    • @MohamedBenamer940
      @MohamedBenamer940 11 months ago

      The series for e^x - 1 should start at m = 1,
      because e^x = 1 + x + ½x² + ...
      => e^x - 1 = x + ½x² + ⅙x³ + ...
      which explains m = 1

  • @mathmathician8250
    @mathmathician8250 2 years ago +18

    I first learned about Bernoulli numbers from a Mathologer video. Thanks for the derivation!!!

    • @eetulehtonen69
      @eetulehtonen69 2 years ago +3

      Bernoulli numbers, as introduced by Bernoulli himself, were originally defined as a factor of the coefficients in the expanded sum of the kth powers of the first n integers. One can quite easily derive the recursion formula from this using algebraic manipulation of the sum and a bit of calculus. Furthermore, one can derive an explicit sum for the kth Bernoulli number from this very recursion. That sum can then be used to verify the generating function (first introduced by Euler). As such, the actual derivation of these numbers is more complicated than what's shown. The Bernoulli numbers are an amazingly versatile object: they appear in the values of the Riemann zeta function (and other L-functions), in combinatorics they appear in the formulas for Stirling numbers of both kinds (and for many other special numbers), in differential topology as part of the Kervaire-Milnor formula, and much more. Truly an all-purpose object of outstanding power.
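
      The explicit sum mentioned above is easy to check numerically. A minimal sketch (my own code, not from the video or this thread), using the standard double-sum formula B_n = sum_{k=0}^{n} 1/(k+1) * sum_{r=0}^{k} (-1)^r * C(k,r) * r^n, with the convention B_1 = -1/2 and 0^0 = 1:

      from fractions import Fraction
      from math import comb

      def bernoulli_explicit(n):
          # outer sum over k, weighted by 1/(k+1); inner alternating binomial sum over r
          return sum(Fraction(sum((-1) ** r * comb(k, r) * r ** n for r in range(k + 1)), k + 1)
                     for k in range(n + 1))

      print([str(bernoulli_explicit(n)) for n in range(9)])
      # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']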

  • @Cicote
    @Cicote 2 years ago +16

    The exponential operator exp(y*d/dx) at 8:50 resembles the translation operator T in quantum mechanics a lot.
    In quantum mechanics, the translation operator shifts the wave function ψ by a set distance x: ψ(r - x) = T(x)ψ(r). It is defined as T(x) = exp(-i*x*p/ħ) with the momentum operator p = -i*ħ*d/dr (and the reduced Planck constant ħ = h/2π). That is, T(x) = exp(-x*d/dr), which is exactly the operator in the video (with an additional minus sign), and it likewise shifts the argument of the function f.

    • @APaleDot
      @APaleDot 2 years ago +2

      This connection can be understood by defining an operator called the shift operator E, which shifts a function over by one unit, written like so: Ef(z) = f(z+1).
      Obviously this operator can be applied multiple times, and it behaves as you'd expect: E²f(z) = f(z + 2). Inverses and fractional powers also make sense, so in general (E^y)f(z) = f(z + y). The connection to translations in space should also be obvious.
      The Taylor series of any function f centered at a is given by ∑ (1 / n!) (d^n f/dx^n)(a) (x - a)^n for n between 0 and ∞. So, the Taylor series centered at z for the shifted function has a = z and x = z+1, which gives f(z+1) = ∑ (1 / n!) (d^n/dz^n) f(z) ((z+1) - z)^n. That final factor equals 1, so we are left with ∑ (1 / n!) (d^n/dz^n) f(z).
      If you know your Taylor series you can see that ∑ (1 / n!) (d^n/dz^n) is simply e^x with x = d/dz, which means that Ef(z) = (e^(d/dz)) f(z), and we get the general translation operator by taking E^y = (e^(d/dz))^y = e^(y * d/dz).
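
      A quick sanity check of that identity (a sketch of my own, not from the video): for a polynomial the Taylor series terminates, so summing finitely many terms of e^{y d/dz} already reproduces the shift exactly.

      from sympy import symbols, diff, factorial, expand

      z, y = symbols('z y')
      f = z**4 - 3*z**2 + 7*z + 1          # any polynomial works; its derivatives eventually vanish

      # apply the truncated operator sum_k (y^k / k!) d^k/dz^k to f
      shifted = sum(diff(f, z, k) * y**k / factorial(k) for k in range(10))
      assert expand(shifted) == expand(f.subs(z, z + y))
      print("e^(y d/dz) f(z) equals f(z + y) for this polynomial")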

    • @user-en5vj6vr2u
      @user-en5vj6vr2u 2 years ago

      Damn I hate when people start talking like textbooks in the comment section

    • @APaleDot
      @APaleDot 2 years ago +3

      @@user-en5vj6vr2u
      Bruh, this is a math video. Expect math in the comment section.

    • @user-en5vj6vr2u
      @user-en5vj6vr2u 2 years ago +1

      @@APaleDot please leave out the passive voice and the “obviously” and the “you can clearly see”

    • @APaleDot
      @APaleDot 2 years ago

      @@user-en5vj6vr2u
      why?
      And what does that have to do with textbooks?

  • @goodplacetostop2973
    @goodplacetostop2973 2 years ago +36

    16:42 That makes me think… is there a closed form for 1 + 4 + 27 + … + n^n ?

    • @Noam_.Menashe
      @Noam_.Menashe 2 years ago +6

      I don't think so. I've spent a good half an hour looking/getting sidetracked.
      You can always approximate it.

    • @billbill1235
      @billbill1235 2 years ago +11

      I have been thinking about this for about a year. Not only have I not managed to calculate it myself, but I haven't even found anything on the internet about it. It is a very difficult and interesting problem. I asked my A-level teacher, who has a math degree from Harvard, but he had no idea either. It most likely hasn't been discovered yet...

    • @LinigDrake
      @LinigDrake 2 years ago +6

      @@angelmendez-rivera351 Can you link it, please?

    • @JM-us3fr
      @JM-us3fr 2 years ago +3

      I would just approximate it with the Euler-Maclaurin formula. It allows you to estimate the sum with an integral plus some correction terms
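
      For what it's worth, here is a rough sketch of that estimate (my own code, keeping only the integral, the endpoint average, and the first Bernoulli correction of the asymptotic Euler-Maclaurin series):

      from math import log

      def f(x):  return x ** x
      def fp(x): return x ** x * (log(x) + 1)       # f'(x) for f(x) = x^x

      def simpson(g, a, b, m=100_000):              # composite Simpson's rule, m even
          h = (b - a) / m
          s = g(a) + g(b)
          s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
          s += 2 * sum(g(a + 2 * i * h) for i in range(1, m // 2))
          return s * h / 3

      n = 10
      estimate = simpson(f, 1, n) + (f(1) + f(n)) / 2 + (fp(n) - fp(1)) / 12
      exact = sum(k ** k for k in range(1, n + 1))
      print(estimate, exact, abs(estimate - exact) / exact)   # off by a few percent for n = 10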

    • @LinigDrake
      @LinigDrake 2 years ago +4

      @@angelmendez-rivera351 What I meant to say was: Would you be so kind as to link some research proving the absence of such a closed formula?

  • @stefanomarchesani7684
    @stefanomarchesani7684 2 years ago +3

    I think the final formula has two big problems: the term in the summation does not make sense if k=0 or if k > p+1. The k>p+1 problem is clearly due to the fact that the function x^(p+1) has only the first p+1 derivatives nonvanishing. While the term k=0 does not work because of the simplifications made in writing the term p choose k-1 (which are not valid if k=0).
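
    For reference, the standard corrected statement (Faulhaber's formula) avoids both problems, since the sum runs over k = 0, ..., p only. A quick numerical check (my own sketch, using the B_1 = +1/2 convention so that the sum runs from 1 to n):

    from fractions import Fraction
    from math import comb

    def bernoulli(N):                      # B_0 ... B_N from the usual recursion
        B = [Fraction(1)]
        for m in range(1, N + 1):
            B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
        return B

    def faulhaber(p, n):
        B = bernoulli(p)
        if p >= 1:
            B[1] = Fraction(1, 2)          # flip B_1 to the "+1/2" convention
        return sum(comb(p + 1, k) * B[k] * n ** (p + 1 - k) for k in range(p + 1)) / (p + 1)

    for p in range(6):
        assert faulhaber(p, 10) == sum(m ** p for m in range(1, 11))
    print("1^p + ... + n^p = (1/(p+1)) * sum_{k=0}^{p} C(p+1,k) * B_k * n^(p+1-k) checks out")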

  • @SirCalculator
    @SirCalculator 1 year ago +2

    Take the first 3 rows of Pascal's triangle and erase the rightmost 1's:
    1      ---->  (nothing left)
    1 1    ---->  1
    1 2 1  ---->  1 2
    Insert these rows into a matrix as follows, placing minus signs on every 2nd diagonal:
    (  1  0 )
    ( -1  2 )
    Invert this matrix to get
    (  1    0  )
    ( 1/2  1/2 )
    Now you get the first power sums simultaneously from this matrix-vector equation:
    ( 1^0 + ... + n^0 )   (  1    0  )   ( n  )
    ( 1^1 + ... + n^1 ) = ( 1/2  1/2 ) * ( n² )
    This procedure is easily generalized to get all power sums up to any exponent p. The computational difficulty only arises in the task of inverting bigger and bigger matrices, but the obvious benefit is that you get all the formulas at once.
    Props to Mathologer and his video on power sums, which is where I got this from.
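
    A small generalization of this to arbitrary p (my own sketch, not the commenter's code), using the identity n^k = sum_{j=0}^{k-1} (-1)^(k-1-j) C(k,j) S_j(n) that the signed, truncated Pascal rows encode:

    from sympy import Matrix, binomial, symbols, expand, factor

    def power_sum_formulas(p):
        n = symbols('n')
        # row r is Pascal row r+1 with its rightmost 1 erased and every 2nd diagonal negated
        M = Matrix(p + 1, p + 1,
                   lambda r, c: (-1) ** (r - c) * binomial(r + 1, c) if c <= r else 0)
        rhs = Matrix([n ** (k + 1) for k in range(p + 1)])
        return [expand(e) for e in M.inv() * rhs]     # entry j is 1^j + 2^j + ... + n^j

    for j, f in enumerate(power_sum_formulas(3)):
        print(j, factor(f))
    # 0 -> n,  1 -> n*(n + 1)/2,  2 -> n*(n + 1)*(2*n + 1)/6,  3 -> n**2*(n + 1)**2/4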

  • @Noam_.Menashe
    @Noam_.Menashe 2 years ago +10

    12:38 this reminds me of the Euler-Maclaurin summation formula.
    It's possible to derive this sum of powers/Faulhaber's formula from it (Pretty trivially).
    You could make a video on it.

    • @ZedaZ80
      @ZedaZ80 2 years ago +2

      ♥️ I love Euler-Maclaurin
      I actually started from the 1^p+2^p+...+n^p in highschool (technically middle school, I still have the original notebook) and ended up deriving the Euler-Maclaurin formula and had a notebook with a page of constants. College came and one day I saw some very familiar fractions and turned out to be "Bernoulli numbers" and then after that I found out the name for the formula I came up with was the Euler-Maclaurin formula XD I had a LOT of fun with that and still do.

  • @bosorot
    @bosorot 2 years ago +14

    If you find the topic too difficult, my suggestion is to watch the power sum MASTER CLASS video by Mathologer. These videos complement each other.

  • @richardheiville937
    @richardheiville937 2 years ago +4

    It's easy to make such formulas: observe that f(n) = 1^p + 2^p + ... + n^p is a polynomial in n of degree p+1 with rational coefficients, so one only has to determine the coefficients, and to do that one solves a system of linear equations.
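
    A minimal sketch of that undetermined-coefficients approach (my own code): write S_p(n) = c_0 + c_1 n + ... + c_{p+1} n^{p+1} and solve the linear system obtained by matching the actual sums at n = 0, 1, ..., p+1.

    from sympy import symbols, Matrix, expand

    def power_sum_poly(p):
        n = symbols('n')
        pts = range(p + 2)                                    # p+2 conditions for p+2 unknowns
        A = Matrix([[m ** j for j in range(p + 2)] for m in pts])
        b = Matrix([sum(k ** p for k in range(1, m + 1)) for m in pts])
        c = A.solve(b)                                        # exact rational coefficients
        return expand(sum(c[j] * n ** j for j in range(p + 2)))

    print(power_sum_poly(3))    # n**4/4 + n**3/2 + n**2/4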

  • @ostorjulien2562
    @ostorjulien2562 2 years ago +7

    The sum should not go to infinity; it should stop at p+1, since the kth derivatives of x^(p+1) are zero for k bigger than p+1.

    • @deinauge7894
      @deinauge7894 2 years ago +3

      and p choose k-1 is also 0 for k > p+1, so that's not a problem. What is problematic is the k=0 term ;-)

  • @diniaadil6154
    @diniaadil6154 2 years ago +9

    Your formula for the nth Bernoulli number is wrong, since the Taylor expansion on the left (of e^x - 1) starts from m=1. It is correct if the sum stops at r-1.

  • @romajimamulo
    @romajimamulo 2 years ago +3

    6:07 the r at the top of all of these sums should be r-1 as there's no m=0 term

  • @wavyblade6810
    @wavyblade6810 2 years ago

    4:00 We can do this because both series converge and at least one converges absolutely - this is what Mertens' theorem says.

  • @PowerhouseCell
    @PowerhouseCell 2 years ago +1

    Beautifully done! 💛

  • @abrahammekonnen
    @abrahammekonnen 2 years ago +1

    14:11 When you cancel the p+1's out, you limit k to the natural numbers instead of the natural numbers AND 0 (which is a loss of information, but makes sense to do; just wanted to make note of it).

  • @FrankHarwald
    @FrankHarwald 2 years ago

    For reference: The formula that gives a closed solution to 1^p + 2^p + 3^p + ... + n^p is also known as "Faulhaber's Formula".
    It's interesting, but not particularly efficient when you actually want to compute these sums in many cases, as it requires precomputing a long recursive sequence with GROWING order of recursion (number of previous steps required to compute the next one). So to compute the p-th power sum you'd have to apply the recursion p times, but each step also requires you to keep and use all previously computed steps, resulting in roughly p^2 arithmetic operations overall to get to the p-th power sum. Because of this, I've been wondering whether there is a more efficient or alternative way to derive or write the p-th power sum formula, and I have been digging into several archives of mathematical sites, and alas, while there are a few alternative ways of writing Faulhaber's formula, none that I've found are computationally simpler than the one by Faulhaber - if anything they are even more involved than his. The most concise way of writing the closed form of the p-th power sum I've found involves taking the last row vector of the inverse of a p*p square matrix comprised of the numbers of Pascal's triangle up to p-th order, row by row, fit into an upper-right triangle, padded with 0 and shifted right by 1 column. It's not more efficient than Faulhaber's formula because computing the determinant of an arbitrary p*p square matrix requires at least p^2 operations; however, this isn't a fully arbitrary matrix - it's a near upper-right triangular matrix (just with one additional non-zero diagonal) built from special numbers (Pascal's triangle, i.e. binomial coefficients) - so maybe there are better ways of computing the explicit polynomial for the p-th power sum...

    • @nickruffmath
      @nickruffmath 2 years ago

      You might be able to use the Euler-Maclaurin Formula to approximate the sum using an integral (since in this case the integral is usually simpler) and then going to as many terms as needed for the floating point accuracy of your algorithm

    • @nickruffmath
      @nickruffmath 2 years ago

      Although maybe not, because that formula also uses Bernoulli numbers...
      A way I've done it by hand is with difference calculus (also known as, or at least closely related to, umbral calculus), which is kind of like a discrete analog of calculus. It lets you recursively define the next sum in terms of the previous ones by way of expanding (n+1)^p - n^p, which is the difference operator acting on n^p.
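
      A sketch of that recursion in code (mine, not the video's): telescoping (m+1)^{p+1} - m^{p+1} over m = 1..n gives (n+1)^{p+1} - 1 = sum_{j=0}^{p} C(p+1, j) S_j(n), so each power sum S_p can be solved from the earlier ones.

      from sympy import symbols, binomial, expand, factor

      n = symbols('n')

      def power_sums(p_max):
          S = []
          for p in range(p_max + 1):
              rhs = (n + 1) ** (p + 1) - 1 - sum(binomial(p + 1, j) * S[j] for j in range(p))
              S.append(expand(rhs / (p + 1)))
          return S

      for p, s in enumerate(power_sums(4)):
          print(p, factor(s))
      # 1 -> n*(n + 1)/2,  2 -> n*(n + 1)*(2*n + 1)/6,  3 -> n**2*(n + 1)**2/4, ...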

  • @QuantumHistorian
    @QuantumHistorian 2 years ago +1

    I'm not sure that turning a finite sum over the integers into an infinite one over a bunch of fractional constants that have to be defined iteratively counts as solving the problem in any way...

  • @dr.matthiaskruger8538
    @dr.matthiaskruger8538 2 years ago +2

    In which sense is the result a closed form? You started with a finite sum and ended with a sum...

  • @_zelatrix
    @_zelatrix 1 year ago +1

    How can the n-th Bernoulli number B_n be defined in terms of B_n itself? You need to calculate B_n to find B_n?
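
    It isn't circular: the recursion coming from the generating function determines B_n from B_0, ..., B_{n-1} only. A minimal sketch (my own code, using the standard recursion sum_{k=0}^{n} C(n+1,k) B_k = 0 solved for B_n):

    from fractions import Fraction
    from math import comb

    def bernoulli(N):
        B = [Fraction(1)]                   # B_0 = 1
        for n in range(1, N + 1):
            # only B_0 ... B_{n-1} appear on the right-hand side
            B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
        return B

    print([str(b) for b in bernoulli(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']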

  • @holyshit922
    @holyshit922 2 years ago +4

    I would use "difference calculus".
    Instead of the derivative we have the difference operator,
    which can be defined as follows:
    Δ^{0}f(n) = f(n)
    Δ^{k}f(n) = Δ^{k-1}f(n+1) - Δ^{k-1}f(n)
    Instead of the integral we have the sum:
    the indefinite sum Σ f(n)
    and the definite sum Σ_{k=0}^{n} f(k).
    We also have summation by parts
    (is the coincidence of names accidental? I don't think so).
    Instead of powers we have falling factorials.
    --------------------------------------------------------------------------------------------
    In my opinion, to calculate b_{1} we need the limit of the first derivative at zero.
    If the generating function is rational, then the geometric series and its derivatives will be enough.

  • @dracoinvictus
    @dracoinvictus 2 years ago +1

    Me: I totally don't understand how you jumped from one equation to the next equation. Michael Penn: OK. Nice!

  • @johnchessant3012
    @johnchessant3012 2 years ago +2

    The formula at 12:32 seems related to the Euler-Maclaurin formula.

    • @yavuz1779
      @yavuz1779 2 years ago

      Watch Mathologer's video on this subject

  • @cbbuntz
    @cbbuntz 2 years ago +1

    I just solve them with a matrix (I use the formulas quite a bit). The coefficients of the finite difference x^p - (x-1)^p are binomial coefficients with the highest-degree term removed, so the main diagonal is 1, 2, 3, 4.... Those polynomials have an antidifference that is x^p, which is also useful. Fill the matrix with those coefficients and take the inverse, and boom, you've got your polynomial coefficients for power sums. With both matrices, you can map between derivatives/integrals and finite differences/antidifferences.
    Another useful property of those truncated binomial coefficients is that they are what comes from fitting a polynomial to a sequence and then evaluating the polynomial at the next term - a very cheap way of extrapolating. You don't need the whole matrix; a single vector does the job.
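
    That extrapolation trick is worth spelling out (a sketch of my own): if the m-th forward difference of a sequence is zero (i.e. it comes from a polynomial of degree < m), then f(n+1) = sum_{k=1}^{m} (-1)^(k+1) C(m,k) f(n+1-k), so one row of signed binomial coefficients predicts the next term from the last m terms.

    from math import comb

    def extrapolate(last_terms):
        # last_terms = [f(n-m+1), ..., f(n)]; returns the predicted f(n+1)
        m = len(last_terms)
        return sum((-1) ** (k + 1) * comb(m, k) * last_terms[-k] for k in range(1, m + 1))

    seq = [k ** 3 + 2 * k for k in range(1, 6)]   # a cubic sequence, so 4 terms suffice
    print(extrapolate(seq[-4:]), 6 ** 3 + 2 * 6)  # both print 228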

    • @davidblauyoutube
      @davidblauyoutube 2 years ago

      Didn't think the umbral calculus would make an appearance here, interesting.

  • @Bennici
    @Bennici 2 years ago +1

    I can tell you've talked to someone who teaches college physics by the way that in one of your recent videos you say something like "we're playing it fast and loose with the existence [of these objects], but everything will work out in the end".
    And so rigor dies a lonely death, ignored in a corner of an unlit closet. :P

  • @aweebthatlovesmath4220
    @aweebthatlovesmath4220 2 years ago +1

    Thank you so much Michael!!!
    I really want to know more about Bernoulli numbers.

    • @eetulehtonen69
      @eetulehtonen69 2 years ago +1

      Bernoulli numbers, as introduced by Bernoulli himself, were originally defined as a factor of the coefficients in the expanded sum of the kth powers of the first n integers. One can quite easily derive the recursion formula from this using algebraic manipulation of the sum and a bit of calculus. Furthermore, one can derive an explicit sum for the kth Bernoulli number from this very recursion. That sum can then be used to verify the generating function (first introduced by Euler). As such, the actual derivation of these numbers is more complicated than what's shown. The Bernoulli numbers are an amazingly versatile object: they appear in the values of the Riemann zeta function (and other L-functions), in combinatorics they appear in the formulas for Stirling numbers of both kinds (and for many other special numbers), in differential topology as part of the Kervaire-Milnor formula, and much more. Truly an all-purpose object of outstanding power.

  • @emmanuellaurens2132
    @emmanuellaurens2132 2 years ago +1

    1:52 I was very confused, and had to look it up on the web. You, sir, are in fact not defining the nth Bernoulli number as you claim, which would imply that n or at least bn is fixed in the equation even though it is the index of the sum, but are defining all the Bernoulli numbers at once.

    • @iabervon
      @iabervon 2 years ago

      Yeah, although if you think of the RHS as a power series, that definition says that the nth Bernoulli number is the coefficient of x^n on the right times n!, which is complicated but doesn't depend on other Bernoulli numbers. Half the time, mathematicians think of power series as infinite tables of expressions for coefficients, rather than something you would actually evaluate, and equality is term-by-term.

  • @abrahammekonnen
    @abrahammekonnen 2 years ago

    Wow. Yeah I'll be coming back to this video multiple times.

  • @CTJ2619
    @CTJ2619 2 years ago +1

    LIKED “A little detour into calculus”

  • @renesperb
    @renesperb 7 months ago

    It is not hard to derive a fairly simple recursion for 1 + 2^p + 3^p + ... + n^p, starting with the well-known case p = 1. But of course you don't get a general formula.

  • @Kapomafioso
    @Kapomafioso 2 years ago +5

    After watching this, I don't really get what you mean by a "closed form" anymore...
    You start from a finite sum and as a result you get... an infinite sum? How is one considered closed form while the other isn't? To me, a closed form is something you plug one number into and get another number out, without having to sum more and more (or infinitely many) terms. So not only is this an infinite sum, but for each term you have to find the k-th Bernoulli number, for which you have to employ another computation requiring a recursion or another sum...
    BTW I don't think that sum should go to infinity; the term k = 0 is also weird, with p choose -1...
    If the final sum is finite, you trade a sum of n terms for a sum of p terms, albeit each one needs another sum of increasingly many terms (to get those Bernoulli numbers)...

    • @EebstertheGreat
      @EebstertheGreat 2 years ago +2

      The sum is finite because the higher order derivatives of a polynomial are all zero. In the final formula, he is using the convention that (a choose b) = 0 for a < b, so the sum is still technically finite. But yeah, he forgot to reindex the sum, it should start at 1, not 0 (not only is he choosing -1, but also dividing by 0). Personally, I found this exposition very confusing.

  • @daniellosh8341
    @daniellosh8341 2 years ago

    Operating on both sides with sum(B_n D^n / n!) could avoid dividing by (e^D - 1), i.e. avoid needing the inverse of (e^D - 1) to exist (be defined).

  • @fransisigos
    @fransisigos 2 years ago +4

    I'm not sure an infinite sum is a closed-form expression though?

    • @kevinmartin7760
      @kevinmartin7760 2 years ago +8

      p choose k-1 becomes zero when k-1 exceeds p so for any particular p there are only a finite number of nonzero terms. But I see your point...

    • @Macieks300
      @Macieks300 2 years ago

      @@kevinmartin7760 Yes, that means that the formula is useful but only when n is way bigger than p because there are only p elements in the sum instead of n.

    • @guruone
      @guruone 2 years ago

      RIGHT.... Playing fast and loose with definitions here...... Maybe if the sum converges and you can express it in terms of elementary functions, ay ;)

  • @chrisbarrington108
    @chrisbarrington108 2 years ago +2

    Hi Michael, in your last two lines I’m not sure the telescoping works as the b(k) will be different at each stage… Also, won’t it blow up at k=0… Maybe I’m missing something… Love your channel by the way

  • @j.d.kurtzman7333
    @j.d.kurtzman7333 2 years ago

    7:00 Don't you need to know b0 AND b1 (not just b1), given the blue boxed formula, to calculate the rest of the Bernoulli numbers?

    • @gibbogle
      @gibbogle 2 years ago

      My question too. Apparently there is a mistake.

  • @dawkinsfan660
    @dawkinsfan660 1 year ago

    Which is the crazy video on the sum of cubes? Could somebody put the link here, please?

  • @erictrefeu5041
    @erictrefeu5041 1 year ago

    ...and for non-integer p? Can you find a closed form? I have one!

  • @The1RandomFool
    @The1RandomFool 2 years ago

    Another interesting place where Bernoulli numbers pop up is the closed form of the Riemann zeta function at 2n, where n is a natural number. It's related to the coefficients of the Laurent series expansion of cotangent, which is used in a complex-analytic method of calculating zeta(2n). Also notice that the initial definition of the Bernoulli numbers in the video bears a resemblance to the integral form of the Riemann zeta function.
    Additionally, it can be shown that the Bernoulli numbers can be calculated from successive derivatives of x*cot(x) evaluated in the limit as x approaches 0.
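
    That x*cot(x) connection is easy to check with a series expansion (my own sketch): the Taylor coefficient of x^(2n) in x*cot(x) is (-1)^n * B_{2n} * 4^n / (2n)!, so the even Bernoulli numbers can be read off directly.

    from sympy import symbols, cot, factorial

    x = symbols('x')
    s = (x * cot(x)).series(x, 0, 10).removeO()
    for n in range(1, 5):
        coeff = s.coeff(x, 2 * n)
        B2n = (-1) ** n * coeff * factorial(2 * n) / 4 ** n
        print(2 * n, B2n)        # 1/6, -1/30, 1/42, -1/30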

    • @eetulehtonen69
      @eetulehtonen69 2 years ago

      Not only 2n but also -n. One does not even need cotangent to obtain these results. They can be achieved by just using the properties of Bernoulli polynomials!

    • @The1RandomFool
      @The1RandomFool 2 years ago

      @@eetulehtonen69 Ah, I have not done any work with Bernoulli polynomials, so I did not know that.

    • @eetulehtonen69
      @eetulehtonen69 2 years ago

      @@The1RandomFool That is not surprising, since the proof came out in 2013 and, even though elementary, it's not easy or necessary by any means. I happen to be an expert in this field, so I know that bit of math trivia.

  • @gosias6328
    @gosias6328 1 year ago

    Nice, but this final Formula, even after correcting for the k=0 (p choose -1 :) ) does not work for p=1, giving the wrong result 1/2 * n * (n+2)

  • @bhavikdesaichemobbc
    @bhavikdesaichemobbc 2 years ago +1

    Amazing approach, sir

  • @qschroed
    @qschroed 2 years ago

    what type of mathematical tool could be used to properly justify the operator maths you did there?

  • @samueldevulder
    @samueldevulder 2 years ago +1

    I don't get why the final sum is a closed form. This is an infinite series. It is hard to believe that this infinite series equals n(n+1)/2 when p=1, for instance. Do the terms become zero or cancel at some point in the summation? This needs clarification here IMHO.

    • @jedthorpe9535
      @jedthorpe9535 2 years ago

      The sum is actually finite - for k=p+2 and above the k-th derivative of f(x) is just 0, so the p choose k-1 term vanishes for these (if you like I suppose you could define n choose k to be 0 for k > n).

    • @samueldevulder
      @samueldevulder 2 years ago

      @@jedthorpe9535 Oh, well indeed. Of course if p choose k is 0 for k>p. Many thanks.

    • @Kapomafioso
      @Kapomafioso 2 years ago

      @@jedthorpe9535 But even if it's finite, the original sum is also finite. In what sense is trading a finite sum for another finite sum (more complicated I'd argue, since you have to also calculate the Bernoulli numbers), considered "closed form"?

    • @tracyh5751
      @tracyh5751 2 years ago

      @@Kapomafioso I wouldn't call it a closed form, but if p

  • @holyshit922
    @holyshit922 2 years ago +1

    In my opinion you are wrong, you have wrong indexes in Cauchy product so
    binomial coefficients are wrong
    Should it be r+1 in the top of binomial coefficients ?

  • @ecoidea100
    @ecoidea100 2 years ago

    So the operator just divides when applying the derivative at 11:33 because it does not depend on x?

    • @deinauge7894
      @deinauge7894 2 years ago

      yes, the two operators commute

    • @ecoidea100
      @ecoidea100 2 years ago

      @@deinauge7894 I can see the question is why

    • @deinauge7894
      @deinauge7894 2 years ago

      @@ecoidea100 as you said, it does not depend on x.
      d/dx commutes with itself (and with any function of itself), as every operator does

  • @gibbogle
    @gibbogle 2 years ago

    What about b0?

  • @a.osethkin55
    @a.osethkin55 2 years ago

    Thanks!

  • @नवलशर्मा-ड7ज
    @नवलशर्मा-ड7ज 2 years ago

    Sum of all terms of series defined in first place is -1/2(e^x) not x/e,,

  • @n8cantor
    @n8cantor 2 years ago +5

    Love the idea that we found a "closed form" for a finite series of powers and it was an infinite series of powers (and Bernoulli numbers and binomial coefficients)
    For another derivation and application of the Bernoulli numbers, I highly recommend this video on the Basel Problem: ruclips.net/video/nxJI4Uk4i00/видео.html

    • @srijanraghunath4642
      @srijanraghunath4642 2 years ago +2

      Except it is a finite sum, since the binomial coefficient in the sum contains a (p-(k-1))! in the denominator! The coefficient goes to 0 when p-(k-1) = -1, -2, -3, …, since 1/(-1)! = 1/(-2)! = 1/(-3)! = … = 0.
      And as for the link you posted, I think the alternate derivation of the Euler-Maclaurin formula is much better mathematically than the rough operator arguments.

  • @daniellosh1015
    @daniellosh1015 2 years ago

    -1 within the summation can be dropped, property of Bernoulli numbers

  • @MyOneFiftiethOfADollar
    @MyOneFiftiethOfADollar 2 years ago

    Never would have imagined that the product of two infinite series could be written as the product of an infinite and a finite series

    • @eetulehtonen69
      @eetulehtonen69 2 years ago +2

      It's not a product, it's a double sum - a sum that has sums as its terms. It's called the Cauchy product.

    • @MyOneFiftiethOfADollar
      @MyOneFiftiethOfADollar 2 years ago

      @@eetulehtonen69 he expressed product of two infinite series as a Cauchy product

  • @kingfrozen4257
    @kingfrozen4257 2 years ago +1

    OK, so we just replaced a sigma with another sigma! What's the point?

  • @gregy1570
    @gregy1570 2 years ago

    Please, if you haven't yet, could you make a video about the limit of the nth root of n factorial? I can't remember what problem I was working on where it came up, but I thought it was interesting.

    • @nikoladjuric9904
      @nikoladjuric9904 2 years ago

      By Stirling's approximation,
      n! ~ (n/e)^n * sqrt(2*pi*n),
      so (n!)^(1/n) ~ (n/e) * (2*pi*n)^(1/(2n)).
      As n gets bigger, (2*pi*n)^(1/(2n)) is close to 1,
      so for big n you can approximate (n!)^(1/n) as n/e.
      If n tends to infinity, then (n!)^(1/n) tends to infinity as well.
      Or:
      ln(n!^(1/n)) = sum(ln k)/n
      = sum(ln(k/n) + ln n)/n
      = sum(ln(k/n))/n + ln n.
      As n tends to infinity this tends to
      ln(n) + ∫ ln x dx, with x going from 0 to 1,
      = ln(n) + (x(ln x - 1) from 0 to 1)
      = ln(n) - 1 = ln(n/e).
      So this actually shows (n!)^(1/n) ~ n/e for big n.
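
      A quick numerical check of that n/e asymptotic (my own sketch, using logarithms to avoid overflow):

      from math import lgamma, exp, e

      for n in (10, 100, 1000, 10**6):
          root = exp(lgamma(n + 1) / n)     # (n!)^(1/n) via ln(n!) = lgamma(n+1)
          print(n, root / (n / e))          # the ratio tends to 1 (slowly)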

    • @soupy5890
      @soupy5890 9 months ago

      Or you can compare n factorial to b^n for any positive b and note n! eventually outgrows it, and the proof follows by taking nth roots and noting nth root is an increasing function

  • @zazinjozaza6193
    @zazinjozaza6193 2 years ago

    If you have the time, could you make more videos explaining where bernoulli numbers come from?

    • @eetulehtonen69
      @eetulehtonen69 2 years ago

      A Japanese mathematician by the name of Seki Takakazu first introduced the Bernoulli numbers as the coefficients of terms in the expanded closed form of the sum of powers of integers. A few years later Bernoulli took the same approach in his book Ars Conjectandi (1713). Bernoulli numbers look rather arbitrary, and it's clear why there would be confusion. They are like the glue term that holds the Faulhaber formula together - in a sense, defined to be the arbitrary-looking sequence of numbers that makes a coherent formula. The recursion shown in this video can be derived with elementary algebra and calculus. From that one can get an explicit double sum for the kth Bernoulli number. Furthermore, that double sum can be used to verify the generating function shown here.

  • @bmdiscover7827
    @bmdiscover7827 2 years ago

    great calculus behavior great maths

  • @khoozu7802
    @khoozu7802 2 years ago

    To prove the closed form in the end of the video
    Notice that n^(p+1) is the highest term of (n+1)^(p+1) and
    pC-1=p!/((-1)!(p+1)!)=1/((-1)!(p+1))
    Then we have k=0
    b0/0*pC-1*((n+1)^(p+1)-1)
    =1/0*1/(-1)!*n^(p+1)/(p+1)
    =1/(0*(-1)!)*n^(p+1)/(p+1)
    =1/0!*n^(p+1)/(p+1)
    =n^(p+1)/(p+1)+...

    • @khoozu7802
      @khoozu7802 2 years ago

      But it was wrong because we cannot have factorial of negative integers

    • @khoozu7802
      @khoozu7802 2 years ago

      and also cannot have something divided by zero

  • @laretslucky2899
    @laretslucky2899 2 years ago

    Expressing a finite sum via infinite... I believe there could be some more ways to make it.

    • @ZedaZ80
      @ZedaZ80 2 years ago +1

      For natural numbers p, it is finite (after the (p+1)-th term it's all zeros).
      The formula is written as going to infinity because that is the generalization for non-integers :)

  • @byronwatkins2565
    @byronwatkins2565 2 years ago +1

    I don't see why an infinite series is preferred to a finite series.

    • @byronwatkins2565
      @byronwatkins2565 2 years ago

      @@angelmendez-rivera351 You find it easier to add an infinity of terms instead of n terms? That is ridiculous.

    • @schweinmachtbree1013
      @schweinmachtbree1013 2 years ago

      @@byronwatkins2565 I know it seems counter-intuitive but it really is easier to evaluate infinite series. There are only a handful of finite series with closed forms, namely
      - arithmetic progressions: series like sum_{k=1}^n (a k),
      - geometric progressions: series like sum_{k=1}^n (a r^k),
      - arithmetico-geometric progressions: series like sum_{k=1}^n (a k r^k),
      - generalizations of these of the form sum_{k=1}^n (a k^m r^k),
      - certain sums of binomial coefficients,
      - certain very specific trigonometric sums, and
      - the power sums which are the topic of this video.
      In contrast there are many more infinite series with closed forms, and the reason this is not 'ridiculous' is that one doesn't add up infinitely many terms to evaluate an infinite series (which would of course be ridiculous); rather one has to apply some ingenuity (for example to evaluate sum_{k=1}^{infinity} r^k, if you call the series S then you can relate S and rS which lets you solve for S = r/(1-r), provided |r| < 1).

    • @byronwatkins2565
      @byronwatkins2565 2 years ago

      @@schweinmachtbree1013 If someone presented a closed-form equivalent, then I would agree that the closed-form equivalent is preferred.

  • @pedrocasella1327
    @pedrocasella1327 2 years ago

    Challenge: prove that, for every p prime, there is a prime q bigger than p but less than 2p

  • @davidbrisbane7206
    @davidbrisbane7206 2 years ago +3

    This is the first time I felt that Michael packed too much into one video. This needs three videos to do the problem justice.

  • @holyshit922
    @holyshit922 1 month ago

    It is not so difficult to write a program for power sums even if we don't know the formula:
    using System;
    namespace NamespaceName
    {
        public class ClassName
        {
            public static void Main(string[] args)
            {
                double[] x, y, a;
                double xi;
                int n;
                char esc;
                do
                {
                    Console.WriteLine("Enter the exponent n:");
                    n = Convert.ToInt32(Console.ReadLine());
                    // 1^n + ... + m^n is a polynomial in m of degree n+1, so n+2 sample
                    // points determine it (loop bounds below are assumed; the original
                    // comment text was cut off at every '<' character)
                    x = new double[n + 2];
                    y = new double[n + 2];
                    a = new double[n + 2];
                    for (int i = 0; i < n + 2; i++)
                    {
                        x[i] = i;
                        y[i] = Math.Pow(i + 1, n);      // terms 1^n, 2^n, ...
                    }
                    for (int i = 1; i < n + 2; i++)     // turn terms into partial sums
                        y[i] += y[i - 1];
                    for (int i = n; i >= 0; i--)        // shift right, prepend the empty sum
                        y[i + 1] = y[i];
                    y[0] = 0;
                    for (int j = 1; j < n + 2; j++)     // Newton divided differences
                        for (int i = n + 1; i >= j; i--)
                            y[i] = (y[i] - y[i - 1]) / (x[i] - x[i - j]);
                    for (int i = n + 1; i >= 0; i--)    // Newton form -> monomial coefficients
                    {
                        xi = x[i];
                        a[i] = y[i];
                        for (int k = i; k < n + 1; k++)
                            a[k] -= xi * a[k + 1];
                    }
                    for (int i = 0; i < n + 2; i++)     // a[i] is the coefficient of m^i
                        Console.WriteLine("a[" + i + "] = " + a[i]);
                    Console.WriteLine("Again? (y/n)");
                    esc = Convert.ToChar(Console.ReadLine());
                } while (esc == 'y');
            }
        }
    }

  • @danielbranscombe6662
    @danielbranscombe6662 2 years ago

    ok, so you were able to calculate the value of B4, but can you now calculate the value of After lol

  • @nittabhowmick8042
    @nittabhowmick8042 2 years ago +1

    Not so cool; it would be better if you expressed the sum as a finite sum

  • @josephmartos
    @josephmartos 2 years ago +1

    This is like hardcore sex in maths. Please tell why and when it is safe to set y=1.

  • @youtubeuser6581
    @youtubeuser6581 2 years ago

    Bernoulli numbers

  • @michaelguenther7105
    @michaelguenther7105 2 years ago

    Oliver Heaviside would approve!

  • @pedroalonso7606
    @pedroalonso7606 2 years ago

    Faulhaber polynomials!

  • @detectiveandspynovels7140
    @detectiveandspynovels7140 10 months ago

    Wonder

  • @四步道君
    @四步道君 10 days ago

    =(1/Γ(p))∫(0,∞)((t)^(p-1))(1-exp(-nt))dt/(exp(t)-1)

  • @bethhentges
    @bethhentges 2 years ago

    0:50 Dr. Penn says “this guy” when he could have said “this expression.”
    6:15 Dr. Penn says, “We can think about this guy right here, b sub 1…” when he could have said, “We can think about b sub 1 right here…”
    9:12 Dr. Penn says, “If we were to expand this guy,” when he could have said, “If we were to expand y.”
    14:00 Dr. Penn says, “This guy right here,” when he could have said “This product right here.”
    16:22 Dr. Penn says, “I’ll let you guys,” when he could have said, “I’ll let you folks” or “I’ll let you viewers.”

    • @Kapomafioso
      @Kapomafioso 2 years ago +2

      Your point being...?

    • @bethhentges
      @bethhentges 2 years ago

      @@Kapomafioso
      There was a long discussion the other day regarding Dr. Penn’s unnecessary and probably unconscious use of “this guy” and how such usage subtly creates an environment where girls and women feel excluded which can lead them to select a different field of study.

    • @russellsharpe288
      @russellsharpe288 2 years ago +2

      @@bethhentges 'guy' is now a gender-neutral term. This is particularly obvious when it is being used to refer to algebraic expressions.

    • @bethhentges
      @bethhentges 2 years ago

      @@russellsharpe288 Nope.

  • @와우-m1y
    @와우-m1y 2 years ago +1

    answer= n 1 ! n1 ! hmm