Michael "This is really sketchy", Physicists "That's the most rigorous maths i have ever seen" 🙂
Correction: most rigorous maths I've stayed awake for :p
@@QuantumHistorian Same thing
Physics is all sketchy math. My physics degree left me disillusioned once I saw how physics is really done.
As a physicist, I worked through these definitions/derivations in undergrad, in some of the earliest classes for my major. We do lots of rigorous math.
Engineers*
Yes, the Dirac distribution is one of the greatest tools in the physics bag. It's good that Michael talks about it.
Physics, signal processing, so many applications 😊
@@Ablatius It's a distribution with measure 1 at zero.
@@cv990a4 So?
MS Physics here, great video. We really love the Dirac delta, as well as Heaviside. Most of the time we avoid calling it a function or a distribution; in the day-to-day flow, we just say "Dirac delta".
True
"The Forgotten Genius of Oliver Heaviside: A Maverick of Electrical Science" ~ Basil Mahon
@@douglasstrother6584 Thanks for the book recommendation.
As a physicist the limit of a Gaussian always seemed like the best way to think of it to me. Seems to make everything rhyme a little better.
Also thinking of it as a continuous Kronecker delta makes a lot of hard calculations really easy.
I mean, yeah, it makes much more sense for it to be symmetric, because otherwise there is ambiguity about what happens when you take an integral that has 0 as one of its endpoints. It makes sense to make it symmetric and then specify precisely 0- or 0+ as one of the limits, and that decides whether or not the integral covers the entire Dirac delta "spike". Then the Dirac delta sits between 0- and 0+. It's actually a really important discussion point in signal processing.
As a non-physicist, my favorite limit has to be delta(x) = lim eps->0 Ai(x/eps)/eps, Ai() as in the Airy function.
I also haven't seen it defined in the way he used at the start in any of the books I had, it was always just "take a bell curve and squeeze it" because usually that's what it's being used for.
In my mind it’s delta(x) = limit as h->0 of 1_[-h/2,h/2](x)/h, with 1_[] being the indicator function. So it’s kinda the average value of a function on an infinitely small interval around zero.
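Written out (a small sketch, assuming only that f is continuous at 0), that average-value reading is

(1/h) \int_{-h/2}^{h/2} f(x) dx -> f(0) as h -> 0,

since the average of a continuous function over a shrinking interval around 0 tends to its value there, by the mean value theorem for integrals.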
You should do some distribution theory, or dive into Sobolev spaces and PDEs. I'd eagerly watch.
P.S. Also liked applying of analysis tools in this video.
that would be gold
The claim made at 8:03 does not seem persuasive enough, because the integral is (indeed) bounded from above by M/2n; but if we open the parentheses we will have just M/2 in place of the integral, which doesn't have to be equal to 0.
When we were doing the deep dive into the delta and other such "functions" during my physics degree, we derived it in terms of Fourier transforms of sinc functions. This turns out to be very illuminating when it comes to the thorny question of what it means when we say a system has an observable, and in particular why the Heisenberg Uncertainty Principle is a _necessary_ consequence of taking observables as conjugate pairs. This was the same approach that we took when applying it to the Shannon sampling theorem and why the Nyquist criterion applies.
First time reading the description. Truly a sight...
I learned it in the Electronics Engineering course. The teachers called it the "impulse" function. It's the idealization of a very short-duration, high-energy signal spike.
A pure mathematician's greatest enemy is a physicist.
Can't we just agree that pi=3=e?
I can think of worse enemies, like those incorrectly saying 2 + 2 = 5.
@@Facetime_Curvature 🤣🤣🤣🤣🤣🤣 there's even worse: pi = 22/7 😆
You just destroyed the universe with that
@@OuroborosVengeance: Who Cares?! Not I!!!
20:02 Palindromic timestamp
NEEEERRRRRRD! ;)
-Stephanie
MP editor
@@MichaelPennMath Damn right, I’m a freaking nerd 😂
This thread makes me happy.
Here is a mathematically rigorous way to think about \delta. It is a _measure_ on the real numbers R.
A measure µ on R is just a function that assigns to a subset A of R a positive number µ(A), its "size" (strictly speaking it only does so for subsets that are not too "wild", so-called measurable subsets, e.g. all subsets that you can make using countable union, countable intersection and complement operations starting from intervals). The delta measure, or rather the delta measure \delta_a(dx) for a number a \in R, is very simple:
\delta_a(A) = 1 if a \in A and 0 otherwise
Standard arguments then tell us that one can integrate (measurable) functions on R to get
\int_R f(x) \delta_a(dx) = f(a)
"By abuse of notation" one writes \delta(x - a)dx. This is strictly speaking incorrect (the technical terminology is "\delta_a(dx) is not absolutely continuous with respect to Lebesgue measure dx").
Michael's presentation shows that if A is reasonable (e.g. an interval) there exists sequence of "weight" functions w_n(x) such that _for fixed A_
\lim_{n -> ∞} \int_A w_n(x - a) dx = \delta_a (A) = 1 if a in A and 0 otherwise.
And by "standard" arguments
\lim_{n -> ∞} \int_R f(x) w_n(x - a) dx = \int f(x) \delta_a(dx) = f(a)
Therefore, if you know what it really means, \delta(x - a)dx is often a convenient shorthand.
P.S. If you want to learn about measures and why they are useful, there is a very nice and no-nonsense YouTube series by @brightsideofmaths:
ruclips.net/video/xZ69KEg7ccU/видео.html
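A quick numerical illustration of that last limit (my own sketch, not from the video; the Gaussian weights w_n and the choices of f and a are arbitrary):

```python
import numpy as np

# Gaussian weights w_n(x) = sqrt(n/pi) * exp(-n x^2); each integrates to 1.
def w(x, n):
    return np.sqrt(n / np.pi) * np.exp(-n * x**2)

f = np.cos   # any continuous test function
a = 0.7      # f(a) = cos(0.7) ~ 0.76484

x = np.linspace(a - 10.0, a + 10.0, 2_000_001)
dx = x[1] - x[0]
for n in (1, 10, 100, 10_000):
    # Riemann-sum version of \int_R f(x) w_n(x - a) dx
    print(n, np.sum(f(x) * w(x - a, n)) * dx)
# printed values approach f(a) as n grows
```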
Hi Dr. Penn!
Great chalkwork as always! Very clean lines.
The Dirac function is incredible, not just mathematically but in practical terms. It blew my mind; it's one of those really cool things to learn about.
This is the function I was dying to see Michael sir do an analysis on. Love you sir!!!❤❤
As a physicist, thanks for acknowledging that it isn't thaaaat sketchy after all! p.s.: Love your videos
Gnarly and sketchy rolled into one.
Thank you, professor.
A nice sample of something that seems quite convoluted but turns out rather nice and discrete.
Speaking as a physicist, it's definitely a useful short-hand notation that saves a lot of writing during various derivations. Sort of like just using d/dx f(x) on various functions without fully writing out and calculating d/dx f(x) = lim(h->0) (f(x+h)-f(x))/h each time you take a derivative. I found the delta function particularly useful using Green's functions in electromagnetics problems which I did my dissertation on and later professionally.
The thing is though, d/dx is a notation which means exactly that, but the delta "function" is a bit more ambiguous.
@@pedrosso0, in what way exactly is it ambiguous? Come on mathematicians, admit that you simply like to be a little bit obtuse sometimes.
@@TheGlassgubben Like to be what?
@@TheGlassgubben Like to be a little what?
As for answering the question: it's ambiguous where you're taking the limit if you're not explicitly stating it. I don't know what context it's used in; however, if the delta function is used as f(δ(x)), since the lim part is taken out it isn't clear whether it's lim f(δ(x)) or f(lim δ(x)), which for certain f and x can prove to be a big difference.
@@pedrosso0, from the Cambridge dictionary: "obtuse adjective [...] stupid and slow to understand, or unwilling to try to understand". In this case it was mostly meant in jest.
That issue doesn't affect anything too often when we're limited to physically relevant problems. If we ever find out that we have to decide on one definition we can care about it then.
Following the physicist's math series, I think it could be interesting to have a video about the Baker-Campbell-Hausdorff formula. Maybe by showing that exp(A)exp(B) does not equal exp(A+B) if A and B are two matrices with non-zero commutator.
Yeah, and the Magnus expansion is a very interesting application of this, which is the expansion of the evolution of a process in commutators.
en.wikipedia.org/wiki/Magnus_expansion
Definitely. The construction of the S-matrix in scattering theory rests in some way on this formula.
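For anyone who wants to see the non-commutativity concretely, here's a minimal numerical sketch (my own, with small ad hoc 2x2 matrices; scipy's expm is the matrix exponential): exp(A)exp(B) differs from exp(A+B) at second order in the matrices' size, and adding the first BCH correction [A,B]/2 pushes the error to third order.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

t = 0.1  # small parameter so the BCH series converges quickly
A = t * np.array([[0.0, 1.0], [0.0, 0.0]])
B = t * np.array([[0.0, 0.0], [1.0, 0.0]])
comm = A @ B - B @ A  # the commutator [A, B], non-zero here

lhs = expm(A) @ expm(B)
print(np.linalg.norm(lhs - expm(A + B)))               # ~7e-3: O(t^2) error
print(np.linalg.norm(lhs - expm(A + B + 0.5 * comm)))  # ~2e-4: O(t^3) error
```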
Reminds me of my physics studies 40 years ago!! Thank you Michael. We see the link between math and physics. P.A.M. Dirac was a misunderstood genius.
Hi Michael, you remind me of 21 years back, when I took a distributions course involving this one. Thanks!
I seem to remember this when dealing with the convolution of functions
Is the Dirac delta function a continuous version of the Dirac measure? The Dirac measure is equal to 1 if there exists an element inside a specified set, or region, or interval, and 0 if there is no element inside the specified region. So in the Dirac measure the element can act like a variable that is free to take on values inside the interval of the set and values outside the set as well. When values are inside the interval that specifies the set, the Dirac measure is 1, otherwise not.

On a discrete number line, the set can be specified at one point along the number line, and the element can take on discrete values along the number line. Only when the element number is equal to the set number will the Dirac measure equal 1; when the numbers are not equal, the Dirac measure is zero. This turns the Dirac measure into the Kronecker delta.

We can do a summation of the Kronecker delta over the entire number line, and it will add up to 1, because only in one place will the two numbers be equal. This looks like a discrete version of the integration of the Dirac delta function.
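Pretty much, yes. A toy version of the discrete picture (my own sketch, with the point mass placed at a = 3 and the number line truncated):

```python
# Kronecker delta: the Dirac measure at the point a, restricted to integers.
def kronecker(k, a):
    return 1 if k == a else 0

a = 3
line = range(-1000, 1001)  # truncated "number line"

print(sum(kronecker(k, a) for k in line))         # 1: all the mass sits at k == a
print(sum(k**2 * kronecker(k, a) for k in line))  # sifting: picks out a**2 = 9
```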
Never have I been so insulted by an intro that's so correct. This "function" came up a lot in my work generally, including in one section of my PhD. In some strange edge-case situations, I had to integrate \delta(x) from 0 to infinity, which, after digging around in the literature, turns out to not really be defined. I ended up calling it 1/2, both from the geometric intuition and because it made it agree with the solution I got from a completely different method applied to the same underlying problem (a method that was only applicable in those strange edge cases).
it would've been faster to ask somebody from the maths department :)
if the integral of f(x) \delta(x) dx is defined to be f(0), then wouldn't the integral of \delta(x) dx on its own just be 1? since f(x) = 1 would give that integral
@@GeekProdigyGuy I tried lol. Their answer was something along the lines of "it's not a real function, so it's not properly defined outside of the right sort of integral, so we don't care". I essentially just extended the definition of the Dirac delta so that the different answers I had could be written as one formula, rather than having separate cases.
@ The integral from -infinity to infinity of the Dirac delta is 1. But I was integrating over half of that. If you take the Dirac delta as the limit of the Gaussian in the video, you'd get a half. If you take the step function definition, you'd get 1. You can easily construct another limit definition of the distribution to give you 0, or in fact any value between 0 and 1. In different contexts, it's usually taken to be 0, 1/2, or 1, whichever is most convenient. It's not a big deal, it's fundamentally just notation.
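A quick numerical check of that ambiguity (my own sketch; "infinity" is truncated at 1, which is plenty for these integrands):

```python
import numpy as np

n = 10_000
x = np.linspace(0.0, 1.0, 1_000_001)  # integrate over [0, "infinity")
dx = x[1] - x[0]

gauss = np.sqrt(n / np.pi) * np.exp(-n * x**2)  # symmetric Gaussian limit
step = np.where(x < 1.0 / n, float(n), 0.0)     # one-sided step: n on (0, 1/n)

print(np.sum(gauss) * dx)  # ~ 0.5
print(np.sum(step) * dx)   # ~ 1.0
```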
@@QuantumHistorian what exact topic was your Ph.D. on?
Usually the answer is going to depend on how the delta function was derived as the answer to a formula in the first place, and often the correct result is going to depend on the symmetry and dimensionality of the function that is being integrated against the delta function. In a course I try to show, for example, how, because the delta function is the result of a limiting process, one may have to include that process in the problem in which one applies the delta function.
In place of integration by parts, you can directly use the fundamental theorem of calculus, which requires only continuity of f.
That's why physics texts just write that further calculations are left to the masochists!
11:55 Here I can see the Gamma function, after using that the integrand is even on an interval symmetric around zero
and the substitution t = nx^2
Then you multiply it by (x-1)(x-2)(x-3)...
In the convolution integral to get the signal. Useful and simple.
If you want to use an approach similar to the first one in the video, it would be better to consider the Dirac delta as the limit of a step function that is n/2 between -1/n and +1/n, so that it is symmetric around 0.
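With that symmetric choice, the whole computation needs only continuity of f (a sketch): by the mean value theorem for integrals,

\int_{-\infty}^{\infty} f(x) d_n(x) dx = (n/2) \int_{-1/n}^{1/n} f(x) dx = f(c_n) for some c_n in [-1/n, 1/n],

and c_n -> 0 forces f(c_n) -> f(0), with no assumption on f' at all.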
If you want an idea for a follow-up question, a common post-grad interview question is "What is the integral of f(x) times the *derivative* of the Dirac delta function (from -infinity to +infinity)?" It can be done by just doing integration by parts and the defining properties of the delta function, but it might be fun to do it rigorously using a limit definition of the delta function.
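For anyone who wants the spoiler, here is the standard formal computation (assuming f is differentiable, and using that \delta vanishes away from the origin so the boundary term drops):

\int_{-\infty}^{\infty} f(x) \delta'(x) dx = [f(x)\delta(x)]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f'(x) \delta(x) dx = -f'(0).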
By definition, the Dirac function is the Fourier transform of a constant function such as f(x) = 1. One can also start from the Fourier transform of a step function and then take the limit as the width of the step goes to infinity.
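The formal identity being invoked here (it only makes sense under an integral against a nice test function, never pointwise) is

\delta(x) = (1/2\pi) \int_{-\infty}^{\infty} e^{ikx} dk,

i.e. the Fourier transform of \delta is the constant function 1.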
I'm glad you didn't go through the whole business of making topologies from infinite families of seminorms... and whatever else was needed for distribution theory. It's been 20+ years and it was by far the hardest math class I ever took - the details are a bit fuzzy.
It's one of the most difficult topics, especially when it comes to understanding the topology on the test spaces!
Physicist here, don't shoot, I know it's not really a function. Can't we all just get along? 🙂
(Once, in my senior year as an undergrad, a mathematically-minded colleague gave a small lecture, properly introducing distributions. It was intriguing, and not that hard to understand, really. But I only really understood the subject when I studied quantum mechanics, and specifically the problem of wavefunction normalization in the continuum case)
That's why we don't talk about continuum wave functions, we just close our eyes and hope that linear algebra works the same in finite and infinite dimensions :p
It's a function, but not a locally Lebesgue integrable function.
Don't forget electrodynamics, and how the Dirac delta distribution appears in the construction of Green functions (aka propagators in QFT).
Nice motivational video for the power (and beauty) of Schwartz's distribution theory. Sadly, if you _really_ want to take the deep dive eliminating the "sketchy parts", you need to encounter
- the space of test functions (and its topology based on convexity)
- actually define distributions as functionals from the space of test functions to ℂ
Only then will things like the limit at 3:10 actually make sense -- and yes, "Distribution Theory" borrows heavily from "Functional Analysis", making it an advanced subject in a mathematician's master's program.
The integral he's writing at 11:07 looks really similar to the integral definition of the Gamma function, and I'm now super curious as to what the connection between the two could be.
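The connection is direct (a sketch, assuming the board shows \int x^{2m} e^{-nx^2} dx at that timestamp): substituting t = nx^2,

\int_{-\infty}^{\infty} x^{2m} e^{-nx^2} dx = 2 \int_0^{\infty} x^{2m} e^{-nx^2} dx = n^{-(m+1/2)} \int_0^{\infty} t^{m-1/2} e^{-t} dt = \Gamma(m+1/2) / n^{m+1/2},

and \Gamma(m+1/2) = \sqrt{\pi} (2m-1)!! / 2^m, which is where the double factorials in the video come from.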
No way you uploaded this just a few hours after I had to submit my homework using this...
I planned it out specifically to affect you, in this way, personally. ;)
-Stephanie
MP Editor
This is the first time I've seen this delta.
It made a lot more sense when you brought out the shrinking bell-curve.
Good stuff. Would definitely watch more videos on sketchy maths in physics :)
There is a reason why the theory of distributions was introduced.
The tedious approach using test spaces and distributions is definitely the way to go for the study of the Dirac delta, but it involves a lot of pure math knowledge.
I'd love to see Michael do a video of that.
@@michaelguenther7105 me too.
Quality content as always. Great job!
I believe that the Dirac delta could be defined as an ordinary function in the context of non-standard analysis: in particular, as a normalized Gaussian with an infinitesimal standard deviation (and therefore an infinite maximum ordinate).
The "sketchy" source I got that definition from was several physics professors. They usual followed it up with a warning that "Some effete mathematicians will have a problem with this definition, but..."
I'm a Physicist, and I love the delta function so much!!!
Us Physicists need to write a song, "My Delta Function", to the tune "My Generation".
I read the title and immediately knew it would be dirac delta.
as an EE undergrad, I've seen levels of non-rigor you never would have thought possible
You should collaborate with ThatMathThing, an analyst who claims they are functions!
You do usually at least get a disclaimer of the nature of "This is not really a function--it's a *distribution*." Though there may not be a lot of discussion of what that means.
Physicists use convolutions of functions a LOT, and it turns out to be very useful to have a "function" that will just pick out a certain value of another function by convoluting them. They also think a lot about locality and conservation laws, and the Dirac delta is often a convenient tool for *restricting* an interaction in some way, by locality or to obey a conservation law.
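A small sketch of that "picking out a value by convolution" idea (my own toy example, with a narrow normalized Gaussian standing in for \delta):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 10_001)
dx = x[1] - x[0]
f = np.sin(x) + 0.3 * x  # some smooth signal

sigma = 0.01  # narrow, normalized Gaussian kernel ~ delta(x)
kernel = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# (f * kernel)(x) ~ f(x): convolving against an approximate delta
# essentially evaluates f at each point.
smoothed = np.convolve(f, kernel, mode="same") * dx
print(np.max(np.abs((smoothed - f)[1000:-1000])))  # tiny, away from the edges
```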
Pretty sure this is an engineer's favorite function too. I'm doing a PhD in control theory and this function becomes your entire life.
It was always taught as a "function", so not an actual function, but it's still a useful tool to have
It doesn't seem right to swap the sum, the limit, and the integral for (2m - 1)!!/(2n)^m, because it doesn't converge uniformly.
Can you maybe do a video on what happens if the delta distribution lies on the boundary of the integration domain D? I think it should be provable that if the boundary itself is differentiable, the result should be (1/2)f(x) for x on ∂D. This would also clear up the "sketchy physicist's" definition of the Heaviside theta function, which some (including me) take to have a value of 1/2 at 0. (Sorry if what I'm saying doesn't make sense, I'm only a second year.)
The Heaviside function being 1/2 at zero reminds me of Fourier series.
I think they call objects like these "distributions". Fascinating little things.
Doesn't 10:20 only work for finite bounds as the bounds are independent limits?
I remember this coming up when I took a quantum mechanics class during my undergrad. The lecturer just kinda pulled it out of nowhere, admitted it wasn't really a function and then just rolled with it
Nice video, but I have 2 questions: why could you just swap the limit and the integral, and what happens if f is not differentiable (your proof only works when f is differentiable)? And what is an actual rigorous way to define this function? (I'm guessing it's just an abuse of notation, because the sequence of functions you defined is not pointwise convergent at 0.)
The answer to the first question is simple: you can't. I generally love Michael's work, but what he has done here is as sketchy as defining the Dirac delta the way he pointed out at the start of the video. Using the Lebesgue measure for integrating, which is what is assumed here, you can't swap limit and integral in this case. In fact, if you go to the extended reals, defining a function as taking the value "infinity" makes sense, but if it takes that value only on a set of measure zero (like just one point), its integral is not changed. Thus, the Lebesgue integral of a Dirac delta is zero without debate.
The trick, however, is not to use the Lebesgue measure, but to change it. Thus, the "integral of a function times the Dirac delta under the Lebesgue measure" is in reality an abuse of notation referring to "the integral of the function under the discrete measure" that assigns zero to every point except zero, and assigns one to zero. So, in reality, the Dirac delta is not a function and is nothing similar to a function, but an object that means: change the measure here when integrating this function.
Alternatively, the integral can be seen as an abuse of notation referring to the Dirac delta distribution applied to the function, which is a totally different object and steps into a topic that needs more introduction. I recommend you check it out though, especially the Lebesgue spaces, the concept of dual space and why the dual of certain spaces is not what we think it to be. It is quite mind-opening.
Yay, physics!!!
Also, the video description is... interesting. Could it be that Stephanie the Editor is having a little too much fun??? Naaah... 😂
It was so cool finally seeing the Dirac delta rigorously defined and used without any hand waving in a functional analysis course I did on Coursera a few years ago, back when they had awesome free classes. I had used it all the time self-studying physics but never came across it in my undergrad math degree.
I think it makes sense to think of the Dirac delta as the limit of PDFs: you're finding the expected value of some function under the density \delta_n(x).
Thinking of it as the Gaussian distribution with the variance approaching zero does make a lot of sense, it also clicks when looking at the characteristic function.
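Right: for a centered Gaussian with standard deviation \sigma, the characteristic function is

E[e^{itX}] = e^{-\sigma^2 t^2 / 2} -> 1 as \sigma -> 0,

and the constant 1 is exactly the characteristic function of the point mass at 0, i.e. the Fourier side of the Dirac delta.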
This has to be the nicest explanation of the Dirac delta I have seen so far, seasoned with a bit of mathematician humor towards those sketchy physicists 🙂 *tips hat*
As a physicist it's nice to know that the way we learnt about it formally was a bit more rigorous than these "sketchy" constructions. (Mainly by always saying that this is only valid if the function f is "nice" enough, and that in the physical world all functions can be assumed to be "nice" enough.)
I love the Dirac FUNCTION because of how well it FUNCTIONS.
Engineers use a discrete form of the Dirac function in signal processing: the unit impulse, which is 1 at n = 0 and 0 otherwise.
I recently saw a small proof that the Dirac delta "function" is not a function, using Lebesgue integrals. Do you think you can do a couple of videos talking about Lebesgue integrals and their uses?
In essence every integral you do is a Lebesgue integral, and integrals are pretty useful.
Nice video and nice GPT4 greetings 😂
I'm not using GPT4 to write the descriptions. I'm a weirdo. That's all I need.
-Stephanie
MP Editor
@@MichaelPennMath no offence, just joking 👍
what if you used hyperfinite numbers / infinitesimals?
ah yes
The Dirac delta function. I recently learned about it in diff. eq. in relation to Laplace transforms.
All this makes me crave a full MP "Math Major" series on {checks early 90s sketchy physics notes...} Fourier Transforms. Always struck my (then) young brain as a profound impact of maths on physics.
All kinds of limits and infinite sums flying out of all kinds of integrals in this one. Seems appropriate for the material 😅
That was so cool!
Here’s the thing: it’s OK for calculations to be sketchy up to the point you get an actually interesting result; then you go back and try to patch things up. I mean, QFT has amazing predictive power and it’s all sketch; you’ll get a million bucks if you show Yang-Mills actually makes sense.
What does otherwise mean in a piecewise function?
Electrical engineers also love the Dirac Delta function, it's very crucial for evaluating a system's transfer function for example.
also the very core of digital signal processing
@@tommihommi1 yup, you need the delta function to even consider thinking about the Fourier transform.
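A toy version of that (my own sketch, with made-up filter coefficients): in discrete time the delta becomes the unit impulse, and feeding it to an LTI filter reads off the impulse response, whose DFT is the frequency response / transfer function evaluated on the unit circle.

```python
import numpy as np

impulse = np.zeros(16)
impulse[0] = 1.0  # unit impulse: 1 at n = 0, else 0

h = np.array([0.5, 0.3, 0.2])  # hypothetical FIR filter coefficients

y = np.convolve(impulse, h)[:16]  # filtering the impulse returns h itself
print(y[:5])                      # [0.5 0.3 0.2 0.  0. ]

H = np.fft.fft(y)  # frequency response of the filter
```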
I believe you can be less restrictive with your constraint on f(x) when deriving the integral with the delta function for d_n(x) = n for 0 < x < 1/n (and 0 otherwise).
It does; by existence of the limit there's some e such that |x| < e -> |f(x) - f(0)| < 1, so we can take N > 1/e and M = |f(0)| + 1. But the derivative can be problematic, so integration by parts isn't exactly the best way. In fact, we don't need a condition on the derivative if the problem is done properly.
Yo, Michael, any other fantastic math channels on youtube similar to yours?
I wish I knew when it was a good place to stop
Don't diss my delta, bro!
Hope he hits on the fact that it's like the limit, as sigma goes to zero, of a mean-0, standard-deviation-sigma normal distribution; coming from actuarial/stats this makes the most sense to me *shrug*
I love this sketchy function
I remember trying a function series for d_sub_n such that d_sub_n(0) = 0, but I don't remember if it gave the same result.
I've always enjoyed irritating mathematicians by calling it a function. Good times...
That is it's ... purpose.
People have commented about measures, but again, it feels like the Dirac delta "function" is trying to be the density of a point distribution, which doesn't make a whole lot of sense for discrete distributions. However, it makes complete sense as a measure; if my point mass is at 0, then the Dirac measure gives any region that contains 0 the weight 1, including really tiny sets around 0. Density is a somewhat relative thing, it tells you how much more mass one region has than another, but it's overkill here since a set either contains all the mass or none of it, and the ratio is degenerate. That said, Dirac measures kinda seem to turn Lebesgue integrals into sums, and the Dirac function seems to perform the same role. It makes sense that it crops up in physics, because you might have mass or charge "distributed over" or localised at a point instead of regions of non-zero area...
Now make a good pairing of the Dirac delta distribution with Fourier and Laplace transforms and you're an engineer! :)
The engineers' favourite function!
For the "first way" at 8:30 he assumes f being continous (as he pulls the limit inside) but assumes that f' is bounded earlier
If f is cont., isnt f' automatically bounded?
I guess we only need f to be continous at 0
Continuous functions don't have to be differentiable, let alone have a bounded derivative!
What if you use the continuous version of the function with the Ackermann function?
The engineer's second best tool after excel.
I would have gone with the Breit-Wigner distribution (delta(x) = lim_{gamma->0} gamma/(pi(x^2+gamma^2))) for the continuous version instead of the Gaussian distribution. But that's because I'm a nuclear theorist; the guys in condensed matter would probably prefer the Gaussian distribution. It's all the same: there are several continuous distributions where a zero limit or infinity limit produces the Dirac delta.
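And it's normalized for every gamma, which is what makes the limit work (quick check):

\int_{-\infty}^{\infty} gamma/(pi(x^2 + gamma^2)) dx = (1/pi) [arctan(x/gamma)]_{-\infty}^{\infty} = (1/pi)(pi/2 - (-pi/2)) = 1,

while the peak height 1/(pi gamma) blows up as gamma -> 0.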
I was expecting a delta-epsilon proof. Do you need differentiability?
This is opening the door to the annoying part of physics - we "happily" jump between being lax ("it's fine, physics makes sense, no mathematical peculiarities will come and bite us") and being rigorous ("to get it right we need to be cautious about everything").
very unhinged description as always😂
Yeah, I remember that Wolf function. Using limits ... and ... how we implement it in programming languages ...
Problem: the Dirac delta function is not really a function. It only works if you integrate it.
Solution: define it in terms of hyperreal numbers:
delta(x) = {omega if 0 < x < 1/omega, 0 otherwise}, where omega is an infinite hyperreal.
Called the unit impulse.
The function of existence. You exist here!
Interesting "function"
How are distributions related to the least squares method?
IIRC Hilbert famously said that the math of physics is too hard for physicists. Also, I find it a little annoying that it is attributed to Dirac, but I guess Fourier and others have plenty of things already named after them.
At 16:20 the result for m=0 is "-1"; this is due to an error at 14:36, where you wrote -2m+1 instead of -(2m+1) for the end term of the derivatives.
That is not a mistake
That's because when he applies the formula for
(d/dn)^m f(x) = (-1/2)(-3/2)(-5/2)...,
the -1/2 starts from m=1, not m=0. So, if you want to calculate (d/dn)^0 f(x) = f(x), you cannot apply the formula; you have to do that case separately.
correct, I agree
Fun is for Function, I guess