THANK YOU FOR THE ANIMATIONS! They help me a lot
I've solved these integrals so many times using various techniques: Feynman's trick, Laplace transforms, the complex Gaussian, and a parametrization using the gamma function... it's the last one that's definitely my favourite.
Hi, big fan. Excited for your complex analysis series.
Ramanujan's master theorem also works surprisingly well, even for different powers of x inside the sine.
oh shit i learned it from your channel lmao
Fresnel integrals are widely used in physics, in the branch of physical optics. Thank you so much, sir.
Are they perhaps used for Fresnel lenses? If so, what do they tell us about them?
@@lbgstzockt8493 They are mostly used in deriving relations in Fresnel diffraction.
I'd turn them into complex Gaussian integrals from the start. The Gaussian integral converges as long as you approach infinity at 45° or less, and the Fresnel integral is the sum or difference of Gaussian integrals at exactly 45°.
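For anyone who wants the 45° remark spelled out, here is a sketch of the identity behind it (the rotated ray is exactly where e^{iz^2} becomes a genuinely decaying Gaussian; justifying the rotation is the sector-contour argument mentioned further down the comments):

\[
\int_0^\infty e^{ix^2}\,dx = e^{i\pi/4}\int_0^\infty e^{-r^2}\,dr = \frac{1+i}{\sqrt{2}}\cdot\frac{\sqrt{\pi}}{2},
\qquad
\cos(x^2) = \frac{e^{ix^2}+e^{-ix^2}}{2},\quad
\sin(x^2) = \frac{e^{ix^2}-e^{-ix^2}}{2i},
\]

so both Fresnel integrals come out as real and imaginary parts of a rotated Gaussian integral, each equal to \sqrt{\pi/8}.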
20:20
That was a lot of a lot.
Thank you, professor.
Very very nice. Thank you.
A high school teacher would deduct points for not “simplifying” the final boxed answer. 😂
Wow. Amazing.
Thank you 🌸🌸🌸
Mesmerizing in its beauty!
That was a fun derivation. Thanks for exposing me to this.
The description reads like a Tyler 1 stream title
The “s” in “Fresnel” is silent; “Fresnel” is a French name. Sorry for being pedantic. Being an optics person, I work with the Fresnel equations daily.
So what? In English we also pronounce the s in Paris; that's just how it sometimes goes across languages.
@@bbbb98765 You mean "Pagih", right?
@@ernestomamedaliev4253 ha ha ... One of my favourites, being half German, is the car maker VW: 'fau vay' if you're German, 'vee double u' if you're an English speaker.
He clearly said "Sorry for being pedantic", guys. Leave him alone.
@@sarithasaritha.t.r147 what a kind person ❤
Nice demonstration, in particular with the use of the complex z = a + ib.
cos(x^2) + i sin(x^2) = exp(i x^2) = exp(-x^2/i).
It's then possible to calculate the integral of exp(i x^2) from 0 to oo, denoted Int(z). It gives sqrt(i pi)/2, since the Gaussian integral from 0 to oo equals sqrt(pi)/2, a well-known result.
You mentioned sqrt(i) = (1 + i)/sqrt(2), and that is the key.
The result of the integral is [(1 + i)/sqrt(2)] * sqrt(pi)/2, so
a = Re(Int(z)) = sqrt(pi/8)
b = Im(Int(z)) = sqrt(pi/8) = a
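If anyone wants a quick numerical cross-check of that sqrt(pi/8) value, here is a minimal Python sketch (it assumes scipy is available and uses scipy's Fresnel-function convention, which differs from the integrals here by a rescaling):

```python
# Minimal numerical sanity check of a = b = sqrt(pi/8).
# scipy.special.fresnel uses S(z), C(z) = int_0^z sin(pi*t^2/2) dt, cos(pi*t^2/2) dt,
# so the substitution x = t*sqrt(pi/2) rescales them to int_0^oo sin(x^2), cos(x^2) dx.
import numpy as np
from scipy.special import fresnel

S, C = fresnel(1e6)            # both tend to 1/2 for large arguments
scale = np.sqrt(np.pi / 2)     # Jacobian of x = t*sqrt(pi/2)
print(scale * C, scale * S)    # ~0.626657 each
print(np.sqrt(np.pi / 8))      # 0.626657...
```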
B-E-A-utiful!!
Genius solution !
Really well explained! Thanks!
My first step would be to eliminate all the trig by defining c=a+ib. Then work out c and separate into real and imaginary parts.
Ah, I see you did this towards the end, but I reckon doing it earlier would mean only having to go through the integration-by-parts malarkey once rather than twice.
And that feels like a good place to stop.
Nice! I remember my sheer joy the first time I integrated the general quadratic polynomial under the cosine👍
I would have done a contour integration of e^(-z*x^2) on a closed anti-clockwise path (assuming Re(z)>0), one part of the path being the interval [0, R] where R is a large positive real number.
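For readers who want to see it, here is a rough sketch of one such contour argument, applied to e^{iz^2} on a 45° pie slice (not necessarily the exact path this comment has in mind):

\[
0 = \oint e^{iz^2}\,dz = \int_0^R e^{ix^2}\,dx + \int_{\mathrm{arc}} e^{iz^2}\,dz - e^{i\pi/4}\int_0^R e^{-r^2}\,dr .
\]

On the arc, |e^{iz^2}| = e^{-R^2\sin 2\theta} \le e^{-4R^2\theta/\pi}, so the arc term is O(1/R); letting R \to \infty gives \int_0^\infty e^{ix^2}\,dx = e^{i\pi/4}\sqrt{\pi}/2, whose real and imaginary parts are the two Fresnel integrals, each \sqrt{\pi/8}.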
Quick question at 20:00. How do we know that ln(x) is injective over complex numbers?
It doesn't have to be for the purposes of this proof. Check out the thread started by Big Jazbo.
@@RexxSchneider ahh cool thank you thank you
Very wonderful, as usual.
This is how the professionals do it.
Awesome derivation 🎉
A simply gorgeous and sexy solution!
Beautiful 😌
This is a very clever approach, but it relies on a magic choice of substitution at the start. How would you come up with that choice of A(t, u) and B(t, u)?
If you rewatch that part, it’ll explain that the lower-case a & b integrals are actually themselves special cases of the later capital A & B integrals. So it’s not really magic after all. :)
-Stephanie
MP Editor
@@MichaelPennMath Sure, that much was pretty clear. But why would you generalize a and b that way? Why introduce the factor of e^-tx^2 or the factor of u at all?
@@mostly_mental One reason I can see is that sometimes generalizing by introducing more degrees of freedom makes a problem easier to solve, and I believe sinusoids and exponentials form a basis. Taken together, this makes it a reasonable thing to try.
@@allanjmcpherson True, but there's plenty of other exponential or sinusoidal factors you could use. Why multiply by e^-tx^2 instead of e^tx or cos(tx^2) or any other factor? Is this just a case of trying them all until one happens to work out nicely, or is there a principled reason to pick this particular factor?
In my experience they are found by "playing" around. Putting a u in sin(u x^2) or cos(u x^2) and differentiating with respect to it is something you might try just to see what happens, and cos and sin are the real and imaginary parts of e^(i u x^2). You then stumble across a boundary term like lim_{x->infty} x e^(i u x^2)/(i u), which looks non-convergent if you're unlucky. But putting an extra factor of e^(-t x^2) in from the start, with t real and t > 0, gives lim_{x->infty} x e^((-t + i u) x^2)/(-t + i u), which does converge. Then, working everything out in detail and taking care of all the technicalities, you find the simple differential equation and the solution from the video, with the "weird"-looking function definitions at the start. In the end, what you get converges in the limit t -> 0+ with u = 1, which is what you started with.
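To make the convergence point in that comment concrete, here is the key boundary term written out (a sketch, matching the comment's notation):

\[
\lim_{x\to\infty}\frac{x\,e^{iux^2}}{iu} \quad\text{does not exist,}
\qquad\text{while}\qquad
\lim_{x\to\infty}\frac{x\,e^{(-t+iu)x^2}}{-t+iu} = 0 \quad (t>0),
\]

because the factor e^{-tx^2} beats the factor of x. That is exactly why the regulator t is introduced at the start and only sent to 0^+ at the very end.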
Recently I stated that your videos are the equivalent of a Master's degree in Mathematics. I'm revising that opinion: your body of videos is beyond the equivalent of a Master's degree in Mathematics. Nice work!
so cool!
Excellent video, as usual. But before starting the integration procedure, how do we know these integrals converge, in the first place? The integrand functions look to me as if they oscillate more and more towards infinity, without ever decreasing…
It's the x^2 bit. Think about the roots as x increases. Since x^2 grows faster and faster, the roots get closer and closer, so the signed area between them gets smaller and smaller. It so happens that the rate at which this occurs is such that the integrals converge.
@@fahrenheit2101 Of course it's not a proof. Also, which kind of integral are we referring to? The Fresnel integrals in the video are improper Riemann integrals, not Lebesgue integrals, which I'm pretty sure do not exist here.
@@MarcoMate87 I don't know the difference - that's why I stopped before addressing the heart of the question. How are Lebesgue integrals defined, and why does one definition converge while the other doesn't?
@@MarcoMate87 They do converge, by Leibniz's test (en.wikipedia.org/wiki/Alternating_series_test).
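To spell out the alternating-series argument from the two replies above (a sketch):

\[
\int_0^\infty \cos(x^2)\,dx \;\overset{u=x^2}{=}\; \int_0^\infty \frac{\cos u}{2\sqrt{u}}\,du ,
\]

and splitting the half-line at the zeros of \cos u turns this into an alternating sum of lobe areas that decrease monotonically to 0 (each lobe has width \pi while 1/(2\sqrt{u}) shrinks), so the improper integral converges by the alternating series test; the same works for \sin(x^2). The convergence is only conditional, as discussed further down in the L^1 thread.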
At the end, you assumed that if log(z) = log(w), then z=w. This is true for real z and w, but for complex arguments the log function is multi-valued. I suppose there is some justification for what was done because it gave a correct result. Can anyone clarify?
That branch choice can be absorbed into the constant C; the constant will take different values on different branches so that the final result is independent of branch choice.
@@leostein128 Thank you! I overlooked the arbitrary constant.
@@leostein128 I'm not sure I get it since a and b are actually coming from the constant. I feel like if we change the constant, we change the value of a and b.
I thoroughly enjoyed this unusual evaluation of the Fresnel integrals right up until you started taking logarithms of complex numbers. You omitted to mention the fact that the complex logarithm function is not single-valued. You obviously know that but I don't think you can silently assume your viewers know that and can fill in the missing details, which do not appear trivial to me.
The natural log is multi-valued, but exponentiation is single-valued. So if z = w, you cannot always conclude that a chosen value of log(z) equals a chosen value of log(w). But the reverse logic is perfectly valid: if log(z) = log(w), raising e to the power of both sides gives z = w, which is always correct.
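A tiny worked example of that asymmetry (with "log" meaning a particular chosen value of the multi-valued logarithm): for z = w = 1, valid values include \log z = 0 and \log w = 2\pi i, so z = w does not force the chosen logarithms to agree; but if the chosen values do satisfy \log z = \log w, then z = e^{\log z} = e^{\log w} = w. The video only uses this second direction, and any leftover 2\pi i k ambiguity is absorbed into the constant of integration, as noted in the thread above.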
Nice
If Michael were to be giving this lecture in French, you can be sure he'd pronounce "Fresnel" the French way: the "r" as a guttural throat clearing, the e's as "uh", the "s" silent. With the "s" silent, the "n" is probably nasal; "en" would be nasalized for sure. And the "l" is probably silent too; one can never be sure with French. The last consonant is sometimes pronounced, but often not, like in "Paris." Like I said, if he were French, he'd know. But he isn't. Get over it.
But that is the case for the "l" in Fresnel. François from Paris (where the s is silent)
I remember that complex logarithms are cyclic. So at the end, how are we sure that the value will be on the principal branch of the function?
Great
Happy to see the video getting a meaningful and useful title.
Wouldn't it be easier to consider t just as a "parameter" and u as a "variable"? I mean, the result is the same, but I think it would be less "cumbersome" in the sense that we would not have "partial derivatives" but "total derivatives". Nice vid as always! Thanks for the content!
insane
Way cool
Frenchmen cringing in the comment section say "Oui!"
No doubt the solution is elegant, but the intuition behind the various subtle steps employed throughout the solution is hard to comprehend (e.g., introducing the imaginary number i).
Substituting x^2 = t and using Ramanujan's master theorem is also interesting.
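For anyone curious, here is a sketch of what that route yields, using the standard Mellin transform of \sin (which is what the master-theorem machinery produces here; the comment does not spell out its exact steps):

\[
\int_0^\infty \sin(x^2)\,dx \overset{t=x^2}{=} \frac{1}{2}\int_0^\infty t^{-1/2}\sin t\,dt
= \frac{1}{2}\,\Gamma\!\left(\tfrac12\right)\sin\!\left(\tfrac{\pi}{4}\right)
= \frac{\sqrt{\pi}}{2\sqrt{2}} = \sqrt{\frac{\pi}{8}},
\]

using \int_0^\infty t^{s-1}\sin t\,dt = \Gamma(s)\sin(\pi s/2) for 0 < \operatorname{Re} s < 1; the cosine version with \cos(\pi s/2) gives the other integral.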
And this whole technique of adding an extra parameter, creating a differential equation, solving it, and then getting back to the original problem by setting the extra parameter to a particular value... is this also known as the Feynman technique?
No s in Fresnel ;) (well, in French anyway) Great video!
Instead of solving the differential equations and putting A+iB=Z together anyway, couldn't you just do that immediately with the integral definitions of A and B? Then Z is just a Gaussian integral and you get Z immediately (and can take real and imaginary parts and then take the limit t->0 and set u=1 as before).
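That does work; here is a sketch, assuming the standard complex-Gaussian formula (valid for \operatorname{Re}(w) > 0 with the principal square root):

\[
Z(t,u) = A + iB = \int_0^\infty e^{-(t-iu)x^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{t-iu}} ,
\]

and letting t \to 0^+ with u = 1 gives Z \to \tfrac12\sqrt{i\pi} = \tfrac{1+i}{\sqrt{2}}\cdot\tfrac{\sqrt{\pi}}{2}, so A and B both tend to \sqrt{\pi/8}. The price is justifying the complex-Gaussian formula and the boundary limit, which is roughly what the contour and branch discussions elsewhere in the comments address.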
Why did you mention that the cosine integral is not A(0,1) itself, but instead equivalent to the limit of A(t,1) as t goes to 0?
20:57 ∈ {20mins or less}
hmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm
You'd better say "Frê-nel". Skip the 's'.
Why wouldn't this work:
a + ib = Int[e^(-w x²)] | w = 1/i
= sqrt(π/w)/2 = sqrt(iπ)/2
= sqrt(π/2)/2 + i·sqrt(π/2)/2
==> a = b = sqrt(π/8) ■
I also noticed that this approach works. It's no big surprise, but it's not obvious either. Why take sqrt(i) = (1+i)/sqrt(2), not sqrt(i) = -(1+i)/sqrt(2)? Some extra arguments (probably including analyticity) are needed to justify the approach.
@@md2perpe As far as I know, sqrt(x) only yields the positive branch. Plus, just like log, we only consider principal branches anyway.
@@insouciantFox There is nothing that says that only the positive or principal branch should be used. And is it really okay for the value of an integral to depend on what choices we make when evaluating it?
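One way to see that no arbitrary choice is actually involved (a sketch): for \operatorname{Re}(w) > 0, both sides of

\[
\int_0^\infty e^{-wx^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{w}}
\]

are analytic in w and agree for real w > 0, so by the identity theorem the principal square root (the one with positive real part) is forced throughout the right half-plane. Writing w = t - i with t > 0 and letting t \to 0^+ then lands on \sqrt{i} = (1+i)/\sqrt{2} rather than its negative; the delicate part is the continuity of the improper integral up to the boundary, not the branch choice.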
Just tuned into the description again after the first one. What is even going on?
There’s no coherent story, I assure you. It's random, all for fun.
Stephanie
MP editor
@@MichaelPennMath I appreciate your energy
20:21?? What is this clickbait? I want my money back!
Really nice presentation. ⭐️ (…Fresnel can evidently be pronounced either way.)
I always thought it was pronounced fray-NELL integrals.
Unfortunately, it has to go through a complex function... which seems to contradict the initial objective, right? But it is nice :)
These talks are cool, but some info on applications would be awesome for the folks who are more scientists and engineers than mathematicians.
Great video! But also, "pack your brain full of knowledge"? Really? :D
They can’t all be dimes lol
Stephanie
MP Editor
So why does Michael pronounce Euler and Fourier correctly? Seems like a double standard to me. I was shocked he said Fresnel the way he did. Major faux pas.
Less of a double standard and more of a human mistake. Evidently he won't do it again now that everyone here has let him know how wrong he is.
I really don't like the animations; they're so silly.
It's hard to take you seriously when you can't even pronounce Fresnel correctly.
This proof is fantastic, but for these integrals one should first prove that the integral converges; otherwise, we could be manipulating infinities without even noticing. Also, are we talking about the improper Riemann integral or the Lebesgue integral? I suppose the former, because I think those functions are not Lebesgue-integrable.
Both "sin(x^2), cos(x^2)" are not L1-functions on ℝ, so you are right -- even in the Lebesgue sense, the Fresnel integrals only exist as improper integrals.
Optics people are a bit picky about the silent "s" in Fresnel, the French way. Just saying.