One small thought. The way I like to teach Taylor polynomials is by going "Okay, a tangent line is a good approximation but it doesn't approximate the derivative well, so what if we use a tangent line to approximate the derivative instead, and then take the integral?" Assuming both functions are smooth too, wouldn't that also be a possibility? Take the derivatives, use a line to interpolate, and take the integral?
It should, as long as you choose the constant terms in the integration process so that your function and one (should not need to be both) of the bounding functions match, up to their k-th derivative.
@@tracyh5751 OP's post was about teaching how Taylor polynomials work. So ending up with a Taylor polynomial shows that it's a valid perspective on it.
Godddamn, you talked to me right on the cusp of my knowledge. I saw that interpolation a whole 5 minutes before you revealed it, but you built the conceptual framework so well that it basically taught itself. You made the knowledge jump out of the words and equations. Incredible!
@@SoumilSahu That depends on your specific definition, I guess. If you use "monotonic increasing" for the usual definition of strictly monotonic increasing, then the 0-function is not monotonically increasing, I suppose. Although "monotonic non-decreasing" is a weird term in my opinion.
@@SoumilSahu No, the video isn't correct, because it uses the condition f' ≥ 0 for a function f being 'monotone increasing', which is the condition for weak monotone increase; if strict monotone increase was meant, then the condition f' > 0 would have been used (which would have ruled out the zero function)
I have been studying mathematics for over four years now and I had never seen Faà di Bruno's formula. Today, I've suddenly stumbled across it twice for completely unrelated reasons 😂 Just goes to show that there's always more to learn in mathematics
This is exactly what I wanted to find months ago when I created a function adder: a function that can add the graphs of two different functions. Unfortunately it was undefined at the cutting point because of a division.
A little fast, and I had to take your word for a decent amount of it, but still very followable and intriguing, especially since I've thought about a very vaguely related thing before - how no 2 polynomials look the same over any interval, barring trivial exceptions like translations. I'm not even 100% sure it's true and I wouldn't have a clue how to prove it, but if it is true, it's fascinating to me that each polynomial shape is completely unique. This kinda links in to how it would be difficult to get 2 different functions to 'agree' with one another via a smooth transition function, though I admit it's a bit of a stretch.
Hmm I don’t think it would be possible to overlap two polynomials like that because every polynomial is analytic and has a unique Taylor series, which means that you can determine what it looks like over all the reals just by looking at all the derivatives at one point. It does feel crazy though, that with all the infinitely many polynomials, there aren’t two that line up for some interval.
Neat. I'm not a math person, and I'm not good at math, but I love computer graphics and wish I could model certain processes, so I always end up having math questions out of my league. This was one. And I wouldn't have known what to search.
Really nice illustration. When I learned distribution theory, books usually just introduced the "test function" by showing the f(x) = { exp(-1/x) (x>0); 0 (x≤0) } example.
Great video. This topic reminds me of the notion of mollifier functions. It seems like you could use a mollifier function along with some arbitrary continuous interpolation to create a smooth interpolation, but I'm not actually sure if that's true.
I think I remember this exact interpolation function coming up in thermodynamics somewhere, as a statistical distribution of energy states or something of that sort. It's been a while and it was never well explained at the time, but I've always remembered it as "that function that could probably interpolate two other functions _really_ nicely"
Hi, very interesting video/concept, did you ever end up making the video for higher dimensions ? I work on differential geometry for quantum physics for my PhD and am looking for similar stuff !
wow! I remember thinking about this for ages when I was a student and I really thought no such method existed! should have thought of e^(-1/x) obviously...
The interpolation function you used here would actually be very useful for using bezier curves to construct smooth tracks, e.g. for rollercoasters. Cubic beziers are very intuitive to work with, but the acceleration (2nd deriv of the track position) is discontinuous. If you modify the bezier curve's interpolation function from stepped linear to this smoothstep, the acceleration will be continuous. You could use a gradient descent solver to minimize lateral G forces by manipulation of the curve control points!
there is actually a concept of geometric continuity (which is different from parametric continuity, which is obviously too severe) of 1st and 2nd derivatives of bezier curves. You can google for it. Quite interesting.
The tool I would immediately reach for is the error function. Define h(x)=(1+erf( (x-x0)/a ))/2 for some point x0 and width a, then your interpolated function is f(x)+(g(x)-f(x))*h(x).
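A quick sketch of that erf-based crossfade in Python (the names `erf_step` and `blend` are mine, not from the comment or the video):

```python
import math

def erf_step(x, x0=0.0, a=1.0):
    """Sigmoid h(x) = (1 + erf((x - x0)/a)) / 2: ~0 far left of x0, ~1 far right."""
    return (1.0 + math.erf((x - x0) / a)) / 2.0

def blend(f, g, x, x0=0.0, a=1.0):
    """Crossfade from f to g around x0 with transition width ~a."""
    h = erf_step(x, x0, a)
    return f(x) + (g(x) - f(x)) * h
```

One caveat: erf is analytic and never exactly reaches 0 or 1, so this blend only approaches f and g asymptotically; unlike the bump-function construction in the video, it never agrees with them exactly on any interval.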
Very interesting video. I hope I won’t need any of this as I try to build my own animation script, but it’s good to know. My only issue with the video is that your music is a bit loud.
By the way: is it possible to write this function in a numerically stable way? I mean without the infinities which occur temporarily in intermediate results during the computation due to the 1/x parts near 0 and 1.
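One way, assuming the interpolator is φ(x) = f(x)/(f(x)+f(1-x)) with f(t) = e^(-1/t) as in the video: algebraically that ratio equals 1/(1 + e^(1/x - 1/(1-x))), which needs a single exp call and produces no intermediate infinities. A sketch:

```python
import math

def smooth_step(x):
    """phi(x) = f(x)/(f(x) + f(1-x)) with f(t) = exp(-1/t), rewritten as
    1/(1 + exp(1/x - 1/(1-x))) so no intermediate 0/0 or inf/inf appears."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    t = 1.0 / x - 1.0 / (1.0 - x)   # exponent of the ratio f(1-x)/f(x)
    if t > 700.0:                    # exp(t) would overflow a double; limit is 0
        return 0.0
    if t < -700.0:                   # exp(t) underflows to 0; limit is 1
        return 1.0
    return 1.0 / (1.0 + math.exp(t))
```

The explicit clamps at ±700 keep `math.exp` from raising an overflow error for x extremely close to 0 or 1, where the true value is already indistinguishable from 0 or 1 in floating point.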
Do you know about Exporational B-Splines? Those are the limit of rational B-Splines for infinite degree. Another fun variant is the Fabius function which can be defined through repeatedly integrating and rescaling an interval
Is e^(-1/x) the simplest option? What if we want the transition to be as "gentle" as possible in some sense? Is there some kind of natural definition for "gentleness" so that we could try to optimize it? Would e^(-1/x) be the "gentlest" option by some simple definition of "gentleness"?
I like the question. The answer you are looking for is the Sobolev norm, which is a notion of boundedness of the higher derivatives. For example, if we bound the first derivative, then the function will not have a steep slope anywhere, and if we bound the second derivative, the function will not have abrupt curvature; these are good measures of "gentleness". e^(-1/x) is by no means the optimal solution if you want such "gentle" functions, but you can actually tweak e^(-1/x) to get something much gentler. Pick one notion of gentleness and see if you can come up with something maximally gentle
1:25 To the leftmost of the hierarchy should be D^0 which means the existence of the 0th order derivative, or equivalently that the function itself exists at all.
If you want your smooth step function to have a bit lower slope, try -(√3/2)/x instead of -1/x for the exponent. Anything less and you get more inflection points.
This solution does assume that you have full descriptions for each of the two functions between x=a and x=b, and that in that (a,b) interval neither function is discontinuous. Is there a general solution for functions f and g which might be undefined / discontinuous in the interval of transition? Can we find a smooth transition function between [f(x)=1/x, x < -1] and [g(x)=sin(1/x), x > 1]?
@@EpsilonDeltaMain OK, that works fine. I still feel a bit deflated, as it seems to me like there might be some solution for generating an interpolating smooth function out of whole cloth. My intuition wants to somehow mash a power series together with your 1/(e^(1/x)).
@@turnpikelad If you come up with a clever solution that works, congrats! I couldn't seem to find one explicit formula myself in case f and g don't behave too nicely in the domain of transition
You are correct, this solution assumes that the functions f and g are defined in the interval (a,b) and are smooth everywhere. I don't know if I am correct but we can extend this solution to any pair of f and g which don't need to be smooth in the interval, provided they must be smooth at the endpoints x=a and x=b. Just find the two functions f1 and g1 which are the Taylor series of f around x=a and Taylor series of g around x=b respectively and replace f(x) and g(x) with f1(x) and g1(x) in the function h(x) (at 5:21).
@@EpsilonDeltaMain What if you take the Taylor series of each function at the boundary points and interpolate the two Taylor series with the step function? By definition all derivatives of the Taylor series match the derivatives of the function at a boundary point, so the result must be smooth as well.
Thank you, maybe I skipped too many steps there. The quotient rule has 2 cancelling parts, and taking the derivative of phi(1-x), we get a negative sign pulled out. www.wolframalpha.com/input?i=D%5Bf%5Bx%5D%2F%28f%5Bx%5D%2Bf%5B1-x%5D%29%2Cx%5D here is the result of the calculation
for 3D functions: h(x,y) = (1 - φ(C(x,y)))*f(x,y) + φ(C(x,y))*g(x,y), where C is the path of smoothness; for example, the one in the video can be connected circularly by defining C as the hypotenuse C(x,y) = sqrt(x^2 + y^2)
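A sketch of that radial idea in Python (`phi` here is one hypothetical implementation of the 1D step, rewritten to be stable near the endpoints; the radii `r0`, `r1` are illustrative parameters):

```python
import math

def phi(t):
    """Smooth 0 -> 1 step on [0, 1] (numerically stable rewrite)."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    u = 1.0 / t - 1.0 / (1.0 - t)
    return 0.0 if u > 700.0 else 1.0 / (1.0 + math.exp(u))

def radial_blend(f, g, x, y, r0=1.0, r1=2.0):
    """Use f inside radius r0, g outside radius r1, smooth transition between."""
    r = math.hypot(x, y)              # C(x, y) = sqrt(x^2 + y^2)
    t = phi((r - r0) / (r1 - r0))
    return (1.0 - t) * f(x, y) + t * g(x, y)
```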
The pacing in the latter half was way too fast. I had to rewind and pause a lot to follow. Like at 13:28 we finally have the actual function and it's on screen for like 2 seconds. No time to digest or take it in.
You also immediately went from having found the function to try to expand its domain and use it for something else. I think you should have paused and shown the function used in the context you were making it for (as you did) but then also shown some derivatives to give an indication of its smoothness and perhaps shown it working on other function pairs. Not just immediately go and do something else.
That's so interesting that Google showed me this. I was actually working on something like this for a personal CS project and was just hammering it out in Mathematica. I'll be curious to plug this in and see what it looks like. Thanks!
Great video! I was just wondering if there isn't supposed to be f prime instead of f at 8:08. If not, it would mean that C^(n+1) is always a subset of C^n, right?
Amazing Video! How to interpolate between two arbitrary points at the end was a bit rushed tho... Perhaps you could go a little more in depth in the next video?
This was very good (both an interesting topic, and well-presented), although a little fast IMO. More descriptive names (like I for "interpolater", or T/τ for "transitioner", rather than Φ, and G for "generator", K for "kernel", or S/σ/ς for "smooth", instead of ψ) would've helped to keep track of what was going on.
the names are fine; a symbol isn't any more descriptive just because the first letter matches. G could be goofball, not generator, so how do you even know G is descriptive? It isn't, so it is the same as using whatever
Is there a name for this kind of interpolation so that I can search more about?
I wish a name was given for the process but I do not know of one. In practice, matching boundaries up to a certain order of derivatives is used all the time in numerical analysis (e.g. Hermite polynomials), which is practical, but requiring infinite differentiability in a closed formula is something you may or may not see in a course on smooth manifolds/functional analysis, which are graduate-level subjects. That's why I thought it was appropriate to cover it, since it is not very accessible at an elementary level in this context.
But if I were to give the closest concept for this kind of "filling in the middle smoothly" process with a known name, I would say searching Smooth Urysohn's Lemma would get you most relevant results. My next video in the series is going to cover that topic
@@EpsilonDeltaMain I see! Thank you for your quick response. Another thing I would like to ask is: are there different solutions, that is, solutions other than taking e^(-1/x) ? If not, wouldn't it mean that the limit of Hermite polynomials when their degrees tend to infinity converges to the found solution? In the video, you said that using an infinite series would result in the Taylor series, but isn't this different from doing the previous limit? Because with limits you get what the polynomial approaches when it goes to infinity, not what it is at infinity.
@@pedrokrause7553 Very good questions. I like your questions and it adds so much value to some missing details in the video, so I pinned it if you don't mind
1. It doesn't have to be e^(-1/x), as long as the f in f(-1/x) decays asymptotically a lot faster than -1/x shoots off to infinity as x -> 0+. For example, f(x) = 5^(-x^2) will do as well [*this is a gaussian], and if you are clever enough you can find an infinite family of these kinds of functions, including ones that do not directly use the exponential function, such as erf(x) or 1/Γ(-x+2). But tail ends of functions like arctan(x) or 1/(x^10+1) will not decay fast enough to make the derivatives of all orders equal 0 at x=0 for f(-1/x).
Plus, we don't even have to use f(-1/x); we can use something like f(-1/x^2), as this example shows:
en.wikipedia.org/wiki/Flat_function
2. You are right, I assumed that if such Taylor series existed, it would fail to satisfy the left and the right simultaneously. e.g. if the left function was sin x, then taylor series uniquely defines the function extension to be the sin x.
3. But if we instead take a look at the limit of these Hermite polynomials, the series wouldn't converge. Just take a look at the first few Hermite polynomials for step interpolation:
en.wikipedia.org/wiki/Smoothstep
The coefficients blow up to tens of thousands fairly quickly, and the function is only bounded because the terms alternate and the pluses and minuses cancel each other. The limit of the polynomials would not exist, since it would be like ∞x-∞x^2+∞x^3-∞x^4... if you look at the closed formula for the coefficients of the Hermite polynomial of each order
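The coefficient blow-up can be checked directly from the closed form on the Smoothstep page (a sketch; `smoothstep_coeffs` is my name for it):

```python
from math import comb

def smoothstep_coeffs(N):
    """Coefficient of x^(N+1+n), n = 0..N, for the degree-(2N+1) polynomial
    smoothstep S_N: (-1)^n * C(N+n, n) * C(2N+1, N-n)."""
    return [(-1) ** n * comb(N + n, n) * comb(2 * N + 1, N - n)
            for n in range(N + 1)]

print(smoothstep_coeffs(1))  # the classic 3x^2 - 2x^3
print(smoothstep_coeffs(2))  # 6x^5 - 15x^4 + 10x^3, printed low degree first
print(max(abs(c) for c in smoothstep_coeffs(10)))  # grows huge with N
```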
Look up partitions of unity.
Maybe look up "bump function" or "mollifier". If I understand it correctly, this sort of interpolation uses such bump-functions to "crossfade" between the two target functions.
The subtle humor in this is incredible
The "Can we do any better?" with Lara Croft got me good
Good stuff.
Kind of gives insight as to why finding an "analytic" continuation (but for this video not really as we are only dealing with reals, but more generally) can be difficult. Why "infinitely differentiable" is such a constraining condition.
(Hopefully) Constructive criticism:
I found myself losing track of which "Greek letter function" was modeling what parts of our goal.
Like it would be helpful to have a line like "φ will be the continuous step function used to interpolate" or something when you defined the function.
Same for the ψ function too.
If you did already describe it, there was enough time between when you stated it and when you performed the proofs and derivations (8:30 ish) that it deserved a reminder at that point.
I agree, proper distinction goes a long way into making whatever you're saying more understandable
lost me there as well
Usually the phrase "analytic continuation" applies to complex functions, and the Taylor series is _the unique_ analytic continuation. This construction introduces nasty essential singularities at x=0 & x=1 (not that real valued functions care).
The smooth continuation is in fact not analytic at a and b. The function exp(-1/x) has all of its derivatives equal to 0 at x=0, so it would be equal to the null function in a neighborhood of 0 if it were analytic.
@@An-ht8so I mean, it doesn't need to be analytic, the fact that the transition is smooth already removes a lot of headaches when stitching functions together.
This video is just mind-blowing, never thought that there would even be a way to construct truly smooth interpolation. Also, your visuals and presentation are really great, loved it. Keep going!
Interestingly enough, this was taught to us at university when we were introduced to the Dirac delta function. It was part of some 3rd-year math for physics students. These C-infinity functions are required to properly define and prove theorems which involve the Dirac delta, and by extension Green's functions, which are "supercharged deltas", and therefore QED propagators, which are essentially Green's functions. This is why this was important for physics students - we actually need this to learn QED (which we learned afterwards).
I had to play the video twice to finally understand that the solution you expose is what is known as "crossfading" in audio engineering, and that most of the video is devoted to how to easily build a decent S-shape transition signal. I appreciate when ideas and intentions come first in plain language, and the maths come after; it's easier for me to follow. Anyway, I subscribed to your channel. Please, keep the pace slow and the music down! :D
Thank you so much! This has been in the back of my mind for a while now. Great explanation.
This is such quality content! I will be sure to send it to everyone I know who will be interested.
Woah. When I just saw this video in my feed I tried a few ideas on paper, and it's really cool to see what the actual solution is, and which ideas I had were in the right direction and which weren't.
How did you do it?
What was your idea?
Very beautiful technique. I also love that that function e^(-1/x) is bonkers in the complex plane so this argument totally breaks down on the complexes.
Well, if a complex function has a first derivative on an open set, then it has derivatives of all orders on that set, and is even analytic there. (It's possible to construct a real function which is infinitely differentiable on all of R and yet is nowhere analytic. Real analysis is so good at crushing reasonable expectations.)
@@tomkerruish2982 e^(-1/x) doesn't have a first derivative at 0. It looks flat in the real numbers, but move the slightest bit in the imaginary direction and it's totally chaotic.
@@LukePalmer First, I'll admit that I glanced at your comment and read it as "exp(-1/x²)", erroneously inserting the exponent. Second, however, like you I was highlighting the (to me) main difference between the real and complex derivative. exp(-1/x²) has real derivatives of all orders at x=0, but is so badly behaved for complex values that it has an essential singularity.
I certainly confess to the twin sins of reading too quickly and writing too tersely.
This is a very good video. I only ask that you look into stabilizing the volume of the voice over. I found that it was drifting up and down, occasionally to the point that I couldn't hear it over the music. You can probably do this with a single button press in your editing software.
Thank you for bringing this interesting math to the public eye. There's no way I would have seen something like this without you. I hope you keep making videos
turn it up and compress.
This is really surprising! When I first saw the title I figured smooth transitions would be impossible, but here we are lol
I recently used the same thing in one of my projects and I ended up using the cubic interpolation approach.
I might implement something similar to what was shown at the end.
Thanks for the knowledge.
This channel truly has a future. Signed.
I had heard about analytic continuation, but hadn’t thought about how a function could be smooth and not analytic before now. Thanks
My intuition would say it's possible with real functions but not complex functions, but I'm not certain of that. Differentiable just has different consequences for each.
@@xinpingdonohoe3978 that's true - complex differentiable (even just once) functions are *very* rigid. They're automatically infinitely complex differentiable (holomorphic) and all holomorphic functions are (complex) analytic
@@xinpingdonohoe3978 complex differentiable functions are locally analytic, so you are correct that you cannot have a complex smooth function that isn't analytic
So -- I tinker with math sometimes. And this might actually be exactly what I needed to take an idea to the next step. Great video!
Your use of color makes this much easier to follow. Subscribed.
I was working on a differentiable smoothstep function a short while back for a personal project. Compared, you definitely win for your much more rigorous approach (not to mention a full video! Excellently done!); I was just poking around with limits on different asymptotic forms as arguments for sigmoid functions. What gets me is that what I got in the end was still an equivalent form of yours!
On my end, the analysis came out to 1/2 * ( 1 - tanh( (2x - 1) / ( 2x * (x - 1) ) ) ), whose exponential form simplifies to your phi function in the video! That's so cool!
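That equivalence is easy to sanity-check numerically, writing φ as in the video, f(x)/(f(x)+f(1-x)) with f(t) = e^(-1/t):

```python
import math

def phi(x):
    """The video's interpolator on (0, 1)."""
    f = lambda t: math.exp(-1.0 / t)
    return f(x) / (f(x) + f(1.0 - x))

def tanh_form(x):
    """The tanh-based sigmoid from the comment above."""
    return 0.5 * (1.0 - math.tanh((2.0 * x - 1.0) / (2.0 * x * (x - 1.0))))

# The two agree to machine precision across the transition interval
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(phi(x) - tanh_form(x)) < 1e-12
```

The identity behind it: (1 - tanh u)/2 = 1/(1 + e^(2u)), and with u = (2x-1)/(2x(x-1)) we get 2u = 1/x - 1/(1-x), which is exactly the exponent appearing in φ.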
Thanks! And we need video about higher dimensions!
MAN, KEEP DOING WHAT UR DOING, YOU'LL GET A LOT OF SUBSCRIBERS IN NO TIME
I’ve actually always wondered about this. Thanks for sharing!
I did not know I needed this, but glad I found it. Long live smooth transitions.
So cool! I had a feeling it should be possible to do this conceptually by taking the limit as k -> infinity of the k-differentiable approximations, but it's great to see a general construction of the infinitely differentiable version. Great video!
My intuition says the equivalent problem in 2D is impossible in general, but I can't wait for the video on it!
Well, an infinite series of 2K terms would be disappointing, but as he showed it can be done in closed form.
7:56 “It is too big to fit in the margin” - Pierre de Fermat has entered the chat
I've never been more thankful for Python math modules that abstract this all into a function call I don't have to worry about lmao
What a wonderful video, one of the best in #SoME2! Keep on the good work!
Nice vid. I can't wait to see what videos you post next
Only 330 subscribers? This is a crime for such amazing content
This should have millions of views, this is incredibly useful in practice :D
Safe to say, I’m confused
k on fused functions
@@PTAlisPT this got me 😂😂😂
It’s easy. He gradually and smoothly transitioned from one function to another function where the two functions are both smooth and chosen to meet perfectly with the ends of the two given functions.
@@terjeoseberg990 ah, okay. Personally, I'm just confused on the whole "C^k" and "psi" and "phi" stuff. What is "C"? All that kinda stumped me on my first time watching.
@@sans1331 C^k is a set, it is the set of functions that are k times differentiable with all those k derivatives being continuous. You can also use the notation C^k(Omega) which means that its functions are k times differentiable and the k-th derivative is continuous over the domain Omega.
OMG that cliffhanger at the end 😭 what a great video, congrats!
you are gonna go big keep it up
great video, and thanks for putting link to proofs in the description
You just revived my love for calculus🥺✨✨.
Great motion of mathematical thoughts!
your feelings are irrational
Would you like some pi with that?
Awesome topic with great presentation. I would not have guessed the solution is this elegant. Great job on the video.
That's really cool! The exercise of proving e^(-1/x) is smooth at x=0 must've come up in like five different math classes I took and now finally I see how that might be useful.
The real kicker is that even though it's smooth at 0, it doesn't have a Taylor series expansion around 0.
@@HilbertXVI That's what Laurent series are for.
@@HilbertXVI It does have a Taylor expansion at 0 (every smooth function does) - the kicker is that its Taylor series doesn't converge to it at (in any neighborhood of) 0.
@@schweinmachtbree1013 Not a very useful "Taylor expansion" if it doesn't converge to the function
One small thought. The way I like to teach Taylor polynomials is by going "Okay, a tangent line is a good approximation but it doesn't approximate the derivative well, so what if we use a tangent line to approximate the derivative instead, and then take the integral?"
Assuming both functions are smooth too, wouldn't that also be a possibility? Take the derivatives, use a line to interpolate, and take the integral?
It should, as long as you choose the constant terms in the integration process so that your function and one (should not need to be both) of the bounding functions match, up to their k-th derivative.
This will just construct a truncated Taylor series which will have the same problems as the Hermite and Taylor approaches.
@@tracyh5751 OP's post was about teaching how Taylor polynomials work. So ending up with a Taylor polynomial shows that it's a valid perspective on it.
I 100% needed this, thank you so much
Godddamn, you talked to me right on the cusp of my knowledge. I saw that interpolation a whole 5 minutes before you revealed it, but you built the conceptual framework so well that it basically taught itself. You made the knowledge jump out of the words and equations. Incredible!
amazing content !
I think it would be interesting didactically if you did a small recap at the end, but please keep doing this amazing work!
Great video man keep it up
Just found this gem, brought back memories of an intro analysis class of a few years back. Thank you!
9:13 Psi needs to be *strictly* monotone increasing. Since otherwise the 0-function would satisfy your conditions, but phi could not be defined
monotonic increasing does mean that it's not the 0 function. The 0 function would be monotonic non-decreasing. So the video is correct.
@@SoumilSahu That depends on your specific definition I guess, if you use monotonic increasing for the usual definition of strictly monotonic increasing, then the 0-function is not monotonically increasing I suppose. Although monotonic non-decreasing is a weird term in my opinion.
@@SoumilSahu No the video isn't correct because it uses the condition f' ≥ 0 for a function f being 'monotone increasing', which is the condition for weak monotone increasingness; if strict monotone increasingness was meant then the condition f' > 0 would have been used (which would have ruled out the zero function)
Very good videos, please continue!
Awesome video! Thank you!
Amazing quality of mathematical argumentation, balancing rigor and pedagogy! I instantly subscribed
Thanks. That smoothly connected several topics for me. You seemed to be approaching the halting problem when you halted.
I have been studying mathematics for over four years now and I had never seen Faà di Bruno's formula. Today, I've suddenly stumbled across it twice for completely unrelated reasons 😂
Just goes to show that there's always more to learn in mathematics
This is exactly what I wanted to find months ago when I created a function adder, a function that can add the graphs of two different functions. Unfortunately it was undefined at the cutting point because of a division.
A little fast, and I had to take your word for a decent amount of it, but still very followable and intriguing, especially since I've thought about a very vaguely related thing before - how no 2 polynomials look the same over any interval, barring trivial exceptions like translations. I'm not even 100% sure it's true and I wouldn't have a clue how to prove it, but if it is true, it's fascinating to me that each polynomial shape is completely unique. This kinda links in to how it would be difficult to get 2 different functions to 'agree' with one another via a smooth transition function, though I admit it's a bit of a stretch.
Hmm I don’t think it would be possible to overlap two polynomials like that because every polynomial is analytic and has a unique Taylor series, which means that you can determine what it looks like over all the reals just by looking at all the derivatives at one point. It does feel crazy though, that with all the infinitely many polynomials, there aren’t two that line up for some interval.
Watch out, jumpscare at 0:13
lol, you summed up 2 years of Calculus in the first 50 seconds of the video
Neat. I'm not a math person, and I'm not good at math, but I love computer graphics and wish I could model certain processes, so I always end up having math questions out of my league.
This was one. And I wouldn't have known what to search.
Really nice illustration. When I learned the distribution theory, books usually just introduce the "test function" by showing f(x) = { exp(-1/x) (x>0); 0 (x≤0) }
My initial guess was to use interpolation between the two functions in the smoothing area with a shifted sine as the weight
Wow! Great video and great explanations! Thanks a lot!!
A simplification of the phi function is 1/(1+e^((1-2x)/(x(1-x)))). Just for those who don't want a ton of e^(-1/x) in the phi function
Nice. I guess, that will be useful for optimizing the code in a practical implementation because it reduces the number of calls to exp from 2 to 1.
I remember being very impressed when I found out about this
Great video. This topic reminds me of the notion of mollifier functions. It seems like you could use a mollifier function along with some arbitrary continuous interpolation to create a smooth interpolation, but I'm not actually sure if that's true.
This is indeed a “bump” or “mollifier” function. Not sure if the terms are equivalent
Nice! This was surprisingly interesting (:
i think i remember this exact interpolation function coming up in thermodynamics somewhere, as a statistical distribution of energy states or something of that sort
it's been a while and it was never well explained at the time, but I've always remembered it as "that function that could probably interpolate two other functions _really_ nicely"
it could have just been one of so many variants of the logistic function, though; it's been ten years
At 8:11 the first statement should have f' and not f. But still a great video with an interesting topic
Hi, very interesting video/concept, did you ever end up making the video for higher dimensions ? I work on differential geometry for quantum physics for my PhD and am looking for similar stuff !
3:26 you just blew my mind
wow! I remember thinking about this for ages when I was a student and I really thought no such method existed! should have thought of e^(-1/x) obviously...
That last part was important, it can be used to form a good number of questions even for basic calculus.
The interpolation function you used here would actually be very useful for using Bezier curves to construct smooth tracks, e.g. for rollercoasters. Cubic Beziers are very intuitive to work with, but the acceleration (2nd deriv of the track position) is discontinuous. If you modify the Bezier curve's interpolation function from stepped linear to this smoothstep, the acceleration will be continuous. You could use a gradient descent solver to minimize lateral G forces by manipulation of the curve control points!
There is actually a concept of geometric continuity (which is different from parametric continuity, which is obviously too severe) for 1st and 2nd derivatives of Bezier curves. You can google it. Quite interesting.
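If anyone wants to try the reparameterization idea from this thread, here is a rough Python sketch (function names are my own; one coordinate of a cubic Bezier for brevity):

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def phi(t):
    # the video's smooth step on [0, 1]
    return f(t) / (f(t) + f(1.0 - t))

def bezier3(p, t):
    # evaluate a cubic Bezier (one coordinate) with control points p[0..3]
    u = 1.0 - t
    return u**3 * p[0] + 3 * u**2 * t * p[1] + 3 * u * t**2 * p[2] + t**3 * p[3]

def smooth_bezier3(p, t):
    # reparameterize by phi: every derivative of phi vanishes at t = 0 and
    # t = 1, so consecutive segments join smoothly in the parameter t
    return bezier3(p, phi(t))
```

The curve's shape is unchanged; only the traversal speed changes, so the trade-off is that the parameter velocity drops to zero at each joint.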
what a fun way to look at partial-integration!
The tool I would immediately reach for is the error function. Define h(x)=(1+erf( (x-x0)/a ))/2 for some point x0 and width a, then your interpolated function is f(x)+(g(x)-f(x))*h(x).
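A small Python sketch of this erf blend (parameter names are illustrative). One caveat: erf is never exactly 0 or 1, so this matches f and g only asymptotically, unlike the video's step, which is exactly 0 and 1 outside the transition window:

```python
import math

def smooth_mix(f, g, x, x0=0.0, a=1.0):
    # erf-based weight: h runs from ~0 well left of x0 to ~1 well right of it,
    # over a transition width set by a
    h = 0.5 * (1.0 + math.erf((x - x0) / a))
    return f(x) + (g(x) - f(x)) * h
```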
Very cool 🔥💖
Very interesting video. I hope I won’t need any of this as I try to build my own animation script, but it’s good to know. My only issue with the video is that your music is a bit loud.
Very cool! I have been looking for such a transition function but because of the Taylor series issue I thought it might not exist.
By the way: is it possible to write this function in a numerically stable way?
I mean without the infinities which occur temporarily in intermediate results during the computation due to the 1/x parts near 0 and 1.
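One option (my own sketch, not from the video): evaluate the single-exponential form mentioned elsewhere in the thread and clamp the exponent, so no intermediate result overflows or divides by zero:

```python
import math

def phi_stable(x):
    # smooth step, written to avoid infinities in intermediate results
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    t = (1.0 - 2.0 * x) / (x * (1.0 - x))  # exponent of the single-exp form
    if t > 700.0:       # exp(t) would overflow a double; the result is ~0
        return 0.0
    if t < -700.0:      # exp(t) underflows to 0; the result is ~1
        return 1.0
    return 1.0 / (1.0 + math.exp(t))
```

The clamped branches are exact to double precision, since the true value there differs from 0 or 1 by far less than machine epsilon.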
Do you know about Exporational B-Splines?
Those are the limit of rational B-Splines for infinite degree.
Another fun variant is the Fabius function which can be defined through repeatedly integrating and rescaling an interval
Is e^(-1/x) the simplest option? What if we want the transition to be as "gentle" as possible in some sense? Is there some kind of natural definition for "gentleness" so that we could try to optimize it? Would e^(-1/x) be the "gentlest" option by some simple definition of "gentleness"?
I like the question, the answer to the question you are looking for is Sobolev norm, and it is a notion of boundedness of the higher derivatives. For example, if we bound the first derivative, then the function will not have steep slope anywhere and if we bound the second derivative, the function will not have abrupt curvature, and such are good measure of "gentleness". e^(-1/x) by no means is the optimal solution if you want such "gentle" functions, but you can actually tweak e^(-1/x) to get something much gentler. pick one notion of gentleness and see if you can come up with something maximally gentle
@@EpsilonDeltaMain Great, thanks. I'll check out the Sobolev norm.
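A rough numerical sketch of one such measure, the H^1 seminorm ∫(step')² on [0,1], estimated by central differences (the discretization choices are my own):

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def phi(x):
    # the video's smooth step
    return f(x) / (f(x) + f(1.0 - x))

def h1_seminorm_sq(step, n=20000):
    # estimate the integral of step'(x)^2 over [0, 1] by central differences
    h = 1.0 / n
    return sum(((step((i + 1) * h) - step((i - 1) * h)) / (2.0 * h)) ** 2 * h
               for i in range(1, n))

# by Cauchy-Schwarz, any function going from 0 to 1 on [0, 1] has this
# seminorm >= 1, with equality only for the linear ramp x -> x; so every
# smooth step pays some "gentleness" price over the plain ramp
```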
At 2:24/14:04, in the set of 4 simultaneous equations, the last equation is f'(-1) = c1 - 2c2 + 3c3 = -0.3678 (i.e. -e^(-1))
1:25 To the leftmost of the hierarchy should be D^0 which means the existence of the 0th order derivative, or equivalently that the function itself exists at all.
nice
If you want your smooth step function to have a bit lower slope, try -(√3/2)/x instead of -1/x for the exponent. Anything less and you get more inflection points.
My brain is smooth now
The Lara Croft pic while talking about jagged edges killed me
Great video! Thank you! But we are all waiting for higher dimensions. It has been upcoming for two years - the people are starting to riot! 🙂
This is used in motion control. I typically use 3rd and 5th order interpolation
This solution does assume that you have full descriptions for each of the two functions between x=a and x=b, and that in that (a,b) interval neither function is discontinuous. Is there a general solution for functions f and g which might be undefined / discontinuous in the interval of transition? Can we find a smooth transition function between [f(x)=1/x, x < -1] and [g(x)=sin(1/x), x > 1]?
We can patch it up with some smooth function in the middle that goes from -.9 to +.9, then interpolate 2 times, once at each boundary
@@EpsilonDeltaMain OK, that works fine. I still feel a bit deflated, as it seems to me like there might be some solution for generating an interpolating smooth function out of whole cloth. My intuition wants to somehow mash a power series together with your 1/(e^(1/x)).
@@turnpikelad If you come up with a clever solution that works, congrats! I couldn't seem to find one explicit formula myself in case f and g don't behave too nicely in the domain of transition
You are correct, this solution assumes that the functions f and g are defined in the interval (a,b) and are smooth everywhere.
I don't know if I am correct but we can extend this solution to any pair of f and g which don't need to be smooth in the interval, provided they must be smooth at the endpoints x=a and x=b. Just find the two functions f1 and g1 which are the Taylor series of f around x=a and Taylor series of g around x=b respectively and replace f(x) and g(x) with f1(x) and g1(x) in the function h(x) (at 5:21).
@@EpsilonDeltaMain What if you take the Taylor series of each function at the boundary points and interpolate the two Taylor series with the step function? By definition all derivatives of the Taylor series match the derivatives of the function at a boundary point, so the result must be smooth as well.
Great video. I think there is an error in the quotient rule part though. Would love to see more videos diving deeper into distribution theory.
Thank you
Maybe I skipped too many steps there; the quotient rule has 2 cancelling parts, and taking the derivative of phi(1-x) pulls out a negative sign
www.wolframalpha.com/input?i=D%5Bf%5Bx%5D%2F%28f%5Bx%5D%2Bf%5B1-x%5D%29%2Cx%5D
here is the result of the calculation
@@EpsilonDeltaMain ah I see it now, thanks.
@@EpsilonDeltaMain If you dont mind my asking, what editing programs do you use for the text flow and transitions?
@@9WEAVER9 most of it was done with Manim, python library made by 3blue1brown. Little bit of powerpoint here and there
for 3D functions:
h(x,y) = (1 - φ(C(x,y)))*f(x,y) + φ(C(x,y))*g(x,y)
where C is the path of smoothness, for example, the one in the video can be connected circularly by defining C as the hypotenuse
C(x,y) = sqrt(x^2 + y^2)
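A Python sketch of this radial blending (the radii r0, r1 and the rescaling of the radius into [0,1] are my own illustrative choices, not from the video):

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def phi(t):
    # the video's smooth step: 0 for t <= 0, 1 for t >= 1, smooth in between
    return f(t) / (f(t) + f(1.0 - t))

def radial_blend(F, G, x, y, r0=1.0, r1=2.0):
    # use F inside radius r0, G outside radius r1, and blend smoothly
    # in the annulus between them
    r = math.hypot(x, y)       # C(x, y) = sqrt(x^2 + y^2)
    t = phi((r - r0) / (r1 - r0))
    return (1.0 - t) * F(x, y) + t * G(x, y)
```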
Excellent video. One small critique: the volume of background music is too high, relative to narrator's voice.
Awesome video! Learned a lot :)
The pacing in the latter half was way too fast. I had to rewind and pause a lot to follow. Like at 13:28 we finally have the actual function and it's on screen for like 2 seconds. No time to digest or take it in.
You also immediately went from having found the function to try to expand its domain and use it for something else. I think you should have paused and shown the function used in the context you were making it for (as you did) but then also shown some derivatives to give an indication of its smoothness and perhaps shown it working on other function pairs. Not just immediately go and do something else.
It’s pretty easy to set up on Desmos if you want to play around with it. It looks pretty interesting
Great video! But maybe next time lower the volume of the music a little bit so we can hear you better
I think the theorem at 8:40 is wrong. f(x) = x^2 is 3 times differentiable while x^(-2) is infinitely differentiable, right?
x^2 is inf times differentiable. After 3 derivatives, all of its derivatives are 0, which is still a function.
@@calvindang7291 you are right yes 😅
That's so interesting Google showed this. I was actually working on something like this for a personal CS project and was just hammering it out in Mathematica. I'll be curious to plug this in and see what it looks like
Thanks!
Great video! I was just wondering if there isn't supposed to be f prime instead of f at 8:08. If not, it would mean that C^n+1 is always a subgroup of C^n, right?
GREAT !!! THANK YOU~~~
Use a compressor on your voiceover track, volume is all over the place. Otherwise great video!
Yea, they really do work magic for normalizing volume throughout the video!!! thanks, my next vid will definitely have it
I wonder when the next video is done
Amazing Video!
How to interpolate between two arbitrary points at the end was a bit rushed tho... Perhaps you could go a little more in depth in the next video?
I was like “wow this is really an analysis heavy question,” then I read the channel name.
This is a great video
this is highly useful for data compression!
This was very good (both an interesting topic, and well-presented), although a little fast IMO.
More descriptive names (like I for "interpolator", or T/τ for "transitioner", rather than Φ, and G for "generator", K for "kernel", or S/σ/ς for "smooth", instead of ψ) would've helped to keep track of what was going on.
the names are fine; the symbol isn't any more descriptive just because the first letter matches. G could be goofball, not generator, so how do you even know G is descriptive? It isn't, so it's the same as using whatever
This is like complaining that the wave equation isn't labelled with a "W"
Awesome
And then that cliffhanger...
You Sir, know what you're doing.