Hi,
"terribly sorry about that" : 1:11 , 2:10 , 5:31 , 7:10 , 7:58 , 8:12 , 9:57 , 10:10 , 10:20 ,
"ok, cool" : 1:24 , 10:40 .
Glad you're back bro
@@maths_505 Glad to be home, the railway company offered me the hotel at Bordeaux because of the delay.
With all due respect, do you do this with a program/AI
@@bilkishchowdhury8318 Not at all, just by hand, and for fun.
10:20 doesn't exist
The intro reminds me of old bprp, "Let's do some math for fun!"
That's exactly what I had in mind. I loved his math for fun videos.
@@maths_505 YFWI
Absolutely abysmal integral signs, the mark of a well-experienced mathematician
Everything about this comment including the username is legendary
absolutely fearmongering username
When you are so incredibly bored that you get the idea to put a derivative operation into the quadratic formula:
As far as I know, the first to consider stuff like that was Oliver Heaviside, more than 100 years ago.
balls
Only on this channel can a reply like this get likes + a heart
@@Nottherealbegula4 hell yeah 🔥
cool video, you don’t need to be terribly sorry for not writing prime though haha
greatest comments and replies coming from a math channel I've ever seen, earned a sub
The only problem with the second, and perhaps the first approach too, is determining whether the series “converges” in some sense of operators, i.e. that the infinite sum exists.
Well it depends on the specific functions and operators. If those are given, one can test for uniform convergence. For those that are given in the video it works fine. However, there might be some problems with less nice functions.
If I remember correctly, on the space of continuous functions with the supremum norm, there's a way to show that some definite integral with "x" as an upper bound is a contraction, and then you can show that the sum of infinite integrals is convergent; by the Banach fixed-point theorem you can prove this has a single solution, and build up the Taylor series for it using just 1 as the first function and then iterating the sequence. There's an MSE question with something like this: 4871850.
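A rough sketch of that iteration in Python with sympy (my own illustration, not from the MSE post; the "1 +" normalization, i.e. f(0) = 1, is an assumption I'm adding): iterate g ↦ 1 + 2∫₀ˣ g starting from the constant function 1, and the iterates are exactly the Taylor partial sums of e^(2x).

    import sympy as sp

    x = sp.symbols('x')

    # Picard iteration for f(x) = 1 + 2*Integral(f, 0..x); the "1 +" is an
    # illustrative normalization, and the unique fixed point of this
    # contraction (Banach fixed-point theorem) is exp(2x)
    f = sp.Integer(1)  # first function: the constant 1
    for _ in range(6):
        f = 1 + 2 * sp.integrate(f, (x, 0, x))

    print(sp.expand(f))                 # 1 + 2x + 2x^2 + ...: partial sum of exp(2x)
    print(sp.exp(2*x).series(x, 0, 7))  # compare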
I love how your differential operator D just becomes a triangle over time, thus a big delta, a Laplace operator, which technically is D^2
Hey man, it's been a while since I came on this channel and dropped a comment, but I just wanted to remind you that this channel is frankly my favorite youtube channel; I don't think there's a channel which has motivated me more on my math journey than this one; so keep up the insane maths!
I think we can also achieve this result using Laplace, it might generate a similar result.
I'm thinking about Laplace but I don't see how it would work
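For what it's worth, a formal sketch (my own, assuming every integral is based at 0, so that $\mathcal{L}\{\int_0^x f\} = F(s)/s$): summing the geometric series for $|s| > 1$,

$$F(s) = \sum_{n=1}^{\infty} \frac{F(s)}{s^n} = \frac{F(s)}{s-1} \;\Rightarrow\; F(s)\,\frac{s-2}{s-1} = 0 \;\Rightarrow\; F \equiv 0.$$

So with all integrals based at 0 the only solution is f = 0; the Ce^(2x) family only appears once you allow nonzero constants of integration, which $\int_0^x$ (and hence the transform) doesn't see.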
i immediately saw the recursion for df=2f, i’ve spent way too long with recursion problems
Can’t we differentiate both sides and arrive at f’=2f yielding f(x)= ke^(2x) ? I mean it is sort of like the first approach but without all of the extra stuff.
Yeah it's just about as quick as the 2nd approach
I also had the same idea, but I want to add something new. So:
with similar reasoning we can evaluate g(x)=[integral mess in terms of f]
g'=f+g
g'-g=f
now it's time for an integrating factor (IF):
r(x)=e^(-x)
so now g=e^x · (integral of f(x)e^(-x) dx)
I love formal operational methods. Heaviside lives! This somehow reminds me of some infinite cascade matrix problems, which have two solutions depending on boundary values at infinity. (But in this case the infinite set of equations is partially replaced by a 2nd-order equation, not a 1st-order one.)
Interesting
I approached it like this:
y = int y + int int y +....
differentiating both sides,
y' = y + int y + int int y +....
or y' - y = int y + int int y +...
subtracting the above equation with the original equation, we get y' - y = y
so y' = 2y, solving this simple differential equation we get
y = Cexp(2x) where c is some positive constant
but where did i go wrong?
It's correct
I used the linearity to figure out that f = int f + int (int f + int int f + ...)
f = 2 int f
f' = 2f
f = Cexp(2x)
Why does the constant C have to be positive? Isn't the range for the constant just real numbers?
@@RanEncounter That's because C here is actually e^c, where c is the actual constant of integration. You basically get something like |y| = exp(2x+c), so I guess from here you can see that the constant term here is e^c, which we can just denote as another constant C. Exponential functions are always positive for all inputs; e^c is always positive no matter what c is. Don't confuse C with c.
@@chinmay1958 But it doesn't have to be e^c. Try it out. C can be any real number and yet it is a valid solution to the problem. You made a logical error somewhere.
This almost looks like a Volterra series expansion. Those get really fun! Especially in the frequency domain.
Fascinating
@@maths_505 It is a really cool concept. It extends the idea of Taylor Series to include time delays. The nth order term in the series expansion, instead of being raised to the power of n as in c_n(x-x_o)**n, is instead the nth order convolution and c_n is a function that is an n-th order impulse response. It’s a beautiful idea that allows for nonlinear expansions of systems beyond what Taylor series can offer.
Here is the Wikipedia article about it that explains it much better than I could in a comment:
en.m.wikipedia.org/wiki/Volterra_series
I saw the thumbnail, then the title, and went "ha! that looks like fun"
I've been thinking of doing something like this for a while (doing a video of me exploring math) but I've been worried about doing it. This actually might make me try again.
My answer is: All f's fit. Here's how:
Solution 1: So, let's say we have some f. First, compute the right-hand side. We need to do this in a way that it converges everywhere near x=0. I think one way to do that is by assuming each integral to be 0 at x=0. Then, changing the innermost constant while retaining the value at x=0 for further integrals should add x raised to the corresponding power, with coefficients proportional to the change in the constant. By using the Taylor series expansion of the difference, you can make them equal.
Solution 2: First, integrate it in the way that the point at x=0 is -f(0)-f'(0), where f' is the first derivative of f. Then, integrate the result in the way that the point at x=0 is f'(0) - f''(0), where f'' is the second derivative. Then, integrate the new result in the way that the point at x=0 is f''(0) - f'''(0). Continue this ad infinitum, with the first iteration having been the only exception (where the lower-degree derivative is negative in the formula for the value at 0). When you add up all the results, it'll give f.
The answer is kind of immediate: for e^(2x), the first integral is e^(2x)/2, then e^(2x)/4, and the coefficients all add up to 1.
My first instinct was just Ce^x but I was too lazy to actually sit and solve.
thank you. pats you on the back. you are doing great thank you for teaching me about math
Mathematicians don't mean the same thing by "fun" that normal humans do!
Definitely, I need ~pArental~ _Mathematician_ advisory!
The Abel Prize Equation!!!!!!
My approach was totally unrigorous and "it was presented to me in my dream" type of shi, but it went something like this:
The solution has to be in a form of Ae^(kx), where A and k are constants (that's the dream part). N-th integral of this function is equal to (1/k^N) * f. We get:
f = (1/k + 1/k^2 + 1/k^3 + ... )*f
So we are looking for constant "k" such that infinite sum of 1/k^n converges to 1. We get k=2, so f=Ae^(2x).
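Writing out that last step (my notation): for $|k| > 1$ the geometric series gives

$$\sum_{n=1}^{\infty} \frac{1}{k^n} = \frac{1/k}{1 - 1/k} = \frac{1}{k-1},$$

and setting $\frac{1}{k-1} = 1$ forces $k = 2$, hence $f = Ae^{2x}$.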
Let's integrate the right- and left-hand sides of the original equation (S is the integration sign) :)
S f = SS f + SSS f + ... (1)
The right-hand side of (1) is simply
f - Sf (2)
Indeed, the RHS of (1) IS f minus Sf, according to the original equation. Then the whole thing reduces to
f = 2S f
f = C exp(2x)
integrate and add integral(f) to both sides to get:
2 integral(f) = integral(f) + integral(integral(f)) + ... = f
much simpler, this results in f = e^(2x)
and indeed: e^2x = e^(2x) * sum (1/2 + 1/4 +...) = sum ( integral(e^(2x)) + integral(integral(e^(2x))) +....)
The thumbnail immediately reminded me of the operator notation for light transport, introduced in Eric Veach's thesis. He uses the geometric series to derive an approximation for the equation $L = L_e + T L$ where $TL = \int f(x, w_o, w_i)\, L(x'(x, w_i), -w_i)\, dw_i$.
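For context, the same geometric-series move in that setting (standard rendering theory, stated from memory): from $L = L_e + TL$,

$$L = (I - T)^{-1} L_e = \sum_{n=0}^{\infty} T^n L_e,$$

which converges because a physically valid scene reflects less energy than it receives, so $\lVert T \rVert < 1$; the $T^n L_e$ term is the light that has bounced exactly $n$ times.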
technically 1 is a solution to the first thing because 1 = 0e^2x + 1, and the integral of f is the same thing as the integral of f plus any constant, for example 1. It's also valid for any function represented as c0+c1x¹+c2x²+..., though that may not converge everywhere.
Also technically 0x is a solution
I found as a solution f(x)=Ae^(2x)+B, with A and B as constants of integration. Here's my reasoning:
Let's call J the sum of all the integrals from the double one onwards; the equation becomes:
f(x)=Sf(x)+J. I also choose to make the constants of integration explicit NOW; summed up, they give a single constant:
f(x)=Sf(x)+J+C.
Now let's take the original equation (WITH THE INTEGRATION CONSTANT MADE EXPLICIT):
f(x)=Sf(x)+SSf(x)+SSSf(x)+SSSSf(x)+...+C
Now if we differentiate this equation we get:
f'(x)=f(x)+Sf(x)+SSf(x)+SSSf(x)+... Now there is no constant of integration; it was eliminated by the differentiation.
We can rewrite this equation as:
f'(x)=f(x)+Sf(x)+J --> f(x)=f'(x)-Sf(x)-J
Now we have:
f(x)=Sf(x)+J+C
f(x)=f'(x)-Sf(x)-J
If we sum these two equations we get:
2f(x)=f'(x)+C
which can be easily solved with separation of variables, giving as a result:
f(x)=Ae^(2x)+B
I hope that this was useful!
maths teachers would absolutely flip out if they saw this... "no it's repeated integration! you can't just plug it into a geometric series formula!"
Functional analysis profs would smile
It seems that, when you plug the second order partial solutions back into the original equation, the fact that the D = -1 term diverges has to imply that C2 = 0.
Essentially, the original equation being an 'infinite series' creates an implied Constraint on the solution that demands the result converge, which, if we run the numbers, is likely to imply that every partial solution other than the [C1]e^(x/2) will have its 'arbitrary constant' constrained to 0 in order to remove the divergent solutions.
If we run the numbers for a third degree equation, we get:
f - f' - f'' - f''' = f'''
(2D^3 + D^2 + D - 1)f = 0
Characteristic Equation factors into (2D - 1)(D^2 + D + 1);
the second factor has roots D = -1/2 +- i*sq(3)/2, which generate a partial solution: (a.cos(x.sq(3)/2) + b.sin(x.sq(3)/2)).e^(-x/2)
For compactness, we write y == e^(-x/2); k == sq(3)/2 (note y' = (-1/2)y; k^2 = 3/4)
f = (a.cos(kx) + b.sin(kx)).y
f' = (-1/2)f + k(-a.sin(kx) + b.cos(kx)).y
f'' = (-1/2)f - k(-a.sin(kx) + b.cos(kx)).y
f''' = f
We reach a cyclic pattern, as expected. As such, the series on the right does not converge (the sequence of partial sums cycles between three separate functions) unless both a = 0 and b = 0, in which case the series of partial sums is trivially 0.
From this we can conjecture that for any integer n > 2, the rearrangement of the equation to: f - f' - f'' - ... - f(n) = f(n) will yield a 1/2 characteristic root producing the familiar f = Ce^(x/2) solution, and some combination of divergent partial solutions that must be constrained by the requirements of Convergence so that their 'arbitrary' constants are set to 0.
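A quick sanity check of that conjecture (my own): the rearranged equation is f = f' + ... + f^(n-1) + 2f^(n), so the ansatz e^(kx) gives the characteristic condition

$$k + k^2 + \cdots + k^{n-1} + 2k^n = 1,$$

and $k = 1/2$ satisfies it for every n, since $\sum_{j=1}^{n-1} 2^{-j} + 2\cdot 2^{-n} = \left(1 - 2^{-(n-1)}\right) + 2^{-(n-1)} = 1$. So the Ce^(x/2) branch survives at every order, as conjectured.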
Does this logic apply to perturbation theory?
seems like all other solutions are exp(k x) with k being all n-th roots of 1, excluding 1 itself
Just replace 1-1+1-1 by the geometric formula 1/[1-(-1)] =1/2 and it should work
My first thought, before even watching the video, was that the trivial solution is f(x) = 0. I know this works for the derivative version, but I have a tiny doubt about the integral version since Int{0 dx} = C. Are we allowed to just pick arbitrary constants such that they cancel out?
The idea of differentiating the function is indeed the most natural and the first to spring to mind. However, there is a subtle point that gets a bit neglected: if you pay close attention, when you integrate f(x) one time you get a new function plus a constant, which can be overlooked if we are taking it easy. Going on for more integrations, though, we get more than a function with a constant on the side: we get a whole polynomial on top of the repeatedly integrated f(x), unless we take the constants from each integration to be 0. I hope that you as a reader understand what I am trying to highlight.
Hmm that is really true, but I really don't know how one would approach this
The polynomial you're talking about will be of the form (c1+c2+...)e^(2x) so you have a constant times e^(2x)
I am afraid that the way you are summing up the constants is incorrect, since the constants are themselves integrated more than once; so if we happen to come across a nonzero constant for at least one of them, we will have to account for a monomial, and that's if we are taking it easy.........
@@tenebrae711 Yeah, I can't get to the answer either, but I think we'd need more info on the function, so that our solution is bound by a few criteria.
@@achrafsaadali7459 I've accounted for that. Give it a try in writing, it should work out.
This can effectively be generalized to an eigenvalue problem for an operator exponential of some sorts, right?
Does that first integral trick with the geometric series work because in the vector space of real integrable functions, the determinant of the integral operator is less than 1?
9:05 my first thought was to integrate both sides then get 2f'=f+f^(n+1)'
Let int(f) denote the integral of f(x)dx.
We have f(x) = int(f) + int(int(f)) + .....
---> f'(x) = f(x) + int(f) + int(int(f)) + ... = 2f(x) ---> ∫ f'(x)/f(x) dx = ∫ 2 dx ---> f(x) = Ce^2x
Thank you.
Couldn't you use the partial sum of a geometric series to obtain a general solution for finite N and play around with those, if you want deeper insights into the behavior of the finite-N solutions
The problem with the geometric series approach is that first you have to prove that the int operator has norm less than 1 to come to the conclusion in the video...
Tru
What do you use to write for your videos? Is this an iPad?
(Haven't fully watched yet)
The obvious lazy answer is f=0, true as int(0d[anything])=0 and no matter how many integrations you add, int(f)=f
Yo, can you make a video where you prove from the basics what the gamma function is and its uses? I've been struggling to wrap my head around it
I'll make a post on Instagram soon
Admin used it a lot, just check the old videos on this channel.
Aren't you supposed to add a constant when taking an integral? So int(f) = whatever + c, int(int(f)) = whatever + d·x + e, and so on. So in the end, when doing the integrals, you can add any (maybe infinite) polynomial to the result. As any function can be expressed as a (maybe infinite) polynomial, shouldn't every function be a solution to the equation?
So for example we can have f(x)=x. Now we can have int(f)=x²/2, and then we can have int(int(f))=x³/6+x (this is also a valid integral of the function). We continue with int(int(int(f)))=x^4/24-x²/2, the next integral would be x^5/120-x³/6 and so on. In the end when you sum them up everything cancels except the x.
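A quick sympy check of that telescoping (my own sketch; the terms below are repeated integrals of x with the integration constants chosen as in the comment above):

    import sympy as sp

    x = sp.symbols('x')

    # N-th repeated integrals of f(x) = x with constants picked so that
    # consecutive terms cancel: x^2/2, x^3/6 + x, and then
    # x^(N+1)/(N+1)! - x^(N-1)/(N-1)! for N >= 3
    terms = [x**2/2, x**3/6 + x]
    for N in range(3, 12):
        terms.append(x**(N+1)/sp.factorial(N+1) - x**(N-1)/sp.factorial(N-1))

    print(sp.expand(sum(terms)))  # x plus two leftover terms of degree 11 and 12

The leftover terms are the tail of the telescope and shrink away as more integrals are added, so the sum does converge to x.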
0x is a valid solution as well right?
I have a summation problem, hope you can solve it
It's a double summation of 1/(mn)^2, where m goes from 1 to infinity
and n goes from m to infinity. Let's do some math for fun
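Not the host, but since it's a fun one, here's a sketch using symmetry (hope I'm not spoiling it): summing over all pairs (m, n) gives $\zeta(2)^2$, the diagonal m = n gives $\zeta(4)$, and the region n ≥ m is (all + diagonal)/2, so

$$\sum_{m=1}^{\infty}\sum_{n=m}^{\infty}\frac{1}{(mn)^2} = \frac{\zeta(2)^2 + \zeta(4)}{2} = \frac{1}{2}\left(\frac{\pi^4}{36} + \frac{\pi^4}{90}\right) = \frac{7\pi^4}{360}.$$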
How I did it was by taking the derivative on both sides; we have
df = f + (...)
df = f + f (by the original definition)
And this clearly gives a differential equation with solution ke^2x. Can we apply a similar approach to the bonus question?
Yeah that works too
What about adding an arbitrary constant/polynomial of integration in each of the multiple integrals in the sum
Yeah that's correct, but I think for convergence purposes we need to ignore the constants and just call them zero. Working it out in my head, the polynomial (actually an infinite series) converges to e^x, or else the RHS doesn't converge.
I was thinking e^x and then got a little annoyed when I remembered we add all of them, and then it came to me: e^2x. You get the same function but its coefficients are 1/2+1/4+..., an infinite sum which tends to one; therefore e^2x is a solution
For the 1st method, the GP formula
only applies when |r| < 1
That's quite a lot to explain in a comment, so:
en.m.wikipedia.org/wiki/Neumann_series
Some searching on math stack exchange should help too.
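The one-line version of the idea (standard operator theory, not from the video): for an operator $V$ on a Banach space,

$$(I - V)^{-1} = \sum_{n=0}^{\infty} V^n \quad \text{whenever} \quad \sum_{n=0}^{\infty} \lVert V^n \rVert < \infty,$$

which is strictly weaker than requiring $\lVert V \rVert < 1$, and the integration operator satisfies the weaker condition.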
Ok, what I did was
f= ∫f+∫∫f+∫∫∫f+...
By properties of integrals this equals
∫(f+∫f+∫∫f+∫∫∫f+...)
but that's ∫(f+f), since the inner tail is f again
∫(f+f)=∫(2f)=2∫f(x)dx
Now
f(x)=2∫f(x)dx
Differentiate both sides
f'(x)=2f(x)
Let's say f(x)=y
y'-2y=0
It's a linear ODE;
if you solve it you get
y=ce^(2x)
I won't write out how I got the solution of the ODE, but if someone wants to see me write that in a YouTube comment, just ask
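A quick machine check of that last step (illustrative, using sympy):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # solve the linear ODE y' - 2y = 0
    print(sp.dsolve(sp.Eq(y(x).diff(x) - 2*y(x), 0), y(x)))  # y(x) = C1*exp(2*x)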
Hmmm... Why did I expect to see Laplace here?
Fun how fast 1=1/2+1/4+1/8... (and int(f)=0.5f) jumped out to me
How are you just ignoring the constants of integration? Like, let's say when integrating the function thrice, we'll get c×e^2x + d×x^2 + e×x.
The reason you're bringing it up is the reason they ignore the additive constants in integration. It doesn't work in those cases, unless the constants are 0. Try it yourself.
We assume a constant of integration such that f(0)=0. At least, that’s how I think of it. I might be missing something
how do you know you can use the geometric series formula when the common ratio is the integration operator? usually the requirement is that it has to be < 1 but that doesn’t really apply here
In normed spaces it's okay if the norm is < 1 (the same as for real or complex numbers). If we have a good notion of norm for operators on functions then it can work. First we fix a space of functions (for example, analytic functions, or smooth functions, or continuous functions, or Riemann-integrable functions), check if there are any good norms coming with it, then use an operator norm definition to find out what the norm of ∫ is, and if it's okay we can go!
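Concretely, for $Vf = \int_0^x f$ on $C[0,a]$ with the sup norm (a standard estimate, stated from memory): by induction,

$$\lvert (V^n f)(x) \rvert \le \frac{x^n}{n!}\,\lVert f \rVert_\infty, \quad\text{so}\quad \lVert V^n \rVert \le \frac{a^n}{n!},$$

hence $\sum_n V^n$ converges in operator norm even when $a > 1$, i.e. even though $\lVert V \rVert$ itself can exceed 1; V has spectral radius 0, so I - V is always invertible.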
This reminds me of formal variables, the things they use for generating functions, where convergence or divergence of the series is trivial when compared to what it produces, just with a functional operator.
Finally, I am back. Also, is functional analysis the same as operational calculus? Because before watching this video I thought of this as operational calculus.
Fun fact: before being used in modern physics, it was already used in electrical circuits in the olden days.
Operational calculus is a part of functional analysis
I must be misunderstanding your reasoning in the finite sum case. When you rewrite the partial sums of f_N with N being the order of the partial sum, when you factor out a differential operator the result is not D*f_N but rather D*f_{N-1}. So I'm unsure about the validity of the solution thereon, since it seems you are still trying to invoke the infinite sum in the case of the partial one. I'll try to show my logic more clearly in the morning.
Since when did you become a physicist.. if yk yk
Wait wait, factoring out the integral operator wasn't a meme or abuse of notation???
It is an extreme abuse of notation, but in this case it yields a correct answer, which is basically the criteria upon which some will allow abuse of notation.
@@ambrisabelle it's only a model, after all
@@omfgacceptmynameI don’t exactly know what you mean
get the pitchforks
That's pretty cool
12:01 so true so true.
Is it the right answer?
*******
f'=f+int(f)+int(int(f)...
=> f'=2f
e^(2x+c)'=2e^(2x+c)
=> f=e^(2x+c)+const.
*******
I think it's a good answer.
I think that's a great answer too❤
@@モハメドイブラヒム-k8f Thank you
Please make a discord server for the community
My guess before watching is e^(2x)
I believe exp(2x) works :)
Earliest I have ever been😅
A req for u kamal: can you pls make a video discussing some basic approach to the Feynman technique? Like how to set parameters based on the question and stuff
Mostly just experience
f(x)=0? doesn't that also work
@@ManuelManzur-Luengo when you integrate zero you get a constant of integration. So again....all depends on convergence.
I can’t tell why, but it looks like he’s doing all the illegal math things my calculus teacher told me not to do.
f'=f+int f +...=f+ f=2f
f=c* e^2t
There are techniques for finding sums of non-convergent series. Cesàro sums, Zeta-summation, and variations thereof.
If you were to allow such more general forms of convergence, is there still a problem there, or do you suddenly get more solutions somehow?
f(x)=0
F=0 works
why do you draw your integrals like that?
Just habit
I got this from rizzy on Instagram. Great page, you should definitely check it out.
you didn’t justify the existence of the infinite sum of integrals…
Papa flammy made a video on the first one didn't he?
@@GamerDS76 I've watched a lot of flammy and I don't think he's done anything along the lines of infinite-order differential/integral equations
The integral operator doesn't produce a single function like the derivative operator but a whole family of functions with all possible constants of integration.
By choosing your constants of integration appropriately, I believe you can add an arbitrary Maclaurin series to the right-hand side of the integral equation, which should give you a huge family of possible solutions.
Yes indeed. But there could be problems with this.
Let's say we just guess the solution to be Cexp(2x) and try it in the equation. Plugging it in means successive integrations of the function and of the constants of integration produced. That gives the Maclaurin series of exp(x) times the sum of all constants c1+c2+..., which, even if we assume it converges to some c, still gives another term c*exp(x) to go with the C*exp(2x). One "fix" here is to just assume the other constants to be zero for convergence, i.e. getting rid of the divergent exp(x) term.
@@maths_505 All the integrals are independent, so you can add an arbitrary polynomial in x of degree N-1 to each of them (where N is the number of integral signs).
Without loss of generality, you can just add a single term a_N*x^(N-1) to the N-th integral and construct the power series with each integral contributing a single term. As long as the resulting series converges, I don't think there's a problem with that.
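Spelling that out in my own notation: if the N-fold integral carries an added term $a_N x^{N-1}$ (a legitimate choice of integration constants), the right-hand side picks up

$$g(x) = \sum_{N=1}^{\infty} a_N x^{N-1},$$

an essentially arbitrary power series wherever it converges, which is exactly the freedom being claimed here.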
not watching f=e^(2x) final answer
Who does this for fun?!
Maybe out of boredom
You can actually solve this problem with Pascal’s triangle/the choose function (nCr). That’s how I stumbled across it
why is your voice so different??
I have a slight cough and throat irritation so that could be a cause but I just watched the video and I didn't feel the difference.
Hi ❤
Hey bro
Please stop apologizing. Making mistakes while writing isn't distracting, but you apologizing repeatedly is.
Otherwise good video
f = 0 gg ez
f = ∫f + ∫∫f + ∫∫∫f + ...
f = ∫(f + ∫f + ∫∫f + ...)
f = ∫(f + f) (the tail ∫f + ∫∫f + ... equals f again, by the original equation)
f = 2 * ∫f
When you don't know what to comment so you comment about not knowing what to comment
Cool😊
The way he writes bugs the hell out of me. But ig being a real mathematician means being able to read and write unreadable "code"
You are the only person who thinks this
That's why I'm gay
AsymptoticSum[(2 E^-(t x)^2)/Sqrt[\[Pi]] x/z/.t->n/z,{n,1,z},z->Infinity]
I rtpedv this co.mebt wirh my nose