I think we should introduce a norm on our vector space (L^2, for example) to study the action of the operator, so that we can distinguish between eigenfunctions and almost-eigenfunctions.
Indeed, I think that the Sobolev space W^{1,2}[a,b] is what you are looking for, namely the space of functions in L^2[a,b] whose first (weak) derivative is also in L^2[a,b].
This way we know that the derivative maps these functions into L^2[a,b], which is separable; in particular, L^2[a,b] has a complete orthonormal Fourier system depending on the interval [a,b].
In this sense the differential operator T(f) = f' is diagonalizable, since the functions e^{ikx}, with k ranging over a suitable discrete set determined by [a,b], form a complete orthonormal system for the Hilbert space L^2[a,b].
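A minimal numerical sketch of this idea (assuming the interval [0, 2π) with periodic boundary conditions, so the relevant exponents k are integers): in discrete Fourier coordinates, the derivative really does act diagonally, as multiplication by ik.

```python
# Sketch: the derivative is "diagonal" in the discrete Fourier basis on [0, 2*pi).
import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)                      # a smooth periodic test function

f_hat = np.fft.fft(f)                  # coordinates of f in the Fourier basis
k = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies k
df_hat = 1j * k * f_hat                # apply the diagonal operator diag(i*k)
df = np.fft.ifft(df_hat).real          # back to function values

print(np.max(np.abs(df - 3 * np.cos(3 * x))))  # machine-precision match with f'(x)
```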
I think they are a basis. Like you said, imaginary eigenvalues lead to the Fourier transform, and more generally complex (or real) eigenvalues lead to the Laplace transform.
Thinking over it, there's a generalization you can make at the end about the eigenfunctions.
You end up with y'=λy, equivalently y' - λy = 0.
You can express this with the operator [D - λ](y)=0.
Applying the operator p times, i.e. (D − λ)^p (y) = 0, corresponds to a root λ of multiplicity p,
which generalizes the solutions to y(t) = C t^q e^{λt} with 0 ≤ q ≤ p − 1 (the generalized eigenfunctions).
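A quick symbolic sanity check of this claim, sketched with sympy: (D − λ)² annihilates t·e^{λt}, even though a single application of (D − λ) does not, so t·e^{λt} is a generalized eigenfunction but not an honest one.

```python
# Check that (D - lam)^2 kills t*exp(lam*t) while (D - lam) does not.
import sympy as sp

t, lam = sp.symbols('t lam')
y = t * sp.exp(lam * t)

once = sp.simplify(sp.diff(y, t) - lam * y)         # (D - lam) y = exp(lam*t), not 0
twice = sp.simplify(sp.diff(once, t) - lam * once)  # (D - lam)^2 y = 0

print(once, twice)  # exp(lam*t) 0
```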
I think to make sin(x) diagonalizable, you'd have to split it as sin(x) = e^{ix}/(2i) − e^{−ix}/(2i).
However, for sin(x)+cos(x) you could probably diagonalize it directly.
The integral you got at the end resembles the Laplace transform (assuming we stick to real numbers only). I haven't thought about this question much, but I think you'd have to work out how to invert the Laplace transform (or similar integrals); see the sketch after this comment. Sadly, I don't know much about them.
I really appreciate the question posed. Makes me want to revisit so much of what I learned!
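For anyone who wants to experiment with the Laplace-transform connection mentioned above: sympy can compute Laplace transforms and their inverses. A minimal sketch (the test function e^{2t} is an arbitrary choice):

```python
# Laplace transform and its inverse in sympy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = sp.laplace_transform(sp.exp(2 * t), t, s, noconds=True)
print(F)                                            # 1/(s - 2)

f = sp.inverse_laplace_transform(1 / (s - 2), s, t)
print(f)  # exp(2*t)*Heaviside(t); the Heaviside factor reflects the one-sided transform
```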
Btw, a student asked me about linear algebra problems today. Oh man, I should pick up my linear algebra again...
yes
Dude, just refer the student to my videos, haha
Dr Peyam
Hahaha I did. I told the student that you have a PhD in linear algebra, integrals, and derivatives hahaha
With a purely algebraic definition of "basis" (i.e. every vector is a finite linear combination of basis elements), the putative basis of exponentials is not one. If you allow infinite linear combinations, then you get into topology and the answer depends on which one you choose.
I tried to write a function as a real exponential series for a while, but the constants that I ended up with didn't go to zero over time, so I don't know if it's actually possible to make a convergent series with only real values for general functions.
You can’t, unfortunately
Unless with integrals maybe
It makes sense you cannot diagonalize the derivative matrix: if you could, then you could easily compute its matrix powers, and hence powers of the derivative operator i.e. repeated application of the derivative. But that would mean even more - since taking the powers of a diagonal matrix is simply obtained by powering the diagonal entries, you could use this to compute fractional derivatives, like the half-derivative or ith derivative. Yet the half-derivative of a polynomial is not a polynomial, and these matrices must map polynomials into polynomials, a contradiction.
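A small sketch of that obstruction, assuming the standard Riemann-Liouville power rule d^a/dx^a [x^k] = Γ(k+1)/Γ(k+1−a) · x^{k−a} (the helper function below is hypothetical, just for illustration):

```python
# Half-derivative of a monomial via the Riemann-Liouville power rule (assumes k > -1).
from math import gamma

def frac_deriv_monomial(k, a):
    """Return (coefficient, exponent) of d^a/dx^a applied to x^k."""
    return gamma(k + 1) / gamma(k + 1 - a), k - a

coef, exp = frac_deriv_monomial(1, 0.5)  # half-derivative of x
print(coef, exp)                         # ~1.1284, 0.5  i.e. 2*sqrt(x/pi)
# The exponent 0.5 is not an integer: the half-derivative of a polynomial
# leaves the space of polynomials, so no polynomial-to-polynomial matrix realizes it.
```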
Using an infinite-dimensional polynomial basis { 1, x, x^2, x^3, ... } works, of course.
T(v0, v1, v2, ...) = (v1, 2 v2, 3 v3, ...) = t (v0, v1, v2, ...)
v1 = t v0
v2 = (1/2) t v1 = (1/2) t^2 v0
v3 = (1/3) t v2 = (1/3!) t^3 v0
etc., so v_n = (t^n / n!) v0, which is exactly the Taylor coefficient sequence of v0 e^{tx}.
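A finite-dimensional sketch of this recursion (truncating the monomial basis at degree N−1, with N chosen arbitrarily): the truncated derivative matrix is nilpotent, so 0 is its only eigenvalue, yet the coefficient recursion above reproduces the Taylor coefficients of e^{tx} up to the truncation.

```python
# Derivative matrix on the truncated monomial basis {1, x, ..., x^(N-1)}.
import numpy as np
from math import factorial

N = 8
D = np.zeros((N, N))
for n in range(1, N):
    D[n - 1, n] = n                  # d/dx x^n = n x^(n-1)

print(np.allclose(np.linalg.matrix_power(D, N), 0))  # True: D^N = 0, D is nilpotent

t = 2.0
v = np.array([t ** n / factorial(n) for n in range(N)])  # v_n = t^n / n!
print(np.allclose((D @ v)[:-1], (t * v)[:-1]))  # True: D v = t v, up to truncation
```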
It really depends on the vector space you are talking about. I suppose there are vector spaces where it is possible to do this. Namely, since lambda is any real number, the vector space would be the space of all functions that can be uniquely written as f(x) = ∫_ℝ C(λ) e^{λx} dλ with some unique C(λ).
Dr p u r the coolest mathematician ever great work p
Keep in mind that the span of a set is the space of finite linear combinations over the set, even if the set is infinite.
You cannot get, for example, the function x by a FINITE linear combination of exponentials.
You can discretize and approximate, but you cannot truly achieve the function x.
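A small numerical illustration of "discretize and approximate" (the interval [0, 1], the grid, and the finite set of λ values are all arbitrary choices): a least-squares fit of f(x) = x by finitely many exponentials gets close, but never exact.

```python
# Least-squares fit of f(x) = x by a finite set of exponentials e^{lambda*x}.
import numpy as np

x = np.linspace(0, 1, 200)
lambdas = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # an arbitrary finite choice
A = np.exp(np.outer(x, lambdas))                  # columns are e^{lambda_i * x}

c, *_ = np.linalg.lstsq(A, x, rcond=None)
residual = np.max(np.abs(A @ c - x))
print(residual)  # small, but nonzero: x is only approximated, never achieved
```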
It is a different thing, but just redefine "span" to be the set of convergent linear combinations of the functions/vectors.
Whatup Curtis, it’s Noah! You watch Dr Peyam too?
If you restrict the domain not to all real numbers but to the positive semi-axis, you get the basis underlying the Laplace transform, as was said in a comment below.
Also, if you restrict the domain from both sides, you get an overdetermined function system. I am not sure whether a discrete basis can be extracted from it, or whether the closure of this function system in C^inf is possible. It feels like yes. 😅
If you use formal series you can get exp(mu*(t-a))/(1 + exp(lambda*(t-b)))^N. Then, using 1/cosh(l*(t-x))^N, one can get nicer kernels for Laplace-like transformations, but on the whole real line... Just first thoughts on this...
It is hard to mess with infinity.
Functional analysis is not easy. 😅
How old were you in this video?
31, haha, why?
Dr Peyam Hahahaha I just wonder if this is an old vid or not.
Doc, seriously, you gotta make a playlist for undergrad math students... Because honestly I don't understand this, and your way of explaining and teaching makes me wanna learn.
This is actually undergrad math, haha. Some of the earlier playlists cover more basic material, we are now in the core of the advanced playlists
Check this out for example: Linear Equations ruclips.net/p/PLJb1qAQIrmmD_u31hoZ1D335sSKMvVQ90
@@drpeyam damn...... Now I have to over think my choices again 😂😂😂😂 thanks for the hard work though
On the basis of e^{kx}: is e^{kx} diagonalizable? If f(x) = e^{kx}, then the derivative is k e^{kx}. It's perfectly invertible, and can be extended to trigonometric functions (by the complex definitions of sin & cos). I wonder, however, if the space of derivatives of all polynomials of degree k−1 divided by x^k is diagonalizable? The derivative of a/x + b/x^2 + ... never deletes information, and is therefore perfectly invertible.
13:58 As all polynomials can be expressed as a combination of the form sum(k e^{cx}) for some set of coefficients k and some set of exponents c, and since differentiation and matrices are linear, and since k e^{cx} is trivial to work with (having derivative kc e^{cx}), it should be possible.
It's not a basis. In order for a set to be a basis, every element must be written as a _finite_ linear combination of elements of the basis. So under this purely algebraic definition of (Hamel) basis, the set outlined is not a basis.
If you introduce a norm, or an inner product there are different definitions of basis that become available. But you need to provide a norm/inner product for that.
Let's say the base space is C_0(ℝ) ∩ L^2(ℝ). Integrating by parts, we have ⟨Df, g⟩ = ∫(Df)g dx = −∫f(Dg) dx = ⟨f, −Dg⟩; therefore, as an operator, D* = −D. Then DD* = D*D = −D^2, so D is normal, hence diagonalizable.
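A discrete sketch of this antisymmetry, assuming a central-difference derivative matrix with zero boundary values as a stand-in for compact support: the matrix satisfies D^T = −D, is normal, and has purely imaginary eigenvalues.

```python
# Central-difference derivative matrix with zero boundary conditions.
import numpy as np

N = 6
D = np.zeros((N, N))
for i in range(N - 1):
    D[i, i + 1] = 0.5    # grid spacing absorbed into the matrix
    D[i + 1, i] = -0.5

print(np.allclose(D.T, -D))            # True: the discrete analogue of D* = -D
print(np.allclose(D @ D.T, D.T @ D))   # True: D is normal
print(np.linalg.eigvals(D).round(3))   # purely imaginary eigenvalues
```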
Oooh, that’s really cool!!!
(I don't know much about Fourier series, but) maybe the exponential functions are a basis of periodic functions or something.
If we restrict to f: V → W, where V is P4 without constant term and W is P3, then we have three eigenvalues 1, 2, 3, and the corresponding vectors are (0,2,3), (−1,0,1), (−2,−1,0). Is it diagonalizable?
So yes and no, technically diagonalization is just for f: V to V, but you do have the right idea!
Oh no, the matrix is already diagonal lol.
I know I'm late to the party, but wouldn't the answer to the diagonalizability question for C^inf(ℝ) → C^inf(ℝ) be no? I remember from my real analysis class in undergrad, we studied a function defined like so: f(x) = 0 if x = 0, and else f(x) = e^{−1/x^2}. We proved that this function is in C^inf(ℝ), and every derivative of f at 0 is equal to 0 (basically, it was an example of an infinitely differentiable function that nonetheless isn't given by its Taylor series). If we assume that f is a sum of functions C_k e^{L_k x}, then that would mean that the sum of the C_k is 0, and so is the sum of the C_k L_k, the sum of the C_k L_k^2, the sum of the C_k L_k^3, etc. I didn't cook up a formal proof, but my spidey-senses are tingling, and it feels like there is a contradiction hiding somewhere here.
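A quick sympy check of the flat-function example above: the first few derivatives of e^{−1/x²} all tend to 0 at x = 0 (higher orders get slow to compute but follow the same pattern), so its Taylor series at 0 is identically zero.

```python
# Every derivative of exp(-1/x^2) vanishes at x = 0.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for n in range(4):
    print(n, sp.limit(sp.diff(f, x, n), x, 0))  # 0, 0, 0, 0
```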
Now I wonder if you can decide, by a similar argument, whether the antiderivative and/or the integral is diagonalizable?
These do not form a basis.
Proof: I claim sin is not in the span. If it were, we could write sin(x) = Σ c_t e^{tx}, where t ranges over ℝ and only finitely many c_t are nonzero.
Now differentiate both sides 4 times. We get sin(x) = Σ t^4 c_t e^{tx}. Since writing a vector in a basis gives unique coefficients, we must have t^4 c_t = c_t for all t in ℝ. The only way this can happen is if c_t = 0, t = 1, or t = −1.
So we could write sin(x) = a e^x + b e^{−x} for some real a, b. But this clearly cannot happen: the right-hand side is either identically zero or unbounded, while sin is bounded and not identically zero.
Fourier likes to disagree with that ;)
@@drpeyam Oh, is there something wrong with my proof?
If we assume the space of all infinitely differentiable functions to be separable, then we could consider a set of functions which is dense in that space and forms a basis for it; the exponential functions, including those with imaginary exponents, are dense in the space of all infinitely differentiable functions. But I'm not sure that this space is separable.
Not easy to understand, but interesting. With an infinite basis, is it possible..? Oh my god, I have a headache! Thanks!
I claim the answer is no here, because the rate of change of a sum of real exponentials cannot change sign away from zero... but I need to think more carefully to be sure
I've been spending all evening trying to answer that final question. I don't have an answer, but, allowing λ ∈ ℂ, it would suffice to prove that f(x) = x can be represented in terms of exponential functions.
On a related but not quite sufficient note: if f(λ) = d/dλ (c e^{λx}) = c x e^{λx}, then x = f(0)/c, which is cool because it's the derivative of the eigenvector with respect to the eigenvalue. This is equivalent to x = lim_{h→0} (e^{hx} − 1)/h (a quick numerical check follows below).
I don't think this satisfies the proof about the basis, but I also can't articulate why it might not. Maybe it does.
I did look a little into whether you could use Fourier transforms to define f(x) = x, but I don't think there's a useful route; you'd be trying to find an alternative way to express the Fourier transform of the derivative of the Dirac delta function.
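The promised numerical check of x = lim_{h→0} (e^{hx} − 1)/h (the evaluation point x = 1.7 is arbitrary):

```python
# (e^{hx} - 1)/h converges to x as h -> 0.
import numpy as np

x = 1.7
for h in [1e-1, 1e-3, 1e-5]:
    print(h, (np.exp(h * x) - 1) / h)   # 1.853..., 1.7014..., 1.70001...: approaches 1.7
```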
Hey man other than RUclips do you do research or something like that? Like what do you do...just curious
Check out my 100th video special!
I think there is a problem for functions defined on ℝ, but I have no proof. You have already proved that the basis could contain nothing other than exp(λx) for all λ. So, to prove that it is not a basis, we need to find a function that is not representable as f(x) = ∫C(λ)exp(λx)dλ. I would suggest looking at some parameterized family of functions, e.g. exp(−ax²) or 1/(ax² + 1), a ≥ 0, or maybe something else where the solution is trivial for a = 0 and the functions do not grow without bound at positive and negative infinity for a > 0. Then maybe try to differentiate with respect to the parameter a and arrive at some contradiction. I have not tried that. Still, the result, even if negative, might not be interesting, because we already have to talk about generalized functions (including the Dirac delta) when interpreting such integrals as superpositions of basis elements.
Edit: we definitely need to say precisely which functions our vector space consists of. For the vector space of "functions representable as f(x)=∫C(λ)exp(λx)dλ", the statement is trivial if we allow C(λ) to be a generalized function. But for all functions ℝ→ℝ, even the allusion to the Fourier transform does not make sense, because e.g. the Dirichlet function is not expressible as a Fourier integral.
Since A has a column of zeroes, det(A) = 0. Isn't this enough to say it isn't diagonalizable?
No, diagonalizable has nothing to do with invertible. The zero matrix is diagonalizable.
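A tiny sympy illustration of this point (the Jordan block [[1, 1], [0, 1]] is a standard example of an invertible but non-diagonalizable matrix):

```python
# Diagonalizability and invertibility are independent properties.
import sympy as sp

Z = sp.zeros(2, 2)
print(Z.is_diagonalizable(), Z.det())   # True, 0: diagonalizable but not invertible

J = sp.Matrix([[1, 1], [0, 1]])
print(J.is_diagonalizable(), J.det())   # False, 1: invertible but not diagonalizable
```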
I need help solving this equation: u_t = u_xx
How can I speak with you? I need your FB.
Check out my video on the heat equation