I'm loving the frequent videos! Such a happy surprise to open YouTube and see a new functional analysis video every day! :)
Thank you! I am working hard at the moment :)
@@brightsideofmaths Thank you for this! Can you please share your email address or inbox me at saadtahir96@gmail.com? I have some useful material that you may like, and ultimately also help me with this course too! :D
@@saadtahir96 ruclips.net/user/brightsideofmathsabout
I don’t understand why (lambda)*x(hat)-x is in the Kernel of l... 7:08
If you apply the map l, you get zero. This is the same calculation as the one done in the blue brackets above.
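For anyone stuck at the same spot, a one-line check, under the assumption (as used elsewhere in this thread) that \lambda = l(x)/l(\hat{x}) with l(\hat{x}) \neq 0:

```latex
l(\lambda \hat{x} - x)
= \lambda\, l(\hat{x}) - l(x)
= \frac{l(x)}{l(\hat{x})}\, l(\hat{x}) - l(x)
= 0 .
```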
Ahhh got it! And by the way, thanks for the videos. They are really amazing. I even already signed up to your website steadyhq.
@@dibeos Thank you very much :)
Congratulations man! This is an amazing intro to a topic that I like very much (although I am not a mathematician) but struggle to understand through my self-study. It really helps me a lot! Again congrats and keep up the good work :)
Thank you! I am glad that I can help :)
Not even joking, yesterday I was about to ask whether you would make a video on this topic. So this was a nice surprise :D
I know that this is a meme, but: I really enjoy the statements of basic functional analysis, because they resemble stuff from representation theory (a continuous function H -> F seems to be an analytic version of an exact functor from a 'nice' triangulated category T to k-Vect). For a certain triangulated category T one can show that K_0(T) is the power series ring k[[t]]. Altogether we obtain a map K_0(T) = k[[t]] -> k = K_0(k-Vect) (at least after tensoring with k). The statement is that every such functor which is continuous (i.e. exact and sends arbitrary coproducts to arbitrary coproducts) is representable (this is the Brown representability theorem).
Thank you, good sir. I'm writing my thesis and never took functional analysis so your videos help a lot
At 4:45, is it always true that by continuity the preimage of a closed set is closed? You said that continuity translates to closed sets via complements. I don't understand what you mean by complements: is there an extra criterion for it to translate to closed sets, or is it always true that if the map is continuous, the preimage of a closed set is closed? I'm simply asking this to know whether it's possible to have the preimage of a closed set be open, which wouldn't go against the definition of continuity we saw. Thank you!
Additionally, it seems that at 5:16, x_l can be defined by any x^ (x-hat) that satisfies the given properties. Is it true that only one x^ satisfies these properties, since x_l is unique?
By the abstract continuity we have: preimages of open sets are also open. Since taking preimages commutes with taking complements, this translates to: preimages of closed sets are also closed.
Please also note that a set can be closed and open at the same time.
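To spell out the complement step for readers of this thread (standard point-set topology, not specific to the video): for any map f: X -> Y and any B in Y, preimages commute with complements, f^{-1}(Y \ B) = X \ f^{-1}(B). Hence:

```latex
B \text{ closed}
\;\Rightarrow\; Y \setminus B \text{ open}
\;\Rightarrow\; f^{-1}(Y \setminus B) \text{ open (by continuity)}
\;\Rightarrow\; f^{-1}(B) = X \setminus f^{-1}(Y \setminus B) \text{ closed}.
```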
The proofs are very elegant, they really bring out the beauty and bright side of mathematics!
Glad you like them! And thanks for the support :)
Thank you! Your video really helped me understand better the material. I feel more confident for my final tomorrow
Nice! Good luck :)
Finally an answer to why we can just transpose vector space elements and they become OK as a functional. Thanks!
I'm unsure if I'm being dumb, but at 6:34, doesn't the complex conjugate come in when we multiply by the scalar in the second component? I might be confused, but kindly clarify.
In Wikipedia, and in Conway's functional analysis as well, I saw that it is the conjugate for the second component and normal for the first component. I checked out your previous video where you said linear in the second component, though.
Ok sorry, I rewatched that video and found at 6:18 that you clarified that you had chosen this definition. I also understood that ultimately it is there to ensure positivity, so it is our choice whether to have linearity in the first (or second) argument. Thanks. Stuff is much clearer now.
Great :)
Can you do a course on differential geometry? Starting elementary then continuing with manifolds. Maybe you can do something with Banach- and Hilbert-Manifolds. Would be nice! ^^
Wanted to give myself a quick refresher on the proof of the Riesz representation theorem, and this was extremely clear and helpful, just like I remembered it to be!
I hope you will get the chance to cover orthogonal projections at some point
I am learning a lot from your videos, man. Thank you for posting this content
I hope you still get enough sleep with all these high quality videos coming out in a short time ;)
Why does the closedness of ker(l) imply that ker(l)^ortho is nontrivial?
We also have the assumption that ker(l) is not the whole space. Hence closedness means that ker(l) is a proper subspace and itself a Hilbert space inside the Hilbert space X. Does this already help you?
@@brightsideofmaths hmmm now I'm wondering why the closedness is necessary. If we say ker(l) is a strict subset of X, let k be in ker(l) and let x be in X but not in ker(l); then <x, k> = 0 by definition, so x is in ker(l)^ortho. Since 0 is in ker(l) and we defined x to not be in ker(l), x is not 0, and ker(l)^ortho is nontrivial.
Where does the closedness of ker(l) come into play?
@@mathieumaticien To even be able to split up the whole space into a subspace and its orthogonal complement, you need to apply the Hilbert projection theorem (which is done implicitly in the video), and that theorem requires a closed subspace (just look at its proof). So it's really a condition imposed by that theorem if you want to be able to split the space up in the first place.
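For reference, the statement being invoked here is the Hilbert projection theorem (orthogonal decomposition), a standard result: for a *closed* subspace U of a Hilbert space H,

```latex
H = U \oplus U^{\perp},
\qquad\text{i.e. every } x \in H \text{ splits uniquely as } x = u + v
\text{ with } u \in U,\ v \in U^{\perp}.
```

Applied to U = ker(l) with ker(l) not equal to H, this is what forces ker(l)^ortho to contain a nonzero vector.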
Domain (pre-image) is dual to the co-domain (image) -- rank nullity theorem in linear algebra.
Isomorphism (sameness) is dual to homomorphism (similar or relative sameness) -- Group Theory.
In a finite-dimensional Euclidean space, we often represent linear functionals via row vectors, which map column vectors into the underlying field via a dot/inner product. I guess the Riesz Representation Theorem guarantees:
a) that this operation can be justified rigorously
b) that the analogue of this operation in infinite dimensional vector spaces also exists
I think that is a short rough summary one can always have in mind.
However, in infinite-dimensional spaces some technical details are involved as well: We need completeness for example and the dual space consists of *continuous* functionals.
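To make the finite-dimensional picture from this exchange concrete, here is a minimal numerical sketch (not from the video; the functional l and its coefficients are a made-up example) showing that in R^n the representing vector can simply be read off from the values of l on the standard basis:

```python
import numpy as np

# Hypothetical example functional on R^3: l(x) = 2*x1 - x2 + 0.5*x3.
def l(x):
    return 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]

# Read off the representing vector by applying l to the standard basis:
# (x_l)_i = l(e_i), since l(x) = sum_i x_i * l(e_i) = <x_l, x>.
x_l = np.array([l(e) for e in np.eye(3)])

x = np.array([1.0, 2.0, 3.0])
assert np.isclose(l(x), np.dot(x_l, x))  # l(x) agrees with <x_l, x>
print(x_l)  # [ 2.  -1.   0.5]
```

In other words, in finite dimensions the "row vector" of l and the Riesz representer are the same object; the hard part of the theorem is that this survives in infinite dimensions, with the completeness and continuity caveats above.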
Hi, thank you so much for your video! I am sorry if I throw too many questions at you on the same day... I am wondering, could you please share insights on why x_l must belong to the orthogonal complement of the kernel of l? I know the kernel is a subspace of a vector space, and I know the row space (or column space) is orthogonal to the null space. I can sort of follow every step up to where l(x) = <x_l, x>, but I don't get the insight behind choosing x_l from the orthogonal complement of the kernel. Also, it seems this special x_l is analogous to the singular vector in a finite-dimensional space... are these two concepts somehow connected?
Sorry, I didn't major in maths and have a very limited background in all sorts of maths subjects. I hope you don't find my question naive and lacking in basic understanding. I would be glad if you could point me in the right direction of study!
Don't worry at all. All questions are welcome here. Even naive ones can help other viewers here quite a lot.
The choice of x_l makes sense here because, in the inner product, all elements of ker(l) have to be sent to 0 as well. This is exactly what the inner product with an element of ker(l)^ortho does.
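A compact way to see this (a standard observation, not a quote from the video): if l is represented as l(x) = <x_l, x>, then

```latex
\langle x_l, k \rangle = l(k) = 0 \quad \text{for all } k \in \ker(l),
\qquad\text{hence } x_l \in \ker(l)^{\perp}.
```

So the representing vector has no choice but to lie in the orthogonal complement of the kernel.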
Excellent video
Thank you very much!
Wow this was a really cool topic, can't wait to see the applications.
Excellent video! Would you give a reference for the proof that the orthogonal complement of a closed set in a Hilbert space contains elements other than 0?
Thank you very much for your excellent videos!
The course I am taking also has a step that proves that the dimension of the orthogonal complement of ker(l) is 1; do you know why that is? Thanks
Why is the bound for the norm of l given by the norm of l applied to a unit vector of X?
3:50 has made me realise I didn't understand this. The professor in my functional analysis course did say that the theorem wouldn't work without X being a Hilbert space, but he didn't explicitly say why. Judging from your video, I also probably don't quite understand orthogonal projectors as well as I'd like to. I've tried looking into the book Functional Analysis by Peter Lax, but got even more confused. There it almost seems like you need a vector subspace (not just an arbitrary set) in order to even define an orthogonal complement. Besides this, it would seem that the classical relation from linear algebra, namely that X = Y directsum Y^ortho for any vector subspace Y, only holds true in a general Hilbert space if Y is closed?
This is something I really want to cover later :)
Thanks for the video. My doubt is: whenever you enter l(x^) into the inner product, you take the conjugate of l(x^). Why? We know the conjugate comes if we take it with the second term of the inner product. Please clear this up.
I defined the conjugate in the first term of the inner product.
@@brightsideofmaths Is it not against the inner product formula, since we know <ax, y> = a<x, y> and <x, by> = b*<x, y>?
@@anowarali668 What is not against it?
@@brightsideofmaths l(x^)
@@anowarali668 As I said: we defined the inner product with the property <x, by> = b<x, y>
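For anyone comparing sources, the two common conventions side by side (both are legitimate inner products; only the slot carrying the conjugate differs):

```latex
\text{linear in the first slot (e.g. Conway, Wikipedia):}\quad
\langle ax, y\rangle = a\,\langle x, y\rangle,\qquad
\langle x, by\rangle = \overline{b}\,\langle x, y\rangle \\
\text{linear in the second slot (used in this series):}\quad
\langle ax, y\rangle = \overline{a}\,\langle x, y\rangle,\qquad
\langle x, by\rangle = b\,\langle x, y\rangle
```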
No clue where you would put it, but it would be great if somehow Fréchet and Gateaux were discussed in this Functional Analysis series. Unless you think it should belong elsewhere? Thanks for the videos!
Great suggestion!
Thank You
With your constructed x\hat, the proof goes through like a knife through butter. But it raises a bigger question: how did you come up with the construction?
Thanks. We know the start and the goal. One just tries to fill in the gaps and finds x_l.
@@brightsideofmaths I'm trying to understand this without any construction (for these constructions were perhaps invented after the theorem was proven, and they may hinder deeper understanding).
The original inspiration may be the Euclidean space R^n. Consider a vector r in R^3, r = (x,0,0) + (0,y,0) + (0,0,z). When we study l(r), just take l((x,0,0)) for example: the linearity of l implies that l((x,0,0)) is just a multiple of x, therefore l(r) is just <x_l, r> for some fixed vector x_l. This also implies that dim(ker(l)) = n-1 for the space R^n.
Knowing that x_l exists, any vector is the sum of a part parallel to x_l and a part orthogonal to it. Then it's natural to propose the unified parallel component x\hat (meaning x_l is a multiple of x\hat), and then the parallel part is easily l(x)/l(x\hat) * x\hat = \lambda * x\hat.
The next big leap is 6:17, where <x_l, x> is miraculously put there. It's natural to approach from l(x) = <x_l, x>: comparing to the equation l(x) = \lambda * l(x\hat), and knowing that x_l is a multiple of x\hat, say x_l = a * x\hat, we finally get <a * x\hat, x> = \lambda * l(x\hat), and solve for a = l(x\hat), i.e. x_l = a * x\hat = l(x\hat) * x\hat. And this process can be generalized to Hilbert spaces.
Sorry for the messy writing, but the reasoning is completely natural without any prior construction, all from what we already have in the derivation. I prefer this derivation for it's more basic and learner-friendly.
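To keep a record of this argument in one place, here is a compact version in a *real* Hilbert space (restricting to the real case is an assumption made here so that no conjugates appear). Take x\hat in ker(l)^ortho with ||x\hat|| = 1 and l(x\hat) != 0; then for any x:

```latex
x = \underbrace{\lambda \hat{x}}_{\text{parallel part}}
  + \underbrace{(x - \lambda \hat{x})}_{\in\, \ker(l)},
\qquad \lambda := \frac{l(x)}{l(\hat{x})},
\\[4pt]
\langle \hat{x}, x \rangle = \lambda\,\|\hat{x}\|^2 = \lambda
\quad\Longrightarrow\quad
l(x) = \lambda\, l(\hat{x}) = \big\langle\, l(\hat{x})\,\hat{x},\ x \,\big\rangle,
\qquad\text{so } x_l = l(\hat{x})\,\hat{x}.
```

The kernel part drops out of the inner product precisely because x\hat was chosen in ker(l)^ortho.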
Do a series on topology and algebra too. If you have done one before, please share the link. I like how you present the ideas, and it gives the right intuition.
A beautiful and very well explained proof of this important theorem.
We are looking forward to you giving us a video series on topology.
Check out my manifold series :)
Thank you so much!
Thx
Great video!
Thanks for your super helpful videos! 😀 I have a quick question: how do we know that $x_l := l(\hat{x}) \hat{x}$ is still inside the set $X$? Since we have scaled $\hat{x}$ by $l(\hat{x})$, and the scaling might be large, could it be that $x_l$ now lies outside of the set $X$?
X is not just a set but a vector space. Hence you can never leave it just by scaling :)
@@brightsideofmaths oh got you! Thanks very much for your quick reply!😃
Thank you for this video :)
Thanks, good explanation. Can you give me a site to solve questions? Thx
Can you share your slides?
Oh sorry! I totally forgot. Now they are all in :) steadyhq.com/en/brightsideofmaths/posts/c6641292-1666-4a24-a4b9-cd9c4147d7d3
@@brightsideofmaths it is asking for member access... please share them on some open-source platform
@@RohanKumar-zn4qg PDFs are a perk for my Steady members.
want to cry, calculus 3 incoming hahahah
My proof is much simpler than this. Of course, it is wrong!
Why does the orthogonal complement being closed mean it has to contain something other than the zero vector? Closedness is something topological, so I am confused here. In finite dimensions the Gram-Schmidt process may help, but in general I am not sure.
The orthogonal complement is always a closed set. So maybe you can clarify your question?
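To expand on this (a standard argument, not from the video): for each fixed u, the map x -> <u, x> is continuous, and

```latex
U^{\perp}
= \{\, x \in X : \langle u, x \rangle = 0 \ \ \forall u \in U \,\}
= \bigcap_{u \in U} \{\, x \in X : \langle u, x \rangle = 0 \,\},
```

so U^ortho is an intersection of preimages of the closed set {0} under continuous maps, hence an intersection of closed sets, hence closed. Nontriviality is a separate question: it comes from the projection theorem discussed above.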