This has got to be the best video on tensor products on RUclips. I hope you make more videos on this stuff!
I've always said that things thought to be difficult are very often simply poorly explained. This video is a confirmation: it makes things considered difficult easy. Thank you.
Very well done! I really liked how concisely you covered all the details.
This is fire thank you. Perfectly rigorous with great examples.
Thank you for this video. Do you think you could make a video, or just quickly explain, how we can extend the tensor product of two vector spaces to the tensor product of two linear transformations? Also, how does this tensor product relate to the Kronecker product of two matrices?
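For the second question, here is a minimal numpy sketch (my own illustration, not from the video), assuming the usual identification of v⊗w with np.kron(v, w) in the standard bases: the tensor product of two linear maps A and B acts by (A⊗B)(v⊗w) = (Av)⊗(Bw), and its matrix is exactly the Kronecker product of the matrices of A and B.

```python
import numpy as np

# Assumed identification (standard bases): v ⊗ w  <->  np.kron(v, w),
# and the tensor product of the maps A, B  <->  np.kron(A, B).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))
v = rng.normal(size=2)
w = rng.normal(size=3)

lhs = np.kron(A, B) @ np.kron(v, w)   # (A ⊗ B)(v ⊗ w)
rhs = np.kron(A @ v, B @ w)           # (Av) ⊗ (Bw)
print(np.allclose(lhs, rhs))          # True
```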
Always as clear as crystal🎉🎉🎉
Outstanding video!
Great skills of explaining!
Thank you for this enlightening lecture
Excellent! Thanks a lot!
I thank you so much for this video, helped me a lot!
Fantastic explanation ❤️
Awesome video!!! Thank you so much!!!
4:40 Is v*w not a multiple of 2v*2w?
Remember that * is not denoting ordinary multiplication in this case. v*w just means "the basis vector associated with the pair (v,w)". Since each distinct pair has its own basis vector, v*w and 2v*2w are distinct basis vectors, hence not multiples of each other.
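If it helps, here is a tiny sketch (my own, hypothetical representation) of the free vector space on V×W: an element is a finite formal linear combination of pairs, stored as a dict mapping each pair to its coefficient. The pairs (v, w) and (2v, 2w) are different keys, hence different basis vectors, so no scalar multiple of one equals the other.

```python
# Minimal sketch (my own illustration) of the free vector space on V x W:
# elements are formal linear combinations {pair: coefficient}, and each
# distinct pair is its own basis vector.

def basis_vector(pair):
    """The formal basis vector associated with the pair (v, w)."""
    return {pair: 1.0}

def scale(c, x):
    """Scalar multiple of a formal linear combination."""
    return {key: c * coeff for key, coeff in x.items()}

v, w = (1, 0), (0, 1)                  # example vectors in R^2, written as tuples
vw = basis_vector((v, w))              # basis vector for the pair (v, w)
v2w2 = basis_vector(((2, 0), (0, 2)))  # basis vector for the pair (2v, 2w)

# (2v, 2w) is a different key, hence a different basis vector, so no
# multiple of v*w equals 2v*2w:
print(scale(2.0, vw) == v2w2)   # False
print(scale(4.0, vw) == v2w2)   # False
```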
amazing video.
Great video! ❤
Consider the vector space V = R×R and a bilinear map f from V to R. Fix the standard basis e_i, so that f is represented by the matrix A with entries a_ij = f(e_i, e_j). This way f(x, y) = ⟨x, Ay⟩, where ⟨·,·⟩ is the standard scalar product. So basically your correspondence takes A to f, and it is a bijection. The tensor product is then the space on which the linear map represented by A acts uniquely. This works for finite-dimensional vector spaces, or even for finitely generated R-modules over a commutative ring R. Do you agree?
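That correspondence can be checked numerically in coordinates; here is a small numpy sketch (my own illustration, assuming the standard bases): the bilinear map f(x, y) = ⟨x, Ay⟩ agrees with the linear functional on the tensor product that acts on x⊗y = kron(x, y) through the flattened matrix A.

```python
import numpy as np

# Sketch (my own illustration, standard bases assumed): the bilinear map
# f(x, y) = <x, Ay> corresponds to the linear functional on R^n ⊗ R^n
# that sends x ⊗ y = kron(x, y) to sum_ij A_ij x_i y_j.
rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))
x = rng.normal(size=n)
y = rng.normal(size=n)

f_xy = x @ A @ y                       # bilinear map on V x W
g_xy = A.reshape(-1) @ np.kron(x, y)   # induced linear functional on V ⊗ W
print(np.allclose(f_xy, g_xy))         # True
```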
But does the converse hold? That is, does every well-defined linear map on the tensor product come from a bilinear map on the Cartesian product?
Yes, the converse holds. In other words, for every linear map g on V⊗W there exists a unique bilinear map f on V×W such that g ∘ τ = f.
Uniqueness is immediate from the equation g ∘ τ = f, since that equation completely determines f as a function. We can write the map as
f(v,w) = g(τ(v,w)) = g(v⊗w).
Checking that this map is bilinear follows straightforwardly from the construction of the tensor product as a quotient space shown in this video and the fact that g is assumed to be linear.
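Here is a quick numerical sanity check of that bilinearity (my own sketch): take a linear functional g on V⊗W, represented by an arbitrary coefficient vector acting on kron(v, w) under the usual Kronecker identification, and verify that f(v, w) = g(v⊗w) is linear in each argument.

```python
import numpy as np

# Sketch (my own illustration): a linear functional g on V ⊗ W, represented
# by a coefficient vector c acting on kron(v, w); check that f = g ∘ τ is
# linear in the first argument (the second argument is symmetric).
rng = np.random.default_rng(2)
m, n = 2, 3
c = rng.normal(size=m * n)

def g(t):                  # linear map on the tensor product
    return c @ t

def f(v, w):               # f = g ∘ τ, i.e. f(v, w) = g(v ⊗ w)
    return g(np.kron(v, w))

v1, v2 = rng.normal(size=m), rng.normal(size=m)
w = rng.normal(size=n)
a = 1.7

print(np.isclose(f(v1 + v2, w), f(v1, w) + f(v2, w)))  # True
print(np.isclose(f(a * v1, w), a * f(v1, w)))          # True
```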
@MuPrimeMath I see. Thanks a lot. Keep up the good work!
But what about non-linear functions from the tensor product space? Naively it would seem that these are the vast majority of them, and they would be amenable to this construction. Don't they exist, or are they simply not interesting to study?
Hi sorry what text are you working off of?
This video wasn't working off of any particular text.
@MuPrimeMath Thanks for the excellent exposition. Which book(s) would you recommend that address this topic in a similar way?