Good content man! You make me all sentimental learning math :)
I hope you reach the millions of views you deserve. I have never seen explanations as detailed and clear as yours.
You should benefit from ads and donations as well; you deserve every dollar you get from this content.
When I found this channel a year ago, it felt like finding a gemstone, and I still think of it that way.
Unfortunately it is not within my possibilities to support you with money donations, but what I can say for sure is that, as someone who truly loves your content, watching one or more ads would only make me happy, knowing you are less restricted and more incentivised to explain math with your beautiful videos ❤
So I hope this comment is also a way to support you!
Every comment, every like, and every time you share one of our videos is already a very real contribution to keeping the channel alive. Thank you very much for all those contributions.
Oh, and it also feels nice to be called a gemstone from time to time 💎 😄
It is better to get interrupted by sponsorships with timestamps than to know that there is someone paying to get more premium content in a more useful way.
Thanks for your explanations, they are always very interesting and help me understand things I missed in school.
Glad you like our channel. Thanks!
Thanks for your work! Can't wait to see all this new content
Oooh. I'm literally trying to learn SVD / spectral decomposition at the moment! I've got a reasonable-ish grasp of eigenvalues/eigenvectors (although my teenage son has to help me with the factorisation! I only know LA in maths...), but I'd like to find the eigenvector between two vectors in an attention matrix for a neural net. My goal is, rather than just making more matrices to store context data as is done now, to shape latent space to make a valley between contexts.
Anyway, this channel seems spot on for me. Subbed and looking forward to it.
That sounds like a really cool application in AI. When you say "valley" and "latent space", does it mean you are optimizing some cost function in a fitness landscape? I'd love to learn more.
@@AllAnglesMath To be clear, these are just concepts I'm playing around with as I try things. As for the jargon: yes, "latent space" is just the geometric representation of the co-domain matrix defining the fitness landscape. So in 2D, say we have fitness as the popularity of a cupcake against its sugar content. There's probably a hump. With one more ingredient, say flour, you have a 3D landscape. Add thousands of ingredients and you have a hyperdimensional landscape full of hills and valleys.
With that in mind, let's imagine an LLM considering the term "convention", which is a vector within a latent space. The meaning of that term (the direction of the vector) will shift if the term "traditional" appears or if the term "star trek" appears.
The current solution for dealing with this is attention, which is complex and expensive, involving three additional matrices (queries, keys, and values). Well, I was thinking of transforming the matrix specifically to construct an eigenvector in the landscape between the two vectors. That's the "valley" I was referring to.
Like I say, it's just an idea I'm playing with.
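A minimal numpy sketch of that idea, with made-up toy vectors (the specific numbers and the symmetric way the matrix is built are assumptions for illustration, not the commenter's actual setup):

```python
import numpy as np

# Two hypothetical context vectors for "convention":
# one when "traditional" is nearby, one when "star trek" is nearby.
v_traditional = np.array([0.9, 0.1, 0.2])
v_star_trek   = np.array([0.2, 0.8, 0.3])

# Build a symmetric matrix from both contexts. Its eigenvectors give
# principal directions that blend the two meanings.
M = np.outer(v_traditional, v_traditional) + np.outer(v_star_trek, v_star_trek)

# eigh handles symmetric matrices and returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(M)

# The eigenvector with the largest eigenvalue points "between" the two
# context vectors: a candidate for the "valley" direction described above.
principal = eigenvectors[:, np.argmax(eigenvalues)]
print(principal)
```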
@@davidmurphy563 It sounds really promising. I mean, if you could rotate your space to that eigenbasis, you might discover that each of the dimensions has a unique semantic meaning. The next question would then be: how many dimensions are enough to capture all the information? That would teach us something about the number of "big concepts" that are present in human language. Very interesting.
@@AllAnglesMath You know, I didn't think of rotation. There's the classic vector addition: "king" - "man" + "woman" = "queen", so you would presume rotation would be meaningful.
Dimensionality is typically governed by tokenisation, and there are loads of different strategies for that. The norm these days is 2048 dimensions. Plus, anything over 3 can't be pictured, so I lump 4D and 555,555,555D into the same bucket. :))
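A quick numpy sketch of that "king" - "man" + "woman" arithmetic; the 4-dimensional embeddings here are made up purely for illustration:

```python
import numpy as np

# Hypothetical toy embeddings; real models use hundreds or thousands of dimensions.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "man":   np.array([0.5, 0.1, 0.1, 0.2]),
    "woman": np.array([0.5, 0.1, 0.9, 0.2]),
    "queen": np.array([0.9, 0.8, 0.9, 0.3]),
}

# The classic analogy: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the vocabulary word whose embedding is closest to the target vector.
best = max(emb, key=lambda word: cosine(emb[word], target))
print(best)  # -> queen
```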
0:36 is this about Eigenfaces?
It's about PCA (Principal Component Analysis). I don't know if that's related to eigenfaces.
@@AllAnglesMath "The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix."
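That quoted recipe maps directly onto a few lines of numpy. A minimal PCA sketch, with random data standing in for real face images:

```python
import numpy as np

# Stand-in data: 100 "face images", each flattened to a 64-pixel vector.
rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 64))

# Center the data and form the covariance matrix over the image space.
centered = faces - faces.mean(axis=0)
cov = np.cov(centered, rowvar=False)  # 64 x 64

# The eigenvectors of the covariance matrix are the principal components;
# computed from real face images, these are exactly the "eigenfaces".
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Keep the k components with the largest eigenvalues (eigh sorts ascending,
# so they sit in the last columns).
k = 10
eigenfaces = eigenvectors[:, -k:]
print(eigenfaces.shape)  # (64, 10)
```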
It would be awesome if you could make a video explaining how you make your videos and which tools you use.
We may make such a video some day, but for now the priority is a lot more algebra.
Bro is 6blue2brown
Cool! We will need a new logo, with 2 eyes 👁👁