Channel update: Representation theory, AI, and more

  • Published: 16 Jan 2025

Comments • 21

  • @Actual_Genius_Intellectual • 9 months ago • +4

    Good content man! You make me all sentimental learning math :)

  • @halim_Ha • 9 months ago • +8

    I hope you reach the millions you deserve. I have never seen explanations as detailed and clear as yours.
    You should benefit from ads and donations as well; you deserve every dollar you get from this content.

  • @fedebonons8453 • 9 months ago • +13

    When I found this channel a year ago, it felt like finding a gemstone, and I still think of it that way.
    Unfortunately it is not within my means to support you with donations, but what I can say for sure is that, as someone who truly loves your content, watching one or more ads would only make me happy, knowing you are less restricted and more incentivised to explain math with your beautiful videos ❤
    So I hope this comment is also a way to support you!

    • @AllAnglesMath • 9 months ago • +3

      Every comment, every like, and every time you share one of our videos, is already a very real contribution to keeping the channel alive. Thank you very much for all those contributions.
      Oh, and it also feels nice to be called a gemstone from time to time 💎 😄

  • @yashS4201 • 9 months ago • +12

    It is better to get interrupted by sponsorship segments with timestamps than to know there is someone paying to get more premium content in a more useful way.

  • @philippwettmann7649 • 9 months ago • +7

    Thanks for your explanations, they are always very interesting and help me understand things I missed in school.

    • @AllAnglesMath • 9 months ago • +1

      Glad you like our channel. Thanks!

  • @sigma914 • 9 months ago • +4

    Thanks for your work! Can't wait to see all this new content

  • @davidmurphy563 • 9 months ago • +3

    Oooh. I'm literally trying to learn SVD / spectral decomposition at the moment! I've got a reasonable-ish grasp of eigenvalues/vectors (although my teenage son has to help me with the factorisation! I only know LA in maths...) but I'd like to find the eigenvector between two vectors in an attention matrix for a neural net. My goal is, rather than just making more matrices to store context data as is done now, to shape latent space to make a valley between contexts.
    Anyway, this channel seems spot on for me. Subbed and looking forward to it.

    • @AllAnglesMath • 9 months ago • +1

      That sounds like a really cool application in AI. When you say "valley" and "latent space", does it mean you are optimizing some cost function in a fitness landscape? I'd love to learn more.

    • @davidmurphy563 • 9 months ago • +2

      @@AllAnglesMath To be clear, these are just concepts I'm playing around with as I try things. As for the jargon: yes, "latent space" is just the geometric representation of the co-domain matrix defining the fitness landscape. So in 2D, say we have fitness as the popularity of a cupcake against sugar content. There's probably a hump. With one more ingredient, flour, you have a 3D landscape. Add thousands of ingredients and you have a hyperdimensional landscape full of hills and valleys.
      With that in mind let's imagine an LLM considering the term "convention" which is a vector within a latent space. The meaning of that term (the direction of the vector) will shift if the term "traditional" appears or if the term "star trek" appears.
      The current solution for dealing with this is attention, which is complex and expensive, involving three additional matrices. Well, I was thinking of transforming the matrix specifically to construct an eigenvector in the landscape between the two vectors. That's the "valley" I was referring to.
      Like I say, just an idea I'm playing with.

    • @AllAnglesMath • 9 months ago • +1

      @@davidmurphy563 It sounds really promising. I mean, if you could rotate your space to that eigenbasis, you might discover that each of the dimensions has a unique semantic meaning. The next question would then be: how many dimensions are enough to capture all the information? That would teach us something about the number of "big concepts" that are present in human language. Very interesting.

    • @davidmurphy563 • 9 months ago • +3

      @@AllAnglesMath You know, I didn't think of rotation. There's the classic vector addition: "king" - "man" + "woman" = "queen", so you would presume rotation would be meaningful.
      Dimensionality is typically governed by tokenisation and there are loads of different strategies for that. The norm these days is 2048 dimensions. Plus, anything over 3 can't be pictured, so I lump 4D and 555,555,555D into the same bucket. :))
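
    The "three additional matrices" in the attention discussion above are the query, key, and value projections. Below is a minimal numpy sketch of standard scaled dot-product attention; the names (Wq, Wk, Wv) and the toy sizes are illustrative only, not taken from any particular model.

        import numpy as np

        def softmax(x, axis=-1):
            # Numerically stable softmax.
            x = x - x.max(axis=axis, keepdims=True)
            e = np.exp(x)
            return e / e.sum(axis=axis, keepdims=True)

        def attention(X, Wq, Wk, Wv):
            # X: (n_tokens, d) token embeddings. Wq, Wk, Wv are the three extra matrices:
            # they project every token into query, key, and value vectors.
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token-to-token similarity
            weights = softmax(scores, axis=-1)        # how strongly each token attends to the others
            return weights @ V                        # context-mixed representation of each token

        rng = np.random.default_rng(0)
        d = 8
        X = rng.normal(size=(4, d))                               # 4 tokens, 8-dim embeddings (toy data)
        Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
        print(attention(X, Wq, Wk, Wv).shape)                     # (4, 8)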
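
    The "king" - "man" + "woman" = "queen" arithmetic mentioned above can also be demonstrated directly. The four-dimensional vectors and their labels below are made up purely for illustration; real embeddings are learned and have hundreds or thousands of dimensions.

        import numpy as np

        # Hand-made toy "embeddings"; the dimensions loosely mean royalty, maleness, femaleness, commonness.
        vectors = {
            "king":  np.array([0.9, 0.8, 0.1, 0.1]),
            "queen": np.array([0.9, 0.1, 0.8, 0.1]),
            "man":   np.array([0.1, 0.9, 0.1, 0.8]),
            "woman": np.array([0.1, 0.1, 0.9, 0.8]),
        }

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        target = vectors["king"] - vectors["man"] + vectors["woman"]
        best = max(vectors, key=lambda w: cosine(vectors[w], target))
        print(best)   # queen (in practice the query words themselves are usually excluded from the search)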

  • @RAyLV17 • 9 months ago • +5

    0:36 is this about Eigenfaces?

    • @AllAnglesMath • 9 months ago • +2

      It's about PCA (Principal Component Analysis). I don't know if that's related to eigenfaces.

    • @AdrianBoyko • 3 months ago

      @@AllAnglesMath "The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix."
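
    The quoted passage above is essentially a description of PCA applied to images: the eigenfaces are the principal components of the pixel covariance matrix. A small numpy sketch of that connection, with random data standing in for a real face dataset:

        import numpy as np

        rng = np.random.default_rng(0)
        n_images, h, w = 100, 16, 16
        faces = rng.normal(size=(n_images, h * w))            # each row = one flattened grayscale image

        mean_face = faces.mean(axis=0)
        centered = faces - mean_face
        cov = centered.T @ centered / (n_images - 1)          # covariance matrix over pixel dimensions

        eigvals, eigvecs = np.linalg.eigh(cov)                # eigh: the covariance matrix is symmetric
        order = np.argsort(eigvals)[::-1]                     # sort by decreasing variance
        eigenfaces = eigvecs[:, order[:10]]                   # top 10 eigenfaces, one per column

        coeffs = centered @ eigenfaces                        # each face compressed to 10 coefficients
        reconstruction = mean_face + coeffs @ eigenfaces.T    # approximate faces rebuilt from those coefficients
        print(coeffs.shape, reconstruction.shape)             # (100, 10) (100, 256)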

  • @martinsanchez-hw4fi • 9 months ago • +3

    It would be awesome if you could make a video explaining how you make your videos and what tools you use.

    • @AllAnglesMath • 9 months ago • +1

      We may make such a video some day, but for now the priority is a lot more algebra.

  • @dereksmith9332 • 9 months ago • +4

    Bro is 6blue2brown

    • @AllAnglesMath • 9 months ago • +1

      Cool! We will need a new logo, with 2 eyes 👁👁