Training an AutoEncoder On Random Data

  • Published: 29 Sep 2024
  • In this video, I created a small autoencoder and tested whether I could train it using only randomly generated data.
    project files:
    github.com/mar...

Comments • 20

  • @Zach010ROBLOX
    @Zach010ROBLOX 9 months ago +4

    It's interesting how the B noise preserved overall shape and prominent horizontal and vertical edges better and had flatter colors, whereas the control had more variance in pixel color but preserved more minute details.

  • @Asterism_Desmos
    @Asterism_Desmos 10 months ago +7

    Something I’ve noticed is that a lot of recent comments are being made; I think your channel may be getting noticed!

    • @8AAFFF
      @8AAFFF 10 months ago +1

      yes the video with the website scanning thing blew up for some reason :D

  • @andueskitzoidneversolo2823
    @andueskitzoidneversolo2823 10 months ago

    There is an old art project called the Library of Babel. It is a theoretical library that contains every possible combination of words, forming every possible sentence, so that it eventually contains everything that could ever be written.
    There is also an old project called the Canvas of Babel. It is a canvas that contains every possible image.
    You can search both of these websites and try to learn anything you ever wanted to know. But no matter how long you search, you will never find anything but noise. Maybe. There is a very, very small chance you'll learn everything you wanted, but it's highly unlikely.
    Maybe you could use a library like that for training.
    It would be cool if AI or something did prove that the library does in fact contain all knowledge.

    • @8AAFFF
      @8AAFFF 10 months ago

      yes i know about both of them :) fascinating thought experiment
      btw there is also that picbreeder project with a similar concept, just a neural network with randomized weights generating all possible images

  • @vsevolod_pl
    @vsevolod_pl 6 months ago

    Those are very interesting results, but
    My brother in christ, you created THE LINEAR MODEL...
    Basically that's 2 matrix multiplications... Pls use activation functions...
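    The "2 matrix multiplications" point can be checked in a few lines of numpy. The shapes below (64-pixel input, 8-dim latent) are made-up stand-ins, not the video's actual dimensions; the sketch only shows that stacked linear layers collapse into a single linear map, while a nonlinearity between them breaks the collapse.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_enc = rng.normal(size=(8, 64))   # hypothetical encoder weights
    W_dec = rng.normal(size=(64, 8))   # hypothetical decoder weights
    x = rng.normal(size=64)            # one flattened "image"

    # Two stacked linear layers...
    two_layers = W_dec @ (W_enc @ x)
    # ...are exactly one linear layer with the product matrix.
    one_layer = (W_dec @ W_enc) @ x
    assert np.allclose(two_layers, one_layer)

    # A nonlinearity between the layers prevents this collapse:
    relu = lambda v: np.maximum(v, 0.0)
    nonlinear = W_dec @ relu(W_enc @ x)
    assert not np.allclose(nonlinear, one_layer)
    ```

    So without activations, however many layers the autoencoder has, it can only ever learn one rank-limited linear map.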

  • @meuhfoot
    @meuhfoot 1 year ago +3

    Very nice! What I think happens: the optimal autoencoder without activations performs a downsample and then an upsample. On completely random images (type A), where pixels are independent, minimizing the loss is impossible, so the network diverges and produces noise. On type B, which has patches of uniform color, the downsample-upsample process is nearly lossless, so the network converges to a uniform downsample-upsample and as a result the blur is uniform. Finally, on natural images the downsample-upsample depends on the texture, which allows the network to blur high-frequency areas a bit less. Great job and insights!

    • @8AAFFF
      @8AAFFF 1 year ago +1

      thanks, this also might be related to why the real data autoencoder has circular type imperfections on the output, and the type B one has lines
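    The downsample-upsample intuition above can be sanity-checked in a few lines of numpy. The 2x average pool and the 8x8 size are illustrative choices, not the video's actual setup: on an image built from 2x2-aligned uniform patches (like the type B noise) the round trip is exact, while on independent per-pixel noise (type A) it loses detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # 8x8 image made of 2x2 uniform patches, like the "type B" noise.
    img = np.repeat(np.repeat(rng.normal(size=(4, 4)), 2, axis=0), 2, axis=1)

    down = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))       # 2x average pool
    up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)  # nearest upsample
    assert np.allclose(up, img)  # lossless on patch-aligned images

    # On independent per-pixel noise ("type A") the round trip loses detail:
    noise = rng.normal(size=(8, 8))
    down_n = noise.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    up_n = np.repeat(np.repeat(down_n, 2, axis=0), 2, axis=1)
    assert not np.allclose(up_n, noise)
    ```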

  • @luciengrondin5802
    @luciengrondin5802 10 months ago +2

    Without an activation function, a NN is a multivariate polynomial.

    • @8AAFFF
      @8AAFFF 10 months ago

      yes pretty much, just with vectors
      thanks for pointing that out :)

    • @luciengrondin5802
      @luciengrondin5802 10 months ago

      @8AAFFF Above all this means you can probably optimize the training process or something. There is a paper somewhere about that, I think: arxiv 1806.06850.

  • @lionlight9514
    @lionlight9514 10 months ago

    These AI videos are very entertaining, I'd love to see more! I in particular have tried using an AutoEncoder in the past with very surprising results, and seeing someone else try it with different approaches is great!

  • @AllExistence
    @AllExistence 3 months ago

    Since you used axis aligned boxes, it never learned curved surfaces or even diagonals.

  • @theLollox1000
    @theLollox1000 10 months ago

    It would be interesting to compare this with basic singular value decomposition compression, with the number of singular values matching the latent dimension, since a NN without activation functions is just a linear transformation with some bias added.
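    That comparison is easy to sketch in numpy. The 32x32 image and k=4 below are made-up stand-ins for the video's actual resolution and latent size; by the Eckart-Young theorem, the rank-k SVD truncation is the best rank-k linear reconstruction in Frobenius norm, so it gives a floor on what a linear autoencoder with latent dimension k could achieve.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    img = rng.normal(size=(32, 32))  # stand-in for a grayscale image
    k = 4                            # analogue of the latent dimension

    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]  # best rank-k reconstruction

    # Truncation error vs. the (essentially zero) full-rank reconstruction error:
    err = np.linalg.norm(img - approx)
    full = np.linalg.norm(img - (U * s) @ Vt)
    assert err > full
    ```

    Plotting `err` against the autoencoder's reconstruction loss at the same latent size would show how close the trained network gets to this linear optimum.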

  • @nielskersic328
    @nielskersic328 10 months ago

    This channel is seriously underrated

  • @feathersm7966
    @feathersm7966 10 months ago

    Incredible video friend

  • @tiagotiagot
    @tiagotiagot 10 months ago

    How about training a network that takes an index, and also X and Y coordinates, and outputs RGB values, scored on how well the images generated with it work for training an autoencoder?
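    The generator half of that idea (an index-and-coordinate network producing RGB values, in the spirit of the picbreeder/CPPN discussion above) can be sketched with random weights. Everything here is hypothetical: the layer sizes, tanh activations, and 32x32 output are arbitrary, and the scoring loop (training an autoencoder on these images and measuring how well it transfers) is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    W1 = rng.normal(size=(16, 3))  # (index, x, y) -> 16 hidden units
    W2 = rng.normal(size=(3, 16))  # hidden -> RGB

    def pixel(index, x, y):
        """Map an image index and normalized coordinates to an RGB triple."""
        h = np.tanh(W1 @ np.array([index, x, y]))
        return 0.5 * (np.tanh(W2 @ h) + 1.0)  # squash RGB into [0, 1]

    # Render one 32x32 image for index 0; changing the index changes the image.
    img = np.array([[pixel(0, x / 31, y / 31) for x in range(32)]
                    for y in range(32)])
    ```

    A fitness score would then come from training a fresh autoencoder on a batch of such images and evaluating it on real data.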

  • @musaplusplus
    @musaplusplus 1 year ago

    Very impressive, I never thought of this. I'm going to use more abstract shapes to see if I can get a better result.

  • @sysfab
    @sysfab 10 months ago

    Wow! Cool stuff, i just found you on yt main page! (can you share your discord? i wanna ask some questions)

    • @8AAFFF
      @8AAFFF 10 months ago

      oh i dont have a discord yet :) might create one in the future