What is an Autoencoder? | Two Minute Papers #86

  • Published: 9 Sep 2024
  • Autoencoders are neural networks that can create sparse representations of the input data and can therefore be used for image compression. Denoising autoencoders, after learning these sparse representations, can be presented with noisy images and reconstruct clean versions of them. Even better is a variant called the variational autoencoder, which not only learns these sparse representations but can also generate entirely new images. We can, for instance, ask it to create new handwritten digits and actually expect the results to make sense!
    _____________________________
    The paper "Auto-Encoding Variational Bayes" is available here:
    arxiv.org/pdf/1...
    Recommended for you:
    Recurrent Neural Network Writes Sentences About Images - • Recurrent Neural Netwo...
    Andrej Karpathy's convolutional neural network that you can train in your browser:
    cs.stanford.edu...
    Sentdex's RUclips channel is available here:
    / sentdex
    Francois Chollet's blog post on autoencoders:
    blog.keras.io/...
    More reading on autoencoders:
    probablydance....
    WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
    David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.
    / twominutepapers
    We also thank Experiment for sponsoring our series. - experiment.com/
    Subscribe if you would like to see more of these! - www.youtube.com...
    Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (creativecommon...)
    Artist: audionautix.com/
    Thumbnail background image source (we have edited the colors and edited it some more): pixabay.com/hu...
    Splash screen/thumbnail design: Felícia Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Facebook → / twominutepapers
    Twitter → / karoly_zsolnai
    Web → cg.tuwien.ac.a...
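The bottleneck idea from the description can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the video: a linear encoder compresses 8-dimensional inputs into a 2-dimensional bottleneck, and a decoder reconstructs them. The layer sizes, learning rate, and synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that truly lies on a 2-D subspace of R^8,
# so a 2-neuron bottleneck can represent it almost perfectly.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
data = latent @ mixing

n_in, n_bottleneck = 8, 2
W_enc = rng.normal(scale=0.1, size=(n_in, n_bottleneck))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_bottleneck, n_in))  # decoder weights

lr = 0.01
for _ in range(3000):
    code = data @ W_enc    # compress: 8 numbers -> 2 numbers
    recon = code @ W_dec   # reconstruct: 2 numbers -> 8 numbers
    err = recon - data
    # gradient descent on the mean squared reconstruction error
    grad_dec = (code.T @ err) / len(data)
    grad_enc = (data.T @ (err @ W_dec.T)) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((data @ W_enc @ W_dec - data) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

A real image autoencoder adds nonlinearities and many more units, but the training loop is the same idea: squeeze through a bottleneck, reconstruct, and minimize the reconstruction error.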

Comments • 73

  • @niaei
    @niaei 2 years ago +2

    Came here from The Coding Train. And now you are sending me to sentdex. I knew about you all. Means I am on the right track

  • @salman3112
    @salman3112 8 years ago +34

    Just discovered this channel. Would call it my best online discovery ever. Thanks a lot for this. :)

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +3

      Thanks so much for the kind words and happy to have you around! :)

  • @feraudyh
    @feraudyh 7 years ago +7

    I think you explain it much better than some of the others.

  • @varunmahanot5766
    @varunmahanot5766 6 years ago +7

    It's really nice of you to promote a good channel like sentdex.

  • @JS-lf4sm
    @JS-lf4sm 1 year ago +1

    I have to put my paws to the 'like' button immediately!

  • @jfk_the_second
    @jfk_the_second 2 years ago +1

    Wow. It's fascinating to see what this channel was like when it was starting up. The style is largely the same, but less fine-tuned. Károly has since learned a lot more about engaging speech, and the icon looks just a little different. Also, we now have two favorite phrases that have basically become a culture: 1) "Hold on to your papers" (and variations thereof), and 2) "Just two more papers down the line" (and variations thereof).

  • @atrumluminarium
    @atrumluminarium 6 years ago +4

    I think the main advantage of AE compression over standard compression techniques is that it is possibly a bit more general, as opposed to something like JPEG, which is limited to images.

  • @offchan
    @offchan 8 years ago +5

    1:48 Shouldn't we call it a very dense representation instead of a sparse one? Here's how I think about it: the smaller number of neurons has to compress the data from a large representation into a very dense, small one. Compressing should mean that you are making things dense, shouldn't it? And usually, we refer to a sparse vector as a really large representation.

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +8

      It's good that you raised this point, thanks! It is dense in the sense that there is likely "a lot of stuff" that a neuron would be firing for, but the mathematical description of that representation is sparse in the sense that the basis contains a tiny number of elements (the number of neurons, that is).

  • @TwoMinutePapers
    @TwoMinutePapers  8 years ago +29

    The next episode is going to be about Two Minute Papers itself, and after that, we'll be back to the usual visual fireworks. :)

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +4

      I am very well aware of the existence of stacked autoencoders, and was saving that for an entire separate episode (while mentioning that we were only scratching the surface here). It would be great to present it together with PCA and some matrix factorization techniques like SVD, which I have wanted to do for a while. Still trying to find a way to do it that is visually and intellectually exciting. :) Thanks for the feedback!
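The PCA/SVD connection mentioned above can be sketched in NumPy. This is an illustrative example, not material from the video: truncating the SVD of centered data to its top k components gives the best rank-k linear reconstruction, which is exactly the solution an optimal linear autoencoder with a k-unit bottleneck converges to. The data and dimensions below are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 data embedded in R^6, so two components suffice.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 6))
Xc = X - X.mean(axis=0)  # PCA works on centered data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
components = Vt[:k]            # top-k principal directions
codes = Xc @ components.T      # k-dimensional "bottleneck" codes
recon = codes @ components     # best rank-k reconstruction

pca_mse = float(np.mean((recon - Xc) ** 2))
print(f"PCA reconstruction MSE: {pca_mse:.2e}")
```

Because the data here is exactly rank 2, the two-component reconstruction is essentially perfect; a nonlinear autoencoder only becomes more powerful than this once you add nonlinear activations.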

    • @jfk_the_second
      @jfk_the_second 2 years ago

      @@TwoMinutePapers You've evolved a lot since five years ago! ❤️

  • @summerxia7474
    @summerxia7474 2 years ago +1

    Very clear and to the point!!! Why can't my teacher just talk this way?

  • @ahmed.ea.abdalla
    @ahmed.ea.abdalla 8 years ago +5

    Thanks for pointing us to such a valuable channel :D

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +3

    • @ahmed.ea.abdalla
      @ahmed.ea.abdalla 8 years ago

      +Károly Zsolnai-Fehér (Two Minute Papers) and of course as usual, thanks for the awesomeness you give us ;)

  • @CopperHermit
    @CopperHermit 3 years ago

    I'm glad I found this channel, thank you!

  • @Ludifant
    @Ludifant 4 years ago +1

    A great application could be in denoising before vectorisation of mid-lines or in animation when you need to automatically morph complex shapes. It seems to do that with quite a lot of understanding of what lines are.

  • @ServetEdu
    @ServetEdu 8 years ago +5

    I love this channel, thank you! I am setting up a Patreon account asap :)

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +2

      Happy to hear that you are enjoying the series. Thank you so much for your generous support! :)

  • @Vextrove
    @Vextrove 4 years ago +1

    The inner nodes represent abstract concepts!

  • @ndavid42
    @ndavid42 8 years ago +1

    I'm glad to see this kind of like-to-dislike ratio on YouTube; it's well deserved! Keep up the good work! (One of my favorite channels, you collect really good topics!)

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +2

      I'm very glad you liked it, and welcome to the club! :)

    • @proloycodes
      @proloycodes 2 years ago

      this aged like fine wine

  • @vijayvaswani3812
    @vijayvaswani3812 3 years ago

    Amazing channel.

  • @hedgehog_fox
    @hedgehog_fox 6 years ago

    BEST CHANNEL EVER!

  • @wentworthmiller1890
    @wentworthmiller1890 3 years ago

    Though we weren't asked, I'm holding on to my papers! Might squeeze a bit too! :)

  • @tyan4380
    @tyan4380 3 years ago

    excellent explanation

  • @AbgezocktXD
    @AbgezocktXD 5 years ago

    Damn that map at 3:05! Crazy stuff

  • @muaazzakria287
    @muaazzakria287 4 years ago

    Thanks for explaining

  • @TheAwesomeDudeGuy
    @TheAwesomeDudeGuy 8 years ago

    I love what you are doing. Pleasure to watch your videos!

  • @TheDiscoMole
    @TheDiscoMole 7 years ago +17

    Wasted your chance to say 'bear necessities' at 1:41

  • @sirajkhan4571
    @sirajkhan4571 6 years ago

    Nice video as usual! Thanks

  • @bobsmithy3103
    @bobsmithy3103 7 years ago

    Thanks a ton for the link. It'll probably help with my schol dts

  • @haroldsu1696
    @haroldsu1696 6 years ago

    thank you for the great lecture!

  • @rnbbexyjlobt
    @rnbbexyjlobt 8 years ago +1

    I love machine learning and simulations, and I don't want the videos about them to stop; however, I think this channel would attract a wider audience and lead viewers to do more research on their own if Two Minute Papers also reported on other topics like astrophysics, quantum physics, bioengineering, nanotech, and the plethora of others available. Either way, keep up the good work.

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +3

      I completely agree. We have episodes on these topics every now and then, and widening further is definitely on our to-do list. However, these topics are further from my field of expertise and therefore require even more preparation, which is currently not possible with a full-time job. If it becomes possible in the future to do Two Minute Papers full time, I can't wait to do more of those. :)

  • @ellisiverdavid7978
    @ellisiverdavid7978 3 years ago

    Concise and truly informative lecture!
    I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is highly correlated with the training images?
    Would this be better, or even recommended, compared to using the traditional denoising filters of OpenCV for real-time video?
    I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)

  • @adityaraut5966
    @adityaraut5966 4 years ago

    This is amazing

  • @viratponugoti7735
    @viratponugoti7735 1 year ago

    YOUR daily dose of research papers (get the reference?).

  • @jacobstegemann8192
    @jacobstegemann8192 8 years ago

    nice music at the ending :D

  • @KaranDoshicool
    @KaranDoshicool 4 years ago

    Can you give a link to the research paper which uses an autoencoder to generate handwritten digits?

  • @robosergTV
    @robosergTV 8 years ago +1

    Any chance you know which of Sentdex's videos shows the tank game?

    • @TwoMinutePapers
      @TwoMinutePapers  8 years ago +1

      I have asked him about this through Twitter, let's see if we can find out! :)

    • @robosergTV
      @robosergTV 8 years ago +1

      thanks!

  • @r3ijmsszf3bsrew7tw7o
    @r3ijmsszf3bsrew7tw7o 8 years ago

    But, regular (non variational) autoencoders are generative models too!

  • @thomasblackmore1509
    @thomasblackmore1509 3 years ago

    Do you have a link to the video that explains how to build the 'tanks' game shown at 3:24?

  • @kvreddy1985
    @kvreddy1985 5 years ago

    Thank you..

  • @tariqulislam2512
    @tariqulislam2512 8 years ago

    Nice video as usual! :)

  • @jordia.2970
    @jordia.2970 3 years ago

    Now I get why they compare it to PCA!

  • @bosepukur
    @bosepukur 6 years ago

    thanks

  • @wasaamhazm
    @wasaamhazm 6 years ago

    Why is softmax better than SVM with an autoencoder?
    Do you have a paper explaining that?

  • @ArvindDevaraj1
    @ArvindDevaraj1 5 years ago +1

    Am I the only one thinking about impostor syndrome when he says "dear fellow scholars"?

  • @code-grammardude5974
    @code-grammardude5974 2 years ago

    imagine using this to create datasets from very few samples

  • @kim15742
    @kim15742 7 years ago +1

    Hey, just wanted to ask what IDE/text editor you use for coding.

    • @kim15742
      @kim15742 7 years ago +1

      Also, what operating system?

    • @TwoMinutePapers
      @TwoMinutePapers  7 years ago +1

      Generally, I have projects spanning all three major operating systems - whichever is fit for the job at hand. As an editor, I use vim 90% of the time.

    • @kim15742
      @kim15742 7 years ago

      Okay, thanks for the info. Which language do you actually use?

    • @PiyushPallav49
      @PiyushPallav49 7 years ago

      Kim, Sublime Text is also one of the more widely used text editors. There is this cool feature of editing multiple places in a single go which I find quite time-saving. You can have a look at this too :)

  • @miltondossantos9876
    @miltondossantos9876 5 years ago

    Hi, thanks for the video. How do you make that script at ~0:29?

    • @WMTeWu
      @WMTeWu 5 years ago

      SOURCE in the top-left corner

  • @SFtheWolf
    @SFtheWolf 8 years ago +1

    I see nefarious applications for both captcha breaking and signature forgery.

    • @HY-dd6sc
      @HY-dd6sc 8 years ago

      Captcha breaking? How so?

    • @Kram1032
      @Kram1032 8 years ago

      There are absolutely stunning results about writing in a given style of handwriting given just a single handwritten note as a primer example. The networks can also serve to "beautify" handwritten text simply by making it a bit less divergent. I suspect that with a correspondingly extended dataset you could train these to faithfully generate hand signatures and, on top of that, manage to write entire books in a given signature style. Wanna read a novel in Dr. Claw's font? :D

  • @dibyakantaacharya4104
    @dibyakantaacharya4104 4 years ago

    Can you send me the cat and dog detection source code?

  • @AviPars
    @AviPars 7 years ago +1

    Can you collab with Siraj Raval or Udacity and their self-driving AI nanodegree? It would help you grow your channel.

  • @underdoge6862
    @underdoge6862 9 months ago

    Damn, I just realized I'm a hardcore nerd

  • @MartinDxt
    @MartinDxt 8 years ago

    1/7 like ratio :D

  • @NorthIT
    @NorthIT 2 years ago +1

    WHERE'S YOUR ACCENT

  • @sophiacristina
    @sophiacristina 5 years ago

    I'm getting so addicted to AIs... :/