What is an Autoencoder? | Two Minute Papers #86
- Published: 9 Sep 2024
- Autoencoders are neural networks that can create sparse representations of the input data and can therefore be used for image compression. There are denoising autoencoders that, after learning these sparse representations, can be presented with noisy images and reconstruct clean versions of them. What is even better is a variant called the variational autoencoder, which not only learns these sparse representations, but can also draw new images. We can, for instance, ask it to create new handwritten digits and actually expect the results to make sense!
_____________________________
The paper "Auto-Encoding Variational Bayes" is available here:
arxiv.org/pdf/1...
Recommended for you:
Recurrent Neural Network Writes Sentences About Images - • Recurrent Neural Netwo...
Andrej Karpathy's convolutional neural network that you can train in your browser:
cs.stanford.edu...
Sentdex's YouTube channel is available here:
/ sentdex
Francois Chollet's blog post on autoencoders:
blog.keras.io/...
More reading on autoencoders:
probablydance....
WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.
/ twominutepapers
We also thank Experiment for sponsoring our series. - experiment.com/
Subscribe if you would like to see more of these! - www.youtube.com...
Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (creativecommon...)
Artist: audionautix.com/
Thumbnail background image source (we have edited the colors and made some further changes): pixabay.com/hu...
Splash screen/thumbnail design: Felícia Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Facebook → / twominutepapers
Twitter → / karoly_zsolnai
Web → cg.tuwien.ac.a...
Came here from The Coding Train. And now you are sending me to sentdex. I knew about you all. That means I am on the right track.
Just discovered this channel. Would call it my best online discovery ever. Thanks a lot for this. :)
Thanks so much for the kind words and happy to have you around! :)
I think you explain it much better than some of the others.
It's really nice of you to promote a good channel like sentdex.
I have to put my paws to the 'like' button immediately!
Wow. It's fascinating to see what this channel was like when it was sprinting up. The style is largely the same, but less fine-tuned. Károly has learned a lot more about engaging speech, and the icon looks just a little different. Also, we have two favorite phrases that have basically become a culture: 1) "Hold on to your papers" (and variations stemming therefrom), and 2) "Just two more papers down the line" (and variations therefrom).
I think the main advantage of AE compression over standard compression techniques is that it is possibly a bit more general, as opposed to something like JPEG, which is limited to images.
1:48 Shouldn't we call it a very dense representation instead of a sparse one? Here's how I think about it: the smaller number of neurons has to compress the data from a large representation into a very dense, small one. Compressing should mean that you are making things dense, shouldn't it? And usually, we refer to a sparse vector as a really large representation.
It's good that you raised this point, thanks! It is dense in the sense that there is likely "a lot of stuff" that neuron would be firing for, but the mathematical description of that representation is sparse in the sense that the basis contains a tiny number of elements (the # of neurons, that is).
The next episode is going to be about Two Minute Papers itself, and after that, we'll be back to the usual visual fireworks. :)
I am very well aware of the existence of stacked autoencoders, and was planning an entire separate episode for that (while mentioning that we were only scratching the surface here). It would be great to present it together with PCA and some matrix factorization techniques like SVD, which I have really wanted to do for a while. Still trying to find a way to do it that is visually and intellectually exciting. :) Thanks for the feedback!
@@TwoMinutePapers You've evolved a lot since five years ago! ❤️
Very clear and to the point!!! Why can't my teacher just talk this way?
Thanks for pointing us to such a valuable channel :D
+Károly Zsolnai-Fehér (Two Minute Papers) and of course as usual, thanks for the awesomeness you give us ;)
I'm glad I found this channel, thank you!
A great application could be in denoising before vectorisation of mid-lines or in animation when you need to automatically morph complex shapes. It seems to do that with quite a lot of understanding of what lines are.
I love this channel, thank you! I am setting up a Patreon account asap :)
Happy to hear that you are enjoying the series. Thank you so much for your generous support! :)
The inner nodes represent abstract concepts!
I'm glad to see this kind of likes-to-dislikes ratio on YouTube; it's well deserved! Keep up the good work! (One of my favorite channels, you pick really good topics!)
Very glad you liked it, and welcome to the club! :)
this aged like fine wine
Amazing channel.
BEST CHANNEL EVER!
Though we weren't asked, I'm holding on to my papers! Might squeeze a bit too! :)
excellent explanation
Damn that map at 3:05! Crazy stuff
Thanks for explaining
I love what you are doing. Pleasure to watch your videos!
Wasted your chance to say 'bear necessities' at 1:41
Nice video as usual! Thanks
Thanks a ton for the link. It'll probably help with my schol dts
thank you for the great lecture!
I love machine learning and simulations and I don't want the videos about them to stop; however, I think that this channel would attract a wider audience and lead the viewers to do more research on their own if two minute papers also reported on other topics like astrophysics, quantum physics, bioengineering, nanotech, and the plethora of others available. Either way, keep up the good work
I completely agree. We have episodes on these topics every now and then, but widening further is definitely on our todo list. However, these topics are further away from my field of expertise and therefore require even more preparation, which is currently not possible with a full-time job. If it becomes possible in the future to do Two Minute Papers as a full-time thing, I can't wait to do more of those. :)
Concise and truly informative lecture!
I’m just wondering-after we obtained the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live feed video that is somewhat highly correlated to the training images?
Will this be better, or even recommended, instead of using traditional denoising filters of OpenCV for real-time videos?
I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)
This is amazing
YOUR daily dose of research papers (get the reference?).
nice music at the ending :D
Can you give link of research paper which uses autoencoder to generate handwritten digits?
Any chance you know the video of Sentdex's where you show the tank game?
I have asked him about this through twitter, let's see if we can find it out! :)
thanks!
But regular (non-variational) autoencoders are generative models too!
do you have a link to the video that explains how to build the 'tanks' game shown at 3:24?
Thank you..
Nice video as usual! :)
Thanks for watching! :)
Now I get why they compare it to PCA!
thanks
Why is softmax better than an SVM with an autoencoder? If you have a paper that explains that, please share it.
Am I the only one thinking about impostor syndrome when he says "dear fellow scholars"
imagine using this to create datasets from very few samples
Hey, just wanted to ask what IDE/text editor you use for coding.
Also, what operating system?
Generally, I have projects spanning all 3 major operating systems - whichever is fit for the job at hand. As an editor, I use vim 90% of the time.
Okay, thanks for the info. Which language do you actually use?
Kim, Sublime Text is also one of the more widely used text editors. There is this cool feature of making multiple edits in a single go, which I find quite time-saving. You can have a look at this too :)
Hi Dear, thanks for the video. How do you make that script at ~ 0:29 min?
SOURCE in the top-left corner
I see nefarious applications for both captcha breaking and signature forgery.
Captcha breaking? How so?
There are absolutely stunning results on writing in a given style of handwriting given just a single handwritten note as a primer example. The networks can also serve to "beautify" handwritten text simply by making it a bit less divergent. I suspect that with a correspondingly extended dataset you could train them to faithfully generate hand signatures and, on top of that, manage to write entire books in a given signature style. Wanna read a novel in Dr. Claw's font? :D
Can you send me the cat and dog detection source code?
Can you collab with Siraj Raval or Udacity and their self-driving AI nanodegree? It will help you grow your channel.
Damn, I just realized I'm a hardcore nerd
1/7 like ratio :D
WHERE'S YOUR ACCENT
I'm getting so addicted to AIs... :/