- 18 videos
- 85,428 views
Paul Hand
Joined Nov 29, 2011
Carbon Storage Demo - Tapia Camps at Rice University
Dr. Paul Hand gives a demonstration of the Carbon Capture and Storage engineering challenge that will be delivered at Tapia Camps at Rice University. In the project, students build a model saltwater reservoir using standard household materials like oats, beans, and Play-Doh. The students then attempt to inject and store as much vegetable oil as they can without any of it leaking. The vegetable oil represents liquid carbon dioxide in industrial carbon storage applications. To find out more, reach out to Dr. Hand at hand@rice.edu.
204 views
Videos
Generative Adversarial Networks
1.5K views · 4 years ago
A lecture that discusses Generative Adversarial Networks. We discuss generative modeling, latent spaces, semantically meaningful arithmetic in latent space, minimax optimization formulation for GANs, theory for minimax formulation, Earth mover distance, Wasserstein GANs, and challenges of GANs. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by ...
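As a toy illustration of the minimax formulation listed in this description, the two expectation terms of the GAN objective can be estimated from samples. The discriminator, generator, and distributions below are illustrative stand-ins, not networks from the lecture:

```python
import numpy as np

# Toy sketch of the GAN minimax objective:
#   min_G max_D  E_x[log D(x)] + E_z[log(1 - D(G(z)))]
# D and G here are fixed toy functions; the point is only to show how the
# two expectation terms are estimated from samples.

rng = np.random.default_rng(0)

def D(x):                      # toy "discriminator": sigmoid of the input
    return 1.0 / (1.0 + np.exp(-x))

def G(z):                      # toy "generator": affine map of latent noise
    return 2.0 * z + 1.0

real = rng.normal(1.0, 0.5, size=1000)   # samples from the data distribution
z = rng.normal(0.0, 1.0, size=1000)      # latent noise samples

# Monte Carlo estimate of the minimax value for this fixed D and G
value = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(G(z))))
```

In an actual GAN, D and G would be neural networks and the two players would alternate gradient steps on this value, with D maximizing it and G minimizing it.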
Variational Autoencoders
31K views · 4 years ago
A lecture that discusses variational autoencoders. We discuss generative models, plain autoencoders, the variational lower bound and evidence lower bound, variational autoencoder architecture, and stochastic optimization of the variational lower bound. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The notes are available at: khou...
Gradient Descent and Stochastic Gradient Descent
1.6K views · 4 years ago
A lecture that discusses gradient descent and stochastic gradient descent in deep neural networks. We discuss the effects of learning rates that are too large or too small. We discuss convergence rates for gradient descent and stochastic gradient descent for convex functions. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The note...
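A minimal sketch of plain gradient descent on a convex quadratic, which makes the learning-rate discussion in this description concrete; the matrix, vector, and step size are illustrative choices, not values from the lecture:

```python
import numpy as np

# Gradient descent on the convex quadratic f(x) = ||Ax - b||^2.
# With a small enough learning rate the iterates contract toward the
# least-squares solution; too large a rate would make them diverge.

def grad_descent(A, b, lr, steps):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - b)   # gradient of ||Ax - b||^2
        x = x - lr * grad
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = grad_descent(A, b, lr=0.1, steps=200)
# converges toward the least-squares solution [2, 3]
```

Stochastic gradient descent replaces `grad` with an unbiased estimate computed from a random subsample of the data, trading per-step accuracy for cheaper iterations.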
Continual Learning and Catastrophic Forgetting
13K views · 4 years ago
A lecture that discusses continual learning and catastrophic forgetting in deep neural networks. We discuss the context, methods for evaluating algorithms, and algorithms based on regularization, dynamic architectures, and Complementary Learning Systems. Specifically, we discuss data permutation tasks, incremental task learning, multimodal learning, the Learning without Forgetting algorithm, El...
Adversarial Examples for Deep Neural Networks
11K views · 4 years ago
A lecture that discusses adversarial examples for deep neural networks. We discuss white box attacks, black box attacks, real world attacks, and adversarial training. We discuss Projected Gradient Descent, the Fast Gradient Sign Method, Carlini-Wagner methods, Universal Adversarial Perturbations, Adversarial Patches, Transferability Attacks, Zeroth Order Optimization, and more. This lecture is ...
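Of the attacks listed in this description, the Fast Gradient Sign Method is the simplest to sketch. The toy linear model, input, and epsilon below are illustrative, not from the lecture:

```python
import numpy as np

# Fast Gradient Sign Method (FGSM): perturb the input by eps in the
# direction of the sign of the loss gradient, which maximally increases
# the loss under an L-infinity perturbation budget.

def fgsm(x, grad_wrt_x, eps):
    return x + eps * np.sign(grad_wrt_x)

w = np.array([0.5, -1.0, 2.0])   # toy linear model with loss = w . x
x = np.array([1.0, 1.0, 1.0])
grad = w                          # gradient of w . x with respect to x
x_adv = fgsm(x, grad, eps=0.1)
# loss rises from w.x = 1.5 to w.x_adv = 1.5 + 0.1 * (0.5 + 1.0 + 2.0) = 1.85
```

Projected Gradient Descent, also mentioned above, can be viewed as iterating this step with a smaller step size and projecting back onto the allowed perturbation set.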
Neural Network Architectures for Images
983 views · 4 years ago
A lecture that discusses architectures of neural networks that process images. We discuss tasks including classification, segmentation, denoising, blind deconvolution, superresolution, and inpainting. We discuss Multilayer Perceptrons, Convolutional Neural Networks, Residual Nets, Encoder-decoder nets, and autoencoders, along with the idea that you sometimes want to spare nets from needing to l...
Supervised Machine Learning Review
1.6K views · 4 years ago
A lecture that reviews ideas from supervised machine learning that are relevant for understanding deep neural networks. Includes the statistical machine learning framework, principles for selecting loss functions, and the bias-variance tradeoff. The lecture ends with the surprising double-descent behavior that neural networks can perform well even when highly overparameterized. This lecture is ...
Architectural Elements Of Neural Networks
928 views · 4 years ago
Online lecture on the fundamental building blocks of convolutional neural networks. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7150-summer-2020/Architectural_Elements_Of_Neural_Networks.pdf
Invertible Neural Networks and Inverse Problems
12K views · 4 years ago
Online lecture on Invertible Neural Networks as priors for inverse problems in imaging. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture12-invertible-neural-networks-and-inverse-problems.pdf The papers mentio...
Unlearned Neural Networks as Image Priors for Inverse Problems
2.8K views · 4 years ago
Online lecture on Deep Image Prior, Deep Decoder, and Deep Geometric Prior for inverse problems in imaging. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture11-Unlearned-Neural-Networks-as-Image-Priors.pdf The...
Theory of GANs for Compressed Sensing
1.2K views · 4 years ago
Online lecture on Theory for GAN priors in Compressed Sensing. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture10-GANS-for-compressed-sensing.pdf The papers mentioned: Bora, Ashish, Ajil Jalal, Eric Price, an...
Most movies are boring -- Paul E. Hand
2.8K views · 9 years ago
What allows us to stream movies over the internet is the fact that most movies are boring. This fact has far-reaching implications: it allows us to store pictures and movies with very few bytes, to communicate efficiently with distant spacecraft, and to take life-saving medical images more quickly. Paul E. Hand explains these topics and the importance of mathematics to an audience of high schoo...
Paul Hand - Determined
650 views · 11 years ago
My first piano composition, "Determined", which I played at the MIT math department recital in January 2013.
Choice of coordinates for inscribed triangle.mp4
140 views · 12 years ago
Rectangular prism of maximal volume.mp4
164 views · 12 years ago
Very good presentation. Thanks a lot!
Training of generative models starts here: 6:26
Indeed a very good video. Well and clearly explained.
Thank you for the clear explanation of these concepts!
Amazing video!
Amazing. Thank you.
very insightful
This is amazing! Thank you for the clear explanation :)
This is a gem. Finally, someone that is able to give concise teaching well! Thank you!
How do we know that p(x|z) is normally distributed? Do we just assume it? x|z is just a neural network, and I don't see any reason for p(x|z) to be normally distributed. Actually, the relation between x and z must be deterministic.
This is clear and awesome
Clear, concise and very accurate. Thank you so much for sharing with us this wonderful explanation.
Why not just do data augmentation with some random noise?
thank you so much. so underrated
Please Make More Videos.
Viewer nr. 10000
It would be really perfect if someone gave some examples at each step, since we are talking about real things that exist in the world. Each step has its meaning and intention and is made to overcome challenges or obstacles that come up along the way. I want to know what we are doing, what the purpose is, and what would happen if we didn't do it this way. I cannot find anything non-abstract; I need examples to anchor my imagination. It is clear and good only if you have prior knowledge of the things being discussed. Otherwise there are a million ways to interpret things and even more ways to get lost.
At 11:00, it seems like, if we are talking about pictures, the formula written in blue would generate an image of pure random noise, which doesn't make sense. It should have been done differently, as described in other articles, so that the random distributions of different images (sets of parameters or pixels) overlap, and so that the result is not pure random noise, which is not what we're trying to reach.
I don't understand what phi and theta mean. "The parameters of the model": does that mean the weights of the neural network, or the parameters of the distribution (e.g., if it is Gaussian, the parameters correspond to a mu and sigma)? I'd appreciate it if anyone could clarify, thank you!
The parameters of the model. We use MLE principles to find the optimal phi and theta.
I'm pretty sure phi and theta represent the parameters in terms of weights and biases in the encoder/decoder neural networks.
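For what it's worth, both readings are connected: under the usual Gaussian assumption on q(z|x), the encoder weights phi determine the distribution parameters mu and sigma for each input. A minimal sketch, where all shapes and weights are illustrative rather than anything from the lecture:

```python
import numpy as np

# Where phi and theta live in a VAE, assuming Gaussian q(z|x):
#   phi   = weights of the encoder, which outputs mu and log-variance of q(z|x)
#   theta = weights of the decoder, which parameterizes p(x|z)

rng = np.random.default_rng(0)
phi = {"W_mu": rng.normal(size=(2, 4)), "W_logvar": rng.normal(size=(2, 4))}
theta = {"W_dec": rng.normal(size=(4, 2))}

def encode(x, phi):            # q(z|x): distribution parameters from phi
    return phi["W_mu"] @ x, phi["W_logvar"] @ x

def decode(z, theta):          # p(x|z): reconstruction mean from theta
    return theta["W_dec"] @ z

x = rng.normal(size=4)
mu, logvar = encode(x, phi)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=2)   # reparameterization trick
x_hat = decode(z, theta)       # reconstruction, same shape as the input
```

So phi and theta are network weights, and mu and sigma are the per-input distribution parameters those weights produce.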
Thank you excellent explanation!!
Brilliant lecture!
Loved this video, thanks a lot. Will patiently wait for the next one :)
Hi @Paul Hand, thank you for the lecture. What is the intuition behind using q(z|x) in the expectation, or behind the expectation at all? I see that it makes sense mathematically, but how would one get the idea? In contrast, there is a derivation of the ELBO via importance sampling and then applying Jensen's inequality, or via the optimal sampler.
If anyone is wondering how they thought about these architectures, look at the Feistel Network in cryptography. Not sure if they reference it in their paper, but that's definitely how they got the insight.
Great talk! Thank you for the video. Just two comments regarding typing on ruclips.net/video/vjaq03IYgSk/видео.html: 1. During initialization, wouldn't Y_o be Y^hat_o? Because that is the output of the network 2. In the argmin formula isn't Y_o the same as Y_n?
Thank you so much for your lecture. You truly have a talent for teaching!
Hi Paul. Thanks for explaining DIP and related methods in such a clear and practical way. This content is simply amazing. I hope you continue to do more videos. I am super subscribed to your channel.
Loved your video on VAEs, and really like this one for Vanilla GANs, but I couldn't hang in there with the math for the Wasserstein GAN.
I enjoyed your explanation. I needed something like this video to get a little deeper into the theory of the VAEs. Thank you!
Thanks a lot great series of lectures.
Stunning review!
I have a question regarding some main intuitions regarding Variatonal Autoencoders. The video is here ruclips.net/video/EKURiwsRVlo/видео.html
I have watched many YT videos on GANs but this is by far one of the very best at explaining GANs. Thank you and keep up the good work!
awesome survey on attacks, thanks!
One of the best explanations on VAE on YT. Thank you and keep up the good work!
At 24:48: how does maximizing the VLB roughly maximize p(x)? Since x is given, p(x) should be constant.
p(x) is actually parameterized, so it's not constant.
great video
Paul, it's a very good explanation, but dude, c'mon, why is the volume so low!
Excellent review of this topic! Thank you very much!
Excellent content to get a quick overview.
Very interesting!
Very nice and comprehensive lecture. Thanks
Great approach to the problem, best explanation I've found.
Best explanation on RUclips. Exactly what I was looking for. Thorough, logical, intuitive.
Thank you!! Excellent explanation!
wow, this is so well explained.
Love your channel Paul, you should make moar videos, you're really great at explaining things! Can't wait for the next one!
Thank you Paul Very Much for this brilliant summary of the "Continual Learning" topic , you saved my day!
Very good presentation, but please watch the recording levels. It is not loud enough 👍
Great explanation. Thanks, professor.
very nice explanation!