Paul Hand
  • Videos: 18
  • Views: 85,428
Carbon Storage Demo - Tapia Camps at Rice University
Dr. Paul Hand demonstrates the Carbon Capture and Storage engineering challenge that will be delivered at Tapia Camps at Rice University. In the project, students build a model saltwater reservoir using standard household materials like oats, beans, and Play-Doh. The students then attempt to inject and store as much vegetable oil as possible without any of it leaking. The vegetable oil represents liquid carbon dioxide in industrial carbon storage applications. To find out more, reach out to Dr. Hand at hand@rice.edu.
Views: 204

Videos

Generative Adversarial Networks
Views: 1.5K • 4 years ago
A lecture that discusses Generative Adversarial Networks. We discuss generative modeling, latent spaces, semantically meaningful arithmetic in latent space, the minimax optimization formulation for GANs, theory for the minimax formulation, the earth mover's distance, Wasserstein GANs, and challenges of GANs. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by ...
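
As a rough illustration of the minimax game the lecture describes, here is a minimal GAN training sketch on a toy one-dimensional dataset; the networks, sizes, and hyperparameters are illustrative stand-ins, not taken from the lecture.

```python
# Minimal GAN sketch (PyTorch): alternating discriminator/generator updates
# on a toy 1-D target distribution N(2, 0.25). All choices are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator: z -> x
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: x -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # samples from the "true" distribution
    fake = G(torch.randn(64, 8))                   # samples pushed through the generator

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step (non-saturating variant): push D(fake) toward 1.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```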
Variational Autoencoders
Views: 31K • 4 years ago
A lecture that discusses variational autoencoders. We discuss generative models, plain autoencoders, the variational lower bound and evidence lower bound, variational autoencoder architecture, and stochastic optimization of the variational lower bound. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The notes are available at: khou...
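
A minimal sketch of the stochastic optimization of the variational lower bound, assuming a Gaussian q(z|x) and the reparameterization trick; the one-layer encoder/decoder and all sizes are illustrative, not the lecture's architecture.

```python
# VAE objective sketch (PyTorch): negative ELBO = reconstruction term + KL term.
import torch
import torch.nn as nn

enc = nn.Linear(784, 2 * 16)   # encoder outputs mean and log-variance of q(z|x)
dec = nn.Linear(16, 784)       # decoder parameterizes p(x|z)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(32, 784)                                    # stand-in batch of flattened images
mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterization: z = mu + sigma * eps

recon = nn.functional.binary_cross_entropy_with_logits(dec(z), x, reduction='sum')
kl = -0.5 * torch.sum(1 + logvar - mu**2 - logvar.exp())   # closed-form KL(q(z|x) || N(0, I))
loss = recon + kl                                          # one stochastic estimate of -ELBO
opt.zero_grad(); loss.backward(); opt.step()
```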
Gradient Descent and Stochastic Gradient Descent
Views: 1.6K • 4 years ago
A lecture that discusses gradient descent and stochastic gradient descent in deep neural networks. We discuss the effects of learning rates that are too large or too small. We discuss convergence rates for gradient descent and stochastic gradient descent for convex functions. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The note...
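
A small NumPy sketch contrasting full-batch gradient descent with stochastic gradient descent on a least-squares problem; the step sizes are illustrative, chosen only to satisfy the usual convergence conditions discussed in the lecture.

```python
# GD vs. SGD on (1/2)||Ax - b||^2 (NumPy). Sizes and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
x_gd, x_sgd = np.zeros(10), np.zeros(10)

for t in range(1, 501):
    x_gd -= 1e-3 * A.T @ (A @ x_gd - b)               # full gradient step
    i = rng.integers(200)                             # SGD: one random row per step
    x_sgd -= (0.1 / t) * A[i] * (A[i] @ x_sgd - b)    # decaying step, unbiased gradient estimate

print(f"GD loss: {0.5 * np.sum((A @ x_gd - b)**2):.3f}, "
      f"SGD loss: {0.5 * np.sum((A @ x_sgd - b)**2):.3f}")
```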
Continual Learning and Catastrophic Forgetting
Views: 13K • 4 years ago
A lecture that discusses continual learning and catastrophic forgetting in deep neural networks. We discuss the context, methods for evaluating algorithms, and algorithms based on regularization, dynamic architectures, and Complementary Learning Systems. Specifically, we discuss data permutation tasks, incremental task learning, multimodal learning, the Learning without Forgetting algorithm, El...
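
As one concrete example of the regularization-based methods the lecture covers, here is a sketch of an Elastic-Weight-Consolidation-style penalty; the function name is hypothetical, and it assumes a diagonal Fisher estimate is already available.

```python
# EWC-style penalty sketch (PyTorch): discourage moving weights that were
# important for a previous task.
import torch

def ewc_penalty(model, old_params, fisher, lam=100.0):
    # Quadratic penalty (lam/2) * sum_i F_i * (theta_i - theta*_i)^2, where theta*
    # are the weights after the old task and F is a diagonal Fisher estimate
    # (e.g., averaged squared gradients of the log-likelihood on old data).
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Usage while training on the new task (names hypothetical):
#   total_loss = new_task_loss + ewc_penalty(model, old_params, fisher)
```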
Adversarial Examples for Deep Neural Networks
Views: 11K • 4 years ago
A lecture that discusses adversarial examples for deep neural networks. We discuss white box attacks, black box attacks, real world attacks, and adversarial training. We discuss Projected Gradient Descent, the Fast Gradient Sign Method, Carlini-Wagner methods, Universal Adversarial Perturbations, Adversarial Patches, Transferability Attacks, Zeroth Order Optimization, and more. This lecture is ...
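
A sketch of the Fast Gradient Sign Method mentioned above, assuming inputs normalized to [0, 1]; iterating this step with a projection back into an epsilon-ball gives the Projected Gradient Descent attack.

```python
# FGSM sketch (PyTorch): a one-step white-box attack that perturbs the input
# along the sign of the loss gradient. eps is illustrative.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()   # keep pixels in [0, 1]
```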
Neural Network Architectures for Images
Views: 983 • 4 years ago
A lecture that discusses architectures of neural networks that process images. We discuss tasks including classification, segmentation, denoising, blind deconvolution, superresolution, and inpainting. We discuss Multilayer Perceptrons, Convolutional Neural Networks, Residual Nets, Encoder-decoder nets, and autoencoders, along with the idea that you sometimes want to spare nets from needing to l...
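
To make the "spare the net from learning the identity" idea concrete, here is a generic residual block sketch; the channel counts are illustrative.

```python
# Residual block sketch (PyTorch): the skip connection means the layers only
# need to learn a residual correction, not the identity map itself.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # output = input + learned residual
```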
Supervised Machine Learning Review
Views: 1.6K • 4 years ago
A lecture that reviews ideas from supervised machine learning that are relevant for understanding deep neural networks. It includes the statistical machine learning framework, principles for selecting loss functions, and the bias-variance tradeoff. The lecture ends with the surprising double-descent phenomenon, whereby neural networks can perform well even when highly overparameterized. This lecture is ...
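
A toy illustration of the bias-variance tradeoff via polynomial regression; the degrees and noise level are arbitrary choices, and the double-descent regime itself requires far larger models than this sketch.

```python
# Bias-variance toy example (NumPy): polynomial fits of increasing degree
# on noisy samples of a sine curve, scored against the noiseless truth.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=30)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for deg in (1, 3, 15):                    # underfit, reasonable fit, overfit
    coef = np.polyfit(x, y, deg)
    err = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(f"degree {deg}: test MSE {err:.3f}")
```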
Architectural Elements Of Neural Networks
Views: 928 • 4 years ago
Online lecture on the fundamental building blocks of convolutional neural networks. This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7150-summer-2020/Architectural_Elements_Of_Neural_Networks.pdf
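
A generic sketch of the standard building blocks (convolution, normalization, nonlinearity, pooling) composed into one stage; it is not taken from the lecture notes.

```python
# One typical CNN stage (PyTorch); all sizes are illustrative.
import torch.nn as nn

stage = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),  # learnable filters
    nn.BatchNorm2d(32),                          # normalization
    nn.ReLU(),                                   # elementwise nonlinearity
    nn.MaxPool2d(2),                             # spatial downsampling
)
```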
Invertible Neural Networks and Inverse Problems
Views: 12K • 4 years ago
Online lecture on Invertible Neural Networks as priors for inverse problems in imaging. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture12-invertible-neural-networks-and-inverse-problems.pdf The papers mentio...
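
A sketch of one invertible building block, an additive coupling layer in the style of NICE/RealNVP; the exact architectures in the papers the lecture covers differ.

```python
# Additive coupling layer sketch (PyTorch): invertible by construction,
# no matter how complicated the shift network t is. dim must be even.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim=4):
        super().__init__()
        self.t = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                               nn.Linear(64, dim // 2))

    def forward(self, x):                 # split; pass one half through, shift the other
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):                 # exact inverse: subtract the same shift
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)
```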
Unlearned Neural Networks as Image Priors for Inverse Problems
Views: 2.8K • 4 years ago
Online lecture on Deep Image Prior, Deep Decoder, and Deep Geometric Prior for inverse problems in imaging. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture11-Unlearned-Neural-Networks-as-Image-Priors.pdf The...
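
A sketch of the Deep Image Prior idea: fit an untrained network to a single degraded image and stop early, letting the architecture itself act as the prior. The tiny network and iteration count here are illustrative stand-ins.

```python
# Deep-Image-Prior-style denoising sketch (PyTorch).
import torch
import torch.nn as nn

noisy = torch.rand(1, 1, 64, 64)                       # stand-in noisy observation
net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))    # untrained network
z = torch.randn(1, 1, 64, 64)                          # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                                # early stopping is essential:
    loss = ((net(z) - noisy) ** 2).mean()              # run too long and it fits the noise
    opt.zero_grad(); loss.backward(); opt.step()
denoised = net(z).detach()
```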
Theory of GANs for Compressed Sensing
Views: 1.2K • 4 years ago
Online lecture on Theory for GAN priors in Compressed Sensing. This lecture is from Northeastern University's CS 7180 Spring 2020 class on Special Topics in Artificial Intelligence, taught by Paul Hand. The notes are available at: khoury.northeastern.edu/home/hand/teaching/cs7180-spring-2020/lecture10-GANS-for-compressed-sensing.pdf The papers mentioned: Bora, Ashish, Ajil Jalal, Eric Price, an...
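
A sketch of recovery with a generative prior in the spirit of Bora et al.: estimate the signal as G(z*), where z* minimizes the measurement misfit. Here G is an untrained stand-in; in practice it would be a trained generator.

```python
# Compressed sensing with a generative prior (PyTorch): min_z ||A G(z) - y||^2.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 256))  # stand-in generator
A = torch.randn(50, 256) / 50 ** 0.5       # random Gaussian measurements, m << n
y = A @ G(torch.randn(16)).detach()        # measurements of a signal in G's range

z = torch.randn(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(1000):
    loss = ((A @ G(z) - y) ** 2).sum()     # measurement misfit in latent space
    opt.zero_grad(); loss.backward(); opt.step()
x_hat = G(z).detach()                      # recovered signal
```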
Most movies are boring -- Paul E. Hand
Views: 2.8K • 9 years ago
What allows us to stream movies over the internet is the fact that most movies are boring. This fact has far-reaching implications: it allows us to store pictures and movies with very few bytes, to communicate efficiently with distant spacecraft, and to take life-saving medical images more quickly. Paul E. Hand explains these topics and the importance of mathematics to an audience of high schoo...
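
The talk's premise, in signal-processing terms, is that natural images and movies are compressible. A quick sketch of that idea with a 2-D Fourier transform, keeping only the largest coefficients of a smooth stand-in image:

```python
# Compressibility sketch (NumPy): keep the top 5% of transform coefficients
# of a smooth image and measure how little is lost.
import numpy as np

img = np.outer(np.sin(np.linspace(0, 3, 64)),
               np.cos(np.linspace(0, 2, 64)))        # smooth stand-in image
coeffs = np.fft.fft2(img)
thresh = np.quantile(np.abs(coeffs), 0.95)           # threshold at the 95th percentile
approx = np.fft.ifft2(np.where(np.abs(coeffs) >= thresh, coeffs, 0)).real
print("relative error:", np.linalg.norm(approx - img) / np.linalg.norm(img))
```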
Paul Hand - Determined
Views: 650 • 11 years ago
My first piano composition, "Determined", performed at the MIT math department recital in January 2013.
Choice of coordinates for inscribed triangle.mp4
Views: 140 • 12 years ago
Rectangular prism of maximal volume.mp4
Views: 164 • 12 years ago
Rectangle of maximal area.mp4
Views: 229 • 12 years ago
Path of Steepest Descent.mp4
Views: 3.5K • 12 years ago

Comments

  • @yurigansmith
    @yurigansmith 15 days ago

    Very good presentation. Thanks a lot!

  • @yurigansmith
    @yurigansmith 16 days ago

    Training of generative models starts here: 6:26

  • @boburniyozov62
    @boburniyozov62 1 month ago

    Indeed a very good video. Well and easily explained.

  • @faelexxxx
    @faelexxxx 1 month ago

    Thank you for the clear explanation of these concepts!

  • @reihanehmirjalili7467
    @reihanehmirjalili7467 3 months ago

    Amazing video!

  • @beem6401
    @beem6401 3 months ago

    Amazing. Thank you.

  • @user-ve6il1tu7i
    @user-ve6il1tu7i 5 months ago

    very insightful

  • @ipheiman3658
    @ipheiman3658 5 months ago

    This is amazing! Thank you for the clear explanation :)

  • @wilsonlwtan3975
    @wilsonlwtan3975 6 months ago

    This is a gem. Finally, someone who is able to teach concisely and well! Thank you!

  • @sahhaf1234
    @sahhaf1234 8 months ago

    How do we know that p(x|z) is normally distributed? Do we just assume it? x|z is just a neural network, and I don't see any reason for p(x|z) to be normally distributed. Actually, the relation between x and z must be deterministic.

  • @MeowlaMars
    @MeowlaMars 8 months ago

    This is clear and awesome

  • @pietrocestola7856
    @pietrocestola7856 8 months ago

    Clear, concise and very accurate. Thank you so much for sharing with us this wonderful explanation.

  • @alryabov
    @alryabov 9 months ago

    Why not just do data augmentation with some random noise?

  • @Procuste34iOSh
    @Procuste34iOSh 10 months ago

    thank you so much. so underrated

  • @ojasgupta1189
    @ojasgupta1189 1 year ago

    Please make more videos.

  • @modernsolutions6631
    @modernsolutions6631 1 year ago

    Viewer nr. 10000

  • @maximmaximov4147
    @maximmaximov4147 1 year ago

    It would be really perfect if someone gave examples for each step, since we are talking about real things that exist in the world. Each step has its meaning and intention and is made to overcome challenges or obstacles that come up along the way. I want to know what we are doing, what the purpose is, and what would happen if we didn't do it this way. I cannot find anything non-abstract; I need examples to anchor my imagination on. It is clear and good only if you have prior knowledge of the things being discussed. Otherwise there are a million ways to interpret things and even more ways to get lost.

    • @maximmaximov4147
      @maximmaximov4147 1 year ago

      At 11:00, it seems like, if we are talking about pictures, the formula written in blue would generate an image of pure random noise, which doesn't make sense. It should be done differently, as described in other articles, so that the random distributions of different images (sets of parameters or pixels) overlap, rather than producing purely random noise, which is not what we're trying to reach.

  • @oFabianLoL
    @oFabianLoL 1 year ago

    I don't understand what phi and theta mean. "The parameters of the model": does that mean the weights of the neural network, or the parameters of the distribution, e.g. if it is Gaussian, the parameters correspond to a mu and sigma? I'd appreciate it if anyone can clarify, thank you!

    • @ThatQCboy
      @ThatQCboy 1 year ago

      Parameters of the model. We use MLE principles to find the optimal phi and theta.

    • @doyney
      @doyney 10 months ago

      I'm pretty sure phi and theta represent the parameters in terms of weights and biases in the encoder/decoder neural networks.

  • @slemanbisharat6390
    @slemanbisharat6390 1 year ago

    Thank you, excellent explanation!!

  • @silviasanmartindeporres7033
    @silviasanmartindeporres7033 2 years ago

    Brilliant lecture!

  • @ayankashyap5379
    @ayankashyap5379 2 years ago

    Loved this video, thanks a lot. Will patiently wait for the next one :)

  • @gomctigger4439
    @gomctigger4439 2 years ago

    Hi @Paul Hand, thank you for the lecture. What is the intuition behind using q(z|x) in the expectation, or the expectation at all? I see that it makes sense mathematically, but how would one get the idea? In contrast, there is a derivation of the ELBO via importance sampling and then applying Jensen's inequality, or via the optimal sampler.

  • @JTMoustache
    @JTMoustache 2 years ago

    If anyone is wondering how they thought of these architectures, look at the Feistel network in cryptography; not sure if they reference it in their paper, but that's definitely how they got the insight.

  • @yeyerrd
    @yeyerrd 2 years ago

    Great talk! Thank you for the video. Just two comments regarding the notation at ruclips.net/video/vjaq03IYgSk/видео.html: 1. During initialization, wouldn't Y_o be Y^hat_o, since that is the output of the network? 2. In the argmin formula, isn't Y_o the same as Y_n?

  • @sucramgnat8157
    @sucramgnat8157 2 years ago

    Thank you so much for your lecture. You truly have a talent for teaching!

  • @kristiantorres1080
    @kristiantorres1080 2 years ago

    Hi Paul. Thanks for explaining DIP and related methods in such a clear and practical way. This content is simply amazing. I hope you continue to do more videos. I am super subscribed to your channel.

  • @robwasab
    @robwasab 2 years ago

    Loved your video on VAEs, and really like this one for Vanilla GANs, but I couldn't hang in there with the math for the Wasserstein GAN.

  • @amirhosseinramazani757
    @amirhosseinramazani757 2 years ago

    I enjoyed your explanation. I needed something like this video to get a little deeper into the theory of VAEs. Thank you!

  • @mariasargsyan5170
    @mariasargsyan5170 2 years ago

    Thanks a lot, great series of lectures.

  • @amirm7373
    @amirm7373 2 years ago

    Stunning review!

  • @thegistofcalculus
    @thegistofcalculus 3 years ago

    I have a question about some of the main intuitions regarding Variational Autoencoders. The video is here: ruclips.net/video/EKURiwsRVlo/видео.html

  • @bluestar2253
    @bluestar2253 3 years ago

    I have watched many YT videos on GANs but this is by far one of the very best at explaining GANs. Thank you and keep up the good work!

  • @supercat4742
    @supercat4742 3 years ago

    awesome survey on attacks, thanks!

  • @bluestar2253
    @bluestar2253 3 years ago

    One of the best explanations on VAE on YT. Thank you and keep up the good work!

  • @madhusudanverma6564
    @madhusudanverma6564 3 years ago

    24:48: how does maximizing the VLB roughly maximize p(x)? Since x is given, p(x) should be constant.

    • @josephpalermo8898
      @josephpalermo8898 2 years ago

      p(x) is actually parameterized, so it's not constant.

  • @s4life91
    @s4life91 3 years ago

    great video

  • @prasenjitgiri919
    @prasenjitgiri919 3 years ago

    Paul, a very good explanation, but dude, c'mon, why so low volume!

  • @GermanTutorials
    @GermanTutorials 3 years ago

    Excellent review of this topic! Thank you very much!

  • @niraj5582
    @niraj5582 3 years ago

    Excellent content to get a quick overview.

  • @roro4787
    @roro4787 3 years ago

    Very interesting!

  • @hubertnguyen8855
    @hubertnguyen8855 3 years ago

    Very nice and comprehensive lecture. Thanks

  • @juandiego2045
    @juandiego2045 3 years ago

    Great approach to the problem, best explanation I've found.

  • @gorgolyt
    @gorgolyt 3 years ago

    Best explanation on RUclips. Exactly what I was looking for. Thorough, logical, intuitive.

  • @gana1597
    @gana1597 3 years ago

    Thank you!! Excellent explanation!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    wow, this is so well explained.

  • @alexandramalyugina4311
    @alexandramalyugina4311 3 years ago

    Love your channel Paul, you should make more videos, you're really great at explaining things! Can't wait for the next one!

  • @3koozy
    @3koozy 3 years ago

    Thank you very much, Paul, for this brilliant summary of the "Continual Learning" topic, you saved my day!

  • @juliusfucik4011
    @juliusfucik4011 3 years ago

    Very good presentation, but please watch the recording levels. It is not loud enough 👍

  • @CRTagadiya
    @CRTagadiya 3 years ago

    Great explanation. Thanks, professor.

  • @trongduong1047
    @trongduong1047 3 years ago

    very nice explanation!