Lesson 11 2022: Deep Learning Foundations to Stable Diffusion
- Published: 5 Jul 2024
- (All lesson resources are available at course.fast.ai.) In this lesson, we discuss various techniques and experiments shared by students on the forum: interpolating between prompts for visually appealing transitions, improving the update process in text-to-image generation, and a novel approach of decreasing the guidance scale during image generation. We then dive into a new paper called DiffEdit, which focuses on semantic image editing using text-conditioned diffusion models. We walk through the process of reading and understanding the paper, emphasizing the importance of grasping the main idea rather than getting bogged down in every detail.
We then embark on a deep exploration of matrix multiplication using Python, compare APL with PyTorch, and introduce the concept of Frobenius norm. We also discuss the powerful concept of broadcasting, which allows for operations between tensors of different shapes, and demonstrate its efficiency in speeding up matrix multiplication. The techniques introduced in this lesson allow us to speed up our initial Python implementation by a factor of around five million, including leveraging the GPU for massive parallelism!
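To make the matmul and broadcasting discussion concrete, here is a minimal sketch of the two from-scratch versions the lesson builds, plus the Frobenius norm. The function names are my own, and NumPy is used here as a stand-in for the PyTorch tensors the lesson actually uses; the broadcasting semantics are the same.

```python
import numpy as np

def matmul_loops(a, b):
    """Naive triple-loop matrix multiply: the slow baseline."""
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br, "inner dimensions must match"
    c = np.zeros((ar, bc))
    for i in range(ar):
        for j in range(bc):
            for k in range(ac):
                c[i, j] += a[i, k] * b[k, j]
    return c

def matmul_broadcast(a, b):
    """Remove the two inner loops via broadcasting.

    a[i, :, None] has shape (ac, 1); multiplied against b's (ac, bc),
    it broadcasts to (ac, bc), and summing over axis 0 gives row i of c.
    """
    ar, ac = a.shape
    br, bc = b.shape
    c = np.zeros((ar, bc))
    for i in range(ar):
        c[i] = (a[i, :, None] * b).sum(axis=0)
    return c

def frobenius(m):
    """Frobenius norm: square root of the sum of squared elements."""
    return (m * m).sum() ** 0.5
```

Timing these on even modestly sized matrices shows the broadcast version winning by orders of magnitude, since the inner loops run in optimized native code instead of the Python interpreter.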
0:00 - Introduction
0:20 - Showing student’s work
13:03 - Workflow on reading an academic paper
16:20 - Read DiffEdit paper
26:27 - Understanding the equations in the “Background” section
46:10 - 3 steps of DiffEdit
51:42 - Homework
59:15 - Matrix multiplication from scratch
1:08:47 - Speed improvement with Numba library
1:19:25 - Frobenius norm
1:25:54 - Broadcasting with scalars and matrices
1:39:22 - Broadcasting rules
1:42:10 - Matrix multiplication with broadcasting
Thanks to raymond-wu on forums.fast.ai for the timestamps, and to fmussari for the transcript.
You do not know how long I've been waiting for this course. Thank you so much!
Thank you, Jeremy, for all the contributions you have made and for democratising this knowledge into the public domain.
Great style of teaching! Really cracking deep into the stuff, but also keeping it fun and light. Thank you Jeremy!
The timing of this was the second best timing possible for me (the first one would have been when recorded), thank you for making this openly available.
You're so welcome!
You're really a code artist. I really like your approach of seeing the big picture :)
Thanks for the course. Very different from any other course I have done. I will complete this one.
14 videos at once. Jesus...
Guess Holy Week vacations are cancelled.
Excellent teaching. Really helped me understand the stable diffusion model. As a request, could you also provide similar training for GAN-style models like VQGAN or StyleGAN?
I see you went over Numba for faster operations; do you plan on teaching Cython, pybind11, or Codon? That could move the needle even more.
God is back with a new course :)
Any chance you can change the timestamp for 55:43 - 57:00 to "Homework" instead of "Experiments and benchmarks"?
Amazing video as usual of course.
Really? Another 10 videos about Stable Diffusion?!
E is not the expected value, but ELBO, the evidence lower bound.
Say more …
I think the paper should've just said "MSE".
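For context on this thread (my reading, not stated in the comments themselves): in the DDPM-style background that DiffEdit builds on, the full training objective is derived from the evidence lower bound, but the simplified objective that is actually optimized is a mean-squared error between the true noise and the model's noise prediction, which is presumably what the "just say MSE" remark refers to:

```latex
L_{\text{simple}} = \mathbb{E}_{x_0,\,\epsilon,\,t}\big[\,\lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2\,\big]
```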
From ruclips.net/video/Tf-8F5q8Xww/видео.htmlsi=_E-KtZmXlqsFfgXy&t=1255 to 21:09 I spat out my lungs laughing. Thanks for this part, one of the most useful of the entire course. 💪