man, this video is such a great explainer. I was confused about the use of skip connections for a long time, but he explained the intuition behind it very nicely.
This architecture is one of the truly brilliant ones in the world of deep learning in terms of its simplicity and efficiency.
Why didn't I find your channel before? Please upload more content, this is the best content on Deep Learning I have seen.
Thanks a lot :)
your under-10-minute explainer videos are goated
You might not find my comment since the video is too old, but man I just want to thank you for this video. I am a student who has always been interested in computer graphics and related fields like game engines, physically based rendering, ray tracing, etc, and I just didn't get the ML/AI hype everyone was on the past 2 years. I only ever managed to study ML basics for 2 weeks before I left it for good. But recently I joined a team where my friends were working on CNN-based projects, and that made me learn many basics about NNs and DL. This explanation of U-Net seals the deal for me, and I will strive to integrate my two interests into one and hopefully create something I love.
This channel deserves more subs!! Great content and delivery :)
The best video you can get explaining U-Net
Not even close lol
This was the best unet explanation I have ever seen
dude thankssssss, i thought this was another one of those things that'll take me 2 hours of youtube to *not* understand, but u saved me
Thanks, this is really good. One thing that would be helpful is if the example image itself were convolved like in the algorithm, to make it easier to visualise the algo.
Man, I like you! You are the best! I love how you simplify things and how careful you are to deliver the idea perfectly. Please keep up this great presentation style!
This was great, would love a video on diffusion transformers! It looks like they are taking off and replacing U-Nets as the backbone of new diffusion models.
I LIKED THE ANIMATIONS AND YOUR PRESENTING STYLE IN THE VIDEO. THANKS.
Clearly explained. What caused my confusion in the first place is, in the graphic in the original paper, why does the segmentation mask not have the same dimensionality as the input image?
thanks for the video, I am trying to use U-net for anomaly detection in time series and your video gave me the idea.
Dude, you're great. I'm from Portugal 🇵🇹 🟩🟨🟥🟥 and I'm learning Machine Learning and Neural Networks. Thank you very much! I loved how you teach. You are intuitive and dynamic. A person learning a difficult subject still manages to laugh while watching the videos. I loved it. I already subscribed and liked. I'm going to watch more of your videos now. Hugs from Portugal 😉
brilliant! thank you for this illustration!
Continue this series, very helpful
Thank you Rupert! Excellent, excellent explanation and intuition for this :)
very good explanation of U-NET
Extremely useful for beginners like me. This is very good
Yooo...this is quality content right here. Thank you so much for putting this out
great video mate, would love to see more brilliant stuff like this❤❤
I liked it, but you did not explain the role of the 3×3 kernel and how it scans the pixels of the image at each layer. Also, the reason for the downsampling is that it is more expensive to increase the size of the kernel at each layer, so we downsample the image to get the same relative size differential as if we had increased the kernel size. Apart from that, it's brilliant.
I'm not an expert, but here's what I understand. The conv filters on the early, full-resolution image will learn highly detailed features such as edges. The conv layers that run on downsampled (lower-resolution) images can't see edges because they're all fuzzy now, so they will learn larger-scale features such as shapes, then objects. As for how the 3×3 kernel (filter) scans, I believe it's just a standard convolution, which you can learn about from other videos.
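To make that concrete, here's a toy pure-Python sketch (my own illustration, not the video's code) of a 3×3 kernel scanning an image with stride 1 and no padding, plus the 2×2 max-pooling used for downsampling:

```python
def conv2d_3x3(image, kernel):
    """Slide a 3x3 kernel over a 2D image (no padding, stride 1)."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            # Weighted sum of the 3x3 window anchored at (i, j)
            acc = 0.0
            for di in range(3):
                for dj in range(3):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def max_pool_2x2(image):
    """Downsample by taking the max of each non-overlapping 2x2 block."""
    return [
        [max(image[i][j], image[i][j + 1], image[i + 1][j], image[i + 1][j + 1])
         for j in range(0, len(image[0]) - 1, 2)]
        for i in range(0, len(image) - 1, 2)
    ]
```

For example, a vertical-edge kernel like `[[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]` responds strongly wherever pixel values jump from left to right, which is the kind of detail only the full-resolution layers can see.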
Hey, just saw this first video from your channel and immediately subscribed :) Great explanation with visuals
Still don't know how it works
me when reading goodfellow all night
@@vardhan254 dude, how's the book? What would u suggest for a good read?
No one really knows how/why a CNN works!
@@arf9759 Exactly, and these make sense to the machine during training (when backpropagating errors). This is the reason filters are initialized randomly and then trained.
@@arf9759 I don't know what you are referring to, but there's actually a mathematical basis for why conv nets are used in image classification. Check out geometric deep learning by Michael Bronstein.
great, hope you continue with the videos
Thank you for the great explanation. On a basic level it helps to better understand U-Net.
Oh my god man. Awesome videos. Keep it up, I'm really enjoying them!
I’m interested in multiclass problems (recognising bike, human AND house). Also, what would you choose instead of a confusion matrix?
Thank you so much. Now I just need to figure out how to implement this for my project lol
best video for understanding the U-Net model
This was extremely helpful. Thank you
Very nice my friend, this has been most helpful
Very useful and great explanation.
Really impressive video! And fun work at the end!!!!! LOVE LOVE LOVE!!!
Thank you very much! :)
i love your presentation style
Thank you that was so helpful and cute! 🤩
You didn't explain how the skip connections are connected across. What is the data that's transferred and how is it incorporated into the output half of the U-Net?
Nice explanation
Absolutely amazing work 🎉
Thank you for creating this video! It's the best explanation of how a U-Net works that was easy to understand. The visual animation is superbly done!!
Great presentation! Easy to understand
Thank you very much for the time put into doing this video. Interesting and helpful :)
Amazing video, cleared everything!
Yooo the effort haha. Amazing Video!!!
Great summary, Great thanks
this is extreeeemely helpful, and funny
Thanks John!
Woooooow! Finally I understood it, really great explanation, thank you
great explanation thanks!
Very nice explanation. Thanks a lot.
wow awesome video and explanation
This video has been extremely useful. I subbed.
thanks, good explanation
If you want to use just the Decoder, how would you do it?
Great Explanation.
nice video, very helpful
What's the background music called in this video?
Amazing video!
Hi, thank u for this video. Can u pls do a video to explain YOLO?
best explainer!! great video, I had an "aaaaááaaa" moment at 8:05
This is Just awesome, great video
This explains inference (I think) by decomposing (dividing) and recomposing (adding) images. Is that accurate?
Would you please make a presentation on 3D U-Net? That would be really appreciated.
hi, it's very helpful. How can I get the PowerPoint for it?
such a well made video
Hi. I find the video very interesting. As I'm at the beginning, I'm a little confused. Please, can you also provide a PDF file? Thank you.
I still don't understand why the output is x2 or x3 or x4. Why is that the case?
Very helpful
Helpful
Thank you very much bro...
thank you so much!
awesome! Can you also make similar videos for UNet++ and UNet3+ please??? Thank you so much.
Glad you liked it! It's not currently on my list of to-do videos as I like to cover the most popular fundamentals at the moment, but I'll let you know if I get around to it! :)
very nice dude thank you so much
Thanks for sharing!
If downsampling works by max-pooling, how does upsampling work? In traditional image processing, we would just interpolate image colors, but how does the network apply its "convolution" in this process? I would understand "deconvolution", but in my mind it wouldn't work here.
Maybe transposed convolution?
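For what it's worth, here's a toy pure-Python sketch (my own illustration, not the video's code) of a 2×2, stride-2 transposed convolution: each input pixel "stamps" a weighted 2×2 kernel onto a twice-as-large output grid, and because the stride equals the kernel size, the stamps don't overlap. The kernel weights are learned in a real network; with an all-ones kernel this reduces to nearest-neighbour upsampling.

```python
def transposed_conv_2x2(image, kernel):
    """Upsample a (h x w) image to (2h x 2w): each input pixel stamps
    a weighted copy of the 2x2 kernel onto the output (stride 2)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            for di in range(2):
                for dj in range(2):
                    out[2 * i + di][2 * j + dj] += image[i][j] * kernel[di][dj]
    return out
```

So the network doesn't interpolate colors directly; it learns the kernel that decides how each low-resolution pixel spreads into the higher-resolution map.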
bro, immediate subscribe!
nice effort, but the background music is distracting.
I found this while looking up UNet ELI5...
😭😭
cool videos
I feel like this is more a description for experts than an actual explanation of how and why it works.
Questions I'm left with:
What is the purpose of downsampling/upsampling (I'm guessing performance?)
How is segmentation actually done by the u-net?
How is feature extraction actually done?
What are max pooling layers?
What does "channel doubling" mean, and what does it achieve?
How does the encoder know "these are the pixels where the bike is"?
Why is it beneficial to connect the encoder features to the decoder features at each step, versus in the last step?
How does unet achieve anything other than downscaling/upscaling performance efficiency? Where are the actual operations to derive features?
How is u-net specifically applied for various use cases like diffusion? What does diffusion add or change, for example.
(Disclaimer: I am a beginner, and this is not intended to be a complete answer.)
You should read about convolutional layers and pooling layers to better understand this video. At any rate:
A colored image has three channels: R, G, and B. A convolutional layer is specified by some spatial parameters (stride, kernel size, padding) and by how many filters there are; the number of filters is the number of channels of the output. You can think of each filter as trying to capture different information. Doubling the channels, therefore, means using double the number of filters when using a stride of 2.
The segmentation is done just like any ML task: the training data consists of pairs of images and their annotated versions. I think it's often hard to decipher the inner workings of a particular neural network, and your question can/should be asked in a more general way: how do neural networks learn?
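To illustrate the "channels = filters" point with a hypothetical shape-bookkeeping helper (my own sketch; the 572×572 input size is the one used in the original U-Net paper): a conv layer's output channel count is simply its filter count, so "doubling the channels" at each encoder stage just means doubling the number of filters.

```python
def conv_output_shape(in_shape, num_filters, kernel=3, stride=1, padding=0):
    """Shape bookkeeping for one conv layer: (C, H, W) -> (F, H', W').
    Output channels = number of filters F; the input channel count C
    only sets each filter's depth, not the output shape."""
    c, h, w = in_shape
    h_out = (h + 2 * padding - kernel) // stride + 1
    w_out = (w + 2 * padding - kernel) // stride + 1
    return (num_filters, h_out, w_out)

# Encoder stages double the filter count: 64 -> 128 -> 256 -> ...
print(conv_output_shape((3, 572, 572), 64))    # (64, 570, 570)
print(conv_output_shape((64, 284, 284), 128))  # (128, 282, 282)
```

Note the spatial size also shrinks by 2 per unpadded 3×3 conv, which is why the paper's masks end up smaller than the input.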
good stuff
Thanks a lot lot. I understand it!
DALL-E 3 is coming to GPT-4 and it can write text!
Perfect
Now how did they code it?
Hahaha, well there are actually plenty of online code implementations available, but I will see if I can get round to a code tutorial on the U-Net sooner rather than later!
@@rupert_ai can u provide one
If anyone wonders how to concatenate the features when they don't match in size... they crop them.
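For the curious, a toy pure-Python sketch (my own illustration) of that crop-then-concatenate skip connection, with feature maps as nested (C, H, W) lists: the encoder map is center-cropped to the decoder map's spatial size, then the two are stacked along the channel axis.

```python
def center_crop(feat, target_h, target_w):
    """Crop a (C, H, W) feature map to (C, target_h, target_w) about its center."""
    h, w = len(feat[0]), len(feat[0][0])
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return [[row[left:left + target_w]
             for row in channel[top:top + target_h]]
            for channel in feat]

def skip_concat(encoder_feat, decoder_feat):
    """U-Net skip connection: crop the encoder map to the decoder map's
    spatial size, then stack the two along the channel axis."""
    h, w = len(decoder_feat[0]), len(decoder_feat[0][0])
    return center_crop(encoder_feat, h, w) + decoder_feat
```

The concatenated map then has the encoder's channels plus the decoder's channels, which is why the decoder-side convs in the paper's diagram take in twice the channels they put out.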
Great video champ
Make a video on I-JEPA
U NETS RULEEEEEEEEEEEEE
Nice Comment: Useful 👍👍😎😎
nice explanation, but why the distracting background music?
Agreed. Good explanation but I wish people would stop using background music.
Me watching the video at 1.5x 😂😅
hope you can come back to life
Is he dead?
@@c.e1187 nah, just busy I imagine. He was active on GitHub in December.
bro why did u stop making videos i need you lmao (its a painful lmao.)
nice video, but i hate the music in the background (so disturbing)
TIGHT TIGHT TIGHT
goodgood
You are very funny!
music is too distracting... :(
no