I wish YouTube gave us the option of liking a video every minute. This idea came to my mind for the first time in this video; I really want to give it a like for every small bit of concept, because it is explained so well. Respect, Sir.
Thank you for the video. I think it's the best video for basic and intermediate levels.
I am speechless. Your tutorials are beyond amazing. Thank you so much for all you have done!
Glad you like them!
The best explanation I have found on the internet. Thank you.
I cannot express my good wishes for you in words.
You are more than the best.
Thank you so much.
You are most welcome
Thanks, Professor! There's so much knowledge in your channel that I'll need months to go through it, and it's right in the deep learning area I want to focus on. As a Computer Engineering student taking a Veterinary course, blood sample analysis may be my final project. Thanks from Brazil!
I am sure you'll benefit from my tutorials if your goal is to analyze images by writing code in Python.
Pickup line for data scientists:
Why is the U-Net architecture so beautiful?
Because it looks like U.
It's actually crazy how people just make tutorials sharing this knowledge for free.
This is the first video I'm watching on this channel, and I need to say huge THANK YOU. You helped me connect so many dots that were all over the place in understanding this. Amazing.
Thank you very much for your kind feedback. I hope you’ll watch other videos on my channel and find them useful too.
@@DigitalSreeni Of course, I've already watched the full course and the next thing is time series forecasting. Thanks for your reply and everything you do!
Why do the feature space, and thus the depth, increase as we go down? Is this a design choice or a consequence?
It's confusing to me that the first convolutional operation in each block increases the depth while the second one, which seems identical, does not.
Incredible that someone as dedicated as you gives access to such great knowledge. Thank you; you help create better science.
Thank you, Professor. This helps a lot in my understanding of deep learning.
Hi everyone. This is a really amazing video on U-Net.
But what about U2-Net? Is it better?
Thank you. Can you please explain what it means to add C4 to U6 in the first upsampling step?
Sir, please do a video on segmentation of the BRATS dataset.
Thanks for your amazing presentation!
Great explanation SIR
You made it simple for us.
Glad to hear that
Is there a reason why two convolutions are always applied after the max pooling step? Is it a convention to always use two?
No reason. It may appear that 2 convolutions are added after max pooling in some architectures, but that is not the general case.
Basically, ReLU is applied to maintain non-linearity in the network.
Great Video! Really helped me understand U-Nets for my own use!
Great to hear!
You are doing great work; I have learnt a lot from you. Could you please cover segmentation using DeepLab? Thank you.
14:25 In upsampling (before adding C4), why does the 8*8*256 get transformed to 16*16*128? Why not 16*16*256?
Sir, I am doing image segmentation with a COCO-like dataset. I have already seen your tutorials but am still not able to implement it.
Amazing lecture. You could also create one on UNet++ and Attention U-Net. I was looking for these topics and wish you had videos on them... :)
Great suggestion!
Well explained, Sreeni. You have amazing teaching skills and your explanations are very good. I have watched many videos on YouTube, and you are one of the best. Thanks for sharing the information.
Thank you so much 🙂
Thank you for this very helpful video. In the U-Net diagram there are 3 output features, but your implementation only has one. I'm confused as to why.
As I'm just starting to dig into this field I'm not quite sure, but my suggestion would be that the output has to be a segmented image. Segmented images have value 1 for the segmented part and value 0 for the remaining, non-segmented part of the picture. Usually grey values are used for segmentation, and grey values need only one channel.
13:54 Do check: the second-to-last layer on the decoder side has wrong connections!
How is it wrong?
@@kunalsuri8316 In the second-to-last layer of the decoder (corresponding to P1), the input it feeds to the last decoder layer is incorrect. Just check the original paper; one can easily notice it.
Starting from 3 channels and applying 96 filters to each channel, shouldn't we get 288 channels? Also, in the max-pooling step, how do we go from 96 channels to 256 channels? Shouldn't we still have 96? Sorry if these questions seem very basic, but I am new to these things! Thank you!
Thanks for sharing! Very well presented and super informative. Saving this video
Thanks for your video, but I have a question regarding the U-Net and I hope you can answer it.
From my understanding, the U-Net ends with an image of the same size as the input, but how can we predict the class of each pixel?
I understand that in a classification problem the last convolution is followed by flattening and a fully-connected layer with n classes as outputs, but I don't understand how we get the result in segmentation.
The convolution and pooling operations (downsampling) capture the 'what' information in the image but lose the 'where' information, which is required for semantic segmentation (pixel level). To recover the 'where' information, U-Net uses upsampling (the decoder), converting low resolution back to high resolution. Please read the original paper for more information: arxiv.org/abs/1505.04597
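To make the encoder/decoder idea concrete, here is a minimal Keras sketch (my own illustration, not the video's code) of one downsampling block, a bottleneck, and one upsampling block with a skip connection; all layer sizes are arbitrary choices for the example:

```python
from tensorflow.keras import layers, Model

inputs = layers.Input((128, 128, 1))

# Encoder: convolutions learn 'what', max pooling discards 'where'
c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
p1 = layers.MaxPooling2D(2)(c1)                          # 64x64x16

# Bottleneck at the lowest resolution
c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(p1)

# Decoder: upsample back to high resolution and re-inject 'where'
# information through the skip connection from the encoder
u1 = layers.UpSampling2D(2)(c2)                          # 128x128x32
u1 = layers.concatenate([u1, c1])                        # 128x128x48
c3 = layers.Conv2D(16, 3, activation='relu', padding='same')(u1)

outputs = layers.Conv2D(1, 1, activation='sigmoid')(c3)  # per-pixel prediction
model = Model(inputs, outputs)
model.summary()
```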
Can you give the implementation for unsupervised semantic segmentation as well?
Hi Sreeni, nice explanation; it cleared up my doubts. Thanks. Do you have any videos on image segmentation with pretrained models?
First of all, thanks for your work here on RUclips; when I'm done with your series I will definitely support you. One question: I thought that in the upward path you add the upsampled features and the corresponding ones from the contracting path, but in your code you concatenate?
He's concatenating and then using a convolution layer. This has a similar effect to adding, since the convolution operation sums the results after multiplication.
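A small sketch of the point above (my illustration; shapes chosen to match the discussion): concatenation stacks the channels and the following convolution mixes them, whereas element-wise addition requires matching channel counts.

```python
from tensorflow.keras import layers

skip = layers.Input((16, 16, 128))   # features from the contracting path
up   = layers.Input((16, 16, 128))   # upsampled decoder features

# U-Net style: concatenate along the channel axis, then convolve
merged = layers.concatenate([skip, up])                 # 16x16x256
fused  = layers.Conv2D(128, 3, padding='same')(merged)  # back to 16x16x128

# Additive variant: element-wise sum, which needs equal channel counts
added = layers.Add()([skip, up])                        # 16x16x128
```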
Great work... Which tool do you use to create image masks?
What implication do the cross-links have for backpropagation in the U-net architecture?
5:12. Which architecture would be good for a cassava leaf disease detection dataset?
Excellent explanation.
Glad it was helpful!
Still confused about how the concatenation operation works, e.g. adding a 16x16x128 feature map to an upsampled 8x8x256; the dimensions are different.
You'll be concatenating data with the same spatial dimensions, not different ones. Please have a second look at the graphic describing the architecture: the two layers fused together are concatenated along the channel axis, forming a feature map with the combined channel dimension.
Great demonstration, thank you so much.
Can you make a video in which your code detects the orientation of a page from a photograph of it, for example when the page is upside down or rotated 90° left/right?
Nice video, thanks! One question: this architecture is for semantic segmentation, right? How would the final layer (or layers) differ for instance segmentation, where the output would be bounding boxes or coordinates of the instances?
Instance segmentation requires a different architecture; you cannot just swap the final layer to convert one application into the other. I only wish life were that easy!!!
Thanks for the video. So is it transposed convolution or upsampling in the expansive path? They are two different things.
It can be either. Please watch the following video if you are interested in the differences between the two. You can use either, as the idea is to get back to the high-resolution image from a smaller size.
ruclips.net/video/fMwti6zFcYY/видео.html
Hi sir, I was wondering if you could help me train my model. I am trying to create a dataset where only the element of interest is visible and the rest is blacked out with a transparent background. Will this work well, or should I create a binary mask by coloring the element of interest white and keeping the background black?
Thank you for your explanation.
Can someone tell me, with examples, why the U-Net architecture uses 'copy and crop' in every block?
Great job, sir. Salute to you ❤
Good explanation, thank you.
Very good lecture.
There is just one thing I am unable to understand: the feature space/dimensions. Please reply with an answer.
Thanks.
Not sure where your confusion is... I am referring to the filtered results (after convolutional filtering) as the feature space. This is where you have multiple responses for every input image, and these responses contain the information about features in the image.
I wanted to ask about the feature space that was 64 at the start and then 128 in the second block of the U-Net.
Does 64 mean 64 filtered output results? Is that true?
Or can we say 64 filters were applied, then 128 filters, and so on?
Great explanation... Please keep posting such high-value videos.
If we have less data, should we go for a transfer learning or a classical machine learning approach?
As usual, an amazing tutorial! I just want to confirm: in the training phase, all images have to be of the same shape (width, height and depth), right? What if my training data varies in shape? Do I need to resize the images?
Also, I will be really thankful if you can make a tutorial on Mask R-CNN. It's also a very good algorithm that can be used for segmentation.
Thanks a lot for your time.
Yes. Always apply transformations to the images (like resizing, rotation, etc.).
I see, thanks for the reply. Image rotation will be performed for data augmentation, but regarding image resizing, I think it's a requirement of the algorithm.
@@mqfk3151985 Yes, you will almost never find images that are all the same size, unless you use a standard competition dataset.
So better to resize :)
You will represent your data as a numpy array, so you need all images to be the same size. Yes, it is customary to resize images to a predefined shape in machine learning.
I will consider making Mask R-CNN videos.
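For illustration, a minimal sketch of that customary preprocessing (the folder name and target size are hypothetical):

```python
import os
import cv2
import numpy as np

SIZE = 256
images = []
for fname in sorted(os.listdir('images/')):          # hypothetical folder
    img = cv2.imread(os.path.join('images/', fname), cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (SIZE, SIZE))              # force a common shape
    images.append(img)

X = np.array(images, dtype=np.float32) / 255.0       # (N, 256, 256)
X = np.expand_dims(X, -1)                            # (N, 256, 256, 1) for the network
```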
I want to do 3D medical image segmentation. Can you tell me how to start? I want the input to be an .obj file and the output to be either .dcm files (one for each segment) or .obj files.
Very well done!!! More videos, please.
Thank you for the video; this is the best one. My only request: please make the same type of video on Mask R-CNN for image segmentation. I have a project on this that I have to submit this week, but Mask R-CNN is confusing, so please help me with that.
First of all, thanks. Can you provide the image you used, the architecture diagram?
You can search for U-net on Google. I did the same and created my own, to make sure I do not infringe on copyright.
@@DigitalSreeni Sir, you are great. It would be a great help if you could upload a video on semantic segmentation using the Double U-Net model.
Thank you for sharing. However, do you have the training part?
Please keep watching the videos in this playlist; I have the training and segmentation parts covered.
Depth estimation using neural networks: please make a lecture on that.
Input size for U-Net?
Great explanation!
Glad you think so!
Very helpful video thank you!
Glad it was helpful!
Thank you very much for this explanation. I have one question: could I use this same method on an RGB image? Or does it have to be grayscale? Thanks!
This is a late reply, but yes. You have to expand your thinking... You can't assume that just because someone made a tutorial one way, that's what you have to do. Ask yourself these questions instead of asking for help: What is a grayscale image? (1 is white, 0 is black, in between is gray.) Can I apply this concept to RGB? (Three color channels, each following the same principle.) How does my code change? (The input should have three channels; maybe I need to flatten differently.) Good luck learning!
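Concretely, the only structural change for RGB input is the channel count of the input shape; a quick Keras sketch (my own illustration):

```python
from tensorflow.keras import layers

gray_in = layers.Input((256, 256, 1))   # grayscale: one channel
rgb_in  = layers.Input((256, 256, 3))   # RGB: three channels

# Each Conv2D kernel spans all input channels, so the output shape
# is (None, 256, 256, 16) in both cases.
g = layers.Conv2D(16, 3, padding='same')(gray_in)
r = layers.Conv2D(16, 3, padding='same')(rgb_in)
print(g.shape, r.shape)
```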
Amazing content as usual, well done :)
Thank you and Respect Sir
You are welcome.
thank you, this video is the best
First of all, thank you for the great explanation. I wanted to ask about the slides: are they available?
Sorry, I didn't plan my presentation slides very well, so unfortunately I cannot share them. Also, I often use images and content from Google searches that come with copyright; I cannot legally distribute them.
Thanks bro. Cheers!
classy explanation!
Do you have slides for all these videos?
Hello sir, can you please make a video on brain tumor segmentation using the U-Net architecture integrated with a correlation model and fusion mechanism?
Hello, I have RGB masks. Is it possible to do image segmentation with them? Thanks in advance.
Yes. I have done that here. ruclips.net/video/jvZm8REF2KY/видео.html
Question: What happens if it is 128x128x1? Will it still become 128x128x16?
Fantastic video!
17:56 Does the model need to be trained after compiling?
Compiling just defines the model; you need to train it on real data to update the weights and customize it for a specific job, for example identifying cats and dogs.
@@DigitalSreeni ok thanks
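In Keras terms the distinction looks like this (a sketch; model, X_train and y_train are hypothetical placeholders):

```python
# compile() only configures the optimizer, loss and metrics...
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# ...fit() is what actually updates the weights using real data
model.fit(X_train, y_train, batch_size=16, epochs=25, validation_split=0.1)
```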
Please do a W-net tutorial
Many thanks, well done !
Many thanks!
I can't find the code. Please tell me the name of the folder.
Awesome!
I'm trying to use a GAN to augment my images and masks, which I will use as input to my semantic segmentation models, but I can't find any tutorials online.
Most of them are for classification datasets. Any advice, please?
Sir, is this U-Net architecture for multiclass segmentation or binary segmentation?
Kindly respond.
This is binary. I got many other videos on multiclass.
@@DigitalSreeni OK sir... Thank you for your response. Sir, I have one more question: when we combine T2, FLAIR and T1ce, do we call that combined image a single-channel image or a 3-channel image? Please reply, sir.
Best resource
Why don't the first parameters work very well, and how can we determine the best parameters?
Not sure what you mean by parameters. If you are asking about the hyperparameters that go into defining your network, then there is no easy answer. People are still researching the effect of parameters for various applications.
Thank you, very informative tutorial.
Glad it was helpful!
I still don't get it. What exactly are the 16, 32, 64, 128, 256 that are called features in each pair of layers?
Think of it as applying 16 different digital filters, then 32, then 64, and so on... Therefore, if you take a single image of size 256x256 and apply 16 different filters to it, you will end up with 16 responses from this single image, i.e. 256x256x16 data points.
@@DigitalSreeni What is the design principle behind these filters, any rules of thumb? Are they generated at random, or are they manually configured?
Thanks again for sharing this video!
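On the follow-up question: in frameworks like Keras the filter weights are not manually configured; they are initialized randomly (Glorot uniform by default in Conv2D) and then learned during training. A quick sketch of the shape arithmetic described above:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(1, 256, 256, 1).astype('float32')  # one 256x256 grayscale image
conv = tf.keras.layers.Conv2D(16, 3, padding='same')   # 16 learnable 3x3 filters
y = conv(x)
print(y.shape)  # (1, 256, 256, 16): one filtered response per filter
```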
Thanks for the video.
Hi! Which is the best segmentation technique I can use in Python for cell image counting / object detection / size measurement?
The best method is always the traditional approach: use the histogram for thresholding and then some operators like open/close to clean up. If that is not possible, the next best option is traditional machine learning (extract features and then Random Forest or SVM); I covered that topic on my channel. Finally, if you have the luxury of thousands of labeled images, use deep learning.
@@DigitalSreeni Please let me ask one more question. My goal is to avoid manual settings by using macros or Python over a large number of cell images taken across a wide microscale range. Any suggestions there? Do you have any references for deep learning?
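As a rough illustration of the classical pipeline suggested above, here is a scikit-image sketch (the file name is hypothetical, and bright cells on a dark background are assumed):

```python
from skimage import io, filters, morphology, measure

img = io.imread('cells.png', as_gray=True)            # hypothetical input image

thresh = filters.threshold_otsu(img)                  # histogram-based threshold
binary = img > thresh                                 # assumes bright cells, dark background
binary = morphology.binary_opening(binary, morphology.disk(3))  # open to clean up noise

labels = measure.label(binary)                        # connected components = cells
print('cell count:', labels.max())
for region in measure.regionprops(labels):
    print('area (px):', region.area)                  # size per detected cell
```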
Hello sir, please can you provide links to videos on creating our own dataset from scratch for satellite images? Please sir, it's very important. I hope you will...
You just need to annotate your images using any of the image annotation tools out there. I use www.apeer.com as that is what our team does at work.
Could someone explain why, on upsampling, the number of feature maps reduces by half?
Upsampling does not reduce the feature maps by half; it expands the spatial dimensions by 2x, since upsampling is like the opposite of max pooling. The feature maps are reduced by half because that is what we defined in our network as part of the convolution operation. The number of features has nothing to do with upsampling.
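A quick shape check illustrating this (my sketch): UpSampling2D touches only the spatial dimensions, and it is the following convolution that sets the channel count.

```python
import numpy as np
import tensorflow as tf

x = np.zeros((1, 8, 8, 256), dtype='float32')
up = tf.keras.layers.UpSampling2D(2)(x)                 # (1, 16, 16, 256): channels untouched
out = tf.keras.layers.Conv2D(128, 2, padding='same')(up)  # (1, 16, 16, 128): conv halves channels
print(up.shape, out.shape)
```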
Sir, how do we use 3D images, and what is a 3D image? Can you please make a video on that?
I will try to do 3D image processing some day.
@@DigitalSreeni Do you have a video on 3D?
Thank you sir, very nice video.
Most welcome
Hello! I want to ask something: can I train my U-Net model with input training images having only a single channel, like (img_height, img_width, 1) or (img_height, img_width)?
Yes. Please watch my other videos on U-Net. Every network expects certain dimensions, and you can reshape your arrays to fit them. For example, if you have grey images with dimensions (x, y, 1) and the network takes 3 channels, just copy the image 2 more times to convert it to (x, y, 3).
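For example, a one-liner with NumPy (a sketch):

```python
import numpy as np

gray = np.random.rand(256, 256, 1)       # (x, y, 1) grey image
rgb_like = np.repeat(gray, 3, axis=-1)   # duplicate the channel twice -> (256, 256, 3)
print(rgb_like.shape)
```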
@14:11 Can anyone please explain how the size changes from 8*8*256 to 16*16*128 due to upsampling? Why does the number of channels get reduced in this step?
If you check out part 2 of this video, you can see that it uses Conv2DTranspose (transposed convolutions) for upsampling instead of simply UpSampling2D (which repeats values to match the desired dimensions). Because the filter number is set to 128, we end up with 8*8*256 -> 16*16*128. Check this for more details: www.jeremyjordan.me/semantic-segmentation/#upsampling
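A minimal shape check of that step (my sketch): a single Conv2DTranspose with strides=2 doubles the spatial size while its filter count sets the channels.

```python
import numpy as np
import tensorflow as tf

x = np.zeros((1, 8, 8, 256), dtype='float32')
y = tf.keras.layers.Conv2DTranspose(128, 2, strides=2, padding='same')(x)
print(y.shape)  # (1, 16, 16, 128)
```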
Thank you again.
Would you please tell me, is it possible to use data augmentation before semantic segmentation, and how can I apply the same function to both the image and the mask?
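One common way to do this in Keras (an assumption on my part, not from the video) is to drive two ImageDataGenerators with identical settings and a shared seed, so image and mask receive the same random transform; X_train and y_train are hypothetical arrays:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

args = dict(rotation_range=20, horizontal_flip=True, zoom_range=0.1)
img_gen  = ImageDataGenerator(**args)
mask_gen = ImageDataGenerator(**args)

seed = 42  # the shared seed keeps both streams synchronized
img_flow  = img_gen.flow(X_train, batch_size=16, seed=seed)
mask_flow = mask_gen.flow(y_train, batch_size=16, seed=seed)

train_gen = zip(img_flow, mask_flow)  # yields (augmented_images, augmented_masks)
```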
For better results, what changes do we have to make in the U-Net, sir?
Many things. For example, you can try replacing the generic encoder (downsampling) part with something sophisticated like EfficientNet.
Hello, great content! Where is the code for the U-Net? Can you post the link here, please?
github.com/bnsreenu/python_for_microscopists
Hi, can you also create a tutorial on U-Net-based segmentation for the ISBI 2012 dataset or the BRATS dataset?
I already did Brats. Please check my videos 231 to 234.
Very nice
Thanks a lot for your video
You are most welcome
Hi, Sir. Is chapter 72 missing?
Yes, it is missing because it was about getting system ready for GPU and the process does not make sense any more with new TensorFlow. I am planning on recording a new video on the topic.
Is this Keras code?
How much memory does the original unet require?
Not a simple answer. Here is some good reading material on this topic. imatge-upc.github.io/telecombcn-2016-dlcv/slides/D2L1-memory.pdf
Please, sir, make a video on the SLIVER07 dataset.