Hello! YouTube recommended this video to me, so I started with this one, but I can see that you have more than 208! I have one question; maybe there is a video where you explain this. If so, please recommend me that video. If Keras works with jpg or png, is it possible to work with .tiff with reflectance units (0-1)? Thank you so much.
Great videos. You are working with images which do not have more than 3 channels (3 bands). Do you think it is possible to use these models with images with more than 3 channels? I'm asking because I'm working with hyperspectral images.
Yes, of course. I've covered multichannel images in a few other videos, for example search for BraTS videos on my channel. Here is the first one in that series: ruclips.net/video/0Rpbhfav7tE/видео.html
I don't understand the labels of your classes. I have multi-labelled colored images, where each class is either red, green, yellow, etc. If I look into the image values they are between (0-255), so how did you make them 1, 2, 3... and should I change mine too?
I have watched your videos several times to get my master's thesis right. I have a question, how can I pass the weight information from SegSem to a GAN?
Had to translate to find out what that means, apparently Thank you in Indonesian. Thank you too for watching the video, I hope you found it to be useful and educational.
Suppose I have a train set and a test set of 80k and 10k images respectively. Do I have to label all of the images in both the train and test sets?
Your videos are wonderful. I had a problem on line 88. It said "class_weights = class_weight.compute_class_weight('balanced', np.unique(train_masks_reshaped_encoded), train_masks_reshaped_encoded) *** TypeError: compute_class_weight() takes 1 positional argument but 3 were given". Could you help me?
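If it helps, this TypeError usually comes from newer scikit-learn versions, where compute_class_weight's arguments became keyword-only. A minimal sketch of the keyword-style call, using made-up toy labels:

```python
import numpy as np
from sklearn.utils import class_weight

labels = np.array([0, 0, 0, 1, 2, 2])  # toy integer-encoded mask pixels

# newer scikit-learn requires these to be passed by keyword
weights = class_weight.compute_class_weight(class_weight='balanced',
                                            classes=np.unique(labels),
                                            y=labels)
# 'balanced' weight for class c = n_samples / (n_classes * count_c)
print(weights)
```

The same keyword style works with the video's train_masks_reshaped_encoded array in place of the toy labels.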
Sir, thank you for the wonderful class. I am getting an error: cannot import name 'MeanIoU' from 'keras.metrics' (C:\Users\sunny\Anaconda3\lib\site-packages\keras\metrics.py)
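For what it's worth, importing MeanIoU from tensorflow.keras rather than the standalone keras package usually resolves this, since MeanIoU ships with tf.keras. A small sketch with toy labels:

```python
from tensorflow.keras.metrics import MeanIoU  # tf.keras, not standalone keras

m = MeanIoU(num_classes=2)
m.update_state([0, 0, 1, 1], [0, 1, 1, 1])  # ground truth vs predictions
print(m.result().numpy())  # mean of the per-class IoU values
```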
When we download the data from the link, we do not see the images and masks sub-folders. We only see images_as_128x128_patches.tif and similarly masks_as_128x128_patches.tif. How do we extract the images and masks from these, can you please give the code. Might seem elementary, but it will be helpful.
No, you should not convert your labels to categorical when using sparse categorical cross-entropy. Sparse CCE can work with integer encoded labels. For mutually exclusive classes you can use either loss functions, SCCE or CCE as long as you make sure the labels are encoded the correct way.
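As a quick illustration of that point (a toy sketch, not code from the video): sparse categorical cross-entropy on integer labels gives the same value as categorical cross-entropy on the one-hot version of the same labels.

```python
import numpy as np
import tensorflow as tf

y_int = np.array([[0, 1], [2, 1]])                    # integer-encoded mask
y_pred = tf.nn.softmax(tf.random.uniform((2, 2, 3)))  # per-pixel class probabilities

scce = tf.keras.losses.SparseCategoricalCrossentropy()(y_int, y_pred)
cce = tf.keras.losses.CategoricalCrossentropy()(tf.one_hot(y_int, depth=3), y_pred)
# the two losses agree; only the label encoding differs
```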
This channel deserves millions of subscribers. Thanks for the amazing contents.
True, very true. He could sell this course for at least 100 dollars, but he has done it for free.
Exactly!
Needed this so much. Seems like every time I run into a problem with my research you put out a video answering my prayers. Thanks Sreeni.
One of the best channels for Research Students of Computer Vision discipline.
Thank you :)
Exactly what I was looking for, you are a very knowledgeable person with a great talent for explaining things!!! Please don't stop!
Ajarn, I can fully understand the effort and time you are putting in to create this content... The real value of gold is not known to the one who wears it; it is known to the miners who take out tons and tons of slush to extract one ounce of gold... Pranams... You have an amazing sense of sequels, and I am sure you are not going to stop the U-net sequels with this one.
Thank you Sreeni for the LabelEncoder path. Everywhere else it was simply -1, but my masks were in color, and I only realised that difference after watching this tutorial... super helpful insight.
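For anyone else hitting the same thing: the LabelEncoder step mentioned here maps arbitrary mask pixel values to consecutive integers 0..n-1. A minimal sketch with made-up gray values:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

masks = np.array([[29, 150], [226, 29]])  # arbitrary gray levels in a mask

le = LabelEncoder()
encoded = le.fit_transform(masks.ravel()).reshape(masks.shape)
print(encoded)  # consecutive class ids 0..2 instead of raw gray values
```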
Your U-net videos are very helpful for me.
I would appreciate if you could produce videos on instance segmentation as well and particularly Mask RCNN model. Thanks a lot. 🙏🙏
Best YouTube channel for deep learning researchers.
I'm glad you think so :)
Thanks for the multiclass segmentation. For segmentation, and image-related deep learning in general, your videos are the best.
Just one word: great! Please keep going, Sir. Thank you so much.
Thanks, I was just working on a multiclass segmentation with U-Net.
Thank you for your tutorial. I would like to request an open-slide tutorial for generating patches from the whole-slide images. This is very important for the analysis of histopathology images.
he did already, follow this link:
ruclips.net/video/7IL7LKSLb9I/видео.html
Wow, the best explanation of these concepts I have seen in a long time. Thanks for this.
I am a simple man. I see your new video I press like!
Sreeni thank you so much for all the work you put into these videos. It has helped me so much get started with segmentation
You are so welcome!
Amazing content from my, and many others', professor of deep learning. A nice suggestion for your next videos could be to add the versions of the installed libraries and modules in each notebook. That's it, thanks.
I needed this.
Hi Sreeni,
Many thanks for the very useful materials. I tried your code and have the following question for you:
When I tried to do the same as you did in the code, i.e., commenting out class_weight=class_weights, I cannot get a reduction in the loss at all! And when I execute with class_weight=class_weights, I get "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". Can you please give me some guidance? Appreciated.
Is this on a different data set or same one I showed? If it is the same data set then the same code should work, please make sure you haven’t skipped any steps. Also, try different kernel initializers, optimizer and loss function.
class_weight=class_weights is NOT working either on the given dataset or any other type of dataset. Can you kindly give us any suggestions?
Thank you. Your tutorials are life savers for me
I've just found what I was looking for.
Thank you!
Glad I could help!
Thanks Sreeni. You always bring new ideas to the AI world.
My pleasure 😊
Thank you very much for your videos. They have been of immense help for a histopathology cell counting project I am working on. I am trying to investigate the impact of auxiliary outputs on UNets for microscopic cell detection and counting but have been stuck with a bug for over a week now. Most documentation online hasn't helped.
My auxiliary outputs use various blocks of the U-Net model as inputs and, as such, output shapes different from the original input size of (256,256,3). So the main challenge is how to declare this during training so that it is taken into consideration.
Error Message obtained: ValueError: Error when checking target: expected aux1 to have shape (32, 32, 1) but got an array with shape (256, 256, 1)
Model Summary:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_6 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
...
__________________________________________________________________________________________________
aux1 (Conv2D) (None, 32, 32, 1) 33 activation_74[0][0]
__________________________________________________________________________________________________
aux2 (Conv2D) (None, 64, 64, 1) 33 activation_76[0][0]
__________________________________________________________________________________________________
aux3 (Conv2D) (None, 128, 128, 1) 33 activation_78[0][0]
__________________________________________________________________________________________________
original (Conv2D) (None, 256, 256, 1) 33 activation_72[0][0]
Finally, a video that explains the details. Thanks.
Thank you, your tutorials are one of the best.
Thank you for your tutorials and lectures.
My pleasure.
Thank you very much for your video, it helped a lot. Only thing I need to ask you about is the calculation of the IoU. I have a very unbalanced dataset, and I ran the model on it several times with several different loss functions, including some that are explicitly made to handle data unbalance, but every single time the IoU confusion matrix looks as if my model classified everything as background (i.e. the most common "class"). Since I'm sure the data is correctly labelled and I doubt there can be something wrong with the model especially after running it with different functions, I think there is something wrong with the IoU calculation. Do you have any idea? Thank you.
Great job Sreeni, I'm learning a lot.
Great lectures; I'm following along with your series.
Great to hear!
Excellent explanation !!
Thanks for the free content through your channel!
Glad you like them!
Thank you for the great work! I have one question. Is the number of classes related to the number of colours/categories present in the masks? If so, that means in your case it's 4, but it could have been 5 or 20? Do we need to change the code in any way if the number of classes gets too large? It seems I have 224... Thank you in advance.
Yes, the number of classes depends on the number of colors. And yes, it could be anything depending on the labels, 5 or 20.
If the number of classes is larger, the model should still be robust; no need to change the model. Try it and explore.
If you have 224, give input shape 224*224*n_channels.
This channel is love!! It has supported me a lot.
Happy to hear that!
Hi Sreeni, thanks for the great video. How does one generate multiclass masks from already-annotated images?
I would also like to know.
Amazing explanation
Great!! I needed this badly :) Great work!!
I had this problem with the class_weight -> ValueError: `class_weight` not supported for 3+ dimensional targets. Do you have any suggestions to solve it?
Nice class, Sir. Can you please make some videos on how to read a scientific research paper and how we can reproduce its results by writing our own code from the article? It will really help many of us.
Hello, thank you for this great tutorial. I want to download the exact dataset, but at the given link there are only a few images. What should I do?
thanks for the contribution, appreciated.
I wish you had shown how to use focal loss.
Thank you for your videos. They are very much helpful.
Sir, you are the best!!!!!
Thank you!!
class_weight in .fit is not working it says "`class_weight` not supported for 3+ dimensional targets".
Thank you so much, I was looking for exactly this, and this single video helped me a lot.
Glad it helped
Hi Sreeni, I have just a small doubt, is it really a multiclass problem? or is it a multilabel? because as per the definition given in video "140 - What in the world is regression, multi-label, multi-class and binary classification?" for me it's more likely a multilabel problem, or am I getting it wrong? Thanks in advance!
Sir, thank you for the video. Can you please help me with this error I am getting with compute_class_weight?
It says compute_class_weight() takes 1 positional argument but 3 were given.
May be this video helps: ruclips.net/video/QntLBvUZR5c/видео.html
Thanks for the video it was very helpful!
Thank you for the video, but I have a problem: every time I try to fit the model, the kernel crashes. Has anyone experienced the same issue?
Using ImageJ, how can I save my semantic labels in only one mask, like in this video where you get a single mask represented with different gray-scale levels?
Thank you Sir, your lectures are very helpful. I have been stuck on class-weight problems; I have tried different methods but still get an error. Please help me out with this; how could I possibly do it? I have also tried focal loss but with no benefit. I get the 3+ dimensional targets error.
Amazing content. Can you please name the tool you used for image analysis? The one with which you checked number of class, histogram, changing contrast and so on.
I like the way that you explain the concept... I will subscribe for future excellent content.... Thank you
If I use IoU loss and IoU as the metric, do I have to do class weighting? I know that for semantic segmentation, accuracy and cross-entropy loss are not the right choices because of the unbalanced data, but if I use IoU loss and the IoU metric, do I still have to use class weighting?
Where can I get a video that explains the datasets: (I) Kidney (RCC), (II) Triple Negative Breast Cancer (TNBC), (III) MoNuSeg-2018, and many other nuclei segmentation datasets?
Thank you so much! How can I do multiclass instance segmentation with U-Net?
Thank you for the video. A question, 4 classes including background?
In this example, there is nothing like background. If you have a background class then that can be assigned a value 0. The way I have written my code, the background would be the 5th class.
@@DigitalSreeni 👍.
Thank you so much, this video is really helpful
As usual, your videos make life very easy for researchers.
I have a question regarding class weights, when I uncommented the class_weight part in the model fitting, it returned an error that class_weights has to be a dictionary, something like this (on my own dataset):
Class weights are...: {0: 0.4280686779466047,
1: 1.54654951724371,
2: 0.40951813587110275,
3: 42.324187597545105,
4: 1.5749410555965808,
5: 2.2925788973162344,
6: 2.080430679675916}
even upon changing the class_weights into a dictionary, I faced another issue:
`class_weight` not supported for 3+ dimensional targets
meaning that my y_test_cat is a 3-D matrix which is not supported for class_weights. References suggested to use "sample weights" instead of class_weights
any suggestions on how to solve this issue?
Again, Many thanks for your amazing videos.
Hi, I face the same problem, did you manage to solve it ? :)
@@finlyk not yet, I was hoping to get some answer. I may end up trying to solve it myself
@@mqfk3151985 After some research, it seems that you cannot apply weights to a 2D array. The model output is (height, width, number of classes) and should be flattened to (height * width, number of classes) for the weights to be applied. I will try that tomorrow and tell you if it helps.
I didn't manage to fix it, unfortunately... I would appreciate any help if you try to tackle the issue :)
Did anyone solve this problem? I'm stuck here.
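Since Keras rejects class_weight for 3+ dimensional targets, one workaround discussed in this thread is per-pixel sample weights. A minimal sketch (my own illustration, assuming integer-encoded masks of shape (n, H, W)) that converts a class-weight dictionary into a sample_weight array of the same shape as the masks:

```python
import numpy as np

def masks_to_sample_weights(masks, class_weights):
    """masks: (n, H, W) integer-encoded labels;
    class_weights: dict {class_value: weight}, e.g. from compute_class_weight."""
    weights = np.zeros(masks.shape, dtype=np.float32)
    for cls, w in class_weights.items():
        weights[masks == cls] = w
    return weights

masks = np.array([[[0, 1], [2, 1]]])  # toy batch of one 2x2 mask
sw = masks_to_sample_weights(masks, {0: 0.5, 1: 1.5, 2: 2.0})
# model.fit(X_train, y_train, sample_weight=sw, ...)  # hypothetical usage
```

The fit call shown in the comment is an assumption on my part; whether per-pixel sample weights are accepted depends on the TensorFlow/Keras version and loss reduction used, so test it on a small batch first.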
Thank you very much! One question, I can only see 2 images under the folder 128_patches. did I miss anything here?
Great Video!
1 - So in that dataset you labelled all the content of the images. If you had, say, background that you obviously don't want to classify, should it be value -1?
2 - I am doing predictions for land use, and my building-roof predictions are not as straight as I want; in fact they are a bit roundish. Is there a way I can fix that, especially in the encoder part?
3 - How can I measure accuracy with IoU (since the predefined accuracy is not valid for semantic segmentation with a large background) if I don't have everything labelled in my input data but my model can predict it? Should I only use the intersection, or also add the part where my model predicts more than the labelled part?
If anyone in the chat wants to respond, I would greatly appreciate that.
Hi Joao,
1 = I am facing the issue of no labelled pixels in my predictions; I don't really know how to deal with it.
2 = For roundish predictions, there is a solution: you can use "anchoring" by sending the coordinates of the corner pixels along with the training. I haven't worked on it myself; however, some of my colleagues have suggested this method.
Ran into this error:
File "C:\Users\anish\208_multiclass_Unet_sandstone.py", line 63, in
n, h, w = train_masks.shape
ValueError: not enough values to unpack (expected 3, got 1)
Anyone know how to fix this?
Thanks sir, for this wonderful tutorial. I wanted to know what is the software that you were using to view the masks?
This class helped me sooo much! Thanks a lot s2
Hello, why does the SGD optimizer give bad results? It can predict only 3 classes, while Adam can predict all classes. Thank you.
Fantastic Explanation. Thank You.
You are welcome!
Thanks, I was waiting for this and had also requested it.
Why is one-hot encoding used here? What is the performance difference between this and using normal integer numbers?
I have a question. I have a semantic segmentation task with 2 classes, leg and foot, in a first view order of leg and foot. What should the number of channels of my output be, 2 or 3? I wonder if the background should be labelled.
In my case I have 10 classes, and the split images don't necessarily contain all the labels, so one image has n classes and another image a different number. For example: image 1 contains classes 1 2 3 4, image 2 contains classes 1 2 5 6 7 8 9 10, image 3 contains classes 5 4 8 2, etc. Will this work?
Hi Sir, what if I have 17 classes, all in NIfTI format, as well as the volumes (three volumes with three different voltages/energies)? What changes should I make besides num_classes? Thank you for the videos.
Can you help with getting the ground truth, i.e. the mask image, from a raw CXR image for segmentation using U-Net?
A solution to class_weight for multiclass semantic segmentation is SMOTE (Synthetic Minority Over-sampling Technique).
Thanks Sreeni, this is great!
Thanks, Sreeni. Was the original training image carefully segmented in APEER by an expert, or is that job also done with machine learning? What is the size of the EM images you are working with (100MB, 1GB, 10GB, 100GB)? I will follow your channel more closely :)! What kind of filter operations can we do on the APEER platform for creating the feature maps to improve segmentation? I ask this final question thinking of QuPath (DoG, LoG, Structure and Hessian filters). Thanks in advance for your answer.
Your videos are great, thank you!
Did you encounter the error "'class_weight' not supported for 3+ dimensional targets" when using class_weight?
Yes. For multiclass I recommend dice loss where you supply class weights or use focal loss that works well without providing class weights.
@@DigitalSreeni Thanks for your reply.
@@connielee4359 have u solve this? i'm stuck on this
@@matancadeporco I didn't solve this problem; however, I used a loss function named "weighted categorical crossentropy" instead. Hope you find this information helpful.
How can I determine the number of classes if I use the UCF dataset?
Thank you for the amazing tutorial!!
Thank you for your content!
Sir, we are using the same data for validation as well as for testing ("x_test", "y_test_cat").
Could you upload a Mask R-CNN video for instance image segmentation?
Thank you very much for your videos. If I change img=cv2.imread(img_patch,0) to img=cv2.imread(img_patch,3), i.e. use RGB channels, what are the necessary changes in the code that I have to make?
please help me. when I plot the testing image, testing label, and prediction mask, it gives me different images (I plot it several times and it still gives me different images). any solution? thank you very much.
Thanks for the video. I'm having a problem with the code; I'm getting this error:
```
A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  y = column_or_1d(y, warn=True)
ValueError: Shapes (16, 128, 128, 4) and (16, 128, 128, 1) are incompatible
```
How can I fix it, please?
Thanks for the great content. However, I noticed that class_weight does not work for multiclass segmentation; it keeps throwing an error when I run the script you shared. Is there a solution for this?
I did not test class_weight for multiclass. In fact, I recommend using focal loss for multiclass. You can also use a combination of focal loss and dice loss, and for the dice part you can provide class weights. This is probably the easiest way to handle it. In general, focal loss did a great job on my datasets with multiple classes.
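A rough sketch of that focal + dice combination (function names, the smoothing constant, and the gamma/alpha defaults are my own choices; tune them and the class weights for your data):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """Focal loss for one-hot targets; down-weights easy pixels."""
    def loss(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        ce = -y_true * K.log(y_pred)
        return K.sum(alpha * K.pow(1.0 - y_pred, gamma) * ce, axis=-1)
    return loss

def weighted_dice_loss(class_weights, smooth=1.0):
    """Dice loss averaged over classes with per-class weights."""
    w = K.constant(class_weights)
    def loss(y_true, y_pred):
        axes = (0, 1, 2)  # sum over batch + spatial dims; keep class axis
        intersection = K.sum(y_true * y_pred, axis=axes)
        denom = K.sum(y_true, axis=axes) + K.sum(y_pred, axis=axes)
        dice = (2.0 * intersection + smooth) / (denom + smooth)
        return 1.0 - K.sum(w * dice) / K.sum(w)
    return loss

def focal_plus_dice(class_weights):
    """Sum of focal loss and class-weighted dice loss."""
    focal = categorical_focal_loss()
    dice = weighted_dice_loss(class_weights)
    def loss(y_true, y_pred):
        return K.mean(focal(y_true, y_pred)) + dice(y_true, y_pred)
    return loss

# usage: model.compile(optimizer='adam', loss=focal_plus_dice([0.5, 2.0, 1.0]))
```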
Hi Sreeni,
Thank you very much for your videos on segmentation. I have watched most of them and have learned a lot. I am doing brain tumor segmentation, and in brain tumor MRI each scan comes as 4 modalities, so the shape is [240, 240, 155, 4]. How should I prepare my data for training: keep those dimensions, or squash the 4th dimension into the 3rd to get [240, 240, 620]? The label shape is [240, 240, 155]. Your input would be very helpful.
You seem to be referring to a dataset similar to BraTS2020. I will be releasing a couple of videos on this topic in August. The way I handled this dataset was with a 3D U-Net. I used only 3 channels instead of 4, as I found one of them to be redundant. I also broke the volumes down into 64x64x64 sub-volumes to make sure they fit in my system memory, and dropped all sub-volumes with less than 1% labeled voxels.
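The sub-volume extraction step described above can be sketched in plain NumPy roughly like this (function and variable names are mine; real BraTS volumes would first be cropped or padded to multiples of 64):

```python
import numpy as np

def extract_subvolumes(volume, mask, patch=64, min_label_fraction=0.01):
    """Split a 3D volume (D, H, W, C) and integer mask (D, H, W) into
    non-overlapping patch^3 sub-volumes, keeping only sub-volumes where
    at least `min_label_fraction` of voxels are labeled (non-zero)."""
    vols, masks = [], []
    d, h, w = mask.shape
    for z in range(0, d - patch + 1, patch):
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                m = mask[z:z+patch, y:y+patch, x:x+patch]
                # skip mostly-empty sub-volumes
                if np.count_nonzero(m) / m.size >= min_label_fraction:
                    vols.append(volume[z:z+patch, y:y+patch, x:x+patch])
                    masks.append(m)
    return np.array(vols), np.array(masks)
```

With a 128x128x128 volume this yields up to 8 sub-volumes, minus any that fail the 1% label check.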
@@DigitalSreeni Thanks Sreeni, I was referring to the BraTS2021 dataset; the NIfTI format should be the same anyway. Is there any problem with using all 4 channels instead of 3? Looking forward to your upcoming videos!
Hello! YouTube recommended this video to me, so I started with this one, but I can see that you have more than 208! I have one question; maybe there is a video where you explain this. If so, please point me to it.
If Keras works with jpg or png, is it possible to work with .tiff images that have reflectance units (0-1)?
Thank you so much.
Great videos. You are working with images that have no more than 3 channels (3 bands). Do you think it is possible to use these models with images that have more than 3 channels? I ask because I'm working with hyperspectral images.
Yes, of course. I've covered multichannel images in a few other videos, for example search for BraTS videos on my channel. Here is the first one in that series: ruclips.net/video/0Rpbhfav7tE/видео.html
@@DigitalSreeni Thank you so much.
Could you do a video about predicting continuous variable using Unet? Thanks!
How are the different class labels for the images specified? Is there a specific directory structure for that?
Hi, thank you for the knowledge you shared. How do I calculate the Dice score for each class, similar to IoU? I need it for the BraTS dataset (3D). Thank you again.
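Per-class Dice can be computed directly from integer-labeled masks, for example like this (a sketch of my own; it works on arrays of any shape, including 3D volumes):

```python
import numpy as np

def dice_per_class(y_true, y_pred, n_classes):
    """Dice coefficient for each class from integer-labeled masks.

    y_true, y_pred: integer label arrays of identical shape (2D or 3D).
    Returns a list of n_classes Dice scores; a class absent from both
    masks scores 1.0 by convention.
    """
    scores = []
    for c in range(n_classes):
        t = (y_true == c)
        p = (y_pred == c)
        inter = np.logical_and(t, p).sum()
        denom = t.sum() + p.sum()
        scores.append(2.0 * inter / denom if denom else 1.0)
    return scores
```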
I don't understand the labels of your classes. I have multi-label colored masks where each class is red, green, yellow, etc. If I look at the image values, they are between 0 and 255, so how did you map them to 1, 2, 3, ...? And should I change mine too?
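One way to convert such color masks to integer labels (a sketch assuming each class is a single solid RGB color; if the color set differs between images, build one palette from all masks first so the mapping stays consistent):

```python
import numpy as np

def rgb_mask_to_labels(mask_rgb):
    """Map each unique RGB color in a mask (H, W, 3) to an integer label.

    Returns (labels, palette): labels has shape (H, W) with values
    0..n_colors-1; palette row i is the RGB color assigned label i
    (colors are sorted lexicographically, so the mapping is deterministic).
    """
    flat = mask_rgb.reshape(-1, 3)
    palette, labels = np.unique(flat, axis=0, return_inverse=True)
    return labels.reshape(mask_rgb.shape[:2]), palette
```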
I have watched your videos several times to get my master's thesis right. I have a question: how can I pass the weight information from SegSem to a GAN?
Please help me implement this U-Net on hyperspectral images.
Terimakasih banyak sir
I had to translate that to find out what it means; apparently it's "Thank you" in Indonesian. Thank you too for watching the video. I hope you found it useful and educational.
Suppose I have a train set and a test set of 80k and 10k images respectively; do I have to label all of the images in both sets?
If your train set and test set have no labels then all you have is just a raw data set.
Your videos are wonderful.
I had a problem on line 88. It said:
```
class_weights = class_weight.compute_class_weight('balanced',
                                                  np.unique(train_masks_reshaped_encoded),
                                                  train_masks_reshaped_encoded)
TypeError: compute_class_weight() takes 1 positional argument but 3 were given
```
Could you help me?
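That TypeError usually comes from newer scikit-learn versions (0.24 and later), where `classes` and `y` became keyword-only arguments of compute_class_weight. A likely fix, with dummy labels standing in for train_masks_reshaped_encoded:

```python
import numpy as np
from sklearn.utils import class_weight

# dummy integer-encoded labels standing in for train_masks_reshaped_encoded
train_masks_reshaped_encoded = np.array([0, 0, 0, 1, 1, 2])

# In scikit-learn >= 0.24, `classes` and `y` are keyword-only, so the
# old positional call raises "takes 1 positional argument but 3 were given".
class_weights = class_weight.compute_class_weight(
    class_weight='balanced',
    classes=np.unique(train_masks_reshaped_encoded),
    y=train_masks_reshaped_encoded)

# 'balanced' computes n_samples / (n_classes * bincount(y))
print(class_weights)
```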
Sir, thank you for the wonderful class. I am getting an error:
```
cannot import name 'MeanIoU' from 'keras.metrics' (C:\Users\sunny\Anaconda3\lib\site-packages\keras\metrics.py)
```
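In recent TensorFlow versions MeanIoU lives under tf.keras.metrics rather than the standalone keras package, so importing it from there typically avoids this error. A small example with made-up labels:

```python
import numpy as np
import tensorflow as tf

# Import from tf.keras rather than standalone keras.metrics.
n_classes = 4
iou = tf.keras.metrics.MeanIoU(num_classes=n_classes)

# Integer-labeled ground truth and prediction (one pixel per class here;
# the class-3 pixel is mispredicted as class 2).
y_true = np.array([0, 1, 2, 3])
y_pred = np.array([0, 1, 2, 2])
iou.update_state(y_true, y_pred)

# Per-class IoU: 1.0, 1.0, 0.5, 0.0 -> mean over the 4 classes
print(iou.result().numpy())  # → 0.625
```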
When we download the data from the link, we do not see the images and masks sub-folders; we only see images_as_128x128_patches.tif and, similarly, masks_as_128x128_patches.tif. How do we extract the images and masks from these? Can you please share the code? It might seem elementary, but it would be helpful.
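I haven't inspected those exact files, but they are presumably multi-page TIFF stacks where each page is one 128x128 patch. Something like this, using the tifffile package, should split them back out:

```python
import tifffile

def load_patch_stack(path):
    """Read a multi-page TIFF stack; each page is one patch.
    Returns an array of shape (n_patches, 128, 128[, channels])."""
    return tifffile.imread(path)

# usage with the downloaded file names:
# images = load_patch_stack('images_as_128x128_patches.tif')
# masks  = load_patch_stack('masks_as_128x128_patches.tif')
#
# optionally write each page out as an individual image:
# for i, img in enumerate(images):
#     tifffile.imwrite(f'images/patch_{i:04d}.tif', img)
```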
Please try using Colab or Jupyter instead of Spyder; that's just old school. It would be helpful.
Do you need to_categorical when using SparseCategoricalCrossentropy? Are there differences?
No, you should not convert your labels to categorical when using sparse categorical cross-entropy; sparse CCE works with integer-encoded labels. For mutually exclusive classes you can use either loss function, SCCE or CCE, as long as you make sure the labels are encoded the corresponding way.
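A quick illustration that the two losses agree when the labels are encoded appropriately (toy probabilities of my own):

```python
import numpy as np
import tensorflow as tf

# Integer labels for sparse CCE; one-hot labels for plain CCE.
y_int = np.array([0, 2, 1])                         # shape (3,)
y_onehot = tf.keras.utils.to_categorical(y_int, 3)  # shape (3, 3)
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.1, 0.7, 0.2]], dtype='float32')

scce = tf.keras.losses.SparseCategoricalCrossentropy()
cce = tf.keras.losses.CategoricalCrossentropy()

# Both compute the same mean negative log-likelihood;
# only the label encoding differs.
print(scce(y_int, y_pred).numpy())
print(cce(y_onehot, y_pred).numpy())
```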