Your videos are very helpful and well explained. Subscribed. Keep it up! Can't wait to see your first 100k subscribers!
Hi Sreeni sir, Thanks a lot, and your videos are always super useful and helpful.
My pleasure 😊
Thanks a lot Sir. It helped a lot in my Mtech project.
You are most welcome
Another great video. Thanks!
Thank you! 👍
You're welcome!
You're amazing 🙂
Thanks
Hello. Thanks again. Can you do some videos on object detection, YOLO algorithms, and such?
It has been on my list for a long time; I just need to sit down and create the code and content. I never got around to working with object detection for my own projects. Actually, I explored it a while ago and found it of little use for my needs: I need more than just bounding boxes, I need every pixel segmented so I can summarize size and other object measurements.
@@DigitalSreeni How about Mask R-CNN then?
What should I do if this library does not let me add dropout layers when building the model?
Any help is appreciated.
@DigitalSreeni Sir, how would we change this architecture to 3D?
Please check my 3D segmentation videos.
Hi Sreeni, I have been following your videos for segmentation and I have found them extremely useful. Lately, I tried the content covered in this video on a multiclass segmentation problem, but somehow it is not working. I followed the code covered earlier in your other UNet segmentation videos to change the labels to categorical with sparse_categorical_crossentropy, but it gives a dimension mismatch error. It would be great if you could kindly suggest a solution.
Just a question: the keras-unet-collection does not include the Attention ResUNet, right? It only has Attention UNet and Residual UNet.
Hello, why for num_labels do you use 1 for binary classification instead of 2 like the author of the collection said that it should be for binary classification?
The example I showed was for semantic segmentation with one class, which is why the number of classes is 1. Even for a binary classification problem, the number of classes can be one, depending on your approach. Your network can have a single output that predicts the probability of one class. So if Cat is 0 and Dog is 1, then a single output is enough for classification: a probability value of 0.1 means the image is likely a cat, and a value of 0.7 means it is likely a dog.
@@DigitalSreeni How is it that there is only one class when there are two things in the segmentation, background and mitochondria? Or is my reasoning wrong: isn't the background, i.e. the part of the image with no mitochondria, a class by itself?
If background is one class and mitochondria is a second class, do you really need to define both classes to properly classify a given pixel? If I tell you that a pixel has a 90% probability of being mitochondria, doesn't that mean it is 10% background? So do you really need to calculate two different probabilities? In summary, when you have two classes, you only need one probability to define a specific object; 1 - prob will then be the probability of the other class (in this case, background).
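A minimal sketch of this idea, assuming a Keras model with a single sigmoid output channel (the layer sizes and the 0.5 threshold below are illustrative, not from the video):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny illustrative model: one output channel with a sigmoid activation,
# so each pixel gets a single probability of being "mitochondria".
inputs = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)   # n_labels = 1
model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

# At prediction time, one probability per pixel is enough for two classes.
img = np.random.rand(1, 128, 128, 1).astype('float32')   # dummy image
prob_mito = model.predict(img)              # probability of mitochondria
prob_background = 1.0 - prob_mito           # implied probability of background
mask = (prob_mito > 0.5).astype(np.uint8)   # binary segmentation mask
```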
@@DigitalSreeni Got it! Thank you so much for your time and explanations!
Hi Sreeni
Your videos are extremely useful. It would be great if you could make a video on an auxiliary classifier WGAN with gradient penalty and a regression-based WGAN with gradient penalty.
How do you train a UNet with variable image sizes?
For UNet (or any deep learning model), your inputs are numpy arrays. The only way to get numpy arrays into a shape compatible with network training is to bring the images to the same size. You can do this either by resizing (not recommended for semantic segmentation) or by cropping.
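A minimal sketch of the cropping approach, assuming variable-sized grayscale images and an illustrative patch size of 256x256 (not the exact code from the videos):

```python
import numpy as np

def crop_to_patches(image, patch_size=256):
    """Crop a 2D image into non-overlapping patches of patch_size x patch_size.
    Any leftover border that does not fill a full patch is discarded."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.array(patches)

# Images of different sizes all end up as (N, 256, 256) patches,
# which can then be stacked into a single training array.
img_a = np.random.rand(768, 1024)
img_b = np.random.rand(512, 512)
all_patches = np.concatenate([crop_to_patches(img_a), crop_to_patches(img_b)], axis=0)
print(all_patches.shape)  # (16, 256, 256): 12 patches from img_a + 4 from img_b
```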
Is the model capable of using 1-channel / grayscale images?
I am not sure; you need to check. But if that becomes an issue, this approach may help: ruclips.net/video/5kbpoIQUB4Q/видео.html&lc=Ugy_ET3Aur2kbH_LBed4AaABAg
Did it work for you? I am planning to use it with grayscale images.
Hello sir,
All your videos are very helpful. How can we ensemble two models with different architectures? Could you send me the code for ensembling models of different architectures?
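For reference, one common way to ensemble models with different architectures is to average their per-pixel predictions at inference time. A minimal sketch, assuming two already-trained Keras models with the same input and output shapes (the model file names below are hypothetical placeholders):

```python
import numpy as np
from tensorflow import keras

# Hypothetical paths; substitute your own trained models of different architectures.
model_a = keras.models.load_model('unet_model.h5', compile=False)
model_b = keras.models.load_model('attention_unet_model.h5', compile=False)

def ensemble_predict(model_list, x, weights=None):
    """Average (optionally weighted) the predictions of several models.
    All models must accept the same input shape and produce the same output shape."""
    preds = np.stack([m.predict(x) for m in model_list], axis=0)
    return np.average(preds, axis=0, weights=weights)

# Example usage on a batch of preprocessed test images:
# x_test = ...  # e.g. a (N, 256, 256, 1) array matching the models' input shape
# probs = ensemble_predict([model_a, model_b], x_test, weights=[0.5, 0.5])
# masks = (probs > 0.5).astype(np.uint8)
```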
Hi Sreeni, the code you showed in the last video gives an error: "ValueError: Dimensions must be equal, but are 16384 and 81920 for '{{node mul_1}} = Mul[T=DT_FLOAT](Reshape, Reshape_1)' with input shapes: [16384], [81920]." I tried everything to track it down but did not succeed. Could you help, please?
I'm not sure about this error; it sounds like a mismatch in input shapes, but I'm not sure where. Maybe it is the input image shape: try using shapes that are powers of 2, such as 64x64, 128x128, 256x256, etc.
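As a general debugging step (not specific to this exact error), it can help to confirm that the model's output shape matches the mask array shape before training. A minimal sketch, with the model and data left as placeholders:

```python
import numpy as np

# Placeholders; replace with your actual compiled model and data arrays.
# model = ...        # compiled Keras model
# X_train = ...      # e.g. (N, 128, 128, 3) images
# y_train = ...      # e.g. (N, 128, 128, 1) binary masks or (N, 128, 128, n_classes) one-hot masks

def check_shapes(model, X, y):
    """Print and compare the model's output shape with the mask shape.
    A mismatch here (e.g. 1 channel vs. n_classes channels, or different
    spatial sizes) typically surfaces later as a dimension error in the loss."""
    pred = model.predict(X[:1])
    print('Model output shape:', pred.shape)
    print('Mask shape:        ', y[:1].shape)
    if pred.shape != y[:1].shape:
        print('Shapes differ: check n_labels/output channels, the loss choice, '
              'and that image sizes are powers of 2 (64, 128, 256, ...).')

# check_shapes(model, X_train, y_train)
```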
how is this better than segmentation_models?
It's not better or worse than segmentation_models; this library offers a few models that you will not find in the other library.
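For context, a minimal sketch of pulling one of those models from the keras-unet-collection library. The argument names follow the library's README as I recall them; treat them as an assumption and verify against the documentation for your installed version:

```python
from keras_unet_collection import models

# Attention UNet for binary segmentation of 256x256 grayscale images.
# n_labels=1 with a Sigmoid output gives one probability per pixel, as discussed above.
model = models.att_unet_2d((256, 256, 1), [16, 32, 64, 128], n_labels=1,
                           stack_num_down=2, stack_num_up=2,
                           activation='ReLU',
                           atten_activation='ReLU', attention='add',
                           output_activation='Sigmoid', batch_norm=True,
                           name='attention_unet')

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```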
thanks
You're welcome!
Hello sir,
Could you please make a video on R-CNN?
Hello,
Which of the available models in this library is the Attention Residual UNet variant (the best one as per the results in the last video)?
I can see separate models (Attention UNet and Residual UNet), but not a combination of the two.
Can anyone please help me out here?
I guess they do not have all combinations.