Thanks for the tutorial, Aaron. I wonder which other encoders might give better results, as you mentioned? Thanks!
I have a question: when performing multi-class segmentation, that is (cancer, non-cancer, background), do I need a 3-channel mask or a 1-channel mask?
As far as I know, it depends on how you calculate your loss function: you can use the one-hot encoded mask you mention, or a single-channel matrix with values 0, 1, 2, ..., each number representing a class.
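A minimal sketch of the two equivalent mask formats for three classes (the class assignment background=0, non-cancer=1, cancer=2 and the tiny 2x2 mask are hypothetical):

```python
import torch
import torch.nn.functional as F

# Single-channel ("index") mask: shape H, W with integer class labels.
single_channel = torch.tensor([[0, 1],
                               [2, 1]])

# One-hot mask: F.one_hot produces H, W, C; permute to channel-first C, H, W.
one_hot = F.one_hot(single_channel, num_classes=3)  # H, W, 3
one_hot = one_hot.permute(2, 0, 1)                  # 3, H, W

# Converting back: argmax over the channel dimension recovers the indices.
recovered = one_hot.argmax(dim=0)
assert torch.equal(recovered, single_channel)
```

Which format the loss expects depends on the loss implementation; for example, a multiclass Dice or cross-entropy loss typically takes the integer mask and one-hot encodes it internally.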
I got the error "RuntimeError: Class values must be smaller than num_classes." while training the model, even though I followed the steps correctly.
The problem is related to this file:
/usr/local/lib/python3.7/dist-packages/segmentation_models_pytorch/losses/dice.py in forward(self, y_pred, y_true)
92 y_true = y_true.permute(0, 2, 1) * mask.unsqueeze(1) # N, C, H*W
93 else:
---> 94 y_true = F.one_hot(y_true, num_classes) # N,H*W -> N,H*W, C
95 y_true = y_true.permute(0, 2, 1) # N, C, H*W
96
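The traceback points at the F.one_hot call, which raises exactly this error when a mask contains a label greater than or equal to the configured number of classes. A small reproduction (the tensors are made-up examples):

```python
import torch
import torch.nn.functional as F

num_classes = 3

# Labels all in [0, num_classes): one-hot encoding works fine.
ok = torch.tensor([0, 1, 2])
print(F.one_hot(ok, num_classes).shape)  # torch.Size([3, 3])

# A label of 4 with num_classes=3 triggers the error from the traceback.
bad = torch.tensor([0, 1, 4])
try:
    F.one_hot(bad, num_classes)
except RuntimeError as e:
    print(e)  # "Class values must be smaller than num_classes."
```

So the fix is either to pass the true number of classes to the loss/model, or to remap the mask values so they are contiguous indices starting at 0.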
@najouarahal753 This code works, but it uses the magic constant "SomeClass(var, classes=4, ...)". Change it to "SomeClass(var, classes=my_number_of_classes, ...)" and add "my_number_of_classes = <the actual number of your classes>".
The code works, but two separate binary models perform far better (IoU roughly 10 times higher) than this three-class model. Is this an issue of class imbalance (I have small objects, and the background covers 98% of an image), or is there a better loss function I can use?
No, I cannot crop out small (64x64 pixel) patches and keep only those containing my desired class: that way I get a lot of false positives, and I need to show false positives to the model.
I usually like to use categorical cross-entropy loss with class weights; it helps a lot with class imbalance in U-Net models.
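In PyTorch this can be sketched with the weight argument of nn.CrossEntropyLoss; the weight values below are hypothetical (down-weighting a dominant background class 0 relative to two rare foreground classes):

```python
import torch
import torch.nn as nn

# Hypothetical per-class weights: background, class 1, class 2.
weights = torch.tensor([0.05, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(2, 3, 8, 8)          # N, C, H, W model output
target = torch.randint(0, 3, (2, 8, 8))   # N, H, W integer class mask
loss = criterion(logits, target)
print(loss.item())
```

A common heuristic is to set each weight roughly inversely proportional to the class's pixel frequency in the training set, so rare small objects contribute meaningfully to the gradient.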
I'm really confused about the use of the validation and test sets here; either I don't get it, or you switched them over. Shouldn't the validation set be the data we use during training to verify the results, but not to train the model? And the test set has no ground truth and is the data we want to get predictions for.
I hope I make sense.
The validation subset is used to validate the model after each epoch. You do need ground-truth masks for the test subset too, because you are going to measure your model's performance (for example with IoU), so you need a reference against which to compare the model's predictions.
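For concreteness, per-class IoU on the test set can be sketched as below (the function and the tiny example masks are illustrative, not from the tutorial):

```python
import torch

def iou(pred, target, cls):
    """Binary IoU for one class: |pred ∩ target| / |pred ∪ target|."""
    p = (pred == cls)
    t = (target == cls)
    intersection = (p & t).sum().item()
    union = (p | t).sum().item()
    return intersection / union if union else float("nan")

pred   = torch.tensor([[0, 1], [1, 1]])  # model's predicted class mask
target = torch.tensor([[0, 1], [0, 1]])  # ground-truth class mask
print(iou(pred, target, 1))  # 2 / 3 ≈ 0.667
```

This is exactly why the test subset needs ground-truth masks: without target there is nothing to intersect the prediction with.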