177 - Semantic segmentation made easy (using segmentation models library)

  • Published: 3 Jun 2024
  • Code generated in the video can be downloaded from here:
    github.com/bnsreenu/python_fo...
    Segmentation Models library info:
    pip install segmentation-models
    github.com/qubvel/segmentatio...
    Recommended for Colab execution:
    TensorFlow ==2.1.0
    keras ==2.3.1
    For this demo, it also works on a local workstation with:
    Python 3.5
    TensorFlow ==1.
    keras ==2
    Dataset link: www.epfl.ch/labs/cvlab/data/d...
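
    A minimal usage sketch of the segmentation_models workflow covered in the video (the backbone, loss, batch size, and dummy arrays below are illustrative choices, not the exact values from the code):

    ```python
    import os
    os.environ["SM_FRAMEWORK"] = "tf.keras"   # needed when running on recent TensorFlow/Keras versions

    import numpy as np
    import segmentation_models as sm

    BACKBONE = 'resnet34'
    preprocess_input = sm.get_preprocessing(BACKBONE)

    # Dummy data standing in for the mitochondria images/masks (H and W must be divisible by 32)
    x_train = np.random.randint(0, 255, (16, 256, 256, 3)).astype('float32')
    y_train = np.random.randint(0, 2, (16, 256, 256, 1)).astype('float32')

    x_train = preprocess_input(x_train)   # backbone-specific preprocessing

    # U-Net with a pretrained ResNet34 encoder and a binary (sigmoid) output
    model = sm.Unet(BACKBONE, encoder_weights='imagenet', classes=1, activation='sigmoid')
    model.compile('adam', loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.iou_score])

    model.fit(x_train, y_train, batch_size=8, epochs=1, validation_split=0.2)
    ```
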
  • Science

Comments • 190

  • @Dustyinc
    @Dustyinc 2 года назад +2

    Your lectures are amazing and so easy to follow. Thank you so much for all your work!

  • @goodwilrv
    @goodwilrv 2 года назад

    You are definitely the best hands on tutor online Sreeni.

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      Thank you Gautam. I am glad you find my videos useful.

  • @jaydip7526
    @jaydip7526 3 года назад +2

    Thanks a lot for the really good content, I am learning a lot from your videos daily. I have one question regarding image size. I have high-res microscopic images (2048 x 2048) and I want to do cell segmentation.
    - Do I need to crop these images into smaller patches to train this model? If yes, do I need to do the same patching during inference as well?
    - Or can I train directly on the high-res 2048 x 2048 images? If so, how does the model deal with the change in dimensions (the original model architecture is not designed for high-res inputs, or am I misunderstanding something)?
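
    A common way to handle this (not from the video itself) is to tile large images into fixed-size patches for both training and inference; a minimal sketch using the patchify package on a single-channel 2048 x 2048 image:

    ```python
    import numpy as np
    from patchify import patchify, unpatchify   # pip install patchify

    # Dummy 2048 x 2048 grayscale image standing in for one microscope frame
    image = np.random.randint(0, 255, (2048, 2048)).astype('uint8')

    # Split into non-overlapping 256 x 256 patches -> array of shape (8, 8, 256, 256)
    patches = patchify(image, (256, 256), step=256)

    # ...train on / predict each 256 x 256 patch, keeping the (8, 8, 256, 256) layout...
    predicted_patches = patches  # placeholder for the per-patch model output

    # Stitch the patch predictions back into a full-size 2048 x 2048 map
    reconstructed = unpatchify(predicted_patches, image.shape)
    ```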

  • @ownhaus
    @ownhaus 3 года назад +3

    Thanks for the tutorial. 3D UNet would be very interesting for an upcoming video, since I work with 3D localization microscopy data

  • @bijoyalala5685
    @bijoyalala5685 3 года назад

    Hello Sreeni Sir. Thank you for your wonderful tutorial. I have some questions. I have a customized image dataset of needle tips and I use this segmentation model for semantic segmentation. For 2000 'needle tip' images I set batch size = 8 and epochs = 10, and the predicted images come out okay. Then I increased the dataset to 4000 images, keeping the same batch size and epochs, but the predictions no longer look okay. Can you please tell me whether there is any relation between increasing the number of samples and the batch size? What would be the optimal batch size for a given dataset size?

  • @windiasugiarto6041
    @windiasugiarto6041 3 года назад +4

    Thank you very much for the tutorial. I learnt a lot from your videos. I hope you would do tutorials on semantic segmentation using HRNet model one day. God bless you, Sreeni...

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      I know of HRNet and did look into it a few weeks ago, but most of the code I explored was written in PyTorch, which is not yet a focus for my channel. I am hoping someone will put together Keras-based code so I can cover it in a video. Putting it together from scratch may be time consuming and I am not convinced that time would be worthwhile.

  • @konkoboaxel8887
    @konkoboaxel8887 3 года назад

    Thank you for the tutorial, it's very well explained. I trained on Google Colab, but when I load the model on my PC an error occurs at prediction_image = prediction.reshape(mask.shape): "mask is not defined". Any help?
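
    That error just means the mask variable from the training script does not exist in the inference script; a hedged sketch (file names are placeholders) that reshapes the prediction using the test image's own height and width instead:

    ```python
    import cv2
    import numpy as np
    from keras.models import load_model

    model = load_model('mitochondria_model.h5', compile=False)   # placeholder file name

    test_img = cv2.imread('test_image.tif', 1)                   # 3-channel test image (placeholder path)
    test_img = cv2.resize(test_img, (256, 256))
    test_input = np.expand_dims(test_img, axis=0)                # shape (1, 256, 256, 3)

    prediction = model.predict(test_input)                       # shape (1, 256, 256, 1) for a sigmoid head
    prediction_image = prediction.reshape(test_img.shape[:2])    # use the image's own H x W, no 'mask' needed
    ```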

  • @Thetejano1987
    @Thetejano1987 3 года назад

    Hey, quick question. Do you know the difference between using JaccardLoss vs bce_jaccard_loss? I'm using segmentation_models.pytorch but they don't have a bce_jaccard_loss.

  • @umairsabir6686
    @umairsabir6686 3 года назад

    Thanks for this wonderful video Mr. Sreeni.

    • @DigitalSreeni
      @DigitalSreeni  3 года назад

      My pleasure 😊

    • @umairsabir6686
      @umairsabir6686 3 года назад

      @@DigitalSreeni I want to clear up one more doubt. In one of your previous tutorials you presented autoencoders using transfer learning, where you took the encoder architecture, built the decoder architecture, and trained it. Can I say that we are doing something similar in semantic segmentation here, since both the encoder and decoder architectures come from the backbone models and we do not need to explicitly define our decoder? Can we retrain just the decoder part, or the whole architecture?
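
      For reference, recent versions of the library expose a flag for exactly this choice: keep the pretrained encoder frozen and train only the decoder, then optionally unfreeze everything later (a hedged sketch based on the library's documented API, not on this particular video):

      ```python
      import segmentation_models as sm

      # Train only the decoder: pretrained ImageNet encoder weights are loaded and kept frozen
      model = sm.Unet('resnet34', encoder_weights='imagenet', encoder_freeze=True)
      model.compile('adam', loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.iou_score])
      # ...model.fit(...) for a few epochs...

      # Then optionally unfreeze all layers and continue training (full fine-tuning)
      sm.utils.set_trainable(model)   # marks every layer trainable and recompiles the model
      # ...model.fit(...) again, typically with a lower learning rate...
      ```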

  • @piyushkumarprajapati9972
    @piyushkumarprajapati9972 3 года назад

    I have a dataset with bounding boxes enclosing the cells. Can you please suggest how to proceed with it?

  • @samardarooei7940
    @samardarooei7940 3 года назад

    Hi, thanks a lot for the nice video, but what is the difference between the backbone and the weights?

  • @dattijomakama9703
    @dattijomakama9703 3 года назад

    Thanks a lot for this awesome tutorial. I like your channel.

  • @gulshanmohiddinshaik7224
    @gulshanmohiddinshaik7224 3 года назад

    Thank you Sir, you explained it very well.

  • @bikkikumarsha
    @bikkikumarsha 3 года назад +1

    Can we convert the final model to TfLite format?

  • @visheshbreja3341
    @visheshbreja3341 3 года назад

    Hello sir
    Thank you for such an interesting tutorial.
    I am stuck at a point where I have 50 classes to predict. I don't know how to map my 50 classes for the model to learn, or how to set the corresponding color map for each class. Any help would be appreciated.
    Thank you in advance
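
    A minimal multi-class sketch (not from this video; the class count, backbone, and dummy arrays are illustrative): integer-labelled masks with values 0-49 are one-hot encoded and the model gets a 50-channel softmax head; for display, the argmax prediction can be mapped to colors through a 50-entry lookup table:

    ```python
    import numpy as np
    import segmentation_models as sm
    from keras.utils import to_categorical

    N_CLASSES = 50

    # Dummy data: images (N, H, W, 3) and integer-labelled masks (N, H, W) with values 0..49
    x_train = np.random.rand(8, 256, 256, 3).astype('float32')
    y_train = np.random.randint(0, N_CLASSES, (8, 256, 256))

    y_train_cat = to_categorical(y_train, num_classes=N_CLASSES)   # one-hot -> (8, 256, 256, 50)

    model = sm.Unet('resnet34', encoder_weights='imagenet',
                    classes=N_CLASSES, activation='softmax')
    model.compile('adam', loss='categorical_crossentropy', metrics=[sm.metrics.iou_score])
    model.fit(x_train, y_train_cat, batch_size=4, epochs=1)

    # Back to integer labels for visualization; color each label via your own 50-entry RGB table
    pred_labels = np.argmax(model.predict(x_train[:1]), axis=-1)   # shape (1, 256, 256)
    ```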

  • @carpelev
    @carpelev 2 года назад

    Hi Sreeni,
    Thanks a lot for the video! It is very clear and explains the thought process very well.
    I was trying to re-implement it, and have two questions to you:
    1) in your video at 20:16 you have a negative loss value, why is that?
    I have a similar problem (regardless of whether I'm using Jaccard or BCE, etc.).
    Any suggestions on how to resolve this issue?
    2) Could you please explain why you do not freeze the encoder weights? If I understand correctly, we would like to initialize the pretrained encoder and only train the decoder, but sm does not freeze the weights by default and you did not do it either. I tried both, but I think because of question (1) I still don't get proper results.
    Thanks a lot!

  • @ruqayyahessa103
    @ruqayyahessa103 11 месяцев назад

    Thanks so much for your good explanation.
    Could you please explain how I can feed the segmented output (the result produced by U-Net with ResNet34 as the encoder) into a pretrained EfficientNet classifier to make a binary prediction of whether the input shows disease or not?
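
    One hedged way to chain the two stages (the file names, input sizes, and threshold below are illustrative assumptions, not from the video): use the U-Net output as a mask, zero out the background, and pass the masked image to the classifier:

    ```python
    import numpy as np
    import tensorflow as tf

    seg_model = tf.keras.models.load_model('unet_resnet34.h5', compile=False)            # placeholder path
    clf_model = tf.keras.models.load_model('efficientnet_classifier.h5', compile=False)  # placeholder path

    img = np.random.rand(1, 256, 256, 3).astype('float32')        # stand-in for one preprocessed input image

    seg_mask = (seg_model.predict(img) > 0.5).astype('float32')   # (1, 256, 256, 1) binary mask
    masked_img = img * seg_mask                                   # keep only the segmented region

    clf_input = tf.image.resize(masked_img, (224, 224))           # 224 x 224 is an assumed classifier input size
    disease_prob = clf_model.predict(clf_input)[0, 0]             # probability of disease (binary head assumed)
    print('disease probability:', disease_prob)
    ```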

  • @imageprocessing9645
    @imageprocessing9645 2 года назад

    Thanks a lot for this great tutorial. How can we evaluate the test dataset in this code, for example the test accuracy or IoU?
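
    A short sketch (assuming the model was compiled with IoU as a metric, as in the video, and that x_test / y_test are preprocessed the same way as the training data): Keras's evaluate returns the loss followed by the compiled metrics:

    ```python
    import numpy as np
    import segmentation_models as sm

    # Stand-in model and test arrays; replace with the trained model and real test data
    model = sm.Unet('resnet34', encoder_weights=None)
    model.compile('adam', loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.iou_score])

    x_test = np.random.rand(4, 256, 256, 3).astype('float32')
    y_test = np.random.randint(0, 2, (4, 256, 256, 1)).astype('float32')

    loss, iou = model.evaluate(x_test, y_test, batch_size=2)   # returns [loss, iou_score]
    print(f"Test loss: {loss:.4f}  Test IoU: {iou:.4f}")
    ```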

  • @bijulijin812
    @bijulijin812 2 года назад +1

    How do we get the mask images? Do we need to create them ourselves, or should they be provided by the dataset creator?

  • @deepalisharma1327
    @deepalisharma1327 2 года назад +12

    Hi Sreeni, I have recently discovered your channel and found it extremely useful. It would be really helpful if you could create a video on how to create masked images for datasets with more than two classes (non-binary).

  • @montsegur7173
    @montsegur7173 2 года назад +2

    Hi, thanks for your vids, super helpful!
    I am playing with the segmentation-models library and the dataset you used in your videos 73-78. At the beginning I used only U-Net with some heavy backbones, like ResNets or VGG, and the results were fine. Now I have switched to PSPNet (with the same dataset) and no matter which backbone I choose, I always get about 0.1614 accuracy, and I just wonder: is PSPNet really that bad for bio-datasets, or am I doing something wrong? I am aware that the results should actually be worse, but such a low and repeating accuracy is kind of worrying. Should it be this way?

    • @laohu1514
      @laohu1514 2 года назад

      Same for me, I'm getting worse results with PSPNet and FPN on an industrial inspection dataset; U-Net and Linknet are fine, not sure what is going on. Another thing I find strange is that the IoU score for U-Net and Linknet sometimes exceeds 100.

  • @pallavi_4488
    @pallavi_4488 2 года назад

    I am also a biomedical engineer; your tutorials are the best.

  • @ahpacific
    @ahpacific 2 года назад

    Hi @DigitalSreeni, thank you for the video. Does the preprocessing that you use take care of one-hot encoding your masks, or do you do that yourself? If you do it yourself, can you cover how? Thank you.

    • @DigitalSreeni
      @DigitalSreeni  2 года назад +2

      I covered one-hot encoding (categorical) in many of my videos. Please checkout videos on multiclass segmentation.

  • @vivekyadav-zl5dl
    @vivekyadav-zl5dl Год назад

    A very informative video, Thank you

  • @alirezasoltani3049
    @alirezasoltani3049 3 года назад

    In many articles on segmentation in the field of remote sensing, it is mentioned that the network input is patches, for example 24 x 24 or 50 x 50. However, I do not understand how a network trained on 50 x 50 patches can segment high-resolution satellite images of, say, 8,000 x 8,000 pixels. Also, does a patch contain only one type of feature, such as a building or a road?

  • @jharris30
    @jharris30 3 года назад +1

    Another great video, thanks!
    QUESTION: Do you prefer this method or pre-trained CNN with VGG16 & RF as in video 159b?
    Thanks!

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +3

      I always prefer Random Forest over deep neural networks. I only use neural networks if traditional approaches fail to solve the problem. Based on my experience VGG16 + RF is very robust and works for most use cases. It only fails for situations where you have a very busy background and trying to segment objects that are hard to distinguish against the background.

  • @xtraeone5947
    @xtraeone5947 Год назад

    Do I need to change anything more if I'm using vgg16 as backbone architecture?

  • @jzjMacwolfz
    @jzjMacwolfz 3 года назад

    Thank you for the video! I am really grateful, I learned quite a lot!
    Sorry to ask a rookie question: are the contents of the "Label" folder just the inverted images, or how do I create those?

    • @shreyasbharadwaj724
      @shreyasbharadwaj724 2 года назад

      The contents of the "label" folder are segmented versions of the images in the "images" folder. That is, every pixel of each image in the "images" folder is assigned a particular label; if the images contain biological cells, the labels might be 1 and 0 for "this pixel is part of a cell" and "this pixel is not part of a cell". This assignment may be done manually or with some other algorithm.

  • @greatoxidationevent1639
    @greatoxidationevent1639 3 года назад

    First of all, thank you so much for the great lecture series. So you augmented 2000 images but only use 1000 images for training?

  • @diegostaubfelipe4310
    @diegostaubfelipe4310 2 года назад

    Congratulations on your channel, it is really useful and very well organized.
    Is the image preprocessing (preprocess_input(x_train)) only used at training time, or is it also necessary at inference?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад +1

      Preprocessing needs to be done to both training and testing data exactly the same way.
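
      A short sketch of that point (the backbone name is illustrative): the same preprocessing function must be applied to training, validation, and inference inputs:

      ```python
      import numpy as np
      import segmentation_models as sm

      preprocess_input = sm.get_preprocessing('resnet34')

      x_train = np.random.rand(8, 256, 256, 3).astype('float32')   # dummy stand-ins for the real arrays
      x_test = np.random.rand(2, 256, 256, 3).astype('float32')

      x_train = preprocess_input(x_train)   # before model.fit(...)
      x_test = preprocess_input(x_test)     # before model.predict(...) / model.evaluate(...)
      ```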

  • @upasana2657
    @upasana2657 3 года назад

    Thank you, Mr Sreeni

  • @sayedhamdi4472
    @sayedhamdi4472 2 года назад

    Thanks sir for this wonderful explanation, but the dataset link is not working for me. I want to get to the images and masks; please help.

  • @danishMalik_
    @danishMalik_ 3 года назад +1

    Please add some videos regarding instance segmentation and how to make its datasets.

  • @bhargavireddy604
    @bhargavireddy604 3 года назад

    Hi Sreeni Sir, will you please share the google colab link for this tutorial.

  • @marcusbranch2100
    @marcusbranch2100 3 года назад +1

    Awesome video, good job and thanks for sharing this with us, Sreeni. Can you tell me how I can do data augmentation on the fly in this case, without needing to create two new folders/paths for images and masks?

  • @kaushalyasivayogaraj5862
    @kaushalyasivayogaraj5862 2 года назад

    Sir, Your videos are really good and those are very helpful for learning. can you please make videos on few-shot learning semantic segmentation?

  • @khondokermirazulmumenin8201
    @khondokermirazulmumenin8201 3 года назад

    thank you for your tutorial ,well explained .

  • @djdekabaruah3457
    @djdekabaruah3457 3 года назад +2

    Very useful tutorial. Could you please add the code for augmentation?

  • @samarafroz9852
    @samarafroz9852 3 года назад +1

    Wow this is best tutorial sir

  • @talha_anwar
    @talha_anwar 3 года назад

    During random augmentation, suppose the image gets rotated by 30 degrees and the mask by 40 degrees because the transform is random. How do you handle this?
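
    One common way to keep the image and mask transforms in sync (a sketch using Keras's ImageDataGenerator; not necessarily the exact augmentation code used for this dataset): build two generators with identical settings and pass the same seed to both, so every random transform is applied identically to an image and its mask:

    ```python
    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    x_train = np.random.rand(16, 256, 256, 3).astype('float32')             # dummy images
    y_train = np.random.randint(0, 2, (16, 256, 256, 1)).astype('float32')  # dummy masks

    aug_args = dict(rotation_range=90, horizontal_flip=True, vertical_flip=True)
    image_datagen = ImageDataGenerator(**aug_args)
    mask_datagen = ImageDataGenerator(**aug_args)

    seed = 42   # identical seed -> identical random transforms for image and mask
    image_gen = image_datagen.flow(x_train, seed=seed, batch_size=8)
    mask_gen = mask_datagen.flow(y_train, seed=seed, batch_size=8)

    train_gen = zip(image_gen, mask_gen)   # yields (augmented_images, augmented_masks) pairs
    # model.fit(train_gen, steps_per_epoch=len(x_train) // 8, epochs=10)
    ```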

  • @AlgoTribes
    @AlgoTribes 3 года назад +1

    Hey Sreeni, if you don't mind, could you come up with semantic analysis for text data? That would be of great help. BTW, your content is more often than not just awesome!!

  • @rachelbj3840
    @rachelbj3840 2 года назад

    Thanks Digital Sreeni !!!

  • @evgeniynekrasov1146
    @evgeniynekrasov1146 2 года назад

    Thanks for the nice video, but can SIZE_X and SIZE_Y of the input picture and mask be different, for example 240x216? Thanks!

    • @DigitalSreeni
      @DigitalSreeni  2 года назад +1

      Your inputs can be of any size, as long as the image and mask sizes match.

  • @NS-te8jx
    @NS-te8jx Год назад

    Could you share the slides for your various videos? That would help me revise. I see only code on GitHub.

  • @rishikhajuria3029
    @rishikhajuria3029 2 года назад

    For semantic segmentation I am getting the error "expected sigmoid to have 4 dimensions, but got array with shape (800, 1)". How do I reshape?
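
    That error usually means the targets are per-image labels rather than per-pixel masks; for a sigmoid output the model expects masks shaped (N, H, W, 1). A hedged sketch of the expected shapes (sizes are illustrative):

    ```python
    import numpy as np

    masks = np.random.randint(0, 2, (800, 256, 256))             # per-pixel labels for 800 images
    y_train = np.expand_dims(masks, axis=-1).astype('float32')   # -> (800, 256, 256, 1), matches the sigmoid head
    print(y_train.shape)
    ```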

  • @johnnysmith6876
    @johnnysmith6876 2 года назад

    Beware negative losses! Suffered from that initially.

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      What is wrong with negative losses? May be I am missing something here. Loss is just a scalar value that can be positive or negative and this gets minimized during the training process. Of course, it becomes an issue if you just take the magnitude of the loss.

    • @johnnysmith6876
      @johnnysmith6876 2 года назад

      @@DigitalSreeni Makes sense Prof. Had assumed losses always have to be positive. Thanks for the clarification and greater thanks for sharing these videos. You’re doing an amazing job! Thank you.

  • @PriyankaJain-dg8rm
    @PriyankaJain-dg8rm 9 месяцев назад

    Can you please let me know where to get the exact dataset? The link provided only lets me download a .tif file. Is there anything I am missing?

  • @nor4eto999
    @nor4eto999 Год назад

    Hello, my original images are DICOM files. How can I read them so that the script still works properly?
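
    A hedged sketch of reading a DICOM slice into a NumPy array (using the pydicom package, which is not part of the video's code; the file name is a placeholder), after which the rest of the pipeline can stay the same:

    ```python
    import numpy as np
    import pydicom   # pip install pydicom
    import cv2

    ds = pydicom.dcmread('slice_0001.dcm')                 # placeholder file name
    img = ds.pixel_array.astype('float32')                 # 2D pixel data

    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')   # scale to 0-255
    img = cv2.resize(img, (256, 256))
    img = np.stack([img] * 3, axis=-1)                     # 3 channels for an ImageNet-pretrained backbone
    ```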

  • @talha_anwar
    @talha_anwar 3 года назад +1

    I think data split should be before augmentation to avoid data leakage

  • @asmabenbrahem6
    @asmabenbrahem6 3 года назад

    Hello sir, thank you very much for this tutorial, it is very helpful. I tried the segmentation models repository and trained U-Net on Colab with pretrained ImageNet weights and the Jaccard loss, but training is very slow (one epoch takes 15 min) and the training loss goes down (0.6 on epoch 1 -> 0.17 on epoch 4) while the val_loss does not (it is stuck at 0.8); also the IoU score is 0.82 for the training set and 0.2 for the validation set. Can you help me with this?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      Did you enable GPU on colab?

    • @asmabenbrahem6
      @asmabenbrahem6 3 года назад

      @@DigitalSreeni That was the problem, I forgot to enable the GPU, that was stupid. Anyway, thank you sir for these nice tutorials, they are very helpful. You are amazing, keep up the good work. May god bless you.

  • @rohinigaikar4117
    @rohinigaikar4117 3 года назад

    Hi, thank you for this video.
    Will you prepare a video about multi-label segmentation in medical images?
    I want to know how to create training data as it is different than binary segmentation.
    Thanks

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      Yes, please stay tuned. I am planning on U-net based videos for binary, multiclass and even 3D images.

  • @AbdullahJirjees
    @AbdullahJirjees 2 года назад

    Thank you for this video, but there is an important part I wish you had shown: how did you create the labeled images?

    • @dimane7631
      @dimane7631 2 года назад

      This is the response I got from Mr. DigitalSreeni:
      "If you do not have labeled data then you need to label it yourself. I covered a few videos on this topic and you may find this to be useful: ruclips.net/video/KopmsnC8GWI/видео.html"

  • @djdekabaruah3457
    @djdekabaruah3457 3 года назад

    Hello Sir, what was the accuracy of the model built for this tutorial? I tried the same method with my images (around 55) and got an accuracy of around 62-63% (tried resnet, vgg, efficient net). Segmentation output was not very good. Any suggestion/methods to improve the results?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      The tutorial is about using the segmentation models library for semantic segmentation. The library contains many models and it is hard to say which one works best for your images. The whole point of the video is that if you plan on writing code for one of the standard models, it may not be worth the time rewriting it when you can use the library. This does not mean the standard models will give you the best accuracy; that depends on many factors, including the amount and quality of your labels. Also, accuracy is not a good metric for semantic segmentation; I hope you will look into IoU and other metrics.
      In summary, please use a subset of your data to test various models from this library. Then pick the best one and see whether it performs well on your entire dataset. If the accuracy (or other metric) does not meet your goal, you will have to put together your own network, for example replacing the encoder with EfficientNet. For that you need the required knowledge.

    • @djdekabaruah3457
      @djdekabaruah3457 3 года назад

      @@DigitalSreeni Thank you Sir..I understand your comments. I was thinking of one more approach.. Will the model performance improve if we increase no. of epochs (though it is very time consuming)?

  • @rohitgupta2004
    @rohitgupta2004 2 года назад

    Thank you sir for the tutorial, it's very well explained. Can you please add some videos on the EfficientNet architecture with some dataset?

  • @rezadarooei248
    @rezadarooei248 2 года назад

    Thanks a lot for your nice video, that was awesome, but I have a question: why is your loss negative? I think you need to normalize your images and masks, is that correct?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      Loss can be negative, depends on the loss function. For example, if you want to use accuracy as loss function, you want to maximize accuracy but you want to minimize the loss function. So you multiply your loss with -1 to make it negative. Now, the loss will be minimized (as it is a negative number and -90 is smaller than -80), and the accuracy maximized (80 to 90 and so on...).

  • @Hmmm0135
    @Hmmm0135 Месяц назад +1

    Hey everyone,
    I trained my model and it shows good results when predicting segmentation on an image, but during training it gives a negative loss and an IoU greater than 1. Can anyone please tell me what I am doing wrong?

  • @gloryprecious1133
    @gloryprecious1133 3 года назад

    Nice explanation and very informative, sir. Kindly upload a video on 3D volumetric segmentation.

  • @ruthikasiddineni3873
    @ruthikasiddineni3873 2 года назад +1

    Sir, I'm getting this error while fitting the model:
    ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
    Please provide an answer.

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      stackoverflow.com/questions/58636087/tensorflow-valueerror-failed-to-convert-a-numpy-array-to-a-tensor-unsupporte
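
      For reference, this error typically comes from a ragged object array, e.g. images of different sizes collected into a Python list; a hedged sketch of the usual fix (resize everything to one shape, then stack into a float32 array):

      ```python
      import numpy as np
      import cv2

      # Images of slightly different sizes -> np.array() would produce an object array and fail in model.fit
      images = [np.random.randint(0, 255, (300 + i, 300, 3), dtype='uint8') for i in range(4)]

      resized = [cv2.resize(img, (256, 256)) for img in images]
      x_train = np.array(resized, dtype=np.float32)    # shape (4, 256, 256, 3), dtype float32
      print(x_train.shape, x_train.dtype)
      ```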

  • @tapansharma460
    @tapansharma460 2 года назад +2

    Sir, please make us more familiar with 3D image processing, as you did for the BraTS dataset. I am working in the neuroimaging domain on brain aneurysm detection and classification.

  • @khaledbenaggoune8598
    @khaledbenaggoune8598 3 года назад

    Thanks a lot. Could you please explain attention for Conv1D and Conv2D in your future videos?

  • @vikashkumar-cr7ee
    @vikashkumar-cr7ee Год назад

    Dear Sreeni,
    Would it be possible to load the Electron Microscopy Dataset directly into Google Colab, without downloading it to a local drive or Google Drive first?

    • @DigitalSreeni
      @DigitalSreeni  Год назад

      Here is a tutorial on how to load Kaggle data directly into Colab. A similar approach can be followed for other data sets: ruclips.net/video/yEXkEUqK52Q/видео.html

    • @vikashkumar-cr7ee
      @vikashkumar-cr7ee Год назад

      @@DigitalSreeni I have gone through your Kaggle dataset download tutorial and followed a similar approach to download the mitochondria dataset, but it didn't work. Could you write code here for this dataset that downloads it directly into Google Colab?
      Many thanks in advance.

  • @suganyasambasivam8359
    @suganyasambasivam8359 Год назад

    Thanks a lot sir, it was very helpful.
    Can we do segmentation without using ground truth? Please clarify my doubt, sir.

  • @chouchou2445
    @chouchou2445 3 года назад

    Please, what is the number or name of the notebook in the GitHub directory you published? I can't find the exact code.

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      I forgot to upload it, now it is there. The number should be 177. I uploaded multiple files with the same number, all supporting content for this tutorial.

    • @chouchou2445
      @chouchou2445 3 года назад

      @@DigitalSreeni thank you soo much you're the best

  • @anishjain3663
    @anishjain3663 3 года назад

    Sir, really big thanks. Let's say I have a dataset in MS COCO format, so first I need to create masks; what should the mask array values be? I have 273 unique classes. Please, sir, can you explain how to do multi-class image segmentation? I'm kind of confused.

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      COCO format is for instance segmentation (object). If you would like to use it for semantic segmentation you will have to find a way to import COCO as pixel level labels. Also, I don't understand having 273 unique classes for semantic segmentation. I have a feeling you are looking for object detection and not semantic segmentation.

    • @anishjain3663
      @anishjain3663 3 года назад

      @@DigitalSreeni Sir, it's a food dataset with 273 unique categories of food items and about 20,000 images.

  • @mechanicalloop1231
    @mechanicalloop1231 3 года назад

    Can you please do a tutorial on Reinforcement Learning too?

  • @nobinmathew2861
    @nobinmathew2861 Год назад

    Is this unsupervised or supervised learning ?

  • @geponen
    @geponen 3 года назад

    Are those masks 3 channel or 1 channel?

  • @letslovestraydogs4648
    @letslovestraydogs4648 2 года назад

    Hi, thanks for the video, but when I run the code the IoU goes above one and the loss goes negative. I even normalized with (x_train / 255.0) but the code still doesn't work. I'm looking forward to your help, thanks.

    • @lijinp3430
      @lijinp3430 2 года назад

      I am having the same problem. Have you resolved it?

    • @letslovestraydogs4648
      @letslovestraydogs4648 2 года назад

      @@lijinp3430 Not yet, segmentation is just so hard.

  • @greatoxidationevent1639
    @greatoxidationevent1639 3 года назад

    I used your code in Colab and ran into an issue: during the sanity check I found that the images and masks do not match. Can you give me some advice on this problem?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад

      Yes, I can. I will record a tips and tricks video soon on the topic. In summary, load the file names first for images and masks, sort them and then load them. This ensures that the names are all sorted the same way for images and masks.
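
      A hedged sketch of that tip (directory names are placeholders): sort both file lists the same way before loading, and check that corresponding names match:

      ```python
      import os
      import cv2
      import numpy as np

      img_dir, mask_dir = 'data/images/', 'data/masks/'    # placeholder paths

      img_names = sorted(os.listdir(img_dir))               # identical sorting for both lists
      mask_names = sorted(os.listdir(mask_dir))

      # Sanity check, assuming images and masks share the same file names
      assert all(i == m for i, m in zip(img_names, mask_names))

      images = np.array([cv2.resize(cv2.imread(img_dir + f, 1), (256, 256)) for f in img_names])
      masks = np.array([cv2.resize(cv2.imread(mask_dir + f, 0), (256, 256),
                                   interpolation=cv2.INTER_NEAREST) for f in mask_names])
      ```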

  • @biswassarkarinusa3230
    @biswassarkarinusa3230 3 года назад

    Hello sir, I was trying to segment exudates from retinal fundus images for detection of diabetic retinopathy, but I ran into an error in the model.fit section:
    ValueError: Error when checking target: expected sigmoid to have shape (None, None, 1) but got array with shape (2848, 4288, 3)
    I tried reshaping/resizing the training images but could not fix it. Can you give some idea regarding that? Thank you.

    • @samk4584
      @samk4584 3 года назад

      Did you find a solution please?

    • @biswassarkarinusa3230
      @biswassarkarinusa3230 3 года назад

      @@samk4584 No brother, I am still stuck with that issue -_-

    • @samk4584
      @samk4584 3 года назад

      @@biswassarkarinusa3230 i am working on the same subject , trying to segment those lesions is hard, good luck!

    • @biswassarkarinusa3230
      @biswassarkarinusa3230 3 года назад

      @@samk4584 Thank you. Are you facing the same issues? If you have any other solutions please let me know. Thank you.

    • @rishikhajuria3029
      @rishikhajuria3029 2 года назад

      @@biswassarkarinusa3230 I also have a similar problem with the sigmoid shape. Did you find a solution?
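
      For the sigmoid-shape errors discussed in this thread, the usual cause is masks loaded as 3-channel RGB and/or without the batch dimension; a hedged sketch (placeholder path, illustrative size) of bringing a mask to the (N, H, W, 1) shape a single-channel sigmoid output expects:

      ```python
      import cv2
      import numpy as np

      mask = cv2.imread('exudate_mask.png', 0)                               # read as single channel (placeholder path)
      mask = cv2.resize(mask, (256, 256), interpolation=cv2.INTER_NEAREST)   # resize to the model's input size
      mask = (mask > 0).astype('float32')                                    # binary 0/1 labels

      y = np.expand_dims(mask, axis=(0, -1))                                 # -> (1, 256, 256, 1)
      print(y.shape)
      ```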

  • @indirakar5095
    @indirakar5095 2 года назад

    I have some CT image data but I don't know how to do the masking. Any idea ?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      You can try annotation tools like Label Studio or Labelme.

    • @indirakar5095
      @indirakar5095 2 года назад

      @@DigitalSreeni thank you so much. I will try with this

  • @ananyabhattacharjee4217
    @ananyabhattacharjee4217 2 года назад

    This piece didn't work for me. I am not able to fit the model at the end

  • @successlucky7619
    @successlucky7619 6 месяцев назад +2

    Hi Dr Sreeni, I must confess that following your teachings has made me see that I can continue in this field! Thank you for the effort, time, and resources you put into making these videos. Two years later this is still evergreen. While following your videos I ran into an issue that I've tried to resolve without success. It is with the segmentation library: the error I get when I try to import it is AttributeError: module 'keras.utils' has no attribute 'generic_utils'. I've gone on Stack Overflow and tried the suggested solution of downgrading Keras, but it still isn't working. Please kindly assist with resolving this issue, as I'd love to explore this library. Thank you so much.

    • @DigitalSreeni
      @DigitalSreeni  6 месяцев назад +2

      Troubleshooting is an important skill that you need to develop. In your case, the error mentions about keras.utils not having an attribute 'generic_utils'. I quickly checked it on colab and it gave the same error and also pointed to the specific file contributing to this error. This is exactly the path to the file on colab.
      /usr/local/lib/python3.10/dist-packages/efficientnet/__init__.py
      You need to identify this file in your specific location, in case you are testing this on your local system.
      In this file, search for generic_utils. Here is the line giving us the error (line 71):
      keras.utils.generic_utils.get_custom_objects().update(custom_objects)
      In the newer versions of tensorflow.keras, the get_custom_objects() is available directly under keras.utils. This means you just need to delete the 'generic_utils' part from the above line making it simply: keras.utils.get_custom_objects().update(custom_objects)
      Restart the colab runtime and run the cell again. You just run the code again if you are working locally.
      How did I figure this out?
      - I installed the segmentation models library and imported it to see the error.
      - I paid attention to the error and realized that a specific line in a specific file is the cause.
      - I then experimented with the line giving the error:
      First: I did - from tensorflow.keras import utils - this worked fine
      Then, I tried - utils.generic_utils which gave me the same error. But I don't really care about this specific method. I actually care about get_custom_objects method. So I tried directly importing from utils and it worked fine: utils.get_custom_objects
      So I edited the __init__.py file by removing the generic_utils part and everything worked fine.
      What would I have done if utils.get_custom_objects had not worked? Search the Keras documentation for "get_custom_objects" to find where exactly it moved and update the code accordingly.
      This entire thing took just about 5 to 10 minutes. Please consider such error messages to be opportunities to dig deeper and learn more about troubleshooting. Good Luck and I hope this response helped you.
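
      An untested alternative based on the same reasoning, for anyone who prefers not to edit installed files: shim the missing attribute before importing the library (this assumes efficientnet only needs get_custom_objects from the old location):

      ```python
      import types
      import keras.utils

      # Recreate keras.utils.generic_utils with the one function efficientnet's __init__ looks up
      if not hasattr(keras.utils, 'generic_utils'):
          keras.utils.generic_utils = types.SimpleNamespace(
              get_custom_objects=keras.utils.get_custom_objects)

      import segmentation_models as sm   # should now import without the AttributeError
      ```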

    • @Fan-vk9gx
      @Fan-vk9gx 2 месяца назад

      @DigitalSreeni I had the exact same issue and it bugged me for a while. I was just surfing online aimlessly and found your answer. Your response helped me a lot, not only in this case, but also as a path I can follow for troubleshooting. Thank you soooo much!

  • @rishikhajuria3029
    @rishikhajuria3029 2 года назад

    I am also getting the error in the fit function that you were getting, sir, the same one underlined in red. How do I proceed, sir?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      Looks like some syntax error, remove the last comma.

  • @chaosdesigner123
    @chaosdesigner123 3 года назад +6

    I'd be very happy if you can share your code for augmentation

    • @marcusbranch2100
      @marcusbranch2100 3 года назад

      It would be really awesome. Do you already know how to do this data augmentation?

  • @dentonlister
    @dentonlister 2 года назад

    I've followed your code exactly but am getting error: AttributeError: 'Unet' object has no attribute 'compile'. The docs don't seem to mention compiling at all, or the search bar on it isn't working for me. Could you help me?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      Looks like you may be having a file in the same directory called unet.py. So when you import unet it may be importing your file rather than the one from another library. Just rename your local unet.py to something else. This is what I can think of with limited information about your system.

    • @dentonlister
      @dentonlister 2 года назад

      @@DigitalSreeni I don't have any files called unet.py. Is there anything else you can think of?

  • @lalitsingh5150
    @lalitsingh5150 3 года назад

    Sir, I have breast thermograms for segmentations...how do I generate a mask?

    • @NehadHirmiz
      @NehadHirmiz 3 года назад +1

      I would recommend using Label Studio. github.com/heartexlabs/label-studio. This is a fantastic tool to do data annotation

  • @kibruyesfamesele3087
    @kibruyesfamesele3087 Год назад

    I am happy with your tutorials and I want to apply this to plant disease detection with four classes (one folder per disease) and 6000 images. I got the error "got multiple values for argument 'batch_size'" on validation. Please help me with it.

  • @jinchuntew2738
    @jinchuntew2738 2 года назад

    Hi Sir, I am running the code on Google Colab and I faced an error in model.fit. The error message is "TypeError: Input 'y' of 'Mul' Op has type uint8 that does not match type float32 of argument 'x'." May I know how to resolve this issue on Google Colab?

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      Just convert your data into float32 and see if that helps.
      x = x.astype(np.float32)
      y = y.astype(np.float32)

    • @jinchuntew2738
      @jinchuntew2738 2 года назад

      @@DigitalSreeni Thank you for your reply. I tried converting to float32. It is then able to run the first epoch before the same error appears (originally the error occurred before the first epoch). I am still not able to fit the model; the error is the same.

    • @jinchuntew2738
      @jinchuntew2738 2 года назад

      Also, at 22:45 in the video there seems to be a similar error: there is a red underline below validation_data, which looks like my problem.

  • @ccr4igg
    @ccr4igg Год назад

    Thank you so much sir

  • @unamattina6023
    @unamattina6023 Год назад

    I can not download the dataset, do you have any available link?

  • @chouchou2445
    @chouchou2445 3 года назад

    Hello, please, I tried the data augmentation code and ran into this error:
    IndexError: list index out of range
    on the line (mask = masks[number]).
    The code doesn't generate more than 20 augmented images, then it stops with this error!

    • @chouchou2445
      @chouchou2445 3 года назад +1

      I solved it, thanks ^_^

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      The pleasure you get in solving your own issues is incredible. I learn a lot during troubleshooting.

  • @hamidt-sarraf3069
    @hamidt-sarraf3069 3 года назад

    Hi sir, I did the training, but for prediction, I get the following error:
    ------> cannot reshape array of size 1048576 into shape (1024,1024,3)
    the SIZE_X and SIZE_Y = 1024
    it performs the prediction but when I apply:
    #View and Save segmented image
    pred = prediction.reshape(mask.shape)
    I got that error.
    The mask shape and test_image shape are (1024, 1024, 3), arrays of uint8.
    When I apply test_img = np.expand_dims(test_img, axis=0),
    the test_image becomes (1, 1024, 1024, 3), an array of uint8,
    and then by applying prediction I get (1, 1024, 1024, 1), an array of float32.
    thank you for your amazing tutorials.
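
    The prediction here has a single channel (1024 x 1024 x 1 = 1,048,576 values), so it cannot be reshaped to the 3-channel mask shape; a hedged sketch of the usual fix:

    ```python
    import numpy as np

    prediction = np.random.rand(1, 1024, 1024, 1).astype('float32')   # stand-in for model.predict output

    pred_image = prediction.reshape(1024, 1024)      # reshape to H x W, not to the 3-channel mask shape
    pred_mask = (pred_image > 0.5).astype('uint8')   # threshold to a binary segmentation map
    print(pred_mask.shape)
    ```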

  • @wg2
    @wg2 11 месяцев назад +3

    Literal goldmine, this could have solved a lot of my problems 1 year ago 🤦‍♂.

  • @giulsdrll
    @giulsdrll 3 года назад +1

    iou_score should be a number between 0 and 1, according to its definition and to the segmentation model library documentation. Unfortunately, I obtain iou_score bigger than 1 such as 16 or 3 for example and no error occurs in the code. Can anyone help me understand what I am doing wrong, please?
    Is IoU expressed as a percentage? It doesn't seem like that in the documentation...
    For completeness, in the model.compile function I use bce_jaccard_loss as the loss function and iou_score as a metric. (I use Colab.)
    @DigitalSreeni Thank you for the useful content and your plain explanations

    • @lijinp3430
      @lijinp3430 2 года назад

      I am having the same issue. Were you able to resolve it?

    • @giulsdrll
      @giulsdrll 2 года назад

      @@lijinp3430 Yes. The problem was in the data normalization. You need to check that the values of your image are normalized (between 0 and 1), before putting them into the model. Otherwise the metrics will give you problems. To be sure, check data normalization immediately before the model training.

    • @lijinp3430
      @lijinp3430 2 года назад

      @@giulsdrll I have normalised by dividing by 255, but the problem is that the score becomes 0.025 or some 0.0-something value. It never reaches 0.2 or higher.

  • @fardinsaboori8770
    @fardinsaboori8770 3 года назад

    Thanks a lot for this great tutorial, can you please share the dataset(pictures) with us?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад

      The link to dataset is given in the description of the video.

    • @fardinsaboori8770
      @fardinsaboori8770 3 года назад

      @@DigitalSreeni thanks a lot

    • @fardinsaboori8770
      @fardinsaboori8770 3 года назад

      @@DigitalSreeni Hello, I have been trying to sign up and download the dataset from the website but the website has technical issues and I can't receive it, can you please upload the dataset to your GitHub account so we can dl the dataset from there?

  • @arindamkashyap6308
    @arindamkashyap6308 2 года назад

    Sir, can you make a video on image segmentation for dental data?

  • @satyasismishra3489
    @satyasismishra3489 2 года назад

    Sir, good evening. Can I run this program in Anaconda / Jupyter? Please provide an answer.

    • @DigitalSreeni
      @DigitalSreeni  2 года назад

      The IDE that you use does not matter.

  • @dardar9913
    @dardar9913 3 года назад

    Is anyone able to access the dataset?

  • @darasingh8937
    @darasingh8937 2 года назад

    Thank you!!

  • @talha_anwar
    @talha_anwar 3 года назад

    If we have two or three classes, for example mitochondria and nucleus in one image, do we still treat it as a 2D image or a 3D image?

  • @iamkrty522
    @iamkrty522 Год назад

    I am neither finding the code nor the dataset anywhere

    • @DigitalSreeni
      @DigitalSreeni  Год назад +1

      The code is on my GitHub, link provided under description. Alternate link to the dataset is: www.epfl.ch/labs/cvlab/data/data-em/

  • @sattysattu
    @sattysattu 3 года назад

    Please make a video on Tensorflow Object detection Api :)

  • @AA-qe9hm
    @AA-qe9hm 3 года назад

    It says attribute error when I try and import segmentation_models

    • @DigitalSreeni
      @DigitalSreeni  3 года назад

      Please read their documentation; you need to meet the minimum versions of Keras and TensorFlow.

  • @prohibited1125
    @prohibited1125 Год назад

    amazing

  • @djdekabaruah3457
    @djdekabaruah3457 3 года назад

    Hello sir, very good tutorial. Could you please share the code for data augmentation?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      It is on my github. Just look for 177.
      github.com/bnsreenu/python_for_microscopists

    • @djdekabaruah3457
      @djdekabaruah3457 3 года назад

      @@DigitalSreeni thank you very much

  • @venkatesanr9455
    @venkatesanr9455 3 года назад

    Thanks a lot for your informative video. I am a beginner in image segmentation but I have some knowledge of image processing. I have started working on prostate cancer detection but am stuck on finding a medium or large dataset. Can anyone point to some sources/links for biomedical datasets suitable for ML approaches? That would be helpful.
    Thanks

    • @anishjain3663
      @anishjain3663 3 года назад

      You may find data on Kaggle.

    • @venkatesanr9455
      @venkatesanr9455 3 года назад

      @@anishjain3663 I believe the images are not available on Kaggle. If they are, kindly point me to them.

  • @chouchou2445
    @chouchou2445 3 года назад

    Thank you for the video.
    I am interested in data augmentation, please make a video about it.
    Also, I am stuck with AJI (Aggregated Jaccard Index), how do I code it?
    I have become a fan of yours and would be so thankful if you could explain that to me.

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      I'm planning a video on data augmentation, should be out soon.

  • @HenrikSahlinPettersen
    @HenrikSahlinPettersen 2 года назад

    For a tutorial on how to do deep learning based segmentation without the need to write any code using only open-source free software, we have recently published an arXiv preprint of this pipeline with a tutorial video here: ruclips.net/video/9dTfUwnL6zY/видео.html (especially suited for histopathological whole slide images).