204 - U-Net for semantic segmentation of mitochondria

  • Published: 16 Nov 2024

Comments • 116

  • @DigitalSreeni
    @DigitalSreeni  3 years ago +1

    Just want to let you guys know that I love Kite's AI-powered coding assistant. It works great, giving smart completions and documentation as we type. Check it out if you are looking for smart completion tools while coding.
    www.kite.com/get-kite/?downloadstart=false

  • @kannanv9304
    @kannanv9304 3 years ago +3

    Dear Ajarn, my telepathy was telling me, and I was expecting this U-Net video from you, as I was about to go the "Convolutional + RF" way for a segmentation task. Can't blink while learning through your tutorials. Another in-depth and informative tutorial, and I am awaiting the sequels to it on instance segmentation. As always, my humble pranams to you.

  • @amarug
    @amarug 2 years ago +1

    The quality of your videos is insane, thank you so much!!

  • @ankurgupta3749
    @ankurgupta3749 3 years ago +2

    Trying to apply U-Net for glaucoma detection.
    This helped a lot sir, thank you so much
    🙏🏽

  • @DrRubidium
    @DrRubidium 3 years ago

    That is a superb application of U-Net. Thank you.

  • @willberger96
    @willberger96 1 year ago

    Hey Sreeni, first thank you for all your training sessions. I truly appreciate them and enjoy the way you present them!
    I did notice a small bug. I followed your steps, and when extracting the images from the tiff file, where you break them out into 256x256 images and save them to the images and masks folders, the issue is that on Linux, os.listdir does not return the files in the same order, meaning mask_dataset[0] will likely not correspond to image_dataset[0]. To fix it I did the following:
    import os

    # Sort both listings so image N always pairs with mask N.
    images = sorted(os.listdir(image_directory))
    masks = sorted(os.listdir(mask_directory))
    Hope that helps someone.

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      Thanks for pointing this out. I should have mentioned it in my videos. I sort the files when I work on Colab, where images and masks may not be lined up by file name by default.

  • @yogitasawant3017
    @yogitasawant3017 5 months ago

    Thank you so much for all your informative videos, sir!

  • @roby1251
    @roby1251 2 years ago

    Hello Sreeni Sir, I'm so happy that I discovered your channel and videos on neural networks and cell image classification; they're really a gold mine of information!
    May I know in which video exactly you first talked about how to save and load models and their results? (I'm referring to 17:36.) I've watched your first 60 videos and lost track of them. Thank you in advance and cheers!

  • @NopeYup-i5f
    @NopeYup-i5f 2 months ago

    Thank you very much for your work, it has proven really helpful! Is it possible to use image augmentation in this simple model (without having to use flow_from_directory)?
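
    For what it's worth, a minimal sketch of paired augmentation without flow_from_directory, assuming X_train/y_train are the image and mask arrays and model is the compiled U-Net; the shared seed keeps both generators applying identical transforms:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    aug = dict(rotation_range=90, horizontal_flip=True, vertical_flip=True)
    seed = 42

    image_gen = ImageDataGenerator(**aug).flow(X_train, batch_size=16, seed=seed)
    mask_gen = ImageDataGenerator(**aug).flow(y_train, batch_size=16, seed=seed)

    # zip yields (image_batch, mask_batch) pairs with matching transforms.
    model.fit(zip(image_gen, mask_gen),
              steps_per_epoch=len(X_train) // 16, epochs=25)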

  • @linameghouche1392
    @linameghouche1392 8 months ago

    Hi, I really liked the way you explain the network! I have a question about normalization of images: should we normalize even for RGB images, for example the ISIC dataset?
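
    For reference, the usual answer is yes: RGB inputs are normalized the same way as grayscale. A minimal sketch, assuming rgb_images is a uint8 array of shape (N, H, W, 3), e.g. ISIC images:

    import numpy as np

    # Scale all three channels to [0, 1]; the network trains on floats.
    rgb_images = rgb_images.astype(np.float32) / 255.0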

  • @deepakraj008
    @deepakraj008 3 years ago

    Little correction: instead of reading the image using cv2, you can open it with PIL and convert it to a NumPy array directly:
    import numpy
    import PIL.Image
    img = numpy.asarray(PIL.Image.open('test.jpg'))

  • @bijoyalala5685
    @bijoyalala5685 3 years ago

    Hello Sreeni Sir, thanks for an informative video about semantic segmentation. I have a request to make: how about explaining the learning curve of the segmentation that you have done here?
    I have seen your tutorials about the learning curve, loss function, and accuracy metric. Those videos are also very informative. In addition, I want to learn, for this semantic segmentation, what the resulting loss and validation accuracy values mean, and what impact the learning curve has on segmentation.
    You have explained everything so precisely so far. It would be a great help to me if you consider my request.
    Thank you.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      The only point of the learning (loss) curve is for us to keep an eye on it to make sure it is trending downward, and to make sure the training and validation curves are close together. For semantic segmentation you can track IoU and monitor it instead of accuracy, so you can understand the general trend in segmentation quality. Please stay tuned for the upcoming videos where we monitor IoU instead of accuracy.
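
      A minimal sketch of tracking IoU this way, assuming a compiled binary U-Net named model (BinaryIoU needs TF >= 2.6; plain MeanIoU would require thresholding the sigmoid outputs first):

      import tensorflow as tf

      # threshold=0.5 binarizes the sigmoid output before the IoU is computed.
      model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy',
                             tf.keras.metrics.BinaryIoU(threshold=0.5)])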

  • @amitkumar-od1ui
    @amitkumar-od1ui 1 year ago

    Hey man, you are doing a great job! Thanks for these video lessons.

  • @khondokermirazulmumenin8201
    @khondokermirazulmumenin8201 3 years ago +2

    You are the best.

  • @nouraalmusaynid751
    @nouraalmusaynid751 2 years ago

    Keep going 👏🏻👏🏻👏🏻

  • @joaosacadura6097
    @joaosacadura6097 3 years ago

    I applied U-Net to some high-resolution orthophoto datasets (9-30 cm per pixel) with RGB + IR channels, with ground truth masks for vehicles, roofs, roads, etc.
    So I prepare all the images before inputting them into the model: I cut the (10000, 10000, 4) orthophotos into pieces of (512, 512, 4), and before feeding them to the model I have to resize them to (128, 128, 4), because at larger sizes the kernel just implodes and I can't run the model. I'm afraid that with all this resizing I'm losing the shape of the objects that I want to predict. I thought of applying Gaussian filters just to smooth the resolution, but since I heard that you will talk about this in upcoming videos, I will wait and see. I wonder if it improves the results.
    Keep up the great videos!

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      This is a common problem when working with limited computing resources. Big companies invest in big hardware that lets them handle large datasets, but as individuals we need to work with resources we can access, such as Google Colab. In general it is recommended to work with batch sizes of 32 or 64: this makes sure that enough data is provided in each batch, and keeping batches moderate also helps the model generalize. So you need to find the smallest image size that you can handle in a batch of 32. You cannot fit 512x512 images at batch size 32 on the hardware typically available to us, so you need to crop them down to maybe 128x128, which you seem to have done. Unfortunately, when you crop images too small you may be cutting through large features, and they lose context, which is important for segmentation. In such cases you can consider using 256x256 with a smaller batch size. Once you train the model, you can apply it to large images by cropping them into smaller patches and combining the predicted patches.
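
      A rough sketch of that last step (predict a large image patch by patch, then stitch), assuming a trained binary model with 256x256x1 input and a 2D large_image whose sides are multiples of 256:

      import numpy as np
      from patchify import patchify, unpatchify

      patches = patchify(large_image, (256, 256), step=256)  # non-overlapping tiles
      predicted = np.zeros(patches.shape, dtype=np.uint8)

      for i in range(patches.shape[0]):
          for j in range(patches.shape[1]):
              tile = patches[i, j][np.newaxis, :, :, np.newaxis] / 255.0
              pred = model.predict(tile)[0, :, :, 0]
              predicted[i, j] = (pred > 0.5).astype(np.uint8)

      # Reassemble the per-patch predictions into a full-size mask.
      full_mask = unpatchify(predicted, large_image.shape)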

    • @joaosacadura6097
      @joaosacadura6097 3 years ago

      @@DigitalSreeni Yup, that's exactly what I did. With 256x256 I set the batch size to 16 or 32 with 2136 images, and for the 'cars' class it reached an IoU score of 70% after 10 epochs. I was wondering if there's a way of changing the parameters of the model so I can input these images at a larger size and train without losing object features, or if I just need to try with better resources.
      Thanks for the reply!

    • @matancadeporco
      @matancadeporco 3 years ago

      @@joaosacadura6097 Hey João, I'm trying to do multiclass semantic segmentation of a pest symptom on tomato leaves, but I'm having difficulties. Could you help me?

  • @alokchauhan6653
    @alokchauhan6653 1 year ago

    Great explanation. A question though, @DigitalSreeni: would it be possible to somehow extract the region of interest from the predicted image and store the pixel coordinates in CSV or JSON format, so that we can import the point coordinates into ImageJ in case one wants to correct/adjust the ROIs manually? Cheers!

  • @sohailmaqsood381
    @sohailmaqsood381 1 year ago

    I have a question, can anyone help me? How do I split a stack of 1600 images into 1600 single images? I am stuck there. I downloaded the dataset as one image stack. Can anyone help me with this issue?

  • @rachelbj3840
    @rachelbj3840 2 years ago

    Hello Sreeni, loads of thanks for the wonderful video lecture. Am I allowed to use your code for research purposes?

  • @khaleddawoud363
    @khaleddawoud363 3 years ago

    Hi Sir, first of all thank you for your work. I want to use 3-channel RGB input in the model; I tried changing this in the list but it did not work. Can you please provide the code for the RGB version of the images and masks part?

  • @Hartvig5k
    @Hartvig5k 2 years ago

    Amazing, as always! A general question regarding input channels: considering H&E stained images, do you think color deconvolution as a preprocessing step should help segmentation? Having tried it, it seems not to make much difference. I am assuming network training would focus on the decisive color ranges regardless?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Color deconvolution may not make a difference if you are using deep learning for segmentation, because deep learning learns from the raw data much better than other techniques do. Color deconvolution helps when you perform traditional image analysis.

  • @vikashkumar-cr7ee
    @vikashkumar-cr7ee 2 years ago

    Dear Sreeni,
    I am requesting you to make a tutorial on how to run this lecture, or any lecture, on Google Colab, as Colab provides compatibility and a GPU, and also on how to import datasets from external sites (without uploading to a local drive or Google Drive).

  • @hosniboughanmi4130
    @hosniboughanmi4130 3 years ago +1

    Dear Ajarn, thank you for this video. Could you please tell me how you generated the patches from the data?

    • @СвятославЕрмилин
      @СвятославЕрмилин 3 years ago

      You can use any online resource to split one .tif file into many (or google some Python code, like I did :) ), and the Python module image_slicer to generate patches from the source data, e.g. image_slicer.slice('file.tif', 16), where the second argument is the number of tiles.

  • @ShahidKhan-jp9mt
    @ShahidKhan-jp9mt 6 months ago

    Please make a video on the HuBMAP competition: hacking the human vasculature, segmenting structures from 2D PAS-stained human kidney images.

  • @anitadalla
    @anitadalla 3 years ago

    Thank you very much. I have learned a lot from your methods. Can you please apply one or two pretrained models like ResNet50/EfficientNetB7, or an ensemble of 3-4 models, on HAM10000? Please, please, please make a video on that also.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Another thing to add to my list. But it is easy, as I showed in my ensemble videos, so I hope you don't have to wait for me to make these videos; they take time.

  • @kctsui4351
    @kctsui4351 1 year ago

    Thanks very much, this is very useful. I encountered an error: "NotImplementedError: Cannot convert a symbolic tf.Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported." when I ran the code in Google Colab. I have tried changing the versions of tensorflow and numpy but the issue persists. How can I solve it?

  • @akashdebnath3631
    @akashdebnath3631 3 years ago +1

    Great, sir.

  • @padmavathiv2429
    @padmavathiv2429 3 years ago

    Hello sir, your videos are very helpful for me. Can U-Net be applied to lung segmentation too? Will it give better accuracy?
    Thanks

  • @samarafroz9852
    @samarafroz9852 3 years ago

    Superb, sir.

  • @hadeerabdellatif2335
    @hadeerabdellatif2335 3 years ago

    I applied the same as in the video but it gives me this result: "loss: 0.0477 - accuracy: 0.0022 - val_loss: 0.0600 - val_accuracy: 0.0022". What am I doing wrong? Please reply.

  • @meysamakbari6523
    @meysamakbari6523 3 months ago

    Hello,
    I have a set of images of about 1024x1024, but the objects to be segmented are big, almost 600x600. My dataset is quite small, and I'm not sure if patchify makes sense here?

  • @venkatesanr9455
    @venkatesanr9455 3 years ago

    Hi Sreeni sir, thanks for the highly informative content as usual. I have one doubt: can U-Net perform well when applied to a small dataset, like 100 images, or should we go with only the ML approach? Kindly share your experience.

  • @tapansharma460
    @tapansharma460 3 years ago

    Great work, sir.

  • @moisesdesouzafeitosa3364
    @moisesdesouzafeitosa3364 3 years ago

    Hi, thank you for the amazing video.
    Is there a video on your channel using a U-Net with data augmentation?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      It will come soon. Please stay tuned.

  • @dimane7631
    @dimane7631 2 years ago +2

    To get image and label patches: ruclips.net/video/7IL7LKSLb9I/видео.html

  • @farizasiddiqua17
    @farizasiddiqua17 1 year ago

    Thank you so much!

  • @saqibqamar9270
    @saqibqamar9270 2 years ago

    Hello Sreeni Sir,
    Your videos are very interesting and helpful for deep learning folks. I want to do instance segmentation of the overlapped region of two tissues in my microscopic images. Which method is better for measuring the overlapped region of the same tissues for instance segmentation? Is it the U-Net + watershed approach you mentioned for your next lecture?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      When you have overlapping objects that you'd like to segment, you have to find a method that incorporates shape in the training process. If the overlapped region comes in many shapes, the problem gets challenging. If you do not care about the part of a region that lies underneath another region and only want to separate the boundary between objects, you can try U-Net followed by watershed.
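
      A minimal sketch of the U-Net + watershed idea, assuming binary_mask is the thresholded U-Net prediction (a boolean 2D array):

      import numpy as np
      from scipy import ndimage
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      # Peaks of the distance transform become one marker per object, so
      # touching objects get separate labels when the watershed floods the mask.
      distance = ndimage.distance_transform_edt(binary_mask)
      coords = peak_local_max(distance, min_distance=10, labels=binary_mask)
      markers = np.zeros(distance.shape, dtype=int)
      markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
      labels = watershed(-distance, markers, mask=binary_mask)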

    • @saqibqamar9270
      @saqibqamar9270 2 years ago

      @@DigitalSreeni Thank you so much for your response. Would you suggest methods to incorporate object shape during the training process? I want to segment overlapped regions that come in many shapes. I would be highly grateful for your suggestion or a particular video on it. I can send sample images.

  • @almag4810
    @almag4810 1 year ago

    I tried making a similar project, but for some reason my predicted masks are all black, and I'm not sure how to fix this. I double-checked and triple-checked: the model is correct, my dataset is also correct, and my images match their ground truth masks. I used Dice score as a metric, but it's always 0.

    • @edenvelascohernandez7633
      @edenvelascohernandez7633 1 year ago

      I haven't been able to get it working with my own dataset either :C Did you ever solve it?

  • @NourElhudaAlqudah
    @NourElhudaAlqudah 10 months ago

    Hello, thank you for your channel. I'm interested in extracting and selecting features during the image processing phase; if you have any code, please share it. Thank you.

  • @rehmanayounis8429
    @rehmanayounis8429 3 years ago

    Hi Sreeni, thank you for the amazing videos. At the start of the video you mentioned a script for dividing large images into small chunks. Could you please elaborate on it or share the code?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You can use the patchify library for this task; it is very easy to use and works for 2D and 3D images. pypi.org/project/patchify/
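
      A minimal patchify sketch, assuming training.tif is a multipage tiff stack of 2D slices whose sides are multiples of 256:

      from patchify import patchify
      from tifffile import imread

      stack = imread('training.tif')                      # (num_slices, H, W)
      patches = patchify(stack[0], (256, 256), step=256)  # grid of 256x256 tiles
      patches = patches.reshape(-1, 256, 256)             # (n_patches, 256, 256)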

    • @GodsOwn4142
      @GodsOwn4142 3 years ago

      The link to the tutorial on how to divide large images is here: ruclips.net/video/7IL7LKSLb9I/видео.html

  • @chandrakanthats2523
    @chandrakanthats2523 2 years ago

    Please let me know the TensorFlow and Keras version requirements for this.

  • @ism_9648
    @ism_9648 3 years ago

    These early studies proposed hybrid solutions based on independent component analysis (ICA) [3], wavelet transform [4-6], support vector machine (SVM) [5] and principal component analysis (PCA) [6]. In [6], features are extracted from MR images with discrete wavelet transformation (DWT); then PCA is employed for feature reduction, and finally feed-forward backpropagation artificial neural network (FP-ANN) and k-nearest neighbor (k-NN) classifiers are used to classify normal and abnormal brain MR images. Can you make a video to help me with this?

    • @NourElhudaAlqudah
      @NourElhudaAlqudah 10 months ago

      Hello, would you share the references for the articles you mentioned above? I'm interested in extracting and selecting features during the image processing phase; if you have any code, please share it. Thank you.

  • @Brickkzz
    @Brickkzz 3 years ago

    Thanks for this amazing vid. How big is your training set (i.e., how many labelled mitochondria)? I'm thinking of doing something similar, but labelling thousands of pictures may be tedious.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      The training dataset is 165 large images with about 10 mitochondria per image on average, so about 1600 total mitochondria. Labeling is time consuming, but if your research relies on it then you find a way to get your dataset labeled. If not, try augmentation, but the results will not be as accurate.

    • @kaydee6328
      @kaydee6328 3 years ago

      @@DigitalSreeni Thanks a lot for your videos. I learned quite a lot from them. Could you recommend some efficient tools/software that could speed up image labeling or annotation? Thanks!

  • @RuchiTripathi-gu3hu
    @RuchiTripathi-gu3hu 2 years ago

    Sir, when I try to load images and masks, they do not come in the proper sequence during image and mask directory creation. What do I need to do?

  • @JS-tk4ku
    @JS-tk4ku 3 years ago

    Suppose we train with 280x280: how can I apply this model to something like 4000x4000?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +3

      That video is coming soon: a video on applying models trained on small patches to segment large images. Please stay tuned.

    • @JS-tk4ku
      @JS-tk4ku 3 years ago

      @@DigitalSreeni I've followed you for a while, and that has made me feel familiar with deep learning. Thanks for your contributions!

  • @agnarrenolen1336
    @agnarrenolen1336 7 months ago

    I'm a Python newbie, and I am unable to set up my Python environment to work with your code. I almost got there with Miniconda, but couldn't find a way to install patchify. Would you please give some tips on how to set up Python and Spyder with the correct environment and all the needed libraries? (I'm stuck on Windows.)
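
    For what it's worth, patchify is a plain PyPI package (see the pypi.org link elsewhere in this thread), so inside an activated conda environment it should install with pip:

    pip install patchify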

  • @sherrishah
    @sherrishah 2 years ago

    Does this U-Net work with different input sizes, e.g. 1024, as well?
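
    Since the network is fully convolutional, larger inputs generally work as long as each side survives the four 2x2 pooling steps, i.e. is divisible by 16. A hedged sketch, assuming a hypothetical builder build_unet that mirrors the video's model but takes the shape as arguments:

    # 1024 / 2**4 = 64, so 1024x1024 pools down cleanly through the encoder.
    model = build_unet(IMG_HEIGHT=1024, IMG_WIDTH=1024, IMG_CHANNELS=1)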

  • @minipc123
    @minipc123 1 year ago

    Hello sir, may I know why you chose to normalize axis 1 instead of -1, which is the default axis to normalize?

  • @sohailmalic
    @sohailmalic 1 year ago

    Hey Sreeni, I have run into "IndexError: list index out of range" in the Spyder interface while applying both of your code files and downloading the dataset through the mentioned link. I downloaded one image slide with all 1600 images.
    How do I overcome this problem?

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      Sohail miya, assalam alaikum, all well?
      This video shows U-Net based segmentation where the input needs to be 256x256x1. You need to get the data into the format (N x 256 x 256 x 1), as shown in the video. In my case, N was 1600 as I got that many images. If you are working with the same dataset and follow all the steps accordingly, you will end up with a shape of (1600, 256, 256, 1). If not, you may be making some mistake, which is not possible for me to guess. Please go through the code line by line and execute one line at a time. Look at the variable explorer to see if the result is what you expect. If things are still confusing, you may have to take a step back and learn a bit more about numpy, or wherever you are getting stuck. Good luck.
      Khuda Hafiz
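
      A minimal numpy sketch of the shape being described, assuming image_list holds N grayscale 256x256 patches:

      import numpy as np

      image_dataset = np.array(image_list)               # (N, 256, 256)
      image_dataset = np.expand_dims(image_dataset, -1)  # (N, 256, 256, 1)
      print(image_dataset.shape)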

  • @ahmedgaber8819
    @ahmedgaber8819 2 years ago

    Thanks sir for this video. I have two simple questions:
    1. test_img_other_norm = test_img_other_norm[:,:,0][:,:,None]
    What does [:,:,0][:,:,None] mean?
    2. prediction_other = (model.predict(test_img_other_input)[0,:,:,0] > 0.2).astype(np.uint8)
    What does (test_img_other_input)[0,:,:,0] mean?
    Thanks

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      In both cases I am just choosing the appropriate slice from a larger array. Please print the results and shapes without the [:,:,:] part to understand how it looks before and after.
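
      A quick demonstration of those two slices on dummy arrays:

      import numpy as np

      img = np.zeros((256, 256, 3))
      print(img[:, :, 0].shape)               # (256, 256): keep channel 0 only
      print(img[:, :, 0][:, :, None].shape)   # (256, 256, 1): add the channel axis back

      batch = np.zeros((1, 256, 256, 1))
      print(batch[0, :, :, 0].shape)          # (256, 256): first image, single channel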

    • @ahmedgaber8819
      @ahmedgaber8819 2 years ago

      @@DigitalSreeni thank you sir

  • @harrishvar7677
    @harrishvar7677 9 months ago

    Does this work only on 256x256 patches?

  • @tuyenlevan6804
    @tuyenlevan6804 3 years ago

    Hi, thank you for the amazing video.
    How do I create a new dataset?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      If you are inquiring about annotating your images to generate labels, then I use APEER for that purpose (www.apeer.com).

  • @vanshikahari4746
    @vanshikahari4746 3 years ago

    Hi. I wanted to know what should be done if my images do not match the corresponding masks when I do the sanity check. Please let me know.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You may find this tutorial useful... ruclips.net/video/XNf1ATR9OSk/видео.html

  • @منةالرحمن
    @منةالرحمن 3 years ago

    Hi sir,
    Please, is there any solution? I tried this with a 45-image dataset to do nuclei segmentation.
    There is no error, but the resulting image is black; no nuclei were detected!

  • @marcusbranch2100
    @marcusbranch2100 3 years ago

    Awesome video, Sreeni! Thanks a lot. How can I do online augmentation in that case? My dataset has two folders like yours (images and masks) and I want to apply online augmentation, feeding directly into the network. Can you help me with this?

    • @surflaweb
      @surflaweb 3 years ago +1

      Hi Marcus, I have a question about U-Net. Where did you label your images; what tool did you use?
      Is it compatible with the code of this video?
      Please share your knowledge.
      Thanks so much.

    • @marcusbranch2100
      @marcusbranch2100 3 years ago +1

      @@surflaweb Hey, I didn't need to label my images because the dataset I'm using comes ready with two folders (images and masks, as seen in the video); the only difference is that the images are in .png format, super easy to manipulate and deal with. So I didn't need to use any tools to label them, and yes, it is very compatible with the code of this video. Now I want to know how I can do online data augmentation and feed it directly to the network.

    • @surflaweb
      @surflaweb 3 years ago +1

      @@marcusbranch2100 OK man, thanks. Remember, if you do data augmentation you will need labels for those new images.

    • @marcusbranch2100
      @marcusbranch2100 3 years ago

      @@surflaweb Yeah, for sure. But the data augmentation is already applied to both the images and the masks.

    • @surflaweb
      @surflaweb 3 years ago +1

      @@marcusbranch2100 Do you know a tool to do that? Does the code of this video work only for binary classification?

  • @talha_anwar
    @talha_anwar 3 years ago

    Is the decoder in U-Net always exactly the opposite of the encoder?

  • @surflaweb
    @surflaweb 3 years ago

    Hi, I want to run this code, but where can I label my images? I want to use RGB images. Another question: if I have 3 classes, should I change the last layer to a softmax classifier?
    Thanks so much.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You can label/annotate your images here: www.apeer.com (it is free).
      Regarding multiclass U-Net, please stay tuned...
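
      In the meantime, the usual multiclass change (not shown in this video) is to swap the final layer; a sketch assuming prev is the last decoder feature map:

      from tensorflow.keras.layers import Conv2D

      # 3 classes -> 3 output channels with softmax across channels; compile
      # with categorical_crossentropy on one-hot encoded masks.
      outputs = Conv2D(3, (1, 1), activation='softmax')(prev)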

    • @surflaweb
      @surflaweb 3 years ago

      @@DigitalSreeni Last question, sir: does U-Net work with RGB images, or only with one channel?

    • @surflaweb
      @surflaweb 3 years ago

      Hi @DigitalSreeni, I tried using apeer.com but I don't know which "annotations as image" option to download. I have two options: binary mask or labeled image. Which of these two annotations should I choose for U-Net?
      Thanks so much.

  • @alirajabi2388
    @alirajabi2388 1 year ago

    Hi Sreeni, can you upload the dataset to your GitHub page?

  • @sudhakumaravel8277
    @sudhakumaravel8277 2 years ago

    Why am I getting my output as a black screen? Anyone, please reply.

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Black screen for what? Do you mean segmented images?

    • @sudhakumaravel8277
      @sudhakumaravel8277 2 years ago

      @@DigitalSreeni Yes sir, my input is an X-ray image (RGB) and the mask is also RGB, but the output is only a plain black image. Is 70 images enough to train the model?

    • @edenvelascohernandez7633
      @edenvelascohernandez7633 1 year ago

      @@sudhakumaravel8277 Were you able to solve it??

  • @manjaripalanichamy9800
    @manjaripalanichamy9800 3 years ago

    Can we use kernel initializers other than he_normal?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      For ReLU activation layers it is recommended to use he_normal, which makes sure the variance is appropriate based on your input data.
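
      Concretely, he_normal draws weights from a truncated normal with stddev sqrt(2 / fan_in), which keeps activation variance stable under ReLU. A sketch of a typical layer, with inputs assumed to be the model's input tensor:

      from tensorflow.keras.layers import Conv2D

      # Other initializers (e.g. the Keras default 'glorot_uniform') also work,
      # but he_normal is tuned for ReLU activations.
      c1 = Conv2D(16, (3, 3), activation='relu',
                  kernel_initializer='he_normal', padding='same')(inputs)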

    • @manjaripalanichamy9800
      @manjaripalanichamy9800 3 years ago

      @@DigitalSreeni thank you so much for the explanation

  • @matancadeporco
    @matancadeporco 3 years ago

    Anyone with experience in multiclass semantic segmentation who could help me? Truly appreciated.

  • @AhmedKhaled-qr7vc
    @AhmedKhaled-qr7vc 2 years ago

    Where is the data, please?

  • @jizhang02
    @jizhang02 3 years ago

    Hello, what's the difference between the 'concatenate' and 'add' operations in Keras? Thanks.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      The Add operation adds the two tensors elementwise, while Concatenate, as the name suggests, just puts the two tensors together along the defined axis. keras.io/api/layers/merging_layers/
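
      A quick shape check of the two merges:

      import tensorflow as tf
      from tensorflow.keras.layers import Add, Concatenate

      a = tf.zeros((1, 64, 64, 32))
      b = tf.zeros((1, 64, 64, 32))

      print(Add()([a, b]).shape)                 # (1, 64, 64, 32): elementwise sum
      print(Concatenate(axis=-1)([a, b]).shape)  # (1, 64, 64, 64): channels stacked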