Just want to let you guys know that I love Kite's AI-powered coding assistant. It works great, giving smart completions and documentation as you type. Check it out if you are looking for smart completion tools while coding.
www.kite.com/get-kite/?downloadstart=false
Dear Ajarn, my telepathy was telling me, and I was expecting this U-Net video from you, as I was about to go the "Convolutional + RF" way for a segmentation task. I can't blink while learning through your tutorials. Another in-depth and informative tutorial, and I'm awaiting the sequels to it on instance segmentation. And as always, my humble pranams to you.
the quality of your videos is insane, thank you so much!!
Trying to apply U-Net for glaucoma detection.
This helped a lot sir, thank you so much
🙏🏽
You are most welcome
That is a superb application of U-Net. Thank you
More on the way, stay tuned 😌
Hey Sreeni, first of all thank you for all your training sessions. I truly appreciate them and enjoy the way you present them!
I did notice a small bug. I followed your steps for extracting the images from the tiff file, where you break them into 256x256 images and save them to the images and masks folders. The issue is that on Linux, when you retrieve them using os.listdir, they are not returned in the same order, meaning mask_dataset[0] will likely not correspond to image_dataset[0]. To fix it I did the following.
import os

# os.listdir returns entries in arbitrary order on Linux, so sort both
# lists so each image stays aligned with its mask
images = sorted(os.listdir(image_directory))
masks = sorted(os.listdir(mask_directory))
Hope that helps someone.
Thanks for pointing this out; I should have mentioned it in my videos. I sort the files when I work on Colab, where images and masks may not be lined up by file name by default.
Thank you so much for all your informative videos, sir!
Hello Sreeni Sir, I'm so happy that I discovered your channel and your videos on neural networks and cell image classification; they're really a gold mine of information!
May I know in which video exactly you first talked about how to save and load models and their results? (I'm referring to 17:36.) I've watched your first 60 videos and lost track of them. Thank you in advance, and cheers!
Thank you very much for your work, it has proven really helpful! Is it possible to use image augmentation in this simple model (without having to use flow_from_directory)?
Hi, I really liked the way you explain the network! I have a question about normalization of images: should we normalize even for RGB images? For example, the ISIC dataset.
A little correction regarding reading an image with cv2 and converting it to a PIL image; you can read it and get a numpy array in one step:
numpy.asarray(PIL.Image.open('test.jpg'))
Hello Sreeni Sir, thanks for an informative video about semantic segmentation. I have a request: how about explaining the learning curve of the segmentation you have done here?
I have seen your tutorials about learning curves, loss functions, and accuracy metrics; those videos are also very informative. In addition, I want to learn, for this semantic segmentation, what the values of the loss function and validation accuracy mean, and what impact the learning curve has for segmentation purposes.
You have explained everything so precisely so far. It would be a great help if you consider my request.
Thank you.
The only point of the learning (loss) curve is for us to keep an eye on it, to make sure it is trending downward and that the training and validation curves stay close together. For semantic segmentation you can monitor IoU instead of accuracy, so you can follow the general trend in segmentation quality. Please stay tuned for the upcoming videos where we monitor IoU instead of accuracy.
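For anyone who wants to compute IoU themselves after prediction, a minimal numpy sketch (the binary masks and the 0.5 threshold here are assumptions, not from the video):

import numpy as np

def iou_score(y_true, y_pred_prob, threshold=0.5):
    # Threshold the predicted probability map into a binary mask,
    # then compute intersection-over-union against the ground truth
    y_pred = (y_pred_prob > threshold).astype(np.uint8)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union > 0 else 1.0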
Hey man, you are doing a great job! Thanks for these video lessons.
You are the best.
Keep going 👏🏻👏🏻👏🏻
I applied U-Net to some datasets of orthophotos with RGB + IR channels that I have at high resolution (9-30 cm per pixel), with the ground truth masks corresponding to vehicles, roofs, roads, etc.
So I prepare all the images before feeding them into the model: I cut the orthophotos of size (10000, 10000, 4) into pieces of (512, 512, 4), and before inputting them into the model I have to resize them to (128, 128, 4), because at larger sizes the kernel just implodes and I can't run the model. I'm afraid that with all this resizing I'm losing the shape of the objects I want to predict. I thought of applying Gaussian filters just to smooth the resolution, but since I heard you will talk about this in upcoming videos, I will wait and see whether it improves the results.
Keep up the great videos!
This is a common problem when working with limited computing resources. Big companies invest in big hardware that lets them handle large datasets, but as individuals we need to work with resources we can access, such as Google Colab. In general it is recommended to work with batch sizes of 32 or 64; this makes sure enough data is provided in each batch, and reasonably small batches also help the model generalize. So you need to find the largest image size that you can handle in a batch of 32. You cannot fit 512x512 images at batch size 32 on typical hardware available to us, so you need to crop them down to maybe 128x128, which you seem to have done. Unfortunately, when you crop images too small you may be cutting through large features, and they lose context, which is important for segmentation. In such cases you can consider using 256x256 with a smaller batch size. Once you train the model, you can apply it to large images by cropping them into smaller patches and combining the predicted patches.
@@DigitalSreeni Yup, that's exactly what I did. With 256x256 I set the batch size to 16 or 32 with 2136 images, and it ran; for the car class it reached an IoU score of 70% after 10 epochs. I was wondering if there's a way of changing the model's parameters so I can input these images at a higher size without losing object features, or if I just need to try with better resources.
Thanks for the reply!
@@joaosacadura6097 Hey João, I'm trying to do multiclass semantic segmentation of a pest symptom on tomato leaves, but I'm having difficulties. Could you help me?
Great explanation! A question though, @DigitalSreeni: would it be possible to extract the region of interest from the predicted image and store the pixel coordinates in CSV or JSON format, so that we can import the point coordinates into ImageJ in case one wants to correct/adjust the ROIs manually? Cheers!
I have a question, can anyone help me? How do I split a single stack of 1600 image slices into 1600 separate images? I am stuck there; I downloaded the dataset as one image stack. Can anyone help me with this issue?
Hello Sreeni, loads of thanks for the wonderful video lecture. Am I allowed to use your code for research purposes?
Hi Sir, first of all thank you for your work. I want to use 3-channel RGB images in the model; I tried changing this in the list but it did not work. Can you please provide the code for handling RGB images and masks?
Amazing, as always! A general question regarding input channels: considering H&E stained images, do you think color deconvolution as a preprocessing step should help segmentation? Having tried it, it doesn't seem to make much difference. I am assuming the network training would focus on the decisive color ranges regardless?
Color deconvolution may not make a difference if you are using deep learning for segmentation. This is because deep learning learns from the raw data much better than other techniques. Color deconvolution helps when you perform traditional image analysis.
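For reference, a minimal sketch of H&E color deconvolution with scikit-image's built-in stain separation (rgb_image is an assumed (H, W, 3) array):

from skimage.color import rgb2hed

# Separate an H&E stained RGB image into Hematoxylin, Eosin and DAB channels
hed = rgb2hed(rgb_image)
hematoxylin = hed[:, :, 0]
eosin = hed[:, :, 1]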
Dear Sreeni,
I am requesting you to make a tutorial on how to run this lecture, or any lecture, on Google Colab, as Colab provides compatibility and GPU support, as well as how to import the dataset from sites directly (without uploading it to a local drive or Google Drive).
Dear Ajarn, thank you for this video. Could you please tell me how you generated the patches from the data?
You can use any online resource to split one .tif file into many (or google some Python code, like I did :) ), and the Python module image_slicer to generate patches from the source data (something like image_slicer.slice('file.tif', ...)).
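A minimal image_slicer sketch (the tile count of 16 is just an example; a multi-page .tif stack may need to be split into single images first):

import image_slicer

# Split one image into 16 equal tiles and save them next to the source file
tiles = image_slicer.slice('file.tif', 16)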
Please make a video on the HuBMAP competition: hacking the human vasculature from 2D PAS-stained human kidney images.
Thank you very much, I have learned a lot from your methods. Can you please apply one or two pretrained models like ResNet50/EfficientNetB7, or an ensemble of 3-4 models, on HAM10000? Please, please make a video on that as well.
Another thing to add to my list. But it is easy, as I showed in my ensemble videos, so I hope you don't have to wait for me to make these videos; they take time.
Thanks very much and it is very useful. I encountered an error: "NotImplementedError: Cannot convert a symbolic tf.Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported." when I ran the code in google colab. I have tried to change the versions of tensorflow and numpy but the issue persists. How can I solve the issue?
great sir
Hello sir, your videos are very helpful for me. Can U-Net be applied to lung segmentation too? Will it give better accuracy?
thanks
Superb sir
I applied the same as in the video, but it gives me this result: "loss: 0.0477 - accuracy: 0.0022 - val_loss: 0.0600 - val_accuracy: 0.0022". What am I doing wrong? Please reply.
Hello,
I have a set of images of size about 1024x1024, but the objects to be segmented are big, almost 600x600. My dataset is quite small, and I'm not sure whether patchify makes sense here?
Hi Sreeni sir, thanks for the highly informative content as usual. I have one doubt: can U-Net perform well when applied to a small dataset of around 100 images, or should we go with a classical ML approach instead? Kindly share your experience.
great work sir
Keep watching
Hi thank you for the amazing video.
Is there a video on your channel that uses a U-Net with data augmentation?
It will come soon. Please stay tuned.
To get image and label patches: ruclips.net/video/7IL7LKSLb9I/видео.html
Thank you so much.
Thank you so much!
Hello Sreeni Sir,
Your videos are very interesting and helpful for deep learning folks. I want to do instance segmentation of the overlapped region of two tissues in my microscopic images. Which method is better for measuring the overlapped region of the same tissue for instance segmentation, such as the U-Net + Watershed you mentioned for your next lecture?
When you have overlapped objects that you'd like to segment you have to find a method that incorporates shape in the training process. If the overlapped region comes in many shapes, the problem gets challenging. If you do not care about the overlapped region that is underneath another region and only want to separate the boundary between objects, you can try Unet followed by watershed.
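A rough sketch of the U-Net + watershed idea with scikit-image (binary_mask is an assumed thresholded U-Net prediction; this is not the exact code from the upcoming lecture):

import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Peaks of the distance transform act as one marker per object,
# so touching objects get separated along the watershed lines
distance = ndimage.distance_transform_edt(binary_mask)
coords = peak_local_max(distance, labels=binary_mask, min_distance=10)
markers = np.zeros(binary_mask.shape, dtype=np.int32)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=binary_mask)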
@@DigitalSreeni Thank you so much for your response. Would you suggest methods to incorporate shape during the training process? I want to segment overlapped regions that come in many shapes. I would be highly grateful for your suggestion or a particular video on it. I can send sample images.
I tried making a similar project, but for some reason my predicted masks are all black, and I'm not sure how to fix this. I double-checked and triple-checked: the model is correct, my dataset is also correct, and my images match their ground truth masks. I used the Dice score as a metric, but it's always 0.
I haven't been able to make it work with my own dataset either :C Did you solve it yet?
Hello, thank you for your channel. I'm interested in extracting and selecting features during the image processing phase; if you have any code, please share it. Thank you.
Hi Sreeni, thank you for the amazing videos. At the start of the video you mentioned a script for dividing large images into small chunks. Could you please elaborate or share the code for it?
You can use patchify library to do this task, very easy to use and works for 2D and 3D images. pypi.org/project/patchify/
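A minimal patchify sketch (the file name and image size here are assumptions):

import tifffile
from patchify import patchify

large_image = tifffile.imread('large_image.tif')   # e.g. shape (768, 1024)
# Non-overlapping 256x256 patches; step=256 means no overlap between patches
patches = patchify(large_image, (256, 256), step=256)
print(patches.shape)   # (3, 4, 256, 256) for a 768x1024 input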
The link to the tutorial on how to divide images of large size is here: ruclips.net/video/7IL7LKSLb9I/видео.html
Please let me know the TensorFlow and Keras version requirements for this.
These early studies proposed hybrid solutions based on independent component analysis (ICA) [3], wavelet transform [4-6], support vector machine (SVM) [5] and principal component analysis (PCA) [6]. In [6], features are extracted from MR images with discrete wavelet transformation (DWT). Then PCA is employed for feature reduction, and finally feed-forward backpropagation artificial neural network (FP-ANN) and k-nearest neighbor (k-NN) based classifiers are used to classify the normal and abnormal brain MR images. Can you help me with this and make a video on it?
Hello, would you share the references for those articles you mentioned above? I'm interested in extracting and selecting features during the image processing phase; if you have any code, please share it. Thank you.
Thanks for this amazing vid. How big is your training set (i.e. how many labelled mitochondria)? I'm thinking of doing something similar, but labelling thousands of pictures may be tedious.
The training dataset is 165 large images with about 10 mitochondria per image on average. So about 1600 total mitochondria. Labeling is time consuming but if your research relies on it then you find a way to get your dataset labeled. If not, try augmentation but the results will not be as accurate.
@@DigitalSreeni Thanks a lot for your videos. I learned quite a lot from them. Could you recommend some efficient tools/software that could speed up image labeling or annotating? Thanks!
Sir, when I am trying to load images and masks, they do not come in the proper sequence when the image and mask lists are created. What do I need to do?
Suppose we train with 280x280 images; how can I then apply this model to something like 4000x4000?
That video is coming soon.... a video on applying models trained on small patches to segment large images. Please stay tuned.
@@DigitalSreeni I've followed you so far, and that makes me feel familiar with deep learning. Thanks for your contributions.
I'm a Python newbie, and I am unable to set up my Python environment to work with your code. I almost got there with Miniconda, but couldn't find a way to install patchify. Would you please give some tips on how to set up Python and Spyder with the correct environment and all the needed libraries? (I'm stuck on Windows.)
Does this U-Net work with different input sizes, for example 1024 as well?
Hello sir, may I know why you chose to normalize axis 1 instead of -1, which is the default axis to be normalized?
Hey Sreeni, I have faced an IndexError: list index out of range in the Spyder interface while applying both of your codes, after downloading the dataset through the mentioned link. I downloaded one image slide with all 1600 images.
How do I overcome this problem?
Sohail miya, assalam alaikum, is everything well?
This video shows U-Net based segmentation where the input needs to be 256x256x1. You need to get the data into the format (N x 256 x 256 x 1), as shown in the video. In my case, N was 1600 because I had that many images. If you are working with the same dataset and follow all the steps accordingly, you will end up with a shape of (1600, 256, 256, 1). If not, you may be making some mistake that is not possible for me to guess. Please go through the code and execute one line at a time. Look at the variable explorer to see if the result is what you expect. If things are still confusing, you may have to take a step back and learn a bit more about numpy or wherever you are getting stuck. Good luck.
Khuda Hafiz
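For anyone stuck getting that (1600, 256, 256, 1) shape, a minimal numpy sketch (image_dataset is an assumed list of 1600 grayscale 256x256 arrays read from the patches):

import numpy as np

X = np.array(image_dataset)      # stack the list -> (1600, 256, 256)
X = np.expand_dims(X, axis=-1)   # add channel axis -> (1600, 256, 256, 1)
print(X.shape)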
Thanks sir for this video. I have simple questions:
1. test_img_other_norm=test_img_other_norm[:,:,0][:,:,None]
What does [:,:,0][:,:,None] mean?
2. prediction_other = (model.predict(test_img_other_input)[0,:,:,0] > 0.2).astype(np.uint8)
What does the [0,:,:,0] mean here?
Thanks
In both cases I am just choosing the appropriate sliced array from a larger array. Please print the results and shapes without the [:,:,:] part to understand how it looks before and after.
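To see what those slices do, a small sketch with dummy arrays (shapes match the video's 256x256 single-channel setup):

import numpy as np

img = np.random.rand(256, 256, 3)
ch0 = img[:, :, 0]        # keep only the first channel -> (256, 256)
ch0 = ch0[:, :, None]     # add a channel axis back     -> (256, 256, 1)

pred = np.random.rand(1, 256, 256, 1)   # shape of model.predict on one image
first = pred[0, :, :, 0]                # drop batch and channel axes -> (256, 256)
print(ch0.shape, first.shape)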
@@DigitalSreeni thank you sir
Does this work only on 256x256 patches?
Hi thank you for the amazing video.
How do I create a new dataset?
If you are inquiring about annotating your images to generate labels then I use APEER for that purpose. (www.apeer.com)
Hi. I wanted to know what should be done if my images do not match the corresponding masks when I do the sanity check. Please let me know.
You may find this tutorial useful... ruclips.net/video/XNf1ATR9OSk/видео.html
Hi sir,
is there any solution? I tried this with a 45-image dataset to do nuclei segmentation.
There was no error, but the resulting image is black; no nuclei were detected!
Same here.
Were you able to solve it??
Awesome video, Sreeni! Thanks a lot. How can I do online augmentation in that case? My dataset has two folders like yours (images and masks) and I want to apply online augmentation, feeding directly to the network. Can you help me with this?
Hi Marcus, I have a question about U-Net. Where did you label your images; what tool did you use?
Is it compatible with the code in this video?
Please share your knowledge.
Thanks so much.
@@surflaweb Hey, I didn't need to label my images because the dataset I'm using comes ready with two folders (images and masks, as seen in the video); the only difference is that the images are in .png format, which is super easy to manipulate and deal with. So I didn't need to use any tools to label them, and yes, it is very compatible with the code of this video. Now I want to know how I can do online data augmentation, feeding directly into the network.
@@marcusbranch2100 Ok man, thanks. Remember, if you do data augmentation you will need labels for those new images.
@@surflaweb Yeah, for sure. But the augmentation is applied to both the images and the masks.
@@marcusbranch2100 Do you know a tool for that? Does the code in this video work only for binary classification?
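One common pattern for online augmentation on paired images and masks is two ImageDataGenerators sharing a seed; a sketch (X_train, y_train, model and the augmentation settings are assumptions):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = dict(rotation_range=10, horizontal_flip=True)
img_gen = ImageDataGenerator(**aug)
mask_gen = ImageDataGenerator(**aug)

seed = 42  # the same seed keeps image and mask transformations in sync
img_flow = img_gen.flow(X_train, batch_size=16, seed=seed)
mask_flow = mask_gen.flow(y_train, batch_size=16, seed=seed)

# zip yields (augmented_images, augmented_masks) batches on the fly
model.fit(zip(img_flow, mask_flow), steps_per_epoch=len(X_train) // 16, epochs=10)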
Is the decoder in a U-Net always an exact mirror of the encoder?
Hi, I want to run this code, but where can I label my images? I want to use RGB images. Another question: if I have 3 classes, should I change the last layer to a softmax classifier?
Thanks so much.
You can label/annotate your images here: www.apeer.com (it is free)
Regarding multiclass U-net - please stay tuned...
@@DigitalSreeni Last question, sir: does U-Net work with RGB images, or only with one channel?
Hi @DigitalSreeni, I tried using apeer.com but I don't know which annotation to download under "annotations as image". I have two options: binary mask or labeled image. Which of these two annotations should I choose for U-Net?
Thanks so much.
Hi Sreeni, can you upload the dataset to your GitHub page?
Why am I getting my output as a black screen? Can anyone reply, guys?
Black screen for what? Do you mean segmented images?
@@DigitalSreeni Yes sir, my input is an X-ray image (RGB) and the mask is also RGB, but the output is only a plain black image. Are 70 images enough to train the model?
@@sudhakumaravel8277 Were you able to solve it??
Can we use kernel initializers other than he_normal?
For ReLU activation layers it is recommended to use he_normal, which makes sure the variance is appropriate based on your input data.
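For context, he_normal is just the kernel_initializer argument on a layer; a minimal sketch:

from tensorflow.keras.layers import Conv2D

# he_normal draws weights from a normal distribution scaled by
# sqrt(2 / fan_in), which keeps activation variance stable under ReLU
conv = Conv2D(64, (3, 3), activation='relu',
              kernel_initializer='he_normal', padding='same')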
@@DigitalSreeni thank you so much for the explanation
Anyone with experience in multiclass semantic segmentation who could help me? Truly appreciated.
Where is the data, please?
Hello, what's the difference between the 'concatenate' and 'add' operations in Keras? Thanks.
The Add operation adds two tensors element-wise, and Concatenate, as the name suggests, just puts the two tensors together along the defined axis. keras.io/api/layers/merging_layers/
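A tiny sketch showing the difference in output shapes (the 128x128x64 shapes are just examples):

from tensorflow.keras.layers import Input, Add, Concatenate

a = Input(shape=(128, 128, 64))
b = Input(shape=(128, 128, 64))

added = Add()([a, b])                  # element-wise sum -> (None, 128, 128, 64)
merged = Concatenate(axis=-1)([a, b])  # stacked channels -> (None, 128, 128, 128)
print(added.shape, merged.shape)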