Dear sir, what I am actually trying to find out: we trained our system with stage1_train, but you didn't plot any of the test images to see whether our U-Net can detect the nuclei or not. I tried to do it, but I ran into the same problem the whole time: we don't have any Y_test to plot the segmented photos against. And how is our U-Net supposed to segment a photo without masks, I mean the test photos? Please let me know if it is possible to do this. I appreciate your nice tutorials!!!
Thank you for great tutorial series! One thing I am curious about is whether accuracy is a fair metric for segmentation or not? I heard MeanIoU is more suitable for segmentation tasks. Is it possible to implement MeanIoU metric in Keras?
Yes, of course you can implement MeanIoU in Keras. Here is a link that explains the process. www.tensorflow.org/api_docs/python/tf/keras/metrics/MeanIoU
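For a binary problem like this one, the metric reduces to intersection-over-union of the predicted and ground-truth masks. A minimal NumPy sketch of the computation (tf.keras.metrics.MeanIoU wraps the same idea and takes a num_classes argument):

```python
import numpy as np

def binary_iou(y_true, y_pred):
    """IoU for binary masks: |A intersect B| / |A union B|."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union > 0 else 1.0

# Toy masks: 2 pixels overlap out of 4 in the union -> IoU = 0.5
gt   = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 1, 1, 1]])
print(binary_iou(gt, pred))  # 0.5
```

In Keras you would pass the built-in version at compile time, e.g. metrics=[tf.keras.metrics.MeanIoU(num_classes=2)], and note that it expects integer class labels, so threshold the sigmoid output first.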
Thank you for your videos, they are helpful. I'm trying to understand what changes happen to the shapes of the test and train data in: model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1). Thank you.
Thanks for the suggestion. I intend to record GAN tutorials but unfortunately my workstation is at my office. With stay at home in California I cannot get to my office at least until end of May 2020.
Using seed = x and np.random.seed(seed) doesn't help at all. The outputs still come out different, at least for me. Is there something else that decides the series of random numbers?
The part with running TensorBoard (!tensorboard --logdir=logs/ --host localhost --port 8089) in the console doesn't work for me in Visual Studio Code... any help please?
Thank you for the series. What would be the prediction function for this using the trained model? The image preprocessing steps are somewhat complicated; should I use them as-is for prediction? I couldn't figure it out, and I believe many have the same issue. Another important part we did not see in the tutorials is how to evaluate segmentation models using performance metrics such as the most popular ones (IoU, sensitivity, specificity, Dice coefficient). For example, here we used binary cross-entropy as the loss function; if I want to evaluate my model using IoU, should I also use it as the loss function, or is that not necessary? Another thing: should I evaluate the model right after training, or can I save the model and then evaluate it?
Hello. Thanks for the tutorial. When I try to plot or predict using Y_train, I get: TypeError: numpy boolean subtract, the `-` operator, is deprecated; use the bitwise_xor, the `^` operator, or the logical_xor function instead. I got a warning about using np.bool but I ignored it. What can I do to solve this error?
Hi, thanks for the amazing lecture. I have a question for you: is it reasonable to combine U-Net with another network (like LeNet) to improve segmentation performance or not? Simply put, can we use U-Net for image improvement and after that use LeNet or another network for segmentation?
You can combine multiple networks provided they make sense. I am not an expert in this field, and active researchers constantly publish papers with results from these types of combinations. Here is an example: papers.nips.cc/paper/6448-combining-fully-convolutional-and-recurrent-neural-networks-for-3d-biomedical-image-segmentation.pdf
Hey, I'm getting an error, '>' not supported between instances of 'list' and 'float', at the line where you have: preds_train_t = (preds_train > 0.5).astype(np.uint8). Any help with this?
@Python for Microscopists by Sreeni Firstly, thank you. Your videos are really good and helpful. I have a request: could you make a video on segmentation of brain tumors? I am actually having difficulty dealing with the BraTS dataset.
Can I use U-Net for brain tumor detection with my own dataset? I want to create my own dataset, but I'm a little confused about creating masks (sub-images) for each individual image. Do you know how that can be done? Also, when I tried to fit the model in a Jupyter notebook with early stopping and checkpoints as my callbacks, it showed an error pointing to the callbacks, and at the end it said "Function call stack: keras_scratch_graph", for which I couldn't find any solution so far. What could be the possible reason? Thank you.
Yes, you can definitely use U-net for brain tumor segmentation using your own images. To label images (create masks) you can paint pixels for each class and assign a pixel value to all tumor pixels and a different value to all other regions. You can do all of that on APEER; it is free to use. www.apeer.com/annotate Regarding your error: I have never encountered it. A quick Google search gave this page which may contain useful information for you. stackoverflow.com/questions/57062456/function-call-stack-keras-scratch-graph-error
@Python for Microscopists by Sreeni Truly, thanks for your recommendation, but I'm not familiar with categorizing areas by assigning label values (for the mask) in the same image, since I'm a beginner in this field. Is there any video/article that may clear up the concept for me? As I'm looking forward to implementing this with another dataset, it would be an extreme help to know how these things really get done. Thanks again!!
Sir, I am working with U-Net for my project. If I build the model using a Lambda layer, what other lines of code do I have to add in the same code? When I use the model created with the Lambda layer for testing, it shows errors like "permission denied" and some others. How can I overcome this, sir?
Thanks for the videos... I was just wondering how to test an image based on the model and weights already generated, because you only show it for train and validation.
Please watch my other videos. For example, video 173 talks about IoU metric, video 131 talks about loading trained model to continue training or just predicting and validating accuracy.
HI Sreeni, nice video , can you please make a video of semantic segmentation with data augmentation approach and then compare the result with this present approach ? Data augmentation with semantic segmentation is a bit tricky because while training the images should have their corresponding masks. But it will be great if you can show that. Thanks
Data augmentation for semantic segmentation is not as tricky as you may think. You can use Albumentations library to augment image and mask at the same time. I will record a video on this topic.
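The essential trick is drawing the random transform once and applying it to both arrays. A library-free sketch with a horizontal flip (Albumentations does this bookkeeping for you via transform(image=..., mask=...)):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_pair(image, mask):
    """Apply the SAME random horizontal flip to image and mask."""
    if rng.random() < 0.5:        # draw the coin once...
        image = image[:, ::-1]    # ...then flip both arrays together
        mask = mask[:, ::-1]
    return image, mask

img = np.arange(12).reshape(3, 4)
msk = (img % 2).astype(np.uint8)  # toy mask derived from the image
aug_img, aug_msk = augment_pair(img, msk)
# whether flipped or not, the mask still labels the same pixels
assert np.array_equal(aug_msk, (aug_img % 2).astype(np.uint8))
```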
@@DigitalSreeni I tried using it. When I use the flow method of the data generator there is no issue, because it zips images and masks together and then loads an array into RAM. But when I use flow_from_directory, there is no way to verify that the generator is picking up the correct corresponding mask for each image. With a NumPy array at least I am sure, because I created it that way.
Hello, excellent tutorial. Need help: if I want to show all segmented results from the test samples instead of just random ones, what changes would be required in the code? Alternatively, if I want to save my segmented results to a new folder once the model is trained, what would be the code to do that?
If I want to do retinal blood vessel segmentation, there are three types of images: the original image, the ground truth image, and the border mask, for both training and testing, except there is no ground truth for testing. Can I implement that with this code? It would be a great help if you could give some suggestions. Thank you for the tutorial, by the way.
Thank you so much for your reply, sir. I need some clarification: I am confused about the image path setup in your video. In my dataset I have one mask for each corresponding image. When I used your code for mine, it did not even read the images. In your program you didn't mention which folder to save to; I saved to D: but there is a mistake somewhere, and I didn't understand the reading part. Your code is complicated for me, though easy for you. How do I change this path? I have tried lots of ways with no success. I am new to deep learning, so kindly give me your valuable suggestions. Thank you, sir.
Hi sir, I am getting the following error, kindly help me out: plt.imshow(X_train[int(X_train.shape[0]*0.9):][ix]) raises IndexError: index 4 is out of bounds for axis 0 with size 2, with a blank image as output.
Hi! For semantic segmentation every pixel has a label, so the accuracy metric compares the ground truth image with the predicted one pixel by pixel. Is it possible to compute an accuracy metric by comparing the ground truth image with the predicted image not pixel by pixel, but as a whole image?
Hey Sreeni, thank you for the amazing tutorial, I could get this to run on my custom dataset but for some reason, I am running into a problem when I try to load the saved model using keras. TypeError: __init__() missing 2 required positional arguments: 'num_classes' and 'target_class_ids' Would you be able to give any suggestions on how to tackle this
Excellent tutorial. One question: in preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1), why is the 0.9 factor there? Same question for preds_val. And why is it not done for preds_test?
It has been a while since I recorded that video and I apparently was bad at adding comments back then. In any case, upon a quick look it appears that I clearly separated train and test data initially, so I have those data sets ready for prediction. During training I took 90% for training and 10% for validation. So for prediction I separated 90% from train and treated it as train, and the remaining 10% as validation. Obviously all this is not necessary but I guess it made sense back then while I was coding :)
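To make the indexing concrete, this is all those predict calls do: re-slice X_train at the same 90/10 boundary that validation_split=0.1 used during training. A small sketch (the array shape is hypothetical):

```python
import numpy as np

X_train = np.zeros((670, 128, 128, 3), dtype=np.uint8)  # hypothetical dataset

split = int(X_train.shape[0] * 0.9)  # index where the 10% validation tail begins
train_part = X_train[:split]         # what model.fit actually trained on
val_part   = X_train[split:]         # what model.fit validated on

print(train_part.shape[0], val_part.shape[0])  # -> 603 67
```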
@@DigitalSreeni Thanks. I understand. I have one more question. Can we resize the U-Net predicted images back to the original sizes? Will there be any data loss due to resizing?
I converted the model to tflite so as to use it in an Android application, but it doesn't seem to work. Do I need to add anything else (e.g. metadata) to the model?
Thanks a lot for this series. I applied the same code to my dataset and it gives me high accuracy, but when I make predictions on test data it gives me a black image. Why? Also, the boolean NumPy arrays are not plotted as well as the original images. Please reply.
Your work is really helpful for my research, but I have a question: the input is 128*128*3. Could I change the width and height to other numbers, like 256 or 512, or am I fixed to 128? Thanks for your great work!
You can change input size to any dimension. You will have to adjust the network parameters to make sure the encoder and decoder are symmetric. You can also use a library like 'segmentation models' that will autogenerate it for you. ruclips.net/video/J_XSd_u_Yew/видео.html
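One practical constraint when you pick a new size: the U-Net in this series downsamples four times with 2x2 max pooling, so height and width should be divisible by 2^4 = 16, otherwise the skip connections won't line up with the decoder. A quick sketch of that check (depth=4 is an assumption matching this network; adjust it if you change the depth):

```python
def fits_unet(size, depth=4):
    """True if `size` survives `depth` rounds of 2x2 pooling without rounding."""
    return size % (2 ** depth) == 0

print(fits_unet(128))  # True  - 128 -> 64 -> 32 -> 16 -> 8
print(fits_unet(256))  # True
print(fits_unet(100))  # False - 100 -> 50 -> 25 -> 12.5, skip shapes mismatch
```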
Yes, you can use segmentation and classification techniques to gain insights on cancer cells. Please keep watching other videos that you may find useful.
Save the model. Load the model whenever you want to apply it on other images. Pre-process other images just the way you processed images for training. Apply the model. You may learn some of this from my video number 131.
An error occurred while starting the kernel:
2022 19:03:28.523179: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022 19:03:29.504231: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1251 MB memory: -> device: 0, name: NVIDIA GeForce MX450, pci bus id: 0000:01:00.0, compute capability: 7.5
2022 19:03:32.487596: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8400.
Kindly suggest how to deal with this.
Nice tutorial!! But I am getting a problem while loading this model. I have to use this trained model in real time, so could you make another video that uses this model for predicting a segmented image in real time?
For real-time processing you're just getting images as frames. You should be able to apply trained models to these frames just the way you apply them to a regular folder full of images. Depending on your system, the application may be slow.
Your way of explaining is very nice; thanks for sharing this precious knowledge. Could you please tell me about the folder structure if I have one folder containing 500 original images and a second folder containing the masks related to the original images? Would the file structure you described in your videos still be applicable?
Yes of course. Please watch my other videos on deep learning to see how I imported images that are stored in different ways on my drive. The whole point is that you need to get your images into a variable (X) and masks into a variable (Y). It is up to your skill and creativity to figure out how you do it.
For my dataset I have squeezed all the train, mask and test data, but my output does not show a thresholded result like yours. What could the problem be?
If you are using same data sets as mine then you should see similar results. If not, please make sure you defined the network properly and that all other parameters match the ones I used.
Hello! I've got a question: somehow my TensorBoard doesn't show the training graph! I was following your tutorial and even tried to use your code, but somehow I get only one line. My TensorFlow version is 1.14, CUDA v10. Do you know how I can fix this problem?
Thank you for this amazing tutorial. Just one question: I got the following error, "MemoryError: Unable to allocate 24.6 GiB for an array with shape (134120, 256, 256, 3) and data type uint8", which means that my PC's memory is not sufficient. So I wonder what you suggest. Can I build a pre-trained model with part of the data and then update it using that pre-trained model? Or any other suggestions?
I recommend loading data in batches using ImageDataGenerator:
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator()
datagen.flow_from_directory(your_directory)
@@DigitalSreeni Regarding the BATCH_SIZE: if my training dataset is about 130K images and my PC can handle about 25K, is it better to make the batch size 25K or smaller?
idx was left over from my attempt to randomly pick images for testing. Also, preds_test_t is there in case you want to test on test images rather than train or validation images.
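For reference, the thresholding that produces preds_test_t is a single line: the sigmoid output in [0, 1] is binarized at 0.5. A standalone sketch with made-up probabilities:

```python
import numpy as np

# hypothetical model output: per-pixel probabilities from the sigmoid layer
preds_test = np.array([[0.1, 0.7],
                       [0.5, 0.9]])

preds_test_t = (preds_test > 0.5).astype(np.uint8)  # strict >, so 0.5 maps to 0
print(preds_test_t)  # [[0 1]
                     #  [0 1]]
```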
This image segmentation tutorials are pure gold, thank you so much for sharing.
indeed !
The best U-Net and CNN tutorials I've ever seen. As another commenter said, pure gold. Thank you!
Thanks for watching and your kind comment.
yeah it is simply incredible.
Amazing tutorials! What I like the most is the accelerated nature and your judgement of what needs to be explained and what needs to be figured out by the audience. kudos to you! Look forward to many more such videos in future.
Well said. He explains so that even a beginner can understand.
I appreciate how you explain the same thing over and over just to ensure the audiences understand everything properly and keep pace with you. Thank you for such dedication.
Absolutely fantastic! I went from zero to hero in 6 videos! Brilliantly explained.
Thank you :)
I really enjoyed your U-Net series! They deliver what the titles advertise: high-level overview of U-Net, a bit of theory, hands-on implementation, and applying it to a real dataset. Thank you so much!
Thank you very much for your time. It was really amazing for getting started with Keras. Cannot wait for more videos on the segmentation
Please keep watching... there are a lot more videos; you seem to be only at video 78 right now.
Seeing the graph after I walked through the series from part 1 is really satisfying. Thank you very much.
You're very welcome!
Watched your whole series and got a complete idea of how to make my own U-Net architecture. I got what I was looking for, thank you!
Thank you so much! This series has been really helpful. Please keep making quality content like these
Just luv ur channel! Thank you for being the best teacher!!
Your video is super useful for me. I'm just starting to learn deep learning. I appreciate you providing these tutorials.
Amazing! thank you very much for these 6 parts, I really benefited from them.
Thanks for your patient explanation 🙂 👍. That was highly motivating. Such a big code was developed from scratch and each step was excellently explained. Thanks for everything 🙏
Coding is easy, if you try to understand it in bite size chunks.
Really great! Thank you very much for these parts; I really understood a lot about semantic segmentation from them.
Glad it was helpful!
Thank you a million times. This has explained all the confusions I had and has helped me understand how to debug my code. ❤
Glad it helped!
Thank you for the video. The explanation is really crystal clear!!
Glad to hear that!
So well explained. It’s very helpful to understand the architecture with its implementation.
I am glad you find it useful.
Excellent series of tutorials on U-Net, thanks for sharing!
Glad you like them!
Nice tutorial. Can you make a video on multiclass segmentation using UNet or any other deep learning models?
The quality of the content of your videos is amazing, sir! Thank you for such invaluable lectures! I had one doubt: I was wondering if you could make a video explaining the thought process and intuition that goes into coming up with such architectures, because by no means do they look arbitrary or fortuitous! Understanding the creation process would not only help with possible improvements, but also enable us to maybe come up with architectures of our own!
Thanks in advance Sir
If you'd like to understand the thought process behind designing deep learning architectures, you need to read the relevant papers. Typically these architectures are designed by people researching in the field of AI and machine learning. My goal for this channel is to explain how to use these available tools for image processing and data analysis. I'm not an expert at deep learning architecture design, so I am not the right person to talk about it.
Thanks so much; I really appreciate your generosity and knowledge. It would be great if you also discuss and share concepts in Pytorch.
Really the best explanation of U-Net; hats off to you.
I'm glad you like it.
i am feeling so good after watching all these videos
I have that effect on people :)
Thank you so much! This series was a very well explained one!
You're very welcome!
Excellent work, Just one question, On what basis we can decide the input image size, for example: in this tutorial you have taken 128*128. Which will be the best size and how can we decide which will give the best result. Thanks :)
The larger the better but the limitation would be your computing resources. No one likes to chop images into smaller sizes or resize them but we have to do it in order to make sure our data fits the RAM/GPU. 128 works for most modern systems but if your system crashes you know that you need to reduce the size.
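If it helps to estimate before a crash: raw array memory scales as N x H x W x C, so doubling the side length quadruples it. A rough sketch (the image count of 670 is just an example):

```python
def dataset_bytes(n_images, height, width, channels=3, bytes_per_px=1):
    """Rough RAM needed just to hold the image array (uint8 = 1 byte/px)."""
    return n_images * height * width * channels * bytes_per_px

gib = 1024 ** 3
print(dataset_bytes(670, 128, 128) / gib)  # ~0.03 GiB
print(dataset_bytes(670, 512, 512) / gib)  # ~0.49 GiB, 16x more for 4x the side
```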
Hey, can you pls help :( I was trying to run the code but got an error saying:
mask = np.maximum(mask, mask_)
ValueError: operands could not be broadcast together with shapes (128,128,1) (128,128,3,1)
How can I solve it? I am trying this with different images.
Pls help we need this
Both arrays need to be of the same shape but it appears that your mask is 128x128x1 and your mask_ is 128x128x3x1. You may be reading your mask_ as RGB instead of gray, please check.
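If fixing the reading step is awkward, here is a defensive sketch that collapses an accidental RGB mask down to one channel before the np.maximum call (variable names follow the error message; it assumes the mask channels are identical, as they are for a binary mask saved as RGB):

```python
import numpy as np

def to_single_channel(mask_):
    """Reduce an accidental (H, W, 3, 1) or (H, W, 3) RGB mask to (H, W, 1)."""
    mask_ = np.squeeze(mask_)       # drop trailing singleton axes
    if mask_.ndim == 3:             # still RGB: channels are identical for
        mask_ = mask_.max(axis=-1)  # binary masks, so a reduction is safe
    return mask_[..., np.newaxis]   # back to (H, W, 1)

mask  = np.zeros((128, 128, 1), dtype=np.uint8)
mask_ = np.ones((128, 128, 3, 1), dtype=np.uint8)   # the shape from the error
mask = np.maximum(mask, to_single_channel(mask_))   # broadcasts cleanly now
print(mask.shape)  # (128, 128, 1)
```

The cleaner fix is still to read the mask as grayscale in the first place, e.g. with an as-gray/grayscale flag in your image-reading function.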
Thank you for answer😃
Great, thanks a lot for all of your efforts in DL.
My pleasure!
Amazing tutorial. Thank you for explaining it much better. One question I have is how to perform data augmentation on the train images and how can we use transfer learning in U-Net?
Please watch my video 92 on transfer learning. It was on autoencoders but the principle should be the same. For U-nets transfer learning can be tricky as you are concatenating data from other parts of the network. If my video doesn't help then I recommend looking for other better videos on RUclips.
Regarding data augmentation, I will be recording a couple of videos next week. So please stay tuned.
@@DigitalSreeni Even I was wondering about the augmentation part, because while applying augmentation to the images we have to apply the same augmentation to the masks as well, I guess. If that's the case, how do we apply augmentation to the masks?
Helpful video, thanks very much. Could you explain some metrics for validation purposes and their implementation? That would be amazing. Congratulations again 👍
Excellent work, professor!! Thank you very much for your tutorial videos.
TypeError: numpy boolean subtract, the `-` operator, is not supported; use the bitwise_xor, the `^` operator, or the logical_xor function instead. It displays three images simultaneously, but no segmented images. Kindly suggest what to do.
Same for me.
Thank you for U-Net Series.
You are welcome.
Good work with this tutorial series. Few follow-up comments/clarifications:
1. Why does the test set not have any labels available? I know we don't need them as we're merely using them for testing, but for generating performance metrics for the test set, we would still need the labels to compare them against model inferences.
2. How are the model performance metrics generated for semantic segmentation approaches in comparison to say an object detector? Are we looking at an individual pixel level to understand which of them belonged to the 'object of interest' OR are we just counting up the number of semantic 'objects' detected by the model? (In your case the total number of cells correctly identified)
3. Does the tensor board only show metrics for training and validation sets or can we also configure it to show metrics for the test set?
1. Test set probably has labels, we just didn't import. If I had imported them I would be able to compare them against ground truth for accuracy.
2. For any machine learning approach the performance metrics are generated by comparing ground truth (labels) and the result. For semantic segmentation every pixel has a label and for object detection every object has a label. That is the only difference.
3. I'm not an expert on Tensorboard but here is my view on the topic. When you perform model.fit you can supply validation information. This validation can be a small fraction split from the original data (using validation_split) or it can be other test data (using validation_data). Once you specify validation data, the metrics are tracked for every epoch and can be plotted in Tensorboard. If you have a test data that is not part of model.fit then I don't see how you can keep track of metrics after every epoch.
I hope these clarify your questions.
Python for Microscopists makes sense. Appreciate the response. Thank you.
@@DigitalSreeni Hi, great tutorial. Thanks for your effort in developing such easy and straightforward videos. I am new to this field and trying hard to catch up. I have a few questions; your guidance will help me a lot:
1) Are the ground truth labels (compared with testing results) used only for computing the performance metrics?
2) During the testing process, are masks not needed (or optional)?
3) For classification problems, different metrics are used for measuring performance (specificity, sensitivity, precision, F1 score). What metrics are used in semantic segmentation?
Hi, have you performed testing and evaluation of the model? I am looking for resources on applying prediction using the trained model and evaluating it using performance metrics. Kindly share any useful resources if you have any.
Thank you for your nice video, that is awesome! Can you do a video on multi-class instance segmentation, specifically how the labeled mask is arranged? What would be the dimensions of the mask and the output layer of the network?
Awesome content!
Nice course
Just a question: do you have a GitHub for accessing your code?
@Python for Microscopists by Sreeni Thanks for such an awesome video. Can you please tell me what are the modifications I have to do if I have multiple objects with labels?
Excellent tutorial... just a question, please. I tried running this model, but my validation loss is always NaN. Not sure why.
The reason you don't have a clean version is that you ran the training multiple times and TensorBoard used the same directory. A good practice is to use a different directory each time you run the training (maybe give it a dynamic directory name based on the current time), or simply delete the logs folder every time.
Thanks for the clarification.
@@DigitalSreeni Thank you for these amazing tutorials. You're saving my graduation! :)
I would like to inquire about something: in your prior tutorial, you mentioned the possibility of employing a range of filters within the convolutional layer, such as the Gabor filter, Canny, etc. It was then mentioned that a decision-making process involving the dense layer could be used to determine the most suitable filter for segmentation. Could you please explain the procedure for accomplishing this?
Can't thank you enough! Please continue!
You are only at the 78th video, please continue watching, a lot more to learn :)
@@DigitalSreeni Haha, I did not mean literally continue with that particular video. I meant generally: continue making those valuable videos 😁.
Thanks for your sharing!
My pleasure!
Thank you SO much! You made it all so clear
You're very welcome!
Amazing tutorial, you saved my butt and my pal's too!
Dear Sreeni
I have gone through the whole segmentation lecture, but I couldn't figure out how you verified that the trained model actually performs well on the test image/mask. I saw the thresholded mask and the mask; what did they show?
your videos are very useful and informative
Glad you think so!
Dear sir,
What I am actually trying to understand: we trained our system with stage1_train, but you didn't plot any of the test images to see whether our U-Net can detect the nuclei or not. I tried to do it, but I kept running into the same problem: we don't have any Y_test to plot against the segmented photos. And how do we get our U-Net to segment a photo without masks, i.e., the test photos? Please let me know if it is possible to do this.
I appreciate your nice tutorials!!
Thank you for the great tutorial series! One thing I am curious about: is accuracy a fair metric for segmentation? I heard MeanIoU is more suitable for segmentation tasks. Is it possible to implement the MeanIoU metric in Keras?
Yes, of course you can implement MeanIoU in Keras. Here is a link that explains the process.
www.tensorflow.org/api_docs/python/tf/keras/metrics/MeanIoU
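To make the metric concrete, here is a small NumPy sketch of what MeanIoU computes on a binary mask (with tf.keras.metrics.MeanIoU(num_classes=2) you would threshold the sigmoid output to 0/1 first; the example values below are made up):

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes=2):
    """Mean of per-class intersection-over-union."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])  # one false positive
print(mean_iou(y_true, y_pred))  # (1/2 + 2/3) / 2 ≈ 0.583
```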
No comment for ur teaching ability #giveup
I hope this is good feedback, not sure how to interpret :)
Great Videos.
So good! Very useful!
Hi Sreeni Sir, thanks for sharing this tutorial. How can I use VGG16/19 on the same dataset instead of U-Net? Please let me know.
Thank you for your videos, they are helpful. I'm trying to understand what is happening to the shapes of the test and train data in:
model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
thank you.
Thank you sir
It's good that the masks were not as you wanted; otherwise we would have missed out on such good data pre-processing.
Great lectures.
Best tutorial, sir. Please make a video on Generative Adversarial Networks (GANs) and Variational Autoencoders for image segmentation and processing.
Thanks for the suggestion. I intend to record GAN tutorials, but unfortunately my workstation is at my office. With the stay-at-home order in California, I cannot get to my office until at least the end of May 2020.
Using 'seed = x, np.random.seed = seed' doesn't help at all; the outputs still come out different, at least for me. Is there something else that determines the series of random numbers?
The part about running TensorBoard (!tensorboard --logdir=logs/ --host localhost --port 8089) in the console doesn't work for me in Visual Studio Code... any help, please?
Thank you for the series. What would the prediction function look like using the trained model? The image preprocessing steps are somewhat complicated; should I use them as-is for prediction? I couldn't figure it out, and I believe many have the same issue. Another important part we did not see in the tutorials is how to evaluate segmentation models with performance metrics such as the most popular ones (IoU, sensitivity, specificity, Dice coefficient). For example, here we used binary cross-entropy as the loss function; if I want to evaluate my model using IoU, should I also use it as the loss function, or is that not necessary? One more thing: should I evaluate the model right after training, or can I save the model and evaluate it later?
Hello. Thanks for the tutorial. When I try to plot or predict using Y_train, I get "TypeError: numpy boolean subtract, the - operator, is deprecated, use the bitwise_xor, the ^ operator, or the logical_xor function instead". I got a warning about np.bool but I ignored it. What can I do to solve this error?
Hi Sangharsh,
Can you tell me how you solved the problem with the boolean subtract?
I am facing the same issue.
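In case it helps someone else: this error usually means the mask array has dtype bool (e.g. created with dtype=np.bool). Casting to an integer type before plotting or subtracting avoids it. A minimal sketch:

```python
import numpy as np

mask = np.array([[True, False], [False, True]])  # boolean mask, like Y_train

# Boolean subtraction (mask_a - mask_b) raises TypeError in modern NumPy,
# so cast to uint8 first:
mask_u8 = mask.astype(np.uint8)
diff = mask_u8 - mask_u8  # now well-defined
print(diff.sum())  # 0
```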
I'm getting an error at Dropout(0.1): ValueError: Exception encountered when calling layer "Dropout".
Hi, thanks for the amazing lecture. I have a question for you: is it reasonable to combine U-Net with another network (like LeNet) to improve segmentation performance? Simply put, can we use U-Net for image improvement and then use LeNet or another network for segmentation?
You can combine multiple networks provided the combination makes sense. I am not an expert in this field, and active researchers constantly publish papers with results from these types of combinations. Here is an example: papers.nips.cc/paper/6448-combining-fully-convolutional-and-recurrent-neural-networks-for-3d-biomedical-image-segmentation.pdf
@@DigitalSreeni thanks for sharing your knowledge
Hey, I'm getting the error "'>' not supported between instances of 'list' and 'float'" at the line you mentioned:
preds_train_t = (preds_train > 0.5).astype(np.uint8)
Any help with this?
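One likely cause, assuming preds_train somehow ended up as a plain Python list (e.g. collected in a loop) rather than the array model.predict normally returns: converting it to a NumPy array makes the elementwise comparison work. A sketch:

```python
import numpy as np

preds_train = [0.2, 0.7, 0.9]        # a plain list triggers the `list > float` TypeError
preds_train = np.array(preds_train)  # model.predict normally returns an array like this

preds_train_t = (preds_train > 0.5).astype(np.uint8)
print(preds_train_t)  # [0 1 1]
```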
Super helpful, Cheers!
Glad it helped!
thank you, it helps a lot
Thanks !!!!
@Python for Microscopists by Sreeni Firstly, thank you. Your videos are really good and helpful.
I have a request: could you make a video on segmentation of brain tumors? I am having difficulty dealing with the BraTS dataset.
I've never worked with the BraTS dataset. Let me look into it.
@@DigitalSreeni I'm also interested in the BraTS dataset. It's really difficult, as it is 3D data.
Can I use U-Net for brain tumor detection with my own dataset? I want to create my own dataset, but I'm a little confused about creating masks (sub-images) for each individual image. Do you know how that can be done? Also, when I tried to fit the model in a Jupyter notebook with early stopping and checkpoints as my callbacks, it showed an error pointing to the callbacks, ending with "Function call stack: keras_scratch_graph", for which I couldn't find any solution so far. What could be the possible reason? Thank you.
Yes, you can definitely use U-Net for brain tumor segmentation with your own images. To label images (create masks), you can paint the pixels for each class: assign one pixel value to all tumor pixels and a different value to all other regions. You can do all of that on APEER; it is free to use. www.apeer.com/annotate
Regarding your error: I have never encountered it. A quick Google search gave this page which may contain useful information for you. stackoverflow.com/questions/57062456/function-call-stack-keras-scratch-graph-error
Python for Microscopists by Sreeni Truly, thanks for your recommendation, but I'm not familiar with categorizing areas by assigning label values (for the mask) in the same image, since I'm a beginner in this field. Is there any video/article that might clear up this concept? As I'm looking forward to implementing this with another dataset, it would be a great help to know how these things really get done. Thanks again!!
Sir, I am working with U-Net for my project. If I build the model using Lambda, what other lines of code do I have to add in the same script? When I use the model created with Lambda for testing, it shows errors like 'permission denied' and others. How do I overcome this, sir?
Thanks for the videos... I was just wondering how to test an image based on the model and weights already generated, because you only show training and validation.
Please watch my other videos. For example, video 173 talks about IoU metric, video 131 talks about loading trained model to continue training or just predicting and validating accuracy.
Hi Sreeni, nice video. Can you please make a video on semantic segmentation with a data-augmentation approach, and then compare the result with the present approach?
Data augmentation with semantic segmentation is a bit tricky, because during training the augmented images must keep their corresponding masks. But it would be great if you could show that.
Thanks
Data augmentation for semantic segmentation is not as tricky as you may think. You can use the Albumentations library to augment the image and mask at the same time. I will record a video on this topic.
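The key point, whichever library you use, is applying the identical random transform to the image and its mask (Albumentations does this via transform(image=img, mask=mask)). A library-agnostic NumPy sketch of the idea:

```python
import random
import numpy as np

def flip_pair(image, mask, p=0.5):
    """Apply the same random horizontal flip to an image and its mask."""
    if random.random() < p:
        return np.fliplr(image).copy(), np.fliplr(mask).copy()
    return image, mask

img = np.arange(16).reshape(4, 4)
msk = (img > 7).astype(np.uint8)
aug_img, aug_msk = flip_pair(img, msk, p=1.0)  # force the flip for the demo
print((aug_msk == np.fliplr(msk)).all())  # True: mask stays aligned with the image
```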
@@DigitalSreeni I tried using it. When I use the flow method of the data generator there is no issue, because it zips the images and masks together and then loads an array into RAM. But when I use flow_from_directory, there is no way to verify that the data generator is picking up the correct corresponding mask for each image. With a NumPy array I am at least sure, because I created it that way.
@@DigitalSreeni Also, can you please use IoU as a metric alongside accuracy during model training?
Hello, excellent tutorial. I need help: if I want to show all segmented results from the test samples instead of just random ones, what changes would be required in the code? Alternatively, if I want to save my segmented results to a new folder once the model is trained, what would the code for that look like?
Please share the solution if you found one.
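A rough sketch of saving every predicted mask to a folder (the folder name and the stand-in array are placeholders; this assumes Pillow is installed):

```python
import os
import numpy as np
from PIL import Image  # Pillow, assumed to be available

# Stand-in for the thresholded predictions, shape (N, H, W, 1), values 0/1
preds_test_t = np.random.randint(0, 2, (3, 64, 64, 1)).astype(np.uint8)

out_dir = "segmented_results"  # placeholder folder name
os.makedirs(out_dir, exist_ok=True)

for i, pred in enumerate(preds_test_t):
    # Drop the channel axis and scale 0/1 -> 0/255 for an 8-bit PNG
    Image.fromarray(np.squeeze(pred) * 255).save(
        os.path.join(out_dir, f"pred_{i:04d}.png"))
```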
Hello sir... your videos are awesome and very helpful.
Please let me know: can we do U-Net segmentation without mask images?
If I want to do retinal blood vessel segmentation, there are three types of images: the original image, the ground-truth image, and a border mask, for both training and testing (except there is no ground truth for testing). Can I implement that with this code? It would be a great help if you could give some suggestions. Thank you for the tutorial, by the way.
Hey man, I was in your situation 4 years ago.
Did you figure this out?
Hi,
I used your code on the ALL_IDB1 dataset; the predicted mask is not getting displayed... I need your help.
Thank you so much for your reply, sir. I need some clarification: I am confused by the image-path setup in your video. In my dataset I have one mask for each corresponding image, but when I used your code for mine, it did not even read the images. In your program you didn't mention which folder to save to; I saved to D: but something is wrong. I didn't understand the reading part, because your code is complicated for me even if it's easy for you. How do I change this path? I have tried lots of ways with no luck. I am new to deep learning; kindly give me your valuable suggestions. Thank you, sir.
Hi sir, I am getting the following error, kindly help me out:
plt.imshow(X_train[int(X_train.shape[0]*0.9):][ix])
IndexError: index 4 is out of bounds for axis 0 with size 2
with a blank image as output.
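That IndexError usually means the random index was drawn against a larger array than the slice actually being displayed; note also that random.randint includes its upper bound. A sketch of the safer pattern (array sizes here are made up):

```python
import random
import numpy as np

X_train = np.zeros((20, 128, 128, 3))
train_part = X_train[:int(X_train.shape[0] * 0.9)]  # first 90%
val_part = X_train[int(X_train.shape[0] * 0.9):]    # last 10% -> only 2 images here

# Draw the index against the slice you will display, using randrange
# (exclusive upper bound) to avoid off-by-one errors:
ix = random.randrange(len(val_part))
print(0 <= ix < len(val_part))  # True
```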

Hi...!
For semantic segmentation every pixel has a label, so for the accuracy metric the ground-truth image is compared with the predicted one pixel by pixel.
Is it possible to compute an accuracy metric by comparing the ground-truth image with the predicted image not pixel by pixel, but as a whole image?
Hey Sreeni, thank you for the amazing tutorial. I got this running on my custom dataset, but for some reason I run into a problem when I try to load the saved model using Keras: TypeError: __init__() missing 2 required positional arguments: 'num_classes' and 'target_class_ids'.
Would you be able to give any suggestions on how to tackle this?
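One guess, since those missing arguments match the signature of tf.keras.metrics.MeanIoU: the saved model references a metric that cannot be rebuilt at load time. Loading with compile=False skips restoring the optimizer and metrics, after which you can re-compile yourself. A sketch with a tiny stand-in model (not the tutorial's U-Net):

```python
import numpy as np
import tensorflow as tf

# Build, compile with MeanIoU, and save a tiny stand-in model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])
model.save("tmp_model.h5")

# compile=False sidesteps errors from metrics that need extra constructor args;
# re-compile afterwards if you want to keep training or evaluating.
reloaded = tf.keras.models.load_model("tmp_model.h5", compile=False)
out = reloaded.predict(np.zeros((1, 4), dtype="float32"), verbose=0)
```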
Excellent tutorial. One question: in preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1), why is the 0.9 factor applied? Same question for preds_val. And why is it not done for preds_test?
It has been a while since I recorded that video, and I apparently was bad at adding comments back then. In any case, on a quick look it appears that I clearly separated train and test data initially, so I had those data sets ready for prediction. During training I took 90% for training and 10% for validation. So for prediction I separated out 90% of the train set and called it train, and the remaining 10% validation. Obviously all of this is not strictly necessary, but I guess it made sense while I was coding :)
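In code, the slicing in question looks roughly like this (the array sizes are made up, and the commented lines show where the real model calls go):

```python
import numpy as np

X_train = np.zeros((100, 128, 128, 3))
split = int(X_train.shape[0] * 0.9)

train_part = X_train[:split]  # the same 90% the model trained on
val_part = X_train[split:]    # the 10% held out by validation_split=0.1

# preds_train = model.predict(train_part, verbose=1)
# preds_val = model.predict(val_part, verbose=1)
# preds_test = model.predict(X_test, verbose=1)  # X_test needs no split
print(len(train_part), len(val_part))  # 90 10
```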
@@DigitalSreeni Thanks, I understand. One more question: can we resize the U-Net predicted images back to their original sizes? Will there be any data loss due to resizing?
Wow, great tutorial on image segmentation. I need Python code for retinal vessel segmentation like this. Do you have any?
Hey man, I was in your situation 4 years ago.
Did you figure this out?
You solved my problem.
I converted the model to TFLite to use it in an Android application, but it doesn't seem to work. Do I need to add anything else (e.g. metadata) to the model?
Thanks a lot for this series. I applied the same code to my dataset and it gives high accuracy, but when I make predictions on the test data it gives me a black image. Why? Also, the boolean NumPy arrays are not plotted as well as the original images. Please reply.
Nice tutorial. Can you please give the citation for the modified U-Net architecture? I need to compare my work with the U-Net architecture you explained.
Your work is really helpful for my research, but I have a question: the input is 128*128*3. Could I change the width and height to other numbers, like 256 or 512, or is it fixed at 128? Thanks for your great work!
You can change the input size to any dimension. You will have to adjust the network parameters to make sure the encoder and decoder are symmetric. You can also use a library like 'segmentation models' that will autogenerate it for you. ruclips.net/video/J_XSd_u_Yew/видео.html
@@DigitalSreeni thanks your reply I will watch it!
Sir, can I extend this information to my research work on cancer cell detection and classification?
Yes, you can use segmentation and classification techniques to gain insights into cancer cells. Please keep watching the other videos, which you may find useful.
How do I apply this code to test another image (one that is not among the testing images)?
Save the model, then load it whenever you want to apply it to other images. Pre-process the other images exactly the way you processed the images for training, then apply the model. You can learn some of this from my video number 131.
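A rough sketch of that workflow (the file name and the 128x128 size are placeholders; the commented lines show where the real Keras calls go):

```python
import numpy as np

# After training: model.save('unet_nuclei.h5')        # file name is an example
# In a new session: model = load_model('unet_nuclei.h5')

# Pre-process the new image exactly like the training images:
# same resize (e.g. to 128x128x3), same dtype/scaling, plus a batch axis.
new_image = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for a real image
batch = np.expand_dims(new_image, axis=0)            # shape (1, 128, 128, 3)

# pred = model.predict(batch)
# mask = (pred > 0.5).astype(np.uint8)
print(batch.shape)  # (1, 128, 128, 3)
```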
@@DigitalSreeni Thanks for your response, I'll see your next videos.
An error occurred while starting the kernel:
2022 19:03:28.523179: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance‑critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022 19:03:29.504231: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1251 MB memory: ‑> device: 0, name: NVIDIA GeForce MX450, pci bus id: 0000:01:00.0, compute capability: 7.5
2022 19:03:32.487596: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8400. Kindly suggest how to deal with this
Nice tutorial!! But I have a problem loading this model. I have to use this trained model in real time, so could you make another video on using this model to predict segmented images in real time?
For real-time input you are just getting images as frames. You should be able to apply trained models to these frames just the way you apply them to a regular folder full of images. Depending on your system, the application may be slow.
@@DigitalSreeni Thank you, but I figured out what the problem was, and now it works properly in real time.
Thanks for the great video. How can the model be extended to handle multiple distinct masks?
Do you mean multiple labels instead of a single one?
Your way of explaining is very nice; thanks for sharing this precious knowledge. Could you please tell me about the folder structure if I have one folder containing 500 original images and a second folder containing the corresponding masks? Would the file structure you described in your videos still be applicable?
Yes, of course. Please watch my other deep learning videos to see how I imported images that are stored in different ways on my drive. The whole point is that you need to get your images into a variable (X) and your masks into a variable (Y). It is up to your skill and creativity to figure out how to do it.
Can I just use my Mac laptop without a GPU?
For my dataset I have squeezed all the train, mask, and test data, but my thresholded output doesn't look like yours. What could be the problem?
If you are using the same data sets as mine, then you should see similar results. If not, please make sure you defined the network properly and that all other parameters match the ones I used.
Hello! I've got a question: somehow my TensorBoard doesn't show the train graph. I was following your tutorial and even tried using your code, but I only get one line. My TensorFlow version is 1.14, with CUDA v10. Do you know how I can fix this problem?
I never encountered this issue. I hope you verified all the basics such as adding tensorboard to the callbacks, etc.
Thank you for this amazing tutorial. Just one question: I got the error "MemoryError: Unable to allocate 24.6 GiB for an array with shape (134120, 256, 256, 3) and data type uint8", which means my PC's memory is insufficient. What do you suggest? Can I train a model on part of the data and then update it as a pre-trained model with the rest, or do you have other suggestions?
I recommend loading data in batches using ImageDataGenerator.
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator()
train_generator = datagen.flow_from_directory('your_directory', batch_size=32)  # capture the generator so it can be passed to model.fit
@@DigitalSreeni Regarding the batch size: if my training dataset is about 130K images and my PC can handle about 25K, is it better to make the batch size 25K or smaller?
Great tutorial! It seems like the preds_test_t and idx variables are never used; can you please explain why? Thank you.
idx was left over from my attempt to randomly pick images for testing. And preds_test_t is there in case you want to test on the test images rather than on the train or validation images.
# Spot-check a random test prediction
ix = random.randint(0, len(preds_test_t) - 1)  # randint includes the upper bound, so subtract 1
imshow(X_test[ix])
plt.show()
imshow(np.squeeze(preds_test_t[ix]))
plt.show()