78 - Image Segmentation using U-Net - Part 6 (Running the code and understanding results)

  • Published: 20 Sep 2024

Comments • 236

  • @sergiodelgado2020
    @sergiodelgado2020 4 years ago +39

    These image segmentation tutorials are pure gold; thank you so much for sharing.

  • @juanpablotavacchi5799
    @juanpablotavacchi5799 2 years ago +3

    The best U-Net and CNN tutorials I've ever seen. As another commenter said, pure gold. Thank you!

  • @rakeshraushan6573
    @rakeshraushan6573 4 years ago +14

    Amazing tutorials! What I like the most is the accelerated nature and your judgement of what needs to be explained and what can be figured out by the audience. Kudos to you! Looking forward to many more such videos in the future.

    • @briskminded9020
      @briskminded9020 4 years ago

      Well said. He explains so that even a beginner can understand.

  • @ifranrahman
    @ifranrahman 1 year ago

    I appreciate how you explain the same thing over and over just to ensure the audience understands everything properly and keeps pace with you. Thank you for such dedication.

  • @andrewmcmillan5464
    @andrewmcmillan5464 4 months ago

    Absolutely fantastic! I went from zero to hero in 6 videos! Brilliantly explained.

  • @amin-sadeghi
    @amin-sadeghi 2 years ago

    I really enjoyed your U-Net series! They deliver what the titles advertise: high-level overview of U-Net, a bit of theory, hands-on implementation, and applying it to a real dataset. Thank you so much!

  • @hamedamiri232
    @hamedamiri232 4 years ago +2

    Thank you very much for your time. It was really amazing for getting started with Keras. I cannot wait for more videos on segmentation.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Please keep watching... there are a lot more videos; you seem to be only at video 78 right now.

  • @chawza8402
    @chawza8402 4 years ago

    Seeing the graph after walking through the series from part 1 is really satisfying. Thank you very much.

  • @jampavy6446
    @jampavy6446 2 years ago

    Watched your whole series and got a complete idea of how to make my own U-Net architecture. I got what I was looking for, thank you.

  • @MrKD-dt9ff
    @MrKD-dt9ff 3 years ago +1

    Thank you so much! This series has been really helpful. Please keep making quality content like these

  • @anhtuannguyenai7118
    @anhtuannguyenai7118 4 months ago

    Just love your channel! Thank you for being the best teacher!!

  • @kjm15246
    @kjm15246 2 years ago

    Your videos are super useful for me. I am just starting to learn deep learning. I appreciate you providing these tutorials.

  • @pycad
    @pycad 3 years ago +2

    Amazing! Thank you very much for these 6 parts; I really benefited from them.

  • @sandeepmandrawadkar9133
    @sandeepmandrawadkar9133 2 years ago

    Thanks for your patient explanation 🙂 👍. That was highly motivating. Such a big code was developed from scratch and each step was excellently explained. Thanks for everything 🙏

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Coding is easy, if you try to understand it in bite size chunks.

  • @fsehayeteweldebrhan1265
    @fsehayeteweldebrhan1265 3 years ago +1

    Really great! Thank you very much for all the parts; I learned a lot about semantic segmentation from them.

  • @uniqcoda
    @uniqcoda 1 year ago

    Thank you a million times. This has cleared up all the confusion I had and has helped me understand how to debug my code. ❤

  • @irasalsabilar.a2782
    @irasalsabilar.a2782 3 years ago +1

    Thank you for the video. The explanation is really crystal clear!!

  • @dhanashripatil8844
    @dhanashripatil8844 3 years ago

    So well explained. It’s very helpful to understand the architecture with its implementation.

  • @rhgong
    @rhgong 4 years ago

    Excellent series of tutorials on U-Net, thanks for sharing!

  • @AbdulQayyum-kd3gf
    @AbdulQayyum-kd3gf 4 years ago +10

    Nice tutorial. Can you make a video on multiclass segmentation using UNet or any other deep learning models?

  • @knot2knot90
    @knot2knot90 4 years ago +2

    The quality of the content of your videos is amazing, sir! Thank you for such invaluable lectures! I had one doubt: I was wondering if you could make a video explaining the thought process and intuition that goes into coming up with such architectures, because by no means do they look arbitrary or fortuitous! Understanding the creation process would not only help with possible improvements, but also enable us to come up with architectures of our own!
    Thanks in advance, sir.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      If you’d like to understand the thought process behind designing deep learning architectures, you need to read the relevant papers. Typically these architectures are designed by people researching in the field of AI and machine learning. My goal for this channel is to explain how to use these available tools for image processing and data analysis. I’m not an expert at deep learning architecture, so I am not qualified to talk about it.

  • @sepidkh6249
    @sepidkh6249 1 year ago

    Thanks so much; I really appreciate your generosity and knowledge. It would be great if you also discuss and share concepts in Pytorch.

  • @pankajkumarchoudhary3845
    @pankajkumarchoudhary3845 4 years ago

    Really best explanation about U-Net, hats off to you

  • @DiverselyArtistic
    @DiverselyArtistic 4 years ago

    I am feeling so good after watching all these videos.

  • @bamitsmanas
    @bamitsmanas 4 years ago +1

    Thank you so much! This series was a very well explained one!

  • @Shivar
    @Shivar 4 years ago +1

    Excellent work. Just one question: on what basis can we decide the input image size? For example, in this tutorial you have taken 128*128. Which size will be best, and how can we decide which will give the best result? Thanks :)

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +4

      The larger the better, but the limitation is your computing resources. No one likes to chop images into smaller sizes or resize them, but we have to do it to make sure our data fits in RAM/GPU memory. 128 works for most modern systems, but if your system crashes, you know you need to reduce the size.
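The trade-off in the reply above can be sanity-checked with a rough back-of-envelope memory estimate; the image count here is illustrative, not from the tutorial:

```python
# Rough memory footprint of a float32 image array at a given input size.
# n_images is a hypothetical dataset size for illustration.
n_images, h, w, c = 670, 128, 128, 3
bytes_needed = n_images * h * w * c * 4   # float32 = 4 bytes per value
print(round(bytes_needed / 2**20, 1), "MiB")   # ~125.6 MiB at 128x128
```

Quadrupling the side length (128 → 512) multiplies this by 16, which is often where training starts to exceed GPU memory.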

  • @maideyldz573
    @maideyldz573 3 years ago +4

    Hey, can you please help? :( I was trying to run the code but got an error:
    "mask = np.maximum(mask, mask_)
    ValueError: operands could not be broadcast together with shapes (128,128,1) (128,128,3,1)"
    How can I solve it? I am trying this with different images.

    • @hilalalkan5237
      @hilalalkan5237 3 years ago +4

      Please help, we need this.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +2

      Both arrays need to be of the same shape but it appears that your mask is 128x128x1 and your mask_ is 128x128x3x1. You may be reading your mask_ as RGB instead of gray, please check.

    • @hilalalkan5237
      @hilalalkan5237 3 years ago

      Thank you for the answer 😃
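A minimal NumPy sketch of the fix suggested in the reply above; the shapes are illustrative, the point is that a mask accidentally read with color channels must be reduced to a single channel before `np.maximum` can broadcast:

```python
import numpy as np

# mask accumulated so far, one channel, as in the tutorial
mask = np.zeros((128, 128, 1), dtype=np.uint8)
# mask_ accidentally read as RGB (extra channel axis)
mask_ = np.ones((128, 128, 3), dtype=np.uint8)

mask_gray = mask_[..., :1]          # keep one channel (or re-read with as_gray=True)
mask = np.maximum(mask, mask_gray)  # shapes (128,128,1) vs (128,128,1): broadcasts fine
```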

  • @falahfakhri2729
    @falahfakhri2729 3 years ago

    Great, thanks a lot for all of your efforts in DL.

  • @deekshaaggarwal7981
    @deekshaaggarwal7981 4 years ago +2

    Amazing tutorial. Thank you for explaining it much better. One question I have is how to perform data augmentation on the train images and how can we use transfer learning in U-Net?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Please watch my video 92 on transfer learning. It was on autoencoders but the principle should be the same. For U-nets transfer learning can be tricky as you are concatenating data from other parts of the network. If my video doesn't help then I recommend looking for other better videos on RUclips.
      Regarding data augmentation, I will be recording a couple of videos next week. So please stay tuned.

    • @ashishjohnsonburself
      @ashishjohnsonburself 4 years ago

      @@DigitalSreeni I was also wondering about the augmentation part, because while applying augmentation on images we have to apply the same augmentation on the masks as well, I guess. If that's the case, how do we apply augmentation on the masks?

  • @franklinsierra1287
    @franklinsierra1287 4 years ago +2

    Helpful video, thanks very much. Could you explain some metrics for validation purposes and their implementation? That would be amazing. Congratulations again 👍

  • @nying3452
    @nying3452 3 years ago

    Excellent work, professor! Thank you very much for your tutorial videos.

  • @nahidanazir3746
    @nahidanazir3746 2 years ago +2

    "TypeError: numpy boolean subtract, the `-` operator, is not supported, use the bitwise_xor, the `^` operator, or the logical_xor function instead". It displays three images simultaneously, but no segmented images. Kindly suggest what to do.

  • @abubakrshafique7335
    @abubakrshafique7335 4 years ago

    Thank you for U-Net Series.

  • @messi3210
    @messi3210 4 years ago

    Good work with this tutorial series. A few follow-up comments/clarifications:
    1. Why does the test set not have any labels available? I know we don't need them as we're merely using them for testing, but for generating performance metrics for the test set, we would still need the labels to compare them against model inferences.
    2. How are the model performance metrics generated for semantic segmentation approaches in comparison to say an object detector? Are we looking at an individual pixel level to understand which of them belonged to the 'object of interest' OR are we just counting up the number of semantic 'objects' detected by the model? (In your case the total number of cells correctly identified)
    3. Does the tensor board only show metrics for training and validation sets or can we also configure it to show metrics for the test set?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      1. Test set probably has labels, we just didn't import. If I had imported them I would be able to compare them against ground truth for accuracy.
      2. For any machine learning approach the performance metrics are generated by comparing ground truth (labels) and the result. For semantic segmentation every pixel has a label and for object detection every object has a label. That is the only difference.
      3. I'm not an expert on Tensorboard but here is my view on the topic. When you perform model.fit you can supply validation information. This validation can be a small fraction split from the original data (using validation_split) or it can be other test data (using validation_data). Once you specify validation data, the metrics are tracked for every epoch and can be plotted in Tensorboard. If you have a test data that is not part of model.fit then I don't see how you can keep track of metrics after every epoch.
      I hope these clarify your questions.

    • @messi3210
      @messi3210 4 years ago

      @Python for Microscopists Makes sense. Appreciate the response. Thank you.

    • @edcbazyxw9698
      @edcbazyxw9698 4 years ago

      @@DigitalSreeni Hi, great tutorial. Thanks for your effort in developing such easy and straightforward videos. I am new to this field and trying hard to catch up. I have a few questions; your guidance will help me a lot:
      1) Are the ground truth labels (compared with testing results) used only for computing the performance metrics?
      2) During the testing process, are masks not needed (or optional)?
      3) For classification problems, different metrics (specificity, sensitivity, precision, F1 score) are used for computing performance. What metrics are used in semantic segmentation?

    • @SawsanAAlowa
      @SawsanAAlowa 2 years ago

      Hi, have you performed testing and evaluation of the model? I am looking for resources on applying prediction using the trained model and evaluating it with performance metrics. Kindly share any useful resources if you have any.

  • @abelworku8475
    @abelworku8475 3 years ago +2

    Thank you for your nice video, that is awesome! Can you do a video on multi-class instance segmentation, specifically how the labeled mask is arranged? What would be the dimensions of the mask and the output layer of the network?

  • @videosbuff
    @videosbuff 1 year ago

    Awesome content!

  • @alizindari4044
    @alizindari4044 4 years ago +3

    Nice course. Just a question: do you have a GitHub for accessing your code?

  • @abhilasht6471
    @abhilasht6471 4 years ago +3

    @Python for Microscopists by Sreeni Thanks for such an awesome video. Can you please tell me what modifications I have to make if I have multiple objects with labels?

  • @sriharimohan618
    @sriharimohan618 3 years ago +1

    Excellent tutorial... just a question, please. I tried running this model, but my validation loss is always NaN. Not sure why.

  • @visintel
    @visintel 4 years ago +1

    The reason you don’t have a clean version is that you ran the training multiple times and TensorBoard used the same directory. A good practice is to use a different directory each time you run the training (maybe give it a dynamic directory name based on the current time), or simply delete the logs folder every time.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Thanks for the clarification.

    • @visintel
      @visintel 4 years ago +2

      @@DigitalSreeni Thank you for these amazing tutorials. You're saving my graduation! :)
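The per-run log directory suggested in the thread above can be sketched in a few lines; the `"logs"` root and timestamp format are illustrative choices:

```python
import os
import time

# One timestamped directory per training run, so TensorBoard curves
# from separate runs never land in the same event files.
log_dir = os.path.join("logs", time.strftime("%Y%m%d-%H%M%S"))
os.makedirs(log_dir, exist_ok=True)
# then, e.g.: callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir)]
```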

  • @shafagh_projects
    @shafagh_projects 1 year ago

    I would like to inquire about something: in your prior tutorial, you mentioned the possibility of employing a range of filters within the convolutional layer, such as the Gabor filter, Canny, etc. Subsequently, it was mentioned that a decision-making process involving the dense layer could be employed to determine the most suitable filter for segmentation. Could you please explain the procedure for accomplishing this?

  • @tonix1993
    @tonix1993 2 years ago

    Can't thank you enough! Please continue!

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      You are only at the 78th video, please continue watching, a lot more to learn :)

    • @tonix1993
      @tonix1993 2 years ago

      @@DigitalSreeni Haha, I did not mean literally continue with that particular video. I meant generally, continue making those valuable videos 😁.

  • @ziruntv9625
    @ziruntv9625 3 years ago

    Thanks for your sharing!

  • @edithkarinaaquinocantero1265
    @edithkarinaaquinocantero1265 4 years ago

    Thank you SO much! You made it all so clear

  • @saracaramaschi234
    @saracaramaschi234 4 years ago

    Amazing tutorial, you saved my butt and my pal's too!

  • @vikashkumar-cr7ee
    @vikashkumar-cr7ee 2 years ago

    Dear Sreeni,
    I have gone through the whole segmentation lecture, but I couldn't figure out how you verified that the trained model is actually performing on the test image/mask here. I saw the thresholded mask and the mask; what did it infer?

  • @kanipalomidsazanpayairic1272
    @kanipalomidsazanpayairic1272 2 years ago

    Your videos are very useful and informative.

  • @sejadayoubi5402
    @sejadayoubi5402 4 years ago +1

    Dear sir,
    What I am actually trying to understand: we have trained our system with stage1_train, but you didn't plot some of the test images to see whether our U-Net can detect the nuclei or not. I tried to do it, but I kept running into the same problem: we don't have any Y_test to plot the segmented photos. And how can our U-Net segment a photo without masks, i.e., the test photos? Please let me know if it is possible to do this.
    I appreciate your nice tutorials!

  • @TeachAI-UZ
    @TeachAI-UZ 4 years ago +2

    Thank you for the great tutorial series! One thing I am curious about is whether accuracy is a fair metric for segmentation or not. I heard MeanIoU is more suitable for segmentation tasks. Is it possible to implement the MeanIoU metric in Keras?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Yes, of course you can implement MeanIoU in Keras. Here is a link that explains the process.
      www.tensorflow.org/api_docs/python/tf/keras/metrics/MeanIoU
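To make the metric in the exchange above concrete, here is a minimal NumPy sketch of binary IoU (intersection over union); `tf.keras.metrics.MeanIoU`, linked above, averages this quantity over classes. The function name and test arrays are illustrative:

```python
import numpy as np

def iou(y_true, y_pred):
    """Binary intersection-over-union of two {0,1} masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union if union else 1.0   # empty masks count as a perfect match

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
print(iou(a, b))  # 1 overlapping pixel / 3 union pixels ≈ 0.333
```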

  • @kevinkarlwillem4814
    @kevinkarlwillem4814 2 years ago

    No comment on your teaching ability #giveup

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      I hope this is good feedback, not sure how to interpret :)

  • @SaeedDev
    @SaeedDev 3 years ago

    Great Videos.

  • @cvformedicalimages6466
    @cvformedicalimages6466 1 year ago

    So good! Very useful!

  • @BasitAli-mq9lk
    @BasitAli-mq9lk 3 years ago +1

    Hi Sreeni sir, thanks for sharing this tutorial. How can I use VGGNet16/19 on the same dataset instead of U-Net? Please let me know.

  • @krispkrispy
    @krispkrispy 3 years ago +1

    Thank you for your videos, they are helpful. I'm trying to understand what changes happen to the shapes of the test and train data in:
    model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
    Thank you.

  • @zeynepsozen2230
    @zeynepsozen2230 2 years ago

    Thank you sir

  • @turtlepedia5149
    @turtlepedia5149 3 years ago +3

    It's good that the masks were not as you wanted; otherwise we would have missed out on such good data pre-processing.

  • @caiyu538
    @caiyu538 2 years ago

    Great lectures.

  • @samarafroz9852
    @samarafroz9852 4 years ago

    Best tutorials, sir. Please make a video on generative adversarial networks (GANs) and variational autoencoders for image segmentation and processing.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Thanks for the suggestion. I intend to record GAN tutorials, but unfortunately my workstation is at my office. With the stay-at-home order in California, I cannot get to my office at least until the end of May 2020.

  • @fahadp7454
    @fahadp7454 1 year ago

    Does using 'seed = x, np.random.seed = seed' help at all? The outputs still come out different, at least for me. Is there something else that decides the series of random numbers?
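One likely cause of the non-reproducibility in the question above (an inference from the snippet, not something confirmed in the thread): `np.random.seed` is a function and must be *called*. Writing `np.random.seed = seed` merely rebinds the name to an integer and seeds nothing.

```python
import numpy as np

seed = 42
np.random.seed(seed)        # correct: call the function
a = np.random.rand(3)
np.random.seed(seed)        # re-seed before the second draw
b = np.random.rand(3)
assert (a == b).all()       # same seed, same sequence
```

Note that TensorFlow/Keras layers draw from their own generators, so fully reproducible training also requires seeding the framework (e.g. `tf.random.set_seed`).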

  • @Tomerkad
    @Tomerkad 9 months ago

    The part with running TensorBoard (!tensorboard --logdir=logs/ --host localhost --port 8089) in the console doesn't work for me in Visual Studio Code... any help, please?

  • @SawsanAAlowa
    @SawsanAAlowa 2 years ago

    Thank you for the series. What would the prediction function for this be, using the trained model? The image preprocessing steps are somewhat complicated; should I use them as-is for prediction? I couldn't figure it out, and I believe many have the same issue. Another important part we did not see in the tutorials is how to evaluate segmentation models using performance metrics, such as the most popular ones (IoU, sensitivity, specificity, Dice coefficient). For example, here we used binary cross-entropy as the loss function; if I want to evaluate my model using IoU, should I use it as the loss function as well, or is that not necessary? Also, should I evaluate the model right after training, or can I save the model and then evaluate it?

  • @sangharshsharma6175
    @sangharshsharma6175 3 years ago +1

    Hello. Thanks for the tutorial. When I try to plot or predict using Y_train, I'm getting "TypeError: numpy boolean subtract, the - operator, is deprecated, use the bitwise_xor, the ^ operator, or the logical_xor function instead". I got a warning about using np.bool but I ignored it. What can I do to solve this error?

    • @basetsedighi1646
      @basetsedighi1646 2 years ago

      Hi Sangharsh, can you tell me how you solved the problem with the boolean subtract? I am facing the same issue.
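A plausible fix for the boolean-subtract `TypeError` discussed above (an assumption based on the error message, since the thread never posts a resolution): the masks in the tutorial are stored as `bool` arrays, and NumPy refuses `-` on booleans, so cast to a numeric dtype before any arithmetic or plotting math.

```python
import numpy as np

# Stand-in for the tutorial's Y_train: a boolean mask array.
Y_train = np.random.rand(4, 128, 128, 1) > 0.5   # dtype=bool

# Y_train[0] - Y_train[1] would raise the TypeError; cast first instead.
Y_float = Y_train.astype(np.float32)
diff = Y_float[0] - Y_float[1]                   # '-' is well-defined again
```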

  • @phg45
    @phg45 2 years ago

    Getting an error at Dropout(0.1): "ValueError: Exception encountered when calling layer Dropout".

  • @mostafarahmani2772
    @mostafarahmani2772 4 years ago +1

    Hi, thanks for the amazing lecture. I have a question for you: is it reasonable to combine U-Net with another network (like LeNet) to improve segmentation performance? Simply put, can we use U-Net for image improvement and after that use LeNet or another network for segmentation?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      You can combine multiple networks provided they make sense. I am not an expert in this field, and active researchers constantly publish papers with results from these types of combinations. Here is an example: papers.nips.cc/paper/6448-combining-fully-convolutional-and-recurrent-neural-networks-for-3d-biomedical-image-segmentation.pdf

    • @mostafarahmani2772
      @mostafarahmani2772 4 years ago

      @@DigitalSreeni Thanks for sharing your knowledge.

  • @ayamekajou291
    @ayamekajou291 1 year ago

    Hey, I'm getting an error "'>' not supported between instances of 'list' and 'float'" at the line you mentioned:
    preds_train_t = (preds_train>0.5).astype(np.uint8)
    Any help with this?
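A likely explanation for the error above (an assumption; the thread has no reply): the comparison fails when `preds_train` is a plain Python list rather than a NumPy array, so converting it first makes the elementwise threshold valid. The small input here is a stand-in for real prediction output.

```python
import numpy as np

preds_train = [[0.2, 0.7], [0.9, 0.1]]   # a list would break (preds_train > 0.5)
preds_train = np.asarray(preds_train)    # ndarray supports elementwise comparison
preds_train_t = (preds_train > 0.5).astype(np.uint8)
print(preds_train_t.tolist())            # [[0, 1], [1, 0]]
```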

  • @shaoyanz2905
    @shaoyanz2905 3 years ago

    Super helpful, Cheers!

  • @chisumwong3014
    @chisumwong3014 4 years ago

    Thank you, it helps a lot.

  • @tilkesh
    @tilkesh 2 years ago

    Thanks !!!!

  • @fatemaahmed499
    @fatemaahmed499 4 years ago

    @Python for Microscopists by Sreeni Firstly, thank you. Your videos are really good and helpful.
    I have a request: could you make a video on segmentation of brain tumors? I am actually having difficulty dealing with the BraTS dataset.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Never worked with BraTS data set. Let me look into it.

    • @BiswajitJena_chandu
      @BiswajitJena_chandu 3 years ago

      @@DigitalSreeni I'm also interested in the BraTS dataset. It's really difficult as it is 3D data.

  • @raisa3456
    @raisa3456 4 years ago +1

    Can I use U-Net for brain tumor detection with my own built dataset? I want to create my own dataset, but I'm a little confused about creating masks (sub-images) for each individual image. Do you know how that can be done? Also, while I tried to fit the model in a Jupyter notebook with early stopping & checkpoints as my callbacks, it showed an error pointing to the callbacks, and at the end it said "Function call stack: keras_scratch_graph", for which I couldn't find any solution so far. What could be the possible reason? Thank you.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Yes, you can definitely use U-net for brain tumor segmentation using your own images. To label images (create masks) you can paint pixels for each class and assign a pixel value to all tumor pixels and a different value to all other regions. You can do all of that on APEER; it is free to use. www.apeer.com/annotate
      Regarding your error: I have never encountered it. A quick Google search gave this page which may contain useful information for you. stackoverflow.com/questions/57062456/function-call-stack-keras-scratch-graph-error

    • @raisa3456
      @raisa3456 4 years ago

      @Python for Microscopists by Sreeni Truly, thanks for your recommendation, but I'm not familiar with categorizing areas by assigning label values (for the mask) in the same image, since I'm a beginner in this field. Is there any video/article that may clarify this for me? As I'm looking to apply this to another dataset, it would be an extreme help to know how these things really get done. Thanks again!!

  • @shruthikeerthi6231
    @shruthikeerthi6231 8 months ago

    Sir, I am working with U-Net for my project. If I build the model using Lambda, what other lines of code do I have to add in the same code? When I use the model created using Lambda for testing, it shows errors like "permission denied" and some other errors. How do I overcome this, sir?

  • @CristhianSanchez
    @CristhianSanchez 3 years ago

    Thanks for the videos... I was just wondering how to test an image based on the model and weights already generated, because you only show train and validation.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Please watch my other videos. For example, video 173 talks about IoU metric, video 131 talks about loading trained model to continue training or just predicting and validating accuracy.

  • @umairsabir6686
    @umairsabir6686 4 years ago

    Hi Sreeni, nice video. Can you please make a video on semantic segmentation with a data augmentation approach and then compare the results with the present approach?
    Data augmentation with semantic segmentation is a bit tricky because while training, the images should have their corresponding masks. But it would be great if you could show that.
    Thanks

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Data augmentation for semantic segmentation is not as tricky as you may think. You can use Albumentations library to augment image and mask at the same time. I will record a video on this topic.

    • @umairsabir6686
      @umairsabir6686 3 years ago

      @@DigitalSreeni I tried using it. When I use the flow module from the data generator there is no issue, because it zips images and masks together and then uploads an array into RAM. But when I use flow_from_directory, there is no way to verify that the data generator is picking up the correct corresponding mask for each image. With a NumPy array, at least I am sure, because I created it that way.

    • @umairsabir6686
      @umairsabir6686 3 years ago

      @@DigitalSreeni Also, can you please use IoU as a metric along with accuracy during model training?
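The core idea behind the paired augmentation discussed in this thread can be sketched in plain NumPy: a single random decision drives the transform for *both* the image and its mask, which is exactly what the Albumentations library mentioned above automates. Shapes and the flip choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
mask = (rng.random((128, 128, 1)) > 0.5).astype(np.uint8)

if rng.random() < 0.5:        # one coin flip governs both arrays
    img = img[:, ::-1]        # horizontal flip on the image...
    mask = mask[:, ::-1]      # ...and the identical flip on the mask
```

Because the decision is shared, image and mask can never fall out of sync, unlike running two independent generators.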

  • @yousrazafar5303
    @yousrazafar5303 4 years ago +1

    Hello, excellent tutorial. I need help: if I want to show all segmented results from the test samples instead of just random ones, what changes would be required in the code? Alternatively, if I want to save my segmented results to a new folder once the model is trained, what would be the code to do that?

    • @SawsanAAlowa
      @SawsanAAlowa 2 years ago

      Please share the solution if you found any.
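One way to approach the question above, sketched under assumptions (the `results` folder name is arbitrary, and `preds_test` here is a random stand-in for `model.predict(X_test)`): loop over *every* prediction and save each one, instead of viewing random indices.

```python
import os
import numpy as np

os.makedirs("results", exist_ok=True)

preds_test = np.random.rand(3, 128, 128, 1)            # stand-in for model.predict(X_test)
for i, p in enumerate((preds_test > 0.5).astype(np.uint8)):
    np.save(os.path.join("results", f"mask_{i}.npy"), p)
    # for viewable files, something like imageio.imwrite(..., p.squeeze() * 255)
    # could be used instead of np.save
```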

  • @deepikagupta8895
    @deepikagupta8895 1 year ago

    Hello sir... your videos are awesome, very helpful.
    Please let me know: can we do U-Net segmentation without mask images?

  • @biswassarkarinusa3230
    @biswassarkarinusa3230 4 years ago +1

    If I want to do segmentation of retinal blood vessels, there are three types of images (original image, ground truth image, and border mask) for training and testing, except there is no ground truth for testing. Can I implement that with this code? It would be a great help if you could give some suggestions. Thank you for the tutorial, by the way.

    • @taharabs8006
      @taharabs8006 5 months ago

      Hey man, I'm in the same situation you were in 4 years ago. Did you figure this out?

  • @gopigariniveditha3864
    @gopigariniveditha3864 2 years ago

    Hi, I used your code for the ALL_IDB1 dataset, and the predicted mask is not getting displayed... I need your help.

  • @tamilbala6239
    @tamilbala6239 4 years ago

    Thank you so much for your reply, sir. I need clarification: the image path setting in your video confused me. In my dataset I have one mask for each corresponding image. When I used your code for mine, it did not even read the images. Kindly give me suggestions: in your program you didn't mention which folder to save to; I saved to D: but there must be a mistake, as I didn't understand the reading part (it is complicated for me, though easy for you). How do I change this path? I have tried a lot of ways but with no luck. I am new to deep learning; kindly give me your valuable suggestions. Thank you, sir.

  • @RA-pr6qd
    @RA-pr6qd 4 years ago

    Hi sir, I am getting the following error, kindly help me out:
    plt.imshow(X_train[int(X_train.shape[0]*0.9):][ix])
    IndexError: index 4 is out of bounds for axis 0 with size 2
    with a blank image as output.

  • @zeeshanahmed3997
    @zeeshanahmed3997 4 years ago

    Hi! For semantic segmentation every pixel has a label, so for the accuracy metric it will compare the ground truth image with the predicted one, pixel by pixel. Is it possible to compute an accuracy metric by comparing the ground truth image with the predicted image, but not pixel by pixel, i.e., comparing the ground truth image with the predicted image as a whole?

  • @parthkadav9176
    @parthkadav9176 2 years ago

    Hey Sreeni, thank you for the amazing tutorial. I got this to run on my custom dataset, but for some reason I am running into a problem when I try to load the saved model using Keras: "TypeError: __init__() missing 2 required positional arguments: 'num_classes' and 'target_class_ids'". Would you be able to give any suggestions on how to tackle this?

  • @djdekabaruah3457
    @djdekabaruah3457 3 years ago

    Excellent tutorial. One question: in preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1), why is the 0.9 factor multiplied? The same question for preds_val. And why is it not done for preds_test?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      It has been a while since I recorded that video, and I apparently was bad at adding comments back then. In any case, upon a quick look, it appears that I clearly separated train and test data initially, so I have those data sets ready for prediction. During training I took 90% for training and 10% for validation. So for prediction I separated 90% from train and assigned it as train, and the remaining 10% as validation. Obviously all this is not necessary, but I guess it made sense back then while I was coding :)

    • @djdekabaruah3457
      @djdekabaruah3457 3 years ago

      @@DigitalSreeni Thanks, I understand. I have one more question: can we resize the U-Net predicted images back to their original sizes? Will there be any data loss due to resizing?

  • @nasirmalik678
    @nasirmalik678 4 years ago +1

    Wow, great tutorial on image segmentation. I need Python code for retinal vessel segmentation like this.

    • @taharabs8006
      @taharabs8006 5 months ago

      Hey man, I'm in the same situation you were in 4 years ago. Did you figure this out?

  • @umarjibrilmohd8660
    @umarjibrilmohd8660 1 year ago

    You solved my problem.

  • @radiator007
    @radiator007 3 years ago

    I converted the model to tflite so as to use it in an Android application, but it doesn't seem to work. Do I need to add anything else (e.g., metadata) to the model?

  • @hadeerabdellatif2335
    @hadeerabdellatif2335 3 years ago

    Thanks a lot for this series. I applied the same code to my dataset and it gives high accuracy, but when I make predictions on test data it gives me a black image. Why? Also, the bool NumPy array is not plotted as well as the original images. Please reply.

  • @shilpashree4860
    @shilpashree4860 3 years ago

    Nice tutorial. Can you please give the citation for the modified U-Net architecture? I need to compare my work with the U-Net architecture you explained.

  • @sophiez7952
    @sophiez7952 1 year ago

    Your work is really helpful for my research, but I have a question: the input is 128*128*3. Could I change the width and height to another number, like 256 or 512, or is it fixed at 128? Thanks for your great work!

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      You can change input size to any dimension. You will have to adjust the network parameters to make sure the encoder and decoder are symmetric. You can also use a library like 'segmentation models' that will autogenerate it for you. ruclips.net/video/J_XSd_u_Yew/видео.html

    • @sophiez7952
      @sophiez7952 1 year ago

      @@DigitalSreeni Thanks for your reply, I will watch it!

  • @jeevanmsijin7973
    @jeevanmsijin7973 2 years ago

    Sir, can I extend this information to my research work on cancer cell detection and classification?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Yes, you can use segmentation and classification techniques to gain insights on cancer cells. Please keep watching other videos that you may find useful.

  • @smmi747
    @smmi747 4 года назад

    How do I apply this code to test another image (one that is not among the test images)?

    • @DigitalSreeni
      @DigitalSreeni  4 года назад

      Save the model. Load the model whenever you want to apply it on other images. Pre-process other images just the way you processed images for training. Apply the model. You may learn some of this from my video number 131.
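      The apply-to-new-images flow can be sketched as below; the prediction itself is faked with random numbers (a stand-in for `model.predict` on a model restored with `keras.models.load_model`), since the point here is only the pre- and post-processing around it:

```python
import numpy as np

IMG_HEIGHT, IMG_WIDTH = 128, 128        # must match the training pipeline

def preprocess(image):
    """Mirror training-time preprocessing: a uint8 image already resized
    to IMG_HEIGHT x IMG_WIDTH gets a leading batch dimension."""
    x = np.asarray(image, dtype=np.uint8)
    return np.expand_dims(x, axis=0)    # shape (1, H, W, 3)

def postprocess(pred, threshold=0.5):
    """Turn the sigmoid probability map into a binary mask, as the
    tutorial does with (preds > 0.5)."""
    return np.squeeze(pred > threshold)  # shape (H, W), dtype bool

# Stand-in for: probs = model.predict(preprocess(new_image))
probs = np.random.rand(1, IMG_HEIGHT, IMG_WIDTH, 1)
mask = postprocess(probs)
print(mask.shape, mask.dtype)            # (128, 128) bool
```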

    • @smmi747
      @smmi747 4 года назад

      @@DigitalSreeni Thanks for your response, I'll watch your next videos.

  • @nahidanazir3746
    @nahidanazir3746 2 года назад

    An error ocurred while starting the kernel
    2022󈚩󈚱 19:03:28.523179: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance‑critical operations: AVX AVX2
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2022󈚩󈚱 19:03:29.504231: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1251 MB memory: ‑> device: 0, name: NVIDIA GeForce MX450, pci bus id: 0000:01:00.0, compute capability: 7.5
    2022󈚩󈚱 19:03:32.487596: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8400.
    Kindly suggest how to deal with this.

  • @yashagrawal8659
    @yashagrawal8659 4 года назад

    Nice tutorial!! But I am getting a problem while loading this model. I have to use this trained model in real time, so can you make another video that uses this model to predict a segmented image in real time?

    • @DigitalSreeni
      @DigitalSreeni  4 года назад

      For real time images you're just getting images as frames. You should be able to apply trained models on these images just the way you apply to regular folder full of images. Depending on your system the application may be slow.
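      A sketch of that frame loop, with both the camera source and the model call replaced by NumPy stand-ins (in practice the frames would come from something like OpenCV's VideoCapture and the probabilities from model.predict):

```python
import numpy as np

def nn_resize(frame, out_h, out_w):
    """Naive nearest-neighbour resize via integer index maps
    (a stand-in for a proper resize such as cv2.resize)."""
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return frame[rows][:, cols]

def fake_frame_stream(n_frames, h=480, w=640):
    """Stand-in for a camera loop yielding uint8 RGB frames."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        yield rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

masks = []
for frame in fake_frame_stream(3):
    x = nn_resize(frame, 128, 128)[np.newaxis]        # (1, 128, 128, 3)
    probs = x.mean(axis=-1, keepdims=True) / 255.0    # stand-in for model.predict(x)
    masks.append(np.squeeze(probs > 0.5))
print(len(masks), masks[0].shape)                     # 3 (128, 128)
```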

    • @yashagrawal8659
      @yashagrawal8659 4 года назад

      @@DigitalSreeni Thank you, but I figured out what the problem was at the time and now it works properly in real time.

  • @abramswee
    @abramswee 4 года назад

    Thanks for the great video. How can the model be enhanced to handle multiple distinct masks?

    • @DigitalSreeni
      @DigitalSreeni  4 года назад

      Do you mean multiple labels instead of single?

  • @ramzan097
    @ramzan097 4 года назад

    Your way of explaining is very nice; thanks for sharing this precious knowledge. A question about structure: I have one folder containing 500 original images and a second folder containing the corresponding masks. Does the file structure you described in your videos apply here?

    • @DigitalSreeni
      @DigitalSreeni  4 года назад

      Yes of course. Please watch my other videos on deep learning to see how I imported images that are stored in different ways on my drive. The whole point is that you need to get your images into a variable (X) and masks into a variable (Y). It is up to your skill and creativity to figure out how you do it.
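      One simple way to build X and Y from two folders, assuming each mask shares its image's filename, is to sort both folders and pair entries positionally (the demo uses empty dummy files; in practice each path would be read with imread and stacked into the arrays):

```python
import os
import tempfile

def paired_file_lists(image_dir, mask_dir):
    """Pair every image with its mask by sorting both folders by name;
    only valid when an image and its mask share the same filename."""
    images = sorted(os.listdir(image_dir))
    masks = sorted(os.listdir(mask_dir))
    assert len(images) == len(masks), "every image needs exactly one mask"
    return [(os.path.join(image_dir, i), os.path.join(mask_dir, m))
            for i, m in zip(images, masks)]

# Demo with dummy files standing in for the 500 images + 500 masks.
with tempfile.TemporaryDirectory() as root:
    img_dir = os.path.join(root, "images")
    msk_dir = os.path.join(root, "masks")
    os.makedirs(img_dir); os.makedirs(msk_dir)
    for name in ("a.png", "b.png"):
        open(os.path.join(img_dir, name), "w").close()
        open(os.path.join(msk_dir, name), "w").close()
    pairs = paired_file_lists(img_dir, msk_dir)
    print(len(pairs))   # 2
```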

  • @sabaal-jalal3710
    @sabaal-jalal3710 3 года назад

    Can I just use my Mac laptop without using a GPU?

  • @thepaikaritraveller
    @thepaikaritraveller 4 года назад

    For my dataset I have squeezed all the train, mask and test data, but my output does not look thresholded like yours. What is the problem?

    • @DigitalSreeni
      @DigitalSreeni  4 года назад

      If you are using same data sets as mine then you should see similar results. If not, please make sure you defined the network properly and that all other parameters match the ones I used.

  • @0xcalmaf976
    @0xcalmaf976 3 года назад

    Hello! I've got a question: somehow my TensorBoard doesn't show the train graph! I was following your tutorial and even tried to use your code, but I get only one line. My TensorFlow version is 1.14, CUDA v10. Do you know how I can solve this problem?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад

      I never encountered this issue. I hope you verified all the basics such as adding tensorboard to the callbacks, etc.

  • @civilskills4691
    @civilskills4691 3 года назад

    Thank you for this amazing tutorial. Just one question: I got the following error, "MemoryError: Unable to allocate 24.6 GiB for an array with shape (134120, 256, 256, 3) and data type uint8", which means the memory of my PC is not sufficient. What do you suggest? Can I train a model on part of the data and then continue training it on the rest, or is there another option?

    • @DigitalSreeni
      @DigitalSreeni  3 года назад +1

      I recommend loading data in batches using ImageDataGenerator.
      from keras.preprocessing.image import ImageDataGenerator
      datagen = ImageDataGenerator()
      train_generator = datagen.flow_from_directory(your_directory, target_size=(256, 256), batch_size=32)

    • @civilskills4691
      @civilskills4691 3 года назад

      @@DigitalSreeni Regarding BATCH_SIZE: if my training dataset is about 130K images and my PC can handle about 25K, is it better to make the batch size 25K or smaller?
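      Note that batch_size in ImageDataGenerator is the number of images per training step (typically 16 to 64), not the largest chunk that fits in RAM; only one batch of images is held in memory at a time. The chunking itself can be sketched with plain Python (the filenames here are hypothetical):

```python
def batched(items, batch_size):
    """Yield successive fixed-size batches from a list, so only one
    batch of images ever needs to be loaded at once."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

filenames = [f"img_{i:06d}.png" for i in range(130_000)]   # hypothetical 130K dataset
batch_sizes = [len(b) for b in batched(filenames, 32)]
print(len(batch_sizes), batch_sizes[-1])   # 4063 16
```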

  • @noamsuissa
    @noamsuissa 4 года назад

    Great tutorial! It seems like the preds_test_t and idx variables are never used; can you please explain why? Thank you

    • @DigitalSreeni
      @DigitalSreeni  4 года назад +1

      idx was left over from my attempt to randomly pick images for testing. Also, preds_test_t is there in case you want to test on test images rather than train or validation images.

    • @muhamadsharifuddin8708
      @muhamadsharifuddin8708 4 года назад

      # Perform a test on a random test image
      import random
      import numpy as np
      import matplotlib.pyplot as plt
      from skimage.io import imshow

      ix = random.randint(0, len(preds_test_t) - 1)  # randint is inclusive at both ends
      imshow(X_test[ix])
      plt.show()
      imshow(np.squeeze(preds_test_t[ix]))
      plt.show()