73 - Image Segmentation using U-Net - Part1 (What is U-net?)

  • Published: 22 May 2024
  • Many deep learning architectures have been proposed to solve various image processing challenges. Some of the well-known architectures include LeNet, AlexNet, VGG, and Inception. U-Net is a relatively new architecture proposed by Ronneberger et al. for semantic image segmentation. This video explains the U-Net architecture; a good understanding is essential before coding.
    Link to the original U-Net paper: arxiv.org/abs/1505.04597
    The code from this video is available at: github.com/bnsreenu/python_fo...
  • Science

Comments • 190

  • @burakkahveci4123 · 4 years ago +22

    Thank you for the video. I think it's the best video for basic and intermediate levels.

  • @Rocky-xb3vc · 3 years ago

    This is the first video I'm watching on this channel, and I need to say huge THANK YOU. You helped me connect so many dots that were all over the place in understanding this. Amazing.

    • @DigitalSreeni · 3 years ago +1

      Thank you very much for your kind feedback. I hope you’ll watch other videos on my channel and find them useful too.

    • @Rocky-xb3vc · 3 years ago

      @@DigitalSreeni Of course, I've already watched the full course and the next thing is time series forecasting. Thanks for your reply and everything you do!

  • @iamadarshmohanty · 2 years ago +1

    The best explanation I found on the internet. Thank you.

  • @shafagh_projects · 8 months ago

    I am speechless. Your tutorials are beyond amazing. Thank you so much for all you have done!

  • @brunospfc8511 · 2 years ago +5

    Thanks Professor, there's so much knowledge on your channel that I'll need months to go through it, and it's right in the deep learning area I want to focus on. As a Computer Engineering student going through a Veterinary course, blood sample analysis may be my final project. Thanks from Brazil!

    • @DigitalSreeni · 2 years ago +2

      I am sure you'll benefit from my tutorials if your goal is to analyze images by writing code in python.

  • @andresbergsneider6644 · 3 years ago

    Thanks for sharing! Very well presented and super informative. Saving this video

  • @zeeshankhanyousafzai5229 · 1 year ago

    I cannot express my gratitude for you in words.
    You are more than the best.
    Thank you so much.

  • @boy1190 · 3 years ago +26

    I wish YouTube gave us the option of liking a video after every minute; this idea came to my mind for the first time with this video. I really want to give this video a like for every small bit of concept, because it is explained so well. Respect, Sir.

  • @user-gy8km4km6y · 6 months ago

    Thank you, professor, this helps a lot in my understanding of deep learning.

  • @tonihullzer1611 · 2 years ago

    First of all, thanks for your work here on YouTube; when I'm done with your series I will definitely support you. One question: I thought that in the upward path you add the upsampled features and the corresponding ones from the contracting path, but in your code you have concat?

    • @MrAmgadHasan · 1 year ago

      He's concatenating and then uses a convolution layer. This has a similar effect to adding, since the convolution operation adds the results after multiplication.
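
      The equivalence described above can be sketched in plain numpy (a minimal illustration, not the video's code): a 1x1 convolution over concatenated channels is a weighted sum across channels, and splitting its weights shows it equals the sum of separate convolutions over each part.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Two feature maps with the same spatial size, as in a skip connection.
      a = rng.standard_normal((4, 4, 3))
      b = rng.standard_normal((4, 4, 3))

      # Weights of a 1x1 convolution over the 6 concatenated channels.
      w = rng.standard_normal(6)
      concat = np.concatenate([a, b], axis=-1)

      # Convolving the concatenation equals conv(a) + conv(b) with split weights.
      out_concat = np.tensordot(concat, w, axes=([2], [0]))
      out_split = (np.tensordot(a, w[:3], axes=([2], [0]))
                   + np.tensordot(b, w[3:], axes=([2], [0])))

      assert np.allclose(out_concat, out_split)
      ```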

  • @MrChudhi · 1 year ago

    Hi Sreeni, nice explanation; I managed to clear my doubts. Thanks. Do you have any videos on image segmentation with pretrained models?

  • @azamatjonmalikov9553 · 2 years ago

    Amazing content as usual, well done :)

  • @sarahs.3395 · 4 years ago +2

    Good explanation, thank you.

  • @Vibertex · 2 years ago

    Great Video! Really helped me understand U-Nets for my own use!

  • @VLM234 · 3 years ago

    Great explanation. Please keep posting such high-value videos.
    If we have less data, should we go for a transfer learning or a classical machine learning approach?

  • @hanfeng32 · 4 years ago +2

    Thank you, this video is the best.

  • @lazotteliquide · 4 months ago

    Incredible that someone as dedicated as you gave access to such great knowledge. Thank you, you help create better science.

  • @tamerius1 · 3 years ago +3

    Why does the feature space, and thus the depth, increase as we go down? Is this a design choice or a consequence?
    It's confusing to me that each first convolutional operation increases the depth, while the second one, which seems identical, does not.

  • @Tomerkad · 5 months ago +1

    Thank you. Can you please explain what it means to add C4 to U6 in the first upsample step?

  • @ramanjaneyuluthanniru1428 · 4 years ago

    Well explained, Sreeni.
    You have amazing teaching skills, and your explanations are pretty good.
    I have watched many videos on YouTube, and you are one of the best.
    Thanks for sharing the information.

  • @victorcahui732 · 3 years ago

    Thank you for your explanation.

  • @kebabsharif9627 · 2 years ago

    Can you make a video in which your code detects the orientation of a page from a photograph of the page, for example when the page is upside down or rotated 90° left/right?

  • @BareqRaad · 2 years ago

    Great demonstration, thank you so much.

  • @pratheeeeeesh4839 · 4 years ago +1

    Classy explanation!

  • @shanisssss5906 · 4 years ago

    Fantastic video!

  • @icomment4692 · 3 years ago

    What implication do the cross-links have for backpropagation in the U-net architecture?

  • @Irfankhan-jt9ug · 3 years ago

    Great work. Which tool creates the image masks?

  • @siddharthmagadum16 · 2 years ago

    5:12 Which architecture would be good for the cassava leaf disease detection dataset?

  • @bhavanigarrepally4164 · 2 years ago

    Can you give the implementation for unsupervised semantic segmentation as well?

  • @varungoel185 · 3 years ago +1

    Nice video, thanks! One question: this architecture is for semantic segmentation, right? How would the final layer (or layers) differ for instance segmentation, where the output would be bounding boxes or coordinates of the instances?

    • @DigitalSreeni · 3 years ago +2

      Instance segmentation requires a different architecture; you cannot swap the final layer to convert from one application to the other. I only wish life were that easy!

  • @vikaskarade5585 · 3 years ago +3

    Amazing lecture. You could also create one on UNet++ and Attention U-Net. I was looking for these topics and I wish you had videos on them... :)

  • @anishjain3663 · 3 years ago +1

    Sir, I am doing image segmentation with a COCO-like dataset. I have already seen your tutorials but am still not able to implement it.

  • @joshizic6917 · 1 year ago

    Hi sir, I was wondering if you could help me train my model. I am trying to create a dataset where only the element of interest is visible and the rest is blacked out with a transparent background. Will this work, or should I create a binary mask by coloring the element of interest in white and keeping the background black?

  • @NeverTrustTheMarmot · 1 year ago +2

    Pick up line for data scientists:
    Why is U-Net architecture so beautiful?
    Cause it looks like U

  • @matthewchung74 · 4 years ago +3

    Thank you for this very helpful video. In the U-Net diagram there are 3 output features, but your implementation only has one. I'm confused as to why.

    • @julianwittmann7302 · 3 years ago

      As I'm just starting to dig into this field I'm not quite sure, but my suggestion would be that the output has to be a segmented image. Segmented images have value 1 for the segmented part and value 0 for the remaining non-segmented part of the picture. Usually grey values are considered for segmentation, and for grey values only one channel is needed.

  • @ioannisgkan8930 · 2 years ago

    Great explanation, sir.
    You made it simple for us.

  • @doraadventurer9933 · 3 years ago

    Thank you for sharing; do you also have the training part?

    • @DigitalSreeni · 3 years ago +1

      Please keep watching videos in this playlist; I have the training and segmentation parts covered.

  • @josemiguelc.tasayco4028 · 3 years ago

    Very good!!! More videos, please.

  • @adityagoel237 · 2 years ago

    14:25 In upsampling (before adding C4), why does the 8*8*256 get transformed to 16*16*128? Why not 16*16*256?

  •  4 years ago +1

    Thanks for the video.

  • @saifeddinebarkia7186 · 2 years ago

    Thanks for the video. So is it transposed convolution or up-sampling for the expansive path? Because they are two different things.

    • @DigitalSreeni · 2 years ago +1

      It can be either. Please watch the following video if you're interested in learning about the differences between the two. You can use either, as the idea is to get back to a large-resolution image from a smaller size.
      ruclips.net/video/fMwti6zFcYY/видео.html

  • @BiswajitJena_chandu · 3 years ago +2

    Sir, please do a video on segmentation of the BraTS dataset.

  • @-arabsoccer1553 · 4 years ago +1

    Thanks for your video, but I have a question regarding U-Net and I hope you can answer it.
    From my understanding, U-Net ends with an image of the same size as the input, but how can we predict the class of each pixel?
    I understand that in a classification problem the last convolution is followed by flattening and a fully-connected layer with n classes as outputs, but I don't understand how we get the result in segmentation.

    • @DigitalSreeni · 4 years ago +1

      The convolution and pooling operations (down sampling) capture the 'what' information in the image but lose the 'where' information, which is required for semantic segmentation (pixel level). To recover the 'where' information, U-Net uses upsampling (a decoder), converting low resolution to high resolution. Please read the original paper for more information: arxiv.org/abs/1505.04597
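
      The down/up round trip described above can be sketched in plain numpy (a minimal shape-level illustration, not the video's Keras code): pooling halves the resolution, upsampling restores it, and a skip connection concatenates the fine-detail features back in.

      ```python
      import numpy as np

      def maxpool2x(x):
          """2x2 max pooling: halves spatial resolution (the 'what' path)."""
          h, w, c = x.shape
          return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

      def upsample2x(x):
          """Nearest-neighbour upsampling: doubles resolution (the 'where' path)."""
          return x.repeat(2, axis=0).repeat(2, axis=1)

      x = np.random.rand(128, 128, 16)          # feature map in the contracting path
      down = maxpool2x(x)                       # -> (64, 64, 16)
      up = upsample2x(down)                     # -> (128, 128, 16)
      skip = np.concatenate([x, up], axis=-1)   # skip connection -> (128, 128, 32)
      print(down.shape, up.shape, skip.shape)
      ```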

  • @ahmedhafez3758 · 3 years ago

    I want to do 3D medical image segmentation. Can you tell me how to start? I want the input to be an .obj file and the output to be either .dcm files (for each segment) or .obj files.

  • @NS-te8jx · 1 year ago

    Do you have slides for all these videos?

  • @tonix1993 · 2 years ago

    Very helpful video, thank you!

  • @TheedonCritic · 2 years ago +1

    Awesome!
    I'm trying to use a GAN for augmenting my images and masks, which I will use as input to my semantic segmentation models, but I can't find any tutorials online.
    Most of them are for classification datasets; any advice, please?

  • @CristhianSanchez · 3 years ago

    Great explanation!

  • @Julian-ri9od · 2 years ago +2

    Is there a reason why two convolutions are always applied after the max pooling step? Is it a convention to always use two?

    • @DigitalSreeni · 2 years ago

      No particular reason. It may appear that 2 convolutions follow the max pooling in some architectures, but that is not the general case.

  • @carolinchensita · 1 year ago +1

    Thank you very much for this explanation. I have one question: could I use this same method on an RGB image? Or does it have to be grayscale? Thanks!

    • @rohanaggarwal8718 · 5 months ago

      This is a late reply, but yes. You have to expand your thinking: you can't assume that just because someone made a tutorial, this is exactly what you have to do. Ask yourself these questions instead of trying to get help: What is a grayscale image? (1 is white, 0 is black, in between is gray.) Can I apply this concept to RGB? (Three color channels, same principle for each.) How does my code change? (The input channels should be three; maybe I need to flatten differently.) Good luck learning!

  • @poopenfarten4222 · 1 year ago

    What are the numbers above the layers? For example, 16 is written above the first layer; what does it signify? Could someone please explain?

  • @chouchou2445 · 3 years ago +1

    Thank you again.
    Would you please tell me, is it possible to use data augmentation before semantic segmentation, and how do I apply the same function to both the image and the mask?

  • @maciejkolodziejczyk4136 · 3 years ago

    Many thanks, well done!

  • @chitti1120 · 2 years ago

    Can someone tell me, with examples, why the U-Net architecture uses the 'copy and crop' in every block?

  • @4MyStudents · 2 years ago

    Basically, ReLU is used to introduce non-linearity.

  • @NH-gl8do · 3 years ago

    Very excellent explanation

  • @haythammagdi3956 · 6 months ago

    Hi everyone. This is a really amazing video on U-Net.
    But what about U2-Net? Is it better?

  • @ApPillon · 1 year ago

    Thanks bro. Cheers!

  • @zeeshanahmed3997 · 4 years ago

    Hello! I want to ask something: can I train my U-Net model with input training images having only a single channel, like (img_height, img_width, 1) or (img_height, img_width)?

    • @DigitalSreeni · 4 years ago +3

      Yes. Please watch my other videos on U-Net. Every network expects certain dimensions, and you can reshape your arrays to fit those dimensions. For example, if you have grey images with dimensions (x, y, 1) and the network takes 3 channels, just copy the image 2 more times to convert to (x, y, 3).
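
      The channel-copying trick mentioned above is a one-liner in numpy (a minimal sketch, not the video's code):

      ```python
      import numpy as np

      grey = np.random.rand(256, 256, 1)        # single-channel image, shape (x, y, 1)
      rgb_like = np.repeat(grey, 3, axis=-1)    # copy the channel 3 times -> (x, y, 3)

      assert rgb_like.shape == (256, 256, 3)
      # All three channels are identical copies of the grey channel.
      assert np.array_equal(rgb_like[..., 0], rgb_like[..., 2])
      ```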

  • @mohamedelbshier2818 · 1 year ago

    Thank you and respect, sir.

  • @mqfk3151985 · 3 years ago +1

    As usual, an amazing tutorial. I just want to confirm: in the training phase, all images have to be of the same shape (width, height, and depth), right? What if my training data varies in shape? Do I need to resize the images?
    Also, I would be really thankful if you could make a tutorial on Mask R-CNN. It's also a very good algorithm that can be used for segmentation.
    Thanks a lot for your time.

    • @manishsharma2211 · 3 years ago +2

      Yes. Always apply transformations to the image (like resizing, rotation, etc.).

    • @mqfk3151985 · 3 years ago +1

      I see, thanks for the reply. Image rotation is performed for data augmentation, but regarding image resizing, I think it's a requirement of the algorithm.

    • @manishsharma2211 · 3 years ago +1

      @@mqfk3151985 Yes, you'll rarely find images that are all the same size unless you use a curated competition dataset.
      So better to resize :)

    • @DigitalSreeni · 3 years ago

      You will represent your data as a numpy array, so you need all images to be the same size. Yes, it is customary to resize images to a predefined shape in machine learning.
      I will consider making Mask R-CNN videos.

  • @RAZZKIRAN · 1 year ago

    What is the input size for U-Net?

  • @Abhisingh-cl9xm · 4 years ago +1

    Best resource

  • @efremyohannes2334 · 3 years ago

    Thank you sir, very nice video.

  • @mstozdag · 4 years ago

    Hello, great content! Where is the code for U-Net? Can you post the link here, please?

    • @DigitalSreeni · 4 years ago +1

      github.com/bnsreenu/python_for_microscopists

  • @temurochilov · 2 years ago

    Thank you, very informative tutorial.

  • @Shadow-pn2us · 4 years ago +1

    I'm still confused about how the concatenation operation works, such as adding a 16x16x128 feature map to an upsampled 8x8x256; the dimensions are different.

    • @DigitalSreeni · 4 years ago

      You'll be concatenating data with the same dimensions, not different ones. Please have a second look at the graphic describing the architecture: the two layers shown fused together are being concatenated to form a dataset with the combined channel dimension.
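
      A small numpy sketch of this point (an illustration, not the video's code): concatenating mismatched spatial sizes fails, but after upsampling to the matching resolution the channels simply add up.

      ```python
      import numpy as np

      small = np.random.rand(8, 8, 256)     # feature map at the bottom of the U
      skip = np.random.rand(16, 16, 128)    # matching contracting-path feature map

      # Direct concatenation fails: the spatial dimensions differ.
      try:
          np.concatenate([skip, small], axis=-1)
          failed = False
      except ValueError:
          failed = True

      # After 2x upsampling the spatial dims match, and the channels combine.
      up = small.repeat(2, axis=0).repeat(2, axis=1)   # (16, 16, 256)
      merged = np.concatenate([skip, up], axis=-1)
      print(failed, merged.shape)   # True (16, 16, 384)
      ```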

  • @lorizoli · 2 years ago

    Great video!

  • @mohammadkarami8984 · 3 years ago

    Thanks a lot for your video

  • @alessioandreoli2145 · 4 years ago

    Hi! Which is the best segmentation technique I can use in Python for cell image counting / object detection / size measurement?

    • @DigitalSreeni · 4 years ago +1

      The best method is always the traditional approach of using the histogram for thresholding and then some operators like open/close to clean up. If that is not possible, the next best option is traditional machine learning (extract features, then Random Forest or SVM). I covered that topic on my channel. Finally, if you have the luxury of thousands of labeled images, use deep learning.

    • @alessioandreoli2145 · 4 years ago

      @@DigitalSreeni, please let me ask one more question. My purpose is to avoid manual settings by using macros or Python over a big number of images of cells taken across a large microscale range. Any suggestions there? Do you have any reference for deep learning?

  • @ahtishamulhaq1415 · 2 years ago

    I can't find the code. Please tell me the name of the folder.

  • @nourhanelsayedelaraby4271 · 2 years ago

    First of all, thank you for the great explanation. I wanted to ask about the slides: are they available?

    • @DigitalSreeni · 2 years ago

      Sorry, I wasn't very organized with my presentation slides, so unfortunately I cannot share them. Also, I often use images and content from Google searches that come with copyright; I cannot legally distribute them.

  • @kethusnehalatha6091 · 3 years ago

    What changes do we have to make in the U-Net for better results, sir?

    • @DigitalSreeni · 3 years ago

      Many things. For example, you can try replacing the generic encoder (down sampling) part with something sophisticated like EfficientNet.

  • @MadharapuKavyaP · 2 years ago

    Hello sir, can you please make a video on brain tumor segmentation using the U-Net architecture integrated with a correlation model and fusion mechanism?

  • @ariouathanane · 1 year ago

    Hello, I have RGB masks; is it possible to do image segmentation? Thanks in advance.

    • @DigitalSreeni · 1 year ago

      Yes. I have done that here. ruclips.net/video/jvZm8REF2KY/видео.html

  • @talha_anwar · 3 years ago +1

    Thanks, first of all. Can you provide the image you used, the architecture diagram?

    • @DigitalSreeni · 3 years ago +1

      You can search for U-Net on Google. I did the same and created my own, to make sure I do not infringe on copyright.

    • @sourabhsingh4895 · 3 years ago

      @@DigitalSreeni You are great, sir. It would be a great help if you could upload a video on semantic segmentation using the Double U-Net model.

  • @leo46728 · 2 years ago

    17:56 Does the model need to be trained after compiling?

    • @DigitalSreeni · 2 years ago

      Compiling just defines the model; you need to train it on real data to update the weights and customize it for a specific job, for example identifying cats and dogs.
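
      The define-then-train distinction can be illustrated with a tiny numpy stand-in for a model (a sketch of the idea, not Keras itself): after "compiling", the weights exist but are random; only the training loop changes them.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # "Compiled" model: one linear layer whose weights are just random init.
      w = rng.standard_normal(2)
      x = rng.standard_normal((100, 2))
      y = x @ np.array([1.0, -2.0])         # ground truth the model should learn

      w_before = w.copy()
      for _ in range(200):                  # "fit": gradient descent on MSE loss
          grad = 2 * x.T @ (x @ w - y) / len(y)
          w -= 0.1 * grad

      # Training, not compiling, is what updated the weights.
      assert not np.allclose(w_before, w)
      assert np.allclose(w, [1.0, -2.0], atol=1e-3)
      ```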

    • @leo46728 · 2 years ago

      @@DigitalSreeni OK, thanks.

  • @snehalwagh2283 · 2 years ago

    Question: what happens if it is 128x128x1? Will it still become 128x128x16?

  • @nailashah6918 · 2 years ago

    Very good lecture.
    There's just one thing I am unable to understand, about the feature space or dimension. Please reply with an answer.
    Thanks.

    • @DigitalSreeni · 2 years ago +1

      Not sure where your confusion is. I am referring to the filtered results (after convolutional filtering) as the feature space. This is where you have multiple responses for every input image, and these responses contain the information about features in the image.

    • @nailashah6918 · 2 years ago

      I wanted to ask about the feature space that was 64 at the start and then 128 in the second block of U-Net.
      Does 64 mean 64 output filtered results? Is that true?
      Or can we say 64 filters were applied, then 128 filters, and so on?

  • @amaniayadi9591 · 3 years ago

    So useful, thanks!

  • @RizwanAli-jy9ub · 3 years ago +1

    Salute!

  • @xianglongchen3088 · 1 year ago

    Is this Keras code?

  • @shreearmygirl9878 · 2 years ago

    Hello sir, please can you provide links to videos on creating our own dataset from scratch for satellite images? Please, sir, it's very important. I hope you will.

    • @DigitalSreeni · 2 years ago

      You just need to annotate your images using any of the image annotation tools out there. I use www.apeer.com, as that is what our team uses at work.

  • @chouchou2445 · 3 years ago

    Thank you; this is how you can tell you know what you are doing :)

  • @deepMOOC · 4 years ago

    Thank you, but how can I get the code?

    • @DigitalSreeni · 4 years ago

      You can get the code from my GitHub page. The link is provided under my channel description.

  • @ExV6120 · 4 years ago

    I still don't get it: what exactly are the 16, 32, 64, 128, 256 that are called features in each pair of layers?

    • @DigitalSreeni · 4 years ago +1

      Think of it as applying 16 different digital filters, then 32, then 64, and so on. Therefore, if you take a single image of size 256x256 and apply 16 different filters on it, you will end up with 16 responses from this single image --> 256x256x16 data points.
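
      The filter-bank picture above can be sketched in plain numpy (an illustration with random filters, not the video's Keras code): 16 different 3x3 filters applied to one 256x256 image give 16 stacked responses, i.e. a 256x256x16 feature map.

      ```python
      import numpy as np
      from numpy.lib.stride_tricks import sliding_window_view

      rng = np.random.default_rng(0)
      img = rng.random((256, 256))                   # one grey input image
      filters = rng.standard_normal((16, 3, 3))      # 16 different 3x3 filters

      padded = np.pad(img, 1)                        # 'same' padding keeps 256x256
      patches = sliding_window_view(padded, (3, 3))  # (256, 256, 3, 3) patch grid
      # Each filter yields one 256x256 response; stacking gives 16 channels.
      responses = np.tensordot(patches, filters, axes=([2, 3], [1, 2]))
      print(responses.shape)   # (256, 256, 16)
      ```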

    • @andresbergsneider6644 · 3 years ago

      @@DigitalSreeni What is the design principle behind these filters? Any rules of thumb? Are they generated at random, or are they manually configured?
      Thanks again for sharing this video!

  • @soumyadrip · 4 years ago +1

    ❤❤❤

  • @pearlmarysamuel4809 · 3 years ago

    How much memory does the original U-Net require?

    • @DigitalSreeni · 3 years ago

      Not a simple answer. Here is some good reading material on this topic: imatge-upc.github.io/telecombcn-2016-dlcv/slides/D2L1-memory.pdf

  • @jijiqueen5823 · 3 years ago

    Thanks.

  • @govtjobs7063 · 1 year ago

    Sir, is this U-Net architecture for multiclass segmentation or binary segmentation?
    Kindly respond.

    • @DigitalSreeni · 1 year ago

      This is binary. I have many other videos on multiclass.

    • @govtjobs7063 · 1 year ago

      @@DigitalSreeni OK sir, thank you for your response. Sir, I have one more question: when we are combining T2, FLAIR, and T1ce, do we call that combined image a single-channel image or a 3-channel image? Please reply, sir.

  • @nickpgr10 · 3 years ago

    @14:11 Can anyone please explain how the size changes from 8*8*256 to 16*16*128 due to upsampling? Why does the number of channels get reduced in this step?

    • @zhenxingzhang6429 · 3 years ago

      If you check out part 2 of this video, you can see that it uses Conv2DTranspose (transposed convolutions) for upsampling instead of simply UpSampling2D (repeating values to match the desired dimensions). Because the filter number is set to 128, we end up with 8*8*256 -> 16*16*128. Check this for more details: www.jeremyjordan.me/semantic-segmentation/#upsampling
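
      The shape arithmetic in that reply can be mimicked in numpy (a sketch: 2x nearest upsampling followed by a 1x1 convolution with 128 filters reproduces the same 8x8x256 -> 16x16x128 shape change that Conv2DTranspose with 128 filters performs; the learned weights would of course differ):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((8, 8, 256))        # bottleneck feature map

      # 2x upsample, then a 1x1 convolution (per-pixel matmul) with 128 filters.
      up = x.repeat(2, axis=0).repeat(2, axis=1)  # (16, 16, 256)
      w = rng.standard_normal((256, 128))         # 1x1 conv weights: 256 in, 128 out
      out = up @ w                                # (16, 16, 128)
      print(out.shape)
      ```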

  • @mimo-wx9mc · 4 years ago

    Why don't the first parameters work very well, and how can we determine the best parameters?

    • @DigitalSreeni · 4 years ago

      Not sure what you mean by parameters. If you are asking about the hyperparameters that define your network, there is no easy answer. People are still researching the effect of parameters for various applications.

  • @anishjain3663 · 3 years ago +1

    Sir, how do we use 3D images, and what is a 3D image? Can you please make a video on that?

    • @DigitalSreeni · 3 years ago +1

      I will try to do 3D image processing some day.

    • @aishstha6669 · 1 year ago

      @@DigitalSreeni Do you have a video on 3D?

  • @akainu3668 · 2 years ago

    Hi, can you also create a tutorial on U-Net-based segmentation for the ISBI 2012 dataset or the BraTS dataset?

    • @DigitalSreeni · 2 years ago +1

      I already did BraTS. Please check my videos 231 to 234.

  • @qw4316 · 2 years ago

    Hello sir, is it possible to use U-Net to denoise?

    • @DigitalSreeni · 2 years ago

      I am curious why you're asking this question. If you are an ML researcher trying to design new models then you can try U-net approach and see if it works or requires any modification. If you are trying to find the right tool for a given job, I do not recommend experimenting with U-Net as it is not designed for denoising. It is designed to perform image segmentation. For denoising scientific images, you may want to look into Noise2Noise or Noise2Void techniques.

    • @qw4316 · 2 years ago

      @@DigitalSreeni Thanks, sir!

    • @qw4316 · 2 years ago

      @@DigitalSreeni Yup, I found that most denoising methods are applied to images, but my application uses data with a 1x256 structure in a .mat file, so that is the point that confused me.

    • @qw4316 · 2 years ago

      @@DigitalSreeni Yup, actually I followed your video and I think I understand the U-Net design, but I am confused about my data. Could you have a look at my data and give me a suggestion, sir?

  • @codebeings · 2 years ago +1

    13:54 Do check: the second-to-last layer on the decoder side has wrong connections!

    • @kunalsuri8316 · 2 years ago

      How is it wrong?

    • @codebeings · 2 years ago

      @@kunalsuri8316 In the second-to-last layer of the decoder (corresponding to P1), its input to the last layer of the decoder is incorrect. Just check the original paper; one can easily notice it.

  • @HenrikSahlinPettersen · 2 years ago +1

    For a tutorial on deep-learning-based segmentation without writing any code, using only free open-source software, we have recently published an arXiv preprint of this pipeline with a tutorial video here: ruclips.net/video/9dTfUwnL6zY/видео.html (especially suited for histopathological whole-slide images).

  • @muhammadzubairbaloch3224 · 1 year ago

    Please make a lecture on depth estimation using neural networks.

  • @prashant007420 · 2 years ago

    Thank you for the video; this is the best one. My only request: please make the same type of video on Mask R-CNN for image segmentation. I have a project on this that I must submit this week, but Mask R-CNN is confusing, so please help me with that.