73 - Image Segmentation using U-Net - Part1 (What is U-net?)

  • Published: 24 Nov 2024

Comments • 195

  • @boy1190
    @boy1190 3 years ago +27

    I wish YouTube gave us an option to like a video after every minute; that idea came to my mind for the first time while watching this video. I really want to give this video a like for every small bit of concept, because it is explained so well. Respect, Sir.

  • @burakkahveci4123
    @burakkahveci4123 4 years ago +22

    Thank you for the video. I think the best video for basic levels / intermediate levels.

  • @shafagh_projects
    @shafagh_projects 1 year ago +1

    I am speechless. Your tutorials are beyond amazing. Thank you so much for all you have done!

  • @iamadarshmohanty
    @iamadarshmohanty 3 years ago +1

    the best explanation I found on the internet. Thank you

  • @zeeshankhanyousafzai5229
    @zeeshankhanyousafzai5229 2 years ago

    I can not express my wishes for you in the words.
    You are more than the best.
    Thank you so much.

  • @brunospfc8511
    @brunospfc8511 2 years ago +5

    Thanks Professor, there's so much knowledge on your channel that I'll need months to go through it, and it seems to be right in the deep learning area I want to focus on. As a Computer Engineering student going through a Veterinary course, blood sample analysis may be my final project. Thanks from Brazil!

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +2

      I am sure you'll benefit from my tutorials if your goal is to analyze images by writing code in python.

  • @NeverTrustTheMarmot
    @NeverTrustTheMarmot 2 years ago +3

    Pick up line for data scientists:
    Why is U-Net architecture so beautiful?
    Cause it looks like U

  • @blueicer101
    @blueicer101 2 months ago

    It's actually crazy how people just make tutorials on this knowledge stuff for free.

  • @Rocky-xb3vc
    @Rocky-xb3vc 4 years ago

    This is the first video I'm watching on this channel, and I need to say huge THANK YOU. You helped me connect so many dots that were all over the place in understanding this. Amazing.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Thank you very much for your kind feedback. I hope you’ll watch other videos on my channel and find them useful too.

    • @Rocky-xb3vc
      @Rocky-xb3vc 4 years ago

      @@DigitalSreeni Of course, I've already watched the full course and the next thing is time series forecasting. Thanks for your reply and everything you do!

  • @tamerius1
    @tamerius1 3 years ago +3

    Why do the feature space, and thus the depth, increase as we go down? Is this a design choice or a consequence?
    It's confusing to me that the first convolutional operation in each block increases the depth while the second one, which seems identical, does not.

  • @lazotteliquide
    @lazotteliquide 11 months ago

    Incredible that someone as dedicated as you gives access to such great knowledge. Thank you, you help create better science.

  • @张衡-m4m
    @张衡-m4m 1 year ago

    thank you, professor, helps a lot in my understanding of deep learning.

  • @haythammagdi3956
    @haythammagdi3956 1 year ago

    Hi everyone. It is a really amazing video on U-Net.
    But what about U2-Net? Is it better?

  • @Tomerkad
    @Tomerkad 11 months ago +1

    Thank you. Can you please explain what it means to add C4 to U6 in the first upsampling step?

  • @BiswajitJena_chandu
    @BiswajitJena_chandu 4 years ago +2

    Sir, please do a video for segmentation of BRATS dataset

  • @mincasurong
    @mincasurong 5 months ago

    Thanks for your amazing presentation!

  • @ioannisgkan8930
    @ioannisgkan8930 3 years ago

    Great explanation, Sir.
    You made it simple for us.

  • @Julian-ri9od
    @Julian-ri9od 2 years ago +2

    Is there a reason why two convolutions are always applied after the max pooling step? Is it a convention to always use two?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      No reason. It may appear that 2 convolutions are added after maxpool on some architectures but that is not the general case.

  • @4MyStudents
    @4MyStudents 2 years ago

    Basically, ReLU is used to prevent overfitting and to maintain non-linearity.

  • @Vibertex
    @Vibertex 3 years ago

    Great Video! Really helped me understand U-Nets for my own use!

  • @isaaciwediba3380
    @isaaciwediba3380 4 months ago

    You are doing great work; I have learnt a lot from you. Could you please cover segmentation using DeepLab? Thank you.

  • @adityagoel237
    @adityagoel237 2 years ago

    14:25 In upsampling (before adding C4), why did the 8x8x256 get transformed to 16x16x128? Why not 16x16x256?

  • @anishjain3663
    @anishjain3663 4 years ago +2

    Sir, I am doing image segmentation with a COCO-like dataset. I have already seen your tutorials but am still not able to implement it.

  • @vikaskarade5585
    @vikaskarade5585 4 years ago +3

    Amazing Lecture. You can also create one on UNET++ and attention UNET. I was looking for these topics and I wish you had one on these topics... :)

  • @ramanjaneyuluthanniru1428
    @ramanjaneyuluthanniru1428 4 years ago

    Well explained, Sreeni.
    You have amazing teaching skills and your explanation is very good.
    I have watched many, many videos on YouTube, and you are one of the best.
    Thanks for sharing the information.

  • @matthewchung74
    @matthewchung74 4 years ago +3

    Thank you for this very helpful video. In the unet diagram, there are 3 output features, but your implementation only has one. I'm confused as to why?

    • @tomrob123
      @tomrob123 4 years ago

      As I'm just starting to dig into this field I'm not quite sure, but my suggestion would be that the output has to be a segmented image. Segmented images have value 1 for the segmented part and value 0 for the remaining non-segmented part of the picture. Usually, segmentation works with grey values, and for grey values only one channel is needed.

  • @codebeings
    @codebeings 3 years ago +1

    13:54 Do check: the second-to-last layer on the decoder side has wrong connections!

    • @kunalsuri8316
      @kunalsuri8316 3 years ago

      How is it wrong?

    • @codebeings
      @codebeings 3 years ago

      @@kunalsuri8316 In the second-to-last layer of the decoder (corresponding to P1), its input to the last layer of the decoder is incorrect. Just check the original paper; one can easily notice it.

  • @ahmad3823
    @ahmad3823 2 months ago

    From 3 channels, applying 96 filters to each channel, shouldn't we get 288 channels? Also, in the max-pooling step, going from 96 channels, how do we end up with 256 channels? Shouldn't we still have 96 channels? Sorry if these questions seem very basic, but I am new to these things! Thank you!

  • @andresbergsneider6644
    @andresbergsneider6644 3 years ago

    Thanks for sharing! Very well presented and super informative. Saving this video

  • @-arabsoccer1553
    @-arabsoccer1553 4 years ago +1

    Thanks for your video, but I have a question regarding the U-Net and I hope that you can answer me.
    From my understanding, the U-Net ends with an image of the same size as the input, but how can we predict the class of each pixel?
    I understand the classification problem, where the last convolution is followed by flattening and a fully-connected layer with the number of classes as outputs, but I don't understand how we get the result in segmentation.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      The convolution and pooling operations (downsampling) capture the 'what' information in the image but lose the 'where' information, which is required for semantic segmentation (pixel level). In order to recover the 'where' information, U-Net uses upsampling (the decoder), converting low resolution back to high resolution. Please read the original paper for more information: arxiv.org/abs/1505.04597
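
      A minimal sketch of the idea above, assuming Keras (sizes and layer names are hypothetical): instead of the flatten + fully-connected head used for classification, the final U-Net layer is a 1x1 convolution applied at every pixel, so the output is a class probability per pixel.

      import tensorflow as tf
      from tensorflow.keras import layers

      # Last decoder feature map keeps the input's spatial size (hypothetical 128x128x16).
      decoder_features = layers.Input(shape=(128, 128, 16))
      # A 1x1 convolution scores every pixel; sigmoid gives a probability per pixel.
      per_pixel = layers.Conv2D(1, (1, 1), activation='sigmoid')(decoder_features)
      head = tf.keras.Model(decoder_features, per_pixel)
      head.summary()   # output shape is 128x128x1: one class probability per pixel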

  • @bhavanigarrepally4164
    @bhavanigarrepally4164 2 years ago

    Can you give the implementation for unsupervised semantic segmentation also

  • @MrChudhi
    @MrChudhi 2 years ago

    Hi Sreeni, nice explanation, and I managed to clear my doubts. Thanks. Do you have any videos on image segmentation with pretrained models?

  • @tonihullzer1611
    @tonihullzer1611 2 years ago

    First of all, thanks for your work here on YouTube; when I'm done with your series I will definitely support you. One question: I thought that in the upward path you add the upsampled features and the corresponding ones from the contracting path, but in your code you have concat?

    • @MrAmgadHasan
      @MrAmgadHasan 1 year ago

      He's concatenating and then uses a convolution layer. This has a similar effect to adding, since the convolution operation sums the products across the concatenated channels.
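
      A minimal sketch of the two options being discussed, assuming Keras and hypothetical shapes: concatenation stacks the channels and lets the following convolution mix them, while element-wise addition requires identical shapes.

      import tensorflow as tf
      from tensorflow.keras import layers

      skip = layers.Input(shape=(16, 16, 128))   # feature map from the contracting path
      deep = layers.Input(shape=(8, 8, 256))     # feature map from the layer below

      # Upsample the deeper map so the spatial sizes match (8x8 -> 16x16, 128 filters).
      up = layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(deep)

      # Concatenate then convolve, as in the video's code: 16x16x256 -> 16x16x128.
      merged = layers.concatenate([up, skip])
      merged = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(merged)

      # Element-wise addition, as the comment expected: shapes must match exactly.
      added = layers.add([up, skip])             # 16x16x128

      tf.keras.Model([deep, skip], [merged, added]).summary()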

  • @Irfankhan-jt9ug
    @Irfankhan-jt9ug 3 years ago

    Great work......which tool creates Image masks?

  • @icomment4692
    @icomment4692 3 years ago

    What implication do the cross-links have for backpropagation in the U-net architecture?

  • @siddharthmagadum16
    @siddharthmagadum16 2 years ago

    5:12 . which architecture would be good for cassava leaf disease detection dataset?

  • @NH-gl8do
    @NH-gl8do 4 years ago

    Very excellent explanation

  • @Shadow-pn2us
    @Shadow-pn2us 4 years ago +1

    Still confused about how the concatenation operation works, such as combining a 16x16x128 feature map with an upsampled 8x8x256; the dimensions are different.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      You'll be concatenating data with the same spatial dimensions, not different dimensions. Please have a second look at the graphic describing the architecture: the two layers shown fused together are concatenated to form a feature map with the combined channel dimension.

  • @BareqRaad
    @BareqRaad 3 years ago

    Great demonstration thank you so much

  • @kebabsharif9627
    @kebabsharif9627 3 years ago

    Can you make a video in which your code detects the orientation of a page from a photograph of the page, for example when the page is upside down or rotated 90° left/right?

  • @varungoel185
    @varungoel185 4 years ago +1

    Nice video, thanks! One question - this architecture is for semantic segmentation right? How would the final layer (or layers) differ for the instance segmentation, wherein the output would be bounding boxes or co-ordinates of the instances?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +2

      Instance segmentation requires different architecture, you cannot swap the final layer to convert them from one to another application. I only wish life were that easy!!!

  • @saifeddinebarkia7186
    @saifeddinebarkia7186 3 years ago

    Thanks for the video. So is it transposed convolution or up-sampling for the expansive path? Because they are two different things.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      It can be either. Please watch the following video if interested in learning about the differences between the two. But, you can use either as the idea is to get back to the large resolution image from a smaller size.
      ruclips.net/video/fMwti6zFcYY/видео.html

  • @joshizic6917
    @joshizic6917 1 year ago

    Hi sir, I was wondering if you could help me train my model. I am trying to create a dataset where only the element of interest is visible and the rest is blacked out with a transparent background. Would this work, or should I create a binary mask by coloring the element of interest white and keeping the background black?

  • @victorcahui732
    @victorcahui732 3 years ago

    Thank you for your explanation.

  • @chitti1120
    @chitti1120 3 years ago

    Can someone tell me, with examples, why the U-Net architecture uses 'copy and crop' for every block?

  • @DAYYAN294
    @DAYYAN294 5 months ago

    Great job by you sir salute to u❤

  • @sarahs.3395
    @sarahs.3395 4 years ago +2

    Good explanation, thank you.

  • @nailashah6918
    @nailashah6918 3 years ago

    Very good lecture.
    Just one thing I am unable to understand: the feature space or dimension. Please reply with an answer.
    Thanks.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Not sure where your confusion is.... I am referring to the filtered results (after convolutional filtering) as feature space. This is where you will have multiple responses for every input image and these responses contain the information about features in the image.

    • @nailashah6918
      @nailashah6918 3 years ago

      I wanted to ask about the feature space that was 64 at the start and then 128 in the 2nd block of the U-Net.
      Does 64 mean 64 output filtered results? Is that true?
      Or can we say 64 filters were applied, then 128 filters, and so on...?

  • @VLM234
    @VLM234 3 years ago

    Great explanation... Please keep on posting such high-value videos.
    If we have less data, should we go for a transfer learning or a classical machine learning approach?

  • @mqfk3151985
    @mqfk3151985 4 years ago +1

    As usual, an amazing tutorial. I just want to confirm: in the training phase, all images have to be of the same shape (width, height and depth), right? What if my training data varies in shape? Do I need to resize the images?
    Also, I will be really thankful if you can give a tutorial on Mask R-CNN. It's also a very good algorithm that can be used for segmentation.
    Thanks a lot for your time.

    • @manishsharma2211
      @manishsharma2211 4 years ago +2

      Yes. Always apply transformation on image ( like resizing and rotation etc)

    • @mqfk3151985
      @mqfk3151985 4 years ago +1

      I see, thanks for the reply. Image rotation will be performed for data augmentation. but regarding the image resizing, I think it's a requirement by the algorithm.

    • @manishsharma2211
      @manishsharma2211 4 years ago +1

      @@mqfk3151985 Yes, you will rarely find images that are all the same size unless you use a standard competition dataset.
      So better to resize :)

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      You will represent your data as a NumPy array, so you need all images to be the same size. Yes, it is customary to resize images to a predefined shape in machine learning.
      I will consider making Mask R-CNN videos.
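
      A minimal sketch of the resizing step described above, assuming OpenCV and NumPy (the folder name and target size are hypothetical): every image is forced to one shape so they can be stacked into a single NumPy array.

      import glob
      import cv2
      import numpy as np

      SIZE = 256                                        # assumed target size
      images = []
      for path in glob.glob('train_images/*.png'):      # hypothetical folder of training images
          img = cv2.imread(path, cv2.IMREAD_COLOR)
          img = cv2.resize(img, (SIZE, SIZE))           # resize to a common shape
          images.append(img)

      X_train = np.array(images, dtype=np.float32) / 255.0
      print(X_train.shape)                              # (num_images, 256, 256, 3)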

  • @ahmedhafez3758
    @ahmedhafez3758 3 years ago

    I want to do 3D medical image segmentation; can you tell me how to start? I want the input to be an .obj file and the output to be either .dcm files (for each segment) or .obj files.

  • @josemiguelc.tasayco4028
    @josemiguelc.tasayco4028 3 years ago

    Very well !!! more videos please

  • @prashant007420
    @prashant007420 2 years ago

    Thank you for the video; this is the best video. My only request: please make the same type of video for Mask R-CNN for image segmentation. I have a project on this that I have to submit this week, but Mask R-CNN is confusing, so please help me with that.

  • @talha_anwar
    @talha_anwar 4 years ago +1

    Thanks first of all. Can you provide the image you have used, the architecture image?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      You can search for U-net on Google. I did the same and created my own, to make sure I do not infringe on copyright.

    • @sourabhsingh4895
      @sourabhsingh4895 4 years ago

      @@DigitalSreeni Sir, you are great. It would be a great help if you could upload a video on semantic segmentation using the Double U-Net model.

  • @doraadventurer9933
    @doraadventurer9933 4 years ago

    thank you for your sharing, however, do you have the training part?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Please keep watching videos on this playlist, I have training and segmentation part covered.

  • @muhammadzubairbaloch3224
    @muhammadzubairbaloch3224 2 years ago

    Depth estimation using neural network. please make the lecture

  • @RAZZKIRAN
    @RAZZKIRAN 2 years ago

    What is the input size for U-Net?

  • @CristhianSanchez
    @CristhianSanchez 4 years ago

    Great explanation!

  • @tonix1993
    @tonix1993 3 years ago

    Very helpful video thank you!

  • @carolinchensita
    @carolinchensita 1 year ago +1

    Thank you very much for this explanation. I have one question, could I use this same method on an RGB image? Or does it have to be grayscale? Thanks!

    • @rohanaggarwal8718
      @rohanaggarwal8718 11 months ago

      This is a late reply, but yes, you have to expand your thinking... You can't assume that just because someone made a tutorial, that is what you have to do. Ask yourself these questions instead of trying to get help: What is a grayscale image? (1 is white, 0 is black, in between is gray.) Can I apply this concept to RGB? (Three color channels, same principle for each.) How does my code change? (The input should be three channels; maybe I need to flatten differently.) Etc. Good luck learning!

  • @azamatjonmalikov9553
    @azamatjonmalikov9553 3 years ago

    Amazing content as usual, well done :)

  • @mohamedelbshier2818
    @mohamedelbshier2818 2 years ago

    Thank you and Respect Sir

  • @hanfeng32
    @hanfeng32 4 years ago +2

    thank you, this video is the best

  • @nourhanelsayedelaraby4271
    @nourhanelsayedelaraby4271 3 years ago

    First of all, thank you for the great explanation. I wanted to ask whether the slides are available.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Sorry, I wasn't very planned with my presentation slides so unfortunately I cannot share them. Also, I often use images and content from Google searching that come with copyright. I cannot legally distribute them.

  • @ApPillon
    @ApPillon 2 years ago

    Thanks bro. Cheers!

  • @pratheeeeeesh4839
    @pratheeeeeesh4839 4 years ago +1

    classy explanation!

  • @NS-te8jx
    @NS-te8jx 2 years ago

    do you have slides for all these videos?

  • @MadharapuKavyaP
    @MadharapuKavyaP 2 years ago

    Hello sir, can you please make a video on brain tumor segmentation using the U-Net architecture integrated with a correlation model and fusion mechanism?

  • @ariouathanane
    @ariouathanane 1 year ago

    Hello, I have RGB masks; is it possible to do the image segmentation? Thanks in advance.

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      Yes. I have done that here. ruclips.net/video/jvZm8REF2KY/видео.html

  • @snehalwagh2283
    @snehalwagh2283 2 years ago

    Question: What happens if it is 128X128X1 ? will it still become 128X128X16 ?

  • @shanisssss5906
    @shanisssss5906 4 years ago

    Fantastic video!

  • @leo46728
    @leo46728 3 years ago

    17:56 Does the model need to be trained after compiling?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Compiling just defines the model; you need to train the model on real data to update the weights and customize it for a specific job, for example identifying cats and dogs.
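
      A minimal sketch of that distinction, assuming Keras (the tiny model and random data are stand-ins): compile() only attaches the optimizer and loss, while fit() is the step that actually updates the weights.

      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers

      # Stand-in model: one 1x1 convolution producing a per-pixel probability.
      inputs = layers.Input(shape=(64, 64, 1))
      outputs = layers.Conv2D(1, (1, 1), activation='sigmoid')(inputs)
      model = tf.keras.Model(inputs, outputs)

      model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

      X = np.random.rand(8, 64, 64, 1).astype('float32')             # dummy images
      y = (np.random.rand(8, 64, 64, 1) > 0.5).astype('float32')     # dummy masks
      model.fit(X, y, batch_size=4, epochs=2)                        # training updates the weights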

    • @leo46728
      @leo46728 3 years ago

      @@DigitalSreeni ok thanks

  • @xxxtj3679
    @xxxtj3679 2 years ago

    Please do a W-net tutorial

  • @maciejkolodziejczyk4136
    @maciejkolodziejczyk4136 4 years ago

    Many thanks, well done !

  • @ahtishamulhaq1415
    @ahtishamulhaq1415 2 years ago

    I can't find the code. Please tell me the name of the folder.

  • @TheedonCritic
    @TheedonCritic 2 years ago +1

    Awesome!
    I'm trying to use GAN for augmenting my images and masks which I will use as input to my semantic segmentation models, but I can't find any tutorials online.
    Most of them are for classification datasets, any advice, please?

  • @govtjobs7063
    @govtjobs7063 1 year ago

    Sir, is this U-Net architecture for multiclass segmentation or binary segmentation?
    Kindly respond.

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      This is binary. I got many other videos on multiclass.

    • @govtjobs7063
      @govtjobs7063 1 year ago

      @@DigitalSreeni OK sir... Thank you for your response. Sir, I have one more question: when we combine T2, FLAIR and T1ce, do we call that combined image a single-channel image or a 3-channel image? Please reply, sir.

  • @Sam56891
    @Sam56891 4 years ago +1

    Best resource

  • @mimo-wx9mc
    @mimo-wx9mc 4 years ago

    Why do the first parameters not work very well, and how can we determine the best parameters?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Not sure what you mean by parameters. If you are asking about the hyperparameters that go into defining your network, then there is no easy answer. People are still researching the effect of parameters for various applications.

  • @temurochilov
    @temurochilov 3 years ago

    Thank you very informative tutorial

  • @ExV6120
    @ExV6120 4 years ago

    I still don't get it: what exactly are the 16, 32, 64, 128, 256 that are being called features in each pair of layers?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Think of it as applying 16 different digital filters, then 32, then 64, and so on... Therefore, if you take a single image of size 256x256 and apply 16 different filters to it, you will end up with 16 responses from this single image --> 256x256x16 data points.
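
      A minimal sketch of that statement, assuming Keras (the image is a random stand-in): one 256x256 single-channel image passed through 16 filters yields 16 response maps, i.e. 256x256x16.

      import numpy as np
      from tensorflow.keras import layers

      img = np.random.rand(1, 256, 256, 1).astype('float32')         # dummy 256x256 grey image
      conv = layers.Conv2D(16, (3, 3), padding='same', activation='relu')
      features = conv(img)
      print(features.shape)                                          # (1, 256, 256, 16)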

    • @andresbergsneider6644
      @andresbergsneider6644 3 years ago

      @@DigitalSreeni What is the design principle behind these filters? Any rules of thumb? Are they generated at random, or are they manually configured?
      Thanks again for sharing this video!

  •  4 years ago +1

    Thanks for video

  • @alessioandreoli2145
    @alessioandreoli2145 4 years ago

    Hi! Which is the best segmentation technique I can use in Python for cell image counting / object detection / size measurement?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      The best method is always the traditional approach of using the histogram for thresholding and then some operators like open/close to clean up. If that is not possible, then the next best option is traditional machine learning (extract features, then Random Forest or SVM). I covered that topic on my channel. Finally, if you have the luxury of thousands of labeled images, then use deep learning.

    • @alessioandreoli2145
      @alessioandreoli2145 4 years ago

      @@DigitalSreeni Please allow me one more question. My purpose is to avoid manual settings by using macros or Python over a large number of images of cells taken across a big microscale range. Any suggestions there? Do you have any references for deep learning?

  • @shreearmygirl9878
    @shreearmygirl9878 3 years ago

    Hello sir, can you please provide links to videos on creating our own dataset from scratch for satellite images? Please sir, it's very important. I hope you will...

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You just need to annotate your images using any of the image annotation tools out there. I use www.apeer.com as that is what our team does at work.

  • @mager8460
    @mager8460 3 years ago

    Could someone explain why, on upsampling, the number of feature maps reduces by half?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Upsampling is not reducing the feature maps by half; it is expanding the spatial dimensions by 2 times, as upsampling is like the opposite of maxpooling. The feature maps are reduced by half because that is what we defined in our network as part of the convolution operation. The number of features has nothing to do with upsampling.

  • @anishjain3663
    @anishjain3663 4 years ago +1

    Sir, how do we use 3D images, and what is a 3D image? Can you please make a video on that?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      I will try to do 3D image processing some day.

    • @aishstha6669
      @aishstha6669 1 year ago

      @@DigitalSreeni Do you have a video on 3D?

  • @efremyohannes2334
    @efremyohannes2334 4 years ago

    Thank you sir, very nice video.

  • @zeeshanahmed3997
    @zeeshanahmed3997 4 years ago

    Hello! I want to ask something: can I train my U-Net model with input training images having only a single channel, like (img_height, img_width, 1) or (img_height, img_width)?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +3

      Yes. Please watch my other videos on U-net. Every network expects certain dimensions and you can reshape your arrays to fit those dimensions. For example if you have grey images with dimensions (x, y, 1) and if the network takes 3 channels then just copy the image 2 more times to convert to (x, y, 3).
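
      A minimal sketch of that channel-copying trick, assuming NumPy (the array is a random stand-in): repeat the single grey channel three times so an (x, y, 1) image becomes (x, y, 3).

      import numpy as np

      grey = np.random.rand(256, 256, 1).astype('float32')   # dummy (x, y, 1) grey image
      rgb_like = np.repeat(grey, 3, axis=-1)                  # copy the channel 3 times
      print(rgb_like.shape)                                   # (256, 256, 3)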

  • @nickpgr10
    @nickpgr10 3 years ago

    @14:11 Can anyone please explain how the size changes from 8x8x256 to 16x16x128 due to upsampling? Why does the number of channels get reduced in this step?

    • @zhenxingzhang6429
      @zhenxingzhang6429 3 years ago

      If you check out Part 2 of this video, you can see that it uses Conv2DTranspose (transposed convolutions) for upsampling instead of simply UpSampling2D (repeating values to match the desired dimensions). Because the filter number is set to 128, we end up with 8x8x256 -> 16x16x128. Check this for more details: www.jeremyjordan.me/semantic-segmentation/#upsampling
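
      A minimal sketch of that difference, assuming Keras (the feature map is a random stand-in): Conv2DTranspose doubles the spatial size and sets the channel count to its filter number, whereas UpSampling2D only repeats values and keeps the channels unchanged.

      import numpy as np
      from tensorflow.keras import layers

      x = np.random.rand(1, 8, 8, 256).astype('float32')      # dummy 8x8x256 feature map

      up_t = layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(x)
      print(up_t.shape)                                        # (1, 16, 16, 128): channels set by the 128 filters

      up_s = layers.UpSampling2D((2, 2))(x)
      print(up_s.shape)                                        # (1, 16, 16, 256): channels unchanged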

  • @منةالرحمن
    @منةالرحمن 4 years ago +1

    Thank you again.
    Would you please tell me, is it possible to use data augmentation before semantic segmentation, and how to apply the same function to both the image and the mask?

  • @kethusnehalatha6091
    @kethusnehalatha6091 3 years ago

    For better results, what changes do we have to make in the U-Net, sir?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Many things. For example, you can try replacing the generic encoder (downsampling) part with something more sophisticated like EfficientNet.

  • @mstozdag
    @mstozdag 4 years ago

    Hello, great content! Where is the code for U-Net? Can u post the link here pls?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      github.com/bnsreenu/python_for_microscopists

  • @akainu3668
    @akainu3668 3 years ago

    Hi, can you also create a tutorial on U-Net-based segmentation for the ISBI 2012 dataset or the BraTS dataset?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      I already did Brats. Please check my videos 231 to 234.

  • @ati43888
    @ati43888 3 months ago

    Very nice

  • @mohammadkarami8984
    @mohammadkarami8984 4 years ago

    Thanks a lot for your video

  • @eastlee9090
    @eastlee9090 3 years ago

    Hi, Sir. Is chapter 72 missing?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Yes, it is missing because it was about getting system ready for GPU and the process does not make sense any more with new TensorFlow. I am planning on recording a new video on the topic.

  • @xianglongchen3088
    @xianglongchen3088 2 years ago

    Is this keras code?

  • @pearlmarysamuel4809
    @pearlmarysamuel4809 4 years ago

    How much memory does the original unet require?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Not a simple answer. Here is some good reading material on this topic. imatge-upc.github.io/telecombcn-2016-dlcv/slides/D2L1-memory.pdf

  • @qazisamiullahkhan1497
    @qazisamiullahkhan1497 2 years ago

    Please sir make a video on sliver07 dataset