158b - Transfer learning using CNN (VGG16) as feature extractor and Random Forest classifier

  • Published: 26 Jan 2025

Comments • 130

  • @wayne7936
    @wayne7936 9 months ago

    I got so many ideas watching you walk through transfer learning step-by-step. 🙏🙏

  • @shilpamanocha3195
    @shilpamanocha3195 4 months ago

    Thorough explanation! Really helpful and informative. Thanks 🙏

  • @RealMe12075
    @RealMe12075 a year ago +5

    It's amazing... helped a lot!!! Thanks a lot, it saved us 🙏

  • @samarafroz9644
    @samarafroz9644 4 years ago +6

    You're damn great sir, best tutorial on YouTube

  • @diptilandge0202
    @diptilandge0202 a year ago +1

    Awesome, thank you so much, keep making more videos
    😊😊😊😊😊😊😊

  • @emma7ist
    @emma7ist 2 years ago

    Thanks

  • @AIdAssist
    @AIdAssist a year ago

    Thank you so much for the amazing tutorial!! It was very informative and helpful!!

  • @senthilkumar-u4j
    @senthilkumar-u4j a year ago

    Kindly tell us how to do feature selection using an MRI image dataset.

  • @inhibited44
    @inhibited44 a year ago

    You have pictures of barns and landscapes. If you were trying to distinguish between a person with freckles and the same person without freckles, is the detail too small for ML to pick up?

    • @DigitalSreeni
      @DigitalSreeni  a year ago

      No detail is too small for deep learning; chances are, if you can see it, the algorithm can also see it.

  • @pedroramon3942
    @pedroramon3942 a year ago

    Why do you perform one-hot encoding if you will not use it in the rest of the code?

  • @inhibited44
    @inhibited44 a year ago

    I am wondering how you deploy this on a website if you want to classify an image from your iPhone. I can save an h5 file using video 158, but I don't see how to save one with this VGG program.

    • @inhibited44
      @inhibited44 a year ago

      I actually bumped into your lesson 268 and I am reviewing it. Thanks

  • @ecemiren3370
    @ecemiren3370 9 months ago

    Dear, thank you very much for this video. I have implemented the same code on a brain tumor dataset but am getting an accuracy of 0; what should I do? Also, I have 4 class labels but the confusion matrix shows 7 labels. Could you please help me?

  • @Bomerang23
    @Bomerang23 2 years ago

    Shouldn't you change test_labels in line 121 to test_labels_encoded?

  • @bielmonaco
    @bielmonaco 2 years ago

    For petrographic semantic segmentation (multiclass classification), I'm having a problem while fitting the model.
    As the VGG16 output is of shape (None, 8, 8, 512), I can't fit it to my y_train values, which are of shape (None, 256, 256, 6).
    What should I do?

  • @inhibited44
    @inhibited44 a year ago

    Looking forward to viewing this and trying it on my own data. I don't have enough data to classify the images, and the images are unusual.

  • @efadsheikh4921
    @efadsheikh4921 2 years ago

    Sir, with a large dataset my program crashes because the RAM isn't sufficient. What should I do?
    Note: I'm using Colab

  • @muhammadsalmanjamil7182
    @muhammadsalmanjamil7182 3 years ago

    Why does it still assign a label from the 4 classes if I give it some random image?

  • @shimaro1000
    @shimaro1000 a year ago

    Hello sir, how can this method be used for videos instead of images?

  • @shakirinjahanmozumder2572
    @shakirinjahanmozumder2572 2 years ago

    Hi, sir. Line number 28, print(label): what is the output of this line? The same as directory_path?

  • @Mary-gl4lz
    @Mary-gl4lz a year ago

    Hello Sir, how do I apply transfer learning to numerical data? Using a 1D CNN?

  • @suramonther9913
    @suramonther9913 2 years ago

    Thank you for this video. I want to ask if I can extract features from 960 images using HOG and then concatenate them with CNN features to obtain a feature vector for classifying the images with an SVM. Please give us an example of this idea.
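
    A minimal sketch of that idea, for illustration only (the array names images and labels are assumptions, as are the HOG parameters; it concatenates HOG features with flattened VGG16 features and trains an SVM):

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from sklearn.svm import SVC

    # Pretrained VGG16 without the classifier head, used purely as a feature extractor
    vgg = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))

    cnn_feats = vgg.predict(preprocess_input(images.astype('float32')))
    cnn_feats = cnn_feats.reshape(cnn_feats.shape[0], -1)       # (N, 8*8*512)

    # HOG features computed on grayscale copies of the same images
    hog_feats = np.array([hog(rgb2gray(img), pixels_per_cell=(16, 16)) for img in images])

    X = np.concatenate([cnn_feats, hog_feats], axis=1)          # combined feature vector per image
    clf = SVC(kernel='rbf').fit(X, labels)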

  • @zakariaboucetta5149
    @zakariaboucetta5149 3 years ago

    Hello,
    thank you for this video, but I can't find the dataset?

  • @rabindulal1280
    @rabindulal1280 3 years ago

    How can we use YOLO for localization and AlexNet for the final identification?

  • @hulk8889
    @hulk8889 2 years ago

    Is there a video where you explain the process of using pretrained weights (VGG16) as feature extractors for an MLP?

  • @seharaftab4363
    @seharaftab4363 3 years ago

    Sir, your code runs but gives an accuracy of 0.0. What is the problem and how can I solve it? Please answer.

  • @zohalghasemzadeh4510
    @zohalghasemzadeh4510 3 years ago +1

    I wish you'd make a version of this with data augmentation (giving the input to random forest)

  • @RAZZKIRAN
    @RAZZKIRAN 2 years ago

    Lines 72, 73, 74 give an error:
    ValueError: zero-size array to reduction operation maximum which has no identity

  • @tafadzwazhakata4579
    @tafadzwazhakata4579 3 years ago

    This code part is giving an error that 'int' object has no attribute 'assign'. Please assist.
    #Now, let us use features from convolutional network for RF
    feature_extractor = VGG_model.predict(x_train)
    features = feature_extractor.reshape(feature_extractor.shape[0], -1)
    X_for_training = features  #This is our X input to RF

  • @saritagautam9328
    @saritagautam9328 4 years ago

    Hello sir! When I run this code, the one-hot encoding runs into an error every time. It's not working properly.

  • @zaing9609
    @zaing9609 3 years ago

    Why can't we just add a Flatten layer at the end of the VGG model, then add a Dense layer with softmax activation and the desired number of classes, and train the model with these trainable layers only? I have tried it and it gives almost the same validation accuracy. Am I missing something?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Maybe you are missing something. The approach you mentioned is a deep learning approach, which works great if you have a large amount of training data. With limited training data, you will get higher accuracies with feature engineering and traditional machine learning (e.g., Random Forest). You can engineer your features by adding a bunch of filters or filter banks using Gabor. Or you can use a pretrained CNN (e.g., VGG on ImageNet) as a feature extractor, which is what I described in this video. In summary, this approach may yield better results with only a handful of training images and masks.
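
      For illustration, a minimal sketch of this feature-extractor approach (variable names such as x_train and y_train_encoded are assumptions, not necessarily those used in the video):

      from tensorflow.keras.applications.vgg16 import VGG16
      from sklearn.ensemble import RandomForestClassifier

      # Frozen VGG16 convolutional base acts as a fixed feature extractor
      vgg = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
      for layer in vgg.layers:
          layer.trainable = False

      features = vgg.predict(x_train)                      # x_train: (N, 256, 256, 3), assumed
      X_for_RF = features.reshape(features.shape[0], -1)   # flatten to one feature vector per image

      rf = RandomForestClassifier(n_estimators=50, random_state=42)
      rf.fit(X_for_RF, y_train_encoded)                    # y_train_encoded: integer class labels, assumed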

  • @abderrahmaneherbadji5478
    @abderrahmaneherbadji5478 4 years ago +1

    Hello Sreeni,
    Thanks for your great efforts.
    Please let me know how one can use SVM instead of Random Forest.
    Also, I think the feature vectors are high-dimensional, so how can we implement PCA to reduce the dimensions?
    Best regards.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Just switch the line where we have Random Forest to SVM.
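
      Concretely, the swap might look like this (a sketch that reuses the flattened VGG16 feature matrix and integer-encoded labels; X_for_RF, y_train_encoded and X_test_features are assumed names, and the SVC parameters are illustrative):

      from sklearn.svm import SVC

      # Same extracted features, different classifier
      svm = SVC(kernel='rbf', C=1.0)
      svm.fit(X_for_RF, y_train_encoded)
      svm_prediction = svm.predict(X_test_features)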

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      I will try to do a video on the topic of PCA.

    • @abderrahmaneherbadji5478
      @abderrahmaneherbadji5478 4 years ago

      @@DigitalSreeni looking forward to the video

    • @campechano7789
      @campechano7789 3 years ago

      Hi, I have been reading a paper and it says this: "We noticed that randomly selecting few dimensions is more efficient that a classic Principal Component Analysis (PCA) algorithm [30]. This simple random dimensionality reduction significantly decreases the complexity of our model for both training and testing time while maintaining the state-of-the-art performance." Maybe it can help you.
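
      A rough sketch of both options applied to the extracted features (X_for_RF is an assumed name; the target dimension of 256 is arbitrary):

      import numpy as np
      from sklearn.decomposition import PCA

      # Option 1: classic PCA
      X_pca = PCA(n_components=256).fit_transform(X_for_RF)

      # Option 2: keep a random subset of feature columns, as the quoted paper suggests
      rng = np.random.default_rng(42)
      keep = rng.choice(X_for_RF.shape[1], size=256, replace=False)
      X_random = X_for_RF[:, keep]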

  • @bigghostshadowchannel3955
    @bigghostshadowchannel3955 a year ago

    I have followed this process of coding, but can you please mention how many epochs you have used, and is there any way to change the epoch number in the program?

  • @tamizhelakkiya
    @tamizhelakkiya 3 years ago

    Very nice explanation... but when I try this code on my classification problem, it gives a larger number of classes for a 3-class problem statement.

  • @konstantinosdiamantis6098
    @konstantinosdiamantis6098 4 years ago +1

    Awesome video sir! What if we have a big dataset? SVM from sklearn doesn't support generators or partial_fit.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Well, that is why you have neural networks; try the SGD classifier. If you want to stick with SVM, try using PCA to reduce dimensions.
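
      A sketch of the incremental SGD route for data that does not fit in memory (batch_generator and num_classes are hypothetical, and vgg is assumed to be a frozen feature extractor as above):

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      sgd = SGDClassifier()               # defaults to a linear SVM trained with stochastic gradient descent
      classes = np.arange(num_classes)

      # Extract features batch by batch and update the classifier incrementally
      for x_batch, y_batch in batch_generator():
          feats = vgg.predict(x_batch).reshape(len(x_batch), -1)
          sgd.partial_fit(feats, y_batch, classes=classes)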

    • @konstantinosdiamantis6098
      @konstantinosdiamantis6098 4 years ago

      @@DigitalSreeni Thank you for your reply!

  • @ravilourembam9412
    @ravilourembam9412 2 years ago

    Sir, how do I generate a ROC curve? I am getting an error.

  • @Utaustinesl
    @Utaustinesl 2 years ago

    You are really great Sir!

  • @holthuizenoemoet591
    @holthuizenoemoet591 2 years ago

    Would love to apply this same method but using random forest for segmentation

  • @vlrsenthil
    @vlrsenthil 2 years ago

    Sir, this code part is giving an error: "Unexpected result of `predict_function` (Empty batch_outputs)"
    #Now, let us use features from convolutional network for RF
    feature_extractor = VGG_model.predict(x_train)
    Kindly clarify my doubt.

  • @lalitsingh5150
    @lalitsingh5150 4 years ago +1

    Sir, please make a tutorial video on GANs using your own dataset. Can transfer learning be used in GANs also?

  • @subhijaen3391
    @subhijaen3391 4 years ago

    Sir, when I run this code for a two-class problem it's not working. I'm getting a 4x4 confusion matrix. Kindly tell me why.

    • @saritagautam9328
      @saritagautam9328 4 years ago

      For me it's showing 0.0 accuracy on a two-class problem.

  • @nandiniloku7747
    @nandiniloku7747 a year ago

    Hi sir, how can I run it for n trials?

  • @tirthadatta7368
    @tirthadatta7368 3 years ago

    Can I use the same process for other models like EfficientNet, to combine EfficientNet with SVM and Random Forest?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Sure. You can use the output from trained models as features for your traditional classification algorithms such as SVM, RF, or gradient boosting.
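
      For example, swapping in EfficientNetB0 as the feature extractor might look like this (a sketch; the input size, pooling choice and classifier are illustrative, and x_train / y_train_encoded are assumed names):

      from tensorflow.keras.applications import EfficientNetB0
      from sklearn.ensemble import GradientBoostingClassifier

      effnet = EfficientNetB0(weights='imagenet', include_top=False,
                              input_shape=(224, 224, 3), pooling='avg')  # 'avg' gives one vector per image

      X_feats = effnet.predict(x_train)               # x_train assumed to be (N, 224, 224, 3)
      gb = GradientBoostingClassifier().fit(X_feats, y_train_encoded)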

    • @tirthadatta7368
      @tirthadatta7368 3 years ago

      @@DigitalSreeni Thanks a lot, Sir. But I have another question. In transfer learning we see that we can do the augmentation by using Keras's ImageDataGenerator while running the model. But is there any way to do augmentation in a hybrid model like the one you gave in this video? If it can be done, then how? Can you please tell us?

    • @viola431
      @viola431 2 years ago

      @@DigitalSreeni Did you have any luck trying the SVM?

  • @saritagautam9328
    @saritagautam9328 4 years ago

    In my case, I am running the code on a cat/dog sample dataset, but it is showing 0.0 accuracy. Please help.

    • @ghezaliwaffa3183
      @ghezaliwaffa3183 3 years ago

      I have the same problem; please tell me if you have resolved it.

  • @RajKumar-h2y1b
    @RajKumar-h2y1b a year ago

    Dear Sreeni, please share the code for practice purposes.

  • @kuipan5968
    @kuipan5968 4 years ago

    Another great video, thanks!

  • @kamya5732
    @kamya5732 3 years ago

    Sir, how can we save the final model for further use?

  • @diyasapra8923
    @diyasapra8923 2 years ago

    Hi, I am using this code for a 4-class dataset and I am getting an accuracy of 0,
    and the confusion matrix also has 8 indices (i.e. from 0 to 7) instead of 4.
    If you could please help me out with this it'd be great!

    • @ecemiren3370
      @ecemiren3370 24 days ago

      Hi, were you able to solve this problem?

  • @sneharaina2943
    @sneharaina2943 2 years ago

    Hello sir, I had one query: what is the use of applying a random forest classifier (or any other classifier) after deep learning, like VGG19 or VGG16, since the deep learning model can give us classification results as well?

    • @Otatoes
      @Otatoes 5 months ago

      It's for imbalanced data. CNN models are really biased towards the majority class, whereas random forest models aren't, so we are leveraging the feature-extraction quality of the CNN and letting the random forest do the classification.
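
      If class imbalance is the concern, the random forest can also be told to reweight classes explicitly; a minimal sketch (X_for_RF and y_train_encoded are assumed names):

      from sklearn.ensemble import RandomForestClassifier

      # class_weight='balanced' reweights each class inversely to its frequency
      rf = RandomForestClassifier(n_estimators=100, class_weight='balanced', random_state=42)
      rf.fit(X_for_RF, y_train_encoded)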

  • @harshkumarsingh5815
    @harshkumarsingh5815 3 years ago

    Sir, aren't we supposed not to rescale the test images?

  • @glennmathewgarma5368
    @glennmathewgarma5368 3 years ago

    Very useful. Thank you!

  • @iphipi
    @iphipi 2 years ago

    If I only have 2 classes, should I use a label encoder?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      It depends on whether you define the problem as multiclass with 2 classes or as binary. If you define it as multiclass and use softmax activation, then you need to encode your labels and convert them to categorical. If you define it as binary and use sigmoid activation, you do not need to convert to categorical; just encoding to 0 and 1 would do.
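
      A small sketch of the two encodings (the string labels are made up):

      from sklearn.preprocessing import LabelEncoder
      from tensorflow.keras.utils import to_categorical

      labels = ['cat', 'dog', 'dog', 'cat']     # hypothetical labels

      le = LabelEncoder()
      y_int = le.fit_transform(labels)          # [0, 1, 1, 0] -> enough for binary / sigmoid
      y_onehot = to_categorical(y_int)          # [[1,0],[0,1],...] -> needed for multiclass / softmax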

    • @iphipi
      @iphipi 2 years ago

      @@DigitalSreeni Thank you so much, your video helps me a lot.

  • @jigarhundia2957
    @jigarhundia2957 3 years ago

    I am trying it on two classes of vegetables, one is fresh_vegetable and the other is stale_vegetable, using the same code, but I am getting an accuracy of zero.

    • @ritujangra00
      @ritujangra00 3 years ago

      Same here... how did you resolve the issue?

  • @قصيعودهجعفرحسينالخفاجي

    Hello Sreeni,
    can you make a video on classifying COVID-19 CT scan images using Grad-CAM, please?

  • @lalitsingh5150
    @lalitsingh5150 4 years ago

    Sir, after
    feature_extractor = VGG_model.predict(x_train)
    I get: UnboundLocalError: local variable 'batch_outputs' referenced before assignment
    Please give a solution.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      Maybe an issue with keras.... instead of
      from keras.models import .....
      from keras.layers import ......
      try...
      from tensorflow.python.keras.layers import ....
      from tensorflow.python.keras.models import ....
      But, please be consistent. Do not mix some modules from keras and some from tensorflow.keras.
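
      For instance, a consistent set of imports might look like this (a sketch; the point is simply to pick one namespace and not mix it with another):

      # Everything from tensorflow.keras ...
      from tensorflow.keras.models import Model, Sequential
      from tensorflow.keras.layers import Dense, Flatten
      from tensorflow.keras.applications.vgg16 import VGG16
      # ... or everything from standalone keras, but never a mix of the two.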

  • @islamicinterestofficial
    @islamicinterestofficial 2 years ago

    Thank you so much sir

  • @sobia355
    @sobia355 2 years ago

    Sir, can we give binary images as input to VGG16?

    • @Otatoes
      @Otatoes 5 months ago

      Yes

  • @oladosuoladimeji370
    @oladosuoladimeji370 3 years ago +1

    Hi, thanks for the tutorial.
    Can you make a tutorial on transfer learning for multi-label classification?

    • @hulk8889
      @hulk8889 2 years ago

      I have the same request.

  • @mehmetbey5621
    @mehmetbey5621 4 years ago

    You shared extremely valuable videos. Thank you very much.
    I wonder if we can use fine-tuned CNN models for feature extraction (where you set all trainable layers = False), and
    is it enough to do something like:
    (for layer in ResNet_model.layers[:xy]: layer.trainable = False; for layer in ResNet_model.layers[zx:]: layer.trainable = True)?
    And should we use the CNN model's features after all the epochs have finished, or just with layer.trainable = False as you shared?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      When we say we want to use pretrained models as feature extractors we will set trainable=False because we want to use pre-trained weights as kernels that define the feature extractor. If you do the training, running through epochs, you are adjusting the weights and fitting a model. This is not the goal when you only want to use the model for feature extraction. Think of it as training a model on a bunch of data and now you are using the trained model for prediction. Except, we are not predicting and only extracting features because the original network was trained for a different purpose.
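
      In code, the distinction comes down to freezing the layers and never calling fit(); a minimal sketch (ResNet50 and the input shape are illustrative, and x_train is an assumed name):

      from tensorflow.keras.applications import ResNet50

      # Feature-extractor use: freeze everything, no training epochs involved
      resnet = ResNet50(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
      for layer in resnet.layers:
          layer.trainable = False

      features = resnet.predict(x_train)   # used directly as inputs to RF / SVM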

    • @nedim8403
      @nedim8403 4 years ago

      @@DigitalSreeni thank you for this valuable help and contributions

  • @nyanlintun3854
    @nyanlintun3854 4 years ago

    Sir!
    Your example is for 4 classes. Can it work for 30 classes?

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Yes, no reason why it wouldn't work for 30 classes.

  • @manishsahani4867
    @manishsahani4867 4 years ago

    Sir, please make a video on style transfer and pix2pix neural networks.

  • @elyesachour6333
    @elyesachour6333 2 years ago

    Amazing video!!
    I was wondering if I can save my trained model so I can call it again without rerunning everything, and how to do so.
    I'm actually trying to classify 3 classes with a limited number of features and I'm getting an accuracy of 0.59; I've tried data augmentation with ImageDataGenerator but the accuracy became 0.58.
    What should I do to increase it?
    Thank you

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      To save and load sklearn models you can use pickle:
      import pickle
      #Save the trained model as a pickle string to disk for future use
      filename = "sandstone_model"
      pickle.dump(model, open(filename, 'wb'))
      #To test the model on future datasets
      loaded_model = pickle.load(open(filename, 'rb'))
      result = loaded_model.predict(X)
      If your accuracy is not improving, play with the classifier parameters. If that doesn't help, you need more training data. You cannot augment your way to better accuracy.

  • @moss_bee
    @moss_bee 4 years ago

    Nice work, keep going!

  • @surajshah4317
    @surajshah4317 4 years ago +1

    Sir, it is my request that you make a video on a combination of U-Net and GAN, as a GAN-UNet network, for segmentation purposes.

    • @holthuizenoemoet591
      @holthuizenoemoet591 2 years ago

      I second this, transfer learning from Unet to random forest for image segmentation would be amazing

  • @himanshumani1550
    @himanshumani1550 2 years ago

    Why are you not normalizing the extracted features before giving them to the random forest?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      For a couple of reasons, the primary one being that Random Forest is a tree-based model, so it does not require feature scaling.

  • @anupriya3741
    @anupriya3741 3 years ago

    Sir, your videos are damn amazing. I tried it with a different dataset, but I'm stuck with 0.0 accuracy as mine is a 2-class classification. Please guide us.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      I don't see why there should be an issue with 2 class classification. It just reads the folder name and assigns it as a label. It does not matter how many folders you have. Please make sure the folder names do not have any spaces, that can lead to issues.

    • @anupriya3741
      @anupriya3741 3 years ago +1

      @@DigitalSreeni No sir, no spaces in the folder names... I just changed the folder path as I am running this on Colab... just troubled by the accuracy. Is it because I am using X-ray images? Please have a look.

    • @ghezaliwaffa3183
      @ghezaliwaffa3183 3 years ago

      I have the same problem; please tell me if you have resolved it.

    • @ritujangra00
      @ritujangra00 3 years ago

      @@anupriya3741 I also got 0.0 accuracy with a 10-class dataset... what can be done?

    • @ritujangra00
      @ritujangra00 3 years ago

      @@DigitalSreeni I also got 0.0 accuracy with a 10-class dataset... what can be done? Please reply sir, I'm stuck.

  • @ameentendolkar4430
    @ameentendolkar4430 4 years ago

    Please make a video on VGG16 with SVM and VGG16 with KNN.

    • @DigitalSreeni
      @DigitalSreeni  4 years ago +1

      Just replace the Random Forest part in this video with SVM or other classifier. I am not sure if it requires a new video for each classifier in this process.

    • @ameentendolkar4430
      @ameentendolkar4430 4 years ago

      @@DigitalSreeni Thanks. I was able to do it. Please post a video with pretrained AlexNet.

    • @viola431
      @viola431 2 years ago

      @@ameentendolkar4430 What did you replace in the code to use SVM? I'm trying to do the same.

  • @azamatjonmalikov9553
    @azamatjonmalikov9553 3 years ago

    Amazing content as usual, well done :)

  • @Vidush05
    @Vidush05 4 years ago

    Great, Thank you

    • @DigitalSreeni
      @DigitalSreeni  4 years ago

      You are welcome!

    • @Vidush05
      @Vidush05 4 years ago

      @@DigitalSreeni I have a doubt on this. Here we are giving an input size of 256, but the default size for VGG16 is (224, 224); the convolutional layers are structured based on that, and it got 92% accuracy in the ImageNet competition. If we change the input shape here, will it affect the performance of the model?

  • @sahibkhouloud8670
    @sahibkhouloud8670 4 years ago

    Thank you sir

  • @agsantiago22
    @agsantiago22 2 years ago

    Thanks!

  • @shimaro1000
    @shimaro1000 a year ago

    Great

  • @FunkyPeakz
    @FunkyPeakz 5 months ago

    Anybody interested in my project? I'm working on density map estimation.

  • @rv0_0
    @rv0_0 2 years ago

    from tensorflow.keras.layers import BatchNormalization
    Use this import statement for BatchNormalization

  • @iphipi
    @iphipi 2 years ago

    After feature extraction, can I use optimization like GridSearchCV or GA with TPOT for the classifier?

  • @nareshgoud7993
    @nareshgoud7993 3 years ago

    Sir, I am getting an error while implementing this on another dataset: "zero-size array to reduction operation maximum which has no identity"