195 - Image classification using XGBoost and VGG16 (ImageNet) as feature extractor
- Published: 5 Oct 2024
- Code generated in the video can be downloaded from here:
github.com/bns...
XGBoost documentation:
xgboost.readth...
Video by original author: • Kaggle Winning Solutio...
You are really a good teacher; I appreciate the good work.
Glad you think so!
@@DigitalSreeni Sir, I am applying the XGBoost algorithm for image classification but I am getting 0% accuracy.
Please help me, sir.
How can I apply k-fold cross validation in this code? I hope you can help me in this situation. Thank you for all your effort, Sir.
Because the most common problem in practice is overfitting. How can I overcome this in this code?
Thank you for your effort, Sir! Wish you all the best! I have a question regarding classification for medical images when using your approach. Obviously, you have implemented your method on categories or classes that are well differentiable, but is it still worth trying for a medical CT image classification task such as COVID-19 classification? Do you have different codes or recommendations for such a task? Best!
Hello Sreenivas sir!
I am working on a project named *Semantic Segmentation for Autonomous Vehicles in different weather conditions*
For this, we are using *A2D2 Semantic Segmentation* dataset. This dataset contains images with their annotations ready.
Our aim is to create a model which is robust in different weather conditions.
For adding the different weather effects, we have used different image processing techniques for image augmentation for 4 weather effects: rainy, foggy, cloudy & snowy.
Now for Semantic Segmentation, we are using *ENCODER-DECODER* model where we are using VGG16 pretrained model on Imagenet + FCN (Fully Convolutional Networks).
I am trying the standard process of adding convolutional layers, deconvolutional layers, unpooling, etc. for the FCN part, but I am not that confident.
*Questions:-*
1) How should I approach the FCN part? I am doing trial and error for this purpose. Any suggestions?
2) I have created 950 images of each weather condition. The annotated images for all 4 weather effects will be the same. Can it overfit the model, since the ground truth will be the same for all 4 weather effects?
3) I was thinking of adding another feature extraction NN which will provide information about the weather effects. The output of this feature extraction network would be added to the FCN part to increase the robustness of the model.
Any suggestions or tips will be helpful.
Thank you & Stay Safe!
Have you ever worked on using XGBoost to classify both image and text data? For example, classifying "memes", so an image + a text column.
You are too good sir
I keep getting this error: 'int' object has no attribute 'assign' in --> feature_extractor=VGG_model.predict(x_train)
I'm also getting the same problem; if you have a solution, please tell me.
Hey, thank you for the fantastic video; I have a question about feeding input to the VGG16; Can we also feed NumPy arrays to VGG16 and extract features from our NumPy arrays instead of images? If yes, are we limited to using packages from Keras preprocessing to be on the safe side regarding our calculations, or can we simply load arrays with whatever we want? Thanks.
Sir, please upload a separate video on how to convert this model into a local web application using Flask. Please!
Great lecture
Glad you think so!
please do a video on multilabel image classification using vgg16
Amazing video!!
I was wondering if I can save my trained model so I can call it once more without rerunning it, and how to do so.
I'm actually trying to classify 3 classes with a limited number of features and I'm getting 0.59 accuracy. I've tried data augmentation with ImageDataGenerator, but the accuracy became 0.58.
What should I do to increase it?
Thank you
Hi Sir, thanks for your great video. Is it possible to apply XGBoost for image to image regression or image denoising like in Convolutional Autoencoder or UNET?
You are great!
Thanks :)
#Now, let us use features from convolutional network for RF
feature_extractor = VGG_model.predict(x_train)
"I got an error on this line, please help"
Hi, great video. How can I use this for grayscale images?
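VGG16 expects 3-channel input, so one common workaround (not shown in the video) is to replicate the single gray channel three times before feature extraction. A minimal numpy sketch with dummy data:

```python
import numpy as np

# A dummy batch of 5 grayscale images at VGG16's default input size.
gray = np.random.rand(5, 224, 224)

# Repeat the gray channel three times along a new last axis so the
# array matches the (batch, height, width, 3) shape VGG16 expects.
rgb_like = np.repeat(gray[..., np.newaxis], 3, axis=-1)
print(rgb_like.shape)   # (5, 224, 224, 3)
```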
Could you please make a video on how to work with a custom image dataset, or if you have one already, could you please post the link? In this video, you're working with the VGG16 pre-trained dataset. But what if I have my own dataset of, let's say, clothing or food item images? I'm currently in that same situation. A little help regarding this would be greatly appreciated.
Thank you. And a very nice video and explanation.
This video uses a custom dataset. VGG16 is used as a pretrained network, not a dataset. Most of my videos use custom datasets. I have covered using pretrained networks, custom networks, and custom feature extractors. Please have a look at other relevant videos on my channel.
How can I measure the complexity of the proposed approach?
Hi sir, currently I'm dealing with a dataset in which there are only two categories, each having only 36 images for training. Can I use the method you discussed above?
Sir, I have 120 classes; will this work in my case?
Hello Sir, can we use XGBoost and a CNN model (for feature extraction) for classification of arthritis from input images?
Hi, great videos, I've seen most of them, but I don't understand why you would convert the image from BGR to RGB. When you read the image with OpenCV it will be in BGR format, so you should convert it from BGR to RGB, no?
I don’t understand what your exact question is, can you please elaborate? When you read images using opencv they are in the form of BGR. You don’t have to convert them to RGB for most purposes but I do it for visualization, makes it easy for us to interpret them. You can use BGR2RGB conversion in opencv which is nothing but swapping the B and R channels. You’d get the same result if you use RGB2BGR as this operation is just swapping the first and last channels.
I am getting 0% accuracy. What may be the reason?
Me too
My accuracy is also 0%
Please help me, sir.
@@shamimsarker839 Hello sir, have you found a solution to this problem?
I've run the code in JupyterLab, but during the training process the message "the kernel appears to have died. it will restart automatically" always appears. Do you know the solution?
Can we do data augmentation and perform XGBoost later? If yes, how will the labels be given?
Yes of course. In the past I got good results when I used augmented data. I performed augmentation to generate 5 to 10 times more images and stored them locally in separate folders with appropriate folder names which I have used as class labels.
@@DigitalSreeni Thanks andi :)
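The folder-name-as-label scheme described in the reply above can be sketched as follows; the class names and file layout here are illustrative, not from the video's code. Augmented images go into one subfolder per class, and the subfolder name becomes the label when the data is read back:

```python
import os
import tempfile

# Build a dummy folder tree standing in for locally saved augmented images.
root = tempfile.mkdtemp()
for cls in ["foggy", "rainy"]:                     # hypothetical class names
    os.makedirs(os.path.join(root, cls))
    for i in range(3):                             # pretend these are augmented images
        open(os.path.join(root, cls, f"aug_{i}.png"), "wb").close()

# Read the tree back: each image's label is simply its parent folder name.
paths, labels = [], []
for cls in sorted(os.listdir(root)):
    for fname in sorted(os.listdir(os.path.join(root, cls))):
        paths.append(os.path.join(root, cls, fname))
        labels.append(cls)

print(labels)   # ['foggy', 'foggy', 'foggy', 'rainy', 'rainy', 'rainy']
```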
In XGBClassifier's fit, how can I provide images and labels in batches after data augmentation through a data generator?
Where can we download the data? And how should the data be arranged in subfolders?
Hi Sir, I was running this code to classify 66 categories (6000 training images) on Colab Pro, but fitting the model has been taking too long. Any suggestions for this? Love the videos, thanks!
When fitting the train data I am getting a bad allocation error, any idea why this may be?
I have installed xgboost and it works well in Python, but I could not import "xgboost" in Jupyter Notebook. Can you help solve the problem? Thanks!
How can we perform fusion of handcrafted and VGG16 features for training our classifier?
Concatenate both features before sending to XGBoost.
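The concatenation suggested above is a single column-wise stack of the two feature matrices; everything below uses random stand-in data and illustrative variable names:

```python
import numpy as np

n_samples = 10
vgg_features = np.random.rand(n_samples, 512)   # e.g. pooled VGG16 output per image
handcrafted = np.random.rand(n_samples, 20)     # e.g. texture/shape descriptors per image

# Stack the feature matrices side by side: each row is one sample,
# its columns are the VGG16 features followed by the handcrafted ones.
fused = np.concatenate([vgg_features, handcrafted], axis=1)
print(fused.shape)   # (10, 532)
```

The `fused` matrix can then be passed to `XGBClassifier.fit(fused, labels)` just like the VGG16 features alone. Scaling the two feature sets to comparable ranges first is often a good idea.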
Which works better on a small dataset of 50: VGG16 + random forest, or VGG16 + XGBoost?
I do not expect any big statistical difference between the results from random forest and XGBoost.
Is it a good image classification model? I am doing a paper on plant disease detection and am looking into XGBoost.
Yes, please try both XGBoost and LGBM.
@@DigitalSreeni Hello again, the purpose of the VGG16 is as a feature extractor, correct? This produces a new image of the extracted regions, am I correct? Would I be able to show these images before passing them to XGBoost, in order to visualise further what the features per class look like?
Thanks a lot for your input; it is greatly appreciated.
Same, I am also wondering about this.
Hi, I am doing a master's thesis and I would like to know whether it is advisable to write code for functions from scratch or to use libraries, in order to justify proof of work. For example, instead of the XGBoost library, should I do the entire thing from scratch?
Why stop there? Why not write the code to reinvent all of Python? Obviously, I was being sarcastic. I don't understand why you want to reinvent something that has already been invented and works fine. I recommend benefiting from existing libraries so you can focus more on the things that actually need to be coded.
Can we also display a ROC curve for this program?
Yes, of course.
@@DigitalSreeni Thank you so much, I really appreciate it. For binary classification with VGG16 and XGBoost, I have been able to generate a ROC curve, but for multiclass I could not. Could you please help me? I really need help.
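One common way to extend ROC curves to a multiclass problem is one-vs-rest: binarize the labels and compute one curve per class. A sketch with random stand-in data (the `y_score` array stands in for the classifier's `predict_proba` output; variable names are illustrative, not from the video's code):

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
n_classes = 4
y_true = rng.integers(0, n_classes, size=100)      # stand-in for test labels
y_score = rng.random((100, n_classes))             # stand-in for predict_proba output
y_score /= y_score.sum(axis=1, keepdims=True)      # rows sum to 1, like probabilities

# One-vs-rest: column c of y_bin is 1 where the true class is c, else 0.
y_bin = label_binarize(y_true, classes=list(range(n_classes)))
for c in range(n_classes):
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```

Each (fpr, tpr) pair can be plotted with matplotlib to show one ROC curve per class; with random scores the AUCs hover around 0.5, as expected.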
Sir, can you please provide the code?
It should be on my github page now. Sorry for the delay in sharing the code.
I want to ask you: what if I wanted to use binary classification with this code, what should I edit? I am confused and have tried many ways to set this code up as binary, but it still comes out as 4 classes no matter what I do. So I am wondering: where in your code do you set the labels and classes to 4? Thank you!
I keep getting this error 'int' object has no attribute 'assign' in --> feature_extractor=VGG_model.predict(x_train)
Hi, did you manage to resolve this?
When I use this line "x_train, x_test = x_train / 255.0, x_test / 255.0", I get the error "TypeError: unsupported operand type(s) for /: 'list' and 'float'".
Do you know what happened?
import numpy as np

#Convert lists to arrays; division by 255.0 then works element-wise
train_images = np.array(train_images)
train_labels = np.array(train_labels)
test_images = np.array(test_images)
test_labels = np.array(test_labels)