The code and concepts are explained so incomprehensibly.
One of the best tutorials for beginners in CNNs, thanks so much... you deserve more views and likes.
Thank you
Thanks a lot! I couldn't explain how much your video tutorials have helped me with my computer vision project
Glad I could help!
Ma'am, thank you so much for all these videos you have made for deep learning. I am an aspiring ML researcher, but I have had a lot of problems finding proper explanations for all the models, frameworks, and their implementations. Your videos are a treasure for anyone who wants to understand these concepts.
You are most welcome
Thanks Aarohi. You brilliantly explained everything. Ran all the code successfully, with a few changes of course :)
Great to hear!
Please make a video on VGG19 as well. Best explanation!
Thank you so much, this has cleared all my doubts about VGG16. Great job!
Happy to help
Thanks a lot, you are the first person who gave me a full explanation in such an easy way.
Glad to hear that
Thank you so much, you are the first person who gave me a full explanation and helped me start learning in an easy way. Thank you, I wish the best for you.
Glad to hear that!
Thank you very much, ma'am. This tutorial helped me a lot in completing my final-year project :)
Glad to hear that! Good Luck
In this, no freezing of layers is implemented? That means the whole model is getting trained again, and we have just used the architecture?
Can someone help?
Brilliant Explanation.
Glad it helped!
Do we need to define all the layers, or can we directly import the VGG16 architecture?
VGG16 is already trained on the ImageNet dataset, so can we just test our image with it?
Do we need to train it with our own dataset? I did not get this point.
Thanks a lot, Aarohi.
You explained it so well and helped in solving all the errors. ❤️❤️
My pleasure 😊
It is very well explained. Thanks.
Glad it was helpful!
I need to classify some documents which are PDFs, images, etc. Will this work?
Thank you very much
Welcome!
How about skip connections?
I mean, can we implement VGG16 like this:
input layer -> lambda layer -> convolution layer -> max pooling layer -> convolution layer -> max pooling layer -> convolution layer -> max pooling layer -> flattening layer -> dense layer -> dense layer
Is this the correct process?
Regarding your question about skip connections: the VGG16 architecture does not use skip connections, which are common in more modern deep network architectures like ResNet, DenseNet, etc.
Regarding your proposed architecture, it seems you are trying to modify the original VGG16. While you can certainly modify the architecture, the main characteristics of VGG16 are 3x3 convolutional filters with a stride of 1 and max-pooling layers with a stride of 2. The architecture also has a large number of convolutional layers, which helps it learn features at different scales.
Your proposed architecture appears to be missing several convolutional layers that are present in the original VGG16. You should also include the fully connected layers at the end. Adding a lambda layer at the beginning, however, is not necessary.
A possible modification to VGG16 is to add BatchNormalization layers after each convolutional layer to help with faster convergence and regularization. You can also experiment with different activation functions and regularization techniques to improve performance.
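For illustration only (this is a rough sketch, not the exact code from the video, and the 2-class output is an assumption), a VGG-style block with BatchNormalization after each convolution could look like this in Keras:

from tensorflow.keras import layers, models

def vgg_block(x, filters, n_convs):
    # A VGG-style block: stacked 3x3 convolutions, each followed by
    # BatchNormalization and ReLU, then a 2x2 max-pool with stride 2.
    for _ in range(n_convs):
        x = layers.Conv2D(filters, (3, 3), padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    return layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

inputs = layers.Input(shape=(224, 224, 3))
x = vgg_block(inputs, 64, 2)
x = vgg_block(x, 128, 2)
x = vgg_block(x, 256, 3)
x = vgg_block(x, 512, 3)
x = vgg_block(x, 512, 3)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation='relu')(x)
x = layers.Dense(4096, activation='relu')(x)
outputs = layers.Dense(2, activation='softmax')(x)  # 2 classes assumed; change as needed
model = models.Model(inputs, outputs)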
Thank you so much for this wonderful explanation. You did a great job, God bless you.
Glad my video was helpful!
Thank you very much!!! Another question: I trained the CNN model from scratch and also used the pre-trained model, but training from scratch does not improve the training accuracy and the pre-trained model does not decrease the validation loss. What is the reason for this?
There could be several reasons why training a CNN model from scratch may not improve training accuracy, or why using a pre-trained model may not decrease validation loss. Here are a few possible explanations:
Insufficient data: Training a CNN from scratch typically requires a large amount of labeled data to generalize well. If you have a small dataset, the model may struggle to learn meaningful patterns and could result in poor accuracy. In such cases, using a pre-trained model that has been trained on a larger dataset can be beneficial.
Inadequate model capacity: CNN models have a certain capacity to learn complex patterns. If your model is too shallow or has fewer parameters, it may not have enough capacity to capture the underlying patterns in your data. Increasing the depth or width of your model or trying a different architecture may improve its performance.
Inappropriate hyperparameters: The choice of hyperparameters, such as learning rate, batch size, or regularization strength, can significantly impact the training process. If these hyperparameters are not set appropriately, the model may struggle to converge or generalize well. It's important to experiment with different hyperparameter configurations to find the optimal settings for your specific task.
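If limited data is the issue, one common approach (sketched below with an assumed two-class setup, not the exact code from this video) is to keep the pre-trained VGG16 base frozen and train only a small classifier head:

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pre-trained feature extractor, frozen so only the new head is trained
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(2, activation='softmax'),   # 2 classes assumed; change as needed
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])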
@@CodeWithAarohi But I use data augmentation. Does this affect the performance? Thanks a lot for all the detailed replies...
Wonderful session! My doubt was: can I use the same code in Google Colab?
Yes
Thanks for the video.
How do we use VGG16 for grayscale images?
Just change the number of channels from 3 to 1.
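For example (a minimal sketch, assuming you build the network layer by layer as in the video; note that the ImageNet weights cannot be loaded directly for a 1-channel input):

from tensorflow.keras import layers, models

model = models.Sequential()
# First VGG block with a single-channel (grayscale) input instead of RGB
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same',
                        input_shape=(224, 224, 1)))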
Thank you so much for this helpful video. I just have one question: Whenever I want to fit the model, I receive the following error: "Failed to convert elements of SparseTensor to Tensor. Consider casting elements to a supported type". I tried resolving the error but failed so far. Would you have an idea on how to solve it? Thanks so much in advance!
Try adding .todense() to your training variable before fitting.
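Roughly like this (variable names are assumptions; apply it to whichever array is sparse, usually the one-hot encoded labels):

import numpy as np

# Convert the scipy sparse matrix returned by OneHotEncoder to a dense array
train_y = np.asarray(train_y.todense())
test_y = np.asarray(test_y.todense())
model.fit(train_x, train_y, epochs=10, batch_size=32)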
Thank you so much for this wonderful tutorial. Great explanation and code. I am doing fire detection with two sets of data, fire and non-fire. After training this model, how can I use it for a real-time fire detection application? Can you please explain?
ValueError: Found input variables with inconsistent numbers of samples: [102, 359]
How can I fix it ?
Phenomenal
Osama Ashraf thanks
Hello ma'am, I was using this code to detect multiple classes, but it is giving me the error "Failed to convert SparseTensor to Tensor". Ma'am, is this code restricted to binary classes only? Kindly respond 🙏
Use .todense() with your xtrain and xtest to handle this error. And to make it work for multiple classes, change the number of neurons in the last layer.
Ma'am, thank you so much for your quick response. I have tried this solution and one other solution also, but nothing works for me. Kindly help me out 🙏
Ma'am, do you have a video on multiclass image classification with a CNN using Keras?
I don't have that code for VGG-16, but this Keras example does multiclass image classification with ResNet50: github.com/AarohiSingla/ResNet50
@@CodeWithAarohi Thank you, ma'am
@@divyanshubse welcome 🙂
Please, how do I solve this error, @Aarohi?
Call arguments received by layer 'sequential' (type Sequential):
• inputs=tf.Tensor(shape=(None, 512, 512, 3), dtype=float32)
• training=True
• mask=Non
Please, can you help out with a solution? It's urgent for my school project.
Hey, I have one question for you about image resizing: my image dimensions are 1650*3500 (height and width), but which is the best size to resize to for developing a CNN model?
You could consider resizing your images to a common size used in many CNN architectures, such as 224x224 pixels. These dimensions strike a balance between preserving important details and reducing computational requirements. However, it's important to experiment with different sizes and evaluate the impact on your specific CNN model's performance to determine the optimal size for your particular use case.
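For example (file name assumed), resizing can be done while loading the image:

from tensorflow.keras.preprocessing.image import load_img, img_to_array

img = load_img('example.jpg', target_size=(224, 224))  # resizes on load
x = img_to_array(img)
print(x.shape)  # (224, 224, 3)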
Very well explained
Thanks for liking
I noticed in your video it stated that VGG was designed for colored images, but can I use it for a black-and-white image dataset?
Yes, You can
Hello, thank you for your effort. Very well explained. Would you mind doing some projects with time-series data? I have a real-time dataset which is heavily imbalanced over time. Long story short, at different times we sometimes have neural stimuli and sometimes we don't. The dataset has an input and an output, and the crucial tasks are balancing the data and finding the right parameters. I can give you the dataset and the related paper if you are interested. Thanks again.
Hi, can you help me? I have an error: ValueError: A target array with shape (284, 8) was passed for an output of shape (None, 1000) while using as loss `categorical_crossentropy`. This loss expects targets to have the same shape as the output. How can I solve it? Thanks.
Check the number of neurons. The number of neurons in the output layer should match your dataset, but currently the output layer has 1000 neurons.
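A minimal sketch of the fix, assuming a Sequential model and one-hot labels with 8 classes:

from tensorflow.keras.layers import Dense

# Replace the 1000-unit ImageNet head with one matching your 8 classes
model.add(Dense(8, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])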
Ma'am, can I create a VGG16 model for leaf disease detection using the same process you did?
You can make a leaf disease classification model with VGG-16
Thank you so much for this helpful video. I just have one question : Whenever I want to fit the model, I receive the following error: ValueError: Data cardinality is ambiguous:
x sizes: 38
y sizes: 1
Make sure all arrays contain the same number of samples. I tried resolving the error but failed so far. Would you have an idea on how to solve it? Thanks so much in advance
This error is due to the shape or size of your x and y. Both of them should have the same number of samples. If they don't, you can use reshape. I can't help more without seeing your code.
Thank you for this video. When I try to train the model, I get this error: TypeError: Failed to convert elements of SparseTensor(indices=Tensor("DeserializeSparse:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack:0", shape=(2,), dtype=int64)) to Tensor. Consider casting elements to a supported type.
Please, can you help me?
This code is compatible with TensorFlow 1.x. Are you using TensorFlow 2?
@@CodeWithAarohi Yes, I am using TensorFlow 2.10.0. How can I fix it?
Hey ,
I got this error while doing one-hot encoding:
TypeError: __init__() got an unexpected keyword argument 'categorical_features'
I think sklearn has been upgraded and is causing the issue. Can you tell me how to use it with the upgraded version, where this argument has been removed, or tell me the version of sklearn you used?
Instead of:
y = y.reshape(-1, 1)
onehotencoder = OneHotEncoder(categorical_features = [0])
Y = onehotencoder.fit_transform(y)
Y.shape
Use:
y = y.reshape(-1, 1)
onehotencoder = OneHotEncoder()
Y = onehotencoder.fit_transform(y)
print(Y.shape)
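One thing to note (this may or may not apply to your setup): with newer scikit-learn, fit_transform returns a sparse matrix by default, which is what triggers the SparseTensor errors mentioned elsewhere in this thread. Converting to a dense array avoids it, for example:

Y = onehotencoder.fit_transform(y).toarray()  # dense labels that model.fit accepts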
Nice explanation
Thank you
Thank you so much, Aarohi, for this wonderful tutorial. Great explanation and code. I can't seem to get it to run, however. After printing "Epoch 1/10", the call model.fit(...) crashes and gives the following error:
TypeError: 'NoneType' object is not callable
I am using Google Colab with Tensorflow version 2.4.1 and Keras version 2.4.3 and so have commented out the import for _obtain_input_shape.
Any help would be greatly appreciated. Thank you again, Aarohi.
Just use TensorFlow 1.14.0 and Keras 2.3.0.
@@CodeWithAarohi If we use Keras 2.3.0, then the given line won't work:
"from keras.applications.imagenet_utils import _obtain_input_shape # this will work for older versions of keras. 2.2.0 or before"
Which model should we use to get 98% accuracy? Can we use VGG-16 or VGG-19?
Accuracy doesn't depend only on your model. There are other parameters you also need to take care of.
Can you share the link from where you downloaded the dataset?
I created this dataset myself.
@@CodeWithAarohi how many images were taken
I randomly downloaded images from Google, I think 20 images per category, as this is just a sample dataset used for explaining VGG16.
@@CodeWithAarohi okay, thank you
Good afternoon ma'am... I am getting a "No module named future" error. Please help me...
pip install future
Ma'am, where do I run this code?
Could you please help...
You can run it in a Jupyter notebook, or just create a .py file and run it from the command prompt.
How do I use 'get_default_graph' in TensorFlow 1.x?
AttributeError: module 'tensorflow' has no attribute 'get_default_graph'
I'm getting this error; I tried multiple Stack Overflow solutions but they didn't work!
tell me the tensorflow version you are using.
Can you please share the theory behind VGG16?
Will do that soon
Why did we reshape it to [1,224,224,3]?
Thanking you in advance.
We are converting the numpy array to [1,224,224,3] because the model accepts the array in this format (a batch of one image) for prediction.
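For example (assuming x already holds one preprocessed 224x224x3 image):

import numpy as np

x = np.expand_dims(x, axis=0)   # (224, 224, 3) -> (1, 224, 224, 3): a batch of one image
yhat = model.predict(x)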
When I try to split the data into training and test data, I am getting a memory error.
What is the exact error?
It's giving this error:
AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
Which versions of Keras and TensorFlow are required?
tensorflow 1.14.0 and keras 2.3.0
Create a separate environment. The versions I am using are Python 3.6.8, TensorFlow 1.15.0, Keras 2.2.5, and h5py 2.10.0.
@@CodeWithAarohi thank you very much. I will check
Sorry, it's not working; it's giving an error. Thank you for your patience in replying.
TypeError: Failed to convert object of type to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.
@@CodeWithAarohi Madam, I have installed h5py 2.10.0 along with TensorFlow and Keras, but while running model.fit() I am getting this error:
TypeError: Failed to convert object of type to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.
name 'categorical_features' is not defined
Kindly help.
Check whether you have imported the one-hot encoding module. The way to use one-hot encoding differs between TensorFlow 1 and TensorFlow 2; import the module as per your TensorFlow version.
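One alternative that I believe works the same way in both TensorFlow 1.x and 2.x (the class count is an assumption) is Keras' own to_categorical:

from tensorflow.keras.utils import to_categorical

Y = to_categorical(y, num_classes=2)  # set num_classes to match your dataset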
Hi, I am facing an issue with one-hot encoding in it. Can you please tell me the solution for the latest Keras version?
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

y = y.reshape(-1, 1)
# Old API (the categorical_features argument was removed in newer sklearn):
# onehotencoder = OneHotEncoder(categorical_features=[0])
# Y = onehotencoder.fit_transform(y)
# Y = Y.toarray()
# New API: wrap OneHotEncoder in a ColumnTransformer and select column 0
ct = ColumnTransformer([('my_ohe', OneHotEncoder(), [0])], remainder='passthrough')
Y = ct.fit_transform(y)  # add .toarray() if a dense array is needed
print(Y)
Ma'am, where can I find this rooms dataset?
github.com/AarohiSingla/ResNet50
@@CodeWithAarohi thank you mam
Ma'am, can we implement VGG16 on a custom dataset without a GPU?
Yes you can but the training will be slow on CPU
@@CodeWithAarohi thank you ma'am
While running model.fit() I am getting the following error:
ValueError Traceback (most recent call last)
in ()
----> 1 model.fit(train_x, train_y, epochs = 10, batch_size = 32)
2 frames
/tensorflow-1.15.2/python3.7/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
143 ': expected ' + names[i] + ' to have shape ' +
144 str(shape) + ' but got array with shape ' +
--> 145 str(data_shape))
146 return data
147
ValueError: Error when checking target: expected predictions to have shape (1,) but got array with shape (2,)
This error is because you are giving the wrong dimensions to your model. Check the last layer of your model where you are performing classification; make it 2 instead of 1. I guess this will solve your issue.
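A minimal sketch of that change, assuming a Sequential model and two-column one-hot targets:

from tensorflow.keras.layers import Dense

model.add(Dense(2, activation='softmax', name='predictions'))  # 2 units to match (None, 2) targets
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])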
Can you share the "rooms_dataset", please?
please provide your email id
Mam can you please share the link for this implementation
github.com/AarohiSingla/VGG-16
@@CodeWithAarohi
Ma'am, can you tell us how we can plot a good confusion matrix for this type of image classification?
How can we increase test_size?
Increase the test_size parameter in train_test_split.
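For example (variable names assumed), a larger test_size keeps more samples for testing:

from sklearn.model_selection import train_test_split

train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size=0.3, random_state=42)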
Hello, we got an error importing load_img, so please help solve this.
Check your Keras version and then, according to your version, use the right line to import load_img. You can see in the Keras documentation how to import it for that particular version.
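For example (file name assumed), this import path should work across TensorFlow 2.x; older standalone Keras uses keras.preprocessing.image instead:

from tensorflow.keras.preprocessing.image import load_img

img = load_img('example.jpg', target_size=(224, 224))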
I tried to run this code and I am getting this error in model.fit:
TypeError: in user code
Can you show me the values of train_y?
just print(train_y) to see the values in train_y.
@@CodeWithAarohi Thank you for your reply. I can see my train_y values, but I wanted to know what your format is.
Actually, I followed your video and everything worked fine, but on model.fit I am getting an error. No idea how to fix it.
@@amnaasif4920 Send me your code at aarohisingla1987@gmail.com. I will check the error and get back to you.
Hello, did you figure out how to fix your problem? Please help, I have the same error.
Why only 16? Why not more convolutional layers?
Because VGG16 is designed with 16 layers. Yes, you can add more convolution layers, but then you can't call that network VGG16; it would be a customised convolutional neural network.
@@CodeWithAarohi I have a doubt: if we are downloading the weights from GitHub, what is the use of adding convolutional layers? Why is it necessary to do the convolution and pooling ourselves?
@@vikramreddy5631 Downloading weights from the internet is not mandatory. Create your own CNN and see the results. Adding more layers means adding more neurons, and more neurons mean more computation. With more layers you can get good results, but you need more data for that.
Great tutorial. Can you share the code, please?
github.com/AarohiSingla/VGG-16/blob/main/vggnet_with_keras_own_dataset.ipynb
Hello! I need to work on a database with 7 labels( around 500). Should it work? Thank you very much!
Can you explain VGG19, please?
Will try to cover after finishing my pipelined topics
@Code With Aarohi thank you 😊.
You still have not put up VGG19.
I am getting this error; can anyone please solve it?
Which error?
I don't know why I'm getting an error in the model.fit line 🤕🥺
what is the error?
Hello ma'am,
Error message: Traceback (most recent call last)
-----> model.fit(train_x, train_y, ...
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
If you don't mind, can I email you my code? Maybe I made some mistakes there. Can you check it for me, because I have no idea how to solve it? 🥺
Can you please share the code?
github.com/AarohiSingla/VGG-16
@@CodeWithAarohi thanks mam
code link?
github.com/AarohiSingla/VGG-16
Ma'am, thanks for your efforts.
But ma'am, keep in mind that you say the word "right" about every 10 seconds; it would help to work on controlling that.
Sorry for that
Sister, I think you have badly misunderstood the topic.
Amazing explanation, Aarohi!
When I run yhat = model.predict(image), I get this error. Please help me out!
AttributeError: module 'matplotlib.image' has no attribute 'shape'
Convert your image into an array and then pass it to the model for prediction.
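A minimal sketch (file name assumed). That AttributeError usually means the name image is pointing at the matplotlib.image module instead of your loaded picture, so load the image into its own variable first:

from tensorflow.keras.preprocessing.image import load_img, img_to_array
import numpy as np

img = load_img('test.jpg', target_size=(224, 224))
arr = img_to_array(img)               # PIL image -> numpy array
arr = np.expand_dims(arr, axis=0)     # shape (1, 224, 224, 3)
yhat = model.predict(arr)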
Please, if anyone has implemented this successfully, please send the code or a Colab link, because I'm facing some errors with ma'am's code, maybe due to old versions.
Thanks @everyone @anyone
Yes, the errors you are getting are because this code is compatible with TensorFlow 1.x.
@@CodeWithAarohi Ma'am, if possible, please modify the code for the latest versions and share a Colab link; it would help many students like me 🙏
Thanks for your reply.
Very well explained
Anjani Suman thanks