I can't increase the number of neurons in the hidden layer because it shows OOM (out of memory), and earlier it used to run just fine.
Please help me
Even after doing all three steps in the model I am still facing the issue, sir. Each epoch is running for about 30 minutes. I am executing this code in VS Code.
DM me on Instagram
Me too, sir.
@@SPOTLESSTECH Sir, I have messaged you. Please help me out.
That stupid hidden layer line is showing OOM (out of memory).
Earlier it was working just fine, and now I have been stuck here for the past 3 hours and it is not working.
Sir, I am using a different dataset on which my final accuracy is 22% and the loss is 1.87, and the loss neither increases nor decreases; it stays the same.
What do I do to increase the accuracy and minimize the loss?
Sir, can you please share a list of algorithms other than CNN?
Hello sir, instead of tf.keras.utils.image_dataset_from_directory to load images I used ImageDataGenerator with flow_from_directory(), but when I trained, the accuracy on the training data was very low, around 6%. I used the same dataset and the same values you gave, yet I am still facing the issue. Can you help?
For flow_from_directory you have to set the parameter values correctly for your use case.
Sir, one error is coming: "validation_set" is not defined. What should I do about that?
Run the cell where we defined it, then run the current cell; the error will not appear.
ValueError: Arguments `target` and `output` must have the same rank (ndim). Received: target.shape=(None, 38), output.shape=(None, 3, 3, 38)
What should I do, sir?
Are you using the same dataset and following the video step by step?
Check: I have printed the shape of the training set before starting to build the model; check it and debug.
It feels like the more I run this model to change things, the slower it becomes, and now even if I increase the neurons to 1500 it shows OOM (memory not available), though before it even ran with 3000.
Please, any solution?
Try running the code after restarting the kernel.
And what is your model size?
@@SPOTLESSTECH Mine only has 1750 images.
@@SPOTLESSTECH Did that; it still didn't work.
My data only has 1750 images, and now only 50 neurons work; if I add more than that, it shows OOM (out of memory).
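OOM at a hidden Dense layer usually comes from its parameter count: flattened features × units. A minimal sketch (layer sizes and input shape are hypothetical, 38 classes as in the tutorial's dataset) of swapping Flatten for GlobalAveragePooling2D, which shrinks the dense layer's input dramatically:

```python
import tensorflow as tf

# GlobalAveragePooling2D collapses the HxW grid to one value per
# channel, so the Dense layer sees 32 inputs instead of tens of
# thousands -- the usual fix when a big Dense layer triggers OOM.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(38, activation="softmax"),
])
print(model.count_params())
```

Reducing the batch size when loading the dataset (e.g. 64 → 16) is the other main memory knob, since activation memory scales with it.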
In my case there is overfitting instead of overshooting. Any idea?
Add a Dropout layer.
Decrease model complexity.
You can add a kernel regularizer to the neural network layers.
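A minimal sketch of those two fixes together (the 128-feature input and the layer sizes are hypothetical):

```python
import tensorflow as tf

# Hypothetical classifier head showing both anti-overfitting fixes:
# Dropout randomly zeroes 50% of activations during training, and the
# L2 kernel regularizer penalizes large weights.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(
        256, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(38, activation="softmax"),
])
```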
Why is it taking so much time for the accuracy to appear? For the first epoch alone it's taking half an hour.
Looks like you are training on CPU.
If you don't have a GPU, try Google Colab and change the runtime to T4.
@@SPOTLESSTECH What does runtime T4 mean? Epochs = 4 instead of 10?
Can the project run in VS Code?
@SPOTLESSTECH It takes too much time. How do I resolve this problem?
Here, what do you mean by training_set and validation_set
in model fitting?
The training set is the part of the dataset used for training,
while for monitoring accuracy on unseen data we use the validation set.
So what's happening here, in simple words:
1. Training set: data used for training the model and adjusting its weights according to the labels (class_name).
2. Validation set: used to see the model's accuracy; it is data unseen by the model.
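The same idea sketched on a synthetic stand-in dataset (sample count, image size, and the 80/20 split here are hypothetical):

```python
import tensorflow as tf

# Synthetic stand-in for an image dataset: 100 samples, 38 classes.
images = tf.random.uniform((100, 32, 32, 3))
labels = tf.random.uniform((100,), maxval=38, dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices((images, labels))

# 80% trains the model (weights are updated on it); 20% is held out,
# so val_accuracy reports performance on data the model never fit.
train_ds, val_ds = tf.keras.utils.split_dataset(ds, left_size=0.8)
```

When loading from a folder as in the tutorial, you would instead pass validation_split, subset="training"/"validation", and a fixed seed to tf.keras.utils.image_dataset_from_directory.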
In the train_plant_disease.ipynb notebook the program runs correctly up to saving the model, and after that the kernel dies and restarts, so the accuracy visualization cells don't run and give errors. Please help me solve this problem ASAP.
You have to run 3 cells together:
1. model.fit()
2. model.save()
3. Recording the history
During training, cells 2 and 3 will be queued and will run after model training finishes.
Once the history is saved locally, you can load the JSON and perform the visualization.
That is the significance of saving the history.
The reason: model training takes 30-40 minutes or more, depending on your PC architecture.
So basically, after model training your kernel dies due to high resource usage and needs to restart; that's why you run all 3 cells together.
Then even if the kernel dies after running these 3 cells,
you can restart, continue with your JSON file, and then do the visualization.
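A sketch of that save-and-reload step (the filename and the dict contents are hypothetical stand-ins for what model.fit() returns):

```python
import json

# Stand-in for training_history.history, the metrics dict that
# model.fit() returns; values here are made up for illustration.
history_dict = {
    "accuracy": [0.41, 0.63], "loss": [1.90, 1.10],
    "val_accuracy": [0.39, 0.58], "val_loss": [2.00, 1.30],
}

# Cell 3: persist the metrics so a kernel restart doesn't lose them.
with open("training_hist.json", "w") as f:
    json.dump(history_dict, f)

# After restarting the kernel: reload and visualize without retraining.
with open("training_hist.json") as f:
    reloaded = json.load(f)
```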
Bro, I am getting an error in model.fit like:
ValueError: Shapes (None, 1) and (None, 38) are incompatible
Check whether the number of neurons in the output layer equals the number of classes.
@@SPOTLESSTECH The dataset downloaded only 50% onto my system. Is that the cause of this error?
Could you please elaborate on what I should do about this kind of error, sir? @@SPOTLESSTECH
If I use Google Colab, can I deploy it?
Yes, you can.
@@SPOTLESSTECH I am trying the same process in Colab but it is taking time... Can't we compress the size of the input data to make training faster?
It affects model accuracy.
Did you upload your data to Drive and access it via Google Colab?
Also, have you changed the Colab runtime to T4?
If you have not done these two things, the model will take time.
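After switching the runtime, a quick check that the T4 is actually visible to TensorFlow:

```python
import tensorflow as tf

# An empty list means you are still on a CPU runtime; change it via
# Runtime > Change runtime type in Colab before training.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)
```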
I got a ValueError:
an exception encountered when calling Conv2D.call().
Input = tf.Tensor(shape=(None, 2, 2, 512), dtype=float32)
What should I do?
What input shape are you using?
@@SPOTLESSTECH As per your instructions, I used the same input shape.
Now another error is showing: ValueError: 'target' and 'output' must have the same shape. Received: target.shape=(None, 20), output.shape=(None, 38).
What shall I do 😢😢?
@@Supriyanayak21 In the output layer, the number of units (the number of neurons) must equal the number of classes present in the dataset.
There is a mismatch in the neurons of the last output layer; check whether it equals the number of classes.
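A minimal sketch of that rule (the 64-feature input is a placeholder; 38 classes as in the tutorial's dataset):

```python
import tensorflow as tf

# The last Dense layer needs exactly one unit per class. In the
# tutorial you could derive this as len(training_set.class_names).
num_classes = 38
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
print(model.output_shape)
```

The output shape then ends in 38, matching a target of shape (None, 38).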
While printing the training history, val_loss and val_accuracy are not printed. How can I show them?
model.fit() returns the loss and accuracy for each epoch in dictionary format.
It definitely has to return them if model training completed successfully.
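A tiny runnable sketch (random data and hypothetical shapes, 1 epoch) showing that the val_* keys appear only when validation_data is passed to model.fit():

```python
import numpy as np
import tensorflow as tf

# Minimal fit on random data just to inspect the History dict.
x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=16)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# val_loss / val_accuracy exist only because validation_data is given.
hist = model.fit(x, y, validation_data=(x, y), epochs=1, verbose=0)
print(sorted(hist.history.keys()))
```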
@@SPOTLESSTECH It worked. Thank you
@@SPOTLESSTECH Which CNN architecture do you use for model training and testing, like VGG16, VGG19, ResNet50, etc.?
My dataset is small compared to yours, and I'm getting 99 percent training accuracy but 49 percent validation accuracy.
Any suggestions on how I can improve my validation accuracy?
Your model overfits.
Add a Dropout layer, reduce the number of deep layers or the neurons in the deep NN layers, and try again.
I also faced the same thing in my previous project; watch this:
Fruits and Vegetables Recognition System Part-9 | Improving Deep Learning Model Performance
ruclips.net/video/FTBVsYg_FVg/видео.html
@@SPOTLESSTECH thanks brother
Does your laptop overheat easily?
I am using a MacBook with the M2 chip,
so it runs fine for this architecture.
But yes, if you are doing experiments, changing parameters, and retraining, then sometimes it does.
What can I do if my model is overfitting?
Reduce model complexity by adding dropouts.
Use regularization techniques; with TensorFlow you can easily do these things.
If things still don't work, feed in more data and make the model more generalized.
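When collecting more real data isn't possible, data augmentation is the usual stand-in: label-preserving random transforms act like extra training images. A sketch with a hypothetical image size:

```python
import tensorflow as tf

# Random flips, rotations, and zooms generate variants of each image
# every epoch, which helps the model generalize.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
batch = tf.random.uniform((8, 128, 128, 3))   # fake image batch
out = augment(batch, training=True)           # active only in training mode
```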
Bro, what are the specs of your device?
Because on mine, model.fit takes so much time. Any reason why?
You need a GPU to train your model.
I am using a Mac M2 chip device.
If you don't have GPU support, try using one from Google Colab.
@@SPOTLESSTECH I have a GPU, bro, and it still takes time.
Does the model use the GPU by default if detected, or do we have to write code accordingly?
Thank you for getting back to me.
@@SPOTLESSTECH I have 2 GPUs: one is Intel HD Graphics, the other is a GeForce 930MX, but they are not utilized.
Try this and let me know
www.freecodecamp.org/news/how-to-setup-windows-machine-for-ml-dl-using-nvidia-graphics-card-cuda/
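On the default-GPU question: TensorFlow places operations on a visible GPU automatically, with no code change needed. Once the drivers are set up, a quick check of where an op actually ran:

```python
import tensorflow as tf

# TensorFlow uses a visible GPU automatically. The device string of an
# op's result shows where it ran: /GPU:0 if a GPU was used, else /CPU:0.
x = tf.random.uniform((2, 2))
y = tf.matmul(x, x)
print(y.device)
```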
@@SPOTLESSTECH Bro, the link above is insightful, but I trained the model on Kaggle; they provide a GPU free of cost. Thank you for helping me out.
My error is:
ValueError: Arguments `target` and `output` must have the same rank (ndim). Received: target.shape=(None, 38), output.shape=(None, 8, 8, 38)
How can I overcome this?
Check the output layer and set the number of neurons equal to the number of classes.
@@SPOTLESSTECH How exactly can we do that? I have set the number of neurons the same as you.
@@arpitkadam6026 Did the problem get solved?
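An output of shape (None, 8, 8, 38) means the spatial grid was never collapsed before the classifier head. A sketch of the usual fix (layer sizes hypothetical):

```python
import tensorflow as tf

# Without Flatten, a Dense(38) applied after conv layers keeps the
# spatial dims, producing (None, H, W, 38) instead of (None, 38).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),                      # the missing step
    tf.keras.layers.Dense(38, activation="softmax"),
])
print(model.output_shape)
```

With Flatten in place, the output shape becomes (None, 38), matching the target.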
Each of my epochs takes 1:30 hr, and I am also using an M2 Mac.
Check the model architecture with model.summary().
And are you using tensorflow with GPU support?
Check the list of installed packages.
Sir, I get an error while compiling the code:
ImportError: `keras.optimizers.legacy` is not supported in Keras 3. When using `tf.keras`, to continue using a `tf.keras.optimizers.legacy` optimizer, you can install the `tf_keras` package (Keras 2) and set the environment variable `TF_USE_LEGACY_KERAS=True` to configure TensorFlow to use `tf_keras` when accessing `tf.keras`.
The error is shown for this code: model.compile(optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
Instead of that, write optimizer='adam'
and run the code.
Sir, I have done it step by step, but the accuracy is coming out at only 2%.
Try using my notebook from GitHub
and let me know if there is any issue.
Sir, I tried again using the same code as provided on GitHub. It took almost 5 hours for all epochs to complete training, but it gave an accuracy of 0.32.
Sir, one of my epochs took 4 hours to complete. What should I do, sir? Please help; it is stuck now.
Use a GPU; if you don't have one, train on Google Colab.
Sir, how can I use a GPU?
Please upload the next part soon, sir.
The next 2 parts will be uploaded at the end of the weekend.
There we will see how we can evaluate our model, and most importantly, how we can evaluate it on other metrics
like precision, recall, F1 score, and the confusion matrix of a deep learning model.
Keep learning 👍👍
My loss is higher than my accuracy: the loss is 3 and the accuracy is 0.027. Why is that so?
Your model is underfitting.
Increase the model size by adding layers and neurons.
Same issue.
@@BTECE_SHIVANANDSHRIRAME Same issue.
@@SPOTLESSTECH I am having the same issue. I tried to increase the model size with filters=1024, but I am encountering "ValueError: Exception encountered when calling Conv2D.call()".
I tried this on different datasets and I am getting an accuracy of 40-60%.
I have been thinking about how to improve it.
Can anyone help?
Check the training accuracy against the validation accuracy: if the model underfits, add more layers and increase the complexity of the network; if it overfits, add dropout layers and decrease the complexity of the network.
Can you share the trained_model file? I am getting an accuracy of only 30%.
Check the playlist description.
ValueError: Arguments `target` and `output` must have the same rank (ndim). Received: target.shape=(None,), output.shape=(None, 38). Bro, please help me resolve this.
You have done something wrong in the code.
Check all the dimensions, follow the video carefully, and compare the code.
I have already given all the code; try to copy-paste it and debug what's wrong in yours.
Check the unit size in the output dense layer.
Same error, please help.
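target.shape=(None,) means the labels are plain integers while categorical_crossentropy expects one-hot vectors of shape (None, 38). A sketch of the easiest fix, switching to the sparse loss (the label values and fake predictions are hypothetical):

```python
import numpy as np
import tensorflow as tf

# Integer labels (shape (3,)) vs. 38-way probabilities (shape (3, 38)):
# sparse_categorical_crossentropy accepts this pairing directly.
y_true = np.array([0, 5, 37])
logits = tf.random.uniform((3, 38))
y_pred = tf.nn.softmax(logits)      # fake model output
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
```

The alternative is to keep categorical_crossentropy and load the labels one-hot, e.g. label_mode="categorical" in image_dataset_from_directory.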
It takes 2 sec/step; the accuracy I am getting is better than yours, but it is time-consuming: 1:30 hrs for one epoch.
Are you using a GPU?
@@SPOTLESSTECH No, my laptop has a great processor but an integrated 8 GB GPU, and I didn't use any external application to do the job.