Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
You are hands down the best teacher I've found on youtube for deep learning and coding. I've spent hours and hours trying to figure this stuff out, and you just make it so simple and elegant. Thank you good sir.
You are a gifted teacher. I easily understand any topic you teach. Thanks
Hi bro, are you an ML engineer?
Nice video, I've been struggling with data augmentation for a long time T_T. Now I've found how to use a CNN with data augmentation; I hope I can create my own deep learning model using this method. Thanks bro
Your teaching makes things so simple. Thank you, sir.
Video by video this tutorial is getting more awesome, excellent teaching and very calm explanation by our Guruji
Glad it was helpful!
YOU ARE MY BEST ONLINE TEACHER
best teacher ever
great fan of your teaching style
Glad to hear that
Sir, I am a big fan of yours. Your code is very simple and easily understandable. Sir, please do some videos on image segmentation in medical imaging.
Ok
Use layers.RandomFlip(), layers.RandomRotation(), etc. instead of layers.experimental.preprocessing. This fixes the "no attribute 'experimental'" error.
I am getting an error while scaling the train and test set.
Unable to allocate 35.7 GiB for an array with shape (3258, 700, 700, 3) and data type float64
How can I solve it?
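A likely fix (assuming X_train/X_test are the image arrays built with cv2 earlier): the 35.7 GiB comes from keeping 700x700 images as float64, so resize to the video's 180x180 when building the arrays and scale in float32 instead, e.g.:

import numpy as np

# 3258 x 180 x 180 x 3 float32 values is roughly 1.2 GiB instead of 35.7 GiB
X_train_scaled = X_train.astype(np.float32) / 255
X_test_scaled = X_test.astype(np.float32) / 255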
Great job! I found your channel recently and it has helped me a lot, thank you so much🍀
Thanks sir, looking forward to more in-depth content in the upcoming videos.
A very nice lecture as usual.
👍😊
Hi, an error comes up for me when I start doing data augmentation: "AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'experimental'". Please sir, help me solve this issue. There is no issue with the libraries; all the libraries are up to date.
The experimental and preprocessing modules are deprecated. You can use the following code to achieve the same:

data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)
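For anyone on a newer TensorFlow, a minimal self-contained version of that fix (the 180x180x3 input_shape matches the resize used in the video; adjust it if yours differs):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# the standalone augmentation layers replace layers.experimental.preprocessing.*
data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal", input_shape=(180, 180, 3)),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])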
Very well explained sir...🤗
Sir can you please explain RNN and LSTM just like you explained ANN....Please Sir :)
Sir, why was no layers.Input (input layer) given? Why did we start with the Conv2D layer? If there is no input layer, how will the model know the size of the input data?
Hi sir, for the stack of Conv2D layers...are there any general rule-of-thumb for selecting the number of filters?
Is it recommended to go in increasing order (like you did), decreasing order, or keep the number constant?
I also get confused sometimes
Thank you sir. I have one doubt, what % of the image dataset should be augmented to generate a good model?
What about using stratify=y for these multiclass flower labels? Can we do that? You didn't do that here.
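For reference, a stratified split would look roughly like this (assuming the X and y arrays built from the flowers dataset earlier); stratify=y keeps the class proportions the same in the train and test sets:

from sklearn.model_selection import train_test_split

# each flower class appears in the same ratio in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)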
sir, you are the best ! thank you sir.
Awesome content man!! I have liked all your tutorials and I started to follow you on youtube!!👌🏽
Can someone explain this to me: after the data augmentation, model.evaluate returns ~0.70-0.75 accuracy, but when using classification_report(y_test, y_pred_classes) I get 0.51 accuracy. Wasn't it supposed to be the same as model.evaluate()?
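One common cause (just a guess, since the code isn't shown here) is predicting on the unscaled X_test or converting probabilities to class labels incorrectly. Something like this should line up with model.evaluate:

import numpy as np
from sklearn.metrics import classification_report

# predict on the same scaled test set used in model.evaluate, then take argmax
y_pred = model.predict(X_test_scaled)
y_pred_classes = np.argmax(y_pred, axis=1)
print(classification_report(y_test, y_pred_classes))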
The part where the image gets zoomed in and out is not working for me, and when plotting it shows no difference. Is anyone facing the same issue? Sometimes it works, sometimes it doesn't.
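One thing worth checking (an assumption about the cause): the augmentation layers are inactive in inference mode, and RandomZoom(0.1) is a fairly small zoom anyway. Forcing training mode when plotting makes the effect visible:

import tensorflow as tf
import matplotlib.pyplot as plt

# take one image, add a batch dimension, and force the random layers to run
sample = tf.expand_dims(X[0], 0)
augmented = data_augmentation(sample, training=True)
plt.imshow(augmented[0].numpy().astype("uint8"))
plt.show()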
Hi, during the first pass of training the model I used loss='sparse_categorical_crossentropy' as the loss function and the accuracy was terrible (~20-30%),
but when I used loss=tf.keras.losses.SparseCategoricalCrossentropy (as shown in the video), the accuracy was much better (~99%).
Can someone explain the difference between these two losses (despite them having the same name)?
It was explained briefly here:
ruclips.net/video/7HPwo4wnJeA/видео.html
or in previous videos of this course
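The short version (assuming, as in the video, that the last Dense layer has no softmax and therefore outputs raw logits): the string alias is equivalent to the class with its defaults, i.e. from_logits=False, which expects probabilities. With logits you have to say so explicitly:

import tensorflow as tf

# string alias: assumes the model outputs probabilities (softmax already applied)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# explicit class: tells the loss that the outputs are raw logits
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])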
Sir, in the data_augmentation section, while running this cell it says "img_height" is not defined!!
How do I get rid of this problem?
Thanks for the great tutorial
Same here. Did you get any solution for this?
you can define them manually, like this:
img_height = X_train_scaled.shape[1]   # 180
img_width = X_train_scaled.shape[2]    # 180
you should save this video for us, so I can learn it later lol :)) Bye thanks so much
Yes, it is saved. You can always refer to it later.
Great video Sir
It was just so helpful!!! Thank you so much!!!
incredible Video. Hey how can I get the Slides you used in this video?
again useful....thanks dear...thankyou so much.....
Glad you liked it
Amazing content
👍😊
So all of this effort to improve accuracy is taken care of by pre-trained models, right? We just need to use a pre-trained model in our model, and with only a small number of layers and epochs we can achieve maximum accuracy. Am I correct?
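Largely, yes: that is the idea behind transfer learning. A rough sketch of the approach (not this video's code, and the pretrained base here is just one possible choice): freeze a model trained on ImageNet and train only a small head on the flowers.

import tensorflow as tf

# hypothetical transfer-learning setup for the 5 flower classes at 180x180
# (Keras may warn that the ImageNet weights were trained at a different size)
base = tf.keras.applications.MobileNetV2(input_shape=(180, 180, 3),
                                         include_top=False, weights='imagenet')
base.trainable = False                      # keep the pretrained features fixed
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5)                # 5 flower classes (logits)
])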
I was trying to use loss='SparseCategoricalCrossentropy' in the CNN model and I got the same accuracy after every epoch. Can someone explain why this is happening?
did you find the answer for it?
If you were listed on the stock market, I would put all of my money on you, because I know you are going to rise exponentially on YouTube in the upcoming years 🙏
Ha ha thanks. Looks like I need to issue an IPO on myself 🤓
Nice explanation!!!!
Thank you so much sir. Please, could you show us how to deploy this project in Django?
Hello Sir, data_dir gives me a PosixPath instead of a WindowsPath. Any suggestions for this issue?
Sir, I'm stuck on "import cv2". I have tried many ways and many times, but in the end it's not working. Can you please help me with that?
Thanks in advance for your help.
What error are you seeing? Have you tried installing with pip install opencv-python?
If nothing else works, create a new notebook on google colab - colab.research.google.com and execute the code. Here, all the libraries come preinstalled so you can use imports right away.
@@kishore961 Thank you for your suggestion. I was using Windows 10 Pro N. From various sites I learned that if you are using Windows 10 Pro N, it's expected that you will face this problem. Now I am using Windows 10, and my problem is solved.
Hello, I've got a question. What level of mathematics do I have to know before watching this playlist (do I have to know what limits, integrals, derivatives, etc. mean)?
You don't need any math knowledge at all. Most of the time I try to make the math concepts clear, and if I don't, refer to mathisfun.com to clear up the concepts yourself.
@@codebasics my teacher from highschool created that website
Sir, what will the syllabus look like, and when will it be completed from your side?
When I do data augmentation I get this error: "Input 0 of layer lstm_155 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 30, 126, 3)". How do I fix it, please? I tried everything.
I also encounter this error in the previous lesson
@@longvietng04 Have you been able to solve it? I guess I fixed it, but I don't know how to visualize the augmented data to see if it's actually working.
Hi sir, I have applied the same model configuration as mentioned but I am still getting 0.2442 accuracy. Have I made some mistake or missed something here?
Probably you are not fitting the model with scaled values. I made the same mistake the first time.
Make sure you are doing the training with the scaled training data, i.e. "X_train_scaled", not "X_train".
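In other words, something like this (a sketch assuming the X_train/X_test arrays built earlier in the video):

# scale pixels to [0, 1] and train on the scaled arrays, not the raw ones
X_train_scaled = X_train / 255
X_test_scaled = X_test / 255
model.fit(X_train_scaled, y_train, epochs=30)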
I have done augmentation but my accuracy has decreased from 55% to 22%, can anyone help?
Seems it would be better to encode the image using measures that are scale- and translation-invariant.
Sir do you have plans to update Julia Playlists. Seems there has not been a single video since 2016. I understand there could be a lot to chew, but in your future plans I request you to consider it. Till then I would be learning from your other Playlists. Thanks
Ok I will keep this in mind
@@codebasics Thank you
Yes I agree please add more to Julia playlist it would be helpful
What about the augmentation of tabular data ?
Hello there! Awesome tutorials so far. I'm having an error here 29:11 ; it tells me the following: NotImplementedError: Cannot convert a symbolic Tensor to a numpy array. Can anyone help me, please?
I get the previous error when I add the data_augmentation to the model and then compile.
I will appreciate your help!
Thank you for this video
Glad it was helpful!
Thank you so much.
Hi sir,
Can you explain Python 3 in a single video? Because the basics are quite tough for me.
Thanks a lot. ❤️
Why don't you use normalization instead of dividing by 255?
Congratulations on this and all the other videos on the series, they are very informative and instructive.
I was trying to run the notebook on github but I'm getting an error when trying to create data_augmentation.
The variables img_height and img_width are not defined, and using (180, 180) in their place and running the cell just gives me the following error:
"NotImplementedError: Cannot convert a symbolic Tensor (random_rotation_5/rotation_matrix/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported"
I'm just running the notebook, no changes to it. Every other notebook from the previous videos works just fine.
Can you please check if there's a problem with this one?
Thank you so much!
Well, I got it running! I downgraded numpy to version 1.18.5
Take a look at this article:
exerror.com/notimplementederror-cannot-convert-a-symbolic-tensor-lstm_2-strided_slice0-to-a-numpy-array/
Thanks anyway, I'm sure you would have helped :)
Amazing videos you make, Keep them coming:)
Best regards
Hi, you haven't specified input_shape while defining the first convolution layer... is it not necessary to define the input shape???
Keras figures it out automatically: the model is built (and the shapes are inferred) the first time data is passed to it, e.g. on the first model.fit() call.
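A sketch of both options (assuming the 180x180x3 images from the video): either declare the shape on the first layer, or leave it out and let Keras build the model when it first sees data.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu',
                  input_shape=(180, 180, 3)),   # shape declared up front
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(5),
])
model.summary()   # available immediately because the shape is known

# Without input_shape the model still trains fine; it is simply built
# (shapes inferred) on the first model.fit()/model.predict() call instead.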
Hey Dhaval, can you tell me which version of TF and keras should I use? I am facing issues with importing preprocessing module.
You need to have tensorflow 2.3.0 for this to work. pip install tensorflow==2.3.0
@@codebasics Thanks :)
Hi sir, thanks for such a good lecture, but I have a problem: when I run the data augmentation code it shows the error NameError: name 'img_height' is not defined. What can I do about that?
You can provide 180 for both img_height and img_width; it will work.
thank you sir
What if I use plt.imread instead of cv2.imread ?
Good Day Sir, thank you for giving us awesome tutorials. I would like to ask why you chose 180x180 dimension to resize the images? Thank you in advance 😊
If you look at the images in the dataset, they all have different dimensions; to train the model well, we need the same dimensions for all the images.
No reason to choose 180x180 dimension. you can choose any sensible dimension after exploring the sizes of different images in the dataset.
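For reference, the resize step looks roughly like this (assuming the flowers_images_dict and flowers_labels_dict built earlier in the video); any single consistent size works, and 180x180 is just a reasonable trade-off between detail and memory:

import cv2
import numpy as np

X, y = [], []
for flower_name, images in flowers_images_dict.items():
    for image_path in images:
        img = cv2.imread(str(image_path))
        resized = cv2.resize(img, (180, 180))   # every image ends up 180x180x3
        X.append(resized)
        y.append(flowers_labels_dict[flower_name])
X, y = np.array(X), np.array(y)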
Where does the augmented data get saved?
sir can you cover NLP in this playlist
Yes that is coming up
Hi, in 15:58, how do you open the window for the `train_test_split`? Thanks~
Did you find it?
Try shift+tab
Thank u teacher
I have faced some errors... I can't fit X and y... can anyone help me???
Your deep learning playlist is not opening. Something is very, very wrong, please check your playlist.
The playlist is working perfectly good: ruclips.net/video/Mubj_fqiAv8/видео.html
Hello, I am facing an error. I tried all the solutions on Stack Overflow and GitHub and even changed the code to:

import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential(
    [
        tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal",
                                                              input_shape=(img_height,
                                                                           img_width,
                                                                           3)),
        tf.keras.layers.experimental.preprocessing.RandomRotation(0.1),
        tf.keras.layers.experimental.preprocessing.RandomZoom(0.1),
    ]
)

but the error is the same: AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'experimental'. Help, I am stuck on this error.
Did you get any solution?
I tried many different things. I downgraded my TensorFlow but still couldn't solve it; I used other methods but they are very different.
I'm surprised that in the first layer there is no input shape. Could you please clarify? Thank you.
The CNN does not require the input shape to be specified; Keras figures it out from the data the first time the model is called.
i think
@@usamaashfaq4188 Are you sure? How does the CNN know the image dimensions etc.? I'm not sure. But thank you for your reply.
You could use an adversarial network to fit the data better: have a Clown train an Identifier through competition. The Clown tries to fool the Identifier with the kinds of issues the Identifier would face in the real world relative to its task, until the Identifier is guessing with a good enough data fit in the competition.
You can also use Clown-and-Identifier adversarial networks and GANs to better fit a Generator. Say you want to make a generator that generates real-world C++ code: first you train the Identifier with a Clown, then you train your code generator against the Identifier.
How can we get this URL for other datasets?
29:07 block 63, img_height, img_width is not defined. What would be the dimensions?
You can give 180 to both of them; it will work.
@@siddharthsingh2369 Hi, I was facing the same issue; 180 worked like a charm, but I wanted to know why only 180, because other values I tried gave a shape error during model training. Could you please let me know how you came up with 180?
When I get a much higher score in training than in testing, do we call that overfitting, or what?
Yes, it is called overfitting.
In this model, how can I test a single image from my PC and predict the result?
You have to put the image through the same preprocessing steps used for the training images, and then pass it to the model to get a prediction.
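In other words, something like this (a sketch assuming the trained model and the 180x180 preprocessing from the video; the file name is just a placeholder):

import cv2
import numpy as np

img = cv2.imread('my_flower.jpg')            # hypothetical image on your PC
img = cv2.resize(img, (180, 180))            # same resize as training
img = np.expand_dims(img / 255, axis=0)      # same scaling, plus a batch dimension
pred = model.predict(img)
predicted_class = np.argmax(pred, axis=1)[0] # index into the flower class names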
Is it possible to get a certification for this course?
Hello to All,
I am facing an issue in the data_augmentation part saying that img_height is not defined.
Can someone tell me where I am wrong?
Thanks
I initialized img_height = 180, img_width = 180 and it worked, please try. Thanks.
Hey Nikhil, just replace img_height and img_width with 180. This should work.
Please tell me, what are the image height and width?
img_height = X_train_scaled.shape[1]   # 180
img_width = X_train_scaled.shape[2]    # 180
or just
img_height = 180
img_width = 180
@@nadiiaduiunova thank you
Thanks sir! Excellent tutorial
run all codes again
Hello sir, nice video... Anyway, are you going to make a video on RNNs??
Yes I will
I am noticing that no one is talking about the loss before augmentation and loss after augmentation and how to reduce it
In data augmentation, sir, you consider 3 transformations. Does that mean that for a single image it generates 3 images, or does it apply the 3 transformations to 1 image? Please clarify my doubt, thank you.
Data augmentation applies different transformations to the same image.
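So one image can yield many different augmented variants; each pass through the pipeline applies all three layers with fresh random parameters. A quick way to see it (assuming the data_augmentation pipeline and the X array from the video):

import tensorflow as tf

one_image = tf.expand_dims(X[0], 0)                  # batch of one image
aug_a = data_augmentation(one_image, training=True)
aug_b = data_augmentation(one_image, training=True)
# aug_a and aug_b will usually differ: same image, different random flip/rotation/zoom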
Anyone else getting this? WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip cause there is no registered converter for this op.
😍😍😍😍😍😍😍😍
I'm having an error with the following code:

---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
Cell In[24], line 6
      4 for image in images:
      5     img = cv2.imread(str(image))
----> 6     resized_img = cv2.resize(img,(180,180))
      7     X.append(resized_img)
      8     y.append(flowers_labels_dict[flower_name])

error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4152: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'

Please help me to resolve it.
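That assertion almost always means cv2.imread returned None for at least one file (unreadable image or bad path), so cv2.resize receives an empty input. A small guard fixes it (a sketch of the loop above):

import cv2

for image in images:
    img = cv2.imread(str(image))
    if img is None:          # skip files OpenCV could not read instead of crashing
        continue
    resized_img = cv2.resize(img, (180, 180))
    X.append(resized_img)
    y.append(flowers_labels_dict[flower_name])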
20:28