How can someone explain such complex concepts in a very simple way? I adore you.
Glad my video is helpful 🙂
Really knowledgeable video & explained in a Very well manner. Thank you
Glad it was helpful!
Awesome video, well explained.
Thank you so much
didi, you deserve 1 million fr!
Thanks, that's very kind of you!
Very nice Aarohi Mam. Thanks for making complex stuff simple.
Most welcome 😊
Hello Ma’am
Your AI and Data Science content is consistently impressive! Thanks for making complex concepts so accessible. Keep up the great work! 🚀 #ArtificialIntelligence #DataScience #ImpressiveContent 👏👍
My pleasure 😊
Great, Short and Clear
Thanks!
Simply awesome. Thank you!
Glad you liked it!
Thanks for such an easy tutorial on image classification, mam... your channel is worth watching.
Glad to hear that
Really amazing work
Thank you so much 😀
Hello Aarohi
Your channel is very knowledgeable & helpful for all Artificial Intelligence / Data Science professionals. Stay blessed & keep sharing such good content. Your channel really deserves more likes & shares so it can reach the maximum number of AI professionals who can benefit from it.
So nice of you
Excellent content! Thank you
Glad you liked it!
Thank you!
Thank you mam for sharing
Thanks for liking
very helpful video
Glad it was helpful!
Great video, thanks
Glad you liked it!
Hi, thank you for sharing this great content. I have a question: in the 19th minute of the video, you create a model and load the trained model, and you also create a new_model variable. Then, in the 20th minute, you write output = model(input_batch). I'm confused: where do we use new_model?
Hi Aarohi, your content is excellent and your channel is one of the best Artificial Intelligence channels, but it still isn't getting the likes it deserves. Hope you succeed! #AI #ArtificialIntelligence #DataScience #EducationalContent
Thank you so much for your kind words and support! It means a lot to me. 😊🙏
Thanks for the video. Can I ask how you create the directory structure with just daisies and dandelions in separate folders? The file I downloaded (from the link you give) has daisy, dandelion, rose, sunflower and tulip all together.
Delete the rest of the folders and keep only daisy and dandelion (a sketch of the expected layout follows below).
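For reference, the ImageFolder-based loading shown further down in this thread expects something along these lines (the root folder name and the exact train/val split are illustrative, not necessarily the paths from the video):

flowers/
    train/
        daisy/        <- daisy images
        dandelion/    <- dandelion images
    val/
        daisy/
        dandelion/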
Very good video
Thanks
Hello ma'am, could you please provide the source where I can get the image files to run this project? Also, do you have any citations (references) for this project?
Thank you very much for the amazing knowledge sharing. If you can, please explain how we can use deep unfolding networks for image classification optimisation, with code.
Sure I will
Please share the dataset used in this video
Thank you very much. Please make a video that contains an end-to-end computer vision project, even if the project is basic.
Sure!
Hello, great video! I wanted to ask why you used model instead of new_model in the line output = model(input_batch)? new_model should have only 2 neurons in the last layer and therefore choose between the two classes, while model still has all the neurons. Am I correct or am I mistaken? Thanks!!
Check the cell below "Classification on unseen image". There we load a pre-trained ResNet-18 model and its saved weights from 'flower_classification_model.pth', then create a new ResNet-18 model adjusted to classify 2 classes (daisy and dandelion). We copy only the first 2 output units' weights and biases from the loaded model to the final layer of the new model, effectively adapting the pre-trained model for a 2-class problem.
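In code, that step looks roughly like the sketch below (variable names and the exact loading details are illustrative; the notebook may differ slightly):

import torch
from torchvision import models

# model with the original 1000-unit final layer, restored from the saved weights
model = models.resnet18()
model.load_state_dict(torch.load('flower_classification_model.pth'))
model.eval()

# new model whose final layer has only 2 outputs (daisy, dandelion)
new_model = models.resnet18()
new_model.fc = torch.nn.Linear(new_model.fc.in_features, 2)

# copy only the first 2 output units' weights and biases across
with torch.no_grad():
    new_model.fc.weight.copy_(model.fc.weight[:2])
    new_model.fc.bias.copy_(model.fc.bias[:2])
new_model.eval()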
@CodeWithAarohi Okay, thank you! So we load the model with 1000 final nodes and then load our model, which has only 2 outputs. Next, we create a new model and copy only the first 2 weights and biases from the initial model. So, to check my understanding: could I directly load the pre-trained model with the exact number of output units, then load my model and use that?
Hey, I'm working on an image classification project, but I'm confused about the order in which the images should be preprocessed. Is the order below correct?
Step 1: Resizing to 64x64 (both train & validation data)
Step 2: Splitting the dataset into train and validation sets
Step 3: Augmentation (train data only)
Step 4: Normalization (both train & validation data)
Correct
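For reference, that order might look roughly like this with torchvision transforms (a sketch, assuming a PyTorch pipeline and the train/val split already done on disk; the mean/std values are the ImageNet ones discussed later in this thread):

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((64, 64)),                 # Step 1: resize
    transforms.RandomHorizontalFlip(),           # Step 3: augmentation (train only)
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # Step 4: normalization
                         [0.229, 0.224, 0.225]),
])

val_transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225]),
])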
Thank you!
Welcome!
Can I use flatten() instead of RandomHorizontalFlip()?
Good work... do more on Gen AI and LLMs.
Noted!
Hi Aarohi! Thanks for sharing the knowledge :) I have a question to clarify, but I'm not sure whether you'll be able to see my comment. How does the code know how the dataset is separated into inputs and labels while running the training loop, as shown in your video?
This line is responsible for reading the images and their labels: {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
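In other words, ImageFolder infers each image's label from the name of the folder it sits in, and the DataLoader then hands the training loop ready-made (inputs, labels) pairs. A minimal sketch (the paths and transform details are illustrative, not the exact notebook code):

import os
import torch
from torchvision import datasets, transforms

data_dir = 'flowers'  # assumed layout: flowers/train/<class>/*.jpg and flowers/val/<class>/*.jpg
data_transforms = {x: transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
                   for x in ['train', 'val']}

image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True)
               for x in ['train', 'val']}

# each batch is already an (inputs, labels) pair; labels are class indices built from folder names
for inputs, labels in dataloaders['train']:
    print(inputs.shape, labels)
    break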
Where can I find that dataset? I only found the CNN code in the GitHub repo :(
universe.roboflow.com/enrico-garaiman/flowers-y6mda/dataset/7
Mam, if the image has a .npy file extension, then how do I load it?
x = np.load("x.npy")
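If the .npy file holds the raw pixel array, one way to turn it into a model input is sketched below (the filename, shape, and scaling are assumptions about how the file was saved):

import numpy as np
import torch

arr = np.load("image.npy")             # e.g. an H x W x 3 array of pixel values
tensor = torch.from_numpy(arr).float()
if tensor.ndim == 3:
    tensor = tensor.permute(2, 0, 1)   # HWC -> CHW, as PyTorch models expect
input_batch = tensor.unsqueeze(0)      # add the batch dimension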
Mam, I tried with my own CNN model, including dropout and batch normalization, and I achieved an accuracy of 64%; the model predicted the output label correctly for the image. 64% accuracy is not bad, but how can I increase the accuracy, mam?
1- Increase the amount and diversity of your training data.
2- Increase the number of layers (both convolutional and fully connected layers) to capture more complex patterns.
3- Experiment with different hyperparameters like learning rate, optimizers.
4- Use pre-trained models (e.g., VGG, ResNet, Inception) and fine-tune them on your dataset (a rough sketch follows below).
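A rough sketch of point 4 with a pre-trained ResNet-18 (assuming torchvision 0.13+ for the weights argument; the two-class head and the hyperparameters are illustrative):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights
for param in model.parameters():
    param.requires_grad = False                 # freeze the backbone

model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head for 2 classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...then run the usual training loop over your DataLoader with this model, optimizer and loss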
@@CodeWithAarohi thanks for your guidance mam
How did you come up with the values [0.485, 0.456, 0.406] and [0.229, 0.224, 0.225]?
These values are the standard ones computed for the ImageNet dataset. You need to arrive at your own mean [R, G, B] and std [R, G, B] values for your own kind of training dataset.
@@DBWorld thanks!
How can I find that? @DBWorld can you explain?
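For anyone with the same question: one common way is to compute the per-channel mean and standard deviation directly over your training images, roughly as sketched below (the path and image size are illustrative):

import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder('flowers/train', transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),                  # scales pixels to [0, 1]
]))
loader = torch.utils.data.DataLoader(dataset, batch_size=64)

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:                    # images: [batch, 3, H, W]
    n_pixels += images.numel() / 3
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)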
Hi, Nice Video!
Please, can I get the notebook?
Code is available here: docs.ultralytics.com/models/yolov5/
Yea
I have a quick question regarding this video, Aarohi. I watched your video and cloned your GitHub repository to train a dataset of approximately 100 bank cheque images. However, I encountered an issue with the model's performance. When I tested it with non-cheque images, it incorrectly classified them as cheques. On the other hand, it also misclassified bank cheque images as something other than cheques. Can you help me understand and address this problem?
Imbalanced data can lead to misclassification issues. If you have significantly more cheque images than non-cheque images (or vice versa), it can skew the model's performance. You might need to balance the dataset by oversampling the minority class or undersampling the majority class.
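For the oversampling option, a WeightedRandomSampler is one way to do it in PyTorch. A sketch (the folder layout and class names here are hypothetical, not from the video):

import torch
from collections import Counter
from torchvision import datasets, transforms

# assumed layout: cheques/train/cheque/*.jpg and cheques/train/not_cheque/*.jpg
train_dataset = datasets.ImageFolder('cheques/train', transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
]))

# weight each image inversely to how common its class is, then sample with replacement
class_counts = Counter(train_dataset.targets)
sample_weights = [1.0 / class_counts[label] for label in train_dataset.targets]
sampler = torch.utils.data.WeightedRandomSampler(
    sample_weights, num_samples=len(sample_weights), replacement=True)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, sampler=sampler)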
Can the code snippet apply to multiple labels?
Yes
@@CodeWithAarohi thank you 🫶🏻
Tq mam
welcome!
Where is the dataset
universe.roboflow.com/enrico-garaiman/flowers-y6mda/dataset/7
@@CodeWithAarohi
Thank you ❤️
How can I get this dataset?
universe.roboflow.com/search?q=flower%20classification
Code With Aarohi is the best platform to learn Artificial Intelligence & Data Science.
#BestChannel #CodeWithAarohi
Where can I get the dataset?
universe.roboflow.com/enrico-garaiman/flowers-y6mda/dataset/7
Can you provide the dataset?
Where is the dataset?
Where is the dataset directory?
Thank you. I sent you a mail but you didn't answer me; I need your advice please 🙏 Thank you.
Let me check
Keep sharing such an amazing knowledgeable content in form of very easy to learn videos.
Thank you, I will