You are one of the most chill and laid-back, smart teachers I've ever seen. Such an informative tutorial. Thank you :)
Thanks a lot for the videos! A great way to start my data science journey. Really grateful to you for posting such content for free.
Thank you for the super thanks, best wishes to you on your programming journey!
These were videos that I requested. Please make more Project videos in Machine learning and deep learning videos and real-world machine learning projects in PYTHON because You Are The Best to learn from
This is so true. I've been so frustrated trying to learn this topic, and a lot of videos are just people explaining what neural networks are, like the beginning of the previous video in this series, but nobody actually gets into the code and explains how to set up functions. Like there is A HUGE DIFFERENCE BETWEEN DRAWING A NEURAL NETWORK AND ACTUALLY CODING ONE!!! ANYONE CAN DRAW ONE
So true ! Make more such project videos. They prove to be of great help.
Can someone tell me what X=[] and Y=[] are used for?
@@a.n.7338 Yes I do want to know
@@a.n.7338 That's to initialize it to an empty list. But at 15:30, he learned that it did not initialize the type. And that's something I especially like about his videos: he doesn't just edit out the mistakes, he talks about things like the need to reshape being kind of stupid.
5:04 I don't know why, but the way you casually said "ha, bluedog", and then continue on, was hilarious to me xD
Just wow, I didn't want to watch the whole video, but you are a magnet!! Excellent style of teaching.
This is currently the best TensorFlow tutorial on YouTube. Can't express how thankful I am, after wasting so much time on crappy videos named TensorFlow in 10 mins, 5 mins, 1 min, etc...
The value of these videos is fucking incredible. After some setup with Anaconda to get TensorFlow and Python 3.6 to work in PyCharm, I was able to reproduce all of this with my own data. Your explanations are absolutely on point, and I have no questions left after this part.
Thanks, that's awesome to hear!
I started my Python journey with you back in the university days; thanks for being there, boss.
Hi Harrison.
You've been doing an absolutely amazing series of videos implementing deep learning with Python, TensorFlow, Keras, etc.
This is the most useful work you've ever done. I learned the machine learning and deep learning theory easily, but implementation and application are difficult for me. Keep doing this, please.
Here's a course you'll need.
Face Mask Detection Using Deep Learning . It's paid but it's worth it.
khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv
I love the way you personified the NN. The part with the shuffling made me laugh!
I keep getting:
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
What does it mean? I could not find anything on the web. Help!
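A likely cause (a hedged guess, since the asker's code isn't shown): cv2.imread returns None instead of raising when a file is corrupt or not an image (the Kaggle PetImages set is known to contain a few broken files and Thumbs.db entries), and passing that None to cv2.resize produces exactly this assertion. A minimal sketch of the guard, with `read_fn` standing in for cv2.imread so the example runs without OpenCV installed:

```python
# Sketch only, not the tutorial's exact code: cv2.imread returns None (it does
# not raise) for a corrupt or unreadable file, and handing that None to
# cv2.resize triggers "(-215:Assertion failed) !ssize.empty()".
def load_images(paths, read_fn):
    """read_fn would be cv2.imread; returns only successfully decoded images."""
    images = []
    for p in paths:
        img = read_fn(p)
        if img is None:  # unreadable/corrupt file: skip instead of crashing later
            continue
        images.append(img)
    return images
```

In the tutorial's loop, the same effect comes from the try/except around the read-and-resize block; checking for None explicitly just makes the failure mode visible.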
Works like a charm. For those (beginners like me) who had an issue in the layers like "kernel size not defined": just replace (3, 3) with kernel_size=3 in layers 1 and 2 and it will be good to go.
Another amazing set of tutorials. You truly are helping me understand Python and Deep Learning at a whole different level. Thank you for your time and expertise, Sentdex.
A nice alternative to pickle is
np.save('features.npy', X)  # saving
X = np.load('features.npy')  # loading
Thanks for sharing!
thanks
Thanks
very helpful
thanks
Epic moment: "haa, blue dog" @ 5:05
Hahahaha
I laughed so hard
THANK YOU SO MUCH!!! I just started with Machine Learning and Neural Networks and this video helped me a lot!!!
Sentdex, these are the best videos I have ever seen in deep learning. Amazing tutorials. You are the best at what you do. Why did it take me so long to find this channel?
Welcome here :D
@@sentdex can I have your email if you don't mind?
@@sentdex Hello. I got stuck when specifying my directory in the file.
Kindly advise. Thank you.
You are very good at teaching, and the world needs you, Sir.
one of the best programming channels on RUclips. Subscribed and hit the bell ; )
Thanks!
These vids always cheer me up :) You are by far my favourite instructor. :) When I feel depressed I just watch your videos.
Nice to hear :)
For those who want to use RGB/color images, modify these lines!
Change these:
img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE)
plt.imshow(img_array, cmap='gray')
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
To:
img_array = cv2.imread(os.path.join(path,img))
plt.imshow(img_array)
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
And this should work, good luck!
you are the man
thanks
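To check what the color version's reshape above produces without loading real files, here is a numpy-only sketch; the zero arrays stand in for images read by cv2.imread without the grayscale flag, which come back as (height, width, 3) BGR arrays:

```python
import numpy as np

# Stand-in for 4 resized color images; real code would build this list
# with cv2.imread(...) + cv2.resize(...) as in the video.
IMG_SIZE = 50
images = [np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8) for _ in range(4)]

# The color reshape from the comment: the trailing 3 is the channel count,
# and -1 lets numpy infer the number of samples.
X = np.array(images).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
```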
amazing videos, great AI tutorials, honestly one of the best programming channels on RUclips. thank you for making these videos
Yeah, I also really enjoy his videos. He inspires me to make my own AI videos.
Thank you so much for this video. As a programmer who just wants to start prototyping a simple model without a great DL background this video gave me the tools to get on with my work.
This series is so thorough and easy to understand (for me at least :D)!
I can't wait for the next part!
Great to hear!
"the hand of a dog", so wise sentdex. forever indebted
Hey sentdex, please keep up with your videos. They are really helpful in so many ways. I'm just starting to get into ML and started studying computer science just because of ML, and your videos are so helpful. Thumbs up to you.
This video is amazing; I am so glad I found your channel. I have tried learning this stuff for quite a while now through other RUclips videos, but nobody could explain it that well.
@Ruben can you share the code?
@RubenUribe via email if possible
Great tutorial, but the way you load the data is not very memory efficient and this will cause problems with large datasets. First the training_data list is written into RAM and afterwards the same amount of memory is reserved when converting into a numpy array. So this approach is only good for datasets < RAM size/2.
Another option would be to create the numpy array at the beginning using np.empty and then write the data as entries into the array. This way the dataset can be as large as your RAM.
If the dataset is larger than the RAM size it is suggested to use a generator that loads and yields the data during training. This way your dataset can be as large as your SSD, but training speed is most likely limited by the read speed of the drive.
Just something I had to deal with during my thesis in the last couple of months. Maybe you could make a tutorial on the generator one, not a lot of people know about this.
Anyways, keep up the good work!
This looks very interesting and I'm experiencing some errors with this as well on my thesis. Can I contact you via email about this?
Sure, can you contact me via YouTube? Or post your email and I will contact you.
@@will1337 delete the comment with your email
@@will1337 Did you fix your problem? My Python returns "MemoryError" when doing the np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 3) step.
I'm doing the colored version (so I have 3 color channels, which is causing me trouble).
@@stewie055 I fixed it by changing the sampling rate of my data. Maybe resize your images? I am not sure how to fix it with image data, sorry.
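The preallocation idea from the thread above can be sketched like this (sizes illustrative; the zero array stands in for an imread + resize result). Because np.empty allocates the full array once and each image is written into it in place, the dataset occupies RAM only once instead of as a list plus an array copy:

```python
import numpy as np

IMG_SIZE = 50
num_samples = 100  # in practice, count the image files on disk first

# Allocate the whole array up front...
X = np.empty((num_samples, IMG_SIZE, IMG_SIZE, 1), dtype=np.uint8)

# ...then fill it entry by entry instead of appending to a list.
for i in range(num_samples):
    img = np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.uint8)  # stand-in for imread + resize
    X[i] = img.reshape(IMG_SIZE, IMG_SIZE, 1)
```

For datasets larger than RAM, the generator approach the comment mentions (yielding batches during training) is the next step up.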
6:50 "the hand of a dog" - it is called a paw! hahaha
Sentdex: understands neural nets
Also Sentdex: doesn't know what to call a dog's paw
More like neural pets lol
I like the teaching style, it's simple to understand
I've been looking for ways to upload 40k images to my Drive for 3 days. You, in one word: perfect.
You can download Google Drive on your machine and sync it with your drive.
Thanks, have been looking forward to this tutorial; it will help with my thesis.
On Windows, if you have Anaconda installed and cannot find the module cv2, you may simply have to do:
pip install opencv-python
The command is the same on Linux.
If you are having trouble here, use the Anaconda Prompt. This is in the Anaconda Navigator, where you start Jupyter. Then simply type in pip install opencv-python; mine at least worked great.
opencv-python is installed, but it cannot find the module cv2
11:54 that's a great imitation of a model trying to learn ^^
1:52 I was taking it seriously till the mug appeared.
This video was 2 years ago! Hey sentdex! THANKS :D
Thanks Snowden, nice tutorial.
Brother, you're amazing. This video has been a huge help. Thanks.
So far, so good! The first dog grayscale image was successfully displayed. I was getting nervous there for a minute! I got confused when you added all that space at the bottom; it threw off my Jupyter notebook. I followed you thereafter, but my output did not print. Well, on to the other videos from other channels. I've got to keep moving on. It was good while it lasted.
I enjoyed this so much. I have a CS degree, but I never had a good time with programming, so I decided to get a job that isn't programming. But since trying to learn pyautogui and selenium from your videos, I was so excited to learn ML, and now here I am... following your Keras tutorial :D
Seriously, you teach better than my professors.
Thanks for teaching us. :)
I like your way of teaching.
Thanks. On Windows, I had problems with your DATADIR="X:/Datasets/PetImages" @2:10. I had my own version of course, but the code said (in effect) my path was not valid, even though it was. I discovered Colab runs a Linux-style OS. There are several methods of doing it; I used the Google Drive mount and zipfile to extract (instead of the Windows File Explorer Extract), ending with DATADIR='/content/drive/MyDrive/datasets/training_set'. I finally got to see the gray dogs!
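The zipfile route described above can be sketched as a small helper (paths are examples; on Colab, zip_path would be the mounted Drive path from the comment):

```python
import os
import tempfile
import zipfile

def extract_dataset(zip_path, dest):
    """Unzip a dataset archive (e.g. one uploaded to Google Drive) to dest."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    return dest
```

On Colab you would first run the drive.mount step, then call something like extract_dataset("/content/drive/MyDrive/datasets/training_set.zip", "/content/datasets") and point DATADIR at the extracted folder.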
Guys, the kaggle dataset that he is referring to no longer has folders named as cats and dogs... there are 2 folders, one is for training and another one is for test. You've gotta loop through images in the training folder and assign the labels using the image name
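A hedged sketch of labeling from filenames for that newer layout, assuming the training images are named like "dog.0.jpg" / "cat.123.jpg" and keeping the video's CATEGORIES order so Dog maps to 0 and Cat to 1:

```python
# Assumption: filenames encode the class as a prefix, e.g. "dog.0.jpg".
CATEGORIES = ["Dog", "Cat"]

def label_from_filename(filename):
    """Return the class index based on the filename prefix."""
    name = filename.lower()
    for class_num, category in enumerate(CATEGORIES):
        if name.startswith(category.lower()):
            return class_num
    raise ValueError("unrecognized filename: " + filename)
```

Inside the loop you would then append [img_array, label_from_filename(img)] instead of using the folder index.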
Great video, looking forward to seeing the next one.
It was really a very useful video... Thank you very much for your timely help.
This was the most helpful video I've found. thank you!
Exactly the video I needed! Thx bro!
finally, I got through this video without any error
Thank you so much; this video helped out so much with an upcoming video of mine.
Thanks @sentdex, just what I was looking for :)
Sir your videos are epic! You are an excellent teacher
Really interesting and objective explanation of the topic! lol'd hard bc of the sudden blue dog
Great tutorial. Looking forward to the next part!
Expect it tomorrow!
Idk why I lmfao when you said “ha, a blue dog” hahahahahah
At 14:00, is there any reason why we don't just do:
training_data = np.array(training_data)
X = training_data.T[0]
y = training_data.T[1]
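For anyone who tries this: it usually misbehaves because each training_data entry pairs a 2-D image with a scalar label, so np.array(training_data) can at best produce a dtype=object array (newer numpy versions even raise for such ragged input). Building X and y separately, as the video does, keeps the dtypes clean; a tiny sketch:

```python
import numpy as np

# Stand-in for the shuffled [image, label] pairs from the video.
IMG_SIZE = 2
training_data = [(np.zeros((IMG_SIZE, IMG_SIZE)), 0),
                 (np.ones((IMG_SIZE, IMG_SIZE)), 1)]

# Split into features and labels, then convert each to a clean numeric array.
X, y = [], []
for features, label in training_data:
    X.append(features)
    y.append(label)
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array(y)
```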
Very detailed and helpful and cute. Thank you
Love getting taught ML by Snowden!
Amazing video man... Looking forward to the next one
I found that for the y labels you have to make them a numpy array, or the model will not take them. Other than that, this is an amazing tutorial.
I'm not sure which ends up being better....the videos or the random (read: dope) coffee mugs you keep pulling out in them ;)
One of the most helpful videos I came across as a beginner. I still have not found anyone discussing how to create our own dataset and label it. I have 5,000 PDFs which I have converted to text, and I am lost now; I don't know what to do from here. Can someone give me a direction?
Waiting for the next video... thanks, man, you are an amazing teacher.
Next one just released :)
Amazing! going through it now
I suggest using context managers for file opening. Cleaner and is better for beginners as you don't have to remember to close the file
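A sketch of the pickle save/load rewritten with context managers, as the comment suggests; the dict stands in for the real X and y arrays, and a temp path is used so the example is self-contained:

```python
import os
import pickle
import tempfile

data = {"X": [[0, 1], [1, 0]], "y": [0, 1]}  # stand-in for the real arrays
path = os.path.join(tempfile.mkdtemp(), "X.pickle")

# "with" closes the file automatically, even if dump/load raises mid-way,
# so there is no pickle_out.close() to forget.
with open(path, "wb") as f:
    pickle.dump(data, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)
```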
Oh thank you! I've been looking for the way to load my own dataset, and here you go! :З
holy shit these videos have helped me, thank you so much dude!
That's an informative video. thank you so much
Thanks you! Awesome video. Looking forward to the next one 👍👍👍
The first video was great, looking forward to watching this one through as well. Can make a video about using CPU vs GPU for some of these training processes? I would like to learn more about forcing the script to use the GPU for running instead of the CPU. For instance some of your older videos (like the Monte Carlo Simulation series) could benefit from this. Thanks!
To use the GPU, you just install the GPU version of TensorFlow. Depending on your OS this is slightly different, but:
Windows: ruclips.net/video/r7-WPbx8VuY/видео.html
Ubuntu: ruclips.net/video/io6Ajf5XkaM/видео.html
Obviously now you use the later version of TF and the correct matching cuDNN and CUDA Toolkit. Currently CUDA Toolkit 9.0 and cuDNN 7.0.
Hey sentdex, thanks for the video! I can't see what you did to reshape the "y" list at the end of the video @ 16:20... Could you please clarify this? Thanks again!
Cool Video as always, but as of TF 1.9 you can use tf.data with Keras to do what you did in here and it will make a much more efficient pipeline for training larger datasets. This will also work for converting to tf.records if you want to change the format. This becomes important when using fast GPUs/TPUs as they no longer are the bottleneck and loading of data into the model is the bottleneck.
I did mention there are methods for larger datasets, and I plan to eventually cover that, but that gets far more complex to do. I find that, for most applications and what 99% of what people are doing with deep learning, they don't need to be concerned with that added complexity, which is why I didn't cover it here in part 2, but will be something to cover later.
sentdex I understand the tf.records being too hard. But tf.data is now very easy to use with Keras and what we are trying to teach people to use going forward. There are very simple examples here www.tensorflow.org/guide/keras under tf.data datasets
I must be looking in the wrong spots then. What I've seen from the data api doesn't look very beginner friendly. I'll poke around more and see what I can find.
If you're using Keras, you should use the flow_from_directory function; it's really the same thing without the hassle of running out of memory trying to load the entire dataset.
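A hedged sketch of that route (requires TensorFlow/Keras installed; the folder layout and sizes are assumptions, mirroring the tutorial's PetImages/Dog and PetImages/Cat structure):

```python
# Sketch only: defines a generator factory, not called here since it needs
# TensorFlow and a real image directory on disk.
def make_train_generator(data_dir="PetImages", img_size=50, batch_size=32):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(rescale=1.0 / 255)  # normalize pixel values
    return datagen.flow_from_directory(
        data_dir,                          # one subfolder per class
        target_size=(img_size, img_size),
        color_mode="grayscale",
        class_mode="binary",               # two classes; "categorical" for more
        batch_size=batch_size,             # batches stream from disk, not RAM
    )
```

Passing the result to model.fit streams batches from disk instead of holding the whole dataset in memory.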
Hi, great work!
I have a question, though, about the "homework challenge":
reshape(-1, IMG_SIZE, IMG_SIZE, 3) pops a ValueError: cannot reshape array of size 239640576 into shape (224,244,3).
What's your opinion and solution?
Thank you
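Without seeing the asker's code this is a guess, but the quoted error itself hints at the cause: the target shape is (224, 244, 3), not (224, 224, 3), so one IMG_SIZE looks mistyped. The array size divides evenly by the intended shape but not by the mistyped one:

```python
# Checking the quoted numbers: reshape only works when the total element
# count divides evenly by the product of the target dimensions.
size = 239640576
images_if_224 = size // (224 * 224 * 3)  # sample count if the shape were correct

assert size % (224 * 224 * 3) == 0   # evenly divisible: reshape would succeed
assert size % (224 * 244 * 3) != 0   # with the 244 typo: reshape must fail
```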
It seems like all the videos and tutorials on this topic only deal with binary situations. Outside of the Keras docs on flowers there is a lack of variety on multiple classification approaches (> 2 classes). I have a feeling that might be where complexity and accuracy dive off a cliff.
This helps me a lot.
Hey man. What should I write instead of the "training_data.append" line if I want a multiclass dataset? Yours has two classes; imagine I have a 5-class dataset.
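A sketch of the multiclass version (the category names here are made up): append the index into CATEGORIES exactly as the video does, just with five entries; you would then pair the integer labels with a loss such as sparse_categorical_crossentropy, or one-hot encode them for categorical_crossentropy.

```python
# Hypothetical 5-class setup; the append line is unchanged, only the
# class index now ranges over 0..4 instead of 0/1.
CATEGORIES = ["cat", "dog", "bird", "fish", "horse"]
training_data = []

def add_sample(features, category_name):
    class_num = CATEGORIES.index(category_name)  # 0..4
    training_data.append([features, class_num])

add_sample([[0, 0], [0, 0]], "bird")  # stand-in image
```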
Sentdex, I didn't understand what the -1 exactly meant when you did the reshape... you glossed over it a little bit. What does it mean exactly? Thanks
I also want to know; could someone explain, please?
Same problem! I've been stuck on that!
I hadn't made it before; I tried to do the reshape afterwards, but it didn't work.
It basically tells numpy the following:
given all the other parameters IMG_SIZE, IMG_SIZE, 1, figure out the remaining dimension (in this case the number of images).
It's an automatic way of writing np.reshape(number_of_samples, img_size, img_size, 1).
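A tiny runnable demo of that inferred dimension: flatten two fake 50x50 grayscale "images" into one array and let -1 recover the sample count.

```python
import numpy as np

IMG_SIZE = 50
flat = np.zeros(2 * IMG_SIZE * IMG_SIZE)       # pretend pixel data for 2 images
X = flat.reshape(-1, IMG_SIZE, IMG_SIZE, 1)    # numpy infers the first axis = 2
```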
Re: changing the X list to an X array @16:35, "X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)". This combines conversion of the list to an np array with reshaping. (a) I cannot find an explanation of this syntax with 4 parameters. Any help? (b) The '-1' has various meanings in array reshape. What does it mean here? (c) edit: removed. (d) The first array in the X list starts as [102 104 62 ... 69 75 83] (50 elements per dim), but the X array starts as [[[[102] [104] [ 62] ... [ 69] [ 75] [ 83]], with one element per innermost dim. Is that correct? (e) The last parameter is 1. Where does that show up in the X array? (f) To simplify these questions for debugging, I used an img size of (3, 2) (width, height), giving an array shape of 2r x 3c, and I processed only two images, skipping the random shuffle.
After changing the code to handle color, the "light" appears. (b) The '-1' tells numpy to infer that dimension (the number of images) from the rest. With color, there are 3 elements in each dim. (e) The last parameter '1', as you mentioned, is for gray images (one value per pixel). When using color images, change this to 3. The first part of the X array then starts as [[[ 43 55 78]. Hope this helps.
Hello, is it necessary to print all the images? It is printing only one dog image; what about the others?
I am doing ORB detection; does it require looping through all images?
Great video! How could I modify this to use multiple categories for classification instead of just single category label?
*sentdex* and *DeepLizard* have both been _VERY_ helpful with teaching me how to program.
Thanks.
plt.imshow(img_array, cmap="gray")
plt.show()
Shouldn't this show all images, not just the first one?
He had "break" in the for loops, so it looped once and then broke out of the loop. That is why "img_array" holds only one image's data.
@@venkuburagaddaacc Okay Thanks :)
@@venkuburagaddaacc ty
I see no one has commented about the inline printing of images/plots in Jupyter Notebook so here it is:
%matplotlib inline
Add this line before or after importing libraries and you will not have to use plt.show() anymore.
What is your opinion on setting an aspect ratio and adding padding during resizing? I just feel like forcing an n x n dimension distorts images too much when we have the varied original resolutions.
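One common answer is letterboxing: pad the short side before resizing so the aspect ratio survives. A numpy-only sketch (in the tutorial's pipeline you would still follow this with cv2.resize to IMG_SIZE x IMG_SIZE):

```python
import numpy as np

def pad_to_square(img):
    """Zero-pad the shorter side so the image becomes square without stretching."""
    h, w = img.shape[:2]
    side = max(h, w)
    top = (side - h) // 2
    left = (side - w) // 2
    pad = ((top, side - h - top), (left, side - w - left))
    pad += ((0, 0),) * (img.ndim - 2)  # leave any channel axis unpadded
    return np.pad(img, pad, mode="constant")
```

Whether the black borders help or hurt depends on the data; for cats vs. dogs the distortion from plain resizing is usually tolerated, but padding is the safer default when aspect ratios vary wildly.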
The playlist is amazing; however, I came across this issue after running part 2 and part 3 back to back:
the y also needs to be an array, so that model.fit in part 3 can run...
thank you once more :)
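The fix described above is a one-liner before model.fit (stand-in labels shown); newer Keras versions reject a plain Python list for the labels:

```python
import numpy as np

y = [1, 0, 1, 1]   # stand-in for the label list built in part 2
y = np.array(y)    # convert once, before passing to model.fit
```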
Great tutorial, thanks. You already know this: you did not run line 42, and that is why, I think, X gave an error there.
Great tutorial! However, I got a question. What if you have an image in multiple categories. So you could be sorting images based on size and colour and you stumble on an image that is red and big.
Can you make a video on how to create a raster (.geotiff) dataset in Python?
Excellent work
Thank you so much SentDex
Can you show us how to make an image generator with our own pictures?
Please make a video on how to load in the IAM dataset.
Sorry, I am new to this topic.
I'm kind of confused about the last part: in the last row, can I say X[1] is the image and y[1] is the label for that image?
By the last row, have we actually already done the training, and by reading X and y does the machine start to predict?
Is that all for the cats and dogs machine learning?
Looking forward to some answers; thanks in advance!
Can't wait for the next video!! Congratulations!!
Great tutorial, but if the images have multiple labels, is the way to load the data the same as for binary classification, or different for multi-label classification?
Great Video !!
14:42 I had to change both X and y into numpy arrays to make it work. y as a list didn't work.
Did your model still work in the end? I'm having this issue now.
Best video from a pro. I loved it, and it helped me a lot to get the basic idea. Please add a tutorial on extracting frames from 100 videos in a folder within different folders. I expect a positive reply from you, pro...
Great video as usual. But you need some silent fans for your computer!! :)
Thanks a lot 🙏🏽🙏🏽 you just saved me
Nice T-Rex mug. For anyone interested, it is the "3-D Shaped T-Rex Dinosaur Design Ceramic Mug" on amazon