This channel is insane. I am paying around $1000 USD for a class, and this free resource is my main learning accessory. Thank you, 1000 times.
My first formal introduction to Keras. Halfway through. Did not think I would make it this far without difficulty. Your narration style is not chatty; it is more like a professional news reader's. But your script anticipates everything and preempts the questions that arise in the mind. That makes it effective. Will continue again tomorrow.
Well, there are quite a few like them on RUclips that provide our generation the best of their content, explaining every aspect of computer science.
Rather, we should at least give them some appreciation.
The internet is expanding tremendously, and these are our heroes who make the internet a golden source of knowledge.
@@dhananjaygola4786 well, to expand on your point, you should name a few of them.
@@Nootey33 don't worry, just trust the YouTube recommendation AI and soon you'll find them
@@dhananjaygola4786 what are the other channels pls?
You are absolutely right on! This is a great presentation.
26:16, the parameter "lr" was renamed to "learning_rate" in version 2.3.0 (September 2019)
This was an excellent instruction set. Really appreciate all the work on it.
Thank you, Paul!
@Paul Mcwhorter So good to see u here sir! I'm currently doing ur Arduino lessons and that's really amazing
@@zaief7016 Thanks, yep, I am always trying to learn as well.
:)
All I can think of is laying with her in the bed behind her
Feedback on video production: when the discussion is about the last few lines on a screen, it is difficult to watch the screen when it is overlapped by the subtitles. You are capturing the screen showing Jupyter and not using slides, so it should be easy for you to scroll the Jupyter notebook to make the line under discussion appear in the middle.
One of the best videos on Keras for beginners like me.
Near-zero computer programming training. Just admiring from the nosebleed seats the code that goes into making the fundamental building blocks of neural networks, similar to how our own human brains function. Recently read Max Tegmark's Life 3.0 book on AGI, which has piqued my interest in deep learning.
Great book, dude! Read it at the beginning of this year. I would highly recommend Nick Bostrom's Superintelligence, which I personally think gives even better insights into artificial intelligence.
Glad I'm being taught by Gal Gadot. ☺️👍
Me too !!
I saw abella danger
Definitely that kind of Pokemon for sure, Gal with the makeup on it's hard to tell, but no makeup they are the spitting image of one another, forget being a programmer, she should be a body double
@@code-to-design bruh
Freckles, just saying. TBH it's awesome to have an instructor who is an attractive gal. Good to see the stereotype of good-looking ladies being bimbos is really being proven foolish. Though like 75% of tech jobs are still held by guys, so if you learn all this and get a great job it won't feature as many as it should, which is the personification of :(
This was the answer to my prayer. I know linear algebra and Python. I just needed some specific code examples with comments. Thank you!!!
I am not a programmer/coder. I found this video very soothing and inspiring simply the vision it infused in creative aspects. I sat 6 years in the hole so the slightest intellectual understanding blows up into wisest yet connected set for output variable. I am a mystic, someone needs to code what I see.
I think it's better to learn to program.
You need to see a pychiatrist
You're the chosen one padawan.
Great material, just one remark at 14:40: according to your problem definition the sample sizes should be 50 and 950, hence the second loop should be for i in range(950).
Yeah, i think it is a mistake.
It's "~"95%, not 95%
There are 2100 patients.
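Pulling the thread together, here is a sketch of the corrected generation step the replies are debating. The variable names follow the course; the exact randint age ranges are my assumption:

```python
from random import randint

train_samples = []
train_labels = []

for i in range(50):
    # ~5% of younger individuals who DID experience side effects
    train_samples.append(randint(13, 64))
    train_labels.append(1)
    # ~5% of older individuals who did NOT experience side effects
    train_samples.append(randint(65, 100))
    train_labels.append(0)

for i in range(950):  # the video used range(1000); 950 gives an exact 95/5 split
    # ~95% of younger individuals who did not experience side effects
    train_samples.append(randint(13, 64))
    train_labels.append(0)
    # ~95% of older individuals who did experience side effects
    train_samples.append(randint(65, 100))
    train_labels.append(1)
```

Either way the split is only approximate in spirit: with range(1000) the minority fraction is 50/1050, roughly 4.8%, which still matches the "~5%" in the problem statement, hence the replies above.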
Mandy, you are a very gifted instructor. There have been hundreds of instructors in my life, and you are the BEST!
Love the course! Hate the constant ads, YouTube
Your multi-API approach (plus CPU/GPU heads up) was indeed a major factor to consider while choosing a source of insight into Keras. Thank you for the thorough and very well presented content.
Great to hear. You're welcome Filipe!
Since my laptop did not have a GPU, it threw an error, so I added an if statement in case the length of the device array was zero. I was a little green with envy when the vgg16 ran 8 to 10 seconds per epoch on your system, but my laptop with an i7, 24 GB of RAM, and a 1 TB SSD took a whopping 850 seconds or so per epoch.
I suspect it's running single-threaded, and I vaguely remember something about an option to let this CPU use 8 threads instead. I remember doing something like that in Octave or MATLAB in Andrew Ng's first course or Geoffrey Hinton's course.
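For anyone wanting to reproduce the guard described above, a minimal sketch (the set_memory_growth call mirrors the course's GPU setup; the helper name is mine):

```python
def has_gpu(device_list):
    """True if TensorFlow reported at least one physical GPU."""
    return len(device_list) > 0

try:
    import tensorflow as tf
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if has_gpu(gpus):
        # Same call the course uses to avoid up-front GPU memory allocation
        tf.config.experimental.set_memory_growth(gpus[0], True)
    else:
        print("No GPU found; training will run on the CPU")
except ImportError:
    print("TensorFlow is not installed")
```

With the guard in place, the notebook runs on CPU-only machines instead of raising an index error on the empty device list.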
I cant describe in words how much this video helped me with my research project! You are a great teacher, Thankyou so much!
hey what was your research project?
This is the first long tutorial I watched from start to finish and it says a lot! Thank you very much!!!
I enjoyed watching this tutorial, I ended up finishing the whole video without realizing it, thanks!
1:03:45 I think all the photos were meant to be directly under the dogs-vs-cats folder, as per the later demonstration. Because at 1:08:33, all the remaining photos were directly under the dogs-vs-cats folder, not dogs-vs-cats/train.
did u get past the for-loops part? at 1:08:14
I've been stuck on it for ages.
The error that is occurring is a ValueError.
@@54M1WUL same
Hope you can see it. I renamed the folder train containing all the pics to trains, and then updated the code as below (imports added); it works on Mac.
import os, glob, random, shutil

os.chdir('trains')
for i in random.sample(glob.glob('cat*'), 500):
    shutil.move(i, '../train/cat')
for i in random.sample(glob.glob('dog*'), 500):
    shutil.move(i, '../train/dog')
for i in random.sample(glob.glob('cat*'), 100):
    shutil.move(i, '../valid/cat')
for i in random.sample(glob.glob('dog*'), 100):
    shutil.move(i, '../valid/dog')
for i in random.sample(glob.glob('cat*'), 50):
    shutil.move(i, '../test/cat')
for i in random.sample(glob.glob('dog*'), 50):
    shutil.move(i, '../test/dog')
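The six near-identical loops above can also be collapsed into one helper; a sketch under the same directory layout (the demo files below are dummies standing in for the real images):

```python
import glob
import os
import random
import shutil
import tempfile

def move_random(pattern, n, dest):
    """Move n randomly chosen files matching `pattern` into `dest`."""
    os.makedirs(dest, exist_ok=True)
    for path in random.sample(glob.glob(pattern), n):
        shutil.move(path, dest)

# Tiny self-contained demo: 10 dummy cat files, move 3 into train/cat
os.chdir(tempfile.mkdtemp())
for i in range(10):
    open(f'cat.{i}.jpg', 'w').close()

move_random('cat*', 3, 'train/cat')
moved = len(os.listdir('train/cat'))  # 3
```

In the real dataset you would call it six times, e.g. move_random('cat*', 500, '../train/cat'), and so on for the other splits.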
WHOA ... this instructor is sooo smart!
One of the best videos on Keras Deep Learning. Thanks for your wonderful teaching.
2:26:42 You need to add one more layer before the Dense output with 10 neurons, since the chosen modified layer is not working as of today (22nd July 2022). You can add it as x = tf.keras.layers.Flatten()(x). Hope this helps.
With this line the number of non-trainable parameters is the same as in this course, but the total number of parameters and the number of trainable parameters both increase. To get exactly the same result as in the course, I modified that part of the code like this:
x = mobNet.layers[-6].output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
output = Dense(units=10, activation='softmax')(x)
I don't know which is best but if anyone wants to follow this course religiously, that works (as of 27th July 2022).
@@robertoprestigiacomo253 works like a charm!
@@profsrmq the funny part is that I haven't worked in Keras since July and now I have absolutely no clue about the thing I wrote and can't even remember how I came up with it 😅
yes exactly :
x = mobile.layers[-5].output
x = tf.keras.layers.Flatten()(x)
output = Dense(units=10, activation='softmax')(x)
In the first data generation step, I believe the second loop ( ~95% of young people who did not experience side effects) should be range(50,1000) instead of range(1000)
You are so incredibly easy to listen to for hours on end, very well done I look forward to learning a bunch more from these videos
This is a great tutorial for Keras image classification. Can you do a similar one for object detection using Keras? That would be very helpful.
i clicked because she was pretty, now i know about keras
I clicked because i wanted to watch you comment
Dear Lady DeepLizard,
Thank you so much for the energy, time, and thought you've put into this course! I have benefited a lot from your channel.
Thank you Mandy. This was a great tutorial or insight given on deep learning. This is surely the best one I have seen on RUclips. Thanks a lot again for your efforts 😊
Glad to hear that, Palash :)
Thank you Deeplizard and freeCodeCamp. It's a great tutorial and a good video. I have been learning ML and DL, having started out recently. However, it's only after seeing this video that I feel I have the confidence to carry out something on my own. Thank you to the Keras and TF2 team as well.
absolute gem!! Way better than all those paid courses.
This is excellent. I'm glad I spent a lot of time learning about machine learning and deep learning theory before I started this so I understand basically what is going on, and this is a super simple API. I think I'll use keras primarily, but I think I'll also learn tensorflow more thoroughly just in case.
no one gives a shii
Finally, the wait ended. Thanks guys. Lots of love!
I am an Italian guy, and that is not an espresso; it is a cappuccino :D Watching that part of the video was a stab to my heart :D
Espresso is the Italian coffee made by a bar's espresso machine.
In the newer version of the MobileNet model, the GlobalAveragePooling2D layer is the -5th layer (not -6), and it also has a different output shape (None, 1, 1, 1024).
I used a Reshape layer, and now it works even better than in the video:
x = mobile.layers[-5].output
y = keras.layers.Reshape((1024,))(x)
output = Dense(units=10, activation='softmax')(y)
i cant thank you enough for this video. God bless you.
Mandyy! You came and you gave without taking!!
Thank you for this nice video; it really got me started. But I think there is one bad habit involved: "Never normalize your test data on its own but rather with the train data!" (This is what you read in many forums and, for example, in Chollet's "Deep Learning with Python" book.) I think at 43:24 it should be scaler.transform(test_samples.reshape(-1,1)), as this uses the scaler already fitted on train_samples. Correct me if I am wrong :)
You're right 100%
That's correct
Also noticed, though in this example it doesn't matter as ranges of features are the same for test and train datasets
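A minimal sketch of the fix being discussed, with hypothetical stand-in arrays for the course's age data (fit the scaler on the training data only, then merely transform the test data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical stand-ins for the course's train/test age arrays
train_samples = np.array([13, 25, 40, 64, 80, 100])
test_samples = np.array([20, 70])

scaler = MinMaxScaler(feature_range=(0, 1))
# Fit on the training data only...
scaled_train = scaler.fit_transform(train_samples.reshape(-1, 1))
# ...then reuse those train-fitted statistics for the test data
scaled_test = scaler.transform(test_samples.reshape(-1, 1))
```

Calling fit_transform on the test set instead would recompute min/max from the test data, silently scaling it on a different basis than the model was trained on.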
*For my own reference*
⭐️🦎 COURSE CONTENTS 🦎⭐️
⌨️ (00:00:00) Welcome to this course
⌨️ (00:00:16) Keras Course Introduction
⌨️ (00:00:50) Course Prerequisites
⌨️ (00:01:33) DEEPLIZARD Deep Learning Path
⌨️ (00:01:45) Course Resources
⌨️ (00:02:30) About Keras
⌨️ (00:06:41) Keras with TensorFlow - Data Processing for Neural Network Training
⌨️ (00:18:39) Create an Artificial Neural Network with TensorFlow's Keras API
⌨️ (00:24:36) Train an Artificial Neural Network with TensorFlow's Keras API
⌨️ (00:30:07) Build a Validation Set With TensorFlow's Keras API
⌨️ (00:39:28) Neural Network Predictions with TensorFlow's Keras API
⌨️ (00:47:48) Create a Confusion Matrix for Neural Network Predictions
⌨️ (00:52:29) Save and Load a Model with TensorFlow's Keras API
⌨️ (01:01:25) Image Preparation for CNNs with TensorFlow's Keras API
⌨️ (01:19:22) Build and Train a CNN with TensorFlow's Keras API
⌨️ (01:28:42) CNN Predictions with TensorFlow's Keras API
⌨️ (01:37:05) Build a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:48:19) Train a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:52:39) Predict with a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:57:50) MobileNet Image Classification with TensorFlow's Keras API
⌨️ (02:11:18) Process Images for Fine-Tuned MobileNet with TensorFlow's Keras API
⌨️ (02:24:24) Fine-Tuning MobileNet on Custom Data Set with TensorFlow's Keras API
⌨️ (02:38:59) Data Augmentation with TensorFlow's Keras API
⌨️ (02:47:24) Collective Intelligence and the DEEPLIZARD HIVEMIND
Brilliant illustrations and the best DL material I have seen. Thank you, Andy.
*Mandy
Amazing tutorial! I thought about improving the custom CNN model, and I got it up to 0.8993 val_accuracy. My model:
model = Sequential([
    Conv2D(filters=8, kernel_size=(3,3), activation='relu', padding='same', input_shape=(224, 224, 3)),
    MaxPool2D(pool_size=(2,2), strides=2),
    Conv2D(filters=16, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2,2), strides=2),
    Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2,2), strides=2),
    Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2,2), strides=2),
    Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2,2), strides=2),
    Conv2D(filters=256, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2,2), strides=2),
    Flatten(),
    Dense(units=2048, activation='relu'),
    Dense(units=2, activation='softmax')
])
I also changed the Adam learning rate from 0.0001 to 0.001 (the default value) and the epochs to 30, and lastly I used all of the included 25,000 pictures (9,617 per animal for training, 1,922 per animal for validation, and 961 per animal for testing).
imgur.com/a/DprZDhl
try adding some dropout layers, i think it will help with the overfitting.
Great course, I've been following this for the last week. Well organised and presented..
Fall in love with u for ur tutorials deeplizard 🦎 ... super cool. Hope that learning would be continue... Thanks a lot👩🏫
I thought it was a new Hotel Trivago advertisement.
IKR - I was waiting for Captain Obvious to show up!
Lol I guess it's what happens when you film courses from your Airbnb 🤷♀️😂
@@deeplizard 😅 nice video though
reporting bias.
Ahahahah
She is really good at teaching
Finished this. Liked it very much. Going to your website to find more.
Such a detailed and amazingly designed course. Covered every question I had in mind!.
Very good intro example, easy to setup problem can be tweaked to explore more and doesn't require pictures or strange formats and other downloads.
Great work, Mandy. I really enjoyed your video. I noticed though that "classes = cm_plot_labels" wasn't defined in the video; hence, my plot of the confusion matrix was somewhat different. I would be glad if you defined it. Thank you.
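For reference, the plotting function used in the course is adapted from a scikit-learn docs example; here is a self-contained sketch with hypothetical counts, showing where cm_plot_labels is passed in as classes:

```python
import itertools
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
    """Plot `cm` with `classes` (e.g. cm_plot_labels) as the axis tick labels."""
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j], horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

cm_plot_labels = ['no_side_effects', 'had_side_effects']
cm = np.array([[190, 10], [5, 215]])  # hypothetical prediction counts
plot_confusion_matrix(cm=cm, classes=cm_plot_labels)
```

So "classes" is just the parameter name, and cm_plot_labels is the list of label strings you define yourself for your own problem.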
Super good production! Thank you!
1:16 I am sure I missed something, still... where do those labels come from? How did it distinguish cats and dogs?
Thanks for a well prepared, well organized, professional presentation. GREATLY appreciated
Excellent Course to understand the documentation and practice DL
Perfect, code explanations were clear and straightforward
Thanks, finally someone with a normal English accent so to speak
You're very talented. Thanks for your well-explained instructions.
TensorFlow + Keras technically isn't "from scratch" anymore. They apply many abstraction and functional layers that calculate things for you without exposing the true nature of a NN.
Then how would you suggest?
@Nick then how would you suggest?
Frameworks
@Nick your argument is not even valid. You just exaggerated my point and blew it out of proportion to make it seem like you are right, which is false. What I mean by making neural networks from scratch is something that exposes the true math behind it, declared that way: all the functions that declare each layer, and then define gradient descent, backpropagation, and so on.
Importing Keras models and TensorFlow functions and passing arguments into them isn't exposing the true nature of a NN and thus is not from scratch anymore. You only prove you are absolutely ignorant at this point.
2:46:15 your cam got augmented too (and flipped upwards) :D
Nonetheless awesome course!
I think there is a mistake at 15:27. If it is 95% of the population, the code should be 'for i in range(950)'.
You shouldn't call fit_transform on test_samples at 43:20. You should use the same scaler that was fitted on train_samples.
yea, you only need to transform it
2:29 as of today (05/05/2023) you need this:
x = mobile.layers[-5].output
x = tf.keras.layers.Flatten()(x)
output = Dense(units=10, activation='softmax')(x)
thank you man, needed that, do you have any explanation for that?
This tutorial amazingly helps me. Thanks!
How do I verify that the indices actually correspond to the labels? What if my labels are 3 and 10, for example? How can I be sure that index 0 does not correspond to label 1, and vice versa?
Loved your video Mandy and need more content from you. Great explanation.
Thank you very much, Miss Mandy... what a tidy room! Greetings from Peru.
I love you Mandy
This course was awesome
Thank you Mandy, It was a great video with such a fantastic explanation!!!
You’re a great teacher! This was perfect, learned a lot in a short time.
Excellent tutorial. Thank you for the effort. The only drawback is that I get sleepy when I look at your background :-)
Many thanks for all the videos! I totally love them.
I have a question: if I want to detect an object within a pic, how would I need to prepare the dataset? I already have the labels of the objects and a bunch of photos, plus the train, valid, and test datasets.
Running huge blocks of code, like 20 lines of imports and 20-30 lines of folder creation, was a huge inconvenience for me. Those huge blocks of code should have been freely available to people watching this video.
Thank you for this very good tutorial. You are a great teacher and very pleasant to listen to. I learned a ton.
Incredible video! Very well taught with clear explanations for all the different concepts. This has allowed me to put my first foot through the door to understand Keras/TensorFlow!
Most beautiful teacher <3
great now i should learn semantic segmentation
Not only was I not distracted by her beautifulness, I was actually able to understand everything she said. Thank you!
2:09:10 - Espresso is just the coffee; it becomes a cappuccino when you add milk and foam. If it's getting classified as espresso, then I wonder if the original dataset labels were added incorrectly by human editors who didn't know the difference.
This pic has the classic look of a cappuccino. I too was thinking about some mislabeling in the dataset. Note, espresso is both the brewed-coffee ingredient in combination drinks like cappuccino, latte, and cortado, and a drink in itself.
Fantastic presentation! Thanks for sharing!!
If the probability of a given older person experiencing side effects is 95% (and respectively, of a younger person not experiencing side effects, 95%), I would think that the model's accuracy cannot possibly be higher than 95%. Is this right?
Because for a random person picked from the dataset, you can only predict the outcome with 95% certainty. This would be reflected in the model's accuracy of 94%.
yosh, you're overthinking it. If the model accurately predicts all of those who have gotten sick, it is 100% accurate. It does not depend on how accurately the model predicts in relationship to the demographic of the dataset - only the corresponding labels. Also, the dataset is more than just the "old people" we created; if the model accurately classifies 100% of the old people and 97% of young people, we should get around 98.5% accuracy.
@@adamgulamhusein8768 yes you are right. What I wrote then makes no sense lol
Thank you for such a good explanation.
So cool to learn data science from Ann Perkins!
Clear communicator. Interesting lessons. Good vid
This video deserves 1 million views.
Now gal gadot teaches you keras for free
Lol
thanks abella danger, i learned a lot :)
🤣
Damn!!!
😭😭😭 JAILLLLL
😅 damnn i thought i was the only one who saw that. She looks like a prettier Abella Danger
Lmao bri
At 2:09:04 in the video, the picture 2.png is identified as espresso and not cappuccino. This may be because ImageNet does not have a class named cappuccino.
This is an amazing presentation. Mandy articulates the technical information in a manner I can understand. Thank you ❤
Thank You So Much for this in-depth hands on deep learning experience
When should we use the crossentropy metric and when accuracy to optimize our model?
There is a small mistake in the first example: the range representing 95% of individuals should be `for i in range(950)`.
At 2:09, I think the model did a deep learning to see the espresso layer under the cappuccino top.
Thank you. It is so informative. Please make a video on how to implement facial expression recognition using deep learning.
In the chapter "Image Preparation for CNNs with TensorFlow's Keras API", when and how were the labels for the images defined in the code?
My guess is that it was during the calls to ImageDataGenerator().flow_from_directory(), with the machine matching the passed classes to the names of either the folders or the image files. But even if I'm right, I think that should have been addressed, even briefly, especially since if we want to follow these steps for our own data, we'll need to know how to tell the machine which images are in which class.
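For what it's worth, flow_from_directory does infer each image's label from its parent folder's name (which is why the classes argument must match the subfolder names). A stdlib-only sketch of that mapping, using a hypothetical miniature of the course's train/ layout:

```python
import os
import tempfile

def labels_from_directory(root):
    """Mimic flow_from_directory's labeling: each subfolder of `root` is a
    class, and every file inside it gets that folder's name as its label."""
    mapping = {}
    for class_name in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_name)
        if os.path.isdir(class_dir):
            for fname in os.listdir(class_dir):
                mapping[fname] = class_name
    return mapping

# Hypothetical miniature of the course's train/ layout
root = tempfile.mkdtemp()
for cls, fname in [('cat', 'cat.1.jpg'), ('dog', 'dog.1.jpg')]:
    os.makedirs(os.path.join(root, cls), exist_ok=True)
    open(os.path.join(root, cls, fname), 'w').close()

labels = labels_from_directory(root)  # {'cat.1.jpg': 'cat', 'dog.1.jpg': 'dog'}
```

So for your own data, you tell the machine which images are in which class simply by sorting them into one subfolder per class.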
You have such a nice voice. Flattened by your way of teaching.
You are right about the part "Deep Learning"
You are one of a kind. Thanks so much, Mandy ❤️
Awesome video. One question: when you show the architecture of the model, is the last number in parentheses the number of neurons in each layer?
the best video ever, easy to understand. Great thanks!
Can't install sklearn on Apple Silicon M1 Macs! Any solution? Thank you in advance.
1:32:10 - I don't get what she means; maybe it didn't come out well. She says we don't want to shuffle the test set in order to plot the confusion matrix, but from what she says it seems the problem is that we wouldn't have a 1-to-1 match between labels and samples, which is not true. So why does the confusion matrix require the set to be unshuffled?
I am just joining you.
A very nice and clear explanation. Thank you, Mandy.