This channel is insane. I am paying around $1,000 USD for a class, and this free resource is my main learning accessory. Thank you. 1000 times.
What a fool you are then, but you don't have to make it public, do you?
This was an excellent instruction set. Really appreciate all the work on it.
Thank you, Paul!
@Paul Mcwhorter So good to see you here, sir! I'm currently doing your Arduino lessons and that's really amazing.
@@zaief7016 Thanks, yep, I am always trying to learn as well.
:)
All I can think of is laying with her in the bed behind her
26:16 - the parameter lr was renamed to learning_rate in Keras 2.3.0 (September 2019), so older code uses one name and newer code the other.
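For anyone hitting an unexpected-keyword-argument error at this point, here is a minimal sketch of the compile call. The loss and metrics follow the lesson; the Adam import path and the assumption that the Sequential model from the previous lesson is named model are mine, so adjust to your setup:
from tensorflow.keras.optimizers import Adam

# model: the Sequential model built in the previous lesson
# Newer Keras versions use learning_rate; very old ones expect lr instead
model.compile(optimizer=Adam(learning_rate=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])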
*For my own reference*
⭐️🦎 COURSE CONTENTS 🦎⭐️
⌨️ (00:00:00) Welcome to this course
⌨️ (00:00:16) Keras Course Introduction
⌨️ (00:00:50) Course Prerequisites
⌨️ (00:01:33) DEEPLIZARD Deep Learning Path
⌨️ (00:01:45) Course Resources
⌨️ (00:02:30) About Keras
⌨️ (00:06:41) Keras with TensorFlow - Data Processing for Neural Network Training
⌨️ (00:18:39) Create an Artificial Neural Network with TensorFlow's Keras API
⌨️ (00:24:36) Train an Artificial Neural Network with TensorFlow's Keras API
⌨️ (00:30:07) Build a Validation Set With TensorFlow's Keras API
⌨️ (00:39:28) Neural Network Predictions with TensorFlow's Keras API
⌨️ (00:47:48) Create a Confusion Matrix for Neural Network Predictions
⌨️ (00:52:29) Save and Load a Model with TensorFlow's Keras API
⌨️ (01:01:25) Image Preparation for CNNs with TensorFlow's Keras API
⌨️ (01:19:22) Build and Train a CNN with TensorFlow's Keras API
⌨️ (01:28:42) CNN Predictions with TensorFlow's Keras API
⌨️ (01:37:05) Build a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:48:19) Train a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:52:39) Predict with a Fine-Tuned Neural Network with TensorFlow's Keras API
⌨️ (01:57:50) MobileNet Image Classification with TensorFlow's Keras API
⌨️ (02:11:18) Process Images for Fine-Tuned MobileNet with TensorFlow's Keras API
⌨️ (02:24:24) Fine-Tuning MobileNet on Custom Data Set with TensorFlow's Keras API
⌨️ (02:38:59) Data Augmentation with TensorFlow's Keras API
⌨️ (02:47:24) Collective Intelligence and the DEEPLIZARD HIVEMIND
My first formal introduction to Keras. Halfway through. Did not think I would make it this far without difficulty. Your narration style is not chatty; it is more like a professional news reader's. But your script anticipates everything and preempts the questions that arise in the mind. That makes it effective. Will continue again tomorrow.
Well, there are quite a few like them on YouTube that provide our generation the best of their content, explaining every aspect of computer science.
We should at least give them some appreciation.
The internet is expanding tremendously, and these are the heroes who make the internet a golden source of knowledge.
@@dhananjaygola4786 well, to expand on your point, you should name a few of them.
@@Nootey33 don't worry just believe in youtube recommendation Ai & soon you'll find them
@@dhananjaygola4786 what are the other channels pls?
You are absolutely right on! This is a great presentation.
Thank you Mandy. This was a great tutorial and insight into deep learning. This is surely the best one I have seen on YouTube. Thanks a lot again for your efforts 😊
Glad to hear that, Palash :)
One of the best videos on Keras for beginners like me.
2:26:42 You need to add one more layer before the Dense output with 10 neurons, since the layer chosen in the video is not working as of today (22 July 2022). You can add it as x = tf.keras.layers.Flatten()(x). Hope this helps.
With this line the number of non-trainable parameters is the same as in the course, but the total number of parameters and the number of trainable parameters both increase. To get exactly the same result as in the course I modified that part of the code like this:
x = mobNet.layers[-6].output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
output = Dense(units=10, activation='softmax')(x)
I don't know which is best but if anyone wants to follow this course religiously, that works (as of 27th July 2022).
@@robertoprestigiacomo253 works like a charm!
@@profsrmq the funny part is that I haven't worked in Keras since July and now I have absolutely no clue about the thing I wrote and can't even remember how I came up with it 😅
yes exactly :
x = mobile.layers[-5].output
x = tf.keras.layers.Flatten()(x)
output = Dense(units=10, activation='softmax')(x)
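To put the two suggestions above in context, here is a minimal sketch of the whole fine-tuning head discussed in this section. The layer index (-5 or -6) and the number of layers to freeze depend on your TensorFlow/Keras version, so treat those numbers as placeholders and verify them against model.summary() on your install:
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

mobile = tf.keras.applications.mobilenet.MobileNet()

# Grab the output of one of the last layers, flatten or pool it, and attach a new 10-class head
x = mobile.layers[-5].output          # index varies across TF/Keras versions
x = tf.keras.layers.Flatten()(x)      # or GlobalAveragePooling2D(), as suggested above
output = Dense(units=10, activation='softmax')(x)

model = Model(inputs=mobile.input, outputs=output)

# Freeze all but the last handful of layers before compiling and training
for layer in model.layers[:-23]:
    layer.trainable = False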
Thank you Deeplizard and Free Code Camp. Its a great tutorial and a good video. I have been learning ML and DL, started out recently. However, its only after seeing this video that I know that I think I have the confidence to carry out something on my own, now. Thank you to the Keras and TF2 team as well.
Finally, the wait ended. Thanks guys. Lots of love!
Mandy, you are a very gifted instructor. There have been hundreds of instructors in my life and you are the BEST!
Dear Lady DeepLizard,
Thank you so much for the energy, time, and thought you've put into this course! I have benefited a lot from your channel.
This is the first long tutorial I watched from start to finish and it says a lot! Thank you very much!!!
You are so incredibly easy to listen to for hours on end, very well done. I look forward to learning a bunch more from these videos.
Great material, just one remark at 14:40: according to your problem definition the sample sizes should be 50 and 950, hence the second loop should be for i in range(950).
Yeah, I think it is a mistake.
It's ~95%, not exactly 95%.
There are 2100 patients.
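For reference, a minimal sketch of the corrected data-generation step being discussed. The age ranges and labels follow the lesson's setup; the 50/950 split is the fix suggested above, which gives 2000 samples in total instead of 2100:
from random import randint

train_samples = []
train_labels = []

# ~5% outliers: 50 younger patients with side effects, 50 older patients without
for i in range(50):
    train_samples.append(randint(13, 64)); train_labels.append(1)
    train_samples.append(randint(65, 100)); train_labels.append(0)

# the remaining ~95%: 950 younger patients without side effects, 950 older patients with them
for i in range(950):
    train_samples.append(randint(13, 64)); train_labels.append(0)
    train_samples.append(randint(65, 100)); train_labels.append(1)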
1:03:45 I think all the photos were meant to be directly under the dogs-vs-cats folder, as per the later demonstration. Because at 1:08:33, all the remaining photos were directly under the dogs-vs-cats folder, not dogs-vs-cats/train.
Did you get past the for-loops part at 1:08:14?
I've been stuck on it for ages.
The error that is occurring is a ValueError.
@@54M1WUL same
Hope you can see this. I renamed the folder train (the one containing all the pics) to trains, and then updated the code as below; it works on Mac.
import os, glob, random, shutil

# assumes the train/cat, train/dog, valid/... and test/... folders already exist
os.chdir('trains')
for i in random.sample(glob.glob('cat*'), 500):
    shutil.move(i, '../train/cat')
for i in random.sample(glob.glob('dog*'), 500):
    shutil.move(i, '../train/dog')
for i in random.sample(glob.glob('cat*'), 100):
    shutil.move(i, '../valid/cat')
for i in random.sample(glob.glob('dog*'), 100):
    shutil.move(i, '../valid/dog')
for i in random.sample(glob.glob('cat*'), 50):
    shutil.move(i, '../test/cat')
for i in random.sample(glob.glob('dog*'), 50):
    shutil.move(i, '../test/dog')
I can't describe in words how much this video helped me with my research project! You are a great teacher. Thank you so much!
hey what was your research project?
I enjoyed watching this tutorial, I ended up finishing the whole video without realizing it, thanks!
Mandyy! You came and you gave without taking!!
Your multi-API approach (plus CPU/GPU heads up) was indeed a major factor to consider while choosing a source of insight into Keras. Thank you for the thorough and very well presented content.
Great to hear. You're welcome Filipe!
Since my laptop did not have a GPU, it threw an error, so I added an if statement in case the length of the device array was zero. I was a little green with envy when the VGG16 model ran 8 to 10 seconds per epoch on your system, but my laptop with an i7, 24 GB of RAM, and a 1 TB SSD took a whopping 850 seconds, more or less, per epoch.
I suspect it's running single-threaded, and I vaguely remember something about an option to open this CPU up to using 8 threads instead. I remember doing something like that in Octave or MATLAB in Andrew Ng's first course or Geoffrey Hinton's course.
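For anyone in the same boat, a minimal sketch of guarding the GPU memory-growth call and nudging TensorFlow's CPU thread settings. The thread counts are just an example, the threading calls must run before any other TensorFlow work, and whether this actually speeds things up depends on your build and hardware:
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Only configure memory growth when a GPU is actually present
    tf.config.experimental.set_memory_growth(gpus[0], True)
else:
    # Ask TF to use more CPU threads within and across ops
    tf.config.threading.set_intra_op_parallelism_threads(8)
    tf.config.threading.set_inter_op_parallelism_threads(2)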
One of the best videos on Keras Deep Learning. Thanks for your wonderful teaching.
This is excellent. I'm glad I spent a lot of time learning about machine learning and deep learning theory before I started this so I understand basically what is going on, and this is a super simple API. I think I'll use keras primarily, but I think I'll also learn tensorflow more thoroughly just in case.
no one gives a shii
Glad I'm being taught by Gal Gadot. ☺️👍
Me too !!
I saw abella danger
Definitely that kind of Pokemon for sure. Gal with the makeup on, it's hard to tell, but with no makeup they are the spitting image of one another. Forget being a programmer, she should be a body double.
@@code-to-design bruh
Freckles, just saying. Tbh it's awesome to have an instructor who is an attractive gal. Good to see the stereotype of good-looking ladies being bimbos is really being proven foolish. Though like 75% of tech jobs are still held by guys, so if you learn all this and get a great job it won't feature as many as it should, which is the personification of :(
Such a detailed and amazingly designed course. Covered every question I had in mind!
You’re a great teacher! This was perfect, learned a lot in a short time.
Loved your video Mandy and need more content from you. Great explanation.
Incredible video! Very well taught with clear explanations for all the different concepts. This has allowed me to put my first foot through the door to understand Keras/TensorFlow!
absolute gem!! Way better than all those paid courses.
This was the answer to my prayer. I know linear algebra and Python. I just needed some specific code examples with comments. Thank you!!!
Thanks for a well prepared, well organized, professional presentation. GREATLY appreciated
Brilliant illustrations and the best DL material I have seen. Thank you, Andy.
*Mandy
Near-zero computer programming training. Just admiring from the nosebleed seats the code that goes into making the fundamental building blocks of neural networks, similar to how our own human brains function. Recently read Max Tegmark's Life 3.0 book on AGI, which has piqued my interest in deep learning.
Great book, dude! Read it at the beginning of this year. I would highly recommend Nick Bostrom's Superintelligence, which I personally think gives even better insights into artificial intelligence.
Insane, thanks Mandy for your great explanation. Also really loved the background scenery of the video.
Great course, I've been following this for the last week. Well organised and presented.
I am not a programmer/coder. I found this video very soothing and inspiring simply the vision it infused in creative aspects. I sat 6 years in the hole so the slightest intellectual understanding blows up into wisest yet connected set for output variable. I am a mystic, someone needs to code what I see.
I think it's better to learn to program.
You need to see a pychiatrist
You're the chosen one padawan.
Thank you for this nice video, it really got me started. But I think there is one bad habit involved: "Never normalize your test data on its own, but rather on the train data!" (This is what you read in many forums and, for example, in Chollet's "Deep Learning with Python" book.) I think at 43:24 it should be scaler.transform(test_samples.reshape(-1,1)), as this uses the scaler fitted on train_samples. Correct me if I am wrong :)
You're right 100%
That's correct
Also noticed this, though in this example it doesn't matter, as the ranges of the features are the same for the test and train datasets.
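For anyone wanting the concrete pattern, a minimal sketch of fitting the scaler on the training samples only and reusing it for the test samples (variable names follow the lesson, and it assumes train_samples and test_samples are NumPy arrays as in the video):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))

# Fit on the training data only...
scaled_train_samples = scaler.fit_transform(train_samples.reshape(-1, 1))
# ...then reuse the same fitted scaler for the test data
scaled_test_samples = scaler.transform(test_samples.reshape(-1, 1))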
Fell in love with your tutorials, deeplizard 🦎... super cool. Hope the learning will continue... Thanks a lot 👩‍🏫
WHOA ... this instructor is sooo smart!
This is a great tutorial for Keras image classification. Can you do a similar one for object detection using Keras? That would be very helpful.
Perfect, code explanations were clear and straightforward
This channel is a gold mine :) Keep up the good work.
I have watched this video completely. It was worth my time on a beautiful Saturday afternoon. Keep up with this nice project Mandy
Love the course! Hate the constant ads, YouTube.
Feedback on video production: when the discussion is about the last few lines on the screen, it is difficult to watch because they are overlapped by the subtitles. Since you are capturing the screen showing Jupyter rather than using slides, it should be easy to scroll the notebook so the line under discussion appears in the middle.
why are u using subtitles tf
Thank You So Much for this in-depth hands on deep learning experience
Thank you Mandy, It was a great video with such a fantastic explanation!!!
Very good intro example: an easy-to-set-up problem that can be tweaked to explore further, and it doesn't require pictures, strange formats, or other downloads.
I am an Italian guy and that is not an espresso, it is a cappuccino :D Watching that part of the video was a stab to my heart :D
Espresso is the Italian coffee made with a bar espresso machine.
I love you Mandy
This course was awesome
Thanks, finally someone with a normal English accent so to speak
Clear communicator. Interesting lessons. Good vid
Thank you for this very good tutorial. You are a great teacher and very pleasant to listen to. I learned a ton.
i cant thank you enough for this video. God bless you.
Thanks!
Pls teach me to earn $1B per day
You're very talented... thanks for your well-explained instructions.
Fantastic presentation! Thanks for sharing!!
Finished this. Liked it very much. Going to your website to find more.
This was so helpful!! Thank you so much!!!!
Also, 19:21 is Ultra CUTE
i clicked because she was pretty, now i know about keras
I clicked because i wanted to watch you comment
"alright, that's it for the manual labor" at the one hour mark haha... i love it.
You shouldn't call fit_transform on test_samples at 43:20. You should use the same scaler that was fitted on train_samples.
yea, you only need to transform it
Excellent Course to understand the documentation and practice DL
1:16 I am sure I missed something. Still... where do those labels come from? How did it distinguish cats from dogs?
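In case it helps anyone with the same question: with the directory-iterator approach used in the course, the labels come from the sub-folder names. A minimal sketch, assuming the dogs-vs-cats folder layout built earlier in the video (the exact path is whatever you used locally):
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import preprocess_input

train_batches = ImageDataGenerator(preprocessing_function=preprocess_input) \
    .flow_from_directory(directory='data/dogs-vs-cats/train',
                         target_size=(224, 224),
                         classes=['cat', 'dog'],  # each name must match a sub-folder
                         batch_size=10)
# Images found under train/cat get label index 0, those under train/dog get index 1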
Super good production! Thank you!
the best video ever, easy to understand. Great thanks!
2:46:15 your cam got augmented too (and flipped upwards) :D
Nonetheless awesome course!
This tutorial amazingly helps me. Thanks!
thanks abella danger, i learned a lot :)
🤣
Damn!!!
😭😭😭 JAILLLLL
😅 damnn i thought i was the only one who saw that. She looks like a prettier Abella Danger
Lmao bri
Can't thank you enough. I needed this.
Thank you very much, Miss Mandy... what a tidy room! Greetings from Peru.
I think there is a mistake at 15:27. If it is 95% of the population, the code should be 'for i in range(950)'.
so wonderful, hope you guys keep this up to date
You are one of a kind. Thanks so much, Mandy ❤️
In the first data generation step, I believe the second loop ( ~95% of young people who did not experience side effects) should be range(50,1000) instead of range(1000)
This video deserves 1 million views.
At 2:09:04 in the video, the picture 2.png is identified as espresso and not cappuccino. This may be because ImageNet does not have a class named cappuccino.
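For anyone curious, you can see exactly what the model is choosing between by printing its top predictions; a minimal sketch, assuming a preprocessed_image tensor prepared the way it is in the video:
import tensorflow as tf
from tensorflow.keras.applications import imagenet_utils

mobile = tf.keras.applications.mobilenet.MobileNet()

# preprocessed_image: shape (1, 224, 224, 3), already passed through mobilenet.preprocess_input
predictions = mobile.predict(preprocessed_image)

# decode_predictions maps the 1000 ImageNet class indices to readable labels;
# 'espresso' is one of those classes, while 'cappuccino' is not
print(imagenet_utils.decode_predictions(predictions, top=5))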
a very nice and clear explanation. Thank you mandy
2:29 - today (05/05/2023) you need this:
x = mobile.layers[-5].output
x = tf.keras.layers.Flatten()(x)
output = Dense(units=10, activation='softmax')(x)
thank you man, needed that, do you have any explanation for that?
Thank you for this great contribution. On my list to watch soon.
Excellent tutorial. Thank you for the effort. The only drawback is that I get sleepy when I look at your background :-)
Thank you for such a good explanation.
Awesome job :) Keep up the good work.
2:09:10 - Espresso is just the coffee. It becomes a cappuccino when you add milk and foam. If it's getting classified as espresso, then I wonder if the original dataset labels were added incorrectly by human editors who didn't know the difference.
This pic has the classic look of a cappuccino. I too was thinking about some mislabeling in dataset. Note, espresso is both the brewed coffee ingredient in combination drinks like cappuccino, latte and cortado, as well as a drink in itself.
I have always been a fan of Keras! Great video.
Did they explain audio processing also?
@@krishnachauhan2850 They did not but the concepts can apply to that too.
@@fazalali2894 models are Ofcourse same in deep learning...but data processing is very different.
There is nothing like
Audiodatastore as imagedatastore
.
Also wav files loading is different
@@krishnachauhan2850 if I were to tackle that problem I wouldn't try to use direct audio files. I don't see the benefit of that but that may be different based on your end goal. From what I have seen, log-mel spectrograms are the way to go and those can be loaded in as images (or matrices which is probably what I'd use for a more precise representation) or Stacked Spectrograms. Have you tried any of those out? If so, what problems were you facing that required the need for an audio generator?
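For anyone who wants to try the spectrogram route mentioned above, a minimal sketch using librosa (librosa is my assumption, not something used in this course, and the file path and parameters are placeholders):
import numpy as np
import librosa

# Load a WAV file (placeholder path) and compute a log-mel spectrogram
y, sr = librosa.load('example.wav', sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)

# log_mel is a 2-D array (n_mels x time frames) that a CNN can consume like an image
print(log_mel.shape)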
Great work Mandy. I really enjoyed your video. I noticed, though, that cm_plot_labels (passed as classes=cm_plot_labels) wasn't defined in the video. Hence, my plot of the confusion matrix was somewhat different. I would be glad if you defined it. Thank you.
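In case it helps, that variable is just the list of class names in the order the model outputs them. A minimal sketch for the clinical-trial example; the exact label strings are my guess, and it assumes the plot_confusion_matrix helper plus the test_labels and rounded predictions from the prediction lesson:
from sklearn.metrics import confusion_matrix

# Guessed names; the order must match label indices 0 and 1
cm_plot_labels = ['no_side_effects', 'had_side_effects']

cm = confusion_matrix(y_true=test_labels, y_pred=rounded_predictions)
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')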
Hi Mandy, I tried training the neural network for the clinical trials example @28:47. I am getting NaN for the loss and 0.5 for accuracy. I ran the exact lines of code you did. What is the issue? BTW, I am not using the three lines of code to use the GPU.
Same here, I did not understand why.
Most beautiful teacher <3
This was an awesome tutorial for me 🙂
Thanks for the video!!!!!
She is really good at teaching
I thought it was a new Hotel Trivago advertisement.
IKR - I was waiting for Captain Obvious to show up!
Lol I guess it's what happens when you film courses from your Airbnb 🤷♀️😂
@@deeplizard 😅 nice video though
reporting bias.
Ahahahah
Very precise and accurate information
Thanks for sharing 👍
Great work ma'am 🥰, your explanation was so good. Don't mind, but you look as cute as Gal Gadot 🙄😘!
When you are good you are good, and deeplizard is VERY GOOD. I recommend this course and the Deep Learning classic as an excellent way to get familiar with deep learning ANNs and their Keras implementation.
Also, the text versions on the blog are very good.
Great job!
The right time , the right video😁😁
Awesome, Mandy you rock
Very well explained. Thank you Mandy @deeplizard. It would be great if you could make a similar video series on RNNs.
Thanks a lot for such wonderful course.
Great course, thanks guys, always know that we want 👍👍
You have such a nice voice. Flattened by your way of teaching.
Thanks for this @deeplizard and the instructor 👍👍👍👍