Convolutional Neural Networks - Deep Learning basics with Python, TensorFlow and Keras p.3

  • Published: 18 Aug 2018
  • Welcome to a tutorial where we'll be discussing Convolutional Neural Networks (Convnets and CNNs), using one to classify dogs and cats with the dataset we built in the previous tutorial.
    Text tutorials and sample code: pythonprogramming.net/convolu...
    Discord: / discord
    Support the content: pythonprogramming.net/support...
    Twitter: / sentdex
    Facebook: / pythonprogramming.net
    Twitch: / sentdex
    G+: plus.google.com/+sentdex

Comments • 913

  • @blendamosity
    @blendamosity 4 years ago +4

    As a programmer/amateur data scientist, I have wanted to understand and use neural networks to take my craft to the next level for years, and sentdex, you are the first researcher/teacher/hacker/genius that has enabled me to actually break that glass ceiling and use neural networks for real-life problems. Thank you so much!

    • @sentdex
      @sentdex  4 years ago +1

      Happy to share!

  • @Osirisdaro
    @Osirisdaro 4 years ago +8

    "I need more tea" cracks me up.....
    Thanks for the vid

  • @chrisdavidson4540
    @chrisdavidson4540 4 years ago

    You are the man, Sir! Thanks so much for making these vids...and looking forward to watching the machine learning lessons!

  • @gautamj7450
    @gautamj7450 5 years ago +36

    A tutorial on keras callbacks such as EarlyStopping and ModelCheckpoint would be nice.
    Also, I would love if you could explain Image Augmentation in Keras for CNN.
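    A minimal sketch of those pieces, assuming a recent tf.keras (2.x) and the compiled model plus the X/y arrays from this series (the filename and parameter values are just examples):
    import numpy as np
    import tensorflow as tf
    # stop once val_loss stops improving, and keep the best weights seen so far on disk
    early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
    checkpoint = tf.keras.callbacks.ModelCheckpoint('best-cnn.h5', monitor='val_loss', save_best_only=True)
    model.fit(X, np.array(y), batch_size=32, epochs=20, validation_split=0.3, callbacks=[early_stop, checkpoint])
    # basic image augmentation: random rotations/shifts/flips generated on the fly
    datagen = tf.keras.preprocessing.image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
    model.fit(datagen.flow(X, np.array(y), batch_size=32), epochs=20)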

  • @prohacker5086
    @prohacker5086 4 years ago +127

    15:55 Found the solution: If you did everything exactly the same throughout the previous video, just add this " y = np.array(y) " after the " X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)" so it looks like this:
    X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
    y = np.array(y)
    It was giving me errors until I added that line too and re-executed it

    • @Amit-cg9le
      @Amit-cg9le 4 years ago +2

      Thanks for this solution, I was stuck.

    • @priyanshiburad2385
      @priyanshiburad2385 4 years ago

      after doing what you suggested I got this error
      File "", line 3, in raise_from
      tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[194] = 194 is not in [0, 194) [Op:GatherV2]
      ps: 194 is the size of my dataset

    • @emredemircialumni8637
      @emredemircialumni8637 4 years ago

      Thanks

    • @denisardelean8067
      @denisardelean8067 4 years ago +2

      Thanks, It worked!
      I was getting some weird Tensorflow InvalidArgument errors and I was stuck..

    • @xXmineralwaterXx
      @xXmineralwaterXx 4 years ago

      thanks a lot, that solved it for me too

  • @camdenparsons5114
    @camdenparsons5114 5 years ago +117

    you should definitely do a video on transfer learning in this series.
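    While waiting for that, a rough sketch of the idea with tf.keras (assuming TF 2.x; MobileNetV2 is just one choice of pretrained base, and unlike the grayscale data built in the previous video it expects 3-channel input):
    import tensorflow as tf
    base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights='imagenet')
    base.trainable = False  # freeze the pretrained convolutional features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # cats-vs-dogs head
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])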

    • @GoodDayTrade
      @GoodDayTrade 5 years ago +4

      looking forward to your transfer learning videos

    • @kaushilkundalia2197
      @kaushilkundalia2197 5 years ago +2

      Agreed! Please do one. Eagerly waiting for that

  • @manjunathshenoy3774
    @manjunathshenoy3774 3 years ago +1

    Bro Thank you so much for this tutorial. This helped me in doing my academic project. Thank you so much bro.

  • @joseenriqueorozcobecerra5969
    @joseenriqueorozcobecerra5969 3 years ago +1

    Thank you for being so didactic. I also relate so much when it comes to the errors :) Keep rocking!

  • @nikolaiivankin
    @nikolaiivankin 5 years ago +11

    You can add a layer with the activation inside:
    model.add(Dense(256, activation='relu'))
    It allows you to choose the activation function for each layer separately

    • @TigerFitoff
      @TigerFitoff 3 months ago

      dont know why he didn't do that

  • @ahmednayeem4849
    @ahmednayeem4849 2 years ago +50

    For anyone getting the following error "validation_split is only supported for Tensors or NumPy " add y = np.array(y) under X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE,1)

  • @henkhbit5748
    @henkhbit5748 3 years ago

    I like your enthusiastic explanations and fun videos

  • @thefastreviewer
    @thefastreviewer 1 year ago

    Just amazing! You are really helping me with my paper! 🙌

  • @waynefilkins8394
    @waynefilkins8394 5 years ago +222

    you have some of the strangest coffee cups I have ever seen

  • @bharddwajvemulapalli
    @bharddwajvemulapalli 5 years ago +11

    Hey sentdex, nice tutorial! I've been binge-watching your tutorial videos, and compared to your previous convolutional neural network videos I noticed a few differences. For Conv2D you didn't increase the number of filters (for both layers you kept it at 64), whereas previously you increased it. Why is that? Moreover, why did we not use dropout, and why did we use the 'sigmoid' activation function over 'softmax'? And I also wanted to say thank you for the great content!

  • @davidfurley1775
    @davidfurley1775 2 years ago

    Dude literally saved my dissertation, what a legend 🙌

  • @somechanne231
    @somechanne231 5 years ago

    Thanks man. thank you for your help and great teaching

  • @mason6300
    @mason6300 4 years ago +10

    It was a headache but I finally installed tensorflow-gpu on my windows pc. now I can run the epoch in 20 seconds!

    • @rizwanrehman6833
      @rizwanrehman6833 4 years ago +1

      Can you tell me the steps to install tensorflow-gpu? My epochs are running very slowly @MRH

    • @aryanchauhan8066
      @aryanchauhan8066 4 years ago +1

      @@rizwanrehman6833 use Google Colab instead

    • @toonepali9814
      @toonepali9814 4 years ago

      I know the feeling

  • @aaronshed
    @aaronshed 5 years ago +50

    You can add the "layers" as an array "model = Sequential ([ flatten(), Dense(10), Activation('relu') ])" instead of using the model.add() function every time.

    • @curious_one1156
      @curious_one1156 5 years ago +8

      Why not use a dense layer with 2 nodes at the end? Why one? Please help explain

    • @sklify1232
      @sklify1232 4 years ago +1

      @@curious_one1156 Because one node can be either on or off, 1 or 0, cat or dog

    • @michaelschmidlin4274
      @michaelschmidlin4274 4 years ago

      @@sklify1232 So what if you put your output layer as "2". Then would you technically have 3 output classes? Ex. "Cat", "Dog", and "Airplane"?

    • @sklify1232
      @sklify1232 4 years ago

      @@michaelschmidlin4274 I think it would still be cat and dog, so it's an equivalent alternative to one node for binary choice. You could say "airplane" when both of the 2 nodes are off , but it would depend on the activation threshold- too cumbersome.

    • @dumbtex6107
      @dumbtex6107 4 years ago

      was gonna say the same thing lol

  • @code-grammardude5974
    @code-grammardude5974 5 years ago +1

    Your tutorials are sooo good!

  • @yashdwivedi2037
    @yashdwivedi2037 5 years ago

    Bro you are awesome. I have watched each one of your video series. Great content brother.

    • @usmanliaqat0321
      @usmanliaqat0321 5 years ago

      Hello, I am facing the following error. I am using the same syntax as you did:
      FileNotFoundError Traceback (most recent call last)
      in
      ----> 1 X = pickle.load(open("x.pickle","rb"))
      2 y = pickle.load(open("y.pickle","rb"))
      3 X = X/255.0

  • @stopwastingmytime9194
    @stopwastingmytime9194 5 years ago +3

    Was waiting for this for so long

  • @borispapic9510
    @borispapic9510 5 years ago +2

    Why are the first two layers conv2d and the third one is dense? I tried switching the second conv2d layer to dense and it had better accuracy but took a few seconds more to train. is it for balance between time and accuracy? Great videos and thank you for making these

  • @liangyumin9405
    @liangyumin9405 5 years ago

    I cannot wait for the next great tutorial!!! Very nice videos

  • @Jhaskydding
    @Jhaskydding 5 years ago

    THANK YOU SO MUCH FOR THIS TUTORIAL

  • @brotherlui5956
    @brotherlui5956 5 years ago +46

    Hi Harrison, doesn't the Dense(64) layer at 11:30 need an activation function? I added another "relu" there and got better accuracy.
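    For reference, that layer with the activation added looks like:
    model.add(Dense(64, activation='relu'))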

    • @sentdex
      @sentdex  5 years ago +24

      Whoops. yep, that's a mistake lol.

    • @bharddwajvemulapalli
      @bharddwajvemulapalli 5 years ago +1

      for some reason i got better accuracy in the same amount of epochs when i didn't add an activation function for that layer

    • @brotherlui5956
      @brotherlui5956 5 years ago

      King Neptune, whatever that reason might be, a layer without an activation makes no sense

    • @bharddwajvemulapalli
      @bharddwajvemulapalli 5 years ago

      Brother Lui yeah I agree it'd be useless

    • @davidedwards7172
      @davidedwards7172 5 years ago

      There has been some study of "all Conv" models where only the output layer is fully connected, generally results in greater accuracy.

  • @RRKS_TF
    @RRKS_TF 4 years ago +3

    how would i go about running this on the GPU instead of my CPU? it is taking ages to run a singular epoch
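    Not covered at this point in the series, but assuming a recent TensorFlow (2.x) with the GPU build plus matching CUDA/cuDNN installed, Keras uses the GPU automatically; a quick way to check:
    import tensorflow as tf
    print(tf.config.list_physical_devices('GPU'))  # a non-empty list means model.fit() will run on the GPU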

  • @harijsijabs2420
    @harijsijabs2420 5 years ago +1

    Hey! Very useful and comprehensive series so far! You got my sub!
    Question - how would you go about implementing multispectral imagery (i.e. - more than 3 bands) ?

  • @martinma1680
    @martinma1680 4 years ago

    Thanks for your CNN videos, sir.

  • @ArunKUMAR-wp4sb
    @ArunKUMAR-wp4sb 5 years ago +3

    Will you do a tutorial on using RNN in Tensorflow and GANs and stuff.

  • @Annunaki_0517
    @Annunaki_0517 5 years ago +6

    Hey,
    Could you maybe expand a bit on the exact purpose of the ‘Dropout’ function? We imported ‘Dropout’ but I don’t think we used it anywhere in the code. Did you decide on the fly to not include a dropout function or was it perhaps just simply an oversight?

    • @aadiduggal1860
      @aadiduggal1860 2 years ago

      Usually you implement dropout if your model has overfit. You randomly "drop out" certain connections in hopes of making your model more generalizable. Of course, it may also cost a bit of training accuracy.
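      A minimal sketch of where it could go in this model (the 0.2 rate is just an example, and Dropout would need to be imported alongside the other layers):
      model.add(Conv2D(64, (3, 3), activation='relu'))
      model.add(MaxPooling2D(pool_size=(2, 2)))
      model.add(Dropout(0.2))  # during training, randomly zeroes 20% of the activations from this block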

  • @marcellaufmann3901
    @marcellaufmann3901 4 years ago

    Great video. Thank you very much.

  • @AbhishekKumar-mq1tt
    @AbhishekKumar-mq1tt 5 years ago

    Thank you for this awesome video

  • @tomasemilio
    @tomasemilio 5 years ago +6

    Love the collection of mugs. I know you like to send hidden messages. Hahaha.

  • @alokrajgupta9452
    @alokrajgupta9452 5 years ago

    Hi Harrison, at 12:26: in my opinion, an activation function is present at every layer; that is why Keras added it to keras.layers!

  • @dodgeridersrt5650
    @dodgeridersrt5650 5 years ago

    THANK YOUUUUUU !!!! It is really helpful 😘🖤

  • @ramzykaram296
    @ramzykaram296 4 years ago +7

    It can work on tf2 just by adjusting the below
    X = np.array(pickle.load(open("X.pickle", "rb")))
    y = np.array(pickle.load(open("y.pickle", "rb")))
    # now you have to reshape (col, rows, channel)
    X = np.reshape(X, (X.shape[0],*X.shape[1:],1))

    • @AaronBasch
      @AaronBasch 3 years ago

      bless you

    • @YukselCELIK
      @YukselCELIK 3 years ago

      Thank you Ramzy.. Your solution solve my problem..

  • @gangholdon_
    @gangholdon_ 4 years ago +4

    For anyone who got a negative loss following this tutorial with their own dataset that has more than 2 categories of labels, don't forget to normalize your label array as well!! So y_normalized = y / y.max(). It will scale your labels to be between 0 and 1, and then the whole thing works just fine.

    • @ATDP1
      @ATDP1 1 year ago

      can you elaborate more where do you put that code?

  • @kevingeorgedalpathadu5408
    @kevingeorgedalpathadu5408 1 year ago

    Thank You, Good explanation 👍🔥

  • @rickeycarter
    @rickeycarter 5 years ago

    re:layer, I think the origin is from the data graph that is formed. It's another layer/processing step that gets applied, and the order of the operations is somewhat tunable (e.g., activate before normalization or vice versa).

  • @aniketpatra6517
    @aniketpatra6517 4 years ago +11

    X=X/255.0 is giving me error unsupported operand type(s) for /: 'NoneType' and 'float'. How will I rectify the error ?

    • @markd5928
      @markd5928 4 years ago +8

      In tutorial #2 Harrison converted X to a numpy array, and it's possible that definition is still hanging around here. Before the division, add the following lines and you should be good to go:
      import numpy as np
      X = np.array(X)

    • @Twas_Grace
      @Twas_Grace 10 months ago

      @@markd5928 I know it's been three years for you lol, but THANK YOU for this. I was pulling my hair trying to figure this error out.

  • @sebastianmelmoth223
    @sebastianmelmoth223 3 years ago +3

    For anyone coming across this as of 03/03/2021, I had to do a bit of fiddling to get it running properly.
    X = []
    y = []
    for features, label in training_data:
        X.append(features)
        y.append(label)
    X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
    y = np.array(y)
    Convert y to a numpy array after you've filled it with the targets/labels.
    X.shape is (24946, 150, 150, 1) and y.shape is (24946,).
    Just need to get it running on the GPU now as it takes an age to run an epoch lol!

  • @abhishekmaheshwari
    @abhishekmaheshwari 2 years ago

    Thank you so much for the video..

  • @EnEm523
    @EnEm523 1 year ago

    You are an amazing teacher ❤️

  • @zperk13
    @zperk13 5 years ago +17

    16:00 how did your computer do that so fast?! I have a really good computer and it took 69 seconds
    edit: I installed tensorflow-gpu and now it takes 4 seconds oh yeah

  • @sanjanakonda5632
    @sanjanakonda5632 4 years ago +5

    I am getting an error at validation split ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found: (array([[[[ 36],

    • @marcus.bazzoni
      @marcus.bazzoni 4 years ago +8

      import numpy again and reconvert to nparrays after loading
      X = np.array(X)
      y = np.array(y)

  • @nitinmeena2773
    @nitinmeena2773 5 years ago

    Series on data preprocessing and feature engineering will be really helpful.

  • @rm.throws
    @rm.throws 5 years ago

    Harrison, the product placement master!

  • @ashvnikumar4292
    @ashvnikumar4292 5 years ago +7

    I am using a different dataset but I am caught on this error. Help me.
    My dataset has total 4000 images in 4 classes, 1000 images in each class.
    ValueError: Input arrays should have the same number of samples as target arrays. Found 40 input samples and 4000 target samples.

  • @gokulsundeep3610
    @gokulsundeep3610 5 years ago +4

    I'm getting this error:
    ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d/Conv2D' (op: 'Conv2D') with input shapes: [?,50,50,1], [3,3,50,256].
    Can anyone help!!

    • @robertslash3488
      @robertslash3488 4 years ago

      Yes, the solution lies in the 2nd video; a slight edit should be made there in order for the 3rd video's code to run.
      In the section where X has been made into an array, y should also be made into an array. The lines go something like this:
      X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
      y = np.array(y)
      This would solve the ValueError, as both are now proper NumPy arrays.

  • @nomesa7374
    @nomesa7374 5 years ago

    Thanks for the tutorial. One question though:
    What further steps should we take in order to construct a model which returns the bounding box (position information)?
    Do we need to do labeling (with for example labelImg)? Tfrecord? ...

  • @TheGeektarrist
    @TheGeektarrist 4 years ago +1

    Hi,
    First of all, thanks for this tutorial, it covered most of my doubts! But I have a question, will it work for multiclass classification? I mean I am trying to apply the same exact algorithm to a lego dataset (6 classes), but I am getting pretty bad results accuracy is about 0.17. Any advice?

  • @shubhamkumargupta3478
    @shubhamkumargupta3478 5 years ago +12

    getting error
    ValueError: Input arrays should have the same number of samples as target arrays. Found 74838 input samples and 24946 target samples.

    • @adriandrotar
      @adriandrotar 3 years ago

      same problem, any fix?

    • @hazemahmed1982
      @hazemahmed1982 3 years ago

      That means that your reshaping didn't go well, you could try to check the image size you constructed the data with and run it again

  • @doggo206
    @doggo206 5 years ago +4

    9:05 Genuine question, what does the colon do?

    • @hamish8688
      @hamish8688 5 years ago

      X.shape[ 1 ] returns an int e.g. 7
      X.shape[ 1 : ] returns a tuple e.g. (7, )
      so the colon would return a tuple instead of an integer, not sure if that helps?

    • @doggo206
      @doggo206 5 years ago +1

      Hamish Okay, thanks!

    • @matheusataide1861
      @matheusataide1861 5 years ago +1

      Actually it's for slicing. X.shape[1:] means he is getting the tuple X.shape, but without the first element.
      Example: if X.shape returns (a, b, c, d), then X.shape[1:] returns (b, c, d)
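      Concretely, with the data shaped as in the previous video (the numbers here are just an example):
      X.shape      # e.g. (24946, 50, 50, 1)
      X.shape[1:]  # (50, 50, 1), which is exactly what Conv2D expects as input_shape
      model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))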

  • @Lopezitation
    @Lopezitation 5 years ago

    Bro, your coffee mugs are just the cherry on the cake

  • @gabrielmallin3811
    @gabrielmallin3811 5 years ago

    Hi,
    I’m following your tutorials on the Raspberry Pi 3B+ and I have a question about the SSH setup. As my home computer runs Chrome OS, it won’t let me use PuTTY as an SSH client. What other client would you recommend? A quick search led me to Terminus but I’m unsure.
    Thanks!

  • @piyushkonher8405
    @piyushkonher8405 4 years ago +4

    Failed to find data adapter that can handle input: , ( containing values of types {""})
    getting this error , how to resolve ??

    • @rubenuribe
      @rubenuribe 4 years ago +1

      I am getting the same error.

    • @rubenuribe
      @rubenuribe 4 years ago +11

      Solved the problem:
      The problem is that X is a numpy array and y is just a list.
      put
      from numpy import array
      in the import statements
      and change the y assignment to
      y = array(pickle.load(open("Y.pickle","rb")))

    • @eloycollado1939
      @eloycollado1939 4 years ago +1

      @@rubenuribe love u

    • @AlcoverMANZoo
      @AlcoverMANZoo 4 years ago +1

      @@rubenuribe thanks, legend

    • @NickxHaruka
      @NickxHaruka 4 years ago +1

      @@rubenuribe Freaking love you man !

  • @luantaminh8103
    @luantaminh8103 5 years ago +3

    My computer is too slow. Thanks for your tutorial

    • @atithi8
      @atithi8 5 years ago

      Use the GPU of your computer, perhaps you have figured that out already

  • @jeremynx
    @jeremynx 3 years ago

    thank you for such good videos!!!

  • @sawwilliam5686
    @sawwilliam5686 4 years ago

    hi sentdex, i'm trying to do this project on celebrities and i was wondering if having multiple folders of them would be the same as having those folders cats and dogs. Also, i created a separate test folder and in that folder, do i need to group pictures of the same celebs in a folder, or just let them be scattered?

  • @wiemcharrada5866
    @wiemcharrada5866 5 years ago +7

    How can I test this model for a new image ?!

    • @goldenbananas1389
      @goldenbananas1389 3 years ago +1

      I dont know if you still need this but what I did was:
      Image_size = (your image size)
      CatDogModel = tf.keras.models.load_model('(put the name of your model save file here)')
      ImagePath = "(Path to your image)"
      Image = cv2.imread(ImagePath, cv2.IMREAD_GRAYSCALE)
      NewImage = cv2.resize(Image, (Image_size, Image_size))
      NewImage = np.array(NewImage).reshape(-1, Image_size, Image_size, 1)
      prediction = CatDogModel.predict([NewImage])
      a = prediction[0]
      print(a)
      0 is dog and 1 is cat.
      (I did all of this in a separate python script.)
      I dont know if I did it in the best way but that's what I did.
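      One thing worth double-checking with this approach: if the training data was scaled with X = X/255.0 as in the video, the same scaling should probably be applied to NewImage before calling predict (e.g. NewImage = NewImage/255.0), so the model sees inputs in the same range it was trained on.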

  • @siawkexing5663
    @siawkexing5663 5 years ago +5

    Seems you need someone to draw for this project 😂😂
    Anyway nice tutorial

  • @kelvinkoh3604
    @kelvinkoh3604 5 years ago

    Hi, can I ask why you added a Conv2D layer to the model? You didn't add one in the video on classifying digits

  • @Diego01201
    @Diego01201 3 years ago

    I changed the output layer to be 2 neurons with softmax activation and sparse categorical crossentropy. Makes more sense to me, since we want to know the probabilities of the input image being either a cat or a dog.
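    A sketch of that change next to the setup used in the video (either works for the two-class case):
    # option A (as in the video): a single sigmoid output with 0/1 labels
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # option B (the variant described above): two softmax outputs with integer labels 0/1
    model.add(Dense(2, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])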

  • @harshavarthanvijayakumar8993
    @harshavarthanvijayakumar8993 4 years ago +3

    I am getting this error
    ValueError: Failed to find data adapter that can handle input: , ( containing values of types {""})

    • @vaizerdgrey
      @vaizerdgrey 4 years ago +2

      The problem is that X is a numpy array and y is just a list.
      What I did is:
      X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
      y = np.array(y)

    • @sourabhjigjinni
      @sourabhjigjinni 4 years ago +1

      @@vaizerdgrey thanks this worked!

    • @harshavarthanvijayakumar8993
      @harshavarthanvijayakumar8993 4 years ago +1

      @@vaizerdgrey it worked, thanks buddy

  • @-._
    @-._ 5 years ago +35

    How many tea cups do you have???

  • @manojkumar-qp8xu
    @manojkumar-qp8xu 5 years ago

    Very good content bro........ It's really helpful

  • @sanjaychakrabortyaworldofc6953
    @sanjaychakrabortyaworldofc6953 2 years ago

    Nice hands on explanation of CNN

  • @bytblaster
    @bytblaster 4 years ago +3

    I am getting a "ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=3" can someone help?

    • @x3epic763
      @x3epic763 4 years ago

      try creating a new variable that stores a resized X with an additional dummy dimension, something like "resizedX = numpy.resize(X, (1,X.shape))" and put that resized X into your fit function instead of the normal X

    • @jordenquast2655
      @jordenquast2655 4 years ago

      @@x3epic763 What Do you mean? Could you explain more? I'm getting the same error, but I don't know what you're saying

    • @x3epic763
      @x3epic763 4 years ago

      @@jordenquast2655 well the error kinda says that the function expected a 4 dimensional input but received a 3 dimensional one. Therefore you can try adding a 4th "dummy" dimension as the first dimension to the input. So say your input has the shape (200,5,5) then the new input should be (1,200,5,5).This can be done with the resize function: you make a new variable and store the result of the function as the input. This new variable can then be used as input for the training function. It depends on the overall setup if this will work, but ive seen the problem solved like this a couple of times

    • @jordenquast2655
      @jordenquast2655 4 years ago

      @@x3epic763 I solved it, the error was in the last line of the data transformation code, where it took some dimensions away. This left it with too few dimensions to be able to run. I believe the line of code was X = np.array(X).reshape(-1, imgSize). Hope this helps someone!

    • @marksahlgreen9584
      @marksahlgreen9584 4 years ago

      @@jordenquast2655 specifically this is the correct line: X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)

  • @kyle_bro
    @kyle_bro 5 years ago +4

    Coffee mug level: 100

  • @user-oe6xr3oj7p
    @user-oe6xr3oj7p 5 years ago +1

    When running it in PyCharm, the "OpenCV(3.4.1) Error: Assertion failed (ssize.width > 0 && ssize.height > 0) in resize" error happens 60 or more times; do you know what causes it?

  • @GAment_11
    @GAment_11 3 years ago

    This is such a great intro to the subject. I am interested in creating my own data on which to train a network instead of downloading something from the internet, i.e., recording a video of something from which I can extract a lot of image frames (at least 30 frames per second) and using these images as training/test data. I plan to reshape the images and the like before feeding them to a CNN, but do you have any pointers on pitfalls I might face when creating my own data sets (other than the fact I would need a lot of training data)? Thanks again for these tutorials, you make this subject incredibly fun.

  • @severussnape5978
    @severussnape5978 5 years ago +6

    my god man, why your computer so fast.
    mine takes forever to train

    • @supernova6553
      @supernova6553 5 years ago

      it's most likely a server

    • @waynefilkins8394
      @waynefilkins8394 5 years ago

      he has a high-end gpu. if you're on a budget I picked up a 1060 6gb on ebay for like $130 and it trains models really fast for the $$

    • @supernova6553
      @supernova6553 5 years ago

      @@waynefilkins8394 yes but i think he uses paperspace and does his development on their servers.

  • @ederjuniorchua827
    @ederjuniorchua827 4 years ago +4

    is it just me, or does he look a bit like Edward Snowden?

  • @vinodkinoni4863
    @vinodkinoni4863 5 years ago

    great tutorials thanks

  • @Jedop
    @Jedop 4 years ago

    A target array with shape (2400, 64, 64, 1) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output.

  • @flosset9640
    @flosset9640 4 years ago +3

    yo i got the entire project on github check it out

  • @ygarabawala
    @ygarabawala 5 years ago

    Awesome as always, will wait patiently for the next video and I hope to hit "Join" button too when i am less busy to get the max out of your brain :D

  • @mehdisoleymani6012
    @mehdisoleymani6012 2 years ago

    Thanks a lot for your great courses. Is it possible for you to answer a question? How should we add non-image features (like cat and dog prices) to the flatten layer of our CNN model? And how does the model know which input image the newly added features belong to?

  • @moabs438
    @moabs438 5 years ago

    Awesome video ! keep going ...

  • @yepnah3514
    @yepnah3514 3 years ago

    Hello!! thank you for your tutorials. Do you mind posting code for building the test portion of a cat-dog model or something similar? I built a model of horse-human with a training and validation set. I have a 'prediction/test' portion where I can upload one image at a time and see what the model predicts. Is there a way to just load hundreds of images to a prediction/test set so I can get something that looks like the results you see during training? thank you.

  • @ryanstern7927
    @ryanstern7927 4 years ago +2

    How do you decide how many hidden layers you are going to use and how many nodes per layer you will use?

  • @subratode7086
    @subratode7086 5 years ago

    wonderful content
    this man should have more than 10 million subscribers

  • @kevintsai4969
    @kevintsai4969 3 years ago

    Hey sentdex, may I ask what kind of network structure you are using? CNNs have a lot of structures, like VGG-16, U-Net, etc. So what is the structure you are using here?

  • @Mrcrownjk
    @Mrcrownjk 4 years ago

    Hi! Great content here found this very useful for my own projects! Btw is it always necessary to normalise image data (X/255 for grey scale)? As I do not see this being used elsewhere

    • @sudeepnellur
      @sudeepnellur 4 years ago

      Yes, it is! It's all about reducing the numbers so the CPU has less computational work to do.
      Before normalisation X would look like [255, 255, 235, ...];
      after normalisation X will be [1, 1, 0.something, ...].
      Dealing with values between 0 and 1 is easier than dealing with 255s.

  • @DerickAHo
    @DerickAHo 5 years ago

    Do you have a tutorial where you implement faster RCNN for object detection? I want to learn how to make one using the CNN made from scratch. I am aware of the tensorflow object detection api but it doesn't help me learn how to create my own model.

  • @GameTuberer
    @GameTuberer 4 years ago

    Great video, I have a database with 2 folders: ships and other (waves, bridges, coasts, and similar). How should I modify this to recognize what is a ship and what is not, instead of classifying because the folder 'Other' has many different images?

  • @gavinderulo12
    @gavinderulo12 5 years ago

    Is a CNN only useful for high res images? If so, at what resolutions does it still make sense?

  • @piotrwln9348
    @piotrwln9348 5 years ago

    Say I want to build a classifier that tells me if a person is wearing glasses or not. Would you train your model using photos containing whole faces or just the eyes?

  • @shawnkan7157
    @shawnkan7157 4 years ago

    in the Conv2D config, 64 is the number of filters that will be produced by this layer. what changes does the number of filters do in terms of accuracy?

  • @eamonkelliher3965
    @eamonkelliher3965 3 years ago

    Hi Guys, I'm just starting out with CNNs and found this tutorial extremely helpful. However I was just wondering if I was to add an extra category (e.g. Horses) to this example, would there be any significant changes to the code required? Obviously I would need to add an extra 'Horses' CATEGORY and I changed the loss from binary_crossentropy to categorical_crossentropy. But I was just wondering other than that would the code essentially be correct to use?
    Any help on this would be greatly appreciated!

  • @koreanhomechef8559
    @koreanhomechef8559 3 years ago

    first of all, thank you very much for your great video. By the way I dont have Nvidia graphic cards. Instead of using tensorflow, is there any alternative library?

  • @thevikinglord9209
    @thevikinglord9209 2 years ago

    That is it, you are better than my professor!

  • @nacerzarguit8970
    @nacerzarguit8970 5 years ago

    Great tutorial !
    I think it's better that you use Jupyter notebook in your tutorials. It makes things much easier to follow.

  • @lejason
    @lejason 4 years ago

    Fantastic video, but as I am new to NNs, one thing I am still unclear on (about the nature of CNNs in general): around 3:40 you mention that these processes are "slowly extracting value from the image [...] the more initial layers are going to be finding things like edges and lines", etc. My question is, how are these features "found"? Is it just an interesting consequence of using windows/pooling in this manner? Or is there some step I am missing where you somehow guide the CNN to "find lines" on this layer or "find circles" on this layer? Or if these are emergent, is there a way you *can* control this evolution (i.e., tell it which sorts of features to look for at which layers)?

  • @miicro
    @miicro 4 years ago

    Can you please explain how you chose 64 for Conv2D creation? My biggest issue is that I'm never sure how should i choose values. Also, what is validation_split parameter in model.fit()?

  • @abdelmonaemfouad7037
    @abdelmonaemfouad7037 4 years ago

    Excuse me, how do you record such amazing videos... which tools are you using? Thanks

  • @SuperKafooo
    @SuperKafooo 5 years ago

    @9:25 I am using conv1d to classify audio utterances just for fun. Also using PCA for dimension reduction, again for fun. PCA requires the audio matrices to be flattened out. So I am training 200 files, each having 50 features; x_tran.shape() returns (200,50).
    Now the main point. I can't figure out what I should pass in the input_shape parameter. It simply isn't accepting 1D input. Also, what other changes need to be made to pass a 1D input?

  • @benjii0stylz
    @benjii0stylz 5 years ago

    Bobby Fischer's 21-move brilliancy is a video I watched at least 10 times. Apparently we are on the same team, since YouTube's ML algorithms tell you to watch it ^^

  • @allaabdella4794
    @allaabdella4794 5 years ago +1

    How can we plot the accuracy vs the number of epochs in your code?
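    Not shown in the video, but model.fit() returns a History object whose .history dict holds the per-epoch metrics, so a minimal sketch (assuming matplotlib is installed and X/y are the arrays from this series):
    import numpy as np
    import matplotlib.pyplot as plt
    history = model.fit(X, np.array(y), batch_size=32, epochs=10, validation_split=0.3)
    plt.plot(history.history['accuracy'], label='train')           # on older Keras the key is 'acc'
    plt.plot(history.history['val_accuracy'], label='validation')  # or 'val_acc'
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()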

  • @arkasaha4412
    @arkasaha4412 5 years ago

    Nice video. Will you consider making a series on Signal processing with Python?