TensorFlow Tutorial 08 - Classify Lego Star Wars Minifigures | Full Project Walkthrough

  • Published: 4 Oct 2024
  • New Tutorial series about TensorFlow 2! Learn all the basics you need to get started with this deep learning framework!
    Part 08: Classify Star Wars Lego Figures | Full Project Walkthrough
    In this part we use a real image dataset from Kaggle with Lego Star Wars Minifigures and do a full project walkthrough. I show you how to load the data using a TensorFlow ImageDataGenerator and implement a convolutional neural net to do image classification. I also introduce some new concepts like image augmentation and Keras callbacks.
    ~~~~~~~~~~~~~~ GREAT PLUGINS FOR YOUR CODE EDITOR ~~~~~~~~~~~~~~
    ✅ Write cleaner code with Sourcery: sourcery.ai/?u... *
    Get my Free NumPy Handbook:
    www.python-eng...
    🚀🚀 SUPPORT ME ON PATREON 🚀🚀
    / patrickloeber
    If you enjoyed this video, please subscribe to the channel!
    Course material is available on GitHub:
    github.com/pat...
    Download the dataset:
    www.kaggle.com...
    Links:
    www.tensorflow...
    www.tensorflow...
    www.tensorflow...
    You can find me here:
    Website: www.python-eng...
    Twitter: / patloeber
    GitHub: github.com/pat...
    Music: www.bensound.com/
    #Python
    Course Parts:
    01 TensorFlow Installation
    02 TensorFlow Tensor Basics
    03 TensorFlow Neural Net
    04 TensorFlow Linear Regression
    05 TensorFlow CNN (Convolutional Neural Nets)
    06 TensorFlow Save & Load Models
    07 TensorFlow Functional API
    08 TensorFlow Multi-output Project
    09 TensorFlow Transfer Learning
    10 TensorFlow RNN / LSTM / GRU
    11 TensorFlow NLP
    TensorFlow 2, Keras, Deep Learning, TensorFlow Course, TensorFlow Beginner Course, TensorFlow Tutorial
    ----------------------------------------------------------------------------------------------------------
    This is a sponsored link. Clicking it does not cost you anything extra; instead, you support me and my project. Thank you so much for the support! 🙏

Comments • 18

  • @progra_kun4331
    @progra_kun4331 2 years ago

    As a Spanish speaker, I can tell you I appreciate that you speak slowly and clearly in your videos. I will follow your course and subscribe.

  • @sikkavilla3996
    @sikkavilla3996 4 years ago

    Thanks for the great video, Patrick!

  • @DanielWeikert
    @DanielWeikert 4 years ago +2

    Why did you set the padding to "valid"? Why not "same"? Any particular reason? When should each of those be used?
    When you use the ImageDataGenerator with only rescaling, I assume it only returns the available images. But if you use other techniques (cropping, ...) you would need to tell model.fit the number of images, otherwise you get an infinite loop, right?
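    As background on the padding question: "valid" applies no padding (the feature map shrinks), while "same" pads so that the output size only depends on the stride. A quick arithmetic sketch (conv_output_size is a helper written here for illustration, using the same formulas TensorFlow applies per mode):

```python
import math

def conv_output_size(n, kernel, stride, padding):
    """Spatial output size of a conv layer for 'valid' vs 'same' padding."""
    if padding == "valid":
        # no padding: the kernel must fit entirely inside the input
        return math.floor((n - kernel) / stride) + 1
    if padding == "same":
        # padded so output size depends only on the stride
        return math.ceil(n / stride)
    raise ValueError(f"unknown padding: {padding}")

# With 256x256 inputs, a 3x3 kernel and stride 1:
print(conv_output_size(256, 3, 1, "valid"))  # 254 - 'valid' shrinks the map
print(conv_output_size(256, 3, 1, "same"))   # 256 - 'same' preserves it
```

    With small kernels the difference is minor per layer, but "valid" compounds across a deep stack, while "same" keeps border pixels represented.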

  • @kaiye4954
    @kaiye4954 3 years ago

    Thanks for this awesome video Patrick! Just have a few questions.
    train_batch = train_batches[0]
    1. print(train_batch[0].shape) gives me (4, 256, 256, 3). What is the value 4 for?
    2. Looks like train_batches[0] is a training image. What is train_batches[0][0]?
    3. train_batches[0][1] is the training label, right?
    4. For training the model, you are using model.fit(train_batches, validation_data=val_batches, callbacks=[early_stopping], epochs=epochs, verbose=2).
    In your video 3, you are using model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, shuffle=True, verbose=2).
    So I guess train_batches holds both the training data and the labels, and we don't need to separate it into x_train and y_train to call fit.

    • @patloeber
      @patloeber 3 years ago

      1. 4 is the number of samples in this batch, so here you have 4 training images.
      2. train_batches[0] is the first batch: a tuple holding the first 4 images and their labels.
      3. train_batches[0][0] is the batch of 4 images and train_batches[0][1] is the batch of 4 labels; a single image would be train_batches[0][0][0].
      4. yes, correct
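      The indexing discussed above can be sketched with dummy numpy arrays shaped like what the generator yields (the shapes match the (4, 256, 256, 3) output printed in the question; 5 classes is an assumption based on the dataset):

```python
import numpy as np

# Stand-in for one batch from the generator: 4 RGB images of 256x256
# plus one-hot labels for 5 classes.
images = np.zeros((4, 256, 256, 3), dtype=np.float32)
labels = np.zeros((4, 5), dtype=np.float32)
batch = (images, labels)        # train_batches[0] is an (images, labels) pair

print(batch[0].shape)           # (4, 256, 256, 3) - the batch of images
print(batch[1].shape)           # (4, 5)           - the batch of labels
print(batch[0][0].shape)        # (256, 256, 3)    - one single image
```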

  • @csok758
    @csok758 2 years ago

    Great video, thanks so much! I have a question about running this on a Windows system. I have been getting path-related errors: "Found 0 images belonging to 5 classes." Do you have a version for Windows?
    Cheers

  • @w.d.1373
    @w.d.1373 3 years ago

    Thank you for your good video. But I have a question. I copied your code and adjusted it for my directory and files, but when I check train_batch[0].shape, the output is (4, 3, 256, 256), not (4, 256, 256, 3). Because of this, the images don't appear in the output and I can't compile the model either. I only changed the directory and didn't touch anything else. Can you imagine where I should look for this problem?
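    A shape of (4, 3, 256, 256) means the batch is channels-first instead of the channels-last layout the tutorial expects. One likely cause (an assumption, not confirmed by the thread) is `"image_data_format": "channels_first"` in `~/.keras/keras.json`; it can be switched back with `tf.keras.backend.set_image_data_format("channels_last")`. A batch that is already channels-first can also be converted by moving the channel axis:

```python
import numpy as np

batch_cf = np.zeros((4, 3, 256, 256), dtype=np.float32)  # channels-first batch
batch_cl = np.transpose(batch_cf, (0, 2, 3, 1))          # move channels last
print(batch_cl.shape)  # (4, 256, 256, 3)
```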

  • @ibrahimkhan9635
    @ibrahimkhan9635 3 years ago

    Sir, please tell us how to reduce overfitting. I have been trying all the possibilities to reduce it but wasn't able to.

    • @patloeber
      @patloeber 3 years ago

      yeah it's difficult in this project because we don't have a lot of images. you can try more image augmentation methods

  • @kccchiu
    @kccchiu 3 years ago

    Thank you for the great video. I have a question about 19:05.
    Why do we have to apply softmax? I thought from_logits=True already replaces the softmax.
    I applied argmax on the predictions without softmax and got the same result, so is it just a coincidence that I got the same result?

    • @patloeber
      @patloeber 3 years ago +1

      it's not a coincidence: the highest raw logit is also the highest probability after softmax, so you get the same index. That works for getting the predicted class, but you can't read off the exact probability without softmax
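      The point in the reply — softmax is strictly increasing, so it never changes which index wins argmax — can be checked with a small numpy sketch (the logit values are made up for illustration):

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, -1.0, 0.5, 3.5, 1.0])  # raw model outputs
probs = softmax(logits)

# the winning index is identical before and after softmax ...
print(np.argmax(logits), np.argmax(probs))  # 3 3
# ... but only the softmax output is a probability you can report
print(probs.round(3))
```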

  • @bosszz1282
    @bosszz1282 2 years ago

    ValueError: Layer "sequential_3" expects 1 input(s), but it received 2 input tensors. Inputs received: [, ]
    What is the problem?
    (around 16:00)

    • @bosszz1282
      @bosszz1282 2 years ago

      I figured it out; I had typed something wrong.

  • @ibrahimkhan9635
    @ibrahimkhan9635 3 years ago

    Sir, please tell me how I can reduce overfitting.

    • @ibrahimkhan9635
      @ibrahimkhan9635 3 years ago

      Tried every possibility

    • @patloeber
      @patloeber 3 years ago

      oh yeah it's probably a hard task in this project since we don't have much training data :/ using data augmentation could help
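      Data augmentation, as suggested in the reply, means generating extra training variants of each image. In the tutorial this is configured via ImageDataGenerator arguments such as rescale, rotation_range, horizontal_flip and zoom_range (the exact values below are hypothetical). The simplest of these, a horizontal flip, is just a reversed width axis:

```python
import numpy as np

# In Keras this would be configured roughly as (values are illustrative):
#   ImageDataGenerator(rescale=1/255., rotation_range=20,
#                      horizontal_flip=True, zoom_range=0.2)
# A horizontal flip on an (H, W, C) array reverses the width axis:
img = np.arange(12, dtype=np.float32).reshape(2, 3, 2)  # toy (H, W, C) image
flipped = img[:, ::-1, :]

print(flipped.shape)  # same shape, mirrored content
```

      Flipping twice restores the original, so the label stays valid — which is the whole point: the class is unchanged while the pixels vary.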

    • @ibrahimkhan9635
      @ibrahimkhan9635 3 years ago

      @patloeber Sir, I tried augmentation and the accuracy is still low.