Mastering Optimizers, Loss Functions, and Learning Rate in Neural Networks with Keras and TensorFlow

  • Published: 5 Oct 2024

Comments • 14

  • @NicolaiAI
    @NicolaiAI  1 year ago +1

    Join My AI Career Program
    www.nicolai-nielsen.com/aicareer
    Enroll in My School and Technical Courses
    www.nicos-school.com

  • @geodev
    @geodev 3 years ago +1

    Nice explanation! Thanks for the video

    • @NicolaiAI
      @NicolaiAI  3 years ago

      Thank you very much! Really appreciate it

  • @notgabby604
    @notgabby604 3 years ago +1

    You can use Continuous Gray Code Optimization, which is a simple type of evolutionary algorithm. The advantage is that you can use sparse lists of mutations and split the training data over multiple GPUs very easily. Each GPU just returns the cost for its part of the training data; those costs are then summed together to see if the mutation list was a good idea. There are also Fast Transform fixed-filter-bank neural nets that not many people know about.
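
    A minimal sketch of that mutate-score-revert loop, with a plain Gaussian perturbation standing in for the Gray-code mutation scheme and NumPy arrays standing in for the per-GPU data shards:

    import numpy as np

    def total_cost(weights, shards):
        # Each "GPU" scores its own shard; the per-shard costs are summed.
        return sum(np.mean((x @ weights - y) ** 2) for x, y in shards)

    def evolve(weights, shards, steps=1000, scale=0.01, seed=0):
        rng = np.random.default_rng(seed)
        best = total_cost(weights, shards)
        for _ in range(steps):
            # Sparse mutation list: perturb just a few random weights.
            idx = rng.choice(weights.size, size=3, replace=False)
            delta = rng.normal(0.0, scale, size=3)
            weights[idx] += delta
            cost = total_cost(weights, shards)  # summed across shards
            if cost < best:
                best = cost                     # keep the mutation
            else:
                weights[idx] -= delta           # revert it
        return weights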

  • @tracesofthesunjingab
    @tracesofthesunjingab 2 years ago

    I would like to ask how to use the loss, optimizer, and activation functions in MLP classification. Thank you.
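
    For reference, a minimal Keras sketch of how those three pieces fit together in an MLP classifier (layer sizes, class count, and learning rate are example values):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),    # hidden-layer activation
        tf.keras.layers.Dense(10, activation="softmax"), # output activation over 10 classes
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # optimizer + learning rate
        loss="sparse_categorical_crossentropy",                  # loss for integer labels
        metrics=["accuracy"],
    )
    # model.fit(x_train, y_train, epochs=10)  # train on your own data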

  • @teddybest02
    @teddybest02 2 years ago +1

    God bless you

    • @NicolaiAI
      @NicolaiAI  2 years ago

      Thanks for watching! God bless you

  • @cherryoncode
    @cherryoncode 3 years ago +1

    You are great!

  • @anshumansinha5874
    @anshumansinha5874 2 years ago

    Hey Nielsen, a very nice presentation. Could you kindly help me convert some PyTorch code to TensorFlow?

  • @satisfiedmeteor2752
    @satisfiedmeteor2752 2 years ago +1

    It's not
    from keras.optimizers import Adam
    anymore, but
    from keras.optimizer_v2.adam import Adam

    • @JacobGravesen
      @JacobGravesen 2 years ago

      You are totally right man, thanks for the tip

    • @ehabyahia
      @ehabyahia 1 year ago

      from tensorflow.keras.optimizers import Adam
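
      For reference, that import in a minimal working snippet, with the learning rate set explicitly (0.001 is the Keras default for Adam):

      import tensorflow as tf
      from tensorflow.keras.optimizers import Adam

      model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
      model.compile(optimizer=Adam(learning_rate=0.001), loss="mse")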