Neural Networks in 100 minutes: Coded from scratch

  • Published: 25 Jan 2025

Comments • 15

  • @wingsoftechnology5302
    @wingsoftechnology5302 7 days ago

    Thank you very much for all the hard work. Awesome lesson!

  • @sharukhshaik7476
    @sharukhshaik7476 6 months ago +6

    Thank you for this amazing tutorial! As someone with no prior experience in neural networks, I found your explanations incredibly clear and easy to follow. The way you broke down the concepts on the whiteboard made everything much more understandable. I was able to completely and clearly understand all the concepts in one sitting. I really appreciate the effort you put into making this video. I will definitely recommend it to my friends and colleagues. Looking forward to more tutorials from you!

    • @vizuara
      @vizuara  6 months ago

      Thanks for the support! Very happy that you liked the tutorial :)

  • @Omunamantech
    @Omunamantech 1 month ago

    Amazing Lecture!

  • @shilpavpurushothaman
    @shilpavpurushothaman 2 months ago

    Amazing tutorial!

    • @PushpawatiChadha
      @PushpawatiChadha 19 days ago

      😮😮😮😮😮😢😢😊😊😊😊😊

  • @deepikadeepika-v7z
    @deepikadeepika-v7z 6 months ago +2

    Thank you very much for all the hard work you and your team have put in to make these fantastic tutorials and well-structured materials. Learning a lot.

    • @vizuara
      @vizuara  6 months ago

      Thanks for the support Deepika :)

  • @TravelWithAD
    @TravelWithAD 6 months ago +2

    Superb explanation. Thank you for the hard work you put in.

  • @mukherjeesrijit
    @mukherjeesrijit 6 months ago +2

    This is great work. Thank you for this. However, discussing the autograd aspect is important, because it teaches or motivates learners to generalize backpropagation from a computational standpoint. What do you think? I would love to know and understand your viewpoint.

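The autograd question above is worth illustrating. Below is a minimal reverse-mode autodiff sketch in Python, in the spirit of micrograd rather than the video's own code (the Value class and its methods are assumptions made here for illustration): each operation records how to route the output gradient back to its inputs, so one generic backward() pass reproduces backpropagation for any network built from these primitives.

```python
# Minimal reverse-mode autodiff sketch (illustrative assumption, not the video's code).
import math

class Value:
    """A scalar that records the operations applied to it so gradients
    can be propagated backwards automatically."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how this node pushes grad to its inputs
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad          # d(a+b)/da = 1
            other.grad += out.grad         # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t ** 2) * out.grad  # d(tanh x)/dx = 1 - tanh^2(x)
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the computation graph, then apply the chain rule
        # from the output back to every input.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

# One neuron, y = tanh(w*x + b); gradients land on w, x and b automatically.
x, w, b = Value(2.0), Value(-0.5), Value(0.1)
y = (w * x + b).tanh()
y.backward()
print(w.grad, x.grad, b.grad)
```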
  • @joelbruce4093
    @joelbruce4093 14 days ago

    I have read that SGD generalizes better to new data than Adam does. Would SGD just be the code without adjusting the weights and biases with momentum and caches? Have you explored this at all? Curious to hear your thoughts! Thank you for the excellent videos!

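On the SGD-versus-Adam question above: a vanilla SGD step is indeed just the raw gradient update, with no momentum term and no per-parameter cache, whereas Adam maintains both. A rough NumPy sketch of the two update rules (function and variable names here are illustrative assumptions, not the video's code):

```python
# Hedged sketch: vanilla SGD vs. an Adam-style update on one weight matrix.
import numpy as np

def sgd_update(weights, dweights, lr=0.01):
    """Plain SGD: step against the gradient, no momentum, no cache."""
    weights -= lr * dweights
    return weights

def adam_update(weights, dweights, m, v, t, lr=0.001,
                beta1=0.9, beta2=0.999, eps=1e-7):
    """Adam: keep a momentum (m) and a squared-gradient cache (v) per parameter."""
    m = beta1 * m + (1 - beta1) * dweights          # momentum of gradients
    v = beta2 * v + (1 - beta2) * dweights ** 2     # cache of squared gradients
    m_hat = m / (1 - beta1 ** t)                    # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    weights -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return weights, m, v

# Usage on dummy data:
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 2))
grad = rng.standard_normal((3, 2))
w_sgd = sgd_update(w.copy(), grad)
m, v = np.zeros_like(w), np.zeros_like(w)
w_adam, m, v = adam_update(w.copy(), grad, m, v, t=1)
```

Whether the plain update actually generalizes better is an empirical question of task and tuning; the sketch only shows the mechanical difference between the two rules.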
  • @ajay-aiclub
    @ajay-aiclub 6 months ago

    Hi Raj, thank you for this wonderful explanation. Would you be able to share the Miro dashboard, perhaps with read-only access? What would be the best way to get a view of it, please? Thank you.

  • @dragon5038
    @dragon5038 6 months ago

    Can you provide lecture notes? It's important, please.

  • @ankush_k.s
    @ankush_k.s 6 months ago

    Excellent!

  • @agarwalyashhh
    @agarwalyashhh 6 months ago +1

    Exactly copied from sentdex's NNFS book, but a good tutorial overall.