Neural Networks using Lux.jl and Zygote.jl Autodiff in Julia

  • Published: 5 Oct 2024

Comments • 10

  • @mehdizahedi2810
    @mehdizahedi2810 12 days ago

    AWESOME, best tutorial on Lux.jl. Please add more tutorials on deep learning with Julia.

  • @innidynamics7510
    @innidynamics7510 1 year ago +1

    Amazing tutorial

  • @pradipta.
    @pradipta. 1 year ago +1

    Wonderful Video 👏

  • @matteopiccioni196
    @matteopiccioni196 1 year ago

    This is seriously an amazing channel. A while ago I read some performance comparisons between Julia and other high-level programming languages, and what astounded me was the flexibility of Julia together with its speed (close to C++). So now my question to you: can Julia overtake Python? I'm currently using Python, but it has some limitations, first of all speed and a lack of backwards compatibility, while Julia seems to solve these two main drawbacks. On the other hand, Julia doesn't have all of those libraries and the strong community that Python has (TensorFlow by Google, for instance).

    • @MachineLearningSimulation
      @MachineLearningSimulation  11 months ago

      First of all thanks for the kind words on the channel, much appreciated 😊
      I think the discussion of Julia vs. Python (or, more precisely, vs. the major DL frameworks in Python) is a large one. I just wanted to leave this Discourse thread: discourse.julialang.org/t/state-of-machine-learning-in-julia/74385
      and this blog post: kidger.site/thoughts/jax-vs-julia/ and, to some extent, also this one: www.stochasticlifestyle.com/engineering-trade-offs-in-automatic-differentiation-from-tensorflow-and-pytorch-to-jax-and-julia/
      My opinion: I like both Julia and Python (in particular JAX). About 6 months ago, I actually switched from Julia to JAX in my PhD research because of:
      1. The "vmap" function transformation
      2. More mature and feature-rich autodiff engine
      3. Python ecosystem
      Regarding your question, will Julia overtake Python? Maybe. As you mention, many Python packages have millions of dollars of industry backing, which Julia does not have yet. And industry and the research community are unfortunately slow to change; they stick with what is working.
      Let's see what the future holds, also with the new player "Mojo".

  • @diegosorte
    @diegosorte 1 year ago +1

    Hi! Thank you very much for the video! I just wanted to ask: what is the meaning of the peak around epoch 1.7e4 in the plot of the loss function?

    • @MachineLearningSimulation
      @MachineLearningSimulation  11 months ago +1

      Hi, thanks for the great question 😊.
      Sorry for the late reply, the comment got a bit lost.
      In my experience, this behavior of the loss curve indicates that the learning rate might be too high. If one looked at the loss landscape in the proximity of the points where we experience this behavior (often close to minima), one would probably see a mostly flat region next to a steep one. Once the optimizer hits the steep wall, it is kicked back.
      We should be able to alleviate this by either 1) selecting a smaller learning rate overall or 2) decreasing the learning rate over the course of training (for example, exponentially); a sketch follows below.
      This should be the Julia package implementing the most common scheduling strategies: github.com/FluxML/ParameterSchedulers.jl
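
      For what it's worth, here is a minimal sketch of option 2, an exponentially decaying learning rate. It assumes the Exp schedule from ParameterSchedulers.jl and the Adam rule plus adjust! from Optimisers.jl (keyword names can differ between package versions); the toy params and random grads are placeholders for what Lux.setup and Zygote.gradient would provide:

          using Optimisers, ParameterSchedulers

          # Placeholder parameters; with Lux.jl these would come from Lux.setup.
          params = (weight = randn(3, 3), bias = zeros(3))

          # Exponentially decaying learning rate: lr(t) = 1e-3 * 0.99^(t - 1)
          schedule = Exp(start = 1e-3, decay = 0.99)

          # Initialize Adam with the schedule's first learning rate.
          opt_state = Optimisers.setup(Adam(schedule(1)), params)

          for (epoch, lr) in zip(1:1000, schedule)
              Optimisers.adjust!(opt_state, lr)  # shrink the step size over training

              # Placeholder gradients; in practice these come from Zygote.gradient.
              grads = (weight = randn(3, 3), bias = randn(3))
              opt_state, params = Optimisers.update(opt_state, params, grads)
          end

      The intuition: the large early steps explore quickly, while the decayed late steps are too small to be kicked off a steep wall, so spikes like the one around epoch 1.7e4 should flatten out.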

    • @diegosorte
      @diegosorte 11 months ago

      Hey! Thank you for the explanation and the link! Since then I have been experimenting a bit and can confirm what you explained. I'm also following all your videos! Nice content! 🙌🏼☺️