Stephen Wolfram Readings: What’s Really Going On in Machine Learning? Some Minimal Models

  • Published: 15 Sep 2024

Comments • 16

  • @bobbyjunelive1993 9 days ago

    The "very tiny" reduced mesh net size (the first example) is actually quite impressive for such two nodes.

  • @alexmartos9100 18 days ago +3

    So let me get this straight… what Mr. Wolfram is suggesting is that large neural networks, like the one used in ChatGPT, are learning via a process indistinguishable from our current understanding of how biological organisms evolve through adaptive evolution/random mutations? (A minimal sketch of this mutation-based training appears after this thread.)

    • @yrebrac 18 days ago +2

      No, he's not suggesting they're the same, but that there's a more fundamental common foundation to both, based in part on core principles like computational irreducibility that he presented in the NKS book a long time ago.
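
    For context, the training procedure discussed in the talk is essentially random single-weight mutation with greedy acceptance: mutate one weight, keep the change if the loss does not get worse, revert otherwise. Below is a minimal Python sketch of that idea; the tiny network shape, toy target, and step size are illustrative assumptions, not taken from the talk.

    ```python
    import random

    # Toy target to learn: f(x) = x^2 on a few sample points.
    samples = [x / 4.0 for x in range(-4, 5)]
    target = lambda x: x * x

    def net(w, x):
        # A tiny fixed-shape net: two ReLU units feeding a linear output.
        h1 = max(0.0, w[0] * x + w[1])
        h2 = max(0.0, w[2] * x + w[3])
        return w[4] * h1 + w[5] * h2

    def loss(w):
        return sum((net(w, x) - target(x)) ** 2 for x in samples)

    weights = [random.uniform(-1.0, 1.0) for _ in range(6)]
    best = loss(weights)

    for step in range(50_000):
        i = random.randrange(len(weights))       # pick one weight at random
        old = weights[i]
        weights[i] += random.uniform(-0.1, 0.1)  # random mutation
        new = loss(weights)
        if new <= best:
            best = new        # keep the mutation (loss not worse)
        else:
            weights[i] = old  # revert the mutation

    print("final loss:", best)
    ```

    Nothing in this loop "understands" the target; greedy random mutation alone finds workable weights, which is the parallel to adaptive evolution the question is pointing at.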

  • @Sûlherokhh 18 days ago

    As always, a joy to follow your thoughts on the matter, on any matter actually. Thank you Stephen! ❤

  • @josephgraham439 8 days ago

    We're at the beginning of something tremendous.

  • @nunomaroco583 19 days ago

    Wow, thanks for the attention, amazing...

  • @yrebrac 18 days ago

    Thanks for sharing, Stephen; inspirational and awe-inspiring science.

  • @Emi-jh7gf 20 days ago +1

    At 1:00:00, for the derivative with respect to x: why don't [1,1,1] and [0,1,1] return 1? w[1,1,1] = 0 and w[0,1,1] = 1, so a change in the value of the left-most bit changes the value of the function/rule. (See the Boolean-derivative sketch right after this comment.)
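
    For readers following along: the discrete derivative being discussed is the Boolean derivative of a rule, i.e. the XOR of the rule's output with a given input bit set to 1 versus set to 0. Here is a minimal Python sketch of that check, assuming elementary Rule 110 purely because it matches the values quoted above (w[1,1,1] = 0, w[0,1,1] = 1); the talk may use a different rule.

    ```python
    # Boolean (discrete) derivative of a cellular-automaton rule:
    #   df/dx_i = f(..., x_i = 1, ...) XOR f(..., x_i = 0, ...)
    # Rule 110 is assumed here only because it matches the quoted
    # values: rule[(1, 1, 1)] == 0 and rule[(0, 1, 1)] == 1.

    RULE_NUMBER = 110

    # Lookup table: neighborhood (left, center, right) -> output bit.
    rule = {
        (l, c, r): (RULE_NUMBER >> (4 * l + 2 * c + r)) & 1
        for l in (0, 1) for c in (0, 1) for r in (0, 1)
    }

    def boolean_derivative(rule, neighborhood, i):
        """XOR of the rule's outputs with bit i forced to 1 vs. 0."""
        hi, lo = list(neighborhood), list(neighborhood)
        hi[i], lo[i] = 1, 0
        return rule[tuple(hi)] ^ rule[tuple(lo)]

    print(rule[(1, 1, 1)], rule[(0, 1, 1)])        # -> 0 1
    # Derivative w.r.t. the left-most bit when (center, right) = (1, 1):
    print(boolean_derivative(rule, (1, 1, 1), 0))  # -> 1, as argued above
    ```

    Under this definition the derivative at that position is indeed 1, which is exactly the point the question raises; whether the table shown at that timestamp defines the derivative the same way is left open here.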

  • @evynt9512 16 days ago

    Glad we can finally make sense of cellular automata!
    That is astonishing, because cells are the lowest-level equivalent of discrete space!

  • @NightmareCourtPictures 20 days ago +1

    Brilliant

  • @wwkk4964 19 days ago

    Dr. Wolfram! Amazing presentation! I am waiting for you to collaborate with Michael Levin and Denis Noble!

  • @WalterSamuels 16 days ago

    We are a product of our environment, not just our starting conditions. In fact, our environment becomes the dominant factor. We are here because those that would not survive did not survive. It's easy to convince ourselves this is and was an active process of choice, but it was merely a result of the fact that that which can thrive, does thrive. It just so happens that the optimally thriving being has interesting properties. Free will is a strong illusion.
    Also, I agree with the final assessment: constraining AI to interpretability, and reducing its complexity to confine it to computational reducibility, will ensure that it never achieves AGI. What makes us human is our ability to explore the irreducible, to endlessly pluck new insight from it, and to continuously grow and expand our bounds. To confine the AI in the name of "safety", to restrict its outputs, to put conditions on it, only serves to prevent its evolution.

  • @SandipChitale 20 days ago

    Excellent.
    Stephen, can some of the approaches related to trying all mutation change-maps, versus multiple mutations at the same time, be applied to the so-called fine-tuning problem/principle of our universe? That is, by varying different constants at the same time, is it possible to get stable universes, rendering the fine-tuning argument moot?

  • @JustinHedge 19 days ago

    👏🙌