NuPIC: A New Era of AI Inference | CES 2024

  • Published: 27 Oct 2024

Comments • 14

  • @DannyWelch • 7 months ago • +16

    Been following the Numenta story for years and really enjoy watching the talks that you share. HTM School was one of my favorite experiences in AI/ML, and I've read On Intelligence and A Thousand Brains multiple times each. The performance of NuPIC on CPUs looks incredible! We really need to improve the price/performance of inference, and NuPIC looks to be a massive step in the right direction.

  • @andrewowens5653 • 7 months ago • +8

    It would be nice to see support for the open source community. Can NuPIC run on AVX processors with four cores? Most individuals can't afford those huge Intel server CPUs, which cost many thousands of dollars just for the chip. I've followed Numenta for many years, yet its technology has always remained obscure to the rest of the AI community. Please give the open source community the tools to experiment with your technology, and you could reap the benefits of their collective enthusiasm. (A quick local check for the AVX part of this question is sketched after the comments below.)

  • @Niki007hound • 1 month ago

    Good going, Subutai. Glad to see how you are bringing HTM and NuPIC to market.

  • @rb8049 • 7 months ago • +6

    Great to see Numenta engaging more!

  • @rogercole • 7 months ago • +3

    Congratulations Subutai!

  • @simleek • 7 months ago • +4

    Been a while since I looked at Numenta stuff.
    However... every neural network optimization I look at that claims to run better on CPU consists of algorithms that have already been ported to GPUs to run much faster. I'm very familiar with Numenta's older algorithms, so I'm curious what new stuff can't be optimized for GPUs.

  • @kevinmaillet8017 • 7 months ago • +4

    The HTM School was well done.

  • @saturdaysequalsyouth • 7 months ago

    Why isn’t this getting more exposure? This sounds revolutionary.

  • @othfrk1 • 7 months ago • +3

    The question is: how do they do it? CPUs have faster individual cores and more memory than GPUs, but they are not as good at running things in parallel. Unless it's a hack using multi-threading ...

    • @JurekOK • 7 months ago • +4

      See the HTM School lectures.

    • @egor.okhterov • 7 months ago • +1

      Maybe sparsity + extremely low precision (a sketch of this guess appears after the comments below)

  • @ps3301 • 5 months ago

    Numenta is a lost cause. They should just give up. No one cares about CPUs. Firstly, it is 2024 and Blackwell is almost ready for training. There are even better ASIC chips for inference. Xeon isn't good for training, and it's not that fast for inference either.

  • @madmanzila • 7 months ago

    Numenta is a gem ... I'm wondering when a good time to invest in them would be.
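
On @andrewowens5653's question above about running NuPIC on a four-core AVX machine: Numenta doesn't answer it in this thread, so the feature flags tested below are only an assumption about what a CPU inference library is likely to use. This is a minimal, Linux-only probe of what the local chip advertises, not a statement of NuPIC's actual requirements.

```python
# Linux-only: read the CPU feature flags from /proc/cpuinfo and report
# the vector extensions a CPU inference library would plausibly rely on.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# The choice of flags to test is an assumption, not NuPIC's published spec;
# avx512_vnni (int8 dot-product support) is mostly a server-chip feature.
for feature in ("avx", "avx2", "avx512f", "avx512_vnni"):
    print(f"{feature:12s} {'yes' if feature in flags else 'no'}")
```

Any recent four-core desktop part will report avx and avx2; avx512f and avx512_vnni are largely confined to Xeon and a handful of client generations, which is presumably the gap the commenter is worried about.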
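@othfrk1's "how do they do it?" and @egor.okhterov's "sparsity + extremely low precision" guess can also be illustrated without knowing NuPIC's internals. Below is a minimal NumPy/SciPy sketch of that guess only; the 90% sparsity level, the layer size, and the single per-tensor int8 activation scale are made-up illustration values, not Numenta's design.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Hypothetical dense layer: 4096 x 4096 float32 weights.
dense_w = rng.standard_normal((4096, 4096)).astype(np.float32)

# Zero out 90% of the weights (unstructured sparsity, illustration only).
mask = rng.random(dense_w.shape) < 0.90
sparse_w = csr_matrix(np.where(mask, 0.0, dense_w))

# Quantize the activations to int8 with one per-tensor scale factor.
x = rng.standard_normal(4096).astype(np.float32)
scale = np.abs(x).max() / 127.0
x_q = np.round(x / scale).astype(np.int8)

# The CSR matvec touches only the ~10% nonzero weights; the int8
# activations are dequantized on the way in.
y = sparse_w @ (x_q.astype(np.float32) * scale)

print(f"nonzero weights kept: {sparse_w.nnz / dense_w.size:.1%}")
print(f"output shape: {y.shape}")
```

A CSR matvec skips zero entries entirely, so compute and memory traffic scale with the nonzeros rather than the full matrix. That irregular, unstructured access pattern is also the kind of workload dense GPU kernels handle poorly, which arguably bears on @simleek's question about what resists GPU porting.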