Been following the Numenta story for years and really enjoy watching the talks that you share. HTM School was one of my favorite experiences in AI/ML, and I've read On Intelligence and A Thousand Brains multiple times each. The performance of NuPIC on CPUs looks incredible! We really need to improve the price/performance of inference, and NuPIC looks like a massive step in the right direction.
It would be nice to see support for the open-source community. Can NuPIC run on AVX processors with four cores? Most individuals can't afford those huge Intel server CPUs, which cost many thousands of dollars for the chip alone. I've followed Numenta for many years, yet its technology has always remained obscure to the rest of the AI community. Please give the open-source community the tools to experiment with your technology, and you could reap the benefits of their collective enthusiasm.
Good going, Subutai. Glad to see how you are bringing HTM and NuPIC to market.
Great to see Numenta engaging more!
Congratulations Subutai!
Been a while since I looked at Numenta stuff.
However... every neural network optimization I look at that claims to run better on CPU consists of algorithms that have already been ported to GPUs to run much faster. I'm very familiar with Numenta's older algorithms, so I'm curious what new stuff can't be optimized for GPUs.
The HTM school was well done.
Why isn’t this getting more exposure? This sounds revolutionary.
The question is: how do they do it? CPUs have higher clock speeds and access to far more memory than GPUs, but they are much worse at massively parallel workloads. Unless it's a hack using multi-threading ...
See the HTM School lectures.
Maybe sparsity + extremely low precision
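Just speculating, but a quick numpy sketch shows why sparsity alone changes the economics, independent of precision. The 95% sparsity level and matrix size below are made-up illustration numbers, not Numenta's actual kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

# A weight matrix with ~95% of entries zeroed out, mimicking the
# kind of weight sparsity Numenta talks about.
dense_w = rng.standard_normal((n, n)).astype(np.float32)
mask = rng.random((n, n)) < 0.05          # keep ~5% of weights
sparse_w = dense_w * mask

x = rng.standard_normal(n).astype(np.float32)

# A dense matvec touches every weight: n*n multiply-accumulates.
dense_macs = sparse_w.size
# A sparse kernel only touches the nonzeros: ~5% of that work.
nnz = int(mask.sum())

print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {nnz} (~{nnz / dense_macs:.0%} of dense)")

# Both paths give the same answer; the sparse one just skips ~95%
# of the arithmetic. Here we emulate a CSR-style kernel by iterating
# only over the nonzero coordinates.
y_dense = sparse_w @ x
rows, cols = np.nonzero(sparse_w)
y_sparse = np.zeros(n, dtype=np.float32)
np.add.at(y_sparse, rows, sparse_w[rows, cols] * x[cols])
assert np.allclose(y_dense, y_sparse, atol=1e-3)
```

Of course a GPU can also exploit sparsity, so this doesn't settle the CPU-vs-GPU question; it only shows why the FLOP count per inference can drop by an order of magnitude.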
Numenta is a lost cause. They should just give up. No one cares about CPUs. Firstly, it is 2024: Blackwell is almost ready for training, and there are even better ASIC chips for inference. Xeon isn't good for training, and it's not that fast for inference either.
Numenta is a gem ... I'm wondering when is a good time to invest in them.