lol. There is something tricky going on. They compare a 25 TOPS DeepX chip with a 200 TOPS Nvidia Jetson Orin!!! and still declare the 25 TOPS chip is more powerful than the 200 TOPS one??? Sure, they use the same software and screen resolution, but what about the other software settings??? It looks too good to be true to me. Just be careful.
We'll try to get to the bottom of this if we can get an update interview
Surely a $50 Google Tensor chip can do the same thing! 😅
It looks like it's going to have to run their own AI software too. Can't use models from HuggingFace, for example. But I could be wrong, just reading the website. Still looks good, but this isn't a consumer product at all by the looks of it (yet).
It's going to be optimised for a couple of tasks. Won't be flexible silicon
No heat, no power loss, excellent invention.
For those who are wondering, GPUs are general-purpose devices. Think of it like a machine shop with a lot of machine tools, while what I believe they are making is probably an FPGA/ASIC design built to solve these particular algorithms, which is like a production line! You can make a car in a machine shop, but it takes really sophisticated machines and a lot of skill (pricey! slow), while a production line can make a car a minute! It's not magic; people are just clouded by Nvidia, whose marketing indirectly discourages this kind of technology, so robotics is filled with hardware-illiterate engineers, idk… you don't need a GPU to solve AI math. It is math, and it can be better optimised with FPGAs.
Both of these situations are businesses. New solutions need to be properly marketed while they're being produced. If you look closely, the CEO here is wearing his jacket like he's on his way out. The success of a company relies on proper sales, including demonstrating that you aren't on your way out. He should be dedicated to giving a scripted presentation rather than giving the interview away because it's convenient. That's a well-known issue with the scientific community: they think more features will sell more VCRs, while most of them blink 12:00. Making the best burger doesn't mean people will simply line up at your door.
so this is not going to run Cyberpunk 2077 in path tracing at hundreds of fps
I have been saying for years, "Big Tech has been ripping us off since the '90s". We should have way more processing speed and efficiency in 2025, but they have been gatekeeping the advancements (since day one). I hate to say it, but they (you know who) may try to "poof, be gone" this young man just for offering something at this price, not to mention the performance.
Cars don't get made in a minute; it takes many hours from the start of the assembly line to the time a car gets to the end.
Love this video and the presenting style. Accessible and engaging. Keep pushing the boundaries of what is possible on RUclips. I will subscribe and hope to see more in the future. Bless from Mumbleton, Gloucestershire. Tony Williams.
OMG, I saw their chips video years ago but lost track of this company. Wish them best of luck.
Nice. Let us know if there are any other interesting disruptive companies you'd like us to interview...
Definitely a developing category. I will try to get some of their modules to evaluate. I have a ton of similar edge AI SoCs from other manufacturers that run the models they show, including cheaper devices from NVIDIA with similar capabilities to those they claim. I just got an AX630 module from Axera for around the $50 mark, as an example.
I need such a solution to use as an external NPU for my laptop, for running LLMs locally. Can it be connected externally, and is it available for purchase?
This is way too low-spec for running LLMs. What they're running here are extremely small vision models (tens of millions of parameters), whereas even the lowest-end LLMs are ~1 billion parameters. One of the just-announced Nvidia Jetson systems is what you'd want. (The newest one is the one to get, and I don't think they're quite on the market yet. The previous version is $500 vs $249 and has lower performance; wait for the new one.)
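For a sense of scale, here's a back-of-envelope sketch of the weight memory involved; the parameter counts are just the round numbers from the comment above, and it ignores activations and KV cache entirely.

```python
# Rough weight-storage arithmetic (assumed round numbers, weights only).
def weight_gb(params: float, bits: int) -> float:
    """GB needed just to hold the weights at a given precision."""
    return params * bits / 8 / 1e9

for name, params in [("~25M vision model", 25e6), ("~1B small LLM", 1e9)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: {weight_gb(params, bits):.3f} GB")
# ~25M model: 0.050 / 0.025 / 0.013 GB -- trivial for an edge module
# ~1B LLM:    2.000 / 1.000 / 0.500 GB -- before activations and KV cache
```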
Very cool, I am happy to see that more people will be able to afford to get in the game with a quality solution.
NVIDIA is not even worried; their only fear is silicon quantum photonic chips.
So can it be used for inference and training as well, or just inference?
Very interesting. I would like to know more about it. Finding some evaluation boards to try this IC out could be a great starting point for new AI projects.
The cost ratio is excellent; however, the nice thing about GPUs is that a lot of software can use them.
Does Windows see this device as a GPU? Or does it need special software to use it?
If they can make it "appear" like a GPU to Windows, and put 80GB or more on it, it would be a really huge improvement.
So... is it about 10x superior to GPUs overall (including 10x cheaper)? That's what I got from the video. What are the practical applications/uses of it? Or is it only for video recognition tasks? I'm not a technical person.
great topic, thanks 👍
❤this channel. Pretending to be a salty design engineer: the demos are application-specific, from Hyundai robotics, so one could argue these are ASICs. Will they work with general-purpose embeddable LLMs or vision models, etc.? Also, what numeric precision are we working with, half/full? Nonetheless, ASICs are pricey, but NPUs seem to keep getting better and cheaper. DeepX is one to watch for sure.
I found no info about how to purchase, BUT DeepX has an Early Engagement Customer Program.
Please let me know how this compares to the RK3588 with its 6 TOPS NPU (the $150 Orange Pi 5 Ultra), and to the Canaan Kendryte K230 RISC-V.
The price point is interesting; however, they need to scale it up to compete with the many growing competitors, both in memory and in trillions of operations per second. How well does the NPU calculate geometry compared to a GPU? A lot of applications need both! Lastly, in most cases you need at least 12GB of memory to do anything significant.
Love/peace, gracias to host and guest, what a presentation. Brilliant... congrats to the CEO and his team.
Thanks to you!
Where can I buy it?
I want to buy stock.
yes, where
DeepX have an Early Engagement Customer Program if you head to their website
@@jtjames79 But the CEO doesn't exactly inspire confidence, I have my doubts. 😂
@@SpecialOperationsCommand I like big returns, or nothing at all. I also only invest in things that I think are cool.
I want to have the future, and be able to afford it too. It's like eating two cakes with one rock, and having a bird.
This reporter is dense… it's an NPU, not a general-purpose compute accelerator (i.e. a GPU).
All you’ll get is niche acceleration on very specific workloads like classification, forget running an LLM on an NPU.
Why do you need a GPU? For training/finetuning/system design.
If you are an engineer, the ONLY reason you would have a product like this is inference testing for deployment, i.e. QA and deployment teams more than development.
This is a nothing burger
It can be used for surveillance in vertical integration, locally. A GPU would have to be offsite and have data fed over the internet, something impossible to do in, say, a desert or the Cambodian forest. It all depends on your perspective. Yours seems very, very narrow. This company is most surely looking to sell the tech to bigger companies. Remember, the first iPhone was a compilation of "nothing burgers". Apple just put them all together and gave them a unified GUI.
I'd like to see this compared to the Jetson Orin Nano in depth.
This is too good to be true; where do we get these MPUs?
I'd love to see more of the outcome and have a demo in my office in Taiwan.
That's some crazy hardware. He seems a nice guy with good intentions. Hope this company can disrupt the market soon. Fingers crossed this will be for sale soon.
How do you purchase? Advice please.. ty
Please post a link where I can buy the DeepX NPU chip or kits.
I want this baked into my Android device now.
Will these chips be in security cameras?
What models is it running? Need some tech details.
I did that with my Orange Pi 5, using YOLOv11. It's simple and efficient, but not more powerful than a GPU.
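For anyone curious what that kind of Orange Pi setup looks like in practice, here's a minimal sketch; it assumes the ultralytics package, a webcam at index 0, and the nano-sized "yolo11n.pt" checkpoint, so adjust for whatever model and camera you actually have.

```python
# Minimal person-detection loop of the kind described above (assumptions:
# ultralytics installed, a webcam at index 0, the small "yolo11n.pt" weights).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")              # nano model, sized for low-power boards

# stream=True yields one result per frame instead of buffering the whole run
for result in model(source=0, stream=True):
    people = [b for b in result.boxes if int(b.cls) == 0]   # COCO class 0 = person
    print(f"{len(people)} people in frame")
```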
Where can I buy it???
hang on a minute hang on a minute, annoying host interrupts 10x to repeat down to the feet
4:11 Lots of lag on the screen compared to him moving. So this real-time thing is really slow
Looks like a rebranded Hailo-8 with 26 TOPS. Might as well get the Hailo-10H Generative AI Acceleration Module, an M.2 card with 40 TOPS at 3.5W. Want MORE POWER? Check out the M.2 accelerator from Axelera AI featuring 4 RISC-V instruction set cores, each capable of 53.5 TOPS, for a total of 214 TOPS on a 2280-size M.2 card. Need more TOPS? Install their PCIe AI accelerator card with 4 M.2 slots for a total of 856 TOPS. If you have PCIe 5 you could use the PCIe card with 8 M.2 slots for 1,712 TOPS.
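The arithmetic behind those totals, taking the quoted per-core figure at face value (vendor numbers, not measurements):

```python
# Quick check of the quoted TOPS figures (vendor numbers assumed, not measured).
tops_per_core = 53.5
m2_card = 4 * tops_per_core     # 4 RISC-V cores per M.2 card
print(m2_card)                  # 214.0  TOPS per M.2 card
print(4 * m2_card)              # 856.0  TOPS for a PCIe card holding 4 M.2 modules
print(8 * m2_card)              # 1712.0 TOPS for the 8-slot PCIe 5 card
```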
Mod parent up!
While the DEEPX DX-M1 is a powerful AI accelerator for edge computing applications, it is not designed for gaming purposes. GPGPUs remain the preferred choice for gaming due to their specialized architecture for graphics rendering and handling gaming workloads. Therefore, for gaming applications, a GPGPU would be more appropriate than the DX-M1.
Man, this is epic!!! This is the obvious choice for the next biggest AI cluster!!
There are a few companies doing dedicated NPUs in a datacentre format, from conventional small silicon (like these guys) to wafer-scale and even photonic computers, so a clear winner of "what comes next" isn't obvious. At a datacentre level, the ability for the separate processors to act as if they were one (a "gestalt entity") is often very important, and as I understand it Nvidia (and in particular Tesla's way of interconnecting them) is ahead of the pack. So in short, for datacentres, it's about cohesive horizontal scaling rather than subtlety!
Amazing. I can see this chip deployed in smart cities. How does it handle sound?
China already uses this in their city camera systems for pedestrians. Not sure what platform they use, but they are pretty advanced -- more than anything we have here in the US.
I love how friendly Chinese entrepreneurs and inventors are, amazing product, keen to get my hands on them!
korean*
There would be some use cases where latency is not an issue
Ipxchange you need to learn the fundamentals of presenting and interviewing before you go much further as your boyhood enthusiasm is getting in the way of the whole piece. And forget about drawing analogies as most of the people watching the video will probably understand without them. You also need to think a little harder about how to get to the nub of the piece which was totally missed as there's no information whatsoever about the actual differences between their chip/board/system and the rest. The piece needed a total rethink
How about you make your own damn channel then. Send me a link when it's ready.
Thanks, Mike. We're always learning the best way to present content in a way that's useful to design engineers/developers. We'll take your feedback on board and learn from it.
@@MarxOrx How about you allow constructive criticism to take place? If you liked it so much, you wouldn't be defending the obvious flaws.
I loved the way he presented the piece. Much better than the usual tech grind. I wouldn't change a thing. In fact I subscribed!
@@newhopeforhealing thank you for subscribing!
Build a large shape-and-picture association library on the chip and extrapolate from the camera to the closest match, bypassing the full need for processing to project the image. Similar to how one would use a shuffle sort in programming, but instead use an algorithm to encode and decode from, say, an AVI to an MKV format, etc., bypassing about 85 to 90% of the processing needed for an output. It may be a better way to manage vectors and control them.
Sorta like how China is using an interesting trick to track all aircraft, including stealth, from over 1,600 km away, well over the horizon. That said, China is still struggling to be able to lock onto stealth, hence the need for extremely high-speed chips and AI advancements that can be carried on board weapons, etc., for real-time use.
"Our chip is cooler than human, it's the coolest ever!!" I liked that line 😊
5 watts, 26 TOPS, $50. I'll believe it when I see it in the hands of some people 😂 or when I can buy one.
How much VRAM is in this chip?
My question exactly 😊. I think it's very small. They are running ResNet-50, a CNN of about 25 million parameters. It's tiny.
The speed demo is also not fair: the GPU is running at 8-bit while they are running at 4-bit.
Improved accuracy? Probably fine-tuning after quantization to 4-bit.
But I expect that's the near-future solution: edge devices compress the information and make simple decisions.
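A quick sanity check of that ResNet-50 figure, plus what the 8-bit vs 4-bit difference means for weight storage; this assumes torchvision is installed and says nothing about what either demo actually ran.

```python
# Count ResNet-50 parameters and estimate weight storage at two precisions
# (assumes torchvision; a back-of-envelope check of the comment above).
import torchvision

model = torchvision.models.resnet50(weights=None)
n_params = sum(p.numel() for p in model.parameters())
print(f"ResNet-50 parameters: {n_params / 1e6:.1f}M")      # ~25.6M

for bits in (8, 4):
    print(f"weights at {bits}-bit: {n_params * bits / 8 / 1e6:.1f} MB")
```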
Specific info about VRAM isn't available on DeepX's website... We'll do our best to get an update to answer these questions.
@@ipXchange This should have been the first question you asked
@@banknote501 it wasn’t! It’ll be the next one though.
I suspect it's using the same RAM as your CPU. It uses about the same amount of power as a stick of RAM; probably can't do both at once.
BrainChip and Akida blow this out of the water. Akida can use around 1 watt or less, actually way less with their Pico version. I like when he asked "has anyone else done this?" Ermmm, YES. If they are using spiking neural networks, I think they'd better check they aren't infringing on BrainChip's IP.
No open source policy. Trying to put that vendor lock-in in place.
Joining in on the discussion: the software language makes all the difference! We sent people to the moon with way, way less computing power than we have today.
The NPU is not a new concept from this guy. They all offer lower cost and higher performance. There's even an AI HAT for the Raspberry Pi. That's an NPU right there, and I think it can pretty much accomplish the same thing. Also not that expensive.
Just give me an API to the metal so I can broaden the usefulness of this device.
Jensen's successor?
How can I get it?
💥
Nvidia project digits?
Built in AI means undefeatable spyware is built in as part of the hardware.
The important question is can I use this to play video games? 😊
That's impressive with so little power and cost!!
Could you crank up the DeepX intensity a bit more? 😂 When 98% of your budget is poured into software development, what hardware would you pick? Google's TPUs offer better efficiency than GPUs, yet most companies still default to Nvidia. Tesla, on the other hand, has the scale to design and refine its own embedded systems, thanks to millions of vehicles in the field.
Not the end of the GPU. I wish it were the end of misleading and use-case-UNspecific titles/posters.
Noted! What did you think of the video and the technology?
Is it just me, or does AI sound like 3D TV? 😂
Can it run Crysis?
that was crazy, somebody please tell jensen about this. LOL
I love the fact they reused the X of SpaceX but upside down 😂
This could be useful to robots or autos
Guys, you need to make a tv show together
It has less than 8GB of on-board DDR4 RAM.
Do you really believe a new company can compete with Nvidia or even AMD? I don't.
No, micro-FPGA gate logic that can do both what GPU raster/ray-tracing and NPU/TPU stuff can do is the best. Yes, it's not fixed logic but re-programmable logic, but it's much better: very-large-scale FPGA, at the same price or cheaper than current fixed logic or any FPGA stuff. Fixed-logic units are not GPGPU; shader cores are GPGPU. Not just for AI semis, but for all semis. When dynamic does what static does, then dynamic wins. Well, try Cyclone V FPGAs: similar power efficiency for custom logic, off the shelf. $50 sounds like a MAX 1000 FPGA; that's usually automotive. The GPGPU seems to be a high-end Jetson Nano. GPGPU is most suitable for development, not deployment, unless the deployment is general purpose. The best public IP is free, always. So why film at 50 fps, though? Double 25 fps? Not 60 fps?
Jian-Yang is a shady character in Silicon Valley.
DeepSeek and DeepX... hmm!!! 🤔🤔🤔
Good video
Thank you!
This is embedded design vs general purpose design.
Something's cheaper than the cloud... yeah, edge is gonna disrupt that once you see the bills for a scaled-up cloud solution.
01 Ai - Ni Relativity, switching 010 Time Timing dual 01 polarities.
An NPU is not a GPU. A GPU is not "general purpose", as some ill-informed poor soul below has interestingly stated.
G is for Graphics, not General. It just so happens that the same things that run a game well currently happen to have Tensor Cores, which happen to be good at crypto-mining algorithms. P is for Processing. U is for Unit.
N is for Neural. Neural Processing Unit. Come on, say it with me!
An NPU is designed to "simulate" the way a human brain's neural network functions. Which, as you could probably imagine, may be more efficient at certain things than a GPU or CPU. Likewise, the GPU is still going to have its particular strong points.
CPUs are "general processing units"; more specifically, "Central". An NPU may not need a GPU. A CPU does indeed need a GPU, however, for advanced graphics features and quality. An NPU will do great with general image processing, but it's going to be prone to... "fluctuating" performance inherently.
Everything has its purpose. Anything "general" for today's standardized technology has already been made. Anything made in technology today is more "specialized". The groundwork was laid decades ago, folks.
Closest thing to the human brain by far.
Okay dude, I saw that you did a video on this cool technology so I subscribed to your channel, and then you go about interrupting this guy every 30 seconds, repeating what he said and being all fanboy and stuff, so I removed my subscription. It is seriously annoying.
Hi Scott, thanks for the feedback - it is taken on board. We're very enthusiastic and energetic when it comes to discovering new technology. I hope you can reconsider and give our updates a chance.
So, what you're trying to say is that you were subscribed for under 30 seconds?
For once you're not trying to hide the surveillance level of it.
Under $50 only while almost nobody is using it. It's like cheap and good Australian wine, but that changes once it gains market share.
Very interesting.
I mean, not that impressive. My company, Renesas, does an MPU with a built-in AI accelerator that achieves 80 TOPS with a power consumption of around 1 W.
What's an MPU?
And why has nobody heard of it? Is it already on the market? Did you manage to make a functional device that runs at least an 8B LLM?
Sounds like analog computing !!
Not sure he needs a hype-man 😂
Will it run Crysis?
I'd disagree; you can do everything that company is doing with face recognition and body recognition, all in real time, by using OpenCV, which stands for "Open Computer Vision". This technology is nothing new and has been around for more than 15-20 years already.
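For context, this is the kind of classical OpenCV pipeline that comment is pointing at: a minimal sketch using the Haar cascade bundled with opencv-python. It runs on a plain CPU, though its accuracy and robustness lag behind modern neural detectors.

```python
# Classic CPU-only face detection with OpenCV's bundled Haar cascade
# (illustrative sketch; assumes opencv-python and a webcam at index 0).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```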
Did you touch this guy's chip and monitor? Not cool, bro.
NPUs are nothing more than a waste of CPU die; they can easily be replaced by GPUs, which can do so much more.
The *Nvidiafication* of AI needs to be stopped ⚔️🛡️
Timmy Mallet does computers. Amazing.
LoL, he looks nothing like him, but funny nonetheless 😂
Cool
So you'd fry an egg with a GPU lol
Waiting to hear response from Nvidia 😂
Paid PR ?
Can it run God of War?
How many people can I spy on per second?
NPUs have long been better for many ML models. This is well done, but not magic
I'm a disruptor
Jai Hind. The beginning of the end of US national security restrictions on supplies of expensive GPUs/NPUs like Nvidia's seems imminent.
Wow!!!
Let's see: what they have is an efficient, trained AI, and the hardware is a Raspberry, haha. Run that model of theirs on the Nvidia one and you'd get, I don't know, at least 20 times that performance, haha.
Nope, the GPU was never born for AI; we're using it the wrong way.
It annoys me how this man talks to us like we're children. No need to repeat information we already heard and saw.