Massively underrated video. An outstanding balance between accessibility for the layperson and solid technical discussion for computer engineers.
Great video. I'm very curious to see where neuromorphic hardware finds its first commercial application.
out of the corner of my eye your icon looked like the Drumsy channel icon for a second.
A company called Brainchip is the first and only commercial neuromorphic company, and Valeo and Renesas are bringing out applications at the end of this year (2023): Valeo's Scala3 Lidar and multiple Renesas applications.
It's fascinating that biology created such a performant, power-efficient computer as our brain. Your work at Intel is very exciting and inspiring.
Very cool technology! Looking forward to seeing where Loihi and LAVA end up!
Impressive that Loihi 2 was ready to go on Intel 4 so fast! From what I can see though, the extra density you gained versus Loihi on 14nm went towards a much smaller die size. Are there plans for something like a Loihi 3 that might scale the die size back up and tackle some larger networks?
I also saw the stackable Kapoho Point boards were released a couple of months ago, but there wasn't really any press on that; it would be interesting to hear more about it. So there are four Loihi 2 chips per Kapoho Point board and you can stack multiple boards for 8, 16, potentially even more? I recall there were full server racks with over 700 Loihi chips in them, though. Is there any plan for that sort of scale with Loihi 2?
In any case, very cool stuff and I think the most interesting thing of all would be to see what this sort of technology enables. Loihi had some interesting graphs showing how it stacked up against more traditional accelerators and such, but I haven't seen that for Loihi 2 anywhere.
this stuff is crazy exciting!
This is profoundly exciting
Amazing, wonderful. I would like to buy a development board with one of those chips.
Is there any evaluation board available to buy and play around with? Neuromorphic chips seem to be a very exciting thing.
Niceeeeee! I was expecting this! Thanks for the information!
Amazing, I love it. Thanks
great presentation.
Brilliant!
10:12 Encore! RS
Akida 1000: more synapses
Akida 1500😉
4:04 mycelium-based neuromorphic systems can overcome this issue.
16th!
Cool…🥹🥹🤓🤓
Amazing, but not clear at all! I wonder how many people without prior knowledge actually understood the content of this video!
Crysis? 😂
YouTube is getting filled with AI-generated content ...
And this has to do with this video, how?
Great talk and a smart guy, but he doesn't give himself enough credit, saying that evolution (which is NOTHING, it's just a theory) is more intelligent than him.
1:00 THE PERFECT CHIP DESIGNER is GOD; otherwise stop working on your design, because with time it's just going to evolve into the perfect chip.
{Solve} : {{Maths Roll Error on 24-bit Audio versus 32-bit} ~= Stutter} : Windows 3D Audio, DTS & Dolby Atmos, 2022-11-30 RS
*
{Solve} : {Maths Roll Error} : (c)RS
{Maths Roll Error on 24-bit Audio versus 32-bit} ~= Stutter
Additional roll: the error margin on 32-bit float maths with 24-bit 5-point margin round-ups.
A 32-bit float rolls up on a single operation: 226526554817.{24-bit float + error round-up}, .9 > .49 = .5+ = roll up.
R = {5+ or 4- | 0.45+ or 0.44-} : or {0.445 |> 0.444444444445 |> 0.4, N4 + decimal places + 5}
The clipping operation depth of the float is 3 operations, or 2 with a stop count = 1 to 24 bit places, + 1 or 2 for error rolling up or down.
Precision Clip:
Math OP | Clip > Cache {Math OP Use}
Precision Counter:
Math OP + Counter (internal to FPU:CPU) | Stop > Cache {Math OP Use}
*
Windows 3D Audio, DTS & Dolby Atmos should do at least 32-bit, 384 kHz, 7.1 channels.
There is absolutely no reason a 64-bit processor cannot do 64-bit audio.
Mind you, 32-bit integer is around 60% of total CPU support, with 64-bit divided by 2,
so 32-bit audio is 100% speed conformant, and there are few reasons to reduce it to 24-bit or 16-bit without a processing benefit, such as error management for 24-bit on 32-bit instructions.
Both AMD & Intel x64.
Rupert S, 2022-11-30
"State-of-the-art approaches such as OpenMP and OpenCL"
It's been 8 months, what are your views now?
@@o1-preview Very positive!
Inference & FMA De-Block Styles
For the upscaling matrix: MMX+ & SIMD.
16x16 blocks, as used just about everywhere in HD;
8x8 blocks certainly for NTSC, PAL, JP_NTSC!
Very usable for deblocking JPG;
16x16 & 8x8 are very good for inferencing active on scaling & deblocking.
4x4 for main inference on Xbox & 8x8 for PS5.
Xbox can use (4x4)x4 for 8x8 & (4x4)x16 for 16x16; very powerful!
PS5 can use (8x8)x1 or x2 for 8x8 & (8x8)x4 (x8 for additional processing) for 16x16; very powerful! (A rough sketch of this tiling follows the list below.)
Inference & FMA De-Block Styles List
(4x4)x4
(4x4)x8
(4x4)x16 + processing
(4x4)x32 +++ processing
(8x8)x4
(8x8)x8 + processing
(8x8)x16 + processing
(16x16)x1 + processing
(16x16)x2 ++ processing
(16x16)x4 +++ processing
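A minimal Python sketch of the tiling idea behind these styles, e.g. covering one 16x16 block with (4x4)x16 tiles. The helper names split_into_tiles and assemble_from_tiles are illustrative assumptions, not an Xbox or PS5 API.

```python
# Minimal sketch of composing larger blocks from smaller inference tiles,
# as in the "(4x4)x4 for 8x8" and "(4x4)x16 for 16x16" styles listed above.
# Illustrative only; the helper names are made up, not a console API.
import numpy as np


def split_into_tiles(block: np.ndarray, tile: int) -> list[np.ndarray]:
    """Split a square block (e.g. 8x8 or 16x16) into tile x tile sub-blocks,
    row-major order."""
    n = block.shape[0]
    return [block[r:r + tile, c:c + tile]
            for r in range(0, n, tile)
            for c in range(0, n, tile)]


def assemble_from_tiles(tiles: list[np.ndarray], n: int) -> np.ndarray:
    """Reassemble per-tile results back into an n x n block (row-major)."""
    tile = tiles[0].shape[0]
    out = np.zeros((n, n), dtype=tiles[0].dtype)
    i = 0
    for r in range(0, n, tile):
        for c in range(0, n, tile):
            out[r:r + tile, c:c + tile] = tiles[i]
            i += 1
    return out


block16 = np.arange(256, dtype=np.float32).reshape(16, 16)
tiles_4x4 = split_into_tiles(block16, 4)     # (4x4)x16 covers one 16x16 block
assert len(tiles_4x4) == 16
assert np.array_equal(assemble_from_tiles(tiles_4x4, 16), block16)
```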
8:4-bit concepts: 65535/255 = 8-bit, 65535/16 = 4-bit.
16-bit/4-bit: a 4-bit colour palette, but we can fraction 16-bit by 4-bit, in essence 16/4! 65535/16; compression shapes & gradients.
Polygon, Shadow, Contact
Alpha channel 2-bit, 4-bit
Grayscale edge-define sharpening
Single-colour edge detect
Shape fill in Alpha 10,10,10,2
XOR, Pattern, Shading, Shader, Cull, Shape & Depth Compare after define
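A minimal Python sketch of two of the packings touched on above: four 4-bit palette indices per 16-bit word ("fraction 16-bit by 4-bit") and the 10,10,10,2 layout with a 2-bit alpha. The function names and the field order are assumptions for illustration, not any particular graphics API.

```python
# Minimal sketch of two packings mentioned above (illustrative only; the
# function names are mine, and the bit field order is an assumption):
#  * four 4-bit palette indices packed into one 16-bit word,
#  * three 10-bit colour channels plus a 2-bit alpha in one 32-bit word.

def pack_nibbles16(indices: list[int]) -> int:
    """Pack four 4-bit palette indices (0..15) into one 16-bit word."""
    assert len(indices) == 4 and all(0 <= i < 16 for i in indices)
    word = 0
    for shift, idx in zip((0, 4, 8, 12), indices):
        word |= idx << shift
    return word


def unpack_nibbles16(word: int) -> list[int]:
    """Inverse of pack_nibbles16."""
    return [(word >> shift) & 0xF for shift in (0, 4, 8, 12)]


def pack_rgb10a2(r: int, g: int, b: int, a: int) -> int:
    """Pack 10-bit r, g, b (0..1023) and a 2-bit alpha (0..3) into 32 bits.
    Field order (r in the low bits, alpha in the top two) is assumed."""
    return (r & 0x3FF) | ((g & 0x3FF) << 10) | ((b & 0x3FF) << 20) | ((a & 0x3) << 30)


assert unpack_nibbles16(pack_nibbles16([1, 7, 0, 15])) == [1, 7, 0, 15]
assert pack_rgb10a2(1023, 0, 0, 3) == 0xC00003FF
```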