What is Computational Storage?
- Published: 10 Jan 2023
- Learn more about IBM FlashSystem → ibm.biz/learn-more-flashsystem
Take the cyber resiliency assessment → ibm.biz/cyber-resiliency-asse...
Are we reaching the limits of computation? Looking for a way to streamline administration and operational complexity across on-premises, hybrid cloud, virtualized and containerized environments?
In this video, Andrew Walls, IBM Fellow, CTO and Chief Architect of IBM Flash Systems, explains how bandwidth and speed can be improved through the use of computational storage.
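The core idea can be illustrated with a small sketch. This is not an IBM FlashSystem API, just a hypothetical Python comparison of host-side filtering, where every record crosses the storage interface, versus a computational-storage "pushdown", where the drive's own processor applies the predicate and only matching records cross the bus.

```python
# Illustrative sketch (hypothetical, not a real storage API): compare
# bytes moved over the storage interface with and without pushing the
# filter down to the drive.

records = [{"id": i, "temp": 20 + (i % 15)} for i in range(1_000)]

def host_side_filter(data):
    # Every record crosses the storage interface; the host then filters.
    bytes_moved = sum(len(str(r)) for r in data)
    result = [r for r in data if r["temp"] > 30]
    return result, bytes_moved

def pushdown_filter(data):
    # The drive's processor applies the predicate; only hits cross the bus.
    result = [r for r in data if r["temp"] > 30]
    bytes_moved = sum(len(str(r)) for r in result)
    return result, bytes_moved

hits_host, moved_host = host_side_filter(records)
hits_push, moved_push = pushdown_filter(records)
assert hits_host == hits_push
print(f"host-side: {moved_host} bytes, pushdown: {moved_push} bytes")
```

Both paths return the same result set, but the pushdown version moves far less data, which is the bandwidth win the video describes.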
Get started for free on IBM Cloud → ibm.biz/sign-up-today
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
#IBMstorage #IBMflashsystem #cyberresilience
Loved the narrator, please more theory from him.
Thank you for the clear explanation, you're awesome sir👍
Awesome. Loved your video. Thank you
Great presentation, Andy! Thank you!
Great session Andy - thank you !
Very well done, thank you!
Thank you for reinventing the wheel, good job!...
Straight and clear!
Great insight 👏
awesome explanation sir :)
Excellent videos..
thx for this precious information
jfyi: at 4:23 Andrew mentions an ASIC (an application-specific integrated circuit) and it is subtitled as "a-sync".
Whoops, corrected. Thanks!
Great content! I am curious about how the filming is done, I mean the transparent "whiteboard".
Search on "lightboard videos"
Nice
Hi Andy. The one thing I think it would be great for you to address, given that the concepts of computational storage have been proposed and tried for at least 30 years (e.g. Netezza and the likes of Jim Gray, Erik Riedel): what has changed so that this idea will succeed now? To me, the problem has been similar to the problem of parallelizing any computation: it has to be partitioned properly. There is a certain class of problems that naturally allows partitioning and parallel execution, but have we gotten to the point where that is generally true?
It is all so cool, but what about reliability and the introduction of complexity? There is still another law at work: the more parts in the system, the higher the probability of failure. And the more distributed the system is, the higher the probability that the failure of one specific unit will freeze the whole distributed system due to race conditions and other synchronization/time-domain problems. Really interested in this part.
Onur Mutlu liked this :D
See also David Vorick's "Proof of Work" data-file-storage blockchain technology project: Sia Coin.
Is the presence of computationally-enabled devices made transparent to user-level application source code?
Looks like distributed computing is coming to microcomputers.
Nvidia has left the chat
Well, I think there is a strong argument to be made that there is a duopoly on CPU design, so it could be argued that both companies have become uncompetitive. Ampere is catching up; maybe when they do, Moore's Law will revive. Then the issue is going to be that making massively powerful CPUs will have a small market, considerably slowing down the production of better CPUs.
Applying the lessons of mainframes, shared-nothing architectures, etc. to the relative stupidity of the "it's just a bigger PC" design of almost everything these days. "Symmetric" multi-processing and symmetric NUMA need to go away too...