thanks for the review!
By the way:
At least 20 in 1U and maybe up to 22 :)
w/o CPU OC, PoE (without the +) will be enough even with NVMe, according to my tests.
you should build a backplane with built-in KVM, switch and power ;)
As you are working on the slice, can you add an MCP23017 to the board?
@@Simon-qg2qn The original idea was to drop the backplane. It's a single point of failure. If one blade has a problem, just change it :) It shouldn't be too expensive (at least for enterprise) to have one or two spare ones. But I have ideas for a backplane nevertheless
What is the price range you're aiming for with these blades?
@@Onlyindianpj How will you use it? Maybe the Zymkey 4i has these features?
THAT is the coolest Pi Compute board I've ever seen. Definitely want a full set of these in my rack now.
I can imagine someone building in a switching backplane at some point too. Would make for a fun clean 1U cluster to play with, using < 400W
Also where's Jeff???
@@JeffGeerling Somewhere on a beach with a drink in his hand ;-)
@@JeffGeerling I would fill my pantry with those
Wait for someone to put an SOQuartz in it lol.
If you have a Raspberry Pi blade, does that make each board a slice of Pi?
Finally slices are appearing
Haha
That would be a great name for the board
Installing that seems like a piece of cake.
@@SaHaRaSquad Easy as pi.
With a slightly modified design, and potential cooling issues aside, it looks like you could nest two boards face to face in a single slot, doubling the density of the rack. Might be overkill, but the potential is there.
Just stick in some 20k RPM server chassis fans (never mind the 70 dB of noise per fan) and call it a day 😂
@@JeffGeerling If you could remove the vertical separators in the front and add more grooves (one for the smallest build height, one for the biggest), you could fit more Pis, or mix and match for what you want/have.
It also needs a front plate with 90° light pipes for the I/O lights, and maybe some sort of knob to pull the blades out.
And if we're here already, why not add a connector in the back to actually blade them into a backplane? Damn, this adds up quickly....
I'd just as soon toss all the blades into a pile in a server closet and call it a day. Then the density is dictated as a function of the closet size and the critical angle of the pile due to friction.
@@JeffGeerling add a lot of them and reduce the rpm and bam
Heat?
NVMe has this very cool snap-in connector now so that you don't have to mess with the screws. For this board it's obviously easier to add than on a typical motherboard, so if more board spins are being done, it's perhaps worth considering.
link?
Would be a great design choice
I've seen a similar one for PCIe cards on an older HP EliteBook
Very nice board. Suggestion: Line the four LEDs along the back Ethernet edge to make installed viewing easier.
thanks, will be in the next revision.
Or you could make them accessible as pins (in addition to the onboard ones, if possible?), so that people could put the LEDs wherever they need them (back or front, or perhaps both with some sort of splitter).
Sadly I'm not good at hardware engineering, so these are just suggestions from a noob.
This is so cool! Can't wait for the Kickstarter!!
I’ve been waiting for a board like this. Could be great for small scale data center needs. Especially for simulation and prototyping of complex data center setups.
He should make a backplane connector switch so you don’t need to use any cables, this thing is so cool.
A few people have suggested this. It would be nice to have that kind of system, though it would increase the cost and complexity of running the system, and might discourage people from just buying a blade or three. Honestly, trade-offs either way.
@@JeffGeerling Not really? Just add an edge connector on the back of the board with traces going to the RJ45 connector, and you can design a backplane with a PoE switch controller that bypasses the RJ45 jack (I had old D-Link RJ45-to-fiber 100 Mbit converters like that: you could use them standalone, or plug them into a case with a switch-and-power backplane and just connect the fiber in the front). Or you could just put male RJ45 on the backplane and flip the card around, I guess (duplicate status LEDs on both sides?).
@@JeffGeerling It's probably not cost effective because the Raspberry Pi is not a very good platform for this. The RAM is slow, it's hard to effectively share RAM between Pis, it's hard to expose even one lane of PCIe, the custom and slow GPUs are practically useless for general-purpose work (especially in a cluster), etc. It looks cool, but the real-world applications are dubious at best. Those PoE converters are 10 bucks each, and while it might make sense to have PoE for 4 blades, with 16 blades it's a no-brainer to have a backplane instead of hundreds of dollars' worth of PoE crap. This will always have crappy performance, and it'd be so much better to design a system around a modern 16-core SoC with actual open and documented interfaces and connect four of those SoCs with something a little better than gigabit Ethernet.
Like someone else already said: it'd probably be more cost efficient to get one real x64 CPU and run the software in virtualization. You'd get more power for less money.
I agree with this, it is valid for certain builds that want to use the complete 1U space. You can also consolidate a lot of the ports to the backplane. I really love where this is all going with the Pi
@@wombatillo there are some 'we need lots of cores, but each core can be slow' workloads you could use this for. It'd be fine for F@H, for instance. That said, I think this sort of thing is mostly a 'Do you want a REAL Hardware cluster? On the Cheap? here you go!' and it gives folks a way to play around with actual hardware without breaking the bank.
@Jeff Geerling this might sound a bit of a strange question but what do you use the pi's in a cluster for?
bump
Had the same question, so?
I'm wondering the same thing, like who cares about all this stuff when most of the time it's just being using by corporate America and other endeavours that add nothing to humanity
🤔
Failure tolerance, update management
At 1:47, I like the BTTF DeLorean decal where the CM4 mounts.
Really cool stuff! Could see these being used as something like a Ceph cluster really easily.
One giant nitpick re memory: you _can_ add it all up to get a total, but that tends not to be how distributed/clustered apps work. Rather than getting one big pool of all the memory, each individual instance only accesses its own, so you'd end up with 8GB per node but you couldn't use all of it at once (:
Heh... just wait until someone introduces a version of Ceph that distributes the data on device RAM. Brings new meaning to fragile storage!
@@JeffGeerling I'm not _saying_ you should totally do that, but setting up a ramdisk on linux is quite easy with tmpfs 😉
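For anyone curious how easy: here's a minimal sketch in Python, assuming root on a Linux box like a Pi; the mount point and sizes below are made up for illustration.

```python
# Minimal sketch: mount a small tmpfs ramdisk and time a write to it.
# Requires root; mount point and sizes are arbitrary examples.
import os
import subprocess
import time

MOUNT_POINT = "/mnt/ramdisk"   # hypothetical path, pick your own
SIZE = "256M"                  # keep this well under the node's free RAM

os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs", MOUNT_POINT],
    check=True,
)

start = time.time()
with open(os.path.join(MOUNT_POINT, "scratch.bin"), "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))  # 64 MB of throwaway data
print(f"Wrote 64 MB to RAM-backed storage in {time.time() - start:.2f}s")
```

Of course everything in there vanishes on reboot, which is exactly the "fragile storage" joke above.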
Kind of like the huge misconception regarding aggregated network links. 4 NICs doesn't get you a single 4Gbps connection.
You also need fast networking; I didn't see it on this design. Sounds like the Ag5300 delivers 1 Gbps networking. And the PCI Express slot is already in use for storage.
If you could do RDMA over Ethernet here then you sorta could just add up all the RAM...
I don't need this.. I don't need this.. I don't need this.... I NEEEED THIS!
I have been saying that ever since I saw the first Pi cluster. I have absolutely no need for this. But damn I would like to have one.
I was like "I'll take 16 NOW!"
Who doesn't ?!?!?!?
It'd be cool to see a version with an SFP+ port or a slot for fiber-optic networking. This would do great for those who want to make servers with these clusters.
I think it'd be possible in theory, but that would basically have to be a PCIe NIC at that point, which would contend with the NVMe slot either physically, or for PCIe lanes. And of course you're then leaving the existing PoE NIC on the table, but that could still be useful for your mgmt network, separate from any network on the SFP
SFP+ (10GbE) typically needs four PCIe lanes; the RPi 4 has only one. Therefore it is not possible to run 10GbE at full speed.
Wouldn't it be better to have the SFP(+) port on the network switch that attaches all of the blades? Assuming you want to run a cluster of them.
The Pi can't saturate a 10GbE link except if you're streaming some video/ADC data from the GPIO. But that is not the rack story.
Love your pi videos. Can’t wait to see what you have coming.
amazing work!!! can't wait for the production release!!!
Neat design. Seems like a good example of what the compute modules are meant to enable.
Since it's going into a rack anyway, why the PoE power? You could have one board at the end of the enclosure providing power to all the installed blades, and have that contain the power source.
I think it's more for ease of integration and fewer proprietary parts like power distribution. Most racks will already have underutilized PoE installed, so you might as well run the Pi cluster off it.
@@MrPipDarty You could make the board such that it slides into a connector at the back; you could even make it so it also connects networking.
Technically, you could split up the PCIe lane to add faster networking too, but that'll eat into the NVMe speed.
@@MrPipDarty I highly doubt most high-density environments have no space for servers but do have spare PoE+ ports on costly switches just hanging about unused...
A rear power board compatible with a standard 19VDC power brick, for instance, would be nice, though at 20 'blades' you might want a power budget of 350-400 watts, which would mean a full ATX supply would be needed.
Perhaps perfect timing to switch to a 2U variant with built-in space for an ATX power supply (plenty of space in the back) and double or even quad Raspberry blades?
Currently, this is a neat way to store a whole bunch of RPis, but you'd need a 24-port PoE+ switch to power them, and a 24-port PoE+ switch with a power budget of ~400W is expensive, much more expensive than a standard $50 ATX power supply.
@@Steve25g All very valid points. Perhaps the "correct" answer here is that this is still in very early prototyping stages, from what was said in the video, so more than likely power just hasn't been a primary concern of development yet.
@@someguy4915 Or make it more integrated and just add a server supply; that will give you a huge amount of 12V, offers more security, and if desired could be made redundant or load-shared.
Thanks!
Man! You are hitting all the right topics for me. Super informative.
2:25 ..btw Windows 11 is already up and running on the Pi 4; it runs with decent performance on USB sticks or SSDs
I would love to see if this could replace traditional router/firewall combinations with a more powerful option!
Pis don't have AES hardware support. If they did you'd see projects all over for OpenWRT :p
@@snives7166 AES is only a factor with VPNs correct?
@5:19 - the two passive adapters on the right are also native NVMe, for the x1 PCIe port!
Thank you Jeff for the nice presentation of Ivan's amazing work. @Ivan Kuleshov Your ideas, and the enthusiasm with which you bring them to life, are an inspiration for many of us. Thanks a lot and keep on track.
I bought a Raspberry Pi 4 over a year ago to make a magic mirror out of. Got as far as getting the MM app to boot and see the time. Ordered a camera module for it, the 2-way mirror film, the frame and glass... annnd I haven't worked on it in 10 months. I'm in over my head with the coding or whatever, but I still find these videos about the Pi fascinating.
Heh, don't worry I have a pile of those projects (started... never got too far, now it's in a box).
After my experience with ARM cores, I have trouble seeing them as really usable for heavy lifting or even desktop use. At least that's how I feel about single-board computers like the Raspberry and Orange Pi. You really have to be a programmer for them to seem amazing.
Well no shit Einstein.
I love your videos and git projects. I have a curious question, what do you run on your pi cluster? Can you make a video about it?
As a homelabber with a 4-slot Pi mount in my rack, this is the exact thing I would've actually wanted in my rack - hot swap blades! My current mount basically needs me to take the whole cluster offline for maintenance lol. I have a dedicated Pi with a YubiKey (for data access control), so I especially appreciate the built-in TPM! The only thing I'd miss from the built-in TPM is touch-to-sign capability.
You can technically put a YubiKey on the USB port on this blade, though it's in a bit of an awkward location if inside the rack.
@@JeffGeerling true! It'd be cool if the blade had a single keystone slot on front (with keystone blank plate by default) so you'd be able to extend the USB (or anything else, for that matter, like antenna connector if you built a DigiTV blade) from inside the blade to the front, should you choose to.
Ivan does amazing work. I love this idea. I've started learning PCB design.. I have no idea if I can do it, but I have one idea, I would like to try. Great video as always.
Only one way to find out! Go ahead and try it out :)
Honestly I'm a beginner with PCB design too; I'm considering doing a few simple projects and making some videos on getting started with it (probably using KiCAD).
@@JeffGeerling Yeah, that's what I've been using/learning: KiCAD
@@JeffGeerling Please make a video proving non-EEs can route PCI-e in a PCB.
@@JeffGeerling Please do some beginner videos on KiCAD. A few years back, I started to learn PCB design with Eagle. When I was starting with Eagle, there were plenty of tutorials I could find online, and I had several friends using it too. Then Autodesk acquired it and has made a lot of changes and I didn't really keep up. I decided I'd rather use KiCAD since it's free software. But I need help getting started. I will say I was pleasantly surprised at how easy it is to do PCB design in a general sense. I just need some help over the initial learning curve of KiCAD.
Please do your best at learning it, then make something cool and mail it to Jeff please :)
Way cool stuff Jeff! Way beyond my needs, but you sure do have fun and pass that on, so Thank You!
Wow! This is insanely cool, and something that would be really neat to have in my basement at some point. Kudos Uptime Lab!
Wow! This is an awesome design! I'd love to get my hands on those, but I've had issues with kickstarter. I'll wait until they go full production and score some then. Can't wait!
I have no real use for this whatsoever. But I need it!
...said every Raspberry Pi owner ever 😂
@@JeffGeerling lol true XD
We IT hoarders are multiple on this multiverse
With a small low-wattage monitor, one could easily have a solar/battery setup and always have a PC available.
Flip the boards around and have a backplane of ethernet/RJ45 plugs without retention clips across the back of the enclosure. Similar to how servers hold hotswap drives or other blade servers work. That would allow easy hot swap of blades without all the cables in the front.
This can definitely be improved with a network switching / pass-through and power-supplying backplane, and moving the SD slot, HDMI and USB port to the front.
Easy access and hot-swappable 🤷🏼♂️😉
Damn, already producing prototypes and hasn't started the kickstarter yet. This is how kickstarters should work.
I concur with you Jeff on the excitement at the tech level. The question is, though, what will each of your computers do? Will each host some web service that has no dependency on the other computers (except the basic network services for the OS)?
Is there a way to make them collaborate - as in a parallel computer - on workloads that are too big for a single one? Tools for that?
I've used K3s and K8s on Pi clusters in the past. Check out my Kubernetes 101 series for a quick primer!
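If anyone wants a taste before diving into that series: once K3s (or any K8s) is up, you can talk to the cluster from any machine. A minimal sketch using the official Python client; it assumes `pip install kubernetes` and a kubeconfig copied from the cluster (K3s keeps its config at /etc/rancher/k3s/k3s.yaml on the server node), and the output shown is illustrative only.

```python
# Minimal sketch: list the nodes in a K3s/K8s Pi cluster and their status.
# Assumes a valid kubeconfig in the default location (~/.kube/config).
from kubernetes import client, config

config.load_kube_config()        # use load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    arch = node.status.node_info.architecture          # e.g. "arm64"
    ready = any(
        c.type == "Ready" and c.status == "True"
        for c in node.status.conditions
    )
    print(f"{name:20s} {arch:8s} ready={ready}")
```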
The best part of this board is the Delorean silkscreening below the board-to-board couplers.
Love the last part "The kids are only going to be gone for a few more minutes."
Enjoy them now...you will blink and they will be moving out.
not in this economy
and not with these housing prices.
I was literally just thinking about hosting some of my side projects on RPi hardware at my house. I'll take this video popping up as a sign to do it!
Can't wait until we see the entire rack full of these things. When do we see the 512 ARM core implementation?
I don't know if I could afford that!
@@JeffGeerling crowdfunding makker :)
@@JeffGeerling That's just the thing, it's easy to expand over time!
@@JeffGeerling not with that attitude you can't.
One thing I'd like to see, maybe on a future version of this, is improvements to the networking and power. As it stands, having that many Ethernet cables out the front of the rack would make for a lot of cable management. It would be nice to be able to backplane them together and have a dedicated port on the rack chassis for whole-cluster uplink to a built-in switch. A bit more complex, but it would make it much more integrated and plug-and-play, especially if the bandwidth between Pis is quite high, for even better local clustering.
I think that, however cool a RasPi cluster in 1U may be, you would still be way better off with x86 hardware with a decent core count. RasPi cores are just really slow compared to any CPU of the last 5 years. If you get a used HP DL380 Gen9 or Gen10, or something custom based on AMD, you have something that is so much more capable and performant. It will lose out on size, but power usage may be not that dissimilar, if not better. I know Pis are this channel's thing, but I think it's good to keep perspective.
This is true; I think the use case with current-generation Cortex-A72 cores would be for people looking for some ARM compute specifically (e.g. building ARM64 images, running ARM-based tests), or for those more interested in the fun / experimentation.
Of course, depending on the price of the final board, and availability of CM4s, it could be cheaper (especially if you want or need better speed / performance per watt) to build a single AMD server.
@@JeffGeerling yes, exactly and I don’t want to be negative here, I think this project is really cool and amazing 🌷❤️
@@JeffGeerling It would be a nice video for you to compare an RPi cluster setup against x86 across multiple metrics, or if you already did, maybe do an update?
How many x86 nodes can you get into 1u? Not CPUs, full machine nodes. Best I've seen on x86 is 4 nodes in 2u.
@@louwrentius Ah, you're the guy with the ZFS servers. I remember reading your blog a few years back. It was very informative.
I'm the DIY NAS kind of guy. I'm running a server with SnapRAID and mergerfs. These kinds of nodes are awesome for cheaply testing distributed storage.
With the prices of electricity I don't want to be running big 4U enclosures with a lot of storage. I don't need much storage either, just redundancy and bitrot protection.
Very cool. The lack of hardware crypto on the Pi is sometimes frustrating so it is nice to see a product that might help mitigate the issue.
Hey Jeff, what are the benefits of having a Pi cluster instead of having one more "powerful" PC tower, or even 2-3 mini PCs? Or is the Pi cluster more for fun? Thanks man!!
I'd say power. 25W per blade is a huge difference compared to the normal power-guzzling PC.
@@nathanc4183 this. You lose density but gain power efficiency especially with threaded applications. It's very cost effective if you need many low power nodes.
Failure tolerance, update management
As a software developer I would say modularity. Every machine would be optimized to process a specific part of the whole.
Obviously: database server, mail server, DNS server, VPN server, media server, etcetera.
Even better, running microservices like trackers and monitors that report the status or availability of something.
Additionally, having the option for a custom application without touching other machines.
@@nathanc4183 I'm running proxmox on an i7 which is currently hosting 3 systems: NVR on W10, NAS on Linux, Linux workstation. Power consumption is 60 watts with 6 internal drives (20 TB). I'm still struggling to understand the benefits of the Pi Blade in terms of power consumption when compared to an x86 system.
The tiny space where the network connector is could be fitted with some GPIO/UART and power pins to hold small accessories, for example a small 5x7 LED matrix debug display, a tiny 32x64 dot OLED, a console serial port card, a CAN port, or whatever else we want as an interface accessible from the front.
I'm curious though, what *do* you use this kind of parallel processing pi power for?
I should probably do a video on my current Pi uses someday...
@@JeffGeerling Looking forward to it! :)
@@JeffGeerling I am also curious to know. And can you share your knowledge on making a server using a Raspberry Pi? As a hardware engineer I don't know about networking and server security, but I want to make an MQTT server for home automation and for community use.
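Not an authoritative recipe, but a common starting point is running a broker like Mosquitto on one Pi and small clients everywhere else. A minimal sketch, assuming the paho-mqtt 1.x package and a made-up hostname and topic:

```python
# Minimal sketch: a home-automation MQTT client talking to a broker
# (e.g. Mosquitto) running on one of the Pi blades.
# Assumes `pip install "paho-mqtt<2"`; hostname and topic are made up.
import paho.mqtt.client as mqtt

BROKER = "pi-blade-01.local"                  # hypothetical broker hostname
TOPIC = "home/livingroom/temperature"

def on_connect(client, userdata, flags, rc):
    print(f"Connected with result code {rc}")
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)

client.publish(TOPIC, "21.5")                 # publish one reading...
client.loop_forever()                         # ...then keep listening
```

For anything exposed beyond your LAN you'd still want TLS and authentication on the broker, which is where the networking/security homework comes in.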
@@JeffGeerling You bet!
I don't have 1 task, I have 25 Docker images that are load balanced between my 10 Pis.
Holy moly, the world needs these "Pi Slices". Subscribed to the mailing list for updates.
Cool boards. Jeff, quick one: what do you actually do using your Pi clusters?
Yeah... I keep seeing "ooh I made a cluster"... but what the heck can people do with them that will actually be useful?? There really isn't a lot of explanation on this.
Make an email server for the Clintons, but that’s hazardous to your health. 🤪
I came to the comments to try and find this answer as well. I don't understand anything about clustering, especially in a home lab. I can imagine someone like NASA doing it for science, but what purpose is there at home?
Maybe something like start9.com embassy
I am also running three Pis in my network like Jeff did before (PoE, in a 2U rack). But I also run a big Proxmox host for all my virtual servers. The main reason was to make certain services independent from one host machine. On my Pis I run a VPN server, WLAN controller and DNS server. You would have an issue if they were all deployed on virtual machines on one host. However, having 16 Pis is maybe only for research.
The fact that you guys have a startup project in your hands is amazing! The SBC world is the future of computing to me. Cost reduction, accessibility, and popularization of technology will make businesses develop products and markets for SBCs, at least at the SOHO level, maybe up to small-sized businesses. Long life to SBCs! Keep it up Sir!
Maybe SBC, but definitely System-on-Chip. We went from mainframe to mini-computer, to micro-computer which became the PC, and now it's all SoC: smartphones, embedded, Chromebooks, the Apple M1 laptops and Mac Mini, etc.
@@autohmae I cannot agree more. Cost reduction is definitely driving technology more than ever before. Turing Pi (a board with 7 RPi slots) for clusters and small servers is another clear clue about the direction of the tech avant-garde. "Computing everywhere" is another clue. Combined with "low cost", it's generating a totally unpredictable direction!
@@derekgoodwine7509 As Moore's law isn't giving us as much more raw (single-threaded) performance, it looks like we'll get more specialized chips (or chip parts) to do certain tasks faster. Seems to me all the server parts will move to running on Kubernetes eventually. The Linux kernel and Kubernetes combination is also getting better and better at running "multi-tenant" (secure enough for different customers/companies/persons to run in the same environment without virtualization), allowing for much more density and greatly reducing the price of compute, as Kubernetes is a very generic platform that allows for easy competition.
My concern with this is that we end up with a bunch of non-expandable, non-repairable computers. That's fine for something like a $100 Pi, but not so fine for a $5k server.
@@Alpha8713 The bigger worry for me is actually Microsoft bringing out Windows for such a platform, locked down with UEFI and Secure Boot, TPM, etc., and Microsoft having the power to flip one bit so that Linux can't boot on it anymore. And the EU will slap a fine on them which is smaller than their profits, and MS will just say: you can run a VM or Windows Subsystem for Linux, that's fine, right?
Hi! Really enjoy your content.
I'm curious what actual work you do with the cluster?
Good question
I think:
1 Pi-hole
2 Local DNS with Pi-hole
3 VPN server
4 Home Assistant
5 Node-RED
6 Zabbix
7 Web email server
8 Local NAS (HDD might be an issue)
9 Grafana
10 Database server
11 DB server (slave)
12 Rhasspy master
In 1U for home
For me, I build ARM64 Docker images on my Pis sometimes, and also run tests against ARM via Jenkins. Other than that, I have a number of Pis currently running Prometheus + Grafana for monitoring, Docker for hosting some of my small websites, and various odds and ends that don't need a ton of power to run.
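As a tiny concrete example of the monitoring piece: each node can expose its own metrics endpoint for Prometheus to scrape and Grafana to graph. A minimal sketch, assuming the prometheus_client package; the sysfs path is the usual one on Raspberry Pi OS, and the port number is an arbitrary choice rather than anything from the video:

```python
# Minimal sketch of a per-Pi exporter that Prometheus scrapes for Grafana.
# Assumes `pip install prometheus-client`; adjust the sysfs path per board.
import time
from prometheus_client import Gauge, start_http_server

cpu_temp = Gauge("pi_cpu_temperature_celsius", "SoC temperature in Celsius")

def read_temp() -> float:
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

if __name__ == "__main__":
    start_http_server(9101)      # scrape target: http://<pi>:9101/metrics
    while True:
        cpu_temp.set(read_temp())
        time.sleep(5)
```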
@Supernova I just added them in here, some can be combined.
But
The list is cool
☺️☺️☺️☺️
@@JeffGeerling
I missed Grafana thank you
Can't wait until ESXi for ARM supports the CM4. These blades would be a perfect fit for it.
Thats nice and all. But can it run Crysis?
Oh. Geerlingguy. I've been using your ansible role for years now. So nice to find you here.
Yes, and troubleshoot/reinstall every "blade" without any sort of IPMI. Sure. Run around in circles with a monitor, keyboard, and boot media. That's weird. Why would you need PoE in a server room but no remote management solution? Tell the developer of this board to look at the PiKVM project. It might require a TC358743XBG on every "blade" to make bulk management less painful and more logical.
Agree, it is missing functionality.
This is another way to do that with software, but in a slightly different way:
github.com/go-vgo/robotgo
@@gedw99 Nah, that's different. You want to control the device before it even boots. Before any OS. Be able to navigate the BIOS, if any. Select boot options, change boot media. Physically reboot, cycle power. And all that remotely.
That's how servers are meant to be managed.
Awesome idea!
Appreciate the low-cost cluster build on top of the rack-mount form factor.
I don't have a serious usecase for these.
I don't want these.
I don't want to spend money on these.
I don't need these at all.
I don't care anyway...
SHUT UP AND TAKE MY MONEY!! NOW!!1!!11 🙈🤣
Hope production ramps up and it comes to market soon. Can't wait to buy one.
Lol that's my name at the end
That 10" mini-rack is SO CUTE!!!
jeff: "higher density arm" me: laughs in ampere
Ampere so far won't give me the time of day :(
With the way computing power has grown, a rack of these Pis could easily replace the multiple flight computers in a major rocket or aircraft - the ones where 5 computers have to reach a consensus before an adjustment is made to the motor gimbals or wing flaps.
I have to say this might be the coolest thing you've got to beta test. I mean this is so cool, all I can say is FUCK!
This looks like a great project. I see many suggestions, so I'm going to toss in a few. To me it looks like the board would have enough room for a Cisco-style RJ45 console port on the back (it could even be one of those low-profile ports); then you won't need to pull out a board to fix something (see why it's not booting, fix a network problem, messed-up boot, need to run fsck, etc...). Put a light pipe on the lights to make them more visible outside the rack. Maybe even change the Ethernet jack to a low-profile one as well; it would help with airflow a bit between boards, or let you squeeze in an extra board or two per rack. Might be a bit crazy, but a 2U rack case with room for larger fans could hold quite the number of boards all loaded up.
totally useless for my usecase. i need it
I would prefer some kind of connector on the back side for power:
- The connector should be cheap and common, so home-made chassis could be built. A DB-9 or DB-15 connector should be sufficient.
- A simple +12V ... 24V power supply input would make a simple lead-acid battery based UPS possible.
- An RS-485 style bus could be added to the connector for backplane communication.
- Some GPIO pins would make slot position detection or other communication possible.
- A small microcontroller could be added for baseboard management. It could measure supply voltages and temperature, control the fan, power cycle or reset the Pi, and select its boot source. It could be connected to the RS-485 bus and communicate with the Pi by other means (I2C?) if populated (a rough sketch of what that loop might do follows below).
I think 1Gb PoE switches are still a bit too expensive, and the many power conversion steps mean bad energy efficiency for a generally low-consumption device.
(UPS battery -> UPS inverter -> switch PSU -> PoE regulator in switch -> Pi's PoE PSU vs. UPS battery -> Pi's DC-DC PSU)
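For illustration only, here's a very rough sketch of the poll-and-report loop that little management controller could run, prototyped in Python on a Linux host rather than as real microcontroller firmware. The I2C address, register, and RS-485 message format are all invented, and it assumes the smbus2 and pyserial packages.

```python
# Rough sketch of a baseboard-management poll loop: read a temperature over
# I2C and report it on a shared RS-485 bus. All addresses/framing are
# hypothetical; a real design would run this on the microcontroller itself.
import time
import smbus2
import serial

SLOT_ID = 3                       # would come from slot-position GPIO pins
TEMP_SENSOR_ADDR = 0x48           # hypothetical I2C temperature sensor
TEMP_REGISTER = 0x00

bus = smbus2.SMBus(1)
rs485 = serial.Serial("/dev/ttyS0", 115200, timeout=1)

def read_temperature() -> int:
    """Read a one-byte temperature (in °C) from the hypothetical sensor."""
    return bus.read_byte_data(TEMP_SENSOR_ADDR, TEMP_REGISTER)

while True:
    # Report "slot,temperature" onto the shared backplane bus.
    rs485.write(f"{SLOT_ID},{read_temperature()}\n".encode())
    time.sleep(2)
```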
The raspberry pi has been upgraded to a raspberry cake
I'm still waiting for this to make it to market. Still very impressive.
There is an LED kernel module you can (afaik) configure to use a GPIO for the LED.
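Right, the Linux LED class: once the kernel (gpio-leds or the board defaults) exposes an LED under /sys/class/leds, you can drive it from userspace. A minimal sketch, assuming root; "led0" is the usual name for the Pi's ACT LED, but the name and available triggers vary per board and overlay.

```python
# Minimal sketch: blink a Linux LED-class device (e.g. the Pi's ACT LED)
# as a crude "find this blade" indicator. Requires root; the LED name and
# trigger availability depend on the board and kernel config.
from pathlib import Path
import time

LED = Path("/sys/class/leds/led0")

(LED / "trigger").write_text("none")      # take manual control of the LED

for _ in range(10):                       # blink for ~5 seconds
    (LED / "brightness").write_text("1")
    time.sleep(0.25)
    (LED / "brightness").write_text("0")
    time.sleep(0.25)

(LED / "trigger").write_text("heartbeat") # hand it back to a kernel trigger
```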
The only thing I can think of that I would really find useful to change is to have the LEDs on a 90° header. If you had a lot of them in a farm, you'd want to be able to see the status without looking at each one up close.
Thanks. I am looking forward to this PI fork (?) moving from lots of a/d pins to rack clusters. I still like having PI controllers, but this is exciting.
Wow can't wait to see the performance of a fully loaded rack!
Happy to see Raspberry going from tinkering boards to more serious stuff; I hope they go even more in that direction. Also, the outtakes were golden!
Yeah... We're gunna have to have a "talk".....
About you triggering my Echo device with your dang video voice haha 😂
Keep on keeping on!
Thanks for making this video, I got into making SBC cluster nodes about a year ago! Hit the nail right on the head! Great content 👍
This is really neat. I'm still running the v1 version as my cluster :') with Pi 3 B+'s.
Would be interesting to see a long-board version, with sockets for 2 compute modules, 2 NVMe, and either 2 Ethernet ports or some manner of onboard switch to join the two devices onto the same Ethernet port, so that you can get even more compute per blade.
It's nice that some young, clever people are starting to make useful carrier boards for the CM4. I have a bit of an aversion to Kickstarter; I hope I'll be able to buy these on Amazon soon.
I'm with you on the hesitation with Kickstarter. The few projects I backed, I mostly was donating to the builder, since I believed they had a fighting chance, but it is always a gamble whether they come through in the end! I don't doubt Ivan on this board, though there are always unforeseen challenges going from a few or a few dozen to hundreds and thousands!
Can't wait for these to be available, have 4 Pis ready to transfer with external m.2 drives attached.
Nice! Love the back to the future DeLorean graphic on the board.
This is what I am looking for! Can't wait to order some of these.
Love the bloopers at the end:)
I don't have an idea what this is for or any use for it but this is very cool.
There's potential for cooling the M.2 NVMe drive.
Maybe using a Silverstone heatsink will do the trick for longevity.
Also, during a full-load benchmark the NVMe drive can sit at a constant 31 degrees.
What are you even talking about? Cooling? Longevity?
When the Pi's feeling super jazzy it'll read/write at 400MB/s and maybe 15k IOPS. That's less than 5% of what a decent M.2 drive's controller is capable of.
Whatever amount of airflow the case has to cool the Pi will also take care of the NVMe drive, no tinkering required.
Seeing the depth, I'm quite sure you could put 32 of those back to back (you'd probably need central airflow though); that would be 128 ARM cores in 1U!
Interesting! Neat seeing something like that though!
Wow..... just wow. So impressive and a HUGE congratulations to that guy and you!
Interesting solution, but there is the BitScope Blade cluster (2x or 4x) and also their blade center. It's not an IBM, HP, or Dell production cluster, but for practice it's perfect.
It's amazing to imagine that Pi server cluster! I have a suggestion about extending storage with a SATA interface and power management, so that each module can have SATA disks and use shared power management for its supply. That would be a real server!
that desk stand is beautiful, would insta buy if i could
It seems like these kinds of stands are more available in Europe; here's the link to the one in the video: www.rack-magic.com/Mini-Rack-4826mm-19-Rack-Stand-6HE-9HE-10-Zoll
Would like to see a dual version of this - full 1U enterprise depth with two compute modules and two RJ45 on the front for up to 160 ARM64 cores per RU.
For the guy that built these: any chance he has some video of soldering the M.2 connector? I would love to see it done, and I think that alignment would be critical.
Perfect k8s setup with that badboy right there..
I'm a new subscriber and already love your vids. Thank you!!!
Needs more pcie lanes, but this is a really cool concept. Thanks for showing it off!
What an amazing project, I would love to have one.
These look like the ultimate Pi rack solution. Not sure why you'd use PoE as they are all side by side; just run a power bus along the back.... and save your PoE for distant things with no handy power source, like Starlink, IP cams, etc.
Now a few videos on what can be done with these clusters would be handy... it's right outside my expertise, and probably that of many others who use Pis singly.
Oh, and love your mini-rack, cute (never seen one before).
Genius! Let's fund this project and go!
I wonder if it would make sense to have a shared power supply for them in the enclosure, instead of PoE, and then connect it somehow.
great job as always jeff
I wish we could get a RasPi with more I/O. Even if it was more expensive and didn't have as much compute power, I'd be fine. A version of the RasPi with a focus on more I/O would be great!
Love your projects.. this comment regards your attempt to power a GPU on a Pi from 05/13. IMO a crucial point of failure was the PCIe adapter used; for this type of GPU and higher you should use an x1-to-x16 GPU riser (aka mining riser) with a 6-pin connector from the PSU, as the GPU draws 25W-75W from the PCIe slot, varying by settings and use.