Took CCNA classes in community college (4 semesters of it) as electives for my gen ed. I now have two Mellanox switches: one managed 32-port 40G QSFP and one 8-port 40G QSFP. PITA to find the firmware to get the modules connecting lol
If you're talking about the Mellanox VPI switches that default to InfiniBand, there are guides out there to upgrade to Ethernet capability, but after mucking around with it for a while I just bought a "license" from a guy on eBay. It was under $50, and to me it was worth it to avoid the headache. I've been using my SX6036 for a couple of years now with zero problems.
Edit: I saw a post that now seems to have been deleted talking about the actual QSFP modules. I can't speak for all of the Mellanox switches, but mine is very accommodating as far as brands go. I'm using QSFP DAC cables in my rack instead of modules and fiber, but the same compatibility should apply. I've used "generic" branded cables, Mellanox, Dell, and I believe I have some Cisco that worked as well. Basically all the ones I've tried have worked, but it's always good to start with something on the HCL to eliminate the unknowns when you're troubleshooting.
It was nice to hear you include some information about Hyper-V and iSCSI.
4:15 "8GBps or.... (mentally does 8*8) 32Gbps"... errr haha
Yes, which would be 64, not 32. 😂
@@Derek.Iverson you'll only get about 40 anyways because of overhead on each lane.
@@Derek.Iverson Off by less than a power of ten, basically correct 😀
@@kevinerbs2778 I find that hard to believe. Can you back that up with sources?
Here's why I doubt your statement:
USB 3.x Gen 1 → 8/10 bits available for data; 2/10 bits reserved for protocol; 20% overhead
USB 3.x Gen 2 → 128/132 bits available for data; 4/132 bits reserved for protocol; ≈3% overhead
According to your statement, "you'll only get about 40 anyways because of overhead on each lane":
40 ÷ 64 = 0.625 = 5/8
-PCIe Gen 4¹ → 5/8 bits available for data; 3/8 bits reserved for protocol; 37.5% overhead-
I had no idea USB Gen 2 had so little overhead compared to PCIe Gen 4. Right? Why use PCIe Gen 4 instead of USB Gen 2? ¯\_(ツ)_/¯
Here's the real overhead:
PCIe Gen 4¹ → 128/130 bits available for data; 2/130 bits reserved for protocol; 1.54% overhead
Here's my _correction_ of your statement: "you'll only get [63] anyways because of overhead on each lane."
btw- You make it sound like the overhead compounds with each lane. It doesn't. Even if it did, 1.0154⁸ - 1 = 13% (not 37.5%).
128 ÷ 130 = 98.46% = (128×number_of_lanes) ÷ (130×number_of_lanes) → the number_of_lanes cancel out
¹ According to *Rambus* :
*The PCI Express (PCIe) 4.0 protocol has some encoding overhead, but it's 98.46% efficient, resulting in an actual bandwidth of 63 GB/s. This is due to the 128b/130b encoding scheme, which has an overhead penalty of less than 2%.*
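A few lines of Python (my own quick check, not from the thread) confirm the overhead figures quoted above, and that lane count cancels out rather than compounding:

```python
# Sanity check of the line-encoding overhead figures discussed above.
# Lane counts and link rates are the standard published values.

def encoding_efficiency(data_bits: int, total_bits: int) -> float:
    """Fraction of raw bits that carry payload under an Nb/Mb line code."""
    return data_bits / total_bits

usb3_gen1 = encoding_efficiency(8, 10)     # 8b/10b    -> 0.80   (20% overhead)
usb3_gen2 = encoding_efficiency(128, 132)  # 128b/132b -> ~0.970 (~3% overhead)
pcie_gen4 = encoding_efficiency(128, 130)  # 128b/130b -> ~0.9846 (~1.54% overhead)

# Overhead does not compound with lane count: the lane factor cancels.
lanes = 8
assert (128 * lanes) / (130 * lanes) == 128 / 130

# PCIe 4.0 x16: 16 GT/s per lane, 16 lanes, 128b/130b encoding.
usable_GBps_per_dir = 16 * 16 * pcie_gen4 / 8   # ~31.5 GB/s per direction
bidirectional_GBps = 2 * usable_GBps_per_dir    # ~63 GB/s, matching the Rambus figure

print(round(1 - pcie_gen4, 4))       # 0.0154
print(round(bidirectional_GBps, 1))  # 63.0
```

(Note the Rambus 63 GB/s number is the x16 link counted in both directions; a single direction is ~31.5 GB/s.)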
@nster3 I can't believe they didn't pin or heart your comment. Can you? :-)
Honey.... I need a bigger stack of Benjamins, there's another piece of tech calling my name. 😂
I'm totally calling my boss tomorrow. Before you ask, my boss is not my wife (even if it feels that way sometimes!) ;)
Really nice video 👍
4:34 -- yes, SAS dual porting as distinct from SATA.
14:12 -- iSCSI ❤. However unless clustered, iSCSI direct to a workstation in most homelabs is limited to that single workstation for practical purposes. With clustered clients using iSCSI, SAN comes to mind. Worth the home experimentation and *fun* though.
Kindest regards, neighbours and friends.
The other Tailscale thing that doesn’t work on Synology is “accept-routes” if you’re doing advanced LAN subnet linking via another device. The Synology Tailscale client can’t do it. Shrug.
13:40 Another note about SMB MultiChannel: if you enable LACP/port aggregation on your NICs, SMB MultiChannel will be unavailable. Many people will opt for aggregation over MultiChannel.
5:00 This topic of NVMe dual pathing got me thinking about the Dell PowerStore T line, which does active/active NVMe storage. I wonder how they are communicating with all the NVMe drives from both nodes.
9:57 screen recording scrolling flicker. I'd bet money you're on Wayland 😂
It's actually finally getting fixed! Explicit sync has been merged on most DEs; just need the various distros to push out the updates.
@@REALfreaky what about all of the flickering in all the applications, hover tooltips?
Little different than the ugreen crap everyone else is shilling this week.
God I hate Ugreen so much
Total garby
The non "apple channel" reviews I've seen all call the ugreen mediocre at best. Which to be fair to the Apple people, they used to trash hardware with non-existent storage, so I can't really blame them for getting excited... 😅
@@KiraSlith Ugreen is somewhat known in the power brick, USB cable, and USB NIC sphere, but they're certainly unproven in the storage host market. Most of the channels I've watched have been optimistic about the future but realistic about their performance now, and not ignoring faults, like the PCIe switching being suboptimal IIRC.
@@IanBPPK I'm not saying the brand is bad, there's no need to get that defensive. Most of my USB3 extensions are ugreen, 2 of which have lasted me 5 years already. I'm just saying the NAS is mediocre, and a lot of the strong hype is coming from channels that don't really have any experience with good hardware.
@@KiraSlith🤦♂️
That 25Gb switch shortage on the second-hand market seems to be a very recent thing. I've been upgrading my network infrastructure for the past year, and you could find a bunch of 25Gb switches around the 800 euro/USD mark (I forget which I checked at the time), but for the past few months all of those switches have been bought up and there has been a lack of new switches coming in at that price point. Let's hope some large datacentre somewhere upgrades to 56Gb, as that seems to slowly be becoming a thing.
Putting the NAS in nasty.
Thx 4 being there 😊
I actually wanted to grab one of these for some FC tests and labs against some Cisco MDS switches, maybe try some FCoE etc., just for cert and LAN experience. Problem is, the total noise, heat, and power use for 4 switches (2 Nexus, 2 MDS) starts to get insane for a home environment.
Either way, a lovely machine! Even having a nice HA LDAP box is totally worth it. But the noise....😂 I can't wait until these things advance to the point of 30 dBA or less!
Thanks for the review 👍
Man Level1 videos have absolutely the best music - don't think we aren't noticing
Played with a Synology DS920 that a consultant spec'd but never used... It kept reaching out to Turkey, so I unplugged it from my network.
Great report. In terms of hypervisor performance, have you tested Proxmox in contrast with VMware's future plans? It might be an opportunity to see what the alternatives are for an upcoming video, especially in the context of using this type of gear.
This model of Synology kinda leans towards a Dell PowerScale type of use case, granted not at the same level of use cases or price range.
As always, enjoyed the video! A little sad the 3020 couldn't hold its own against it; love those boxes. I guess ultimately it's held back by the lower-SKU CPU and limited memory bandwidth, to differentiate it from its peers and lower the price point.
Man, I was already pretty happy with the old RS3617xs+ with 10Gb links and iSCSI mpio.... somehow I missed synology making dual-controller units.
Yeah, I have an 1821+ and a 1621+ and neither has Synology drives in it. Same with the M.2 cache drives and 10Gb NICs, and I upgraded the RAM to 32GB and 16GB respectively with 3rd-party modules. They run great (for well over 2 years now)!
Ahh I wish I could get my hands on one of those. For a variety of reasons, we use a much slower model and spinning drives. At least the model finally has a 10G interface now. I like DSM.
SCSI makes so much sense for redundancy and reliability; makes sense why it's used instead of SATA and NVMe.
Hey Wendell, you said you wanted to post a video about the WRX90E / North XL build... I'm still waiting for it!
4:07 *Then you've got NVMe-which for PCIe Gen four is **_eight gigabytes per second._** Or (4-second pause) thirty-two gigabit.*
8 GB/s × 8 b(its)/B(yte) = 64 Gbps. But, PCIe Gen 3×4 ≈ 4 GB/s → 4 GB/s × 8 b(its)/B(yte) = 32 Gbps.
Did Wendell _miscalculate?_ Or (4-second pause) did Wendell _misspeak_ - as Wendell is prone to do? ¯\_(ツ)_/¯
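For what it's worth, the byte-to-bit conversion and the usable link rates can be sketched in a few lines (my own illustration, using the standard 16 GT/s Gen 4 and 8 GT/s Gen 3 per-lane rates and 128b/130b encoding):

```python
def gb_per_s_to_gbps(gb_per_s: float) -> float:
    """Convert gigabytes per second to gigabits per second (1 byte = 8 bits)."""
    return gb_per_s * 8

print(gb_per_s_to_gbps(8))  # 64.0 -> 8 GB/s is 64 Gbps, not 32

# Usable bandwidth per link after 128b/130b encoding overhead:
gen4_x4 = 16 * 4 * (128 / 130) / 8  # ~7.88 GB/s -> ~63 Gbps
gen3_x4 = 8 * 4 * (128 / 130) / 8   # ~3.94 GB/s -> ~31.5 Gbps (the "32 gigabit" figure)
```

So "32 gigabit" only pencils out for a ~4 GB/s link like Gen 3 x4; a Gen 4 x4 link at ~8 GB/s is ~64 Gbps.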
For the scenario where you don't have the built-in dual High Availability - don't you still need two identical Synology servers? Their documentation says this:
"Implementation of Synology High Availability requires two identical Synology servers to act as active and passive servers."
@@TechBro-125 21:01 I'm referring to the statement about having two cabinets, one high-performance and one with mechanical drives. I might have misunderstood, but it's only possible to do Hyper Backup and the like; for the High Availability feature both devices must be identical.
Is there a NAS with the hot-pluggable capabilities on which I can use open-source software?
Meehhh, I can get my MT27500 ConnectX-3 or NetXtreme II BCM57800 (both 10 gigabit) and a file manager to show 1.1 GB/s... over 30m of copper! Sure, it started at 3; what was the limit? HDD write?
Some iperf tests would have been nice. Anyhoo, always nice to see a Level1Techs vid,
...but ACTUAL measured performance would have been welcome when we're meant to be focused on the 25 part of it all....
Edit: NB, not one of the 14 thumbs-down folks... can't wait for "links" with friends later 💋
You mentioned 25Gbps for video editing. Is it worth it for a single-user video editing NAS? I need a lot of storage and I would really appreciate redundancy. I can get ConnectX-4 cards for a reasonable price and was wondering whether it was worth it. Also, is it OK to run the ConnectX-4 in my editing PC, which has a 5900X and a 4070 Ti? Will it be affected by the limited PCIe lanes?
I bought a SAS HDD by accident, because it was such a good price. I'm looking to put it into an externally powered enclosure to back up my PC periodically: a do-it-yourself external HDD backup product.
However, I cannot find an enclosure under $100 that will take the SAS interface and then output to SATA or USB 3.0. Any suggestions?
What is this meme at 9:52 that flashes on by lol
A bit off-topic: is there any way to run a Windows VDI on a Proxmox server, preferably with free solutions?
@Level1Techs You failed to mention it's not active-active; it has some delays in failover. What were your results on a VMware multi-node cluster with HA/DRS when one path is down, with iSCSI round robin enabled?
I have a UC3200, which is active/active but doesn't run the normal Synology OS with all the normal apps; it's only for storage. Mine is running back-end storage for MS SQL high-availability clustering with MPIO enabled. For less-aware applications using the shared IP, it took about 4 seconds to fail over on node failure in testing (physically unplugging the controller from the chassis as the fault).
Oh OK, how about applications?
Yo, you should check out the UC3400 from Synology! Also, they need to come out with a 2.5" drive-bay-only version! Imagine 24 drives in an HA 2U SAN; that would be crazy!
Would a ConnectX-4 work with 4 PCIe lanes? (I only need 1 port to work at full speed.)
It's different than other NASAs?
The 100GbE Intel Ethernet Network Adapter E810-CQDA2T may be worth a look. FYI - you'd need about 46,451 bananas stacked on top of each other to reach the height of Mount Everest!
You DO know that we mere mortals will only get near that stuff 20 years from now, when it's sold on eBay for pennies on the bitcoin... right?
Is NAS singular and plural like deer, or is it NASs or NAS's...
So I get that virtualizing a router is "forbidden," but I have not only that but also a "forbidden" NAS on my Proxmox box that I've had set up for quite some time, all on AMD on an Aorus X570 board... I also managed to get traffic shaping and failover set up on pfSense CE (virtually), as well as a Shinobi server running about 10 4K cameras.
Am I doing it wrong?
Absolutely. There are dual-2.5GbE micro PCs for like $140 that can likely handle your WAN connection with OPNsense at 5 watts. I used to run a virtual router, but only because the available devices weren't up to par and lacked dual 2.5GbE.
Just run the patch that adds the drives to the supported list; makes them all good again.
Good old Fischer Price!
But Can It Run Crysis?
(plugs in 10/100 PCI card) "whats that metal plug thing on his ethernet cord?"
TL;DR: This 20,000€ server gives you 25-gigabit transfer speeds over the network.
Products: (1) Synology SA3400D; (12) Kioxia PM7-V SAS SSDs
Most of my network stuff is small and private, almost cozy. The equipment is still interesting just because of how far back in generations it's possible to go before the upgrades no longer help. Onboard 1GbE is fine for most of my HDD jobs but dual 10GbE SFP hits in all the right places and is a wonderful security against bad modems that kick everything off the network during an Internet outage. Nothing I have is new or durable enough to invest in anything more. Maybe the day I retire this Ryzen box.
Wendell sure is religious about Windows. Took him 17 seconds to start complaining. Mellanox ConnectX-4 25GbE cards work great in all my Windows machines and my Mac Pro.
I'd rather have an iX Systems X10 or X20 for the price/spec ratio of this thing.
Casually dropping 2-3 giga on the table for all to see .
And how much is this? $10k. Much better to buy a Dell rackmount and put NVMe in it; you could probably afford 100-gig. Synology software is good, though!
You could run mcafee on your nas....don't do that.
8 GB, which is.... 12 million
Dat Nas
Nah, I still call it Plug And Pray...
Bloody Ubuntu in the background WHAT THE HELL
Spinning rust huh, well I’ve seen more rust on SSD than HDD internals
Until relatively recently magnetic media literally used micropatterned iron oxide as the magnetically active material.
1gbit here 😂😂😂😂😂😂
NASty
politics is the best form of Planned Non-Parenthood.
Enterprise grade... LOOOL. Never buy Syno for any important enterprise-level stuff. It's a home toy with a hefty price tag. Best to keep your fingers away from Syno.
Who would pay $10k for that, what a joke 😂
Any "enterprise" pays this much for a fraggin' switch alone... 1-gig may be slightly cheaper in 2024, but even then, only slightly.
And consider that this can be a highly available server for some unimportant stuff which used to live on a VMware cluster...
And even if you're not looking for a HA place to put your VMs in, it's cheap as potatoes where I'm from!
And it even has Xeons? No one ever got fired for buying Xeons!
This is the kind of thing that some companies (or, more precisely, employees like myself) enslaved by the Distinguished Architects are looking for... I know I'm looking for hardware that my Distinguished Architects would ummm... "certify". This Xeon-based small box sounds like a cheat code to super cheap iSCSI SAN, so we'd suddenly get a SAN in auxiliary locations (like colocation at a stock exchange, where we don't have SAN and the Distinguished Architects need months of convincing that 4 servers (with Xeons, of course) in a configuration able to pick up work within 1 minute is good enough "HA").
Stuff like this may just be a game changer for those fuckers with concrete inside their skulls. Believe me, it's CHEAP, too cheap to convince those morons that it can do anything reliably, not to mention anything they have on their checklists. But hey, it's Xeon, this actually heightens my chances!