Over 1PB of Storage Dell EMC PowerEdge XE7100 Review

  • Published: 13 Mar 2021
  • STH Main Site Article: www.servethehome.com/dell-emc...
    STH Merch on (Tee)Spring: the-sth-merch-shop.myteesprin...
    STH Top 5 Weekly Newsletter: eepurl.com/dryM09
    In our Dell EMC PowerEdge XE7100 review, we see how this 5U system handles 100x 3.5" HDDs, with flexible CPU, GPU, and SSD options
  • Science

Comments • 276

  • @JeffGeerling
    @JeffGeerling 3 года назад +328

    Here I am putzing around with my 16 hard drives thinking I'm hot stuff...

    • @ikkuranus
      @ikkuranus 3 года назад +27

      No, you can't attach 100 drives to a cm4

    • @nilswegner2881
      @nilswegner2881 3 года назад +3

      You are!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +54

      Somewhat different concepts though. The RPi w/ RAID was very cool!

    • @nilswegner2881
      @nilswegner2881 3 года назад +1

      @@ServeTheHomeVideo I agree, I'd love to have something like this myself

    • @JeffGeerling
      @JeffGeerling 3 года назад +12

      @@ikkuranus *so far* 🤪

  • @danielanderson9052
    @danielanderson9052 3 года назад +191

    Linus is calling Dell now asking if he can trade in his gold Xbox controller for this fully decked out

    • @francismendes
      @francismendes 3 года назад +31

      New New New Whonnock (Jake: Whonnock 4!)

    • @Momi_V
      @Momi_V 3 года назад +5

      @@francismendes well, this would be more like super petabyte project... Whonnock is PCIE SSD only

    • @itmkoeln
      @itmkoeln 3 года назад +6

      @@francismendes New Petabyteproject...

    • @francismendes
      @francismendes 3 года назад +2

      @@Momi_V you're right... Whonnock is geared towards performance, not raw storage space...

    • @QuentinStephens
      @QuentinStephens 3 года назад

      He's already got 1 PB boxes.

  • @fckngcheetah
    @fckngcheetah 3 года назад +86

    Hyped to see that server in 4-5 years on the used market 👌👍

    • @ndragon798
      @ndragon798 3 года назад +6

      @@psori depending on where you are in the world, 15,000 kWh is less than $1,500 USD, which isn't a bad deal if you are using this for business or are a power user and need the space.

    • @majstealth
      @majstealth 3 года назад +5

      @@ndragon798 €4,500 in Germany, but that is at the consumer rate; high-power users pay less, because "reasons"

    • @thegreenguy8837
      @thegreenguy8837 3 года назад

      @@majstealth I love how you put "reasons" in quotes, because that is exactly how it works in Germany. Companies over everything.

    • @colonelangus7535
      @colonelangus7535 3 года назад +1

      @@psori
      Solar arrays are a thing.

    • @boemlauw
      @boemlauw 3 года назад

      I said that 4 years ago, had to install 2 racks of InfiniFlash units @ 500k euro each.
      Don't hold your horses, it's still pretty steep :/

  • @MrDirectNL
    @MrDirectNL 3 года назад +42

    Nice addition to my home lab 😁

  • @corrgie9830
    @corrgie9830 3 года назад +47

    The forbidden Plex server

  • @Mireaze
    @Mireaze 3 года назад +119

    Damn, finally a server big enough to hold all my por... Projects, all my projects!

    • @Felix-ve9hs
      @Felix-ve9hs 3 года назад +12

      P... Plex Media Server

    • @logikgr
      @logikgr 3 года назад +8

      Geez that's enough for all of my P...PDF files.

    • @excitedbox5705
      @excitedbox5705 3 года назад +2

      Ah yes more space for my portable media library

    • @DigBipper188
      @DigBipper188 3 года назад +2

      Let's be honest - *what's the difference?*

    • @PWingert1966
      @PWingert1966 3 года назад +2

      Cat pictures bro'!

  • @irgendna
    @irgendna 3 года назад +13

    Ok but where's the satisfying sound of how this beast starts up )

  • @watministrator
    @watministrator 3 года назад +8

    Dude, it's been 5 years since I've been in a DC; I've not touched servers like this in forever. Now, working for a hyperscaler for 4 years and being significantly abstracted from DC ops has me so jealous of both you and DC ops in general. I seriously miss this and am very jealous of you getting access to this.

  • @SteelHorseRider74
    @SteelHorseRider74 3 года назад +18

    Boss: "We've got a new server delivered, go move it to server room, unwrap and rack it..."
    Trainee: o_O

  • @CrazyLogic
    @CrazyLogic 3 года назад +21

    This box looks like an ideal cache/proxy for streaming services: lots of spinning rust, lots of NVMe, and multiple low-profile GPUs to handle transcode offloading.

    • @Alpine_flo92002
      @Alpine_flo92002 Год назад

      This is legit just the perfect "I get it all in one system and can upgrade by just adding another one" instead of having to adjust your structure.

    • @CrazyLogic
      @CrazyLogic Год назад

      @@Alpine_flo92002 yep, as few as possible to maintain a minimum redundancy level is good enough

    • @dercooney
      @dercooney Год назад

      just get a disk shelf and plug it into the gpu box of doom. it'll be fine

  • @Allyouknow5820
    @Allyouknow5820 3 года назад +22

    "Next, get ready for MILAN" OOOH YEA !!!
    And thanks Patrick, as a tech writer I know what you mean by "Excellent product let down by marketing".
    I've found myself explaining their own products to PR people a number of times, and why they could probably sell 5x more if they actually made marketing that makes sense.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +9

      It is a passion thing. People passionate about technology understand it and can communicate it.

    • @godslayer1415
      @godslayer1415 3 года назад

      @@ServeTheHomeVideo Well then you have lost the passion...

    • @PWingert1966
      @PWingert1966 3 года назад

      In Intel's case it's excellent marketing let down by Product! 😁

  • @R2_D3
    @R2_D3 3 года назад +3

    ''Recording this at 4am before I had coffee!'' Could have fooled me!! :)

  • @akurenda1985
    @akurenda1985 3 года назад +9

    From Patrick in 2021, the rear is where the real magic happens.

  • @thiesenf
    @thiesenf 3 года назад +4

    CERN: That will only last 1 minute until we have filled it with data from one of our LHC test runs...
    They can generate something like 1TB/s...

  • @shammyh
    @shammyh 3 года назад +2

    Excellent content, as always!

  • @sparklyballs2
    @sparklyballs2 3 года назад +31

    expensive domino run on the desk there.

    • @thomasb1521
      @thomasb1521 3 года назад +2

      It's stressing me out just watching

  • @goldenteegreatshots
    @goldenteegreatshots 2 года назад +1

    I remember when I got my 5x Dell EqualLogic PS6100XV with all 600GB 15.5K SAS drives, which was 120 drives, and I was so excited to populate it. Fast forward 8 years and now they sit in my garage collecting dust because I can't even give them away lol. They had 3x PE910s which are also collecting dust. Crazy how it all becomes so obsolete so fast. They would be nice plat servers, but I no longer live in the cage at the data center and couldn't even power these babies up. I'm sure, like someone else said, in 5 years they'll be barely worth the power it takes to boot them up.

  • @dupajasio4801
    @dupajasio4801 3 года назад +9

    Again, perfect timing. I need something like that at work. But... agreed, going Intel and not Epyc is really disappointing. Dell must be selling these to the big boys, so they don't care about marketing. I didn't even know such a system existed. You are such a good source of info!!!

    • @nilswegner2881
      @nilswegner2881 3 года назад +1

      I agree. Intel CPUs are not very Epyc anymore

  • @kelownatechkid
    @kelownatechkid 3 года назад +1

    Interesting. I use ceph at home with 24/36-bay supermicro servers. In 5-10 years once these hit the used market, this kind of system might be a good way to increase capacity without increasing the number of nodes in use significantly.

  • @SlothTechTV
    @SlothTechTV 3 года назад +2

    Another great video! Thanks for creating such technical and informative content!! Its always fun to watch these videos. :)

  • @LouisSubearth
    @LouisSubearth 3 года назад +8

    This is it, the server I'd buy and put in an old broadcast van to make a pirate TV station, it definitely has space to put enough drives to store enough TV shows to last a lifetime!

  • @ravi0maan
    @ravi0maan 2 года назад +1

    described in the best way ❤️

  • @crazy_human2653
    @crazy_human2653 3 года назад +6

    I would love to see this filled with the ExaDrive EDDCT100 (SATA) or EDDCS100 (SAS), which are 3.5 in form-factor drives with 100 TB of space each; too bad they are $40k per drive.

  • @jolness1
    @jolness1 3 года назад +1

    My TrueNAS is feeling inadequate.
    This looks well executed; I couldn't fill 100 drives, but I would like something smaller that is this well executed.

  • @AdrianSchwizgebel
    @AdrianSchwizgebel 3 года назад +3

    Insane... The company I work for, combined with the last company I've worked for could fit all their data on the four 960GB SSDs from one of the controller modules, in RAID10...

  • @zenja42
    @zenja42 3 года назад

    Yes, I've seen 4U JBODs at my customers, and they have a 1U server on top of each box.
    If you are talking about deep racks, what size do you have in mind? I normally fit out our colo with 1200x600 or 800 and 47U.

  • @Burnman83
    @Burnman83 3 года назад +2

    You know you are a great YouTuber when you have 50,000 followers, but Dell calls you up to send you a system to review that is most likely in the neighbourhood of €120,000 =D
    ...and of course ...when Linus calls, invites and references you all over the place =D
    Well done, Sir. Way underrated channel.
    Do you still have the UBNT UniFi Leaf that you checked (NOT REVIEWED!), and can you share any updates on how it has worked in the meanwhile and whether there is an acceptable firmware for it yet?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +2

      Thanks! Last I saw it was removed as a product

    • @Burnman83
      @Burnman83 3 года назад +1

      @@ServeTheHomeVideo Oh, that's too bad. I thought I had seen it only a few days back in the US EA store, and a German store still has it listed as 'coming soon'.
      Such an inexpensive offer, I might have given it a try.

  • @berndeckenfels
    @berndeckenfels 3 года назад +1

    Can you slide out and open such a system while it is in operation? I would not dare to do that to replace failed drives...

  • @mdd1963
    @mdd1963 3 года назад +9

    I'm sure the 100x 12 TB drives are not cheap coming from Dell; you'd expect them to be $350-400 each for good pricing, meaning they are probably... $700 each from Dell? :)

    • @RussSirois
      @RussSirois 3 года назад

      Close, $837 so says a quick Google search. So close to $90k just in drives? Do we get a bulk discount?

  • @VirtualTechBox
    @VirtualTechBox 3 года назад

    Thank you for the video 👌

  • @gulllars4620
    @gulllars4620 3 года назад

    This makes me think of the scale-up nodes you could make if you added another U (so 6U) and went with EPYC CPUs here instead.
    As an example for a data warehouse: 100 HDDs, a node with 2x 64-core CPUs and 4TB of RAM, 4 U.2 SSDs in front for system drives and caching, and a secondary node with E3.S/L ruler SSDs connected by, say, 4 x16 PCIe 4.0 links (64 lanes in total).
    I think this is foreshadowing the kind of setups we may see with next-gen interconnects like CXL, where you may have a 4U of HDDs with a link to a 2U control node, which has another link or set of links to SSDs and accelerators.
    Keeping compute local to the storage is a great way of gaining efficiency by limiting data movement. I feel there are 2 opposing forces at work in data centers and clouds: disaggregation for composability and flexibility (high network infrastructure requirements) vs. distributed, localized resource groups for efficiency at the cost of some flexibility and up-front knowledge of workloads. Co-localizing resources is also a way to speed up workloads that don't scale out well due to limited parallelism and/or latency sensitivity. I'd be interested to hear Patrick's take on that, and what he expects to see. Maybe we will see disaggregation as the default for most general workloads, and specialized systems for well-known, infrequently changing high-volume workloads or performance-critical workloads?
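
    For scale, a rough sketch of what the hypothetical 4 x16 PCIe 4.0 links mentioned above would offer, assuming roughly 2 GB/s per lane per direction (generic PCIe figures, not anything specific to the XE7100):

      # Approximate bandwidth of a hypothetical 4x x16 PCIe 4.0 interconnect.
      PCIE4_GBPS_PER_LANE = 2.0      # ~2 GB/s per lane per direction (approx.)
      LANES_PER_LINK = 16
      LINKS = 4

      per_link = PCIE4_GBPS_PER_LANE * LANES_PER_LINK    # ~32 GB/s per x16 link
      total = per_link * LINKS                           # ~128 GB/s aggregate
      print(f"Per x16 link: ~{per_link:.0f} GB/s, aggregate: ~{total:.0f} GB/s per direction")
      # Plenty for an SSD/accelerator shelf, and the order of magnitude a
      # CXL-attached resource pool would need to make disaggregation practical.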

  • @cheesefries7436
    @cheesefries7436 3 года назад

    Does anyone know the part number of the toolless drive bays used in this machine?

  • @RmFrZQ
    @RmFrZQ 3 года назад +3

    I wonder how it performs in terms of data throughput. I mean, those are 100 SAS drives, each connected to a backplane with a 12Gbit link, but they have to be connected to an HBA/RAID controller (multiple HBA/RAID controllers?).
    Correct me if I'm wrong, but 12,000 Mbit divided among 100 drives nets 120 Mbit per drive, which is a measly 15 MB/s (120 / 8).

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +6

      Typically your backplane-to-expander-to-host bandwidth is not a single SAS3 lane. Even the cables are 4x SAS3 each. Then you have multiple cables, HBAs, and expanders as shown in this system. The bigger limitation is on the networking side, where you have, say, 25GbE in the configuration we tested. That is a big part of why the onboard GPU is interesting. One also has to remember that all of these drives are not being accessed simultaneously, and some data movement is strictly on-node.
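
      A back-of-the-envelope sketch of that bandwidth question, assuming 12 Gbit/s SAS3 lanes, 4-lane wide cables, four expander uplinks, and ~250 MB/s per HDD (the cable/uplink counts and drive rate are illustrative assumptions, not the exact XE7100 topology):

        # Rough bandwidth sketch for a 100-HDD SAS topology (illustrative numbers).
        SAS3_LANE_GBPS = 12          # Gbit/s per SAS3 lane
        LANES_PER_CABLE = 4          # wide SAS cables carry 4 lanes each
        CABLES = 4                   # assumed number of expander/HBA uplink cables
        HDD_SEQ_MBPS = 250           # assumed sustained MB/s per 3.5" HDD
        DRIVES = 100
        NIC_GBPS = 25                # 25GbE uplink in the tested configuration

        sas_gbytes = SAS3_LANE_GBPS * LANES_PER_CABLE * CABLES / 8   # GB/s to the HBAs
        hdd_gbytes = DRIVES * HDD_SEQ_MBPS / 1000                    # GB/s the disks can stream
        nic_gbytes = NIC_GBPS / 8                                    # GB/s the network can move

        print(f"SAS fabric : ~{sas_gbytes:.0f} GB/s")   # ~24 GB/s with these assumptions
        print(f"100 HDDs   : ~{hdd_gbytes:.0f} GB/s")   # ~25 GB/s if all streamed at once
        print(f"25GbE NIC  : ~{nic_gbytes:.1f} GB/s")   # ~3.1 GB/s -> the first ceiling

      Even with modest assumptions, the network rather than the SAS fabric is the first ceiling, which matches the reply above.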

  • @okoeroo
    @okoeroo 3 года назад +1

    Epic review, without an epyc. Very very interesting

  • @johnmijo
    @johnmijo 3 года назад +2

    Great stuff Patrick, now we need a new series based on High-Density Storage with Compute ;)
    Yes I'm an AMD Fanboy but really a tech fanboy moreso, so even though this has Intel inside it's still a nice piece of kit....

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад

      Just wait for tomorrow's Milan.

    • @johnmijo
      @johnmijo 3 года назад

      @@ServeTheHomeVideo yes indeed :)

  • @dbzssj4678
    @dbzssj4678 3 года назад +1

    Are those Seagates? You'll need 600TB of parity for when you transition to Western Digital or HGST.

  • @hariranormal5584
    @hariranormal5584 3 года назад

    My friend and I combined are struggling to fill a 6TB HDD on a cloud server. It gets more annoying when you want to back up the files and everything is just cluttered on the bare HDD with no proper software.

  • @hariranormal5584
    @hariranormal5584 3 года назад

    How much does everything in this video cost (the full server and all the disks)?

  •  3 года назад

    Would love to see some benchmarks

  • @OTechnology
    @OTechnology 3 года назад +1

    Interesting that they didn't use counter-rotating fans, considering the static pressure benefits they provide and how much those 100 drives choke the airflow lol.

  • @t4ir1
    @t4ir1 3 года назад +2

    Please tell me that you configured that in RAID-0

  • @JohnSmith-yz7uh
    @JohnSmith-yz7uh 3 года назад +6

    For just $4 million you can get 10 PB raw if you use the 100TB 3.5" SSDs. Just imagine. Maybe the next Linus Tech Tips project XD.

  • @fyzhkar3962
    @fyzhkar3962 3 года назад

    Hey Patrick, can you review the DDN AI7990X next? ;)

  • @ewenchan1239
    @ewenchan1239 3 года назад +3

    I'd be really curious to see if the Xeons would become the bottleneck, as the general "rule of thumb" recommendation now is one CPU core per HDD (just to be able to manage the data coming onto and off of said HDD).
    I don't think there's going to be enough RAM, even with the AMD EPYC processors (4 TB of RAM), to be able to run ZFS dedup on this.

    • @mdd1963
      @mdd1963 2 года назад

      One core per hard drive? Never even remotely heard of that before... (And it would likely be news to NetApp, who as of a couple years ago had dual core CPUs in a few of their 28-bay offerings.)

    • @mdd1963
      @mdd1963 2 года назад

      (Have heard of 1 GB of RAM per TB of storage...)
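
      A quick sketch applying those rule-of-thumb figures to this box, assuming the 100x 16 TB drives the review used; the per-core, per-TB, and dedup ratios are the heuristics quoted in this thread plus commonly cited ZFS dedup guidance, not hard requirements:

        # Rule-of-thumb sizing sketch (heuristics, not hard requirements).
        DRIVES = 100
        DRIVE_TB = 16                      # drive size as tested in the review
        RAW_TB = DRIVES * DRIVE_TB         # 1600 TB raw

        CORES_PER_HDD = 1                  # "one CPU core per HDD" heuristic
        RAM_GB_PER_TB = 1                  # "1 GB RAM per TB of storage" heuristic
        DEDUP_GB_PER_TB = 5                # oft-quoted ZFS dedup-table guidance (approx.)

        print(f"Raw capacity      : {RAW_TB} TB")
        print(f"Cores (heuristic) : {DRIVES * CORES_PER_HDD}")
        print(f"RAM, no dedup     : ~{RAW_TB * RAM_GB_PER_TB / 1024:.1f} TB")
        print(f"RAM, with dedup   : ~{RAW_TB * DEDUP_GB_PER_TB / 1024:.1f} TB")

      By those heuristics, dedup on a full pool would want several terabytes of RAM, which supports the point that even a 4 TB EPYC configuration would struggle with ZFS dedup here.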

  • @QuentinStephens
    @QuentinStephens 3 года назад +3

    Now fill it with 100 of those 50 TB 3.5" SSDs. Who wouldn't want 5 PB in a single box?

    • @Groovewonder2
      @Groovewonder2 3 года назад +2

      Anybody with a sensible accounting and requisitions department lmao

  • @denvera1g1
    @denvera1g1 3 года назад +1

    A worthy upgrade for my Poweredge C2100LFF

  • @TheExard3k
    @TheExard3k 2 года назад

    how long can you run them as Raid 0 until the first one breaks?

  • @osamaa.h.altameemi5592
    @osamaa.h.altameemi5592 3 года назад +2

    Wow, and I was thinking it was just another storage server. Fantastic review. But don't you see the controllers being the main bottleneck for this system? Having 50 hard drives per controller (assuming these 50 drives are shared among many remote servers) sounds like a performance killer to me. What do you think?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +2

      It is a bottleneck, but realistically so is the 25GbE that we had. Remember there are 1U 1PB storage arrays these days so disks are basically slow tier and not dense storage at this point.

    • @osamaa.h.altameemi5592
      @osamaa.h.altameemi5592 3 года назад +2

      @@ServeTheHomeVideo I see your point and that is why i love this channel. You guys have such fantastic comprehensive view. Thx a ton.

  • @arnaldogonzalez1
    @arnaldogonzalez1 3 года назад

    The only thing about these vertical drive bay designs is that they're not so hot-swap friendly. Especially if you're running tight cable management and you have to slide the whole server out to pop the lid for a drive swap. Unless the server rails are beefy enough to withstand the 400 pounds while fully extended, but even then I'd be a little scared to stand next to it while servicing.

  • @scoty_does
    @scoty_does 2 года назад +1

    I wonder if TrueNAS could run on it, given the RAID controllers in it.

  • @yuxianwang3238
    @yuxianwang3238 Год назад

    Hello, I just found the DSS FE1 riser card assembly in China, and the riser itself without any drives only costs $80. I've asked the seller, who works within a Dell assembly facility, and it works on a normal server. If you can find a way to ship it from the mainland to the States, I'd love to help.

  • @PR1V4TE
    @PR1V4TE Год назад

    Hey Patrick. Can you suggest a high-core-count 2-4U server with as few bays as possible for 250TB of storage? It's really hard for me to figure out the right hardware due to the limitations.
    Spoiler: it's for my home usage, so I want to keep the bills down and the space used small. 200TB will go in as my vault and the rest, 50TB (42-ish maybe), will be running all my machines, with that high core count.

  • @johnkristian
    @johnkristian 2 года назад +1

    I _REALLY_ want one and fill it with 18/20 TB drives

  • @joshhardin666
    @joshhardin666 11 месяцев назад

    how do you reasonably get 2kw to a single UPS/outlet in a home?

  • @Jorge2222
    @Jorge2222 3 года назад +1

    Yes cool hardware, but how did you utilize this hardware? ZFS, FreeNAS, ? How did it perform? What about the management interface? The real stuff we need to know assuming the hardware works.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      We covered the management which is standard iDRAC 9 on the main site review, and showed in the video this setup with ZFS (albeit ZFS on Linux.) Also discussed that most installations will use Ceph, Gluster, or another scale-out solution.

  • @nandulalkrishna923
    @nandulalkrishna923 3 года назад +2

    Imagine 100 of those $40k 100TB 3.5-inch SSDs in this chassis...

  • @laliemail7276
    @laliemail7276 3 года назад

    Hello... I want to ask what is better for a database server: dual AMD Epyc or dual Xeon Platinum? Thank you.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад

      Maybe a better question for the Milan and then Ice Lake launches, given that we are at the very end of the cycle. If you have big in-memory or persistent needs, then the 4-socket Cooper Lake system is the best option. Otherwise, a lot is going to change.

  • @scottstewart3884
    @scottstewart3884 2 года назад

    That system would do for all my storage needs for several years. Unfortunately, cost of the unit aside, plugging it in would blow every breaker in my house, on top of melting the >shudder< aluminum wiring...

  • @eformance
    @eformance 3 года назад

    I would really like to know how a single PERC card can saturate a 25GbE link. I've seen the bandwidth specs and I've never seen cards come close to hitting those specs; they must have used unicorn configurations. An array like this would most likely be running RAID 60, unless you don't care about your data. RAID 60 would be 4 stripes of RAID 6 across 25 drives each; that's a lot of work for the PERC to do. I'm going to assume that's a 16-channel PERC card with 12Gb/s channels, but maybe it's 8 channels at 12Gb/s. I've generally been disappointed by RAID controllers: when you lean on them with RAID 6, their throughput numbers go way down. My modest setup went from 16x 2TB SATA2 drives to 8x 4TB SAS3 drives running at SAS2 speeds on a 6805T. With RAID 6 on 8 drives and a 4-channel SAS2 uplink, the best the card can do on sequential is 800MB/s; mixed mode is 400MB/s. Unless there is amazeballs tech in that single PERC (LSI, I'm assuming), I have doubts that it would saturate 25GbE with RAID 60.
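
    A rough sanity check of the layout described above, assuming a 4x 25-drive RAID 60 arrangement and ~200 MB/s sustained per drive (all figures are illustrative assumptions, not measurements from this system):

      # Back-of-envelope RAID 60 sequential throughput vs. 25GbE (illustrative only).
      DRIVES = 100
      STRIPES = 4                    # RAID 60 as 4 x RAID 6 groups
      DRIVES_PER_GROUP = DRIVES // STRIPES
      PARITY_PER_GROUP = 2           # RAID 6 = 2 parity drives per group
      HDD_SEQ_MBPS = 200             # assumed sustained sequential rate per drive

      data_drives = STRIPES * (DRIVES_PER_GROUP - PARITY_PER_GROUP)   # 92 data drives
      ideal_seq_gbs = data_drives * HDD_SEQ_MBPS / 1000               # ~18.4 GB/s ideal
      nic_gbs = 25 / 8                                                # ~3.1 GB/s for 25GbE

      print(f"Ideal sequential from data drives : ~{ideal_seq_gbs:.1f} GB/s")
      print(f"25GbE line rate                   : ~{nic_gbs:.1f} GB/s")
      # Even if the controller delivers only a fraction of the ideal figure,
      # sequential reads have headroom over a single 25GbE link; parity writes
      # and mixed I/O are where RAID-on-chip throughput typically collapses.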

  • @Bierkameel
    @Bierkameel 3 года назад +2

    Samsung has 30TB 2.5-inch SSDs available; imagine the system you could build with those in 5U.

  • @logikgr
    @logikgr 3 года назад +10

    Giveaway! Giveaway! Giveaway! 🎉😂

  • @alphaLONE
    @alphaLONE 3 года назад +8

    can you even pull it from the rack on rails while it's fully loaded without either buckling the cabinet or tipping it over?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +10

      Yes. This is basically required to service the top loading bays. Our racks in the data center are bolted to the floor though.

    • @IamFormaggio
      @IamFormaggio 3 года назад +1

      Most likely you're not allowed to mount above the middle of the rack.

  • @Mist8kenGAS
    @Mist8kenGAS 3 года назад +1

    This would be very fun to put together... for me at least.

  • @redtails
    @redtails 3 года назад +1

    you should definitely show how difficult the toolless system is, though. Sounds fishy if they pre-did it for you. I've had some toolless systems that were much more work compared to screwing in 4 screws into a metal bracket

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      I did about 10 of them. First one was slow just figuring it out. #2-10 were fast to get in/out so did not go into much detail about it

  • @leknyzma
    @leknyzma 3 года назад +1

    I watch your channel with Ad blocker paused

  • @charleslaughton203
    @charleslaughton203 3 года назад +1

    Okay, haven't got far into the video yet, but how many forklifts required to lift?

  • @wywywywywywywy
    @wywywywywywywy 3 года назад +1

    So when's the Milan video coming out?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      Assuming all goes well today, tomorrow www.servethehome.com/amd-epyc-7003-date-set-for-milan/

  • @SoranoGuardias
    @SoranoGuardias 2 года назад +1

    What is really funny is two of these assemblies can replace all of Linus's Petabyte Project nodes. XD

  • @mm64
    @mm64 3 года назад

    Did you record the video in 30fps? Try 60fps.

  • @jasonhowe1697
    @jasonhowe1697 3 года назад

    1.8 PB, not including the 1U storage.
    At the current commercial limit of 8TB per SSD you would max out at 800 TB; however, you might run into a bus speed limitation with gen 3 and gen 4 SSD types.
    You max out at roughly 100 MB/s on 7200 rpm hard drives;
    can't comment on 10-15k rpm.
    Based on 100 drives @ 18 TB each you'd be looking at 800-900 TB in caching.
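
    For reference, a hedged capacity sketch using the drive size actually tested in the review (16 TB); the pool layout is an assumption and can be swapped for 18 TB drives or a different vdev width:

      # Raw vs. usable capacity sketch for 100 top-loading bays (assumed layout).
      DRIVES = 100
      DRIVE_TB = 16                          # as tested; 18 TB is also plausible today
      GROUPS = 10                            # e.g. 10x 10-wide RAIDZ2 vdevs (assumption)
      PARITY_PER_GROUP = 2

      raw_tb = DRIVES * DRIVE_TB                               # 1600 TB = 1.6 PB raw
      usable_tb = (DRIVES - GROUPS * PARITY_PER_GROUP) * DRIVE_TB
      print(f"Raw    : {raw_tb} TB (~{raw_tb/1000:.1f} PB)")
      print(f"Usable : {usable_tb} TB before filesystem overhead")   # 1280 TB here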

  • @redtails
    @redtails 3 года назад +1

    Are racks and the surrounding infrastructure even designed to carry that kind of weight? How do you even work on a f heavy machine like that? Standard rack height is 42U, so you can have like 7 of these systems in a rack, plus a switch, UPS, and/or a load balancer server or whatever. The vibration on the rack with 700 HDDs must be crazy.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +2

      That is a concern with these and is why they are often only installed in half of a rack (plus it is easier for top-loading service). Static loading is often OK, but dynamic loading is an issue if the rack is full and drives are installed.

  • @DoctorX17
    @DoctorX17 2 года назад

    I want this… but all I have is 72TB of random drives in a case I cut with a Dremel to put in extra hot swap bays

  • @TwinTailTerror
    @TwinTailTerror 3 года назад +1

    ya but how much does one of those suckers cost (no drives included) cuz i tried to look it up and i cant find it

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад

      I am not sure these are sold without drives.

    • @TwinTailTerror
      @TwinTailTerror 3 года назад

      @@ServeTheHomeVideo You can get most servers barebones; I can't afford 100 HDDs off the bat but could work toward it over time. So spitball a price.

    • @TwinTailTerror
      @TwinTailTerror 3 года назад

      Also ty for reply =3 im new to owning a server and im so proud of my pile a junk ^...^

  • @thesadiqful
    @thesadiqful 3 года назад

    Hi Patrick,
    Where to buy this server, please?
    How much does it cost?
    Looking forward to your reply.
    Thanks for the awesome video :)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      These you would call up your Dell sales rep for.

    • @thesadiqful
      @thesadiqful 3 года назад +1

      @@ServeTheHomeVideo Sorry, we don't have any Dell sales rep in Nigeria. Do you have one that can help me with the purchase and shipment? Thanks 👍

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      Likely need to find a local Dell rep at Dell.com.

  • @pabss3193
    @pabss3193 3 года назад

    Which OS? thx

  • @SupremeRuleroftheWorld
    @SupremeRuleroftheWorld 3 года назад +1

    so, unraid will run fine on this right?

  • @MassimoTava
    @MassimoTava 3 года назад

    Any bad drives?

  • @lezlienewlands1337
    @lezlienewlands1337 3 года назад

    Those 2.4kw 80 Plus Platinum PSUs. Must sound like a jet engine taking off.

  • @marcin_karwinski
    @marcin_karwinski 3 года назад +2

    Lol, now imagine this filled with ExaDrive 100TB 3.5" SSDs... and using "regular" mdadm RAID rebuilds after multi-disk failures ;) Were it not for advanced filesystems/volume managers like ZFS, regular block rebuilds in these servers, once fully filled, would take days or weeks ;)
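
    A minimal sketch of why those rebuilds get so long, assuming a full-block rebuild of an ExaDrive-class 100 TB drive at an optimistic sustained rate (both numbers are assumptions for illustration):

      # Rebuild-time sketch for a traditional full-block (mdadm-style) rebuild.
      DRIVE_TB = 100                 # ExaDrive-class 100 TB drive
      REBUILD_MBPS = 250             # optimistic sustained rebuild rate, no other load

      seconds = DRIVE_TB * 1e12 / (REBUILD_MBPS * 1e6)
      print(f"~{seconds / 86400:.1f} days per rebuilt drive")   # ~4.6 days best case
      # Under real workloads the effective rate drops sharply, so multi-day to
      # multi-week rebuild windows are plausible, which is why ZFS-style
      # resilvering of only used blocks matters at this scale.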

  • @justethical280
    @justethical280 3 года назад +1

    I work in IT, but even I think the amount of power we use nowadays in all the datacenters AND all the data we collect is getting really unhealthy... It is really getting out of control... (I work for a large government institute and sometimes I think to myself, oh my god, another new software project that collects even more data than we already have. We need to draw the line somewhere...)

  • @gattie12ben
    @gattie12ben 3 года назад +2

    Good job flexing on us :)

  • @Razor2048
    @Razor2048 3 года назад

    Any chance they will avoid price gouging so that a system like that can be affordable for home use?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +1

      I am unsure of how to answer this. Even 100 drives at $400/drive is $40,000. Drives fail over time and you would likely want more than one of these systems so my sense is that these are only purchased by businesses.

    • @Razor2048
      @Razor2048 3 года назад

      @@ServeTheHomeVideo I was thinking more of using cheaper drives, and possibly implementing a way for users to expand their storage over time. For example, every year most people will purchase a few hard drives on Black Friday or during other sales, and it would be cool to have a system that they can expand over the course of multiple years, adding more storage to the same system.

    • @kelownatechkid
      @kelownatechkid 3 года назад

      @@Razor2048 Look into using Ceph; pair that software with used eBay Supermicros and you can achieve this. Ceph is not designed specifically for single-node use, but it actually works well.

  • @TheKev507
    @TheKev507 3 года назад +2

    Crazy cool server. I bet those fans suck back a ton of power too.

  • @darkphotographer
    @darkphotographer 11 месяцев назад

    Could be cool to go and get one in 5 or 10 years, when they decommission them.

  • @marksapollo
    @marksapollo 3 года назад +1

    Google might be eyeing this up? Or Amazon maybe? Nice design.

    • @shubinternet
      @shubinternet 3 года назад

      Nah, they custom design their own.

  • @tonyrking
    @tonyrking 3 года назад +1

    Apple has been using drive arrays with higher density than this?? Where, what model?? Link?!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад

      For many years. They go direct to contract manufacturers to get them built. I probably should not say which one, but they have been shown off at trade shows.

    • @henkondemand
      @henkondemand 3 года назад

      @@ServeTheHomeVideo That's insane

  • @goodday1801
    @goodday1801 3 года назад

    So basically it's a two-unit, half-height blade chassis optimized to hold 100 HDDs. These kinds of things have been around for ages, I guess. Nothing really to boast about.

  • @dawnv3436
    @dawnv3436 2 года назад +1

    I really wish they had stuffed more controllers in and built HA into it. SOOOOO close...

  • @brianherman6144
    @brianherman6144 3 года назад +1

    Better than LINUS TECH TIPS Every day of the week.

  • @tangfranklin1730
    @tangfranklin1730 3 года назад +1

    Highlight of this video: Get Ready For Milan
    Launch day review coming up?

  • @EdR640
    @EdR640 3 года назад +1

    I'm at 1/10th of a Petabyte - I'm catching up to you! lol

  • @OfficialiGamer
    @OfficialiGamer 2 года назад +1

    you talking with your hands in front of those hard drives scares me lol

  • @BennyTygohome
    @BennyTygohome 2 года назад

    19:16 Warning: "When Installing the chassis into the cabinet, there can NOT be HDD inside."
    Patrick: We're going to test that

  • @OVERKILL_PINBALL
    @OVERKILL_PINBALL 3 года назад +1

    My 40TB Plex server is now embarrassed
    :P

  • @woodant1981
    @woodant1981 3 года назад +1

    I thought my 24 disk SAN was heavy!!

  • @azurite2926
    @azurite2926 3 года назад +1

    410 pounds? Holy shit! How did you get it on the table for this video?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +4

      That includes the pallet, boxes, rack rails, foam, power cables and such. There are handles but the key with this, and many large multi-node and/or accelerated systems is to strip the chassis then insert components while in the rack or on the photo table. I have been doing this for years. Pre-pandemic I had a 395lb deadlift but still did this whenever moving a bulky server.

    • @RyanLackey
      @RyanLackey 3 года назад

      @@ServeTheHomeVideo Interesting experiment would be "deadlift fragile expensive objects". I wonder how much that de-rates a lifter's peak, knowing it needs to be set down gently and not dropped at any point....

  • @thatLion01
    @thatLion01 3 года назад +1

    Nice server. I can only imagine the cost $$$$$$$$$$$$$$$
    ps if someone gave me this server i would screw 2000 screws :)

  • @George-664
    @George-664 3 года назад +1

    Why not 18tb HDD's ?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 года назад +3

      At the time we did this, there were only 16TB options available. Also, getting 100x drives to do a review and video is not easy :-)

  • @midwestiowashooter
    @midwestiowashooter 3 года назад +6

    Me: Here we go...
    Best Buy: Limit of 2 - 12TB EasyStore External Hard Drives

  • @susanrashid9531
    @susanrashid9531 3 года назад

    One word: HPE with InfoSight, if you want a truly superior product.