What is a SAS SSD?

  • Published: Dec 28, 2024

Comments •

  • @marcogenovesi8570
    @marcogenovesi8570 7 months ago +43

    The main benefits of SAS are that it's always hot-swappable (versus NVMe, where hot-swap isn't always supported), it's dual-ported so it can easily be used with dual-controller storage appliances, and signal integrity isn't a huge mind-boggling issue like it is with NVMe, so it's simpler and cheaper to build a large setup with them (with or without expanders). A lot of servers still don't really need a battery of NVMe drives, even if a lot of servers do need NVMe.

    • @ImAManMann
      @ImAManMann 6 months ago +4

      Most don't need NVMe... with RAID and storage tiering you can get close to NVMe speeds at much lower cost. I also rarely see workloads which need the storage speed of even fast SSD arrays... most of the time NVMe is just a waste of money.

  • @tuttocrafting
    @tuttocrafting 7 months ago +37

    If I had a grand for each time Wendell says to throw away my LGA2011 server, I would have enough money to get another used LGA2011 chassis with all the caddies.

    • @pixiepaws99
      @pixiepaws99 4 months ago +1

      What? Those things go for like $300...

  • @marcogenovesi8570
    @marcogenovesi8570 7 months ago +95

    Wendell lives on the bleeding edge; there are lots of us who don't, and who still see low-end and lower-midrange hardware, and old, and very old hardware too.

    • @axn40
      @axn40 7 months ago +2

      I agree: a $70 Supermicro X10 + E3-1230 v3 is still relevant in comparison with an RPi.

    • @SquintyGears
      @SquintyGears 7 months ago +26

      He's talking about actual professional deployments, not your homelab setup.
      For people who rely on the server for making money.
      At home we will continue to use every ancient configuration imaginable with no problems.
      But these videos are very often used by sysadmins as evidence for the decision boards they have to appeal to...

    • @romevang
      @romevang 7 months ago +3

      @@SquintyGears Or in the case of the company I work for, they don't like to spend money. We're using Ivy Bridge/Sandy Bridge (with a random mix of Broadwell) hardware for our cluster. If it fails, we're moving to the cloud... but the problem with that is that we don't have enough staff to make such a move. We're already overloaded as is.

    • @SquintyGears
      @SquintyGears 7 months ago +6

      @@romevang Yeah, and everyone in the department knows it's just a time bomb. They've been warned. 🤷 At those companies everyone is just planning their exit...

    • @joshuaspires9252
      @joshuaspires9252 7 months ago +2

      @@SquintyGears Well, I partly agree, but my older R720 with dual 8-core Xeons and sixteen 10K hard drives is hurting my electric bill, so I have to rethink my setup for next year.

  • @iamamish
    @iamamish 7 months ago +47

    A few years ago my dad gave me his PC to work on. He still had a mechanical boot drive in it, and boy did I realize how spoiled I'd been these last 10 years or so. I gave him a new SSD - it is such an insane upgrade from a mechanical drive.

    • @Charles_Bro-son
      @Charles_Bro-son 7 months ago +5

      It was the gamechanger of snappiness =)

    • @joshuaspires9252
      @joshuaspires9252 7 months ago +5

      In the early 2000s I used Raptor hard drives to get past slow storage drives.

    • @PoeLemic
      @PoeLemic 2 months ago

      @@Charles_Bro-son Well, the NVMe option, and then booting from it, changed my world. That's the next upgrade.

  • @insu_na
    @insu_na 7 months ago +8

    I'm a fan of SAS SSDs simply because I can run a ton of them with SAS expanders and in external enclosures. You can technically do that with NVMe too, but it will fight you all the way. SAS just works.

  • @TheKev507
    @TheKev507 7 months ago +35

    SAS, and by extension SCSI, has remarkable staying power.

    • @yumri4
      @yumri4 7 months ago +5

      Yep. Also, with PCIe NVMe drives you can have at most about 12 of them in an Intel system, though 24-ish in an AMD EPYC system. Then you run into how to cool all the drives and both CPUs. Some enterprise NASes have 50 physical drives, so a limit of only 24 without bifurcation is not good, while with bifurcation you run into placement issues: keeping the drives where they have a low chance of being blasted by the fans while still close enough to the controller to be quicker than 2.5" and 3.5" SAS drives.

    • @tolpacourt
      @tolpacourt 7 months ago +2

      SCSI survives as a protocol, mainly.

  • @tman6117
    @tman6117 7 months ago +27

    "If you have a Broadwell CPU, even a high-end one, it's time to upgrade."
    Me with my Ivy Bridge-based server.

    • @alan_core
      @alan_core 7 months ago +2

      Haswell here ;=)

    • @PhAyzoN
      @PhAyzoN 6 months ago +4

      Dell R720 intensifies

    • @ButtKickington
      @ButtKickington 6 months ago

      He says this right after I just bought a high-end Broadwell server.
      Whatever. 88 threads make the fans go brrrr.

    • @IanBPPK
      @IanBPPK 6 months ago

      I've been doing well with my R620, Z620, Hyve Zeus, and DL360p Gen 8s, though the power consumption leaves a little to be desired 😬

    • @reki353
      @reki353 4 months ago

      Me with my R920, R910, R720, DL360 G5, and Z820.

  • @88Elzee
    @88Elzee 7 months ago +33

    I can't get over how tiny those hard drives look in his hands.

    • @FaithyJo
      @FaithyJo 7 months ago +10

      He is the anti-Linus. He cannot drop an HDD or a processor.

    • @piked86
      @piked86 7 months ago +2

      He looks to be a pretty big guy when you see him stand next to someone. I'm guessing 6'3"

    • @annebokma4637
      @annebokma4637 7 months ago +2

      @@FaithyJo He doesn't seem to play a nice guy on camera either. Truly the anti-Linus 😂😂

    • @marble_wraith
      @marble_wraith 7 months ago

      If you consider mechanical drives, they have to be.
      Smaller platters in terms of surface area mean you don't need to worry as much about centrifugal forces, so you can spin faster without as much vibration damping. Why spin faster? Seek times.

    • @88Elzee
      @88Elzee 7 months ago +2

      @@marble_wraith I know how big those drives are, they aren't as small as they look in his hands lol.

  • @Makeshift_Housewife
    @Makeshift_Housewife 7 months ago +6

    One of my favorite servers had to be retired a few months ago after about 6 years of use. It was a little 1U HPE DL160 with dual 10-core CPUs and six 1TB 7200 RPM spinners. I checked the total disk usage, and it was about 1077 TB per drive. We kept buying servers for a while with SSD OS arrays and spinners for storage so we could maximize our small budget.

    • @Alan.livingston
      @Alan.livingston 6 months ago +1

      Still nothing wrong with spinning rust for some mass storage tasks I reckon.

  • @dancalmusic
    @dancalmusic 7 months ago +4

    Enterprise SSDs still cost a lot more than their equivalent (enterprise) spinning drives. Read-intensive SSDs cost about double, mixed-use about triple, and write-intensive about quadruple. A write-intensive SSD of modern size (not 512GB, please) costs as much as a server. And you need at least two of them. It makes me smile when Wendell talks as if all of us are generously gifted our disks by Kioxia :)
    My enterprise HDDs typically last 10 years, then get replaced due to overall server obsolescence, not because they broke. And I'm talking about servers with 5-8 MS RDS VMs, MS SQL, file servers and other write-intensive roles. I highly doubt a QLC SSD will last 10 years under that load, unless you pay a fortune for them.
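
    For anyone who wants to sanity-check that kind of claim, a rough cost sketch in Python is below; the prices, capacities, and endurance figures are placeholder assumptions for illustration, not numbers from the video or this thread.

      # Rough $/TB and rated-endurance comparison of an enterprise HDD vs. a
      # write-intensive enterprise SSD. All figures are illustrative assumptions.
      def cost_per_tb(price_usd, capacity_tb):
          return price_usd / capacity_tb

      def rated_writes_pb(capacity_tb, dwpd, warranty_years):
          # DWPD ratings translate to a total-bytes-written budget over the warranty.
          return capacity_tb * dwpd * 365 * warranty_years / 1000.0  # PB

      hdd_price, hdd_tb = 300.0, 8.0      # assumed enterprise HDD
      ssd_price, ssd_tb = 1500.0, 3.2     # assumed write-intensive SAS SSD
      ssd_dwpd, warranty = 3.0, 5         # assumed 3 DWPD over a 5-year warranty

      print(f"HDD: ${cost_per_tb(hdd_price, hdd_tb):.0f}/TB")
      print(f"SSD: ${cost_per_tb(ssd_price, ssd_tb):.0f}/TB, "
            f"rated for ~{rated_writes_pb(ssd_tb, ssd_dwpd, warranty):.1f} PB written")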

    • @blahorgaslisk7763
      @blahorgaslisk7763 7 months ago

      It's also a case of knowledge and experience. We know spinning rust pretty well after forty or so years of use; we can calculate lifetime cost and performance. SSDs have what, 10 years of reasonably common use, and nowhere near the same amount of data about long-term use. Predictability is worth a lot in a professional server environment.
      I remember the first SSD I saw. It was DRAM-based, as there was no such thing as flash memory at the time. The DRAM was backed up by a battery that could keep it alive for a bit more than 24 hours. Battery goes empty and all storage is gone... But as primitive as it may seem, the performance was phenomenal.
      The first flash SSD I got my hands on was a prototype from a manufacturer. It was dog slow. Installing the OS on it took ten times as long as installing it on spinning rust. It was hilarious, as you knew this was an SLC SSD and it still sucked so badly. Now, I've seen SSDs fail badly. But they were all consumer-grade devices.

    • @dancalmusic
      @dancalmusic 7 months ago

      The term "spinning rust" annoys me. When 4TB SSDs can last 10-12 years like HDDs under the same usage on high-transaction servers and cost $400, then we can call those splendid examples of technology that are rotational disks "spinning rust" without necessarily being a YouTuber (a very good one, but a bit far from the pockets of normal sysadmins).

    • @bloomtom
      @bloomtom 7 months ago +2

      @@dancalmusic Spinning rust is not a derogatory term. It's an informal, cutesy term. Not necessarily correct either, as HDDs haven't had iron oxide media layers for a long time, but that's beside the point.

  • @nextalcupfan
    @nextalcupfan 7 months ago +14

    Multiple PB from a 256GB SSD sounds insane.
    Frankly, IMO that would be very impressive for an HDD.

  • @duduoson1306
    @duduoson1306 6 months ago +3

    I really appreciate the old Macintosh graveyard aesthetic in your shop.

  • @anothersiguy
    @anothersiguy 7 months ago +14

    We're those people who are still running Broadwell-era Xeons and spinning rust in some of our branch office servers lol. Hopefully they'll be put out to pasture soon, but SAS SSDs would be an awesome way to keep them rolling if we needed to.

    • @ICANHAZKILLZ
      @ICANHAZKILLZ 7 months ago

      Same 😅 We did stick some SATA SSDs in most of them, but they get slow after about a year of writes. Let us pray we can convince upper management to buy something made post-2017.

    • @asm_nop
      @asm_nop 7 months ago +1

      I've been running a used Dell R510 at home with a pair of Westmere Xeons and a pile of DDR3 and 600GB 15K drives. I paid so little for it, that it was basically a gift from a friend. This hardware is now nearly 15 years old, and I could easily run it another 5 if it doesn't fail outright. It's keeping up with my current needs surprisingly well. The only pressure to upgrade I have is that newer gear is vastly more power efficient, and really cheap on the used market. Is it just me, or is old hardware staying relevant for much longer than it used to?

    • @romevang
      @romevang 7 months ago +2

      My work is 1 or 2 steps worse: a mostly Ivy/Sandy Bridge cluster with a brand-new Dell Unity below it all. Their long-term strategy is to go to the cloud once all the hardware just gets "too old." Like it isn't already.

    • @rkan2
      @rkan2 6 months ago +1

      @@asm_nop Such old hardware uses so much electricity that newer stuff would pay for itself multiple times over in those 5 years. Unless you have basically free electricity, of course...

  • @TheMarkRich
    @TheMarkRich 7 months ago +2

    Had two in my IBM storage unit to act as the SSD top tier in dynamic tiering. They work well.

  • @coraldayton
    @coraldayton 7 months ago +5

    I just shut down my C240 M4 LFF server that was primarily spinning rust. I went from Dell RX10s and RX20s to older SM chassis to Cisco chassis and a whitebox. I'm trying to migrate everything to SSDs, but costs for SSDs aren't as low as spinning rust. Once the costs go down, I'll be all over full flash/SSD for my homelab. I've got 2x 3.2TB, 4TB, and 7.68TB U.2 NVMe SSDs, plus a 6.4TB PCIe NVMe SSD, but I wish I could have more.

  • @louisharkna9464
    @louisharkna9464 7 months ago +6

    That Packard Bell tower you have in the background took me straight back to working at Best Buy in the EARLY 90s... oof.

    • @jrm523
      @jrm523 7 months ago

      Time is a cruel bitch

    • @wargamingrefugee9065
      @wargamingrefugee9065 6 months ago +1

      Packard Bell made outstanding color televisions back in the '60s.

    • @levygaming3133
      @levygaming3133 5 months ago +1

      Am I correct in assuming Packard Bell is the Packard of Hewlett-Packard and the Bell of Bell Labs (AT&T)?

    • @wargamingrefugee9065
      @wargamingrefugee9065 5 months ago

      @@levygaming3133 Good question. I didn't know the answer. Wikipedia says no.
      "Packard Bell Corporation (also known as Packard Bell Electronics or simply Packard Bell) was an American electronics manufacturer founded in 1933 by Herb Bell and Leon Packard."
      "The Hewlett-Packard Company, commonly shortened to Hewlett-Packard...was founded in a one-car garage in Palo Alto by Bill Hewlett and David Packard in 1939..."

  • @marble_wraith
    @marble_wraith 7 months ago +2

    Toshiba has recently come out saying they're investing in both HAMR and MAMR drives. I'd be interested in an objective L1T analysis of the pros/cons of each.
    Typical use cases for home servers would include Steam caches and media servers. In the case of the latter, say you wanted to, ahem, back up / transform your physical optical media into streamable files. A drive with high write capacity is *required* for this, and high throughput is required especially if you have multiple optical drives operating simultaneously. Having one with enterprise logs, so you can have an estimate of time to failure, is super useful.
    The other piece of the equation being: once the media is "backed up" and compressed, what drives would you archive it to? Hence the question / statement on the first line 😁

  • @Exzeph
    @Exzeph 7 months ago +6

    As someone who's trying to build a good homelab and just wants power efficiency over anything else, it's really surprisingly disappointing how few options exist for 2.5" SATA SSDs with like... a lot of TB onboard, at an affordable price. Like, what gives? Why is that segment so underserved?

    • @DaleEarnhardtsSeatbelt
      @DaleEarnhardtsSeatbelt 7 months ago +3

      The same can be said about PCIe lanes. The gap from consumer to enterprise is huge. It's crazy how limited you are on consumer gear.
      SATA SSDs pretty much stop at 4TB. I assume it's because of the form factor; all the larger 2.5-inch drives are about 2x as thick. U.2 is where it's at. You can get 30TB 2.5-inch SSDs that way. They do take 25 watts each though, as opposed to the 8 watts required by M.2 NVMe.

    • @LtdJorge
      @LtdJorge 6 months ago +1

      Because no one really wants to make SATA SSDs. The speed limits were reached a long time ago and the protocol is very, very inefficient for flash. The same PCB layout for SATA vs SAS/NVMe would leave the SATA one far behind. But I do get you: you want a replacement for spinny ones without the spinny thing, and yeah, the options are not good. I'm thinking the NAND needed for high density would be wasted on SATA, so that's why they don't do it.

  • @Gryfang451
    @Gryfang451 7 months ago +2

    I've used enterprise SAS SSDs for years as VFlash drives (VMware) and server boot drives. One of our Fibre Channel SANs uses them exclusively, and an iSCSI SAN uses them as its performance tier in an auto-tier setup with an expansion unit holding 8GB NL-SAS drives. We're fairly small, so footing the bill for NVMe SANs isn't going to happen any time soon. If you're using shared storage that is still spinning hard drives, caching to 12Gb/s SAS SSDs or NVMe drives really helps out.

  • @agw5425
    @agw5425 6 months ago +4

    Sure, with an unlimited budget anything new will be better/faster than 5-10 year old equipment, but a server that is still doing what it did 10 years ago is not trash, especially if you can replace the power-hungry HDDs with power-sipping SSDs. For home use, servers from 15 years ago that are still fully functional will do just fine and save you a ton of money in hardware, as most are free or near-free used. There are also SAS-to-M.2 adapters for both SATA and NVMe disks that would serve the home user well for a long time, regardless of the server's age. If you match your activity to the server's capability there is no "too old". Some people still run pre-286 PCs and servers/mainframes from the '60s and '70s and enjoy it as a hobby. With what you know, you could be a big help to those of us who can't buy new for whatever reason, instead of trash-talking older systems. The best server is the one you can afford; anything else is pointless.

    • @PoeLemic
      @PoeLemic 2 months ago

      Good point. What you said really applies to students like me. We can't go out and buy Threadripper or EPYC systems. And picking between SAS SSDs and just normal SSDs is a no-brainer.

  • @chaosfenix
    @chaosfenix 7 months ago +4

    This is why I wish SATA would just go the way of PATA. Motherboards and CPUs should just drop support for SATA and move to SAS. SAS is backwards compatible with SATA drives, so there would be zero issues with people moving their drives over. NVMe will always be better, but if hardware manufacturers are worried about backward compatibility they should just switch to SAS, which would provide backwards compatibility for HDDs while actually giving them a way forward. The last major SATA revision was completed in 2008, 16 years ago; it would be old enough to drive now. SAS-4, on the other hand, is only 7 years old and goes up to 22.5Gbps, or about 4x the speed of SATA.
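
    The per-lane numbers behind that comparison, as a quick sketch; the line rates and encodings come from the published SATA/SAS specs, and the output is approximate.

      # Approximate usable bandwidth per lane: line rate times encoding efficiency.
      links = {
          "SATA III (6 Gb/s, 8b/10b)":    (6.0,  8 / 10),
          "SAS-3 (12 Gb/s, 8b/10b)":      (12.0, 8 / 10),
          "SAS-4 (22.5 Gb/s, 128b/150b)": (22.5, 128 / 150),
      }
      for name, (gbit, efficiency) in links.items():
          mb_per_s = gbit * efficiency * 1000 / 8  # Gbit/s -> MB/s after encoding
          print(f"{name}: ~{mb_per_s:.0f} MB/s per lane")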

  • @andibiront2316
    @andibiront2316 7 months ago +6

    I have 12 SAS3 7.68TB SSDs in my TrueNAS. They are rated for 2000MB/s reads, but SAS3 is limited to 1200MB/s. I guess they use 2 links? They are currently working at 1200MB/s. Do they require a special backplane? They are directly connected to an LSI 9300-16i, without a backplane. I don't really need the extra BW, but I was wondering how you connect them to fully utilize the rated BW. Also, they support 2 modes of operation regarding power consumption, 11W and 9W, and I don't know how to set that up. And they are running on a Haswell Xeon v3 with 2x10Gbps, don't be so hard on them! :P
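
    Roughly what is going on, sketched below: a single SAS-3 link tops out around 1.2 GB/s, so a drive rated at 2 GB/s needs both of its ports active (a dual-port/wide path that the HBA or backplane has to be wired for), and even then all drives share the HBA's PCIe uplink. The numbers are approximations, assuming a PCIe 3.0 x8 HBA such as the 9300-16i.

      # Back-of-the-envelope: per-drive and aggregate limits behind a SAS-3 HBA.
      SAS3_LANE_MBPS = 12_000 * 8 / 10 / 8   # ~1200 MB/s per 12 Gb/s link after 8b/10b
      PCIE3_X8_MBPS = 8 * 985                # ~7880 MB/s usable on a PCIe 3.0 x8 uplink

      drives, rated_mbps = 12, 2000
      single_link = min(rated_mbps, SAS3_LANE_MBPS)      # one link per drive
      dual_link = min(rated_mbps, 2 * SAS3_LANE_MBPS)    # both drive ports connected

      print(f"per drive, single link: ~{single_link:.0f} MB/s")
      print(f"per drive, dual link:   ~{dual_link:.0f} MB/s")
      print(f"all {drives} drives share ~{PCIE3_X8_MBPS} MB/s of host bandwidth "
            f"(~{PCIE3_X8_MBPS / drives:.0f} MB/s each if hit simultaneously)")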

  • @chromerims
    @chromerims 6 months ago +3

    3:46 -- Broadwell (socket 2011) in 2024? Can I call it "greatly outclassed by the latest gen" rather than pure "trash"?
    10:35 -- Good point. Whereas a qualified SAN device will always look pricey to us DIYers, unfortunately.
    Nice video 👍
    Kindest regards, friends and neighbours.

  • @Movingfrag
    @Movingfrag 6 months ago

    I was slowly replacing the mechanical drives in my systems with SAS SSDs, and the funny thing is that in my experience the Toshiba ones were the least reliable. I had four 3.2TB SAS3 drives; after less than a year of moderate use, three died within a month and the fourth gave signs of imminent failure, so I retired it too. Replaced them with HGST drives of the same capacity, and those have been working nicely for years already.

  • @14m13375p1c3
    @14m13375p1c3 7 months ago +7

    "Socket 2011 CPUs are trash" Don't you talk about my sons like that! LOL. At least for homelabbing it doesn't matter too much, but I will say, having gone down the rabbit hole of more recent e-waste on eBay recently, I did find out just how big the gap is between the 2630L v3s I have, first-gen Xeon Scalable, and then the super impressive 3rd-gen Threadripper and even earlier EPYCs. I've been trying to find used enterprise gear that would work well for a decently capable editing server without sending my wallet into a panic.

  • @Daniel-k4t3n
    @Daniel-k4t3n 6 months ago +2

    Broadwell is fine for 99 percent of situations at home and even in small business. First time I feel Wendell is taking shots at the plebs.

  • @xandrios
    @xandrios 7 months ago +1

    How trustworthy would you consider consumer drive DWPD ratings to be? You mentioned not being happy with using a Samsung consumer drive in an enterprise setting, but would that actually be a possibility when doing low writes? For instance, dirt-cheap M.2 drives are often rated for 1 DWPD. They are so cheap that using only 10% of their capacity, effectively making them 10 DWPD drives, is cost-wise very possible.
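
    The arithmetic behind the "use only 10% of it" idea, as a small sketch. DWPD ratings are usually derived from a fixed total-bytes-written (TBW) budget over the warranty period; the drive size, rating, and warranty below are illustrative assumptions.

      # Effective drive-writes-per-day when only a fraction of the SSD holds hot data.
      def tbw(capacity_tb, dwpd, warranty_years):
          # Total TB the vendor rates the drive to absorb over its warranty.
          return capacity_tb * dwpd * 365 * warranty_years

      capacity_tb, rated_dwpd, warranty = 2.0, 1.0, 5   # assumed consumer M.2 drive
      working_set_tb = 0.2                              # only 10% actually gets rewritten

      budget = tbw(capacity_tb, rated_dwpd, warranty)
      effective_dwpd = budget / (working_set_tb * 365 * warranty)
      print(f"rated: {budget:.0f} TBW -> ~{effective_dwpd:.0f} rewrites of the working set per day")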

    • @Hugh_I
      @Hugh_I 7 months ago

      I've been using bottom-of-the-barrel consumer drives for a home server setup for ~10 years. I've never had issues with hitting the write endurance ratings. Rewriting an 8TB SSD daily would be A LOT; you'd have to have a very IO-intensive task for that to happen. For use cases that aren't constantly writing to the disk, the endurance ratings on consumer drives are often sufficient for those drives to last until you want bigger ones anyway. In case they do fail, it's probably still cheaper to replace them than to use enterprise grade. Though I would still probably not do that in an enterprise setting.
      (I did, though, have two Samsung SSDs fail catastrophically, but not due to exhausting spare cells. Both simply died at the same time; the controllers just went away. I think it may have been one of the Samsung firmware issues that fried drives, but I'm not sure. As it happens, of course, those were the two drives in a RAID1 mirror, so both copies were gone. Gladly I had backups. With mechanical drives you generally have them fail slowly, not abruptly like that. So there's that.)

    • @frankwong9486
      @frankwong9486 7 months ago

      I have some QLC drives that came with laptops; they still have 93% of their lifespan left 😂
      And if you look at how Chia miners plot on SSDs and how they replace/resell used SSDs, most SSDs die from other issues such as controller or PCB component failures, not commonly from writing too much or reaching the TBW rating.

  • @michaelsanders5815
    @michaelsanders5815 6 months ago

    It reminds me of what people said about hard drives when they came out. It's a perception thing. Hard drives are far more delicate; we just think of them as safe because we protect them so much. But it's a delicate spinning piece of glass. When you think about it, it's crazy to use them.

  • @churblefurbles
    @churblefurbles 7 months ago +3

    Mic is muffled.

  • @pephathalok
    @pephathalok 6 months ago +1

    Broadwells go for $5-$50, used SAS3 HDDs for about $5/TB. Even accounting for TDP, those are the most cost-effective, way ahead of newer stuff. Try to build an OLAP rig able to scan a petabyte of CSVs with some historical data in it.

  • @BillLambert
    @BillLambert 6 months ago

    Sure it's "middle of the road", but it's very attractive for homelabs as an affordable and scalable gateway into flash arrays. There is plenty of wiggle room for a tier of fast-ish big-ish storage in between spinners and NVMe.

  • @Elinzar
    @Elinzar 6 months ago

    Living on the bleeding edge sometimes doesn't let you see what's going on in the middle of the blade.
    Most people, and especially homelab enthusiasts who usually acquire old servers, would be thrilled to put SSDs in their hardware.

    • @adrianandrews2254
      @adrianandrews2254 6 months ago +1

      Re homelab use: I bought an X10DRi-LN4F+ motherboard (dual socket 2011 CPUs) on eBay for $120 which can support 14x NVMe drives and dual 10Gbit Ethernet with the appropriate PCIe cards. Started with 4x 2TB PCIe 3.0 consumer SSDs in RAID 5. So it need not cost the earth.

  • @gedavids84
    @gedavids84 6 months ago

    For me, it seemed like what was holding back NVMe drives in this segment was the lack of lanes to plug them into. And before you say "EPYC has 128 lanes," that's good and all, but that's still only 32 x4 devices. Our Nimble has 48-drive capacity before we even add a shelf. Older Intel systems are like 56 lanes or something? Not a lot, and that's assuming you can even split them in a way that would be useful. We need something like PCIe switch add-in cards that can give us ports to actually connect to.
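
    A quick lane-budget sketch of that point; the switch fan-out below is a made-up example of the idea, not a specific product.

      # How many x4 NVMe drives fit directly on the CPU, vs. behind a PCIe switch.
      cpu_lanes, lanes_per_drive = 128, 4           # e.g. a single-socket EPYC
      direct_drives = cpu_lanes // lanes_per_drive
      print(f"direct attach: {direct_drives} drives")        # 32

      # A PCIe switch card trades peak bandwidth for ports: e.g. a x16 uplink
      # fanned out to 24 x4 downstream ports (hypothetical 6:1 oversubscription).
      uplink_lanes, downstream_ports = 16, 24
      oversub = downstream_ports * lanes_per_drive / uplink_lanes
      print(f"behind one switch: {downstream_ports} drives, {oversub:.0f}:1 oversubscribed")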

  • @TheFullTimer
    @TheFullTimer 7 months ago +1

    NGL, I've been debating moving my Plex system to SAS SSDs for a while. It is easier and cheaper to do than moving to a newer box with trays for NVMe, E3, or E1 drives. Plus, I could keep costs down because I don't need more transcoding power.

  • @TheFickens-xr4rr
    @TheFickens-xr4rr 6 months ago +2

    I wish I could afford to populate my Xeon E5 V4 Supermicro CSE216 24-bay 2.5" chassis with 8TB SAS drives 😥

  • @DeKempster
    @DeKempster 7 months ago +1

    The disks in my job's 10-year-old DL380 G8 only started to fail this year.

  • @idied2
    @idied2 7 months ago +1

    I have an 800GB SAS SSD but haven't used it for my server yet. I want to get 3 more but can't seem to find them.

  • @Marc_Wolfe
    @Marc_Wolfe 2 months ago

    Sounds good; I'm over here trying to game on Ivy Bridge and really wanting a cheap upgrade.

  • @esunisen3862
    @esunisen3862 2 months ago +1

    My Bloomfield isn't quite happy hearing this.

  • @alexbold4611
    @alexbold4611 6 months ago

    I am sticking with my Dell T430 and R430, so SAS SSDs are the way to go for me.

  • @galileo_rs
    @galileo_rs 6 months ago

    The price of one of those Dell-branded drives is in the kilobucks range. The price of your used Dell server is...?

  • @redhonu
    @redhonu 7 months ago +1

    Would you run a server with an E5-2698 v3 if you got it for the price of the SSDs, in a beginner home lab?

    • @eDoc2020
      @eDoc2020 6 months ago

      The only thing wrong with them is power _efficiency._ You'll probably draw at least 100 watts at idle. Newer equivalent servers will give more compute for the same power. Newer small servers will give the same compute at less power.

  • @ImAManMann
    @ImAManMann 7 months ago +1

    I use tons of SAS SSDs in my environment.
    As for servers...
    For our environment, most things don't need the extra performance, as we have a lot of services running at relatively low utilization. We get better value by having many more servers a gen or two back, clustered; overall reliability is better when maintenance can be performed by moving containers and VMs to other nodes. For example, I have 5 Dell R320 servers with 10-core CPUs, 192GB of RAM, and 8 SAS SSDs each, plus 10Gb networking, in a cluster. All of that for less than a single new server, and for what it does the speed is a non-issue. Many VMs and containers run on Proxmox; everything could run on 3 nodes if needed, maybe 2. That gives us a super robust environment, and I can wait for newer-gen servers to drop in price.

  • @johnpaulsen1849
    @johnpaulsen1849 7 months ago +1

    So you're telling me I can't add these to my VNXe3200 for my homelab?

  • @shawnmcelroy1829
    @shawnmcelroy1829 7 months ago

    These still seem great for homelab NAS cold storage. Where can you get them, and how much should they cost?

  • @nicknorthcutt7680
    @nicknorthcutt7680 7 months ago

    Isn't that the Kioxia drive that the ISS put up in their servers?

  • @gabest4
    @gabest4 7 months ago +1

    How does one 8TB SSD cost as much as 280TB of HDDs?

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 7 months ago

      Strange math. In my market, one 8TB Kioxia costs the same as six 20TB Ultrastar HDDs. And this is unpleasant, but more or less reasonable.

  • @maxheadrom3088
    @maxheadrom3088 14 days ago

    That is an awesome notebook!!!

  • @reinekewf7987
    @reinekewf7987 6 months ago

    I have an R630 with E5-2683 v4s. I could use NVMe, but booting from it is difficult. I use six 512GB SATA SSDs in RAID 10, and those drives get heavily read and written. One drive lasts about 18 months and costs me only 30€. I have a SAS SSD, but those are a bit expensive if you ask me, and I don't need the extra features of SAS drives. I needed a powerful and cheap server, so the R630 cost me only 600€ and came with 2 Xeon E5-2683 v4s, 512GB of 4Rx32 RAM modules, 8 drive bays, a PERC H330 Mini, a 2x2 10Gbit NIC, and iDRAC8 Enterprise. I am happy with this; it is perfect for my needs, maybe a bit old but powerful.

  • @kristeinsalmath1959
    @kristeinsalmath1959 7 months ago +1

    Has anyone used Kingston DC SATA drives? I was thinking of buying some for old but working servers.

  • @DelticEngine
    @DelticEngine 7 months ago +1

    NVMe may be very fast, but that's the only advantage. My main system is running SAS and SATA. SAS is great because it only needs a host adapter and I can run several drives and a lot more if I choose to use an expander. I'm also running a couple of SATA SSDs on the SAS controller very happily.
    Frankly, I'm really not at all impressed with NVMe technology. At present, each one takes four PCIe lanes and is basically a four-lane PCIe slot you can't use for anything else. Depending what you put in your system, PCIe lanes can be a limited resource that can be better utilised on something other than storage, and that's before going on to the limitations of PCIe splitting. It's a very inflexible storage system and I really dislike motherboard manufacturers dictating how I can use the limited number of PCIe lanes; I need to be able to swap drives, so embedded M.x slots are a complete waste of resources for me. I did hear that there may be single PCIe lane NVMe devices, which could be an improvement.
    --------
    One possible solution to this could be some sort of 'NVMe Host Adapter' that would utilise just one PCIe slot and provide connections (ports) for several NVMe drives. Maybe it could take the form of a 16-lane PCIe 5.x Host Adapter and provide, for example, 16 4-lane ports at PCIe 3.x speeds, enabling up to 16 NVMe drives to be connected to one PCIe slot. Such a Host Adapter could be made that would also support SAS and SATA drives, which could be relatively straightforward if miniSAS connectors were used. This would facilitate choosing how resources are allocated in terms of PCIe lane utilisation and number of storage devices, and make NVMe a much more viable alternative to SAS.
    It could also make system expansion rather interesting if standards existed that enabled, for example, a front or even rear panel hot-swap array to also be used as a general expansion slot. This could be used for card readers, network adapters, video capture hardware, sound cards, or even custom home-brew expansion or scientific expansion cards. Could the 2.5" U.x also be used as a kind of 'form factor' for expansion cards?....

  • @Decenium
    @Decenium 7 months ago +2

    And here I am trying to make a personal NAS with a Q9550...

    • @rkan2
      @rkan2 6 months ago

      Don't 😅 Buy a Xeon D or similar.

  • @sk8lucas
    @sk8lucas 7 months ago

    It does not make sense, but I really want to put like 4 of those drives in a Micro-ATX video editing build.

  • @MarkRose1337
    @MarkRose1337 7 months ago +1

    Sometimes you don't have the funds for all new gear. Or you may have use-cases where CPU doesn't matter so much, and that hardware could be repurposed for Ceph storage or whatever.

  • @rafiq991008
    @rafiq991008 7 months ago +2

    Lol, I just dropped a few of those SATA SSDs into a production system this week. Better than HDDs for now.
    Also, nice to know that SAS SSDs exist; SAS HDDs are not good for a DB hitting the servers with random reads/writes.

  • @Saphykitten
    @Saphykitten 7 months ago +20

    Come on Wendell, we can’t all have $4k servers at home >:c

    • @romevang
      @romevang 7 months ago +6

      This video isn't targeted at home users... It's making arguments for businesses to get off ancient hardware.

    • @Saphykitten
      @Saphykitten 7 months ago

      @@romevang Aw, I'm just busting his chops and giving him the business ;)

  • @fwiler
    @fwiler 7 months ago +11

    I'm showing this to my boss, who thinks our Broadwell servers are fine. The cost in electricity to run them is more than they are worth.

    • @eDoc2020
      @eDoc2020 6 months ago +1

      AFAIK the newer generations don't use any less power, they just give you more performance for the same power.

    • @fwiler
      @fwiler 6 months ago

      @@eDoc2020 It isn't about the electricity; it's about Wendell saying to just throw them out. And when a laptop has more compute than a server, you know it's time to change.

    • @eDoc2020
      @eDoc2020 6 months ago +1

      @@fwiler If the electricity isn't an issue and they have plenty of compute power _for your use case_ there's no need to change.

    • @fwiler
      @fwiler 6 months ago +1

      @@eDoc2020 You aren't getting it at all. My post wasn't serious; I'm not actually going to show the video to my boss. My initial post was to show that even Wendell wouldn't use such an old POS in production. IT is notorious for not upgrading due to... insert reason here. And no, it isn't enough compute, and there are about 100 other reasons I could list why you would upgrade, so don't say you don't need to change when you have no idea about the hardware.

  • @joshuaspires9252
    @joshuaspires9252 7 months ago

    I love the SAS SSD for a home-brew media server; I can't afford fancy new gear for everything. But I am rocking a 16-drive 10K array and it is shredding power, so it was a learning experience. Now I weigh money spent on hardware against energy usage, which changes everything for me. In short, spinning hard drives are not going away for me, as SSD-ing all the things is just way too much money as of now.

  • @fhpchris
    @fhpchris 7 months ago +5

    I don't think Broadwell-E is trash. My 2699 v4 system can do ~2.48 GiB/s in a Windows file transfer from a single Windows 11 client. SMB and the Windows 11 TCP stack begin to be a limitation probably before the CPUs do, if you have the best ones you can get for your socket. Enterprise SATA SSDs are also great and much cheaper than SAS SSDs. When you start putting 24 of any of these drives into a single chassis, your networking (or the SAS adapter/expander) is probably going to be the limitation before the drives. 24 SATA SSDs can easily go faster than a SAS3008 can.
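
    A rough sketch of that bottleneck: 24 SATA SSDs can source more sequential bandwidth than a SAS3008-class HBA can move over its host link, assuming a PCIe 3.0 x8 uplink and typical SATA SSD sequential reads (approximate, illustrative numbers).

      # Aggregate SATA SSD throughput vs. a single 8-lane PCIe 3.0 HBA uplink.
      drives, sata_ssd_mbps = 24, 550          # assumed sequential read per SATA SSD
      hba_uplink_mbps = 8 * 985                # PCIe 3.0 x8, ~985 MB/s usable per lane

      aggregate = drives * sata_ssd_mbps
      print(f"drives can deliver ~{aggregate / 1000:.1f} GB/s combined")
      print(f"HBA uplink tops out at ~{hba_uplink_mbps / 1000:.1f} GB/s")
      print("bottleneck:", "HBA" if aggregate > hba_uplink_mbps else "drives")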

    • @churblefurbles
      @churblefurbles 7 months ago +1

      Problem is, an N100 mini can do pretty close to that as well, using almost no power.

    • @deepspacecow2644
      @deepspacecow2644 7 months ago

      I think he meant it more for the enterprise rather than the homelab.

    • @terosma
      @terosma 7 months ago

      @@churblefurbles With 9 PCIe lanes you don't attach many NVMe drives to an N100, or 100Gb NICs either.

  • @Elemino
    @Elemino 7 months ago

    Wendell, maybe you can answer this question for me... why does SATA still exist? Why haven't consumer drives transitioned over to SAS? It seems like the technology should be old enough and mature enough that the cost difference is negligible at this point... especially when the new hotness is NVMe.

  • @monkeyrebellion117
    @monkeyrebellion117 7 months ago

    My hot take: As long as the hardware runs what you want within your spec, it's still good. The SSDs do make a huge difference though.

  • @bcredeur97
    @bcredeur97 6 months ago

    For small business, SATA SSDs are fine.
    No need to burden the small guy with the high costs of SAS/NVMe.
    But yeah, the moment you get to "we're losing serious money if this goes down," then you need to think differently.

  • @arthurswart4436
    @arthurswart4436 6 months ago

    Are these U.2 / SFF-8639 drives, or is it the same SAS connector I use for mechanical SAS drives? I get suspicious when manufacturers cheerfully reply that I can replace mechanical SAS drives with SAS SSDs, yet never respond when I ask about using them in the SAS bays I have available. I know I can replace them; I'd rather not buy new servers to do that just yet.

    • @LadyWolffie
      @LadyWolffie 5 months ago +1

      No, they aren't compatible.
      There are tri-mode backplanes and connectors that accept U.3 NVMe drives, SAS, and SATA, but that's a newer technology.

  • @LackofFaithify
    @LackofFaithify 7 months ago

    Are you saying that a 4TB 7.2K HDD should cost less than the "all things must go, store closing" clearance price of $450? But I signed a multi-decade contract to get that price...

  • @piterbrown1503
    @piterbrown1503 6 months ago

    I'm thinking many people in mid-size and small businesses don't know that SAS SSDs exist. We are using Dell servers, and in the Dell configurator they sell you SATA SSDs, not SAS, and we are talking about R550 or R660 Dells. Also, Kingston enterprise SSDs are still SATA or NVMe, and on Micron's site you need to search deeper to find SAS SSDs.

  • @AxiomofDiscord
    @AxiomofDiscord 6 days ago

    I only have two 10K drives, and they are only 80GB, and SATA of course.

  • @wobblysauce
    @wobblysauce 7 months ago

    Yep, just like an SSD for that old laptop... they boost the server response just as nicely.

  • @tolpacourt
    @tolpacourt 7 months ago

    Pedantic pronunciation coach here. Anachronistic has stress on none of the syllables. uhn-nak-cruh-nis-tic. Maybe a slight emphasis on the second syllable, the nak.

  • @rickwhite7736
    @rickwhite7736 6 months ago

    Why is a 1TB SAS SSD $500 and a 1TB consumer SSD only $40?

  • @ArmChairPlum
    @ArmChairPlum 7 months ago

    Hmm, this would be interesting for the likes of schools with older hardware.
    In my case I have 9-year-old twin VMware 128GB hosts on a 12Gbit SAS Storwize V3700 that I am getting... concerned about.
    16TB total on itty-bitty 2.5-inch drives.
    Schools' need for compute has dropped, and storage too (shifting to online OneDrive usage), so a couple of the lower capacities in a mirror would be sweet, depending on cost.
    I do want to get a newer server though! Then migrate the DCs, but we have PaperCut, their finance package, and potentially their student management system that require a physical server.

  • @CatalystReaction
    @CatalystReaction 7 months ago

    I can't put my finger on it, but the audio hasn't been as good recently.

  • @wooviee
    @wooviee 7 months ago +1

    This camera angle is so nice, feels like an old Mythbusters shot, or a shot from Adam's videos on Tested.

  • @dustingodin5323
    @dustingodin5323 6 months ago

    The problem with these is that they're straight-up more expensive than NVMe, even some U.2.

  • @youtubasoarus
    @youtubasoarus 7 months ago

    It hurts my heart to hear hardware referred to as garbage. I get it, in an enterprise environment that is dollars wasted, but for home gamers.... that be gold in them there chassis.

  • @beansnrice321
    @beansnrice321 7 months ago +1

    Lol, ouch, me and my ol Broadwell workstation feel attacked. XD

  • @frankwong9486
    @frankwong9486 7 months ago

    Hopefully one day these SSDs become more affordable 😢

  • @UmVtCg
    @UmVtCg 6 months ago +1

    SAS SSDs are the Special Forces of storage media.

  • @ddobster
    @ddobster 7 months ago

    Damn, I just got a serious case of itchy rash after seeing that old Lexmark 2390 in the back...

  • @AnnatarTheMaia
    @AnnatarTheMaia 6 months ago

    I don't know what you were running on those servers you labeled "garbage", but I got Solaris 10 running on some of that hardware... and it just flies. Solaris 10 is unbelievably fast on that "garbage" hardware (it's garbage, but not because of generation, but because it's PC-bucket hardware, but that's a different discussion).

  • @almc8445
    @almc8445 6 months ago

    The pop-up explaining what OEM means made me laugh; I would be SHOCKED if someone watching this level of analysis wasn't intimately familiar with the term.

  • @chuckthetekkie
    @chuckthetekkie 7 months ago

    For my use case, NVMe is overkill, as I don't need those crazy-fast NVMe speeds. I'd rather have 16 SATA/SAS drives than 4 NVMe drives on an HBA card. Capacity is more important than speed. My server is mostly for serving media and the occasional VM, so I don't really need NVMe speeds. Also, SAS SSDs typically use less power than their NVMe counterparts. My server also has a bunch of 16TB SATA HDDs in a ZFS pool, and I typically get over 900MB/s, which is plenty fast for serving movies and TV shows, but I would like to eventually replace them with something a bit more reliable and more energy efficient, and these SAS SSDs sound like a logical upgrade for me. My wallet says otherwise, but eventually I'll upgrade the HDDs.

  • @eTwisted
    @eTwisted 7 months ago

    Ha ha, I have so many servers 8+ years old, and yeah, I drop in Samsung consumer drives. Many I am trying to get off of RHEL7 onto 8, and that barely supports EPYC. But first I gotta get Cadence working fully with RHEL7 so we can buy 5-year-old servers.

  • @shrimp_p4rm
    @shrimp_p4rm 7 months ago

    If you have to ask what it's used for, you definitely need at least 2.

  • @Koop1337
    @Koop1337 7 months ago +1

    Needs a big disclaimer that says "for your business, not your homelab in your garage" lol.

  • @duncancampbell9490
    @duncancampbell9490 7 months ago

    Nice prices....

  • @Angel_the_Bunny
    @Angel_the_Bunny 7 months ago

    Reminds me of SSHDs.

  • @bart_fox_hero2863
    @bart_fox_hero2863 7 months ago

    Laugh now, but having to explain to your PC component why its lifespan was so short, as it threatens to kill you just before it expires as scheduled by the manufacturer, is closer than we think, boys.

  • @Anaerin
    @Anaerin 7 months ago

    *cries in Xeon X5677*

  • @tsclly2377
    @tsclly2377 6 months ago

    On an older server (HP Gen8), NVMe was generally something that would sit on the motherboard connectors using 8 PCIe lanes, and they were expensive, could be burned up on writes in 5-7 years, or would be password-protected in such a way that if you didn't have the actual password, all you really had was parts. Me, I collect the 45nm 400GB SLC SAS drives, using only the older ones, for data transfer/modification, as they may never burn out... almost nuke-proof. When I was mining ETH (2018), we had some SATA Intel 545s that lasted as little as 4 months, while the OS and routines on most were stored on HP SLC SATA and those never burned out, even when the temps went to over 140°F (the air conditioner was turned off) and mining cards failed and fried a PCIe accelerator. Don't forget the tape backup. TLC is garbage and is going to be sketchy, especially if you buy used and are generating a lot of useless data, like in AI. A 400GB SLC is going to be better than a 4-6TB drive with an estimated write limit of 30PB and is more likely to live on, but these are now getting harder to get on the used market. If you are buying used, the considerations are: total writes per GB, wear so far, how many you can get, and cost. Definitely buy them 10-14 at a time, and look for the spares that are being cycled out, like scoring 24-30 Dell 400GB SAS 2.5" 12G SLC solid state drives (T2TPF MDKW-400G-5C20 0T2TPF).

  • @kaeota
    @kaeota 6 months ago

    Can we have a clip of Wendell saying "it's trash, I'm sorry" 😂

  • @xandrios
    @xandrios 7 months ago +1

    The enterprise markup on these drives is ridiculous, but they all do it. What enterprise server platform would actually happily accept those nice Kioxia drives? Because Dell and HPE will not. And for enterprise use, especially when deploying offsite, you probably want to go with one of the big brands in order to get on-site hardware support.

  • @krazydime0
    @krazydime0 6 months ago

    You ain't putting one in a Dell or HP unless it runs their firmware... aka you bought it, overpriced, from them.

  • @drewzoo028
    @drewzoo028 7 months ago

    3:19 made me feel targeted; I run an array of 24x 240GB Samsung 860 EVO SATA SSDs in my homelab 🤣

    • @piked86
      @piked86 7 months ago

      Are you concerned about drive wear?

    • @drewzoo028
      @drewzoo028 7 months ago

      @@piked86 Yes, and reliability in general, but they were free and I have a lot of spares. Can't beat free!

    • @piked86
      @piked86 7 months ago +1

      @@drewzoo028 That setup makes more sense with that price tag.

  • @Nah_no_thanks
    @Nah_no_thanks 7 months ago

    Real SASsy drive... Punching out.

  • @krykry606
    @krykry606 6 months ago

    I have a more important question.
    What is an AI SSD?

  • @uni-kumMitsubishi
    @uni-kumMitsubishi 6 months ago

    Wow, looking healthy. Feels like everyone is coming out of their post-COVID funk.

  • @floodo1
    @floodo1 7 months ago

    I hope to be able to use a $1500 SSD at home one day lol (-8

  • @mrhassell
    @mrhassell 7 months ago +1

    SAS SSDs (Serial Attached SCSI Solid-State Drives) are often the preferred choice over SATA SSDs (Serial ATA Solid-State Drives).

  • @jfudge7384
    @jfudge7384 7 months ago +1

    The bots are here

    • @RotaryJunkie
      @RotaryJunkie 7 months ago

      Impressive nonsense from them.

  • @Ozz465
    @Ozz465 7 months ago

    Just let it go, brother. It's trash. It's hard to let go of what one used for ages.