Forbidden Arts of ZFS | committing the greatest sin & getting away with it [Hardware RAID with ZFS]

  • Published: 11 Sep 2024
  • WARNING: This video is for entertainment purposes only and may be disturbing to some ZFS users. Do not try this at home. Any actions you perform after watching this video are at your own risk, and the creator of this video shall not be liable for any of your actions.
    This is the first video in the Forbidden Arts of ZFS series. In this video, I'm going to show you how to commit the greatest sin as a ZFS user and get away with it too. I'll show you all the reasons why people will tell you not to do this, and we'll have a myth-busting discussion about all these reasons. In the end, you should make up your own mind.
    You may also want to watch "Uncovering the Truth about HBAs and SMART data": • Uncovering the Truth a...
    If you need an IT mode HBA SAS controller, visit the HBA section of my store: ebay.to/3l4xlch
    If you want the H200 with IT mode firmware that works in the integrated slot of 11th gen PowerEdge servers, you can find it here: ebay.to/2N6PCZT
    If you'd like to support this channel, please consider shopping at my eBay store: ebay.to/2ZKBFDM
    eBay Partner Affiliate disclosure:
    The eBay links in this video description are eBay partner affiliate links. By using these links to shop on eBay, you support my channel at no additional cost to you. Even if you do not buy from the ART OF SERVER eBay store, any purchases you make on eBay via these links will help support my channel. Please consider using them for your eBay shopping. Thank you for all your support! :-)

Comments • 145

  • @resicom97
    @resicom97 3 years ago +39

    I've called the UN and they've issued a binding resolution to send peacekeeping forces to stop this madness.

  • @hubertley939
    @hubertley939 2 years ago +5

    ZFS on RAID boxes nearly killed us. While it worked great for a few years, the manufacturer of the RAID boxes went bankrupt. We had RAID controllers fail every couple months. No problem because of double redundancy (redundant RAID cards) and multi-pathing. Then we were eventually no longer able to purchase replacement controllers anywhere on the market. We even sent controllers to Taiwan to have them manually repaired, hoping to get functional ones back after a couple months. Eventually we ran without redundancy because half of our RAID controllers were being repaired or faulted at that time. One more lost controller and none of the data would be any longer accessible. Because of the custom RAID formatted disks, they were not readable in JBODs or boxes from other manufacturers. We had to purchase sufficient additional backup storage, transfer our files off the affected systems before the last controller failed, and then rebuild and reload the whole system onto simple JBOD-based typical ZFS. Took about a couple months for data transfer and restoring. Luckily we came up with a scheme that required just a week of downtime. I think this was about 400TB on Lustre over ZFS. Performance was good with the RAID boxes, but we somehow felt that ZFS on plain JBODs had much higher data transfer rates. So, that’s the compelling reason not to lock yourself into proprietary RAID technology.

    • @ArtofServer
      @ArtofServer  2 years ago +6

      I don't know which RAID controllers you used, but LSI, which has the majority of the RAID/HBA SAS controller market, has standardized their on-disk format - to the point that you can migrate to newer versions of their RAID controllers, or even reassemble their RAID volumes using software available in Linux. I demonstrated this in my more recent video here: ruclips.net/video/6EVjztB7z24/видео.html
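
      As a rough sketch of what "reassemble using software available in Linux" can look like: mdadm includes support for the SNIA DDF metadata format, so on a plain HBA you might try something along these lines (device names are placeholders, and this is an illustration rather than a recovery procedure):

        # check whether a member disk carries DDF config-on-disk metadata
        mdadm --examine /dev/sdb
        # try to auto-assemble whatever arrays/containers mdadm finds, then see what came up
        mdadm --assemble --scan
        cat /proc/mdstat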

  • @PeterBatah
    @PeterBatah 7 months ago +3

    Your mission, should you choose to accept it... Great intro by the way.

    • @ArtofServer
      @ArtofServer  7 months ago

      ha ha ha... thanks for watching!

  • @alexanderelmi8379
    @alexanderelmi8379 4 years ago +8

    Thank you for this video! I have been sitting on an LSI 9211-16i that I considered "useless" because of all the rumors around ZFS. Now I am feeling better about using it in a storage server with CentOS/ZFS next.

  • @Ryan-zer000
    @Ryan-zer000 2 years ago +5

    Bought a cheap used R710 with the PERC 6/i raid controller. Wanted to try using Truenas Scale on it, so I was looking all over ebay for cheap alternative HBAs to use. After watching this, I think I'll try this method first 😜

    • @HeroRareheart
      @HeroRareheart 2 years ago +2

      I've been doing this with TrueNAS Scale for like 3 weeks and it's fine. That said I'll be switching back to Core because I have encountered a bug that nobody else has encountered apparently and it's a huge hindrance.

    • @Ryan-zer000
      @Ryan-zer000 2 years ago +1

      @@HeroRareheart I also found TrueNAS Scale a little much for what I wanted to do, so I just popped in Ubuntu 20.04 LTS haha. Just need ZFS and samba shares...keeping it simple
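
      For anyone following the same "keep it simple" route, a minimal sketch on Ubuntu 20.04 might look like this (disk IDs, pool and dataset names are placeholders; pick a vdev layout that fits your drives):

        sudo apt install zfsutils-linux samba
        sudo zpool create tank mirror \
            /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
        sudo zfs create -o compression=lz4 tank/share
        # then export /tank/share with an ordinary [share] section in /etc/samba/smb.conf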

  • @dorinxtg
    @dorinxtg 4 years ago +5

    1. Take any LSI 92xx/93xx (no need for an HBA) and clean the configuration using the GUI.
    2. Connect all the drives to the card, power on the machine, go into the GUI, and see the drives as "unconfigured good".
    3. Reboot, install ZFS. lsscsi should show you all the drives. Create your pool and use it just like with an HBA.
    I have 4 systems built like that. I simply skipped the BS that many people say.
    Regarding STEC and SLOG - an Intel 900p (with 3D XPoint) is a great SLOG solution; just divide it with partitions (and skip this "you must use the whole device and not a partition" nonsense).
    Overall, great video!
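
    A minimal sketch of the SLOG-on-a-partition idea above, assuming an Optane at /dev/nvme0n1 and a pool named tank (both placeholders; size the partition to your workload):

      # carve out a small partition for the SLOG and leave the rest of the device free
      sudo parted /dev/nvme0n1 --script mklabel gpt mkpart slog 1MiB 16GiB
      sudo zpool add tank log /dev/nvme0n1p1
      sudo zpool status tank    # the new log vdev should appear under "logs"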

    • @artlessknave
      @artlessknave 4 years ago +2

      It's not BS, it's paranoid-safe. One of the most common places that discourages ZFS on RAID is the FreeNAS forums... largely because people who don't know what they are doing mangle data with improperly designed RAID+ZFS (or virtualized ZFS) and then go to the FreeNAS forums begging for help. When they say "don't use ZFS and RAID," what is really meant is "don't use ZFS+RAID if you aren't advanced enough to fix it," because most of the people who can fix it on forums aren't interested in doing so for free when the risk could have been avoided entirely.

    • @artlessknave
      @artlessknave 4 years ago +1

      Also, one of the reasons to discourage RAID controllers is that even in "RAID0" or "JBOD" modes, some of them still add their own metadata and make the drives unreadable, or hard to read, in anything but the same controller line. With an HBA, the disks are readable by any HBA or onboard controller, on any OS.

    • @chrismoore9997
      @chrismoore9997 4 years ago

      @@artlessknave - Good points. Thanks for the input.

    • @ArtofServer
      @ArtofServer  4 years ago +3

      @artlessknave uh.... didn't I just prove that wasn't true in this video? I went from H700 single-drive RAID0 virtual disks and just swapped to an IT mode card and re-imported the zpool. Or maybe you skipped that part... ???
      I should add, I've done this sort of operation with many other MegaRAID controllers, not just the H700 - even more modern SAS3x08 controllers.
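
      For context, the swap described above boils down to an ordinary export/import - assuming, as shown in the video, that the single-drive RAID0 volumes leave the ZFS labels readable on the raw disks (pool name is a placeholder):

        zpool export tank      # before powering off and swapping the H700 for the IT mode card
        # ...swap the controller, boot back up...
        zpool import           # scan for importable pools on the now directly visible disks
        zpool import tank
        zpool status tank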

    • @ArtofServer
      @ArtofServer  4 years ago +1

      @Hetz Biz thanks for watching and sharing your info!
      Yes, Optane drives make a great SLOG too! I have a few of those that I will talk about in future SLOG-related videos.
      LOL... I don't know if you read my mind, but the next video in this series will involve the partition nonsense you mentioned...

  • @PeteKowalsky
    @PeteKowalsky 4 years ago +8

    Great explanations and excellent info. I'd never even thought to use single-drive RAID0 drives like this. VERY smart. :) Keep up the good work!

    • @ArtofServer
      @ArtofServer  4 years ago +1

      Thanks! Glad you enjoyed it. Thanks for watching!!

  • @UnkyjoesPlayhouse
    @UnkyjoesPlayhouse 4 years ago +8

    I linked this video to my YT community thread, it is a great explanation :)

  • @YaroKasear
    @YaroKasear 7 months ago +4

    Don't get me started on the ECC RAM thing. This was a thing that got blown way out of proportion by the FreeNAS community, when, if you take what was actually said in context, it did NOT mean that using ZFS on non-ECC systems is dangerous, but that using non-ECC RAM, regardless of filesystem, is riskier for your data.
    Somehow some ZFS users translated this into "your data is going to be corrupted forever if non-ECC memory so much as exists in the same room as a zpool."

    • @ArtofServer
      @ArtofServer  7 months ago +1

      Yeah, I prefer to discuss facts and data rather than dramatization, which seems prevalent in some communities. There is good reason for ECC RAM, but it isn't the end of the world. The problem with non-ECC is that it fails silently and programs will not know about the data corruption in RAM until it breaks execution of the program. ECC RAM will notify the system of errors and also correct the error to an extent.

  • @mithubopensourcelab482
    @mithubopensourcelab482 3 years ago +4

    Great learning and observation. It's worth placing this video in a ZFS learning kit for newcomers as well as a refresher course for the experienced alike.
    Thumbs up from my side.....

    • @ArtofServer
      @ArtofServer  3 years ago

      Thanks for watching! Glad you enjoyed it!

  • @Gunzy83
    @Gunzy83 3 years ago +4

    It even has a battery for backing up the cache... I LOL'd

  • @NickyNiclas
    @NickyNiclas 3 years ago +5

    This video has no dislikes! That is impressive!

    • @ArtofServer
      @ArtofServer  3 years ago +1

      Let's see how long it will stay that way! Lol

  • @ThomasTomchak
    @ThomasTomchak 4 years ago +4

    Nicely done as usual. Great information and very thoughtfully presented. I can tell you’ve been working on this one for some time and really knew what you wanted to say. If I have one critique it’s that you need to get a microphone. The slides part had too much room echo for my taste. But I’m a video nerd and notice these things. Maybe it’s just me.

    • @chrismoore9997
      @chrismoore9997 4 years ago +1

      The room echo is something regular people don't even think about or notice.

    • @ArtofServer
      @ArtofServer  4 years ago +2

      Thanks Tom for the feedback! You are so right on all points. I actually had this video idea over a year ago and had been poking at it here and there over time and finally decided to put it together. The part about "Uncovering the truth about HBAs and SMART data" was originally going to be part of this video, but the combination of both would have been too long... that's part of the reason why I sat on this idea. But, I eventually broke out the SMART data part into its own video a while ago.
      I think now was just the right time to release this. This 1st part wasn't going to be part of a series originally. But I've accumulated several ideas since then, that all fall under the "Forbidden Arts of ZFS" banner so this was a good way to start the series.
      Thanks for the feedback regarding audio. Not being an expert in this area, I too noticed the change in audio quality between the opening scene in the dark vs the on-screen recordings. The opening scene was recorded with a GoPro on a boom in the lab rack area. The on-screen recording was with a webcam mic in the office at my desktop pc. I knew something sounded different, but I didn't think of it as echo to my amateur ears. If you have any suggestions you can make in terms of mic products, audio setup, etc. please shoot me an email. I'm still learning a lot in this area so would appreciate suggestions.

    • @abitrubbish
      @abitrubbish 2 years ago

      @@chrismoore9997 Regular is overrated

  • @gamergamer5345
    @gamergamer5345 6 months ago +2

    You make me smile! Thank you very much!

  • @user-xd3en9mm5x
    @user-xd3en9mm5x 4 years ago +10

    In before YT shuts this madness down 😂

  • @tdtrecordsmusic
    @tdtrecordsmusic 3 years ago +3

    I feel ya on the cult oddities. I have also sinned... XCP-ng with multiple FreeNAS instances, and some of the disks are shared between multiple FreeNAS instances. 3 years, zero faults.
    Oddly enough, some of the FreeNAS installs on bare metal have had drive loss... Out of 5 servers my experience is not worthy of scientific statements, but this is my tally so far: 5 servers (3 XCP-ng, 2 FreeNAS bare metal), same uptime on all servers, 3 drives failed on the bare metal, zero failures on the hypervisor'd versions.

    • @ArtofServer
      @ArtofServer  3 years ago +2

      LOL... someone is busy watching my channel! Thanks for watching! Hopefully it's been useful!

  • @slopsec2358
    @slopsec2358 1 year ago +1

    @1:27 Ahh, I feel like we're friends! Heck, I'd even let you log into my servers, and that's saying A LOT!

  • @SheeplessNW6
    @SheeplessNW6 2 years ago +1

    Interesting! I recently got a Dell R720, planning to use it for TrueNAS, though I haven't had time to do anything with it yet. As I understand it, one downside of reflashing a Dell PERC to use IT mode is that the iDRAC then increases the fan speeds, and many people resort to using IPMI to try to compensate for this. It wouldn't surprise me to see the technique in this video cited as an alternative, since the iDRAC will see nothing "wrong"...
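
    For reference, the IPMI workaround usually refers to the commonly circulated (and unofficial) iDRAC raw fan-control commands; Dell does not document them, so treat the bytes below as an assumption to verify for your server generation before relying on them:

      ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x00        # take manual control of the fans
      ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x02 0xff 0x14   # set all fans to ~20% duty cycle (0x14 hex)
      ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x01        # hand control back to the iDRAC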

    • @ArtofServer
      @ArtofServer  2 years ago

      No, I have not seen a repeatable, directly correlated, much less demonstrable cause and effect of an LSI IT mode flashed mini mono SAS card causing fan speeds to increase simply because of the firmware. Dell BMC controls fan speeds based on performance profile and sensor parameters. It is possible that heat generated by a card can cause an increase in exhaust temp sensor readings, and thus causing a fan speed increase, but actually I typically see the opposite.
      For example, H710 mini is a SAS2308 based controller that runs very hot, and some people mistakenly run this with "efficient" profile which reduces fan speeds. This leads to long term exposure of overheating the H710 and possible failure. I usually recommend people using the H710 to run in performance profile to keep the fan speeds high to maintain enough cooling for the SAS controller.
      That said, Dell BMC does strange things to the fans that are sometimes hard to explain, so I can see how some people might think this or that and somehow conclude one thing or another. I have, on many occasions, witnessed the fan speeds increase for no apparent reason and then suddenly quiet down again.

    • @SheeplessNW6
      @SheeplessNW6 2 years ago

      @@ArtofServer thanks for the response! I was thinking of this bit from Fohdeesha's crossflashing docs: "iDRAC does not expect to see a PERC card running LSI firmware - this will cause the iDRAC to no longer see the drive temperatures. In some cases, this will cause the error PCI3018 in the Lifecycle Log, and the fans will be set to a static speed of about 30%. The fan speed acts as a failsafe to prevent any disks from possibly overheating".

    • @ArtofServer
      @ArtofServer  2 years ago

      The PCI3018 condition is not really specific to the LSI firmware flash. Dell BMC estimates the heat output of the PCIe slots by measuring power draw. I don't know if that sensor is specific to each riser or overall system. Basically, you can get PCI3018 even if you use Dell firmware PCIe cards if you have enough of them that draw enough power to go over a threshold. If you have a loaded system, and the addition of the LSI flashed HBA puts it over the threshold, that might be why some people think it's due to the LSI firmware. If you have a mostly empty system, you will probably not notice any difference. That said, I've had Dell systems that were seemingly bi-polar and would randomly spin up the fans, then go quiet again, and this is under no load conditions.

    • @SheeplessNW6
      @SheeplessNW6 2 years ago

      @@ArtofServer you've reassured me. I'm going to reflash my H710 mini.

  • @CrazyLegsFE
    @CrazyLegsFE 4 years ago +2

    AoS, I learned a lot! Thank you. I'm still just using basic LVM software RAID5 with an old R420 attached to an old NetApp 2246 disk shelf on an H200e HBA, and it appears to be pretty resilient. I've been wanting to try ZFS; however, I'm not ready to bring down my server and rebuild until I get proper backups. Also, slightly off-topic: how does ZFS handle multipathed devices in a drive failure scenario? The information I've found is vague, and I'm not sure I should even attempt moving to ZFS if it's not properly managed.

    • @ArtofServer
      @ArtofServer  4 years ago

      Hi Josh! Thanks for watching!
      Oh man... if the data is important to you, definitely get your backup plan up and running!
      Are you using the raid stuff in LVM? I've used both LVM(with RAID) and mdadm+LVM... and I found the mdadm+LVM was more stable when a fault condition happens. Nothing wrong with using that tech though... I wouldn't just switch to ZFS just because... if you think ZFS offers features you'd like to have that you don't have now, then try it out.
      I'm not understanding your question though... if a drive fails, all paths to that device should see that it went offline, no? Or, are you talking about a path failure (HBA failure, expander failure) and the drive survives?
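
      On the multipathing question, one common pattern is to let dm-multipath own the duplicate paths and build the pool on the mapper devices: a path failure (HBA, cable, expander) is then handled below ZFS, while a dead drive is handled by ZFS like any other failed vdev member. A rough sketch with placeholder device names:

        sudo apt install multipath-tools
        sudo multipath -ll                      # each disk should show two active paths
        sudo zpool create tank raidz2 \
            /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd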

  • @johngrabner
    @johngrabner 1 year ago +1

    Excellent video.

  • @GeoffSeeley
    @GeoffSeeley 4 years ago +6

    Will I burn in a special level of hell if I use my Oracle branded 9261-8i Raid controller to perform this pagan ritual? :-)

    • @ArtofServer
      @ArtofServer  4 years ago +1

      OMG LOL! that's another level man... but you might make Larry Ellison proud. I'll keep your secret, but I can't help you redeem your sins....

  • @stevejones2697
    @stevejones2697 3 years ago +2

    This is really cool - I had always assumed (and maybe this is true in other scenarios?) that there is a sector or two on each drive controlled by a hardware RAID controller which would invalidate its partition table when used with an IT mode controller. I guess this whole scenario is irrelevant in anything but a one-drive RAID-0, since any other logical drive would depend on the controller to present a single entity. Is this generally true of hardware RAID cards, or is this something specific to Dell? Could I take a generic LSI with hardware RAID firmware and do the same thing - create a bunch of single-drive RAID-0 logical volumes, then flash the IT firmware and expect it to work?

    • @ArtofServer
      @ArtofServer  3 years ago

      Yes, most LSI RAID controllers have a similar feature to this one.

  • @kimbjoern
    @kimbjoern 3 years ago +1

    Great wizardry! Thanks for a great (forbidden) series.
    I'll definitely play a bit with the idea on some of my DELL/LSI controllers.
    I also agree with your "sensitivity" against sources claiming "it can't be done" without proper arguments. In the case against ZFS on HW-RAID, one reason may be the opposite logic of one of your arguments: you shouldn't configure ZFS on HW RAID with redundancy! - and why else would you use your HW-RAID controller than for single-disk RAID0s? I assume (strongly) that this wouldn't work on any array of disks - even JBOD(?)
    As you've demonstrated: RAID-0 works (apparently the ZFS partitions are recognised by LSI cards). I too love ZFS, also because of portability (Linux, FreeBSD). I typically create and import pools by labels (-d /dev/disk/by-partlabel), and your off-world wizardry does have a real-world use case: should you need to recover from an HBA and only have access to a RAID controller - IF the overall labelling is accepted by it.
    Did you ever try to cast these types of (reverse) spells upon disks? Create/label disks "manually" so that they could be recognised by a RAID controller?
    Thanks again for your videos, including your inspiring shell skills.
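
    For readers unfamiliar with the by-label trick mentioned above, importing from a specific device directory is a one-liner (pool name is a placeholder):

      zpool import -d /dev/disk/by-partlabel tank    # or -d /dev/disk/by-id for stable whole-disk names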

    • @ArtofServer
      @ArtofServer  3 years ago

      I stumbled on this many years ago when I was handed a server and asked to use ZFS. I was told it was a no-budget project and I had to come up with a solution. What was silly is they paid more for my time than it would have cost to just buy an HBA for the server. Lol. But it made them happy so that is what counts.
      I'm glad you enjoyed the video. Thanks for watching!

  • @alanmoore489
    @alanmoore489 4 years ago +3

    Brilliant and informative video. One slight point: one of the questions mentions ZFS is open source, which is not strictly true - ZFS was created by Sun Microsystems, who were taken over by Oracle, and Oracle now owns ZFS.
    OpenZFS is an open-source derivative, but sadly, Oracle's ZFS still kicks OpenZFS's ass in terms of performance. It's not even close.

    • @artlessknave
      @artlessknave 4 years ago +2

      That seems... unlikely. Do you have some numbers or something to back that claim up?

    • @chrismoore9997
      @chrismoore9997 4 years ago

      Interesting. Upon what do you base this claim of performance?

    • @ArtofServer
      @ArtofServer  4 years ago

      Thanks for watching Alan! I'm an old school Solaris guy, so I'm aware of the history - rather sad history at that. :-(
      Yes, I assumed the comment about ZFS being open source was referring to OpenZFS, and not the now closed source ZFS in Solaris.
      In what use cases does Oracle ZFS outperform OpenZFS? I haven't used Solaris in many years so don't know how it compares these days. Is it in sequential i/o? random i/o? synchronous i/o? small vs large i/o? with or without encryption? I'd love to read about the details of any benchmark comparisons if you can point me to some.

  • @VTOLfreak
    @VTOLfreak 4 years ago +1

    Not every RAID controller creates a single-disk RAID0 that is readable without the controller. Some controllers write a whole bunch of headers and you can't read the data bypassing the controller. I once had a hell of a time recovering data from a RAID1 where the controller died and we couldn't get another one of the same brand.

    • @ArtofServer
      @ArtofServer  4 years ago +1

      That could be true, but it shouldn't be the default assumption. What I did in this video pretty much works on every LSI RAID controller from the SAS-2 generation and newer.

  • @PieVsCake
    @PieVsCake 2 years ago +2

    How will they know? They’re gonna know…. So I just bought a flashed H710 mini…. Shhhhhhhh

  • @Magnus_E
    @Magnus_E 2 years ago +1

    1:15 "Me I got no friends I have nothing to lose", same here (for my server life) hahaha i'm gonna siiin

  • @snowdog993
    @snowdog993 1 year ago +1

    Excellent

  • @PSzlazak
    @PSzlazak 3 months ago +1

    23:55 How about doing it vice-versa - creating more complex RAID (e.g. RAID-5) on controller level and having RAID-0 on ZFS level?

    • @ArtofServer
      @ArtofServer  3 months ago

      give it a try and experiment with it. if you made such a video, i would watch it! :-)

  • @Chris-hy6jy
    @Chris-hy6jy 3 years ago +1

    So what if you did this with actual RAID0? By that I mean have 3 X 2 disk RAID0 stripes on the raid controller and then pass those 3 virtual disks to ZFS to create a z1 pool? As this is still a non-redundant raid level at the controller, would it work in the same way?

    • @ArtofServer
      @ArtofServer  3 years ago +1

      I'm not sure I would recommend that kind of setup, as I don't recommend mixing RAID features from both technologies. But you can try it and report back!

  • @WorBlux
    @WorBlux 3 years ago +2

    The HBA mode of IT controllers and the firmware of disks are at least nominally compliant with published standards. You know the "what" if you know the "how". With arbitrary hardware RAID you only see the tip of the iceberg with regards to what it is doing.
    Sure, if you know exactly what you are doing, and precisely how to mitigate issues, you can get away with it, but it's still not a recommended configuration. This is what you do when procurement screws you over or you are dealing with repurposed hardware.

    • @ArtofServer
      @ArtofServer  3 years ago

      Hardware RAID is not really arbitrary. I think too many people think that and don't realize what the SNIA is, or that it has defined a "standard" called the DDF that is used by many RAID controllers. What published standards are you referring to when you say HBA firmware is "nominally compliant"? Citations please.

    • @WorBlux
      @WorBlux 3 years ago +1

      @@ArtofServer "The DDF data structure also allows RAID groups with vendor unique RAID formats. While vendor unique RAID formats prohibit data-in-place migration between vendors, the Common RAID DDF will be a benefit in these situations. At a minimum, when a RAID group containing a unique format is moved to a different RAID solution that does not support the format, the new system will still be able to read the DDF structure"www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
      So if I'm reading this right DDF will tell you that you're fucked, but doesn't actually prevent raid controllers from using vendor-specific layouts? Sure maybe I'll read the full 100 page document and make sure my controller was using a common format.... if this was my full time job. For homelab/ SOHO NAS, I'm just sticking with an HBA thank you very much.
      Vs many of the bus adaptors just supporting AHCI, failing that have Sas/sata drivers in kernel. Any port on any adapter behaves mostly the same.

    • @ArtofServer
      @ArtofServer  3 years ago +2

      @WorBlux I think perhaps you will benefit from reading the specification for DDF in detail because it sounds like you have a misunderstanding. The ability to read the COD (config on disk) metadata and still allow vendor unique data formats isn't to say "I'm not going to be compatible with your RAID implementation". It's there to detect when a controller can't handle the format and prevent it from destroying the data on disk. By having a common COD, various controllers can still interoperate with each other's formats, but there's always a final "else" case in the logic so that no one steps on each other's toes. This can happen, for example, if newer specifications allow for newer formats, while a controller compliant with an older format can still read the COD and say, sorry, I can't handle the newer format.
      At the very least, RAID controller vendors have a standard they can comply with if they so choose. There really aren't that many players in this space anymore, with Avago/Broadcom/LSI being the dominant vendor now.
      I don't know of an equivalent standard for HBA controllers. If you know of one, I'm still waiting for your citation on the matter. For example, I don't know of a standard specification on how to translate SATA LBA to SAS LBA, especially when they are of different sizes. That's why some Adaptec HBAs seem to handle greater than 2TB SATA drives while some older SAS-1 LSI HBAs can't. So, all the vendors seem to implement their SAT (SAS/ATA translation) layer differently. In fact, LSI has a strange off-by-one bug that makes SATA TRIM not always work correctly in their SAT layer when mapping the SATA TRIM to SAS UNMAP command. If there was a better standard for HBA implementation, and especially for SAT implementation, I think it would help fix these types of problems for HBA controllers.

    • @WorBlux
      @WorBlux 3 years ago +1

      @@ArtofServer No, I understand. But as I said, I don't really care to meticulously cross-reference allowed modes between controllers.
      The DDF just means you can drop in a different RAID controller (if you, for instance, work somewhere where you have access to a stack of different RAID controllers) and it may or may not work, but it won't destroy data. Still a PITA for the little guy.
      And SAS is an ANSI standard.
      SAS-1 can't handle >2TB, as they reduced it to 32 bits to improve controller performance (a reasonable thing to do at the time).
      You can buy the standard for a reasonable fee: webstore.ansi.org/Standards/INCITS/INCITS5192014
      This off-by-one error? Hopefully it's fixed in a firmware update. If not, it joins a long list of hardware errata and special cases in block-layer code. But the occasional bug doesn't mean the standard isn't there. It just means bugs exist and that open firmware would be awfully nice to have.
      Also curious as to whether it only shows on 512e drives.

    • @ArtofServer
      @ArtofServer  3 years ago +2

      Uh... SAS-1 handles >2TB just fine. See my vids on the R410 storage. You are talking about the 2TB limit due to LBA translation problems, which is what I was talking about above. Some vendors, other than LSI, have SAS-1 controllers that handle >2TB SATA just fine, but LSI SAS-1 doesn't. There was no reduction to 32-bit LBA for performance reasons... And the ATA spec used a different LBA scheme in general, and that's why translating between SATA and SAS isn't always straightforward, leading to problems.
      Anyway, getting off topic. Point is RAID isn't as proprietary as people think. You can even take a hardware RAID set and assemble it with open source software if you lose the controller, thanks to software that understands the DDF.

  • @mattiashedman8845
    @mattiashedman8845 3 years ago +1

    MegaCli is a fantastic tool. Got a question though: if I'd like to CfgEachDskRaid0 only slots 2 through 7, how do I do that?

    • @ArtofServer
      @ArtofServer  3 years ago

      You would have to configure them individually, selecting the drives you want to use.

    • @mattiashedman8845
      @mattiashedman8845 3 years ago

      @@ArtofServer And how do I do that? If I'm allowed to be lazy... :) CfgDskRaid0 slot2 :P

    • @ArtofServer
      @ArtofServer  3 years ago +1

      @@mattiashedman8845 I don't have the command handy off the top of my head. But read the user manual or help pages for megacli and you should find it.
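
      A possible shape of that per-slot command, pieced together from the MegaCli documentation - the enclosure ID (252), adapter number, and cache policies below are placeholders, so check yours with -EncInfo and -PDList first:

        for slot in 2 3 4 5 6 7; do
            MegaCli64 -CfgLdAdd -r0 "[252:$slot]" WT NORA Direct -a0
        done
        MegaCli64 -LDInfo -Lall -a0    # confirm the six new single-drive RAID0 logical drives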

  • @chrismoore9997
    @chrismoore9997 4 years ago +1

    In the FreeNAS forum, we have seen many people create a hardware RAID array of some number of disks and present that as a single disk to ZFS. That is a bad idea because you are relying on the RAID controller to manage the disks. The biggest problem is improper configuration and lack of knowledge of how to troubleshoot when problems happen. There is a limited ability to monitor drive health when using a hardware RAID controller. FreeNAS has monitoring capabilities for hard drive health that are not part of ZFS, which work with an HBA but can't work with RAID cards.
    If done right, many things are possible, but many people don't take the time to find out how to do it right. That is the biggest reason for saying, "do this" and never do that.

    • @ArtofServer
      @ArtofServer  4 years ago +4

      Thanks Chris for sharing your thoughts! It's very much appreciated! :-)
      I do understand the approach the FreeNAS community takes to support its users. I just think that the message some times gets twisted along the way that results in misinformation getting spread around like gospel.
      A perfect example is as you mentioned above regarding FreeNAS HDD monitoring, which I believe is in part done via the smartd daemon, and as you mentioned not part of ZFS. I cannot tell you how many times I've had people ask me on eBay, "what HBA do I need in order to allow ZFS to monitor SMART?" As you know, neither of those is true... ZFS doesn't monitor SMART, and an HBA is not required to access SMART. But in the FreeBSD/FreeNAS corner of the world, I know some of the megaraid drivers haven't implemented the ability to pull SMART via a RAID controller. So, instead of understanding the problem as a software driver issue, or that SMART monitoring is done via smartd, not ZFS, all 3 concepts get mixed in a pot and result in bad information getting spread. This is not exclusive to the FreeNAS community by the way, but I think FreeNAS' success in getting ZFS adopted by the masses has just made it a focal point, and I often trace the origins of such information back to something someone posted on the FreeNAS forums. Some of the newcomers to ZFS often learn about ZFS through FreeNAS. And then they take some of their learned misinformation to other implementations of ZFS, repeating the same misinformation.
      But, my response to you is not a complaint, just sharing my perspective. Being a ZFS user since Solaris days, I'm happy to see FreeNAS succeed in getting ZFS adopted by so many. In fact, the reason I make some of these videos is to help clarify these misunderstandings, so this is my small contribution in being part of the solution. I just hope that we, including the greater ZFS user community, can start correcting the sources of such misinformation.
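
      To make that point concrete, smartmontools can pull SMART straight through a MegaRAID controller on Linux, no HBA required; a sketch (the device node and megaraid target IDs are placeholders - enumerate them with smartctl --scan):

        smartctl --scan                      # lists entries like "/dev/bus/0 -d megaraid,0"
        smartctl -a -d megaraid,0 /dev/sda   # SMART for the first physical drive behind the controller
        # smartd can poll the same way, e.g. a line in /etc/smartd.conf:
        #   /dev/sda -d megaraid,0 -a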

  • @augurseer
    @augurseer 4 years ago +1

    Love this!!!!

  • @mthanry
    @mthanry 3 years ago

    Thank you very much for another enlightening video. You made me consider ZFS for the R720XD I have here. MegaCLI doesn't see the H710 mini controller. Firmware issue?

    • @ArtofServer
      @ArtofServer  3 years ago

      Is that H710 with Dell megaraid firmware or running LSI IT firmware?

    • @mthanry
      @mthanry 3 years ago

      @@ArtofServer Looks like it's running the MegaRaid firmware (iDrac reports version 21.3.5-0002):
      lspci -Dmmnn|grep LSI

  • @bold_jester2557
    @bold_jester2557 3 years ago

    Great video and explanations. I was wondering if you had ever tested or had thoughts on the tunable hw.mfi.allow_cam_disk_passthrough? It does seem to make the drives accessible, similar to an HBA.

    • @ArtofServer
      @ArtofServer  3 years ago

      Thanks for watching. Sorry, no I'm not familiar with that setting, and that sounds like something specific to FreeBSD?

    • @bold_jester2557
      @bold_jester2557 3 years ago

      @@ArtofServer Hadn't thought of that but you are correct, specific to BSD.
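
      For the curious: on FreeBSD (and therefore FreeNAS/TrueNAS CORE) that tunable goes in /boot/loader.conf. A sketch, assuming the mfi(4) driver and - reportedly - the mfip pass-through module; verify both against your release:

        # /boot/loader.conf
        hw.mfi.allow_cam_disk_passthrough="1"
        mfip_load="YES"
        # after a reboot, the member disks should also show up as /dev/daX pass-through devices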

  • @waaromzomoeilijk1473
    @waaromzomoeilijk1473 2 years ago +1

    Boss man

    • @ArtofServer
      @ArtofServer  2 years ago

      LOL thanks for watching! :-)

  • @matthiasdiehl4305
    @matthiasdiehl4305 4 years ago +1

    great job, thanks!

  • @maherkhalil007
    @maherkhalil007 2 years ago +1

    Do you think ZFS is suitable for cloud servers?

    • @ArtofServer
      @ArtofServer  2 years ago

      Depends on the use case. IaaS is meant for the user to not have to worry about underlying infrastructure redundancy. So, for that use case, I think it is better to leave that to your IaaS provider. But if you want ZFS for its other features, like snapshots, compression, encryption, etc., then that can make sense.

    • @maherkhalil007
      @maherkhalil007 2 years ago

      @@ArtofServer I am a hosting company; we manage our own servers. We tested it, but it adds latency and overloads servers even though we use the latest-technology servers from Dell with 28-core Intel Xeons and plenty of ECC RAM, so we are thinking of moving to LVM thin.

  • @MayGuh
    @MayGuh 4 years ago +1

    I just picked up my first R710, and I know this is a new question, but is the backplane SAS 2 / SATA 3, or will it come with a 3 gigabit backplane? I'm only putting hard drives in there, so I really don't need 6 gigabits per drive, but I was just curious.

    • @seanm9378
      @seanm9378 4 years ago

      Mine came with a PERC 6/i, which was limited to sub-2TB drive support and 3Gbps connections. I swapped it out for an H700 and now all is perfect: 6Gbps, >2TB drive, and SSD support.

    • @artlessknave
      @artlessknave 4 years ago

      @@seanm9378 The question was about the backplane, but you answered with... RAID controller?

    • @artlessknave
      @artlessknave 4 years ago +2

      I don't have one, but the backplane is likely direct attach (the spec sheet shows a max of 8 drives, which is a perfect match for the very common 8 lanes of controllers), which would make the SAS version irrelevant, since the backplane would function as a wire. The SAS version is realistically only relevant for expander backplanes. Ultimately, though, it depends on which backplane you have, and you didn't supply that info, so... best guess is best guess. If it is direct attach, your SAS speeds will be wholly dependent on the controller.

    • @seanm9378
      @seanm9378 4 years ago

      Well yes, it does work at SAS2 speeds; the H700 yields full SAS2 disk speeds of 6Gbps on my Dell R710. The backplane is compatible at that speed. I updated all firmware to the latest, removed the PERC 6/i controller that supported only SAS at 3Gbps, and swapped in the H700. All drives have been running great at their max speed since.

    • @artlessknave
      @artlessknave 4 years ago +1

      @@seanm9378 ah, i see, it was an indirect answer. duh.

  • @bogdan3453
    @bogdan3453 3 years ago

    Does this setup improve write and read speeds on HDD drives? I am a newbie trying to learn. Thanks!

    • @ArtofServer
      @ArtofServer  3 years ago

      This setup is using ZFS with hardware RAID controllers in a way that is relatively safe. There's no discussion about performance speeds here.

    • @bogdan3453
      @bogdan3453 3 years ago

      @@ArtofServer I am not a native English speaker, so probably the way I formulated the question was not clear enough. As far as I know, HW RAID0 doubles the read/write speeds on big file transfers. I was asking if this setup might benefit from the HW RAID0 speeds. I read a lot of articles on the internet on how to improve data transfer speed on ZFS with ZIL/SLOG and L2ARC, and I was asking if this setup could benefit somehow from that RAID0.

    • @ArtofServer
      @ArtofServer  3 years ago

      No, I disabled any caching in this setup, so there's no benefit from a performance standpoint.
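
      For anyone wanting to mirror that "caching disabled" setup, the MegaCli properties involved look roughly like this (adapter number is a placeholder; the flags apply to all logical drives):

        MegaCli64 -LDSetProp WT -LAll -a0             # write-through instead of write-back
        MegaCli64 -LDSetProp NORA -LAll -a0           # no read-ahead
        MegaCli64 -LDSetProp Direct -LAll -a0         # don't cache reads in controller RAM
        MegaCli64 -LDSetProp -DisDskCache -LAll -a0   # optionally turn off the drives' own write cache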

  • @plasmar1
    @plasmar1 1 year ago +1

    ** Secretly, when no one is watching: on ESXi I have a TrueNAS VM with physical access (a VMDK that redirects to the drive) to one of my virtual drives on my RAID card, then share that as iSCSI so that I can use it on another machine...... #rebel

    • @ArtofServer
      @ArtofServer  1 year ago +1

      LOL! I suggest you never share that secret on the TrueNAS forums...

  • @squelchedotter
    @squelchedotter 4 years ago +7

    Maximum clickbait :D

  • @VirendraBG
    @VirendraBG 3 months ago +1

    35:22

    • @ArtofServer
      @ArtofServer  3 months ago +1

      😂

    • @VirendraBG
      @VirendraBG 3 months ago

      @@ArtofServer 😅
      "Being a person of action"
      Was the most interesting part of this video. 👍🏻

    • @VirendraBG
      @VirendraBG 3 months ago

      @@ArtofServer
      May I ask for the URL to download your PC wallpaper?
      The one with the cat 😺? 😅

  • @yakX54
    @yakX54 4 years ago +1

    Ha ha, now I have a fellow outcast. Better move down under, they can't get you here.

  • @nebraskadigital
    @nebraskadigital 2 years ago +1

    Seems like the greatest sin is at the end. You blow away all the reasons not to use RAID0, then say to use a proper HBA anyway without really going into why.

    • @ArtofServer
      @ArtofServer  2 years ago +1

      To clarify, my point in this vid was to show what is possible. But given a choice between a "proper" setup and SDR0, I would recommend the "proper" setup. The reason is less complexity, and if you needed help from others, it's a more standard configuration. Some ZFS communities are a bit stricter, and if you approached them for help with this type of setup, you'd likely be shunned. But if you have no other choice, using SDR0 is technically possible.

  • @wojciechb4732
    @wojciechb4732 4 years ago +1

    Nice, but I need *the numbers*: IT mode vs HW RAID, real-life scenarios - read test, write test, copy test - with RAID cache and without cache, ZFS rebuild time, scrub time, etc., something like calomel.org/zfs_raid_speed_capacity.html. In general, people use IT mode because it is faster.

    • @ArtofServer
      @ArtofServer  4 years ago

      Thanks for watching. I understand you, and I do make benchmark videos, but that wasn't the point for this video. But I'll add that to my list of future videos! Thanks for the suggestion.

  • @HeroRareheart
    @HeroRareheart 2 years ago

    I uhh... I may have this exact setup... Hey i don't got a choice bro don't judge me.

    • @HeroRareheart
      @HeroRareheart 2 years ago +1

      Well, actually, so far it seems like everything's fine; guess everyone overplayed how bad it was.

    • @ArtofServer
      @ArtofServer  2 years ago

      I don't recommend it, but this SDR0 configuration can work and I have had some servers run this way for years.

    • @HeroRareheart
      @HeroRareheart 2 years ago

      @@ArtofServer It's basically my only option and the only issue I have run into is a small one that basically keeps me from being able to replace drives without a full reboot of the system.

    • @ArtofServer
      @ArtofServer  2 years ago

      @@HeroRareheart you can replace drives without rebooting. you just need to run the MegaCli commands as shown in this video when you replace drives, re-create the SDR0, then use that drive to replace the failed drive in the zfs pool.
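
      Sketching out that replacement flow (enclosure:slot, adapter number, device and pool names are all placeholders):

        MegaCli64 -PDList -a0                                  # find the replacement drive's enclosure:slot
        MegaCli64 -CfgLdAdd -r0 "[252:4]" WT NORA Direct -a0   # recreate the single-drive RAID0 on it
        zpool replace tank <old-disk-or-guid> /dev/sdX         # hand the new virtual disk to ZFS and let it resilver
        zpool status tank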

  • @Spacefish007
    @Spacefish007 2 years ago

    So why not:
    - Enable Write Cache without battery
    - Setup one Raid-0 in the HW-Raid over all 6 drives
    - Setup ZFS on the single virtual drive presented by the raid-controller
    - Setup SLOG on a RAMdisk (because it's fast)
    I mean: we get all the great storage + performance of the drives, have 6 times the failure rate and 0 redundancy :D + if we have a power failure we are sure to lose all cached writes + may corrupt the metadata due to the unpredictable behaviour of the RAID controller's HW cache, which is also lost ;)

  • @bumpsy2358
    @bumpsy2358 2 years ago +1

    LOL!

    • @ArtofServer
      @ArtofServer  2 years ago +1

      good entertainment?

    • @bumpsy2358
      @bumpsy2358 2 years ago

      @@ArtofServer It was so spot on, as to how these storage fanatics act about whatever flavor of software they have decided is the best thing since sliced bread.

    • @bumpsy2358
      @bumpsy2358 2 years ago +1

      @@ArtofServer Had to watch it again. :)

    • @ArtofServer
      @ArtofServer  2 years ago

      @@bumpsy2358 LOL... was it still as entertaining the 2nd time around? Be sure to check out Parts 2 and 3 of this series if you enjoyed it. :-)

  • @Raymond6494
    @Raymond6494 4 years ago +1

    lol hahahahaha

    • @ArtofServer
      @ArtofServer  4 years ago

      glad you were amused! thanks for watching!