Tuesday Tech Tip - Linux RAID vs ZFS RAID

  • Published: 31 Dec 2024

Comments • 20

  • @marcc5768
    @marcc5768 5 months ago +4

    Nice tutorial, but the text on the screen was too small to read.

    • @45Drives
      @45Drives  4 months ago +1

      Thanks for the feedback, this is from a few years ago when we were still new to creating content. You should find our content a little better now, we hope 😆

  • @rafalkolodziej8437
    @rafalkolodziej8437 4 years ago +6

    04:00 A zpool is NOT a filesystem! ZFS datasets are. A zvol is a block device, hence it may be formatted or left unformatted and exposed via iSCSI, or used internally by jails, and is visible as a block device.

    • @7481OK
      @7481OK 15 days ago

      Correct
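
The zpool/dataset/zvol distinction made in the thread above can be sketched with stock OpenZFS commands (the pool, dataset, and device names are placeholders):

```shell
# The zpool is the storage pool itself, not a filesystem
zpool create tank mirror /dev/sda /dev/sdb

# ZFS datasets are the actual filesystems, created inside the pool
zfs create tank/data

# A zvol is a raw block device carved from the pool; it carries no
# filesystem until you format it or export it over iSCSI
zfs create -V 10G tank/vol1
mkfs.ext4 /dev/zvol/tank/vol1   # optional: format it like any block device
```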

  • @attainconsult
    @attainconsult 5 years ago +2

    Great, simple explanation. This is going to be a must-watch for my techs, who are mostly Windows, and I don't have the patience to explain ;-)

  • @franksenkel2715
    @franksenkel2715 10 months ago

    I'd like to use either ZFS or Linux RAID to group several internal hard drives. I periodically reinstall my OS, which is on its own separate drive... can I recreate the drive configurations after the OS install without loss of data?
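
Both stacks can reassemble existing arrays after an OS reinstall without touching the data on them; a minimal sketch, assuming an existing mdadm array and a ZFS pool named tank (names are placeholders):

```shell
# mdadm: scan the disks for existing array metadata and reassemble
mdadm --assemble --scan
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist for future boots

# ZFS: import the pool that already lives on the data drives
zpool import tank
```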

  • @alphabanks
    @alphabanks 7 months ago

    I will move to ZFS when I can expand an array by adding a single drive.
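
Since OpenZFS 2.3, RAIDZ expansion makes exactly this possible; a sketch, assuming a pool named tank with a raidz1-0 vdev (names and devices are placeholders):

```shell
# Attach one extra disk to an existing raidz vdev (OpenZFS 2.3+)
zpool attach tank raidz1-0 /dev/sdd

# The expansion runs in the background; check its progress
zpool status tank
```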

  • @paulfx5019
    @paulfx5019 4 years ago +1

    Many thanks for another great tutorial! We have been big fans of ZFS for several years with both Solaris and Linux. Would you also recommend mdadm for FC LUN implementations, for the same reasons as iSCSI LUNs? Also, why do you prefer LIO targetcli over SCST?

  • @ox3965
    @ox3965 3 years ago

    So which one is better?

  • @DaniloMussolini
    @DaniloMussolini 5 years ago +1

    Thanks for the video guys.
    One question regarding direct IO: do you guys think it's better to benchmark the ZFS storage with direct IO when provisioning storage, considering the performance needs?

    • @45Drives
      @45Drives  5 years ago +3

      Hey Danilo, Mitch here! Thanks for the question. The traditional view of benchmarking different file systems has always been to remove the cache in whatever way possible before running benchmarks. We believe this is a little misguided, for a few reasons.

      It has traditionally been done that way because of the assumption that all file systems cache equally well (or equally uselessly at larger scale), so it's better to get a real picture of what the file system is capable of in the scenarios where cache is not going to help. That assumes, however, that all file systems use the same caching methods and algorithms. While this may be true for many file systems that use a simple LRU (Least Recently Used) cache, ZFS uses an algorithm called ARC (Adaptive Replacement Cache). This is a much more complex caching system, allowing higher efficiency and better performance for many workloads.

      All that being said, for ZFS on Linux direct IO was not possible until version 0.8, but it is now available. So in the end, what we typically recommend is to look at the workloads you plan to use your ZFS pool for. If your use case requires a direct IO workload, then it makes sense to benchmark that way; but if your workload can take advantage of the ZFS ARC, then we believe benchmarking with it disabled is like intentionally handicapping yourself. Hope that helps!
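
One way to run both of the comparisons described above with fio (the benchmark directory, job names, and sizes are assumptions):

```shell
# Buffered run: reads are allowed to hit the ZFS ARC
fio --name=buffered --directory=/tank/bench --rw=randread \
    --bs=128k --size=4G --ioengine=psync --direct=0

# Direct I/O run: bypasses the page cache (needs ZFS on Linux >= 0.8)
fio --name=direct --directory=/tank/bench --rw=randread \
    --bs=128k --size=4G --ioengine=psync --direct=1
```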

  • @dild0sled
    @dild0sled 2 years ago +1

    My dude is baked more than my LSI HBAs' crusty old thermal compound

  • @georgesimpson3113
    @georgesimpson3113 3 months ago

    Do you know what year this is? Linux is how old, and you still show how to do things via the command line? Is there no GUI for this?

  • @carlosvirgengomez9713
    @carlosvirgengomez9713 3 years ago

    Great video

  • @marknakasone85
    @marknakasone85 1 year ago

    OpenBSD + ZFS is very stable and efficient

  • @nomoloubagabe3117
    @nomoloubagabe3117 4 years ago

    Hellooo, R U on the internet?

  • @richardbennett4365
    @richardbennett4365 1 year ago

    Eff stab?
    Fs is file system and tab is table.
    Better: eff ess tab.

  • @alfa.voland
    @alfa.voland 3 years ago

    Tuesday Tech Tip - ZFS Read & Write Caching
    ruclips.net/video/H5aLY253daE/видео.html

  • @cryptearth
    @cryptearth 4 years ago

    Sorry, but anyone fiddling around with mdadm should be aware that, in order to have your filesystems auto-mounted after reboot, they have to be in fstab; leaving that out was kind of an unnecessary omission.
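
The omission the comment points out can be closed in two steps; a sketch, assuming the array is /dev/md0 with an ext4 filesystem (the mount point is a placeholder):

```shell
# Persist the array definition so it reassembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Add an fstab entry so the filesystem auto-mounts after reboot
echo '/dev/md0  /mnt/raid  ext4  defaults,nofail  0 2' >> /etc/fstab
mount -a   # verify the new entry mounts cleanly
```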