UNRAID ZFS Pools and Shares Performance Optimization

  • Published: 28 Nov 2024

Comments • 63

  • @SpaceinvaderOne
    @SpaceinvaderOne 1 year ago +46

    Fantastic video! I really enjoyed watching your tests. Both informative and engaging. Keep up the great work!

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +7

      Thanks for watching! Long time fan of your great content also. No one breaks it down like you for all the bells and whistles.

  • @mousamousa1686
    @mousamousa1686 1 year ago +1

    Wonderful and distinctive always and as usual in all your videos

  • @Mrperson0
    @Mrperson0 5 months ago +2

    Would you look at that! With UNRAID 7, Arrays are finally optional, and we can set up shares with pools as both a primary and secondary storage! Definitely a welcome change.

    • @DigitalSpaceport
      @DigitalSpaceport  5 months ago

      Wow, finally! I can put that to use right away. Excited, hope it's outta beta soon-ish.

    • @Mrperson0
      @Mrperson0 5 months ago

      @@DigitalSpaceport Yeah, definitely don't want to update my main setup to a beta, so I hope it'll be official soon. Also love the idea that mover will work between pools, since I would love to move from my raid 0 nvme drives to my raid 10.

  • @lawrencesayshi
    @lawrencesayshi 1 month ago

    Ducks, geese, and cats?! I'm sold!

  • @callowaysutton
    @callowaysutton 1 year ago +1

    Are you not using SMB Direct/RDMA, pinning cores or SMB over QUIC? You should be able to easily saturate 40Gbps with these disks if doing sequential reads or writes.
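    For context, Unraid serves SMB through Samba, where multichannel is a supported option but SMB Direct (RDMA) and SMB over QUIC are, as far as I know, not generally available in Samba itself. Below is a rough sketch of the multichannel/AIO hints sometimes added to Unraid's SMB extras; the file path, IP address, and advertised link speed are assumptions to adjust for your setup.

    ```bash
    # Append multichannel hints to Samba's extra config (path assumed to be Unraid's SMB extras file);
    # restart Samba or the array afterwards for the change to take effect.
    printf '%s\n' \
      'server multi channel support = yes' \
      'interfaces = "10.0.0.10;capability=RSS,speed=40000000000"' \
      'aio read size = 1' \
      'aio write size = 1' \
      >> /boot/config/smb-extra.conf
    ```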

  • @TheSasquatchjones
    @TheSasquatchjones 1 year ago +2

    Brilliant content as always.

  • @1000tb
    @1000tb 7 months ago

    I want to move to Unraid. I'm currently using OMV with ZFS drives; they are not mirrored, each drive is its own ZFS pool with no mirror. So what I want to ask is:
    1. If I'm using ZFS with pools of single drives, is there a point in keeping ZFS when I move to Unraid? What I liked about ZFS is the built-in bit rot detection and also the compression (though I can live without that), but from what I'm reading, Unraid as it is isn't super fast, so I'm guessing using ZFS will slow the transfer performance even more, won't it? (Because of the compression, etc.)
    2. I don't have the budget to get 3 SSDs to use as cache, but I do have one that I can use. Will it be enough to improve speed? What I also don't get is what happens if I transfer more than the capacity of the cache: say my cache SSD is 500GB but I transfer 1TB of data, what happens then? As far as I know the cache holds data until the mover is fired up.
    I'm not sure about moving to Unraid, but I really need the parity feature. As it is right now the transfer speeds aren't great (I guess because of ZFS itself). I'm using a ConnectX-3 but still, speeds aren't maxed out, and that's on OMV; with Unraid I can only imagine it would be even worse.
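    A minimal OpenZFS sketch of the single-drive points raised above; the pool name "media" and device /dev/sdb are placeholders, and on Unraid the pool itself would normally be created through the GUI rather than the CLI.

    ```bash
    zpool create -o ashift=12 media /dev/sdb   # single-drive vdev: checksums detect bit rot but cannot repair it
    zfs set compression=lz4 media              # lz4 is cheap; it rarely limits transfer speed on modern CPUs
    zpool scrub media                          # run periodically to surface silent corruption
    zpool status -v media                      # lists any files with checksum errors found by the scrub
    ```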

  • @jamesbarnette4350
    @jamesbarnette4350 1 year ago +2

    Lots of good info in here; I will have to try some of this out when I get home. I'm getting abysmal performance writing to my NVMe in my Unraid server over 40GbE: I get like 100MB/s write and like 450MB/s read. On a Windows system locally I get close to 3GB/s on these. I know the MTU is probably the culprit for some of that, but I don't understand why the write is that slow. I will be really glad when they get proper ZFS pooling in Unraid so I can set up additional vdevs; this is where the true performance increases come from with ZFS. Hopefully they will then get the mover to work with inter-pool management for primary and secondary stuff.

    • @gustcol1
      @gustcol1 1 year ago

      nice, thanks for sharing this

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +4

      Your speed is definitely too slow; here are a few things I would check. Windows tuning of the NIC settings to match your switch/Unraid host is critical to great performance, but you should be getting way better for sure. NVMes from various manufacturers have different amounts of fast SLC cache NAND, then revert to a slower TLC base in most consumer drives. Still, your base write performance after you exhaust that cache should be at a minimum 500-1200MB/s, not 100. Have you tested iperf3 with your setup to ensure you have the expected network speeds? Have you tested your drive with dd from the CLI to ensure it is operating to spec? Ensure you are using an exclusive-access disk share to avoid FUSE. Is the drive properly trimmed? I have seen some HBAs cause issues with this, but it's unlikely you're running an NVMe off an HBA.
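      A rough sketch of the local checks mentioned in this reply, run from the Unraid CLI; the pool name "cache" and the test path are placeholders, and note that /dev/zero compresses away to nothing, so the dd number is only meaningful with compression off on the target dataset.

      ```bash
      # Sequential write baseline that bypasses SMB entirely (fdatasync forces a flush before dd reports).
      dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=8192 conv=fdatasync status=progress
      rm /mnt/cache/ddtest.bin

      # Trim the NVMe pool and confirm trim state (ZFS pools use zpool trim rather than fstrim).
      zpool trim cache
      zpool status -t cache
      ```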

  • @johnfr2389
    @johnfr2389 9 months ago

    I need to set up a NAS for a large SMB share over a 10Gbit LAN. NAS newbie here. I'm hesitating between TrueNAS Core and Unraid. I'll be using 8 to 12 SATA SSDs. Based on your experience, which of the two systems offers the best performance and reliability for a similar setup?

  • @jllerk
    @jllerk 1 year ago +3

    Nice vid. Can you do one on ZFS rebuild times? If you make a 24-disk vdev of 22TB drives and you pull one disk, how long would the rebuild take with one CPU of roughly 20 cores at 3GHz? Will it be a day, a week, a month or a year? It's something I can't find anywhere.

    • @nadtz
      @nadtz 1 year ago +1

      It will depend on how much data has to be resilvered. I had to replace a disk in my system, but I only had like 6 of 60TB used and it took maybe an hour. Did it again after I moved more data over and it took maybe 6 hours. That was with 8x12TB drives on an E3-1230 V2 @ 3.30GHz in TrueNAS. 24 drives would probably be pretty quick if they are RAIDZ1/2, even with a 22TB drive, but the best-practices guy in me wants to say break that up into multiple vdevs.

    • @jllerk
      @jllerk 1 year ago +1

      Thanks for your reply, @@nadtz! What I am trying to figure out is the maximum number of, say, 22TB disks that can resilver within 24 or 48 hours. I prefer the vdev to be as big as possible, considering RAIDZ3 and the least parity loss according to the Excel configurator.

    • @nadtz
      @nadtz 1 year ago +1

      @@jllerk No idea with a vdev that big. I've read in the past that large vdevs slow down resilvering, and I've never had a reason to do a vdev of more than 6-8 devices myself. Anecdotally you can expect long scrub and resilver times with a vdev that big. How long exactly? I've seen people mention 48hrs+, but I don't know how much data was involved.

    • @juliansbrickcity5083
      @juliansbrickcity5083 1 year ago +1

      Afaik a 24-drive-wide vdev is not a good idea, but it depends on your workload.
      If it's for home use and you can leave the pool to resilver without hitting it with IO all the time, it could be OK. But for anything else I would do at least 2 RAIDZ2 vdevs. That would give you better IOPS and less chance of going into a resilver spiral. Maybe a draid2:21d:1s:24c would be a safer bet to get faster resilver times, but I have not tested Unraid much with dRAID. You can create a dRAID in the CLI but not use it as an Unraid pool in 6.12.4; maybe support will be added in the future.
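      For reference, a CLI-only sketch of the dRAID layout mentioned above (2 parity, 21 data, 1 distributed spare across 24 children); the device names are placeholders, and as noted, Unraid 6.12.x may not accept this as a GUI-managed pool.

      ```bash
      zpool create -o ashift=12 tank draid2:21d:24c:1s /dev/sd[b-y]   # 24 drives sdb..sdy
      zpool status -v tank   # during a rebuild this shows progress, throughput, and an ETA
      ```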

  • @donglobal
    @donglobal 6 months ago

    Thanks for sharing the tips, very useful, but I am looking at the speeds you are getting and what immediately comes to mind is: what is the speed of your network? I have an all-10GbE network and get nowhere near those speeds. On an optimized 10GbE network, what sort of speed would be normal?

    • @DigitalSpaceport
      @DigitalSpaceport  6 months ago +1

      If you can sustain a ~9Gbit iperf3 single stream on that network, you should be able to hit the 1GB/s range to your cache pool.
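      A quick single-stream sanity check of that path, assuming iperf3 is available on both ends; the IP address is a placeholder.

      ```bash
      iperf3 -s                    # on the Unraid host
      iperf3 -c 10.0.1.2 -t 30     # on the client; ~9.4 Gbit/s is the practical single-stream ceiling on 10GbE
      ```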

    • @donglobal
      @donglobal 6 months ago

      @@DigitalSpaceport I'm currently planning a setup using ZFS on Unraid and would appreciate some expert advice. Here are the details of my proposed hardware configuration:
       * Drives: 4 x 2TB NVMe drives
       * Proposed pool layout: 3 x NVMe drives in a RAIDZ1 configuration
       * Cache: 1 x NVMe drive
       * Array: a single 32GB thumb drive
       I'm also considering upgrading these drives to 4TB NVMe models in the future. Given this setup, I have a few questions:
      1. Is a cache drive necessary, given that it's the same specification as the pool drives?
      2. Could I potentially configure all four drives into a single RAIDZ1 pool instead?
      3. Regarding future upgrades to 4TB drives: How complex is the process? Previously, I used TrueNAS Scale with HDDs in a similar configuration (3 drive RAIDZ1 pool upgraded from 6TB to 12TB drives by replacing each drive one by one). However, after upgrading, the pool size didn't increase until I destroyed and recreated the pool. Would I face the same issue in Unraid?
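      For what it's worth, the usual reason a RAIDZ pool does not grow after one-by-one drive replacement is the autoexpand property; a minimal sketch of the standard in-place upgrade, with the pool and device names as placeholders:

      ```bash
      zpool set autoexpand=on nvmepool
      zpool replace nvmepool /dev/nvme1n1 /dev/nvme4n1   # repeat per drive, waiting for each resilver to finish
      zpool online -e nvmepool /dev/nvme4n1              # -e expands capacity once every member is the larger size
      zpool list nvmepool                                # the extra space should appear without recreating the pool
      ```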

  • @ToddyIvon
    @ToddyIvon 9 months ago

    awesome video

  • @bitcoinsig
    @bitcoinsig 1 year ago +2

    Are you still limited by CPU single-core performance? It appears you have 'NIC Offloading' turned off, and I'm guessing there is still no RDMA support.

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +2

      This video went off the rails for about 2 days as I documented and learned about SMB Direct, RDMA and several other topics. I did have some successes, but since the R520 system itself is slated to be replaced with an EPYC or TR Pro this weekend, it felt like wasted time to chase those problems on that machine. I did have SMT turned off on the 2470v2's, but I had to run both sockets, so NUMA was likely at play as well, degrading performance. The single-thread speed on those is indeed not ideal, but peaking at 3.2GHz should provide for pretty decent 40Gbit access; for sure there are overheads impacting this also. Will dig into RDMA on the new 100Gbit NICs and switch here soon and *hopefully* leave lower performance behind.
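      A quick way to see the NUMA layout that can degrade single-stream throughput on a dual-socket box like that; the interface name eth0 is a placeholder.

      ```bash
      lscpu | grep -i numa                         # NUMA node count and which CPUs belong to each node
      cat /sys/class/net/eth0/device/numa_node     # which socket the 40Gbit NIC hangs off
      # Keeping the SMB/iperf3 work on cores from that same node avoids cross-socket memory hops.
      ```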

    • @bitcoinsig
      @bitcoinsig 1 year ago +2

      @@DigitalSpaceport Ahh, I would be interested in the RDMA findings, as I think most of us have more access to cast-off 40GbE NICs and switches than 100GbE. I put Unraid on my R630, which can be had for less than $200, and it has a 40GbE Mellanox in it that hasn't performed great for me, so getting RDMA going is something I would be interested in, personally. On paper the R630 looked like a great platform for Unraid, since you can pack it with 4 U.2 NVMes and 6 SFF drives, but I can't seem to get amazing speeds from Unraid. I might have to try more of the Unraid tweaks shown in the video.

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +1

      Yes, there are so many "gotcha" things around RDMA it's pretty daunting, but I am 2 pages of notes in so far and I think it's solvable and explainable in a fairly easy manner.

  • @cyrilasfrenchyaz
    @cyrilasfrenchyaz 11 months ago +1

    You will always get good performance with NVMe and SSDs... also you have a big server and plenty of CPU. I tried Unraid with 6 WD 14TB Enterprise drives and the write over 10GbE was abysmal (50MB/s or less). Reading was OK though, around 400MB/s.
    I switched to OMV and I'm now getting 800MB/s R/W in software RAID 6. As I move a lot of data, I ditched Unraid and I have no regrets. Thank you for the video though; it seems that depending on what you have, it may be a good solution for some.

  • @MarkCollinstechyMarkbo
    @MarkCollinstechyMarkbo 10 months ago

    Is there a way to convert 2 disks in an array to a pool without wiping data? I have Minio running with two drives as a share for Minio. The transfer from cache to array is slow and fills up the cache fast. I am adding more drives for a larger total on the share, and I saw this video on creating RAID within a pool and thought this would work great... but I have the issue of the files already in the system.

    • @DigitalSpaceport
      @DigitalSpaceport  10 months ago +1

      You need a backup system in place. If you try to pull out 2 drives in a ZFS array it won't work. You cannot shrink a zpool. You can fail and replace drives, but the initial set of disks in use for a zpool needs to remain the same. Restoring from backups is your best bet here.
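      A minimal sketch of the copy-out/copy-back approach suggested here; the paths are placeholders, and rsync works regardless of which filesystem the array disks currently use.

      ```bash
      rsync -avh --progress /mnt/disk1/minio/ /mnt/backup/minio/   # copy existing data to spare storage
      # ...remove the drives from the array and create the new ZFS pool in the GUI, then copy back:
      rsync -avh --progress /mnt/backup/minio/ /mnt/newpool/minio/
      ```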

  • @IEnjoyCreatingVideos
    @IEnjoyCreatingVideos 1 year ago

    Great video! Thanks for sharing it with us!💖👍😎JP

  • @AndreAndre-ud6dj
    @AndreAndre-ud6dj 1 year ago

    Wow, great video... is it possible to use only ZFS and eliminate the basic array of Unraid?

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +2

      Well, you can do what I did and have a dummy drive in the main array; having one does seem to be a hard requirement for starting the system. There is also ZFS in the main array, but I am not sure about that one. I need to test more on that specifically.

    • @AndreAndre-ud6dj
      @AndreAndre-ud6dj 1 year ago +1

      @@DigitalSpaceport Thanks for getting back this fast... cool, I will wait for your other video. This is something I would like to do, use only a ZFS pool like you did. Thanks again.

  • @timobk14
    @timobk14 1 year ago

    Some questions: we have to use at least one disk as an array device. Why do we need this, and what happens when it breaks? Can I use any low-cost USB stick for that? By the way, nice video.

  • @immildew2
    @immildew2 10 months ago

    Out of curiosity (I may have missed it), why did you decide to run ZFS without compression?

    • @DigitalSpaceport
      @DigitalSpaceport  9 months ago +2

      I have multiple accesses to the dataset at one time, and often high bandwidth usage of the TrueNAS machine at the same time. I found compression also tries to hit the fastest thread while I am hitting SMB or NFS shares hard, and it does indeed slow that down. I had minimal gains from compression on the storage side, as most of what I store is incompressible data already.
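      A quick way to check whether compression is actually paying for itself on a given dataset; the dataset name is a placeholder.

      ```bash
      zfs get compression,compressratio tank/media
      zfs set compression=off tank/media   # affects newly written blocks only; existing data stays as written
      ```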

    • @immildew2
      @immildew2 9 months ago +1

      @@DigitalSpaceport Thank you. I assumed the media aspect, but had not considered the other. Something to consider in the future I suppose.

  • @brianflanders3027
    @brianflanders3027 5 months ago

    Is your store down? I can't seem to access it.

    • @DigitalSpaceport
      @DigitalSpaceport  5 months ago +1

      It was, but it's back up now. Got hit by the transition of a few domains I neglected to move from Google Domains when it got transferred to Squarespace. Thanks for the heads up.

  • @adgjk1
    @adgjk1 10 months ago

    Curious to see if you have a Mac that you can test this on? I'm struggling with SMB settings right now on a 10GbE connection to a RAIDZ2 pool of 8 SATA SSDs. It *should* be fast, but I'm not getting more than about 130MB/s, when I should be getting close to 8-10x that.

  • @gustcol1
    @gustcol1 1 year ago +3

    Unraid vs TrueNAS, which is the best?

    • @schlosspt
      @schlosspt 1 year ago

      ruclips.net/video/z4CkJbUlkVM/видео.htmlsi=19US52b_7TzGYyKP

    • @schlosspt
      @schlosspt 1 year ago +1

      Depends on your use case really. Personally, I just think you can't beat free.

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +3

      Beginners, Unraid usually. More advanced users, TrueNAS. Generally that's my feeling on it. ruclips.net/video/z4CkJbUlkVM/видео.html This comparison I did is, I think, pretty accurate on the major points. If you're a power user, TN is your best bet.

    • @gustcol1
      @gustcol1 1 year ago +1

      @@DigitalSpaceport TrueNAS gave me a lot of headaches because it didn't recognize my Mellanox network cards; apart from that, I had problems with Docker and other things on it. So I'm going to set up a machine with 72 disks of 20TB each, and I will buy an EPYC motherboard kit from eBay - I already did that for my new machine learning server. Thanks for the tips :) Very nice video.

    • @gustcol1
      @gustcol1 1 year ago +1

      @@DigitalSpaceport I've had a lot of problems with 100Gbps Intel NICs, a lot. Mellanox runs better.

  • @THEG12EG
    @THEG12EG 1 year ago

    Why do you need 1 array drive?

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago

      UNRAID won't let you start the system without at least 1 drive in the array.