Fixing my worst TrueNAS Scale mistake!

  • Published: 30 Jun 2024
  • In this video, I'll fix the worst mistake I made on my TrueNAS Scale storage server. We also talk about RAID-Z layouts, fault tolerance and ZFS performance, and what I've changed to make this server more robust and solid! #truenasscale #homelab #nas
    Teleport-*: goteleport.com/thedigitallife
    Follow me:
    TWITTER: / christianlempa
    INSTAGRAM: / christianlempa
    TWITCH: / christianlempa
    DISCORD: / discord
    GITHUB: github.com/christianlempa
    PATREON: / christianlempa
    MY EQUIPMENT: kit.co/christianlempa
    Timestamps:
    00:00 - Introduction
    01:32 - Advertisement-*
    02:17 - What was my set-up before?
    06:58 - My new set-up 2x RAID-Z2
    08:15 - New storage pool with SSDs
    ________________
    All links with "*" are affiliate links.

Comments • 227

  • @TrueNAS
    @TrueNAS 1 year ago +172

    It happens to the best of us! Glad you were able to get that sorted out, Christian!

  • @borealis370
    @borealis370 1 year ago +46

    Good on you for coming clean and sorting that mess out.

    • @christianlempa
      @christianlempa  1 year ago +4

      Thank you :) It's all thanks to your feedback

    • @vicmac3513
      @vicmac3513 1 year ago

      @@christianlempa could you please make a video on how to install/update/maintain it, and how that differs between OSes? I've tried to figure out whether it's even possible, and even my professor said it requires so many resources that manual/scheduled updates are much easier.
      I'm sure using Linode would double your earnings on that project lol.
      Btw, thanks for being such a good teacher.

  • @petersimmons7833
    @petersimmons7833 1 year ago +25

    Thanks for being honest and introspective about mistakes. We all make them and sometimes we learn the most from them. Hopefully they don’t cost us too much in the process. Great series.

  • @snowballeffects
    @snowballeffects 1 year ago +10

    There's no better way than learning from the mistakes of others before having to make them yourself - Thank you - Brilliant as always!

  • @jonmayer
    @jonmayer 1 year ago +1

    I made the screengrab! I'm glad you are taking the steps to improve your setup. I must have 2nd, 3rd, and 4th guessed my setup before I deployed it.

  • @spectreofspace
    @spectreofspace 1 year ago +27

    Keep an eye on your SSD health. ZFS can eat cheap consumer SSDs in no time if you do a lot of writes on them. Forums generally recommend enterprise-grade SSDs when used with ZFS. I had a 128GB Kingston I was using as an L2ARC. In 6 months its health had already dropped to around 70%.

    • @christianlempa
      @christianlempa  1 year ago +1

      Thanks for sharing! I’ll keep an eye on it :)

    • @lorenz323
      @lorenz323 1 year ago

      I bought some NAS SSDs, but one already failed after a year. And several Samsung EVOs failed after a few months.

    • @aidanr.579
      @aidanr.579 1 year ago +1

      Samsung Evo running as a virtualization boot drive failed in less than a year. I switched to Intel Optane and after over a year the health hasn’t gone down even a percent.

    • @codycullum2248
      @codycullum2248 1 year ago

      @@aidanr.579 how do you change out your boot drive and keep all of your settings? The only way I can think of is cloning the boot drive using a third-party app, but I assume there's an easier way using the software.

    • @aidanr.579
      @aidanr.579 1 year ago

      @@codycullum2248 personally I use Proxmox so I did a full backup of my VMs and settings and put it on a new SSD.

  • @dastiffmeister1
    @dastiffmeister1 1 year ago +5

    Great video, Christian. I respect the humility. :)
    I will allow myself to nitpick:
    If you truly want to guard against potential data loss when using new, identical SSDs in a vdev, then (if I am not mistaken) it's best practice to write to two of the drives first (in your case one from each separate vdev) before creating the 2x2 striped mirror pool, so that all four SSDs don't potentially fail simultaneously.
    But that would be taking it to an entirely new level of preparation ^^

    • @christianlempa
      @christianlempa  1 year ago +1

      Interesting, I haven't heard about that, but thanks for your insight! :)

  • @nixxblikka
    @nixxblikka 1 year ago +3

    I like the exceptional production quality of your videos - one of the few channels where my displays can show what they're capable of - and yeah, the content is also helpful!

  • @alex.prodigy
    @alex.prodigy 1 year ago +1

    awesome , these videos are the best ... explaining what went wrong and how it was mitigated
    thanks!

  • @danielfisher1515
    @danielfisher1515 1 year ago

    Great summary, and good changes!

  • @Sevbh12
    @Sevbh12 1 year ago +1

    Hi Christian
    Thank you for the great video. I've been learning a lot from you! It would be great to see a video on automatic backup solutions that are out there. I am currently building a NAS with the main goal of automating backups of my production servers, but not sure where to start or what the best practices are. Thank you so much! Keep well.

  • @B20C0
    @B20C0 1 year ago +7

    6:48 just to add: the recovery process is a critical window not only because you have no additional fault tolerance for as long as it takes, but also because of the extra strain it puts on your drives, since during reconstruction you literally read every single bit on the remaining drives.
    Add read failures on top, which happen on average for 1 bit in every 12 TB, and you know that you're playing with fire.
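    A rough sketch of the math behind that risk, assuming the ~1-error-per-12-TB (10^14 bits) URE rate quoted above and treating bit errors as independent (real drives are not that simple):

        # Probability of hitting at least one unrecoverable read error (URE)
        # while re-reading an entire array during a rebuild.
        def p_ure_during_rebuild(tb_read: float, ure_rate_bits: float = 1e14) -> float:
            bits_read = tb_read * 8e12           # TB -> bits
            p_bit_ok = 1 - 1 / ure_rate_bits     # chance a single bit reads back fine
            return 1 - p_bit_ok ** bits_read     # chance at least one bit fails

        # e.g. rebuilding a 12x4TB RAID-Z1 means re-reading ~44 TB from the survivors
        print(f"{p_ure_during_rebuild(44):.0%}")   # roughly 97% under these assumptions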

  • @TradersTradingEdge
    @TradersTradingEdge 1 year ago +7

    Awesome Christian. I highly respect you!
    There are not many YouTubers who admit to making mistakes in their domain. And I also learned more about ZFS, because I'm also experimenting with a 6TB TNS server §8-)
    So keep it up, you're doing great.
    Cheers 👊

  • @cdm297
    @cdm297 1 year ago +1

    great video, I'd request an updated step-by-step video on the TrueNAS setup, including all of this 🙂

  • @abansalify
    @abansalify 1 year ago +2

    if you add an SSD alongside an HDD in a pool, ZFS can use a portion of the SSD to turn the HDD into a hybrid drive, which means your IO speed goes up.

  • @ianpogi5
    @ianpogi5 11 months ago +1

    Thank you for all your videos! How's the health of your SSDs?

  • @BrianSez
    @BrianSez 1 year ago +3

    Great video. For the future video mentioned, could you discuss how to revert from the single VDEV to two VDEVs without compromising existing data? Also, I'd love to hear about your backup solution.

    • @christianlempa
      @christianlempa  1 year ago +1

      I needed to copy the data to another location and destroy the pool, that took pretty long, but was worth it!

    • @andrewr7820
      @andrewr7820 1 year ago +1

      Once you have either a second pool or machine, it's a simple ZFS send/receive (that's what the replication tasks do), rebuild the source array, then push the data back. You *_did_* also have a separate backup before starting, right? 😀
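      For anyone curious what that looks like underneath, a minimal sketch of the snapshot/send/receive flow that the replication tasks automate (pool and dataset names like "tank/data" and "backup/data" are placeholders):

        import subprocess

        snap = "tank/data@migrate-2024-01"

        # 1. Take a point-in-time snapshot of the source dataset
        subprocess.run(["zfs", "snapshot", snap], check=True)

        # 2. Stream it into the target pool (this could also be piped over ssh to a second box)
        subprocess.run(f"zfs send {snap} | zfs recv -F backup/data", shell=True, check=True)

        # 3. After rebuilding the source pool, push the data back the same way in reverse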

  • @AdenMocca
    @AdenMocca 1 year ago +3

    Good video - glad that Scale is getting a lot of attention. TrueNAS is a great product to help experienced users - who aren't ZFS developers - get going with a really good solution. The next step for a good backup is to actually create one, which means moving the data to a different system. It's expensive at your size, but running a ZFS send/receive - or in TrueNAS a ZFS replication - is important for real enterprise backup. RAID is not a backup, just a way to enhance availability. For important data you could also consider Backblaze or another solution.

    • @christianlempa
      @christianlempa  1 year ago

      Thank you :)

    • @andrewr7820
      @andrewr7820 1 year ago

      I did just that. Second TN box with fewer, larger drives in a striped mirror. Configured periodic snapshots and replication tasks under the "Data Protection" menu. The next step will be to move the second box off-site to my folks' place.
      FWIW, in the failure recovery scenario, rebuilding a mirror vdev only involves reading the surviving member of the mirror, so a drive failure during a rebuild is statistically less likely than in a multi-drive RAIDZn array.

    • @samcan9997
      @samcan9997 8 months ago

      @@andrewr7820 however, on that note, it's worth mentioning that the main cause of SSD failure is being written into the ground, which in a RAID 1/mirror is much more likely to happen to both drives at roughly the same time. With spinners, as long as they haven't been dropped, you should be good on write life.

  • @lawrencerubanka7087
    @lawrencerubanka7087 3 months ago

    Thanks for the very clear explanations. I'd love to see your take on backup and recovery of ZFS pools. That would make a good video. I suspect the backup topic would elicit plenty of critical feedback as well. :)

    • @christianlempa
      @christianlempa  3 months ago

      Thanks! :) I hope to make a video about backups at some point

  • @MrBlackCracker100
    @MrBlackCracker100 4 months ago

    man, I'm glad I found your video. This is almost the EXACT setup I have running, with 12x 4TB drives. I don't mind sacrificing 33% because the security is definitely worth it, but full mirroring just felt like overkill. In fact, despite skipping directly to the part I needed, I'm going to go ahead and rewatch the full video to make sure I didn't miss anything. Maybe you made a mistake I can learn from

  • @passaronegro349
    @passaronegro349 1 year ago

    I'm Brazilian, and new to your channel !!! congratulations on the work and the captions !!! 🇧🇷

    • @christianlempa
      @christianlempa  1 year ago +1

      Thank you 🙏 but the caption is coming from YT 😄✌️

    • @passaronegro349
      @passaronegro349 1 year ago

      @@christianlempa I thought it was in the settings of YouTube ... 😂🇧🇷

  • @systemofapwne
    @systemofapwne 6 months ago

    While having a dedicated VM pool on SSDs is absolutely nice, as an intermediate step you could have added an L2ARC and SLOG device to your main pool and run the VMs from there. For synchronous IOPS, the SLOG device is a huge deal (e.g. for databases), and the L2ARC (especially when marked persistent) will make your VMs feel snappier when reading from their respective disks. In principle, adding more RAM also helps a lot, but the persistent L2ARC is especially helpful when you reboot TrueNAS and then want to spin up your VMs: the ARC in RAM is not yet populated, but the L2ARC still is, giving you SSD IOPS instead of HDD.
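    A minimal sketch of what adding those devices looks like at the zpool level; the pool name "tank" and the device paths are placeholders, and in TrueNAS you would normally do this through the UI's pool/vdev editor instead:

      import subprocess

      pool = "tank"
      cache_dev = "/dev/disk/by-id/nvme-EXAMPLE-CACHE"   # placeholder device
      slog_dev = "/dev/disk/by-id/nvme-EXAMPLE-SLOG"     # placeholder device

      # L2ARC: second-level read cache that catches what falls out of the in-RAM ARC
      subprocess.run(["zpool", "add", pool, "cache", cache_dev], check=True)

      # SLOG: separate log device that absorbs synchronous writes (the ZIL)
      # before they are flushed to the main vdevs
      subprocess.run(["zpool", "add", pool, "log", slog_dev], check=True)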

  • @damani662
    @damani662 1 year ago

    Thanks for the insight.

  • @smolicek90
    @smolicek90 1 year ago +1

    Good choice on that redundancy step. Have you considered using the "special dev" class on your HDD pool? I have a setup of 4x16TB RAIDZ2, a 3x1TB mirror special dev, and a 2x200GB SLOG; you can also play with block sizes on datasets to store a dataset entirely on the SSDs.
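    Roughly what that looks like on the command line (a sketch; pool, dataset and device names are placeholders, and the special_small_blocks property is what steers small records of a dataset onto the special vdev):

      import subprocess

      # Add a mirrored "special" vdev for metadata and small blocks
      subprocess.run(
          ["zpool", "add", "tank", "special", "mirror",
           "/dev/disk/by-id/ssd-EXAMPLE-1", "/dev/disk/by-id/ssd-EXAMPLE-2"],
          check=True,
      )

      # Records at or below this size land on the special vdev instead of the HDDs;
      # setting it equal to the dataset's recordsize keeps that whole dataset on SSD.
      subprocess.run(["zfs", "set", "special_small_blocks=32K", "tank/vmstorage"], check=True)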

    • @christianlempa
      @christianlempa  1 year ago

      I haven't done it yet; as far as I understand the docs, SLOG and ZIL come into play when memory is exhausted for caching, but I might do a few more tests with internal NVMes or SSDs in the future. Great tip ;)

  • @DarrolKHarris
    @DarrolKHarris 1 year ago +3

    yes, I would like to see a video on storage plans, replication, backup and more about TrueNAS Scale.

    • @christianlempa
      @christianlempa  1 year ago +1

      Awesome! Let's do this once I feel comfortable enough ;)

  • @cinemaipswich4636
    @cinemaipswich4636 9 months ago

    I scoured YouTube for installs of TrueNAS. This one helped me understand how important RAID-Z is for the security of data. I now have 8 drives with Z2; I lose 1/4 of my storage but have two drives' worth of redundancy.

  • @alonzosmith6189
    @alonzosmith6189 1 year ago

    Thank U, currently learning TrueNAS, looking to upgrade to Scale.

    • @christianlempa
      @christianlempa  1 year ago

      Oh nice! What are you doing with your NAS? ;)

    • @alonzosmith6189
      @alonzosmith6189 1 year ago

      @@christianlempa For family storage of data (pictures, videos, docs,etc) and backup. My NAS is their cloud storage.

  • @David_Quinn_Photography
    @David_Quinn_Photography 11 months ago

    I have been using a 2-disk mirror for about 4 years now, and I keep a copy on my desktop as well as backing up to a friend's NAS states away from me. The mirror has done well.

  • @TheDWehrle
    @TheDWehrle 1 month ago

    I love how he says a second drive dies "sometimes". lol. It has happened to me a few times over the years with the WD Reds, which is why I always use at least raidz2.

  • @xordoom8467
    @xordoom8467 1 year ago +4

    I agree that if you require the reads/writes and IOPS, then make multiple vdevs. However, I've been running one giant vdev at a size of 172TB without issues for years; it's mainly for Plex and it's been fine all this time. If I have more than 12 streamers the server may buffer once in a while, but for the most part I feel these fears of a large vdev can be exaggerated at times... I do replicate this server to a sister server just in case...

  • @Silent1Majority
    @Silent1Majority 1 year ago

    Excellent breakdown. The question you've created now is: did you need to create a bridge network from the fast (SSD) storage to allow your applications to use the slower pool storage? If so, how? The TrueNAS documentation confuses me on this. 😅

    • @christianlempa
      @christianlempa  1 year ago

      I'm not sure what you mean by bridge network?

  • @TannerWood2k1
    @TannerWood2k1 10 months ago

    This was helpful for understanding how to optimize larger numbers of disks. You said that you have a 10GbE interface; which one are you using? I ended up with 2 different Broadcom cards which would not work, and have now ordered a Chelsio card. My system started as a Core build, but my motherboard crashes the installer, so I used Scale instead.

    • @christianlempa
      @christianlempa  10 months ago

      Thanks :) I’m using an X520-DA1/2 card that works great

  • @andrewt9204
    @andrewt9204 1 year ago

    I did something similar: I had 6x 6TB drives all in a Z1 config without doing any research. After reading forums, I decided it was better to have those 6 in a Z2 vdev, or two vdevs in Z1. I liked the idea of a bit more performance and went with the 2-vdev option.
    That's pretty much the limit with the board I have. I only have one 16x PCIe slot and two 1x slots. The 16x slot is being used by the 10G fiber card, and based on what I've read, those cheap 1x SATA expansion cards aren't the greatest. I thought about using the two 1x slots with M.2 adapters to mirror two SSDs for a cache. Even a single PCIe 3.0 lane to an NVMe drive is going to be 3x faster at sequential writes than a SATA HDD vdev, and the IOPS will be magnitudes faster as well. I've basically turned an x4 NVMe drive into a SATA SSD at that point.

    • @christianlempa
      @christianlempa  1 year ago

      That's a good question. I guess in larger systems it's better to buy a server main board with more PCIE lanes and faster controllers.

  • @Neo8019
    @Neo8019 1 year ago

    I had 2 servers fail in the space of 2 months because both had 5 disks in RAID5, and during the rebuild a second disk failed. Luckily I had backups of one, and the other one had a mirror. Since then I do not use anything less than RAID6. An extra HDD is cheaper than the data it holds.
    On the previous video you said you spent about 500 Euro on the case. You can find an HP ProLiant DL380 Gen9 with 2x Intel Xeon E5-2640v3 8-core and 32GB ECC RAM that can hold 12x 3.5" (LFF) drives for around 700-800 Euros used, from a German website, and they provide a 12-month warranty. A dual-port P440ar controller, which supports HBA mode, will also cost you around 100 Euro from the same shop.
    In any case, nice build!!

  • @gregjones9601
    @gregjones9601 1 year ago +1

    Love your videos Christian. Not sure if anyone asked, but I would love to know how you migrated from your original storage layout to the new one. Did you just blow the data away, or is there a way to migrate the data when setting up a new layout? I have two vdevs with 12 drives in each… I questioned my layout originally and maybe I should have split it up even more! Always hard to sacrifice storage when you pay $$$$. The flip side is the cost of failure, something I always seem to deny to myself! Thanks for doing what you do!

    • @christianlempa
      @christianlempa  1 year ago

      I needed to copy the data somewhere else, destroy the pool and create a new one :(

    • @VallabhRao123
      @VallabhRao123 1 year ago

      @@christianlempa How did you manage to have so much spare storage? And if you did have it, why not add those drives to the pool alongside all the others to begin with? Did you buy new drives? That brings its own challenges, as now you have a gazillion 4TB drives to manage.
      Would be great if you could explain a bit more, as I am new to the NAS world.

    • @gabrielosvair
      @gabrielosvair 6 months ago

      @@VallabhRao123 I would also love to know more detail

  • @MikeHarris1984
    @MikeHarris1984 1 year ago

    I wish I had seen this last week!!! I was trying to figure out if I should make one big vdev of 20 drives, or two or three smaller RAIDZ1 vdevs for my pool to give me better performance with better redundancy, but less space. With 160TB, taking 40TB for redundancy is not a big deal. I didn't see the IOPS thing... Lol. I have 256GB RAM and am using 8 SSDs for cache/meta vdevs.

  • @chrisumali9841
    @chrisumali9841 1 year ago

    thanks for the demo and info, yeah, mistakes are a part of life LOL

  • @betterwithrum
    @betterwithrum 1 year ago +12

    Christian, given that you're in Germany, I'm curious about your power consumption and what steps you're taking to lower your home lab cost. I'm in the US and I have enough solar on my house to offset 110% of our usage, so this isn't a concern for me. I'm curious how it is for you. Thanks in advance

    • @Oxygen.O2
      @Oxygen.O2 1 year ago +3

      Here in Belgium, running my very old PC turned into an 18TB home server that consumes 65W at idle 24/7 would cost about 17€/month, and that's not even counting the next price increase in January 2023, which will bring it to around 35€/month... So, as you can imagine, I turn it off most of the time! I can't even begin to imagine how much those racked systems use!
      For the config:
      Ubuntu 22.04 Server Edition
      Intel Q9450 @ 2.66GHz base clock
      8GB DDR2
      512GB SSD
      HDDs: 14TB + 2x2TB (no RAID, each HDD has its purpose)
      Running multiple Docker apps, a very light homepage through nginx to access all services, and an SMB share for data backup (Time Machine).
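      As a quick sanity check on those numbers, a small sketch of the cost math (the €/kWh price is an assumed example value, not anyone's actual tariff):

        def monthly_cost_eur(watts: float, price_per_kwh: float = 0.36) -> float:
            kwh_per_month = watts / 1000 * 24 * 30
            return kwh_per_month * price_per_kwh

        print(f"{monthly_cost_eur(65):.2f} EUR")    # ~16.85 for a 65W idle box
        print(f"{monthly_cost_eur(110):.2f} EUR")   # ~28.51 for a 110W server at that assumed tariff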

    • @christianlempa
      @christianlempa  1 year ago +7

      That's a pretty important topic for me, and I still have so many questions regarding idle power usage and home servers. Currently, the average power consumption of this system is 110W, which is approximately €20 a month, but increasing (due to the current Ukraine war and energy crisis in Europe). Soon, I'll need to find another solution. I've heard Intel CPUs perform better at idle compared to AMD Ryzen; however, the investment in a new CPU + MB doesn't pay off - yet.
      My plan is to first build a new Proxmox server with my old PC hardware, once I replace it with a Mac, and use that experience to decide on the storage server build.

    • @gshadow1987
      @gshadow1987 1 year ago

      @@christianlempa I'm using a cheap HP i5-6400. I installed Unraid on a USB3 stick hooked up via an adapter which sits directly on the mainboard so nobody can rip it out. There I created a VM with Windows 10, stripped down with the cool utility @ChrisTitusTech has built, and use the NAS features the OS has to offer. The package power of the CPU under Windows is around 5 watts at idle. Bought the system (mobo + CPU + 8GB RAM) on eBay for around 50 Euro from a refurbish reseller. I'm using two NVMe SSDs inside the 4x slot of the motherboard via an adapter for 2 NVMe drives, which both do 4500MB/s read/write with high IOPS. I also got 12 drives, the same as you're using; 6 of them are hooked up directly to the 6 SATA ports of the mainboard. As a second pool I added a 16x PCIe card for 6 more SATA III drives. Both pools run reads/writes, cache supported, at around a GB/s (not Gbit/s) in parallel, more when only one pool is doing its work. Overall total cost: 200 Euros, sipping well under 100 watts from the wall. Cost breakdown: 50 mobo+CPU+RAM, 25 for +8GB RAM, 15 PCIe NVMe card, 30 PCIe SATA III card, 80 for 2x 500GB NVMe SSDs (with a cache chip, very important), 5 for the USB stick (+120 for Unraid).
      For the case I use an old SilverStone Grandia HTPC case with a 500-watt Xilence PSU which I already had. "Server" running 24/7, buttery smooth and quick as hell.
      Maybe my system inspires you.
      Greetings from the hometown of Blau und Weiss to M-Town (40min drive) :)

    • @xmine08
      @xmine08 1 year ago

      @@Oxygen.O2 look into a more modern processor, mobo and PSU. You should be able to get to 25W at idle easily.

    • @xmine08
      @xmine08 1 year ago

      @@Oxygen.O2 for reference, my Ryzen 5950X machine consumes 42W at idle - not great, but for the performance (that I'm not using when at idle, lol) it's amazing

  • @RzVa317
    @RzVa317 1 year ago

    I would definitely be interested in a truenas overview video

    • @christianlempa
      @christianlempa  1 year ago

      I already did two videos about truenas scale, maybe that's what you're looking for :)

  • @habib.bhatti
    @habib.bhatti 1 year ago

    Quick question: is the fault tolerance PER vdev, not over the entire array as in, say, a traditional hardware RAID solution?

  • @vladimirherrlein3809
    @vladimirherrlein3809 1 year ago

    As soon as you start to play with SSDs (SATA or SAS) with that number of drives, also check your HBA in order to get the best performance (PCIe lanes used, bandwidth per lane, …) and whether your backplane uses an expander or not; maybe you will have to change your backplane and/or add another HBA.
    Example: with 4 SAS SSDs I'm reaching the limits of the 6Gb/s HBA on a Dell R720.
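    A back-of-the-envelope sketch of why a handful of SSDs can saturate an older HBA or expander uplink (the link widths and per-drive figures below are illustrative assumptions, not measurements of that R720):

      # Aggregate SSD throughput vs. the uplink of a 6Gb/s SAS HBA/expander
      sas_lane_gbps = 6            # SAS2 lane speed
      encoding_efficiency = 0.8    # 8b/10b encoding overhead
      uplink_lanes = 4             # a typical x4 wide port between HBA and backplane

      uplink_mbps = sas_lane_gbps * encoding_efficiency * uplink_lanes * 1000 / 8
      ssd_mbps = 550               # one decent SATA/SAS SSD, sequential

      for n in (2, 4, 8):
          print(f"{n} SSDs ~ {n * ssd_mbps} MB/s vs uplink ~ {uplink_mbps:.0f} MB/s")
      # With ~4 SSDs you are already near the ~2400 MB/s ceiling of a 4-lane 6Gb/s uplink.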

    • @christianlempa
      @christianlempa  1 year ago +1

      Yeah, that's a great point to keep in mind. I'm not using this storage server extensively, but you're absolutely right. I should do some tests with copying data from both pools at the same time and maybe put the SSDs on a second HBA or internal controller.

  • @dudley810
    @dudley810 1 year ago

    I am pretty sure that the "lose all your data" scenario was the other reason why I picked Unraid. I believe that you can still read the data on the remaining Unraid drives if more drives fail than the parity covers, but I never tested that. Might be a good test for me as well.

    • @samcan9997
      @samcan9997 8 months ago

      You still technically can with TrueNAS, however you will have blank stripes of missing data, effectively making it unreadable, so unless you're running MultiPar or something, it's as good as lost anyway... And yeah, I've attempted raw data recovery; unless it's gotten a lot better in the last 8 years, there ain't much you can do.

  • @wildmanjeff42
    @wildmanjeff42 1 year ago +1

    I have been using TrueNAS/FreeNAS for years and have learned, with years of 24/7 use, that you will have failures. I use Z2 on spinning drives, Z1 (with backup) on SSD arrays. I know a lot of people use Scale for the Linux OS and for running other things in Docker, but my storage is for ONE thing only -- storage and backups. I use the FreeBSD-based Core as it is VERY established and it's safer with ZFS than Linux at this point in time --- it's your data, and it's your choice of course.
    Thanks for the video !

    • @christianlempa
      @christianlempa  1 year ago

      Thank you for your insight! The good news is, I'm still doing an offline-backup in case the whole server is messed up, but I also have faith in the skills of iX Systems to improve on that ;)

    • @wildmanjeff42
      @wildmanjeff42 1 year ago +1

      @@christianlempa Same here, I have a 2nd server with replication set up to back everything up every 6 hours automatically. I feel like they will get Scale working to the same level as FreeBSD; it will just take vetting the product, same as it did with years of use of FreeBSD and the community! The Lawrence Systems YouTube channel goes really far in depth with TrueNAS and is a great resource!

  • @elonbrisola998
    @elonbrisola998 3 months ago

    I'm configuring my home Truenas setup. Started with raidz1, and after some reading, I went to raidz2. I have 5 drives in a single vdev.

  • @gyulamasa6512
    @gyulamasa6512 1 year ago

    With the SSDs, if speed is not the biggest concern, I would do a RAIDZ1 and back it up to a mirrored pair of HDDs. If more speed is needed, I would go for a striped volume of 4 SSDs, backed up to a mirrored HDD volume often enough. In your case, the first setup would result in 1.5TB, the second in 2TB.

    • @christianlempa
      @christianlempa  1 year ago

      Thanks, yeah there are other possible setups, maybe I'll change it and do some further performance testing. A stripe would probably be the best performant setup.

  • @ToxicwasteProductions
    @ToxicwasteProductions 1 year ago

    If running multiple drives like you, I tend to use RAID6 nowadays with larger drives. And to be completely honest, I don't even fully trust that, so I moved over to RAID6+0 on my main production PC. I have 8 drives in my array, so I get 4TB of usable space and ample failure headroom. I can lose basically 4 drives at a time, given the right four drives die, and still be able to recover. Like you say, it makes me sleep a little better at night.

  • @rahaf.s1217
    @rahaf.s1217 1 year ago

    As hardware it will be one drive? Then I will split it virtually based on the RAIDZ type?

  • @barneybarney3982
    @barneybarney3982 1 year ago

    8:20 well, it's always about balancing redundancy and cost... Like, of course it's better to have a mirror of two Z2 vdevs, but this way you get only 16TB of capacity from 12x4TB drives. IMO 1x Z2, 2x Z1 or 3x Z1 is fine for 12 drives...
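    To make that capacity trade-off concrete, a small sketch comparing raw usable space for a few layouts of 12x4TB drives (parity overhead only; it ignores ZFS padding, slop space and the usual ~80% fill guideline):

      DRIVES, SIZE_TB = 12, 4

      def usable(vdevs: int, parity_per_vdev: int) -> int:
          per_vdev = DRIVES // vdevs
          return vdevs * (per_vdev - parity_per_vdev) * SIZE_TB

      layouts = {
          "1x RAID-Z2 (12-wide)":     usable(1, 2),  # 40 TB, IOPS of ~1 disk
          "2x RAID-Z2 (6-wide each)": usable(2, 2),  # 32 TB, IOPS of ~2 disks
          "3x RAID-Z1 (4-wide each)": usable(3, 1),  # 36 TB, IOPS of ~3 disks
          "6x mirrors (2-wide each)": usable(6, 1),  # 24 TB, best IOPS
      }
      for name, tb in layouts.items():
          print(f"{name}: {tb} TB usable")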

  • @jwspock1690
    @jwspock1690 1 year ago

    Thanks for the little video

  • @kewitt1
    @kewitt1 7 months ago

    My setup: NAS1 with 4x18TB RAIDZ1, NVMe meta and cache, 8TB mirrors, 1TB for apps and VMs. NAS2 with 8x8TB RAIDZ1 for backup. 10GbE between both. A 27TB backup took 15 hours on the 1st sync.

  • @frets1127
    @frets1127 7 months ago

    So what if you already have data on the new build? I made this mistake: 8x10TB RAIDZ2 in 1 vdev 🤦🏻‍♂️ and copied all my data from the old NAS to it. So now I have to copy it back, reconfigure, then copy it to the new build again? Ugh. Any recommendations on the best way to copy from the new build back to the old one?

  • @LucS0042
    @LucS0042 1 year ago

    How did you convert without losing data?

  • @ozzieo5899
    @ozzieo5899 1 year ago

    hey.. how are those Fanxiang ssds working out for you? I saw them on amazon, but was apprehensive on purchasing..

    • @christianlempa
      @christianlempa  1 year ago +1

      Can’t really say much negative, so far they’re working good! But who knows about reliability and duration :P

    • @ozzieo5899
      @ozzieo5899 1 year ago

      @Christian Lempa got it.. thanks.. perhaps I'll wait a month or so more.. and if still nothing, I'll pull the trigger on it.. thanks soo much for everything..

  • @postnick
    @postnick 1 year ago

    I'm running 3x 1TB SSDs in RAID 0 - I know, I know - but I have my "FILES" backed up to the NVMe boot drive often, and also keep that key data on a different computer and an extra drive. Thankfully it's only 200 GB at this time.

  • @VGAMOTION
    @VGAMOTION 1 year ago

    Can you help me with a question? I'm setting up a server with truenas scale. I only have three bays available and I was planning to put 3 HDDs of 20tb. What do you think is the best configuration? Thank you so much!

    • @sempertard
      @sempertard 1 year ago

      if you truly value the data, then two drives mirrored, and the other for backup. That means you will only have 20TB of storage available out of the 60TB you started with. Yeah.. Ouch. Or you could possibly use the three drives in a Z1 (Raid 5) configuration, giving you 40TB available and use external drives to back up that data. Again, how important is your data?

  • @nangelo0
    @nangelo0 1 year ago

    Why didn't you combine the storage and VM servers into a single server?

  • @BrianD-pf4px
    @BrianD-pf4px 1 year ago

    Nice fix-up. Your next mistake was using the Adaptec 71605 16-port SAS/SATA as your controller. The TrueNAS forums all say this controller isn't really an HBA. That being said, I use the same controller in my build but get dinged on the forums about it. Also, I'm not sure if you will be able to TRIM those SSDs with that controller.

    • @christianlempa
      @christianlempa  1 year ago

      Yet it doesn’t seem to be a problem, I might hit a performance Limitation when I need to use both pools at the same time heavily. But I’m not sure what you mean by it’s not a real HBA? It’s a controller that runs in HBA mode, so .. where is the problem with that?

    • @BrianD-pf4px
      @BrianD-pf4px 1 year ago

      @@christianlempa I am still using that card as well for rotational drives. Just thought it might be something to look into as well. The TrueNAS forum seems to be very adamant that the card is a poor choice. Very interested in your take on the card though. Also, did you check whether you are able to TRIM those SSDs?

  • @lalala987
    @lalala987 1 year ago

    Which kind of ssds fit into the trays?

  • @alex10pty
    @alex10pty 1 year ago

    Great video. How do you manage to recreate the pool if the drives already have data on them? Do you have spare drives to copy the existing data to? I ask because I read that if a drive has data on it, it doesn't show up in the ZFS pool, at least in Proxmox.

    • @christianlempa
      @christianlempa  1 year ago +1

      I needed to copy the data to another location, destroy the pool and re-create it. And yep... that took the whole day and night :D

  • @dillonhansen71
    @dillonhansen71 1 year ago

    What SSDs did you buy? Did you make sure they have NAND flash on them? If they don't, you will get HDD performance :(

    • @christianlempa
      @christianlempa  1 year ago

      I got the Fanxiang S101; they have 3D NAND, if you believe their docs :D

  • @erfianugrah
    @erfianugrah 1 year ago

    That panning effect on the "Hey everybody"

    • @christianlempa
      @christianlempa  1 year ago

      Yeah sometimes I still suck at editing :(

    • @erfianugrah
      @erfianugrah 1 year ago

      @@christianlempa Thought it was intentional haha

  • @zaluq
    @zaluq 8 months ago

    Have you planned to do a new TrueNAS setup with the changes in version 23?

    • @christianlempa
      @christianlempa  8 months ago

      Not yet, but I'll look into TrueNAS again when I have time

  • @zazuradia
    @zazuradia 8 months ago

    It's not that a second drive fails; it's that if a string of bits fails during parity reconstruction (which is much, much more common), some part of your data is gone.

  • @roymorrison1075
    @roymorrison1075 1 year ago

    12 drives with only 1 failure allowed. Yep, I wouldn't have been able to sleep at night. Sometimes size isn't everything! I also have a spare drive per vdev for failover. Drives are cheap when it comes to data that can never be replaced. Try explaining to your wife that you lost all the kids' pictures for the sake of a couple of $150 HDDs. Call it overkill, but I also run a 2nd TrueNAS server once a week to replicate the main TrueNAS server from its snapshots. Very quick and easy. But anyway, great video. Thanks Christian.

  • @uuu12343
    @uuu12343 1 year ago

    Question: are you using an SSD for storage?

  • @samuelmoser
    @samuelmoser 1 year ago

    After watching this video..... do I have to be concerned about my configuration? I have 3x8TB in Z1, which means I can only lose one drive, but I don't want to do Z2 with a fourth drive, as then I'd only have an efficiency of 50%. So is it really a problem when I have just 3 drives?

  • @antonmaier5172
    @antonmaier5172 1 year ago

    I assume you are using the latest TrueNAS Scale version? You didn't mention.
    What is your idle CPU usage with TrueNAS Scale?
    I tried it about a year ago, and back then my TrueNAS Scale server used about 25% CPU at idle, which in my opinion is unacceptable.
    The problem then was all those Kubernetes processes doing nothing but still using CPU and electrical power.
    TrueNAS Core 13 on the same hardware uses 0% CPU at idle.
    Has it gotten any better?

    • @christianlempa
      @christianlempa  1 year ago

      I'm using the latest version, and I didn't have any problems with idle, mine is always at 1 to 3%

  • @JohnWeland
    @JohnWeland 1 year ago

    So here is a question. You have multiple vdevs in a single pool. If you wanted to have deduplication would you need an extra drive per vdev for this or 1 drive for the entire pool?

    • @christianlempa
      @christianlempa  1 year ago

      I'm not really sure, but I thought deduplication is a compression method that takes a lot of CPU power to compute, and it's no different in vdev requirements than non-deduplicated pools.

    • @JohnWeland
      @JohnWeland 1 year ago

      @@christianlempa I thought it required a segment of storage to use as a manifest. I may be misremembering. Maybe it’s caching I am thinking of.
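      For what it's worth, dedup in ZFS is a per-dataset property rather than a per-vdev one; the dedup table (DDT) lives in the pool and is cached in RAM (it can also be placed on a dedicated special/dedup vdev). A small sketch of checking it, with placeholder pool/dataset names:

        import subprocess

        # Dedup is enabled per dataset, not per vdev
        subprocess.run(["zfs", "set", "dedup=on", "tank/backups"], check=True)

        # Overall pool dedup ratio
        subprocess.run(["zpool", "get", "dedupratio", "tank"], check=True)

        # DDT statistics and histogram (this table is what eats RAM)
        subprocess.run(["zpool", "status", "-D", "tank"], check=True)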

  • @eloimartinez9446
    @eloimartinez9446 1 year ago +1

    Nice fix, but I have a question: why 12x 4TB HDDs instead of 4x 12TB HDDs? It might be a little bit slower, but it's much more energy efficient, and you have the SSDs for anything IO-intensive.

  • @RossCanpolat
    @RossCanpolat 1 year ago

    I would love to see an NGINX Proxy Manager with SSL for LAN only video. 🙂👍

    • @christianlempa
      @christianlempa  1 year ago

      Mhh I'm not sure if I'd do this, as I'm pretty happy with Traefik as a Reverse Proxy. I will do a video about Traefik on TrueNAS Scale though, maybe that's still interesting ;)

  • @bsandoval2340
    @bsandoval2340 1 year ago

    Hold on, I'm a little confused. Doesn't a mirror just duplicate the data, meaning you could theoretically lose 3 drives assuming they were all in the same vdev, but if you lose even 1 drive in both vdevs they're all gone? I'm fairly new to a lot of this.

    • @christianlempa
      @christianlempa  1 year ago

      The 2 vdevs aren’t in a mirror, but in a stripe, meaning I need both of them to stay intact. Each of them has a parity of 2, so I can lose 2 drives in each vdev, but not more.

  • @IvanToman
    @IvanToman 9 months ago +1

    Mirrors only. Simple is always the best.

  • @Bartek2OO219
    @Bartek2OO219 9 months ago

    Isn't RAID 6 (Z2) better than RAID 10 for SSDs?

  • @mt_kegan512
    @mt_kegan512 1 year ago

    Watch your sync write speed to the SSDs when using NFS. It may not be the speed you're expecting for fast VM storage over the network. If you're using anything over 1Gbit/s you may want to look into a write cache (SLOG). Granted... this will send you down quite the expensive rabbit hole! It's really about how quickly the cache device can write and its endurance, not the size. If you don't care about NFS/synchronous speeds, I wouldn't bother, however.
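    One quick way to see how much sync writes are costing you is to benchmark with the dataset's sync property toggled; a sketch below, where "tank/vms" is a placeholder dataset and sync=disabled should only ever be used for testing, since it risks losing the last few seconds of writes on power loss:

      import subprocess

      subprocess.run(["zfs", "get", "sync", "tank/vms"], check=True)

      # sync=standard (default): honour sync requests; a fast SLOG helps here
      # sync=disabled: acknowledge immediately -- fast, but unsafe; testing only
      subprocess.run(["zfs", "set", "sync=disabled", "tank/vms"], check=True)
      # ... run the benchmark, then restore the default:
      subprocess.run(["zfs", "set", "sync=standard", "tank/vms"], check=True)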

  • @eNKa007
    @eNKa007 1 year ago

    Why not create a vdev with a higher RAIDZ level instead of breaking the drives into two vdevs?

  • @tabascocrimson7865
    @tabascocrimson7865 1 year ago

    Where did you move all your data off in the process?

    • @christianlempa
      @christianlempa  1 year ago

      I needed to copy them to another hard drive, and yes, that took the whole day and night :D

    • @tabascocrimson7865
      @tabascocrimson7865 1 year ago

      @@christianlempa Me: Need a drive for redundancy in case one fails
      Me: decides one redundancy is not enough
      Also me: When redesigning my array, I rely on a single one.
      Lol

  • @jacquesredmond
    @jacquesredmond 17 hours ago

    I am building a server with FOUR 4TB HDDs (this would be my archive pool that I need max redundancy for, so I would only have 4TB max storage), and a second pool of FOUR 500GB SSDs that I would use for max speed and performance for temporary uses (combining them for almost 2TB of temporary work space?). What settings would you recommend for this situation?

  • @Jimmy_Jones
    @Jimmy_Jones 1 year ago

    Have you come across many bugs? I still think it's too early for the Kubernetes side. Loads of people seem to encounter issues/limitations even on the official pods

    • @christianlempa
      @christianlempa  1 year ago

      Actually no; however, I'm not doing much with the Kubernetes part of TrueNAS, and I haven't seen any bugs on my end yet

  • @VassilisKipouros
    @VassilisKipouros 1 year ago

    It could be an idea to add your SSDs to your spinning disks as L2ARC and SLOG cache. Do some research on it. This way you can increase your spinning-drive pool's performance...

    • @christianlempa
      @christianlempa  1 year ago

      Thank you! I’m currently fine with the memory caching but it’s indeed an interesting topic

    • @samcan9997
      @samcan9997 8 months ago +1

      or just buy 1TB of LRDIMMs, as they're cheaper and faster than replacing SSDs every few months, but eh
      special metadata pools can also help a lot

  • @SharkBait_ZA
    @SharkBait_ZA 1 year ago +1

    Please make the video. I want to learn more. 🙂

  • @hpsfresh
    @hpsfresh 1 year ago

    Why not make 3 vdevs of 4 disks in raid z1?

  • @Damarious25
    @Damarious25 4 months ago

    Any update on how those SSDs were?

  • @mitchellsmith4601
    @mitchellsmith4601 1 year ago

    I just had two older 4 TB drives fail in a single vdev over a three month period. It happens.

  • @ragtop63
    @ragtop63 7 months ago

    There is no 100% fault-tolerant config. Even with RZ2 it's entirely possible to lose enough drives to kill your entire data storage. In fact, it happened to me many years ago when a lightning storm hit while I was out of town. The storm compromised my PSU and the PSU killed 5 of the drives and destroyed all of my data.
    Since then, I've come to the conclusion that for my personal needs, pools consisting of 4xHDD@RZ1 work best. I never have less than 2 vdevs in a pool, so my IOPS are better than a single disk. The throughput is also good enough for my needs so far. I handle failed disks by having cold spares. Since I'm almost always near my system, if a degraded state were ever to show up, I can just power down and swap the drive in a matter of minutes. It would be different if the server was in a datacenter somewhere that isn't instantly accessible, but for most home users, that's simply not the case. I also have a duplicate identical system at my son's house. The two systems are synced, so the data is theoretically always backed up.
    All this is to say, I personally believe that sacrificing 2 disks per vdev in a home environment is a waste of storage space and money. As long as you have 1 or 2 cold spares and a good UPS/protection circuit, you should almost never end up in a situation where the problem can't be resolved immediately.

  • @YouTubeGlobalAdminstrator
    @YouTubeGlobalAdminstrator 1 year ago +1

    Those SSDs might fail quickly; it's really not recommended to use consumer drives in a server environment due to their limited endurance.

    • @christianlempa
      @christianlempa  1 year ago

      Well, that's what everybody says, but no one could actually point me to reasonable docs about the impact of the missing features.
      So as I said, I'm going to test it; if an SSD dies, it's just $40 for a new one ;)

    • @cyberagent009
      @cyberagent009 1 year ago

      @@christianlempa I suppose that each and every SSD has a TBW rating before it can fully fail. This information is available from the SSD manufacturer's website. Enterprise drives have a higher MTBF. Just my two cents. Correct me if I'm wrong.
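      A rough sketch of turning a TBW rating into an expected lifetime (the TBW figure, daily write volume and write-amplification factor are made-up example values; copy-on-write filesystems like ZFS tend to amplify writes):

        def years_of_life(tbw_rating_tb: float, gb_written_per_day: float,
                          write_amplification: float = 2.0) -> float:
            tb_per_year = gb_written_per_day * write_amplification * 365 / 1000
            return tbw_rating_tb / tb_per_year

        # e.g. a consumer SSD rated for 600 TBW with ~100 GB/day of VM writes
        print(f"{years_of_life(600, 100):.1f} years")   # ~8.2 years under these assumptions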

    • @severgun
      @severgun 1 year ago

      @@christianlempa there are actually no fancy "features". Enterprise SSDs just have more spare cells, so they last longer.
      All CoW filesystems suffer from write amplification.

  • @THEMithrandir09
    @THEMithrandir09 1 year ago

    So with SSDs you need to watch whether you buy QLC or TLC NAND storage. TLC is great, QLC is slow af, but often cheaper per TB. There's more to it than that, but QLC is often a rip-off, especially for SSDs smaller than 2TB.

  • @heavy1metal
    @heavy1metal 6 months ago

    Fault tolerance is only about how much downtime you can afford, not about preventing data loss. If you have everything backed up and have the time to rebuild and recover, then there's nothing wrong with RAIDZ1.

  • @Mr_Meowingtons
    @Mr_Meowingtons 1 year ago

    yeah, I have 10x 4TB drives and I put them in RAID-Z2.
    My Plex server running 15 drives is on hardware RAID6, but I want to change that to TrueNAS + an HBA some day, and run a 2U for the Plex server.

  • @helderfilho4724
    @helderfilho4724 2 months ago

    Please replace your SATA SSDs with Samsung EVO or Crucial MX. I bet yours will become slower than old spinning disks as soon as you fill them. That was my case anyway, and I am much happier with good SSDs from reputable brands. I may be too late watching your video, but if you have any news on that please let me know =). And thanks for the info!

  • @stacygirard647
    @stacygirard647 6 months ago

    Good thing I found this video just before I get the old used PC I'm getting for my NAS, so I will be sure not to make that mistake 🙂
    I'm getting an old i5 7th gen with 16GB RAM (I'll upgrade it over time to 64, the max I can have on that motherboard),
    and I bought 3x 1TB SSDs and a 250GB NVMe SSD for the cache. I already have a 112GB SSD in it that I'll use for the boot installation.
    I will use it as a cloud (Nextcloud), my Plex server, and maybe other things I'll find over time. I already got a domain name to access my cloud etc.,
    and already have a Cloudflare setup too.
    And I'll save money to later get a bigger server that I'll install Proxmox and VMs on, and then I'll see if I keep this one as a NAS server or use it only for something else.
    And mistakes happen to anyone.

  • @George-rm7yw
    @George-rm7yw 1 year ago

    In my opinion, trying and failing is the only way to learn!

  • @kreaweb-be
    @kreaweb-be 1 year ago

    I used consumer SSDs for a while but went back to HDDs because ZFS eats up SSDs; way too many write operations cause consumer SSDs to degrade in a few months.

  • @MrJonsson9
    @MrJonsson9 9 months ago

    What is this "starch-server"?

  • @putrag2loh
    @putrag2loh 1 year ago

    How about when the OS is broken, can we rescue all our data on the vdevs?

    • @christianlempa
      @christianlempa  1 year ago

      You can import the ZFS pool into a new system. So either back up and restore the OS disk, or set up a new one and import the pool

  • @ArifKamaruzaman
    @ArifKamaruzaman 1 year ago

    I created a stripe and am too lazy to change it.

  • @romanrm1
    @romanrm1 1 year ago

    You said performance is the most important thing for the SSD array, and then you leave the Encryption checkbox on. What for? Test with and without; typically it will harm performance a lot, even if your CPU has hardware acceleration. And given you say "performance matters, and if anything happens I can just restore from backup", I expected you'd just run RAID0 across all four.

  • @bensatunia8842
    @bensatunia8842 1 year ago

    No Schadenfreude ... The Pro

  • @heinowalther5023
    @heinowalther5023 1 year ago

    I don't agree that the IOPS per vdev equal those of one disk. I think this is "old" information and has been corrected in later ZFS releases... I use a 24-disk shelf with just one vdev (it's a dRAID3 because of the better rebuild times), and dRAID can only be set up from the command line on TrueNAS... but I just did a simple write test where I was able to reach over 5,000 IOPS with a 128k block size (1.5GB/sec)... so go figure?

  • @MokshaDharma
    @MokshaDharma 1 year ago

    Wen Mastodon join?

  • @enormouschunks7138
    @enormouschunks7138 1 year ago

    Before watching the video. Was the mistake installing truenas?

    • @christianlempa
      @christianlempa  1 year ago +1

      *SLAP* watch the video

    • @enormouschunks7138
      @enormouschunks7138 1 year ago

      @@christianlempa I did and still think installing truenas was the worst mistake in the video.