Make Your Home Server Go FAST with SSD Caching

  • Published: Jul 3, 2024
  • WD Red Pro Hard Drives and SSDs: www.westerndigital.com/en-us/...
    Tiered caching script by elmuz: github.com/notthebee/infra/bl...
    Follow me:
    Mastodon tilde.zone/@notthebee
    GitHub github.com/notthebee
    Twitch twitch.com/notthebeee
    Support the channel:
    Patreon / wolfgangschannel
    PayPal (one time donation) www.paypal.com/donate/?hosted...
    Music:
    Hale - Moment
    Delavoh - Always With Me
    Meod - Crispy Cone
    Steven Beddall - Cuts So Deep
    When Mountains Move - Alone Atlast
    Videos are edited with Davinci Resolve Studio. I use Affinity Photo for thumbnails and Ableton Live for audio editing.
    Video gear:
    Camera geni.us/K8OOyKV (Amazon)
    Main lens geni.us/jnnElY4 (Amazon)
    Microphone geni.us/tgiSqL (Amazon)
    Key light geni.us/Gi1zE2 (Amazon)
    Softbox geni.us/F86pM (Amazon)
    Secondary light geni.us/aciv (Amazon)
    Other stuff that I use:
    Monitor geni.us/KUzcmcP (Amazon)
    Monitor arm geni.us/5RXu (Amazon)
    Laptop stand geni.us/X5vx9Af (Amazon)
    Keyboard www.amazon.de/HHKB-PD-KB401W-...
    Mouse geni.us/KB7h (Amazon)
    Audio interface geni.us/sdhWsC (Amazon)
    As an Amazon Associate, I earn from qualifying purchases
    Timestamps:
    00:00 Intro
    01:27 2.5Gbit Networking
    03:15 10Gbit Networking
    06:26 SATA or NVMe SSDs?
    07:51 WD Red SSDs
    08:46 Filesystems
    09:08 BTRFS
    11:30 ZFS
    13:04 Mergerfs & Snapraid
    14:43 Tiered Caching
    17:10 Outro
  • Science

Comments • 220

  • @trapexit • 1 year ago • +30

    mergerfs author here. Thanks for the coverage.

    • @JonathanYankovich • 16 days ago • +1

      Mergerfs user here. On behalf of so many, THANK YOU!

    • @JonathanYankovich • 16 days ago

      (Also, NFS shares on Unraid using mergerfs are broken/unstable. It might be a bug in libfuse, not mergerfs, but it’s causing me to ditch Unraid and roll my own snapraid+mergerfs probably on Ubuntu)

  • @muazahmed4106 • 1 year ago • +75

    When will you do a video about home automation?

  • @Guilherme-qk9so • 1 year ago • +5

    Your videos are always so helpful and well made. Thanks for sharing this!

  • @nichtgestalt • 1 year ago • +5

    Thank you very much for this and all the other videos. Even though I don't use a server (yet?), it is so interesting to see these tutorials, especially the ones about power efficiency. Have a good one!

  • @myghi63 • 1 year ago • +8

    Thank you! Because of your videos I learned a lot about server stuff and also improved my own server!
    I already have a NAS with a Corsair NVMe drive, and probably in 2023 I'll be able to switch to a 2.5Gb/s network. Btrfs has been my FS of choice on all my OSes at home, and on my server it's running RAID 1 + zstd:3 compression, without any problems at all, on two Seagate IronWolf 4TB drives.

  • @TheTeregor • 1 year ago • +11

    Small note about BTRFS: its RAID1 is not actually RAID1, it's a different type of RAID that is confusingly named "1".
    To cut to the chase: BTRFS RAID1 (and RAID10, for that matter) can tolerate only ONE disk loss, REGARDLESS of the amount of disks in RAID. Please consider this before committing to BTRFS on your NAS.
    Suppose you have 2 6TB drives and 1 8TB drive in a BTRFS RAID1 (yes, you can use an odd number of disks, and different sizes as well). Now you write a 1TB file to it, for the sake of example. The way BTRFS works, it will write the 1TB file to the "most free" drive, which is the 8TB drive. Then it will write a copy of it to ANOTHER "most free" drive, which is either of the 6TB drives. Let's write 1TB files until our BTRFS RAID1 is full, and track the free space on the disks:
    6TB#1 | 6TB#2 | 8TB
    6TB | 6TB | 8TB
    5TB | 6TB | 7TB
    5TB | 5TB | 6TB
    4TB | 5TB | 5TB
    4TB | 4TB | 4TB
    3TB | 3TB | 4TB
    2TB | 3TB | 3TB
    2TB | 2TB | 2TB
    1TB | 1TB | 2TB
    0TB | 1TB | 1TB
    0TB | 0TB | 0TB
    We can see that our biggest disk (8TB) is used the most until its free space equals that of the other two drives; from then on, writes are balanced equally across the disks.
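The allocation pattern in the table above can be checked with a short simulation. A toy sketch (the drive sizes and the 1TB "chunk" come from the example above; real btrfs allocates chunks of about 1GiB, not whole files):

```python
# Toy model of btrfs RAID1 allocation: every block of data is written twice,
# each copy going to a different drive, always the ones with the most free space.

def usable_capacity(drive_sizes_tb, chunk_tb=1):
    """Return how many TB of data fit before allocation fails."""
    free = list(drive_sizes_tb)
    written = 0
    while True:
        # Pick the two drives with the most free space for the two copies.
        order = sorted(range(len(free)), key=lambda i: free[i], reverse=True)
        a, b = order[0], order[1]
        if free[a] < chunk_tb or free[b] < chunk_tb:
            return written
        free[a] -= chunk_tb
        free[b] -= chunk_tb
        written += chunk_tb

print(usable_capacity([6, 6, 8]))  # → 10 (TB usable out of 20 TB raw)
```

For [6, 6, 8] this reproduces the table: 20TB raw, 10TB usable, with the 8TB drive drained fastest until all drives are level.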

  • @chromerims • 1 year ago

    This is timely for me. Thank you.
    Will watch tonight.

  • @halbouma6720 • 1 year ago • +17

    There's a Linux kernel driver that does tiered storage as well: it's called btier. It moves the more often used data to the SSD drives. I use it, and it works great. Thanks for the video!

    • @abwesend182 • 1 year ago • +2

      Can you give more information about this topic? Maybe where I can read more about it?

  • @NN-uc1fh • 1 year ago

    Thanks for your tips and videos. I like your channel for helping me in my daily life as a non-programmer. Greetings from southern Germany!

  • @Felix-ve9hs • 1 year ago • +32

    15:11 The ZIL (ZFS Intent Log) is part of ZFS's copy-on-write design (for preventing data loss) and only gets used for sync writes (e.g. if you use your storage server for virtual machine storage).
    On normal file copy operations, the ZIL never gets used. If one does have a lot of sync writes, they should put their ZIL on a dedicated log device (SLOG), which would usually be an SSD.
    The L2ARC is an extension of the ARC read cache, which caches frequently accessed data in your free/unused RAM.
    An L2ARC is useful if the data you want cached doesn't fit in your RAM, but it will only speed up reads of files already on your ZFS pool.

    • @agistan7764 • 1 year ago • +3

      Great and 100% correct explanation. I'd also add that ZFS is great for handling lots of small files and has outstanding data reliability and safety. However, most of its features (like snapshots and replication) are really only useful in the enterprise.
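Attaching the devices discussed above to an existing pool is a one-liner each. A hedged sketch ("tank" and the device paths are placeholders):

```sh
# Dedicated SLOG device: absorbs the ZIL's sync writes
zpool add tank log /dev/nvme0n1
# L2ARC device: extends the RAM-based ARC read cache onto an SSD
zpool add tank cache /dev/nvme1n1
# Verify that the log and cache vdevs show up
zpool iostat -v tank
```

Losing an L2ARC device is harmless (it's a read cache); a SLOG should be power-loss-safe, since it exists to protect in-flight sync writes.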

  • @LXJ1974 • 1 year ago

    Excellent video as always. Thanks for that

  • @tredonlinder2543 • 1 year ago

    Thank you for your work! Keep rolling.

  • @ZeroXtiK • 1 year ago

    1min of video and I love the idea, love ur vids dude!

  • @adriancozma6102 • 1 year ago

    Very insightful, thanks for sharing!

  • @user-fi9xc6nc1q • 1 year ago

    Thank you veeeeeery much for the video about homelab💖

  • @Kohega • 1 year ago

    Very useful documentation, thanks

  • @anthonvanderneut • 1 year ago • +2

    I have been using Btrfs over mdadm RAID-6 on several servers for many years now. Apart from the occasional dead drive that needed replacing, this has worked without problems. I did switch to combining smaller (1TB) partitions from each of the 6 drives into a RAID, then combining the RAIDs with LVM and putting Btrfs on top of that. The smaller partitions have the advantage that I can run a RAID check on each of them overnight, instead of one huge check that starts early Sunday morning but doesn't finish until very late in the day, making access very slow. Of course some people will roll their eyes at this layering and the loss of speed, but for me and my usage pattern that is not a concern.

  • @Airbag888 • 1 year ago

    Love this series...

  • @chris11d7 • 1 year ago • +37

    You're right about needing a network upgrade to get more sequential performance from SSD caching, but even on single gigabit you're getting a huge performance advantage in random IOPS. I love your videos, keep them coming!

  • @shawn576 • 1 year ago • +2

    StableBit DrivePool now has an SSD caching option. Things saved to the pool are written to the SSD first, then moved onto HDD afterwards.

  • @bertnijhof5413 • 1 year ago • +4

    12:10 The memory requirements of ZFS depend completely on your use case; the mentioned rule is valid for a server with many users (say >20). I use ZFS on my desktop and laptop (Ubuntu) and I limit the L1ARC cache to 20-25% of my RAM size, mainly to save some time starting VMs. If needed, ZFS will free up cache memory when programs or VMs need it. On my 16GB desktop I limit the cache to 3 or 4GB; on my 8GB laptop, to 1.5-2GB. On my 2003 backup server (Pentium 4; 1C/2T; 3.0GHz) with 1GB DDR (400MHz) I did not set limits, but FreeBSD/OpenZFS limits the cache to 100-200MB.
    Currently I use a 512GB SP NVMe SSD (3400/2300MB/s) and a 2TB HDD supported by a 128GB SATA SSD cache (L2ARC and ZIL). Often I run the datasets on the NVMe SSD with only the metadata cached (L1ARC), because full caching only speeds up disk IO by 10% to 20%. That small difference is due to the fast NVMe SSD and my slow Ryzen 3 2200G, which needs relatively much time for compressing/decompressing the records. I don't complain, because I boot e.g. Xubuntu 22.04 LTS in ~6 seconds mainly from the cache, or in ~7 seconds directly from the NVMe SSD.
    Note that all my data storage and all transfers of changed records during an incremental backup are lz4-compressed.
    The ZFS snapshots on my desktop saved me twice from the 2 hacks I experienced this year.
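Capping the ARC as described above is done via a kernel module parameter on Linux. A sketch (the 4GiB value is just an example):

```sh
# Persistent: cap the ARC at 4 GiB (value is in bytes)
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
# Or change it at runtime without a reboot
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```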

  • @Akshun82 • 1 year ago

    Those Orico adapters are awesome.

  • @ciaduck • 1 year ago • +5

    A better way to add disks to ZFS is to add another vdev (array) to the pool. Yes, you should not simply add just one disk, but by adding pairs of disks, or even another raidz vdev with several, you can grow your storage pool without the kind of gymnastics you outlined. ZFS will stripe across the vdevs in a pool.
    It is a trade-off in convenience and cost (requiring multiple disks), but what you get is safety. ZFS, essentially, is designed to force the user to do it the "safe" way.
    This will also let you avoid the MergerFS-on-ZFS thing you alluded to, which sounds like a bad idea.
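Growing a pool by whole vdevs, as described above, looks like this in practice (pool and device names are placeholders):

```sh
# Stripe a second mirror into the pool; capacity and IOPS grow immediately
zpool add tank mirror /dev/sdc /dev/sdd
# Or add another raidz1 vdev of three disks
zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg
```

Note that removing a vdev is much more restricted than adding one, so the layout is worth planning up front.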

  • @kevinwuestenbergs7612 • 1 year ago • +4

    Not really mentioned in the video, but the cache in Unraid is read/write only while the files are in the cache pool. After the files have been moved to regular storage, you have to move them back to the cache manually. I would expect tiered storage to do both read and write caching, but in Unraid it only does write caching.
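A write-cache "mover" of the kind described here boils down to demoting files that have gone cold. A minimal sketch in the spirit of the tiered-caching script linked in the description (the paths and age threshold are hypothetical, and a real mover must also handle hardlinks, ownership, and files in use):

```python
import os
import shutil
import time

def move_cold_files(ssd_root, hdd_root, max_age_days=30):
    """Move files under ssd_root not accessed for max_age_days to hdd_root."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for dirpath, _, filenames in os.walk(ssd_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime < cutoff:          # not read recently
                rel = os.path.relpath(src, ssd_root)
                dst = os.path.join(hdd_root, rel)
                os.makedirs(os.path.dirname(dst) or hdd_root, exist_ok=True)
                shutil.move(src, dst)                   # demote to slow tier
                moved.append(rel)
    return moved
```

Run periodically from cron or a systemd timer; the reverse direction (promoting hot files back to the SSD on access) is exactly the part that plain write caching lacks.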

  • @YannMetalhead • 11 months ago

    Great video!

  • @Nalanaij • 1 year ago

    This is gold! Thank you!
    My relevant data is on my desktop machine and is synced to the NAS and notebook, so I'm independent of the cable speed, but my data is on the NAS. Though I'm no video editor, and this isn't the best idea for large quantities of data.
    Hope tiered caching support comes to TrueNAS. I'll check the Level1 forums.

  • @JasonsLabVideos • 1 year ago

    Good video. 10Gtek makes nice compatible DAC cables! I use them for my home lab and at work, no issues at all!

  • @rasaskitchen • 1 year ago

    I learned more about RAID in this video than in 5 years of running a server.

  • @Zavala-z9g • 1 year ago

    Thank you very much.

  • @AaronMolligan • 1 year ago

    That setup looks nice, and the information you presented is really helpful. I went a different route and currently have my NAS set up in an HCI solution with one NVMe drive as the system cache. I prefer Unraid, but am currently forced to use TrueNAS for my company's setup.

  • @lizzyfrizzy4969 • 1 year ago • +2

    I'm really glad you talked about NAS.
    I want to build a medium-sized (50TB) array, but out of NVMe M.2 drives, for pure speed at all costs.
    In my research I found that due to chipset limitations, the fastest RAID arrays for M.2 use 3 drives. Although some boards and many M.2 RAID cards have 4 ports, actually adding the fourth drive lowers performance. Therefore each sub-machine of the array would be limited to 3 drives. Although I could plug in 4 cards with 4 drives each, those 16 SSDs won't enjoy the full bus rate (I think?).
    Therefore, instead of building one server with 16-24 drives, the M.2 ultra-speed NAS would need to be made out of a network of machines, ideally built with the smallest board with a fully supporting chipset I can find. Well, I know nothing about networking. What manages this NAS array? Another machine with TB of RAM?
    Building a high-speed, high-performance array is more complicated than I thought 😢

    • @ShaferHart • 10 months ago

      living it up before end times aren't we

  • @cinemaipswich4636 • 10 months ago

    I thought hard about adding NVMe drives for RAM overflow (a cache VDEV/L2ARC), but it worked out cheaper and faster to just buy another 128GB of ECC RAM. My TrueNAS server uses far fewer resources by having fewer devices attached. As for the other VDEVs (metadata, log, dedup), I'll think about those later. Network speed is my next project. I see that I don't need a switch if I use direct attach via an SFP cable. I am a single user. Thanks, Wolfie.

  • @bradlee5374 • 1 year ago • +3

    Wolfgang, do you think you could make a video about your current OS selection for your server? When you were upgrading to your server rack you said that now that you have CMR drives you would try TrueNAS, but I see you are using Unraid now. I think it would be great to hear your thoughts on Unraid vs TrueNAS Core/Scale vs Ubuntu Server, and to see the process of switching the OS on a live server.

  • @wheisenberg559 • 1 year ago

    Hyper-V Core also supports tiered storage with Storage Spaces.

  • @hippoage • 1 year ago • +1

    9:50 Looks like one option was missed: LVM. It can also organize RAIDs and caching on SSD.
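The LVM caching mentioned above attaches a fast logical volume to a slow one. A hedged sketch (device, volume-group, and size values are placeholders):

```sh
pvcreate /dev/sda /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/nvme0n1
lvcreate -n data -L 3.5T vg0 /dev/sda                 # origin LV on the HDD
lvcreate -n fast -L 400G vg0 /dev/nvme0n1             # cache LV on the SSD
lvconvert --type cache --cachevol vg0/fast vg0/data   # attach the cache
```

The cache can later be detached with `lvconvert --uncache vg0/data`, leaving the origin LV intact.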

  • @baricdondarion6228 • 1 year ago

    This is my favorite YouTube channel. Straight to the point, no BS, no irrelevant talk, just the important information.
    I found out that despite always getting your videos in my feed, I wasn't subscribed. Totally fixed that.

  • @jamesbutler5570 • 1 year ago

    Running btrfs in RAID 6. Upgraded hard drives with bigger ones, swapped defective drives, changed from RAID 1 to 6. Now it's bigger than 50TB. Never had problems in more than 5 years.

  • @keshavrathore4189 • 6 months ago

    Thanks

  • @kdog8787 • 1 year ago • +2

    You can use Cat 5e with 10G; it's just not guaranteed to work. I've done it. Ironically, I had trouble with the same Cat 5e cable at 2.5G because I was using low-power NICs.

  • @kz2682 • 1 year ago • +1

    I use mergerfs and SnapRAID for my 11 HDDs, so only the HDD that is in use spins; this reduces power consumption.
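A setup like this is typically just an fstab line plus a snapraid.conf. A hedged sketch with placeholder paths:

```
# /etc/fstab: pool the data disks; "mfs" sends new files to the branch
# with the most free space, letting idle disks stay spun down
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs,moveonenospc=true  0 0

# /etc/snapraid.conf: one parity disk protects the data disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```

`snapraid sync` is then run on a schedule; since parity is computed in batches, this suits mostly-static media rather than constantly-changing data.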

  • @benedikt3880 • 1 year ago • +5

    What made you switch/extend your home server to Unraid? I remember you said in your old home server tour video that you considered Unraid too limited. Maybe that would be an interesting topic for a video as well.

    • @ShaferHart • 10 months ago

      I'm interested in hearing his rationale for the change as well. Maybe he did and the algo hasn't pushed it to me lol

  • @secretcactus4717 • 1 year ago • +2

    So did you change your server's OS to Unraid, or did you use forbidden black magic to run two OSes at the same time?
    PS: Your home server videos are great, keep it up!

  • @loadmastergod1961 • 5 months ago

    Working on upgrading my network to 10 gig now. Not sure how good it'll be with the short Cat 5e run to my porch, but once I heal from surgery and the ground thaws, I'll run a new Cat 6 line to the garage and have a full 10 gig network to my servers.

  • @MrCoffis • 1 year ago

    In which video did you talk about tiered caching? Would you consider doing how-tos in the future?

  • @alekzandru221 • 1 year ago

    Got 10GbE: 3 cards with transceivers included, a 24x1Gb + 4x10GbE switch, and OM3 cables, all for under $300. Found an Aruba 1930 for $100 and cheap HP cards; had to do some driver updates, but it all worked out.

  • @pipeliner8969 • 1 year ago

    you are so smart

  • @tabimeterable • 1 year ago • +4

    Hey, nice video *thumbs up*. An alternative to a mergerfs/copy-based cache is bcache. You just have to write a superblock to the drive in front of an ext4 partition (or let bcache handle that with a new drive) and then group the drives under mergerfs. Works flawlessly for me, is stable and well tested, and has been in the kernel since 3.10. Drives can be "easily" added to or removed from the same cache drive, and you can access the ext4 partition directly by just offsetting the mountpoint (or something, idk, but it works).

    • @tabimeterable • 1 year ago • +1

      And it is highly recommended to mirror your cache drive (e.g. 2 drives in an mdadm RAID 1), since a failed (non-RAID) cache drive is pretty bad.
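The bcache setup described above is only a few commands. A hedged sketch (device names are placeholders, and make-bcache destroys existing data on the devices it formats):

```sh
make-bcache -B /dev/sdb          # format the backing device (HDD)
make-bcache -C /dev/nvme0n1      # format the caching device (SSD)
# Attach the cache set (UUID from 'bcache-super-show /dev/nvme0n1')
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0           # the filesystem goes on the combined device
```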

  • @mrsansiverius2083 • 1 year ago

    Guys he's not changing haircuts, a new Wolfgang periodically appears and kills the old one, taking his place.

    • @robertt9342 • 1 year ago

      It’s pretty standard stuff.

  • @shanent5793 • 1 year ago • +3

    ZFS works with diverse drive capacities, and you can add any number of drives of any size to a pool without having to destroy the pool. It's been that way for 10+ years. If there is any limitation, it's in the management layer, not in the filesystem itself.

    • @ShaferHart • 10 months ago • +2

      for all intents and purposes it does not support it. Sorry.

  • @S3nra • 1 year ago • +1

    Which model is the supermicro case?

  • @sarundayo • 1 year ago

    Came for the speed, stayed for the Sanic memes

  • @ankitsinghaniya • 1 year ago

    Seems like ZFS can now support different size drives in the pool and also add/remove after initial setup?

  • @wagmi_dude • 1 year ago • +3

    I strongly recommend server-grade HBAs in IT mode rather than cheap Chinese SATA cards. I own an H310; it has 2 4-port 6Gbit channels. For SSDs there are similar cards with 12Gbit throughput. Originally I ran my ZFS on a 4-port SATA controller, but it frequently made disks resilver.

  • @ShaferHart • 10 months ago

    Without the data being mirrored/parity-protected in a btrfs RAID, I see little reason to use btrfs for your media (which is probably most of the storage). You're not going to snapshot largely static data, and you're using SnapRAID for parity/backup anyway.

  • @sufyspeed • 1 year ago

    You could also use fibre instead of Cat 6A

  • @HueMongus101 • 1 year ago

    The Supermicro 3.5" bays work with the Dell 2.5"-to-3.5" metal bay adapters, which are much cheaper than the Orico ones.

  • @hiasi94 • 1 year ago

    First really good channel with great content!
    I am rebuilding my network and want to get a new switch anyway. What do you think of 1Gbit switches with 2 SFP+ ports for 10G?

  • @nekoskylynn • 1 year ago • +1

    Just my two cents about 2.5/5G switches and fiber cable confusion:
    In my country (it's Russia, please don't hate me), 2.5 or 5Gbps switches cost about 2x more than 10Gbps switches (8 SFP+ ports!!!). I'd been planning to set up 2.5G for a looong time (the PCIe cards are dirt cheap, though). After much deliberation I just got bamboozled and set up a 10Gbps network everywhere lol.
    It was hard as hell to figure out which DAC cables are compatible, which fiber cables to use, which transceivers to use. Uhhh.
    Ended up with LC OM3 multimode fiber cables and basically any transceiver compatible with LC/MM fibre. And to answer the question I couldn't find an answer to anywhere: yes, most DAC cables are vendor-agnostic and you won't run into trouble. I am currently using TP-Link/Zyxel/Cisco switches together and all my random DAC cables are working (Cisco, noname, AliExpress cables lol).
    One problem with DAC is that it requires more power, so power consumption is higher, but it's great for rack/datacenter use because it's short and less prone to damage.
    Thanks for the video! I wish it had come out earlier lol. I am currently using mergerfs/snapraid and am looking forward to seeing how it goes with tiered caching.

    • @TuMbl4 • 1 year ago

      So, how did it go with tiered caching? Did you try it? ;)

    • @Nunoflashy • 1 year ago

      Why would someone hate you, is it because of the war, something that you don't have any direct participation in? (Unless you're in the army, of course). I get it that this is the internet and you get insulted or canceled for these petty reasons, but having to apologize for such a thing astounds me. If anyone hates you for this, and you have no connection to the war, which you most likely don't, then you owe no one an apology and it's great that you found them out so you can stay away as a result.

  • @benjiderrick4590 • 1 year ago

    Thanks! My home server is running OpenMediaVault right now, so I don't know if I will be able to set up mergerfs on it with a pair of small SSDs.
    Right now the focus is on power efficiency and savings, especially since my ISP doesn't let me expose ports to the internet, so I currently have no way to use Jellyfin, Plex or even SSH remotely.

    • @ShaferHart • 10 months ago

      You can't port forward on your router? How come? You could also look at tunneled solutions like Tailscale or Cloudflare Tunnels.

    • @benjiderrick4590 • 10 months ago

      @@ShaferHart Well, that was long before I knew I wasn't on a full IPv4 stack, which I asked my ISP about. I now run Jellyfin (and all other containers) through Tailscale whenever I'm outside, but initially the plan was a reverse proxy. 8 months in, I have doubled the storage capacity and added a "cache drive" to help reduce power consumption (and wear) when watching anime and such. For that last part, I wish I had chosen TrueNAS instead, it would have been much more versatile; here I have to manage copies of media on the SSD drive.

  • @BTA_KeepItFun • 1 year ago • +7

    Found your channel by happy accident. Very helpful and well-written videos! I'd be interested to hear your take on OpenMediaVault (OMV6) if you've checked it out. Personally I've been quite happy using OMV for some years now, but the new UI is a downgrade from the previous one (OMV5).
    Happy early winter time =)

  • @defyiant • 1 year ago

    Question: I have gigabit cable internet. Will I be able to take advantage of 10Gbit speeds, or is my ISP too slow?

  • @MarkJay • 1 year ago

    When I look at some of my SATA SSD specs, they say 5V at 1.5A, i.e. 7.5W. That seems just as high as a 7200rpm HDD.

  • @smitler • 1 year ago

    Love your videos! I'm quite new to DIY server/home networking, but have enough experience to follow along and learn from you. I have a question:
    So you're running Unraid as your main OS for your server, right? And what filesystem are you using with mergerfs/SnapRAID?
    I currently have my hands on a Cisco UCS C240 M3 (2x Xeon E5-2600 CPUs, 64GB 1866MHz RAM, 4x Gb NIC) server with 22x 2.5" SAS drives (a mixture of 900GB and 1.2TB sizes), along with an onboard PCIe dual-SATA card (running 2x 240GB SSDs). I'm currently running Ubuntu with mergerfs/SnapRAID, and I mainly use my server for Jellyfin/Home Assistant/Frigate (for my CCTV NVR storage and object detection). I've picked up a lot of tips from your videos, but now I'm wondering whether I should be running Unraid rather than struggling with Ubuntu and setting everything up manually there, as my Linux knowledge is limited and it takes me a lot of time to get things working together. I guess I'm also trying to see if I'm utilising my setup to the best of its ability and would love your take on it.
    Where's the best place to chat with you about this?

    • @WolfgangsChannel • 1 year ago

      Unraid is great if you want a "just works" solution. You can get a trial license key and see if you like it before committing to it

    • @smitler • 1 year ago

      @@WolfgangsChannel Awesome, thanks buddy! Also, what filesystem do you recommend using with mergerfs?

    • @WolfgangsChannel • 1 year ago

      I use XFS for hard drives and ext4 for SSDs. You can also use ZFS for SSDs

  • @ivosarak959 • 1 year ago

    If a DAC is desirable but a bit short, an AOC will do the trick as well, with much longer runs.

  • @swistak0220 • 1 year ago

    I was just thinking about all this.
    I wanted to go with a managed switch, but the markup for 2.5GbE is huge.
    I also want to keep my ZFS pools, so I found autotier by 45Drives. Unfortunately there is not much info on it.

  • @L0rDLuCk • 1 year ago

    ZFS is king! I've used it since 2007 and never lost a single file, even though I've lost multiple HDDs in the past 15 years. It is by far the best filesystem on earth; no one should consider any other filesystem. If you put enough RAM in your machine, you also don't need any SSDs for caching; I saturate a 10Gig link with no problem for any file transfer just by having enough RAM. Scrubs are also much faster with more RAM! And even Linus accepted that Unraid is crap and shouldn't be used in any situation!

  • @mikedoth • 1 year ago

    I need to get sponsored if I want some sweet hardware :-)

  • @brookerobertson2951 • 5 months ago • +1

    I run deathwish RAID, "RAID 0". I also run suicide Linux as the OS ("it deletes the whole system if you enter one incorrect command"). It's like driving your car at full speed with no seatbelt. But it's okay, because it's only hospital computer systems, not my personal ones. Makes my boring IT job way more exciting.

  • @kazumakazuma5814 • 1 year ago

    Getting a USB network card working with Unraid can be an issue depending on the chip; 6.10 or so broke the drivers for mine.

  • @jasongreenwood3260 • 1 year ago

    I'm using a QNAP 6-bay NAS for my Plex server.
    I just have 4 drives currently in RAID 1 (paired as two separate mirrors). No pool. No SSD cache.
    It has nothing bigger than 1080p.
    It seems to work okay, but networking has never been my strong suit (which is why I watch your channel).
    I also do heavy photo editing and have tried to work from this NAS, but it's slow. So I tend to edit locally and then upload for long-term storage.
    It has 2.5Gb Ethernet, but I have to upgrade my switch.
    Would something other than RAID 1 work better here?

    • @Faddermakker • 1 year ago

      I don't think throughput is the issue here. Your NAS probably lacks random-IO performance, i.e. random IO rather than sequential reads/writes. You could set up a network share on another computer backed by SSD storage, try to edit the same photos there, and compare performance, instead of changing your NAS setup right away.

  • @ashishpatel350 • 1 year ago

    FIIIBBBBBBEEERRRR

  • @mamdouh-Tawadros • 3 months ago

    Forgive me a simple question: if you have an SSD to boot from, can you still benefit from another SSD as a cache?

  • @scottstamm7022 • 1 year ago

    Is there any point to a caching drive for spinning disks if you have a dedicated RAID controller with a battery and onboard cache?

  • @Bixmy • 8 months ago

    The thing with 10G is: if you only need 10G on the Windows machine, just get two 10G NICs and direct-connect them, manually assigning IPs outside the DHCP range, and have the other connections go through 1G or 2.5G as usual.

    • @Bixmy • 8 months ago

      You could add a 10G switch later too if you want it for other machines.

    • @Bixmy • 8 months ago

      With this method you could pay only about $60 for a direct 10G connection from workstation to NAS. Well, this only works if you're the only one using the NAS, though.

    • @Bixmy • 8 months ago

      A 1m DAC for $20 and two 10G NICs for $20 each, so $60.

  • @leo11877 • 11 months ago

    Is this recommended for large 50-90GB single files?

  • @tomdillan • 1 year ago • +1

    If your motherboard has dual gigabit NICs, how do you combine (bond) them into one for faster transfers?

    • @WolfgangsChannel • 1 year ago

      Yes, as long as those two ports actually go through two separate NICs. Your switch will also need to support port bonding/binding
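For the bonding question above, a systemd-networkd sketch (interface names are placeholders, the switch-side LAG must be configured to match, and note that a single TCP stream still uses only one link, so bonding mainly helps with multiple clients):

```
# /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4

# /etc/systemd/network/20-bond-slaves.network
[Match]
Name=enp1s0 enp2s0

[Network]
Bond=bond0

# /etc/systemd/network/30-bond0.network
[Match]
Name=bond0

[Network]
DHCP=yes
```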

  • @brandonedwards7166 • 1 year ago

    It would probably be better to jump up to 40Gb based on price: dual-port 40Gb cards are about $10 on eBay, and a 40Gb 8-port switch is about $40. It's a little more work to configure, but faster and cheaper.

    • @WolfgangsChannel • 1 year ago

      Yep, but careful, older 10+Gbit cards run hotter and draw a lot of power

  • @omarthatha • 1 year ago

    Hey Wolfgang, any recommendations/advice on building a home lab outdoors, e.g. on a balcony? What should I be looking for and what should I consider?

    • @WolfgangsChannel • 1 year ago • +1

      Depends on how the weather is where you live. I personally don't have any experience with that, but Mikrotik has some "rugged" outdoor switches, and I'm pretty sure you can get an "outdoor"/embedded server or even just a PC case.
      Still, that will only protect your electronics to a certain extent, and if it's very humid or rains a lot, that might be a problem.

    • @omarthatha • 1 year ago

      @@WolfgangsChannel Appreciate you taking the time to answer, buddy. I'm located in Estonia xD; it's currently -11°C, and I was thinking of putting my whole rack into a DIY insulated wooden box with a few fans to control humidity, but it's a bit terrifying lol

    • @WolfgangsChannel • 1 year ago • +1

      Yeah, I wouldn’t do it unless you have rugged equipment with a high enough IP rating. Which isn’t cheap

    • @omarthatha • 1 year ago

      @@WolfgangsChannel Yeah, I'm leaning to the skeptical side on the outdoor setup. Alright, time to do it Wolfgang-style :) Have a good one.

  • @neverwasthere • 5 months ago

    It sounds like you talked about SSD caching for software RAID setups, but how do you implement an SSD cache on a hardware RAID 5 setup? I have a ThinkServer TS440 with Windows Server 2019 on a separate SSD and data on 4x 4TB HDDs wired to an LSI 9364-8i hardware RAID card in RAID 5. How do I add an SSD as a cache, and where do I enable it: in the hardware RAID card's controller or in the OS?

  • @oguime • 1 year ago

    I enjoy your videos, but noticed in WD's documentation that the 2500 TBW figure refers to the 4TB drive version. For a 2TB drive it is 1300... Is that still good enough? How does it compare to an enterprise SSD?

    • @dgsprysoup • 1 year ago

      Enterprise SSDs usually have endurance ratings in the double-digit petabyte range, but 1300 TBW is still quite good for usage and reliability.

    • @WolfgangsChannel • 1 year ago

      Yep, basically what @DGsprysoup said. In my experience, consumer 4TB SSDs are usually rated around ~1000 TBW

  • @robertt9342 • 1 year ago

    Unraid now has ZFS support as of 6.12.0.

  • @roysigurdkarlsbakk3842 • 1 year ago

    What's wrong with mdraid? You also have lvmcache; not tiering, but still.

  • @pedrorocha6225
    @pedrorocha6225 11 months ago

    Here's my idea/question: I have a mini PC running Plex with an internal SSD, with room for another SSD. My files are on one (for now) large HDD. Is there any way to do the following: when Plex asks for a file from the HDD, I would like the file to go to RAM or the SSD and be played from there, so that the HDD only reads for a few minutes and then stops. How can I do this?
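Plex has no built-in option for this, but a minimal sketch of the idea (copy the file into a fast RAM- or SSD-backed directory, such as a tmpfs mount like /dev/shm, then play from there) might look like this; the `prefetch` helper and the paths are hypothetical:

```python
import shutil
from pathlib import Path

def prefetch(src: Path, cache_dir: Path) -> Path:
    """Copy a file from slow storage into a fast (RAM/SSD) cache
    directory, then return the cached path so playback reads only
    from the cache and the HDD can spin down."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    dst = cache_dir / src.name
    # Skip the copy if an identical-sized cached copy already exists
    if not dst.exists() or dst.stat().st_size != src.stat().st_size:
        shutil.copy2(src, dst)  # one sequential HDD read, then the disk idles
    return dst
```

On Linux, a tool such as vmtouch can achieve a similar effect by pinning the file into the page cache without copying it anywhere, though the file is then evicted whenever memory pressure demands it.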

  • @joegaffney8006
    @joegaffney8006 a year ago

    Random reads and writes can still benefit from SSD caching, even on 1Gbit

  • @roysigurdkarlsbakk3842
    @roysigurdkarlsbakk3842 a year ago

    CAT6a is rated to handle 10Gbps over up to 100m, so it shouldn't be a problem.

  • @DonSalieri4
    @DonSalieri4 a year ago

    Hi. You said that RAID 5 and 6 are not stable with BTRFS, but Synology uses them in their NAS units, including their more expensive models. I just bought a Plus model; should I be worried?

    • @WolfgangsChannel
      @WolfgangsChannel  a year ago +1

      I've been using RAID5 with my SSDs in Unraid for a couple of months. Like I said, the documentation doesn't recommend using BTRFS RAID5/6 for anything important, but at the same time, a lot has changed in 5 years and the documentation doesn't seem to reflect that.

  • @shalak001
    @shalak001 a year ago

    What's your take on OpenMediaVault? Have you tried it? I personally went with it because it's a Debian-based distro, unlike TrueNAS (FreeBSD) or Unraid (proprietary).

    • @WolfgangsChannel
      @WolfgangsChannel  a year ago +1

      TrueNAS Scale is based on Debian

    • @shalak001
      @shalak001 a year ago

      @@WolfgangsChannel wow, that's a new one! And it's been out a year already... looks interesting. Thanks!

  • @postnick
    @postnick a year ago

    I need to buy a PCIe expansion card, as my old system only has 2 SATA 6Gbps and 4 SATA 3Gbps ports.
    My goal is to run exclusively SSDs in my Unraid box, but I'm not sure if it knows how to deal with having no HDDs.

  • @robertt9342
    @robertt9342 a year ago

    What happens if the cache drive fails? Is what is "preferred" on it lost? For example, if I set my Plex Docker app to preferred and my SSD dies, are Plex and its library gone?

  • @user-kl6qj9lc5y
    @user-kl6qj9lc5y 7 months ago

    Sounds like a good way to kill the flash drive if you are doing constant writes.

  • @cig_in_mouth3786
    @cig_in_mouth3786 a year ago

    How about this switch: the Netgear M4300-8X8F (XSM4316S)? I really like that concept. If I buy a Mac mini, it supports 10 Gig over Cat cabling, but my refurbished server has a 10 Gig SFP port, so I am confused. Thanks for the video

  • @gold-junge91
    @gold-junge91 a year ago

    Can you show us your Unraid setup? I've seen that you have a Time Machine backup on your Unraid system. I played around with Unraid 2 years ago and only have a little experience with it

  • @ex1tium
    @ex1tium a year ago

    I'm planning to upgrade my Asrock Deskmini A300 (Ryzen 3 3200G) and it will have 16GB RAM, 2x NVME 500GB + 2x 500GB HDD. I think I will run Proxmox and ZFS but I'm unsure what would be the optimal caching/pool/raid setup for me. Is Unraid better/easier alternative? I aim to have 1TB+ storage and be able to recover from 1 disk failure. I'd like to run Home Assistant OS + Ubuntu for Docker containers and app development. I'm planning to host Nextcloud and/or some network storage too. How should I configure the system?

    • @uncreativename9936
      @uncreativename9936 a year ago +1

      You'd probably be better off with unRAID, since it doesn't look like it has a PCI slot to upgrade networking. Best bet would probably be to install unRAID, set the two SATA HDDs up in a RAID 1 pool (or whatever the unRAID equivalent is), with one NVMe as a cache disk and the other as the OS drive, and run Home Assistant as a VM in unRAID.

    • @ex1tium
      @ex1tium a year ago

      @@uncreativename9936 Thanks! I'll look into it. I read that ZFS is really hard on consumer SSDs because of the volume of data it writes, so I'm leaning towards Unraid.

  • @pieter-yt
    @pieter-yt a year ago

    I've been running 2 2TB NVMe drives in RAID 0 for a few years now and it's great. Of course, I keep daily backups of the whole system in case of a failure.
    Still don't recommend doing this unless you like living on the edge, waiting to be kicked off it and having to recover your whole system from backup :3
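For perspective on that risk: striping n drives means losing any one of them loses the whole array, so (assuming independent failures, a simplifying assumption, and a hypothetical 2% annual failure rate per drive) the yearly chance of array loss roughly doubles with two drives:

```python
def raid0_annual_loss_prob(per_drive_afr: float, n_drives: int) -> float:
    """Probability that at least one of n independent drives fails in a
    year, which for RAID 0 means the whole array is lost."""
    return 1 - (1 - per_drive_afr) ** n_drives

# With a hypothetical 2% annual failure rate per drive:
print(round(raid0_annual_loss_prob(0.02, 2) * 100, 2))  # 3.96 (% chance of array loss)
```

This is exactly why the daily backups mentioned above are non-negotiable for a RAID 0 setup.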

    • @ShaferHart
      @ShaferHart 10 months ago

      that's child's play bro

  • @kiamaz254
    @kiamaz254 a year ago

    Noob question: I'm new to DIY NAS. Are you building all this on FreeNAS or Ubuntu?

    • @ShaferHart
      @ShaferHart 10 months ago

      Looks like Unraid

  • @TheJaniable
    @TheJaniable a year ago

    What about lvm/lvmcache?

  • @good_old_tam
    @good_old_tam a year ago

    Do you think it's worth it to plug a home server into my router's 10G port in order to improve access over WiFi?

    • @WolfgangsChannel
      @WolfgangsChannel  a year ago

      No, I don’t think you’ll see much improvement over WiFi by doing that

  • @csn04
    @csn04 2 months ago

    What about bcache/lvmcache?

  • @henri470x
    @henri470x a year ago

    Can we please have a tutorial on how to make your RPi wireless KVM? Thank you :)

  • @ewenchan1239
    @ewenchan1239 a year ago

    QNAP makes great NAS systems that are easy and simple to use.
    The biggest problem that I have with QNAP NAS units is that their lack of expandability leaves a LOT to be desired.
    For example, I already have 100 Gbps networking deployed in the basement of my home, but unfortunately, NONE of my QNAP NAS units has a PCIe 3.0 x16 slot (and neither does my older, Xeon-based TrueNAS server).
    And whilst I've thought about going to 10 GbE, the lack of expandability on the QNAP NAS systems still presents a problem. (And rather than spending my time debugging/fixing user permission issues between iSCSI/NFS/SMB on a CentOS server, I'd much rather have the systems up, running, and working (hosting data already) vs. them being down while I do said debugging.)
    Besides, not all of my clients are on the 100 Gbps network yet. Windows has support for the NICs, but it's really not necessary for day-to-day usage. (But for my HPC/CAE applications, it is an absolute must).
    So, I'm still living life in the slow lane, only working with a single 1 GbE NIC for the most part.