So, if I've already been running Unraid for a long time and have lots of Docker containers, and I want to add a separate cache drive just for Docker, can I simply change the appdata and system shares to use the new cache drive? Will the Dockers continue to run off the old drive until the mover starts, and then I assume it will move the data from the old cache to the new cache? Or do I have to manually move everything to the new cache drive and then point the shares at it?
Back up your app data using the Backup/Restore Appdata plugin from Community Applications, delete your docker container images, restore your app data to your new drive, spin up your docker containers and make sure the storage location is set to your new drive (in the configuration section for each docker), and everything should be exactly as it was before the wipe.
Typically yes. I can't imagine a situation where you would want it to happen any other way. But if space is limited and you're migrating data from, say, NTFS, you can gradually grow the array pool -- adding one empty disk, moving data over to empty out another drive, and repeating.
More great content from the Invader! I am new to Unraid and I find it quite a conceptual change compared to conventional redundant storage systems. Can anyone chime in and explain why it's missing the possibility to add multiple arrays? I plan to use a 4-disk parity-protected array for data storage and another 2-disk mirror for recordings from my security system. Can Unraid handle this use case? Should I just create a pool for my 2x4TB disk mirror? Maybe this scenario is exactly why they have the "CACHE: only" option in the dropdown list? Thoughts, anyone?
Unraid doesn't use striping, so data is stored on individual disks that are simply combined and logically appear as some number of virtual disks (shares). Having multiple arrays would require you to care about physical drives and manage what data is stored on which drives, which isn't really the point of Unraid. If there's a specific reason why you need to store certain data on certain disks, I suppose you could use the cache disks as a special category. That's typically used for data requiring especially high performance, and with SSDs.
When I do this with an already running Unraid system, does Unraid move the existing files to the new cache pools when assigning them to the shares, or do I have to move the old data by hand to the new location? (And in that case, is this possible without problems?)
An answer would be helpful. I think the majority of people already have a running Unraid server with Docker containers and VMs, but may want to expand to multiple pools like in this tutorial.
I know it's considered best practice to use prefer rather than only, and I understand and agree why. But I use only for some shares because I would rather have the write fail. That would catch my attention really quickly so I can investigate what's going on. Otherwise I might not notice for some time. Rather than get rid of that option, I wish Unraid would just put a notation or tooltip or something telling users it is not a recommended setting. Thanks for all your videos, very much appreciated.
What if I have 1 parity & 6 data HDDs and want to pull 3 data HDDs out of the array, then take each of those 3 HDDs removed from the array and use them as shares? Can I make each of those data HDDs a single-drive pool without losing data?
I'm sure you have a video for it already, but when adding a new cache pool for, say, my docker containers, what is the best way to move them off the original cache and onto the new cache pool?
Hey, I did this a few weeks ago and it was pretty straightforward. I stopped all docker containers and deactivated Docker. Then I set my appdata and my system share to use cache: yes. After that I activated the mover and all data was written to the array. Now you can install your new disks (or use already installed ones) and create a new pool. After the pool is created, you go back to your share settings and set appdata and system to use cache: prefer, and of course make sure you use the new pool. Let the mover do its thing and activate the Docker service again :)
@@Skywalker111abc Awesome, thank you so much for the reply. Last time (several versions ago) I upgraded my cache SSD and ran into some issues which required me to download all the docker containers again.
No problem, and that sucks. One time I wanted to reduce the number of devices in my cache pool from 3 to 2 without knowing what I was doing. It was a long night :D
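If you'd rather do the copy step by hand from the command line instead of waiting on the mover, here's a minimal sketch. The `migrate_share` helper and the pool paths are hypothetical; on a real Unraid box the pool mounts would be something like /mnt/cache and /mnt/cache_nvme, and you'd still stop Docker and any VMs first, as described above.

```shell
#!/bin/sh
# Hypothetical helper: merge-copy a share's folder from one pool mount
# to another, then remove the source only if the copy succeeded.
# On Unraid the mounts would be e.g. /mnt/cache and /mnt/cache_nvme.
migrate_share() {
    src="$1/$3"   # e.g. /mnt/cache/appdata
    dst="$2/$3"   # e.g. /mnt/cache_nvme/appdata
    [ -d "$src" ] || { echo "no such share folder: $src" >&2; return 1; }
    mkdir -p "$dst"
    # -a preserves permissions/ownership/timestamps; "/." merges contents
    cp -a "$src/." "$dst/" && rm -rf "$src"
}
```

The `&&` matters: the source is only deleted if the copy returned success, so a half-finished copy never destroys the original.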
Please don't do what I did and type "rm -r", forgetting the location of the files you want to remove. Ended up deleting the entire contents of my boot USB drive, now have to start again and rebuild the entire parity.
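One defensive habit that would have prevented this (a generic shell sketch, not Unraid-specific, and the `safe_rm` helper name is made up): keep the target in a variable and use the `${VAR:?}` expansion, so the shell aborts with an error instead of expanding an empty variable and letting `rm -r` run against the wrong place.

```shell
#!/bin/sh
# If TARGET is unset or empty, ${TARGET:?} makes the shell error out
# before rm ever runs, instead of deleting whatever "" resolves to.
# "--" stops rm from treating an odd filename as an option.
safe_rm() {
    TARGET="$1"
    rm -r -- "${TARGET:?refusing to delete: no target given}"
}
```

Double-checking with `pwd` before any recursive delete is the other half of the habit.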
Having btrfs is fine for most things, but for Nextcloud? By even the btrfs devs' own statements, it's not recommended for databases, as performance is just terrible. I only studied up on it after doing some performance testing on my install and seeing some really odd IO behavior - even with writes aligned, btrfs just isn't really suited for running DBs, and even some VMs have issues. I really wish they'd allow LVM XFS mirrors/arrays - none of the logical volume RAID commands seem to work, nor do the mdadm counterparts. Until then, I guess it's ZFS or bust... which I love, but isn't really ideal for a system running a gaming VM due to ZFS ignoring CPU isolation :-(
I don't run Unraid but I follow your tutorials and appreciate the contribution to the linux community. Proxmox needs a similar contribution.
This is THE most helpful video of yours for me over the years. I've watched them all. I just started over completely with my server of 3+ years based on this.
Wanted to drop in here and say that you are doing great work educating noobs on unRAID and answering all the critical questions. If I hadn't found your guides, I'd be ditching unRAID.
All hail the unraid king. 👌 I'd pay good money to have space invader do an audit of my system ❤️
Check out his patreon. He will definitely remote into your system to help you out. He has helped me a bunch!
@@zeinnaja Damn thats awesome!
This video kicked some serious ass! As a new user to Unraid, it was clear as to why you were making the configuration changes you were as well as how you made them. I couldn't click subscribe fast enough. Thanks for the great Unraid series!
I've been using 3 cache pools to minimise writes to my array and NVMes.
Cache 1 - Downloads - downloads folder for all Linux ISOs - 2x500GB HDD in RAID 0 (bought from Facebook Marketplace for dirt cheap, mostly at the end of their life)
Cache 2 - Media - cache for imported ISOs, moved to the array every week - 2x1TB in RAID 1 (old array disks)
Cache 3 - Appdata - 2x500GB NVMe in RAID 1
I would say this is the best feature of unRaid so far!
I was thinking this as Ed was talking about naming the cache pools to the drive type. I think that naming the cache to the cache PURPOSE and drive type would be better.
Shares Tab:
Name      Cache    Pool
appdata   prefer   Cache_Appdata_nvme
domains   prefer   Cache_VMs_nvme
isos      yes      Cache_ISOS_ssd
system    prefer   Cache_System_nvme
great videos! Just wish they came out more frequently. Your last video said this one would be out "tomorrow", was starting to worry something had happened to you lol. Anyway, awesome work and thank you!
Honestly, I thought exactly the same thing when he said he would post 1 video a day for the 6.9.x series at the beginning and then it went quiet for a while. I thought something horrible happened to him over the bank holiday weekend in the UK and started to worry haha. It looks like this is just his style for releasing videos, better that it's done properly, I've gotten used to it now. It's worth the wait :)
@@AbsTheUploader Right, better done properly for sure. But maybe don't mention when the next video should be out, since otherwise people expect it. Or stick to a regular weekly schedule or something, so it gives reasonable spacing and people know when to look for the next one.
Yes, sorry about the delay on the other videos. They were all recorded but not fully edited, and whilst trying to tidy my desktop I deleted a lot of footage by mistake. Sadly I hadn't transferred the data onto the server as I normally do after finishing a video, because this was a few videos; I lost the data and had to re-record them. 😭 Each video takes about 20 to 30 hours normally, so it really set me back and there was no way I could get them out how I planned. In hindsight I probably should have made a small video to say there was a delay, so sorry about that. Next time with a series of videos released over a few days, I will upload them all first and set an auto schedule to release them!
@@SpaceinvaderOne Hey, no worries man. Even if you said something like "yeah, I just wasn't feeling up to making the video right away" that would be totally fine; after all, you're putting out free content for everyone's benefit, so I'm not going to be a choosy beggar. I'm mostly just glad nothing terrible happened to you during the mysterious delay; with Covid and all, you never know! The videos are top notch! Sorry to hear you had to redo all that work though, that's a huge bummer. Looking forward to the next ones!
You’re a legend mate. Thanks for the help all these years.
Great stuff as always! Thanks! Maybe you could do a tutorial on how to change to this (new) setup if you already are running the default (old) setup? Probably a lot of moving things around in very specific order? If you have the time 😁.
I 2nd this. This would be helpful
Agreed! Would be awesome to see how that is best done!
I'm getting ready to add a 2nd cache drive (I currently have a single 2TB SSD). My plan for any share that is currently set to CACHE ONLY is to set it to CACHE YES, then invoke the mover. This will move the share onto the array. I'll then add my 2nd cache drive (another 2TB SSD), and once things settle, I'll change the settings on the shares I previously modified back to CACHE ONLY. This will move the shares back to the cache drive.
@@zcranium72 spaceinvaderone did this in a tutorial. worked like a champ
This was exactly the video I needed when trying to decide what drives to buy for my first unraid setup. Thanks!
Thank you for this video tutorial. I'm a new Unraid user and this helped me tremendously.
Nice job man, seems like software has gotten a lot more complex since the Fruity Loops days that I remember. Very helpful, thank you.
Thanks for the demo and info, this is a sweet system setup. Have a great day
Thanks for making the video! Invaluable advice, as I'm about to build my first unraid server.
Thank you so much for this video. This is EXACTLY how I ended up setting mine up, even down to the NVMe drives. Thank You Thank You!
At 15:58, where you mentioned the motherboard only being able to accommodate 2 NVMes, why don't you consider PCIe cards that can handle even 4 of them and act as HBA adapters without a RAID level? Of course, I'm not aware whether BTRFS and XFS need direct access to the disk like ZFS, or whether they have a problem when there is an extra layer of RAID underneath.
Awesome cache video! For a video down the line, can you do one about how to add multiple NICs and how to set up and use multiple VLANs?
Always enjoy the videos man. You do us a service!
Excellent! I am new to Unraid and learned a lot!
What about if you already have data on the cache drive itself? Can you change the configuration, and will the mover shift the data accordingly? Also, what about if you have a VM assigned to an unassigned drive? Do you need to move that over to the domains share prior to creating the cache?
Setting the shares to "No" use cache and then invoking the Mover will move the data off of it. Once done, you can reassign the disks, or use Midnight Commander via PuTTY or the CLI to manually move things around.
This is amazing stuff. I am ditching my Windows with Linux VM and going to Unraid. Perfect timing. The video with GPU passthrough is going to help tremendously also. Thanks!
I really appreciate this video for learning how to set up more than one cache pool. For my setup, since it's a new build, I wanted to use an HDD for one of the pools, to transfer media files to it and later have them moved to the array. Plus, is there a video that shows how to create a Windows VM to run, say, media server software (I plan to use something other than Emby and Plex)?
This is great! It has all the info in one place! Thanks!
Question about ~17:10: I don't think mine registered the SSD as a cache drive, so I think I've been running the past couple of days without one. If I go and try to set it up now, will I have to move/copy anything to the new cache drive from the array, or will the system do it itself? (Yes, I have a couple of dockers set up; no VMs, not at this time.)
Wonderful video! Would you be able to lay out the process for migrating an existing set of shares (appdata, system, etc...) to a new cache pool for existing users?
I would also appreciate this. Love your videos, built my server from your videos 👍👍
Exploring this myself. I set every single share to `Yes` and then started the Mover. This will move all of my stuff to my array; then I can set everything to No, then I can kill the cache and start over with new cache pools.
This video is great and helps me a lot as a total Unraid noob. Unfortunately, many of these options changed in Unraid 6.12 and I am quite confused now.. :( Especially with primary and secondary storage.
Great video and easy to follow. The problem I am facing is that none of the shares move to the created cache pool. They all stay at /mnt/user/ and not /mnt/*new cache pool*. I followed your video exactly, but my shared folders do not move.
Great video Ed, I always learn new things about best ways to setup unraid
"Only" is an amazing option. I have 2x 16TB drives and 1x 8TB drive for my array, no parity. I have 1x 2TB NVMe as the cache for the data folder for my array. I then have 2x 4TB drives in a cache pool for my backups, set to only. I also have 2x 512GB SSDs set to only for my appdata and system. This means my array has no protection, but my backup/appdata/system folders are protected.
Great video. Looking forward to the next one.
Is there any real-world advantage to separating the VM disk from the Docker disk, or is it just "it could theoretically be faster"?
Excellent video as always, very informative and useful, thank you so much!
I read that Apple prefers the 'fruit' module on Samba running on a Linux server, to get FCPX content running better between the server and FCPX.
What do you think? And how do I set this up? Just search for: smb macOS and Linux server vfs fruit
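The commenter is referring to Samba's vfs_fruit module, which implements Apple's SMB extensions for macOS clients. A minimal sketch of what could go under Settings → SMB → "Samba extra configuration" on Unraid; the exact tunables are worth checking against the vfs_fruit man page for your Samba version:

```ini
[global]
    # Enable Apple SMB2 extensions (AAPL) plus filename/xattr helpers
    vfs objects = catia fruit streams_xattr
    # Store macOS metadata in NTFS-style streams instead of ._ files
    fruit:metadata = stream
    # What the server advertises itself as to Finder
    fruit:model = MacSamba
    # Clean up AppleDouble ._ files created by older clients
    fruit:veto_appledouble = no
    fruit:delete_empty_adfiles = yes
```

The `vfs objects` line is the important one; the fruit:* tunables are refinements you can adjust per share.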
Another very informative vid, thanks m8 :)
Wonderful video from the Master himself. Thank you!!!
I love your videos! I would be very glad if you could make a video covering how to gather appdata folders that are split across pools/disks.
Great video Ed! Thank you for always sharing them with us!😎JP
Thanks for this. But why not just call "cache_nvme" the name "docker_nvme" to make it consistent with "vms_nvme"? Also, how do I move any existing docker data/images from the array to the new "cache_nvme"? Thanks in advance.
Great work Ed.
You deserve all the likes.
@SpaceInvaderOne could you do a quick update on this video using ZFS?
Why do you use XFS for the NVMe cache drives? Is it better with one drive, or why?
I just set up an Unraid NAS this week; new to the game of homelabbing overall. Does this initial instruction still hold for the newer Unraid versions, or is there some other guide you'd recommend?
Can't find a video for a full server rebuild, as in starting all over on a server that's already set up. I want to add more drives and just start over. How do I do this? I'm new to this!!
Maybe a stupid question: when something prefers cache and data is stored on the cache, is a copy ALSO left on the array? Or is there no fault tolerance as long as it's in the cache?
You mention that we should limit writes to the main array, so what is the difference in downloading directly to the array, rather than use a cache pool and copy over later, wouldn't this result in the same amount of writes? Referring mainly to media for use in a plex setup.
Edit: You answered this in the next video. Thank You!
Fantastic video! Thank you so much!
How would you go about backing up a secondary cache pool, e.g. for Plex metadata?
Is the Dynamix trim plugin still necessary? And what is the advantage of choosing xfs for a single cache drive instead of btrfs?
Someone please correct me if I am wrong here. With Unraid 6.9+, btrfs now includes the mount option "discard=async", which is great for SSDs since it will trim automagically. With that said, wouldn't we want to format the cache drives as btrfs and not xfs? That way Dynamix SSD Trim is not needed anymore. The only reason I can think you wouldn't want "discard=async" is performance, but I have no idea how much of a performance hit it would be.
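If you want to verify whether a btrfs pool is actually mounted with async discard, you can check the mount table. A small sketch — the `has_async_discard` helper is hypothetical and takes the mounts file as a parameter so it stays generic; on a live box you'd point it at /proc/mounts and a real mount point such as /mnt/cache:

```shell
#!/bin/sh
# Return 0 if the given mount point appears in a /proc/mounts-style
# file as btrfs with the discard=async option, else 1.
has_async_discard() {
    mounts_file="$1"; mount_point="$2"
    awk -v mp="$mount_point" '
        # /proc/mounts fields: device, mount point, fstype, options, ...
        $2 == mp && $3 == "btrfs" && $4 ~ /(^|,)discard=async(,|$)/ { found = 1 }
        END { exit found ? 0 : 1 }
    ' "$mounts_file"
}
```

Usage would be e.g. `has_async_discard /proc/mounts /mnt/cache && echo "async discard on"`.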
If I have a DRAM-less SSD with a slower sustained write, and a much faster SSD with good write speed, where do you suggest they go? appdata/system for the slower, and saving the faster SSD for the write cache? Also, I just created the array as above. I have my main share set to "Prefer Cache" and files have been written to the main array for Months. After running mover, it's moving these files to my write_cache SSD.
Ed, great video as always. I have a maybe unrelated question: I set up my Unraid on an HPE DL380p Gen8 machine. I installed 394GB of RAM, but Unraid is showing only 236GB usable. I know it's an unfair question, but do you have any idea what may be causing this? In the syslog there's something about RAM being reserved and remapped, but the BIOS on my server does not have a setting for memory remapping. Thank you.
Thanks Ed - could you please comment on the hardware-level configuration of drives? E.g. I have six SATA ports on my motherboard and four more on a PCIe card. Presumably it's best if groups of drives that work together are at least using the same I/O on the motherboard?
In your example of a cache of two mechanical drives in RAID 1, would it hurt the performance of those (already slower) drives if one was connected to the motherboard ports and one to the PCIe SATA ports? Or, in your experience would this be any less reliable, if not noticeably slower or faster?
I understand from your other videos that the allocation is indifferent to the ports, as it uses UUIDs to determine which drives go in which array or cache; but as I'm still adding drives to my server, I wanted to get a sense of what was and wasn't important at the physical connection level. Cheers.
Thank you. Simply Thank you!
Could you have the Nextcloud share set to 'Prefer Cache' so that it just keeps the files on the SSDs with redundancy?
excellent video as usual.
I need to set up two drives, one a copy of the other. Do I set up both drives as disks 1 and 2, or do I do a parity drive and disk 1?
The cache_nvme seems to be quite under-utilised? What else would you suggest for sharing that cache?
You mention that you can use plugins to backup appdata and VMs, but if you store your nextcloud in a different share, that's not getting backed up as far as I know. Got any tips for keeping a solid backup of that? Currently thinking I'll just toss it in my Duplicati.
🚀🚀
I moved share folders as instructed but they do not show in the cache pools. What did I do wrong?
Great tutorial !!! 👍👍
Learned a lot!
This is great, if I can get it to work correctly. I have 2 cache pools with one drive each, cache and cache_nvme, but they both have the same folders shared and fill up at the same rate. This happens even when I have set separate shares to only be handled by different cache pools, as you did in the video. :( Not sure how to remove the unused folders. Also, if you have pools of one drive, you get the "!" warning for unprotected data.
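For a situation like the one above, it helps to see which pools actually hold folders for a given share. A generic sketch — the `locate_share` helper is hypothetical and takes the base mount directory as a parameter; on Unraid that base would be /mnt, where cache, cache_nvme, disk1 and so on are mounted:

```shell
#!/bin/sh
# Print every mount under the base directory that contains a
# top-level folder for the given share name.
locate_share() {
    base="$1"; share="$2"
    for pool in "$base"/*/; do
        # $pool ends in "/", so $pool$share is e.g. /mnt/cache/appdata
        [ -d "$pool$share" ] && echo "${pool%/}"
    done
}
```

Running `locate_share /mnt appdata` on a live server would list each pool or disk holding an appdata folder, so you can tell which copies are stale before removing anything.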
Is there a Part 3 in the series? I can only find Pts 1, 2, 4, 5. Are there 6 in all?
Can I assign USB drives to the pool? I'm planning to farm Chia, but I didn't want to use my array disks, lol.
Why wasn't there an option for ZFS in the cache disk file system?
I actually use the "only" setting for some of my shares and like it.
What kind of shares would you use this on?
Cool. But what if I have an existing server and want to change the location of appdata and docker.img to the NVMe cache pools? Is there a workaround?
This is amazing info
Space Invader, hi. Thanks for all the tutorials, but this one needs a revamp: the layout and usage are a lot different in 6.12.14.
With 6.9, can you have, for instance, a four-drive cache in a RAID 5 or 6 configuration?
With BTRFS.
Thanks again for all your efforts. As you mention, some builds have constraints. Is it possible to partition a large SSD to use with two separate cache pools? 2 TB seems like overkill for only Dockers. Could a 2 TB drive be used for one cache pool for Docker (1 TB) and another for the VMs (1 TB)?
You wouldn't need 2 pools for that use case. When set to prefer, both the Docker data and the VMs would have access to the full 2 TB. Splitting a single drive wouldn't protect against over-reading/writing, as it would be the same drive anyway.
Think of pools as ways to bundle multiple drives, either for redundancy or for combining them into a larger pool.
Splitting them like you said is just the equivalent of multiple shares (like folders) on the drive. You could even specify that different shares use the pool (a pool can be one or more drives) differently.
Why are we not formatting all storage devices with ZFS?
So, if I've already been running Unraid for a long time and have lots of Docker containers, and I want to add another separate cache drive to use for Docker, can I just change the appdata and system shares to use the new cache drive? Will the Dockers continue to run off the old drive until the mover starts, and then I assume it will move the data from the old cache to the new cache? Or do I have to manually move everything to the new cache drive and then point the shares at it?
Back up your appdata using the Backup/Restore Appdata plugin from Community Applications, delete your Docker container images, restore your appdata to your new drive, spin up your Docker containers, and make sure the storage location is set to your new drive (in the configuration section for each container). Everything should be exactly as it was before.
Great video! One question: when creating a pool, must all the pool drives go through formatting?
Typically yes. I can't imagine a situation where you would want it to happen any other way. But if space is limited and you're migrating data from, say, NTFS, you can grow the pool gradually: add one empty disk, move data over to empty out another drive, and repeat.
More great content from the Invader! I am new to Unraid and I find it quite a conceptual change compared to conventional redundant storage systems.
Can anyone chime in and explain why the option to add multiple arrays is missing? I plan to use a 4-disk parity-protected array for data storage and another 2-disk mirror for recordings from my security system. Can Unraid handle this use case? Should I just create a pool for my 2x4TB disk mirror? Maybe this scenario is exactly why they have the "CACHE: only" option in the dropdown list?
Thoughts, anyone?
Unraid doesn't use striping, so data is stored on individual disks that are simply combined to appear logically as some number of virtual disks (shares). Having multiple arrays would require you to care about physical drives and manage which data is stored on which drives, which isn't really the point of Unraid. If there's a specific reason why you need to store certain data on certain disks, I suppose you could use the cache disks as a special category. That's typically used for data requiring especially high performance, and with SSDs.
When I do this with an already running Unraid system, does Unraid move the existing files to the new cache pools when I assign them to the shares, or do I have to move the old data by hand to the new location? (And is that possible without problems?)
An answer would be helpful. I think the majority of people already have a running Unraid server with Docker containers and VMs, but may want to expand to multiple pools like in this tutorial.
WHERE IS PART 3 OF THIS SERIES? I watched part 2 and want to watch part 3 but I can't seem to find it :(
Great thanks!
I know it's considered best practice to use prefer rather than only, and I understand and agree why. But I use only for some shares because I would rather have the write fail; that would catch my attention really quickly and get me to investigate what's going on. Otherwise I might not notice for some time. Rather than get rid of that option, I wish Unraid would just add a notation or tooltip telling users it is not a recommended setting. Thanks for all your videos, very much appreciated.
What if I have 1 parity and 6 data HDDs and want to pull 3 data HDDs out of the array, then use each of those 3 removed HDDs as shares? Can I make each of those data HDDs a single-drive pool and not lose data?
The drives have to be part of the array to use them as Shares. I recommend keeping all HDDs in the array and SSDs out of the array.
If you ever build a ghetto unraid server I would love that
Yea!!!
Teach me how to set up ArchiveBox
Unraid King!!
Alright I've got some things to do on mine. What's the best cache setup for plex to improve watching performance?
As long as your Plex container uses an SSD (or a dedicated cache drive) for its transcode folder, you should be good to go.
@@SkippyTheLost Mine's set up on an NVMe, but that's also the only cache drive.
Where's the VM vid! I thought that was next.
Why add another cache pool with a single drive if you can just mount it as an unassigned device? What's the advantage?
It's more consistent and simpler for your system. Your shares will show up in /mnt/user rather than /mnt/disks/drivename/somefolder.
I'm sure you have a video for it already, but when adding a new cache pool for say my docker containers, what is the best way to move them off the original cache and on to the new cache pool?
Hey, I did this a few weeks ago and it was pretty straightforward. I stopped all Docker containers and deactivated Docker. Then I set my appdata and my system share to use cache:yes. After that I activated the mover, and all the data was written to the array. Now you can install your new disks (or use already installed ones) and create a new pool. Once the pool is created, go back to your share settings and set appdata and system to use cache:prefer, making sure they use the new pool.
Let the mover do its thing and activate the Docker service again :)
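For anyone who wants to see the data flow in that two-hop migration (cache:yes → mover → array, then cache:prefer → mover → new pool), here is a rough simulation using scratch directories as stand-ins for the old pool, the array, and the new pool. On a real server the mover performs these transfers; the paths and filenames below are made up for illustration.

```shell
# Simulate the two-hop migration: old pool -> array -> new pool.
# All directories are scratch stand-ins, not real Unraid mount points.
base="$(mktemp -d)"
mkdir -p "$base/old_cache/appdata" "$base/array" "$base/new_pool"
echo "container config" > "$base/old_cache/appdata/plex.conf"

# Step 1: share set to cache:yes -> mover pushes cache contents to the array
mv "$base/old_cache/appdata" "$base/array/appdata"

# Step 2: share set to cache:prefer on the new pool -> mover pulls it back
mv "$base/array/appdata" "$base/new_pool/appdata"

cat "$base/new_pool/appdata/plex.conf"   # prints: container config
```

The key point of the procedure is that the data never exists in only a transient state: each hop is a complete move before the share setting changes again.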
@@Skywalker111abc Awesome, thank you so much for the reply. Last time (several versions ago) I upgraded my cache SSD and ran into some issues which required me to download all the docker containers again.
No problem, and that sucks. One time I wanted to reduce the number of devices in my cache pool from 3 to 2 without knowing what I was doing. It was a long night :D
Michael, this is the best video tutorial ever.
You rock.
Wait where is part 3?!
Excellent video Mr Ed! You inspired me to build a new server 🤘🏼🔥🤘🏼, my wife wants to kill me/you lol
Regards, Shane from Trinidad!
Again, I will make a plea to teach people how to use mc to delete stuff instead of teaching them to paste into an rm -rf prompt.
Limetech, please do NOT remove Only from cache options! Pretty critical for my use case.
Can I ask what do you use it for?
Wouldn't prefer do the same thing as only except having a fallback to the array in the case the cache gets full?
Please don't do what I did and type "rm -r" while forgetting the location of the files you want to remove. I ended up deleting the entire contents of my boot USB drive, and now I have to start again and rebuild the entire parity.
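A defensive habit that guards against exactly this mistake: put the target in a variable, turn on `set -u` so an unset variable aborts instead of expanding to nothing, and refuse to delete unless the path matches the prefix you expect. This is a generic sketch with made-up scratch paths, not a command from the video.

```shell
# Guarded rm -rf: only delete if the path matches an expected prefix.
set -u                                # unset variable -> abort, not empty expansion

target="/tmp/safe_rm_demo/old_share"  # stand-in for the folder you mean to delete
mkdir -p "$target"                    # (created here so the sketch is runnable)

case "$target" in
  /tmp/safe_rm_demo/*)                # path looks like what we expect: delete it
    rm -rf "$target" ;;
  *)                                  # anything else (e.g. /boot): bail out
    echo "refusing to delete: $target" >&2; exit 1 ;;
esac

[ ! -d "$target" ] && echo "deleted safely"
```

With this pattern, a mistyped or empty variable falls through to the refusal branch instead of wiping whatever directory you happen to be in.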
Just in time for me to arrange Chia farm
Having btrfs is fine for most things, but for Nextcloud? By the btrfs devs' own statements, it's not recommended for databases, as performance is just terrible. I only studied up on it after doing some performance testing on my install and seeing some really odd IO behavior; even with writes aligned, btrfs just isn't really suited for running DBs, and even some VMs have issues.
I really wish they'd allow LVM XFS mirrors/arrays; none of the logical-volume RAID commands seem to work, nor do the mdadm counterparts. Until then, I guess it's ZFS or bust... which I love, but isn't really ideal for a system running a gaming VM due to ZFS ignoring CPU isolation :-(