How To Get the Most From Your TrueNAS Scale: Arc Memory Cache Tuning
- Published: 30 Jun 2024
- lawrence.video/truenas
ZFS Arc Value Location
/sys/module/zfs/parameters/zfs_arc_max
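For reference, the current limit can be read straight from that parameter; a hedged sketch (a result of 0 means the built-in default of half the RAM applies on ZFS-on-Linux):

```shell
# Read the current ARC size limit in bytes; 0 means ZFS's built-in
# default applies (50% of RAM on ZFS-on-Linux). The fallback message
# covers machines without the ZFS module loaded.
cat /sys/module/zfs/parameters/zfs_arc_max 2>/dev/null \
  || echo "zfs module not loaded"
```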
Bytes converter
whatsabyte.com/P1/byteconvert...
TrueNAS Scale Half Used Memory Issue
ixsystems.atlassian.net/brows...
Deeper dive into Linux Memory Management
blogs.oracle.com/linux/post/l...
Connecting With Us
---------------------------------------------------
+ Hire Us For A Project: lawrencesystems.com/hire-us/
+ Tom Twitter 🐦 / tomlawrencetech
+ Our Web Site www.lawrencesystems.com/
+ Our Forums forums.lawrencesystems.com/
+ Instagram / lawrencesystems
+ Facebook / lawrencesystems
+ GitHub github.com/lawrencesystems/
+ Discord / discord
Lawrence Systems Shirts and Swag
---------------------------------------------------
►👕 lawrence.video/swag/
AFFILIATES & REFERRAL LINKS
---------------------------------------------------
Amazon Affiliate Store
🛒 www.amazon.com/shop/lawrences...
UniFi Affiliate Link
🛒 store.ui.com?a_aid=LTS
All Of Our Affiliates that help us out and can get you discounts!
🛒 lawrencesystems.com/partners-...
Gear we use on Kit
🛒 kit.co/lawrencesystems
Use OfferCode LTSERVICES to get 10% off your order at
🛒 www.techsupplydirect.com?aff=2
Digital Ocean Offer Code
🛒 m.do.co/c/85de8d181725
HostiFi UniFi Cloud Hosting Service
🛒 hostifi.net/?via=lawrencesystems
Protect your privacy with a VPN from Private Internet Access
🛒 www.privateinternetaccess.com...
Patreon
💰 / lawrencesystems
⏱️Time Stamps ⏱️
00:00 TrueNAS Scale Setting ZFS Arc Cache Memory
02:00 TrueNAS Scale Memory Usage
02:48 How To Set zfs_arc_max
#truenas #storage #zfs - Science
Thank you Tom! Great video!
I can't believe this is something that isn't exposed in the GUI, especially considering ixsystems touts the application and virtualization abilities of Scale.
Scale is still a bit less mature than Core.
Thank you! Now I can just point people to a very well put together and comprehensive video when this topic comes up - especially because you very clearly outlined the potential issues you may run into!
Perfect!
Just changed over from Truenas Core yesterday, and your tip helped greatly!
Thanks! 😊
Thanks Tom for posting this. While memory for my potato NAS is cheap, I'd like to get the most bang for my buck.
Thanks Tom! You really are one of the best Scale resources for the community. Love the hair btw. And hope you are still healing well.
Indeed, Tom is my go to tutor for Truenas Scale and PfSense.
Thanks, Tom
The memory shortage was a pain for me.
Now I am feeling way more chill about it.
Love the guide you provided.
Awesome
I feel like I truly lucked out venturing forth in July and August of 2023 to initiate and build our TrueNAS home video and photo archives server like I did.
Videos like THIS in PARTICULAR helped me fix the one thing that annoyed me: how much oomph I was getting out of what I invested.
Thank you so very much.
Thanks Tom! You are the best! Fixed here for me, awesome, now I can run some VM's again...
Incredible, I was trying to find a way to limit the memory consumption, thank you Tom
I was wondering why ARC filled only half of my memory, glad I found this explanation. However I think this setting should be accessible from TrueNAS GUI.
Thanks for going over this! It was super helpful!
Thanks for this video, I just recently upgraded my home NAS to 512GB (prices have really dropped on 64GB DIMMS recently) and I also moved to TrueNAS Scale.
Same boat as you just jumped to scale with my server and was only using 256GB of arc
Some Core features we take for granted... tnx for info and solution 👍👍
Another great guide, thanks mate! I like to use all I have purchased, as in RAM. I don't use a load of apps, just a few, so I left about 6GB free for other things. I think that's sensible.
thanks finally a simple explanation :)
Glad it helped!
Worked really well. I don't know why TrueNAS defaults to using all of the RAM for ZFS caching.
THANK YOU SO MUCH!
Cool. Wasn't aware this was even a thing.
I would be interested in a follow-up on the scale versus core as of 2023. Lots of videos date back from launch date and a lot has changed since. What would be the recommendation for a variety of standard use cases today?
It's become more stable and more tuned in terms of performance; maybe it needs a new video once the new version of Scale comes out soonish.
@LAWRENCESYSTEMS thanks for the consideration. I would eat it up. Even as a short.
@@LAWRENCESYSTEMS I like the effort and quality you put into your videos; I would wait until February when the new version is actually stable.
Thank you!
Thank you so much
thanks!
The way it seems to work currently is when I start a VM it just falls back to the 50% limit no matter what. I've been testing this for over 6 months with 90/128G and have never ended up in a crash scenario.
There is probably some logic in Scale that writes that value when you start the VM; this does not happen in Proxmox or other generic Linux systems. The value you set sticks.
Thanks Tom for these tips. I'm wondering 🤔 which value should I set? I have 64GB.
Set it as much as you want to use for cache while allowing enough for services
1:10 I don't really agree with this assessment, because ARC memory is not swappable on FreeBSD (it's Wired, and it shouldn't be swapped out), but this also gives it an ugly tendency to push text pages out to swap. That's kinda okay if swap is on dedicated SSDs, but TrueNAS CORE will put the swap - by default - on the main pool devices. So by default you can easily end up with CORE aggressively swapping to the pool devices, and then system responsiveness goes down the drain as swap I/O competes with pool I/O. I was wondering on my CORE setup why I experienced these sudden, dramatic slowdowns of share performance, and well, it was ARC Wiring up all the memory.
Would 45 Drives's Rocky Linux / Houston system also be capped at 50% by default? This is a general Linux behavior with ZFS, correct?
Yes, this is the default ZFS-on-Linux behaviour.
On a more generic Linux system (and Proxmox) what he did is done by creating/editing in "/etc/modprobe.d/zfs.conf" file with "options zfs zfs_arc_max={your_number}" and then rebuilding initrd images so this config is loaded on boot.
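To make the step above concrete, here's a rough sketch of composing that option line; the 16 GiB figure is just an example value, and the initramfs rebuild command varies by distro:

```shell
# Compose the modprobe option line for an example 16 GiB ARC limit
# (16 GiB = 16 * 2^30 bytes = 17179869184 bytes).
ARC_BYTES=$(( 16 * 1024 * 1024 * 1024 ))
echo "options zfs zfs_arc_max=${ARC_BYTES}"
# On a generic Linux/Proxmox box you would append that line to
# /etc/modprobe.d/zfs.conf and rebuild the initramfs, e.g.:
#   sudo update-initramfs -u    # Debian/Ubuntu/Proxmox form
# TrueNAS Scale should instead be configured through its UI init
# script so the setting survives appliance updates.
```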
As shown in the video it appears you need root access to do this. For those of us with root disabled, sudo doesn't appear to work. Is there a workaround?
To answer my own question in case anyone has the same issue: found on another forum, use sudo sh -c. Tom please correct me if I'm wrong here but this seems to have worked for me. Thank you!
Could you do a video on tuning TrueNAS on an all flash system to use the speed efficiently?
Nothing to tune besides that setting.
lol my freenas locked up until I manually capped the arc, it was in some boot param bullshit flash area too.
After I did this to expand the arc, it seems like the memory in the main dashboard is nearly completely free and not used in ZFS any more. Any ideas? I returned to 0 and removed the init script, but it still shows almost no cache used when before it was about 115 gb or so. However, the system functioned fine before and after I made this change. The only thing impacted is/was the dashboard in truenas showing almost no memory being used for zfs.
When you reboot, the ARC is purged, so you'll have to use the system for a while in order to fill the ARC again.
While I have 128GB of RAM I have 46GB used in services. So at least I know now that if I deploy some VMs with only 17GB of RAM free the cache isn't going to automatically scale down like in Core.
I'm confused; mine uses all but 20 gigs all the time, and I never set anything up. I'm on TrueNAS Scale Dragonfish-24.04.0 - Currently 377.8GiB total available (ECC) - Free: 21.3 GiB
Was something added recently to auto expand past 50% of the memory capacity?
Can you make more videos regarding S1, Huntress, and NinjaOne?
Setting up swap is a good way of preventing VMs from crashing, but swappiness should be set low.
I doubt it would be persistent between upgrades which is why you set it in the UI.
That value converter website is pretty weird. I noticed TrueNAS shows RAM utilization in GiB (base 2), not GB (base 10). Your manual calculation using the factor 2^30 is in line with that and yields the same result the website gives. However, the website calls 1024 bytes a kB instead of a KiB. On the same page they state that "kilobyte" has different meanings depending on whether you are in a binary or a decimal system. That's plain wrong. A kB is ALWAYS equal to 10^3 bytes. It's just that many people tend to confuse the binary and the decimal prefixes. They should have explained that instead of further contributing to this misunderstanding. Also the conversion table on that page claims that a kB is 100^1 B, but that's probably just a typo. I'm sure (I hope!) they meant to say 1000^1.
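The conversion itself is simple enough to do in the shell without a website: GiB to bytes is just multiplication by 2^30, matching the binary-prefix units the dashboard reports. A minimal sketch:

```shell
# GiB -> bytes for zfs_arc_max: multiply by 2^30 (binary prefix,
# matching the GiB units the TrueNAS dashboard reports).
gib_to_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 4     # 4 GiB   -> 4294967296
gib_to_bytes 100   # 100 GiB -> 107374182400
```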
I wish we could like have options to have like 10-90% in 5% increments. I would be fine with that solution
Can anyone advise if this is still a problem in the latest Cobia build? TrueNAS-SCALE-23.10.1.3
I've heard they fixed some issues with ARC metadata caching recently and didn't know if this was also addressed.
Question: I have a 40TB RAIDZ2 on a system with 32GB RAM. After setting zfs_arc_max to 27GB (only Samba on this system), I ran a scrub. I left it running for a few hours, and when I checked on its progress, the ARC cache had consumed almost all 32GB (less Samba). If I had left it alone, I'm sure the OOM killer would have kicked in. Fortunately I was able to lower the zfs_arc_max value and complete the scrub. The question is, why would the scrub not honor the zfs_arc_max value?
Not sure, but this video will soon be sunset since Dragonfish changes all of this.
@@LAWRENCESYSTEMS :-) Something to look forward to. Thanks
@@LAWRENCESYSTEMS After upgrading to Dragonfish, memory cache would use up to 50G of 64G in my system, which could be troublesome at times because there is not enough memory left for my VM as other services use about 6G and there is only 3-5G left. When I try to start up a VM, there will be a warning about memory overcommitment. If I proceed anyway, the VM would start without issue and the memory cache would be reduced automatically by the system. Is that something I should be worried about?
@@zhenhaochen8170 It only uses the memory that is free so start those services before the memory is used and then you don't have to worry about it.
I am logged in as admin, and even if I use sudo I keep getting
zsh: permission denied
For anyone else hitting this issue, this is how you have to enter the command (note: this is for 100GB of RAM):
sudo sh -c 'echo 107374182400 >> /sys/module/zfs/parameters/zfs_arc_max'
may i know even how to open the comand windows ?
so, how would this be done in core?
As I said in the video, not needed in FreeBSD/TrueNAS Core.
@@LAWRENCESYSTEMS sorry, i missed that part. But also, good to know! :)
If I want the NAS to primarily work as a NAS I am probably happy if the ARC uses 50% of my memory, even on a 256GB RAM server. Right?
I set it to use AS MUCH AS POSSIBLE as that is a big speed boost.
I use mine as a NAS-only system--no VMs, no containers. My thought is the opposite of yours @kwinzman. If the only value of the RAM is for NAS, why not use almost all the RAM for NAS (including ARC). No need to have half of your RAM just sitting there doing nothing, reserved for something you're explicitly not interested in. Now, my question for myself is: why am I not just using CORE instead of SCALE, since I don't need or want the unRAID-like stuff SCALE offers and the NAS-specific stuff SCALE offers isn't mature and/ or I don't have the hardware for it? Time to watch the new video on SCALE vs CORE!
I had a problem with permissions:
zsh: permission denied: /sys/module/zfs/parameters/zfs_arc_max
and had to use this workaround:
sudo sh -c 'echo NEW_ARC_SIZE > /sys/module/zfs/parameters/zfs_arc_max'
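For anyone wondering why the plain `sudo echo` form gets "permission denied": the `>` redirect is opened by your own unprivileged shell before sudo ever runs, so only wrapping the whole command in `sh -c` (or piping through `sudo tee`) lets root open the file. A sketch of the mechanism, demonstrated on a writable temp file with an example 32 GiB value rather than the real sysfs path:

```shell
# "sudo echo N > file" fails because sudo elevates only echo; the
# ">" redirect is handled by YOUR shell, which can't write sysfs.
# Wrapping everything in sh -c makes root perform the redirect too.
TMP=$(mktemp)
sh -c "echo 34359738368 > $TMP"   # same mechanism as the fix above
cat "$TMP"                        # 32 GiB example value
rm -f "$TMP"
```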
I want to know how to fix share permissions. Only the owner and owner group seem to be working.
ruclips.net/video/59NGNZ0kO04/видео.html
Fixed: Unfortunately this is still relevant in Dragonfish 24.04-RC.1. Almost 3/4 of my memory was locked into ZFS ARC cache, giving me no room. Thanks. I have 32GB of RAM in my system; what would you recommend for a ZFS ARC cache? I have it set now to 4GB.
When I did the "cat /sys/module/zfs/parameters/zfs_arc_max" I didn't have a 0. I had a locked-in amount of 30-something GB.
After doing:
sudo sh -c 'echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max'
I entered Admin Pass - Hit enter
Checked ZFS Cache Size
sudo sh -c 'cat /sys/module/zfs/parameters/zfs_arc_max'
4294967296
Went to System Settings - Advanced - I removed the Init script command - rebooted
Checked ZFS Cache
sudo sh -c 'cat /sys/module/zfs/parameters/zfs_arc_max'
I entered Admin Pass - Hit Enter - Returned 0. Went to Dashboard - Memory, and it's no longer locked in an insane 3/4 ZFS Cache memory.
So is this solved now? I just installed TN in Proxmox and also a baremetal install, and both had 500G of RAM and TN consumed >95% RAM and also gave it back when I was starting other VMs where I allocated 64G and 128G RAM.
Yes, it's solved in the latest version.
@@LAWRENCESYSTEMS Sweet dude. Totally sweet.
Yes thank you this is so annoying
I'm surprised there isn't a "tunables" spot in the UI like there is with TrueNAS Core. Ah well.
Oh, there's a spot for sysctl variables (System -> Advanced, left column), but this isn't a sysctl var, is it?
I don't think so.
I applied it to my system and it started using it immediately, so you may get away without a restart if you have work to do
yes the command from console will change the setting immediately but it's not kept on reboot. To run it automatically on reboot you have to add the startup script as shown in the video
So what of tunables we had in core?
What about them?
not needed in Core, only in Scale
So Tom says there is a risk of crashing when using up too much RAM with services when arc_max is set manually. Does this also apply to the default setting? I thought Scale's ARC would behave more like Tom said about Core: use free space and give it back when services need it.
The actual issue here is that on Linux the ZFS module is often too slow to let go of the memory when there is memory pressure, so this triggers an OOM that will start to kill OS processes.
So yes, you can have that issue on default settings too if your applications need more than half the RAM. But it's the middle point between safety and performance, so that's why it's the default.
If you choose a manual limit then it's down to you to choose wisely and know how much RAM the applications on the NAS require.
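One way to choose wisely is to work backwards from the reserve your services need: total RAM minus headroom for the OS, apps, and VMs. A rough sketch (the 64 GiB total and 16 GiB reserve are example figures, not recommendations):

```shell
# ARC limit = total RAM minus a fixed reserve for everything else.
TOTAL_GIB=64     # physical RAM in this example
RESERVE_GIB=16   # headroom for OS, apps, and VMs (pick for YOUR workload)
ARC_MAX=$(( (TOTAL_GIB - RESERVE_GIB) * 1024 * 1024 * 1024 ))
echo "$ARC_MAX"  # 48 GiB -> 51539607552 bytes
```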
@@marcogenovesi8570 Thanks for the clarification! Hopefully they will find a solution for this. Better sooner than later.
No. I think the "When" of this command should be PreInit rather than PostInit.
Why?
The value is live. You can run the command in the terminal and the new value will take effect immediately.
As long as it gets run before your ARC hits the default 50% high-water, it will be fine.
Does not matter. The ZFS module is loaded before PreInit anyway, and you can run that command at any time you want.
Will be patched in 24.04 (Dragonfish) ! See the Version Notes
Yup
TrueNAS Scale is WEIRD. It's neither for home use nor for enterprise use.
They really should pick a lane.
What about how to get different IP addresses from DHCP, like TrueNAS non-Scale?
Do you mean TrueNAS Core?
@@gh8447 I have used scale but all apps used the same IP address
If you want it from DHCP I'm pretty sure that's on the router side. That's how I did it anyway, static mapping in pfsense
The only thing I don't understand is why he put 2GB on the website and not 8GB if TrueNAS has 8GB of memory. I put 32GB because I have 32GB of memory, but I'm not sure if I did it right.
I'm French and don't understand everything well.
You want to leave room for applications; setting it to the max of your system will not work out well. Drop the number down into the 20s.
@@mr_jarble ok thanks for answer
In Proxmox (which uses the Linux kernel, so it should be the same) there is a hint in the documentation: if you need to set zfs_arc_max permanently you can set it in the "/etc/modprobe.d/zfs.conf" file with an "options zfs zfs_arc_max={your_number}" string. Isn't that the case for TrueNAS as well?
Because TrueNAS Scale is built more like an appliance, you should always use the UI for updating settings so they will survive updates.
@@LAWRENCESYSTEMS TrueNAS also keeps the values in a different spot.
TrueNAS - /sys/module/zfs/parameters/zfs_arc_max
@@Prophes0r that file is available on all Linux systems running ZFS too. There are two ways of controlling the value: with the modprobe option when the module starts, or online by writing a number of bytes into that file.
@@marcogenovesi8570 Hrm...
I remember when I was first trying to change these values I looked at the official OpenZFS documentation and couldn't figure out why the settings weren't taking. It took a few days to realize that TrueNAS was storing values in a different place. I could have sworn it was these values...
Tom, on Debian systems you can set /etc/modprobe.d/zfs.conf. Is that persistent on TrueNAS Scale?