Really appreciate the depth you go into, like finding that the StarWind NVMe-TCP initiator doesn't handle IPv6 or DNS. So few tech tubers do a "proper" network for their lab that would reveal these issues.
Excellent article. I have had a simple, small, server based network of gaming capable computers round the house for many years using 1Gbps Ethernet. This is used for centralized music, photo, video, etc, storage and light gaming (both Steam and CD rips). My network is Linux Mint clients connected to a 24x7 Ubuntu Server running on a low power Intel Atom with a 4TB eco hard drive. I used SMB shares for a long time on each client and with older games this all worked fine. However, as game sizes and anti-copy/cheat tech increased over the years this has become more flaky. I have tried moving from SMB to NFS to simplify file protections and speed up game start, but still end up with 3 to 5 minute start up times for larger steam games. The only solution I have found is to have a local client steam library for my 30 large or tricky steam games on each client's local SSD, but the older and smaller games still on the server's steam library. Steam allows for multiple libraries, and its excellent game copying capability between client libraries makes this practical.
Some games that have anti-cheat will not run off of a mapped network drive.
I’ve gotten around this by creating a VHDX disk on an SMB share, and I have a logon script to mount it. It’s a lot simpler for someone who can’t figure out iSCSI in Linux.
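For anyone who wants to copy this approach, here's a minimal sketch of that kind of logon script in PowerShell (the share path and file name are placeholders, not the commenter's actual setup):

```powershell
# Attach a VHDX that lives on an SMB share at logon.
# Steam then sees it as a normal local NTFS volume.
$vhd = '\\nas\steam\games.vhdx'   # placeholder UNC path
if (-not (Get-DiskImage -ImagePath $vhd).Attached) {
    Mount-DiskImage -ImagePath $vhd   # mounts with the drive letter assigned inside the VHDX
}
```

Because the volume is attached at the block level, games that refuse to run from a mapped network drive generally can't tell the difference.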
iSCSI was not terrible to set up at all, but a ton of target-side guides use tgt / tgtadm (a userspace daemon) instead of LIO / targetcli (the in-kernel iSCSI target). Not sure why LIO isn't more commonly used, it's not any more difficult to set up.
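For reference, a minimal LIO/targetcli sketch of the target side (the backing device, IQNs, and names here are placeholders, not from the video):

```
# block backstore from an existing zvol (or any block device)
targetcli /backstores/block create name=games dev=/dev/zvol/tank/games
# iSCSI target, LUN, and an ACL for one initiator
targetcli /iscsi create iqn.2005-01.lab.example:games
targetcli /iscsi/iqn.2005-01.lab.example:games/tpg1/luns create /backstores/block/games
targetcli /iscsi/iqn.2005-01.lab.example:games/tpg1/acls create iqn.2005-01.lab.example:client1
targetcli saveconfig
```

A default portal on 0.0.0.0:3260 is created automatically, so the client only needs the server address and the target IQN.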
Got the entire PS2 game library on my NAS using SMB to play them on the PS2. Didn't think about doing this with Steam.
Nuh uh, no you don't, liar, phony. Impossible.
There are already a bunch of "what if" comments, but I will add a couple more :D. What if you used different storage mediums, like SATA SSDs, or even spinning rust? I don't have a fancy NVMe flash NAS to use...
As he already explained in the video, there would be no difference, as the 1GbE connection he used would be the bottleneck even at HDD speeds. The effect would be exactly the same.
@@KapitanMokraFaja Cool! So I should be able to set up something similar with what I have already! Very nice!
you might have higher latency using HDDs vs SSDs, but for game loading (which tends to load fairly large files, so the IO is largely sequential) it probably won't make a big difference
@@apalrdsadventures thank you Mr. apalrd! I love all the great work you do!
Really detailed, in-depth benchmark.
really interesting!
I do not game, though I have tons of movies on my NAS. Not that I had issues with my 1Gb network (it is going to be upgraded anyway). I like the effort you have put into comparing what I didn't have the nerve to do myself.
ThanX for that and best regards from Munich / Germany!!
I've kept most of my games on my spinning-rust NAS using SMB for many years. Most of my 10+ year old games work without any issues other than longer loading screens. (I've never seen that Steam sync issue, for example) Newer games tend to have more dynamically loaded assets that don't perform well without SSDs, so even trying to play them off a _local_ HDD is annoying at times. I _would_ like to upgrade to a 10g-capable NAS and network to let ZFS caching really stretch its legs. Maybe next year.
In the past I set up an iSCSI drive on my Synology and made an NTFS volume out of it for Windows. Put my Steam library on it and played some games. Everything played fine (as you point out) but also loaded slowly. Bandwidth is still bandwidth. Sure, the protocols may be more efficient in some cases, but 1Gb of bandwidth still limits how much data can move between machines. You really need 10Gb to do this. So, for now, I just use local storage.
I'm pretty sure a game like Shadow of the Tomb Raider and even RE4 Remake run fine even on an HDD. I think it would be more interesting to look at a game that utilizes streaming more; a game requiring an SSD as part of its minimum specification would do the trick, or you could look at games with the DirectStorage API. EDIT: Oh, just saw the HL:Alyx part, yeah it's either latency or plain bandwidth. Would be interesting to see if 10gig or beyond solves this issue. If it doesn't and latency is the issue, maybe RDMA/RoCE could solve it.
Edit 2: Wait, am I understanding correctly that if you create a share as your install folder, anyone can use that install as long as they have the proper licensing in Steam? That seems very interesting for LAN party set-ups and more efficient than nginx caching.
I don't really own a lot of modern AAA games
The vast majority of Steam games do not store user data in the library folder; each user has their own cloud save folder in the Steam folder, and games usually store their own copy of the saves in the Local Data or Documents folders. That's why you should not have file conflicts.
Any game that requires admin rights will cause a lot of issues for unsuspecting users. Your shares mapped as a user don't show up for programs running as admin and vice versa. And anything kernel-level is a massive no-no. iSCSI should solve most if not all of these problems, though it has its own downsides. I personally just move the game to the local system when I want to play it and move it back to the NAS when I'm done with it.
Maybe games should stop running in the kernel and breaking everything.
I've moved most of my games to an NFS share on my NAS and it works fine. I'm currently using Gigabit, because I've replaced the boot drive in my desktop with a 1.5TB Optane drive that unfortunately needs the PCIe slot that my 40GbE NIC used to use, but it's still decent enough.
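If anyone wants to try the same layout, the client side is roughly one mount line (server name, export path, and mount point are placeholders; nconnect needs a reasonably recent kernel):

```
# NFSv4.2 mount for a Steam library directory
sudo mount -t nfs -o vers=4.2,nconnect=4 nas.lan:/tank/steam /mnt/steam
```

Then add /mnt/steam as an extra library folder in Steam's storage settings.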
What about a Steam cache? That's a nice idea too.
Steam cache only really helps with having to install the same game on multiple gaming systems/clients.
If you are just doing it at home, where you are more than likely only going to be using a single Steam account (or maybe two at most, since most kids can't have their own Steam account yet), then if you have one system that's on and you're trying to install the same game to, say, your kid's computer, it can copy it from your system directly over to their system, without the need for a Steam cache.
But if I don't have my own system running, and one of my kids wants to install the same game, then it can pull it from the Steam cache that I've got set up.
And if it isn't in the Steam cache, then it'll pull it from Steam/outside internet.
TL;DW 10GbE is the future of NAS Gaming. You hear that Ubiquiti?!
I've got a server running a Windows VM with a 2080 Super passed through to it, and a TrueNAS SCALE VM with the SAS controller passed through to it. Using iSCSI to hold all the games for the Windows VM has been flawless over the internal Proxmox virtio network. Depending on the game and what other stuff is going on in the system, it'll get between 3-20 gigabit transfer rates. Pretty cool!
Such a vocal IPv6 proponent stating that iSCSI using really long strings of letters and numbers as identifiers is a terrible experience is endlessly amusing to me.
Have you heard of DNS? It’s really useful, not only for IPv6 addresses.
🤣
iSCSI is an awesome thing if you are a nut like me and buy a 16-bay server and fill it up with 24TB disks, because why tf not.
@@cqwickedwake7651 Or get an 8-bay NVMe server and a Mellanox card, that's what I have planned.
Oooh. the ever expanding steam library problem. Been there, wore out the tee shirt. 😁
I started using Steam 20+ years ago, and now have 150ish games in there. Because I often get the urge to do run-throughs of my older games, I like to keep most of them installed. That wasn't a big problem until a few years ago when games started routinely taking up 30, 40 or even 50GB of space. SSDs are getting cheaper every day, but they're still not cheap enough to chuck dirty great M.2 drives at all my gaming PCs.
What I did in the end was just grab some 4TB/7200RPM mechanical drives and put them in the systems. I've got the current AAA games on 1TB M.2 SSDs and older stuff on the mechanicals. The load screens on something like Half-Life aren't intrusive (which they shouldn't be, considering the original game was perfectly happy on the 66MB/s Ultra ATA/66 IDE hard drive my systems would have been rocking back in late 1998. :D).
For multiple gaming PCs, deduplication on the NAS side could be interesting. Maybe tiering too.
Try iSCSI with PrimoCache and an old 128 GB SSD as an L2 cache. You get SSD-like local speeds while storing the several TB of games on the NAS.
iSCSI is not needed for most games, but there are some that need it, such as Blizzard games.
Besides SMB, you could also try NFS shares.
Which Blizzard games? Just curious. All of them?
I got a few variations of the cheap 6-port (4x 2.5GbE RJ45, 2x SFP+) switches from Amazon and I'm pretty happy with them. The management on the managed ones is pretty cheesy, but they have acceptable 802.1 capabilities for me. Just make sure you back up your config because they factory reset when they lose power lmao. Some 8-port SFP+ units are just hitting the streets for like a hundred bucks too... looking enticing.
At best I can do 5GbE with an SFP+ to RJ45 adapter over the Cat5 (non-enhanced) in the wall. Going beyond that might be possible with a better transceiver (not SFP+), or would need fiber.
@@apalrdsadventures Oh, I thought the limitation was just switch ports. But yeah, if the cabling infrastructure isn't there then there's no point. Maybe installing a couple runs of OS2 SMF LC/LC could be an upcoming video?!
Re: save games, I had considered using Syncthing or similar to sync save games to my NAS years ago, and found most are stored somewhere in the Users folder, not Steam. I don't have to look for save games too often, but when I do this seems to hold true, so they may not be affected by your setup.
Will NFS over RDMA work better for this case?
Windows doesn't have an NFSoRDMA client.
And most Debian-based distros don't have that feature either (although you might be able to get it if you install the MLNX_OFED Linux drivers).
If you're using RHEL-derived distros (e.g. Nobara, which is based on Fedora), then you might have more luck with being able to use NFSoRDMA.
(I can mount NFS exports with NFSoRDMA with CentOS/Rocky Linux, but not with Ubuntu, Xubuntu, or Debian.)
(And this is with 100 Gbps Infiniband.)
@ewenchan1239 Ah crap, so with Windows we are stuck with slow SMB or a not-very-friendly iSCSI mount 😢
@@frankwong9486
You could, in theory, deploy SMB Direct, which is about the CLOSEST thing you can get to NFSoRDMA.
I haven't tried it as I don't have servers (especially Linux-based servers) that can do SMB Direct (since almost all of them use Samba, and I don't think Samba has this feature, but I could be wrong), nor the clients either.
(I suppose I COULD try and pass one of my Mellanox ConnectX-4 cards over to a Win11 VM to see if I can get it working, but it gets a little bit more tricky for me, personally, because I am running 100 Gbps Infiniband rather than 100 GbE.)
But the closest you'd likely get to NFSoRDMA with a Windows client is SMB Direct.
If you're able to get that up and running in your environment, then you'll be able to reap at least some of the potential/theoretical benefits of RDMA.
YMMV, of course.
And the only way that you'll know is by finding out via testing it.
(I also don't have enough NVMe SSDs to be able to run a relatively unconstrained test for this.)
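If anyone does get SMB Direct running, the Windows side can at least be sanity-checked with the built-in SmbShare cmdlets (a generic check, not something validated against an Infiniband setup like the one described above):

```powershell
# Does the client NIC advertise RDMA capability to SMB?
Get-SmbClientNetworkInterface
# With a file open on the share: did the connection actually negotiate multichannel / RDMA?
Get-SmbMultichannelConnection
```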
First of all, great video!
I wasn't aware of this way of moving games. Does Steam also show a problem if you move the games by going to: Steam --> Settings --> Storage --> Select both titles at once --> Move?
I've stored Steam data on a Samba share on HDDs for years and games still just work.
Wasn't the general rule of thumb to not enable dedupe on ZFS? I know they recently released that "fast dedupe" feature, but as I understand it, the more things you have deduplicated, the slower the entire array gets, because every time you update a file you also have to walk the dedupe table.
there are a few ways to do it.
If you are trying to do the lanparty style share, you can clone one dataset/zvol as the source of the rest, and the clone will point to data on disk for its origin regardless of dedup setting. So you get the space saving and performance advantages without enabling dedup because they have a common origin (at least until they diverge from the clone).
dedup=on also requires dedup tables, but if the tables fit entirely in RAM the performance impact is not massive. The dedup table is a hash table, so walking it is fairly fast if it's in memory. dedup=on also causes ZFS to use SHA-256 hashes, which can also slow things down compared to using the normal fletcher4 checksum.
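A minimal sketch of the clone approach described above (pool and dataset names are placeholders):

```
# one 'golden' Steam library, snapshotted and cloned per client
zfs snapshot tank/steam@golden
zfs clone tank/steam@golden tank/steam-pc2
zfs clone tank/steam@golden tank/steam-pc3
```

Each clone shares blocks with the origin snapshot until it diverges, so you get the space savings without turning dedup on.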
@@apalrdsadventures
I forget if you can choose between SHA-256 and SHA-512 for the dedup hash, but if you are able to use SHA-256, and your host system has an AMD processor -- AMD CPUs have hardware-accelerated SHA hashing, so for dedup=on with sha256, you should expect very little drop in hashing performance, even as the hash table grows.
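For what it's worth, checking for the SHA extensions and picking the dedup checksum explicitly looks roughly like this (the dataset name is a placeholder):

```
# sha_ni in the CPU flags means hardware SHA-1/SHA-256 support
grep -m1 -o sha_ni /proc/cpuinfo
# choose the dedup checksum instead of relying on the default
zfs set dedup=sha256,verify tank/steam
```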
@@apalrdsadventures I hadn't considered using the clone feature as "dedupe", since the cloned dataset would share blocks with the parent.
Man, I love zfs.
I tried using NFS over my 25gig card to a PCIe 3.0 NVMe drive. The client was running Ubuntu. Unfortunately I had a lot of issues, and even after moving the games back to my internal SSD on my desktop the games refused to run. I had no idea what was wrong and ended up reinstalling Ubuntu lol. Maybe I will try again when I get backups configured.
The issue might have more to do with Ubuntu than with NFS/Steam.
I'm currently using Nobara Linux 40 for gaming instead and it's actually working. (i.e. I can play Halo Infinite and Cities Skylines 2 no problems, whereas I could never get those games to run with Ubuntu.)
How about S3?
Any further improvement if you were to use NFSv3?
Dunno what I'm doing wrong. iSCSI always breaks for me at some point of use.
Most games place your save files in your user directory, like Documents.
For some things this works great for me, but any resource-demanding game, e.g. driving fast in GTA V or FH4, really shows the limitations. That being said, I'm on a Gigabit connection with SMB to a mirrored pair of NAS HDDs on ZFS, so pretty much the worst case.
Yup, doing this ASAP 😂
15:29 You misspoke. :D Not 100 MB/s.
Would things be better with an all-flash NAS?
I'm already all-flash with this setup
His testing was with an all-flash NAS...
Very cool, thanks for the video!
Chris Titus Tech always talks about NFS being available in stock Windows... his winutil can enable it.
Would be interesting to see if there is a performance difference vs. SMB/iSCSI.
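For anyone who wants to test it, enabling the built-in client and mounting an export looks roughly like this (feature names are from recent Windows 10/11 client editions and may differ by SKU; the server and export path are placeholders):

```powershell
# enable the built-in NFS client (elevated PowerShell)
Enable-WindowsOptionalFeature -Online -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure
# mount an export as a drive letter (mount.exe from the NFS client, not the PowerShell alias)
mount.exe -o anon \\nas\tank\steam S:
```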
Also wanted to recommend giving it a try, wondering if the performance is any good.
Kind of a side note, but NFS is so ass to set up on TrueNAS SCALE permissions-wise. To this day my perms are borked.
NFS permissions are very wonky *in nfs*, which is partially why they are such a pain to set up. NFS was really designed for an enterprise network where every machine shares UID/GIDs in a central map. If you have that, then it's fantastic, but if you have a bunch of random machines then it's weird.
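For the "bunch of random machines" case, the usual workaround is to squash everyone to a single owner on the server, something like this (path, subnet, and IDs are placeholders):

```
# /etc/exports - map all clients to one local UID/GID so client IDs don't matter
/tank/steam  192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000,no_subtree_check)
```

It throws away per-user ownership, but for a shared Steam library that's usually fine.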
@@apalrdsadventures Yep, an incredible nuance! I used SMB on my Windows machine up until I switched to Linux. Oh boy... I spent a whole day just trying to get the NFS drive to show up on my machine. Permissions are a whole separate can of worms that I have yet to fix completely...
@@fuchsi3010 I've never had problems with permissions, since on almost all systems I'm UID 1000 by default. But then again I've put my user database into an LDAP server running on k8s
Say what :) :)
PacketFence Network Access Control
RAID isn't a backup lol ❤😂
Why not use a DAC instead of copper SFP+? Cheaper and runs a lot cooler 😉
I'm going 30m through the house to get to my workstation, DAC isn't going to work at that distance
@@apalrdsadventures Then your only option to avoid the heat is fiber.
@@apalrdsadventures
What happens if you were to use an SFP+ AOC instead of copper?
(Two of my 100 Gbps links I think are 100 m 100 Gbps AOC cables because I didn't know EXACTLY how long I needed, so I bought the longest one, and now I have a giant spool of AOC cable that's bundled/tied up.)
Need a 10Gb NIC.