No complaints here. 10G in a homelab where you run a hypervisor cluster and a NAS for NFS/iSCSI isn't overkill, it's minimal. Things you will want to consider: roll your own NAS for scalability, ensuring your memory and PCIe lanes are adequate; that's something most consumer NASes lack, and you need it to get full throughput. Also, as another user mentioned, steer clear of the cheap clone NICs.
One issue on consumer-level PCs: the primary x16 PCIe slot may drop down to x8 speed if an x8/x16 card is installed in the lower x16 slot (both slots then running at x8). This can affect games that heavily rely on the GPU.
I think the desktop NIC could be limited by your motherboard, since 0.985 GB/s is the max speed of a PCIe Gen 3 x1 slot. Probably still sufficient for a 10-gig NIC in practice, just an FYI :D
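For anyone curious where that 0.985 GB/s figure comes from, here is a rough back-of-the-envelope sketch in Python (an illustration only; it assumes 128b/130b encoding for PCIe Gen 3, 8b/10b for Gen 1/2, and ignores packet/protocol overhead):

```python
# Rough PCIe per-lane bandwidth estimate; ignores TLP/protocol overhead.
def pcie_lane_gb_s(gen: int) -> float:
    """Approximate usable bandwidth of a single PCIe lane in GB/s."""
    raw_gt_s = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}[gen]  # giga-transfers per second
    encoding = 8 / 10 if gen <= 2 else 128 / 130         # line-code efficiency
    return raw_gt_s * encoding / 8                        # bits to bytes

for gen, lanes in [(3, 1), (2, 4), (3, 4), (3, 8)]:
    gbytes = pcie_lane_gb_s(gen) * lanes
    print(f"PCIe Gen {gen} x{lanes}: ~{gbytes:.2f} GB/s (~{gbytes * 8:.1f} Gbps)")

# A 10GbE port needs ~1.25 GB/s, so a Gen 3 x1 link (~0.985 GB/s, ~7.9 Gbps)
# lands right around the 7-8 Gbps ceiling discussed in the video.
```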
If it's only a few devices and you're willing to use fiber or DAC, you don't even need the switch. Most 40GbE interfaces can be split into 4x 10GbE, so four devices can make a fully connected mesh. Some cards even have an integrated switch which you can program with static routes, so you can connect more devices in a ring or torus. Just make sure you have enough paths so that one node can't shut down the whole network. The same goes for 100GbE; it can be split into 4x 25GbE.
Is your PSU intake pulling air from inside the case? It's usually more efficient to have it exhaust out the back. Pulling in hot air from the NIC through the PSU might be suboptimal. Regarding the 10Gbps transfer, Windows SMB might not fully saturate it with a single transfer; multiple streams could help if your CPU and storage can handle it. Thinking of upgrading to at least 2.5Gb soon. The LTT screwdriver's ratcheting click is oddly satisfying; I find myself fidgeting with mine all the time. 😄
There's no way to make the PSU exhaust anywhere besides out of the case; it's how all PSUs are designed. The only exceptions are if the case has a weird layout for the PSU, or if the PSU is mounted with the intake fan on top versus the bottom. Some cases don't have ventilation on the bottom for the PSU, so you'll have to have it pull air from inside the case, but in any case where the PSU mounts to the case, the exhaust faces out.
One thing, if you have the budget, is to set up an NVMe device on your TrueNAS server and make it a cache for the mechanical drives. I have seen this speed up writes, as it caches the write onto the NVMe and then works on moving it off to the mechanical drives in its own time. At least until NVMe drives become as cheap per TB as mechanical drives.
The SW-3216R-8S8T looks awesome; I may consider it, or for a bit more some larger options to handle my 1G access layer where I have about 18 devices. This video was good though. I'd never looked at QNAP for switches; I just look at their NAS devices, and after I look, I look away because of the price and go build my own, lol. It would have been really good to see more scientific and accurate testing of the bandwidth, since this is a tech channel and not just an average-joe channel, so engineers like myself are looking for the real deal as far as testing/evaluation before investing in a product.
Fast is great, but 10 Gb gear generally runs hot, and that means fans. If you don't have a dedicated equipment closet where fans aren't an issue, it's something you definitely should investigate. All of my networking gear sits about ten feet away in my home office, so the need to be fanless was a primary consideration, which is why I haven't upgraded yet.
Can you get QNAP to send me a 10G switch? Just kidding... GREAT content. Well done. I just got to partial 2.5G in the home network and THAT's a big improvement over 1G. 10G sounds really great! And maybe even without financing (if going mostly used...). Cool.......
My SuperMicro motherboard has dual Intel X550 10G Ethernet ports. I then looked for switches that use that chip. The one featured here (QNAP QSW-2104-2T) uses that chip. Definitely get some Cat 7 cables.
Thanks for the informative video. I do have a question though. So, supposing you have a 10GbE NAS, aren't the drives in the NAS still limited to much slower write speed? If so, is there a way to fix that?
Very nice. I ran my 4-to-1 consolidation earlier this year, so right now everything runs off my Proxmox server, and with the virtio NIC it shows up as 10 Gbps in Windows 10+ and all of my Linux systems. (In Windows 7, it actually shows up as a 100 Gbps NIC, which is SUPER AWESOME!) This saved me from having to buy 10 GbE switches, NICs, and cables to actually make it work at 10 GbE speeds.
Having seen this, and seeing that 2.5Gbps NICs and routers are cheaper than just a few years ago, I am curious how cost-effective it can be to build a network where the main systems that need to transfer large swathes of data have 2-port 2.5Gbps AICs with the ports pair-bonded on the system side, compared to just buying 5Gbps NICs and running the network that way. Cables would be a wild-card variable, one that I'd personally see as more cost-effective to cut and crimp myself compared to buying pre-made cables.
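One caveat on the pair-bonding idea, hedged because the details depend on the bonding mode and switch support: with a standard LACP-style bond, each flow is hashed onto a single member link, so a lone SMB copy still tops out at one port's speed even though the aggregate doubles. A quick sketch of the arithmetic:

```python
# Rough comparison of a 2x 2.5GbE LACP bond vs a single 5GbE NIC.
# Assumption: the bond hashes each flow onto one member link, so a single
# TCP stream never exceeds one link's speed (typical LACP behaviour).
link_gbps = 2.5
members = 2

aggregate_gbps = link_gbps * members   # best case, many parallel flows
single_flow_gbps = link_gbps           # one big file copy or one iperf stream

print(f"Bond, many flows:  ~{aggregate_gbps} Gbps")
print(f"Bond, single copy: ~{single_flow_gbps} Gbps")
print("Native 5GbE NIC:    ~5.0 Gbps even for a single copy")
```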
I'm glad I decided to watch this video. It confirmed some things for me regarding 10GbE file transfers. I recently rebuilt my Plex server with a new motherboard and processor, going from a 4th Gen Intel processor to a 13th Gen processor. I was always wondering if I had something set wrong with my 10GbE network card (Intel X540-T1). I also have a Synology DS1819+ NAS with a 10GbE (Synology) card in it, and all the 10GbE traffic goes through a Mikrotik 5-port 10GbE switch. Whenever I transfer files they normally move at or above 450MB per second. Sometimes it will peak at 650 and sometimes above 700, but then it will drop back down. I always make sure I transfer to the 10-gig IP port on the NAS. I never tried transferring the same file a second time; I guess caching might get faster transfers.
10GbE gets really hot (and consumes more power). Try something with SFP+ ports (DAC in the same room). Intel X520 clones are really cheap (but they don't support ASPM).
I have one of those Broadcom NICs as well in my Proxmox Xpenology box, works well with a 10mm thick 40mm Noctua fan. Mine came with a fan installed and a lower profile heatsink and it was the worst fan I've heard in years.
Interesting vid, and worth a watch for people dabbling in this stuff. I'm using a slightly different solution for 10gbit, but I haven't gone all-in on it yet either. Hopefully in the couple months since you made this vid you did a better job of securing that fan to the one NIC though. What you're likely to experience is a combination of the plastic tie drying out in that environment coupled with a slight imbalance in the spin of the fan which could cause it to fail rather spectacularly. Also, invest in some Noctua industrial fans for that server. :P
Just had a look at 10G switches, 5-port, you know, the small ones that you can get with 1G for like 10, 20€... They are over 200€. So it may be cheapER than before, but it's still a hell of a lot of money. At least for "normal" people's use :)
Someone else has likely pointed this out, but two of those ten gig switches would have still only left you with two ten gig ports after connecting them to each other.
People doing a renovation or new construction, please look at multimode fiber. It is much cheaper compared to copper (cabling, transceivers, NICs) and has better reliability. Anything above 1G over copper is a last resort only, and it's not guaranteed to work reliably. I don't get why these RJ45 10G products still come to market; this is not used at all in the enterprise environment. Only DAC and MM are in use; I have never seen 10G over copper in a datacenter. (Also, there are 16-cage 10G switches on the market for a third of the price of the sponsor's proposed product.) On a side note, also look at MPO/MTP cabling with LC breakouts; this allows multiple 10G connections on a single cable.
Well I thought I made it clear that that one component wasn't what I was calling a budget setup. I recommended what I was originally planning, which was to have two of the smaller switches. The majority of what I did in this video (connecting a PC to a NAS with a 10Gb connection) is do-able with that setup. But by all means, you're entitled to your opinion.
I recently purchased the smaller QNAP switch (terrible names) and it's a winner IMO. I move a lot of very large files between my workstation and NAS, and while I guess it's not a requirement, it is a very nice quality-of-life upgrade. I did have a built-in 10G NIC in my workstation, so I only needed to add one card, which cut down on cost. The 10G card I bought has a little fan on the heatsink that I'm hoping doesn't instantly die. Anyway, I can also recommend the smaller switch for a setup where most things are fine connected at 2.5G but you need (or want) certain traffic running over 10G. Cheers, Christopher
If you have dual cards you can round-robin (old internet/intranet style) a 10GbE network. You can put two cards in one system and use pfSense. I have two multi-function enterprise cards, not so cheap, with a copper cable running between them. It works well and I get full speed to my SAN. I have fibre, but I'm in a rental and will be moving, so that can wait. I also have an old Dell PowerConnect; I had to buy the 10GbE rear-slot card for it, which came in at 150 USD-ish (I'm in the UK). It's sat not doing anything because, well, you guessed it, I'm moving, so it's not going into the rack. I have some HP enterprise managed switches, but they are limited to just four ports at 10GbE, and you lose some 1GbE ports using them; with 36 ports or so it's not a bother, but for some it may be. Head off to the recycler and see what they have. But you're going to be using more power. 😮 There are a few videos out there on how to do that list of setups 😊
Actually, being limited by the drives is kind of nice; it just means it's more or less as good as having the drives installed locally, but with the added benefit of the drives being available to multiple computers.
+ added latency, network always adds latency.
@@RobinCernyMitSuffix half the senior developers I've worked with are incapable of processing this thought.
@@magfal then they are only senior in age ;)
Also, network adds IOPS limitations too
@@davorzdralo8000 yes. Of course, if you just use it as bulk storage it's fine. But if you push a decent amount of IOPS, you will feel the difference.
@@davorzdralo8000 bro I missed a headshot cause I was gaming on a NAS. Wait, why was I even doing that. Nevermind, carry on.
FYI, it took us ~5 years to go from 10/100 to cheap gigabit in the home.
But we've been stuck on gigabit in the home for almost 20 years, when businesses moved to 40G over a decade ago and are already on 400G.
To me, cheap 10-28G should be ubiquitous at this point.
The fact that gigabit has stayed around the same price for so long is criminal to me.
Tbf, rarely does one need 10Gb; transfer in the home is mostly router to PC, not a lot of people have a NAS or anything extra on the network, and gigabit is not limiting at all for internet use.
@@neolordie Well, gigabit was way more than necessary when it came to consumers too.
@@FAB1150 Gigabit, sure, but who really needs 10gig?
@@neolordie Speak for yourself; I also have a NAS for Plex. How long do you think you have to wait for 4TB to back up over 1G? More than a day. Of course you must also have fast storage; the Mac mini has fast storage, and so does my QNAP 464. For internet use it is not a problem.
Easy answer: how many home internet connections have more than 1Gbps?
Wake me up when we have affordable 2.5g / 10g managed switches.
Yeah that'll be cool when we get there. Affordable and unmanaged is still nice though
Unmanaged, you can get one for half the price Haven did.
If you can accept fiber, then MikroTik is your path: CRS310-8G+2S+IN for 219 USD, CSS610-8G-2S+IN for 119 USD, or CRS305-1G-4S+IN for 149 USD. If you want more 10G ports, there's the CRS309-1G-8S+IN for 269 USD.
Mikrotik
Mokerlink
Great work as always. I purchased a QSW-2104-2T-A to upgrade from 1G to 2.5/10. I installed new wire for the single 10G run from NAS / switch rack to my main PC. But I have had success with 2.5G over my existing old timey unshielded cat 5 (no bloody e!) cable that I installed in the late 90s. Runs are only 20-40m but it all works.
The NICs are coming down in price but the Switches still seem pricey, also 10g RJ45 stuff runs HOT! I don't need 10g so obviously I went out and spent 400 quid on upgrades...
10GBASE-T NICs get HOT! If you're strapping a fan to the card, it's a good idea to disable temperature control for the fan header and set a constant speed. As the fan controller can't see the card's temperature, it can set the fan speed too low to cool it properly. I had this problem with an X540 and designed a PCIe card fan mount with an inbuilt controller. I could send you one if you'd like?
I should’ve said something in the video, but those fans are just on a set RPM. There’s no curve. But that sounds like a really cool solution you made!
I've been running 4Gbit (4x 1Gbit trunked) on my server and workstation for about a decade using all ex-enterprise gear that I got for pennies and/or free, but these little switches are actually getting so reasonable and low powered that it seems finally the time to move to a simpler hw configuration :) Thanks for the review and detailing your experiences.
Lmfao, I made a comment on this dude because I suggested an alternative switch which @ServeTheHome also talked about later, and guess what guys, he deleted my comment lmfao... Is he even allowed to do that? Where is the YT police? Wait, because it would make him look bad and then they can't make $$$; that's why they gave him the power to give quirky information without checking all the products out there. Was he paid by QNAP?? I am curious... not saying he was, it just looks weird. Also, what about Mikrotik? They also have better products which are cost-effective as well!
Ignore the haters and the elitists. It's a "home" network after all. When you get a dedicated staff (assuming you don't already), then you can worry about satiating your enterprise networking constituents. I would call it good for now. Still a great video though. It's nice to see the availability factor and working with stuff that's not 100% perfectly compatible. Getting it working "good enough" is sometimes the best one can hope for.
I appreciate this, haha
Nice to see you finally out of the dark ages of networking :D Just kidding! My home lab still has a 10/100 switch in it, and I am still running 10/100/1000 NICs, but now you've made me wanna get off my butt and actually install the upgraded hardware I have!
Hahaha I feel ya. Honestly 100Mb is plenty for a lot of things. I always find it funny when people complain about not having gigabit on a device that will never need it
I don't have a 100Mbps switch, but I do have some runs that are 100Mbps because I 'split' the CAT5E to two devices (1GbE doesn't support that, but you can do it with 100BaseT since it only needs 2 pairs). Those runs go to security cams, so there's zero reason to have 1GbE speeds on those runs.
Lots of people simply don't understand how much performance their devices need.
That said, it sure is fun going 10GBE! :)
@@HardwareHaven as long as you aren't using Cisco, you're doing well!
Your qnap switches are much more modern and have 0 backdoors!
If you're on 10/100 you're already on Fast Ethernet, congratulations!
@@stephendetomasi1701 I have a half-duplex 10Mbps hub (not a switch) that connects to a VoIP phone and a laser printer. It works fine.
Did you enable jumbo packets/frames on your Windows machine to enable a larger Maximum Transmission Unit (MTU) size for data being sent over a network?
You ask that question as if I know what I'm talking about! Haha, no but I'll look into it
@@wojtek-33 I get that. As long as you don't go too crazy with it, things should be fine. I've got mine set to 9014 and get much more consistent traffic speeds
Jumbo frames are basically a requirement with high bandwidth networks. Even 1gig will not live up to its full potential without it enabled. It needs to be enabled on your work station, your file server, and the switch ports. If you run a packet sniffer like Wireshark you can see that it's working.
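For anyone who wants to try it, here is a minimal sketch of bumping the MTU on a Linux box using iproute2 from Python (the interface name and the NAS IP are placeholders; this assumes root on Linux, and on Windows the equivalent setting lives in the NIC's advanced adapter properties, with jumbo frames also enabled on the switch ports):

```python
# Minimal sketch: set a 9000-byte MTU on a Linux interface and verify it.
# IFACE and NAS_IP are placeholders; needs root and the iproute2/iputils tools.
import subprocess

IFACE = "enp5s0"         # placeholder: check `ip link` for your interface name
NAS_IP = "192.168.1.10"  # placeholder: the device on the other end
MTU = 9000

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", str(MTU)], check=True)

# Verify end to end: 8972-byte ICMP payload + 8 ICMP + 20 IP = 9000 bytes,
# sent with the don't-fragment flag. If this fails, something in the path
# (switch port, NAS NIC) is still at a 1500-byte MTU.
subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", NAS_IP], check=True)
```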
If you lose a screw in carpet, take a sock, slide it over the end of your vacuum cleaner, and use it as a strainer as you go over the area. Saved my butt a few times. lol
or just walk with bare feet. it'll quickly stab your foot. :)
@@Nalianna this !!!
@@Nalianna good ol' Lego method
I have a very strong neodymium magnet for this exact purpose. Less finicky hunting down those things.
Another option...don't work over carpet. :) But those screws can jump a long way. hee hee. Look further away than you first assume.
upgrading networking for local file transfer is like buying a new monitor for gaming
it's almost guaranteed you will want to upgrade your GPU afterwards to keep up with your better resolution
Spot on, lol
Just something to remember: The file transfer protocol in Windows over the network is single threaded. At some point, you won't be able to go any faster unless you have a faster CPU.
Damn, that's archaic 😂 I mean, Windows is built on top of a legacy system, so I'm not surprised
What about if you use a third party file transfer like TeraCopy? Or is it still limited to single thread because it would still have to use the Windows FTP?
@@outhouse.wholesaler I haven't looked into it much, but from what I understand it's an application limitation. Using a third party application that is multi-threaded should provide better results.
I have noticed my best transfer speeds to the NAS are when I have a couple of files moving at the same time. Granted, that is with the Synology Drive application, lol.
Isn't there some sort of BitTorrent-based network file transfer program that can multithread?
Then there’s Free Download Manager….
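If anyone wants to experiment with the multi-stream idea without buying new software, here is a rough sketch that pushes several files to an already-mounted NAS share using a Python thread pool (the paths are placeholders; dedicated tools like robocopy's /MT flag or TeraCopy handle retries and verification far more robustly):

```python
# Sketch: copy several files to a mounted NAS share in parallel threads.
# SRC/DST are placeholders. The file-copy syscalls release the GIL, so a
# handful of worker threads keeps multiple SMB streams in flight at once.
import shutil
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

SRC = Path("D:/exports")   # placeholder: local folder with large files
DST = Path("Z:/archive")   # placeholder: NAS share mapped as a drive

def copy_one(src_file: Path) -> str:
    shutil.copy2(src_file, DST / src_file.name)  # copies data + timestamps
    return src_file.name

files = [p for p in SRC.iterdir() if p.is_file()]
with ThreadPoolExecutor(max_workers=4) as pool:
    for name in pool.map(copy_one, files):
        print(f"done: {name}")
```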
Enable jumbo frames in all the adapter settings (9000 byte MTU), should improve the 7 Gbit iperf.
Aside from that, unless you're video editing directly from a flash-storage NAS, I found that 2.5 Gbit is really enough. And those switches are cheaper, and the cards are cooler, less power-hungry, and nowadays often even integrated onboard on motherboards.
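For what it's worth, the wire-level win from jumbo frames is fairly modest; most of the practical gain comes from the CPU and NIC handling far fewer packets per second. A rough Python illustration (assuming plain Ethernet + IPv4 + TCP headers and ignoring the TSO/LRO offloads modern NICs use):

```python
# Payload efficiency of standard vs jumbo frames (rough illustration only).
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_HDRS = 20 + 20            # IPv4 + TCP headers, no options

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HDRS
    on_wire = mtu + ETH_OVERHEAD
    efficiency = payload / on_wire
    packets_per_gb = 1e9 / payload
    print(f"MTU {mtu}: ~{efficiency:.1%} goodput on the wire, "
          f"~{packets_per_gb:,.0f} packets per GB sent")
# The efficiency gap is only a few percent; the bigger effect is roughly
# 6x fewer packets for the host to process at the same transfer rate.
```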
I'm sad that 5Gbit never became common as a middle ground. I've had an X370 ASRock motherboard with 5Gbit and a QNAP PCIe x1 card with 5Gbit for a few years, but to take advantage of the speed I have to use my 10G switch, because there was no affordable 5Gbit option.
But in the last 1 or 2 years SSDs got very fast and cheap so 10g is already a huge bottleneck :D
Try using parallel streams with iperf. Something like adding "-P 4" switch to your command might fix that. That will run 4 transfers in parallel, and each will settle at 2.2-2.4 Gbps each, getting you to ~9.6 total. That ancient BCM card might hold that back a little, but jumbo packets could help there, too.
Testing 4 streams instead of 1 on iperf3 won't give you a faster network. 4 cars running side-by-side at 50mph won't get there at 200mph. You want to test whether your 10gbe network actually gives you 10gbe
@@BertelSchmitt iperf isn't designed to speed up a network; it measures data throughput. If the CPU doesn't have sufficiently fast cores, a single thread might not be enough to keep a 10GbE link saturated, hence the -P # switch in iperf. I know, iperf 2 and iperf 3 are different. Older NIC ICs, typically 1st and 2nd gen for a given speed class, are often not able to maintain line-speed data transfers.
I also didn't hear him mention NPAR, or any other partitioning scheme, so the 4x50 analogy doesn't seem germane.
@@andrewb6 iperf3 measures the network speed between two computers. If one of them is anemic, then it achieves lower speed, and iperf3 reflects that.
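If anyone wants to reproduce the single-stream vs multi-stream comparison, a small wrapper like this works (a sketch only; it assumes iperf3 is installed on both machines, an `iperf3 -s` server is already running on the other end, and the IP below is a placeholder):

```python
# Sketch: run iperf3 with 1 and 4 parallel streams and compare the totals.
# Assumes an iperf3 server is already listening on SERVER.
import json
import subprocess

SERVER = "192.168.1.50"   # placeholder: NAS or other test machine

def iperf_gbps(streams: int) -> float:
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

for n in (1, 4):
    print(f"-P {n}: ~{iperf_gbps(n):.2f} Gbps total")
```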
15:08 haha. I can already hear the conversation now. "Honey do you smell something burning? It smells like burnt plastic."
Pretty nice deals there. I have one QNAP switch, which has been fine for my use cases, but my homelab runs primarily on SFP+. I needed 10Gig way back because I use network storage for my virtualization, but SFP+ NICs were way cheaper than 10G RJ45 ones even used. The situation seems to have changed somewhat, but I've gotten used to the SFP plugs, and they're awesome since they don't fall off by accident as easily as RJ45 does. As of now, I have a small unreasonable fantasy of getting into 100G networking. It doesn't make any sense but would be fun.
Hey, I've been enjoying your videos for a while now... I'm learning a lot from yours and others. A couple of things I read recently that you may want to consider: stacking even the passively cooled switches isn't good, since heat rises. Utilizing SFP+ with short DAC cables and/or any length of fibre reduces much of the heat! This applies to NICs too! It's the transceivers that generate the heat. I will be utilizing DAC cables between switch uplink and downlink... and to my yet-to-be-built NAS server next to the rack with 10Gb SFP+. In some ways techie folks can/should skip the 2.5Gb wave that's arriving now and implement 10Gb today, with the information available on workarounds and the cost savings of used gear! Keep reading up on this, everybody! And avoid the cheap foreign knock-offs!!! Hold out for quality gear!
I went 10Gbit with SFP+ and DAC Cables - main usage: workstation, NAS and connection between switches - APs connected with 2.5Gbit - all other devices: Wifi, or 1Gbit- totally sufficient
Good upgrade for a content provider. I actually wouldn't do the flash upgrade just yet. It will be an *amazing* upgrade; however, both SSD and NVMe drive prices are dropping quite nicely at the moment. There should be some good Black Friday / Cyber Monday sales coming up soon enough, or just wait and take advantage of the post-holiday sales. With both SATA and NVMe 8TB drives becoming "reasonable" now, it's about time to just convert over in the next year or two. I'm looking forward to more 8TB price drops next year myself. Another fun place to look is U.2 drives, or what looks really fun is something like the OWC U.2 NVMe shuttle that will let you run 4x NVMe on one U.2 connection (there are M.2 adapters). So you can sacrifice a bit of that per-drive NVMe speed and end up with up to 32TB of flash storage off of a single M.2 connection (or U.2 if you put an add-in card in your motherboard).
SSD prices are going to go up very soon; the summer was the all-time low, and NAND manufacturers are now decreasing production.
Love watching these vids. Still kinda new to everything, but I hope to be able to do this kind of stuff. Keep it up!
Glad you like them! And it's a constant journey of learning and trying new things.
To reply to your final comment about NAS speed. The Synology DS1621+, and most other good 6 bay hard disk NAS, pre-built or home built, are good for 500-600 MB/s, faster if you use NVME cache. It won’t saturate 10G, but you get 2x+ 2.5g performance and 100 TiB of usable storage without breaking the bank.
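To put those numbers in perspective, here is the rough arithmetic (the per-drive figure is an assumption of ~150 MB/s sequential per HDD with ideal striping, which real RAID/ZFS layouts will not quite reach):

```python
# Rough ceiling for a 6-bay spinning-disk NAS vs common link speeds.
# The per-drive sequential rate is an assumption; real arrays land lower.
drives = 6
mb_per_drive = 150                      # assumed sequential MB/s per HDD
array_mb_s = drives * mb_per_drive      # ideal striped ceiling

for name, gbps in (("1GbE", 1), ("2.5GbE", 2.5), ("10GbE", 10)):
    link_mb_s = gbps * 1000 / 8
    bottleneck = "the link" if link_mb_s < array_mb_s else "the drives"
    print(f"{name}: link ~{link_mb_s:.0f} MB/s vs array ~{array_mb_s} MB/s, "
          f"bottleneck is {bottleneck}")
```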
Well done... glad you got 10G all setup and working.
Like most enterprise-level gear, they are designed for a LOT of forced airflow. That's why sticking enterprise cards in a consumer case often causes them to cook. The airflow they need versus consumer sound levels is not easy to balance.
Re: the max throughput being about 7-8 Gbps, did you enable jumbo frames?
I saw similar behavior (about the same speed), but enabling jumbo frames on the client and the server got me up near the theoretical 10 gbps maximum.
Of course, enabling jumbo frames can lead to other issues if you don't segregate the 10Gbps devices onto their own VLAN, which is probably one reason I haven't fully set up my 10Gbps network yet. :P
Been giving thoughts to updating the home network to go from Gigabit to 2.5G, and here's something to make me think about faster equipment.
Just bought a little switch with 2x 10Gb and 4x 2.5Gb ports for 50 bucks on Amazon. I will have a high-speed NAS next to my machine (10Gb) and the 2.5Gb connection to my home server. Cool.
Just bought a 16port, 2.5G, managed TP-Link Omada switch for 489.00. You really pay for that 2.5G these days. Frankly, it’s more than I will ever need. I just wanted 2.5G to all my rooms, VLAN capability, and faster DNS.
For most people, Gigabit is probably fine, if not a bit overkill as well. I set up the network in my place with D-Link commercial desktop switches, one 5-port and one 8-port. My node has the 8-port switch, as I have game systems and a few computers. For short runs I don't see much issue of data loss due to attenuation on Cat5e, but I would want to use the appropriate category of twisted-pair cable for longer runs. I still find it funny that computers still use glorified four-line POTS telephone cable for communication across networks. I also find it interesting that even though coaxial cable can carry tens of gigabits of data, is used by cable TV service providers for broadband, and is incredibly simple to terminate, it isn't the cable of choice for networking computers together. Granted, it isn't as flexible, but there is probably a way to make a stranded, flexible variant for this use.
WOW! Linus Tech Tips could not be bothered to mention Gallium Nitride and could only call it GAN, which is useless. Thank you for being actually helpful!
Interesting 16-port switch; I paid that much for a 10-port 10gig last year. Prices seem to be starting to fall a little. When I bought mine I figured we'd jump straight to 10gig instead of 2.5, but it seems 2.5gig is being widely adopted first. Which in hindsight I guess makes sense, as it covers a much larger audience, since not everyone has the hardware to fully take advantage of 10gig yet. But when you have a use case and hardware that can drive it, it is utterly fantastic!
This guy must be pretty loaded...
Just curious how much for the electricity bill for that humongous stuff he's got there...
The term "cheaper" sounds really relative lmao
The best workflow is to edit on your workstation, using local storage. It isn't about transfer speeds; it's about access time.
The network/NAS is for archiving.
I finally got to see another Hardware Haven video.
It was great!
Enterprise-grade PCIe cards like NICs or SAS controllers are designed to run in rackmount servers, which have an insane amount of airflow compared to a normal desktop PC. Those fans alone often pull 20 watts or more PER FAN. Almost all server-grade PCIe cards I've used run crazy hot without additional ventilation in a desktop PC.
Your home lab set up is getting better and better!!!
Great video
The Inspur is not a dud; I have one too. It worked in your NAS because you gave it all 8 lanes it asked for, unlike the 4 lanes you gave it in your desktop. There should be three jumpers on it; two of them should be adjacent to the ports. Starting from the left, short pins 1 and 2 on the first jumper, then 2 and 3 on the second. This way you will have disabled one of the ports, and it will happily accept running at PCIe 2.0 x4.
Been running 10gig over Cat5e for the last 6-7 years with no issue; the longest run is ~50ft.
Show the cheapest 2.5 gigabit home lab. I want Pis, PCs, storage and Ethernet out
I am running HP-branded X540-T2 cards that I scored on eBay for about 40-50 bucks a couple of years ago, as well as a genuine Intel X710-T4. They get HOT without airflow! However, the heatsinks are beefy and just a small amount of airflow keeps the temps in check. Funny thing is that the X540 has a "Caution HOT!" warning etched on the heatsink! :D
Turned into a NOT-ON-A-BUDGET video pretty quickly. There's an unreasonable jump in price between 2.5G and 10G networking.
My budget solution is managed 8x 2.5G Mikrotik CRS310-8G+2S+IN (150€) in combination with QNAP PCI-E 2x 2.5G NIC QXG-2G2T-I225 (75€).
That makes 4 PCs with 5G connectivity (LAG configuration).
It is amazing to see that the cost of these higher speed networking devices getting lower, to where if you are serious about it, you can achieve 10gb in some places on your own network for an affordable price without using something like fiber or aggregated ports.
My home server for video editing definitely needs this... scrubbing 4K on the timeline is impossible using spinning drives and GbE. Nice video.
First video I've found of yours. Recently been pulling OM3 fiber through my church for a 10GbE backbone. Might have to pick up a few of the smaller 10GbE Qnap switches for small breakout areas.
Nice! Big church or do you guys just push a lot of data? Haha
Great video! Wait, are you using a Samba share? Even then, Windows is at times limited by its copy feature in File Explorer being a single-threaded task; I'm sure you could have pulled better speeds with some multithreaded copy software.
16:28. Why don't you just zip tie an 80mm fan to a PCI slot cover underneath it? Find one of your fancy cases and check if the PCI slot cover piece has fancy design holes in it. If yes, you have a "PCI-mounted fan"! :) Happy to help.
16:30 You may wish to get a slot mounted Squirrel cage blower fan. It would take up a single slot and wouldn't be as obnoxious to remount if you wanted to repaste the NIC's heatsink(s). Amazon currently has a deal on a Startech brand for less than $9.
I gotta save up for that lil switch. so glad to see one that is not rack mount and is small.
I believe the Intel X540 NICs have a known issue when plugged into a PCIe slot that is routed off the chipset rather than directly off the CPU's PCIe lanes. I had a very similar issue with my desktop, where the thing wouldn't even boot up with the card in a "chipset" slot, but if I put it in the main PCIe slot on my mobo it worked fine. I'm assuming you have a GPU in the main slot, and any extra slots that are accessible might be routed off the chipset. I see that in the TrueNAS server you are using the primary slot, and that may be why it worked there and not in your desktop.
Ahh that would make sense.
@@HardwareHaven Also, for NVMe, I just got one of these: aoc-shg3-4m2p, and some cheap TeamGroup 4TB MP34 drives; it can go into any PCIe slot, no bifurcation needed ;)
I bought an off-brand NIC that's supposedly an Intel clone, and my PC would not POST at all no matter what PCIe or PCI slot I plugged it into. I plugged in my old genuine Intel Pro/1000 dual-port NIC and my PC POSTs just fine. Steer clear of that unknown off-brand Chinese junk; you get what you pay for. Always buy genuine Intel hardware, especially when it comes to VMware ESXi.
Strongly suggest you double zip tie the fan on your NIC. Believe me, it's gonna start to rattle...
Using the Inspur x540 network card on a regular motherboard seems to require shielding certain pins for it to be successfully recognized by the motherboard.
Depending on the MB, putting it in the bottom slot may limit the speed on the 10Gbe card.
Definitely. I looked it up later and the bottom slot on the Supermicro board in the NAS is only x4, so I would need to use the middle slot which is x8 3.0
@@HardwareHaven you should test them as well. Network switches have a similar issue to the flash storage ecosystem, where you can get mislabeled and counterfeit cards.
I don't know if this is parody or not, but I like your style!
Even if you're not running optimally and your drive array only does 500MB/s, moving from a ~100MB/s GigE connection to a ~1GB/s 10G connection is still a huge step up.
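To give a feel for what that step up means in wall-clock time, here is a quick calculation (the library size and the effective speeds are purely illustrative; real transfers depend on protocol overhead and the drives on both ends):

```python
# How long a large transfer takes at different effective speeds (illustrative).
size_gb = 4000   # e.g. moving a 4 TB media library

effective_speeds_mb_s = {
    "1GbE (~100 MB/s)":                 100,
    "10GbE, drive-limited (~500 MB/s)": 500,
    "10GbE, near line rate (~1 GB/s)":  1000,
}

for label, mb_s in effective_speeds_mb_s.items():
    hours = size_gb * 1000 / mb_s / 3600
    print(f"{label}: ~{hours:.1f} hours")
```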
If you experience this fault: the expansion card receives power, but no drivers are detected on Windows 10/11 (even after trying to force a hardware detect),
one solution may be to put it into another PCIe slot (try the one closest to the CPU, which is usually a known x16).
It looks like the first NIC you tried uses PCIe x8 (I'm guessing). Not all slots have the full lane count (even though they look like PCIe x16 physically), so you may have plugged the expansion card into an x4 slot, resulting in this issue.
I hope this helps :)
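If you want to confirm what a slot actually negotiated, on Linux the kernel exposes the link speed and width in sysfs (lspci -vv shows the same thing under LnkSta). A quick sketch that just prints every PCI device:

```python
# Print the negotiated PCIe link speed/width for every PCI device (Linux only).
# Reads sysfs attributes; no extra packages needed.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    if speed.exists() and width.exists():
        print(f"{dev.name}: {speed.read_text().strip()} x{width.read_text().strip()}")
```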
the "Cat" phases of ethernet isnt only for data transmission, cat8 has way more ability to block out signal noises caused from electromagnetic interference and reduce the chances of jitter/packet delay which is different from packet loss.
Have you found a fix for the power draw of the 10Gb NIC in your NAS? Even if you shut the NAS down, it will still pull 10W from the wall with the Inspur cards, and they get hot when the system is off (potentially burnt, because the fan is off too). I haven't found a good solution for this yet. Let me know if you have experienced the same and how you fixed it.
I use 4010 blower fans and 3D-printed shrouds, so the active cooling doesn't block the lower PCIe slots.
Something good to call out about QNAP switches is that they are made in Taiwan--including the QSW-2104-2T-A. Given China's questionable practices with producing network hardware, I would much rather go with QNAP over a mainland Chinese manufacturer. While not TAA compliant, this gets us half way there at half the cost, IMHO.
Great product offering from QNAP!
Good video, man!! When you put a fan like that on a network card, if you put a 4-5mm-thick piece of cardboard/plastic between the heatsink and the back of the fan, it will move more air. Right now it has no space to move air because it's suffocating.
Tackled this situation a few months ago; I went with direct 10G optics (used eBay stuff) between the PC and NAS. The NAS has a bunch of mechanical drives in RAID (sequential reads are faster than a single SSD), no problems there, but the main obstacle is that the SSDs in the main PC can't keep up; NVMe could probably change this, but SATA SSDs just get overloaded. Still, opening files on the NAS is a breeze: scrubbing a 26GB movie is instantaneous, zero waiting, you can drag or click anywhere on the timeline and it just plays.
Yeah, not many people know that the old and trusted Cat 5e can support 10G if your runs are short. It works pretty dang well IMO. If you need longer runs, I would highly recommend MM fiber. Just make sure you are extremely gentle when running fiber, and purchase a fiber cleaning pen. Surprisingly, 10G MM optics are very inexpensive as well.
No, SM fiber. So many homelabbers talk about MMF like it's still relevant. SMF is cheaper and better: more speed, up to 160km distance on one pair of optics (not even counting amplifiers), CWDM, and the optics are only a few dollars more expensive.
@@farmeunit The optics automatically adjust
@@farmeunit Precisely why I mentioned MM fiber. Great point, thank you. I was going to say the same thing, but you said it just as well 👍🏾
@@farmeunit Key words: UP TO, doofus.
If most of your stuff is not that far apart (for example a NAS and a virtualization box running from the NAS), you can try going with SFP+ switches. There are ~$40 switches with 2x 10Gb SFP+ and 4x 2.5Gb RJ45, and used SFP+ cards are quite cheap for servers and PCs. Fiber itself is a bit expensive, but for short runs of up to a few meters you can use SFP+ DAC cables, which are just copper cables with the modules already attached.
It's also more energy efficient, and if you really need 10Gb RJ45 you can get (quite expensive) SFP+ to RJ45 adapters.
No complaints here. 10G in a homelab where you run a hypervisor cluster and a NAS for NFS/iSCSI isn't overkill, it's minimal.
Things you will want to consider: roll your own NAS for scalability, making sure your memory and PCIe lanes are adequate; that's something most consumer NAS boxes lack if you want full throughput. Also, as another user mentioned, steer clear of the cheap clone NICs.
One issue on consumer-level PCs: the primary x16 PCIe slot may drop down to x8 speed if an x8/x16 card is installed in the lower x16 slot (both slots then run at x8). This can affect games that heavily rely on the GPU.
I think the desktop NIC could be limited by your motherboard, since 0.985 GB/s is the max speed of a PCIe gen3 x1 slot. Probably sufficient for a 10 gig NIC, just an FYI :D
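For anyone curious where that 0.985 GB/s figure comes from: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding, so one lane carries roughly 8 × 128/130 ≈ 7.88 Gb/s of payload, a touch under 10GbE line rate, while x4 has headroom to spare. A quick back-of-the-envelope in Python:

```python
# Rough usable bandwidth per PCIe 3.0 lane (ignores packet/TLP overhead).
GEN3_RATE_GT_S = 8.0      # 8 GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding

per_lane_gbytes = GEN3_RATE_GT_S * ENCODING / 8   # bits -> bytes
print(f"PCIe 3.0 x1: ~{per_lane_gbytes:.3f} GB/s")     # ~0.985 GB/s
print(f"PCIe 3.0 x4: ~{per_lane_gbytes * 4:.2f} GB/s") # plenty for 10GbE (~1.25 GB/s)
```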
If it's only a few devices and are willing to use fiber or DAC, you don't even need the switch. Most 40GbE interfaces can be split into 4x10GbE, so four devices can make a fully connected mesh. Some cards even have an integrated switch which you can program with static routes so you can connect more devices in a ring or torus. Just make sure you have enough paths so that one node can't shut down the whole network. Same goes for 100GbE, they can be split into 4x25GbE
Cat 5e is inside a lot of homes wired for phone lines. There's nothing wrong with using it, because that's what many people have.
Is your PSU intake pulling air from inside the case? It's usually more efficient to have it exhaust out the back. Pulling in hot air from the NIC through the PSU might be suboptimal.
Regarding the 10Gbps transfer, Windows SMB might not fully saturate it with a single transfer. Multiple streams could help if your CPU and storage can handle it.
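If you want to separate the network from the storage when testing, iperf3's parallel-stream flag is handy. A small wrapper sketch, assuming iperf3 is installed on both ends and the NAS side is running `iperf3 -s`; the address below is a placeholder:

```python
# Run a 4-stream iperf3 test against a server on the NAS.
# Requires iperf3 on both machines; the server side runs `iperf3 -s`.
import subprocess

NAS = "192.168.1.50"  # placeholder address for the NAS

subprocess.run(
    ["iperf3", "-c", NAS, "-P", "4", "-t", "10"],  # 4 parallel streams, 10 seconds
    check=True,
)
```

If iperf3 hits ~9.4 Gb/s but file copies don't, the bottleneck is storage or SMB, not the link.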
Thinking of upgrading to at least 2.5Gb soon. The LTT screwdriver's ratcheting click is oddly satisfying; I find myself fidgeting with mine all the time. 😄
There's no way to make the PSU exhaust anywhere besides out of the case; that's how PSUs are all designed. The only exceptions are if the case has a weird layout for the PSU, or if the PSU is mounted with the intake fan on top rather than the bottom. Some cases don't have ventilation under the PSU, so it has to pull air from inside the case, but in any case where the PSU mounts to the back panel, the exhaust faces out.
Aren't you network-transfer-speed limited by the Hard Drives? If not, what do you use to offset the bottleneck? I use Unraid btw.
After 3 years my 10G card is dying while gaming due to heat (Windows error 43); thanks for the tip, I'll try the fan on it :)
One thing, if you have the budget, is to set up an NVMe device on your TrueNAS server and make it a cache for the mechanical drives. I have seen this speed up writes, as it caches the write onto the NVMe and then moves it off to the mechanical drives on its own time. At least until NVMe drives become as cheap per TB as mechanical drives.
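Worth noting for TrueNAS/ZFS specifically: the built-in options are an L2ARC (read cache) and a SLOG, and the SLOG only accelerates synchronous writes (e.g. NFS/iSCSI with sync enabled); neither one stages ordinary async SMB writes on NVMe and drains them later, so temper expectations for plain file copies. A hedged sketch of the zpool commands, where the pool name and device path are placeholders:

```python
# Attach an NVMe device to an existing ZFS pool as a read cache (L2ARC)
# or as a separate log device (SLOG). Pool name and device path are
# placeholders; run on the NAS with root privileges.
import subprocess

POOL = "tank"          # hypothetical pool name
NVME = "/dev/nvme0n1"  # hypothetical NVMe device

# Read cache: helps repeated reads that spill out of RAM (ARC).
subprocess.run(["zpool", "add", POOL, "cache", NVME], check=True)

# Or, instead, a SLOG: only speeds up synchronous writes.
# subprocess.run(["zpool", "add", POOL, "log", NVME], check=True)
```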
Great video, thanks. How was the noise on that 10gbe switch (qsw-m3216r-8s8t)?
I always mount the fan so it blows from the side, and to keep noise down I use a 90x90mm fan at 7 volts.
The QSW-M3216R-8S8T looks awesome; I may consider it, or for a bit more, some larger options to handle my 1Gb access layer, where I have about 18 devices.
This video was good though. I never looked at QNAP for switches, I just look at their NAS devices, and after I look, I look away because of the price and go build my own lol
It would have been really good to see more scientific and accurate bandwidth testing, since this is a tech channel and not just an average-joe channel; engineers like myself are looking for the real deal as far as testing/evaluation before investing in a product.
Fast is great, but 10 Gb gear generally runs hot, and that means fans. If you don't have a dedicated equipment closet where fans aren't an issue, it's something you definitely should investigate.
All of my networking gear sits about ten feet away in my home office, so the need to be fanless was a primary consideration, which is why I haven't upgraded yet.
Can you get QNAP to send me a 10G switch? Just kidding... GREAT content, well done. I just got to partial 2.5G in the home network, and THAT's a big improvement over 1G. 10G sounds really great! And maybe even without financing (if going mostly used...). Cool.......
My Supermicro motherboard has two Intel X550 10G Ethernet ports onboard. I then looked for switches that use that chip; the one featured here (QNAP QSW-2104-2T) uses that chip. Definitely get some Cat 7 cables.
You could get a blower-style fan and mount it next to the heatsink on the NIC.
Thanks for the informative video. I do have a question though: supposing you have a 10GbE NAS, aren't the drives
in the NAS still limited to a much slower write speed? If so, is there a way to fix that?
Very nice. I ran my 4-to-1 consolidation earlier this year, so right now everything runs off my Proxmox server, and with the virtio NIC it shows up as 10 Gbps in Windows 10+ and all of my Linux systems. (In Windows 7, it actually shows up as a 100 Gbps NIC, which is SUPER AWESOME!)
This saved me from having to buy 10 GbE switches, NICs, and cables to actually make it work at 10 GbE speeds.
You just made my day... thanks for sharing this awesome content.
Greetings from overseas and ten thousand miles away, Tripoli, Libya.
Having seen this, and seeing that 2.5Gbps NICs and routers are cheaper than just a few years ago, I am curious how cost-effective it would be to build a network where the main systems that move large swathes of data get 2-port 2.5Gbps add-in cards with the ports bonded on the system side, compared to just buying 5Gbps NICs and running the network that way.
Cables would be a wild-card variable, one where I'd personally find it more cost-effective to cut and crimp my own compared to buying pre-made cables.
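One thing to keep in mind with bonding: with standard LACP (802.3ad), a single TCP stream still only uses one 2.5G link, so it helps with many parallel transfers rather than one big copy; balance-rr can push a single stream higher but may reorder packets. A rough Linux sketch using iproute2, where the interface names and address are placeholders and the switch has to be configured to match:

```python
# Bond two 2.5GbE ports into one logical interface on Linux (iproute2).
# Interface names and the address are placeholders; run as root.
import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("link", "add", "bond0", "type", "bond", "mode", "802.3ad")  # or "balance-rr"
for nic in ("enp5s0", "enp6s0"):       # the two 2.5G ports (hypothetical names)
    ip("link", "set", nic, "down")     # slaves must be down before enslaving
    ip("link", "set", nic, "master", "bond0")
ip("link", "set", "bond0", "up")
ip("addr", "add", "192.168.1.20/24", "dev", "bond0")
```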
I'm glad I decided to watch this video; it confirmed some things for me regarding 10GbE file transfers. I recently rebuilt my Plex server with a new motherboard and processor, going from a 4th-gen Intel processor to a 13th-gen one. I was always wondering if I had something set wrong with my 10GbE network card (Intel X540-T1). I also have a Synology DS1819+ NAS with a 10GbE (Synology) card in it, and all the 10GbE traffic goes through a MikroTik 5-port 10GbE switch. Whenever I transfer files they normally move at or above 450MB per second; sometimes it peaks at 650, and sometimes above 700, but then it drops back down. I always make sure I transfer to the 10-gig IP on the NAS. I never tried transferring the same file a second time; I guess caching might give faster transfers.
How to do it for cheap. Call someone up, and get it for cheap.
This is what sucks about some YouTube channels:
your pricing is not what we pay, since you get free stuff.
Dude, he said the pricing.
10GbE gets really hot (and consumes more power). Try something with SFP+ ports (DAC within the same room). Intel X520 clones are really cheap (but they don't support ASPM).
I have one of those Broadcom NICs as well in my Proxmox Xpenology box, works well with a 10mm thick 40mm Noctua fan. Mine came with a fan installed and a lower profile heatsink and it was the worst fan I've heard in years.
Interesting vid, and worth a watch for people dabbling in this stuff. I'm using a slightly different solution for 10gbit, but I haven't gone all-in on it yet either.
Hopefully in the couple of months since you made this vid you did a better job of securing that fan to the one NIC, though. What you're likely to experience is the zip tie drying out in that environment, combined with a slight imbalance in the fan's spin, which could cause it to fail rather spectacularly.
Also, invest in some Noctua industrial fans for that server. :P
Just had a look at 10G switches, 5-port, you know, the small ones that you can get in 1G for like 10-20€... They are over 200€.
So it may be cheapER than before, but it's still a hell of a lot of money. At least for "normal" people's use :)
Someone else has likely pointed this out, but two of those ten gig switches would have still only left you with two ten gig ports after connecting them to each other.
People doing a renovation or new construction, please look at multimode fiber. It is much cheaper than copper (cabling, transceivers, NICs) and more reliable. Anything above 1G over copper is a last resort only, and it's not guaranteed to work reliably. I don't get why these RJ45 10G products still come to market; this is not used at all in the enterprise environment, where only DAC and MM are in use. I have never seen 10G over copper in a datacenter. (Also, there are 16-cage 10G switches on the market for a third of the price of the sponsor's proposed product.) On a side note, also look at MPO/MTP cabling with LC breakouts, which allows multiple 10G connections over a single cable.
I'm definitely saving this video so I can find some budget 10G networking gear, especially since I'm a tech content creator.
You can use a 512GB NVMe drive as a cache for the hard drives, or even a 1TB one.
Great video once again! Can't wait to talk to you on the meeting.
No point in watching when we learn he's running a $600 ethernet switch.
Ruined it. Try again.
Well I thought I made it clear that that one component wasn't what I was calling a budget setup. I recommended what I was originally planning, which was to have two of the smaller switches. The majority of what I did in this video (connecting a PC to a NAS with a 10Gb connection) is do-able with that setup. But by all means, you're entitled to your opinion.
@HardwareHaven Thanks for that. I didn't get that far in the vid; I heard that and away I went.
I recently purchased the smaller QNAP switch (terrible names) and it's a winner IMO. I move a lot of very large files between my workstation and NAS, and while I guess it's not a requirement, it is a very nice quality-of-life upgrade. I had a built-in 10G NIC in my workstation, so I only needed to add one card, which cut down on cost. The 10G card I bought has a little fan on the heatsink that I'm hoping doesn't instantly die. Anyway, I can also recommend the smaller switch for a setup where most things are fine connected at 2.5G but you need (or want) certain traffic running over 10G. Cheers, Christopher
If you have dual cards you can round-robin (old internet/intranet style) a 10GbE network.
You can put two cards in one system and use pfSense.
I have two multi-function enterprise cards (not so cheap) and a copper cable running between them. It works well and I get full speed to my SAN.
I have fibre, but I'm in a rental and will be moving, so that can wait.
I also have an old Dell PowerConnect; I had to buy the 10GbE expansion card for the back, which came in at around 150 USD (I'm in the UK). It's sat doing nothing because, well, you guessed it, I'm moving, so it's not going into the rack.
I have some HP enterprise managed switches, but they are limited to just four 10GbE ports, and you lose some 1GbE ports using them; with 36 or so ports it's not a bother, but for some it may be.
Head off to the recycler and see what they have. But you're going to be using more power.
😮
There are a few videos there on how to do that list of setups 😊