Cat6, Cat6a, and Cat6e are all indeed able to support 10Gbit copper Ethernet. The difference is the distance; only Cat6e can go 100 meters at 10Gbit. You should be fine if the server and workstation are nearby. You'll be ok. Great video.
Thanks for the information. We can see a detailed table on the Wikipedia page for "Cat6" or, better, "Twisted pair".
Oh and: "The American Cat 6A standard is less stringent than the European Cat 6A standard. Cat 6a is not an official standard." and "Currently Cat6e is not recognised as a standard of data cabling, it is not ratified by any of the standard boards regulating the structured cabling industry."
And about something different: I don't see a benefit in putting more and more "suspect" hardware into a server that otherwise even has redundant hardware by design. Out-of-spec cabling will not make it worse (only slightly ... hehe). Especially in a piece of hardware that one uses to do his daily job, in other words: that is a necessary part to earn your income!!! This server has some strange botches and I don't think I like this concept (cheap and botched) at all. But it doesn't matter. Years later this hardware was exchanged for something more reliable.
Anyway, as you said, Andrey: Great video ... and a great series about this server:) Even if some aspects of it made me cringe a lot, hehehe.
Fun fact, you can actually run it over Cat5e if your cable is really good and the terminations are solid. It's not production-quality, nor is it reliable, but... yeah!
That ghetto test rig made me happy. I love seeing sketchy shit like that work.
It turned into a Druaga1 video for a bit there.
Booting Ubuntu with such little RAM makes me cringe though :P
@@blakecasimir yeah, especially modern (16.04 is still supported) default Ubuntu. It might work with Lununtu, but the 1GB was stretching it.
@@MrShambles Hey smokers, druaga1 here and today we're going to build a Windows 95 PC out of nothing but floppy disk shells. I know, you thought it couldn't get ANY crazier than the Lego PC build... But this is where the real fun begins.
I always have the greatest respect for people who turn on a PC with a screw driver or pair of tweezers :-D Makes me smile every time.
5:05 -- That sounded like someone slipping on a banana peel in a cartoon!
Ahh!! At 3:14 you knocked off the mini-capacitor D:
Well, that is certainly not "mini" by SMD standards. But he probably ripped off the pad with that mistake. RIP.
I noticed that too xD
I'm curious, it's been a few months, so has that cooling solution worked for you?
Yep, I haven't had any issues!
Hey, you upgraded and got significantly better results. That's a win in my book. What's cool is that you've recognized the bottleneck and still have room to improve; if you choose to upgrade even further in the future, you have that option.

I was thoroughly impressed you were getting 37C max at idle for an air cooler just sitting on the CPU with 7-year-old TIM. That's actually impressive. What set me over the top laughing was when you topped off your setup explanation with "Using Ubuntu on a flash drive". XD. You sound just like me, Frankenstein-ing things together just to test things. And if you blow on it hard enough, something might explode. What I thought was most refreshing is that someone as knowledgeable as yourself stumbles on how to pronounce Ubuntu. I have the same issue, and every time I say it in front of a Linux expert, I've been corrected 30 ways to Sunday.

I like the idea of using those mining cards. Waste of money or not, it would be super entertaining to see your journey to get them to work. Even if it was a 10-part series over the span of months, I'd still be super interested. Love that you got the HP server running; I have one very similar I might try to bring back to life because of you. Good luck on your future endeavors, I can't wait to see what's next.
Screenplay written by Druaga1 13:05
That makes every Druaga1 pizzabox setup look professional.
@@Rudzge True. 4K video playback and ample lighting make it clear that this is a AkBKukU video. :D
@@AdamChristensen 420
I enjoyed your fabricobbled heatsinks and fans. Good job.
Two things
1. You likely didn't have to cut & splice the wires; the crimped connector likely would fit in the other plastic connector. You just have to poke the retention tab with a sharp knife until it releases.
2. You probably didn't have to melt that connector that popped out; just use a sharp knife or razor blade to slightly tilt out the metal tab on the end of that metal crimped connector. I've done this several times over the years, usually making custom audio cables for CD-ROM drives and connections from those devices to a sound card or motherboard.
Careful with damaged ceramic caps on the boards. They need to be replaced before powering on; they short out internally if damaged. Invest in Kapton tape and put it under where the screw head is located.
lovely test setup
you could use a thermal camera, if you have one, to see the heatsink temp.
btw you could max out the speed of the 10GBase-T with an M.2 NVMe SSD on both ends.
If you want to ghetto rig an even more interesting cooling solution, the heat pipe pads from laptop CPUs/GPUs might give you that 1-card height and still pull the heat away, but those are only meant for dissipating ~40W TDP max.
Also, have you ever done a video on your Linux workflow? Resolve looks interesting for an NLE, and I remember you said you used Blender before
This was actually the last thing I wanted to get done before I talked about my workflow again. So I'm going to do that again some time soon.
BTW there are multiple standards for Cat6a, and the version with shielded twisted pairs is the spitting-image clone of Cat7. The only difference is that Cat6a does not require shielded twisted pairs and Cat7 does, so the two are the same when you get 6a with fully shielded twisted pairs.
One of the caps in one of the boards you are using looks lifted. Top board @4:34, on the column of 6 caps in the middle. You might wanna take a look at that...
That one is just rotated. I prodded it before starting the video, it is firmly attached, just at an odd angle.
Cool! Welcome to 10GbE! I took the SFP+ route myself with a MikroTik switch.
This weekend I'm gonna work on my 10GbE NICs too, with 40mm Noctua fans. I have 4 passively cooled NICs, and in my workstation one already overheated and Windows dropped the card 😱 I also make videos about my 10GbE journey 😄
This is where a CNC machine would come in handy. You could likely make something that extends above the height of the card (you have the deadzone where the full-height backplate rises up); it might not even need a fan with a big enough lump of metal 😁
That IHS makes me think you could possibly take a step further, and possibly delid the main chip of the NIC to try and direct mount and improve cooling efficiency. Though i suggest practicing with a *dead* one, first.
I think it might be a soldered IHS, it has the hole in the left corner
Held together with Bubblegum :) it is... beautiful....
You could put a pin header on where the overheat LED goes and connect your front panel power or HDD LEDs to it.
I know you have a 3D-printer, so I'd have fired up something like Fusion 360 and then just design a case that would blow over all of the PCB (think single-slot blower GPUs (like the Asus 7800 gt, just the other way around). Then just print it and see how badly it could go. Then just make two or three revisions where the blower inlet is on the top or the bottom to stagger the fan inlet position.
But yeah, that's just me - AND with PLA you might even melt the "covers" onto the cards by accident.
I use 3d printed 'brackets' to mount noctua fans to my HBAs and NICs outside of server chassis, some very nice people have created models for many cards out there and some of the better ones can be adapted to work on others. I have a few HP530SFP+ cards in various systems and they get VERY hot even at idle and the same can be said for all the HBAs I have.
Can you bond the 2 connections on each card?
you can yes
just a thought: if you add an air-duct, you can place the fan (or, perhaps, one with larger intake) on top of the card while still having the air flow go along the card instead of across the card and against the main-board.
If you couldn't get a reading in software and you're thinking about soldering wires to extend the overheat LED, why don't you just put a thermocouple against the heatsink and check things that way?
Need to find a Google dumpster to go diving for parts.
A few bypass caps knocked off will most likely not be enough to cause a problem. Those larger ones on the first card are more likely to be a deal breaker.
You can put a temperature sensor on the card, bring it into an overheating situation, and then you have the max temperature. You can read out the sensor with a Raspberry Pi or a display with WiFi on it.
the best test bench ever
I myself watercool my three *LSI/Avago/Broadcom MegaRAID SAS 9361-8i* RAID cards - which get over 65°C without cooling - with two *Alphacool MCX RAM* blocks each. Works perfectly. Under load the microcontrollers stay stable at 45°C.
Remove over temp LED, install relay, have relay drive loud alarm. Cheap smoke detectors are loud and the push to test button is simple.
Love the server videos, would be great to get the storage bottleneck finished and watch it flyyyy
The storage bottleneck could be solved by making an NVMe array, or some caching in the server's RAM.
I'm using too many slots for other stuff to be able to have multiple NVMe drives. I could possibly fit one if I don't get a USB-C card and use my desktop for reading the drives from the camera. I don't know if there are any true multi-slot NVMe adapter cards or if I'd only be able to get one per slot in the system. Whatever the case, NVMe is going to be the ultimate solution in the end. It's just not cheap, and I have a viable alternative right now, so there isn't a lot of incentive at the moment.
A RAM disk is not the solution. Even if I had 128GB in there, I'm still transferring around 500GB per video. So that would give me just a little bit of a speed boost, then drop back down to the same speed, if it could even smoothly transition over.
@@TechTangents A RAMDisk wouldn't work, but caching should. The RAM cache will take all of the data coming in and let you access it quickly while it's still being offloaded to the hard disks.
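For what it's worth, the write-back caching described in that reply is something Linux already does with the page cache, and its aggressiveness can be tuned. A sketch of the relevant knobs — the values here are purely illustrative, not a recommendation:

```shell
# Let the page cache absorb large incoming writes before flushing to disk.
# Needs root; make persistent via /etc/sysctl.conf if it helps.
sysctl -w vm.dirty_ratio=40             # up to 40% of RAM may hold dirty pages
sysctl -w vm.dirty_background_ratio=10  # start background writeback at 10%
```

As the original reply notes, this only smooths bursts; a sustained 500GB transfer still ends up limited by disk speed once the cache fills.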
What beautiful sacrilege haha at 14:00
5:56 maybe a 3D-printed part to help channel the air flow
Are you still going to LTX? Haven't heard you mention it in a while so I thought I would ask. :)
I am for sure! I don't think I've ever mentioned it in a video, I guess I should. I don't do a lot of videos where I could naturally work that in. So I don't know when exactly that will happen.
@@TechTangents I seem to remember you mentioning it once in passing in a video but it's been a WHILE back. Linus mentioned it on the WAN Show as well but that's also been a couple of months ago. :).
Edit: Of course, I COULD be wrong. The old memory, she ain't what she used to be! Too many solder fumes over the years. LOL!! 😁
Seems like 2.5 or 5 is a good in-between for most people. I know only a few of my drives can do that much speed and they're not used for mass storage.
You don't have to cut the 2/3-pin fan connectors; you can push the pins out of their plastic housings with a needle and swap them.
I love your videos more and more !!
for temperature monitoring you could get something like a DROK 100096, stick the temperature sensor into the heatsink, and then have a raspberry pi hook into the display and decode the pattern of segment illumination back into numbers, and then have that info accessible over a tiny network html page
or just hook a thermistor right into a raspberry pi but that's harder to calibrate
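Decoding the segment display is one route; if a Pi is involved anyway, a cheap 1-wire DS18B20 taped to the heatsink may be simpler. A sketch of the Pi side — the sysfs path is the standard 1-wire layout, everything else (port, page contents) is made up for illustration:

```python
# Read a DS18B20 taped to the NIC heatsink and serve the reading as a
# tiny HTML page. The /sys path is the standard Linux 1-wire interface.
import glob
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_w1_slave(text: str):
    """Extract temperature in degrees C from a DS18B20 w1_slave dump.

    Returns None if the CRC line doesn't say YES or no t= field exists.
    """
    if "YES" not in text.splitlines()[0]:
        return None  # bad CRC, ignore this reading
    m = re.search(r"t=(-?\d+)", text)
    return int(m.group(1)) / 1000.0 if m else None

def read_heatsink_temp():
    # DS18B20 sensors appear as /sys/bus/w1/devices/28-*/w1_slave
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        with open(path) as f:
            t = parse_w1_slave(f.read())
        if t is not None:
            return t
    return None

class TempPage(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"<html><body>NIC heatsink: {read_heatsink_temp()} C</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# On the Pi itself: HTTPServer(("", 8080), TempPage).serve_forever()
```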
Druaga1 would be proud of that core2 setup.
Just for fun, you could have bought some big BGA heatsinks that are as wide as the network card and drilled holes for those inductors or tall parts if needed. Initially I thought of buying a big rectangle or square as long as the whole card, to replace both heatsinks with one big heatsink... but I doubt it would be beneficial thermally.
Digikey has 70 x 70 mm (or smaller) heatsinks for $10-15; for example, search Digikey for ATS36399-ND or ATS1761-ND or ATS36324-ND ... 70 mm x 70 mm with 12-20mm fin height.
For big long heatsinks, look for example at ATS2208-ND (288mm x 188mm, 16.5mm tall) or ATS2197-ND (254mm x 101mm, 14mm fins, nice fins with "ribs" for more surface) or ATS2193-ND (254x100, 10mm fins).
Also, it looks like you could use plain BGA heatsinks and simply use a thermal adhesive to glue the heatsink to the chip instead of drilling and locking the heatsink to the card... there's A LOT of space around the chip... but it would be a pita to remove the thermal glue afterwards.
Another thing: this PHY (not the actual chip that does all the work) will run hotter the longer your cable is. SFP PHYs have no heatsinks, as SFP is limited to an exact amount of energy it can draw.
that all copper heatsink look good af
You can push a zip tie through the hole in the heat sink and lock it with another zip tie on the other side then cut off the excess.
Just curious - what network filesystem are you using?
He mentions ZFS towards the end.
@@waltherstolzing9719 ZFS is not a network filesystem
I use NFS as the communication protocol if that is what you mean.
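For anyone curious, a minimal NFS setup for this kind of 10GbE link might look like the following — the path, subnet, and option values are illustrative, not what the creator actually uses:

```shell
# Server side: export a dataset to the workstation's subnet.
# Add a line like this to /etc/exports, then reload:
#   /tank/video 10.0.0.0/24(rw,async,no_subtree_check)
exportfs -ra

# Workstation side: mount with large read/write sizes so each RPC
# carries a bigger chunk, which helps throughput on fast links.
mount -t nfs -o rsize=1048576,wsize=1048576 10.0.0.2:/tank/video /mnt/video
```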
There is no risk of the computer getting damaged by physically damaged cards (especially damaged in this way).
I have repaired a lot (definitely over 100) of PCIe cards.
At worst you will get the 3.3V or 12V lines shorted, which will stop the computer from turning on - no risk at all.
PCIe lanes are decoupled using capacitors, and only the AC part of the signal goes to the other side; any DC stays on the sending side due to the nature of capacitors. (This way a short makes no difference, as it just creates an LC circuit, and it is very carefully fine-tuned to avoid ringing, so you do not risk damaging the PCIe slot or card.)
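A back-of-the-envelope on why those series caps isolate DC faults — treating the coupling cap into the receiver termination as a simple RC high-pass, with an assumed (typical, not specified anywhere in the thread) 220 nF cap and 50 Ω line:

```python
import math

def coupling_corner_hz(c_farads: float, r_ohms: float = 50.0) -> float:
    """-3 dB corner of the RC high-pass formed by a series coupling
    capacitor driving a terminated line: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 220 nF into 50 ohms: corner lands around 14.5 kHz.
f_c = coupling_corner_hz(220e-9)
print(f"corner ~ {f_c / 1e3:.1f} kHz")
```

PCIe signalling sits in the GHz range, many orders of magnitude above that corner, so the data sails through while a DC short on the far side never reaches the sender.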
Got three of these cards. All three are noisy. Two are in servers upstairs and I don't give a f... The third is in a desktop and I had to put in a fan that doesn't quite fit in 1 slot. Where did you get your fan?
Who would get annoyed at such ghetto masterpiece work like this?
You could also use a scrapped laptop camera to record the network card to see if you get an overheat led
What you need is SpeedFan 4.52 to monitor your temps; it's what I've been using for the past 3 years. It's a freeware download.
It looks like the "missing"/"shifted" components are caused by a bad reflow.
Cat6 is fine for 10Gb. You're just limited to 55 m. To get the full 100 m you need Cat6a or higher.
One other bottleneck to be aware of is MTU size/jumbo frames... if you don't increase frame size then you're losing a fair amount of bandwidth to overhead.
Sounds like we'll get a video: "How to water cool your network cards" 👌
If this doesn't work for you, you could always try to get an old single slot blower style cooler for an old gpu and see if you can't get it to work on here
"wow, is that your gaming rig? Sick!"... Not too shabby. Heheh
you could say not too "Shelby" xD You understand? no? aight imma head out
(His name is Shelby for those who didnt know)
I just bought a pair of SFP+ cards to connect to my Unraid box, and after having to FORCE drivers to work in Windows 10, it works great. BUT... the cards do overheat after about 15 min of 1+ gigabyte-per-second transfer rates. Yea, they are a little nuts.
You're confused about what the 10 in 10BASE stands for, and where it goes. 10BASE-T is 10 Mb/s over (t)wisted pair.
I'd be so tempted to build a remote monitoring solution of an esp8266 and a CdS LDR for that overheat LED. Sure, there's a lot of ways it could be done, but that just sounds like the hackiest of them all
Shooting 4K to SSD on the BMPCC4K I see! :)
I can feel your pain, shooting 4.6K on my UM4.6K generates gigantic files even at 24fps ProRes Proxy.
As for the caps: you can probably shear half of them off and it'll work, most are redundant decoupling on these boards.
Why are you using ubuntu in your systems? Is there any advantage for you or are you just accustomed to it?
Simply add an expansion-slot rear-exhaust cooling fan. It will do the job better.
Why not transfer the raw data to an ssd on your editing rig, edit locally and when done, dump it all on the server? (10GBE still sweet though)
Can u make a video on ur kubuntu setup with focus on konsole setup u have?
I love what you do! I do component level repair and diagnostic work on medical equipment professionally. Soldering is my specialty. Where are you located, If you are close enough, I would love to help you!
Awesome video :D
Cat6 is good for 10G up to 55 meters over copper
I just decided to remove the faulty fan and fill the empty area with some thermal pads, and I haven't had any trouble with overheating
“ That’s thermal paste from 2012, deal with it!” 😂😂 Love it, those stickler comments sure are annoying.
So you're saying you didn't use a 120mm radiator for a CPU to cool down your ethernet card?
If it's stupid and it works then it ain't stupid.
You should really take advantage of the dual-port setup of the cards by setting up an EtherChannel on them.
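On Linux, that kind of link aggregation might be sketched like this — interface names and the address are made up, and the switch end has to be configured for LACP too:

```shell
# Bond both ports of the NIC into one logical LACP (802.3ad) interface
ip link add bond0 type bond mode 802.3ad
ip link set enp3s0f0 down
ip link set enp3s0f1 down
ip link set enp3s0f0 master bond0
ip link set enp3s0f1 master bond0
ip link set bond0 up
ip addr add 10.0.0.3/24 dev bond0
```

Worth noting: a single TCP stream still hashes onto one physical link, so bonding helps many parallel transfers more than one big file copy.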
Get an IR thermometer to check temperature on the radiators and the underside.
What operating system are you using on your desktop?
*edit: Never mind, I checked around on your channel. Thank you!
So much more interesting than: Linus plays with thousands of dollars of free tech most of us will never own.
Is that the metcal mx-500?
I'm looking forward to the video rendering on the server!
Cat6 without the "e" only supports 10Gb over short runs; at full distance it's rated for 1Gb transfer speeds
I bet there is more space behind the cards, between the pcie slots and the cpu area. So, a larger radial fan and a plastic shroud, behind the card...?
Your Core 2 looks like one of my first PCs! The case is, in fact, identical :D
0:46 voice crack?
Try setting up a RAM drive of sufficient size, transfer your data there first, and then do a tx over Ethernet... if your RAM is sufficiently fast and connected with high enough bandwidth, this should work out...
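On Linux that staging step could look like this — the size and paths are made up, and tmpfs only consumes RAM as files actually land in it:

```shell
# Stage footage in a RAM-backed tmpfs before pushing it over the wire
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=24G tmpfs /mnt/ramdisk
cp /media/camera/clip.mov /mnt/ramdisk/
```

As the creator points out elsewhere in the thread, though, this only helps when the transfer actually fits in RAM.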
Good to know, since the ISP Bahnhof here offers 10 Gbit internet :D
How many of these are needed and where are they gonna be installed? If you have 2 computers and 2 servers you need 4, right?
the term you are looking for is "radial blower"
en.wikipedia.org/wiki/Squirrel-cage_rotor
You should have used that nylon cable mesh sleeving for the cables.
Umm, if the temp is that big of a deal, why not just get a thermocouple, attach it to the heatsink, and have it run to an LCD on the server and desktop so you can see an actual temp figure at a glance?
I bet you could make a custom aluminum heatsink if you only knew a CNC machine that could make anything in the world... Maybe if some Canadian who had a Haas Milling Machine could take some dimensions off of the more broken ones, and then make a custom heatsink for these cards. I bet that might work.
Or maybe finding a huge one in/on a old amp, and cutting out holes with a drill for all the components that are too tall, and making it that way. :D
There were things around in the 90ies called Peltier coolers… Maybe they still exist, and maybe they are a good way to cool these cards (only they have a very bad efficiency…)
Did you get these questionable cards from wish?
Set up a static team on both sides and use 4 Cat6 cables. She'll be a screamer :)
If it is only between your editing workstation and NAS, then it is not an internet connection, it is a LAN connection
❤️
Nope. Too much power and heat.
Get fiber optic instead!!!
You may have damaged the bearings by spinning it so fast with compressed air.
+1 sub! :) Nice job! Thank you for this vid! :)
This is all super janky and i love it
Try a copper heatsink with fan.
You did something wrong - you cannot under any circumstances get 2.6GB/s from a USB-connected SSD. You read the file from RAM (cache).
0:04 don't you mean 10Gbit NETWORK connection; Not internet connection.
you need to switch to pci-e based storage in your server!