The whole reason I built my NAS with a full ATX board was for PCIe slots: one for an HBA, one for a GPU and one for an SFP+ NIC. It didn't hurt that I needed space for 8 drives, and at the time there were no smaller cases that would support that many like there are now. My next build will hopefully be able to drop the GPU, but I'll still need two slots, so I'll be sticking with a big ol' NAS.
@@gorillaau It's kind of sad how limiting modern consumer boards and low-TDP server stuff are nowadays. It used to be fairly easy to build something in the 65W TDP range on a full ATX board with lots of PCIe slots, but it seems that day has passed.
@nadtz Even cases are sometimes atrocious. Yep, full-size ATX... but where are the hard drive bays? You guys forgot the hard drive bays. Yes, I can have two M.2 NVMe(??) interfaces, but seriously, what's the damn point of a big case with little storage? Oh, it's to show off the blinky lights!
@@gorillaau Agreed. Luckily the Fractal Define R5 and XL are still available, but it's kind of sad how few options there are for HDDs in ATX these days.
@@nadtz The Fractal Design Node 804 is also a good choice if you can live with a mATX motherboard. Nine 3.5” drive bays and two 2.5” drive bays. Also room for a slim optical drive.
This is exactly the video that I'm needing. I have a QNAP that runs an AMD CPU, so apparently I cannot use Unraid on it, which is a shame. It also doesn't have NVMe, although it does have SATA M.2. I also have a computer that I just built that I'm thinking about converting over to Unraid and using as a daily driver along with my local media server, storage and Docker containers. I think I might go down that path, get everything set up, and then look into a different chassis in the future. I will definitely be utilizing the M.2-to-SATA adapter; that looks like it's going to be very helpful. Thanks for another great video and a ton of info dumping!
A DIY NAS is fairly basic and superior the vast majority of the time when compared to a commercial off-the-shelf branded NAS. You have many advantages when you DIY: you will have a better CPU and much faster networking options, plus expandability and upgradeability. A DIY NAS blows away commercial alternatives if people just do a bit of homework. Plus they can save money and build 2 NAS units for the same price most of the time, which gives them vital redundancy.
@@wojtek-33 Preferences - the ability to run 40G very easily outweighs ease of use on the software side IMO. Just run Debian/smbd/OpenMediaVault and keep it simple. The ZFS caching tiers can add some value too, with a ZIL/SLOG on NVMe. In the end it is just data - I like to just image the boot drive with Guymager and then back up files; cloning VMs is also a good way to go, along with snapshots and the ability to roll back with QEMU and a VM manager. DIY rocks just for the value and the ability to scale and upgrade.
@@wojtek-33 Hyper Backup is about the only Synology thing I use. I have a Synology NAS, but I just run everything through Docker Compose. I could copy my docker folder over to any Linux-based server and run a command to boot up all my stuff; all I'd have to do is change an environment variable for the root volume path and maybe some others. More often than not, I find Synology's UI gets in my way when it comes to hosting services; for instance, I had to edit the .mustache files so their built-in nginx runs on 81 and 444 to free up 80/443 for the LSIO SWAG container (which is far superior as far as I'm concerned and comes with a ton of preset sample proxy configs for basically anything you might want to host). That said, I'm looking into alternatives, as I don't like the idea that if my NAS dies I have to install some Synology app on my desktop or get another Synology NAS to be able to unroll my backups.
Hi 👋 I am planning to build a NAS server, and I chose to buy an i5-10400F paired with an H470 board, 16GB of 2666MHz RAM, three 4TB hard drives and a 550W Gold EVGA PSU. All three HDDs will be connected via a SATA PCIe card, plus a GT 720 GPU for picture output. Do you think the above specification is fair enough?
What can I create for less than $200 or £153.72 and still get the M.2 on the back? I mean, I have the hard drives, but I also need a microSD and SD card reader on it. Is there a way to add USB ports to it?
Just go for standard ATX and you will have 4-6 SATA ports, 2 or 3 PCIe slots, and an old-style case; most will have 4-8 drive bays, and some will have CD bays you can fit expansion cages into. My main server is a 4th-gen i5 with a 10Gb NIC, 4 SSDs and 2 HDDs, just running Plex and my work files, photos and video, on Windows Server 2012 R2 so I can use tiered storage. My cold storage runs a 4th-gen i3 with a PCIe SATA card and 10 drives, and has a dual 1Gb NIC, and my backup server is my old Core 2 Quad rig.
There is not a single 1-to-5 SATA port multiplier available on Amazon (US) with a rating of 4.0 or better. The reviews for most of the 3-star-and-below cards are useless. Has anybody had positive experiences with one of these cards, or a useful comment about issues (besides the degradation of throughput if more than 3 drives are connected)?
I have a motherboard with CPU, RAM etc. that has 8 SATA connectors... but it is OLD, as in 8+ years old. Still works fine; is it too old though? The board is a Gigabyte P55A-UD4 running a gen 1(?) i5.
Can't currently look it up, but what to check is whether the ports are all full SATA III 6Gb/s; a lot of those older boards would have only 2 or 3 high-speed ports and the rest would be half-bandwidth 3Gb/s ports. If you can't read the mobo silkscreen easily, they tend to colour-code them differently too (e.g. grey for 6Gb/s and blue for 3Gb/s).
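If the board is already running Linux, you can also check what each port actually negotiated instead of relying on the silkscreen. This is a rough sketch: the sysfs paths below are what libata exposes on mainline kernels, but link numbering varies and empty ports report "<unknown>".

```python
# List the negotiated SATA link speed for each libata link (Linux only).
# 1.5/3.0 Gbps links are SATA I/II ports; 6.0 Gbps links are SATA III.
from pathlib import Path

ata_links = Path("/sys/class/ata_link")
if ata_links.is_dir():
    for link in sorted(ata_links.glob("link*")):
        spd_file = link / "sata_spd"
        if spd_file.is_file():
            print(f"{link.name}: {spd_file.read_text().strip()}")
else:
    print("No libata sysfs entries found (not Linux, or no SATA controller)")
```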
I'll tell you right now... if you want to run a serious NAS, do NOT use the SATA controller built into your motherboard. I have had so many issues with those controllers. Spend a few extra bucks instead: get a board with more PCIe slots and a good add-on PCIe-to-SATA controller like an LSI card, etc.
I have a second-hand mini PC (i5-8500T, 16GB DDR4-2666, 256GB NVMe) as a server, and I'm unsure whether to buy a Synology, run TrueNAS in a VM on the server, or just create a ZFS pool directly in Proxmox. Any recommendations?
@@abel4776 I haven't decided on anything yet because I have to do a total renovation at home and I am going to wait until then, but with a likely second child on the way by then, I think I'll take the NAS route because I won't have much time to play with it.
@@barygol Well, if you think money will be an issue with the hard drive sizes not matching, and you will eventually want to add many HDDs over the years (like 6, then 8, then 12) so you end up with mismatched drives, then you may want to consider Unraid. The catch is that they have a perpetual license right now, but that is changing in a week or two max. Then things change: it will maybe be about 50% per year of what you paid ($59, $89) if you want updates, though you can wait a few months if you want (until Christmas time, and it'll still fully work) - that's what the two lower tiers will cost per year. However, you can get a lifetime license, or upgrade to it later on, but that is going up in price. If you do build now: you can get a PSU with a 6-year warranty, or even 12 years on a higher-end one; a motherboard with maybe 3 or 5 years; RAM with a limited lifetime warranty. Spinning hard drives (rust) will be 3 to 5 years, 2.5" SSDs 3 to 5 years, and NVMe drives 3 to 5 years. If you buy a Synology, at least one line has a 2-year warranty, some have 3-year warranties, but you can now get an extended warranty for up to 5 years from what I understand. Full disclosure: I did buy two Unraid OS keys because I can mix different-sized hard drives and slowly add bigger ones when I want (I just need to get a used LSI HBA flashed to IT mode off eBay). I can't afford to purchase 6-10 of the exact same 8TB drives (I'd prefer 14TB) at the same time right now. I was really against paying when I wanted to use Proxmox and TrueNAS Scale, and with Unraid you have to boot from a thumb drive for now. But I've watched enough YT vids in the last 10 months on XCP-ng, Proxmox, TrueNAS Scale & Core, and Unraid to know that people love Unraid even though it isn't free. I'd look up "Upcoming Changes to Unraid OS Pricing".
They can be used for a few things depending on the choice of software. They can act as a router of sorts, or you can aggregate the links together for more bandwidth. They could also be used for failover/redundancy, or not for more bandwidth to a single client but for more concurrent users.
Always go Platinum on the PSU. Why knowingly waste electricity when you don't have to? We should put our money into efficient products, not into higher energy bills from higher consumption.
I was looking into a simple personal backup device and I found external USB dual-bay enclosures that do RAID 0/1/JBOD, configured with a switch on the case itself - no software needed - and one model says it will sleep if there is no network/data activity for 10 minutes. And I had a thought: I can plug this into my modem and have a file system that I can save to / read from, unplug it so I can sleep at night, and take it with me. Again, this is for a personal setup and obviously not for a company / large family. So I'm thinking it should have a decent life with 2x QVOs in it (RAID 1)? Any thoughts???
I had one of those cases. Mediasonic was the brand, but I bet these days when you buy one of those chassis you’d find the same hardware under at least a half dozen different brand names. When it died, it killed three out of the four hard drives in it. That’s when I decided to build a PC with enough drive bays internally.
@@williamp6800 Interesting... My idea is to plug it in, save all the important stuff and unplug it, more like an archive. Drives are cheap and I can totally stuff my ITX for that, so this idea would be handy.
@@nascompares When I looked at the Exos hard drives they had a SATA version and a SAS version. I'm pretty sure my NAS only uses SATA though, so I guess it doesn't matter. I guess SAS is more for enterprise use.
They’re just fine, both with and without integrated graphics. Anything using the AM4 socket, which means anything with Ryzen in the name except the 7000 series. The 3000 and 5000 series are the better options as they are on the Zen 2 and Zen 3 architectures respectively. You can get a 4C/8T Ryzen 3 4100 for $72 on Newegg. No integrated graphics, but plenty of power for a small to medium NAS. My own personal preference right now would be the 5600G with 6C/12T and integrated graphics for $135. The 5700G with 8C/16T for $200 is a nice step up, but serious overkill for a small NAS.
@@MrRakushin It’s nice to have but not essential. The Pro CPUs don’t seem to be available as a retail product in North America, only in prebuilts as far as I know. I wanted ECC for my NAS and ended up going with a used Supermicro X10SLM-F and a Supermicro X10SL7-F. The latter has an LSI HBA built into the motherboard, so it natively supports 14 SATA drives. Old enough that they are DDR3. These days you can pick one of them up for $70-$100. If you’re patient, you can find a real bargain occasionally. A couple of months ago I got an X10SLM-F, a Xeon 1270 v3 4C/8T CPU, 32GB of ECC DDR3 RAM, and the stock Intel CPU cooler for $90.
The reason Intel is typically preferred for a NAS is that Intel CPUs with integrated graphics have historically been better supported for hardware transcoding than AMD APUs, and I think there are still issues getting hardware transcoding to work on AMD CPUs (or at least Intel is easier).
I built my 32TB NAS around a Supermicro X10SL7-F, which I got with a Xeon CPU and 16GB of ECC RAM for $55 on eBay. Put it in a new basic ATX case with 10 drive bays. Using SAS drives, NVMe as cache and a 10Gbps RJ45 card, it performs superbly well and only uses 20-100W depending on load. Total cost about $700.
F*k -fuel- power efficiency. Even at European energy prices the difference between a cheap PSU and a decent one will be negligible. What is really important is that a cheap PSU is much more likely not only to croak at the most inopportune moment, but also to take along a few components, like the mobo and the drives, to keep itself company on the trip to the netherworld.
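For a sense of scale, here's a back-of-the-envelope comparison. This is only a sketch with assumed figures (a 35 W DC load, roughly 82% vs 90% PSU efficiency at that load, and €0.30/kWh); swap in your own numbers.

```python
# Rough annual running-cost difference between two PSU efficiency levels.
dc_load_w = 35          # assumed constant DC load of the NAS
price_per_kwh = 0.30    # assumed electricity price in EUR
hours_per_year = 24 * 365

def annual_cost(efficiency):
    wall_watts = dc_load_w / efficiency          # AC draw at the wall
    kwh = wall_watts * hours_per_year / 1000
    return kwh * price_per_kwh

cheap, platinum = annual_cost(0.82), annual_cost(0.90)
print(f"budget PSU:   ~€{cheap:.0f}/year")
print(f"Platinum PSU: ~€{platinum:.0f}/year")
print(f"difference:   ~€{cheap - platinum:.0f}/year")
```

With these assumptions the gap is on the order of €10 a year, which is why the reliability and warranty arguments tend to matter more than the efficiency sticker for a low-draw box.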
I think a lot of the problems and solutions outlined in this video stem from two sources: the case and the board. If you accept that, you can make most of them go away by buying a proper case and motherboard and by not trying to use ITX for servers. ITX boards are not designed for that purpose, and unsurprisingly you are finding they don't do very well at it either. Buy a proper case. Buy a full ATX motherboard, one with 4 or 5 PCIe slots, 2 or 3 M.2 slots, and full PCIe lanes and speeds. ITX boards are for micro desktop PCs, not servers. Stop making problems for yourself over "aesthetics". If the wife doesn't like looking at it, put it in the attic.
I don't understand why anyone would build or buy a NAS that has its own computer inside. Why wouldn't you just build yourself an inexpensive home server and control the NAS through it?
My server for example - running unRaid - hosts all these services:
- 30+TB NAS
- Windows 10 VM
- Home Assistant VM
- CCTV DVR
- Plex Media Server
- Media transcoder
- Web application server
- Cloud IDE host
- Multiple Minecraft servers
- Continuous cloud backup
Among other things... So a "NAS" isn't always just a little box with a few HDDs to store files on. Many of us call our machines a "server" but don't have space for a server rack and corresponding cases, so we utilize these "NAS" cases to build a home server that runs A LOT of different services.
Lol, there's nothing wrong at all with using a PCIe slot for an HBA. Running 9 SATA cables is a pain in the ass; two breakout cables are way cleaner. If you have half a brain and just plan out your build you'll be fine. I have a dual 10-gig NIC and an HBA and it's a great setup because I have zero need for NVMe. You absolutely can make use of a good portion of 10 gig if you're running 8-10 drives. Please don't use an M.2 SATA adapter in a NAS.
I’m thinking of doing this for a NAS/server: CPU: Ryzen 7 2700X; motherboard: ASRock Rack X470D4U. I don’t know what OS I want to use, but I want to be able to access it from anywhere and maybe run VMs off of it.
One thing is a bit inconsistent here: you talk about 13th-gen Intel CPUs, while in other videos you talk about QNAP and Synology (or others) that have... an Atom inside. So if we're talking DIY, we should not really be looking for the best-performing CPU for the NAS; others have proved even an Atom can do it. Also: a 750W PSU, seriously? If someone really had that kind of power consumption, the monthly electricity cost for such a NAS in the UK would be around £180 ;-) And finally: a GPU in a NAS?
Really, £180 just for that? If it's powerful enough you can run Proxmox and literally run a NAS plus numerous VMs and containers now. It will take a lot of research and work though. The problem is you want a CPU big enough to handle it, but not so big that you're wasting money. A GPU could be used for transcoding videos for Jellyfin or Plex and playing them. If you built around a Ryzen 5000 series, for example, you don't have AV1 decoding (the 6000 series and up have it), whereas even an 11th-gen Intel i3 does, so you could have a cheap low-power mobile CPU, do AV1, and also watch YouTube with lower bandwidth if I understand correctly. Intel chips with Xe graphics (11th gen and newer) support AV1 decode, I think. Newer AMD and Intel chips can stream 4K at 60p at around 5.5 MB/s, which is like a quarter of what it took 2 years ago. That's why I want to build a new NAS right away, but I'm thinking of buying a mini PC first so I can stream YouTube in a browser and watch it at 5%, 10%, 15% or 20% faster, instead of the SmartTV's only option of 25% faster. I could also comment all the time with my Logitech K830 keyboard, which I never, ever use; right now I have the SmartTV on YouTube while I type comments in Chrome on the laptop, which is annoying. Just saw MSI makes a mini PC, don't know how I missed that. I don't think I want one with a Chinese BIOS, even if it's better bang for your buck. So yes, try to save power when you can. Actually, if some people got an i7-14xxx or i9-14xxx with a full-size ATX motherboard and case, they may not even need another computer; it may save them more money in the long run. If they are a true gamer that doesn't work at all, though, unless the NAS just has like 6 disks or so and isn't a media server for other people in the household.
@@sbme1147 My NAS, with 4x HDD + 2x 2.5" SATA SSD, an Intel Pentium Silver N6005 @ 2.00GHz and 64GB RAM, consumes around 42W 24/7. My Proxmox box, on an AMD Ryzen 9 7940HS with Radeon 780M graphics, 96GB RAM and 2x M.2 SSDs, consumes 32W with 10+ VMs and 10+ CTs. Of course when you push it, it goes to 100W, but on average it's only around 32W. The HDDs take a lot - each around 6W.
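To put those wattages into money, here's a rough conversion. It's a sketch only: it assumes a constant draw and a flat tariff of £0.30/kWh, so treat the exact numbers loosely.

```python
# Convert a steady power draw into energy and cost per year.
def yearly(watts, price_per_kwh=0.30):
    kwh = watts * 24 * 365 / 1000
    return kwh, kwh * price_per_kwh

# 32 W and 42 W are the real-world figures above; 750 W is what a NAS would
# have to draw constantly to reach the ~£180/month mentioned further up.
for watts in (32, 42, 750):
    kwh, cost = yearly(watts)
    print(f"{watts:>3} W -> {kwh:6.0f} kWh/year -> about £{cost:,.0f}/year")
```

In other words, a DIY NAS idling in the tens of watts costs on the order of £100 a year at that tariff, nowhere near £180 a month.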
A cheap no-name PSU is the last thing on earth you should buy. If you do, you are just looking for trouble. 90% of the time you will end up with random hardware failures, and that is one of the most frustrating and time-consuming issues to solve.
You speak of mistakes, yet you don't know the difference between K, F, KF and no suffix on Intel CPUs... You speak of lane population, yet use CPUs in your builds with a total of 6 lanes at Rev. 2.0... You speak of power efficiency, yet use external PSUs, which are the worst... GPUs are used as accelerators in servers, and NVMe is likewise used as an accelerator in Intel Rapid Storage volumes and for the L2ARC and ZIL in ZFS systems, where the amount of RAM is very important, as is ECC support... Why did you not run a transfer speed test of your machine with all these cards you promoted? More PCIe Rev 4.0 or 5.0 means QSFP+ capability with 20Gbps speeds and add-on hardware RAID cards which support SAS 12Gbps connections. All this gives real transfer speeds of 2GB/s, upgradeable to 4GB/s in the future, for NAS and DAS - all for the same money you spent, plus 50 US$!!! PS: There is no need for hot-swap trays in home-level servers. You can use any case which can serve your needs for capacity, airflow, water cooling, cable management, etc.
Your perspective on RAID cards is a bit narrow. Not everyone is running software RAID, and many people will want to run a hypervisor that does not support software RAID. While you're not incorrect about the losses that come with using a RAID card, you are not factoring in that people may not be running software RAID solutions. It's absolutely not insanity to use a RAID card in one of your PCIe slots.
The K suffix on Intel CPUs isn't related to integrated graphics. It only means they're unlocked, so you can overclock them. You said you can overclock F-CPUs, but the F suffix only means there is no integrated graphics. And they have the same number of Performance-cores and Efficient-cores, regardless of suffix.
So for instance, the i5-13600 has got integrated graphics, but can't be overclocked.
The i5-13600K can be overclocked and has got integrated graphics.
The i5-13600KF can be overclocked, but has no integrated graphics.
There's no i5-13600F, but if there was you couldn't overclock it, and it would have no integrated graphics. :)
An F suffix means the CPU doesn't have a graphics processor. So a KF suffix means the processor is overclockable and does not have a GPU.
Generally you do not want a K-series processor in a NAS. You may get away with an F-series processor, but having a GPU often makes things a lot easier.
With server-class motherboards there's often a separate graphics controller integrated into the IPMI controller. This way you can get KVM-over-IP, which can make life a lot easier, especially if you have a lot of servers to manage.
@@blahorgaslisk7763 There is at least one exception: the i5-12600K has 6P+4E, for a total of 10 cores, but the i5-12600 (non-K) weirdly doesn't have any E-cores, only 6P.
Also, you don't need any iGPU just to use IPMI as there is a small GPU in the BMC SoC. It is however a good idea to have an iGPU if you, for example, happen to need some video transcoding in the future. And it certainly happens, as many DIY NAS people end up using Jellyfin or Plex.
Good comment. Came here to say that video's description of K and F didn't sound quite correct.
Kindest regards, friends and neighbours.
The fact that this is totally wrong in the video damages his credibility significantly. Nor has he commented to say thanks for the correction, or posted a comment to correct it himself...
So only Intel CPUs? Can't AMD work?
I’m building a super simple NAS just to try it out, because I’ve never had one. I had an extra PC case with enough drive bays, as well as enough HDD storage, lying around anyway. I just got a 400W modular 80+ Platinum PSU for cheap secondhand; now all I need is an older i5 motherboard and I’m done, all for about 50 bucks.
Just a reminder of how "i3" is meaningless without generation. The current range of i3 chips have the same or more cores and threads than old i7 chips have.
Also a reminder that the 13th- and 14th-gen i3s are the exact same i3 as the 12th-gen one, just with some extra MHz.
the chips are all the same except faster. ☝️🤓 @@theorganizationXII
One big advantage of using an ATX converter (Pico PSU or the like) in place of a regular power supply in a build is the possibility to use a low voltage UPS / solar controller as a secured power source. Of course this is only suitable for a low consumption server or NAS.
Doing this reduces the number of up-and-down conversions in the UPS chain, and thus the overall losses.
In my home lab I have got:
- 3 servers (small machines: ASRock J50x and Atom 330)
- a QNAP NAS
- a 22 inch flat screen (modified to accept 12v (from 14v))
- a gigabit switch (modified to accept 12v (from 9v))
All this runs on a single 12V 600W LED power supply.
The power supply is not the best, but finding a good high-efficiency one was not easy.
I tried looking for a PC-style PSU with a beefy 12V rail plus converters, but that spec is not often indicated.
The point of this is to be able to put solar equipment (a solar controller) in front, with a battery, to secure permanent power.
I hope this gives ideas to other people for their builds.
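As a rough illustration of why cutting out conversion stages helps: the overall efficiency is just the product of the per-stage efficiencies. The numbers below are assumptions chosen for the sketch (typical double-conversion UPS and ATX PSU figures versus a single 12V rail feeding a Pico PSU), not measurements from this setup.

```python
# Stacked conversion losses: overall efficiency is the product of each stage.
def chain_efficiency(*stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Mains -> online UPS (AC->DC->battery->DC->AC, ~90%) -> ATX PSU (AC->DC, ~88%)
classic = chain_efficiency(0.90, 0.88)
# Battery/solar -> 12 V supply or charge controller (~88%) -> Pico PSU (DC->DC, ~94%)
direct_dc = chain_efficiency(0.88, 0.94)

print(f"double-conversion path: {classic:.0%} of the input power reaches the load")
print(f"12 V DC path:           {direct_dc:.0%} of the input power reaches the load")
```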
Unraid strongly recommends against PCIe to SATA adapters with more than 2 ports (reliability issues). They recommend PCIe to SAS cards with SAS to SATA breakout cables instead.
What they definitely do *not* recommend is port multipliers like the JMB575.
They do recommend the ASM1166 and ASM1164 controllers (*provided you can update their firmware*) because they are stable and work with ASPM.
The JMB585 is also recommended but it doesn't support ASPM and stops your motherboard from reaching power efficient states. Mine wouldn't go below C3 with the JMB585 but it goes to C8 with an ASM1166 with updated firmware.
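If you want to see what your own machine is doing, Linux exposes the kernel's ASPM policy through sysfs; the snippet below is a minimal sketch (the path is the standard pcie_aspm module parameter, but whether it exists depends on the kernel build), and a tool like powertop will show whether the package actually reaches the deeper C-states.

```python
# Print the active PCIe ASPM policy on Linux; the active one is shown in
# [brackets], e.g. "default [powersave] performance powersupersave".
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.is_file():
    print("ASPM policy:", policy.read_text().strip())
else:
    print("This kernel does not expose PCIe ASPM settings")
```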
@@andrebrait What determines whether you can update the ASMedia firmware or not?
Are those the same as a PCIe or M.2 to dual mini-SAS adapter (with SAS-to-SATA breakout cables, enabling 8 HDD connections), or a different thing?
huh I had no idea.
I just upgraded my youngest son's PC (that I built). I decided to recommission it as a homelab server, and I was so delighted that the mobo comes with EIGHT SATA connections. It even has Intel RST, so I could use an Optane NVMe (really cheap now on eBay!!). More importantly, it is a Fractal Design R5 case, AND I have ALL the HDD sleds (including the expansion bay). It holds EIGHT HDDs.
All I needed to do was upgrade the DDR4 memory. A Core i7-9700K (8 cores) is more than enough for a NAS/VM box for my modest home needs.
The fact I can easily install EIGHT drives, with EIGHT SATA ports, is a godsend.
I totally lucked out.
Here is a crazy idea: go for one of those workstations, like the HP Z440/Z640/Z840. They come with Xeon CPUs, ECC RAM, bifurcation support and plenty of HDD bays; slap TrueNAS, or anything else really, on one. You can find them here for 200 euros.
That’s a good idea. I think some people want low power consumption for a home server, and these may not be as efficient (especially the Z840), but they would be much more reliable.
A basic i3-13100 is more than enough for NAS use, having 4 P-cores (8 threads), an iGPU and support for up to 192GB of RAM. It's basically 33% faster than an i7-4770, and gives you 8+8+4 PCIe 5.0/4.0 lanes, which is plenty for a dual 25GbE NIC and 3x NVMe SSDs.
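A quick sanity check of that lane math, as a sketch using nominal per-lane throughput and ignoring chipset-attached lanes and protocol overhead:

```python
# Approximate usable throughput per PCIe lane, in GB/s.
PCIE_GB_S_PER_LANE = {3.0: 0.985, 4.0: 1.969, 5.0: 3.938}

def link_gb_s(gen, lanes):
    return PCIE_GB_S_PER_LANE[gen] * lanes

dual_25gbe_need = 2 * 25 / 8            # ~6.25 GB/s if both 25GbE ports are saturated
x8_slot_gen4 = link_gb_s(4.0, 8)         # NIC sitting in one x8 half of the x16 link
nvme_gen4_x4 = link_gb_s(4.0, 4)         # one Gen4 x4 SSD

print(f"dual 25GbE needs ~{dual_25gbe_need:.2f} GB/s; an x8 Gen4 link gives ~{x8_slot_gen4:.1f} GB/s")
print(f"each Gen4 x4 NVMe SSD can move up to ~{nvme_gen4_x4:.1f} GB/s")
```

So even with the NIC in one x8 half of the slot, the network traffic sits well below the PCIe ceiling, leaving headroom for the NVMe drives.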
I agree. I have yet to find a good board that supports ECC memory for it, but I think the CPU is a good choice for TrueNAS.
@@john_in_phoenix Not cheap, but the ASUS Pro WS W680M-Ace SE is great and even has 8x SATA.
@@geraldh.8047 Thank you, very much appreciated! "ASUS Pro WS W680M-Ace SE" is just what I was looking for, although not cheap, it has everything.
PS: SAS is not high-end any more. When used enterprise 4TB drives are going for a pittance, you can put together a goodly pile of them on a real budget, just as I have. OK, some of them require hours upon hours of reformatting to work with TrueNAS, but for low-income folks that can be worth it.
Just bought 4x 12TB Exos drives used for $80 apiece on Newegg - 48TB for just around $280 after tax. Got them all in, and the health check showed they only have a few hours of use, so I definitely lucked out.
I just want a bunch of storage, and given the way RAID works I can get decent access speed if I plan it right. All the solid-state stuff is over the top for most people, and both costly and risky for errors over time.
Remanufactured Seagate Exos drives are half price and checked to a higher standard than new. It's a no-brainer.
1000%. I've been using used Hitachi Ultrastar enterprise drives from 2013 for probably 4 years now and have had zero issues. Meanwhile I watch people blow like $800 on the "highest end", "most reliable" WD or Seagate or whatever. I'd much rather have a couple of extras sitting by, have a hot spare, and pocket the rest of the money.
A cheap alternative could be the Dell EMC KTN-STL3: 15 bays for 55-120 dollars depending on whether it comes with caddies. Great video with some really smart advice.
The power supply is such an important factor (yet underrated) in a NAS, I would certainly go with gold or above from a name brand. I am partial to Seasonic myself, and pay the premium for platinum. If you ever have a problem you can't figure out, try swapping the PSU.
@@wojtek-33 It's not mainly about the power efficiency, I suppose. I'd say that by massively overpaying for a high 80 Plus label you're buying a bit more reliability, though I suppose that past Gold the returns are diminishing. Nothing beats redundant power supplies.
@@BoraHorzaGobuchul If you're running a high-end gaming or productivity system, sure. But with something that sips power in comparison, it really doesn't make that much of a difference.
The mistake I made was believing manufacturer specs, which is not usually a problem. The specs for my PC case said "supports up to 4 3.5" drives", and indeed there is space for those drives. However, the space is on the side of the case behind the motherboard and as such has no cooling. The drives hit 55C after 1.5 hours - too hot! The reviews of this case didn't mention this (upon further digging I found some forums that did), because who runs spinning hard drives these days? I'd never even heard of the need to cool hard drives before, presumably because all of my past systems had been properly designed. Of my myriad of choices to remedy this, I elected to 3D print a bracket to hold some fans, which I managed to squeeze into the small space available. Drives run fine now, but I won't make the same mistake again. Next time I'll buy a more appropriate case.
Super helpful to understand why we are paying so much for Celeron powered NAS and why we can't just build our own with a random i5 board.
You can do it, my friend, you can do it! And it will be 1000 times better than any prebuilt nonsense!!!
Love your work
Thank you for the incredibly kind and supportive gesture! Ed and I do these videos and the NASCompares support sections as a passion project and, frankly, it is not profitable! So it's always insanely brilliant when someone goes out of their way to donate and support us. Thank you for being bloody brilliant, Matt!
The PCIe card approach is the best of a bad situation. A NAS box is usually not going to have a graphics card, so the PCIe slot would have remained unused anyway!
My 4U server rack setup has 15 bays (5 currently occupied), a 1TB NVMe drive for the system, a 1TB SSD set up as a scratch drive and an 8TB WD Blue drive set aside for the Plex media server.
The $70 Glotrends PCIe card has 16 SATA ports for those 15 bays.
The big problem is that the 8Gbps of bandwidth is split between the 16 ports depending on use, so we don't really get full speed!
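That split is easy to picture with a little arithmetic. This is only a sketch: it assumes the card's upstream link really is limited to about 8 Gb/s (roughly a PCIe 3.0 x1 connection) and that a fast HDD can sustain about 250 MB/s sequentially.

```python
# Shared upstream bandwidth on a many-port SATA card.
upstream_gbps = 8                            # assumed usable uplink of the card
usable_mb_s = upstream_gbps * 1000 / 8       # ~1000 MB/s shared by every port
hdd_mb_s = 250                               # sequential speed of a fast modern HDD

for active in (2, 4, 8, 15):
    per_drive = usable_mb_s / active
    verdict = "fine" if per_drive >= hdd_mb_s else "bottlenecked"
    print(f"{active:>2} drives busy -> ~{per_drive:4.0f} MB/s each ({verdict})")
```

Light media serving rarely notices that ceiling, but anything that touches every drive at once, like a parity rebuild or a scrub, will be noticeably slower than on an HBA with a wider link.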
I've been baffled by how expensive power supplies have gotten lately! But if you are building a quality powerhouse, you really can never go wrong with a power supply from, for example, Seasonic, which is the number one producer worldwide. They build a lot of the other brands' power supplies as well. Most if not all of their PSUs come with a 10-12 year warranty; divided over all those years, the price isn't all that bad. But I recently found power supplies made by Inter-Tech, which also builds rack chassis. I don't think they would brand them as their own if they weren't at least OK. I've yet to use mine, but I bought the Inter-Tech VP-M300, which is a 300-watt unit I'm planning to use for a small, simple server. I even found it on offer at my shop, so I got it even cheaper. It has only 24 months of warranty. The size is 63.5 x 125 x 100mm.
Regarding mainboards, mini-ITX isn't actually the smallest type. Almost all of the mainboards used in the "mini PCs" that started surfacing some time back are partly made by ASRock Rack and are called the "4X4 7040 Motherboard Series". This exact series comes with either a Ryzen 7 7840U or a Ryzen 5 7640U as an embedded CPU. The boards are tiny!
Dimensions are 4.09 in x 4.02 in x 1.4 in, or 10.4 cm x 10.2 cm x 3.6 cm. The only downside with these is that they don't come with any PCIe slots other than one M.2 for storage, one M.2 (Key E, 2230) with PCIe x1, and one SATA3 port. If you don't need many drives, you could use the 2x USB4 and/or the 2x USB 3.2 Gen2 ports for external storage. Those boards also support 4 displays (i.e. 2x HDMI 1.4b, plus DisplayPort 1.4a via the USB4 ports).
I was planning to use this as my own cloud and container server.
This was a great video. In a way it took me back to my own mATX PC build last February. For what it's worth, I chose the Corsair RM550x modular PSU. I consider it a highly efficient, quiet and excellent PSU. Thank you.
12:40 Speaking of mistakes when it comes to PSUs that also crosses over into Cooling... be aware of thermal rating of your PSU. Good PSUs start at 50C/122F, meaning they will provide their listed power even up to that temperature. That generic gray brick picked for that NAS build likely caps out at 22C/70F meaning that if it goes above common room temperature the amount of power provided and the quality at which it is provided will take a dive straight into the crapper.
Glad you addressed the PSU thing 👍
There are really great mITX boards on the market built around Intel N-series chips: 6 SATA ports and a bunch of NICs. Designed for routers, but great for a NAS and light virtualization.
There is one detail that is overlooked when building your own NAS: you yourself control the firmware. I say that because when you buy a NAS, you rely on the vendor to update it, and for how long? I've got a NAS I can't access the files on anymore because it uses SMB1. I can, of course, jump through lots of hoops and tricks to access them, but yes, I need a new one. The point is, it isn't THAT old, and no updates have come to fix it because, you guessed it, a newer, better product was released. That's why building one myself lets me control when to get updates so files aren't lost, and both Windows and Linux have support for whatever new technology comes along to make things safer. And by the way, not many manufacturers state which SMB version they support; you have to do a lot of searching to find that information, which again makes me wonder how long they will support it. Building it yourself has a fun factor too: you learn a lot along the way and probably get a system that lasts far longer than any out-of-the-box experience (unless of course you buy a server-grade NAS, but honestly, those are toooooo expensive and overkill 😊)
Well, I won't say I enjoyed it, but I certainly needed to watch it. That was a lot.
Thank you for going over all of that. Thanks also for ALL of your other work putting out useful info. This channel is super useful for noobs to the home server/NAS world.
Thanks. Excellent level of detail !
Seeing as flash prices are coming down a lot, I'm very interested in building a passively cooled all-NVMe (or SATA SSDs, potentially) NAS, any chance of a tutorial about that?
Most B550 boards will take 2 NVMe Gen3 drives. I went this way with 2x 1TB Crucial M.2 drives as the main system drive.
Then I scored an IcyDock 6x SSD hot-swap bay which fits into a 5.25" slot.
My server only has a single HDD now, and its days are numbered as I have a pair of 4TB SSDs in the Amazon cart to replace it.
The HDDs move to an old server which is "WakeOnLAN" on demand for backups only.
HDDs consume a lot more power than SSDs.
@@1over137 The lower power consumption is one of the main reasons that I'm interested in an all-flash NAS, the other being noise. A third would be space, as it should be possible to make a much more compact machine. I would have a strong preference for a fully passively cooled machine, I'm happy to sacrifice some performance for it. Not doing anything demanding with the machine at all, just want an essentially unnoticeable device that doesn't add too much to the power bill.
Second video I've watched; anyhow, I subscribed. Good information here, no stupid intros or dumb segments, straight-to-the-point information. Thank you.
I really want to build my own NAS / home server. I've built many PCs but can't really find the confidence to do this. Price isn't too much of a worry, but I want something that I can grow over time using both NVMe and SSDs. It would most likely have drives of differing sizes, and so I am thinking perhaps this is a non-starter, as most of the DIY OSes seem to require you to have the same size drives. Maybe I am wrong, but that's where my head has gotten to. I am not really needing it to do a lot: file storage, CCTV footage storage, photo & video sharing to Apple TVs. Any guidance would be most helpful.
I'm a nobody who has built a couple of computers over the years; the last one I built sported an Asus P67 Sabertooth mobo with, I think, either 16 or 32 gigs of RAM. When it reached the end of its useful life for me, I decided to build my own beast, but I kept that old one around. It has 8 SATA connectors, and so I decided to build a NAS out of it, with great trepidation I might add.
So after buying six 6TB drives and putting them into the case, I removed the original drives but kept the 120GB SSD, and have decided on Unraid as my software.
I got it running; I have a good-quality Samsung USB stick for Unraid, I currently use the SSD as the cache, one of the six drives as parity, and the rest are my array.
This will be for our photos, music, and especially as a media server for our home.
So far so good. But I think I'm going to take out the old SSD and get 2 new SSDs to use as a cache pool for safety while writing data.
Give it a shot; while I was white-knuckled for a while, I'm starting to relax a bit and enjoy playing with it.
I've been messing about with Docker! Fun, wow, lol.
Another very informative video. Always well presented and straight to the point. Kudos!
In building my low-cost NAS I searched for a good used Gold-rated modular supply. For €65 I got a two-year-old Cooler Master V550 Gold (10 YEARS OF WARRANTY).
I just found this video and subscribed to your channel right away.
I was a UNIX Guru supporting a couple of LANs until I had to retire in the late 2000s but hardware issues were left for the people trained to deal with those issues.
After that, I mainly played on FreeBSD UNIX machines and had a mid 2010s era Windows machine for stuff I couldn't do on UNIX.
But I need a couple of new servers and a brand new Windows 11 spec system.
My problem is that I have found that I would have to learn most of what has changed since I retired to understand the answers to most questions that I have.
Looking forward to seeing what you do with the N3, as I've been looking at it since it was announced.
I was surprised to find that an Intel i5-6500T actually has LESS power than an i7-4490 from about 3 years earlier. The 6500T is a weak PoS and can't even drive a 1440p monitor properly. So yes, I hear you. The interesting thing, however, is that the 6500T is a 35W TDP chip, while I think the i7-4490 is 75W TDP. It comes from the generation before "energy saving and efficiency" became the big movement in hardware specs. I recall that for a few generations chips and GPUs didn't get faster, but the amount of power they consumed halved.
I've got an old P35-DS3R board with a Q6600 kicking around; it's got 8 SATA ports on it and I'll be using that. It should be quick enough for what I intend to use it for. After running a BBS for a number of years back in the day, I never skimp on a PSU, but I got a decent Enermax for the build; all I've got to do is check the caps are OK, as it's old.
Wow, Intel Q6600. That was my 1st full desktop build. Thankfully had a Micro Center in town, IIRC the Q6600 was $200 cheaper than anywhere else at the time I bought it. That's when, what was it, 98-99% of the market was dual core.
Am I the only one who dislikes the obsession with using ITX for NAS? I'd like one of those Sonata 8× SSD PCIe x16 cards. I'd like at least the option of QSFP+ NICs, and I have HDDs for slow storage such as video playback and TV recording. I'd like TrueNAS with ZFS (redundant disks: a single mirror isn't enough, so... a dedicated HBA), and I'd like some passthrough PCIe NIC slots for a firewall.
I've been trying to do this with some older kit, but the heat and noise output is too high. There seems to be a lack of power-efficient CPUs with lots of PCIe lanes.
Hi, thanks, great info. Looking at your *chipset diagram* can tell you almost everything about ports and slots.
This was a great video, thank you for all the professional information. I think it helped me so much for my first build.
All useful info.
Great advice, thanks. I saw those M.2 SATA-port cards the other day on AliExpress and was wondering about using them the way you have here. Great tip on the Molex for the backplane, too.
Apart from the size, what is the downside of using, say, a cheap Dell 5820 as a NAS? I got one with 16GB RAM, a 1TB drive and a W-series Xeon for £60.
On point. So true.
Made the following mistakes myself building a NAS:
1. Only two SATA ports onboard while having a 4 bay chassis.
2. On-board network chipset incompatible with my chosen NAS software.
3. Even though I installed a low power system, the PSU fan was noisy as hell and not controllable. Fixed by replacement fan.
Interesting, but my own NAS uses AMD, and the suffix is G. :)
1. Is single parity enough for a basic NAS that mostly stores data?
2. Is it fine to use two SATA connectors for the OS and cache drives?
3. Any chance you could gather data on all the PSUs from reputable brands like Corsair? I hear a lot of people like the Corsair RMx because there's proof that those PSUs are efficient enough.
4. So an F-series CPU can't be used for transcoding on a media server unless you plug a GPU into it?
I like watching your long videos while on my health club's treadmill. Keep the long videos coming !!!😊
Excellent video! Very useful, and I would have been much more confident if I had seen it a couple of years ago.
Anyway, I did a pretty good job, and I found your video very reassuring 👍🏻
Can you talk about external power supplies? I have a hard time finding one with the correct barrel connector size.
You won the like and the subscribe with this! Thanks!
Yeah, the CPU struggle is real. It's bizarre how a lot of people go, "Ah, I got an i5"... I need more than that: what generation, what market segment? To add context, I asked what spec the work test laptop was. "It is an i5 so it should be relatively good." Then I followed up with, "Okay, give me the model and I will look it up"... i5 Raptor Lake, good; Raptor Lake-U, ah, one of the lowest-power market segments, noooo!
Raptor Lake-U and Alder Lake-U are likely more than enough for the average NAS; the Storaxa, in its highest config, uses a 1265U.
@nascompares
Q) How much space does the single low-profile PCIe slot provide you on the Jonsbo N2? (Beyond the single slot)
I'd love to see if one of the Low-profile Arc A310/380 cards would fit in there for transcoding.
21:37 but a basic GPU can be purchased for a VERY LOW cost
Do you have any recommendations for a motherboard without worrying about price?
The whole reason I built my NAS with a full ATX board was for PCIe slots: one for an HBA, one for a GPU and one for a 10Gb SFP+ NIC. It didn't hurt that I needed space for 8 drives, and at the time there were no smaller cases that would support that many like there are now. My next build will hopefully be able to drop the GPU, but I'll still need two slots, so I'll be sticking with a big ol' NAS.
I have been evaluating doing the same thing. I am less than impressed by the current offerings from the smaller motherboards. Thanks for your input.
@@gorillaau It's kind of sad how limiting modern consumer boards and low-TDP server gear are nowadays. It used to be fairly easy to build something in the 65W TDP range on a full ATX board with lots of PCIe slots, but it seems that day has passed.
@nadtz Even cases are sometimes atrocious. Yep, full-size ATX... but where are the hard drive bays? You guys forgot the hard drive bays. Yes, I can have two M.2 NVMe(??) interfaces, but seriously, what's the point of a big case with so little storage? Oh, it's to show off the blinky lights!
@@gorillaau Agreed. Luckily the Fractal Define R5 and XL are still available, but it's kind of sad how few options there are for HDDs in ATX these days.
@@nadtz The Fractal Design Node 804 is also a good choice if you can live with a mATX motherboard. Nine 3.5” drive bays and two 2.5” drive bays. Also room for a slim optical drive.
You might be sweating, but until you mentioned the weather I didn't hear the seagulls. Correlation? Gonna watch the rest now 😁
This is exactly the video that I'm needing. I have a QNAP that runs an AMD CPU, so apparently I cannot run Unraid on it, which is a shame. It also doesn't have NVMe, although it does have SATA M.2.
I also have a computer that I just built that I'm thinking about converting over to Unraid and using as a daily driver along with my local media server, storage and Docker containers.
And I think I might go down that path, get everything set up and then look into a different chassis in the future.
I will definitely be utilizing the M.2 to SATA adapter. That looks like it's going to be very helpful.
Thanks for another great video and a ton of info dumping!
Cool! Didn't know that there was an M.2 to SATA breakout converter. How does the OS see this? Will that work with TrueNAS?
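To answer the "how does the OS see this" question, at least in my understanding: these M.2-to-SATA adapters (ASM1166 and friends) normally present themselves as an ordinary AHCI SATA controller, so TrueNAS or any other Linux/BSD-based NAS OS should just see extra SATA ports and the attached disks as regular drives. A minimal sketch of how you could check that on a Linux host, assuming pciutils (lspci) and util-linux (lsblk) are installed:

#!/usr/bin/env python3
# Rough sketch: list SATA/AHCI controllers and disks on a Linux host.
# Assumes lspci (pciutils) and lsblk (util-linux) are available; an
# M.2-to-SATA adapter typically shows up as just another AHCI controller,
# and its drives as ordinary sdX block devices.
import subprocess

def run(cmd):
    # Return a command's stdout, or "" if the tool is missing or fails.
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return ""

print("== SATA/AHCI controllers ==")
for line in run(["lspci"]).splitlines():
    if "SATA" in line or "AHCI" in line:
        print(line)

print()
print("== Block devices by transport ==")
# TRAN reports 'sata', 'nvme', 'usb', etc. for each physical disk.
print(run(["lsblk", "-d", "-o", "NAME,TRAN,SIZE,MODEL"]))

If the adapter and its drives show up there, TrueNAS should be able to use them like onboard ports, though firmware quality does vary between these cards.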
Wait. Did you show a graphic where a mini is smaller than a micro but then state a micro is the smallest? So confused.
How would you approach RAID? Or do you assume ZFS support? What do you think?
A new mobo with an Intel N305, 4× 2.5GbE, 6× SATA and 2× M.2 PCIe... it's very, very cool.
A DIY NAS is fairly basic and superior the vast majority of the time when compared to a commercial off-the-shelf branded NAS. You have many advantages with DIY: a better CPU, much faster networking options, plus expandability and upgradeability. A DIY NAS blows away the commercial alternatives if people just do a bit of homework; plus they can save money and often build two NAS boxes for the same price, which gives them vital redundancy.
@@wojtek-33 Preferences. The ability to run 40G very easily outweighs ease of use on the software side, IMO. Just run Debian/smbd/OpenMediaVault and keep it simple. The ZFS caching tiers can add some value too, with the ZIL/SLOG on NVMe. In the end it is just data. I like to image the boot drive with Guymager and then back up files; cloning VMs is also a good way to go, along with snapshots and the ability to roll back with QEMU/VM manager. DIY rocks just for the value and the ability to scale and upgrade.
@@wojtek-33 Hyper Backup is about the only Synology thing I use. I have a Synology NAS, but I just run everything through Docker Compose. I could just copy my docker folder over to any Linux based server and run a command to boot up all my stuff, all I'd have to do is change an environment variable for the root volume path and maybe some others. More often than not, I find Synology's UI gets in my way when it comes to hosting services; for instance, I had to edit the .mustache files so their built in nginx runs on 81 and 444 to free up 80 / 443 for the LSIO swag container (which is far superior as far as I'm concerned and comes with a ton of preset sample proxy configs for basically anything you might want to host).
That said, I'm looking into alternatives, as I don't like the idea that if my NAS dies I have to install some Synology app on my desktop or get another Synology NAS to be able to unroll my backups.
@@wojtek-33 it is simple and basic but you are constrained by biz model and algo - agreed
@@wojtek-33 how stupid people are to follow advice from youtube
Hi 👋
I am planning to build a NAS server, and I chose an i5-10400F paired with an H470 board, 16GB of 2666MHz RAM and three 4TB hard drives,
and a 550W Gold EVGA PSU.
All three HDDs will be connected to a PCIe SATA card.
And a GTX 720 GPU for video output.
Do you think the above specification is fair enough?
It's hard to get low-TDP Ryzen chips new, but easy to find low-TDP Intel. That biases what people build.
What is the temperature of the disk drives when run in a room held at 20 degrees Celsius?
What can I create for less than $200 (about £153.72) and still get the M.2 on the back? I mean, I have the hard drives, but I also need a microSD and SD card reader on it. Is there a way to add USB ports to it?
Just go for standard ATX and you will have 4-6 SATA ports, 2 or 3 PCIe slots, and an old-style case; most will have 4-8 drive bays, and some will have optical bays that you can fit an expansion cage into. My main server is a 4th-gen i5 with a 10Gb NIC, 4 SSDs and 2 HDDs, just running Plex and my work files, photos and video, and it runs Windows Server 2012 R2 so I can use tiered storage. My cold storage runs a 4th-gen i3 with a PCIe SATA card and 10 drives, and has a dual 1Gb NIC, and my backup server is my old Core 2 Quad rig.
Can a Qnap TR-004 be used as a stand alone NAS?
What if I want a couple dozen drives with an ITX mobo?
There is not a single 1-to-5 SATA port multiplier available on Amazon (US) with a rating of 4.0 stars or better. The reviews for most of the cards rated 3 stars or less are useless. Has anybody had a positive experience with one of these cards, or a useful comment about issues (besides the degradation of throughput if more than 3 drives are connected)?
Fancy doing a video on converting an old PC to a NAS? Don't wanna make e-waste.
I think I literally published a video on this yesterday
@@nascompares Ooooo, it's my first day 😂 insert Simpsons meme.
Saw your 100k special when you made that Jonsbo build; is that the one you're referring to?
Btw, that's my VPN alt, so YT can ban it if they catch its adblocker.
I have a MB with CPU, RAM, etc. that has 8 SATA connectors... but it is OLD, as in 8+ years old. It still works fine, but is it too old? The MB is a Gigabyte P55A-UD4 running a gen 2(?) i5.
I can't currently look it up, but what to look for is whether the ports are all full SATA III 6Gb/s; a lot of those older boards would have only 2 or 3 high-speed ports, and the rest would be half-bandwidth 3Gb/s ports. If you can't read the mobo silkscreen easily, they tend to colour-code them differently too (e.g. grey for 6Gb/s and blue for 3Gb/s).
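Another way to check, if the board is already booted into Linux (a live USB is enough): the kernel reports the negotiated speed of each SATA link in sysfs. A minimal sketch, assuming the usual libata attributes are exposed under /sys/class/ata_link; ports with nothing attached typically report "<unknown>":

#!/usr/bin/env python3
# Print the negotiated speed of each SATA link (1.5 / 3.0 / 6.0 Gbps).
# Assumes a Linux kernel exposing libata's /sys/class/ata_link/*/sata_spd.
from pathlib import Path

links = sorted(Path("/sys/class/ata_link").glob("link*"))
if not links:
    print("No ATA links found (not Linux, or no libata-managed ports).")

for link in links:
    try:
        speed = (link / "sata_spd").read_text().strip()
    except OSError:
        continue  # attribute missing for this link; skip it
    print(f"{link.name}: {speed}")

Bear in mind this shows the speed actually negotiated with the attached drive, so a 6Gb/s port with a SATA II drive plugged in will still read 3.0 Gbps.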
I'll tell you right now.... If you want to run a legitimate NAS, do NOT use the SATA controller built into your motherboard. I have had so many issues with those controllers. Save a few extra bucks on that and instead get a board with more PCIE slots and get a good add-on PCIE -> SATA Controller like an LSI card, etc.
I have a second-hand mini PC (i5-8500T, 16GB DDR4-2666, 256GB NVMe) as a server, and I'm debating whether to acquire a Synology, run TrueNAS in a VM on the server, or just use a ZFS pool directly in Proxmox.
Any recommendations?
@@abel4776 i haven't decided on anything yet cause I have to do a total renovation at home and I am going to wait until then, but with a likely second child on the way by then I think I'll take the NAS route cause I won't have much time to play with it.
Well, if you think money will be an issue with the hard drive __sizes not matching__, and you will eventually want to add many HDDs over the years (like 6, then 8, then 12) while having mismatched HDDs right now, then you may want to consider Unraid. The catch is that they have a perpetual licence right now, but that is changing in a week or two max. After that it'll maybe cost about 50% a year of what you paid ($59, $89) if you want updates; you can wait a few months if you want (Christmas time, and it'll still fully work). That's roughly what the two lower tiers will cost per year. However, you can get a lifetime licence, or upgrade to it later on, but that is going up in price.
If you do build now: you can get a PSU with a 6-year warranty, or even a 12-year warranty on a higher-end one; a motherboard with a 3- or maybe 5-year warranty; RAM with a limited lifetime warranty. Spinning hard drives (rust) will be 3 to 5 years, 2.5" SSDs 3 to 5 years, and NVMe drives 3 to 5 years. If you buy a Synology, at least some will have 2-year warranties, some 3-year, but you can get an extended warranty now for up to 5 years from what I understand.
Full disclosure: I did buy two Unraid OS keys, because I can add different-sized hard drives and slowly add bigger ones when I want (I just need to get a used LSI HBA card in IT mode off eBay). I can't afford to buy 6-10 of the exact same 8TB (I'd prefer 14TB) drives at the same time right now. I was really against paying when I wanted to use Proxmox and TrueNAS Scale, and with Unraid you have to use a stupid thumb drive for now. But I've watched enough YT videos in the last 10 months on XCP-ng, Proxmox, TrueNAS Scale & Core, and Unraid to know that people love Unraid even though it isn't free.
I'd look up "Upcoming Changes to Unraid OS Pricing". @@barygol
Noob question: how are those four 2.5GbE NIC ports used?
They can be used for a few things depending on the choice of software. They can act as a router of sorts, or be aggregated together for more total bandwidth (which helps with more concurrent users rather than a single faster transfer). They could also be used for failover/redundancy.
Any PSU anyone can recommend for:
4× 4TB HDDs
N100 CPU
16GB DDR5 SODIMM
Always go Platinum on the PSU. Why knowingly waste electricity when you don't have to? We should put our money into efficient products, not into higher energy bills from higher consumption.
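How much the efficiency tier actually saves depends on your average draw and your tariff. Here's a back-of-the-envelope sketch with assumed numbers (a ~50W average load, roughly 85% efficiency for a budget/Bronze unit vs roughly 92% for a Platinum unit at that load, £0.30/kWh); these are illustrations, not measurements:

#!/usr/bin/env python3
# Back-of-the-envelope PSU efficiency comparison for a 24/7 NAS.
# All figures below are assumptions for illustration only.
DC_LOAD_W = 50          # assumed average power the components actually draw
PRICE_PER_KWH = 0.30    # assumed electricity price, GBP
HOURS_PER_YEAR = 24 * 365

def yearly_cost(efficiency: float) -> float:
    wall_watts = DC_LOAD_W / efficiency          # what you pay for at the wall
    kwh = wall_watts * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

bronze = yearly_cost(0.85)     # assumed Bronze-ish efficiency at light load
platinum = yearly_cost(0.92)   # assumed Platinum efficiency at light load
print(f"Bronze-ish unit: £{bronze:.2f}/year")
print(f"Platinum unit:   £{platinum:.2f}/year")
print(f"Difference:      £{bronze - platinum:.2f}/year")

With these assumed numbers the gap works out to roughly £10-12 a year, so for a low-power NAS the build quality and warranty of the unit arguably matter as much as the exact efficiency badge.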
why only intel for cpu?
Exactly. Intel is cooked for the moment, but they do seem to run hotter than I'm used to. (My first AMD build).
I was looking into a simple personal backup device, and I found external USB dual-bay enclosures that do RAID 0/1/JBOD, configured with a switch on the case itself.
No software needed, and one model says it will sleep if there's no network/data activity for 10 minutes.
And I had a thought: I can plug this into my modem and have a file system that I can save to / read from, and unplug it so I can sleep at night,
and I can take it with me. Again, this is for a personal setup and obviously not for a company / large family.
So I'm thinking it should have a decent life with 2× QVOs in it (RAID 1)?
Any thoughts?
I had one of those cases. Mediasonic was the brand, but I bet these days when you buy one of those chassis you’d find the same hardware under at least a half dozen different brand names. When it died, it killed three out of the four hard drives in it.
That’s when I decided to build a PC with enough drive bays internally.
@@williamp6800 Interesting... My idea is to plug it in, save all the important stuff and unplug it, more like an archive. Drives are cheap and I can totally stuff my ITX for that, so this idea would be handy.
What about SAS versus NAS?
Do you mean SAN?
@@nascompares When I looked at the Exos hard drives they had a SATA version and a SAS version. I'm pretty sure my NAS only uses SATA though, so I guess it doesn't matter. I guess it's more for enterprise use.
What about AMD CPUs with and without integrated GPUs? Or is AMD not good for NAS builds?
They’re just fine, both with and without integrated graphics. Anything using the AM4 socket, which means anything with Ryzen in the name except the 7000 series. The 3000 and 5000 series are the better options as they are on the Zen 2 and Zen 3 architectures respectively.
You can get a 4C/8T Ryzen 3 4100 for $72 on Newegg. No integrated graphics, but plenty of power for a small to medium NAS.
My own personal preference right now would be the 5600G with 6C/12T and integrated graphics for $135. The 5700G with 8C/16T for $200 is a nice step up, but serious overkill for a small NAS.
@@williamp6800 I was thinking about their "Pro" series that supports ECC memory, but I really haven't decided yet whether I need it or not.
@@MrRakushin it’s nice to have but not essential. The Pro CPUs don’t seem to be available as a retail product in North America. Only in prebuilds as far as I know.
I wanted ECC for my NAS and ended up going with a used Supermicro X10SLM-F and a Supermicro X10SL7-F. The latter has an LSI HBA built into the motherboard, so it natively supports 14 SATA drives. They're old enough that they are DDR3. These days you can pick one of them up for $70-$100.
If you’re patient, you can find a real bargain occasionally. A couple of months ago I got an x10slm-f, a Xeon 1270 v3 4C/8T CPU, 32GB of ECC DDR3 RAM, and stock Intel CPU cooler for $90.
The reason Intel is typically preferred for NAS is because Intel CPUs with integrated graphics are historically better than AMD APUs with integrated graphics, and I think there are issues getting hardware transcoding to work on AMD CPUs (or at least Intel is easier).
I built my 32TB NAS around a Supermicro X10SL7-F, which I got with a Xeon CPU and 16GB of ECC RAM for $55 on eBay. I put it in a new basic ATX case with 10 drive bays. Using SAS drives, NVMe as cache and a 10Gbps RJ45 card, it performs superbly and only uses 20-100W depending on load. Total cost about $700.
F*k -fuel- power efficiency. Even at European energy prices the difference between a cheap PSU and a decent one will be negligible. What is really important is that a cheap PSU is much more likely not only to croak at the most inopportune moment, but also to take a few components with it, like the mobo and the drives, to keep itself company on the trip to the netherworld.
I think a lot of the problems and solutions outlined in this video stem from two sources: the case and the board. Accept that, and you can make most of them go away by buying a proper case and motherboard and by not trying to use ITX for servers. ITX isn't designed for that purpose, so unsurprisingly you are finding it doesn't do very well at it either.
Buy a proper case. Buy a full ATX motherboard, one with 4 or 5 PCIe slots, 2 or 3 M.2 slots, and full PCIe lanes and speeds.
ITX boards are for micro desktop PCs, not servers. Stop making problems for yourself over "aesthetics". If the wife doesn't like looking at it, put it in the attic.
Which one do I put in the attic, the NAS or the wife?
I don't see a link for the M.2 NVMe to 5-port SATA splitter; anyone got a link for one?
I've never seen anything more than 4. Also have a look at the other comments about avoiding some of these.
Why would you ever want to waste money on a GPU for a NAS if that is the purpose you want to use it for?
I'll do all I can to save my wonga.
I don't understand why anyone would build or buy a NAS that has its own computer inside. Why wouldn't you just build yourself an inexpensive home server and control the NAS through it?
My server for example - running unRaid... hosts all these services:
30+TB NAS
Windows 10 VM
Home Assistant VM
CCTV DVR
Plex Media Server
Media transcoder
Web application server
Cloud IDE host
Multiple Minecraft servers
Continuous cloud backup
Among other things...
So a "NAS" isn't always just a little box with a few HDDs to store files on. Many of us call our machines a "server" but don't have space for a server rack and corresponding cases, so we utilize these "NAS" cases to build a home server that runs A LOT of different services.
Lol, there's nothing wrong at all with using a PCIe slot for an HBA. Running 9 SATA cables is a pain in the ass; two breakout cables are way cleaner.
If you have half a brain and just plan out your build you'll be fine. I have a dual 10-gig NIC and an HBA and it's a great setup, because I have zero need for NVMe. You absolutely can make use of a good portion of 10 gig if you're running 8-10 drives. Please don't use an M.2 SATA adapter in a NAS.
I’m thinking of doing this for a nas/server:
CPU: 2700x
Motherboard: Asrock Rack X470D4U
I don’t know what OS I want to use but I want to be able to access it anywhere and maybe run VMs off of it.
So, what were the 5 mistakes?
Sarta connection lol
One thing is a bit inconsistent here: you talk about 13th-gen Intel CPUs, while in other videos you talk about QNAP and Synology (and others) that have... Atoms inside.
So if we're talking DIY, we should not really be looking for the best-performing CPU for the NAS; others have proved that even an Atom can do the job.
Also: a 750W PSU, seriously? If someone really had that power consumption, the monthly running cost for such a NAS in the UK would be around 180 GBP/month ;-)
And finally: GPU in NAS?
Really 180 just for that.
If it's powerful enough you can run Proxmox with VMs and literally run a NAS and numerous containers too. It will take a lot of research and work though. The problem is you want a CPU big enough to handle it, but not too big, so you're not wasting too much money.
A GPU could be used for transcoding videos for Jellyfin or Plex and playing them back. If you built around a Ryzen 5000 series, for example, you don't have AV1 decoding (6000 and up have it), whereas with even an 11th-gen Intel i3 you do; so you could have a cheap, low-power mobile CPU, do AV1, and also watch YouTube with lower bandwidth, if I understand correctly. Intel chips with Xe-based graphics (11th gen and newer) support AV1 decode, I think. Newer AMD & Intel chips can stream 4K at 60p at about 5.5 MB/s, which is like 1/4 of what it was 2 years ago. That's why I want to build a new NAS right away, but I'm thinking of buying a mini PC first so I can stream YouTube and watch it 5%, 10%, 15%, 20% faster in the browser, instead of the smart TV's 25%-only option. I could also comment all the time with my Logitech K830 keyboard, which I never, ever use; right now I have the smart TV on YouTube while the laptop sits open with Chrome just for YouTube comments, which is annoying. I just saw MSI makes a mini PC; don't know how I missed that. I don't think I want one with a Chinese BIOS, even though they're better bang for your buck. So yes, try to save power when you can.
But actually, if some people got an i7-14xxx or i9-14xxx and a full-size ATX motherboard and case, they may not even need another computer. It may save them more money in the long run.
If they are a true gamer, well, that doesn't work at all, unless the NAS just has like 6 disks or something and isn't a media server for other people in the household.
@@sbme1147 My NAS, 4× HDD + 2× 2.5" SATA SSD with an Intel Pentium Silver N6005 @ 2.00GHz
and 64GB RAM, consumes around 42W 24/7. My Proxmox box, an AMD Ryzen 9 7940HS with Radeon 780M graphics, 96GB RAM and 2× M.2 SSDs, consumes 32W with 10+ VMs and 10+ CTs. Of course when you push it, it goes to 100W, but on average it's only around 32W. The HDDs take a lot: around 6W each.
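Worth separating the PSU's rating from what the box actually draws. A quick sketch with assumed numbers (£0.30/kWh, a 30-day month) comparing the worst-case reading of "750W, flat out, 24/7" with a realistic ~42W average like the one reported above:

#!/usr/bin/env python3
# Monthly electricity cost: PSU rating vs realistic average draw.
# Assumed figures: £0.30/kWh and a 30-day month. 42 W mirrors the kind of
# average reported for a small NAS above; 750 W is the worst-case reading
# of the PSU label, which a NAS never sustains in practice.
PRICE_PER_KWH = 0.30
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts: float) -> float:
    return watts * HOURS_PER_MONTH / 1000 * PRICE_PER_KWH

for label, watts in [("750 W drawn flat out 24/7", 750),
                     ("Realistic ~42 W average draw", 42)]:
    print(f"{label}: £{monthly_cost(watts):.2f}/month")

So a figure in the £160-180/month range only applies if the machine genuinely pulls its full rating around the clock; a DIY NAS idling in the tens of watts is closer to £10/month with these assumptions, and fitting a 750W unit doesn't change that, it just leaves headroom (at some cost to efficiency at very low loads).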
A cheap no-name PSU is the last thing on earth you should buy. If you do, you are just looking for trouble: 90% of the time you will end up with random hardware failures, which are among the most frustrating and time-consuming issues to solve.
I forgot to say: well done and well explained video, thanks for sharing.
KFC CPUs, tasty 😀
I'm an idiot, so this is perfect to help me learn.
You speak of mistakes, yet you don't know the difference between K, F, KF and no suffix on Intel CPUs... You speak of lane allocation, yet use CPUs in your builds with a total of 6 PCIe 2.0 lanes... You speak of power efficiency and use external PSUs, which are the worst... GPUs are used as accelerators in servers, and NVMe drives are used as accelerators too: in Intel Rapid Storage volumes, and as L2ARC and ZIL (SLOG) in ZFS systems, where the amount of RAM is very important, as is ECC support...
Why did you not run a transfer speed test of your machine with all these cards you promoted?
More PCIe 4.0 or 5.0 lanes means QSFP+ capability at 20Gbps speeds and add-on hardware RAID cards that support 12Gbps SAS connections. All of these result in real transfer speeds of 2GB/s, which can be upgraded to 4GB/s in the future for NAS and DAS.
All this for the same money you spent, plus about US$50!!!
PS: There is no need for hot-swap trays in home-level servers. You can use any case which can serve your needs for capacity, airflow, water cooling, cable management, etc.
100% agree. Using ECC RAM, 12G SAS, 10Gbps RJ45 and a Supermicro 25W TDP Xeon server board. Cost about $300 (plus drives) from eBay/Amazon.
Your perspective on RAID cards is a bit narrow. Not everyone is running software RAID, and many people will want to run a hypervisor that does not support software RAID. While you're not incorrect about the losses that come with using a RAID card, you are not factoring in that people may not be running software RAID solutions. It's absolutely not insanity to use a RAID card in one of your PCIe slots.
Can you make a video where you speak actual English? I have no idea what you're talking about.
fkn cleanshirt