I was just thinking about this today! Shower thought: "an 8GB USB should be made to have 8.x GB if whatever is on there that I don't understand (MBR? I don't freaking know) is going to take up some space and lower the capacity — otherwise it's false advertising". Interesting to know, thanks.
@@taranullius9221 indeed, I can understand why you would think so, but to confuse things further: format that same drive with an appropriate file system to work on a Mac, for example, and the drive will show as an 8GB USB. I am fairly sure this is something that was fought in the courts a long time ago (circa the Windows 98/2000/XP era) and the drive manufacturers managed to successfully defend their position by essentially blaming Microsoft for using the wrong capacity standard.
I know this is old, but the USB drive for UnRaid can go directly into the motherboard inside the case. That is what the two USB ports on the right side of the motherboard are for. It needs a smaller profile drive, but it makes it so you can't accidentally break it.
When I built my RAID 5 10TB server 10 years ago, I used Open Media Vault for the OS. It is free and has great support. I've had to replace one drive since it was built and everything is great.
Pro tip for unRAID n00bs: find a USB header cable and connect it directly to the motherboard. It's fine to have the flash drive just lying in there. source: this is the method I've used on my four unRAID servers.
A few things that you may want to get in your next purchase / setup change:
- Drive array that you can upgrade in place. Unraid might already do this, but the ability to replace a drive with a larger one and have the new drive automatically rebuild is priceless.
- 10GbE network ports. You'll be glad that you got them, as they really speed up file editing and transfers.
- Front-accessible drive slots. You don't want to have to open the server case up to access drives; the ability to slide them in/out of the front of the server is a big stress reliever.
- Larger drives. This is the eternal problem. Trying to buy enough storage to last 2-3 years when you are just setting up is difficult, since your needs can change over time. I've learned to multiply my estimate by 150%, and that has worked so far.
- Noise reduction. Depending on where the server is, you might want to get quieter fans and drives. I check the noise level of drives before I buy them. If the server is out in a garage or something, noise may not be a problem.
- Spare drives. I admit that I haven't purchased spare drives - ever. With a fault-tolerant drive array, 1 failed drive might not be a big deal. But what about 2? Drives purchased at the same time have a higher chance of failing at the same time, especially if they are from the same batch of drives!
Your choices aren't bad per se, but these are things that can make your purchase easier to maintain and last longer before you decide to replace it.
12:40 Drive manufacturers aren't overstating size; Windows under-reports it. Windows uses tebibytes, based on binary, while terabytes are decimal, resulting in a ~9% difference at the TB scale. For some reason they refuse to either switch to decimal terabytes or to use the appropriate symbol for the tebibyte, TiB.
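To put a number on it, here's a quick sanity check anyone can run in a terminal (plain awk, nothing Windows-specific): divide the advertised decimal bytes by 2^40 and you get the figure Windows reports.

```shell
# An "8 TB" drive holds 8 * 10^12 bytes. Windows divides by 2^40
# (tebibytes) but still prints "TB", so the same drive shows ~9% smaller.
awk 'BEGIN { printf "8 TB = %.2f TiB\n", 8e12 / 2^40 }'
# prints: 8 TB = 7.28 TiB
```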
This is actually partially due to historical reasons. Originally it was measured only in powers of two -- with the SI unit prefixes used for convenience. Occasionally companies would use powers of 10, but it wasn't until 1995 that the IUPAC recommended the creation of new units to distinguish powers of two from decimal units. The IEC adopted the recommendation in December 1998 -- which is relatively recent. Even OSX/MacOS used the powers-of-two units until Snow Leopard and iOS 10 (2016). Likely Microsoft is staying with what it has because of a mixture of inertia and it ultimately not being that important to them. Fun fact: Donald Knuth proposed calling a kibibyte a 'large kilobyte' (KKB).
It's weird how RAM manufacturers and M$ can agree on which unit should be used, but storage manufacturers decided to use something else because they wanted to put a bigger number on the box. The International Electrotechnical Commission (IEC) created the term tebibyte and the other binary prefixes -- kibi, mebi, gibi, pebi, exbi, zebi and yobi -- in 1998. Before then a kilobyte was 1024 bytes; there was no decimalisation of a base-2 number system. M$ is just displaying the units the way they always have.
The motherboard has internal USB sockets to allow you to plug in your flash drives inside the case so it isn't hanging out the rear of the case while the server is running. That does assume your flash drive is short enough to stand up in the case with the lid closed (~30mm but shorter is better for clearance of the case lid).
It's nice to see little projects like these that involve nods to other tech channels. This is a nice little 1U set up and a SOLID build for an early jump into home server hardware.
Be aware if you replicate this - some external HDDs with a USB interface come with the USB interface directly on the controller PCB. There is NO internal USB-SATA converter; the drive can ONLY be used as a USB drive!
It can be annoying, but he should be able to see the drive serial number in the OS and then compare to the serial number on the drive label if he ever needs to remove a specific one.
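On a generic Linux box, a minimal sketch of that lookup (assuming util-linux's `lsblk` is installed; available columns vary a bit by version):

```shell
# List every whole disk (-d skips partitions) with its model and serial
# number, so each one can be matched against the sticker on the drive label.
lsblk -d -o NAME,SIZE,MODEL,SERIAL
```

Unraid's own web UI also shows each disk's identification string next to its assignment, so the command line is mostly for bare Linux setups.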
For that many drives in one enclosure, I would HIGHLY recommend enterprise level drives. They're more expensive, but you're far less likely to have an issue due to vibrations or timeouts, plus reliability is far better. Older Ultrastar drives are really good for this purpose, usually aren't too expensive, and are highly reliable for long term use in my experience. I got 6 4TB Ultrastar drives back in 2015 for $100 each brand new. They were of a 2 year old model and OEM, but still new in the box. I used to work in a server storage software development test lab, and of the hundreds of thousands of drives I dealt with, the Hitachi/HGST Ultrastars had the lowest defect rate by FAR, as in we'd get 1 Ultrastar failure a month compared to 15-20 Seagate drives of the same age range (test storage would be used for 5-8 years in a test lab setting, so I was mostly dealing with 1TB drives even in 2015), and Toshiba and Fujitsu were worse than Seagate. I can't recommend Ultrastar drives enough for long term reliability.
Yes, definitely enterprise drives. I took a risk once and bought WD NAS Red disks (x12) and had 4 fail within the first 6 months. Replaced with WD NAS Red Pro, running 3 years no issues.
@@edwindr7514 No, they're really not, mostly because they like to drop out of RAIDs even when they're not failing. Enterprise drives have this little thing called "Time Limited Error Recovery" that consumer drives don't have. If a consumer drive has an issue reading a sector, it will keep trying for up to two minutes to read the sector, and in a RAID scenario, that leads to timeouts and dropping the drive from the set. Enterprise drives will time out, mark the sector as unreadable, replace it with a spare sector, and declare the data lost to the RAID controller, prompting rebuilding of the sector from parity or mirror data. I know from experience that when a consumer drive falls out of a RAID set, and then another drops out during the rebuild, it can cause major data loss. I lost my pics from my once-in-a-lifetime DC trip because of it.

In addition, consumer drives don't do well with vibration, and this can cause errors that cause dropouts, as well as collisions between the platter and the head, if too many drives are put into one enclosure. Enterprise drives are secured against wobbling much better than consumer drives, with tighter tolerances and more points securing the heads and spindles.

I know these things because I'm a sysadmin, most notably a sysadmin in a server storage software test lab for a major storage company from 2010 to 2016. (I do NOT claim to speak for them, as I was just an employee and have no say in corporate opinions.) I've overall been a sysadmin for 12 years, out of a 25 year career in corporate IT. I have EXTENSIVE experience with storage. My profile is available on LinkedIn. I'm the one in Colorado, if you care to look.

Oh, and those WD Red drives? They're horrible for RAID, almost as bad as consumer drives. They're designed around 4 drive enclosures and software RAID, where vibration protections and TLER aren't as necessary.
My favorite drives for mechanical storage are HGST Ultrastars, as they were massively more reliable, even when old, out of the hundreds of thousands of drives of dozens of models and 6 different vendors I dealt with in the test lab. They're not that expensive in most cases, if you keep it to 2-4 steps down from maximum capacity, and even equal to consumer drives in price at the bottom of the capacity list. (4TB Ultrastars are super cheap right now, and fast and awesome drives.)
@@includenull You should still be careful; like JT wrote earlier: "static build up is static build up". Just because it didn't happen in LTT's video doesn't necessarily mean that it can't happen. There are a lot of factors involved, especially with computer hardware. Prices for a variety of components are going up these days, and I personally would be extra careful with the hardware I'd bought. So you could say that besides the technical issue there is also a financial one.
I also got the same server after Craft's video. Honestly, if the videos stop putting food on the table, he should just buy up all of one unit and make a video about it. He did a great job selling this item.
I thought the same; anything 10TB or over from Seagate would have been non-SMR drives. I recently pulled all of the 8TB SMR drives out of my Unraid. Even though it's not really a critical issue like it is for RAID and ZFS, you certainly do get affected by the slow write speeds.
@@PrimeRedux realized I got SMR's from Seagate a while ago before the whole debacle. unRAID. Definitely slower writes but with a cache SSD, meh. 4TB drives for like 50 bucks was still worth it in the end. Just holds computer backups and ripped movies so nothing mission critical anyway.
@@Nordlicht05 Well, small RaspberryPi NAS server is not a big deal. I didn't even notice the difference in power consumption. Full-blown data server though...
Yo! thanks for the shout out! I have 4-5 of the 16TB variant of these now and the oldest are coming up on over a year old, still running great and fast! Love the channel, I subbed way back on the OG XBOX PC build.
Typically I would agree from all the servers I’ve worked with in the past, but I have a Dell R210 1RU server at home that runs at room temperature and is almost silent unless I push the CPU workload up high. It’s pretty impressive. Most of the newer dell servers are also fresh air rated. It’s a stark difference from the earlier IBM servers I used to work with that sounded like aircraft taking off :-)
Yeah, I would only do it if I had a dedicated server area in a basement or something. Maybe some low power Atom or ARM processors would run cooler and be fine.
I agree. Very nice build, but very few people have a basement or sound-isolated room where you can hide a 19" rack server away. Even at its small 1U height these babies take quite a bit of space, and the noise is the even bigger issue; if people live in an apartment it's close to impossible to find a suitable place to hide them away without the noise being a problem.

The problem with building a "compact" system (if you can call a 10+ disk NAS "compact") is finding a suitable mobo/SATA controller with enough connectors to support a "pro" level NAS configuration. Even if you can find mobos with 8 SATA connectors, they are often spanned over 2 different controllers, which can be a problem depending on how well it's implemented on that particular mobo model. That's why, if you want to make a DIY NAS, a 19" server is often the only way of getting the flexibility you need, and as Matt found out, buying off-the-shelf external drives and hoarding the drives can be risky if you don't know exactly what kind of drives are inside.

It depends, of course, on how "pro" you want your system to be vs price. There is a reason why e.g. Seagate's server-grade 24/7/365, 10-year drives cost over 3 times as much as their equivalent consumer-grade drives, and why they have different drive series today depending on what usage you plan for them.
SMR is inferior, but sadly not outdated. It's a fairly new tech, and the drive manufacturers used it mostly for cost saving. I don't think SMR is going away any time soon; its primary purpose is write-once-read-many applications, so it is ideal for data archiving, and many of the much higher capacity drives initially used SMR.
I wanted to point out that the factory retaining mechanism for the drives seems to rely on longer-than-normal screws in 2 of the holes. You might be able to reuse those screws from the Seagate external enclosures, if that kind of thing matters to you. A regular 6-32 SHCS (allen bolt) would probably work too, to engage the retaining mechanism.
Subbed... because he's decent enough to give props to his sources, unlike other YTubers who like to present like they just magically stumbled onto knowledge.
Nice to see you do this. I have also built something similar, but still changed to a Synology NAS after a few years. As I didn't have space for a server rack, the noise of the fans was quite annoying to the family.
In the US, shucking does not void your warranty. Just don't damage the enclosure when shucking, and keep your enclosure and note each serial number on the enclosure in the event of a failure.
3 years after buying 4x4TB HDDs I realized they were SMR. I'm using TrueNAS, which uses the ZFS filesystem, and you get horrible performance with SMR (10 MB/s). I only saw that when I did a little bit more than just watching 1 video at a time stored on the NAS. Otherwise I had 0 issues, thus the 3 years to figure that out. The HDDs weren't labeled for NAS (got them because they were cheaper), so I had to buy replacement HDDs. tl;dr 100% agree with the video: avoid SMR drives, you want CMR
and don't forget that Seagate actually ripped off customers by putting it in their drives at one point, and most of those drives would fail because of it
@@N1lav I don't think anyone was sued. People were mad at WD for changing some of their CMR product lines to SMR. Seagate has had their Barracuda drives SMR for years now. Their IronWolf drives are CMR. They don't actually recommend you even use Barracuda drives for a RAIDed NAS device at all, regardless.
How is it possible to go down to 10 MB/s, when normal HDDs can write at ~100MB/s? And with gigabit LAN, plenty of RAM and cache, how does a system manage to waste so much performance while just copying data?
@@realedna It's not the system that's the problem. It's SMR drives during a ZFS rebuild, due to design "flaws" in SMR drives. These drives are NOT able to do 100MB/s during a ZFS rebuild.
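For scale, a rough back-of-the-envelope for how long writing out a full 8TB drive takes at those sustained speeds (pure arithmetic, ignoring filesystem and protocol overhead):

```shell
# Hours needed to write a full 8 TB (8e12 bytes) at a sustained MB/s rate.
rebuild_hours() {
  awk -v mbps="$1" 'BEGIN { printf "%.0f", 8e12 / (mbps * 1e6) / 3600 }'
}
echo "at 100 MB/s: $(rebuild_hours 100) hours"  # about a day
echo "at  10 MB/s: $(rebuild_hours 10) hours"   # well over a week
```

That gap is why an SMR drive stuck at worst-case write speed turns a routine rebuild into a days-long window where a second failure can cost the pool.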
12:43 It is not overstatement; it has to do with the way we count TB, because you can see it as powers of 2 or powers of 10. Search it on YT for a better explanation, because this is saying it very simply.
The only issue you will have is vibration from non-NAS drives, which may cause problems later. I built a TrueNAS system off an AM2 Phenom quad core with 2x 4TB IronWolf and 2x 1TB WD drives in it for under 500.00. Server chassis seem like a good idea till you hear the noise; even in a closet you will find the whine of the fans annoying.
@@AJ_Animations They're more likely to all die at once and cause permanent data loss in the first couple months than IronWolfs as well. Barracudas aren't made for the conditions of servers; you aren't being scammed by enterprise/NAS drives costing more than the cheapos, servers are simply an expensive hobby. The ones in this server are much slower than enterprise drives, aren't rated for the vibration of so many drives in a single chassis or the stresses they will be under, and will lack the needed statistics/warnings of drive failure. Also, oh god, an old PSU with a data server... If you get a really cheap server chassis and are going to do this, it's worth getting a second-hand 80+ Platinum server PSU. This video is made for getting views, showing you a cheap way around things where you need to take the expensive path. If you care at all about the data to be stored on something like this, don't use this. That doesn't mean you have to buy the nicest stuff in the world, but this is basically a waste of money.
"server noob" right.... excellent work, Wendell and Jeff would be proud. Too bad you got SMR drives, but with shucking it's a roll of the dice, and unRAID is an excellent workaround for that.
Part of me appreciates the cost savings of the external drives, but the other part of me cringes at the waste of all those plastic enclosures. And, as it's clear from the video, using external drives means that you don't really know what you're getting (other than capacity) until you crack open the enclosure. I would have spent the extra money and bought the bare drives to make sure I had exactly what I wanted. But I can appreciate what he got for the money.
"But I will probably mount it with some double-sided tape in the future." No way. Never use double-sided tape. Use Velcro. So much better for hard drives in every way.
Good work. Been wanting to do this exact same thing for the longest while. Yes, those drives have limitations but compared to what I paid for an HP 3PAR 120TB storage array 6 years ago, this is a no brainer.
Great video man. Just a word of advice: make sure you make regular backups of your Unraid boot drive. I had a scare the other day where the flash drive I was using failed. The flash drive stores your drive assignments, plugins, and configuration. If you have to replace it, you need to start from scratch and also know which drive serial numbers are your parity drives. I got lucky and was able to recover my config folder under Linux, so I'm fine now, but man, that was a pain learning how to do that last minute. I would also recommend taking a screenshot of all your drive assignments and saving that somewhere.
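A minimal sketch of one way to do that backup from the Unraid shell. `/boot` is where Unraid mounts the flash drive and `config/` holds the assignments; the destination path is just an example, point it at anything that isn't the flash drive itself.

```shell
#!/bin/sh
# Tar up an unRAID flash drive's config folder (drive assignments, plugins,
# settings) into a dated archive. On a real server you'd call e.g.:
#   backup_flash /boot /mnt/user/backups
backup_flash() {
  FLASH=$1   # where the flash drive is mounted
  DEST=$2    # where to store the archive (not on the flash drive!)
  mkdir -p "$DEST"
  tar -czf "$DEST/unraid-flash-$(date +%F).tar.gz" -C "$FLASH" config
}
```

Dropped into a scheduled script, this gives you a dated tarball to restore drive assignments from if the stick dies.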
Good advice... my first NAS build I used a tiny [HP branded] thumb drive mounted inside, using the USB port on the motherboard. I shut it down to replace a failed drive, and pulled the thumb drive out, and it fell apart in my hand (I assume it was due to the heat inside the case). My OS drive literally fell apart in my hand!
Never heard this one run, but 1U servers are obnoxiously loud and have to use more power for their fans. I think for home use it would absolutely be worth buying a 4U case. It's also worth setting things up so that the drives will spin down when left idle, because in a home setting they will only see sparse demand. I have one of these exact model 8TB SMR Seagates, it's held up fine for me so far but I've only had it since about when this video was posted. Whenever I've thought about shucking, the risks haven't seemed worth it for the difference in cost I was seeing. But sometimes there might be a bigger difference and the more drives you buy at once the more tempting it gets. My last purchase was an 8TB Toshiba X300 for $160. The Seagate SMR was $140, but all the "NAS rated" drives were over $200. That includes the N300, which I'm convinced is the same drive as the X300 with a different sticker, longer warranty and probably TLER firmware. TLER won't matter unless you use hardware RAID (which I don't believe in). I've had good experiences with five of the 4-5TB Toshiba X300s, ranging in age from 3-5 years, but those are a different drive design from the 6-10TB models.
12:36 It's not actually storage companies overstating the capacity of their drives; it's Windows that uses the gibibyte counting system and calls it gigabytes.
Having your own personal server is not a bad idea in general; you can learn so much from it. I would guess getting an old used server from eBay would be pretty cheap, and it could do most of what you want a server to do.
You might consider getting a PCIe riser card and putting an NVMe card on it; then you could have up to two NVMe drives (the x16 slot is x8, not x16) as cache. That would free up the 3.5" bay.
Great job, my friend. I've been out of messing with IT stuff for the last ten years, but have thought of building a home storage server. This was a great intro to it.
Yeah, I'd have started with fewer drives. I mostly use 4TB and buy in sets of 4 or so, and it takes a good while to fill them. So 80TB, or (to be more exact) the 60TB from the video, would take quite a while to fill.
Hi friend. I did something similar to you. I bought a 2012 i5 HP server, added 6 SATA disks (of 2TB each) and an SSD to boot and store the OS. €75 for the server, the disks that I already had, €20 for the SSD, €20 for the RAM, some euros for the small stuff (to hold the disks in place), and TrueNAS. Ultra-cheap NAS. The server cage from ebay is a great option, I didn't know about that!
I also just bought an old HP server (DL360 G6) for about $140. Two 6 core Xeons, 32GB RAM. I am about to set it up with Proxmox for a virtualization and Nextcloud server.
Note for 2:10: Seagate Exos drives (at least the 14TB ones I bought) do not fit; they do not have the middle holes for the mount-downs. You will have to drill your own holes in the mounting tray to hold them down correctly.
That's a super cool build even with the SMR Drives. Tyan has been around for years and has always put out great hardware for servers. If you want to see one of my favorite boards check out the Tyan Tiger MP S2460.
ZFS does compression, so if you use ZFS, you'll use the CPU more than you think. You could get an internal USB drive and connect directly to the MB. FreeNAS isn't bad. :) SMR vs CMR was a good tip. Thanks!
You should consider a blade that has hard drives in hot swappable enclosures. If you have a drive that is failing, by being hot swappable, you can replace the drive (before a second one starts to fail) and maintain 100% reliability. A lot better than having to shut down your nas and completely dismantle it to replace the drive inside...
Would love to see a follow-up video showing how you have the 1U server all hooked up. I'm thinking of doing something like this in my "server closet" (aka unused coat closet).
Yeah, I'm thinking about learning how to add an air duct venting to my extra bedroom closet, then I can move computers in there. Not started but thinking about it.
Superb build! Inspired by this, I built one with a B250 motherboard, 500W power supply, and a 1TB Samsung SSD as the Win 10 drive, with 7x 8TB WD drives in a small desktop case.
Good video. I also use Unraid and am very happy with it (4x10 NAS drives). The motherboard is an ASUS server which has the advantage that the boot USB stick is internally plugged in to the MB. I have it doing a parity check every 2 months and it takes 17 hours to complete.
If I may ask, what's the model of your server and how much did it cost? I tried buying the Chenbro server but it looks like it's out of stock indefinitely. Trying to get something in the same price range as the Chenbro.
same. i would probably be fine with a very low storage amount for myself, but to have it all stored and accessed through the network instead of having to plug in an external every time, as well as having redundant backups is pretty sweet.
unless you need a high-density compute / storage cluster, why would you ever use a 1U chassis? At a minimum, go with 2U, but 3U/4U is even better for rackmount deployment...
@@StephenBuergler 1U is super condensed; heat can be an issue. Those 12 drives in a 1U case could have been 40 or more in a 4U case. And of course, can a honey badger or three fit in there ;)
I like buying used enterprise servers. Often come with CPU, RAM, and have externally accessible drive bays for fairly cheap. I've bought servers worth several thousand dollars for under $500. They might have been several years old when I bought them, but 5+ years later they are still humming away.
It kind of is the drive manufacturers fault. They are the ones that chose to stick with the marketing based on decimal math knowing full well the computer used binary math. After all, it made sense to sell a 40MB drive rather than a 38MB drive.
@Alpha Nerd - Yeah, you know... terms like "mebibytes" were nowhere to be found, and not used in computer education, hobbyist, general home use, or professional settings for... ever. Well, until the end of the 20th century. Hard drive and computer makers were saying "(MB = 1,000,000 bytes)" many years after the new standards were proposed: simple.m.wikipedia.org/wiki/Mebibyte and its reference physics.nist.gov/cuu/Units/binary.html. I hadn't heard of mebibytes, et cetera, until last year, in spite of using computers since the 1970s and working in computer tech fields through 2007. Manufacturers simply began claiming that a megabyte was a million bytes! No manufacturer was using "MiB" several years into the 21st century, and I personally still seldom see such terms. I had thought, "Bah! Marketing lies!" and only discovered the terms' origin when I found those links up above a few minutes ago. The words look silly to me, but I'm just shouting at clouds. :-)
@@FuzzyElf i feel like it kinda is the fault of the people who came up with those new terms. in my opinion they should've left the MB to be what it used to be (1024) and used the new naming (MiB) for 1 MiB = 1000 KB. i get how it would be confusing for some people who aren't very savvy, but it might've been better than the current confusion where you have 0 idea what you're getting.
@@FuzzyElf The word looks silly to me too. I've been computing since the 80s, well before the terms were invented. But the disparities in marketed drive size versus actual usable size were apparent well before 1998. I'd guess this is the result.
@@crisscrossam That wouldn't be a good idea since "Mega" is a 1000x multiplier for all units. It wouldn't be wise to make an exception just because people were used to the 1024x multiplier.
Excellent Video! Just the type of stuff I have been looking for. I need a new server and I have not done server work in over 10 years. A HUGE amount has changed in both hardware and software. This video has given my confidence back that I can still build my own. And you wanna know what was most inspiring? You made mistakes but still hit the target.
Average of $125 each on the HDDs? Great deal considering they are no longer available on Amazon and NewEgg showing $220 each now. I'm not going to say the same BS other comments did about them not being NAS drives, obviously NAS drives that are CMR would be better but, if you consider the idea that you might only be using this for more of an archiving type purpose and not necessarily lots of continuous IO then mainstream drives are not a terrible idea at all. For the price paid for the server, I think this is a great option. Interested to know how well it is managing the heat since 1U servers can run pretty hot.
Yeah. I thought about going this route myself, but without a warranty and with the risk of getting SMR drives I just couldn't take the chance. This is still very interesting to see, though, and gives me ideas for future NAS builds of my own.
Keep in mind that SMR drives don't come in sizes larger than 8TB, so if you buy anything larger it will be CMR. If you want to shuck drives and are afraid of getting SMR, go for the larger sizes.
SMR is not "older" and "inferior". SMR is a technology that permits drastically (+50%) increased density, allowing drives to be made larger or with fewer platters. It is either a cost-saving measure or a tool to drastically increase the size of drive you can make. It does, however, significantly impact performance. Regardless, your point is still valid: in a NAS these drives are suboptimal.
Device-managed SMR (like in those 8TB Barracudas shown) is inferior in all aspects except price per gigabyte - for the manufacturer, that is, not the customer, given the whole brouhaha last couple of years when all three majors started replacing their previous CMR offerings with SMR at exactly the same price. Host-managed SMR is a different story but you will find neither host-managed nor even host-aware in the budget line unless you luck out on shucking high-capacity drives (14TB+!).
I don't know how to feel about this. So basically, -10 points for a 1U server, +5 points for used hardware, +5 points for rack mount, -15 points for not using a NetApp drive shelf, +10 points for shucking drives, -5 for using Seagate, -5 for no ECC RAM, -5 for not using an LSI controller, -5 for not using ZFS, +30 for using Linux. Honestly, good job, and +100 for making a DIY NAS.
A few problems: First, you have no redundancy for the cache drive. If that fails, you will lose data. Second, those motherboards are sort of time bombs. I've had four of them, all of which eventually died (due to bad capacitors, I think...repairable, but troubleshooting wouldn't be fun). Maybe you'll get lucky, but you should probably get a spare just in case. The onboard LSI RAID controller supports both SAS and SATA drives, too. I probably wouldn't trust those Barracuda drives, either, but maybe Unraid takes care of that to your satisfaction.
Very cool! BTRFS as your data/parity and XFS as your cache (BTRFS has a ton of unneeded writes to the cache drive otherwise). I saw the same video by Craft and was VERY tempted to get one of those... I have Unraid on my server but I run Plex on it, and I used shucked drives as well. Great video!!!!
Great, love this type of stuff :) To be honest I would just go with a couple of cheap NAS devices (like 3-4 drives), even buy them second hand to cut on price.
I may replicate something like this someday...but I'd probably also follow this approach and start with a few purpose-built server-grade drives and add to it over time. Costs more but you get higher reliability.
You didn't do your research enough. 10TB expansions were yielding 10TB Barracuda Pro PMR drives; I was buying these and own 4 in my Synology NAS. The 16TBs had a good chance of being Exos; my last 2 had Exos. They are no longer in stock on Amazon. One other thing you probably should have done was a write/read check on them before shucking. If there were issues, at least you could still return them or claim warranty.
Great video and setup! As many have asked, what is its power consumption? And also: What about noise? Are the disks hot-swappable? If one fails, how can you tell which one? Are there indicator lights I didn't notice?
I've found a couple of them at my Microcenter for less than that over the past two weeks. Got one for $145 and the second for $130. But yeah, overall the supply is minimal, and the prices are upsetting.
Hello brother, I want to build a multimedia server with this:
- 10x 12TB 3.5 inch HDDs, 5,400 or 7,200 RPM
- RAM at least DDR4 at 2,933 MHz
- power supply not over 550 watts
- has HDMI or DisplayPort
What kind of server should I buy? Can you suggest one to me? Thanks if you can help me ^^ 😉
The hardware setup is almost the same as the one I have been using for a few years now. My MB is a bit older but still uses Ivy Bridge E3 CPU, I have 16GB of ECC RAM and am running 11 HDDs and 1 SSD but mine is in a standard case with the drives in hotswap bays in the front. Only big difference is mine did not come with 12 built in SATA ports so I bought an SAS card and bought cables to split it into 8 SATA ports. I have been using it as my media and various server setup, with Plex being the main thing run on it but also some other servers in docker containers. It has been running like a champ for at least 6 years now without a problem.
My fileserver is an Athlon II with 6GB ram and 1.8TB total data capacity. It also works as a host of a virtual machine running pfSense, Home Assistant and dlna. Debian here. Works perfectly.
Matt nice video...just curious how loud the fans are in this chassis? As you stated it only has that copper heat sink for the CPU cooler but curious how loud the system runs and are you able to throttle those fan speeds? Thanks.
Excellent price on the server. Buying full tower PC case, power supply, motherboard, processor, 12 port SAS controller card (SAS supports SATA drives), additional cables, all used, might cost more and you still have to build it. The only disadvantage is no hardware RAID. I manage servers in my job and had too many bad experiences with software RAID solutions, while hardware RAID tends to be rock solid in reliability. My NAS is with 11 SMR drives in hardware RAID 5 using a cheap used SAS controller card from eBay and it works very well. Write speeds typically max out 1 Gb Ethernet speed. I only just stumbled on this channel. The way he talks reminds me of NileRed, but instead of chemistry, it's computers.
Drive manufacturers don't overstate their drive size; they just measure in terabytes (powers of 1,000, TB) while Windows measures in tebibytes (powers of 1,024, TiB)
This needs to be pinned. I had forgotten this and I'm sure there's plenty more that have never heard about TiB.
Not only that, but even just putting a filesystem on there has overhead. A portion of that space will be used for things like page files and metadata.
@@FlexibleToast Sometimes yeah
I was just thinking about this today! Shower thought "an 8GB USB should be made to have 8.x if they're going to take up some space with whatever is in there that I don't understand (MBR? I don't freaking know) that makes the capacity lower it's false advertising". Interesting to know thanks.
@@taranullius9221 Indeed, I can understand why you would think so, but to confuse things further: format that same drive with an appropriate file system to work on a Mac, for example, and the drive will show as an 8GB USB drive. I am fairly sure this is something that was fought in the courts a long time ago (circa the Windows 98/2000/XP era) and the drive manufacturers managed to successfully defend their position by essentially blaming Microsoft for using the wrong capacity standard.
I know this is old, but the USB drive for Unraid can go directly into the motherboard inside the case. That is what the two USB ports on the right side of the motherboard are for; it needs a smaller-profile drive but makes it so you can't accidentally break it.
As a graphics card, it’s nice to see Matt back at it again!!!
Thank you for powering my computer so I don’t have to use integrated graphics.
amd apus ftw
@@peyton_uwu laughs in Ryzen 5 1600
@@foxmore262 amogus
@@Graphics_Card im also a graphics card!1!!11 we are brothers
When I built my RAID 5 10TB server 10 years ago, I used Open Media Vault for the OS. It is free and has great support. I've had to replace one drive since it was built and everything is great.
Pro tip for unRAID n00bs: find a USB header cable and connect it directly to the motherboard. It's fine to have the flash drive just lying in there.
source: this is the method I've used on my four unRAID servers.
Came here to say that. Though I'm also paranoid enough to have secured it with some self-adhesive velcro to the side panel.
@@mauritsl84
That seems like overkill.
What is gained from using an SSD for unRAID boot?
@@adeadfishdied You can disregard all of that. I read about Unraid and it's true, there's no use case for a boot SSD. I am a TrueNAS person, so sorry about this.
A few things that you may want to get in your next purchase / setup change.
Drive array that you can upgrade in place. - Unraid might already do this, but the ability to replace a drive with a larger one and have the new drive automatically rebuild is priceless.
10GbE network ports - You'll be glad that you got them as they really speed up file editing and transfers.
Front-accessible drive slots - You don't want to have to open the server case up to access drives. The ability to slide them in/out of the front of the server is a big stress reliever.
Larger drives - this is the eternal problem. Trying to buy enough storage to last 2-3 years when you are just setting up is difficult since your needs can change over time. I've learned to multiply my estimate by 150% and that has worked so far.
Noise reduction - Depending on where the server is, you might want to get quieter fans and drives. I check the noise level of drives before I buy them. If the server is out in a garage or something, noise may not be a problem.
Spare drives - I admit that I haven’t purchased spare drives - ever. With a fault tolerant drive array, 1 failed
drive might not be a big deal. But what about 2? Drives purchased at the same time have a higher chance of failing at the same time. Especially if they are from the same batch of drives!
Your choices aren’t bad per se, but these are things that can make your purchase easier to maintain and last longer before you decide to replace it.
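The two-failure worry in the spare-drives point above is easy to put rough numbers on. A back-of-envelope sketch, assuming drives fail independently with a made-up 2% annual failure rate (same-batch drives fail in a correlated way, so the real risk of a double failure is higher than this):

```python
from math import comb

def prob_at_least_k_failures(n: int, p: float, k: int) -> float:
    """Binomial probability that at least k of n drives fail in a year,
    assuming each drive fails independently with probability p."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

n, afr = 12, 0.02  # 12 drives, assumed 2% annual failure rate
print(f"P(>=1 failure):  {prob_at_least_k_failures(n, afr, 1):.1%}")
print(f"P(>=2 failures): {prob_at_least_k_failures(n, afr, 2):.1%}")
```

With these assumed numbers, losing at least one drive in a year is quite likely (over 20%), while a double failure is a low-single-digit percentage - which is exactly the margin a spare drive and fast rebuilds protect.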
Do you have examples of enclosures with 10GbE network ports?
12:40 Drive manufacturers aren't overstating size; Windows under-reports it. Windows uses tebibytes, which are binary-based, while terabytes are decimal, resulting in a ~9% difference in size. For some reason they refuse to either switch to terabytes or to use the appropriate symbol for the tebibyte, TiB.
Wait, so they measure in one unit but display in terabytes?
That's as stupid as if I were to measure out 100 kilometres but then display it as 100mi.
@@Chris_Cross Not only is it stupid, it is outright dangerous!
This is actually partially due to historical reasons. Originally it was measured only in powers of two -- with the SI unit prefixes used for convenience. Occasionally companies would use powers of 10, but it wasn't until 1995 that the IUPAC recommended the creation of new units to distinguish powers of two from decimal units. The IEC adopted the recommendation in December 1998 -- which is relatively recent. Even OS X/macOS used the powers-of-two units until Snow Leopard, and iOS until iOS 10 (2016). Likely Microsoft is staying with what it has because of a mixture of inertia and it ultimately not being that important to them.
Fun fact: Donald Knuth proposed to call a kibibyte a "large kilobyte" (KKB).
You could still say it's the companies fault
It's weird how RAM manufacturers and M$ can agree on which unit should be used, but storage manufacturers decided to use something else because they wanted to put a bigger number on the box
The International Electrotechnical Commission (IEC) created the term tebibyte and the other binary prefixes -- kibi, mebi, gibi, pebi, exbi, zebi and yobi -- in 1998. Before then a kilobyte was 1024 bytes; there was no decimal version of the base-2 units. M$ is just displaying the units the way they always have
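For anyone who wants to check the math from this thread, a quick sketch of the decimal-vs-binary gap:

```python
# A drive advertised as N TB holds N * 10**12 bytes; Windows divides
# by 2**40 (one tebibyte) but labels the result "TB".
def advertised_tb_to_tib(tb: float) -> float:
    return tb * 10**12 / 2**40

for tb in (8, 10, 16):
    print(f"{tb} TB drive shows as ~{advertised_tb_to_tib(tb):.2f} 'TB' in Windows")
```

So an 8TB drive shows roughly 7.28, a 10TB roughly 9.09, and a 16TB roughly 14.55 - about 9% "missing" in every case, with no data actually lost.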
The motherboard has internal USB sockets to allow you to plug in your flash drives inside the case so it isn't hanging out the rear of the case while the server is running. That does assume your flash drive is short enough to stand up in the case with the lid closed (~30mm but shorter is better for clearance of the case lid).
It's nice to see little projects like these that involve nods to other tech channels. This is a nice little 1U set up and a SOLID build for an early jump into home server hardware.
1U is fucking horrible unless you're putting it in a 42U rack in a data center. No one wants to listen to that shit 24/7.
@Cal He said "Craft", which is the name of the channel.
"...within the data hoarding community."
*I feel very attacked.*
Don't feel offended ;)
Data is the new money.
*laughs in terabytes*
Be aware if you replicate this - some external HDDs with a USB interface come with the USB interface directly on the controller PCB.
There is NO internal USB-to-SATA converter; the drive can ONLY be used as a USB drive!
Great video! I just have one suggestion: label all the hard drives in case you have to remove multiple of them at the same time.
Good advice!
This bro. Even numbering them with a permanent marker is better than nothing
It can be annoying, but he should be able to see the drive serial number in the OS and then compare to the serial number on the drive label if he ever needs to remove a specific one.
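On Linux, one way to get that serial-to-device mapping is `lsblk -dno NAME,SERIAL` (or `smartctl -i` per device). A minimal sketch of turning its output into a lookup table; the sample output and serials here are made up:

```python
def parse_serials(lsblk_output: str) -> dict:
    """Parse `lsblk -dno NAME,SERIAL` output into {serial: device path}."""
    mapping = {}
    for line in lsblk_output.strip().splitlines():
        parts = line.split()
        if len(parts) == 2:  # skip devices with no serial reported
            name, serial = parts
            mapping[serial] = f"/dev/{name}"
    return mapping

# Hypothetical output of `lsblk -dno NAME,SERIAL`:
sample = """
sda ZCT0ABCD
sdb ZCT0EFGH
"""
print(parse_serials(sample))
```

Then you just match the serial printed on the failed drive's label against the table to know which bay to pull.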
For that many drives in one enclosure, I would HIGHLY recommend enterprise level drives. They're more expensive, but you're far less likely to have an issue due to vibrations or timeouts, plus reliability is far better.
Older Ultrastar drives are really good for this purpose, and usually aren't too expensive, and are highly reliable for long term use from my experience. I got 6 4TB Ultrastar drives back in 2015 for $100 each brand new. They were of a 2 year old model and OEM, but still new in the box.
I used to work in a server storage software development test lab, and of the hundreds of thousands of drives I dealt with, the Hitachi/HGST Ultrastars had the lowest defect rate by FAR, as in we'd get 1 Ultrastar failure a month compared to 15-20 Seagate drives of the same age range (test storage would be used for 5-8 years in a test lab setting, so I was mostly dealing with 1TB drives even in 2015), and Toshiba and Fujitsu were worse than Seagate. I can't recommend Ultrastar drives enough for long term reliability.
Yes, definitely enterprise drives. I took a risk once and bought WD Red NAS disks (x12) and had 4 fail within the first 6 months. Replaced with WD Red Pro, running 3 years with no issues.
@@edwindr7514 No, they're really not, mostly because they like to drop out of RAIDs even when they're not failing. Enterprise drives have this little thing called "Time Limited Error Recovery" that consumer drives don't have. If a consumer drive has an issue reading a sector, it will keep trying for up to two minutes to read the sector, and in a RAID scenario, that leads to timeouts and dropping the drive from the set. Enterprise drives will time out and mark the sector as unreadable, replacing it with a spare sector, and declare the data lost to the RAID controller, prompting rebuilding the sector from parity or mirror data. I know from experience when a consumer drive falls out of a RAID set, and then another drops out during the rebuild, it can cause major data loss. I lost my pics for my once in a lifetime DC trip because of it.
In addition, consumer drives don't do well with vibration, and this can cause errors that cause dropouts, as well as collisions between the platter and the head, if too many drives are put into one enclosure. Enterprise drives are secured against wobbling much better than consumer drives, with tighter tolerances and more points securing the heads and spindles.
I know these things because I'm a sysadmin, most notably a sysadmin in a server storage software test lab for a major storage company from 2010 to 2016. (I do NOT claim to speak for them, as I was just an employee and have no say in corporate opinions.) I've overall been a sysadmin for 12 years, out of a 25 year career in corporate IT. I have EXTENSIVE experience with storage. My profile is available on LinkedIn. I'm the one in Colorado, if you care to look.
Oh, and those WD Red drives? They're horrible for RAID, almost as bad as consumer drives. They're designed around 4 drive enclosures and software RAID, where vibration protections and TLER aren't as necessary.
My favorite drives for mechanical storage are HGST Ultrastars, as they were massively more reliable, even when old, out of the hundreds of thousands of drives of dozens of models and 6 different vendors I dealt with in the test lab. They're not that expensive in most cases, if you keep it to 2-4 steps down from maximum capacity, and even equal to consumer drives in price at the bottom of the capacity list. (4TB Ultrastars are super cheap right now, and fast and awesome drives.)
I'd use used SAS drives, which are actually cheaper.
I was about to post this. These drives WILL die from vibration sooner rather than later.
Terrible tutorial...
Toshiba MG07 and MG08 disks are great, "enterprise" level, and cheaper than most crappy WD Red disks
The way he's handling that ram made me feel anxious.
People are still too careful with their components; they're pretty resilient these days!
@@includenull static build up is static build up. Handling it like that is called luck, not component resilience.
Not even sure what the point of that was
@@jtonline99 I think you need to watch the LTT and Electroboom video.
@@includenull You should still be careful; like JT wrote earlier, "static build up is static build up". Just because it didn't happen in LTT's video doesn't necessarily mean it can't happen. There are a lot of factors involved, especially with computer hardware. Prices for a variety of components are going up these days and I personally would be extra careful with newly bought hardware. So you could say that besides the technical issue there is also a financial one.
Great build man! Too bad about getting SMR disks out of the shucks, but Unraid was definitely the correct choice here.
I bought the same server after watching your videos about it. They have a lot of bang for the buck. :)
I also got the same server after Craft's video. Honestly, if the videos stop putting food on the table he should just buy up all of one unit and make a video about it. He did a great job selling this item.
I thought the same; anything 10TB or over from Seagate would have been non-SMR drives. I recently pulled all of the 8TB SMR drives out of my Unraid; even though it's not really a critical issue like it is for RAID and ZFS, you certainly do get affected by the slow write speeds.
@@PrimeRedux Yup, so true. I would stay far away from SMR drives; I too have had an 8TB go south.
@@PrimeRedux I realized I got SMRs from Seagate a while ago, before the whole debacle. Unraid. Definitely slower writes, but with a cache SSD, meh. 4TB drives for like 50 bucks was still worth it in the end. It just holds computer backups and ripped movies, so nothing mission-critical anyway.
Now I want a server that I don't need... Nice job man!
I wonder how many people have a server running 24/7 and only use it one day or another... Because if you need something in this exact moment...
@@Nordlicht05 Well, small RaspberryPi NAS server is not a big deal. I didn't even notice the difference in power consumption. Full-blown data server though...
@@Miraihi I would suggest build a raspberry pi server with TuringPi v2
Yo! thanks for the shout out! I have 4-5 of the 16TB variant of these now and the oldest are coming up on over a year old, still running great and fast! Love the channel, I subbed way back on the OG XBOX PC build.
Overall solid build, but I'd never put a 1U server in my home, those tiny fans are too noisy.
Typically I would agree from all the servers I’ve worked with in the past, but I have a Dell R210 1RU server at home that runs at room temperature and is almost silent unless I push the CPU workload up high. It’s pretty impressive. Most of the newer dell servers are also fresh air rated. It’s a stark difference from the earlier IBM servers I used to work with that sounded like aircraft taking off :-)
Yeah, I would only do it if I had a dedicated server area in a basement or something. Maybe some low power Atom or ARM processers would run cooler and be fine.
You can always splice in some resistors to slow and quiet the fans. I did that on my old Dell 2U server.
I agree. Very nice build, but very few people have a basement or sound-isolated room where you can hide a 19" rack server away. Even at its small 1U height these babies take up quite a bit of space, and the noise is the even bigger issue; if people live in an apartment it's close to impossible to find a suitable place to hide them away without the noise being a problem. The problem with building a "compact" system (if you can call a 10+ disk NAS "compact") is finding a suitable mobo/SATA controller with enough connectors to support a "pro" level NAS configuration. Even if you can find mobos with 8 SATA connectors, they are often spread over 2 different controllers, which can be a problem depending on how well it's implemented on that particular mobo model. That's why, if you want to make a DIY NAS, a 19" server is often the only way of getting the flexibility you need, and like Matt found out, buying off-the-shelf external drives and hoarding the drives can be risky if you don't know exactly what kind of drives are inside. It depends, of course, on how "pro" you want your system to be vs price. There is a reason why e.g. Seagate's server-grade 24/7/365, 10-year drives cost over 3 times as much as their equivalent consumer-grade drives, and why they have different drive series today depending on the intended usage.
I had a 46RU server rack in my house for years. I'll never do it again.
That's a great 1U setup you've got there. Great selection of software.
SMR is inferior, but sadly not outdated. It's a fairly new tech and the drive manufacturers used it mostly for cost saving. I don't think SMR is going away any time soon; its primary purpose is write-once-read-many applications, so it is ideal for data archiving. Many of the much higher capacity drives initially used SMR.
I was hoping to see the performance and hear the noise whilst running. Second video?
The value for this NAS build seems to be legit! If you are comfortable with a 1U server it will be hard to do another DIY build any cheaper!
I wanted to point out that the factory retaining mechanism for the drives seems to rely on longer-than-normal screws in 2 of the holes. You might be able to reuse those screws from the Seagate external enclosures, if that kind of thing matters to you. A regular 6-32 SHCS (socket head cap screw) would probably work too, to engage the retaining mechanism.
Subbed... because he's decent enough to give props to his sources...
unlike other YouTubers who like to present like they just magically stumble onto knowledge.
Nice to see you do this. I also built something similar but still changed to a Synology NAS after a few years. As I didn't have space for a server rack, the noise of the fans was quite annoying to the family.
Yeah, When I saw the small fans, I was like, “this won’t work for me.”
@@majorgear1021 yeah you could _technically_ get noctua fans, but those are expensive, which kinda defeats the purpose
craft computings channel is awesome for server stuff on the cheapish side
Yeah, I'm a big Craft Computing fan too.
In the US, shucking does not void your warranty. Just don't damage the enclosure when shucking, and keep your enclosure and note each serial number on the enclosure in the event of a failure.
There are 2 usb ports on the motherboard next to the sata ports. You can put the unraid flash drive there so it's not sticking out
The USB he had might be too tall to fit but, a Samsung FIT would definitely fit there.
Wow, you actually put the link in the description, ty! So many on YouTube say they will but never do.
3 years after buying 4x 4TB HDDs I realized they were SMR. I'm using TrueNAS, which uses the ZFS filesystem, and you get horrible performance with SMR (10 MB/s). I only saw that when I did a little bit more than just watching 1 video at a time stored on the NAS. Otherwise I had 0 issues, thus the 3 years to figure that out.
The HDDs weren't labeled for NAS (I got them because they were cheaper) so I had to buy replacement HDDs.
tl;dr 100% agree with the video: avoid SMR drives, you want CMR
Hasn't Seagate or WD been sued for selling SMR as NAS drives? Whatever happened to that. Fkin corporations, they get away with everything
And don't forget that Seagate actually ripped off customers by putting it in their drives at one point, and most of the drives would fail because of it
@@N1lav I don't think anyone was sued. People were mad at WD for changing some of their CMR product lines to SMR. Seagate has had their Barracuda drives on SMR for years now. Their IronWolf drives are CMR. They don't actually recommend you even use Barracuda drives for a RAIDed NAS device at all regardless.
How is it possible to go down to 10 MB/s, when normal HDDs can write at ~100MB/s?
And with GBit-LAN, plenty of RAM and CACHE, how does a system manage to waste so much performance, while just copying data?
@@realedna It's not the system that's the problem. It's SMR drives during an ZFS rebuild, due to design "flaws" in SMR drives.
These drives are NOT able to do 100MB/s during ZFS rebuild.
12:43 It is not overstatement; it has to do with the way we count TB, because you can see it as powers of 2 or powers of 10. Search on YT for a better explanation, because this is putting it very simply.
great build video! between shucking the drives and the dirt cheap server, I can't imagine a more economical way to get such a great NAS
Only issue you will have is vibration from non-NAS drives, which may cause issues later. I built a TrueNAS system off an AM2 Phenom quad core with 2x 4TB IronWolf and 2x 1TB WD drives for under $500. Server chassis seem like a good idea until you hear the noise; even in a closet you will find the whine of the fans annoying.
Great build. Finally a good storage server without the $10k donation drives.
Oh God, not Barracuda's
Cheaper than ironwolfs
@@AJ_Animations more likely to all die at once and cause permanent data loss in the first couple months than Ironwolf as well
Barracudas aren't made for server settings; you aren't being scammed by enterprise/NAS drives costing more than the cheapos. Servers are simply an expensive hobby.
The ones in this server are much slower than enterprise drives, aren't rated for the vibration of so many drives in a single chassis or the stresses they will be under, and will lack the needed statistics/warnings of drive failure
Also oh god an old psu with a data server…
It's worth it, if you get a really cheap server chassis and are going to do this, to get a second-hand 80+ Platinum server PSU
This video is made for getting your views, showing you a cheap way around things that you need to take an expensive path
If you care at all about the data to be stored on something like this, don’t use this, doesn’t mean you have to buy the nicest stuff in the world, but this is basically a waste of money
Not to mention 80TB with that few drives is likely in a RAID 0
You can’t do budget with expensive drives.
@@terminatorfishstudios can't you just replace the PSU once it breaks? Seems like in this setup uptime is not a priority
this is called labor, thank you, everyone give this man a like.
"Server noob", right.... Excellent work; Wendell and Jeff would be proud. Too bad you got SMR drives, but with shucking it's a roll of the dice, and Unraid is an excellent workaround for that.
Love Craft Computing! Nice shout out!
Yeah, I'm a big Craft Computing fan too.
Part of me appreciates the cost savings of the external drives, but the other part of me cringes at the waste of all those plastic enclosures. And, as it's clear from the video, using external drives means that you don't really know what you're getting (other than capacity) until you crack open the enclosure. I would have spent the extra money and bought the bare drives to make sure I had exactly what I wanted. But I can appreciate what he got for the money.
"But I will probably mount it with some double-sided tape in the future."
No way. Never use double-sided tape. Use Velcro. So much better for hard drives in every way.
Good work. Been wanting to do this exact same thing for the longest while. Yes, those drives have limitations but compared to what I paid for an HP 3PAR 120TB storage array 6 years ago, this is a no brainer.
Great video man. Just a word of advice: make sure you make regular backups of your Unraid boot drive. I had a scare the other day where the drive I was using failed. The flash drive stores your drive assignments, plugins, and configuration. If you have to replace the flash drive from scratch, you not only need to start from scratch but also need to know which drive serial numbers are your parity drives. I got lucky and was able to recover my config folder under Linux, so I'm fine now, but man, that was a pain learning how to do that last minute. I would also recommend taking a screenshot of all your drive assignments and saving that somewhere.
I have a back up of it, but will make sure to be diligent about backing it up frequently!
Good advice... for my first NAS build I used a tiny [HP branded] thumb drive mounted inside, using the USB port on the motherboard. I shut it down to replace a failed drive, pulled the thumb drive out, and it fell apart in my hand (I assume it was due to the heat inside the case). My OS drive literally fell apart in my hand!
Thx for the video: your build is exactly what I was looking for, for years! What is the overall power consumption? At idle? When spinning?
I want to know that too :D
I've done the same thing with my NAS RE: shucking these Barracuda drives. They have been going strong in my NAS for years, always on.
Isn't it a concern that these are not NAS drives and are not rated to withstand vibrations from other drives?
LOVE this server chassis, just wish it wasn't so damn long!!!! Or that they offered a 4U chassis with the same specs/capacity!
Wall mount???
Never heard this one run, but 1U servers are obnoxiously loud and have to use more power for their fans. I think for home use it would absolutely be worth buying a 4U case.
It's also worth setting things up so that the drives will spin down when left idle, because in a home setting they will only see sparse demand.
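Unraid has spin-down delays built into its disk settings; on a plain Linux box, `hdparm` is one common way to do the same thing. A sketch with a placeholder device name (`/dev/sdb` is an assumption here; check serials first so you point at the right disk):

```shell
# Spin down the drive after 60 minutes idle.
# hdparm -S values 241-251 mean (n - 240) * 30 minutes, so 242 = 60 min.
sudo hdparm -S 242 /dev/sdb

# Check the drive's current power state without waking it:
sudo hdparm -C /dev/sdb
```

Note `-S` sets a timer in the drive's own firmware, so it persists until the next power cycle; put it in a startup script if you want it permanent.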
I have one of these exact model 8TB SMR Seagates, it's held up fine for me so far but I've only had it since about when this video was posted.
Whenever I've thought about shucking, the risks haven't seemed worth it for the difference in cost I was seeing. But sometimes there might be a bigger difference and the more drives you buy at once the more tempting it gets.
My last purchase was an 8TB Toshiba X300 for $160. The Seagate SMR was $140, but all the "NAS rated" drives were over $200. That includes the N300, which I'm convinced is the same drive as the X300 with a different sticker, longer warranty and probably TLER firmware. TLER won't matter unless you use hardware RAID (which I don't believe in).
I've had good experiences with five of the 4-5TB Toshiba X300s, ranging in age from 3-5 years, but those are a different drive design from the 6-10TB models.
12:36 It's not actually storage companies overstating the capacity of their drives; it's Windows, which uses the gibibyte counting system and calls it gigabytes.
Having your own personal server is not a bad idea in general; you can learn so much from it. I would guess getting an old used server from eBay would be pretty cheap and could do most of what you want a server to do.
Great video! You explained every step in a way that everyone can understand it. Awesome, keep going!
Nice, and Unraid is probably your best option with those drives. Now with 80TB of potential data in one place you need a video on backup strategy.
You could get the drives out quicker if you just hit the enclosure off the ground. That will open the case. Pro tip.
You might consider getting a PCIe riser card and putting an NVMe adapter card on it; then you could have up to two NVMe drives (the x16 slot is x8, not x16) as cache. That would free up the 3.5" bay.
Great job, my friend. I've been out of messing with IT stuff for the last ten years, but have thought of building a home storage server. This was a great intro to it.
"No redundant power supplies but that's ok with me" hahaha boi! Famous last words..
Really? The last ones? There have not been any famous words since? Astonishing!
When he said "80TB of raw storage" I was like "thank god he's running Unraid.." hahah was waiting for you to say "80TB usable space" and disappoint me
9:06 lol I'm not the only one who just tapes in SSDs instead of actually securing them
I like to use the 3M picture-hanging Command strips, which are like velcro with removable adhesive. They're much easier to remove later.
@@edattfield5146 Me too, I love those things.
Excellent video, and this server overall is MORE than 99% of individual users will need
Yeah, I'd have started with fewer drives. I use 4TB drives and buy in sets of 4 or so. And it takes a good while to fill them. So 80TB, or (to be more exact) the 60TB usable from the video, would take quite a while to fill.
Hi friend. I did something similar to you. I bought a 2012 i5 HP server, added 6 SATA disks (2TB each), and an SSD to boot and store the OS. €75 for the server, the disks I already had, €20 for the SSD, and €20 for the RAM. A few euros for the small stuff (to hold the disks in place), and TrueNAS. Ultra-cheap NAS. The server cage from eBay is a great option; I did not know about that!
I also just bought an old HP server (DL360 G6) for about $140. Two 6 core Xeons, 32GB RAM. I am about to set it up with Proxmox for a virtualization and Nextcloud server.
@@anonimuso well done!
Note for 2:10: Seagate Exos drives (at least the 14TB ones I bought) do not fit. They do not have the middle holes for the hold-downs, so you will have to drill your own holes in the mounting tray to hold them down correctly.
That's a super cool build even with the SMR Drives. Tyan has been around for years and has always put out great hardware for servers. If you want to see one of my favorite boards check out the Tyan Tiger MP S2460.
Thanks for the suggestion!
Love that board
ZFS does compression, so if you use ZFS, you'll use the CPU more than you think. You could get an internal USB drive and connect directly to the MB. FreeNAS isn't bad. :) SMR vs CMR was a good tip. Thanks!
Compression has to be enabled and can selectively be enabled/disabled.
Where did you get that mat for your table?
Looks like a GamersNexus mat.
You should consider a blade that has hard drives in hot swappable enclosures. If you have a drive that is failing, by being hot swappable, you can replace the drive (before a second one starts to fail) and maintain 100% reliability. A lot better than having to shut down your nas and completely dismantle it to replace the drive inside...
Would love to see a follow-up video showing how you have the 1U server all hooked up. I'm thinking of doing something like this in my "server closet" (aka unused coat closet).
Yeah, I'm thinking about learning how to add air duct venting to my extra bedroom closet; then I can move computers in there. Not started, but thinking about it.
Superb build! Inspired by this, I built one with a B250 motherboard, a 500W power supply, a 1TB Samsung SSD for Win 10, and 7x 8TB WD drives in a small desktop case.
Where did you get your work mat with the motherboard and ssd/HDD sizes on? I'd love one.
Good video. I also use Unraid and am very happy with it (4x 10TB NAS drives). The motherboard is an ASUS server board, which has the advantage that the boot USB stick plugs in internally on the MB. I have it doing a parity check every 2 months and it takes 17 hours to complete.
If I may ask, what's the model of your server and how much did it cost? I tried buying the Chenbro server but it looks like it's out of stock indefinitely. Trying to get something in the same price range as the Chenbro.
Great build overall, I wish I could easily get my hands on this case in the UK!
Fun to watch....I had to drop off when we went from $126 up to $1,500. But it is still nice to see how the other half lives and computes.
You Know I Don't Need One Of These But Having One Would Be Pretty Sick
same. i would probably be fine with a very low storage amount for myself, but to have it all stored and accessed through the network instead of having to plug in an external every time, as well as having redundant backups is pretty sweet.
*80TB* and *Budget* Sounds like a good match
unless you need a high-density compute / storage cluster, why would you ever use a 1U chassis? At a minimum, go with 2U, but 3U/4U is even better for rackmount deployment...
Why?
@@StephenBuergler 1U is super condensed; heat can be an issue, and those 12 drives in a 1U case could have been 40 or more in a 4U case. And of course, can a honey badger or three fit in there ;)
I like buying used enterprise servers. They often come with CPU and RAM, have externally accessible drive bays, and are fairly cheap. I've bought servers worth several thousand dollars for under $500. They might have been several years old when I bought them, but 5+ years later they are still humming away.
12:40 tbh not really, you should look into the differences between "GiB" and "GB".
It's not really the drive manufacturers' fault.
It kind of is the drive manufacturers' fault. They are the ones who chose to stick with marketing based on decimal math, knowing full well that computers used binary math. After all, it made sense to sell a 40MB drive rather than a 38MB drive.
@Alpha Nerd - Yeah, you know... terms like "mebibytes" were nowhere to be found, and not used in computer education, hobbyist, general home use, or professional settings for... ever. Well, until the end of the 20th century. Hard drive and computer makers were still saying "(MB = 1,000,000 bytes)" many years after the new standards were proposed: simple.m.wikipedia.org/wiki/Mebibyte and its reference physics.nist.gov/cuu/Units/binary.html
I hadn't heard of mebibytes, et cetera, until last year, in spite of using computers since the 1970s and working in computer tech fields through 2007. Manufacturers simply began claiming that a megabyte was a million bytes! No manufacturer was using "MiB" even several years into the 21st century. I personally still seldom use such terms. I had thought, "Bah! Marketing lies!" and only discovered the terms' origin when I found those links up above a few minutes ago.
The words look silly to me, but I'm just shouting at clouds. :-)
@@FuzzyElf I feel like it kind of is the fault of the people who came up with those new terms. In my opinion they should've left MB as what it used to be (1024-based) and used the new naming (MiB) for the decimal 1,000x unit. I get how it would be confusing for some people who aren't very savvy, but it might've been better than the current confusion where you have zero idea what you're getting.
@@FuzzyElf The word looks silly to me too. I've been computing since the 80s, well before the terms were invented. But the disparities in marketed drive size versus actual usable size were apparent well before 1998. I'd guess this is the result.
@@crisscrossam That wouldn't be a good idea, since "mega" is a decimal multiplier for all units. It wouldn't be wise to make an exception just because people were used to the 1024x multiplier.
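The mismatch this thread is arguing about is easy to reproduce yourself: manufacturers count capacity in powers of 10, while operating systems traditionally report it in powers of 2. A quick sketch in Python (the function name is just for illustration):

```python
# Drive makers advertise decimal units (1 TB = 10**12 bytes);
# OSes traditionally report binary units (1 TiB = 2**40 bytes).

def advertised_to_binary(tb: float) -> float:
    """Convert an advertised decimal-TB capacity to binary TiB."""
    bytes_total = tb * 10**12   # decimal terabytes to raw bytes
    return bytes_total / 2**40  # raw bytes to binary tebibytes

# An "8 TB" drive shows up as roughly 7.28 TiB in the OS:
print(f"{advertised_to_binary(8):.2f} TiB")  # → 7.28 TiB
```

Nothing is "missing" from the drive in either case; the same number of bytes is just being divided by a different unit.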
Excellent video! Just the type of stuff I have been looking for. I need a new server and I have not done server work in over 10 years; a HUGE amount has changed in both hardware and software. This video has given me back the confidence that I can still build my own. And you want to know what was most inspiring? You made mistakes but still hit the target.
An average of $125 each on the HDDs? Great deal considering they are no longer available on Amazon and NewEgg is showing $220 each now. I'm not going to repeat the same BS other comments did about them not being NAS drives. Obviously NAS drives that are CMR would be better, but if you consider that you might only be using this for more of an archiving purpose and not lots of continuous IO, then mainstream drives are not a terrible idea at all. For the price paid for the server, I think this is a great option. Interested to know how well it manages heat, since 1U servers can run pretty hot.
(in Ukraine the one single 8TB Barracuda costs ~$339 today, was ~$562 in May...)
Yeah. I thought about going this route myself, but without a warranty and with the risk of getting SMR drives I just couldn't take the chance. Still, this is very interesting to see and gives me ideas for future NAS builds of my own.
Keep in mind that SMR drives don't exist in sizes larger than 8TB; anything larger will be CMR. So if you want to shuck drives and are afraid of getting SMR, go for the larger sizes.
SMR is not "older" and "inferior". SMR is a technology that permits drastically (+50%) increased density, allowing drives to be made larger or with fewer platters. It is either a cost-saving measure or a tool to drastically increase the size of drive you can make. It does, however, significantly impact performance. Regardless, your point is still valid: in a NAS these drives are suboptimal.
Device-managed SMR (like in those 8TB Barracudas shown) is inferior in all aspects except price per gigabyte - for the manufacturer, that is, not the customer, given the whole brouhaha over the last couple of years when all three majors started replacing their previous CMR offerings with SMR at exactly the same price.
Host-managed SMR is a different story but you will find neither host-managed nor even host-aware in the budget line unless you luck out on shucking high-capacity drives (14TB+!).
I don't know how to feel about this. So basically: -10 points for a 1U server, +5 points for used hardware, +5 points for rack mount, -15 points for not using a NetApp drive shelf, +10 points for shucking drives, -5 for using Seagate, -5 for no ECC RAM, -5 for not using an LSI controller, -5 for not using ZFS, +30 for using Linux. Honestly, good job and +100 for making a DIY NAS.
Your wooden case looks amazing!
A few problems: First, you have no redundancy for the cache drive. If that fails, you will lose data. Second, those motherboards are sort of time bombs. I've had four of them, all of which eventually died (due to bad capacitors, I think...repairable, but troubleshooting wouldn't be fun). Maybe you'll get lucky, but you should probably get a spare just in case. The onboard LSI RAID controller supports both SAS and SATA drives, too. I probably wouldn't trust those Barracuda drives, either, but maybe Unraid takes care of that to your satisfaction.
Nice build! Thanks for sharing. Doesn't the motherboard have an internal USB header for you to connect the bootable USB inside?
Very cool! BTRFS as your data/parity and XFS as your cache is a good combo (BTRFS makes a ton of unneeded writes to the cache drive otherwise).
I saw the same video by craft and was VERY tempted to get one of those...
I have Unraid on my server too, but I run Plex on it, and I used shucked drives as well.
Great video!!!!
Great, love this type of stuff :) To be honest I would just go with a couple of cheap NAS devices (like 3-4 drives), even buy them second hand to cut on price.
I may replicate something like this someday...but I'd probably also follow this approach and start with a few purpose-built server-grade drives and add to it over time. Costs more but you get higher reliability.
Few cheap NAS devices are ok if you can get them cheap but something like this is much more reliable.
You didn't do your research enough. 10TB expansions were yielding 10TB Barracuda Pro PMR drives; I was buying these and own 4 in my Synology NAS. The 16TBs had a good chance of being Exos; my last 2 were Exos. They are no longer in stock on Amazon. One other thing you probably should have done is a write/read check on them before shucking. If there were issues, at least you could still return them or claim warranty.
Awesome job... I wish parts here in Kenya were that cheap
So, I'm a noob too. Could this be used as, like, a Plex, Minecraft, or emulation server?
Great job! Can you say something about the temperatures it achieves during operation?
That wooden case at the start of the video looks incredible!
Nice video. Well done. Curious: what is your power usage on this? What does the IPMI interface look like?
Great video and setup! As many have asked, what is its power consumption? And also:
- What about noise?
- Are the disks hot-swappable?
- If one fails, how can you tell which one? Are there indicator lights I didn't notice?
The price of the 8TB drive just jumped to $200 LOLLLLLL!!!!!!
should i sell mine lmao
scalper
I've found a couple of them at my Microcenter for less than that over the past two weeks. Got one for $145 and the second for $130. But yeah, overall the supply is minimal, and the prices are upsetting.
@@duckythescientist This type is only okay-ish if you plan to wipe them between write sessions, like a backup drive. As a work drive they are shit.
Using the SSD as cache memory was amazing for avoiding problems and slow processing! Good work, my friend!
Hello brother, I want to build a multimedia server with these specs:
- 10 x 12TB 3.5-inch HDDs, 5,400 or 7,200 RPM
- RAM at least DDR4 at 2,933 MHz
- power supply not over 550 W
- has HDMI or DisplayPort
What kind of server should I buy? Can you suggest one? Thanks if you can help me ^^ 😉
The hardware setup is almost the same as the one I have been using for a few years now. My MB is a bit older but still uses Ivy Bridge E3 CPU, I have 16GB of ECC RAM and am running 11 HDDs and 1 SSD but mine is in a standard case with the drives in hotswap bays in the front. Only big difference is mine did not come with 12 built in SATA ports so I bought an SAS card and bought cables to split it into 8 SATA ports. I have been using it as my media and various server setup, with Plex being the main thing run on it but also some other servers in docker containers. It has been running like a champ for at least 6 years now without a problem.
$1,500 for the server? If that's with the HDDs, that's impressive.
Well, they're cheapo SMR drives. Would not recommend them for a NAS.
I would always use enterprise-grade hard drives for such a project, and only CMR drives. Then it wouldn't be 1,500 bucks though 😂
My fileserver is an Athlon II with 6GB ram and 1.8TB total data capacity. It also works as a host of a virtual machine running pfSense, Home Assistant and dlna. Debian here.
Works perfectly.
Matt, nice video... just curious how loud the fans are in this chassis? As you stated, it only has that copper heat sink for the CPU cooler, so I'm curious how loud the system runs and whether you're able to throttle those fan speeds. Thanks.
Excellent price on the server. Buying a full-tower PC case, power supply, motherboard, processor, 12-port SAS controller card (SAS supports SATA drives), and additional cables, all used, might cost more, and you'd still have to build it. The only disadvantage is no hardware RAID. I manage servers in my job and have had too many bad experiences with software RAID solutions, while hardware RAID tends to be rock solid in reliability. My NAS has 11 SMR drives in hardware RAID 5 using a cheap used SAS controller card from eBay, and it works very well. Write speeds typically max out 1Gb Ethernet.
I only just stumbled on this channel. The way he talks reminds me of NileRed, but instead of chemistry, it's computers.
SMR isn't outdated, it's a newer technology to make drives cheaper!