My LG G6 has a media server option in settings and 2TB of SD storage, and the phone is cheap; mine was $18.42 with 4GB RAM and a Snapdragon 821. All you need is to buy the SD cards; a 190MB/s 1TB SD card for the G6 is $94 without the phone.
@@beatyoubeachyt8303 The LG G6 had user-replaceable batteries too. If you need to replace batteries these days, you need a Torx driver from an iFixit kit.
Hey Jeff! Thanks so much for taking a look at our first ever all-flash NVMe NAS! We have made numerous improvements to our design since the last time we sent products to you, and we'd love to share all the ways we keep Red Shirts out of our NAS and enthusiasts and tinkerers inside! With our recent endorsement of third-party operating systems (though without technical support), we're sure that using our NAS is nothing short of a NASTastic experience, and we want to keep listening! If you, dear commenter or YouTuber, want to send me a message, feel free to do so! I love praise, comments, questions, and even criticism! Hit me up, and thanks again!
This might be a stretch, but are there any plans to sell NAS enclosures without hardware built in, so the user can choose? Love the direction ASUSTOR is heading in with allowing other OSes. Maybe there could one day be official TrueNAS and Unraid support?
@@JeffGeerling I'm doing my best! I still have to really sell these ideas to the more conservative and risk-averse elements in the office too. But your backing helps me get the point across!
One additional thing I'd call out when comparing HDD vs SSD: how much data you can store in a given physical space. It's a little insane to me the absolute minimal footprint that a flash based system can occupy, and for people who live in places where physical space is at a premium, that's a very real consideration.
The people who live in small places wouldn't be able to afford SSD prices. The only consideration is that a mechanical drive is more prone to failure than an SSD; however, an SSD chip could fry too, no problem.
@@s.i.m.c.a I'm kind of making an assumption here, but I think he's referring to people that live in places like cities (where even a 39m2 apartment costs 60% of your salary).
@@s.i.m.c.a You're making some pretty big assumptions there. Not everyone chooses to waste money on more space than necessary. Why is there a tiny house movement, anyway?
Please go to the link to Rick's site and indicate what features you'd be looking for specifically. I can't wait to see the final version he comes out with... I've seen renders of a much more reliable prototype based on the Rock 5 model B, but there's still time to let him know if there's some other feature you'd be missing!
@@hundredfireify I have been using an M.2 enclosure for a couple of years, but I'd like to be able to access it wirelessly sometimes, like with my phone. I have used wireless storage devices before (I had a Seagate wireless drive and a Western Digital wireless drive), but the current solutions don't support the flexibility I'm looking for yet. I want an AIO portable NAS with media output for a TV. I'm asking for a lot, but if it's not this device, I was looking into buying a LattePanda Sigma, which offers a lot of what I am looking for: speed, flexibility ("fully hackable"), portability, etc.
The N5105 can easily run with 32GB of RAM, which should help TrueNAS. And the slow-down on write speeds is due to reaching the end of the cache. Most cheaper flash drives use QLC memory as the most cost-effective option, with some cache (DRAM or SLC). Once it fills, the drive becomes dreadfully slow. Would be interesting to see the influence of that on the ZFS pool performance.
The pocket NAS would be perfect for me as a trucker: great to store some games on for my laptop. I might even be able to make a Ceph storage cluster; that would be something.
I do wonder if it'd run Ceph. I tried setting up Rook Ceph on a MicroK8s cluster running on an i5-6500T with 32GB RAM and ran out of CPU. Maybe I did something wrong, but it's certainly interesting.
The small SBC as NAS devices interest me for home clustering experimentation. Hiding a bunch of these around the house for distributed compute and storage would be neat... running your own little home cloud, the house is the server.
I took an old dual cassette player, gutted it, and put an SBC in it with an 8TB drive. I set it in a detached garage that's hardwired. Now I have a backup copy in a different building. The next step is getting a copy offsite.
Data hoarding is the only reason I'd ever look at HDDs going forward. Thanks to the oversupply of flash memory, it is a great time to set up flash-only storage. Not to forget how much easier SSDs are to move around without risking data loss.
Open bios shouldn't be "crazy", it should be expected / the norm for hardware that you buy - if the bios is locked, you don't really "own" the device. It's very sad that we're already at a place where an unlocked bios is "crazy" when that was the NORM for decades. Since when do you buy a PC that had a locked down bios / bootloader??
When you actually get 50+ PCIe lanes for your drives on something like an AMD EPYC, you can run into another problem for an NVMe-only NAS: the internal bandwidth of the CPU. When Linus Tech Tips filled an AMD EPYC system with 24 SSDs, they hit a major stability bug, because all that NVMe traffic ate up the entire internal bus bandwidth of the EPYC processor and started knocking CPU cores offline!
The pocket NAS is actually of GREAT use to me. It can be a travel NAS for my photography: it's easy to set up, and I could put my data on it without dumping it all on my PC, keeping it WAY safer. The Flashstor could be a cool thing for me at home as intermediate storage for hot projects, too. I could edit them there and, after I'm finished, archive them on a slower NAS. Especially since it's not locked down; that's a really great factor for people like me who have a DIY NAS and are considering pre-builts like this one, so everything can be managed with the same OS.
Awesome video, Jeff! Love the thoroughness of your reviews. What I'd one day like from consumer NASes is enclosures for DIYers to use. I can build a NAS in a PC case, but it doesn't have enough drive mounts. I can build one in an old server, but it's not power efficient, and empty server chassis with drive bays are super expensive.
Yeah, there are very few cases you can buy that are great for NAS use cases. It'd be pretty cool if ASUSTOR used a particular spec for their main boards so you could pop in a mini ITX replacement or something. Would make it so you could buy a used consumer NAS, rip out the guts, and put in your own!
@@JeffGeerling Standard RAID uses some version of CRC32 by default which has had hardware acceleration for a while now. BTRFS also defaults to using CRC32 as well though you can use a different option. ZFS uses Fletcher4 by default and SHA256 if you enable deduplication.
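To make the relative cost of those checksums concrete, here's a rough single-threaded benchmark sketch in Python. CRC32 and SHA-256 from the standard library stand in for the algorithms named above (Fletcher4 has no stdlib implementation), and real RAID/ZFS code uses hardware-accelerated versions, so treat the absolute numbers as illustrative only; the point is the gap between a simple CRC and a cryptographic hash.

```python
import hashlib
import time
import zlib

def throughput_mbs(fn, data, rounds=20):
    """Checksum throughput in MB/s for a given function over `data`."""
    start = time.perf_counter()
    for _ in range(rounds):
        fn(data)
    return len(data) * rounds / (time.perf_counter() - start) / 1e6

data = b"\x5a" * (8 * 1024 * 1024)  # 8 MiB test buffer
print(f"crc32:  {throughput_mbs(zlib.crc32, data):,.0f} MB/s")
print(f"sha256: {throughput_mbs(lambda d: hashlib.sha256(d).digest(), data):,.0f} MB/s")
```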
@@JeffGeerling apparently ZFS isn’t great at handling flash storage. EXT4 and F2FS are reported as having higher performance for arrayed flash storage like these. With a faster CPU and more PCIe lanes, a more optimized filesystem might also give you closer to spec performance out of those M.2 drives
Boom! Ampere Altra with 128 lanes of PCIe Gen4! Nailed this one. Something else that would be much better is the memory bandwidth, which also matters on all-flash NAS units.
$1300 is a pretty sizeable price premium for the size and low power usage. I've been looking at making a 4U box with 10GbE and a Ryzen 7 5700G for both NAS and Docker, and it's looking to be about $1300 for two 4TB drives, with a $75 expansion card to add four more. Sure, that's six bays instead of twelve, but it's also about 6x the CPU performance, a lot more PCIe lanes, a decent GPU for transcoding, and a dedicated NVMe slot for the OS. I even threw in 2x16GB of RAM, and I think it can take 4x32GB if I really wanted to.
At my org we're already talking full solid state with U.3 drives for servers moving forward. The elephant in the room is we don't expect to still be buying spinning rust in 10 years, but we have a tendency to keep equipment in production for 6+ years. You might think "that's at least one more refresh" but sometimes you move at the speed of committee approval.
I know it wouldn't be very fast, but I'd love to see an actual pocket NAS that used a Pi Zero W so I could power it with batteries and push/pull files to/from it while it's in my pocket.
Wow, great video comparing the nitty-gritty details of these 2 NAS solutions! So interesting and informative. Thanks for this, Jeff! I also really like how the purchased NAS is basically open hardware, not locked down, and you can install anything you want on it. That's the way it should be.
Awesome! I'm stoked to see NVMe storage prices really dropping. I picked up some 1TB WD SN850X drives for $55/ea on Prime Day and a 4TB version for $220. There have been some crazy deals on 'slower' drives, especially PCIe Gen 3 models. We just need GPUs to finally reach some level of sanity again, but that's about as likely right now as Samsung stopping their quest to be a crappier version of Apple.
FYI, the Intel N5105 will run 32GB (2x 16GB only) of RAM. I've got that installed in my QNAP TS-464 NAS, and plenty of people confirm it on Reddit. Now, I know Intel does specify a max of 16GB, and it does state that on its website. However, pre-late-October 2022, when I was researching a new NAS to buy, Intel's website did say a max of 32GB of RAM, which is why I went hunting on Reddit, because reviewers were saying 16 but I saw Intel say 32. I think it might have been a "your mileage may vary" scenario: even though Intel pre-October said 32GB, QNAP always stated a max of 16GB. So I think Intel were initially hedging their bets, and QNAP were being conservative to ensure they could 100% support customers. I've had 32GB running for 5 months now without issues.

But I agree the weakness of the N5105 is its PCIe lanes. QNAP only offer PCIe 3 x1 speeds, split up across what they are trying to do with 4 SATA drives, 2 onboard NVMe slots, and an add-in PCIe slot for 10GbE or 10GbE + 2 NVMe cards. I came from a J1900, so even if I wish for a little more, the N5105 is a pretty capable CPU. I would say look out for Intel's Alder Lake N100, N200, and N350 CPUs: even faster and more power efficient. I've got an N100 in my new pfSense firewall mini PC. I like the idea of the small SoC NAS; once we get a little more power I might deploy one at my mum's for a media server. They watch a lot of legally obtained films.
That "write speed cliff" which you fell off is there for all NAND-based flash storage, sometimes better and sometimes worse, but it is always there. Basically, when you write, you are really writing to pre-cleared blocks of flash. Clearing a block is a LOT slower than writing to an already-cleared block, and the pre-clearing happens in the background using hidden blocks in your NAND flash device. If you do constant writes, you eventually run out of pre-cleared blocks, then you drop down to the speed of clear-a-block-then-write. If you leave the storage alone for 10 minutes, you'll get another burst of high write performance, then a drop back down to the slower write perf. All NAND-based storage devices suffer this problem eventually, if your writes exceed the pre-clearing rate of the device. Enterprise drives normally just allocate more flash to hidden blocks, which are used either for faster write performance or to replace the inevitable failed blocks. For more details, read "Over-Provisioning NAND-Based Intel SSDs for Better Endurance", which also talks about performance.
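The cache-exhaustion behavior described above can be put into a toy model. All the drive numbers here are invented for illustration, not taken from any real product:

```python
# Toy model of the NAND "write cliff": writes land in pre-cleared (cache)
# blocks at full speed until they run out, then fall to the native
# clear-then-write rate. Numbers below are hypothetical.
def write_seconds(total_gb, cache_gb, cache_mb_s, native_mb_s):
    fast_gb = min(total_gb, cache_gb)
    slow_gb = total_gb - fast_gb
    return fast_gb * 1000 / cache_mb_s + slow_gb * 1000 / native_mb_s

short = write_seconds(50, 100, 3000, 450)   # burst fits in the cache
long = write_seconds(500, 100, 3000, 450)   # sustained write blows past it

print(f"50 GB burst: {short:.0f} s ({50 * 1000 / short:.0f} MB/s average)")
print(f"500 GB copy: {long:.0f} s ({500 * 1000 / long:.0f} MB/s average)")
```

The average speed of the big transfer ends up far below the burst speed even though most of the work happens at the "slow" rate only after the cache fills, which matches the pattern of a fast start followed by a cliff.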
At 9:50, you implied that unlike ZFS, Btrfs doesn't support snapshots and synchronisation. However, Btrfs does support snapshots and commands "btrfs send" and "btrfs receive" can send and receive snapshots between two hosts over a network, similar to ZFS commands "zfs send" and "zfs receive".
Btrfs does, I wasn't careful with my wording there, as ADM does support Btrfs (and I used it on the NAS we deployed at my Dad's radio station). But some of the Btrfs features are not as easy to use through ADM as they would be on plain Linux, and that was more what I was comparing (ADM vs TrueNAS in particular) here.
I like ZFS but I ain't a ZFS zealot. And I never delete any comment on any video, except for anything with commercial spam (e.g. "Telegram me you won a prize") or explicit content.
@@patryk4815 I have had arguments with him several times. No comment was deleted, even when he didn't agree. Now, David Murray (The 8-Bit Guy), on the other hand... he does. Maybe you are confused.
@@JeffGeerling I use both ZFS (mostly in TrueNAS Core, but I've also used it on Linux) and Btrfs, and both work well, but I tend to prefer the ZFS snapshot model and naming syntax. Btrfs treats snapshots as directories in the same file system, so it's easier to misplace them, whereas ZFS records snapshots in a separate namespace that you can list easily with the command "zfs list -t snapshot". However, on Linux, I tend to use Btrfs more often because it is available in the kernel and requires less memory than ZFS. Though I've used ZFS for almost a decade, I've yet to learn how to control the amount of memory that the various ZFS caches consume. I guess it's never been a priority since I mostly run it on my TrueNAS machine.
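For what it's worth, on Linux the ARC ceiling is controlled by the `zfs_arc_max` module parameter (a value in bytes). A small sketch of computing a candidate value; the 25% fraction here is an arbitrary example, not a recommendation:

```python
# Candidate ARC cap: a fraction of system RAM, in bytes. Apply it with
#   echo <value> > /sys/module/zfs/parameters/zfs_arc_max
# or persistently via "options zfs zfs_arc_max=<value>" in
# /etc/modprobe.d/zfs.conf.
def arc_max_bytes(total_ram_gib, fraction):
    return int(total_ram_gib * (1 << 30) * fraction)

limit = arc_max_bytes(32, 0.25)
print(limit)  # 8589934592, i.e. an 8 GiB ARC cap on a 32 GiB machine
```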
The Origami thing could even run (even if I don't need it) on my power bank, which caps at 10.5 watts, probably all day long under a good load. What are the chances? 😂 Great video Jeff, please keep going and stay healthy.
I'd like to see them go one step further; a travel router with NAS capabilities using flash storage. For travel, or even for home use, one or two NVMe slots should provide plenty of storage. Great for travel or off grid use.
You might be able to use more than 16GB of memory on the N5105. Well, I think it depends on the motherboard. I installed 2 sticks of 16GB in my Topton N5105 router just this morning and it works just fine. I also have an N5095 board with 12 SATA ports. I installed 32GB of RAM in that and it works too!
Yeah, some people mentioned 32 GB works here. I know 16 does because that's the spec, and 64 doesn't because ServeTheHome tested that and it broke. So 32 might be the goldilocks if you want a lot of RAM.
@@JeffGeerling About the slow perf you saw on TrueNAS: I noticed you're using SCALE; did you try Core? I've had bad performance experiences with SCALE and good ones with Core on limited hardware (especially old CPUs and lower-end NICs supported by Core). May be worth a try...
If the pocket NAS fan was squealing that badly, it was likely damaged in shipping. Ball bearing fans are the highest-quality, longest-lasting industrial fans, but they're also really sensitive to shipping damage; I learned this the hard way. I suspect it's a big reason why the PC building community considers them worse for noise than sleeve bearing fans.
Hey Jeff, at 8:30 you should disconnect the 4-pin DC power at PJ1 on your Supermicro X10SDV board. It's not recommended, because the board alternatively supports two power sources. You can find this information in the manual PDF on page 26 (1-18):
Note 1: The X10SDV series motherboard alternatively supports 4-pin 12V DC input power at PJ1 for embedded applications. The 12V DC input is limited to 18A by design. It provides up to 216W power input to the motherboard. Please keep onboard power use within the power limits specified above. Over-current DC power use may cause damage to the motherboard!
Note 2: Do not use the 4-pin DC power at PJ1 when the 24-pin ATX power at JPW1 is connected to the power supply. Do not plug in both PJ1 and JPW1 at the same time.
My guess for the "subpar" ZFS performance is a mix of it still making checksums for data, how it distributes data to the vdevs (which report back that they are done and have committed the data), and the PLX switching going on adding latency, maybe? It may also have to do with updating the metadata; it would be a neat experiment to use 2 of the SSDs as a metadata offload for the rest, to see if that brings you closer to generic RAID.
@@peterbronez1188 I don't see why not, but if you are thinking of using the Optane as a ZIL, it is mostly moot for SMB, as SMB is async I/O unless you set the ZFS dataset to sync=always.
I agree with the checksum idea. I suspect if you turned them off you'd see that saturation occur pretty easily. Though I wouldn't recommend that as a long term solution; it's part of the point of ZFS.
If you're using it for video editing, because you're always transferring large files, you're going to be burning through the finite write endurance limits of those SSDs GUARANTEED. The analogy that I like to use for NVMe SSDs is they're brake pads for super/hypercars. Yes, you can go really, really fast in super/hypercars. But you're also going to burn through the brake pads that much sooner as well.
The cost of quality NVMe SSDs has dropped by half in the last 14 months. Maybe others prefer the rock-bottom pricing of spinning media, but the premium for NVMe SSDs isn't so premium anymore. Only capacity keeps spinning media in my NAS; if I could buy consumer-level 16TB SSDs, I probably would.
PCIe x16 to quad M.2 adapters are like $30, so that Ampere option is interesting. Or if you can find any motherboard that supports bifurcation, you could make a full-speed RAID array that isn't bottlenecked by bus or CPU. I've been looking at an EPYC board that would support tons of PCIe lanes, specifically the ASRock Rack ROMED8-2T. You could put 26 full-speed Gen 4 SSDs on that. Video editing is just barely too needy to run nicely on hard drives, and I don't know of any solid caching solutions. Instead, what's making sense to me is a RAID of SSDs as a "hot" pool to store an active video editing library, which then gets snapshots backed up to a "cold" hard disk pool.
I built my NAS using my old Ryzen 1700X with 8X 2TB Crucial MX500 SATA SSDs under Windows Server 2016 and Windows Storage Spaces. The processor, motherboard, and memory were leftovers from an upgrade, so it essentially cost me nothing, and the drives are now running under $100 each. (The 10Gb network cost a bit more.) Running with a mirror config and 8TB of usable space, I get about 800MB/s transfer rates, nearly saturating my 10Gb link.
Using what you have is always the cheapest option! (Though Windows Server 2016 is an interesting choice, it's more rare to see that used for a storage-only server).
@@JeffGeerling It's what I knew how to do. (Plus getting the license key from VIP-SCDKey.) It's not the best, but I tried other methods and couldn't get them right. Either they were too confusing to set up or I couldn't actually log into the share after getting it done to put my data in. It was too annoying, so after 2 months of dealing with it, I went with WSS. (The REAL WSS, not the dynamic partitions.) I also happen to have iSCSI targets on that drive set for my three Hyper-V hosts (self training lab) to back up to using the built in Windows Server Backup. Works great.
Thank you for sharing. The ideal specifications for my DIY NAS would include support for an ARM CPU, 10G Ethernet, and NVMe SSD. The prototype shown in the video is already very close to that.
I have created a NAS which gives the best of both price and performance, using bcache. Storage pool: 3x 10TB HDD and 3x 500GB NVMe, plus 1x 120GB SSD with Debian 11. Each 10TB HDD is coupled with a 500GB NVMe drive in "writeback" mode. Once the 3 bcache devices were created, I used Btrfs over those devices. The best part is that bcache gives me read and write caching together, so in a nutshell I am getting almost-NVMe performance on my SATA HDDs. I am also enjoying Btrfs snapshots. To manage storage I am using Cockpit. This serves my purpose.
I use a Raspberry Pi 4, Sata 2TB SSD, OMV, and no raid. That works well for me because I just have a few movies and TV shows on it, and all of those files are copied from DVD or Blu Ray discs I own, so I'm not worried about data recovery. I do get about 100 MB/s, which is good enough for streaming.
All I'm waiting for is 32TB of NVMe to be at least somewhat affordable. I currently have a NAS with 4 8TB spinning disks. I'd like to get a second unit going so I can switch to solid state for the primary and keep the spinning disks as a backup.
"who are these for?". I am in the planning stages for building a NAS for my camper. I want to be able to stream from it while going down the road so my wife can watch movies. Can't do that with a bunch of record players. SSD for me please and with size and power consumption being an issue, I'll take the m.2 option. I am still really considering a SBC NAS of some sort. I really want to try to build one out of one of the N100 micro PCs.
I've liked my TeamGroup SSDs as they are cheap, usually reliable, and have a solid warranty which I have used. What I don't like about them is most of their current lineup is DRAMless, but for my uses (homelabbing with RAID or ZFS, with spinning rust as my main NAS array) they work well.
Are these SSDs ok to use in a NAS? I guess with all the videos I see from other creators I thought I’d have to shell out more money for NAS specific drives. Is DRAM just what I have to look for?
@@jacobdavis6615 There is no such thing as "NAS-specific drives", either SSD or HDD; it's mostly a marketing gimmick that WD started in an effort to squeeze more money out of people. DRAMless SSDs are usually cheaper and have lower performance, but just as with hard drives, that's not terribly important for a NAS where you are bottlenecked to 120MB/s by a gigabit connection (or 1-ish GB/s by a 10Gbit connection). What you want to look at is the write endurance value.
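A quick sketch of the bottleneck math above. The 5% protocol-overhead allowance is a rough assumption, not a measurement; real SMB/NFS throughput varies:

```python
# Back-of-envelope: why drive speed far past the NIC speed is wasted.
# Line rate divided by 8 bits/byte, minus a rough allowance for
# Ethernet/TCP/SMB framing overhead.
def usable_mb_s(link_gbps, overhead=0.05):
    return link_gbps * 1000 / 8 * (1 - overhead)

for link in (1, 2.5, 10):
    print(f"{link:>4} GbE: about {usable_mb_s(link):,.0f} MB/s usable")
```

Even a cheap DRAMless SSD can sustain reads well past what 1GbE or 2.5GbE can carry, which is why endurance matters more than peak speed here.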
@@jacobdavis6615 I've found them to be fine, but I'm mainly using them as either a read cache for HDDs or in an array that's at least mirrored. Would drives with DRAM be faster and maybe last longer? Probably, but capacity is more important than speed for me. I should also add that I have killed a couple. I now tend to skip the 128GB ones, as the price of the larger capacities has come down, and higher-capacity drives are in theory more reliable.
ASUS's consumer electronics guys seem to be very pro-consumer, a breath of freedom with Apple, Samsung, Microsoft, John Deere, and Intel trying to end personal ownership.
It's amazing how quickly 2 and 4 TB NVMe drives have fallen in price. I'm okay with just a few TB of usable space so I'm doing RAID 10 with 10 drives right now, plus 2 spares. But I could upgrade over time and double or quadruple the capacity, once 2/4/8 TB drives hit whatever price point I'm comfortable with.
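For reference, the usable-capacity math for that layout; a simple sketch, since real pools lose a bit more to filesystem overhead:

```python
# RAID 10 mirrors drives in pairs, then stripes across the pairs, so
# usable space is half of the drives actually in the pool; hot spares
# contribute nothing until they replace a failed drive.
def raid10_usable_tb(drive_tb, drives_in_pool):
    assert drives_in_pool % 2 == 0, "RAID 10 needs an even number of drives"
    return drive_tb * drives_in_pool / 2

print(raid10_usable_tb(2, 10))  # 10x 2TB (plus 2 spares on the shelf) -> 10.0
```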
I got the 6-drive version to replace my Unraid server to reduce power usage, noise, and physical space. Gonna be setting it up in its (probably) permanent location today and making sure I copied everything over, but so far I'm pretty satisfied. Sure, it's not as customizable as the Unraid server, but I wasn't really using everything Unraid had to offer anyway.
Woah, that pocket NAS is crazy small. Would love to see ASUSTOR make a tiny Arm NAS, with something like the Qualcomm 8cx Gen 3/4 or MediaTek's WoA chip once that's finally out.
For someone with a large movie and TV library, that's not a lot of space. I have 3 NAS drives totaling 20 TB of space (plus a duplicate of each for backups) to house my collection of TV shows and movies ripped from DVDs and BluRays to watch on various TVs around the house. I have a few more TV series to rip to disk, then I'll be looking to add a couple more drives.
Granted, I am an e-waste recycler, but I don't know if I would be confident enough to use TeamGroup drives in something that matters. I don't think I've had a single working one of their drives come my way, regardless of capacity.
Okay, open BIOS is a killer feature. I wrote off pre-built NAS boxes for that exact reason, but, ugh, I may just have to consider this one then, because that's HUGE.
But is TeamGroup high enough quality for a NAS? Like, the 2TB M.2 has a 5-year or 5TB-written warranty. So you can only write the entire drive a little over twice before the warranty has expired? Wouldn't the drives be out of warranty in just a few months in a ZFS pool, due to periodic scrubs?
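Working through the arithmetic in that question: note that if the warranty really were 5TB written on a 2TB drive, it would be unusually low (TBW ratings are normally in the hundreds of terabytes, so the spec sheet is worth re-reading), and ZFS scrubs are read operations, which don't count against write endurance. The numbers below are hypothetical:

```python
# Endurance math; plug in the real TBW figure from the spec sheet.
def full_drive_writes(tbw_tb, capacity_tb):
    """How many times the whole drive can be filled under warranty."""
    return tbw_tb / capacity_tb

def years_at_daily_writes(tbw_tb, tb_per_day):
    """Years until the endurance rating is used up at a given write rate."""
    return tbw_tb / tb_per_day / 365

print(full_drive_writes(5, 2))                   # 2.5 fills, as the comment says
print(f"{years_at_daily_writes(600, 0.1):.1f}")  # a 600 TBW drive at 100 GB/day
```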
SSD costs are coming down but at the same time hard drives just keep packing more and more storage per drive with WD now listing 26TB drives with 20 and 22TB drives being fairly common at this point.
Those high end drives are a little exotic and risky to deploy for a desktop scenario where you might only have 4-6 of them, but they do bring the cost per TB way down! I hope to see NVMe prices continue to fall. It's been pretty dramatic the past 3 years.
If you’re able to either view it in the datasheet or visually see where the PCIe lanes go, perhaps you can try that 3-drive RAID0 test again but make sure all 3 drives are behind separate PCIe switches. Even just 1xPCIe gen3 lane per should get you ~1GB/s to that PCIe endpoint. In fact, it may be just as interesting to simply try a single drive and see if you still hit that 600MB/s.
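The ~1GB/s-per-lane figure above comes straight from the link math: raw transfer rate times encoding efficiency, before protocol overhead:

```python
# Raw per-lane PCIe bandwidth: transfer rate x encoding efficiency / 8 bits.
# Gen1/2 use 8b/10b encoding; gen3 onward use 128b/130b.
GT_PER_SEC = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130}

def lane_mb_per_s(gen, lanes=1):
    return GT_PER_SEC[gen] * 1e9 * ENCODING[gen] / 8 / 1e6 * lanes

print(f"gen3 x1: {lane_mb_per_s(3):.0f} MB/s")   # ~985, hence "about 1 GB/s"
print(f"gen3 x4: {lane_mb_per_s(3, 4):.0f} MB/s")
```

So even a single gen3 lane per drive comfortably exceeds the ~600MB/s result from the video, which is what makes the switch-topology question interesting.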
Looking at these, I noticed they had TeamGroup NVMe storage. TeamGroup are cheap, but they are not anywhere close to high performance. That said, they should last a while even in a NAS. Spinning drives still have their place if a lot of read/write actions are going on. Spinning drives, despite being slower, will still saturate a 2.5Gbps connection and will tend to last longer. If your NAS is primarily reading data with little writing, then the NVMe could be an option.
They didn't come with TeamGroup, I just bought those because they were cheap enough for me to afford for this video, but also decent enough they could perform well in aggregate.
@@tomspettigue8791 no, but it's easy to confuse. Pine64 makes RockPro64 board with RK3399 - same SoC as in Rock Pi4. Rock Pi4 and Rock5 are made by Radxa.
This was a VERY good dive into this subject... and you actually got better results than I got with an AMD EPYC with 128 PCIe lanes... in a Dell R7415 (where Dell only provides 32 PCIe lanes for all 24 NVMe slots). I'm looking at getting an R7525, but it's sad just how much you have to spend for U.2 access to some PCIe lanes, just because of the games manufacturers play.
Yeah, I really wish U.2 were more available in the consumer space. To even get adapters for it can be a bit pricey. Would love consumer 2.5" drives to be around as drop-in replacements where SATA drives were used.
The Teamgroup MP34 drives are great on paper, but I've seen anecdotes about people (in the US) being instructed to send units that needed to be RMA'd to Taiwan, as opposed to a center in the US. I actually have one, but I hope I won't need to RMA it.
Considering how much RAM you installed, I'm surprised TrueNAS and ZFS saw so much of a performance hit even when striped. I bet it has something to do with those PCIe switches. I wonder if reads would be faster as a merged JBOD instead.
Unlike many ARM vendors, though, Rockchip does do upstreaming of their kernel support so it will work in the future with just a vanilla upstream kernel. (Also those SSDs are a lot cheaper than the Samsung I tend to use. Are they any good?)
The Pocket NAS looks quite promising! I'm interested in how it interfaces to the Rock 5 B - I assume through the M.2 slot underneath the Rock 5 B, but it also looks like it has an interface through the GPIO pins, or are those just for power?
I still think using a Mac mini M1 with 10-gig Ethernet will probably be the best way to go for a high-powered NAS. A few Thunderbolt PCIe adapters would be required to install all of the storage needed, along with a Thunderbolt gigabit adapter. You can also add on a storage array of your choice, and you still have two USB 3.0 ports.
You have the knack for reviewing hardware I'm interested in buying. The openness (is that even a word?) of the ASUSTOR has settled it; I'm getting one to replace my old Syno (1511+, so 12 years of 24/7 service; can't complain, but it needs an upgrade).
Yeah; honestly I was quite happy with my little old-Xeon-NAS I built... but if ASUSTOR has open hardware, I like that I can have my NAS cake (hardware purpose built for storage) and eat it too (TrueNAS, or whatever OS I want).
At that low density, you wouldn't need the NAS in the first place if you weren't using a Macintosh. Two Crucial P3 4TB drives are $400, and most modern machines support some kind of PCIe bifurcation, so one $15 adapter later, you have local redundancy at a full 2GB/s.
You didn't do it here, but I always find it weird when people working on electronics first state the temps in C, then convert the temps of the electronics or the tools used from C to F in their video. Component temps, hot air station temps, and soldering temps are all going to be in C as standard, even in the US.
Great video. Always enjoy. Just want to say I would love to see you make your Windows-on-Arm PC into a NAS/server. Also, I eat up both NAS videos and anything to do with Linux on Arm or Windows on Arm. Just saying. Love your channel.
I wonder how the pocket nas would hold up with an Intel N100 Board (like the one from Beelink EQ12). I plan on trying a 2.5GbE NAS like that to have the best of both worlds
That black box sounds really handy, if far more spendy than what I usually go for. I wonder if Intel's next N100 CPU would be enough to fill up that 10Gb bus. Personally, I'd be down with owning a small, fanless server-y thing with enough SATA storage to saturate 2.5Gb/s, or even 'maxing out' 1Gb. Less focused on speed and more on consistent throughput.
I assume the squeaky noise is coming from the buck converter instead of the PWM controller directly. Had the same issue on my 3D printer. There are different solutions out there for that specific problem.
The N5105 will work with up to 64GB of RAM ;) It makes a nice Proxmox box for home automation, network management, etc. (ark.intel.com just says 16 because that's what they tested.)
The ASUSTOR machine does not come with ECC RAM, which makes sense because the Intel Celeron N5105 does not support ECC RAM; therefore I would discourage the use of ZFS on it. Same goes for the pocket NAS; sadly, there are almost no ARM-based boards with ECC support out there :(
Awesome context for these products. Two thoughts: First: Why do you choose to edit over the network? It makes sense for a company like LMG with 2 dozen editors, writers, camera operators, and on camera talent all collaborating. But if it's just you and maybe an editing assistant, wouldn't it make more sense to keep files you're actively working with on your workstation and upload everything to the NAS when you're done? Second: For people like Jeff who actively use their NAS rather than just as bulk storage or a backup target Flash storage might not be as expensive compared to HDDs as it first appears. You should see significant power savings. And because Flash has nearly unlimited read endurance it should last longer.
Good question! It's mostly because I edit sometimes at my main desk on my Mac Studio, other times on my laptop, but like to be able to dump and work from the NAS in either case, even if I have to upgrade or shut down one or the other. I only really copy to the computer's internal storage (which is only a few hundred GB free...) if I'm going out on the road and need to edit.
3:52 I have noticed this with high-end NVMe drives that use QLC NAND. I have a Sabrent Rocket Q4 2TB Gen4 NVMe. When copying ~300GB of photos from a day at the airshow at Scott AFB down by St. Louis, MO, the Rocket Q4 got about 70GB into the transfer from my SD card, then slowed to just 40MB/s. I tried restarting the transfer several times, and just ended up sending the files to a 16TB HDD, which ran at nearly the full 300MB/s my SD card supported for the entire transfer, only sometimes slowing to 250MB/s but generally around 280-290MB/s.
The 45MB/s likely means that the SLC cache ran out on the parity drive (or all 3 drives) and the drive had to slow to the maximum supported speed of the QLC (assuming these are QLC-based, which at 1TB for $40 suggests is the case).
These cheap SSDs (both NVMe and SATA) are only usable up to about 20-30% capacity. Beyond that point, no matter how hard you try, write speed becomes a joke compared to a decent drive (e.g. ~300MB/s for SATA; for NVMe, >800MB/s); meanwhile, those under $30/TB ALWAYS drop to 30MB/s or so after 20-30% capacity. These manufacturers definitely know what they are doing - if a user can notice their SSD becoming too slow to write after such light use, the manufacturers must have done extensive testing to make sure this is how their cheap-end products perform, so that their high-end products can sell.
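As a back-of-envelope illustration of why the cliff dominates large transfers: the cache size and speeds below are made-up numbers for a hypothetical drive, not measurements of any product in the video.

```python
# Hypothetical drive: writes at 3000 MB/s until its SLC cache fills,
# then drops to 100 MB/s (typical QLC direct-write territory).
def avg_write_speed(transfer_gb, cache_gb, fast_mbs, slow_mbs):
    """Average MB/s over a single large sequential write."""
    transfer_mb = transfer_gb * 1000
    cache_mb = min(cache_gb * 1000, transfer_mb)
    secs = cache_mb / fast_mbs + (transfer_mb - cache_mb) / slow_mbs
    return transfer_mb / secs

# A 300 GB dump into a 70 GB cache is dominated by the slow phase:
print(round(avg_write_speed(300, 70, 3000, 100)))  # ~129 MB/s average
# A transfer that fits entirely in cache never sees the cliff:
print(round(avg_write_speed(50, 70, 3000, 100)))   # 3000 MB/s
```

The takeaway: once the transfer is a few times larger than the cache, the "headline" speed contributes almost nothing to the average.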
You can address the slow write performance in openmediavault by simply adding some options in the extra options box under SMB/CIFS. I noticed a substantial improvement in write speeds over a gigabit connection on my Xeon E3 powered NAS.
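For reference, the kind of tuning people commonly paste into that box looks like the fragment below. These are real Samba parameters, but the commenter doesn't say which ones they used, so treat this as a starting point: verify each option and value against the smb.conf(5) man page for your Samba version before relying on it.

```ini
# Candidate smb.conf tuning options (go in the [global] / extra options box).
use sendfile = yes
aio read size = 16384
aio write size = 16384
min receivefile size = 16384
socket options = TCP_NODELAY IPTOS_LOWDELAY
```

Benchmark before and after; on modern Samba versions several of these are already defaults and some can even hurt on certain workloads.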
@1:40 Oh man, that could probably use some heatsinks between the M.2 drives. I'm imagining even something as simple as two copper plates with some metal spacers between them and thermal adhesive on both sides. Then have a squirrel cage fan or something blow air from the side, which would force air through those copper plates, cooling them.
Great review, thank you. My only concern with all solid state storage is the finite write life that solid state storage has compared with spinning disks.
It's really dependent on what you buy; different drives have different write expectancies. Many flash drives could be written to for years with normal consumer write patterns and not have an issue. And hard drives often fail prematurely, as well (just check Backblaze's reliability reports!), so in the end, the best protection is a 3-2-1 backup plan, and choosing drives wisely.
@digitalpilotnm6039 modern flash drives are designed with an expected lifetime of 10 years. Most modern hard drives are designed with an expected lifetime of 3-5 years. Most flash drives have much, much higher MTBF - up to 10x higher than disk drives. If you're genuinely concerned about lifetime then you definitely need to switch to solid state.
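Rough arithmetic backs this up. The 600 TBW rating below is a made-up but typical figure for a consumer 1TB TLC drive, not a spec for any drive in this video:

```python
def endurance_years(tbw_rating_tb, daily_writes_gb):
    """Years until the rated terabytes-written figure is exhausted,
    assuming a constant daily write volume."""
    return tbw_rating_tb * 1000 / daily_writes_gb / 365

# Writing 50 GB every single day against a 600 TBW rating:
print(round(endurance_years(600, 50), 1))  # ~32.9 years
```

Even heavy consumer write patterns usually exhaust the warranty period (or the rest of the hardware) long before the NAND wears out.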
I ordered the 6-bay Flashstor a couple days ago; should get it early next week. Ordered 16GB RAM for it as well. Will take a while to fill it up with NVMe drives; will probably order 1 or 2 4TB ones each pay period until I fill it up. Wish I knew about 32GB working in this; that will be a future upgrade if I run some VMs on it.
Also big props for not locking down the bios and providing a convenient video port
Thank you!
Seriously. I’ve never been more tempted by a consumer solution
@@taylormanning2709 We appreciate your consideration! I'll do my best to fight hard for the consumer.
My LG G6 has a media server option in Settings, supports 2TB of SD storage, and the phone is cheap; mine was $18.42, with 4GB RAM and a Snapdragon 821. All you need is to buy the SD cards: a 190MB/s 1TB SD card for the G6 is $94, without the phone.
@@beatyoubeachyt8303 The LG G6 had user-replaceable batteries too. If you need to replace batteries these days, you need a torque wrench with an iFixit kit.
Hey Jeff! Thanks so much for taking a look at our first ever all-flash NVMe NAS! We have made numerous improvements to our design since the last time we sent products to you, and we'd love to share all the ways we keep Red Shirts out of our NAS and enthusiasts and tinkerers inside! With our recent endorsement of third party operating systems (though without technical support), we're sure that using our NAS is nothing short of a NASTastic experience, and we want to keep listening! If you, dear commenter or YouTuber, want to send me a message, feel free to do so! I love praise, comments, questions, and even criticism! Hit me up, and thanks again!
Thank you for (officially!) allowing alternate OSes on your NASes! Now... when ZFS in ADM? ;)
asustor my beloved
This might be a stretch, but are there any plans to sell NAS enclosures without hardware built in, so the user can choose? Love the direction ASUSTOR is heading in with allowing other OSes. Maybe there could one day be official TrueNAS and Unraid support?
@@JeffGeerling I'm doing my best! I still have to really sell these ideas to the more conservative and risk-averse elements in the office too. But your backing helps me get the point across!
This is how you build a good reputation. Not locking down your hardware, listening to feedback, and engaging constructively with your users.
One additional thing I'd call out when comparing HDD vs SSD: how much data you can store in a given physical space. It's a little insane to me the absolute minimal footprint that a flash based system can occupy, and for people who live in places where physical space is at a premium, that's a very real consideration.
Something I didn't even consider!
The people who live in small places wouldn't be able to afford SSD prices. The only consideration is that a mechanical drive is more prone to failure than an SSD; however, an SSD chip could fry no problem as well.
@@s.i.m.c.a Not everyone with money lives in big places...
@@s.i.m.c.a I'm kind of making an assumption here, but I think he's referring to people that live in places like cities (where even a 39m2 apartment costs 60% of your salary).
@@s.i.m.c.a Making some pretty big assumptions there. Not everyone chooses to waste money on more space than necessary. Why is there a tiny house movement anyway?
I really appreciate that you just go straight into it with no intro
Gotta respect my viewer's time!
The pocket nas is almost EXACTLY what I've been wanting for a few years. I'm a traveler who requires a lot of offline video storage.
Please go to the link to Rick's site and indicate what features you'd be looking for specifically. I can't wait to see the final version he comes out with... I've seen renders of a much more reliable prototype based on the Rock 5 model B, but there's still time to let him know if there's some other feature you'd be missing!
@@hundredfireify I have been using a m.2 enclosure for a couple years but I'd like to be able to access it wirelessly sometimes like with my phone. I have used wireless storage devices before (I had a Seagate wireless drive and a western digital wireless drive) but the current solution don't support the flexibility I'm looking for yet. I want a AIO portable Nas with media output on it for tv. I'm asking for a lot but if it's not this device I was looking into buying a Latte Panda Sigma which offers a lot of what I am looking for. Speed, flexibility ("full hack-able" , portability, etc.
@@youdontneedmyrealname if you set up a samba share on your laptop you could wirelessly share your enclosed drive to your phone
The N5105 can easily run with 32GB of RAM - should help TrueNAS. And the slow-down on write speeds is due to reaching the end of the cache. Most cheaper flash drives use QLC memory as the most cost-effective option, with some cache (DRAM or SLC). Once it fills, the drive becomes dreadfully slow. Would be interesting to see the influence of that on ZFS pool performance.
The Pocket NAS would be perfect for me as a trucker; great to store some games on for my laptop. Might even be able to make a Ceph storage cluster - that would be something.
I do wonder if it'd run Ceph. I tried setting up Rook Ceph on a microk8s cluster running on an i5-6500T with 32GB RAM and ran out of CPU. Maybe I did something wrong, but certainly interesting.
Couple external drives would be much simpler. Use 1 and keep a 2nd synced occasionally as a backup. Much cheaper
Yea but why
The small SBC as NAS devices interest me for home clustering experimentation. Hiding a bunch of these around the house for distributed compute and storage would be neat... running your own little home cloud, the house is the server.
I took an old dual cassette player gutted it and put an sbc in it with an 8TB drive. Set it in a detached garage that's hardwired. Now have a backup copy in a different building. Next step is getting a copy offsite.
Man, it's crazy seeing those prices on SSDs. I remember paying $140 for my 1TB drive a few years ago.
Yep. In a few years SSDs will be the only thing you can get. For now, for super large drives, spinning rust is the way to go.
I remember paying $400 for a 20 MB hard drive, just a "few" years ago.
@@KameraShy I'm with you, I remember this same conversation and progression, but with GB instead of TB. On mechanical disks.
Hah, I just bought two 2TB NVME drives at $200 each a few months before the prices dropped this year.
Yes, it's crazy...
Data hoarding is the only reason I'd ever look at HDDs going forward. Thanks to the oversupply of flash memory, it is a great time to set up flash-only storage. Not to forget how much easier they are to move around without risking data loss.
Why do you need to move your NAS around?
Amazing product from asustor! Open bios is crazy, I love being able to use my own software
Thank you!
Open bios shouldn't be "crazy", it should be expected / the norm for hardware that you buy - if the bios is locked, you don't really "own" the device. It's very sad that we're already at a place where an unlocked bios is "crazy" when that was the NORM for decades. Since when do you buy a PC that had a locked down bios / bootloader??
When you actually get 50+ PCIe lanes for your drives with something like an AMD EPYC, you can run into another problem for an NVMe-only NAS: the internal bandwidth of the CPU. When Linus Tech Tips filled up an AMD EPYC with 24 SSDs, he hit a major stability bug in the CPU, because all that NVMe traffic ate up the entire internal bus bandwidth of the EPYC processor and started knocking CPU cores offline!
That is one downside to NVMe, just like throwing hundreds of physical CPU cores in a system, all that NVMe can make things get wonky!
The Pocket NAS is actually of GREAT use to me. This can be a travel NAS for my photography. It's easy to set up, and I could put my data on it without blasting it onto my PC, keeping it WAY more safe. The Flashstor could be a cool thing for me at home as intermediate storage for hot projects, too. I could edit them there and, after I'm finished, archive them on a slower NAS.
Especially due to it not being locked down. This is a really great factor for people like me who have some DIY NAS and consider some pre-builts like this one so that they can be managed with the same OS.
As an IT pro, that shirt is insanely accurate to my life.
Awesome video Jeff! Love the thoroughness of your reviews. What I'd one day like from consumer NASes is enclosures for DIYers to use. I can build a NAS in a case, but it doesn't have enough drive mounts. I can build one in an old server, but it is not power efficient, and empty server chassis with drive bays are super expensive.
Yeah, there are very few cases you can buy that are great for NAS use cases. It'd be pretty cool if ASUSTOR used a particular spec for their main boards so you could pop in a mini ITX replacement or something. Would make it so you could buy a used consumer NAS, rip out the guts, and put in your own!
The Asustor read speed drop with TrueNAS was from ZFS's checksum verification on read. The N5105 is just a bit slow at that task.
Good to know! That does make sense, that ZFS would be adding some processing that holds it back a little.
@@JeffGeerling Standard RAID uses some version of CRC32 by default which has had hardware acceleration for a while now. BTRFS also defaults to using CRC32 as well though you can use a different option. ZFS uses Fletcher4 by default and SHA256 if you enable deduplication.
@@JeffGeerling apparently ZFS isn’t great at handling flash storage. EXT4 and F2FS are reported as having higher performance for arrayed flash storage like these. With a faster CPU and more PCIe lanes, a more optimized filesystem might also give you closer to spec performance out of those M.2 drives
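To get a rough feel for why the checksum algorithm matters on a slow CPU, here is a quick throughput sketch using Python's stdlib CRC32 and SHA-256. Fletcher4 (ZFS's default) has no stdlib implementation, so this only illustrates the general point that stronger hashes cost more CPU per byte; absolute numbers will vary by machine.

```python
import hashlib
import time
import zlib

buf = b"\x5a" * (16 * 1024 * 1024)  # 16 MiB of dummy data

def report_throughput(fn, data, label):
    """Time one pass over `data` and print the effective MB/s."""
    start = time.perf_counter()
    fn(data)
    secs = time.perf_counter() - start
    print(f"{label}: {len(data) / secs / 1e6:.0f} MB/s")

report_throughput(zlib.crc32, buf, "crc32")
report_throughput(lambda d: hashlib.sha256(d).digest(), buf, "sha256")
```

On most CPUs the hardware-accelerated CRC runs several times faster than SHA-256, which is why enabling dedup (and thus SHA-256 checksums) hurts so much on low-end chips.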
Boom! Ampere Altra at 128 lanes of PCIe Gen4! Nailed it this one. Something else that would be much better is the memory bandwidth which matters as well on all-flash NAS units.
Now I need to start campaigning for ASUSTOR to build their next tiny NAS with an AmpereOne 192-core CPU with PCIe 5.0...
I love watching these things, the $1300 is definitely outside of my price range but that is a fantastic little nas
And costs will likely just go down over time, nice to have something to look forward to :)
Wait a year and see.
$1300 is a pretty sizeable price premium for the size and low power usage. I've been looking at making a 4U box with 10GbE and a Ryzen 7 5700G for both NAS and Docker, and it's looking to be about $1300 for two 4TB drives, with a $75 expansion card to add four more. Sure, that's six bays instead of twelve, but it's also about 6x the CPU performance, a lot more PCIe lanes, a decent GPU for transcoding, and a dedicated NVMe slot for the OS. I even threw in 2x16 of RAM, and I think it can take 4x32 if I really wanted to
At my org we're already talking full solid state with U.3 drives for servers moving forward. The elephant in the room is we don't expect to still be buying spinning rust in 10 years, but we have a tendency to keep equipment in production for 6+ years. You might think "that's at least one more refresh" but sometimes you move at the speed of committee approval.
I know it wouldn't be very fast, but I'd love to see an actual pocket NAS that used a Pi Zero W so I could power it with batteries and push/pull files to/from it while it's in my pocket.
Wow great video comparing the nitty gritty details on these 2 NAS solutions!
So interesting and informative - Thanks for this Jeff!
I also really like how the purchased NAS solution is basically open hardware, not locked down, so you can install anything you want on it - that's the way it should be.
Yes, saw this NAS a few weeks ago and was so impressed I bought the 6-bay version, with 6 x Team Group 2TB drives. Got it yesterday; can't stop playing with it 😀
It's a neat unit!
Thank you for your support!
Awesome! I'm stoked to see NVMe storage prices really dropping. I picked up some 1TB WD SN850X drives for $55/ea on Prime Day and a 4TB version for $220. There have been some crazy deals on 'slower' drives, especially PCIe Gen 3 models.
We just need GPUs to finally reach some level of sanity again but that's about as likely right now as Samsung stopping their quest to be a crappier version of Apple.
FYI, the Intel N5105 will run 32GB (2x 16GB only) of RAM; I've got that installed in my QNAP TS-464 NAS, and plenty of people confirm it on Reddit. Now, I know Intel does specify a max of 16GB, and it does state that on its website; however, pre-late October 2022, when I was researching a new NAS, Intel's website did say max 32GB of RAM, which is why I went hunting on Reddit, because reviewers were saying 16 but I saw Intel say 32. I think it might be a "your mileage may vary" scenario, because even though Intel pre-October said 32GB, QNAP always stated a max of 16GB; I think Intel were initially hedging their bets, and QNAP were being conservative to ensure they could 100% support customers. I've had 32GB running for 5 months now without issues. But I agree the weakness of the N5105 is its PCIe lanes; QNAP only offer PCIe 3 x1 speeds to split up what they are trying to do with 4 SATA drives, 2 onboard NVMe slots, and an add-in PCIe slot for 10GbE or 10GbE + 2 NVMe cards. I came from a J1900, so even if I wish for a little more, the N5105 is a pretty capable CPU. I would say look out for Intel's Alder Lake N100, N200 and N350 CPUs - even faster and more power efficient. I've got an N100 in my new pfSense firewall mini PC.
I like the idea of the small SoC NAS, once we get a little more power I might deploy one at my mum's for a media server, they watch a lot of legally obtained films.
That "write speed cliff" you fell off is there for all NAND based flash storage - sometimes better and sometimes worse, but it is always there. Basically, when you write, you are really writing to pre-cleared blocks of flash. Pre-clearing is a LOT slower than writing to an already cleared block. The pre-clearing happens in the background using hidden blocks in your NAND flash device. If you do constant writes, you eventually run out of pre-cleared blocks, then you drop down to the speed of clear-a-block-then-write. If you leave the storage alone for 10 minutes, you'll get another burst of high write performance, then a drop back down to the slower write perf. All NAND based storage devices suffer this problem eventually, if your writes exceed the pre-clearing rate of the device. Enterprise drives normally just allocate more flash storage to hidden blocks, which are used either for faster write performance or to replace the inevitable failed blocks. For some more details, read "Over-Provisioning NAND-Based Intel SSDs for Better Endurance", which also talks about performance.
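Over-provisioning is just raw flash the host can't address, held back for pre-cleared and spare blocks. The percentage works out as below; the capacities are illustrative round numbers, not figures from the whitepaper.

```python
def op_percent(raw_gb, user_gb):
    """Over-provisioning expressed as a percentage of user-visible capacity."""
    return (raw_gb - user_gb) / user_gb * 100

# A "1TB" consumer drive: 1024 GiB of raw NAND sold as 1000 GB.
print(round(op_percent(1024, 1000), 1))  # 2.4
# An enterprise drive selling the same NAND as only 800 GB.
print(round(op_percent(1024, 800), 1))   # 28.0
```

That gap (a few percent vs. a quarter or more of the NAND held in reserve) is a big part of why enterprise drives sustain writes so much better per the same underlying flash.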
At 9:50, you implied that unlike ZFS, Btrfs doesn't support snapshots and synchronisation. However, Btrfs does support snapshots and commands "btrfs send" and "btrfs receive" can send and receive snapshots between two hosts over a network, similar to ZFS commands "zfs send" and "zfs receive".
he likes ZFS and he also likes to delete comments that don't agree with his ideology ;)
Btrfs does, I wasn't careful with my wording there, as ADM does support Btrfs (and I used it on the NAS we deployed at my Dad's radio station). But some of the Btrfs features are not as easy to use through ADM as they would be on plain Linux, and that was more what I was comparing (ADM vs TrueNAS in particular) here.
I like ZFS but I ain't a ZFS zealot.
And I never delete any comment on any video, except for anything with commercial spam (e.g. "Telegram me you won a prize") or explicit content.
@@patryk4815 I have had arguments with him several times. No comment was deleted, even when he didn't agree. Now David Murray (The 8-Bit Guy), on the other hand... he does. Maybe you are confused.
@@JeffGeerling I use both ZFS (mostly in TrueNAS Core, but I've also used it on Linux) and Btrfs, and both work well, but I tend to prefer the ZFS snapshot model and naming syntax. Btrfs treats snapshots as directories in the same file system, so it's easier to misplace them, whereas ZFS records snapshots in a separate namespace that you can list easily with the command "zfs list -t snapshot". However, on Linux, I tend to use Btrfs more often because it is available in the kernel and requires less memory than ZFS. Though I've used ZFS for almost a decade, I've yet to learn how to control the amount of memory that the various ZFS caches consume. I guess it's never been a priority since I mostly run it on my TrueNAS machine.
The Origami thing could even run (even if I don't need it) on my powerbank, which caps at 10.5 watts, probably all day long under good load. What are the chances? 😂 Great video Jeff, please keep going and stay healthy.
I don't see why that wouldn't work? A newer version I am working on should make that a reality.
I'd like to see them go one step further; a travel router with NAS capabilities using flash storage. For travel, or even for home use, one or two NVMe slots should provide plenty of storage. Great for travel or off grid use.
You might be able to use more than 16GB of memory on the N5105. Well, I think it depends on the motherboard. I installed 2 sticks of 16GB in my Topton N5105 router just this morning and it works just fine. I also have an N5095 board with 12 SATA ports. I installed 32GB of RAM on that and it works too!
Yeah, some people mentioned 32 GB works here. I know 16 does because that's the spec, and 64 doesn't because ServeTheHome tested that and it broke. So 32 might be the goldilocks if you want a lot of RAM.
@@JeffGeerling About the slow perf you saw on TrueNAS: I noticed you're using SCALE; did you try Core? I've had bad performance experiences with SCALE and good ones with Core on limited hardware (especially old CPUs and lower end NICs supported by Core). Might be worth a try...
If the Pocket NAS fan was squealing that badly, it's likely also damaged in shipping. Ball bearing fans are the highest quality, longest lasting industrial fans, but they're also really sensitive to shipping damage; I learned this the hard way.
It's a big reason why the PC building community considers them worse for noise than sleeve bearings, I suspect.
Ah, could be. Shipping seems to have taken its toll on this poor device :(
I thought the fan was damaged the first time I fired it up. Surprisingly, that noise is the PWM interacting with the fan...
Another thoroughly researched and excellently presented video by Jeff "The Man" Geerling.
Wow, those NASes look great... the Pocket NAS with its 10W consumption would be great to put in my RV and use as "offsite" storage 😉
That's about the perfect use case for such a little board!
good idea!
Hey Jeff, at 8:30 you should disconnect the 4-pin DC power at PJ1 from your Supermicro X10SDV board. It's not recommended because the board alternatively supports two power sources.
You can find this information in the PDF on page 26 (1-18):
Note 1: The X10SDV series motherboard alternatively supports 4-pin 12V DC input power at PJ1 for embedded applications. The 12V DC input is limited to 18A by design. It provides up to 216W power input to the motherboard. Please keep onboard power use within the power limits specified above. Over-current DC power use may cause damage to the motherboard!
Note 2: Do not use the 4-pin DC power at PJ1 when the 24-pin ATX Power at JPW1 is connected to the power supply. Do not plug in both PJ1 and JPW1 at the same time.
My guess for the "subpar" ZFS performance is a mix of it still making checksums for data and how it is distributing data to the vdevs that report back they are done and have committed the data and the PLX switching that is going on adding latency, maybe? May also have to do with updating the metadata, would be a neat experiment to use 2 of the SSDs as a metadata offload for the rest to see if that brings your closer to generic raid.
Wonder if you could put an optane mirror set in there…
@@peterbronez1188 I don't see why not, but if you are thinking of using the Optane as a ZIL then it is mostly moot for SMB as SMB is async IO unless you set the ZFS dataset to sync=always.
I agree with the checksum idea. I suspect if you turned them off you'd see that saturation occur pretty easily. Though I wouldn't recommend that as a long term solution; it's part of the point of ZFS.
Nice - these flash-based NAS units are exactly the kind of data storage device I want. HDD-based NAS units are still too noisy when running; the sound of reads and writes especially stresses me out. Although the per-GB price of SSDs is still slightly higher than HDDs, with the technical progress of Chinese manufacturers the trend of further SSD price drops is already obvious. If someone eventually ships a product that is light enough, quiet enough, and has sufficient I/O and processor performance, I'll buy it.
I would definitely consider the Pocket NAS for portable storage, especially off-grid.
If you're using it for video editing, because you're always transferring large files, you're going to be burning through the finite write endurance limits of those SSDs GUARANTEED.
The analogy that I like to use for NVMe SSDs is they're brake pads for super/hypercars.
Yes, you can go really, really fast in super/hypercars.
But you're also going to burn through the brake pads that much sooner as well.
The cost of quality NVMe SSDs has dropped by half in the last 14 months. Maybe others prefer the rock bottom pricing of spinning media, but the premium for NVMe SSDs isn't so premium anymore. Only capacity keeps spinning media in my NAS; if I could buy consumer-level 16TB SSDs, I probably would.
Sadly you have to cherry-pick them. Not all are the same. The best NAND flash is always corporate/server stuff.
And if only 8TB NVMe drives weren't $1000 each.
PCIe x16 to quad M.2 adapters are like $30, so that Ampere option is interesting. Or, if you can find any motherboard that supports bifurcation, you could make a full speed RAID array that isn't bottlenecked by bus or CPU. I've been looking at an EPYC board that would support tons of PCIe lanes - specifically the ASRock ROMED8T. You could put 26 full speed Gen 4 SSDs on that. Video editing is just barely too needy to run nicely on hard drives, and I don't know of any solid caching solutions. Instead, what's making sense to me is a RAID SSD "hot" pool to store an active video editing library, which then gets snapshots backed up to a "cold" hard disk pool.
I built my NAS using my old Ryzen 1700X with 8X 2TB Crucial MX500 SATA SSDs under Windows Server 2016 and Windows Storage Spaces. The processor, motherboard, and memory were leftovers from an upgrade, so it essentially cost me nothing, and the drives are now running under $100 each. (The 10Gb network cost a bit more.) Running with a mirror config and 8TB of usable space, I get about 800MB/s transfer rates, nearly saturating my 10Gb link.
(angry neckbeard noises for your choice of using WinServer2016 and Windows Storage Spaces)
Using what you have is always the cheapest option! (Though Windows Server 2016 is an interesting choice, it's more rare to see that used for a storage-only server).
@@JeffGeerling It's what I knew how to do. (Plus getting the license key from VIP-SCDKey.) It's not the best, but I tried other methods and couldn't get them right. Either they were too confusing to set up or I couldn't actually log into the share after getting it done to put my data in. It was too annoying, so after 2 months of dealing with it, I went with WSS. (The REAL WSS, not the dynamic partitions.) I also happen to have iSCSI targets on that drive set for my three Hyper-V hosts (self training lab) to back up to using the built in Windows Server Backup. Works great.
Thank you for sharing. The ideal specifications for my DIY NAS would include support for an ARM CPU, 10G Ethernet, and NVMe SSD. The prototype shown in the video is already very close to that.
I have created a NAS which gives the best of both price and performance. I used bcache. Storage pool: 3 x 10TB HDD and 3 x 500GB NVMe, plus 1 x 120GB SSD with Debian 11. Each 10TB HDD is coupled with a 500GB NVMe drive, using "writeback" mode. Once the 3 bcache devices were created, I put Btrfs over them. The best part is that bcache gives me read and write caching together, so in a nutshell I am getting almost NVMe performance on my SATA HDDs. I also enjoy Btrfs snapshots. To manage storage I am using Cockpit. This serves my purpose.
I use a Raspberry Pi 4, Sata 2TB SSD, OMV, and no raid. That works well for me because I just have a few movies and TV shows on it, and all of those files are copied from DVD or Blu Ray discs I own, so I'm not worried about data recovery. I do get about 100 MB/s, which is good enough for streaming.
Yeah, honestly a gigabit is enough for a lot of use cases, even 4K and more than one user, as long as the needs aren't too taxing.
The other problem with the diy one is maintenance on failing drives. You have to take them all out to get to the ones at the bottom of the stack.
All I'm waiting for is 32TB of NVMe to be at least somewhat affordable. I currently have a NAS with 4 8TB spinning disks. I'd like to get a second unit going so I can switch to solid state for the primary and keep the spinning disks as a backup.
"Who are these for?" I am in the planning stages of building a NAS for my camper. I want to be able to stream from it while going down the road so my wife can watch movies. Can't do that with a bunch of record players. SSD for me please, and with size and power consumption being an issue, I'll take the M.2 option. I am still really considering an SBC NAS of some sort. I really want to try to build one out of one of the N100 micro PCs.
That'd be about the perfect use case!
I've liked my TeamGroup SSDs as they are cheap, usually reliable, and have a solid warranty which I have used. What I don't like about them is most of their current lineup is DRAMless, but for my uses (homelabbing with RAID or ZFS, with spinning rust as my main NAS array) they work well.
DRAMless on NVMe is less bad than on SATA, as per the NVMe spec the SSD can borrow up to 64MB of system memory (RAM) to use (the Host Memory Buffer feature).
Luckily I found the ones I used here, which still have DRAM; I linked to the model on Amazon and can confirm it seems they do have DRAM cache.
Are these SSDs ok to use in a NAS? I guess with all the videos I see from other creators I thought I’d have to shell out more money for NAS specific drives. Is DRAM just what I have to look for?
@@jacobdavis6615 There is no such thing as "NAS specific drives", either SSD or HDD; it's mostly a marketing gimmick that WD started in an effort to squeeze more money out of people.
DRAMless SSDs are usually cheaper and have lower performance, but just as with hard drives, that's not terribly important for a NAS where you are bottlenecked to ~120MB/s by a gigabit connection (or 1-ish GB/s by a 10Gbit connection).
What you want to look at is the write endurance value.
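The line-rate arithmetic behind those bottleneck figures, for the curious (raw Ethernet ceilings; real SMB transfers land a bit below these after protocol overhead):

```python
def link_limit_mbs(gigabits_per_sec):
    """Theoretical payload ceiling of a network link in MB/s."""
    return gigabits_per_sec * 1000 / 8

print(link_limit_mbs(1))   # 125.0  -> the familiar ~120 MB/s gigabit wall
print(link_limit_mbs(10))  # 1250.0 -> the ~1-ish GB/s 10GbE ceiling
```

Even a budget DRAMless SSD sustains well above 125 MB/s, which is why the network, not the drive, is usually the limit on a gigabit NAS.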
@@jacobdavis6615 I've found them to be fine, but I'm mainly using them as either a read cache for HDDs or in an array that's at least mirrored. Would drives with DRAM be faster and maybe last longer? Probably, but capacity is more important than speed for me.
I should also add that I have killed a couple. I now tend to skip the 128GB ones as the price of the larger capacities has come down and ones with more capacity in theory are more reliable.
Looking at rebuilding my NAS at the moment. Have a spare RAID card, so I'm tempted to use a RockPro64, as it has a PCIe slot built in.
ASUSTOR's consumer electronics guys seem to be very pro-consumer; a breath of freedom with Apple, Samsung, Microsoft, John Deere, and Intel trying to end personal ownership.
Loading up the Pocket NAS or Flashstor with 4TB drives is where the NVMe density wins. I was considering loading up a Flashstor with 12x 4TB drives.
It's amazing how quickly 2 and 4 TB NVMe drives have fallen in price. I'm okay with just a few TB of usable space so I'm doing RAID 10 with 10 drives right now, plus 2 spares. But I could upgrade over time and double or quadruple the capacity, once 2/4/8 TB drives hit whatever price point I'm comfortable with.
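For the curious, the usable-capacity math on that layout (assuming a straight stripe of mirrors, with hot spares contributing nothing until a failure):

```python
def raid10_usable_tb(drives, drive_tb, spares=0):
    """Usable capacity of a RAID 10 array: half of the non-spare drives."""
    return (drives - spares) // 2 * drive_tb

# 12 x 2TB slots: 10 active drives in RAID 10 plus 2 spares.
print(raid10_usable_tb(12, 2, spares=2))  # 10 TB usable
```

Swapping in 4TB drives later would double that to 20TB without changing the layout.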
I got the 6-drive version to replace my Unraid server, to reduce power usage, noise, and physical space. Gonna be setting it up in its (probably) permanent location today and ensuring I copied everything over, but so far I'm pretty satisfied. Sure, it's not as customizable as the Unraid server, but I wasn't really using everything Unraid had to offer anyway.
Thank you for your support!
Woah, that Pocket NAS is crazy small. Would love to see ASUSTOR make a tiny Arm NAS, with like the Qualcomm 8cx Gen 3/4 or MediaTek's WoA chip once that's finally out.
For someone with a large movie and TV library, that's not a lot of space. I have 3 NAS drives totaling 20 TB of space (plus a duplicate of each for backups) to house my collection of TV shows and movies ripped from DVDs and BluRays to watch on various TVs around the house. I have a few more TV series to rip to disk, then I'll be looking to add a couple more drives.
Granted, I am an e-waste recycler, but I don't know if I would be confident enough to use Team Group drives in something that matters. I don't think I've had a single working one of their drives come my way, regardless of the capacity.
Now that 20TB drives are cheaper, it would be fun to just have a Pi Zero running a NAS with an IronWolf Pro.
Okay, Open Bios is a killer feature.
I wrote off pre-built NAS boxes for that exact reason, but, ugh, I may just consider this one then, because that's HUGE.
But is Team Group high enough quality for a NAS? Like, the 2TB M.2 has a 5-year or 5TB-written warranty. So you can only write the entire drive a little over twice before the warranty has expired? Wouldn't the drives be out of warranty in just a few months when in a ZFS pool, due to periodic scrubs?
OMV is actually pretty neat, I use it on a nas too.
Personally, I would use FreeNAS for the Pocket NAS. A much better solution overall.
SSD costs are coming down but at the same time hard drives just keep packing more and more storage per drive with WD now listing 26TB drives with 20 and 22TB drives being fairly common at this point.
Those high end drives are a little exotic and risky to deploy for a desktop scenario where you might only have 4-6 of them, but they do bring the cost per TB way down! I hope to see NVMe prices continue to fall. It's been pretty dramatic the past 3 years.
Isn’t PCIe gen3 ~1GB/s per lane? Hearing that 8 lanes is a limitation sounds silly; even 8 PCIe gen2 lanes would be plenty, no?
If you’re able to either view it in the datasheet or visually see where the PCIe lanes go, perhaps you can try that 3-drive RAID0 test again but make sure all 3 drives are behind separate PCIe switches.
Even just one PCIe gen3 lane per drive should get you ~1GB/s to that PCIe endpoint. In fact, it may be just as interesting to simply try a single drive and see if you still hit that 600MB/s.
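For reference, the per-lane figures in those comments check out once encoding overhead is included; a quick sketch:

```python
# Usable PCIe bandwidth per lane: Gen2 runs 5 GT/s with 8b/10b encoding,
# Gen3 runs 8 GT/s with the leaner 128b/130b encoding.
def lane_gbit_per_s(gen: int) -> float:
    if gen == 2:
        return 5 * 8 / 10        # ≈ 4.0 Gbit/s usable
    if gen == 3:
        return 8 * 128 / 130     # ≈ 7.88 Gbit/s usable
    raise ValueError("only Gen2/Gen3 modeled here")

def link_gbyte_per_s(gen: int, lanes: int) -> float:
    return lane_gbit_per_s(gen) * lanes / 8  # bits to bytes

print(round(link_gbyte_per_s(3, 1), 2))  # ~0.98 GB/s: roughly 1 GB/s per Gen3 lane
print(round(link_gbyte_per_s(2, 8), 2))  # 4.0 GB/s for 8 Gen2 lanes
```

So even a single Gen3 lane comfortably exceeds the ~600MB/s figure being debated.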
🫨🫨🫨
Looking at these, I noticed they had TeamGroup NVMe storage. TeamGroup are cheap, but they are not anywhere close to high performance. That said, they should last a while even in a NAS.
Spinning drives still have their place if a lot of read/write actions are going on. Spinning drives, despite being slower, will still saturate a 2.5Gbps connection and will tend to last longer. If your NAS is primarily reading data with little writing, then the NVMe could be an option.
They didn't come with TeamGroup, I just bought those because they were cheap enough for me to afford for this video, but also decent enough they could perform well in aggregate.
Pocket NAS with a case and battery, what a dream! With an Ethernet port, I would buy it.
Oooh, another attempt at Rock5! Nice! Might dust off mine, the software support seems to have improved.
That's a Pine64 board, isn't it?
@@tomspettigue8791 no, but it's easy to confuse. Pine64 makes RockPro64 board with RK3399 - same SoC as in Rock Pi4. Rock Pi4 and Rock5 are made by Radxa.
This was a VERY good dive into this subject ... and you actually got better results than I got with an AMD EPYC with 128 PCIe lanes ... in Dell's R7415 (for which Dell only provides 32 PCIe lanes across all 24 NVMe slots). I'm looking at getting an R7525 ... but it's sad just how much you have to spend for U.2 access to some PCIe lanes, just because of the games manufacturers play.
Yeah, I really wish U.2 were more available in the consumer space. To even get adapters for it can be a bit pricey. Would love consumer 2.5" drives to be around as drop-in replacements where SATA drives were used.
The Teamgroup MP34 drives are great on paper, but I've seen anecdotes about people (in the US) being instructed to send units that needed to be RMA'd to Taiwan, as opposed to a center in the US. I actually have one, but I hope I won't need to RMA it.
That's why I like Asus routers. Easy to flash open-wrt.
Considering how much RAM you installed, I'm surprised TrueNAS and ZFS saw so much of a performance hit even when striped. I bet it has something to do with those PCIe switches. I wonder if reads would be faster as a merged JBOD instead.
My thinking is that with the limited number of PCIe channels, the commands are getting queued rather than being performed in parallel.
Unlike many ARM vendors, though, Rockchip does do upstreaming of their kernel support so it will work in the future with just a vanilla upstream kernel.
(Also those SSDs are a lot cheaper than the Samsung I tend to use. Are they any good?)
so far so good, but I've only been running them for a couple weeks now. I'll definitely update on my blog if I find any issues!
The Pocket NAS looks quite promising! I'm interested in how it interfaces to the Rock 5 B - I assume through the M.2 slot underneath the Rock 5 B, but it also looks like it has an interface through the GPIO pins, or are those just for power?
GPIO for power, the SATA all goes through a custom set of plugs that goes into a standard 6-port M.2 SATA adapter card.
Hi Michael!
I still think using a Mac mini m1 with 10gig ethernet will probably be the best way to go for a high powered NAS.
A few Thunderbolt 2 PCIe adapters would be required to install all of the storage needed. Along with having a Thunderbolt 2 gigabit adapter. You can also add on a storage array of your choice and you still have two USB 3.0 ports
That explains why I couldn't get gigabit speeds off my Pi NAS. I thought it had something to do with my network.
Nope, it's the poor little CPU not keeping up :(
Did you count the PCIe lanes used by that add-on card? You need 2 for 10GbE, then 2 per NVMe (or more), but the card is only a 4-lane card.
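Tallying the lane budget that comment describes (the drive count here is a hypothetical for illustration, not taken from the datasheet):

```python
# Lane budget for a hypothetical x4 add-on card hosting 10GbE plus NVMe drives.
lanes_10gbe = 2       # lanes for the 10GbE controller, per the comment
lanes_per_nvme = 2    # lanes per NVMe drive, per the comment
nvme_drives = 3       # assumed drive count for illustration

lanes_wanted = lanes_10gbe + lanes_per_nvme * nvme_drives
lanes_available = 4   # the card's x4 uplink

print(lanes_wanted, lanes_available)  # 8 wanted vs 4 available
```

With demand exceeding the uplink, a PCIe switch has to oversubscribe the link, which would line up with the throughput ceiling seen in testing.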
You have the knack for reviewing hardware I'm interested in buying. The openness (is that even a word?) of the ASUSTOR has settled it; I'm getting one to replace my old Syno (1511+, so 12 years of 24/7 service. Can't complain, but it needs an upgrade).
Yeah; honestly I was quite happy with my little old-Xeon-NAS I built... but if ASUSTOR has open hardware, I like that I can have my NAS cake (hardware purpose built for storage) and eat it too (TrueNAS, or whatever OS I want).
Hey Jeff, great content!
At that low density, you wouldn't need the NAS in the first place if you weren't using a Macintosh. Two Crucial P3 4TB drives are $400, and most modern machines support some kind of PCIe bifurcation, so one $15 adapter later, you have local redundancy at a full 2GB/s.
I would very much like a video about using that ARM workstation as a NAS. That would be incredibly awesome.
You didn't, but I always find it weird when people working on electronics first state the temps in C, then convert the temps of the electronics or tools used from C to F in their video. Component temps, hot air station temps, and soldering temps are all going to be in C as standard, even in the US.
Great video. Always enjoy. Just want to say I would love to see you make your Windows ARM PC into a NAS server. Also, I eat up both NAS videos and anything to do with Linux on ARM or Windows on ARM. Just saying. Love your channel
Might have to do it then! Annoyingly, they don't do TrueNAS on Arm yet :P
I wonder how the pocket nas would hold up with an Intel N100 Board (like the one from Beelink EQ12). I plan on trying a 2.5GbE NAS like that to have the best of both worlds
That black box sounds really handy, if far more spendy than what I go for. I wonder if Intel's next N100 CPU would be enough to saturate that 10Gb bus.
Personally, I'd be down with owning a small, fanless server-y thing with enough SATA storage to saturate 2.5Gb/s, or even 'maxing out' 1Gb. Less focused on speed and more on consistent throughput.
I assume the squeaky noise is coming from the buck converter instead of the PWM controller directly.
Had the same issue on my 3D printer. There are different solutions out there for that specific problem.
Could be.
The 5105 will work with up to 64GB of RAM ;) makes a nice Proxmox box for home automation/network management etc. (ark.intel just says 16 because that's what they tested.)
The Asustor machine does not come with ECC RAM, which makes sense because the Intel Celeron N5105 does not support ECC RAM; therefore I would discourage the use of ZFS on it. Same goes for the Pocket NAS; sadly, almost no ARM-based boards with ECC support out there :(
1:48 Into the video and the first issue that I see with the small RockPi 5 NAS unit is heat dissipation.
Awesome context for these products. Two thoughts:
First: Why do you choose to edit over the network? It makes sense for a company like LMG with 2 dozen editors, writers, camera operators, and on camera talent all collaborating. But if it's just you and maybe an editing assistant, wouldn't it make more sense to keep files you're actively working with on your workstation and upload everything to the NAS when you're done?
Second: For people like Jeff who actively use their NAS, rather than just as bulk storage or a backup target, flash storage might not be as expensive compared to HDDs as it first appears. You should see significant power savings. And because flash has nearly unlimited read endurance, it should last longer.
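As a rough illustration of that power argument (every number below is an assumption, not a measurement):

```python
# Illustrative idle-power comparison for a 6-drive array.
# Assumptions: ~5 W idle per 3.5" HDD, ~0.5 W per NVMe SSD, $0.15/kWh electricity.
hdd_watts, ssd_watts = 5.0, 0.5
drives = 6
price_per_kwh = 0.15

def annual_cost_usd(watts_per_drive: float) -> float:
    kwh_per_year = watts_per_drive * drives * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

savings = annual_cost_usd(hdd_watts) - annual_cost_usd(ssd_watts)
print(round(savings, 2))  # rough annual savings in USD from going all-flash
```

A few tens of dollars a year won't close a large price-per-TB gap on its own, but over a 5-plus-year service life it narrows it noticeably.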
Good question! It's mostly because I edit sometimes at my main desk on my Mac Studio, other times on my laptop, but like to be able to dump and work from the NAS in either case, even if I have to upgrade or shut down one or the other.
I only really copy to the computer's internal storage (which is only a few hundred GB free...) if I'm going out on the road and need to edit.
Wow, you edit for Network Chuck? Thanks Jeff, you rock!
I never get bored of Jeff's videos; it's been a while since I saw red Jeff though
9:51 snapshots and data integrity are also features of btrfs
I predict it will be less than 5-10 years!
We can dream! lol
3:52 I have noticed this with high end NVMes that use QLC NAND.
I have a Sabrent Rocket Q4 2TB gen4 NVMe
When copying ~300GB of photos from a day at the airshow at Scott AFB down by St. Louis, MO, the Rocket Q4 got about 70GB into the transfer from my SD card, then slowed to just 40MB/s. I tried restarting the transfer several times and just ended up sending it to a 16TB HDD, which ran at nearly the full 300MB/s my SD card supported for the entire transfer, only sometimes slowing to 250MB/s but generally around 280-290MB/s.
The 45MB/s likely means the SLC cache ran out on the parity drive (or all 3 drives) and the drive had to slow to the maximum sustained speed of QLC (assuming these are QLC-based, which at 1TB for $40 suggests is the case).
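A toy model of that falloff, using the ~70GB fast portion and the post-cache floor from the comments above (the cached write speed is an assumption):

```python
# Toy model of SLC-cache exhaustion on a QLC drive: writes run fast until the
# pseudo-SLC cache fills, then drop to the native QLC speed for the remainder.
cache_gb = 70        # observed fast portion of the transfer
fast_mb_s = 900.0    # assumed cached write speed
slow_mb_s = 40.0     # observed post-cache speed
transfer_gb = 300    # total transfer size

fast_time_s = cache_gb * 1024 / fast_mb_s
slow_time_s = (transfer_gb - cache_gb) * 1024 / slow_mb_s
avg_mb_s = transfer_gb * 1024 / (fast_time_s + slow_time_s)
print(round(avg_mb_s))  # the effective average collapses toward the slow rate
```

The takeaway: on transfers much larger than the cache, the advertised burst speed barely matters; the sustained post-cache rate dominates.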
These cheap SSDs (both NVMe and SATA) are only usable up to about 20~30% capacity. Beyond that point, no matter how hard you try, write speed becomes a joke. A decent SATA drive should sustain around 300MB/s (for NVMe, >800MB/s); meanwhile, those under $30/TB ALWAYS drop to 30MB/s or so after 20~30% capacity.
These manufacturers definitely know what they are doing: if a user can notice their SSDs becoming too slow to write after such light use, the manufacturers must have done extensive testing to make sure this is how their cheap-end products perform, so that their high-end products can sell.
There are ways to fix the slow write performance in openmediavault by simply adding some arguments to the extra options box under SMB/CIFS. I noticed a substantial improvement in write speeds over a gigabit connection on my Xeon E3-powered NAS.
What kind of arguments and where could one find info on it?
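For anyone else wondering: the original comment doesn't name the options, but commonly cited smb.conf tuning parameters for this look something like the following (values are illustrative assumptions, not the original poster's settings):

```ini
; Hypothetical "Extra options" for OMV's SMB/CIFS service, not a verified recipe
use sendfile = yes
aio read size = 16384
aio write size = 16384
socket options = TCP_NODELAY IPTOS_LOWDELAY
```

As always, benchmark before and after; some of these help one workload and hurt another, and Samba's defaults have gotten better in recent releases.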
@1:40 Oh man, that could probably use some heatsinks between the M.2 drives. I'm imagining even something as simple as two copper plates with some metal spacers between them and thermal adhesive on both sides. Then have a squirrel cage fan or something blow air from the side, which would force air through those copper plates, cooling them.
At first I thought you were utilizing a PS4 for a NAS. Thanks for your video.
Great review, thank you. My only concern with all solid state storage is the finite write life it has compared with spinning disks.
It's really dependent on what you buy; different drives have different write expectancies. Many flash drives could be written to for years with normal consumer write patterns and not have an issue.
And hard drives often fail prematurely, as well (just check Backblaze's reliability reports!), so in the end, the best protection is a 3-2-1 backup plan, and choosing drives wisely.
@digitalpilotnm6039 modern flash drives are designed with an expected lifetime of 10 years. Most modern hard drives are designed with an expected lifetime of 3-5 years. Most flash drives have much, much higher MTBF - up to 10x higher than disk drives. If you're genuinely concerned about lifetime then you definitely need to switch to solid state.
I ordered the 6-bay Flashstor a couple days ago, should get it early next week. Ordered 16 GB of RAM for it as well. Will take a while to fill it up with NVMe drives; I'll probably order 1 or 2 4 TB ones each pay period until I fill it up.
Wish I knew about 32 GB working in this, that will be a future upgrade if I run some VMs on it.
Thank you for your support! We can't guarantee that 32 GB will work or will not cause issues because Intel specifies a maximum of 16 GB.
TrueNAS CORE or TrueNAS SCALE? You should mention that in the video.
I'd recommend making a NAS with a Pentium or Core i3 level of hardware so it's not bottlenecked by a bunch of switches.
Need to figure out why the Rock5 was stalling out when copying that 300+GB of files. Any ideas yet?