They told you they've dropped the price only to get the mention in the video. Checking the price now it's $799.
Dunno just checked Amazon and there is still a $100 coupon on both models today.
@@ServeTheHomeVideo @rael_gc Just checked $799 + checkbox for $100 discount.
Not available outside the US. The UK Amazon only has it at £729 and it's rated at one star
It's over a grand today 😂
Hahaha, everyone on YouTube saying it's cheap and cheaper than building your own NAS PC at home, such bull krap
You mentioned the low noise floor; I think what a lot of folks are forgetting is just how loud hard drives are. My Toshiba MG08 helium drives are so loud you couldn't sleep in the same room when they're active lol
Exactly. That is why in the studio we have one old HDD archive tucked away in an equipment closet but everything else is SSD based.
Coward, I ran a spinning-rust 1U DL360p in my room for 3 years. I promise those fans were 10x louder than the spinning rust.
I have 0 to 30 hard drives spinning in my bedroom rack at any given time and barely notice them. I have a few Seagate Exos drives and a bunch of different WD drives (Purple, Red Pro, and shucked drives that are basically their Blue lineup), as well as a handful of Seagate and a few Toshiba drives. My 12U rack plus my desktop is about 1/4 the volume of a window-unit AC; it's not bad at all. Sure, I hear the drives doing their thing, but it's never loud. When I do something with the 2060 video card in my main server, I hear that fan spin up a whole lot louder than anything else in my setup. That being said, I have a whole different reason for wanting a bunch of SSDs as caches: the less I have the drives spun up, the safer they are. Between accidental bumps into the rack and my dogs jumping around, I worry about damage to the hard drives. What are some of the noisier models? I'd prefer to avoid those.
Uh? Mine are mostly inaudible over the other stuff. If they are actively reading/writing I hear them, but that's more because of the odd sound they make, and only if I'm not playing a video or something. Mine are MG08ACA16TE but, as I said, not loud - to me anyway. I should add the server is literally next to me, less than half a meter away heh; I'm going to relocate it but haven't yet.
Oh also, I do sleep in the same room with them, but my bed is roughly 4m away from the MG08s. I usually have some youtube video playing when I go to sleep, but not always, and the drives never bother me. I should add the same machine has 3x 15k RPM SAS drives (al14sxb90en, 900GB) which make more noise; they have a slight audible whine just running. But I only hear them if I have no video or such playing and I'm right next to them.
Any of y'all ever have a Bigfoot drive? 😂
Hi Patrick - great video, it helped me decide to get this F8 SSD Plus. You mentioned the slow RAID sync times; there's a setting in the TOS 6.0 interface, under Storage Pool settings (gear icon, top right), where you can set a custom speed. I set the minimum to 512MB/sec and the max to 800MB/sec, and after logging in via SSH and running watch "cat /proc/mdstat" I was able to see it running at 800MB/sec!
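For anyone who prefers the command line, here is a minimal sketch of the same thing over SSH; it assumes TOS uses the standard Linux md sync throttles (values are in KB/s), which the mdstat output suggests but I have not confirmed:

```bash
# Assumed equivalents of the TOS custom-speed setting (values are KB/s):
echo 524288 | sudo tee /proc/sys/dev/raid/speed_limit_min   # 512 MB/s floor
echo 819200 | sudo tee /proc/sys/dev/raid/speed_limit_max   # 800 MB/s ceiling

# Watch the rebuild progress and the current sync speed:
watch -n 2 "cat /proc/mdstat"
```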
RAID 5-type functionality is likely something SSDs struggle with, but unless the lanes for these are gimped, 800 MB/s should be child's play for an SSD-focused system. Way back when the world had SCSI arrays with drives spinning at 10-15k RPM, 380-512 MB/s was doable. The interfaces themselves are really the limiting factor for storage now, not the medium itself.
I've worked in NAS QA before, so I have some thoughts on the structural design. First, for a NAS at this price, the SSD heatsink retention mechanism uses a material that is certain to age and break, which is very strange for a product at this price; ASUS products at the same price let you install SSDs quickly. Second, the SSD layout: the SSD heatsinks add weight and raise the device's center of gravity, so why aren't the SSDs and heatsinks staggered on both sides of the motherboard near the cooling fan? Third, considering SSD cooling and installation, I think it would be better to put all eight SSDs on the same side with one large heatsink and a fan blowing directly over them, with the CPU, RAM, and I/O ports on the other side. The downside is that the chassis would get longer, so perhaps a horizontal chassis design would work better. But judging from the current mechanical design, the only difference between a NAS at this price and a cheap Chinese mini PC is the 10Gb NIC, right? This comment was translated from Chinese to English with the help of Google Translate.
1) NO ECC
2) not a 10gig NIC but in fact 8gbit
3) low per-NVMe-SSD lane allocation
4) high price
= DEFINITELY NO BUY
> 2) not 10gig NIC but in fact 8gbit
Their testing shows it pulling a fairly constant 8.9 Gbit/second over a long period. Considering the overheads in the protocols, that's about what you can expect from a 10 Gbit/s network.
> 3) low per nvme ssd lane allocation
Since the device itself is network-limited to 10 Gbit, it only takes 2 PCIe 3.0 lanes (roughly 8 Gbit/s each) to completely saturate that. I'm not sure what your point is here, because a 10 Gbit NAS cannot be expected to handle more data than that, and a 2-lane PCIe 3.0 NVMe SSD would be more than enough to saturate it. Even a single, incredibly cheap SSD will be able to saturate the network.
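If you want to sanity-check the arithmetic yourself, here is a rough back-of-the-envelope calculation (nominal figures, not measurements from this unit):

```bash
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding:
awk 'BEGIN { lane = 8 * 128 / 130;    # ~7.88 Gbit/s usable per lane
             printf "x1 ~%.1f Gbit/s, x2 ~%.1f Gbit/s\n", lane, 2 * lane }'
# 10GbE carries roughly 9.4 Gbit/s of payload after Ethernet/IP/TCP framing,
# so one lane falls just short of line rate while two lanes clear it easily.
```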
8:20 Rubber bands are usually a bad way to keep a heatsink in place; real silicone rubber is expensive, and in most cases they use cheaper stuff that will dry out and break in a relatively short time (years). You are nearly always better off using zip ties. They are cheap and will last forever.
Yes, especially notable living in Arizona since it is dry here.
They should have a metal arm across those SSDs to help keep the heatsink in place instead of those rubber bands.
@@Darkk6969 yeah, shipping the AliExpress-special SSD heatsink kits is a bit lousy
Been subscribed for a while and ended up joining to support the channel. Appreciate your in depth reviews/comparisons of the products you present!
Wow! Thank you so much!
the terramaster OS "... is OK..." you are too polite .... ahahahah
So, we talked about this as a team. If you just want something to get storage up and running and maybe use a pre-packaged app or two, then it is fine. If you want this to be more of a full home server running lots of stuff, then maybe not. I think for photographers, videographers, and so forth, TOS 6 just getting storage online is perfectly OK. For power users, it feels less useful than others. Power users also make up a disproportionately large part of our audience, but a smaller relative portion of the overall market.
@@ServeTheHomeVideo is there any chance to install TrueNAS or Unraid on this to replace TOS?
@@ServeTheHomeVideo I think the gold standard should be shipping with an OS, but allowing you to install your own Linux OS.
@@ferdievanschalkwyk1669 Totally
@@waveformer2592 my TerraMaster NAS is running Unraid at the moment; they don't restrict you or bother you about putting anything on it.
#notlikeugreen😂
You'd think by now Intel or AMD (or Arm) would have a CPU with a LOT more PCIe lanes for these sorts of small M.2 NAS devices, to reduce over-subscription and the need for PCIe switches. Particularly when doing RAID, I can imagine the switches are saturating buffers, the same as an Ethernet network would if you had a bunch of line-rate devices all communicating over a single shared uplink.
I'd be curious if any of these PCIe switches have buffer drop statistics to see what performance is being left behind, again as you would an ethernet network.
Well I mean the C3758R is that for the 10GbE era. People just hear Atom and freak out, but realistically for these applications the CPU performance needs are very low.
It's not that those CPUs can't have more PCIe lanes; it's more that if they did, they would threaten the sales of higher-end, more expensive CPUs, a.k.a. less profit. Bifurcation is also a pain.
AMD EPYC has 128 lanes. Enough?
Then they'd be cannibalising their enterprise offerings - we get the crap because they don't want to give consumers better hardware.
Does PCIe 'support' packet loss?
Thanks For The Great Video... Thanks Server The Home Channel 😊😊😊
Those elastics also break and fall apart so quickly in the data center. To the point we never even put them on.
zip ties FTW
13:14 -- SMB Multichannel with one nic? (10:11)
Great video 👍
Kindest regards, neighbours and friends.
I don't understand why manufacturers go cheap on the power connector. They should use a bayonet connector like the Switchcraft 761KS15. Just a few mm larger in diameter, but able to screw-lock onto the case.
Put the thermal pad on the bottom of the heat sink first. It is much easier.
is it possible to roll your own OS on this? would be awesome with TrueNAS Scale on it..
You can run TNS on it.
Main problem there is lack of ECC RAM and low memory. ZFS likes ECC and plenty of RAM.
@@AxMi-24🙄 ECC is overhyped, doesn’t matter*
@@DanielFaust not really lol
ECC is huge overkill for most consumer and prosumer applications tbh. It's really only necessary in fields like banking and such.
The N305 seems a bit underpowered with regard to PCIe bandwidth. Still a good device.
Yes, it has less I/O than the C3000 series for example.
Looks awesome... but would a second backup network connection at 1 or 2.5GbE not have been a nice addition?
So there is no eMMC or anything else to hold the OS - it mirrors the OS across all user drives the way Synology systems do? And another big issue/mystery... using PCIe Gen3 x1 for 10GbE would not let you saturate the 10G link in one direction - you never fit 10G into 8G. So is this addressed by the ASMedia chip or not (e.g. an x2 uplink to the CPU, x2 to the NIC, and x1 for the last unlucky SSD)? A verbose lspci (-vvv and -tvnn) would be interesting to see how the PCIe topology is actually laid out.
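For reference, something like the following would answer that over SSH, assuming the lspci tool from pciutils is present on the box (an assumption; I have not checked the TOS image):

```bash
# Tree view of the topology: shows which devices hang off the ASMedia switch
lspci -tvnn

# Negotiated link width/speed per device, to spot any x1 or downgraded links
sudo lspci -vvv | grep -E "^[0-9a-f]{2}:|LnkCap:|LnkSta:"
```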
but does it run crysis? .....
ohh wait im in the wrong decade .....
but can you install truenas on it, or even proxmox?
I know some people will think it's not a good NAS, but for the size and noise you won't find anything like it. A great compact system which you could just pop into a bag and take with you, or plug in when you're at your desk. It's probably the best mobile NAS on the market. In terms of speed, most people won't max this out; everyday users are mostly just getting 2.5GbE now. Like Patrick said though, it's a shame they didn't come up with a better solution for the NVMe drives.
Spot on
Hi, does the 10Gbe port support 2.5Gbe (the limit of my router)?
@@dolomiti850 Not tried, but it does negotiate down to 1GbE, or buy a cheap switch (about £35 at the moment). To be honest, if you only have 2.5GbE you could get something else that will be cheaper. Remember, if you're on 2.5GbE then your read and write speeds will also be capped there, which normal hard drives can handle, so save some money.
I really want to see someone take some older U.2 server drives and build a NAS with a desktop CPU and real performance. I feel like the cost doesn't match the performance on all these small M.2 NASes.
Yes that is what I want.
It will be fast. But then the audience will start complaining about power consumption.
8:00 Hey Patrick, use a soft-material table top when you handle PCBs. There is a chance the ceramic capacitors can crack from impact, however small the impact might be. Might just be bad luck too. So please use, for example, an ESD-safe rubber mat or just foam.
I'm planning on waiting for the next gen ASustor
I keep asking about this and still no word when it will arrive.
nice, finally some competition to asustor's flashtor
Exactly
I’m not the target market but if you need a super small box this looks great. The 1x pcie lane per drive is tough on an $800 device for me but for the right person, this thing is better than the other options.
Hi Patrick, what would be the difference between a Marvell AQC chip and an Intel one? Latency? CPU usage?
Thanks for yet another good review. What is the alternative if you need an i5 or similar and two NICs?
9:20 ONE PCIe lane (Gen 3) for a 10Gb/s Ethernet port? Nope.
A single PCIe 3.0 lane only carries about 8Gb/s, so you'll NEVER saturate that 10Gb/s Ethernet port.
Yeah, this is going to be a problem for everything in that system. That 10Gb USB-C port? Nope. Those two 10Gb USB-A ports? Nope. And a single Gen3 lane per SSD? They could've at least gone with an AMD Ryzen 3 PRO 7335U - this has a configurable TDP which can go as low as 15W (so matching the N305), but provides 16 usable Gen4 lanes. Bonus point would be ECC support.
Hey Patrick possible to do a review of the iKoolcore R2 Max?
Soft Router with 10Gbe rj45.
Lots of the soft router out there with 10Gbe only comes with SFP.
My ISP modem (ONT) is using RJ45. So this helps.
The initial security email can be skipped during setup. It's the blue text under "Send Code". It could be worded better, however.
No way at $699. I'm waiting for Minisforum to do another Intel board with 6 M.2 slots. They supposedly have a breakout board for their current one being worked on, so if that comes out I'd go with that. Yeah, it'll cost a little more, I'd have to get a mini-ITX case, and it'll be a bit bigger, but it'll have way more power than this.
What's the current minisforum model?
You're talking about the MS-01 I guess. It's interesting, including their prototype board. But the issue is Minisforum as a company, they don't seem reliable (QC, after-sale).
Is the OS fixed or can you replace the OS with TrueNAS?
3:20, it should list an amperage beside the port as well.
I bet a cattle banding tool would make installing those rubber bands pretty easy.
NAS boxes that are properly M.2 SSD focused, despite magnetics being almost entirely in the rear-view mirror for the majority of SMB and corporate usage, are still rarer than unicorn farts.
No matter what hardware you use for the NAS, the storage is going to be the most expensive part. I wonder how this system compares to something like an old ThinkServer TS150 kitted out with 32GB of ECC memory, four 4-8TB SATA SSDs, and a 10Gb NIC running TrueNAS or Unraid. Granted, the TerraMaster is more portable, but if that wasn't a factor, would enterprise servers be a better option?
Sure you can use something bigger and higher power
The limitation in these systems is always the Alder Lake-N CPUs' limited number of lanes and the fact that it's PCIe 3.0. Hopefully Intel will learn from that and hugely bump up the PCIe version, and maybe add either additional lanes or more built into the SoC (like 10Gb networking) in the next version (and base it on Skymont).
I would love to have a few of those for my PROXMOX cluster!
Are they locked to the OS, or can I install my own?
Make the box out of solid aluminium so it works like a heatsink, then make it fanless. Zero noise. Ideal for your living room home server.
This is close enough, while also being light enough for transport.
@@ServeTheHomeVideo this box gets put in place and stays there for years. I don't care about weight; moreover, I am the man, I can handle transporting a 4 kilo box, lol.
I would love an SFP28 port on this.
Wish list:
Standard 1U form factor, with front to back air cooling
eSATA port
USB4 ports
ECC memory
Disk only chassis for storage expansion
Support for ZFS (optional: TrueNAS/FreeNAS).
And it will remind my old days of EMC storages 😂
eSATA? Stuff still uses eSATA? I have never seen a device ever with eSATA.
The time that you spend on init is not because of TRAID but because of EXT4. Wrong choice when the disks are more than 500GB.
Nice, but why 10Gbase-T? An SFP+ port would make so much more sense and would be so much more power efficient. I use a MikroTik 10GbE switch that has SFP+, and I mostly use direct attach cables for my home lab. Twisted pair is just not that good for 10GbE.
You can actually lose performance converting to twisted-pair Ethernet. Additionally, you then need a switch with an SFP+ port to do all the conversion. I plug my NAS into a switch. I tested the 10GbE on my TerraMaster NAS and it seems to do the trick. Supporting SFP+ would just need more circuitry. I guess it's just more of a preference thing. I plug it in and it's done.
@@mattharris6958 SFP+ has a larger footprint on the PCB but doesn't need more circuitry. All of the necessary circuitry is built into the SFP+ module, which makes it easier. And you would have the choice of using it with fibre, twisted pair, or DAC. I actually have no 10GbE gear that doesn't have SFP+, so I would need conversion on the switch side to get twisted pair. It would allow the user more flexible choices.
@@ThorstenDrews Yeah i do see where you are coming from and am "Pro-Choice"
Soon 16TB M.2 NVMe drives in the 2280 form factor should launch, but the prices are still crazy high to switch to SSD if you are not using it to make money... Heavy-duty U.2/U.3 drives are available up to the 32/64TB range, but they cost as much as a used car. I hope China goes into flash NAND production on a big scale to attack the market and we get some good-enough SSDs that beat HDDs on price soon. I would like to have a box like the one above with 100TB in some RAID for safety, and still some free slots for a later upgrade.
We have been buying 15.36TB U.2 drives for around $1000 each this week
@@ServeTheHomeVideo well, that is a super price advantage in the US, but in Europe the Seagate Nytro 5050 in that capacity is the cheapest option at 1565 EUR / 1720 USD (including VAT, or as you call it, sales tax, which an end user cannot dodge); most drives of that capacity are 1700 EUR / 1868 USD and well above.
It's very cute, but personally I would still prefer to build a system with a full i3 CPU just to get more PCIe lanes and networking. After all, what is the point of putting in all these fast SSDs if they are bottlenecked by the limited connectivity!
Damn would be perfect with ECC ram support.
Always one more feature would be perfect :-)
Ikr
Sadly Intel doesn't provide ECC support without breaking the bank.
it's DDR5, so it's at least on-die ECC, which is already a significant improvement over the "nothing" of DDR4
@@marcogenovesi8570 didn't know that feature of DDR5, thanks
what would be the recommendation of NAS OS to use to run primarily for storage and a container for a Home Assistant server?
Can something like this be wiped and have TrueNAS or some other server installed on it?
How does it work with bare metal Proxmox, TrueNAS, and CasaOS?
Any small boxes like these out there that support ECC RAM and let you install your own OS? I would like three for a portable homelab Proxmox + Ceph cluster.
Wait - so is Terramaster a Coolermaster subsidiary or something?
Can you run Truenas from the USB header?
I take it from the review harping on about the included OS that it has no means to boot anything else? What happens if you swap out the little internal USB flash for something else?
What I'd like to see is an actual i3 in an 'ultra' model with an i3-1215U or Core Ultra 3 105UL limited to 12W. Sure, we'd be losing 4 E-cores in a trade for 2 P-cores, but you get the extra PCIe lanes and PCIe speed (20 lanes of 4.0 compared to 9 lanes of 3.0 on the N305). Then with dual-channel memory you could probably get up to 256GB of RAM once 128GB modules are released (or 512GB if it had 4 slots). Now, there's probably a total of one person on earth that could use more than 32GB on a CPU that only has 6 cores and 8 threads, but imagine if they took that parent die of 2P+8E and swapped the 2 P-cores for 8 more E-cores, but kept the PCIe and the dual-channel memory controller.
My main reason for wanting the non-Atom-based i3 is the PCIe. Not only can each SSD now have 4x the bandwidth, you could in theory get a 100G NIC in there, though I'd be more than happy with a 40G QSFP+ that supports breakout into 4x10G (I know it's supposed to be part of the standard, but I've come across cards that don't do this).
Also, the much more powerful GPU and the option to expand to more RAM in the future if desired is a plus.
Can someone explain to me how this isn't super overpriced?
$499 for the lower CPU model and getting a low power silent 10GbE NAS that can house 8 SSDs and is pre-installed as an appliance. $699 for more memory and CPU. Not too bad considering the 5-bay QNAP was $1300 (albeit it is a better system)
@@ServeTheHomeVideo I dunno, I'm just not sold on it.
3:04 pardon my ignorance but what other uses does an HDMI port have other than A/V transmission?
Can their fans be replaced?
If power and size are not a problem, maybe get an MSI X670E Tomahawk, a bifurcation quad-NVMe carrier, an AMD 7600, and a 10G network card.
That board doesn't share bandwidth between the M.2 and PCIe slots, so you can probably have 7-8 high-speed NVMe drives from bifurcation plus the onboard M.2 slots, with a 10G network card and still a PCIe slot left; you could also slap in a processor with more cores, or add more memory.
Maybe a few hundred hours for software and security setup, and obviously a higher power bill 😂
Personally I use both a turnkey NAS solution and a DIY storage pod; both have their potential.
Could this little one pull double duty as a plex server and a couch websurfing pc?
I am fairly sure folks have these running as Plex servers, albeit not the most powerful ones ever. Maybe search for that specifically. On the web surfing PC, there is probably a way, but I would personally just get a cheap N100/N305 box for that.
@@ServeTheHomeVideo
I used to use a Pentium J5005 for it.
Intel's Quick Sync does well with the Plex part.
But it wasn't too smooth running Windows for web surfing and the basics.
I have an old NUC8 i3 that does fine with the basic Windows tasks I need, but obviously has trouble with multiple drives, including my primary 3.5" HDD.
But if this one were good enough for Windows, plus could handle that many SSDs, I could slowly start buying SSDs and transition to a little all-in-one.
IF...
I'm looking for something just like this, but with the Ryzen 7945hx or similar
That would end up being much larger just due to the cooler you would need for the 7945HX.
Was looking at one of these on amazon very recently. But didn't see any useful reviews. Looking forward to your review.
You will probably notice between when we recorded this and when it went live, they added $100 coupons that I think really helped the pricing discussion.
I would love to see a comparison of how well these kinds of devices work if you were to run them with TrueNAS or such instead of the included OS. Also, I think this might be an amazing streaming PC/NAS/router combination machine with its extra CPU performance.
At $700 USD you can just build a much more powerful Ryzen 7600 ITX box. I'm using an ASRock DeskMeet X600; it was $200 + CPU + RAM + NIC = $600 total. For fun I ran a 100GbE NIC in it and was getting 40Gbit.
You can make something that is larger, higher power, and/or has fewer drives connected for less. This was addressed in the video.
@@ServeTheHomeVideo Yes, just comparing. I'd love the TerraMaster if it were cheaper and had 10GbE SFP+, or were the same price and had IPMI; it would then be a nice NAS for on-site backup.
Can you wipe their OS and use TrueNAS instead?
Well, it's a kinda cool little SSD NAS, but as a home user it's a little overpriced to spend that kind of money on 8x 4TB SSDs when you are never going to utilize the full data rate they can deliver over a 10GbE port. I don't have the full math in my head, but wouldn't a spinning-disk 4-5 bay NAS, with disks up to 6x as big and able to do 5-10GbE of aggregated network bandwidth, be more in line on cost per TB versus a 10Gb max connection to your network switch? Most people at home are still using 1GbE networking, even though 2.5GbE has been around for some years now. I am just about to buy a cheap 8+1 or 8+2 port 2.5Gb managed switch with at least one 10GbE uplink to the NAS, and only lately do I have more than one PC with a 2.5GbE port. I still only have roughly a 100/20 Mbit internet link, and media playback is still limited to 1GbE, as Apple hasn't released an Apple TV with more than WiFi and a 1GbE connection yet. Some of our gadgets still lag behind in supporting more than 1GbE, and I'm one of those who really doesn't trust a 5G internet connection over my trusty 100/20 DSL connection. I've heard enough complaints that it just doesn't cut it for online gaming. I almost went to try one, but a friend told me it still doesn't have low enough ping to be trusted for a stable gaming session.
Remember when PCs weren't absurdly expensive? I blame the DayZ Arma mod; to my best recollection that is around when kids and people all started wanting PCs for gaming. It's also around then that 'gaming' devices began coming out: gaming RAM, gaming routers, gaming PC cases 🙄 It's also around then that colorful ridiculousness began appearing, RGB everything, more ugh. Before that, the most colorful thing in a PC was if you bought a Sparkle group PSU, which had that purple braided cabling that no one else was doing.
Does the system support ECC RAM?
Is there an AliExpress N305 box with 10GbE and 4 or 8 M.2 slots? Because last I checked I didn't see any, and that's certainly worth a markup if there aren't actually alternatives.
Actually, 2x SFP28 would be perfect, and 64 or 96GB of RAM. Why? That's about the bandwidth of two full PCIe 3.0 SSDs, and a Ceph OSD needs about 1GB of RAM for every TB of storage on the OSD. The same RAM-to-disk ratio exists for ZFS.
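As a rough worked example of that rule of thumb (a heuristic only, not an official Ceph or ZFS requirement):

```bash
# ~1 GB RAM per TB of OSD capacity; eight 4 TB drives in this box:
DRIVES=8; TB_EACH=4
echo "raw: $((DRIVES * TB_EACH)) TB -> ~$((DRIVES * TB_EACH)) GB RAM for OSDs alone"
# That already exceeds what a small box typically ships with, hence the wish for 64-96 GB.
```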
Er, can you elaborate on how the USB ports can be used? Can we somehow use them as a DAS interface to the NAS? I ask this since this hardware config would make a great DAS addition to a powerful workstation for a user who just wants large, dedicated additional storage and prefers not to attach it as a LAN share.
Do you know of something like this but with QSFP+, support for RDMA, and ECC?
Can you change fans?
Ceph Storage Node?
Its primary life is mobile video editing.
Yea. I mean if I were doing snowboarding, mountain biking, and so forth videos, this would be sitting in the vanlife command center.
This is the second video about it I've seen in a few days; they are pushing this a lot.
It was just released so that is probably why.
Is there any way you could turn this into a functional wifi router?
Probably, but it would be cheaper just to get one designed to be a WiFi router
@@ServeTheHomeVideo Is there anything that has 10Gb Ethernet? All I found is either 1Gb or 2.5Gb. I have a switch that has 10Gb ports.
ZFS ftw
Is the BIOS able to boot Linux?
Still no rack mount 😞
Can this run other OSes?
Hurry up and get that stuff in storage, more rain and storms are coming.
To me it doesn't make sense to spend the big boy bucks on SSDs then choke them to death in x1 with a Fisher Price NAS? We need something like Socket SP6 in ITX form factor!
Sure, but this is much lower power. Remember, even 8x x1 SSDs are faster than 10GbE networking by a lot.
Any AliExpress alternative? 😁
No moving parts except the fans!
Exactly
Would you consider replacing the current fans with Noctua?
Why didn't you just put TrueNAS SCALE on it? You didn't even talk about that. The other two YouTube reviews of this that came out before yours at least talked about it, and one of them actually tried it. He couldn't get SCALE to boot, but did get CORE to work.
10GbE is the bottleneck. Even a PCIe x1 link per SSD is sufficient.
Yes
599 or 799 diskless is wild
Those rubber bands are going to fail over time anyway... dry rot and crumble in a year or two or three. Better off using zip ties.
Yes.
PCI-E 3.0x1.. and I'm out.
For the love of god, if you have RAID on an SSD array with mdadm, please use --assume-clean after TRIMing all the LBAs (blkdiscard) or an NVMe format command. Initialization by writing zeros will make your WAF and performance abysmal.
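A minimal sketch of that flow; the device names and RAID level are placeholders, it destroys data, and it assumes the drives return zeros for discarded blocks (which is what makes --assume-clean safe here):

```bash
# TRIM every LBA so the drives are logically empty before array creation:
for d in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    sudo blkdiscard "$d"
done

# Create the array without the initial resync / zero-write pass:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```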
No ECC arrrgh... 😢
Bit of a shame the SSDs are limited to 1GB/s due to the low number of available lanes. It kind of defeats the purpose of having an NVMe NAS vs a 2.5" SATA NAS, since it's not even 2x faster while wasting a lot of the potential performance of all the money you've spent on NVMe drives.
Halve the drive count and double the lane availability per drive already! This lane design problem being exactly the same on every one of these NASes is starting to get very annoying! I do want one of these machines, and I would pay the $800 for this with the 10GbE and only 4 slots, if it had the correct PCIe layout so apps running on the box don't slow down external access! Grr! 😡
The bottleneck is the 10GbE though
@@ServeTheHomeVideo Get some extra processes going in a Docker container and it'll drop below the 8Gbit it's currently doing. Extra lanes for the storage would give it device-wide headroom to take advantage of the beefed-up processor without affecting transfer speeds.
64TB of SSDs ... Wouldn't it be cheaper to just buy Dropbox? The company that is, not a subscription to the service.
So we are running into this with Google Drive. Maybe not depending on the storage needs
Great little pirate machine
$799....$699 can't we get something cheaper? 😮💨
Where and why did "supports" become "sports"?
Question: why PCIe 4.0 M.2 SSDs, when the CPU only supports PCIe 3.0?
Gen4 drives are easier to get in 4TB sizes. Most Gen3 drives are being discontinued at this point.
@@ServeTheHomeVideo sad.
And then probably QLC instead of the slightly longer-living TLC. -.-
Anti-static protection, where is it? You shouldn't handle any of these electronics without anti-static protection, in the form of a wrist strap at the very minimum. The thing with static damage is that often it will not be noticed until later, or it will give rise to random problems. Wonder why you are getting blue screens, random reboots, or memory errors? Most likely static damage during installation. Take memory, for example: the chances of it arriving damaged from the factory are extremely remote, simply because it is so easy to automate testing for each and every stick and know it is faultless. So if you are getting memory errors, or what you think are incompatibilities, it was most likely damaged by static, not faulty from the factory. The static we're talking about here isn't felt by us; a static voltage of just 6 or 10 volts charged on our bodies can damage CMOS chips. Also, never touch any of the metal parts of the components or the connectors; if you need to, use gloves. This is because the smallest amount of grease from our fingers on high-frequency connections can add impedance, giving rise to random problems, and later down the line, corrosion and errors.
No dual NIC ;( What a waste.
Are people not aware of $30 PCIe x16 to 4-bay M.2 riser cards? It depends what desktop you have (mine can take 128TB+). I just priced out a desktop with 8 M.2 slots (half of them PCIe 5.0), 10G networking, and an R5 7500 CPU; it's faster, and while the size is double, at least it's upgradeable and cheaper.