That's a cool video. I think you should put the words "configure" and "FreeNAS" in the description or even change the title because that is a very good example of how to configure FreeNAS. I always wanted to try FreeNAS but never knew what would await me. Thanks for showing.
It is in the description... just at the end.
Backblaze open-sourced their hardware design for these, and various companies built them for Backblaze and other interested parties. The design is meant to provide them with cheap storage at large scale.
In 1998 or 1999, the company I worked for (we did data processing for McDonnell Douglas and Boeing) bought a 500 GB Amdahl box called eLViS. It cost $250k. We also bought an EMC DASD box containing 1 TB of mainframe storage. It cost $1 million. We also bought a 30-foot ATL (automated tape library) for $3.5 million. They are essentially trash now. Crazy
I was at a course at ATL in California in 1999. The machines were very well built, no comparison to modern stuff.
Thanks for recommending this IODD device! It looks like a great solution for those of us who don't have a PXE server running all the time.
It is definitely a great little device!
I installed TrueNAS myself on an old PC I have a few hours before you posted this video.
I only have four 1 TB drives and one 500 GB drive. :-)
By the way, I followed this guide. ruclips.net/video/nVRWpV2xyds/видео.html
I didn't have to create a dataset. I just enabled Windows share (SMB) under the sharing section and selected the pool I created. And named the share.
Wow, I'm still learning about useful tools from your videos! Thank you!
All FreeBSD ISOs are also USB stick bootable. Pretty sure there's a separate memstick IMG for FreeNAS if you can't use the ISO. It's small anyway. FreeNAS is great despite a few user interface quirks.
Wow, such a useful tool, worth 60 for keeping your important CDs out of the tool bag!
The reason there were 3 entries in the ACL is that *nix uses 3 different categories of access control - user, group, and world.
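For anyone new to Unix permissions, a minimal illustration of those three categories, with a made-up file and owner:

$ ls -l report.txt
-rw-r--r--  1 alice  staff  1234 Jan  1 12:00 report.txt
# user (alice): rw-, group (staff): r--, world/other: r--

# chmod can address each category separately:
$ chmod u=rw,g=r,o=r report.txt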
I really enjoyed watching you run through this one, and that device for the _ISO files is interesting. It would have saved me from more than a few headaches a while back!
Thanks for the vid!
P.S. I like the longer vids like this 45min - 90min 👍
Yeah the virtual CD/DVD/BD drive is really good. No problem to boot from, even on old machines.
Thank you for the walkthrough of the software.
You're welcome!
Linus Tech Tips viewers will be familiar with this one... :)
Really nice, that IODD device. I was looking for a similar thing but couldn't find any name/info, now I know! Thank you!
I need one of those IODD devices. But I need that fully filled drive pod even more. Could even do with one rack of those 4TB drives.....can never have enough drives.
2:35 Put a piece of spongy foam in the gap and on top to stop the SSD flapping in the breeze. SATA connectors are quite fragile, so having it held only by the connector is not good.
I think I will remove the SSDs completely and use them for other projects...
It is a really nice case, but for FreeNAS and ZFS, it probably is worth adding some extra RAM. ZFS is pretty hungry in terms of memory. I usually use 32GB for a system with 24 drives. 8GB will work, but is a bit low IMHO.
That's ridiculous. Why should it need so much memory? It's just a file server.
@@simontay4851 ZFS loves memory. It will eat as much as it can. If you enable deduplication, you'd better have a bare minimum of 16GB. 32GB would be ideal.
@@SimbaSeven. You shouldn't be enabling deduplication in 99% of use cases. Don't do this unless you really, really know what you are doing (and have 1000TB of storage). When deduplication is enabled, it is recommended to have 1GB of RAM (or L2ARC cache on a fast SSD) for every 1TB of storage for best performance. Without dedup (normal ZFS use, even for medium-size shared file servers), significantly smaller amounts of RAM are fine. In the past it was really hard to run with less than 8GB of RAM, but things have improved a lot. You can run fine with just a few GB, but the more the better, so the ARC cache can do its magic.
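To put rough numbers on that rule of thumb, a sketch with made-up pool/dataset names (the zfs/zpool commands themselves are standard):

# ~1 GB of RAM (or L2ARC) per 1 TB of deduplicated data, so e.g.
# 30 x 4 TB raw = 120 TB -> on the order of 120 GB of RAM for the dedup table.
zfs set dedup=on tank/backups    # dedup is per dataset, off by default
zfs get dedup tank/backups       # verify the setting
zpool status -D tank             # show dedup table (DDT) statistics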
2:05 I thought they looked familiar before you showed the sticker. I have exactly the same drives in my computer (not 45 disks though, just 4), and they still work perfectly fine and are a little newer as well.
You do not have to “switch” it to hard drive mode!
These devices can work in dual mode (HDD + ODD) all the time with no problem.
Most computers will boot directly from the virtual ODD, but if not, simply select the virtual CD-ROM when booting. Done.
Yes, most computers do, but I had one or two that got confused by the additional HDD. Also, sometimes it's quite a task to set the boot drive in the BIOS. Therefore I prefer to use the CD-only mode.
The largest SSD currently available is 100TB at $40k each, so that thing could theoretically hold 4.5PB, with a price tag of $1.8M for the drives alone. Probably quite cheap compared to other stuff that got junked on this channel.
Do you have a reference for that 100TB SSD? I doubt it is in 3.5" format ;-)
@@PlaywithJunk Looks like normal 3.5" format, price is a bit crazy though ;-)
www.newegg.com/nimbus-data-dc-100tb/p/2U3-002M-00004?Description=exadrive&cm_re=exadrive-_-9SIAPE6BRD0283-_-Product
At 30:06, you can see below the list of selected disks that it decided to use RAID-Z2 for the SSD array and RAID-Z3 for the HDD array. You can also change it to stripe or mirror.
The issue with your setup: a single RAID-Z3 vdev of ~30 disks is not recommended. It is better to create smaller RAID-Z3 groups, e.g. 3 groups of 10 drives each. I usually use RAID-Z2 myself, with 6 or 8 drives per group, and if I have multiple controllers, I split the groups so each one uses an equal number of disks from each controller. If you don't care about performance too much but want as much space as possible, a single 30-disk RAID-Z3 vdev will work fine, especially on reads, but writes will be somewhat slower than with other configurations.
I don't use FreeNAS, but it uses ZFS on FreeBSD, and I use ZFS routinely on Linux, where this is easy to configure, so I am sure FreeNAS can configure it too. It can only be done at pool creation time.
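As a sketch, the smaller-groups layout described above would look something like this at pool creation time (placeholder device names; on FreeBSD the disks would typically show up as da0, da1, ...):

# Three 10-disk RAID-Z3 vdevs instead of one 30-disk vdev:
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz3 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz3 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29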
I've been using TrueNAS Core 12.0-RELEASE on a 15-bay (15x8TB) RAID-Z2 for a little over a month and haven't noticed any issues. I'm also running it on a Dell PowerEdge R710 with 6x8TB in RAID-Z1 as well.
I have been looking for a device to hold all my ISOs... thank you!
Nice video, hope you get the raid set up successfully.
IODD is a South Korean company which specialises in secure storage products.
OMG... Do you by any chance have a few RAID controllers to sell cheap?
I'm actually planning to build the same thing...
Well, we do have RAID controllers, but none with that many ports. If a few old HP P410s are enough for you... they can handle 8 disks per card.
@@PlaywithJunk Thanks, but those are exactly the ones I want to replace.
Thanks for the interesting video. Very similar USB HDD enclosures are also available on the European market, the Zalman ZM-VE350 for example.
The Zalman case seems to be exactly the same. They just use a scroll wheel instead of a keyboard to select settings. The display is also tiny. :-)
3 PSUs ... What a pain. How do you split that up amongst A and B power feeds???
That is a good question I had never thought about... Let's hope that one PSU is strong enough to keep the system running. But yeah, from that point of view it's pretty dumb to have an odd number of PSUs.
My PS5 needs this storage.
Been trying to get myself a used Xyratex/Seagate 2584 chassis for this kind of build. They should be coming off production use.
Could that be LMG's old Storinator?
5:23 Put a piece of PET plastic between them to stop them touching.
Or bend the pins 90 degrees upwards, so you can still connect LEDs or whatever connects there.
How long would it take to fill it using a Raspberry Pi? Plenty of Pis could even live inside!
Hey, can you share your SPP 2019.12?
Maybe... write me at playwithjunk@gmail.com
You can add the SSDs as read and write caches to the HDD pool. That's what I did on my shitty NAS with two 64GB SSDs.
That's why they are there.... they were used in that way.
@Play with Junk I meant that TrueNAS supports that. Sorry for expressing myself poorly.
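For reference, attaching SSDs to an existing pool as log and cache devices looks roughly like this (pool and device names are made up):

zpool add tank log mirror ssd0 ssd1   # SLOG, speeds up synchronous writes
zpool add tank cache ssd2             # L2ARC read cache (cannot be mirrored)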
Yeah it's an earlier 45 Drives model
Ventoy is a great alternative project that lets you boot from an ISO directly. You don't have the ability to eject and switch ISOs though.
I once tried FreeNAS for my network attached storage box. It's too complicated. So I formatted the boot drive and just installed a heavily nLited version of WinXP instead. It boots faster, uses less RAM and HDD space, I can put it in standby when not in use, and since there's obviously a GUI, I can also use it as a media centre PC.
Protocase Inc. is the manufacturer of the Storinators.
45Drives is part of Protocase.
OK, I've got to get me one of those IODDs. I have a dozen USB drives with installs on them. To be able to put ALL my ISOs on one drive and then just select the one I want?!! Will have to look into it more!
Also, look up Linus Tech Tips. They are big users of 45 Drives and have one machine with a petabyte of unformatted capacity and another with 3 petabytes! They are pretty cool.
The IODD looks great, I've never seen one before. It's great that it's so compact.
There is also a software-based tool called Ventoy. It installs on any standard USB stick, portable spinning disk, or SSD. Occasionally there are devices that have trouble booting, but last time I checked there is a large list of OSes that will load correctly using UEFI or Secure Boot.
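If anyone wants to try it, installing Ventoy from the extracted release on Linux is a one-liner; the device name below is just an example, and the target disk gets wiped:

sudo sh Ventoy2Disk.sh -i /dev/sdX   # -i installs Ventoy onto /dev/sdX
# Afterwards, simply copy .iso files onto the exFAT partition and boot.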
Maybe it's just my musician vein, but the first thing I thought seeing the thumbnail was: nice xylophone :D
Nice :-) Hmmm... maybe that could work with different HDDs making different tones? Just kidding :-)
@@PlaywithJunk Depending on how full it is with data :D
Also, I have 2 FreeNAS boxes on my network. It's great software.
Those Reds are still worth money.
OMG are they throwing this away???
Calm down.... NO! :-)
@@PlaywithJunk too bad, I'm needing more PC junk to play with.
Thanks. I've been waiting for this. 😁😁😁😁
Me too! :)
Aw man, everyone goes FreeNAS, FreeNAS, when it's a resource hog. I'd rather use XigmaNAS (which was NAS4Free before, itself a fork of FreeNAS from when its development took a... not so interesting turn).
All in RAID 0, faster than an M.2 for sure.
4:38 How can you have 4 drives per cable? SATA is point-to-point, unlike the old 50-pin SCSI or IDE/PATA interfaces. There is no master/slave.
Look closely at such a cable. It's 4 cables in one. All the "professional" RAID controllers and HBAs have multiport connectors. Look inside a server... if it has a backplane for 8 drives, two SAS cables will connect it.
The 45 Drives Storinator is a big turn-on for me, Chris. And Linus is also using these to deploy his petabyte project. WTF, I'm feeling aroused. And chat, let me assure you, the person you see in these videos is one of the greatest and most wise people, with lots of virtue and life experience. I just love you, and I'm your biggest fan.
Chris, can we get a call this Sunday? I need to talk, I need some guidance.
Play with ZFS - some speed tests, normal pools vs ZFS pools.
@2:00 Not sure if I want my drives sitting that loose :P
It's not too bad when you use 3.5" drives. They are held by the cover pretty well. But the small SSD... well, that's not how it's supposed to work.
@@PlaywithJunk An SSD doesn't spin at 7200 rpm. That's my concern with loose HDDs, the vibration could destroy the ball bearings.
Looks like it's made by Protocase.
Linus has one of those.
More than one I think...
Didn't Backblaze invent them?
I think so...
@@PlaywithJunk www.backblaze.com/b2/storage-pod.html
I'd have created that pool differently. So you have 15 drives in each of 3 bays, 2 of which are SSDs. I'd create 3 raidz2 vdevs of 12 disks each and leave one disk per bay as a spare, and use the SSDs as cache (ZIL and L2ARC) drives for the pool, basically like this:

zpool create mypool \
  raidz2 disk1 disk2 ... disk12 \
  raidz2 disk14 disk15 ... disk25 \
  raidz2 disk27 disk28 ... disk38 \
  log mirror ssd1 ssd2 ssd3 \
  cache ssd4 ssd5 ssd6 \
  spare disk13 disk26 disk39

(Note: L2ARC cache devices can't be mirrored, so the cache SSDs are simply listed and get striped.)
This gives you 3 legs of raidz2 vdevs (let's call it a RAID 06) with ZIL and L2ARC caches and 3 spare drives. In each leg, two disks can fail without data loss, so theoretically a total of 6 drives could fail before the pool would die (only if split evenly, 2 in each leg, but more than two drives failing in one leg at once I'd call extremely unlikely). Performance with this configuration is pretty decent; it was no problem saturating a 20 Gbit/s LACP link on a file server, and the limit was the network, not the disks. :D
You are certainly right. What I did in the video was just an example "how to" configuration. I don't claim it was the best configuration... :-)
One day our computers will have that much RAM. Probably in my lifetime, I'm only 14. I am probably one of the youngest viewers of this channel.
You probably don't want to put that many drives into a single vdev even with Raid-Z3.
I hate file servers: they are loud, the disks consume a lot of electricity, and you always want to buy and fill bigger ones. It's up to you, but you should use those SSDs for cache. ZFS has separate cache device types for reads (L2ARC) and for writes (the SLOG, which helps synchronous writes). I found the write side more useful with slower disks.
We have a custom one at work that holds about 60+ disks, and it has always been janky and never worked right. Would never buy from them again.
It's all off-the-shelf hardware we could have built ourselves.
What software is on your pods?
*linus*
For USB ISO booting there is a free and easy solution: Ventoy
There are many solutions including the HP "USB Stick Format Tool". But I still like that IODD :-)
You only see this from Chinese manufacturers, because everyone else wants to do away with encryption.
IODD is actually a South Korean company.