Nice tutorial, but the text on the screen was too small to read.
Thanks for the feedback! This is from a few years ago, when we were still new to creating content. You should find our content a little better now, we hope 😆
04:00 A zpool is NOT a filesystem! ZFS datasets are. A zvol is a block device, so it may be formatted or left unformatted, and exposed via iSCSI or used internally (e.g. by jails) as a block device.
Correct
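To make the distinction concrete, here is a minimal sketch (the pool name "tank", the device names, and the size are made-up examples):

  zpool create tank mirror /dev/sda /dev/sdb   # the pool: a collection of vdevs that provides the storage
  zfs create tank/data                         # a dataset: an actual filesystem, mounted at /tank/data by default
  zfs create -V 10G tank/vol1                  # a zvol: a raw block device at /dev/zvol/tank/vol1, format it or export it over iSCSI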
Great, simple explanation. This is going to be a must-watch for my techs, who are mostly Windows people, and I don't have the patience to explain ;-)
I'd like to use either ZFS or Linux RAID to group several internal hard drives. I periodically reinstall my OS, which is on its own separate drive... can I recreate the drive configuration after the OS install without loss of data?
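For what it's worth, both ZFS and mdadm keep their pool/array metadata on the member disks themselves, so the configuration can normally be picked up again after an OS reinstall. A rough sketch, assuming a hypothetical pool named "tank" and an existing mdadm array:

  zpool export tank        # before the reinstall (optional, but cleaner)
  zpool import tank        # after the reinstall, on the fresh OS
  mdadm --assemble --scan  # mdadm: detect and reassemble existing arrays from the disks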
I will move to ZFS when I can expand an array by adding a single drive.
Many thanks for another great tutorial! We have been big fans of ZFS for several years with both Solaris and Linux. Would you also recommend mdadm for FC LUN implementations for the same reasons as iSCSI LUNs? Also, why do you prefer LIO targetcli over SCST?
So which one is better?
Thanks for the video guys.
One question regarding direct IO: do you guys think it's better to benchmark the ZFS storage with direct IO when provisioning storage, considering the performance needs?
Hey Danilo, Mitch here! Thanks for the question. The traditional view of benchmarking different file systems has always been to remove the cache in whatever way possible before running the benchmark. We believe this is a little misguided for a few reasons.

The reason it has traditionally been done that way is the assumption that all file systems cache equally well (or, at larger scales, equally uselessly), so it's better to get a real picture of what the file system is capable of in the scenarios where cache is not going to help. That assumes, however, that all file systems use the same caching methods and algorithms. While this may be true for the many file systems that use a simple LRU (Least Recently Used) cache, ZFS uses an algorithm called ARC (Adaptive Replacement Cache). This is a much more complex caching system and allows for higher efficiency and better performance on many workloads.

All of that being said, for ZFS on Linux direct IO was not possible until version 0.8, but it is now available. So in the end, what we typically recommend is to look at the workloads you are planning to use your ZFS pool for. If your workload requires direct IO, then it makes sense to benchmark that way; but if your workload can take advantage of the ZFS ARC, then we believe benchmarking with it disabled is like intentionally handicapping yourself. Hope that helps!
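As a rough illustration of the two approaches, a hedged fio sketch (the dataset path, sizes and runtime are placeholders, and --direct=1 on a ZFS dataset needs ZFS on Linux 0.8 or newer):

  # buffered run: lets the ZFS ARC participate, representative of cache-friendly workloads
  fio --name=buffered --directory=/tank/bench --rw=randread --bs=4k --size=4G --runtime=60 --time_based --direct=0
  # direct IO run: asks to bypass caching, representative of cache-hostile workloads
  fio --name=direct --directory=/tank/bench --rw=randread --bs=4k --size=4G --runtime=60 --time_based --direct=1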
My dude is baked more than my LSI HBA's crusty old thermal compound.
Do you know what year this is? Linux is how old, and you still show how to do things via the command line? Is there no GUI for this?
Great video
OpenBSD + ZFS is very stable and efficient
Hellooo, are you on the internet?
Eff stab?
Fs is file system and tab is table.
Better: eff ess tab.
Tuesday Tech Tip - ZFS Read & Write Caching
ruclips.net/video/H5aLY253daE/видео.html
Sorry - but anyone fiddling around with mdadm should be aware that, in order to have your filesystems auto-mounted after a reboot, they have to be in fstab - that was kind of unnecessary.
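For anyone following along, the usual persistence steps look roughly like this (the array name, mount point and filesystem type are examples, not taken from the video):

  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array so it is reassembled at boot
  update-initramfs -u                              # Debian/Ubuntu: rebuild the initramfs with the new config
  # and an /etc/fstab line so the filesystem auto-mounts:
  /dev/md0  /mnt/data  ext4  defaults  0  2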