FreeNAS Virtual Machine Storage Performance Comparison Using SLOG/ZIL Sync Writes with NFS & iSCSI

  • Published: 24 Oct 2024

Comments • 31

  • @zesta77
    @zesta77 5 years ago +4

    The way that ZFS does snapshots really should not have any significant performance penalties. Unlike most snapshot systems, there really isn't any "keeping track of the deltas." All new writes are simply written to new blocks. Unless you are accessing the snapshots simultaneously, there really isn't any difference. This is why ZFS snapshots (and NetApp, who did it first) are so awesome. Having 100 snapshots should not be much different than having 1. For what it is worth, we use NetApp for production systems. We have all boot disks for VMs stored via NFS (NetApp does NFS better than anything I have ever tested). We use iSCSI only for data stores that need high IOPS and we connect iSCSI disks directly to the VM. We have hundreds of snapshots with no performance penalties.
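    The copy-on-write behavior described above is easy to see from the command line. A minimal sketch, assuming a hypothetical pool/dataset named tank/vms:

    ```shell
    # Creating a ZFS snapshot is near-instant and initially uses no space:
    # it just pins the current block tree. New writes go to new blocks.
    zfs snapshot tank/vms@before-upgrade
    # USED starts near zero and only grows as live data diverges
    # from the snapshot:
    zfs list -t snapshot -o name,used,referenced tank/vms
    ```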

  • @thegreatga
    @thegreatga 5 years ago +7

    Now let's put a 900P or 905P Optane in as a SLOG and see those results again.

  • @coltconley
    @coltconley 3 years ago

    Good science, Tom. Always informative and well-designed experiments with proof. Keep up the good work!

  • @Mastakilla13
    @Mastakilla13 4 years ago +2

    Great video again!
    One question though... I noticed you're using compression and in your speedtest script you're using "dd if=/dev/zero ...". Isn't it better to use "dd if=/dev/random ..." to prevent testing the compression instead of the SSDs?
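    The concern is valid: with compression on, /dev/zero compresses to almost nothing, so dd measures the compressor rather than the disks. One common workaround (paths and sizes here are just examples) is to pre-generate incompressible data, since reading /dev/urandom during the timed run would bottleneck on the RNG itself:

    ```shell
    # Generate 64 MiB of incompressible data up front:
    dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=64 status=none
    # Then time writing that data to the pool under test
    # (destination path is an example):
    dd if=/tmp/rand.bin of=/tmp/testfile bs=1M status=none
    ```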

  • @Maisonier
    @Maisonier 3 years ago +4

    What are the best practices? Running FreeNAS as a virtual machine on Windows? In a VM on Linux? Using Proxmox to virtualize Ubuntu and install FreeNAS inside it? Or running FreeNAS directly on a RAID 1 of 2 SSDs?
    And if you have 2 servers for redundancy, should you use both as a mirror and share the data?

  • @alexandermotin4106
    @alexandermotin4106 5 years ago +1

    NFS with sync=disabled does not hurt pool consistency. In case of a crash, both the pool and user data simply revert to the last transaction commit from a few seconds earlier. The same applies to iSCSI by default. But losing a few seconds of data while the VM is still running may be fatal, which is why sync=always for iSCSI can make sense if there is a fast enough SLOG.
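    For reference, the three states of that knob can be inspected and set per dataset (dataset name is an example):

    ```shell
    # standard - honor the client's sync requests (default)
    # always   - treat every write as synchronous
    # disabled - ack immediately; a crash loses up to a few seconds of
    #            writes, but the pool itself stays consistent
    zfs get sync tank/vms
    zfs set sync=always tank/vms   # e.g. for iSCSI zvols with a fast SLOG
    ```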

  • @MrJizez
    @MrJizez 4 years ago +1

    I love your knowledge and how you explain everything in a logical, understandable way.
    I have just started using FreeNAS at home and am struggling to get the built-in OpenVPN to work, and then to connect it to my Proxmox server located in a datacenter. The goal is to use FreeNAS as an NFS backup target for the VMs on my Proxmox server.
    There is so little information out there about connecting a home FreeNAS NFS share to an outside server with a totally different IP. Have you tried this? I would be so grateful if you could make a video about it; the guides on the internet about getting FreeNAS's built-in OpenVPN to work are really bad, and a lot of people struggle with this.

  • @lanceeilers5061
    @lanceeilers5061 5 years ago +1

    Thanks Tom for the great content, keep them coming, and keep smiling :-)

  • @BradleyHerbst
    @BradleyHerbst 5 years ago +2

    Great video. I learned a lot. Thanks!!!

  • @Lee-qy3bc
    @Lee-qy3bc 4 years ago

    Great work, thank you Tom!

  • @hayzeproductions7093
    @hayzeproductions7093 4 years ago +1

    Lawrence,
    I'm not using XenServer at the moment, and probably should have been from the beginning. But I am facing some difficulty with Proxmox: inside the VM, the hard drive speeds are slow. Have you ever used it or have any experience with it? By chance, would you happen to have a video on how to optimize the speed of a Proxmox VM?

    • @savagedk
      @savagedk 4 years ago

      - What OS is your VM running, and how is the ZFS pool/dataset set up?
      - How is the VHD allocated: a ZVOL, or a file on top of ZFS?
      If you could give me the printouts of the following commands, I might be able to help.
      zpool status -v
      zfs get atime "poolname/dataset/zvol"
      zfs get xattr "poolname/dataset/zvol"
      zfs get sync "poolname/dataset/zvol"
      zpool get all | grep ashift
      We can go from there :)

  • @davidg4512
    @davidg4512 5 years ago +3

    What happens if you disable syncing in NFS but leave it at the default on the dataset?

    • @davidg4512
      @davidg4512 5 years ago +1

      I guess my question is a fallacy now that I think about it. The reason sync is enabled on NFS is that NFS is not aware of sync commands sent in the guest OS, so NFS just sync-writes everything to be safe.

    • @Lee-qy3bc
      @Lee-qy3bc 5 years ago

      @@davidg4512 You set the sync setting on the dataset, not the NFS export.

  • @Mastakilla13
    @Mastakilla13 4 years ago

    Also, I'm wondering how you're getting the full 10 Gbit when using iperf with a single connection. I have Intel X550 and Intel X540 cards and I need to use multiple connections (for example "iperf3 -P 4 -c 192.168.10.10") to get iperf to reach 10 Gbit...? I'm using a direct connection, so there's no router in between to slow things down... Could it be because I'm using Windows 10 on one side? Or have you tweaked your network settings?

  • @mikk36est
    @mikk36est 5 years ago

    What about iSCSI file extent performance?

  • @pr0jectSkyneT
    @pr0jectSkyneT 5 years ago +3

    Great video. I find myself watching a lot of your work and I learn a lot from it. Does an SMB share benefit from a SLOG device on synchronous shares? Does the SLOG need to be an SSD/NVMe? I have a 3x 10TB HDD RAIDZ1 pool (SMB); will adding a 3TB HDD SLOG device be beneficial to me? If the SLOG device fails, will my volume not work properly until it gets replaced?

    • @robbymoeyaert7482
      @robbymoeyaert7482 5 years ago +5

      "Does a SMB share benefit from a SLOG device on synchronous shares?"
      Yes
      "Does the SLOG need to be an SSD/NVMe?"
      No, even having an HDD as a SLOG gives a little benefit, as it moves the ZIL off the main array, preventing double writes. However, the faster the SLOG, the better the performance: with synchronous writes, the system returns an "okay, it's written" the moment the data lands in the ZIL on the SLOG. Note that in normal operation data is never read from the SLOG, only written, so read speeds don't matter; write speed, write latency and write endurance do, on top of being protected against power loss.
      For this reason, a small Intel Optane is often one of the best SLOG devices. An example would be the 58 GB Intel Optane 800p. 3DXPoint has much lower latency than traditional NAND, and is for the most part protected against power loss by design.
      " If the SLOG device fails, will my volume not work properly until it gets replaced?"
      forums.freebsd.org/threads/zfs-slog-ramifications-it-if-fails.66944/
      If a SLOG fails while the system is running, the pool will continue without issue. If the system crashes and you lose SLOG, you may lose several seconds of sync writes (writes that were written just before the crash). The pool will still be completely consistent but those missing writes could easily cause issues with software such as databases.
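      To make the above concrete, attaching a log vdev looks like this (pool and device names are examples); mirroring it guards against losing in-flight sync writes if the device dies right before a crash:

      ```shell
      # Single log device:
      zpool add tank log nvme0n1
      # Or a mirrored pair:
      zpool add tank log mirror nvme0n1 nvme1n1
      # The device shows up under a separate "logs" section:
      zpool status tank
      ```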

    • @pr0jectSkyneT
      @pr0jectSkyneT 5 years ago +1

      @@robbymoeyaert7482 thank you very much. Very enlightening. You mentioned that it provides protection from power loss in writes, is this the same as having ECC memory? I was under the impression that ECC memory does the same. And one final question; if I use a spare HDD now as SLOG, will I easily be able to swap it out to another faster drive (SSD/NVME) that might be of a different size (likely smaller capacity)?

    • @robbymoeyaert7482
      @robbymoeyaert7482 5 years ago +2

      @@pr0jectSkyneT No, it is not the same as having ECC memory; those are two different things.
      ECC memory protects against errors within a stored memory cell and either corrects the error if it's not too big, or tells the system it can't recover from a memory error, after which the system should immediately halt to prevent further data corruption (in Windows this would cause a BSOD). Such errors can happen through many means, including cosmic radiation, and the chance of them occurring is low, but not 0. This is why it is very highly recommended that you use ECC memory on ZFS systems: ZFS is very good at healing corruption within its arrays, but considers everything in RAM to be "true and correct", so any corruption in RAM can have dire consequences for the rest of your data.
      Power loss protection for individual disks is different. Basically, when writing to a disk, even NAND flash, the data is most often cached in a tiny bit of RAM on the disk before being written to the actual medium. If a power loss occurs, that data in RAM is normally lost, and the net result is data loss: the system thinks the data is written (because the disk says "yup, I've written it" the moment it enters the cache), but it hasn't really been written. Enterprise-grade SSDs have a capacitor bank onboard to give the SSD enough residual power to flush the cache onto the NAND before total power loss, thus preventing data loss. 3DXPoint works completely differently and doesn't have RAM onboard; it immediately writes to the storage medium because the medium is fast enough, so in-flight power loss doesn't cause data loss.
      "If I use a spare HDD now as SLOG, will I easily be able to swap it out to another faster drive (SSD/NVME) that might be of a different size (likely smaller capacity)?"
      Yes, you can remove and add SLOG and L2ARC devices at will on an array. Just don't yank it out without first removing it from the actual array, to give ZFS time to get things in order properly.
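      The swap described above can be sketched as follows (names are examples; ZFS flushes any outstanding ZIL records when the log vdev is removed):

      ```shell
      # Remove the old HDD log device cleanly, then add the faster one:
      zpool remove tank ada3
      zpool add tank log nvme0n1
      ```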

    • @pr0jectSkyneT
      @pr0jectSkyneT 5 years ago +1

      @@robbymoeyaert7482 Thank you very much sir. Hats off to the detailed and easily understandable response. I will be adding a SLOG device to my one of m FreeNAS pools as I have an unused 3TB disk available.

    • @robbymoeyaert7482
      @robbymoeyaert7482 5 years ago +1

      @@pr0jectSkyneT you're very welcome

  • @KebraderaPumper
    @KebraderaPumper 5 years ago

    I participated in the last live, very nice content.

  • @nicoladellino8124
    @nicoladellino8124 5 years ago

    Nice video, TNX

  • @multeemedia
    @multeemedia 5 years ago +1

    What happens if you turn on sync for iscsi?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  5 years ago +1

      It is on for the ZFS file system for iSCSI, but XCP-NG writes to it async via the protocol.

    • @alexandermotin4106
      @alexandermotin4106 5 years ago

      For iSCSI, sync=always does about the same as sync=standard does for NFS with the client requesting sync IO. There are 3 states to that knob.

  • @danielday8828
    @danielday8828 4 years ago

    I almost feel like iSCSI is more designed for database style storage. Integrity between transactions is more important when dealing with sensitive data. If you are using the storage for just virtual machine usage, it is less important. I think virtual machines should be more designed to be "ephemeral" so that if something does happen, oh well. Just spin up a new machine.

  • @markgolding71
    @markgolding71 2 years ago

    Complete joke software. After installing it successfully on a 120GB SSD, my password was rejected when I accessed it via web browser using the IP given. Using option 7 to reset the password on the server, I couldn't type anything at all. Re-installed it, got the same results. Waste of time.