Thank you so much for this. I've been looking for a way to test drive performance under Linux and had never stumbled upon fio until I saw this video.
This is just what I needed, thanks!
What is wrong with USB? Wouldn't it be cheaper to scale up using external hard drives vs. internal? Also, is it true that madmax is making his own fork of Chia with GPU support for plotting?
Nothing “wrong” with USB, just one more hop of protocol translation through a USB-to-SATA chip. It's also easier to house a ton of drives in a JBOD than to plug in a separate power supply for each external HDD.
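If you're curious which transport a given drive is actually using, lsblk can show it (assuming a reasonably recent util-linux; device names will of course differ on your system):

# list block devices with their transport (usb, sata, nvme, ...), model, and size
lsblk -o NAME,TRAN,MODEL,SIZE

A drive showing TRAN=usb is sitting behind a USB bridge, while TRAN=sata means it's on a native SATA port.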
I'm using fio to bench a zvol created to store VMs (RAID10 from four 10k SAS drives).
I am using:
fio --directory=/mnt/test/ --name=async_randwrite --rw=randwrite --bs=4k --direct=1 --sync=0 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=1G
My issues are:
Where do I run fio? If the VM storage I want to test is in HHproxVM, for instance, should I first cd to that storage (cd /HHproxVM) and then run fio from there, or run it from the boot-pool and specify directory=/HHproxVM/mnt/test instead of /mnt/test?
I had very weird results from changing only the directory option, as shown below.
fio --directory=/mnt/test/ --name=async_randwrite --rw=randwrite --bs=4k --direct=1 --sync=0 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=1G
Result:
WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=1024MiB (1074MB), run=6884-6884msec
fio --directory=/HHproxVM/mnt/test --name=async_randwrite --rw=randwrite --bs=4k --direct=1 --sync=0 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=1G
Result (it didn't even finish after 30 min, and the bandwidth and IOPS are way lower):
async_randwrite: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][63.1%][w=428KiB/s][w=107 IOPS][eta 12m:56s]
What am I missing here?
Thank you
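One thing worth sanity-checking before comparing those two runs is which filesystem or dataset actually backs each --directory path; on a ZFS box the two paths may land on completely different pools with different sync behaviour, which alone can explain a huge gap. A quick way to check (a sketch assuming ZFS and standard tools, not a definitive diagnosis):

# show which mount and filesystem each test directory lives on
findmnt -T /mnt/test
findmnt -T /HHproxVM/mnt/test

# list ZFS datasets with their mountpoints and sync setting to see which pool each path belongs to
zfs list -o name,mountpoint,sync

With that confirmed, the same fio job can be rerun against each path knowing exactly which pool and dataset are being measured.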
Can you please make a video on I/O engines that use the GPU?
*looks at clock* my man here is about to try to explain fio in 12 minutes. Let's see how this goes heh.
I love that the -h doesn’t even attempt to just dump every command
Do not run it against the boot drive! I wish I had seen this walkthrough earlier.
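Seconding this: a safer pattern is to point fio at a scratch directory on the device you actually want to test rather than at a raw block device, so writes land in throwaway files. The paths below are just examples; substitute the mountpoint of your target drive:

# create a throwaway test directory on the target drive
mkdir -p /mnt/target-disk/fio-test

# run the benchmark against files in that directory only
fio --directory=/mnt/target-disk/fio-test --name=randwrite-test --rw=randwrite --bs=4k --size=1G --numjobs=1 --ioengine=psync --direct=1

# clean up the test files afterwards
rm -rf /mnt/target-disk/fio-test

Writing directly to a raw device with --filename=/dev/sdX will destroy whatever is on it, which is exactly how boot drives get clobbered.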