December 3

Linux: Using fio for testing I/O performance

fio – The “flexible I/O tester”

fio is available on most distributions as a package with that name. It won’t be installed by default, so you will need to install it yourself. You can click apt://fio (Ubuntu) or appstream://fio (Plasma Discover) to install it (on some distributions, anyway).
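
If you prefer a terminal, fio is packaged under that name on essentially all common distributions, so one of these should work (assuming your distribution’s standard repositories):

  • Debian/Ubuntu: sudo apt install fio
  • Fedora: sudo dnf install fio
  • Arch: sudo pacman -S fio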

fio is not at all straightforward or easy to use. It requires quite a lot of parameters. The ones you want are listed below (a minimal example follows the list):

  • --name= names your test run’s “job”. It’s required.
  • --eta-newline= forces a new line of progress output for every given time period. You may want --eta-newline=5s
  • --filename= specifies the file to read from or write to.
  • --rw= specifies whether you want a read (--rw=read) or write (--rw=write) test
  • --size= decides how big a test file it should use. --size=2g may be a good choice. A file of this size (at the path given with --filename=) will be created, so you will need to have free space for it. Increasing it to --size=20g or more may give a more realistic result on larger HDDs.
    • A small 200 MB file on a modern HDD won’t make the read/write heads move very far. A very big file will.
  • --io_size= specifies how much I/O fio will do. Setting it to --io_size=10g will make it do 10 GB worth of I/O even if --size= specifies a (much) smaller file.
  • --blocksize= specifies the block size it will use; --blocksize=1024k may be a good choice.
  • --ioengine= specifies the I/O engine to use. There are a lot to choose from; run fio --enghelp for the full list. fio is a very versatile tool, and whole books could be (and probably have been) written about it. libaio, as in --ioengine=libaio, is a good choice and it is what we use in the examples below.
  • --fsync= tells fio to issue an fsync, which flushes kernel-cached pages to disk, after the specified number of blocks has been written.
    • --fsync=1 is useful for testing random reads and writes.
    • --fsync=10000 can be used to test sequential reads and writes.
  • --iodepth= specifies the number of I/O units to keep in flight.
  • --direct= specifies if direct I/O, which means O_DIRECT on Linux systems, should be used. You want --direct=1 to do disk performance testing.
  • --numjobs= specifies the number of jobs. One is enough for disk testing. Increasing this is useful if you want to test how a drive performs when many parallel jobs are running.
  • --runtime= makes fio terminate after a given amount of time. This overrides other values specifying how much data should be read or written. Setting --runtime=60 means that fio will exit and show results after 60 seconds even if it’s not done reading or writing all the specified data. One minute is typically enough to gather useful data.
  • --group_reporting makes fio report results per group rather than per job, which makes the output easier to understand.
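
A minimal invocation using only a handful of these options might look like this (a sketch; temp.file is just an arbitrary filename and fio’s defaults are used for everything not specified):

fio --name TEST --filename=temp.file --size=2g --rw=read

Such a bare run works, but because it uses buffered I/O and default settings the results are far less meaningful than those from the fully specified commands below.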

Put all the above together and we have some long commands for testing disk I/O in various ways.

Note: A file at the path given with --filename= will be created with the specified --size= on the first run. The file is filled with random data because some drives handle all-zero data differently, which would skew the results. The file can be re-used in later runs if you specify the same filename and size each run.
Testing sequential read speed with very big blocks

fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

The resulting output will have a line under “Run status group 0 (all jobs):” which looks like this:

  • WD Blue 500 GB SSD (WDC WDS500G2B0A-00SM50): bw=527MiB/s (552MB/s), 527MiB/s-527MiB/s (552MB/s-552MB/s), io=10.0GiB (10.7GB), run=19442-19442msec
  • The Seagate Momentus 5400.6: READ: bw=59.0MiB/s (62.9MB/s), 59.0MiB/s-59.0MiB/s (62.9MB/s-62.9MB/s), io=3630MiB (3806MB), run=60518-60518msec

The result should be close to what the hard drive manufacturer advertises, and it won’t be far off the guesstimate hdparm provides with the -t option (see the cross-check after the result below). Testing this on a two-drive RAID1 array will result in both drives being utilized:

  • Two Samsung SSDs: READ: bw=1037MiB/s (1087MB/s), 1037MiB/s-1037MiB/s (1087MB/s-1087MB/s), io=10.0GiB (10.7GB), run=9878-9878msec
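
For a quick cross-check of the sequential read numbers, hdparm’s timed read test can be run directly against the drive (here /dev/sda is just a placeholder for whatever device you are testing):

sudo hdparm -t /dev/sda
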
Testing sequential write speed with very big blocks

fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

This will produce a line under “Run status group 0 (all jobs):” like:

  • WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=3378MiB (3542MB), run=60575-60575msec
Note: Many modern SSDs with TLC (Triple Level Cell) NAND have a potentially large SLC (Single Level Cell) area used to cache writes. The drive’s firmware moves that data to the TLC area when the drive is otherwise idle. Doing 10 GB of I/O to a 2 GB file over 60 seconds, which is what the above example does, is not anywhere near enough to account for the SLC cache on such drives. You will probably not be copying 100 GB to a 240 GB SSD on a regular basis, so this may have little to no practical significance. However, do know that if you test a WD Green SSD with 100 GB of I/O to a 60 GB file (assuming you have that much free space) with a 5 minute (60*5=300) limit, you will get much lower results than you get when writing 10 GB to a 2 GB file. To test this yourself, try:

fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=60g --io_size=100g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=300 --group_reporting

You need to increase size (the file used for testing), io_size (the amount of I/O done) and runtime (how long the test is allowed to run) to bypass a drive’s caches.
Testing random 4K reads

Testing random reads is best done with a queue-depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32).

This will reflect real-world read performance.

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting

Some example results:

  • The Seagate Momentus 5400.6: READ: bw=473KiB/s (484kB/s), 473KiB/s-473KiB/s (484kB/s-484kB/s), io=27.9MiB (29.2MB), run=60334-60334msec
  • WD Blue 500 GB SSD (WDC WDS500G2B0A-00SM50): READ: bw=284MiB/s (297MB/s), 284MiB/s-284MiB/s (297MB/s-297MB/s), io=16.6GiB (17.8GB), run=60001-60001msec

As these example results show, the difference between an older 5400 RPM HDD and an average low-end SSD is staggering when it comes to random I/O. There is a world of difference between half a megabyte per second and 284 megabytes per second.
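
A rough way to translate those bandwidth figures into operations per second is to divide by the 4K block size: 473 KiB/s ÷ 4 KiB ≈ 118 random read IOPS for the Momentus, while 284 MiB/s ÷ 4 KiB ≈ 72,700 IOPS for the WD Blue.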

Mixed random 4K read and write

The --rw option randrw tells fio to do both reads and writes. Again, a queue depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32) will reflect a high real-world load. This test will show the absolute worst I/O performance you can expect. Don’t be shocked if an HDD shows performance numbers that are in the low percentages of what its specifications claim it can do.

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting
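
By default --rw=randrw splits the I/O roughly 50/50 between reads and writes. fio’s --rwmixread= option changes that ratio; the sketch below assumes you want a 75% read / 25% write mix and otherwise reuses the parameters from above:

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --rwmixread=75 --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting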

By: linuxreviews.org

