August 22

Techpository

Welcome to Techpository.  This is a personal repository of helpful information.  Content on this site is both self-created and copied from other resources.  We try to make sure we give credit where credit is due.  Use the information on this site at your own risk.

Category: Other | Comments Off on Techpository
December 31

ESXi: Esxcfg-vswitch Commands

Delete a Port Group:
esxcfg-vswitch -D "Service Console" vSwitch1

Add a nic (vmnic2) to a vswitch (vswitch1):
esxcfg-vswitch -L vmnic2 vswitch1

Remove a pnic (vmnic3) from a vswitch (vswitch0):
esxcfg-vswitch -U vmnic3 vswitch0

Create a portgroup (VM Network3) on a vswitch (vswitch1):
esxcfg-vswitch -A "VM Network 3" vSwitch1

Assign a VLAN ID (3) to a portgroup (VM Network 3) on a vswitch (vswitch1):
esxcfg-vswitch -v 3 -p "VM Network 3" vSwitch1

Change your Service Console (vswif0) IP and Subnet Mask:
esxcfg-vswif -i 172.20.20.5 -n 255.255.255.0 vswif0

Add a Service Console (vswif0):
esxcfg-vswif -a vswif0 -p "Service Console" -i 172.20.20.40 -n 255.255.255.0

Add a VLAN to a port group:
esxcfg-vswitch -v "vlan ID" -p "portgroup name" "vSwitch name"

List the existing config so you can get your vswitch and portgroup name:
esxcfg-vswitch -l
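
Putting several of the above together, here is a sketch of standing up a new vSwitch end to end. The names vSwitch1, vmnic2, "VM Network 3" and VLAN 3 are only examples, and the -a flag (which creates a new vSwitch) is assumed here since it is not covered above:

esxcfg-vswitch -a vSwitch1                       # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1                # attach an uplink pnic
esxcfg-vswitch -A "VM Network 3" vSwitch1        # add a portgroup
esxcfg-vswitch -v 3 -p "VM Network 3" vSwitch1   # tag the portgroup with VLAN 3
esxcfg-vswitch -l                                # verify the result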

By Paquin

Category: Virtualization | Comments Off on ESXi: Esxcfg-vswitch Commands
December 8

Linux: Create Network Bridge For KVM

A bridged network gives a virtual machine a network interface on the same segment as the host, so guests can reach, and be reached from, the network outside the host machine.

Let us list the available network connections:

nmcli connection show

Output:

NAME                UUID                                  TYPE      DEVICE
Wired connection 1  fbbdd6f9-0970-354e-8693-ff8050a85c77  ethernet  enp0s3

Now, create a virtual bridge br0 and attach the physical interface enp0s3 to it as a bridge slave:

sudo nmcli con add ifname br0 type bridge con-name br0

sudo nmcli con add type bridge-slave ifname enp0s3 master br0

Next, assign the physical interface's IP configuration to the bridge, since the bridge will now act as the primary network interface of the host:

sudo nmcli con mod br0 ipv4.addresses 192.168.0.10/24

sudo nmcli con mod br0 ipv4.gateway 192.168.0.1

sudo nmcli con mod br0 ipv4.dns "8.8.8.8","192.168.0.1"

sudo nmcli con mod br0 ipv4.method manual
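
To double-check those settings before switching over, nmcli can print just the fields we changed (a quick sanity check; -g/--get-values is standard nmcli, though the output layout varies a little between versions):

nmcli -g ipv4.method,ipv4.addresses,ipv4.gateway,ipv4.dns connection show br0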

KVM requires a few additional bridge settings, so set them as well:

sudo nmcli con modify br0 bridge.stp no

sudo nmcli con modify br0 bridge.forward-delay 0

Take down the existing wired connection and bring up the network bridge:

sudo nmcli con down "Wired connection 1" && sudo nmcli con up br0

Run the above command from the system console rather than over SSH, as you may lose your SSH session while the connections switch over.
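
Once br0 is up, it is worth confirming that it picked up the address and can reach the gateway (using the example addresses configured above):

ip addr show br0
ping -c 3 192.168.0.1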

Finally, check the network connections:

sudo nmcli con show
Output:

NAME                 UUID                                  TYPE      DEVICE
br0                  ee117099-4935-4dde-a1f5-4981b0d9585e  bridge    br0
bridge-slave-enp0s3  492b5c81-e59d-4150-9b24-0348cd0dd87c  ethernet  enp0s3
Wired connection 1   fbbdd6f9-0970-354e-8693-ff8050a85c77  ethernet  --
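
With br0 active, new KVM guests can be attached to it directly, for example with virt-install. The VM name, memory, disk size and ISO path below are placeholders; the relevant part is --network bridge=br0:

sudo virt-install --name testvm --memory 2048 --vcpus 2 --disk size=20 --cdrom /path/to/install.iso --os-variant generic --network bridge=br0,model=virtio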

By: Raj

Category: Linux | Comments Off on Linux: Create Network Bridge For KVM
December 3

Linux: Using fio for testing I/O performance

fio – The “flexible I/O tester”

fio is available on most distributions as a package with that name. It won't be installed by default, so you will need to install it yourself. You can click apt://fio (Ubuntu) or appstream://fio (Plasma Discover) to install it (on some distributions, anyway).
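
From a terminal, the package is typically just called fio; for example (package names assumed, check your distribution's repositories):

sudo apt install fio    # Debian/Ubuntu
sudo dnf install fio    # Fedora/RHEL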

fio is not at all straightforward or easy to use; it requires quite a lot of parameters. The ones you want are:

  • --name gives your test run's "job" a name. It is required.
  • --eta-newline= forces a new line for every time period that passes. You may want --eta-newline=5s
  • --filename= specifies the test file to read from or write to.
  • --rw= specifies whether you want a read (--rw=read) or a write (--rw=write) test
  • --size= decides how big of a test-file it should use. --size=2g may be a good choice. A file (specified with --filename=) this size will be created so you will need to have free space for it. Increasing to --size=20g or more may give a better real-world result for larger HDDs.
    • A small 200 MB file on a modern HDD won’t make the read/write heads move very far. A very big file will.
  • --io_size= specifies how much I/O fio will do. Setting it to --io_size=10g will make it do 10 GB worth of I/O even if --size specifies a (much) smaller file.
  • --blocksize= specifies the block-size it will use, --blocksize=1024k may be a good choice.
  • --ioengine= specifies the I/O engine to use. There are a lot to choose from; run fio --enghelp for the full list. fio is a very versatile tool, and whole books could be (and probably have been) written about it. libaio, as in --ioengine=libaio, is a good choice and it is what we use in the examples below.
  • --fsync= tells fio to issue an fsync, which flushes kernel-cached pages to disk, after the specified number of blocks has been written.
    • --fsync=1 is useful for testing random reads and writes.
    • --fsync=10000 can be used to test sequential reads and writes.
  • --iodepth= specifies a number of I/O units to keep in-flight.
  • --direct= specifies if direct I/O, which means O_DIRECT on Linux systems, should be used. You want --direct=1 to do disk performance testing.
  • --numjobs= specifies the number of jobs. One is enough for disk testing. Increasing this is useful if you want to test how a drive performs when many parallel jobs are running.
  • --runtime= makes fio terminate after a given amount of time. This overrides other values specifying how much data should be read or written. Setting --runtime=60 means that fio will exit and show results after 60 seconds even if it’s not done reading or writing all the specified data. One minute is typically enough to gather useful data.
  • --group_reporting makes fio report statistics for the whole group of jobs rather than per job, which makes the output easier to understand.

Put all the above together and we have some long commands for testing disk I/O in various ways.

Note: The file given with --filename= will be created with the specified --size= on the first run. It is filled with random data because of the way some drives handle zeros. The file can be re-used in later runs if you specify the same filename and size each time.
Testing sequential read speed with very big blocks

fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

The resulting output will have a line under Run status group 0 (all jobs): which looks like:

  • WD Blue 500 GB SSD (WDC WDS500G2B0A-00SM50): bw=527MiB/s (552MB/s), 527MiB/s-527MiB/s (552MB/s-552MB/s), io=10.0GiB (10.7GB), run=19442-19442msec
  • The Seagate Momentus 5400.6: READ: bw=59.0MiB/s (62.9MB/s), 59.0MiB/s-59.0MiB/s (62.9MB/s-62.9MB/s), io=3630MiB (3806MB), run=60518-60518msec

The result should be close to what the hard drive manufacturer advertised, and it won't be that far off the guesstimates hdparm provides with the -t option. Testing this on a two-drive RAID1 array will result in both drives being utilized:

  • Two Samsung SSDs: READ: bw=1037MiB/s (1087MB/s), 1037MiB/s-1037MiB/s (1087MB/s-1087MB/s), io=10.0GiB (10.7GB), run=9878-9878msec
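
If you want to collect these numbers in a script instead of reading the human-readable summary, fio can emit JSON with --output-format=json. A sketch (the exact field names can differ between fio versions, and jq is assumed to be installed):

fio --name TEST --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting --output-format=json > result.json
jq '.jobs[0].read | {bw_kib: .bw, iops: .iops}' result.json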
Testing sequential write speed with very big blocks

fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

This will produce a line under “Run status group 0 (all jobs):” like

  • WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=3378MiB (3542MB), run=60575-60575msec
Note: Many modern SSDs with TLC (Triple Level Cell) NAND have a potentially large SLC (Single Level Cell) area used to cache writes. The drive's firmware moves that data to the TLC area when the drive is otherwise idle. Doing 10 GB of I/O to a 2 GB file during 60 seconds, which is what the above example does, is not anywhere near enough to account for the SLC cache on such drives.

You will probably not be copying 100 GB to a 240 GB SSD on a regular basis, so that may have little to no practical significance. However, do know that if you test a WD Green SSD (assuming you have 80 GB free) with 100 GB of I/O to an 80 GB file and a 5 minute (60*5=300) limit, you will get much lower results than if you write 10 GB to a 2 GB file. To test this yourself, try:

fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=60g --io_size=100g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=300 --group_reporting

You need to increase size (the file used for testing), io_size (the amount of I/O done) and runtime (the length the test is allowed to run) to bypass a drive's caches.
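
Since these command lines get long, the same parameters can also be kept in an fio job file and re-run with a single argument. A minimal sketch of the sequential-write test above (the file name seq-write.fio is arbitrary; run it with: fio seq-write.fio):

[global]
filename=temp.file
ioengine=libaio
direct=1
iodepth=32
blocksize=1024k
size=2g
io_size=10g
fsync=10000
runtime=60
numjobs=1
group_reporting

[seq-write]
rw=write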
Testing random 4K reads

Testing random reads is best done with a queue-depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32).

This will reflect real-world read performance.

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting

Some example results:

  • The Seagate Momentus 5400.6: READ: bw=473KiB/s (484kB/s), 473KiB/s-473KiB/s (484kB/s-484kB/s), io=27.9MiB (29.2MB), run=60334-60334msec
  • WD Blue 500 GB SSD (WDC WDS500G2B0A-00SM50): READ: bw=284MiB/s (297MB/s), 284MiB/s-284MiB/s (297MB/s-297MB/s), io=16.6GiB (17.8GB), run=60001-60001msec

As these example results show: the difference between an older 5400 RPM HDD and an average low-end SSD is staggering when it comes to random I/O. There is a world of difference between half a megabyte and 284 megabytes per second.
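
Another way to look at those numbers is in IOPS: with 4 KiB blocks, throughput divided by the block size gives a rough operations-per-second figure. Back-of-the-envelope arithmetic based on the results above:

awk 'BEGIN { printf "HDD: ~%d IOPS\nSSD: ~%d IOPS\n", 473/4, 284*1024/4 }'
# prints roughly: HDD: ~118 IOPS, SSD: ~72704 IOPS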

Mixed random 4K read and write

The --rw option randrw tells fio to do both reads and writes. Again, a queue depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32) will reflect a high real-world load. This test will show the absolute worst I/O performance you can expect. Don't be shocked if an HDD shows performance numbers that are in the low percentages of what its specifications claim it can do.

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
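
By default randrw does a 50/50 split between reads and writes. If you want to skew the mix, fio's --rwmixread= option sets the read percentage; for example, a sketch of a 70% read / 30% write workload:

fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --rwmixread=70 --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting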

By: linuxreviews.org

Category: Linux | Comments Off on Linux: Using fio for testing I/O performance