How cool is vscsiStats? Part-II

I enabled vscsiStats collection on my vSphere host before starting the purge2 operation in the vCenter database (check my white paper for more details). While the operation ran, I collected 20 samples of vscsiStats output at equal intervals of 7.5 seconds. vscsiStats output consists of histograms of various metrics – outstanding IOs, seek distance, request length, arrival time – all split between reads and writes. To obtain the histogram of a given metric at a particular time instant, I divided the difference between the histogram values of the metric collected at successive sampling instants by the sampling interval:


Outstanding Read IOs (=1) at time t(x) = (Outstanding Read IOs (=1) until time t(x) – Outstanding Read IOs (=1) until time t(x-1)) / Sampling interval

Outstanding Read IOs (=2) at time t(x) = (Outstanding Read IOs (=2) until time t(x) – Outstanding Read IOs (=2) until time t(x-1)) / Sampling interval

…

Outstanding Read IOs (>64) at time t(x) = (Outstanding Read IOs (>64) until time t(x) – Outstanding Read IOs (>64) until time t(x-1)) / Sampling interval

NOTE: If you are thinking that the above steps are very cumbersome, I agree with you. I have an Excel template that does it for me. All I need is the vscsiStats output of 20 consecutive samples saved in a single Excel file. Irfan, in his Virtual Scoop blog, has provided a few links to some neat blogs on visualizing vscsiStats. Check out his blog.
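If you prefer scripting over a spreadsheet, the per-interval calculation above can be sketched in a few lines of Python. This is purely illustrative – vscsiStats does not ship such a script, and the sample data structure (a list of dicts mapping histogram buckets to cumulative counts) is my own assumption:

```python
# Convert cumulative vscsiStats histogram samples into per-interval rates.
# Each sample is a dict mapping a histogram bucket (e.g. "OIO reads = 1")
# to the cumulative count observed up to that sampling instant.

SAMPLING_INTERVAL = 7.5  # seconds between successive samples

def per_interval_rates(samples, interval=SAMPLING_INTERVAL):
    """For each pair of successive samples, return per-bucket rates:
    (count at t(x) - count at t(x-1)) / sampling interval."""
    rates = []
    for prev, curr in zip(samples, samples[1:]):
        rates.append({bucket: (curr[bucket] - prev[bucket]) / interval
                      for bucket in curr})
    return rates

# Two made-up samples of the outstanding-read-IOs histogram:
samples = [
    {"OIO reads = 1": 100, "OIO reads = 2": 40},
    {"OIO reads = 1": 250, "OIO reads = 2": 55},
]
print(per_interval_rates(samples))
# first bucket: (250 - 100) / 7.5 = 20.0 per second
```

With 20 samples this yields 19 per-interval histograms, one per 7.5-second window – the same result the Excel template produces.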

I followed the above steps for the following histograms in the vscsiStats output.

Outstanding IOs:

Figure 1. Outstanding IOs during purge2 operation.

The graphs in figure 1 show the outstanding IOs during the purge2 operation. The number of outstanding read IOs was 64, whereas the number of outstanding write IOs was zero. (Tidbit: the pvscsi driver installed in the guest operating system has a default queue depth of 64. During the purge2 operation, the I/O queue in the pvscsi driver was full of read requests; hence the number of outstanding read IO requests coming from the VM was 64.)

Request Type: The graphs in figure 1 also show that the purge2 operation consisted of only read requests.

NOTE: Since the purge2 operation is completely dominated by reads, for the remaining vscsiStats histograms I considered only the respective read histograms.

Randomness: To identify the randomness of the purge2 operation, I looked at the ‘seek’ histogram in the vscsiStats output.

Figure 2. Seek distance between read requests

The read seek distance histogram shows the distance, in logical blocks, between consecutive read requests. A seek distance of 1 logical block between consecutive requests indicates a purely sequential workload; a seek distance of fewer than 10 logical blocks indicates a quasi-sequential workload; a seek distance of 10 or more logical blocks indicates a random workload. In this case, the seek distance between successive read requests was more than 500,000 logical blocks, indicating a purely random read access pattern.
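The classification above can be captured in a tiny helper. This is a sketch using the thresholds stated in the text; the function name is mine, not anything from vscsiStats:

```python
def classify_access_pattern(seek_distance_blocks):
    """Classify a workload by the typical seek distance (in logical
    blocks) between consecutive requests: 1 block = sequential,
    fewer than 10 = quasi-sequential, 10 or more = random."""
    d = abs(seek_distance_blocks)  # seeks can be backward (negative)
    if d == 1:
        return "sequential"
    if d < 10:
        return "quasi-sequential"
    return "random"

print(classify_access_pattern(500_000))  # the purge2 case -> "random"
```

Feeding it the 500,000+ block seek distance observed during purge2 confirms the random classification.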

Size of an I/O Read: The last parameter I needed was the size of an I/O read request during the purge2 operation. This was provided by the ‘ioLengthReads’ histogram.

Figure 3. Size of Read Requests during purge2 operation

The size of the read requests seen during the purge2 operation varied from 16KB to 64KB, with some requests as large as 128KB. The variation in I/O size suggests some kind of optimization employed during reads to fetch as much data as possible in one read operation.

Arrival Time for Reads: Another interesting histogram provided by vscsiStats (not required to create an IOmeter workload profile, but interesting nonetheless) is the arrival time for the I/O requests – in this case, for reads.

Figure 4. Arrival Time for Reads during purge2 operation

An arrival time of ≤100 microseconds indicates that the purge2 operation was very I/O intensive (also evidenced by the 64 outstanding read requests throughout the operation).

With the information I collected from vscsiStats, I created a workload in IOmeter with the following parameters:

  • Outstanding IOs: 64
  • Access Type: 100% Read, 100% Random
  • Request size: 48KB (median of 16KB, 32KB, 48KB, 64KB, 128KB)
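The request-size choice can be reproduced programmatically. A minimal sketch, assuming the observed ioLengthReads bucket sizes are collected into a list:

```python
from statistics import median

# Request-size buckets (in KB) observed in the ioLengthReads histogram
observed_sizes_kb = [16, 32, 48, 64, 128]

# The middle value of the observed bucket sizes becomes the single
# fixed request size for the IOmeter workload profile.
request_size_kb = median(observed_sizes_kb)
print(request_size_kb)  # 48
```

A weighted median (weighting each bucket by its request count) would be more faithful to the real distribution, but for a profile dominated by mid-size reads the simple median is a reasonable approximation.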

The rest you will learn when you read the white paper 😉

The next time you troubleshoot I/O problems or plan storage resources for your vSphere environment, remember that you have the secret sauce at your fingertips. Surprise your storage admins by speaking a language they understand – outstanding IOs, request size, access pattern, and more.

Isn’t vscsiStats cool?


About @_chethankumar
I currently work at PernixData as a Technical Marketing Engineer.
