Destaging Writes from Acceleration Tier to Primary Storage – Part I

Frank posted a nice article on the write-acceleration policies supported in FVP. It is a great read for anyone looking for a quick intro to the two write-acceleration policies FVP supports. At the end, some readers asked a few interesting questions regarding ‘Write Destaging’, answers to which require a deeper dive than simple two-line replies. Hence, I thought of explaining FVP’s destager architecture as a multi-part blog series. This post offers an introduction to asynchronous destaging of a VM’s data from flash, using an example.

BTW, kudos to all the readers who raised these questions! It just shows how well they understood the technicalities of write-acceleration. I tip my hat to you folks, and bow to you, Frank.

Destaging Writes from Flash to Primary Storage

In write-back mode, FVP acknowledges the writes coming from a VM as soon as they are written to flash. The data is written to the primary storage (the permanent residence of the data) eventually, at a rate the primary storage is comfortable receiving it. This task of destaging the data written by VMs to their permanent residence is delegated to what is called the ‘Destager’, a key component of FVP that runs in the background. Essentially, in write-back mode, writes from the VMs are acknowledged at flash speed (flash + network speed, when using peers), while they are sent to their permanent residence asynchronously at SAN speed. Note that asynchronous data destaging is relevant only in write-back mode.
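To make the asynchronous path concrete, here is a minimal sketch of the write-back idea in Python. This is decidedly not FVP code: the function names, queue, and latencies below are illustrative assumptions. The point is simply that the write is acknowledged once it lands on flash, while a background task copies it to primary storage later.

```python
import queue
import threading
import time

destage_queue = queue.Queue()              # data waiting to be moved to the SAN

def flash_write(block):
    time.sleep(0.0002)                     # stand-in for a ~0.2 ms flash write

def san_write(block):
    time.sleep(0.003)                      # stand-in for a ~3 ms SAN write

def handle_vm_write(block):
    flash_write(block)                     # fast path: local flash (+ peer copies)
    destage_queue.put(block)               # remember it for the destager
    return "ACK"                           # the VM sees flash latency, not SAN latency

def destager_loop():
    while True:                            # runs in the background
        block = destage_queue.get()
        san_write(block)                   # slow path: the data's permanent residence

threading.Thread(target=destager_loop, daemon=True).start()
print(handle_vm_write(b"some data"))       # returns at flash speed
time.sleep(0.01)                           # give the destager a moment to drain
```

The VM’s acknowledgment never waits on `san_write`; that call happens on the background thread, which is the essence of asynchronous destaging.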

Destaging Area

At any given time, FVP uses flash in multiple ways – to host data read frequently by VMs (to accelerate reads), to buffer primary copies of data written by VMs running on the server which houses the flash (to accelerate writes), and to keep replicas of data written by VMs running on remote servers (to provide fault tolerance in write-back mode). In order to accelerate many VMs on a vSphere host, and to accelerate both the reads and writes of these VMs, FVP has to manage the flash real estate very efficiently. FVP uses dynamically expanding and shrinking regions on flash to hold the writes coming from the VMs until all the data is moved to its permanent residence. This region is called the ‘destaging area’. Each VM that is configured to be in write-back mode gets a separate destaging area.
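If it helps to picture it, here is a hedged sketch of the per-VM destaging area concept: a buffer of dirty blocks that expands as the VM writes and shrinks as the destager drains it. The class, method, and VM names are illustrative assumptions, not FVP internals.

```python
from collections import deque

class DestagingArea:
    """One per write-back VM: a flash-resident buffer of data not yet on the SAN."""

    def __init__(self, vm_name):
        self.vm_name = vm_name
        self.dirty_blocks = deque()         # expands and shrinks dynamically

    def record_write(self, block):
        self.dirty_blocks.append(block)     # grows as the VM writes

    def drain_one(self):
        # shrinks as the destager moves data to its permanent residence
        return self.dirty_blocks.popleft() if self.dirty_blocks else None

    def size(self):
        return len(self.dirty_blocks)

# A separate destaging area for every VM configured in write-back mode
areas = {vm: DestagingArea(vm) for vm in ("vm-sql01", "vm-web02")}
```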

Destaging Frequency

FVP acknowledges a write issued by a VM in write-back mode as soon as it is written to the VM’s destaging area on the flash. In the background, FVP activates the destager to migrate the VM’s data to its permanent residence. The migration happens at a rate the primary storage is capable of handling. When multiple VMs are configured to be in write-back mode, all their writes are acknowledged as soon as they are written to the individual destaging areas. In this case, the destager migrates data from the destaging areas of all the VMs simultaneously, but more importantly, without overwhelming the underlying primary storage.
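Conceptually, the destager behaves like a paced drain loop over all of the per-VM destaging areas. The sketch below assumes a simple round-robin order and a fixed write-rate cap purely for illustration; FVP’s actual scheduling is more sophisticated than this.

```python
import itertools
import time
from collections import deque

def san_write(block):
    time.sleep(0.003)                       # stand-in for a ~3 ms write to primary storage

def destage_all(areas, max_writes_per_sec=2000):
    """areas: VM name -> deque of dirty blocks awaiting destaging."""
    pace = 1.0 / max_writes_per_sec         # cap the rate sent to the SAN
    for vm in itertools.cycle(list(areas)): # visit each VM's area in turn
        if areas[vm]:
            san_write(areas[vm].popleft())  # drain one block from this VM
            time.sleep(pace)                # never overwhelm the backend
        elif not any(areas.values()):
            break                           # every destaging area is empty

destage_all({"vm-sql01": deque([b"a", b"b"]), "vm-web02": deque([b"c"])})
```

The key design point is that the pacing is driven by what the primary storage can absorb, not by how fast the VMs are writing to flash.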

Implications of Destaging on Write-Acceleration: Flash Class Application Latencies! 

Let me illustrate the mechanics of the destager with an example. In this experiment, a Windows VM running Iometer issued writes in bursts to the primary storage. Figure 1 shows the rate of write operations during the experiment. Writes reached as high as 4K/sec during the bursty periods. This VM was selected for acceleration by FVP and was put in write-back mode. All the writes were serviced by flash, and the written data was destaged to the primary storage asynchronously by the destager. In this experiment, the primary storage was able to service writes at a high rate. Hence, the destager could empty the VM’s destaging area as soon as data arrived.

The result: Writes/sec seen by the VM = Writes/sec serviced by flash = Destaging rate = Writes/sec written to primary storage asynchronously (hence the lines representing the rate of writes serviced by the different components overlap in Figure 1).

Fig 1. Write Operations

However, the latency of write operations seen by the VM tells a different story. Figure 2 shows the latency of the write operations observed by the different components during the test. By virtue of write acceleration by FVP, all the writes were serviced by flash at flash speed (orange line showing “Local Flash Write” latency), even during periods of bursty writes. The write latency seen by the VM was almost the same as the flash write latency (blue line showing “Total (Effective)” latency). Flash latency increased by only 200 microseconds during the bursty period. In contrast, the I/O latency witnessed by the destager when destaging the VM’s data to the primary storage reached as high as 3ms** (green line showing “Datastore Write” latency). This would be the latency seen by the VM if it were to issue writes directly to the primary storage.

Fig 2. Latency of Write Operations

Most applications exhibit write behavior similar to that shown in the above illustration. For such workloads, FVP clearly offers an unprecedented boost in I/O QoS. This boost can be realized by merely adding an SSD to the vSphere hosts and creating a clustered acceleration tier on the SSDs using FVP.

NEXT UP: Accelerating write-intensive workloads…

** The primary storage used for this experiment was an all-flash SAN. In reality, latency could be even higher (a few tens of milliseconds) if the primary storage device were configured on magnetic disks.

Resources:

  1. Iometer configuration file used for the test: Bursty_writes
  2. Frank’s blog on Write-Back and Write-Through policies in FVP
  3. FVP Writeback policy deep dive whiteboard session