Destaging Writes from Acceleration Tier to Primary Storage – Part II

In Part I of this series, I introduced FVP's asynchronous destaging of data from flash to primary storage in write-back mode. I discussed the various nuances of destaging and showed how asynchronous destaging helps applications by providing flash-class latency for a typical I/O workload. In this post, I will discuss the implications of accelerating a write-intensive workload and the impact of asynchronous destaging on its performance.

Accelerating Write-Intensive Workloads

A VM running a bursty write workload was used for this test. During the testing period, the workload issued only writes, which periodically peaked at a very high rate. The VM was accelerated by FVP and put in write-back mode. Figure 1 shows the write operations observed by the VM during the entire testing period. Writes reached as high as 15K/sec during the peak periods but were only ~250/sec otherwise. All writes were serviced by the flash device throughout the test, including the bursty periods. However, unlike the experiment in Part I of this series, the primary storage could not service writes at the rate the VM issued them. As a result, the rate of destaging the VM's data from flash to primary storage (11K/sec) was lower than the rate of writes issued by the VM (15K/sec), which meant not all of the VM's data could be destaged as soon as it arrived during the bursty period. Thanks to FVP, writes were acknowledged as soon as they arrived, allowing the VM to issue more writes, while the data was sent to the primary storage at a rate the storage was comfortable handling. The non-overlapping write peaks in Figure 1 illustrate this behavior and highlight the advantage of an acceleration tier that services writes as soon as they arrive, but sends the data to its permanent residence asynchronously without overwhelming it.
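To make this behavior concrete, here is a minimal sketch of a write-back tier that acknowledges writes at flash speed and drains dirty data to primary storage from a background destager. This is my own illustration, not FVP's implementation; the class, method names, and rates are hypothetical.

```python
import queue
import threading
import time


class WriteBackTier:
    """Toy model: acknowledge writes at flash speed, destage asynchronously."""

    def __init__(self, destage_rate_per_sec=11_000):
        self.dirty = queue.Queue()                  # blocks on flash, not yet on primary storage
        self.destage_interval = 1.0 / destage_rate_per_sec
        threading.Thread(target=self._destager, daemon=True).start()

    def write(self, block):
        # Write to flash (fast) and acknowledge immediately.
        self.dirty.put(block)                       # track the dirty block for later destaging
        return "ack"                                # the VM sees flash-class latency

    def _destager(self):
        # Drain dirty blocks to primary storage at a rate it can sustain.
        while True:
            block = self.dirty.get()
            time.sleep(self.destage_interval)       # stand-in for the slower SAN write
            self.dirty.task_done()
```

During a burst, the VM's write rate can exceed the drain rate, so the queue (the destaging region) grows while the VM still gets immediate acknowledgments.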


Fig 1. Write Operations

As the VM started issuing writes, they were serviced at flash speed (flash + network speed, when using peers), as shown in Figure 2. However, since the rate of writes from the VM outpaced the rate of destaging, the destaging region saw a continuous increase in the amount of data waiting to be destaged. FVP continued to service writes at flash speed until the occupancy of the destaging region reached a threshold. Once the occupancy crosses the threshold, FVP starts injecting additional latency when acknowledging writes back to the VM in order to throttle new writes. This threshold is a carefully selected value that gives the destager enough cushion to flush the dirtied data even when the primary storage is slow in servicing writes. The injected latency depends on the destaging-area occupancy and the SAN latency (the latency experienced by the destager when writing dirty blocks to the primary storage), and it is added only when acknowledging the writes that fill the destaging area above the threshold. Thus, the effective write latency (blue line) seen by the VM during bursty write periods was higher than the flash latency (orange line), but much lower than the datastore latency (green line).
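The throttling idea can be sketched as a simple function. This is my own illustration, not FVP's actual algorithm: below the threshold no latency is injected, and above it the injected latency grows with the destaging-area occupancy and the observed SAN latency.

```python
def injected_latency_ms(occupancy, capacity, threshold_fraction, san_latency_ms):
    """Illustrative throttle: no extra latency below the threshold; above it,
    the delay grows with how full the destaging area is and how slow the SAN is."""
    threshold = threshold_fraction * capacity
    if occupancy <= threshold:
        return 0.0                                  # writes still acknowledged at flash speed
    overflow_fraction = (occupancy - threshold) / (capacity - threshold)
    return overflow_fraction * san_latency_ms       # approaches, but stays below, raw SAN latency


# Example: destaging area 80% full, 70% threshold, 5 ms SAN latency
# -> roughly a third of the SAN latency added to the acknowledgment.
print(injected_latency_ms(occupancy=8_000, capacity=10_000,
                          threshold_fraction=0.7, san_latency_ms=5.0))
```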


Fig 2. Latency of Write Operations

The throttling aggressiveness is determined by an intelligent algorithm and adjusts dynamically to keep the occupancy of the destaging region under the threshold. If the occupancy does not drop, FVP increases the throttling further until the destager can drain enough data from the destaging region for the occupancy to fall below the threshold. As soon as the occupancy drops below the threshold, FVP resumes servicing writes at flash speed. In practice, writes from enterprise applications most often arrive in short spurts, and the default size chosen for the destaging area is adequate to absorb them, so such writes should be serviced at flash speed. A hypothetical sketch of this feedback loop follows below.
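One way to picture that feedback loop is a periodic adjustment of a throttle value based on occupancy; the function below is a hypothetical sketch, not FVP's code, and the step and cap values are made up.

```python
def adjust_throttle(throttle_ms, occupancy, threshold, step_ms=0.5, max_ms=10.0):
    """Hypothetical control loop, evaluated periodically on the write path:
    ratchet throttling up while the destaging region stays above the threshold,
    and drop it entirely once the destager catches up."""
    if occupancy > threshold:
        return min(throttle_ms + step_ms, max_ms)   # destager falling behind: throttle harder
    return 0.0                                      # under the threshold: back to flash speed
```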

In summary, even for write-intensive workloads, FVP can still provide an SLA that is much better than what the primary storage technologies available today promise. Even a heavy barrage of writes is handled by FVP at flash-like latencies, and with its intelligent throttling FVP absorbs the burst even when the primary storage cannot keep up.

UP NEXT: Accelerating Write-only Workloads ….

Resources:

  1. Iometer configuration file used for the test: bursty_writes
  2. Destaging Writes from Acceleration Tier to Primary Storage – Part I

Get Pernix’d

The sudden explosion in the number of solutions built on flash-based storage surprises me. Not too long ago, I remember researchers and the industry discussing the reliability and longevity of Solid State Disks (SSDs) at the FAST conference. Fast forward to today, and these concerns no longer seem to worry solution providers or consumers. I now work for PernixData, a company that aims to carry the virtualization journey forward from where hypervisors left off (post CPU and memory virtualization). Flash Virtualization Platform (FVP), the flagship product developed by PernixData, is a clustered flash tier created by virtualizing server-side flash storage to accelerate virtual machines' (VM) I/O access to block-based storage devices. In this blog post, I intend to discuss the motivation behind developing FVP and the key benefits it offers.

Rise of high-performing, expensive storage tier (SANs, NASs)

Over the years, storage technology has taken an interesting course. Although computing platforms (desktops, servers, laptops) provide persistent storage to the computing units, most IT users don't trust this layer to be robust enough to provide either the performance that meets their SLAs or the technologies that let the data rest in peace (dedupe, compression, encryption, snapshots). As a result, a new storage layer, external to the computing platform and with dedicated expertise to accomplish both, has emerged. Almost all research effort in the storage area continues in this external tier.

Bi-dimensional Problem

However, this external storage tier is plagued by a problem – improving a single layer in two orthogonal dimensions (performance and capacity) is extremely complicated.

The problem is illustrated better in the following graph:

(Graph 1)

Storage for most data centers is sized in two dimensions – capacity and performance. Most often, storage sized for capacity doesn't meet performance needs (application SLAs). In that case, additional storage media have to be added to meet the performance needs (this is mostly true for transaction-based applications). With advances in media capacity outpacing advances in media performance, this almost always leads to over-provisioning, because adding storage media means adding extra gigabytes and terabytes of unused capacity. Sizing storage this way has a significant capex implication.
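A back-of-the-envelope example of this effect, with hypothetical numbers chosen only to show how sizing for performance over-provisions capacity:

```python
# Hypothetical sizing exercise: an application needs 20 TB and 40,000 IOPS,
# using disks that each provide 1 TB of capacity but only 150 IOPS.
required_tb, required_iops = 20, 40_000
disk_tb, disk_iops = 1, 150

disks_for_capacity = -(-required_tb // disk_tb)          # 20 disks meet the capacity need
disks_for_performance = -(-required_iops // disk_iops)   # ~267 disks meet the IOPS need

wasted_tb = disks_for_performance * disk_tb - required_tb
print(disks_for_capacity, disks_for_performance, wasted_tb)
# -> 20 267 247: sizing for performance leaves roughly 247 TB of capacity unused.
```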

Another complication arises when almost all of the storage's processing cycles have to be dedicated to application I/O to meet SLAs. This pushes all non-application traffic (mostly administrative in nature, such as snapshots, backups, storage cloning, migration, etc.) to idle periods, which means storage admins either have to depend on heavy automation and scheduling of these tasks or burn the midnight oil to ensure they succeed. This has significant opex implications.

In summary, meeting performance requirements forces capacity to be over-provisioned, while sticking to capacity needs compromises performance. It is therefore very hard to achieve both at once.

Where is the local storage?

What happened to local storage? Applications that use local storage effectively can be counted on one hand – Hadoop, high-performance computing applications, and companies like Google and Facebook, to name a few. But they take a different approach to utilizing local storage: they implement all the aforementioned capacity and resiliency features in software on commodity hardware, and to obtain the performance their applications need, they use amounts of hardware the average business can't even imagine. Don't forget – they can also throw many engineers with specialized expertise at the bi-dimensional problem. Finally, there are industry-standard benchmarks such as the TPC suites that can use local storage to reduce the cost of performance; there, data protection at the hardware level is not given high priority.

The lack of interest and demand has largely limited innovation in the local storage tier – except for improvements to the media itself. Server vendors now support SAS/SATA/PCIe solid-state disks (SSDs) alongside traditional SATA/SAS magnetic disks. But the concern remains – who will use them?

Flash Virtualization Platform

Meanwhile, another revolution happened in the IT industry. VMware, with its flagship product vSphere, fork-lifted the compute layer away from the storage layer. This opened up interesting opportunities. One such opportunity is to solve the problem I have been discussing all along – to split the storage tier into two separate dimensions, performance and capacity. PernixData is among the early few who recognized this opportunity. The result of their tireless effort is what you see today – Flash Virtualization Platform, an ethereal storage layer that uses local fast storage (SAS/SATA/PCIe SSDs) to accelerate transient data and SAN storage to rest persistent data.

FVP aims to bridge the orthogonal challenges that plague the storage layer by intelligently combining the two storage tiers. This independent use of the two tiers opens up a plethora of opportunities for server and storage vendors. Server vendors can focus on providing high-speed local storage for servicing transient data without worrying about complicated data-at-rest technologies, while storage vendors can focus on enriching their arrays with attractive capacity-saving and data-protection technologies without worrying about the performance impact of those technologies. Essentially, FVP lets you use storage solutions from your preferred vendor while speeding up data access with the best flash technology out there.

Let us revisit the problem.

(Graph 2)

When the persistent tier (external storage) is combined with the transient tier (flash media in local storage), a new solution emerges: the persistent tier can be sized for current capacity requirements (plus room for growth), while the transient tier uses the latest flash technology to meet I/O performance demands, absorbing bursts of I/O requests from applications the moment they occur. Even if an emergency administrative task has to be scheduled, application users won't notice any impact because their I/O requests are serviced by the transient tier. How does FVP achieve this? Check out the videos here.

This solution brings significant capex savings because the persistent storage no longer has to be over-sized; the only additional investment is the flash media (whose cost keeps falling) and the FVP license ;-). There are noticeable opex savings as well – no need to maintain the extra storage (power, space, and cooling). Storage admins can breathe easy, since the administrative tasks they run on the external storage are hidden from application users and their impact is mostly not felt.

Linguistic Lesson

‘Pernix’ means agile, active. The name is very apt for what FVP can do to your IT environments; it can activate your virtual machines. Think of it as the magical spinach that gives Popeye his awesome power. “Protect your investment, Pernix your data”.

As Satyam (CTO, PernixData) likes to ask – do you want to get 'Pernix'd? I do; that's why I decided to join the team. The question is – do you? If the answer is yes, join the beta program today.

Stay tuned as more is yet to come …
