Microsoft SQL Server 2014 on VMware vSAN 6.2 All-Flash

Executive Summary

This section covers the business case, solution overview, and key results of the Microsoft SQL Server 2014 on VMware vSAN 6.2 All-Flash solution.

Business Case

Microsoft SQL Server 2014 offers an array of new and improved capabilities with emphasis on reliability, availability, serviceability, and performance. It is crucial for administrators to effectively design and implement storage solutions for Microsoft SQL Server. With more and more production servers being virtualized, the demand for highly converged server-based storage is surging. VMware® vSAN™ aims to provide highly scalable, available, reliable, and high-performance storage using cost-effective hardware, specifically direct-attached disks in VMware ESXi™ hosts. vSAN adheres to a new policy-based storage management paradigm, which simplifies and automates the complex management workflows that exist in enterprise storage systems with respect to configuration and clustering.

Solution Overview

This solution addresses common business challenges that CIOs face today in online transaction processing (OLTP) environments that require predictable performance and cost-effective storage, and it helps customers design and implement optimal configurations specifically for Microsoft SQL Server on vSAN. The Microsoft SQL Server on All-Flash vSAN solution uses servers with local solid-state drives (SSDs) running VMware vSphere® 6.0 for the application workloads as well as for management components such as the VMware vCenter® Server Appliance™ and Active Directory. This solution provides a reference architecture for a hyperconverged infrastructure combining x86-based compute and flash storage for SQL Server 2014 workloads running on Windows Server 2012.

Key Results

The following highlights validate that All-Flash vSAN is an enterprise-class storage solution suitable for Microsoft SQL Server:

  • Consistent SQL Server OLTP performance and predictable virtual disk latency
  • Highly efficient space savings through deduplication and compression, and erasure coding
  • Highly resilient storage that tolerates component failures
  • Cross-site Tier-1 application support with reasonable performance and business continuity capabilities

vSAN SQL Server Reference Architecture

This section covers the purpose, scope, and intended audience of the vSAN SQL Server Reference Architecture.

Purpose

This reference architecture validates All-Flash vSAN’s ability to support industry-standard TPC-E-like workloads.

Scope

The solution validates the performance and functionality of enterprise-class SQL Server instances in a virtualized VMware environment running SQL Server 2014 on All-Flash vSAN. Test scenarios include:

  • All-Flash vSAN performance
  • All-Flash vSAN Stretched Cluster performance
  • Space saving by enabling deduplication and compression and erasure coding (RAID 5) 
  • All-Flash vSAN resiliency
  • Site failure and continuous data availability on All-Flash vSAN Stretched Cluster 
  • vSphere vMotion across sites in All-Flash vSAN Stretched Cluster

Audience

This reference architecture is intended for SQL Server database administrators and storage architects involved in planning, architecting, or administering a SQL Server environment on vSAN.

Technology Overview

This section provides an overview of the technologies used in this solution:

  • VMware vSphere 6.0 Update 2
  • VMware vSAN 6.2
  • All-Flash architecture
  • Deduplication and compression for space efficiency 
  • Erasure coding
  • Quality of Service (QoS)
  • VMware vSAN Stretched Cluster
  • Microsoft SQL Server 2014

VMware vSphere 6.0 Update 2

VMware vSphere is the industry-leading virtualization platform for building cloud infrastructures. It enables users to run business-critical applications with confidence and respond quickly to business needs. vSphere accelerates the shift to cloud computing for existing data centers and underpins compatible public cloud offerings, forming the foundation for the industry’s best hybrid cloud model. VMware vSphere 6.0 Update 2 supports the following new features that can benefit the solution:

  • High Ethernet link speeds: ESXi 6.0 Update 2 supports 25G and 50G Ethernet link speeds.
  • VMware Host Client: the VMware Host Client is an HTML5 client used to connect to and manage single ESXi hosts.
  • VMware vSAN 6.2: the new VMware vSAN 6.2 is an integral part of ESXi 6.0 Update 2.

VMware vSAN 6.2

VMware vSAN is VMware’s software-defined storage solution for hyperconverged infrastructure, a software-driven architecture that delivers tightly integrated compute, networking, and shared storage from a single virtualized x86 server.

With the major enhancements in vSAN 6.2, vSAN provides enterprise-class scale and performance as well as new capabilities that broaden the applicability of the proven vSAN architecture to business-critical environments. The new features of vSAN 6.2 include:

  • Deduplication and compression: software-based deduplication and compression optimizes All-Flash storage capacity, providing as much as 7x data reduction with minimal CPU and memory overhead.
  • Erasure coding: erasure coding increases usable storage capacity by up to 100 percent while keeping data resiliency unchanged. It can tolerate one or two failures with single-parity or double-parity protection.
  • QoS with IOPS limits: policy-driven QoS limits and monitors the IOPS consumed by specific virtual machines, eliminating noisy-neighbor issues and helping manage performance SLAs.
  • Software checksum: end-to-end data checksums detect and resolve silent errors to ensure data integrity; this feature is also policy-driven.
  • Client Cache: leverages local dynamic random access memory (DRAM) on each host to accelerate read performance for local virtual machines. The amount of memory allocated is 0.4 percent of total host memory, up to 1GB per host (see the sketch below).
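As a quick illustration of this sizing rule, the following Python sketch (our illustration, not VMware code) computes the client cache size for a given host memory configuration:

    # Client Cache sizing rule described above: 0.4 percent of total host
    # memory, capped at 1GB per host.

    def client_cache_gb(host_memory_gb: float) -> float:
        """Return the DRAM client cache size for a host, in GB."""
        return min(host_memory_gb * 0.004, 1.0)

    # The hosts in this solution have 256GB of RAM each (see Table 1),
    # so the cache hits the 1GB cap.
    print(client_cache_gb(256))   # 1.0
    print(client_cache_gb(128))   # 0.512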

With these new features, vSAN 6.2 provides the following advantages:

  • VMware HyperConverged Software (HCS)-powered All-Flash solutions available at up to 50 percent less than the costs of other competing hybrid solutions in the market.
  • Increased storage utilization by as much as 10x through new data efficiency features including deduplication and compression, and erasure coding.
  • Future-proof IT environments with a single platform supporting business-critical applications, OpenStack, and containers with up to 100K IOPS per node at sub-millisecond latencies.

All-Flash Architecture

All-Flash vSAN aims to deliver extremely high IOPS with predictably low latencies. Two different grades of flash devices are commonly used in an All-Flash vSAN configuration: lower-capacity, higher-endurance devices for the cache layer, and more cost-effective, higher-capacity, lower-endurance devices for the capacity layer. Writes are performed at the cache layer and then destaged to the capacity layer only as needed. This helps extend the usable life of the lower-endurance flash devices in the capacity layer.


Figure 1: vSAN All-Flash Datastore

Deduplication and Compression for Space Efficiency

Near-line deduplication and compression happen during destaging from the caching tier to the capacity tier. You enable “space efficiency” at the cluster level, and deduplication and compression are applied on a per-disk-group basis; larger disk groups tend to yield a higher deduplication ratio. Blocks are compressed after they are deduplicated.
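To make the ordering concrete, the following minimal Python sketch illustrates the general technique: deduplicate fixed-size blocks first, then compress only the unique blocks. This is a conceptual illustration only; it does not represent vSAN's actual block size, hashing scheme, or on-disk format.

    # Conceptual near-line data reduction: deduplicate first, then compress.
    # Illustration only; not vSAN's implementation.
    import hashlib
    import zlib

    BLOCK_SIZE = 4096  # assumed fixed block size for this illustration

    def dedupe_and_compress(data: bytes):
        """Keep one compressed copy of each unique block."""
        store = {}   # block fingerprint -> compressed block
        layout = []  # logical block order, as fingerprints
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in store:                   # store unique blocks once...
                store[fp] = zlib.compress(block)  # ...compressed after dedup
            layout.append(fp)
        return store, layout

    data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # three duplicate blocks, one unique
    store, layout = dedupe_and_compress(data)
    physical = sum(len(c) for c in store.values())
    print(f"logical {len(data)} bytes -> physical {physical} bytes "
          f"({len(store)} unique of {len(layout)} blocks)")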


Figure 2: Deduplication and Compression for Space Efficiency

Erasure Coding

Erasure coding provides the same levels of redundancy as mirroring but with a reduced capacity requirement. In general, erasure coding is a method of breaking data into multiple pieces and spreading it across multiple devices, while adding parity data so that the data can be recreated if one or more pieces are corrupted or lost.

In vSAN 6.2, two modes of erasure coding are supported:

  • RAID 5 in 3+1 configuration, which means 3 data blocks and 1 parity block per stripe.
  • RAID 6 in 4+2 configuration, which means 4 data blocks and 2 parity blocks per stripe.

RAID 5

RAID 5 requires a minimum of four hosts because it uses 3+1 logic; with four hosts, one can fail without data loss. This results in a significant reduction of required disk capacity: a 20GB disk would normally require 40GB of disk capacity with mirrored protection, but with RAID 5 the requirement is only around 27GB.


Figure 3: RAID 5 Data and Parity Placement

RAID 6

With RAID 6, two host failures can be tolerated. For a 20GB disk, RAID 1 protection against two failures would require 60GB of disk capacity, whereas RAID 6 requires just 30GB. Note that the parity is distributed across all hosts; there is no dedicated parity host. RAID 6 uses a 4+2 configuration, so at least six hosts are required.


Figure 4: RAID 6 Data and Parity Placement
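The capacity arithmetic behind these RAID 5 and RAID 6 examples reduces to a raw-capacity multiplier per protection scheme. The following Python sketch, our illustration of the math above, reproduces the numbers for a 20GB disk:

    # Raw capacity required per protection scheme, as described above:
    # RAID 1 stores FTT+1 full copies; RAID 5 (3+1) adds one parity block
    # per three data blocks; RAID 6 (4+2) adds two per four.
    MULTIPLIERS = {
        "RAID 1 (FTT=1)": 2.0,        # two mirror copies
        "RAID 5 (3+1)":   4.0 / 3.0,  # ~1.33x
        "RAID 1 (FTT=2)": 3.0,        # three mirror copies
        "RAID 6 (4+2)":   6.0 / 4.0,  # 1.5x
    }

    vmdk_gb = 20
    for scheme, factor in MULTIPLIERS.items():
        print(f"{scheme}: {vmdk_gb * factor:.1f}GB required")
    # RAID 1 (FTT=1): 40.0GB   RAID 5 (3+1): 26.7GB (the ~27GB above)
    # RAID 1 (FTT=2): 60.0GB   RAID 6 (4+2): 30.0GB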

Space efficiency features (including deduplication and compression, and erasure coding) work together to provide up to 10x reduction in dataset size.

Quality of Service

vSAN 6.2 introduces a QoS feature that limits the number of IOPS an object may consume. In underutilized configurations, limits may not be necessary because objects likely have sufficient resources to meet the needs of their workloads. While having more than enough resources is desirable, it does not come without cost; efficiently sized configurations are typically a good balance of cost and available resources. What constitutes appropriate resources for a workload can also change over time, especially as utilization grows or workloads are added over the lifecycle of a platform.

With the QoS addition to vSAN 6.2, IOPS limits are available. QoS in vSAN 6.2 is a Storage Policy Based Management (SPBM) rule. Because QoS is applied to vSAN objects through a storage policy, it can be applied to individual components or to the entire virtual machine without interrupting the operation of the virtual machine.
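vSAN enforces these limits internally through the storage policy. Purely as a conceptual illustration of how an IOPS cap can work (not vSAN's implementation), the following Python sketch applies a token-bucket limit to incoming I/Os:

    # Conceptual token-bucket IOPS limiter; not vSAN's internal implementation.
    import time

    class IopsLimiter:
        def __init__(self, iops_limit: int):
            self.rate = iops_limit           # tokens (I/Os) replenished per second
            self.tokens = float(iops_limit)  # start with one second of budget
            self.last = time.monotonic()

        def try_io(self) -> bool:
            """Admit one I/O if the object is under its IOPS limit."""
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False   # over the limit: the I/O would be delayed

    limiter = IopsLimiter(iops_limit=1000)
    admitted = sum(limiter.try_io() for _ in range(5000))
    print(f"admitted {admitted} of 5000 back-to-back I/Os")   # roughly 1,000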

VMware vSAN Stretched Cluster

vSAN 6.1 introduced the Stretched Cluster feature. vSAN Stretched Cluster provides customers with the ability to deploy a single vSAN Cluster across multiple data centers. vSAN Stretched Cluster is a specific configuration implemented in environments where disaster or downtime avoidance is a key requirement.

vSAN Stretched Cluster builds on the foundation of fault domains. The fault domain feature introduced rack awareness in vSAN 6.0: it allows customers to group multiple hosts into failure zones across server racks to ensure that replicas of virtual machine objects are not provisioned in the same failure zone or server rack. vSAN Stretched Cluster requires three failure domains based on three sites (two active sites and one witness site). The witness site only hosts the witness virtual appliance, which stores witness objects and cluster metadata and provides cluster quorum services during failure events.

Microsoft SQL Server 2014

Microsoft SQL Server is one of the most widely deployed database platforms in the world, with many organizations having dozens or even hundreds of instances deployed in their environments. The flexibility of SQL Server, with its rich application capabilities combined with the low costs of x86 computing, has led to a wide variety of SQL Server installations ranging from large data warehouses to small, highly specialized departmental and application databases. The flexibility at the database layer translates directly into application flexibility, giving end users more useful application features and ultimately improving productivity.

Solution Configuration

This section introduces the resources and configurations for the solution including:

  • Solution architecture 
  • Hardware resources
  • Software resources
  • Network configuration
  • VMware ESXi Server: storage controller mode
  • Microsoft SQL Server 2014 virtual machine configuration

Solution Architecture

This solution has two architectures: one is a vSAN Cluster as shown in Figure 5, and the other is a vSAN Stretched Cluster. The vSAN Stretched Cluster configuration used the same servers with the same configuration as the vSAN Cluster, but the four nodes were split evenly across two sites, with each site hosting two SQL Server virtual machines.


Figure 5. SQL Server on All-Flash vSAN Datastore

Hardware Resources

We used direct-attached SSDs on each ESXi server to provide the vSAN datastore. Each ESXi server has two disk groups, each consisting of one cache-tier SSD and four capacity-tier SSDs. The raw capacity of the vSAN datastore is around 11.88TB.

Each ESXi server in the vSAN Cluster has the configuration shown in Table 1.

Table 1. ESXi Server Configuration

PROPERTY SPECIFICATION
Server 4 x Dell PowerEdge R630
CPU 2 sockets, 12 cores per socket, 2.3GHz, with hyper-threading enabled
RAM 256GB DDR4 RDIMM
Network adapter 2 x Intel 10 Gigabit X540-AT2 + I350 1Gb Ethernet
Storage adapter 2 x 12Gbps SAS PCI-Express
Disks Cache tier: 2 x 400GB SSDs
Capacity tier: 8 x 400GB SSDs

Software Resources

Table 2 shows the software resources used in this solution.

Table 2. Software Resources

SOFTWARE VERSION PURPOSE
VMware vCenter and ESXi 6.0 U2 ESXi Cluster to host virtual machines and provide vSAN Cluster. VMware vCenter Server provides a centralized platform for managing VMware vSphere environments
VMware vSAN 6.2 Software-defined storage solution for hyperconverged infrastructure
Microsoft SQL Server 2014 Enterprise Edition, SP1 Database software
Windows Server 2012 R2 x64 SP1, Enterprise Edition Guest operating system for the SQL Server database virtual machines, load generation virtual machines, the domain controller, and VMware vCenter Server
Benchmark Factory 7.2 TPC-E-like data generator and workload test client

Network Configuration

A VMware vSphere Distributed Switch™ acts as a single virtual switch across all associated hosts in the data cluster. This setup allows virtual machines to maintain a consistent network configuration as they migrate across hosts. The vSphere Distributed Switch uses two 10GbE adapters for teaming and failover. A port group defines properties regarding security, traffic shaping, and NIC teaming. We used the default port group settings except for the uplink failover order shown in Table 3, which also lists the distributed switch port groups created for the different functions and the respective active and standby uplinks used to balance traffic across the available uplinks.

Table 3. Uplink and VLAN settings of the Distributed Switch Port Groups

DISTRIBUTED SWITCH PORT GROUP NAME VLAN ACTIVE UPLINK STANDBY UPLINK
vSphere vMotion 4021 Uplink1 Uplink2
vSAN Cluster 1284 Uplink2 Uplink1
vSAN Stretched Cluster (Site A, preferred site) 4040 Uplink2 Uplink1
vSAN Stretched Cluster (Site B) 4041 Uplink2 Uplink1


We used different VLANs to separate the vSAN traffic between the sites in the vSAN Stretched Cluster, both to emulate an actual geographically dispersed network environment and to separate vSphere vMotion traffic from vSAN traffic while providing NIC failover. In the All-Flash vSAN deployment shown in Figure 6, VLAN 1284 was used for vSAN traffic; in the vSAN Stretched Cluster deployment shown in Figure 7, VLAN 4040 carried vSAN traffic in site A (the preferred site) and VLAN 4041 carried vSAN traffic in site B. The active uplink is uplink2 and the standby uplink is uplink1. The virtual machine running the witness ESXi appliance and the virtual machine running the layer-3 router for the vSAN Stretched Cluster deployment communicate through a separate VLAN (VLAN ID: 4036), as shown in Figure 7.
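To make the uplink separation easy to verify, the following Python sketch (our illustration, not part of the solution) encodes Table 3 as data and checks that no vSAN port group shares its active uplink with vSphere vMotion:

    # Table 3 as data, with a sanity check that vSphere vMotion and vSAN
    # traffic use different active uplinks in normal operation.
    PORT_GROUPS = {
        "vSphere vMotion":                            {"vlan": 4021, "active": "Uplink1", "standby": "Uplink2"},
        "vSAN Cluster":                               {"vlan": 1284, "active": "Uplink2", "standby": "Uplink1"},
        "vSAN Stretched Cluster (Site A, preferred)": {"vlan": 4040, "active": "Uplink2", "standby": "Uplink1"},
        "vSAN Stretched Cluster (Site B)":            {"vlan": 4041, "active": "Uplink2", "standby": "Uplink1"},
    }

    vmotion_active = PORT_GROUPS["vSphere vMotion"]["active"]
    for name, pg in PORT_GROUPS.items():
        if name != "vSphere vMotion":
            assert pg["active"] != vmotion_active, f"{name} shares the vMotion uplink"
    print("vSphere vMotion and vSAN active uplinks are separated")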


Figure 6. Distributed Switch VLAN in the All-Flash vSAN Cluster


Figure 7. Distributed Switch VLAN in the All-Flash vSAN Stretched Cluster

VMware ESXi Server: Storage Controller Mode

The storage controller supports both pass-through and RAID mode. Pass-through mode is preferred for vSAN because it gives vSAN complete control of the local SSDs attached to the storage controller.

Microsoft SQL Server 2014 Virtual Machine Configuration

We configured four SQL Server virtual machines for the performance tests and created the databases with Benchmark Factory for Databases. The database and index files consumed approximately 200GB and 500GB of space respectively. We assigned 24 vCPUs to the VMs hosting the 200GB databases and 32 vCPUs to the VMs hosting the 500GB databases. We set the maximum server memory to 48GB for the 200GB databases and 128GB for the 500GB databases, with the Lock Pages in Memory privilege granted to the SQL Server instance startup account. Table 4 lists the SQL Server 2014 VM configurations.

Table 4. SQL Server 2014 VM Configuration

SQL VM ROLE VCPU MEMORY(GB) VM NAME SQL SERVER VERSION OPERATING SYSTEM
VM1/VM2—200GB TPC-E-like DB 24 80 sql200-a and sql200-b SQL Server 2014 Enterprise Edition, SP1 Windows Server 2012 Datacenter 64-bit
VM3/VM4—500GB TPC-E-like DB 32 160 sql500-a and sql500-b SQL Server 2014 Enterprise Edition, SP1 Windows Server 2012 Datacenter 64-bit

Database VM and Disk Layout

For the TPC-E-like workload, the database size is based on the actual disk space requirement and additional space for database growth:

  • The virtual disk configuration for the 200GB database is: 1 x 100GB OS disk, 2 x 200GB data disks, 1 x 100GB log disk, and 1 x 80GB tempdb disk.
  • The virtual disk configuration for the 500GB database is: 1 x 100GB OS disk, 4 x 250GB data disks, 1 x 100GB log disk, and 2 x 80GB tempdb disks.
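The per-VM provisioned capacity follows directly from these layouts. The following Python sketch computes the totals (680GB and 1,360GB), which reappear in the space-saving test later in this document:

    # Provisioned virtual disk capacity per database VM, from the layouts above.
    layouts_gb = {
        "200GB DB VM": [100] + [200, 200] + [100] + [80],   # OS, data, log, tempdb
        "500GB DB VM": [100] + [250] * 4 + [100] + [80, 80],
    }
    for vm, disks in layouts_gb.items():
        print(f"{vm}: {sum(disks)}GB provisioned")
    # 200GB DB VM: 680GB provisioned
    # 500GB DB VM: 1360GB provisioned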

Solution Validation

In this section, we present the test methodologies and processes used for this solution, detailing the testing tools, scenarios, and results. We also cover the stripe width and QoS best practice validations.

Testing Tools

We used the following monitoring tools and benchmark tools in the solution testing:

  • Monitoring tools
    • vSAN Observer

vSAN Observer is designed to capture performance statistics and bandwidth for a VMware vSAN Cluster. It provides an in-depth snapshot of IOPS, bandwidth, and latencies at different layers of vSAN, ratio of read cache hits and misses, outstanding I/Os, and congestion. This information is provided at different layers in the vSAN stack to help troubleshoot storage performance. For more information about the VMware vSAN Observer, see the Monitoring VMware vSAN with vSAN Observer documentation.

    • ESXTOP

ESXTOP is a command-line tool that collects data and provides real-time information about the resource usage of a vSphere environment, such as CPU, disk, memory, and network usage. We measured ESXi server performance with this tool.

    • Windows Performance Monitor

Performance Monitor is a Windows tool that captures statistics about SQL Server, memory usage, and I/O throughput at the SQL Server instance and operating system levels. We measured disk latency and virtual CPU utilization with this tool.

  • Database and load generation tool
    • Benchmark Factory for Databases

Benchmark Factory for Databases is a database performance-testing tool that enables you to conduct database industry-standard benchmark testing and scalability testing. See Benchmark Factory for Databases for more information.

Table 5 lists the key performance counters used in the testing.

Table 5. Key Metrics of Benchmark Factory for Databases

MONITOR COUNTER DESCRIPTION
Response time The amount of time it takes to respond to a SQL request.
Transactions per second (TPS) Measures the transactions per second in the user database. The higher the value, the better for a given SQL Server design.
Transaction time The sum of the response time and the retrieve time, where the retrieve time is the time from when the server responds to a SQL statement until the last bytes of data are received.

All-Flash vSAN Performance

Testing Overview

This test used SQL Server 2014 Enterprise Edition running on Windows Server 2012 R2 guest VMs, stressed by Benchmark Factory for Databases. We created four virtual machines on the 4-node All-Flash vSAN Cluster, with each host running one virtual machine, one SQL Server instance, and one TPC-E-like database. We used two database sizes: 200GB and 500GB.

Each user simulates the same TPC-E-like workload against the system under test (SUT), so one user performs the workload once, but running 10 users would run the workload 10 times in parallel.

Note: The number of users relates to the Benchmark Factory load and does not correspond to actual users connecting to a database server. In the performance test, we used 70 users to generate the workload without any transaction delay. We focused on the aggregate All-Flash vSAN performance of the 4-node cluster. Each test ran for one hour, with a 15-minute preconditioning period and a 45-minute sample period.

Testing Scenarios and Results

We validated the four test scenarios listed below. We measured key performance indicators, including aggregate TPS, response time, and transaction time, from Benchmark Factory for Databases. We also measured disk performance for every virtual machine, including total IOPS and average disk read and write latency.

NAME FTT CHECKSUM RAID LEVEL DEDUPLICATION AND COMPRESSION
Scenario 1 1 No 1 No
Scenario 2 1 Yes 1 No
Scenario 3 1 Yes 1 Yes
Scenario 4 1 Yes 5 Yes

Table 6. Testing Scenarios

In all of the above test scenarios, vSAN was configured with a stripe width of one, the default cache policies, and no cache reservation.

In the SQL Server TPC-E-like tests on the hyperconverged platform, we kept the CPU utilization of the virtual machines below 80 percent.

We took scenario 1 (deduplication and compression deactivated, no checksum) and scenario 3 (deduplication and compression enabled, checksum enabled) as the typical configurations and detailed their test results. We also summarized the test results of all four scenarios in Table 7.

For scenarios 1 and 3, we detailed the test results as shown in Figure 8 and Figure 9:

  • Performance ranged from 1,906 and 1,907 TPS on the 200GB databases to 2,051 and 2,158 TPS on the 500GB databases with deduplication and compression, and checksum deactivated.
  • Performance ranged from 1,850 and 1,851 TPS on the 200GB databases to 2,092 and 2,172 TPS on the 500GB databases with deduplication and compression, and checksum enabled.

In aggregate, we observed cluster-wide performance of 8,022 and 7,965 TPS respectively in these two tests. The average disk read and write latency ranged from 1ms to 2ms in both scenarios.


Figure 8. TPS and Virtual Disk Latency with Deduplication/Compression/Checksum Deactivated


Figure 9. TPS and Virtual Disk Latency with Deduplication/Compression/Checksum Enabled

Through vSAN Observer, we observed that the stable latency during the tests ranged from 1ms to 2ms, and the stable aggregate IOPS ranged from around 4,000 to 5,000.


Figure 10. vSAN Observer -Deduplication/Compression/Checksum Deactivated vSAN


Figure 11. vSAN Observer -Deduplication/Compression/Checksum Enabled vSAN

The overall physical CPU utilization was less than 60 percent and the CPU utilizations in the various scenarios were very similar.


Figure 12. Average Physical CPU Utilization in the Four Test Scenarios

Aggregate Performance on All-Flash vSAN

In Table 7, the aggregate TPS for the four test scenarios ranged from 7,880 (checksum enabled with deduplication and compression deactivated) to 8,022 (checksum deactivated with deduplication and compression deactivated).

We measured vSAN disk write latency ranging from 1.7ms to 2.1ms using the default vSAN policy (RAID 1 mirroring). After changing the SPBM policy to erasure coding (RAID 5), the average virtual disk write latency increased to 4.4ms. The average disk read latency was less than 2ms in all test scenarios.

We measured the average CPU utilization of the virtual machines ranging from 66 percent to 80 percent, and the average IOPS on the vSAN backend ranging from 13,000 to 33,000 across all test scenarios.

Table 7. Test Results of the Four Scenarios

TEST SCENARIO AGGREGATE TPS AVERAGE RESPONSE TIME (MS) AVERAGE TRANSACTION TIME (MS) TOTAL VM IOPS AVERAGE VIRTUAL DISK READ LATENCY (MS) AVERAGE VIRTUAL DISK WRITE LATENCY (MS)
Deactivate deduplication and compression (RAID 1, no checksum) 8,022 9 34 17,014 1.0 1.7
Deactivate deduplication and compression (RAID 1, checksum) 7,880 9 34 16,716 1.1 2.1
Enable deduplication and compression (RAID 1, checksum) 7,965 8 34 16,656 1.6 1.9
Enable deduplication and compression (EC/RAID 5, checksum) 8,007 8 35 16,506 1.5 4.4

Summary

In summary, enabling the different features (deduplication and compression, erasure coding, and checksum) on All-Flash vSAN had only a slight performance impact with minimal resource overhead.

All-Flash vSAN Stretched Cluster Performance

Testing Overview

We measured SQL Server performance on a vSAN Stretched Cluster with the space efficiency features enabled (deduplication and compression, and checksum) and using RAID 1. The four-node cluster was split into two sites, with each site hosting two virtual machines. We also deployed a witness appliance in a third site that had 200ms network latency to the two data sites. By introducing delay on the vSAN kernel ports, we emulated intersite round-trip latencies of 1ms, 2ms, and 4ms.


Figure 13. Stretched Cluster with Different Intersite Latencies

Testing Scenario and Results

We observed a reduction in aggregate TPS with the increase of intersite latency as shown in Figure 14.


Figure 14. TPS Comparison with Intersite Latency Increased

As shown in Figure 15, the average write latency of the SQL Server data disks increased from 1.9ms to 10.3ms as the intersite latency increased; there was no performance degradation on data disk reads. As shown in Figure 16, the average write latency of the log disk increased from 5ms to 15.5ms.


Figure 15. Average Latency of Data Disk with Different Intersite Latencies


Figure 16. Average Write Latency of Log Disk with Different Intersite Latencies

Best Practice

For mission-critical OLTP applications, Microsoft recommends an average disk latency of 5ms to 20ms for database files; 10ms or less is ideal. Therefore, we recommend deploying SQL Server databases on an All-Flash vSAN Stretched Cluster with an intersite latency of 2ms or less.

Space Saving by Enabling Deduplication and Compression and Erasure Coding (RAID 5)

Testing Overview

This test measures the reduction in space consumption after installing databases on All-Flash vSAN with deduplication and compression, and erasure coding enabled.

  • Deduplication and compression: deduplication and compression are applied on a per-disk-group basis, and the results might vary for different kinds of data. We measured the space savings for structured data (an OLTP/TPC-E-like database) in this test.
  • Erasure coding: before vSAN 6.2, deploying a 100GB VM with FTT defined as 1 required around 200GB of capacity on vSAN. With erasure coding introduced in vSAN 6.2, the required capacity is significantly lower with a 3+1 (RAID 5) or 4+2 (RAID 6) configuration. From a capacity standpoint, you need about 1.33x the space of a given disk when 3+1 is used, or 1.5x when 4+2 is used.

Testing Scenarios and Results

To measure the space savings for SQL Server, five virtual machines were deployed in the All-Flash vSAN Cluster: two virtual machines each hosting a 200GB database, two virtual machines each hosting a 500GB database, and one domain controller.

We checked the provisioned space and the actual space usage of the virtual machines. The actual used space is the baseline for demonstrating the space saving after enabling deduplication and compression. The provisioned space for each 200GB database virtual machine was 680GB (100GB OS, 2 x 200GB data disks, 1 x 100GB log disk, and 1 x 80GB tempdb disk), the provisioned space for each 500GB database virtual machine was 1,360GB (100GB OS, 4 x 250GB data disks, 1 x 100GB log disk, and 2 x 80GB tempdb disks), and the provisioned space of the domain controller virtual machine was 100GB. Under the default policy, the total provisioned space was more than 8TB. vSAN calculates the actual written space after deployment, which was 5,050GB.

After enabling deduplication and compression, the actual space used was 2,220GB, a deduplication and compression ratio of around 2.27x. After changing the SPBM policy to RAID 5, the actual space used was 1,900GB, bringing the total space saving from deduplication and compression with RAID 5 to around 2.66x.

Deduplication and compression require space to store metadata (deduplication and compression overhead); this space consumption was around 630GB. Note that this overhead is additional space consumed after enabling deduplication and compression and does not apply to a vSAN without this feature.
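The reported ratios follow directly from these measurements, as this short Python sketch shows:

    # Space-saving arithmetic from the measurements above: logical (written)
    # capacity divided by physical capacity after each feature.
    written_gb = 5050       # actual written space under the default policy
    after_dedup_gb = 2220   # after deduplication and compression (RAID 1)
    after_raid5_gb = 1900   # after additionally switching the policy to RAID 5

    print(f"deduplication and compression: {written_gb / after_dedup_gb:.2f}x")  # 2.27x
    print(f"with RAID 5 as well:           {written_gb / after_raid5_gb:.2f}x")  # 2.66x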


Figure 17. Deduplication and Compression Ratio for SQL Server VM

All-Flash vSAN Resiliency

Testing Overview

vSAN can handle disk, disk group, or host failures. We performed tests against All-Flash vSAN with a running workload to verify that vSAN can handle these failures without losing data and to measure how performance is affected during the failure period. Because the behavior of vSAN changes after enabling deduplication and compression, we performed the tests on All-Flash vSAN both with and without this feature.

vSAN can rebuild a failed component in the cluster by resynchronizing the component to a new location. We also measured the duration of this resynchronization.

Testing Scenarios and Results

We tested disk, disk group, and host failure with a 200GB TPC-E-like database workload. Component failure scenarios are as follows:

  • Disk failure

This test evaluated the impact on the virtualized SQL Server of one capacity SSD failure. The capacity SSD stored a VMDK component of the user database. We injected a permanent disk error into the capacity SSD on one of the nodes to observe any functional or performance impact on the production SQL Server database. Note that a disk failure on an All-Flash vSAN with deduplication and compression enabled is equivalent to a disk group failure. We also measured the component resynchronization duration.

Note that after enabling deduplication and compression on vSAN, a disk failure makes the entire disk group inaccessible because deduplication and compression operate at the disk group level. Therefore, the disk failure validation is equivalent to the disk group failure validation on a vSAN with deduplication and compression enabled. The protection policy, whether mirroring or erasure coding, keeps data available during such component failures.

  • Disk group failure

This test evaluated the impact on the virtualized SQL Server of a disk group failure. We injected a permanent disk error into the cache SSD of a disk group to simulate a disk group failure and observed any functional or performance impact on the production SQL Server database.

  • Storage host failure

This test evaluated the impact on the virtualized SQL Server of one vSAN host failure. We powered off one host in the vSAN Cluster to simulate a host failure and observed any functional or performance impact on the production SQL Server database.

We measured the performance impact in the various failure scenarios with deduplication and compression enabled and disabled. Before every component failure test, we confirmed that the disk, disk group, or host to be failed stored a VMDK component of the affected SQL Server database.

Each test started after the workload reached a stable state, and then the failure was emulated. The average TPS before each failure test was not the same because we reused the same database without refreshing it, and every test ran heavy transaction updates against the database. However, the failure test results were very consistent: the component failure did not cause a database or application outage, and the TPS recovered after some time.

  • Deduplication and compression deactivated:
    • A single physical disk failure in the capacity tier caused the average TPS to drop from 1,813 to 1,643, and the recovery time was around 160 seconds.
    • A disk group failure caused the average TPS to drop from 1,853 to 1,659, and the recovery time was around 430 seconds.
    • A host failure caused the average TPS to drop from 1,568 to 1,312, and the recovery time was around 715 seconds.

In all test scenarios, the read IOPS was affected at the failure moment and the average response time of disk write increased. There was no obvious impact on the write IOPS or read latency.

FAILURE TYPE AVERAGE TPS BEFORE FAILURE AVERAGE TPS AFTER FAILURE RECOVERY TIME TO STEADY-STATE TPS (SEC)
Disk 1,813 1,643 160
Disk Group 1,853 1,659 430
Host 1,568 1,312 715

Table 8. All-Flash vSAN Resiliency without Deduplication and Compression Enabled

  • Deduplication and compression enabled:
    • Single physical disk failure or one disk group failure caused the average TPS to drop from 1,760 to 1,510, and the recovery time was around 540 seconds.
    • One host failure caused the average TPS to drop from 1,638 to 1,211, and the recovery time was around 560 seconds.

In all test scenarios, the read IOPS was affected at the moment of failure and the average response time of disk write increased. There was no obvious impact on the write IOPS or read latency.

FAILURE TYPE AVERAGE TPS BEFORE FAILURE AVERAGE TPS AFTER FAILURE RECOVERY TIME TO STEADY-STATE TPS (SEC)
Disk or disk group 1,760 1,510 540
Host 1,638 1,211 560

Table 9. All-Flash vSAN Resiliency with Deduplication and Compression Enabled

Component Resynchronization Duration

vSAN tolerates disk failures by rebuilding the objects stored on the failed disk on other disks. There are two failure statuses: absent and degraded. In the absent status, vSAN waits for the repair delay (60 minutes by default) before starting to recreate the missing components; you can change the delay setting per VMware Knowledge Base article 2075456. In the degraded status, vSAN recreates the components immediately. During recreation, the new component on another host appears in the reconfiguring status.

We measured the data resynchronization rate in different scenarios:

  • With deduplication and compression enabled, the resynchronization rate ranged from 5GB/min to 6GB/min while a 200GB SQL Server database served around 1,500 TPS, and from 8GB/min to 13GB/min with no workload.
  • With deduplication and compression deactivated, the resynchronization rate ranged from 5GB/min to 20GB/min while a 200GB SQL Server database served around 1,500 TPS, and from 6GB/min to 30GB/min with no workload.

Note: vSAN actively throttles the storage and network throughput used for resynchronization to minimize the impact on normal workload.
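Given these measured rates, a rough resynchronization-time estimate is straightforward. The Python sketch below uses the rates observed in this solution's testing; they are workload- and configuration-specific, not general vSAN guarantees:

    # Rough resynchronization-time estimate from a component size and an
    # observed rate range (GB/min). Rates are from this solution's testing.
    def resync_minutes(component_gb, rate_low, rate_high):
        """Return (best-case, worst-case) resynchronization time in minutes."""
        return component_gb / rate_high, component_gb / rate_low

    # Example: 200GB of components, deduplication and compression enabled,
    # while the database serves ~1,500 TPS (5GB/min to 6GB/min observed):
    best, worst = resync_minutes(200, 5, 6)
    print(f"estimated resync time: {best:.0f} to {worst:.0f} minutes")   # ~33 to 40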

Site Failure and Continuous Data Availability

After completing the Stretched Cluster performance tests, we validated the Stretched Cluster resiliency of the All-Flash vSAN. Before the failure emulation, the intersite latency was 2ms, and the latency from each site to the witness was 200ms. The test validated that the All-Flash vSAN Stretched Cluster could continue serving the four SQL Server instances on the four virtual machines after one site went down.


Figure 18. vSAN Network with Intersite Latency

In the validation, the servers in site B were powered off. The preconfigured vSphere High Availability restarted the virtual machines from that site on the surviving site. Because only half of the compute and storage capacity remained to serve the workload, the aggregate performance was affected.

Compared with the aggregate TPS of 7,743 before the site failure, the TPS decreased to 5,856 after the virtual machines moved to the two servers in the surviving site. The average disk read latency increased to 4ms, and because there were no longer mirror writes across sites, the average write latency of the data virtual disks decreased from 7ms to 3.6ms.


Figure 19. Aggregate SQL Server Performance before and after Site A Outage

The TPS degradation was caused by the limited storage and compute resources available in a single site. Before the site went down, we measured the physical CPU utilization of the four ESXi servers ranging from 36.56 percent to 47.13 percent. After the site failure, we measured the physical CPU utilization of the two surviving ESXi servers ranging from 71.62 percent to 80.95 percent. We recommend planning for sufficient resources at both sites to accommodate the virtual machines so that a single site failure does not heavily affect mission-critical applications.


Figure 20. Physical CPU Utilization after One Site was Down

vSphere vMotion across Sites on All-Flash vSAN

vSphere vMotion allows moving an entire virtual machine from one physical server to another without downtime. We used this feature to live migrate a virtual machine hosting a 200GB database with a running workload across sites in the Stretched Cluster. The different interlink latencies (1ms, 2ms, and 4ms) resulted in different vSphere vMotion durations. During vSphere vMotion with the TPC-E-like workload, the average TPS decreased, momentarily dropping as low as one TPS, but we did not observe any application interruption. After migration, the TPS returned to the pre-migration level. In this solution, the average vSphere vMotion duration with a running workload was around one minute for a 200GB database and around three minutes for a 500GB database across sites.

INTERSITE LATENCY (MS) AVERAGE TPS BEFORE THE TEST AVERAGE TPS DURING VSPHERE VMOTION VSPHERE VMOTION DURATION (MINUTE:SECOND) AVERAGE TPS AFTER VSPHERE VMOTION
1 1,721 929 00:58 1,718
2 1,714 1,021 00:59 1,744
4 1,618 834 01:03 1,640

Table 10. vSphere vMotion Performance with Different Intersite Latencies

Best Practice

Virtualizing Microsoft SQL Server with vSphere enables additional benefits such as vSphere vMotion, which allows seamless migrations of Microsoft SQL servers between physical servers and between data centers without interrupting users or their applications.

However, both vSphere vMotion and vSAN rely on high-speed links. In our validations, sharing the same NIC between vSAN and vSphere vMotion increased the vSphere vMotion duration from an average of one minute to more than 15 minutes. Therefore, for better performance, we highly recommend using separate 10GbE NICs for vSphere vMotion and vSAN traffic on the virtual switches.

Best Practices of SQL Server on All-Flash vSAN

This section highlights the Best Practices to be followed for SQL Server on All-Flash vSAN.

First, configure SQL Server on All-Flash vSAN according to the Architecting Microsoft SQL Server on VMware vSphere guide, and then follow the best practices below for special setups or configurations.

RAID 5 for Data Disk and RAID 1 for Log Disk

For read-intensive OLTP databases such as the TPC-E-like database, most of the space requirement comes from data, including tables and indexes; the space requirement for the log remains small compared with the size of the data. We recommend using separate vSAN policies for the SQL Server data and log virtual disks. For data, we recommend RAID 5, which reduces space usage from 2x to 1.33x; the TPC-E-like workload test validated that RAID 5 achieves good disk performance. For the log virtual disks, we recommend RAID 1. A minimal sizing sketch follows.
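The following Python sketch applies the capacity multipliers discussed earlier to the 500GB database VM layout from this solution; it is our illustration of the space savings, assuming RAID 5 for the data disks and RAID 1 (FTT=1) for the log disk:

    # Raw capacity for the 500GB database VM's data and log disks under an
    # all-RAID 1 policy versus the recommended mixed policy.
    data_gb = 4 * 250   # four 250GB data disks
    log_gb = 100        # one 100GB log disk

    all_raid1_gb = (data_gb + log_gb) * 2.0            # 2x everything
    mixed_gb = data_gb * (4.0 / 3.0) + log_gb * 2.0    # RAID 5 data, RAID 1 log
    print(f"all RAID 1: {all_raid1_gb:.0f}GB raw, mixed policy: {mixed_gb:.0f}GB raw")
    # all RAID 1: 2200GB raw, mixed policy: 1533GB raw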

Stripe Width

We measured the performance impact of different stripe widths on All-Flash vSAN. In summary, because we already used multiple virtual disks per database, which distributes data across the cluster to better utilize resources, the TPC-E-like performance showed no obvious improvement or degradation with additional stripe width.

We tested different stripe widths (1 to 6, and 12) for a 200GB database on All-Flash vSAN and found:

  • The TPS, transaction time, and response time were similar in all configurations.
  • Virtual disk latency was less than 2ms in all test configurations.

We suggest setting stripe width as needed to split the disk object into multiple components to distribute the object components to more disks in different disk groups. In some situations, you might need this setting for large virtual disks.

Use Quality of Service for DB Restore Operations

vSAN 6.2 introduces a QoS feature that sets a policy to limit the number of IOPS an object may consume. We validated this QoS feature with the sequential-I/O-dominant database restore operations in this solution.

After enabling deduplication and compression on All-Flash vSAN, we restored the 2 x 200GB and 2 x 500GB databases concurrently. We set up two policies limiting IOPS to 1,000 and 1,500 respectively, applied them to the VMs performing the database restores, and measured the restore durations. We concluded that limiting IOPS affected the overall duration of the concurrent database restore operations. With QoS in place, other applications residing on the same vSAN benefit when they would otherwise contend with I/O-intensive operations such as database maintenance.

DATABASE 1,000 IOPS PER OBJECT/DB RESTORE DURATION (HH:MM:SS) 1,500 IOPS PER OBJECT/DB RESTORE DURATION (HH:MM:SS)
VM1-200GB  2:08:17 1:11:57
VM2-200GB  3:33:26 1:11:57
VM3-500GB 4:49:59 2:59:49
VM4-500GB 4:49:03 2:59:47

Table 11. Restore Performance


Figure 21: Database Restoring Duration


Figure 22: Database Restoring Throughput

Conclusion

This section summarizes how vSAN is optimized for modern All-Flash storage.

vSAN is optimized for modern All-Flash storage with efficient near-line deduplication, compression, and erasure coding capabilities that lower TCO while delivering incredible performance.

We drove an OLTP workload to test performance with the new storage efficiency features, including deduplication and compression, and erasure coding (RAID 5). With all of these storage efficiency features enabled, vSAN provides great performance with minimal resource overhead.

We proved that vSAN offers resiliency and high availability for a Tier-1 application with a running workload.

Furthermore, we verified that SQL Server can be deployed on an All-Flash vSAN Stretched Cluster with reasonable performance and unparalleled availability in disaster situations. Cross-site vSphere vMotion can help system administrators conveniently perform site maintenance on a vSAN Stretched Cluster.

Reference

This section lists the relevant references used for this document.


For additional information, see the VMware Compatibility Guide.

About the Author

This section provides a brief background on the author and contributors of this document.

  • Tony Wu, Solution Architect in the vSAN Product Enablement team, wrote the original version of this paper.
  • Catherine Xu, Technical Writer in the vSAN Product Enablement team, edited this paper to ensure that the contents conform to the VMware writing style.
