VMware Horizon 7.7 on VMware vSAN 6.7 using VMware Cloud Foundation

Executive Summary

VMware Cloud Foundation™ automates the deployment of VMware Horizon® 7.7 infrastructure. VMware vSAN™, as the key storage component of VMware Cloud Foundation, delivers radically simple storage, superior performance that scales, and pay-as-you-grow affordability for desktop and application virtualization.

Business Case

Customers wanting to deploy a virtual desktop infrastructure today require a cost-effective, highly scalable, and easy-to-manage solution. Applications need to be refreshed and published at will and should not require multiple levels of IT administration. Most importantly, the infrastructure itself must be able to scale with minimized TCO yet still provide enterprise-class performance.

VMware Horizon enables IT departments to run remote desktops and applications in the datacenter and deliver these desktops and applications to employees as a managed service. However, installing and upgrading a Horizon environment is difficult, and customers expect their deployment to work seamlessly. To respond to the ever-increasing demand for faster innovation, organizations are looking to shift to a more agile, service-oriented IT model that leverages both private and public clouds.

VMware Cloud Foundation automates deployment of Horizon infrastructure. It is an integrated cloud infrastructure that combines compute, storage, networking, security, and cloud management services, providing an ideal platform on which to run enterprise workloads and containerized applications across both private and public environments. VMware Cloud Foundation makes it easy to deploy and run a hybrid cloud by delivering common infrastructure that is fully compatible, stretched, and distributed, along with a consistent cloud operational model for your on- and off-premises data centers and public cloud environments.

vSAN, as the key storage component of VMware Cloud Foundation, provides simpler operations, lower costs, and greater agility to customers with infrastructure scalability and without data center complexity. VMware vSAN solves the problems of storage cost and complexity by giving you a high-performance, flash-accelerated datastore you can enable with just a few clicks and grow affordably without large capital investments.

  • vSAN simplifies managing and automating storage for desktops and apps by eliminating traditional, purpose-built storage systems and by letting IT use familiar vCenter tools rather than proprietary storage-management interfaces. vSAN integrates storage policies into the VM-creation workflow, ensuring each virtual desktop automatically has the type of storage it needs.

  • vSAN delivers the storage performance critical to ensuring virtual desktops and apps meet the expectations of users accustomed to physical devices.

  • vSAN provides a distributed architecture that allows for elastic, non-disruptive scaling. Capacity and performance can be scaled at the same time. This “grow-as-you-go” model provides predictable, linear scaling with affordable investments spread out over time. vSAN is included with Horizon Advanced and Horizon Enterprise Editions, giving you an easy path to greater desktop virtualization ROI and reduced TCO.

Solution Overview

This document provides the deployment details and performance data for 800 virtual desktops on an 8-host all-flash vSAN cluster of VMware ESXi™ hosts with VMware Horizon 7.7, running Microsoft Windows 10 with Office 2013 and provisioned via instant clones and linked clones on vSphere 6.7 U1.

Key Results

vSAN and VMware Cloud Foundation delivered the following key benefits in this solution:

Rapid and Automated Deployment and Configuration of Horizon and SDDC

  • Provides a simple and intuitive GUI (SDDC Manager) to configure and deploy Horizon.
  • Provides elastic and scalable infrastructure for deploying new Horizon workloads.

Radically Simple Storage for Desktops and Apps

  • Rapid provisioning: provision storage for virtual desktops and apps in just a few clicks; storage policies are integrated into the VM-creation workflow.
  • Familiar tools: no specialized skill sets are required; use familiar vCenter tools and eliminate siloed hardware. There are no additional VMs or virtual appliances to install.
  • App in charge: intelligent, automated management based on desktop and app demands.

Superior Performance at Scale

  • Delivers the storage performance needed to meet the expectations of users accustomed to physical devices.
  • Up to 4.27x space savings from the erasure coding, deduplication, and compression features.
  • Kernel-embedded storage optimizes the I/O path for better performance.

Pay-As-You-Grow Affordability with Reduced CapEx

  • Eliminate storage upgrades and pay only for what you need with industry-standard x86 servers, disks, and flash.
  • Avoid overprovisioning IOPS with granular scaling that provides the right amount of storage capacity and performance every time.
  • Reduce OpEx with integrated management, automated processes, and familiar vSphere tools.

Table 1 summarizes the test results of all-flash vSAN with Horizon 7.7.

Table 1. Key Results Overview

| Operation or Test | Result |
| --- | --- |
| 800 instant clones provisioned (including 5-minute priming time) | 12 minutes |
| 800 instant clones, new image pushed (including 5-minute priming time) | 19 minutes |
| 800 linked clones provisioned | 52 minutes |
| 800 linked clones recomposed | 73 minutes |
| 800 linked clones refreshed | 19 minutes |
| 800 linked clones started | 4 minutes |
| Single-host failure, recovery time for 115 linked-clone desktops | 26 minutes |
| 800 instant clones with App Volumes | VSIbase: 738; VSImax v4.1 average: 1102; VSImax reached: No |
| 800 linked clones with App Volumes | VSIbase: 724; VSImax v4.1 average: 1298; VSImax reached: No |
| 800 linked clones with DC (deduplication and compression) enabled | VSIbase: 731; VSImax v4.1 average: 1425; VSImax reached: No |

Introduction

This section describes the purpose, scope, and intended audience of this document.

Purpose

This reference architecture provides a standard, repeatable, and highly scalable design that can be easily adapted to specific environments and customer requirements. It aims to establish a common customer virtual desktop infrastructure environment using Horizon 7.7 on all-flash vSAN 6.7 U1.

Scope

This reference architecture:

  • Demonstrates the storage performance and resilience of Horizon-based VDI deployments using all-flash vSAN.

  • Validates that instant clones and linked clones with App Volumes work well with vSAN for managing desktops and applications.

  • Proves vSAN with space efficiency features enabled can easily support sustainable workloads with minimal resource overhead and impact on desktop application performance.

  • Validates the guest OS space reclamation feature enabled in vSAN 6.7 U1, which supports guest OS-initiated TRIM/UNMAP commands.

Audience

This reference architecture is intended for customers—IT architects, consultants, and administrators—involved in the early phases of planning, design, and deployment of VDI solutions using VMware Horizon running on all-flash vSAN. It is assumed that the reader is familiar with the concepts and operations of Horizon technologies and VMware vSphere products.

Technology Overview

Overview

This section provides an overview of the technologies that are used in this solution:

  • VMware Cloud Foundation

  • VMware vSphere 6.7 Update 1

  • VMware vSAN 6.7 Update 1

    • TRIM/UNMAP

  • VMware Horizon 7.7

    • VMware App Volumes 2.15

VMware Cloud Foundation

VMware Cloud Foundation integrates compute, storage, networking, security, and cloud management services to create a consistent and dynamically configurable infrastructure for applications. Cloud Foundation delivers best-in-class lifecycle automation for the VMware software stack, from deployment, configuration of the environment, and provisioning of infrastructure pools to one of the biggest pain points for customers: automated patching and upgrading.

  • Rapid deployment and configuration of Horizon and SDDC

  • Standardized deployment methodology across environments – ready for Cloud Pod Architecture

  • Comprehensive lifecycle management (LCM), including automated hardware/firmware updates (when deploying Cloud Foundation on VxRail)

  • On-demand provisioning of infrastructure tools

  • Automated patching and upgrades for increased uptime through the use of tested software packages

Automated Deployment

VMware Cloud Foundation abstracts the individual building blocks of the Software-Defined Data Center—compute, storage, networking, and cloud management—through the Workload Domain construct. The Workload Domain is the unit of consumption in the private cloud. It aggregates the physical servers created on the composable infrastructure into logical pools of capacity on top of which the SDDC building blocks of compute, network, and storage virtualization are deployed. A Horizon Domain is a standardized deployment based on the Horizon Reference Architecture.

Customers can use a simple and intuitive GUI (SDDC Manager) to configure and deploy Horizon. The Horizon deployment wizard provides best-practice suggestions and tips to make it easier for customers to configure and choose the components that they want to deploy. Apart from the mandatory components (connection servers and NSX load balancers), the other components, such as App Volumes, Composer servers, Unified Access Gateways, and User Environment Manager, are supported but optional. VMware Cloud Foundation also allows customers to easily clone Horizon deployments by exporting the configuration of an existing domain and importing it to pre-populate the fields in the deployment wizard. VMware Cloud Foundation drastically simplifies the path to the hybrid cloud by delivering a single integrated solution that is easy to deploy and operate, enabled by built-in automated lifecycle management. Figure 1 shows the automated deployment of Horizon 7.7 infrastructure.

Figure 1. Automated Deployment of Horizon 7.7 Infrastructure

Automated Lifecycle Management

Cloud Foundation offers Automated Lifecycle Management on a per-Workload Domain basis. It delivers simple management of your environment with built-in automation of day 0 to day 2 operations of the software platform.

  • Rapid deployment - Cloud Foundation automates the bring-up process of the entire software platform, including deployment of infrastructure VMs, creation of the management cluster, storage configuration, and cluster creation and provisioning.

  • Simplified patching and upgrades - Cloud Foundation enables a simplified patching/upgrading process of the software platform. Cloud admins have the flexibility to choose the timing and scope of the updates.

  • Infrastructure cluster provisioning - Enables on-demand provisioning of isolated infrastructure clusters for workload separation.

Elastic and Scalable Infrastructure

Customers can easily deploy new Horizon workloads and scale capacity for existing Horizon workloads up or down with Cloud Foundation. A Cloud Foundation solution can support a maximum of 15 domains (one of which is used for management), and each domain supports ESXi hosts in accordance with vCenter maximums. As the demand for virtual desktops and applications rises and falls, ESXi hosts can easily be added to or removed from the Horizon Workload Domain.

Management and Operational Simplicity

Cloud Foundation enables self-driving operations (vRealize Suite) from applications to infrastructure to help organizations plan, manage, and scale their SDDC efficiently. Customers can easily enable the entire vRealize Suite for their Horizon deployments and leverage vRealize Suite components to simplify management and operations, provide a superior end-user experience, and handle lifecycle management.

VMware vRealize Operations collects metrics from virtual desktops, applications, and virtual infrastructure and presents the aggregated data in vRealize Operations Manager for monitoring, trending, and predictive analysis. Its smart alerts with dynamic thresholds allow customers to easily isolate the root causes of issues and proactively optimize performance of the entire stack.

VMware vRealize Log Insight collects, imports, and analyzes logs to provide real-time answers to problems related to systems, services, and applications, and to derive important insights. It is a particularly useful tool in a Horizon environment, where many components generate logs and troubleshooting can prove challenging. vRealize Log Insight and vRealize Operations together provide the most comprehensive environment insight and the best troubleshooting capabilities for Horizon customers.

VMware vRealize Automation extends Horizon as a service by allowing the end users to request and provision virtual desktops and applications on demand.

Highly Available and Secure Virtual Desktops and Applications

Cloud Foundation automatically installs and configures VMware NSX in each Horizon Workload Domain. NSX provides high availability, load balancing, and security for Horizon workloads. For external communication with Horizon virtual desktops initiated by a web browser or a mobile application, a VMware NSX edge services gateway manages and optimizes north-south network traffic. Load balancing is included through automated deployment of NSX load balancers. Because these external connections can have vastly different security requirements, customers can use VMware NSX to associate firewall rules at the router or at the virtual desktop level to achieve greater granularity. NSX micro-segmentation enables flexible security by securing east-west traffic between virtual desktops or RDSH systems. Using the same methodologies, NSX can provide the same level of micro-segmentation around the Horizon management components such as Connection Servers, App Volumes, Composer servers, Unified Access Gateways, and User Environment Manager. Figure 2 shows the architecture of the Horizon 7.7 infrastructure deployment. In Figure 2, Active Directory and SQL Server (marked in yellow) must be provided by the customer; the other components, including load balancers, Unified Access Gateways (UAGs), UEM, Composer, vCenter, and App Volumes, are deployed by VMware Cloud Foundation.

Figure 2. Architecture of Horizon 7.7 Infrastructure Deployment

VMware vSphere 6.7 Update 1

VMware vSphere 6.7 is the next-generation infrastructure for next-generation applications. It provides a powerful, flexible and secure foundation for business agility that accelerates the digital transformation to cloud computing and promotes success in the digital economy. vSphere 6.7 supports both existing and next-generation applications through its:

  • Simplified customer experience for automation and management at scale

  • Comprehensive built-in security for protecting data, infrastructure and access

  • Universal application platform for running any application anywhere

With vSphere 6.7, customers can run, manage, connect, and secure their applications in a common operating environment, across clouds and devices.

VMware vSAN 6.7 Update 1

VMware vSAN 6.7 Update 1 and Horizon 7.7 are bundled components of VMware Cloud Foundation. vSAN is the industry-leading software powering VMware's software-defined storage and HCI solution. vSAN helps customers evolve their data center without risk, control IT costs, and scale to tomorrow's business needs. vSAN, native to the market-leading hypervisor, delivers flash-optimized, secure storage for all your critical vSphere workloads. vSAN is built on industry-standard x86 servers and components that help lower TCO in comparison to traditional storage. It delivers the agility to easily scale IT and offers the industry's first native HCI encryption. vSAN 6.7 U1 simplifies day-1 and day-2 operations, letting customers quickly deploy and extend cloud infrastructure while minimizing maintenance disruptions, and it lowers the total cost of ownership with more efficient infrastructure. vSAN ReadyCare represents the significant investment VMware has made in supporting vSAN customers; it simplifies support-request resolution and expedites diagnosis of issues.

vSAN 6.7 U1 can automatically reclaim capacity, using less storage at the capacity tier for popular workloads. It has full awareness of TRIM/UNMAP commands sent from the guest OS and can reclaim the previously allocated storage as free space. This is an opportunistic space efficiency feature that can deliver much better storage capacity utilization in vSAN environments.

VMware Horizon 7.7

The Horizon workload is automatically deployed and maintained with VMware Cloud Foundation, an integrated software stack that bundles vSphere, vSAN, NSX, and vRealize Suite into a single platform, providing simplicity and flexibility for Horizon workloads.

Horizon 7 enables IT to centrally manage images to streamline management, reduce costs, and maintain compliance. With Horizon 7, virtualized or hosted desktops and applications can be delivered through a single platform to end users. These desktop and application services—including RDS-hosted apps, packaged apps with ThinApp, SaaS apps, and even virtualized apps from Citrix—can all be accessed from one unified workspace to provide end users with all of the resources they want, at the speed they expect, with the efficiency business demands.

Drawing on the best of mobile and cloud, Horizon 7 radically transforms virtual desktop infrastructure (VDI), giving you unprecedented simplicity, security, speed, and scale—all at lower costs. Horizon 7 helps you get up and running up to 30x faster while cutting costs over traditional solutions by as much as 50%.

  • Just-in-time desktops—leverage Instant Clone Technology coupled with App Volumes to dramatically accelerate the delivery of user-customized and fully personalized desktops. Dramatically reduce infrastructure requirements while enhancing security by delivering a brand-new personalized desktop and application services to end users every time they log in.

  • VMware App Volumes—provides real-time application delivery and management.

  • VMware User Environment Manager™—offers personalization and dynamic policy configuration across any virtual, physical, and cloud-based environment.

  • Horizon Smart Policies—deliver a real-time, policy-based system that provides contextual, fine-grained control. IT can now intelligently enable or disable client features based on user device, location, and more.

  • Blast Extreme—purpose-built and optimized for the mobile cloud, this new additional display technology is built on industry-standard H.264, delivering a high-performance graphics experience accessible on billions of devices including ultra-low-cost PCs.

App Volumes 2.15

VMware App Volumes is an integrated and unified application delivery and end-user management system for Horizon and virtual environments:

  • Quickly provision applications at scale.

  • Dynamically attach applications to users, groups, or devices, even when users are logged into their desktop.

  • Provision, deliver, update, and retire applications in real time.

  • Provide a user-writable volume allowing users to install applications.

App Volumes makes it easy to deliver, update, manage, and monitor applications and users across VDI and published application environments. It uniquely provides applications and user environment settings to desktop and published application environments and reduces management costs by efficiently delivering applications from one virtual disk to many desktops or published application servers. Provisioning applications requires no packaging, no modification and no streaming.

Solution Configuration

  • Architecture
  • Hardware resources
  • Software resources
  • Virtual machine test image build
  • Network configuration
  • VMware ESXi cluster configuration
  • vSAN configuration
  • Horizon on VMware Cloud Foundation deployment

Architecture

Figure 3 shows that the management cluster (within the dotted line) is in the VMware Cloud Foundation Management Domain, and the Horizon Desktops cluster is in a workload domain.

SDDC Manager is the centralized management software in Cloud Foundation used to automate the lifecycle of components, from bring-up and configuration to infrastructure provisioning and upgrades/patches. SDDC Manager complements vCenter Server and vRealize Suite products by delivering new functionality that helps cloud admins build and maintain the SDDC.

We continued to use vCenter Server as the primary management interface for the virtualized environment after the deployment.

Figure 3. Management Cluster in VMware Cloud Foundation Management Domain

Figure 4 shows the vSphere architectural design of the vSphere clusters.

Figure 4. vSphere Architectural Design of vSphere Cluster

Hardware Resources

Two vSAN clusters were used in the environment; Table 2 lists the hardware for the desktop cluster:

  • An 8-node all-flash vSAN cluster was deployed to support 800 virtual desktops.
  • A 4-node hybrid vSAN management cluster was deployed to support the infrastructure, management, Login VSI management console, and launcher virtual machines. (Note that this vSAN management cluster is not dedicated; it is shared with the VMware Cloud Foundation management VMs.)

Table 2. Hardware Resources for Horizon Desktop Cluster

| Property | Specification |
| --- | --- |
| Server | 8 x PowerEdge R640 rack servers |
| System power management policy | High Performance |
| CPU | 2 sockets, Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz, 14 cores each |
| RAM | 480GB |
| Network adapter | 2 x Intel X550 10Gb/s SFP |
| Storage adapter | 1 x Dell HBA330 mini |
| Disks | Cache tier: 2 x 1.46TB NVMe SSD; Capacity tier: 8 x 1.75TB SSD |

Software Resources

Table 3 lists the software resources used in this solution, and Table 4 lists the system configurations for the different server roles.

Table 3. Software Resources

| Software | Version | Purpose |
| --- | --- | --- |
| VMware vCenter and ESXi | 6.7 Update 1 | ESXi cluster to host virtual machines and provide the vSAN cluster. VMware vCenter Server provides a centralized platform for managing VMware vSphere environments. |
| VMware Cloud Foundation | 3.7 | Provides integrated cloud infrastructure (compute, storage, networking, and security) and cloud management services to run enterprise applications in both private and public environments. |
| VMware vSAN | 6.7 Update 1 | Software-defined storage solution for hyperconverged infrastructure. |
| VMware Horizon | 7.7 | Offers greater simplicity, security, speed, and scale in delivering on-premises virtual desktops and applications, with cloud-like economics and elasticity of scale. |

Table 4. System Configuration

| Infrastructure VM Role | vCPU | RAM (GB) | Storage (GB) | OS |
| --- | --- | --- | --- | --- |
| Active Directory | 2 | 8 | 40 | Windows Server 2012 R2 64-bit |
| SQL Server (Composer DB) | 4 | 16 | 100 | Windows Server 2016 64-bit |
| Horizon View 7.7 Composer | 2 | 10 | 100 | Windows Server 2012 R2 64-bit |
| Horizon View 7.7 Connection Server 1 | 2 | 10 | 100 | Windows Server 2012 R2 64-bit |
| Horizon View 7.7 Connection Server 2 | 2 | 10 | 100 | Windows Server 2012 R2 64-bit |
| App Volumes 2.15 | 2 | 10 | 100 | Windows Server 2012 R2 64-bit |
| Login VSI Management Console | 4 | 8 | 220 | Windows Server 2012 R2 64-bit |
| Login VSI Launcher | 4 | 8 | 100 | Windows Server 2012 R2 64-bit |

Virtual Machine Test Image Build

Two different virtual machine images were used to provision desktop sessions in the View environment: one for instant clones and the other for linked clones, both with App Volumes and Login VSI. The images were optimized with the VMware OS Optimization Tool. The test image configurations are the same for instant clones and linked clones except for the VMware View Agent: select the Horizon View Composer Agent for linked clones and the Horizon Instant Clone Agent for instant clones.

Table 5. Virtual Machine Template Configuration

| Attribute | Login VSI Image |
| --- | --- |
| Desktop OS | Windows 10 Enterprise 2016 LTSB 64-bit |
| Hardware | VMware virtual hardware version 14 |
| vCPU | 2 |
| Memory | 4GB |
| Memory reserved | 0MB |
| Video RAM | 4MB |
| 3D graphics | Disabled |
| NICs | 1 |
| Virtual network adapter 1 | VMXNET3 adapter |
| Virtual disk (VMDK1) | 30GB |
| SCSI controller | VMware Paravirtual |
| Applications | Microsoft Office 2013, Internet Explorer 11, Adobe Reader 11, Adobe Flash Player 11, Doro PDF 1.82 |
| VMware Tools | 10338 (10.3.2) |
| VMware View Agent | 7.7.0-11054235 |
| Number deployed | 800 |

Network Configuration

A VMware vSphere Distributed Switch (VDS) acts as a single virtual switch across all associated hosts in the data cluster. This setup allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts. Figure 5 shows network configuration of Horizon environment.

Figure 5. Network Configuration of Horizon Environment

Note the following settings for the VMkernel ports and VM networks (a verification sketch follows the list):

  • vmk0: management
  • vmk1: vSAN, uplinks Standby/Active, MTU 9000
  • vmk2: vMotion, uplinks Active/Standby, MTU 9000
  • Desktop VM networks: uplinks Active/Active, MTU 9000
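Jumbo frames only work if every host agrees on the MTU, so it is worth verifying the VMkernel settings programmatically after deployment. The following Python sketch uses the open-source pyVmomi vSphere SDK to list each host's VMkernel adapters and flag any vSAN or vMotion vmknic that is not at MTU 9000; the vCenter address and credentials are placeholders, and the vmk1/vmk2 mapping follows the list above.

```python
# Sketch: verify VMkernel MTU settings across all hosts with pyVmomi.
# Assumptions: "pip install pyvmomi"; the vCenter address and credentials are
# placeholders; vmk1 = vSAN and vmk2 = vMotion, as in the list above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="placeholder-password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:
            mtu = vnic.spec.mtu
            print(f"{host.name} {vnic.device} mtu={mtu}")
            # vSAN and vMotion vmkernel ports are expected at MTU 9000 here
            if vnic.device in ("vmk1", "vmk2") and mtu != 9000:
                print(f"  WARNING: {vnic.device} on {host.name} is not MTU 9000")
finally:
    Disconnect(si)
```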

VMware ESXi Cluster Configuration

Table 6 lists the ESXi cluster configuration. VMware vSphere High Availability (vSphere HA) and DRS should be enabled on the cluster.

Table 6. ESXi Cluster Configuration

| Cluster Feature | Setting |
| --- | --- |
| vSphere HA | Enabled |
| DRS | Partially automated |

The storage controllers in the ESXi servers support both pass-through and RAID mode. Controllers that support pass-through mode are recommended with vSAN to lower complexity and ensure performance.

VMware vSAN Configuration

Linked clones and instant clones use vSAN for storage. Each ESXi server has the same configuration of two disk groups, each consisting of one 1.46TB cache-tier NVMe device and four 1.75TB capacity-tier SSDs.

vSAN Storage Policy

vSAN can apply availability, capacity, and performance policies per virtual machine when the virtual machines are deployed on the vSAN datastore. Horizon creates the default storage policies automatically; we modified the storage policy to enable or disable specific vSAN features. Table 7 shows the storage policy settings for RAID 1 and RAID 5.

Table 7. vSAN Storage Setting with RAID 1 and RAID 5

| Storage Capability | RAID 1 Setting | RAID 5 Setting |
| --- | --- | --- |
| Number of failures to tolerate (FTT) | 1 | 1 |
| Number of disk stripes per object | 1 | 1 |
| Flash read cache reservation | 0% | 0% |
| Object space reservation | 0% | 0% |
| Disable object checksum | No | No |
| Failure tolerance method | Mirroring | Erasure coding |
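The practical difference between the two policies is the raw-capacity multiplier. With FTT=1, RAID 1 mirroring stores two full copies of every object (a 2x factor), while RAID 5 erasure coding stores three data segments plus one parity segment (about a 1.33x factor), at the cost of requiring at least four hosts. The Python sketch below compares the raw space each policy implies; the per-desktop footprint is an illustrative assumption, not a measured value from this solution.

```python
# Sketch: raw vSAN capacity implied by the two storage policies (FTT=1).
RAID1_FACTOR = 2.0    # mirroring: two full copies of each object
RAID5_FACTOR = 4 / 3  # erasure coding: 3 data segments + 1 parity segment

def raw_capacity_tb(desktops: int, gb_per_desktop: float, factor: float) -> float:
    """Raw capacity (TB) consumed before dedup/compression and system overhead."""
    return desktops * gb_per_desktop * factor / 1024

desktops = 800
gb_per_desktop = 8.0  # assumed logical footprint per clone; adjust to your image
for name, factor in (("RAID 1", RAID1_FACTOR), ("RAID 5", RAID5_FACTOR)):
    print(f"{name}: {raw_capacity_tb(desktops, gb_per_desktop, factor):.2f} TB raw")
```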

Horizon on VMware Cloud Foundation Deployment

Horizon includes the following core systems in the Workload Domain; they are deployed automatically by VMware Cloud Foundation:

  • Two connection servers
  • One vCenter Server (vCenter Appliance)
  • View Composer
  • App Volumes Manager

App Volumes delivers native applications to VMware Horizon virtual desktops on-demand through VMDKs. App Volumes Manager plays two roles:

  • Administrator—Provisions new AppStacks, assigns AppStacks with applications to VMs and monitors processes and usage.
  • Service provider—Brokers the assignment of applications to end users, groups of users and computers.

Figure 6 is a screenshot of the workload domain view in VMware Cloud Foundation SDDC Manager. SDDC Manager is the centralized management software in Cloud Foundation used to automate the lifecycle of the Horizon components.

Figure 6. Workload Domain of the VMware Cloud Foundation

See Create a Horizon Domain for detailed prerequisites and procedures.

Horizon Configuration Settings

vCenter Server Settings

View Connection Server uses vCenter Server to provision and manage View desktops. vCenter Server is configured in View Manager as shown in Table 8.

Table 8. View Manager—vCenter Server Configuration

| Attribute | Specification |
| --- | --- |
| SSL | On |
| Port | 443 |
| View Composer setting | Standalone View Composer Server |
| View Composer port | 18443 |
| Max Concurrent vCenter Provisioning Operations | 24 |
| Max Concurrent Power Operations | 50 |
| Max Concurrent View Composer Maintenance Operations | 24 |
| Max Concurrent View Composer Provisioning Operations | 24 |
| Max Concurrent Instant Clone Engine Provisioning Operations | 24 |

App Volumes Settings

All of the applications were tested in a single AppStack, except Internet Explorer (IE), which is installed in the OS by default.

Table 9. App Volumes—AppStack Configuration

| Attribute | Specification |
| --- | --- |
| Storage path | [vsanDatastore] cloudvolumes/apps/appstack.vmdk (6,036 MB) |
| Template path | [vsanDatastore] cloudvolumes/apps_templates/template.vmdk (2.15.0.54) |
| Assignments | 800 |
| Attachments | 800 |
| Applications | Microsoft_Office_Professional_Plus_2013, Adobe_Reader_XI, Adobe Flash Player 11, Doro PDF 1.82 |

Solution Validation

This section documents the test methodologies and processes.

Test Overview

The solution validates that the all-flash vSAN storage platform can deliver the required performance for 800 desktops with App Volumes 2.15 on vSAN 6.7 Update 1 with its new features enabled. It includes the following tests:

  • Performance benchmarking testing: to measure the VDI performance using Login VSI (Knowledge Worker).

  • View operations testing: to validate the new vSAN features, which reduce the total storage needed while delivering excellent performance on Horizon 7.7.

  • Host failure testing: to ensure vSAN can support a sustainable workload under predictable failure scenarios.

  • vSAN TRIM/UNMAP feature testing: to verify that vSAN can automatically reclaim capacity freed by the guest OS, using less storage at the capacity tier for Horizon workloads.

Test Tools

We used the following monitoring and benchmark tools in the solution:

  • vSAN Performance Service

vSAN performance service collects and analyzes performance statistics and displays the data in a graphical format. vSAN administrators can use the performance charts to manage the workload and determine the root cause of problems. When the vSAN performance service is turned on, the cluster summary displays an overview of vSAN performance statistics, including IOPS, throughput and latency. vSAN administrators can view detailed performance statistics for the cluster, for each host, disk group and disk in the vSAN cluster.

  • Login VSI 4.1.32.1

Login VSI is an industry-standard solution that simulates typical user behavior in centralized virtualized desktop environments. When used for benchmarking, the software measures the total response time of several specific user operations being performed within a desktop workload in a scripted loop. The baseline is the measurement of the response time of specific operations performed in the desktop workload, which is measured in milliseconds (ms). This standardization makes all conclusions that are based on Login VSI test data objective, verifiable, and repeatable.

In this solution, we used Login VSI in benchmark mode with 32 launchers (25 sessions per launcher) to measure VDI performance in terms of the Login VSI baseline performance score. A lower Login VSI baseline score is better because it reflects that the desktops can respond in less time. There are two values in particular that are important to note:

  • VSIbase: A score reflecting the response time of specific operations performed in the desktop workload when there is little or no stress on the system. A low baseline indicates a better user experience, resulting in applications responding faster in the environment.

  • VSImax: The maximum number of desktop sessions attainable on the host before experiencing degradation in host and desktop performance. If VSImax is not encountered by the completion of the test, then user experience is considered to be good even at maximum concurrent users.
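To make these two values concrete, the sketch below applies the VSImax v4.1 evaluation to the Test 1 numbers reported later in this paper. The threshold rule (roughly VSIbase + 1000ms) is taken from Login VSI's published VSImax v4.1 methodology and should be treated as an assumption here rather than something this paper defines.

```python
# Sketch: interpreting VSIbase / VSImax v4.1 numbers. The threshold rule
# (~VSIbase + 1000ms) is assumed from Login VSI's published methodology.
def vsimax_threshold(vsibase_ms: float) -> float:
    return vsibase_ms + 1000.0

# Test 1 (instant clone, R1): VSIbase 738, VSImax v4.1 average 1102
vsibase = 738.0
avg_response = 1102.0
threshold = vsimax_threshold(vsibase)
print(f"threshold = {threshold:.0f} ms")
print("VSImax reached:", avg_response >= threshold)  # False: headroom remains
```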

Login VSI has several different workload templates depending on the type of user to be simulated. Each workload differs in application operations and in the number of operations executed simultaneously. In these tests, the workload type was 'Knowledge Worker * 2vCPU'. The medium-level Knowledge Worker workload was selected because it is the closest analog to the average desktop user in our customer deployments. Table 10 shows the parameters of the Knowledge Worker workload, including CPU usage, disk reads, disk writes, and applications.

The VDI workload in general is very CPU intensive. From the storage perspective, vSAN can support more desktops per host, but we found that host CPU was completely saturated during the Login VSI Knowledge Worker workload once the number of desktops per host reached a certain level. Therefore, we focused our tests on 100 desktops per host to observe vSAN performance.

Table 10. Knowledge Worker Parameter

| Parameter | Knowledge Worker Setting |
| --- | --- |
| Apps open | 5-9 |
| CPU usage | 100% |
| Disk reads | 100% |
| Disk writes | 100% |
| IOPS | 8.5 |
| Memory | 1.5 GB |
| CPU | 2 vCPU |
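The per-session IOPS figure supports a useful back-of-envelope check against the measured cluster numbers: 800 Knowledge Worker sessions at about 8.5 IOPS each imply roughly 6,800 steady-state front-end IOPS, well below the peak IOPS reported later in the tests (the peaks also include logon activity and AppStack attachments). A minimal sketch:

```python
# Sketch: steady-state IOPS estimate from the Knowledge Worker profile.
sessions = 800
iops_per_session = 8.5           # from Table 10
per_host = sessions / 8          # 8-node desktop cluster
steady_state = sessions * iops_per_session
print(f"{per_host:.0f} sessions/host, ~{steady_state:,.0f} steady-state IOPS cluster-wide")
```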

The AppStack "appstack" was created based on the default App Volumes AppStack template and included the applications used by Login VSI. The "appstack" size was 6,036MB.

Figure 7. AppStack

We took the following parameters into consideration to measure the testing performance:

  • Test running time

  • Benchmark VSImax

  • CPU and memory usage percent

  • vSAN IOPS and I/O latency

  • Capacity

Performance Benchmark Testing

During the Login VSI Knowledge Worker workload, peak CPU usage was over 85 percent. Although we had additional CPU headroom, it would not be realistic to push the host CPU to 100 percent, since this would have a negative impact on other services. A Microsoft Windows 10 desktop image, optimized with the VMware OS Optimization Tool, was used for provisioning; earlier, non-Windows 10 images could have run with a single vCPU and 2GB of vRAM instead of the 2 vCPUs and 4GB of vRAM per VM used here. We therefore settled on 800 desktops as the stable number for performance testing. Multiple test iterations were run for each test scenario below, and the virtual machines were not rebooted between test iterations.

All the performance diagrams are in the Appendix for reference.

Test 1: Instant Clone in R1 Configuration

VSImax Knowledge Worker v4.1 was not reached with a Login VSI baseline performance score of 738. We ran 800 sessions in total and 794 knowledge worker sessions ran successfully. This was equal to 794 desktop users reading documents, sending emails, printing docs and browsing the internet.

The peak average CPU usage was 86.34 percent. Although we had additional CPU headroom, it would not be realistic to push the host CPU to 100 percent since this would have a negative impact on other services. The maximum memory percent of active sessions was about 43 percent.

IOPS increased steadily as the number of active sessions increased. Peak write IOPS was 19,473 and peak read IOPS was 21,393. The peak write latency was 0.542ms and the peak read latency was 0.391ms.

Test 2: Instant Clone in R5 Configuration

VSImax Knowledge Worker v4.1 was not reached with a Login VSI baseline performance score of 724. We ran 800 sessions in total and 795 knowledge worker sessions ran successfully. This was equal to 795 desktop users reading documents, sending emails, printing docs and browsing the internet.

The peak average CPU usage was 95.7 percent. The peak memory percent of active sessions was about 43 percent.

IOPS increased nearly linearly as the number of active sessions increased. The peak write IOPS was 19,923 and the peak read IOPS was 23,083. The peak write latency was 1.012ms and the peak read latency was 0.609ms.

Test 3: Linked Clone in R1 Configuration

The Windows 10 linked clone pool with an AppStack passed the Knowledge Worker workload easily without reaching VSImax v4.1, with a baseline performance score of 724. We ran 800 sessions in total and 798 knowledge worker sessions ran successfully.

The peak average CPU usage was 95.52 percent. The peak memory percent of active sessions was about 46 percent.

The peak write IOPS was 12,834 and the peak read IOPS was 912. The peak write latency was 0.417ms and the peak read latency was 0.351ms.

Test 4: Linked Clone in R5 Configuration

The Windows 10 linked clone pool with an AppStack passed the Knowledge Worker workload easily without reaching VSImax v4.1, with a baseline performance score of 718. We ran 800 sessions in total and 797 knowledge worker sessions ran successfully.

The peak average CPU usage was 98 percent. The maximum memory percent of active sessions was about 46 percent.

The peak write IOPS was 11,830 and the peak read IOPS was 11,472. The peak write latency was 0.421ms and the peak read latency was 0.446ms.

Test 5: Linked Clone in R1+DC Configuration

The used capacity was 6.01TB, with 5.91TB of deduplication and compression overhead, which was 5 percent of the vSAN datastore capacity. The deduplication and compression ratio was 4.27x.
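A 4.27x ratio means the desktops' logical data occupied less than a quarter of the physical space it would otherwise need. The sketch below shows the arithmetic, assuming the reported ratio is logical (pre-deduplication) capacity divided by physical (post-deduplication) capacity, as presented in the vSAN capacity view:

```python
# Sketch: space savings implied by a deduplication/compression ratio.
ratio = 4.27                      # reported dedup + compression ratio (Test 5)
physical_tb = 6.01                # used capacity reported for the R1+DC test
logical_tb = physical_tb * ratio  # data as it would sit without dedup/compression
savings_pct = (1 - 1 / ratio) * 100
print(f"~{logical_tb:.1f} TB logical in {physical_tb:.2f} TB physical "
      f"({savings_pct:.0f}% space saved)")
```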

VSImax Knowledge Worker v4.1 was not reached with a Login VSI baseline performance score of 731. We ran 800 sessions in total and 798 knowledge worker sessions ran successfully.

The peak average CPU usage was 99.83 percent. The peak memory percent of active sessions was about 46 percent.

The peak write IOPS was 12,547 and the peak read IOPS was 909. The peak write latency was 0.438ms and the peak read latency was 0.364ms.

Test 6: Linked Clone in R5+DC Configuration

The used capacity was 5.79TB, with 5.91TB of deduplication and compression overhead, which was 5 percent of the vSAN datastore capacity. The deduplication and compression ratio was 3.4x.

VSImax Knowledge Worker v4.1 was not reached with a Login VSI baseline performance score of 719. We ran 800 sessions in total and 800 knowledge worker sessions ran successfully.

The peak average CPU usage was 106.89 percent. The peak memory percent of active sessions was about 48 percent. The peak write IOPS was 12,589 and the peak read IOPS was 12,211. The peak write latency was 0.457ms and the peak read latency was 0.454ms.

Table 11. Performance Test Result Summary

| Metric | Instant Clone R1 | Instant Clone R5 | Linked Clone R1 | Linked Clone R5 | Linked Clone R1+DC | Linked Clone R5+DC |
| --- | --- | --- | --- | --- | --- | --- |
| Peak average CPU usage | 86.34% | 95.7% | 95.52% | 98% | 99.83% | 106.89% |
| Peak write IOPS | 19,473 | 19,923 | 12,834 | 11,830 | 12,547 | 12,589 |
| Peak read IOPS | 21,393 | 23,083 | 912 | 11,472 | 909 | 12,211 |
| Peak write latency | 0.542ms | 1.012ms | 0.417ms | 0.421ms | 0.438ms | 0.457ms |
| Peak read latency | 0.391ms | 0.609ms | 0.351ms | 0.446ms | 0.364ms | 0.454ms |

Test Results

Summary of Instant Clone Login VSI Testing Results

As shown in Figure 8, the Login VSI baseline performance score was 724 in the RAID 5 configuration and 738 in the RAID 1 configuration. The VSImax score was only slightly affected by the RAID 5 configuration.

Figure 8. Instant Clone Login VSI Results Comparison

Regarding the resource usage shown in Figure 9, the RAID 5 configuration consumed 9 percent more peak average CPU than RAID 1. The used memory percentage of active sessions in the RAID 1 configuration was the same as in RAID 5, so the RAID 5 overhead was very limited. The vSAN space efficiency features consumed more CPU and memory resources, but this was acceptable considering the space savings.

Figure 9. 800 Instant Clone Login VSI Results Resource Usage Comparison

Storage performance was good in all configurations, as shown in Figure 10 and Figure 11. The write and read latencies were also low; there was no storage bottleneck. In summary, deduplication and compression on instant clone pools had limited storage performance impact during the Login VSI tests.

Figure 10. 800 Instant Clone Login VSI Storage Performance Comparison (IOPS)

Figure 11. 800 Instant Clone Login VSI Storage Performance Comparison (Latency)

From the instant clone Login VSI test results: although the RAID 5 configuration consumed more resources, this had minimal impact on the Login VSI score, and RAID 5 performed well with the Login VSI Knowledge Worker workload.

Summary of Linked Clone Login VSI Testing Results

As shown in Figure 12, the Login VSI baseline performance score was 718 in the RAID 5 configuration and 724 in the RAID 1 configuration. The VSImax score was only slightly affected by the RAID 5 configuration. The VSImax v4.1 average was 1425 in the RAID1+DC configuration and 1623 in the RAID5+DC configuration. Overall, the space efficiency features had little performance impact in terms of VSImax score.

Figure 12. Linked Clone Login VSI Results Comparison

Regarding the resource usage shown in Figure 13, the RAID 5 configuration consumed 3 percent more peak average CPU than RAID 1, the RAID1+DC configuration consumed 4 percent more than RAID 1, and the RAID5+DC configuration consumed 8 percent more than RAID 5. The impact on the used memory percentage of active sessions was limited across all configurations, so the space efficiency overhead was very limited. The vSAN space efficiency features consumed more CPU and memory resources, but this was acceptable considering the space savings.

Figure 13. Linked Clone Login VSI Resource Usage Comparison

Storage performance was good in all configurations, as shown in Figure 14 and Figure 15. The write and read latencies were also low; there was no storage bottleneck. Peak read IOPS decreased because more data was served from the cache layer. In summary, deduplication and compression on linked clone pools had limited storage performance impact during the Login VSI tests.

Figure 14. Linked Clone Login VSI Storage Performance Comparison (IOPS)

Figure 15. Linked Clone Login VSI Storage Performance Comparison (Latency)

View Operations Testing

Instant Clone Desktops

Provision 800 Desktops

In this test, a new pool of 800 instant clone virtual desktops was provisioned on the vSAN datastore, with about 100 desktops per ESXi host. This task consists of two phases:

  • Creating internal VMs such as the internal template, replica VMs, and parent VMs, which is called the priming phase.

  • Using VMware Instant Clone Technology to create the desktops and prepare the operating system with the ClonePrep feature.

We conducted the testing with the R1 and R5 configurations respectively.

Testing Results

It took 5 minutes for priming and 7 more minutes for all 800 desktops to become "available" in the R1 configuration; the R5 configuration likewise took 5 minutes for priming and 7 minutes for the desktops to become "available".
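These times break down into a fixed priming phase plus a cloning phase whose duration scales with the pool size. A quick sketch of the implied cloning rate, using the measured times above:

```python
# Sketch: implied instant clone creation rate from the measured times.
desktops = 800
priming_min = 5          # internal template, replica, and parent VM creation
total_min = 12           # end-to-end time until all desktops were "available"
clone_min = total_min - priming_min
print(f"~{desktops / clone_min:.0f} clones/minute across the 8-host cluster")
# ~114 clones/minute once priming completes, for both R1 and R5 policies
```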

For the R1 configuration, the total used capacity was 3.84TB, including 3.74TB of physically written space and 102.29GB of VM over-reserved space; vSAN system overhead was 1.47TB.

For the R5 configuration, the total used capacity was 3.07TB, including 2.97TB of physically written space and 102.29GB of VM over-reserved space; vSAN system overhead was 1.49TB.

Figure 16 shows the resource usage during the 800 instant clone provisioning. CPU usage increased by 8 percent.

Figure 16. 800 Instant Clones Provision Resource Usage

We summarized the storage performance of the instant clones in Figure 17 and Figure 18. Peak write IOPS increased from 16,483 to 17,509 and peak read IOPS increased from 5,975 to 15,045. Peak write latency and read latency also increased, but the overall storage performance remained good.

Figure 17. Instant Clones Provision Storage Performance (IOPS)

Figure 18. Instant Clones Provision Storage Performance (Latency)

Push Image 800 Desktops

You can change the image of an instant clone desktop pool to push out changes or to revert to a previous image. You can select any snapshot from any virtual machine to be the new image.

Testing Results

It took just 19 minutes to push a new image to the 800-desktop instant clone pool in the default R1 configuration, and 20 minutes in the R5 configuration.

Figure 19 shows the resource usage during the new image push operation. The average CPU consumption was 37.97 percent in the R5 configuration, which was 6 percent more than in the R1 configuration. The memory usage was 33.60 percent, only 1 percent more than in the R1 configuration.

Figure 19. 800 Instant Clones Push Image Resource Usage

As shown in Figure 20 and Figure 21, the overall vSAN performance was good; the impact of the vSAN space efficiency features was limited.

Figure 20. Instant Clones Push Image Storage Performance (IOPS)

Figure 21. Instant Clones Push Image Storage Performance (Latency)

Linked Clone Desktops

Provision Desktop Pool

A new pool of 800 Windows 10 (64-bit) linked clone virtual desktops was provisioned on the vSAN datastore.

Testing Results

It took 52 minutes to provision 800 Windows 10 (64-bit) linked-clone virtual desktops to the Available state in the Horizon Administrator console.

The total capacity used was 6.16TB, as shown in Figure 22, including 6.06TB of physically written space and 120.19GB of VM over-reserved space. vSAN system overhead space was 1.5TB.

Figure 22. Capacity Information for 800 Linked-Clones, R1

It took 57 minutes to provision the 800 linked-clone desktops in the R5 configuration. As shown in Figure 23, there was 5.96TB of used space, including 5.86TB of physically written space and 102.10GB of VM over-reserved space, plus 1.62TB of vSAN system overhead.

Figure 23. Capacity Information for 800 Linked Clones, R5

Figure 24 shows the resource usage during the 800 linked clone provisioning. CPU usage increased by 1 percent and memory usage increased by 4 percent.

Figure 24. 800 Linked Clones Provision Resource Usage

We summarized the storage performance of the linked clones in Figure 25 and Figure 26. Peak write IOPS increased from 13,944 to 15,301 and peak read IOPS increased from 6,950 to 21,213. The overall storage performance was good.

Figure 25. Linked Clones Provision Storage Performance (IOPS)

Figure 26. Linked Clones Provision Storage Performance (Latency)

Refresh Desktop Pool

A Horizon View refresh operation reverts a pool of linked-clone desktops to their original state; any changes made to the desktops since they were provisioned, recomposed, or last refreshed are discarded. When a refresh operation was initiated, desktops in the pool were refreshed in a rolling fashion, several at a time.

Testing Results

It took 19 minutes to refresh the 800 Windows 10 linked-clone virtual desktops to the Available state in both the R1 and R5 configurations, as reported in the View Administrator console.

As shown in Figure 27, the average CPU usage was 16.78 percent in R5, which was 3 percent higher than in the R1 configuration. The memory usage was 35.7 percent in R5, which was 1 percent higher than in the R1 configuration.

Figure 27. Linked Clones Refresh Resource Usage

We summarized the storage performance of the linked clones in Figure 28 and Figure 29. Peak write IOPS increased from 12,359 to 12,717 and peak read IOPS increased from 6,537 to 18,925. The overall storage performance was good.

Figure 28. Linked Clones Refresh Storage Performance (IOPS)

Figure 29. Linked Clones Refresh Storage Performance (Latency)

Recompose Desktop Pool

A Horizon View recompose operation moves the linked clones to a new parent base image.

Testing Results

It took 73 minutes to recompose the 800 Windows 10 linked-clone virtual desktops in the R1 configuration, and 74 minutes in the R5 configuration.

As shown in Figure 30, the average CPU usage was 15.4 percent in R5, versus 14.26 percent in the R1 configuration. The memory usage was 36 percent in R5, the same as in the R1 configuration.

Figure 30. Linked Clones Recompose Resource Usage

We summarized the storage performance of the linked clones in Figure 31 and Figure 32. Peak write IOPS increased from 9,736 to 11,719 and peak read IOPS increased from 9,531 to 21,068. The overall storage performance was good.

Figure 31. Linked Clones Recompose Storage Performance (IOPS)

Figure 32. Linked Clones Recompose Storage Performance (Latency)

Boot Storm

A boot storm was simulated for a pool of 800 linked clones: the desktops were all booted together from vCenter. The task took less than 5 minutes for all 800 desktops to boot up and become available in the R5 configuration, and 4 minutes in the R1 configuration. As shown in Figure 33, the average CPU usage was 50.3 percent in the R5 configuration, 28 percent higher than in the R1 configuration.

Figure 33. Boot Storm Resource Usage

Resiliency Testing – One Node Failure

A single vSAN node hardware failure was simulated on the 8-host vSAN cluster with 800 running linked-clone virtual desktops, all under simulated desktop workload with FTT=1. We tested linked clones in the R1 and R5+DC configurations.

In both configurations, vSphere HA and DRS behaved as expected: VMware vSphere HA restarted the desktops on the other nodes, the roughly 100 desktops from the failed host were restarted and ready for user login, and VMware vSphere DRS rebalanced the load across all hosts in the vSAN cluster. A host failure does not return an I/O error; vSAN has a configurable repair delay (60 minutes by default), after which components are rebuilt across the cluster. vSAN prioritizes the current workload during rebuilds to minimize the impact on cluster performance. The whole failover and rebalance took 36 minutes in the R5+DC configuration and 26 minutes in the R1 configuration.

Reclaim Capacity Feature Testing: TRIM/UNMAP

vSAN 6.7 Update 1 has full awareness of TRIM/UNMAP commands sent from the guest OS and can reclaim the previously allocated storage as free space. This is an opportunistic space efficiency feature that can deliver much better storage capacity utilization in vSAN environments. In this release the feature is disabled by default and must be enabled cluster-wide (for example, through the Ruby vSphere Console).

Testing Scenario

A new pool of 800 Windows 10 (64-bit) linked-clone virtual desktops with UNMAP enabled was provisioned on the vSAN datastore.

When files are deleted in a VM with UNMAP enabled, vSAN can automatically reclaim the no-longer-used space via the TRIM/UNMAP commands.

Testing Results

It took 65 minutes for the 800 desktops with UNMAP enabled to become "available" in the R1 configuration, while it took 52 minutes for 800 desktops with UNMAP disabled.

UNMAP File Deletion

In our validation test, the size of the user disk was 1,552,384KB. We then deleted a 725,466KB file from the file folder; after the deletion, the size of the VMDK with UNMAP enabled was 86,016KB, showing that the deleted file space was reclaimed as expected.
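A simple way to express this check is to compare the VMDK's allocated size before and after the deletion, as in the sketch below. The figures mirror the test above; exactly which allocation each reported size refers to can vary by tool, so the comparison logic rather than the specific numbers is the point here.

```python
# Sketch: checking that deleting guest files shrinks the backing VMDK (UNMAP on).
allocated_before_kb = 1_552_384   # user disk allocation before the deletion
deleted_file_kb = 725_466         # file removed inside the guest OS
allocated_after_kb = 86_016       # allocation reported after deletion + reclaim
reclaimed_kb = allocated_before_kb - allocated_after_kb
print(f"reclaimed {reclaimed_kb:,} KB")
print("file space reclaimed:", reclaimed_kb >= deleted_file_kb)
```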

Best Practices

This section provides best practices based on the solution validation:

  • View operation parameters

  • vSAN sizing

View Operation Parameters

The running time of linked clone and instant clone operations can improve when the maximum concurrent operations settings are changed. If the backend storage performance is good, the values can be increased.

For instant clones, the default value of Max Concurrent Instant Clone Engine Provisioning Operations is 20, which already yields good provisioning times. The value can be increased if storage latency does not cause contention during provisioning; otherwise, a larger value will not speed up provisioning.

For linked clones, we increased the default values of Max Concurrent View Composer Provisioning Operations, Max Concurrent View Composer Maintenance Operations, and Max Concurrent vCenter Provisioning Operations, since we had eight hosts in the desktop cluster and all-flash vSAN performance is good. If storage latency causes contention during provisioning, however, execution might not improve even as the concurrency values increase.
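The intuition behind these settings can be captured with a simple model: when storage is not the bottleneck, total provisioning time is roughly (pool size / concurrency) multiplied by the per-clone time, and raising concurrency helps only until the backend saturates and per-clone latency starts to grow. The Python sketch below illustrates this; every parameter in it is an illustrative assumption, not a value measured in this solution.

```python
# Sketch: why raising the max-concurrency settings helps only until the
# backend saturates. All parameters are illustrative assumptions.
import math

def provision_minutes(pool: int, concurrency: int,
                      base_min_per_clone: float = 1.5,
                      backend_limit: int = 24) -> float:
    """Once concurrency exceeds what the storage can absorb, per-clone
    latency grows proportionally and total time stops improving."""
    slowdown = max(1.0, concurrency / backend_limit)
    return math.ceil(pool / concurrency) * base_min_per_clone * slowdown

for c in (8, 20, 24, 48, 96):
    print(f"concurrency {c:3d}: ~{provision_minutes(800, c):.0f} min")
```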

vSAN Sizing

Based on the solution testing, the R5 configuration offers the optimal balance of performance and cost per desktop for linked clone pools. For instant clones, the R5+DC configuration is recommended.
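When adapting this design, two quick checks are worth automating: desktop density per host after a single host failure (the N+1 case tested earlier) and the usable capacity remaining under the chosen policy. The sketch below uses the cluster from Table 2; the 30 percent free-space allowance is a commonly cited vSAN guideline and is treated here as an assumption.

```python
# Sketch: quick host-count and capacity checks for the tested design.
hosts, desktops = 8, 800
raw_tb_per_host = 8 * 1.75            # capacity-tier SSDs per host (Table 2)
raid5_factor = 4 / 3                  # FTT=1 erasure coding multiplier
slack = 0.30                          # assumed vSAN free-space guideline

print(f"normal density: {desktops / hosts:.0f} desktops/host")
print(f"after one host failure: {desktops / (hosts - 1):.0f} desktops/host")

raw_tb = hosts * raw_tb_per_host
usable_tb = raw_tb * (1 - slack) / raid5_factor
print(f"raw {raw_tb:.0f} TB -> ~{usable_tb:.0f} TB usable under R5, pre-dedup")
```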

Conclusion

VMware Cloud Foundation™ is an integrated cloud infrastructure that combines compute, storage, networking, security, and cloud management services, providing an ideal platform on which to run enterprise workloads and containerized applications across both private and public environments. VMware Cloud Foundation makes it easy to deploy and run a hybrid cloud by delivering common infrastructure that is fully compatible, stretched, and distributed, along with a consistent cloud operational model for your on- and off-premises data centers and public cloud environments.

vSAN, as the key storage component of VMware Cloud Foundation, provides simpler operations, lower costs, and greater agility to customers with infrastructure scalability and without data center complexity. VMware vSAN solves the problems of storage cost and complexity by giving you a high-performance, flash-accelerated datastore you can enable with just a few clicks and grow affordably without large capital investments.

Extensive workload, operations, and resiliency testing shows that Horizon 7.7 with App Volumes 2.15 on all-flash vSAN delivers exceptional performance, a consistent end-user experience, and a resilient architecture, all at a relatively low price.

Appendix: Performance Test Diagram References

Test 1: Instant Clone in R1 Diagrams

Figure 34. VSImax on Login VSI Knowledge Worker Workload, 800 R1 Desktops

Figure 35. CPU usage during Login VSI Knowledge Worker Workload, R1

Figure 36. Memory Usage during Login VSI Knowledge Worker Workload, R1

Figure 37. vSAN IOPS during Login VSI Knowledge Worker Workload, R1

Figure 38. vSAN IO Latency during Login VSI Knowledge Worker Workload, R1

Test 2: Instant Clone in R5 Diagrams

Figure 39. VSImax on Login VSI Knowledge Worker Workload, 800 R5 Desktops

Figure 40 illustrates the CPU usage: the peak average CPU usage was 95.7 percent. Figure 41 illustrates the memory usage: the peak memory percentage of active sessions was about 43 percent.

Figure 40. CPU Usage during Login VSI Knowledge Worker Workload, R5

Figure 41. Memory Usage during Login VSI Knowledge Worker Workload, R5

From the vSAN Performance Service, as shown in Figure 42, IOPS increased nearly linearly as the number of active sessions increased. Peak write IOPS was 19,923 and peak read IOPS was 23,083.

Figure 42. vSAN IOPS during Login VSI Knowledge Worker Workload, R5

As shown in Figure 43, peak write latency was 1.012ms and peak read latency was 0.609ms.

Figure 43. vSAN IO Latency during Login VSI Knowledge Worker Workload, R5

Test 3: Linked Clone in R1 Diagrams

Figure 44. VSImax on Login VSI Knowledge Worker Workload, 800 R1 Desktops

Figure 45. CPU Usage during Login VSI Knowledge Worker Workload, 800 R1 Desktops

Figure 46. Memory Usage during Login VSI Knowledge Worker Workload, 800 R1 Desktops

Figure 47. vSAN IOPS during Login VSI Knowledge Worker Workload, 800 R1 Desktops

Figure 48. vSAN IO Latency during Login VSI Knowledge Worker Workload, 800 R1 Desktops

Test 4: Linked Clone in R5 Diagrams

Figure 49. VSImax on Login VSI Knowledge Worker Workload, 800 R5 Desktops

Figure 50. Memory Usage during Login VSI Knowledge Worker Workload, 800 R5 Desktops

Figure 51. vSAN IOPS during Login VSI Knowledge Worker Workload, 800 R5 Desktops

Figure 52. vSAN IO Latency during Login VSI Knowledge Worker Workload, 800 R5 Desktops

Test 5: Linked Clone in R1+DC Diagrams

Figure 53. VSImax on Login VSI Knowledge Worker Workload, R1+DC Desktops

Figure 54. CPU Usage during Login VSI Knowledge Worker Workload, R1+DC Desktops

Figure 55. Memory Usage during Login VSI Knowledge Worker Workload, R1+DC Desktops

Figure 56. vSAN IOPS during Login VSI Knowledge Worker Workload, R1+DC Desktops

Figure 57. vSAN IO Latency during Login VSI Knowledge Worker Workload, R1+DC Desktops

Test 6: Linked Clone in R5+DC Diagrams

Figure 58. VSImax on Login VSI Knowledge Worker Workload, R5+DC Desktops

Figure 59. CPU Usage during Login VSI Knowledge Worker Workload, R5+DC Desktops

Figure 60. Memory Usage during Login VSI Knowledge Worker Workload, R5+DC Desktops

Figure 61. vSAN IOPS during Login VSI Knowledge Worker Workload, R5+DC Desktops

Figure 62. vSAN IO Latency during Login VSI Knowledge Worker Workload, R5+DC Desktops

About the Author

Yimeng Liu, solution architect on the Solutions Architecture team of the HCI Business Unit, wrote the original version of this paper.

Sophie Yin, senior solution architect on the Solutions Architecture team of the HCI Business Unit, is the co-author of this paper.

Blair Parkhill, Product Director from Login VSI also reviewed the paper from the Login VSI perspective.

The following three colleagues also contributed to the VMware Cloud Foundation parts of the paper:

  • Jim Senicka, director of technical marketing of the HCI Business Unit

  • Kevin Tebear, staff technical marketing architect of the HCI Business Unit

  • Kyle Gleed, manager of technical marketing of the HCI Business Unit
