What's New with vSphere 8 Core Storage

vSphere 8 Update 2

vSphere 8 Update 2 has some significant announcements, and the storage side is no exception. As you have hopefully noticed, VMware is focusing on vSAN, vVols, and NVMeoF, and this is definitely the year for vVols. There have been many great updates and enhancements in vSphere 8: a new vVols VASA spec, better performance and resilience, enhanced certificate management, and support for NVMeoF, to name a few. We have also made sure vVols is supported throughout the VMware ecosystem.

Although there aren’t a lot of new core storage features in vSphere 8 Update 2, there are some incredibly significant updates nonetheless. vVols, NVMeoF, VMFS, and NFS all received enhancements and features many customers will appreciate.

 

You can see the latest in What's New with vSphere 8 U2 here.

 

vVols

image 409

vVols Extend Online Shared Disks

image 407

With the release of vSphere 6.7, we added support for SCSI3-Persistent Reservations for vVols. This feature allows clustering applications to control the locking of shared disks. Microsoft WSFC is an example of an application requiring SCSI3-PR for the shared disks within the cluster. But one of the last advantages RDMs had over vVols was the ability to extend an online shared disk in applications like Microsoft WSFC. This feature has been requested at every VMworld and Explore, as well as by numerous customers. I'm very excited to announce that vVols now supports extending online shared disks used by applications relying on SCSI3-Persistent Reservations! Customers can now extend shared disks without having to shut down the application cluster, so RDMs no longer have an edge over vVols. Customers can migrate their MS WSFC applications to vVols and get rid of RDMs. This makes the virtual storage environment much easier to manage and reduces complexity while maintaining array-based features.

 

image-20230816132303-1

If you are interested in the process of migrating your WSFC from RDMs to vVols, see this article: Migrating your WSFC from RDMs to vVols

 

Support for vVols Online Disk Extension with Oracle RAC

In this release, in addition to MS WSFC, we've added hot-extend support for Oracle RAC disks using multi-writer mode. No downtime is required to extend the clustered disks, and this can be performed on both SCSI and NVMe vVols disks. This again enables customers to migrate off RDMs.

 

vVols NVMe in-band Migration

This release enables in-band migration of NVMe vVol namespaces between ANA groups. This functionality is equivalent to the SCSI rebind primitive and allows the target/storage administrator to balance I/O loads across PEs.

 

Automated recovery from PDL for vVol PEs

Previously, when a PE went into PDL (Permanent Device Loss) and was later brought back, the PSA stack needed to recycle the device (destroy and re-detect the paths). This could only happen once the vVol objects using the PE were closed. With this release, VMs are auto-terminated when we detect that a PE to which the VM's vVols are bound is in PDL. This further enhances resilience and recovery during certain storage failures.

 

Enable 3rd party MPP support for NVMe vVols

Allows third-party multipathing plug-ins (MPPs) to support NVMe vVols, enabling our storage partners to use their custom MPPs.

 

UNMAP Support for config vVol

Starting with vSphere 8.0 U1, config vVols are created as 255GB thin-provisioned objects formatted with VMFS-6. With this release, we add command-line (esxcli) UNMAP support for config vVols.

 

 

 

NVMeoF

Clustered Applications

image 408

Support vNVME controller type for MS WSFC

 In vSphere 7, we added a new VM vNVMe storage controller, but it was initially not supported for use with WSFC. In vSphere 8, we added support for clustered applications on NVMeoF Datastores. In vSphere 8 U2, our cluster validation team has now certified the vNVMe controller to be used with Microsoft WSFC. With this, you can now have end-to-end NVMe for your WSFC application. Initially, this will only be supported with SCSI vVols.

 

Support Oracle RAC clustering with vVols NVMe (FC, TCP)

We have added support for Oracle RAC clustered vVol disks to be hosted on an NVMe backend (NVMe-FC vVols and NVMe-TCP vVols).

 

Enable vNVMe support with Oracle RAC (vVols)

We added support for using the vNVMe controller as the frontend for multi-writer clustering solutions such as Oracle RAC.

 

 

VMFS

Improved SE Sparse snapshot offline consolidation performance.

The performance of SE Sparse snapshot offline consolidation has been improved severalfold in this release. This helps improve RPO and RTO for customers using offline consolidation for DR purposes.

 

 

NFS

DNLC cache for NFS4.1

Some environments using NFS 4.1 have large datastores housing hundreds of VMs, and operations such as searching, powering on, or listing VMs can be slower than with NFSv3. The Directory Name Lookup Cache (DNLC) is intended to reduce the number of NFS LOOKUP operations by caching some of this data. With this release, we've added DNLC support for NFS 4.1. This benefits operations like "ls" on a directory containing a large number of VMs or files.

 

nConnect

In vSphere 8.0 U1, we added support for nConnect, which enables multiple connections to an NFSv3 datastore. This can help reduce latency and increase performance. With this release, we've added support for dynamically increasing and decreasing the number of connections used with nConnect. Currently, this is configurable via esxcli only.

Preliminary support for nConnect feature added in ESXi NFS client (91497) (vmware.com)
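As a quick sketch based on the preliminary-support KB above, the connection count is supplied when the NFSv3 datastore is mounted and can then be changed dynamically via esxcli. The host, share, volume name, and connection counts below are placeholders, and the exact parameter names for the dynamic change are an assumption to verify against your build:

  # Mount an NFSv3 datastore with 4 connections (syntax per KB 91497)
  esxcli storage nfs add -H 192.168.1.50 -s /export/ds01 -v nfs-ds01 -c 4

  # Later, adjust the number of connections for that volume (assumed 8.0 U2 syntax)
  esxcli storage nfs param set -v nfs-ds01 -c 8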

 

 


 

vSphere 8 Update 1

New vSphere 8 Update 1 vVols and Core Storage enhancements, features, and additions.

vSphere 8 U1 includes several storage enhancements across vVols, NVMeoF, VMFS, and NFS.

 

vSphere Virtual Volumes (vVols)

image 309

Continuing with vVols as a priority, we have added additional features, capabilities, and enhancements in vSphere 8 Update1.

One of the feature enhancements is a new certificate management framework. This will simplify the ability to register multiple vCenters to a single VASA provider. This lays the groundwork for potential future capabilities such as vVols vMSC. Some of the other features focus on scalability and performance. Because vVols can scale much larger than traditional storage, we want to ensure vVols will perform at scale as well.

 

Multi VC deployment for VASA Provider without Self-Signed Certificate

The new VASA 5 spec was developed to enhance vVols certificate management, enabling multi-vCenter deployments without relying on self-signed certificates. The solution also addresses scenarios where independent vCenter deployments running with different certificate management can work together. For example, one vCenter might use a 3rd-party CA while another uses a VMCA-signed certificate. This kind of deployment can be useful in shared VASA Provider deployments. This new capability utilizes Server Name Indication (SNI).

Server Name Indication (SNI) is an extension to the Transport Layer Security (TLS) protocol by which a client indicates which hostname it is attempting to connect to at the start of the handshake. This enables a server to present multiple certificates on the same IP address and TCP port, which in turn allows multiple secure (HTTPS) websites (or other services over TLS) to be served from the same IP address without requiring all those sites to use the same certificate. It is the conceptual equivalent of HTTP/1.1 name-based virtual hosting, but for HTTPS. This also allows a proxy to forward client traffic to the right server during the TLS/SSL handshake.
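As a simple way to see SNI in action outside of vSphere, openssl's s_client can send a chosen hostname in the TLS ClientHello and show which certificate the server presents for that name; the hostname and port below are placeholders only:

  # Ask the server for the certificate it presents for a specific virtual host (SNI)
  openssl s_client -connect vasa.example.com:443 -servername vasa.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

Changing the -servername value against the same IP address and port is what allows a shared TLS endpoint to hand back a different certificate per name.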

 

New vVols VASA 5.0 Spec

Specific features added to vSphere 8 U1

  • Container isolation - Specific to the vendor's VASA provider.
  • Uptime - Supports alarms on certificate changes and a VASA 5.0 workflow that allows the VMCA certificate to be refreshed in multi-VC systems.
  • Better Security for Shared Environment - Specific to the vendor's VASA provider.
  • Backward Compatibility - ESXi hosts that support VASA 5.0 also resolve the self-signed certificate issues and related downtime.
  • Heterogeneous Certificate Configuration - Specific to the vendor's VASA provider.
  • Zero User Intervention - Multi-VC VMCA provisioning does not require the user to install and manage certificates for the VASA Provider; SMS takes care of certificate provisioning.
  • Security Compliance - No self-signed certificates in the trusted roots, solving security compliance issues.

 

VASA 5.0 Feature Details 

Container isolation - Allows a per-vCenter access control policy and even allows containers to be shared with selected vCenters (cross-vCenter migration). This provides better isolation at the container level; the VASA Provider can manage the access rights for a container per vCenter.

Uptime - An invalid or expired certificate causes downtime, and downtime is also possible when multiple vCenters try to register with the same VASA Provider. In a multi-VC setup, a certificate can now be refreshed without any downtime.

Better Security for Shared Environment - All operations can be authenticated in the context of a vCenter, and each vCenter has its own ACL (Access Control List). There are no self-signed certificates in the trust store, so the VASA Provider can be shared in a cloud-like environment; VASA 5.0 provides the access roles to do this.

Backward Compatibility - VASA 5.0 remains backward compatible and gives you control to upgrade according to your security needs. VASA 5.0 can coexist with earlier releases if the vendor supports it.

Heterogeneous Certificate Configuration - Uses only VMCA-signed certificates by default, with no additional CA needed, isolating the vSphere trust domain. VASA 5.0 allows a different configuration for each vCenter (e.g., a self-managed 3rd-party CA-signed certificate on one and a VMCA-managed certificate on another).

Zero User Intervention - Plug and play with automated certificate provisioning, without any additional user intervention and no manual steps to use the VASA Provider with any vCenter.

Workload Separation - Allows load balancing by redirecting the transport for vCenter-specific virtual hosts. Each vCenter can run with its own transport-layer configuration, creating more flexibility to isolate the workflow for each vCenter.

Security Compliance - Non-CA certificates are no longer part of the vSphere Certificate Trust Store. VASA 5.0 requires the VASA Provider to use a CA-signed certificate for VASA communication.

 

Move Sidecar vVols into the config-vvol Instead of Separate vVol Objects

vVols sidecars were created as vVol objects, which introduced the overhead of VASA operations like bind/unbind. Solutions like First Class Disks (FCDs) create a large number of small sidecars, which can degrade vVols performance. Also, since numerous sidecars can be created, they count toward the total number of vVols objects supported on the storage array. To improve performance and scalability, sidecars are now treated as files in the config-vvol, where normal file operations can be performed. Remember, in vSphere 8 we updated how the config-vvol was bound; with the config-vvol remaining bound, this reduces operations and delays, improving both performance and scale.

With this new functionality introduced in this release, there are a few constraints regarding how the VM is created. VMs created on older vSphere releases will continue to work on updated hosts, but the new format will not work with older vSphere releases: newly created VMs or virtual disks using the new config-vVol/namespace format created by ESXi 8 U1 hosts are not supported on ESXi hosts of previous versions.

Please refer to this knowledge base article to learn more about this functionality - New config vVols objects created on ESXi 8.0U1 (90791) (vmware.com)

 

 

Config-vvol Enhancements: Support for VMFS-6 config-vvol with SCSI vVols (Instead of VMFS-5)

The config-vvol, which acts as a directory for the vVols datastore and holds the VM home contents, was capped at 4GB. This restricted the use of folders on the vVols datastore as content repositories. To overcome this, the config-vvol is now created as a 255GB thin-provisioned object. Additionally, VMFS-6 is used as the format for these objects instead of VMFS-5. This enables sidecar files, other VM files, and content libraries to be placed in the config-vvol.

In the image below, you can see the differently sized config-vvols. For the Win10-5 VM, the config-vvol uses the original 4GB format. The Win10-vVol-8u1 VM uses the new 255GB config-vvol format.

image 388

 

Add NVMe-TCP support for vVols

NVMe over TCP support for vVols: in vSphere 8 we added NVMe-FC support for vVols. With vSphere 8 U1, we have validated NVMe-TCP, further enabling the perfect union of NVMeoF and vVols. See the article vVols with NVMe - A Perfect Match | VMware.

 

 

NVMeoF, PSA, HPP

Infrastructure to support End to End NVMe

image 391

This extends NVMe capabilities to support an end-to-end NVMe stack without any SCSI-to-NVMe translation in any of the ESXi layers. Another important aspect of supporting end-to-end NVMe is allowing third-party multi-pathing plug-ins to control and manage NVMe arrays.

This is a substantial final step in enabling the full capability of NVMe. Now, with a supporting guest OS (GOS), the NVMe protocol can be used from the GOS to the end target.

A significant aspect of how VMware implemented the VM storage translation is full backward compatibility support. We have made sure that with any combination of a VM using either a SCSI or vNVMe controller and the target being either SCSI or NVMe, the storage stack path can be translated. This is a key design point, enabling customers to move between SCSI and NVMe datastores without having to change the VM's storage controller. Similarly, a VM with either a SCSI or vNVMe controller will work on both SCSI and NVMeoF datastores.

Simplified storage stack diagram.

image-20230321131256-2

For more information on NVMeoF for vSphere, see the NVMeoF Resource page.

 

Increase Max Paths per NVMe-oF Namespace from 8 to 32.

Increasing the number of paths helps environments scale with multiple paths to NVMe namespaces. This is needed in HA and scalability use cases where hosts can have multiple ports and the storage appliance can have multiple nodes, each with multiple ports.
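To see how many paths a given namespace currently has from a host, the standard PSA path listing can be used; the device identifier below is just a placeholder for an NVMe namespace's eui./t10. ID:

  # List every path to a single NVMe namespace (device ID is a placeholder)
  esxcli storage core path list -d eui.6e8b7c3a2f000001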

 

Increase WSFC clusters per ESXi host from 3 to 16.

Increase the maximum number of WSFC clusters (multi-cluster) running on the same set of ESXi hosts from 3 to 16. By increasing the number of clusters that can run on a single host, this can reduce the number of Microsoft WSFC licenses required.

For more information on Microsoft WSFC on vSphere, here are some resources:

 

 

VMFS

Enhanced XCOPY to Datastores Across Different Storage Arrays.

ESXi now supports Extended XCOPY, which optimizes data copies between datastores across different arrays. This helps customers offload migration and clone workloads to the storage arrays. While vSphere 8 U1 enables this feature, the actual data migration across arrays must be supported on the storage array side.
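One hedged way to confirm from the host that a device advertises the hardware-accelerated copy (XCOPY/clone) primitive is the standard VAAI status query; the device identifier below is a placeholder, and cross-array offload support itself is still determined by the arrays:

  # Check VAAI primitive support, including Clone (XCOPY) status, for a device
  esxcli storage core device vaai status get -d naa.600a098038304437415d4b6a59685a33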

 

 

NFS

NFSv3 vmkPortBinding

This feature provides the ability to bind the NFS connection for a volume to a specific vmkernel NIC. This helps with security by directing NFS traffic across a dedicated subnet/VLAN and ensures NFS traffic does not use the management or other vmkernel interfaces.

Previous NFS mounts will not have these values stored in the config store. During an upgrade, when the configuration is read from the config store, the vmknic and bindToVmnic values are read if present. Because these values are optional, upgrades from previous versions that do not have them are not impacted.

image-20230314112443-1

 

 


 

vSphere 8

New vSphere 8.0 Core Storage enhancements, features, and additions.

vSphere Virtual Volumes (vVols)

image 309

VM Swap improvements

  • Faster Power on/off performance
  • Faster vMotion performance

Changes in how the vVols swap is provisioned/destroyed have helped to improve power on/off as well as vMotion and svMotion performance.

 

Config vVol to remain bound.

  • Helps reduce query times when looking for VM information.
  • Caching various vVol attributes (size, name, etc.)

The config-vvol is where the VM's home data resides (vmx, nvram, logs, etc.), and it is usually accessed only at boot or when changes are made. Previously, we did what was called a lazy unbind and unbound the config-vvol when not in use. Some applications periodically access the config-vvol, and each of those accesses required a new bind operation. Keeping the config-vvol bound reduces the latency of accessing the VM home data.

 

 

NVMeoF vVols

vVols has been the primary focus of VMware storage engineering for the last few releases, and vSphere 8.0 is no different. The biggest announcement in vSphere 8.0 core storage is the addition of vVols support in NVMeoF. Initially, we will support FC only but will continue to validate and support other protocols supported with vSphere NVMeoF. This is based on a new vVols spec and VASA/VC framework: VASA 4.0/vVols 3.0.

The reason for adding vVols support to NVMeoF is that many array vendors, and the industry as a whole, are moving toward using, or at least adding, NVMeoF support for better performance and throughput. Accordingly, VMware is making sure vVols remains current with the latest storage technologies.

Another benefit of NVMeoF vVols is the setup. When deploying, once you register the VASA Provider, the underlying setup is completed in the background; you only need to create the datastore. The virtual Protocol Endpoints (vPEs) and connections are all handled by the VASA Provider, simplifying the setup.

 

Technical Details:

ANA Group (Asymmetric Namespace Access)

With NVMeoF, the implementation of vVols is a bit different. With traditional SCSI-based vVols, the Storage Container is the logical grouping of the vVol objects themselves. With NVMeoF, this varies depending on how the array vendor implements it, but in general, at the array, an ANA Group is a grouping of vVol namespaces. The array decides on the number of ANA Groups, each having a unique ANAGRPID within the NVM subsystem. Namespaces are allocated and active only on a BIND request to the VASA Provider (VP), and they are added to an ANA Group on that BIND request. A namespace remains allocated/active until the last host UNBINDs the vVol.

vPE (virtual Protocol Endpoint)

With traditional SCSI-based vVols, the Protocol Endpoint (PE) is a physical LUN or volume on the array and shows up in the storage devices on the hosts. With NVMeoF vVols, there is no physical PE; the PE is a logical representation of the ANA group where the vVols reside. In fact, until a VM is powered on, the vPE doesn't exist. Once a VM is powered on, the vPE is created so the host can access the vVols in the ANA group. You can see in the diagram that the vPE points to the ANA group on the array.

NS (Namespace, NVMe equivalent to a LUN)

Each vVol type (Config, Swap, Data, Mem) created and used by a VM creates an NS that resides in an ANA group; it's a 1:1 vVol-to-NS ratio. This allows vendors to scale vVols objects easily. Typically, vendors support thousands to even hundreds of thousands of namespaces, with the NS limits determined by the array vendor.

In the diagram, you can see the VM itself is an NS (this would be the config vVol), and the disk is another NS (a data vVol).

image 299

Learn more about the technical details of NVMe-FC and vVols in this blog article vVols with NVMe - A Perfect Match | VMware
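From the ESXi side, a rough way to relate this back to what the host actually sees is to list the NVMe controllers and namespaces once VMs on the vVols datastore are powered on; each bound vVol should show up as a namespace behind a controller in the ANA group (assuming the standard esxcli nvme namespace available on ESXi 7.0 and later):

  # List the NVMe-oF controllers visible to this host
  esxcli nvme controller list

  # List the namespaces behind them (one per bound vVol)
  esxcli nvme namespace list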

 

 

 

NVMeoF Enhancements

image 303

Support 256 namespaces and 2K paths with NVMe-TCP and NVMe-FC

NVMe over Fabrics (NVMeoF) continues to gain popularity for obvious reasons: higher performance and throughput than traditional SCSI or NFS connectivity. Many storage partners are also moving to NVMe arrays, and using SCSI to access NVMe flash is a bottleneck to the potential gains.

Continuing to add features and enhancements, VMware has increased the supported namespaces and paths for both NVMe-FC and NVMe-TCP.

 

image 302

Extend reservation Support for NVMe device

Support for NVMe reservation commands enables solutions such as WSFC. This allows customers to use the Clustered VMDK capability with Microsoft WSFC on NVMeoF datastores. Initially, this is FC only.

 

image 310

Auto-discovery of NVMe Discovery Service support in ESXi

  • Advanced NVMe-oF Discovery Service support in ESXi enables dynamic discovery of standards compliant NVMe Discovery Service.
  • ESXi will use mDNS/DNS-SD service to obtain information such as IP address and port number of active NVMe-oF discovery services on the network.

ESXi sends a multicast DNS (mDNS) query requesting information from entities providing (NVMe) discovery service (DNS-SD). If such an entity is active on the network (on which the query was sent), it will send a (unicast) response to the host with the requested information - IP address and port number where the service is running.
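For reference, discovery can also be pointed at a known Discovery Service manually; the adapter name and IP below are placeholders, 8009 is the well-known NVMe-oF discovery port, and the exact flag spellings are an assumption to check against your ESXi build:

  # Query an NVMe-oF discovery controller directly (values are placeholders)
  esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.20 -p 8009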

 

 

Unmap Space Reclamation Enhancement

Lower minimum reclamation rate to 10 MB/s

Starting with vSphere 6.7, we added a feature to make the unmap rate configurable at the datastore level. With this enhancement, customers can set the unmap rate best suited to their array's capability and vendor recommendation. The higher unmap rates have helped many arrays reclaim space quickly. But we heard from some customers that, even at the lowest unmap rate of 25 MB/s, the rate can be disruptive when multiple hosts send unmap commands concurrently. The disruption can increase when scaling the number of hosts per datastore.

Example of potential overload: 25 MB/s × 100 datastores × 40 hosts ≈ 104 GB/s

To help customers in situations where the 25 MB/s unmap rate can be disruptive, we have reduced the minimum rate to 10 MB/s, configurable per datastore.

image 300

 This allows customers to reduce the potential impact of numerous unmap commands being sent to a single datastore. If needed, you can also disable space reclamation completely for a given datastore.
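As a rough sketch of how this is typically tuned, the per-datastore reclaim settings are read and changed with the VMFS reclaim config commands; the datastore name is a placeholder and the flag spellings below are assumptions to verify against your build:

  # Show the current space-reclamation settings for a datastore
  esxcli storage vmfs reclaim config get -l Datastore01

  # Drop the fixed reclaim bandwidth to the new 10 MB/s minimum (flags assumed)
  esxcli storage vmfs reclaim config set -l Datastore01 --reclaim-method fixed -b 10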

Dedicated Unmap Scheduling Queue

The dedicated unmap scheduling queue allows high-priority VMFS metadata I/Os to be separated and served from their own scheduler queues, preventing them from being starved behind UNMAP commands.

 

Container Storage CNS/CSI

VMFS and vSAN Direct Disk Provisioning Storage Policy.

Choose EZT (eager-zeroed thick), LZT (lazy-zeroed thick), or thin provisioning via SPBM policy for CNS/Tanzu.

The goal is to add SPBM capability to support the creation/modification of storage policy rules to specify volume allocation options. It will also facilitate compliance checks in SPBM regarding the volume allocation rules in a storage policy.

  • Operations supported for virtual disks are: create, reconfigure, clone, and relocate.
  • Operations supported for FCDs are: create, update storage policy, clone, and relocate.

 

image 304

image 305

image 308

Utilize SPBM provisioning rules for volume creation and support compliance checks.

 

 

NFS Enhancements

Engineering is always working to enhance storage resiliency. In vSphere 8, we have added NFS enhancements that increase resilience through services, checks, and permission validations.

  • Retry NFS mounts on failure
  • NFS mount validation

 

 
