What's New with vSphere 8 Core Storage

vSphere 8

New vSphere 8.0 Core Storage enhancements, features, and additions.

NVMeoF vVols

vVols has been a primary focus of VMware storage engineering for the last few releases, and vSphere 8.0 is no different. The biggest announcement in vSphere 8.0 core storage is the addition of vVols support with NVMeoF. Initially, only FC is supported, but we will continue to validate and support the other protocols supported with vSphere NVMeoF. This requires a new vVols spec and VASA/VC framework: VASA 4.0/vVols 3.0.

The reason for adding vVols support to NVMeoF is that many array vendors, and the industry at large, are moving toward NVMeoF, or at least adding support for it, for better performance and throughput. Accordingly, VMware is making sure vVols remains current with the latest storage technologies.

Another benefit of NVMeoF vVols is the simplified setup. When deploying, once you register the VASA provider, the underlying setup is completed in the background; you only need to create the datastore. The virtual Protocol Endpoints (vPEs) and connections are all handled by the VASA provider, simplifying the setup.

 

Technical Details:

ANA Group (Asymmetric Namespace Access)

With NVMeoF, the implementation of vVols is a bit different. With traditional SCSI-based vVols, the Storage Container is the logical grouping of the vVol objects themselves. With NVMeoF, this will vary depending on how the array vendor implements it, but in general, at the array, an ANA Group is a grouping of vVol namespaces. The array decides on the number of ANA Groups, each having a unique ANAGRPID within the NVM subsystem. Namespaces are allocated, made active, and added to an ANA Group only on a BIND request to the VASA Provider (VP). A namespace remains allocated and active until the last host UNBINDs the vVol.
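To make that BIND/UNBIND lifecycle concrete, here is a minimal, purely illustrative Python sketch of an ANA group tracking its vVol namespaces. The class and method names are invented for this example and are not part of the VASA 4.0 API; the point is only that a namespace becomes active on the first BIND and is released when the last host UNBINDs it.

```python
# Illustrative model only: how an array might track vVol namespaces inside
# an ANA group, activating a namespace on the first BIND and releasing it
# only when the last host has sent an UNBIND. Names are invented for this
# sketch; they are not part of the VASA 4.0 / vVols 3.0 API.
from dataclasses import dataclass, field


@dataclass
class AnaGroup:
    ana_grp_id: int                                   # unique ANAGRPID within the NVM subsystem
    namespaces: dict = field(default_factory=dict)    # vvol_id -> set of bound host ids

    def bind(self, vvol_id: str, host_id: str) -> None:
        """BIND: allocate/activate the namespace for this vVol and record the host."""
        self.namespaces.setdefault(vvol_id, set()).add(host_id)

    def unbind(self, vvol_id: str, host_id: str) -> None:
        """UNBIND: remove the host; release the namespace when no hosts remain."""
        hosts = self.namespaces.get(vvol_id)
        if hosts is None:
            return
        hosts.discard(host_id)
        if not hosts:                                  # last host unbound -> namespace released
            del self.namespaces[vvol_id]


# Example: two hosts bind the same data vVol; the namespace stays active
# until the second host unbinds it.
group = AnaGroup(ana_grp_id=1)
group.bind("data-vvol-01", "esxi-host-a")
group.bind("data-vvol-01", "esxi-host-b")
group.unbind("data-vvol-01", "esxi-host-a")
print("data-vvol-01" in group.namespaces)   # True - still bound by esxi-host-b
group.unbind("data-vvol-01", "esxi-host-b")
print("data-vvol-01" in group.namespaces)   # False - released after the last UNBIND
```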

vPE (virtual Protocol Endpoint)

With traditional SCSI-based vVols, the Protocol Endpoint (PE) is a physical LUN or volume on the array and shows up among the storage devices on the hosts. With NVMeoF vVols, there is no physical PE; the PE is now a logical object representing the ANA group where the vVols reside. In fact, until a VM is powered on, the vPE doesn't exist. Once a VM is powered on, the vPE is created so the host can access the vVols in the ANA group. You can see in the diagram that the vPE points to the ANA group on the array.

NS (Namespace, the NVMe equivalent of a LUN)

Each vVol type (Config, Swap, Data, Mem) created and used by a VM creates an NS that resides in an ANA group; the ratio of vVols to namespaces is 1:1. This allows vendors to scale vVol objects easily. Typically, vendors support thousands or even hundreds of thousands of namespaces; the NS limits depend on the array vendor.

Here in the diagram, you can see the VM itself is an NS (this would be the Config vVol), and the disk is another NS (a Data vVol).

[Diagram: a VM's vVols as NVMe namespaces within an ANA group, accessed through the vPE]

 

 

NVMeoF Enhancements


Support for 256 namespaces and 2K paths with NVMe-TCP and NVMe-FC

NVMe over Fabrics (NVMeoF) continues to gain popularity for obvious reasons: higher performance and throughput than traditional SCSI or NFS connectivity. Many storage partners are also moving to NVMe arrays, and using SCSI to access NVMe flash is a bottleneck that limits the potential gains.

  • Continuing to add features and enhancements, VMware has increased the supported namespaces and paths for both NVMe-FC and NVMe-TCP.

 


Extended reservation support for NVMe devices

vSphere 8.0 supports NVMe reservation commands to enable solutions such as Windows Server Failover Clustering (WSFC). This allows customers to use the Clustered VMDK capability with Microsoft WSFC on NVMeoF datastores, initially with FC only.

 


Auto-discovery of NVMe Discovery Service support in ESXi

  • Advanced NVMe-oF Discovery Service support in ESXi enables dynamic discovery of standards-compliant NVMe Discovery Services.
  • ESXi uses the mDNS/DNS-SD service to obtain information such as the IP address and port number of active NVMe-oF discovery services on the network.

ESXi sends a multicast DNS (mDNS) query requesting information from entities providing the NVMe discovery service via DNS-SD. If such an entity is active on the network on which the query was sent, it sends a unicast response to the host with the requested information: the IP address and port number where the service is running.
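As an illustration of that flow, the sketch below browses for DNS-SD advertisements using the third-party Python zeroconf package. The service type "_nvme-disc._tcp.local." is assumed here to be the registration used by NVMe-oF discovery controllers; this shows the mechanism in general terms, not ESXi's internal implementation.

```python
# Sketch of mDNS/DNS-SD based discovery of an NVMe-oF discovery service,
# using the third-party "zeroconf" package (pip install zeroconf).
# "_nvme-disc._tcp.local." is the assumed DNS-SD service type.
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

SERVICE_TYPE = "_nvme-disc._tcp.local."


class NvmeDiscoveryListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            # The unicast response carries the IP address and port of the
            # active NVMe-oF discovery service.
            print(f"Discovery service {name}: "
                  f"{info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"Discovery service removed: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass


zc = Zeroconf()
browser = ServiceBrowser(zc, SERVICE_TYPE, NvmeDiscoveryListener())
input("Browsing for NVMe-oF discovery services, press Enter to stop...\n")
zc.close()
```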

 

 

vVols


VM Swap improvements

  • Faster Power on/off performance
  • Faster vMotion performance

Changes in how the vVols swap is provisioned/destroyed have helped to improve power on/off as well as vMotion and svMotion performance.

 

Config vVol remains bound

  • Helps reduce query times when looking for VM information.
  • Caches various vVol attributes (size, name, etc.)

The config-vvol is where the VM's home data (.vmx, NVRAM, logs, etc.) resides and is usually accessed only at boot or when a change is made. Previously, we performed what was called a lazy unbind and unbound the config-vvol when it was not in use. But some applications periodically access the config-vvol, and each access then required a new bind operation. Keeping the config-vvol bound reduces latency in accessing the VM home data.

 

 

Unmap Space Reclamation Enhancement

Minimum reclamation rate lowered to 10 MB/s

Starting with vSphere 6.7, we added a feature to make the unmap rate configurable at the datastore level. With that enhancement, customers may set the unmap rate best suited to their array's capability and vendor recommendation. Higher unmap rates have helped many arrays reclaim space quickly. But we heard from some customers that even the lowest unmap rate of 25 MB/s can be disruptive when multiple hosts send unmap commands concurrently, and the disruption can increase as the number of hosts per datastore scales up.

Example of potential overload: 25 MB/s × 100 datastores × 40 hosts ≈ 104 GB/s

To help customers in situations where the 25 MB/s unmap rate can be disruptive, we have reduced the minimum rate to 10 MB/s, configurable per datastore.
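A quick back-of-the-envelope check of the overload example above, assuming the rate applies per host and per datastore, and interpreting MB/s as MiB/s to reproduce the ~104 GB/s figure:

```python
# Worst-case aggregate unmap bandwidth when every host sends unmaps to
# every datastore at the configured per-datastore rate.
def aggregate_unmap_gb_s(rate_mib_s: float, datastores: int, hosts: int) -> float:
    # MiB/s per datastore per host -> total GB/s (decimal gigabytes)
    return rate_mib_s * datastores * hosts * 2**20 / 1e9

print(round(aggregate_unmap_gb_s(25, 100, 40), 1))   # ~104.9, the ~104 GB/s example above
print(round(aggregate_unmap_gb_s(10, 100, 40), 1))   # ~41.9 with the new 10 MB/s minimum
```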


 This allows customers to reduce the potential impact of numerous unmap commands being sent to a single datastore. If needed, you can also disable space reclamation completely for a given datastore.

Dedicated Unmap Scheduling Queue

The dedicated unmap scheduling queue allows high-priority VMFS metadata I/Os to be served from separate scheduler queues so they do not get starved behind UNMAP commands.

 

Container Storage CNS/CSI

VMFS and vSAN Direct Disk Provisioning Storage Policy

Choose EZT, LZT, or Thin provisioning via SPBM policy for CNS/Tanzu.

The goal is to add an SPBM capability to support the creation and modification of storage policy rules that specify volume allocation options (Eager Zeroed Thick, Lazy Zeroed Thick, or Thin). It also facilitates compliance checks in SPBM against the volume allocation rules in a storage policy.

  • Operations supported for virtual disks are: create, reconfigure, clone, and relocate.
  • Operations supported for FCDs are: create, update storage policy, clone, and relocate.
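
As an example of how a CNS consumer might pick up one of these allocation rules, the sketch below creates a Kubernetes StorageClass that references an SPBM policy by name, using the Kubernetes Python client against the vSphere CSI provisioner. The policy name "ezt-vmfs-policy" is hypothetical and is assumed to contain an Eager Zeroed Thick allocation rule defined in vCenter.

```python
# Minimal sketch: a StorageClass for the vSphere CSI driver that references
# an SPBM policy carrying one of the new volume allocation rules.
# The policy name "ezt-vmfs-policy" is hypothetical.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vmfs-ezt"),
    provisioner="csi.vsphere.vmware.com",
    # The referenced SPBM policy is assumed to include an
    # Eager Zeroed Thick allocation rule defined in vCenter.
    parameters={"storagepolicyname": "ezt-vmfs-policy"},
)

client.StorageV1Api().create_storage_class(storage_class)
print("StorageClass vmfs-ezt created.")
```

PVCs provisioned with such a StorageClass would be allocated according to the policy's allocation rule, and SPBM compliance checks then apply to the resulting volumes.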

 


Utilize SPBM provisioning rules for volume creation and support compliance checks.

 

 

NFS Enhancements

Engineering is always working to enhance storage resiliency. In vSphere 8, we have added NFS enhancements that increase resilience through retries, checks, and permission validations.

  • Retry NFS mounts on failure
  • NFS mount validation

 

 
