What’s New in vVols and Core Storage in vSphere 8 Update 1
VMware is announcing the upcoming release of vSphere 8 Update 1, which brings a number of vVols and core storage enhancements.
- vVols adds support for the new VASA 5 spec, which simplifies certificate management and multi-vCenter deployments. We have also made changes that improve performance and scale, and added support for NVMe/TCP.
- NVMe-oF has some exciting news: end-to-end NVMe support. With a supporting guest OS (GOS), the NVMe transport/protocol can now be used all the way from the guest to the target array. NVMe-oF scale maximums have also been increased.
- For VMFS, we’ve added support for Extended XCOPY, which offloads data copies between datastores on different arrays. We have also increased the number of Microsoft WSFC clusters supported per host.
- NFS adds a long-awaited feature: vmkernel port binding for NFSv3. This makes it possible to bind the NFS connection for a volume to a specific vmkernel adapter.
vSphere Virtual Volumes (vVols)
Continuing with vVols as a priority, we have added new features, capabilities, and enhancements in vSphere 8 Update1.
Multi VC deployment for VASA Provider without Self-Signed Certificate
A new VASA spec, version 5, has been released to add features and capabilities to vVols. VASA 5 was developed to enhance vVols certificate management, so multi-vCenter deployments no longer depend on self-signed certificates.
New vVols VASA 5.0 Spec Features
- Container isolation
- Enhanced uptime
- Better security for shared environments
- Backward compatibility
- Heterogeneous Certificate Configuration
- Zero user intervention
- Workload separation
- Security compliance
Move Sidecar vVols into the config-vvol instead of separate vVol objects.
vVols sidecars were previously created as individual vVol objects, which introduced the overhead of VASA operations such as bind/unbind for each one. To improve performance and scalability, sidecars are now stored as files in the config-vvol, where normal file operations can be performed.
Config-vvol Enhancements: Support for VMFS6 config-vvol with SCSI vVols
The config-vvol, which acts as the directory for the VM home contents on a vVols datastore, was capped at 4GB. This restricted the use of folders on a vVols datastore as content repositories. To overcome this, config-vvols are now created as 255GB thin-provisioned objects.
Here you can see the different config-vvol sizes: the Win10-5 VM uses the original 4GB format, while the Win10-vVol-8u1 VM uses the new 255GB config-vvol format.
Add NVMe-TCP support for vVols
In vSphere 8 we added NVMe-FC support for vVols. With vSphere 8 U1, we have validated NVMe-TCP as well, further enabling the perfect union of NVMe-oF and vVols. See the article vVols with NVMe - A Perfect Match | VMware.
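As a rough sketch of how an NVMe/TCP connection might be set up from an ESXi host (the adapter name vmhba65, the portal IP 192.168.100.10, and the subsystem NQN below are placeholders; exact option spellings can vary by release, so verify with `esxcli nvme fabrics --help`):

```shell
# List the NVMe adapters on the host to identify the NVMe/TCP storage adapter
esxcli nvme adapter list

# Discover NVMe/TCP controllers behind the array's discovery service
# (vmhba65 and 192.168.100.10 are placeholder values)
esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.10 -p 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN shown)
esxcli nvme fabrics connect -a vmhba65 -i 192.168.100.10 -p 4420 \
  -s nqn.2014-08.com.example:subsystem1
```

Once the host is connected, the namespaces the array exposes become visible to the host like any other NVMe-oF storage.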
NVMe-oF, PSA, HPP
Infrastructure to support End to End NVMe.
NVMe capabilities have been extended to support an end-to-end NVMe stack, with no SCSI-to-NVMe translation in any of the ESXi layers. This is a huge final step in enabling the full capability of NVMe: with a supporting guest OS, the NVMe protocol can now be used from the guest all the way to the end target.
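From inside a Linux guest with an NVMe controller presented to it, you can confirm the guest really sees NVMe devices rather than SCSI ones. This sketch assumes the nvme-cli package is installed in the guest:

```shell
# Block devices named nvme0n1, nvme1n1, ... indicate the NVMe stack is in use
lsblk

# nvme-cli lists the NVMe namespaces the guest OS sees
nvme list

# Controller details (model, firmware) for the first NVMe controller
nvme id-ctrl /dev/nvme0
```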
Increase Max Paths per NVMe-oF Namespace from 8 to 32.
Increasing the path limit helps scale environments with multiple paths to NVMe namespaces. This matters for HA and scalability use cases, where hosts can have multiple ports and the array can have multiple nodes per appliance and multiple ports per node.
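To see how many paths a host actually has to a given namespace, the standard path listing applies (the `eui.` device identifier below is a placeholder):

```shell
# List all paths the host sees; NVMe-oF namespaces are claimed by the HPP
esxcli storage core path list

# Or narrow the output to a single device (placeholder device identifier)
esxcli storage core path list -d eui.0123456789abcdef
```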
Increase the Number of WSFC Clusters per ESXi host from 3 to 16.
The maximum number of WSFC clusters (multi-cluster) running on the same set of ESXi hosts has been increased from 3 to 16. This can reduce the number of Microsoft WSFC licenses required by allowing more clusters to run on a single host.
Enhanced XCOPY to Datastores Across Different Storage Arrays.
ESXi now supports Extended XCOPY which optimizes the data copy between Datastores across different arrays.
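Extended XCOPY builds on the existing VAAI clone offload. A quick way to confirm hardware-accelerated copy is enabled on a host (1 is the default):

```shell
# Show the current hardware-accelerated move (XCOPY) setting
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Re-enable it if it was turned off (1 = enabled)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
```

Whether a given pair of arrays supports cross-array offload is determined by the arrays themselves, so check your storage vendor’s documentation as well.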
NFSv3 vmkernel Port Binding
This feature provides the ability to bind the NFS connection for a volume to a specific vmkernel adapter. This helps with security by directing NFS traffic across a dedicated subnet/VLAN and ensures NFS traffic does not use the management or other vmkernel interfaces.
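A sketch of mounting an NFSv3 datastore bound to a dedicated vmkernel adapter. The server address, export path, datastore name, and vmk2 are placeholders, and the exact name of the binding option is an assumption on our part, so confirm it with `esxcli storage nfs add --help`:

```shell
# A dedicated vmkernel adapter (vmk2) on the NFS subnet/VLAN is assumed to exist
esxcli network ip interface ipv4 get -i vmk2

# Mount the NFSv3 datastore bound to that adapter; the binding option name
# shown here (--vmknic) may differ by build -- verify with --help first
esxcli storage nfs add -H 192.168.50.10 -s /export/ds01 -v nfs-ds01 --vmknic vmk2
```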
For more details, please see the full technical article here.