iSCSI Path Limit increase
One of the enhancements in the vSphere 7 Update 2 release I’m sure many customers will be thrilled about is the iSCSI path limit increase. Until this release, the iSCSI path limit was 8 paths per LUN, and many customers ended up going over it. Whether from multiple VMkernel ports or multiple targets, customers often ended up with 16 or 24 paths. I’m excited to announce that with vSphere 7.0 U2, the new iSCSI path limit is now 32 paths per LUN.
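As a quick way to see how many paths a given LUN is using, you can count them from the ESXi shell. This is a rough sketch; the `naa.` device identifier below is a placeholder for one of your own LUNs.

```shell
# List every path for a specific LUN (replace the naa. identifier with your own device)
esxcli storage core path list -d naa.600a098038303634722b4b6c35474d61

# Each path entry reports its device, so counting those lines gives the path total.
# With vSphere 7.0 U2 this can now be as high as 32 per LUN.
esxcli storage core path list -d naa.600a098038303634722b4b6c35474d61 | grep -c "Device:"
```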
RDM support for RHEL HA
A few changes were needed to enable Red Hat Enterprise Linux (RHEL) HA to use RDMs in vSphere. With the release of vSphere 7.0 U2, RHEL HA is now supported on RDMs.
VMFS SESparse Snapshot Improvements
Read performance has been improved by directing reads to where the data resides rather than traversing the delta-disk snapshot chain on every request. Previously, if a read came into a virtual machine with snapshots, the read would traverse the snapshot chain and then the base disk. Now, when a read comes in, a filter directs it to either the snapshot chain or the base disk, reducing read latency.
Multiple Paravirtual RDMA (PVRDMA) adapter support
In vSphere 6.7, we announced support for RDMA in vSphere. One of the limitations was that only a single PVRDMA adapter was supported per virtual machine. With the release of vSphere 7.0 U2, we now support multiple PVRDMA adapters per VM.
Performance Improvements on VMFS
With the release of vSphere 7.0 U2, we have made performance improvements to VMFS, specifically for first writes on thin-provisioned disks. Combined with the enhancements in Affinity 2.0, these changes further reduce the first-write impact, improving performance for backup and restore, copy operations, and Storage vMotion in certain instances.
NFS Array Snapshot Improvements
Previously, NFS required a clone to be created first for a newly created VM; only subsequent snapshots could be offloaded to the array. With the release of vSphere 7.0 U2, NFS array snapshots of full, non-cloned VMs no longer use redo logs; instead, they use the snapshot technology of the NFS array to provide better snapshot performance. This removes the requirement to create a clone first and enables the first snapshot to be offloaded to the array as well.
HPP Fast Path Support for Fabric Devices
With the release of vSphere 7.0 U2, HPP is now the default plugin for NVMe devices. The plugin comes with two options: SlowPath, with legacy behavior and VM fairness capabilities, and the newly added FastPath, designed to provide better performance than SlowPath with some restrictions. Even in SlowPath mode, HPP can often outperform NMP for the same device because IOs are handled in batches, which helps reduce lock contention and CPU overhead in the IO path. There are some limitations to when FastPath will apply, so it is mostly intended for limited use cases. FastPath is enabled by setting a latency-sensitive threshold, below which FastPath is allowed to operate. Once the device latency goes above the threshold, HPP falls back to SlowPath, ensuring fairness is respected when latency has a higher impact. You can see how to set the latency-sensitive threshold in the link here.
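As a sketch of how the latency-sensitive threshold might be set from the ESXi shell (the device identifier and the 50 ms value are example placeholders; check the `esxcli storage core device latencythreshold` namespace on your host for the exact options):

```shell
# Enable FastPath by setting a latency-sensitive threshold for a device
# (device identifier and 50 ms threshold are example values)
esxcli storage core device latencythreshold set -d naa.600a098038303634722b4b6c35474d61 -t 50

# Confirm the thresholds currently applied
esxcli storage core device latencythreshold list
```

Once the observed device latency exceeds the configured threshold, HPP reverts that device to SlowPath until latency drops again.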
HPP as the default plugin for vSAN
With the release of vSphere 7.0 U2, HPP is now the default MPP for all devices (SAS/SATA/NVMe) used with vSAN. Note that HPP is also the default plugin for NVMe fabric devices. This infrastructure improvement ensures vSAN uses the improved storage plugin and can take advantage of its performance benefits.
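To confirm which devices HPP has claimed on a host, commands along these lines can be used (a sketch; exact output columns vary by build):

```shell
# List devices currently claimed by the High-Performance Plug-in (HPP)
esxcli storage hpp device list

# For comparison, devices still claimed by the legacy NMP
esxcli storage nmp device list
```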
VOMA Support for Spanned VMFS Volumes
vSphere On-disk Metadata Analyzer (VOMA) is used to identify and fix metadata corruption affecting the file system or underlying logical volumes. With the release of vSphere 7.0 U2, VOMA support has now been enabled for spanned VMFS volumes. For more information on VOMA, see the VMware Docs article here.
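A typical VOMA metadata check looks like the following. The partition path is a placeholder for the head extent of your (possibly spanned) VMFS volume, and the datastore should be offline while the check runs.

```shell
# Check VMFS metadata, pointing VOMA at the head extent of the volume
# (the naa. identifier and partition number are example placeholders)
voma -m vmfs -f check -d /vmfs/devices/disks/naa.600a098038303634722b4b6c35474d61:1
```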
Support for Higher Queue Depth with vVols Protocol Endpoints
In some cases, the Disk.SchedNumReqOutstanding (DSNRO) configuration parameter did not match the queue depth of the vVols Protocol Endpoint (PE), VVolPESNRO. With the release of vSphere 7.0 U2, the default queue depth for the PE is now 256 or the maxQueueDepth of the exposed LUN, whichever is greater. So the default minimum PE queue depth is now 256.
Create larger than 4GB Config vVol
With the release of vSphere 7.0 U2, the Config vVol can now be larger than the default 4GB, allowing partners to store images for automated builds.
vVols with CNS and Tanzu
SPBM Multiple Snapshot Rule Enhancements
With vVols, Storage Policy Based Management gives the VI admin autonomy to manage storage capabilities, at a VM level, via policy. With the release of vSphere 7.0 U2, we have enabled our vVols partners to support multiple snapshot rules in a single SPBM storage policy. This feature will need to be supported in the respective VASA providers that enable snapshot policies to be constructed. When supported by our vVols partners, it will be possible to have a single policy with multiple rules with different snapshot intervals.
32 Snapshot support for Cloud Native Storage (CNS) for First Class Disks
Persistent Volumes (PV) are created in vSphere as First-Class Disks (FCD). FCDs are independent disks with no VM attached. With the release of vSphere 7.0 U2, we are adding support for up to 32 snapshots per FCD. This enables you to create snapshots of your K8s PVs and complements the SPBM multiple snapshot rules.
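From the Kubernetes side, an FCD snapshot of a PV is requested through the standard CSI snapshot API. The sketch below assumes the vSphere CSI driver with snapshot support and the external-snapshotter CRDs are installed; the PVC and VolumeSnapshotClass names are placeholders.

```shell
# Request a snapshot of an existing PVC backed by a vSphere FCD
# (names are example placeholders; requires the snapshot.storage.k8s.io CRDs)
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-pvc-snapshot
spec:
  volumeSnapshotClassName: example-vsphere-snapshotclass
  source:
    persistentVolumeClaimName: demo-pvc
EOF

# Verify the snapshot object was created and is ready to use
kubectl get volumesnapshot demo-pvc-snapshot
```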
CNS PV to vVol mapping
In some cases, customers may want to see which vVol is associated with which CNS Persistent Volume (PV). With the release of vSphere 7.0 U2, in the CNS UI you can now see a mapping of the PV to its corresponding vVol FCD.