Problem: Slow VDDK performance when reading/writing sequentially to virtual disk on target with high latency.
When using the NBD (network block device) protocol for backups on storage with higher latencies, backup performance can be much slower than traditional IO. Beginning with vSphere 6.7 U3, NFC (network file copy) AIO (asynchronous IO) options are available to tune NBD for better throughput, and vSphere 7.0 U3 adds further configurable options to enhance backup performance. This article provides an overview of settings for improving NBD backup performance, along with links to resources.
NBD transport is the most universal of the VDDK (Virtual Disk Development Kit) backup transport modes. No dedicated backup proxy VM is required, and it works with all datastore types, so many customers use NBD for their backups. When backups use NBD with vVols or VMFS, they may be limited to a single outstanding IO at a time: only when that IO completes can the next be started. It is more efficient to keep more IO queued, giving the array work to process immediately after each completion. This becomes more important as latency on the array increases.
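The effect of queue depth under latency can be sketched with back-of-the-envelope arithmetic (this is an illustration of the queuing argument above, not part of the VDDK API; the function name and numbers are made up for the example):

```python
# Illustrative arithmetic only: why queue depth matters on high-latency links.
# With a single outstanding IO, each round trip must complete before the next
# request is issued, so throughput is bounded by latency. Queuing more IOs
# keeps the array busy until bandwidth, not latency, becomes the limit.

def nbd_throughput_mbps(io_size_kb, latency_ms, queue_depth):
    """Upper-bound throughput in MB/s for a given IO size, round-trip
    latency, and number of outstanding IOs (ignores bandwidth limits)."""
    ios_per_second = queue_depth * (1000.0 / latency_ms)
    return ios_per_second * io_size_kb / 1024.0

# 64 KB IOs over a 10 ms round trip:
single = nbd_throughput_mbps(64, 10, queue_depth=1)   # latency-bound
queued = nbd_throughput_mbps(64, 10, queue_depth=16)  # 16x the ceiling
```

With one outstanding IO the link is idle for almost the entire round trip; sixteen outstanding IOs raise the throughput ceiling sixteen-fold, which is why AIO helps most as array latency grows.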
Option 1: Get a faster array 😊.
Option 2: Utilize VIXDISKLIB_FLAG_OPEN_UNBUFFERED and NFC AIO configurable options
NFC compression flags
For environments with vSphere 6.5 and above, NFC compression flags can be configured. In environments where the data compresses well, NBD performance can be significantly improved using data compression.
- VIXDISKLIB_FLAG_OPEN_COMPRESSION_ZLIB – zlib compression
- VIXDISKLIB_FLAG_OPEN_COMPRESSION_FASTLZ – fastlz compression
- VIXDISKLIB_FLAG_OPEN_COMPRESSION_SKIPZ – skipz compression
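These flags select a compression codec inside VDDK itself; whether they help depends entirely on how compressible the data is. As a standalone illustration (not the VDDK API), Python's zlib module, the same algorithm named by VIXDISKLIB_FLAG_OPEN_COMPRESSION_ZLIB, shows the difference:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original (smaller is better)."""
    return len(zlib.compress(data)) / len(data)

block = 64 * 1024
zeros = bytes(block)       # e.g. unallocated regions of a virtual disk
noise = os.urandom(block)  # stand-in for encrypted or pre-compressed content

# Zero-filled blocks shrink dramatically, so compression cuts bytes on the
# wire; random data does not compress, so the flag buys nothing there.
zero_ratio = compression_ratio(zeros)
noise_ratio = compression_ratio(noise)
```

This is why the article's caveat matters: compression improves NBD throughput only "in environments where the data compresses well."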
Configurable NFC AIO buffer options since VDDK 7.0 U3 and VDDK 6.7 U3 EP1
For environments with vSphere 6.7 U3 EP1 or vSphere 7.0 U3 or later, asynchronous I/O for NBD transport mode is available. AIO can improve data transfer speed of NBD transport mode. Performance can increase upwards of 4x in 7.0 U2 or later.
With VDDK 7.0.3, users can configure asynchronous I/O buffers for NBD(SSL) transport. With high-latency storage, backup and restore performance may improve after increasing the NFC AIO buffer size. In our testing, a 1 MB buffer gave good results.
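For example, a 1 MB AIO buffer (16 × 64 KB) can be requested through the vixDiskLib.nfcAio.Session keys in the VDDK configuration file passed at initialization; the buffer count shown here is illustrative, not a recommendation:

```
# VDDK configuration file (fragment, example values)
# 16 x 64 KB = 1 MB per AIO buffer
vixDiskLib.nfcAio.Session.BufSizeIn64KB=16
vixDiskLib.nfcAio.Session.BufCount=4
```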
Parallel jobs on one NFC server
ESXi hosts have two NFC servers: one in hostd and the other in vpxa. For connections through vCenter Server, VDDK as an NFC client connects to the NFC server in vpxa. For connections directly to ESXi hosts, VDDK connects to the NFC server in hostd. The NFC server tuning described below applies to hostd, not to vpxa.
If programs connect directly to ESXi hosts, the NFC server memory limit in hostd can be increased from the default 48 MB by editing the /etc/vmware/hostd/config.xml file. If programs connect through vCenter, the NFC memory limit in vpxa is not configurable.
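A sketch of the relevant hostd setting, assuming the nfcsvc section layout described in the VDDK programming guide; the value is in bytes, and the figure below (96 MB) is only an example. Restart hostd after editing for the change to take effect:

```xml
<!-- /etc/vmware/hostd/config.xml (fragment, illustrative values) -->
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <!-- raise the NFC memory limit from the default 48 MB to 96 MB -->
  <maxMemory>100663296</maxMemory>
</nfcsvc>
```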
If connecting through vCenter Server, VMware recommends backing up 50 or fewer disks in parallel per host. The NFC server cannot service an unlimited number of simultaneous requests; excess requests are queued until earlier ones complete.
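The 50-disk guideline can also be enforced on the client side. A hypothetical sketch (backup_disk stands in for the real per-disk NBD transfer, which it is not) shows one way to cap concurrency with a worker pool so the host-side NFC server is not forced to queue:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_DISKS_PER_HOST = 50  # per the vCenter recommendation above

def backup_disk(disk_path: str) -> str:
    """Placeholder for the real per-disk NBD backup (hypothetical)."""
    return f"backed up {disk_path}"

def backup_host(disk_paths):
    # At most 50 disks are in flight at once; the remaining disks simply
    # wait for a free worker slot instead of piling up on the NFC server.
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL_DISKS_PER_HOST) as pool:
        return list(pool.map(backup_disk, disk_paths))

results = backup_host([f"[datastore] vm{i}/vm{i}.vmdk" for i in range(120)])
```

Throttling in the client keeps the queueing visible and controllable in the backup application rather than hidden inside the ESXi host.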
With the proper NFC configuration on ESXi, a good backup client using AIO, and larger buffers, we have seen substantial performance improvements, sometimes as high as 10x with vVols.
1) It is highly recommended that the VDDK IO buffer size be a multiple of vixDiskLib.nfcAio.Session.BufSizeIn64KB to achieve better performance.
2) Memory consumption in the NFC server increases with larger NFC AIO buffer sizes and buffer counts.
Refer to the VDDK programming guide for more information.
- VDDK 7.0.3 Release Notes
- Virtual Disk Development Kit Programming Guide (7.0.3)
- Virtual Disk Development Kit Programming Guide - VMware vSphere 7.0 U3 Doc
- Virtual Disk Development Kit (VDDK)
- Best Practices for NBD Transport Summary
- Best Practices for NBD Transport Docs
- NBDSSL Transport
- Asynchronous Mode NBDSSL
- Virtual Disk Transport Methods
- Backup speed is slow over NBD transport mode for VMs on high-latency storage (83401)
- Impacted performance over NBD/NBDSSL (86269)