Configuring NVMeoF TCP

Configuring NVMe-TCP in vSphere is simple and doesn’t require special hardware. NVMe-TCP runs over standard Ethernet and can be converged with other traffic. As a best practice, dedicate NICs to NVMe-TCP for maximum performance, although this is not required. Note that NVMe-TCP, and NVMe in general, can consume much, if not all, of the available bandwidth. Consequently, converging NVMe-TCP with other traffic without sufficient bandwidth could impact that other traffic.

This article will detail the process of setting up NVMe-TCP in vSphere.

Network Requirements

Before you configure the storage piece, you must first configure the network. Port binding is recommended for NVMe-TCP, so you will need to create a VMkernel adapter for each NIC you are using.


Configure VMkernel Binding for the NVMe over TCP Adapter

If your array target controllers are on the same VLAN/subnet, you can use a single switch with multiple Portgroups. If your array target controllers are on separate VLANs/subnets, you need a separate switch for each VLAN/subnet. The setup for NVMe-TCP is similar to iSCSI, the difference being the virtual NVMe adapters: you will create a virtual NVMe adapter for each VMkernel/NIC pair used for NVMe-TCP.

In this example, the array controllers are on the same VLAN/subnet, so I only needed to create a Portgroup in the existing switch for each uplink being used for NVMe-TCP. I am converging on a 10Gb link for the example, but again, make sure you have adequate bandwidth when converging network traffic.


Network Portgroup Configuration

Reviewing the Portgroup setup, you will see each NIC is explicitly active with no failover. For each NIC used, set up a Portgroup in which that NIC is active and all other NICs are unused.
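The same Portgroup layout can also be created from the ESXi command line. This is only a sketch; the switch name (vSwitch0), portgroup names, and uplinks (vmnic4/vmnic5) are placeholders for your own environment.

```shell
# Create one portgroup per uplink on the existing standard switch
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=NVMe-TCP-PG-A
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=NVMe-TCP-PG-B

# Pin each portgroup to a single active uplink; uplinks not listed as
# active or standby are treated as unused for that portgroup
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NVMe-TCP-PG-A --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NVMe-TCP-PG-B --active-uplinks=vmnic5
```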





VMkernel Configuration

Once the Portgroups have been created, you can set up a VMkernel adapter for each NIC used. Under VMkernel adapters on your host, add a new VMkernel adapter.



Select one of the Portgroups you created for NVMe-TCP. Remember you will do this for each NIC/VMkernel pair used.



Under the Port properties, select the NVMe over TCP service. On this screen, you can also change the default MTU to match what your network uses.



On the next screen, enter the IP information for the VMkernel. Another best practice is to avoid routing this traffic if possible, as each hop can add latency.



Once you finish entering the data and click Finish, you will have created a VMkernel for NVMe-TCP. Repeat this process for all NIC/VMkernel pairs to be used for NVMe-TCP.
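For reference, the same VMkernel setup can be done with esxcli. This is a sketch: the vmk numbers, portgroup names, and addresses are examples, and the NVMeTCP interface tag assumes ESXi 7.0 U3 or later.

```shell
# Create a VMkernel interface on each NVMe-TCP portgroup
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NVMe-TCP-PG-A
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.100.11 --netmask=255.255.255.0

# Enable the NVMe over TCP service on the interface
esxcli network ip interface tag add --interface-name=vmk2 --tagname=NVMeTCP

# Repeat for the second NIC/VMkernel pair (e.g., vmk3 on NVMe-TCP-PG-B)
```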




Configuring NVMe-TCP Adapters

After completing the NIC/VMkernel setup, you can add an NVMe over TCP adapter for each VMkernel/NIC pair you created. In the host configuration, under Storage Adapters, click ADD SOFTWARE ADAPTER and select NVMe over TCP.



On the Add Software NVMe over TCP adapter screen, select one of the NICs you configured for NVMe-TCP. Again, you will add an SW NVMe-TCP adapter for each NIC you configured previously.



In this example, we configured two NICs for NVMe-TCP, so we will have two SW NVMe over TCP adapters.
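The software adapters can also be created from the command line. A sketch, again assuming the uplinks are vmnic4 and vmnic5:

```shell
# Create a software NVMe over TCP adapter bound to each configured NIC
esxcli nvme fabrics enable --protocol TCP --device vmnic4
esxcli nvme fabrics enable --protocol TCP --device vmnic5

# Verify the new vmhba adapters were created
esxcli nvme adapter list
```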




Adding Storage Controller

Now that the network, NICs, VMkernels, and SW NVMe-TCP adapters have been created, we will add the storage controllers.

In this example, we are using an Infinidat Infinibox, so some of these steps may vary based on the array you are using. Make sure to review your array vendor’s documentation to ensure you set up the NVMe targets correctly.

Under the Storage Adapters configuration, select one of the SW NVMe-TCP adapters, then select Controllers and click ADD CONTROLLER.



On the ADD CONTROLLER screen, you will see the Host NQN, which is similar to the iSCSI IQN but for NVMe. Click Copy; you will need to add each host’s NQN to the storage array. NOTE: the NQN is unique to the host, not to the adapters, so you only need to copy the NQN to the array from one of the SW NVMe-TCP adapters on each host.
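You can also retrieve the Host NQN from the ESXi command line instead of the UI:

```shell
# Show the host's NVMe identity, including the Host NQN
# (the NQN is per-host; copy it to the array once per host)
esxcli nvme info get
```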







Example of Storage Array Configuration

On the array side, you will create host groups/clusters similar to the way you would for iSCSI. DO NOT use any of the iSCSI host groups for the NVMe targets. NVMe is a completely different protocol/transport.

Here you can see I’ve created a host profile for each host in the vSphere cluster.



For each host in the vSphere cluster that will be accessing the NVMe target, add that respective host’s NQN to the corresponding profile on the array.




Choose NVMe-OF



Depending on the array, it may already see the host’s NQN; select the correct NQN for the host profile.




Adding Controller Details

Back in the Add controller setup, add the IP of the array’s NVMe-TCP interface and click DISCOVER CONTROLLERS. If everything has been properly configured, all the controller interfaces will populate in the adapter. Click OK to finish. Repeat the add-controller step for each SW NVMe-TCP adapter configured on each host. In this example, we have two SW NVMe-TCP adapters and three hosts, so I repeated the process 5 more times.
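Discovery and connection can also be driven with esxcli. This is a sketch: the adapter name (vmhba65), array IP, and subsystem NQN are placeholders, and 8009 is the conventional NVMe/TCP discovery port (check your array’s documentation).

```shell
# Discover controllers behind the array's discovery service
esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.21 -p 8009

# Connect to a discovered subsystem by its NQN (example values shown)
esxcli nvme fabrics connect -a vmhba65 -i 192.168.100.21 -p 4420 -s nqn.2020-01.com.example:subsystem1
```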



Once completed, you will see the controllers listed under Controllers for each SW NVMe-TCP adapter.
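The same check is available from the command line:

```shell
# Each software NVMe-TCP adapter should now list the array's controllers
esxcli nvme controller list
```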



You should also verify on the array side that it is connected to all the adapters.



Mapping a Volume

Now that connectivity has been configured, you can create and map a new NVMe volume to the hosts.

Again, this example is for an Infinibox and will vary from vendor to vendor.



Once the volume has been mapped to the hosts, it will show up under Devices for the SW NVMe-TCP adapters. No storage rescan is required for NVMe.
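You can confirm the new namespace and device from the command line as well:

```shell
# List NVMe namespaces visible to the host (no rescan needed)
esxcli nvme namespace list

# The namespace also appears as a standard storage device;
# filtering the device list is a quick way to spot it
esxcli storage core device list | grep -i nvme
```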



You can also see the Namespace details for the volumes.



You can also go into Storage Devices to see the NVMe-TCP disk and its details.




Creating New Datastore

At this point, all configuration should be complete, and you can create a new VMFS Datastore. Right-click one of the hosts and select Storage > New Datastore.



Then you will select the Namespace volume you created in the previous steps.



Select VMFS6.



On the next screen, you can use all available partitions or a subset of the space. Typically you would use all the available space.



Review the details for your new Datastore and click Finish.



Your new Datastore will be created and should be attached to all hosts configured with access. Notice the drive type is Flash.





  • Ensure you have adequate network bandwidth when converging NVMe-TCP with other vSphere traffic. If possible, dedicate NICs to NVMe-TCP to attain the best possible performance.
  • Make sure to complete the required host steps on all vSphere hosts connecting to the NVMeoF target volume (Namespace). 
  • Make sure you DO NOT add any of the hosts’ NQNs to an existing iSCSI volume! Create new NVMe-specific host profiles for the NVMe target volume(s).
  • You can connect to the same array via SCSI and NVMe at the same time. You just cannot connect to the same targets. For example, you could have an iSCSI LUN Datastore and an NVMe-TCP Namespace Datastore from the same array connecting to the same set of hosts.

NVMeoF Resources

I've created an NVMeoF Resources page to help with many of the NVMeoF docs, KB articles, and other resources.

NVMeoF Resources | VMware


