August 15, 2022

Infinidat Adds NVMe-TCP Certification for VMware

As NVMe over Fabrics (NVMeoF) continues to grow in interest and adoption, our storage partners are continuously working on adding support for NVMeoF in the VMware ecosystem. Today, Infinidat announced support for NVMe/TCP on their InfiniBox.

With the release of vSphere 7.0, VMware added support for NVMeoF. Initially, only FC and RDMA (RoCEv2) were supported; then, in vSphere 7.0 U3, we added NVMe-TCP. This gives customers a choice of how they enable NVMeoF in their environment.
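If you want to see what setting up NVMe-TCP on a host involves, the general flow in 7.0 U3 is to enable a software NVMe over TCP adapter on an uplink (with a VMkernel adapter tagged for NVMe/TCP traffic), discover the array's subsystems, and then connect. The sketch below wraps that esxcli workflow in Python purely as an outline; the vmnic/vmhba names, IP addresses, and NQN are placeholders, and the exact option names can vary by build, so verify them against the esxcli help on your host.

```python
# Minimal sketch, assuming it runs in the ESXi Shell (which includes a Python
# interpreter) and that the esxcli option names below match your build.
# All adapter names, addresses, and the NQN are placeholders.
import subprocess

def esxcli(*args):
    """Run an esxcli command and return its stdout."""
    result = subprocess.run(["esxcli", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# 1. Enable a software NVMe/TCP adapter on an uplink (creates a new vmhba).
esxcli("nvme", "fabrics", "enable", "--protocol", "TCP", "--device", "vmnic2")

# 2. Discover NVMe subsystems behind the array's discovery controller.
print(esxcli("nvme", "fabrics", "discover",
             "--adapter", "vmhba65",           # adapter created in step 1
             "--ip-address", "192.168.50.10",  # array NVMe/TCP portal
             "--port-number", "8009"))         # common discovery port

# 3. Connect to a discovered subsystem by its NQN.
esxcli("nvme", "fabrics", "connect",
       "--adapter", "vmhba65",
       "--ip-address", "192.168.50.10",
       "--port-number", "4420",                # common NVMe/TCP I/O port
       "--subsystem-nqn", "nqn.2022-08.com.example:placeholder-subsystem")
```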

Today, Infinidat announced that they have added support for NVMe-TCP to their InfiniBox. Infinidat is known for its petabyte-scale storage and works hard to keep current with VMware storage technologies. Designed as multi-protocol from the beginning, with FC, iSCSI, NFS, and SMB, the InfiniBox is now certified for NVMe over Fabrics (NVMe-oF) using the TCP protocol with VMware. Another benefit of the InfiniBox is that all protocols are available regardless of the backend media. For more information on the announcement, see their blog Infinidat Extends NVMe/TCP to VMware Environments and New InfiniBox Capabilities for Modern VMware Infrastructures.

Infinidat has also added vVols replication with SRM (Site Recovery Manager). I will follow up with more details in another blog, but here is their announcement blog: Infinidat Supports vVols Replication.

Why NVMeoF?

Many modern arrays are built on NVMe flash, and using SCSI to access these devices limits the potential of NVMe SSDs and arrays. SCSI was designed for spinning media and for serialized access based on the physical limitations of that hardware. NVMe and NVMe SSDs work completely differently from spinning media. With NVMe, you have the potential of 64,000 queues and 64,000 commands per queue, and NVMe needs massive parallelism to take advantage of that potential. NVMe SSDs internal to the local server use the PCIe bus, so how do you access an array with the same protocol across the wire? NVMeoF is the solution: it allows servers to access remote NVMe devices using the same protocol across supported transports.
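To put those queue numbers in perspective, here is a back-of-the-envelope comparison of outstanding-command capacity. The SCSI queue depth of 64 is a typical per-device default rather than a protocol constant, so treat this purely as an illustration of scale.

```python
# Rough comparison of how many commands can be in flight at once.
# Assumption: a single SCSI device queue with a typical depth of 64,
# versus the NVMe maximums of 64,000 queues x 64,000 commands cited above.
scsi_queues, scsi_depth = 1, 64
nvme_queues, nvme_depth = 64_000, 64_000

scsi_outstanding = scsi_queues * scsi_depth   # 64 commands in flight
nvme_outstanding = nvme_queues * nvme_depth   # 4,096,000,000 commands in flight

print(f"SCSI device (typical): {scsi_outstanding:,} outstanding commands")
print(f"NVMe (theoretical max): {nvme_outstanding:,} outstanding commands")
print(f"Ratio: {nvme_outstanding // scsi_outstanding:,}x")
```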

Why NVMe-TCP?

One of the NVMeoF transports supported with vSphere is TCP. The benefit of NVMe-TCP is that, in most cases, you do not need specialized connectivity hardware; you can utilize your existing Ethernet hardware and network. This makes adding NVMeoF to your environment simpler and more economical. Keep in mind, though, that you must have the bandwidth available to support the additional load of NVMe-TCP. Don't add NVMe-TCP to a pair of 1Gb NICs and expect to see amazing performance; at the very least, you should be using 10Gb NICs. NVMeoF is capable of more IO, so you must ensure you have adequate bandwidth available for the additional load on your network infrastructure.
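One way to sanity-check your headroom is to translate the IOPS and block size you expect to push over NVMe-TCP into a line rate. The sketch below uses a deliberately simple model that ignores TCP and NVMe framing overhead, and the workload figures are made-up examples rather than measurements.

```python
# Rough bandwidth estimate: IOPS x block size -> required line rate in Gb/s.
# Assumption: steady-state throughput only; protocol overhead (a few percent)
# and bursts are ignored, so leave extra headroom in practice.
def required_gbps(iops, block_size_kib):
    """Approximate line rate needed to sustain a given IOPS at a block size."""
    bytes_per_sec = iops * block_size_kib * 1024
    return bytes_per_sec * 8 / 1e9  # bits per second -> gigabits per second

# Hypothetical workloads, not measured numbers.
for iops, block_kib in [(50_000, 8), (200_000, 8), (100_000, 32)]:
    gbps = required_gbps(iops, block_kib)
    verdict = "fits within" if gbps < 10 else "exceeds"
    print(f"{iops:>7,} IOPS @ {block_kib} KiB needs ~{gbps:.2f} Gb/s "
          f"-> {verdict} a single 10Gb NIC")
```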

I have put together an NVMe resource page cataloging much of VMware's documentation, KB articles, and blog posts on NVMe and NVMeoF here.

@jbmassae
