
vSAN File Services Tech Note

Introduction

File services is a common requirement in today’s enterprise environments. The need for file-level access comes from a variety of use cases, from traditional systems that store user directories to modern cloud-native applications demanding persistent, read-write-many (RWM) volumes.

Historically, providing these file-level services meant using a physical storage array or VMs capable of serving file-level protocols such as NFS and SMB. Both approaches add non-trivial design and management considerations to an environment.

vSAN 7 introduces native file services. This hypervisor-integrated service eases the burden of design and management for environments that require file-level services, providing a simple and robust solution using software you already know. VMware took a careful approach to providing file services that scale easily and support a broad variety of conditions. The following content describes this feature and its capabilities in more detail.

Use Cases

The vSAN 7 release targets several key use cases for file services, including home directories, user profiles, infrastructure shares, cloud-native workloads, and physical and virtual workloads needing file services.

For more information on use cases, limitations, and capabilities, please see the vSAN File Services FAQ.

Architecture

File services is a cluster-level service and can be enabled in the same location of the UI as the other cluster-level services, such as the iSCSI service, deduplication and compression, and data-at-rest encryption. File services are rendered through a series of containers deployed in the cluster. The containers run inside Photon OS-powered VMs deployed across the cluster. These container hosts are deployed from a lightweight OVF that is cached on the vCenter Server after download. They are powered off and deleted if file services are disabled on the cluster, but run on every host in the cluster when the service is enabled.
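For readers who automate their environments, the state of the file service can also be inspected programmatically. The following is a minimal sketch, assuming the vSAN Management SDK for Python (pyVmomi plus the vsanapiutils helper shipped with the SDK samples); the vCenter hostname, credentials, and cluster name are placeholders.

    # Minimal sketch: check whether vSAN File Service is enabled on a cluster.
    # Assumes the vSAN Management SDK for Python (pyVmomi + vsanapiutils).
    import ssl
    from pyVim.connect import SmartConnect
    import vsanapiutils

    context = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local',
                      pwd='password', sslContext=context)

    # Locate the target cluster by name (assumes a flat inventory).
    content = si.RetrieveContent()
    cluster = None
    for dc in content.rootFolder.childEntity:
        for entity in dc.hostFolder.childEntity:
            if entity.name == 'vsan-cluster':
                cluster = entity

    # Fetch the vSAN cluster configuration, which includes fileServiceConfig.
    vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
    config_system = vc_mos['vsan-cluster-config-system']
    config = config_system.VsanClusterGetConfig(cluster=cluster)
    fs = getattr(config, 'fileServiceConfig', None)
    print('File service enabled:', bool(fs and fs.enabled))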

Architecture overview

Let’s look at the basic architecture of vSAN file services. Enabling the vSAN file services feature initiates the deployment of stateless containers that provide the desired file system protocols: NFSv3, NFSv4.1, SMB2, and SMB3. The protocol services containers are spread across the cluster to provide resilience of the service, and this placement is not something the user needs to manage.
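As a quick way to see this layout in a lab, the container host VMs can be enumerated with pyVmomi. This sketch reuses the connection 'si' from the example above; the 'File Service Node' naming pattern is an assumption and may differ by release.

    # Minimal sketch: list file service container hosts (FSVMs) per ESXi host.
    # Reuses the pyVmomi connection 'si' from the earlier sketch; the
    # 'File Service Node' VM naming pattern is an assumption.
    from pyVmomi import vim

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if 'File Service Node' in vm.name:
            print(f'{vm.name} -> host {vm.runtime.host.name}')
    view.DestroyView()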

Scaling SMB and NFS Shares

Expanding front end access

The VMware vSAN Virtual Distributed File System (VDFS) is a purpose-built distributed file system designed to provide access through consumable protocols such as NFS. It is currently exposed through protocol containers that export SMB2, SMB3, NFSv3, or NFSv4.1 as part of vSAN file services.

NFS Front end access

While VDFS offers a cluster-wide namespace for NFSv4.1, NFSv4.1 uses in-protocol redirection to redirect connections to the single namespace out to the container hosting a given share. In this way, the master IP used for the namespace does not proxy or hairpin the I/O path, but acts as a redirection broker for client requests. This allows throughput and performance to scale out as additional shares are assigned and balanced across the different containers exposing the protocols in the cluster.
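From the client side this redirection is transparent: a client simply mounts the share against the namespace address and requests version 4.1. A minimal sketch driving a standard Linux NFS mount from Python; the server name, export path, and mount point are hypothetical.

    # Minimal sketch: mount a vSAN file share over NFSv4.1 from a Linux client.
    # The server name, export path, and mount point below are hypothetical.
    import subprocess

    server = 'vsan-fs.example.com'   # namespace (master) IP or hostname
    share = '/vsanfs/engineering'    # hypothetical export path
    mountpoint = '/mnt/engineering'

    # NFSv4.1 referrals redirect the client to the container hosting the
    # share; no extra client configuration is needed beyond vers=4.1.
    subprocess.run(
        ['mount', '-t', 'nfs', '-o', 'vers=4.1',
         f'{server}:{share}', mountpoint],
        check=True)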

SMB front end access

Connecting to any container presents a base share path that is the namespace created for the cluster. DFS links redirect to the specific container a share has been placed on. Users can connect to any container and follow the DFS link to the specific host that is hosting a given share.
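From a client, connecting through the namespace looks like any other SMB mount; the DFS referral to the owning container happens inside the protocol. A minimal sketch driving a standard Linux CIFS mount from Python; the server, share, account, and mount point are hypothetical.

    # Minimal sketch: mount a vSAN file share over SMB from a Linux client.
    # Server name, share, account, and mount point are hypothetical; the DFS
    # referral to the owning container is handled inside the protocol.
    import subprocess

    subprocess.run(
        ['mount', '-t', 'cifs', '//vsan-fs.example.com/engineering',
         '/mnt/engineering',
         '-o', 'vers=3.0,username=svc-files,domain=EXAMPLE'],
        check=True)  # mount.cifs prompts for the password interactively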

Round Robin DNS can additionally be configured to avoid the need to directly use the hostnames of the backing containers. In this example, vault.satm.eng.vmware.com has been configured as the domain namespace for the cluster, along with forward DNS names that point to the IPs of the backing containers.
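The effect of round robin DNS can be verified from any client: the namespace hostname should resolve to the full set of backing container IPs. A small sketch, using the example hostname from the text above:

    # Minimal sketch: show the addresses behind a round robin DNS name.
    # 'vault.satm.eng.vmware.com' is the example namespace from the text.
    import socket

    addrs = {info[4][0] for info in
             socket.getaddrinfo('vault.satm.eng.vmware.com', None,
                                proto=socket.IPPROTO_TCP)}
    print('Namespace resolves to:', sorted(addrs))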

Expanding back end placement

VDFS is a distributed file system that sits inside the hypervisor, directly on top of the vSAN objects providing the block-based back end. As a result, it can easily consume SPBM policies on a per-share basis. New objects are automatically added and concatenated onto a VDFS volume when the maximum object size (256 GB) is reached. The components behind these objects can be striped, or for various reasons automatically spanned and created across the cluster. This allows shares to automatically "scale deep" in their capacity usage while scaling out the performance load across additional back-end hosts and devices.
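As a back-of-the-envelope illustration of this concatenation, the number of backing objects grows with a share's provisioned size. A simple sketch using the 256 GB object size stated above:

    # Minimal sketch: estimate how many vSAN objects back a VDFS volume,
    # given that a new object is concatenated every 256 GB.
    import math

    OBJECT_SIZE_GB = 256

    def backing_objects(share_size_gb: float) -> int:
        """Objects needed for a share of the given provisioned size."""
        return max(1, math.ceil(share_size_gb / OBJECT_SIZE_GB))

    for size in (100, 256, 1000, 4096):
        print(f'{size:>5} GB share -> {backing_objects(size)} object(s)')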

VDFS Management and Control Plane

Each node has a control path that helps manage responsibilities such as monitoring health, and handling failover and load balancing. It also provides the conduit for connectivity to vCenter and the vSAN management daemons on each host; vCenter includes management modules in the vsan-health daemon. vCenter is not required for access to shares and does not sit within the I/O path. The VDFS controller services are distributed across the ESXi hosts in the cluster. These services monitor share and file system health, communicate with the vSAN management daemons, and remediate failures at the protocol endpoint layer above. This layer is what helps tolerate failures of an individual host, a VM, the network, or storage.

VDFS IO Path

VDFS sits above the vSAN layer, but below the virtual machine layer, within the I/O path. The VDFS daemons run inside the hypervisor. They work closely with the vSAN services to pass I/O, create and expand shares automatically, and monitor and recover from failures.

When setting up vSAN file services, you will notice that there is only a single front-facing network used to export NFS or SMB to clients. No additional back-end networking to vSAN is required, and no VMDKs are attached to the container hosts. Traditionally, a file server running inside a virtual machine would need to use one of these methods and incur additional compute and networking overhead, as data would have to travel over Ethernet or through the internal vSCSI layers. The container hosts bypass the traditional vNIC and vSCSI methods of connecting to the storage layer and instead use an optimized transport protocol to reduce hops and improve efficiency. This connection speaks to the hypervisor layer, where VDFS runs, and transmits data to the hypervisor-native VDFS. This provides load balancing through share volumes that can be accessed from any client in the cluster, performance through zero-copy, and ease of use through a simple connection of the file client to the SMB/NFS server (container). VDFS is also responsible for translating these commands into block requests to the vSAN layers. This design optimizes the performance, overhead, and security of data access.

VDFS Write IO path

The data moving from the protocol services layer to the VDFS layer uses a form of zero-copy operation. This efficient alternative to a network hairpin, or a trip through the vSCSI layer, helps reduce latency and keep the compute overhead of file operations down.
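The zero-copy idea itself is easy to demonstrate outside of VDFS. The sketch below uses Linux's sendfile(2) via Python as a general illustration of the concept, not vSAN code: data moves between file descriptors inside the kernel without a round trip through user-space buffers, which is the same class of optimization described above.

    # Conceptual analogy only: kernel-side zero-copy with sendfile(2) on Linux.
    # Data moves between descriptors without copying through user space.
    import os

    with open('/tmp/source.bin', 'wb') as f:
        f.write(os.urandom(1 << 20))  # 1 MiB of sample data

    src = os.open('/tmp/source.bin', os.O_RDONLY)
    dst = os.open('/tmp/dest.bin', os.O_WRONLY | os.O_CREAT, 0o644)
    sent = os.sendfile(dst, src, 0, 1 << 20)  # no user-space copy
    print(f'Transferred {sent} bytes with zero-copy')
    os.close(src)
    os.close(dst)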

Protocol Service Architecture

The front-end file protocol services run in containers. This enables rapid recovery from failure, quick upgrades to the protocol stack, and a scale-out architecture. The image for the containers and container hosts is included in the vSAN File Services OVA package. This package can be automatically downloaded and updated from within vCenter if it has an internet connection, or it may be manually updated in an air-gapped environment. The file services images are cached on the vCenter Server for fast redeployment.

These container hosts, displayed as FSVMs in vCenter and managed entirely by the system, use a special storage policy called FSVM_Profile_DO_NOT_MODIFY. This storage policy is controlled by the file services and is designed to provide resilience of the services, not the data: it is never used for the file shares themselves. Data-level resilience is instead controlled by the single storage policy applied to each share, and file shares do not use VMDKs for storage. It is worth noting that vSAN File Services will replace an impacted container host and is not dependent on vSAN policy to protect the objects backing it. The container hosts remain independent from the presentation of a discrete share and are pinned to an ESXi host through host affinity. This provides a platform for the protocol services rendered by a container to spawn on another host in the event of maintenance or a failure. As the name implies, this storage policy should not be modified, and it should only be assigned to the File Services agent VMs. File Services for vSAN 7 will use up to eight containers per vSAN cluster, but the container hosts providing protocol services reside on all ESXi hosts in a vSAN cluster.

Monitoring of vSAN File Services

Performance metrics for file shares can be viewed on a per-share basis. These performance metrics are provided by the vSAN Performance Service and are visible in the vCenter UI. The APIs in vSAN also allow file share performance data to be viewed in other applications, such as vRealize Operations (see the sketch after the list below).
Read and write metrics are available for:

  • IOPS
  • Throughput
  • Latency
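For programmatic collection, the same data is reachable through the vSAN Performance Manager API. A minimal sketch, reusing 'si', 'context', and 'cluster' from the earlier example; the entityRefId format and metric labels for file shares shown here are assumptions and should be checked against the vSAN Management SDK reference for your release.

    # Minimal sketch: query per-share performance through the vSAN
    # Performance Manager API. Reuses 'si', 'context', and 'cluster' from
    # the earlier sketch. The entityRefId format for file shares is an
    # assumption; check the vSAN Management SDK reference for your release.
    import vsanmgmtObjects  # vSAN SDK bindings; registers vim.cluster types
    import vsanapiutils
    from pyVmomi import vim

    vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
    perf_mgr = vc_mos['vsan-performance-manager']

    spec = vim.cluster.VsanPerfQuerySpec(
        entityRefId='vsan-file-share:<share-uuid>',  # placeholder share UUID
        labels=['iopsRead', 'iopsWrite', 'latencyRead', 'latencyWrite'])
    for entity in perf_mgr.VsanPerfQueryPerf(querySpecs=[spec],
                                             cluster=cluster):
        for series in entity.value:
            print(series.metricId.label, series.values)  # values is CSV text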

Skyline health checks for file services

  • Infrastructure Health
  • File Server Health
  • Share Health

Additional Information

The following contains additional links and presentations that may be useful for vSAN File Services. Note that answers to a number of common questions can be found in the vSAN File Services FAQ.

Videos and Blogs

Videos:

  • File Services Setup
  • How to create a share
  • vSAN File Services Upgrade
