Moving off of SD Boot Media

Overview

Introduction

Recent changes in ESXi 7 have prompted many customers to consider moving from SD cards and USB sticks as their ESXi boot media back to SSDs or NVMe drives. Those of us at VMware with lab systems are not immune to this challenge! I recently had some SSDs installed in my vSAN cluster so I could move from the SD card to an SSD as the boot device for my hosts. This blog post will go over the steps I took to accomplish this.

Purpose of This Tutorial

This tutorial takes you through the steps to move from SD or USB devices to locally attached disks as the boot media for VMware ESXi. It is not meant to be a replacement for the documentation, merely a guide to what I had to do to move from SD boot media to locally attached disks. Your environment may be different.

Audience

This tutorial is intended for vSphere system administrators.

Moving from SD/USB to locally attached disks

Introduction

As mentioned, I needed to move off my SD cards and onto higher-endurance media for my ESXi hosts. In my lab I have four HPE DL360 systems in a vSAN cluster. I was fortunate that IT had some spare 80GB SSD drives available, so I had them added to the hosts.

Existing KBs and blogs on the topic of SD cards as boot media

Please review the KB and blog associated with the topic of using SD cards as boot media. They will have the most up-to-date official guidance.

This blog is primarily written for the vSphere Admin who has decided to move from SD cards to another boot device. No suggestion to move should be inferred.

Prerequisites

Before you can perform the steps in this exercise, you should ensure that:

  1. Any hardware you are adding is on the VMware Hardware Compatibility List.
  2. You have performed a configuration backup of each of your ESXi hosts.
  3. You have access to the console of the hosts, either directly or via iLO, DRAC, or another out-of-band method, depending on your host vendor.
  4. You have a copy of the ISO of your ESXi installation media to ensure the configuration backup and restore go flawlessly.
  5. You have a method to attach the ISO so you can boot from it. For example, iLO and DRAC devices allow mounting the ISO as a virtual CD you can boot from.

Backup of your ESXi Host configuration

There is an existing KB article on how to back up and restore your ESXi host configuration. I won’t show each method here, but the KB does cover them. For the purposes of this blog post, I’ll show how to back up and restore the host configuration using PowerCLI.

Note: PowerShell is available for Windows, Mac, and Linux, and the PowerCLI module works on all of them. You’ll see in the example below that the $output_directory value is a Unix-style path because I ran this from my Mac. You can learn more about PowerCLI at VMware’s developer portal, and you can find PowerShell for Windows, Mac, and Linux on the Microsoft website.

Connect-VIServer -Server <VCSA IP or FQDN> -User administrator@vsphere.local -Password VMware1!

$output_directory = "/Users/mfoley/Temp"

$cluster = "vSANCluster"

$VMhosts = Get-Cluster $cluster | Get-VMHost

foreach ($VMhost in $VMhosts) { Get-VMHostFirmware -VMHost $VMhost -BackupConfiguration -DestinationPath $output_directory }

Modify this code to use your VCSA FQDN/IP and your own credentials. Also change the output directory and cluster name. Running this will generate a unique configuration file for each host as seen below.

[Screenshot: configuration backup bundles generated for each host in the output directory]
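If you want to double-check that the backups were written, a quick directory listing in the same PowerCLI session will do; you should see one bundle per host (a minimal sketch using the variable defined above):

Get-ChildItem $output_directory | Select-Object Name, Length, LastWriteTime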

Great! Now we have our configurations backed up. The next step is to install ESXi on the new disk. Start by shutting down the system you are going to work on. If the host is in a vSphere cluster, you can enter Maintenance Mode to evacuate the VMs from it. If your host is standalone, you'll need to suspend or shut down your VMs.

Installing ESXi on the new disk

Because each vendor’s BIOS is different, I can’t show how every combination will look; I have to go with what I have on my systems. I’ll attempt to make these instructions as generic as possible, but you may need to consult your server documentation for anything specific.

System Partition Sizes

In 7.0 Update 1c a new boot option, systemMediaSize, was added to allow the installer to use a different size for the system partitions. This is most helpful to those with small configurations that also use the boot drive for VMFS. If you don’t specify the system partition size, the installer will consume 138GB of disk space by default. The systemMediaSize boot option accepts the following parameters, with the corresponding size used for the ESXi system partitions:

  • min: 33GB, for single-disk or embedded servers
  • small: 69GB, for servers with at least 512GB of RAM
  • max: all available space, for multi-terabyte servers

Note: The GB units here are storage-device sizes, i.e. multiples of 1,000,000,000 bytes.

To specify the boot option, start the host with the install image and, when the ESXi installer window appears, press Shift+O within 5 seconds to edit the boot options.

For example, add the following at the boot prompt:

systemMediaSize=small
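Note that you append the option to whatever is already on the boot options line rather than replacing it. On an interactive install the line typically already contains runweasel, so after appending it would look something like this (your existing options may differ):

runweasel systemMediaSize=small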

You can read more about this option at the following locations:

 

Connect the ESXi ISO to your server and boot from it


Ensure you use the same ISO as the version of ESXi you are running on the SD card. In my case, I’m running 7.0 Update 3c and I’m selecting the HPE ISO rather than the generic ESXi ISO. This ensures that I have all the right drivers installed at boot time. If your vendor supplies a custom image for your server, use that. VMware has a list of vendor custom ISOs for download at the Customer Connect download portal.


Boot from the virtual CD drive

In this image I’m choosing to boot from the virtual CD drive that the iLO interface created when I connected the ISO file.

[Screenshot: selecting the iLO virtual CD/DVD drive as the boot device]

Once booted, the ESXi installer will start.

[Screenshot: the ESXi installer]

vSAN Considerations

If you are installing on a system that is configured with vSAN, your boot media needs to be on a disk that’s not used by vSAN. In my case, IT added an additional 80GB SSD drive per server. You can see below that I am selecting this 80GB disk to install ESXi onto.

[Screenshot: selecting the 80GB SSD as the ESXi installation target]
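If you are not sure which disks vSAN has claimed, it can help to check from the ESXi shell before shutting the host down. This command lists the devices in the host's vSAN disk groups, including the device name and whether it is an SSD, so you know which disks to avoid in the installer:

esxcli vsan storage list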

Select Install to do a clean installation of ESXi onto your new boot device.


Once you have answered all the questions, start the installation of ESXi.


Boot from the new disk

You are now ready to boot from the new disk. Set your BIOS to boot from the new device instead of the SD card. Once booted, ESXi will pick up an IP address if you are using DHCP; otherwise, configure networking manually from the DCUI or the ESXi shell. You need networking to restore the configuration, and the network settings will be overwritten when the configuration is restored.
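If you need a static address, you can set it from the DCUI or from the ESXi shell. Here is a rough esxcli sketch assuming the default vmk0 management interface and placeholder addresses; adjust for your environment:

esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0

esxcli network ip route ipv4 add -n default -g 192.168.1.1

esxcli network ip dns server add -s 192.168.1.10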

What to do if you are running NSX-T

If you are not running NSX-T you can skip this step. If you are running NSX-T in your environment and you restore the configuration without first installing the NSX kernel modules, you could end up with a blank DCUI and have to restart the ESXi installation from scratch.

[Screenshot: a blank DCUI]

This happened to me! It is easily addressed, however.

Installing NSX drivers on ESXi prior to configuration restore

Once ESXi is installed and booted up, you will need to temporarily enable SSH so you can copy some files to the host. I say “temporarily” because SSH is disabled by default for a good reason: enabling SSH and leaving it enabled opens up an attack vector, and many recent ransomware attacks have targeted hosts with SSH enabled. Keep it off by default!
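You can toggle SSH from the DCUI or the Host Client, but since we’ll be using PowerCLI shortly anyway, here is a sketch that connects directly to the freshly installed host and starts the TSM-SSH service, along with the matching command to stop it again when you are done (host and password are placeholders):

Connect-VIServer <FQDN or IP> -User root -Password <password>

Get-VMHostService -VMHost <FQDN or IP> | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService

# When you have finished copying files and installing the VIBs:
Get-VMHostService -VMHost <FQDN or IP> | Where-Object { $_.Key -eq "TSM-SSH" } | Stop-VMHostService -Confirm:$false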


The first step is to download the NSX Kernel Module for VMware ESXi 7.0 from the Customer Connect download portal.

Next, upload this to your freshly installed ESXi host (but only if you were using NSX-T previously!). Use scp, WinSCP, or a similar tool to connect to the host securely and upload the file to /tmp on the ESXi host.
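For example, from a Mac or Linux shell the upload is a one-liner (using the file name from the download above; the host is a placeholder):

scp nsx-lcp-3.2.0.1.0.19232397-esx70.zip root@<ESXi FQDN or IP>:/tmp/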

Install NSX Drivers

Once uploaded, log into the ESXi shell (either at the console or via SSH) and install these drivers using ESXCLI. Here’s an example command:

esxcli software vib install -d /tmp/nsx-lcp-3.2.0.1.0.19232397-esx70.zip

Here’s what that looks like:

[Screenshot: esxcli output from installing the NSX kernel module VIBs]

At the end of this process, reboot the ESXi host. This ensures the drivers are loaded before you restore the configuration; restoring the configuration at this stage without rebooting will fail.
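From the ESXi shell a plain reboot is all you need here; the esxcli shutdown commands expect the host to be in maintenance mode, which this freshly installed host is not yet:

reboot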

Restoring the ESXi host configuration

Once the host is back up, don’t forget to disable SSH! Now we are going to use PowerCLI to restore the saved configuration.

Connect to the host

Using PowerCLI, connect directly to the host with the Connect-VIServer cmdlet. For example:

Connect-VIServer <FQDN or IP> -User root -Password <password>

Maintenance Mode

Now put the host into Maintenance Mode prior to restoring the configuration:

Get-VMHost <FQDN or IP> | Set-VMHost -State Maintenance


Restore Configuration

Now that the host is in Maintenance Mode, we can restore the configuration. The command will be:

Set-VMHostFirmware -VMHost <FQDN or IP> -Restore -SourcePath /directory/filename
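Depending on your PowerCLI version, the Restore parameter set may also ask for host credentials. If it does, a variant along these lines, matching the syntax shown in the backup/restore KB, should work:

Set-VMHostFirmware -VMHost <FQDN or IP> -Restore -SourcePath /directory/filename -HostUser root -HostPassword <password>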


System Booted

At this point the configuration bundle has been uploaded and the host is rebooting. Provided you set your boot device in the BIOS as called out earlier, the host should come up with the configuration that was on the SD card, only now booting from your new device.


Reconnect host in the vSphere Client

Once the host is up, you will have to reconnect it to the cluster in the vSphere Client.
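If you would rather stay in PowerCLI, reconnecting the host should also be possible with something like the following while connected to vCenter (a sketch; in some cases you may still be prompted for host credentials):

Get-VMHost <FQDN or IP> | Set-VMHost -State Connected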


Confirm Boot Device

To confirm you have booted from the new device, you can run a PowerCLI script I found on fellow vExpert Ivo Beerens’ blog that shows what the boot disk is. Thank you, Ivo!

[Screenshot: script output showing the boot device for each host]

You can see above that mgmt-esx-01.cpbu.lab is now booting off an SSD drive I selected in the installer instead of the SD card.
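If you just want a quick spot check from the ESXi shell instead of a full report, these two commands should reveal the boot device (a sketch; the output format varies between ESXi versions). The first returns the boot filesystem UUID, and the second shows which physical partition backs the active bootbank:

esxcli system boot device get

vmkfstools -P /bootbank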

Wrap Up

You’ll notice I confined this blog post to moving from the SD card boot device to a new boot device. I have purposely shied away from the SD card discussion, as there’s already plenty of content on that topic.

My goal for this blog was to show you how to move to a new boot device (for whatever reason!). I moved off of the SD cards because I didn’t want to come into work on a Monday and find out that one or more of them had self-destructed. I suspect that’s the major reason many of you will move to a new boot device.

If you have questions on this blog, hit me up on Twitter. Because I used supported methods to do this, if you are a customer with support, I would ask that you open a Support Request first; this way GSS will be able to track the issue you may be having.

Please note that vSphere configuration changes like this are not part of my primary focus. I’m still working on vSphere with Tanzu, but I thought this would be useful since I’m sure I’m not the only one who has considered doing this.

Thanks!

mike
