Moving off of SD Boot Media
Overview
Introduction
Recent changes in ESXi 7 have prompted many customers to consider moving from SD cards and USB sticks as their ESXi boot media and back to SSDs or NVMe drives. Those of us within VMware running lab systems are not immune to this challenge! I recently had some SSDs installed in my vSAN cluster so I could move from the SD card to SSDs as the boot device for my hosts. This blog post goes over the steps I took to accomplish this.
Purpose of This Tutorial
This tutorial takes you through the steps to move from SD or USB devices to locally attached disks as the boot media for VMware ESXi. It is not meant to be a replacement for the documentation, merely a guide to what I had to do to move from SD boot media to locally attached disks. Your environment may be different.
Audience
This tutorial is intended for vSphere system administrators.
Moving from SD/USB to locally attached disks
Introduction
As mentioned, I needed to move off my SD cards and onto higher-endurance media for my ESXi hosts. In my lab I have four HPE DL360 systems in a vSAN cluster. I was fortunate that IT had some spare 80GB SSDs available, so I had them added to the hosts.
Existing KBs and blogs on the topic of SD cards as boot media
- Removal of SD card/USB as a standalone boot device option (85685)
- Any updates to official messaging will be made in this KB. I suggest you subscribe to this KB to stay up to date on any changes.
- vSphere 7 – ESXi System Storage Changes
Please review the KB and blog associated with the topic of using SD cards as boot media. They will have the most up-to-date official guidance.
This blog is primarily written for the vSphere Admin who has decided to move from SD cards to another boot device. No suggestion to move should be inferred.
Prerequisites
Before you can perform the steps in this exercise, you should ensure that:
- Any hardware you are adding is on the VMware Hardware Compatibility List.
- You have taken a configuration backup of each of your ESXi hosts.
- You have access to the console of the hosts, either directly or via iLO, DRAC, or another out-of-band method, depending on your host vendor.
- You have a copy of the ISO of your ESXi installation media so that the configuration backup and restore go flawlessly.
- You have a method to attach the ISO so you can boot from it. For example, iLO and DRAC devices allow you to mount the ISO as a virtual CD you can boot from.
Backup of your ESXi Host configuration
There is an existing KB article on how to back up and restore your ESXi host configuration. I won’t show each method here; the KB covers them all. For the purposes of this blog post I’ll use PowerCLI to back up and restore the host configuration.
Note: PowerShell is available for Windows, Mac and Linux systems, and the PowerCLI module works on all of them. You’ll see in the example below that the $output_directory value is a Unix-style file location because I ran this from my Mac. You can learn more about PowerCLI at VMware’s developer portal, and you can find PowerShell for Windows, Mac and Linux at the Microsoft website.
# Connect to vCenter (use your own FQDN/IP and credentials)
Connect-VIServer -Server <VCSA IP or FQDN> -User administrator@vsphere.local -Password VMware1!
$output_directory = "/Users/mfoley/Temp"
$cluster = "vSANCluster"
# Back up the configuration of every host in the cluster
$VMhosts = Get-Cluster $cluster | Get-VMHost
foreach ($VMhost in $VMhosts) {
    Get-VMHostFirmware -VMHost $VMhost -BackupConfiguration -DestinationPath $output_directory
}
Modify this code to use your VCSA FQDN/IP and your own credentials. Also change the output directory and cluster name. Running this will generate a unique configuration file for each host as seen below.
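As a quick sanity check that a bundle was written for every host, you can list the output directory from the same PowerShell session. The file-name pattern below reflects how Get-VMHostFirmware named the bundles for me and is illustrative only; yours may differ slightly.
Get-ChildItem $output_directory -Filter "configBundle-*.tgz"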
Great! Now we have our configurations backed up. The next step is to install ESXi on the new disk. We’ll start by shutting down the system you are going to work on. If the host is in a vSphere cluster you can enable Maintenance Mode to evacuate the VMs from it. If your host is standalone you’ll need to suspend or shut down your VMs.
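If you’d rather drive the Maintenance Mode and shutdown steps from PowerCLI as well, here is a minimal sketch for a clustered vSAN host. The vSAN data migration mode shown is an assumption on my part; pick whichever mode suits your cluster.
# Enter Maintenance Mode (EnsureAccessibility avoids a full vSAN data evacuation)
Get-VMHost <FQDN or IP> | Set-VMHost -State Maintenance -VsanDataMigrationMode EnsureAccessibility
# Shut the host down so you can work on it
Get-VMHost <FQDN or IP> | Stop-VMHost -Confirm:$false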
Installing ESXi on the new disk
Because each vendor’s BIOS is different, I can’t show how every combination will look; I have to go with what I have on my systems. I’ll attempt to make these instructions as generic as possible, but you may need to consult your server documentation for anything specific.
System Partition Sizes
In 7.0 Update 1c a new boot option, systemMediaSize, was added to allow the installer to use a different size for the ESXi system partitions. This is most helpful to those with small configurations that also use the boot drive for VMFS. If you don’t specify the system partition size, the installer will consume 138GB of disk space by default. The systemMediaSize boot option accepts the following parameters, with the corresponding size used for the ESXi system partitions:
- min 33GB, for single disk or embedded servers
- small 69GB, for servers with at least 512GB RAM
- max all available space, for multi-terabyte servers
Note: The GB units specified are storage device sizes, i.e. multiples of 1,000,000,000 bytes.
If you wish to specify the boot option, start the host with the install image and, when the ESXi installer window appears, press Shift+O within 5 seconds to edit the boot options.
For example, add the following at the boot prompt:
systemMediaSize=small
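For context, when you press Shift+O the installer displays its current boot command (typically runweasel) and you append your option to it, so the edited line would look something like the following. The exact existing options can vary by image, so treat this as illustrative only.
> runweasel systemMediaSize=small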
You can read more about this option at the following locations:
- KB81166
- vSphere 7.0 Update 1C Release Notes
- vSphere Documentation - Enter Boot Options to Start an Installation or Upgrade Script
Connect the ESXi ISO to your server and boot from it
Ensure you use the same ISO as the version of ESXi you are running on the SD card. In my case, I’m running 7.0 Update 3c and I’m selecting the HPE ISO rather than the generic ESXi ISO. This ensures that I have all the right drivers installed at boot time. If your vendor supplies a custom image for your server, use that. VMware has a list of vendor custom ISOs for download at the Customer Connect download portal.
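If you’re not sure which build each host is currently running, a quick PowerCLI check from the vCenter connection used earlier will tell you, so you can download the matching ISO:
Get-Cluster $cluster | Get-VMHost | Select-Object Name, Version, Build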
Boot from the virtual CD drive
In this image I’m selecting to boot from the virtual CD drive created by the iLO interface when I connected the ISO file.
Once booted, the ESXi installer will start.
The ESXi installer
vSAN Considerations
If you are installing on a system that is configured with vSAN, then your boot media needs to be on a disk that’s not used by vSAN. In my case, IT added an additional 80GB SSD drive per server. You can see below that I am selecting this 80GB disk to install ESXi onto.
Select Install to do a clean installation of ESXi onto your new boot device.
Once you have answered all the questions, start the installation of ESXi.
Boot from the new disk
You are now ready to boot from the new disk. Set your BIOS to boot from the new device instead of the SD card. Once booted, ESXi will get an IP address if you are using DHCP; otherwise, configure networking manually. You need networking to restore the configuration, and these network settings will be overwritten when the configuration is restored.
What to do if you are running NSX-T
If you are not running NSX-T you can skip this step. If you are running NSX-T in your environment and you restore the configuration without installing the NSX drivers first, you could end up with a blank DCUI and have to restart the installation of ESXi from scratch.
This happened to me! It is easily addressed, however.
Installing NSX drivers on ESXi prior to configuration restore
Once ESXi is installed and booted up, you will need to temporarily enable SSH so you can copy some files to the host. I say “temporarily” because SSH is disabled by default for a good reason: enabling SSH and leaving it enabled opens up an attack vector. Almost all of the recent ransomware attacks have been against hosts with SSH enabled. Keep it off by default!
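If you prefer to toggle SSH from PowerCLI rather than the DCUI or Host Client, here is a sketch that assumes you have connected Connect-VIServer directly to the freshly installed host; TSM-SSH is the service key for the SSH daemon.
# Enable SSH just long enough to copy the NSX bundle over
Get-VMHostService -VMHost <FQDN or IP> | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService
# ...and once you are done, turn it back off
Get-VMHostService -VMHost <FQDN or IP> | Where-Object { $_.Key -eq "TSM-SSH" } | Stop-VMHostService -Confirm:$false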
First step is to download the NSX Kernel Module for VMware ESXi 7.0 from the Customer Connect download portal.
Next, upload the bundle to your freshly installed ESXi host (but only if you were using NSX-T previously!). Use scp, WinSCP, or a similar tool to connect to the host securely and upload the file to /tmp on the ESXi host.
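For example, from a Mac or Linux machine you could copy the bundle with scp; adjust the file name to match the bundle you downloaded:
scp nsx-lcp-3.2.0.1.0.19232397-esx70.zip root@<FQDN or IP>:/tmp/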
Install NSX Drivers
Once uploaded, log into the ESXi shell (either at the console or via SSH) and install these drivers using ESXCLI. Here’s an example command:
esxcli software vib install -d /tmp/nsx-lcp-3.2.0.1.0.19232397-esx70.zip
Here’s what that looks like:
At the end of this process, reboot the ESXi host. This ensures the drivers are loaded before you restore the configuration; restoring the configuration at this stage without rebooting will fail.
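Since you are already in the ESXi shell at this point, you can reboot straight from ESXCLI; the reason string is only recorded for the logs:
esxcli system shutdown reboot -r "Load NSX kernel modules before config restore"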
Restoring the ESXi host configuration
Once the host is back up, don’t forget to disable SSH! Now we are going to use PowerCLI to restore the saved configuration.
Connect to the host
Using PowerCLI, connect directly to the host with the Connect-VIServer cmdlet, e.g.:
Connect-VIServer -Server <FQDN or IP> -User root -Password <password>
Maintenance Mode
Now put the host into Maintenance Mode prior to restoring the configuration:
Get-VMHost <FQDN or IP> | Set-VMHost -State Maintenance
Restore Configuration
Now that the host is in Maintenance Mode we can restore the configuration. The command will be:
Set-VMHostFirmware -VMHost <FQDN or IP> -Restore -SourcePath /directory/filename
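As a hypothetical worked example using the backup taken earlier, the restore could look like the line below. Depending on your PowerCLI version and how you are connected, you may also be prompted for, or need to supply, the host credentials via -HostUser and -HostPassword.
Set-VMHostFirmware -VMHost <FQDN or IP> -Restore -SourcePath /Users/mfoley/Temp/configBundle-<hostname>.tgz -HostUser root -HostPassword <password>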
System Booted
At this point the configuration bundle has been uploaded and the host is rebooting. Provided you set your boot device in the BIOS as called out earlier, the host should come up with the configuration it used on the SD card, only now booting from your new device.
Reconnect host in the vSphere Client
Once the host is up, you will have to reconnect it to the cluster in the vSphere Client.
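If you’d rather script this step too, the host can be reconnected with Set-VMHost from your vCenter connection; a minimal sketch:
# Reconnect the host; run again after it reconnects if you also need to exit Maintenance Mode
Get-VMHost <FQDN or IP> | Set-VMHost -State Connected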
Confirm Boot Device
To confirm you have booted from the new device, you can run a PowerCLI script I found on fellow vExpert Ivo Beerens’ blog that shows what the boot disk is. Thank you, Ivo!
You can see above that mgmt-esx-01.cpbu.lab is now booting off the SSD drive I selected in the installer instead of the SD card.
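If you don’t want to grab the script, a rough equivalent check can be done through Get-EsxCli. This is a sketch; the property names mirror the fields of esxcli storage core device list and could differ slightly between releases.
$esxcli = Get-EsxCli -VMHost mgmt-esx-01.cpbu.lab -V2
$esxcli.storage.core.device.list.Invoke() |
    Where-Object { $_.IsBootDevice -eq "true" } |
    Select-Object Device, Model, Size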
Wrap Up
You’ll notice I confined this blog post to moving from the SD card boot device to a new boot device. I have purposely shied away from the broader SD card discussion, as there’s already plenty of content on that.
My goal for this blog was to show you how to move to a new boot device (for whatever reason!). I moved off the SD cards because I didn’t want to come into work on a Monday and find out that one or more of them had self-destructed. I suspect that’s the major reason many of you will move to a new boot device.
If you have questions about this blog, hit me up on Twitter. That said, because I used supported methods to do this, if you are a customer with support I would ask that you open a Support Request first. This way GSS will be able to track any issue you may be having.
Please note that vSphere configuration changes like this are not part of my primary focus. I’m still working on vSphere with Tanzu, but I thought this would be useful since I’m sure I’m not the only one who has considered doing this.
Thanks!
mike