Holodeck 5.1.1 Software Defined Networking (Terraform)

Terraform SDN Infrastructure Deployment

 


Overview

This section details the deployment of the Holodeck SDN exploration infrastructure using Terraform sample code. The Terraform demo package provides the following functionality to enable rapid demonstration of cloud consumption in the Holodeck environment:

  • Creates a vSphere content library and populates it with the Ubuntu 18.04 base image
  • Creates the Holodeck-T1 router connected to the default VLC Tier 0 router
  • Creates NSX segments for database and web server use
  • Creates distributed firewall rules for the Opencart application
  • Deploys an instance of the Opencart e-commerce application with two frontend Apache web servers and a backend MySQL database

Note: This automated infrastructure mirrors the Holodeck 5.1 SDN lab, in which the user creates the same infrastructure manually.

Prerequisites

  • Holodeck 5.1.1 package deployed
  • VLC-Holo-Site-1 deployed with no changes to default configuration files.
  • C:\VLC\VLC-Holo-Site-1\Holo-Build\Post-Deployment\Holodeck-Infrastructure.ps1 run to create necessary folders

Download Terraform to Holo Console

  1. From the Holo Console, open a web browser to https://developer.hashicorp.com/terraform/install
  2. Download the AMD64 version for Windows

 

 


  3. Unzip the downloaded package
  4. Copy the terraform executable into C:\Windows as a simple way to make it available in the path

Initialize Terraform

  1. On the Holo Console, open a command prompt
  2. Change directories to c:\VLC\VLC-Holo-Site-1\Holo-Lab-Support-Files\TF-Full-SDN-Demo


  3. Run terraform init
  4. The output should resemble the following:


Deploy infrastructure

  1. Run terraform apply -auto-approve

  2. Within several seconds, Terraform begins deploying.
  3. The run typically takes 7-10 minutes. When it completes, the output will look like the following:


Test basic OpenCart functionality

  1. Using the Chrome bookmarks, access the Management vCenter Server Web Client using the username administrator@vsphere.local and the password VMware123!
  2. Monitor the Summary page for TF-OC-Apache-A and wait for it to complete the boot process and for the IP address 10.1.10.18 to appear in the virtual machine details.


  3. Open a browser to 10.1.10.18.
  4. You should see the following:

 

 


 

 

 

6.1 Holodeck 5.1.1 VCF Networking: Exploring Segments and Distributed Routing

Segments and distributed routing

Overview

VMware Cloud Foundation leverages virtualized (overlay) networking. This configuration encapsulates L2 network traffic within an L3 underlay network, which facilitates the delivery of networks and network services in a software-defined way.
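The encapsulation concept can be illustrated with a toy model. On the wire, NSX actually uses the Geneve protocol; the structures below, the TEP addresses (drawn from the lab's 172.16.254.0/24 host TEP network), and the VNI value are illustrative assumptions, not the real packet format:

```python
from dataclasses import dataclass

# Toy illustration of L2-in-L3 overlay encapsulation. NSX uses the Geneve
# protocol in reality; this layout is invented for clarity only.

@dataclass(frozen=True)
class Frame:
    src_mac: str
    dst_mac: str
    payload: bytes

@dataclass(frozen=True)
class OverlayPacket:
    src_tep: str   # underlay (L3) address of the sending host's tunnel endpoint
    dst_tep: str   # underlay address of the receiving host's tunnel endpoint
    vni: int       # virtual network identifier selecting the logical segment
    inner: Frame   # the original L2 frame, carried untouched

def encapsulate(frame: Frame, src_tep: str, dst_tep: str, vni: int) -> OverlayPacket:
    """Wrap a guest L2 frame in an outer L3 header between host TEPs."""
    return OverlayPacket(src_tep, dst_tep, vni, frame)

def decapsulate(pkt: OverlayPacket) -> Frame:
    """The receiving TEP strips the outer header and hands over the original
    frame; the guest VM never sees the underlay addresses."""
    return pkt.inner

frame = Frame("00:50:56:aa:00:01", "00:50:56:aa:00:02", b"GET / HTTP/1.1")
pkt = encapsulate(frame, src_tep="172.16.254.11", dst_tep="172.16.254.13", vni=71686)
assert decapsulate(pkt) == frame  # guest traffic is unchanged end to end
```

The key property this sketch shows is that the inner frame is independent of the outer addressing, which is why logical networks can span physical network boundaries.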

One of the key advantages is its ability to create flexible logical networks that can extend beyond the constraints of physical network boundaries. VCF networking effectively transforms the physical network into a pool of resources. This transformation decouples the consumption of network services from the physical infrastructure, a concept that mirrors the approach taken by vSphere in abstracting compute capacity from server hardware.

This model offers several benefits. It enhances network management efficiency by allowing for the programmable delivery of network services. It also provides scalability, as the logical networks created can span physical network boundaries. Furthermore, it offers flexibility by transforming the physical network into a pool of resources that can be consumed as needed.

 

Prerequisites

  • External network access from the Holodeck environment 
  • C:\VLC\VLC-Holo-Site-1\Holo-Build\Post-Deployment\Holodeck-Infrastructure.ps1 has been run post installation to create VM folders
  • Terraform plan at c:\VLC\VLC-Holo-Site-1\Holo-Lab-Support-Files\TF-Full-SDN-Demo has been applied 


Lab 1: Exploring basic virtualized network functionality

This lab walks the vSphere team through the basics of VCF virtualized networking in the vCenter Server and NSX interfaces.

View TF-OC-Web-Segment in vCenter Server

  1. Login to Management Domain vCenter Server
  2. Click on the networking icon and expand the menu on the left-hand side of the vSphere Web Client
  3. Click on TF-OC-Web-Segment
  4. Note the following:
     
    • The “N” denotes this is an NSX segment and not a standard port group
    • The segment ID and Transport Zone for the segment are shown
    • The vDS the segment is attached to
    • The hyperlink for the NSX Manager for the segment

  5. Click on Ports
  6. Note each VM attached to the segment has a port assigned. Other details, such as the MAC address and VLAN ID, are also displayed

  7. Click on the Hosts tab
  8. Note the TF-OC-Web-Segment is connected on each ESXi host in the transport zone. When a segment is created, it is accessible to all hosts in the transport zone.


Perform ICMP Ping between VM's

  1. Click on the summary tab for TF-OC-Apache-A
  2. Open a web console
  3. Login as ocuser/VMware123!


  4. Ping 10.1.10.19 (TF-OC-Apache-B) to test ICMP ping on the same subnet
  5. Ping 10.1.10.50 (TF-OC-MySQL) to test ICMP ping across subnets

Discover the Network Topology

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Holodeck 5.1 folder in the bookmark bar then select Holo-Site-1->Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click the Networking tab
  5. Select Network Topology from the left-hand side menu
  6. Click Skip to close the mini-tour if needed.


  7. Locate TF-OC-T1 in the topology view.


  8. Click on 2 VMs under TF-OC-Web-Segment to expand the view
  9. Note the two VMs previously configured on the TF-OC-Web-Segment
  10. Repeat the action for the TF-OC-DB-Segment


  11. Scroll to the top of the topology view. Note the Tier 0 router, which connects to the customer core network using an ECMP connection with BGP route propagation. The Tier 0 router is typically connected to the customer network when the VCF workload domain is first deployed by the network engineering team. After that, the operations team can create Tier 1 routers as needed, with no additional configuration required on the physical network.

Lab 1 Summary

This lab demonstrated how VCF virtualized networking can be used to quickly provision L2 and L3 services on existing infrastructure. This enables:

  • Easily delivering network services needed by an application with the application
  • Eliminating delays with traditional network provisioning processes, which can take days to months.
  • Empowering operations staff to deploy approved networks or retain control with the network admin team. 
  • Building a virtualized networking foundation that facilitates ease of workload migrations to other VMware Cloud properties for Disaster Recovery or Cloud Bursting activities. 

This lab usually takes less than 30 minutes to complete. How does this compare to your experience getting a multi-tier network provisioned for VM use?

 

Lab 2: View packet flow within a host

This lab demonstrates the use of Traceflow, a powerful diagnostic and visualization tool in VCF networking, to view traffic moving between virtual machines on the same host and the same segment. Subsequent labs examine traffic between VMs running on different hosts on the same segment, and VMs communicating between segments on the same and different hosts.

Traceflow injects packets at the point where a VM connects to a vSphere Distributed Switch (VDS) port. It provides observation points along the packet’s path as it traverses physical and logical entities (such as ESXi hosts, logical switches, and logical routers) in the overlay and underlay networks. This makes it possible to identify the path a packet takes to reach its destination, or where a packet is dropped along the way. Each entity reports the packet handling on input and output, allowing for ease of troubleshooting.

Keep in mind that Traceflow is not the same as a ping request/response that goes from guest-VM stack to guest-VM stack. What Traceflow does is observe a marked packet as it traverses the overlay network. Each packet is monitored as it crosses the overlay network until it reaches and is deliverable to the destination guest VM. However, the injected Traceflow packet is never actually delivered to the destination guest VM. This means that a Traceflow can be successful even when the guest VM is powered down.

Note: Until a VM has been powered on after being attached to an NSX segment, the NSX control plane does not know which host to use to inject packets with that VM as the source, and the Traceflow test fails. After the VM's first power-on on any host on the segment, NSX Manager keeps track of the VM's last-run location.

Step 1: Setup VMs for test

This step ensures the participant can observe packet flow between two VMs on the same host by moving one VM, as necessary, to co-locate it with the other. The subsequent lab moves the VM again to show communication between hosts.

  1. Launch the vSphere Client and login to the Mgmt vCenter using the username administrator@vsphere.local and the password VMware123!
  2. From the Hosts and Clusters view, click on TF-OC-Apache-A to determine which ESXi host the VM is running on. In this example, TF-OC-Apache-A is running on host esxi-1.vcf.sddc.lab:


  3. Click on TF-OC-Apache-B to determine which ESXi host it is running on
  4. If the two VMs (TF-OC-Apache-A and TF-OC-Apache-B) are not on the same host, initiate a vMotion to move them to the same host.
  5. To perform the vMotion, right-click the VM and select Migrate.
  6. Click Next to change the compute resource only
  7. Select the ESXi host to migrate to, then click Next
  8. Click Next to accept the default network selection
  9. Click Next to accept the default vMotion priority
  10. Click Finish to perform the migration
  11. Below, TF-OC-Apache-B is being migrated to esxi-1.vcf.sddc.lab, the host TF-OC-Apache-A was shown on earlier


Step 2: Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  7. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  8. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B


  9. Scroll down and click Trace
  10. The path the packets take is shown in the resulting topology view.
  11. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the TF-OC-Web-Segment

  12. Click the X to close the “multiple physical received observations” banner if displayed; this is expected in a nested lab environment and can be safely ignored
  13. In the Observations panel, observe the following
     
    • One packet was delivered
    • The physical hop count is zero, indicating that the packet did not leave the host 
    • The packet was injected at the network adapter for TF-OC-Apache-A virtual machine
    • It is then received at the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking, the packet is then forwarded on from the sending VDS port
    • The packet is then received on the distributed firewall at the receiving VDS port for TF-OC-Apache-B
    • With no rule blocking forwarding, the packet is then forwarded to the destination
    • The last step shows the packet being delivered to the network adapter for the TF-OC-Apache-B VM
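The observation sequence above can be sketched as a toy model. The function below only mirrors the reported observation points for a same-host trace; it is not the NSX Traceflow API, and the message strings are invented:

```python
# Toy model of the same-host Traceflow observations listed above.
# Mirrors the reported sequence only; not the NSX Traceflow API.

def traceflow_same_host(src_vm: str, dst_vm: str, fw_allows: bool = True):
    """Return (observations, physical_hop_count) for two VMs on one host."""
    obs = [f"Injected at {src_vm} vNIC",
           f"Received by distributed firewall at {src_vm} VDS port"]
    if not fw_allows:
        # A blocking rule drops the packet right at the sending port.
        return obs + [f"Dropped at {src_vm} VDS port"], 0
    obs += [f"Forwarded from {src_vm} VDS port",
            f"Received by distributed firewall at {dst_vm} VDS port",
            f"Forwarded to {dst_vm} VDS port",
            f"Delivered to {dst_vm} vNIC"]
    return obs, 0  # hop count stays 0: the packet never leaves the host

obs, hops = traceflow_same_host("TF-OC-Apache-A", "TF-OC-Apache-B")
assert hops == 0 and obs[-1].startswith("Delivered")
```

Note the packet crosses the distributed firewall twice, once at each VDS port, which is the behavior the observations above record.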

Lab 2 Summary

Lab 2 shows two very specific capabilities. 

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks. You will use Traceflow in several other communication scenarios in the remainder of this lab.
  • The second point may not have been immediately obvious. While sending a packet between two VMs on the same subnet, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves separate subnets or complex traffic-steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module.

 

Lab 3: View packet flow between hosts

This lab uses the Traceflow capability to view traffic moving between virtual machines on different hosts on the same segment.

Step 1: Setup VMs for test

  • The previous lab moved TF-OC-Apache-A and TF-OC-Apache-B to the same host for testing. This lab requires the virtual machines to be split across two different hosts.
  • Initiate a vMotion to move TF-OC-Apache-B to a different host. This example uses esxi-3

Step 2: View packet flow

  1. Log into NSX Manager
  2. Click Plan and Troubleshoot 
  3. Click Traffic Analysis 
  4. Click Get Started on the Traceflow box
  5. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  6. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B
  7. Scroll down and click Trace
  8. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  9. Resize the observations window by dragging the center icon upward
  10. In the Observations panel, note the following
     
    • One packet was delivered as expected. There is no change in firewall behavior from the previous example
    • Because TF-OC-Apache-B is now on a different host, the packet crosses the physical layer and increments the hop count
    • You can see the Local Endpoint IP and Remote Endpoint IP for esxi-3, and the opposite local and remote view from esxi-1. This is an example of NSX Tunnel Endpoints (TEPs) in use as opposite ends of an overlay network path between hosts.


Step 3: View Host TEP information

  1. Log into NSX Manager 
  2. On the top menu bar click System
  3. In the left menu click Fabric
  4. Click Hosts
  5. Expand the management cluster
  6. Notice the TEP IP Addresses column. Click the “and 1 more” hyperlink to expand the list. Each host has two TEP interfaces in the Host TEP VLAN. In the Holodeck lab configuration, Host TEP addresses are automatically allocated via DHCP on the 172.16.254.0/24 network.


Note: The NSX Manager is responsible for updating all transport nodes in the transport zone any time a VM powers on or is migrated. This provides the mapping of VM to TEP address used to send overlay traffic for a specific VM. As a tunnel endpoint, the NSX-prepared vSphere Distributed Switch decapsulates overlay traffic destined for a VM and encapsulates traffic sent onto the overlay. This is transparent to both the VM and the underlay network.

Lab 3 Summary

Lab 3 extends the concept of overlay networks to separate hosts. NSX Manager keeps all hosts participating in a transport zone up to date as to what Host TEP address to use to send packets to a specific VM. The concept of overlay networking is very powerful, as the IP network information of the virtual machines communicating is completely independent of the underlying transport network. In this example, the two ESXi hosts communicate over a 172.16.254.0/24 subnet for any overlay traffic on any segment running on these hosts. The underlying ESXi hosts could also be on different subnets due to being in different rows in a datacenter, buildings in a campus or datacenters in a local region. Overlay networking removes the artificial limits various datacenter IP strategies place on where a given workload can be run. 
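The control-plane bookkeeping described above can be sketched as follows. The mapping table, TEP addresses, and function names are hypothetical illustrations of the behavior, not NSX Manager's actual protocol:

```python
# Sketch of the control-plane bookkeeping described above: NSX Manager tells
# every transport node which host TEP to use to reach each VM, and updates
# the mapping on power-on or vMotion. (Hypothetical names; not the NSX API.)

vm_to_tep = {
    "TF-OC-Apache-A": "172.16.254.11",  # esxi-1's TEP (addresses assumed)
    "TF-OC-Apache-B": "172.16.254.13",  # esxi-3's TEP
}

def on_vmotion(vm: str, new_tep: str) -> None:
    """NSX Manager pushes the new VM-to-TEP binding to all transport nodes."""
    vm_to_tep[vm] = new_tep

def outer_destination(dst_vm: str) -> str:
    """A sending TEP looks up where to tunnel overlay traffic for dst_vm."""
    return vm_to_tep[dst_vm]

assert outer_destination("TF-OC-Apache-B") == "172.16.254.13"
on_vmotion("TF-OC-Apache-B", "172.16.254.11")   # B migrates to esxi-1
assert outer_destination("TF-OC-Apache-B") == "172.16.254.11"
```

Because only this mapping changes on migration, the VM's own IP and MAC addresses stay constant regardless of which underlay subnet its host sits on.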

 

Lab 4: View L3 packet flow within a host

This lab uses the Traceflow feature in NSX to view traffic moving between virtual machines on the same host but on different segments. In most datacenters, this communication requires packets to travel to an external router at the top of rack, the end of row, or the datacenter core.

Step 1: Setup VMs for test

  • This lab requires TF-OC-Apache-A and TF-OC-MySQL to be on the same host.
  • Initiate a vMotion to move TF-OC-Apache-A to the same host as TF-OC-MySQL. This example uses esxi-3

Step 2: View Layer 3 communications in Traceflow

  1. Log into NSX Manager
  2. Click Plan & Troubleshoot 
  3. Click Traffic Analysis in the left navigation panel
  4. Click Get Started on the Traceflow box
  5. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  6. Click Trace
  7. Notice the communication path traverses the TF-OC-T1 router in the Traceflow topology diagram.


  8. In the Observations panel, review the following:
     
    • As before, with no firewall rules blocking, one packet was delivered
    • Notice the packet routed between the TF-OC-Web-Segment and TF-OC-DB-Segment via the TF-OC-T1 router while never leaving the host, as the Physical Hop Count remains zero


Lab 4 Summary

Lab 4 is a very simple example of the power of distributed routing in VCF. In a traditional environment, L3 routing happens in the datacenter, somewhere away from the server. There are many different architectures but each effectively requires the packets to leave the host to be routed elsewhere in the datacenter to get back to a VM on the same host. With VCF distributed routing, the routing happens right at the host between different connected segments. 

Lab 5: View L3 packet flow between hosts

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on different hosts and different segments. In most datacenters, this communication requires packets to pass through an external router. This example shows the power of distributed routing.

Step 1: Setup VMs for test

  • This lab requires TF-OC-Apache-A and TF-OC-MySQL to be on different hosts.
  • Initiate a vMotion to move TF-OC-Apache-A to a different host  than TF-OC-MySQL. This example uses esxi-2

Step 2: View Layer 3 communications in NSX Traceflow

  1. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  2. Click Trace
  3. In the Observations panel, review the following
     
    • One packet was delivered
    • The packet was injected at the network adapter for TF-OC-Apache-A virtual machine
    • It is then received at the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking, the packet is then forwarded on from the sending VDS port
    • The packet then reaches the TF-OC-T1 router and is forwarded to the TF-OC-DB-Segment
    • Since TF-OC-Apache-A and TF-OC-MySQL are running on different ESXi hosts, the physical hop count increases
    • The packet is then received on the distributed firewall at the receiving VDS port for TF-OC-MySQL
    • With no rule blocking forwarding, the packet is then forwarded to the destination 
    • The last step shows the packet being delivered to the network adapter for the TF-OC-MySQL VM


Lab 5 Summary

Lab 5 demonstrates packets traveling between the VDS ports for two virtual machines running on different ESXi hosts across two segments connected by a Tier-1 router. The important distinction is this router functionality was distributed across all hosts versus a physical device cabled somewhere else in the datacenter. 

 

Lab 6: Test end to end communications

  1. On the Holo-Console, double click the PuTTY icon on the desktop to start the PuTTY application
  2. Enter 10.1.10.50 for the IP address to connect to. This is the IP address of the TF-OC-MySQL VM
  3. Click Open


  4. Click Accept to add the SSH host key to PuTTY’s cache and continue connecting


  5. If a login prompt does not appear, close the PuTTY window and restart this step.
  6. Login with the username ocuser and the password VMware123!


  7. Successfully connecting from the Holo-Console to the TF-OC-MySQL VM verifies the entire SDN path. In this lab configuration, the VCF Edge Cluster connects via ECMP to the Holodeck router, where the Holo-Console is connected. SSH traffic from the Holo-Console flows to the Holodeck router, over the ECMP links to the Tier-0 router, to the TF-OC-T1 router, and on to the TF-OC-MySQL VM on TF-OC-DB-Segment, then returns.


Lab 6 Summary

Lab 6 demonstrates adding new distributed routing and overlay networking with immediate access from outside of the VCF environment through the Tier 0 router configured by the network team. 

 

6.2 Holodeck 5.1.1 Exploring Zero Trust with Distributed Firewall

Prerequisites

  • Terraform plan at c:\VLC\VLC-Holo-Site-1\Holo-Lab-Support-Files\TF-Full-SDN-Demo has been applied

Lab 1: Tagging VMs and Grouping Workloads based on Tags

This lab explores the use of tagging to create groups of VMs to apply specific distributed firewall rules to. In small environments, creating groups based on VM name may suffice. However, as an environment grows, tagging may be a better alternative. This lab assumes the user has familiarity with the NSX interface.

Terminology and definitions:

Tags – A virtual machine is not directly managed by NSX; however, NSX allows tags to be attached to a virtual machine. Tagging enables tag-based grouping of objects (for example, a tag called AppServer can be associated with all application servers).

Security Groups – A security group is a collection of assets or grouping objects from your vSphere inventory.

Security groups are containers that can include multiple object types, such as logical switches, vNICs, IP sets, and virtual machines (VMs). Security groups can have dynamic membership criteria based on security tags, VM name, or logical switch name. For example, all VMs that have the security tag web are automatically added to a specific security group for web servers. After creating a security group, a security policy is applied to that group.

Security Policies – A security policy is a set of Guest Introspection, firewall, and network introspection services that can be applied to a security group. The order in which security policies are displayed is determined by the weight associated with the policy. By default, a new policy is assigned the highest weight so that it is at the top of the table. However, you can modify the default suggested weight to change the order assigned to the new policy. Policies can be stateful or stateless. 

Note: Tagging in NSX is distinct from tagging in vCenter Server. At this time, vCenter Server tags cannot be used to create groupings in NSX. In larger, more automated environments, customers use a solution such as vRealize Automation to deploy virtual machines and containers with security tagging set at time of creation.

To show the capability of tags, the Terraform plan has set up TF-OC-Apache-A, TF-OC-Apache-B, and TF-OC-MySQL with the appropriate tags and security groups. The VM-tag-group mapping is as follows:

  VM              IP Address   Tag            Security Group
  TF-OC-MySQL     10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
  TF-OC-Apache-A  10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
  TF-OC-Apache-B  10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group
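The tag-to-group mapping above can be sketched as dynamic membership computed from tags. This is a conceptual illustration of the behavior, not the NSX grouping API:

```python
# Sketch of tag-based dynamic grouping: group membership is computed from
# tags rather than maintained by hand. (Conceptual only; not the NSX API.)

vm_tags = {
    "TF-OC-MySQL":    {"TF-OC-DB-Tag"},
    "TF-OC-Apache-A": {"TF-OC-Web-Tag"},
    "TF-OC-Apache-B": {"TF-OC-Web-Tag"},
}

def members(group_tag: str) -> set:
    """Dynamic membership: every VM carrying the tag joins the group."""
    return {vm for vm, tags in vm_tags.items() if group_tag in tags}

assert members("TF-OC-Web-Tag") == {"TF-OC-Apache-A", "TF-OC-Apache-B"}
assert members("TF-OC-DB-Tag") == {"TF-OC-MySQL"}

# A new web server (hypothetical) inherits the group just by carrying the tag:
vm_tags["TF-OC-Apache-C"] = {"TF-OC-Web-Tag"}
assert "TF-OC-Apache-C" in members("TF-OC-Web-Tag")
```

The last two lines show why tagging scales better than name-based groups: firewall rules written against the group automatically cover any VM that is tagged correctly at creation time.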

Verify Tags

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Holodeck-5.1.1 folder in the bookmark bar then select Holo-Site-1 -> Mgmt Domain -> Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123!
  4. Navigate to Inventory > Tags
  5. Click in the Filter by Name, Path and more field
  6. Click Tag

 

  7. Search for tags with the string “TF-OC” in the name and click OK

  8. Verify the two TF-OC tags are displayed

 

Verify virtual machines are mapped to tags

  1. Select Inventory > Virtual Machines and click in the Filter area
  2. Scroll down in Basic Detail and select Tag
  3. Filter on TF-OC

 

 

  4. Select the two TF-OC tags and click Apply

  5. Verify TF-OC-Apache-A, TF-OC-Apache-B, and TF-OC-MySQL are present

 

Verify Groups

  1. On the Inventory > Groups panel, click in the Filter by Name, Path and More field

  2. Click on Name in the Basic Detail column

 

  3. Type TF-OC to filter for the group names
  4. Select the TF-OC-Web-Group and TF-OC-DB-Group groups
  5. Click Apply

 

 

  6. Click View Members for each group
  7. The following example shows the View Members details for the TF-OC-DB-Group

At this point we have implemented the following:

 

  VM              IP Address   Tag            Security Group
  TF-OC-MySQL     10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
  TF-OC-Apache-A  10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
  TF-OC-Apache-B  10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group

Lab 1 Summary

Lab 1 demonstrated tagging and grouping in NSX. This capability allows the creation and management of a scalable set of distributed firewall rules.

Lab 2: Implementing zero trust with VCF Distributed Firewall

This lab shows how to implement a zero-trust configuration with the distributed firewall, opening only the communications necessary for the Opencart application. For the purposes of this lab, we will create the following rules. Note: this is a very simplified example and does not represent production security rules.

  Name              Source                       Destination                       Port/Protocol  Action  Notes
  HTTP-Allow        Any                          TF-OC-Web-Group                   HTTP (80)      Allow   Outside access to web port 80
  Web-DB            TF-OC-Web-Group              TF-OC-DB-Group                    3306 (MySQL)   Allow   Web to DB communications
  ssh-admin         10.0.0.0/24 (Holo-Console)   TF-OC-DB-Group, TF-OC-Web-Group   SSH            Allow   SSH from the Holo-Console network only
  ICMP-Admin        10.0.0.0/24 (Holo-Console)   TF-OC-DB-Group, TF-OC-Web-Group   ICMP ALL       Allow   ICMP from the Holo-Console network only
  SDN-Lab-Deny-All  Any                          Any                               Any            Reject  Reject all other traffic

 

Keep in mind that this is all happening at the distributed firewall level, where firewall rules are implemented at the VM’s switch port, rather than requiring the services of a routed (perimeter) firewall. Since groups were created in the previous lab, access rules can now be based on those groups.

Test OpenCart VM’s

  1. Wait approximately five minutes after the Terraform run completes to allow the Opencart system to build its database as part of the installation
  2. Open a web browser to 10.1.10.18
  3. Verify the website is up and running

 

Review TF-OC Policy

  1. If necessary, open a new tab in the Chrome browser
  2. Click the Management NSX-T shortcut in the bookmark bar
  3. Log into NSX Manager as user: admin with the password: VMware123!VMware123!
  4. Navigate to Security > Distributed Firewall in the NSX console
  5. Open the policy named “TF-OC”

 

 

  6. Click on 2 Groups in the TF-OC policy.
  7. Note the policy is applied only to the groups in this lab

Reset TF-OC Policy for lab

This step sets up the firewall for demonstrating zero trust in the lab.

  1. Disable the first four rules inside the TF-OC policy by setting the slider to off
  2. Set the action for TF-OC Deny All to Reject
  3. Click Publish to publish the rules

 

 

Test OpenCart VM’s

  1. Refresh the web browser to 10.1.10.18
  2. Verify the website is not accessible

 

Enable HTTP-Allow

  1. Enable the HTTP-Allow rule with the slider
  2. Click Publish

 

Test OpenCart

  1. Refresh the web browser to 10.1.10.18
  2. The error should change: the web server is now accessible, but it cannot reach the database server.

Add Web-DB rule

This step allows communication from the Apache web servers to MySQL.

  1. Enable the Web-DB rule
  2. Click Publish

Test Opencart

  1. Open a web browser (or refresh an existing window) for 10.1.10.18 and/or 10.1.10.19
  2. Access to the web server should be restored

 

Add SSH-Admin Rule

This rule simulates allowing trusted access from a small set of hosts in the administrative area while blocking lateral SSH in the environment.

  1. Enable the SSH-admin rule
  2. Click Publish

 

Test SSH access

  1. On the Holo-Console, click the Start menu and then PuTTY
  2. Open an SSH session to 10.1.10.18
  3. Accept the security warning if needed

 

  4. Login as ocuser with the password VMware123!
  5. You should successfully login

  6. Attempt to SSH laterally to 10.1.10.19. The connection should fail, as lateral SSH is blocked.

Add ICMP-Admin Rule

This rule simulates allowing ICMP ping for troubleshooting from a small set of hosts in the administrative area while blocking lateral ping in the environment.

  1. Enable the ICMP-Admin rule
  2. Click Publish

 

Test ICMP access

  1. On Holo-Console open a command window
  2. Ping 10.1.10.18 
  3. Ping from the Holo Console should work

  4. Open or reopen a PuTTY session to 10.1.10.18.
  5. Login as ocuser with the password VMware123!
  6. You should successfully login
  7. Attempt to ping 10.1.10.19. The ping should fail, as lateral ICMP is blocked.

Lab 2 Summary

Lab 2 shows the power of the distributed firewall capability in NSX. Using tagging and grouping, we were able to create a scalable set of rules for the Opencart application that allow only the communications necessary for application operation, while blocking all other traffic. This was all done directly at the vSphere VDS switch-port level, rather than at a piece of hardware elsewhere in the datacenter.

Lab 3: View packet flow across the Distributed Firewall

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on the same segment through the distributed firewall. This shows how zero-trust East/West security can be achieved on a single subnet.

Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  7. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  8. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  9. Scroll down and click Trace
  10. The path the packets take is shown in the resulting topology view.
  11. In this example, the trace runs from TF-OC-Apache-A toward TF-OC-Apache-B via the TF-OC-Web-Segment

  12. Click the X to close the “multiple physical received observations” banner if displayed; this is expected in a nested lab environment and can be safely ignored
  13. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port onThe packet was dropped at the sending port on TF-OC-Apache-A (Note Rule ID:1036 in this example)

  1. Click Security, Distributed Firewall, and open the TF-OC policy for viewing
  2. In this example, the deny all rule is rule ID 1029 (Rule IDs will change between deployments, but conceptually this traffic was blocked by the Deny All rule)

View Firewall rule blocking

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  1. Scroll down and click Trace
  2. The path the packets take are shown on the resulting topology view.
  3. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port onThe packet was dropped at the sending port on TF-OC-Apache-A (Note Rule ID:1029 in this example)

Lab 2 Summary

Lab 2 shows two very specific capabilities. 

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks. 
  • The second point may not have been immediately obvious. While sending a packet between two hosts on the same subnet, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves subnets, or complex traffic steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module. 

 

 

VCF Networking Exploring Segments and Distributed Routing

Segments and distributed routing

Overview

VMware Cloud Foundation leverages virtualized (Overlay) networking. This configuration encapsulates L2 network traffic within an L3 underlay network, which facilitates the delivery of networks and network services in a software defined way.

One of the key advantages is its ability to create flexible logical networks that can extend beyond the constraints of physical network boundaries. VCF networking effectively transforms the physical network into a pool of resources. This transformation decouples the consumption of network services from the physical infrastructure, a concept that mirrors the approach taken by vSphere in abstracting compute capacity from server hardware.

This model offers several benefits. It enhances network management efficiency by allowing for the programmable delivery of network services. It also provides scalability, as the logical networks created can span physical network boundaries. Furthermore, it offers flexibility by transforming the physical network into a pool of resources that can be consumed as needed.
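The overlay model described above can be pictured with a short sketch. This is a toy Python model for illustration only — NSX actually uses Geneve encapsulation with real packet headers, and the addresses below are made up — but it shows why VM addressing stays independent of the physical transport:

```python
# Conceptual sketch of overlay encapsulation (not NSX's real Geneve format):
# an inner VM-to-VM frame is wrapped in an outer packet addressed between
# host tunnel endpoints (TEPs), so the underlay only ever routes TEP traffic.

def encapsulate(inner_frame: dict, src_tep: str, dst_tep: str) -> dict:
    """Wrap a VM frame in an outer header addressed TEP-to-TEP."""
    return {"outer_src": src_tep, "outer_dst": dst_tep, "payload": inner_frame}

def decapsulate(outer_packet: dict) -> dict:
    """Strip the outer header and recover the original VM frame."""
    return outer_packet["payload"]

# Hypothetical addresses for illustration only.
vm_frame = {"src_ip": "10.1.10.18", "dst_ip": "10.1.10.19"}
on_the_wire = encapsulate(vm_frame, src_tep="172.16.254.65", dst_tep="172.16.254.66")

# The underlay sees only TEP addresses; the VM frame arrives unchanged.
assert on_the_wire["outer_dst"] == "172.16.254.66"
assert decapsulate(on_the_wire) == vm_frame
```

Because the inner frame is untouched, the VMs' IP scheme never has to match — or even be routable on — the physical network, which is the decoupling the paragraph above describes.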

 

Prerequisites

  • External network access from the Holodeck environment 
  • C:\VLC\VLC-Holo-Site-1\Holo-Build\Post-Deployment\Holodeck-Infrastructure.ps1 has been run post installation to create VM folders
  • Terraform plan at c:\VLC\VLC-Holo-Site-1\Holo-Lab-Support-Files\TF-Full-SDN-Demo has been applied 


Lab 1: Exploring basic virtualized network functionality

This lab walks the vSphere team through the basics of VCF virtualized networking in the vCenter Server and NSX interfaces.

View TF-OC-Web-Segment in vCenter Server

  1. Login to Management Domain vCenter Server
  2. Click on the networking icon and expand the menu on the left-hand side of the vSphere Web Client
  3. Click on TF-OC-Web-Segment
  4. Note the following:
     
    • The “N” denotes this is an NSX segment and not a standard port group
    • The segment ID and Transport Zone for the segment are shown
    • The vDS the segment is attached to
    • The hyperlink for the NSX Manager for the segment

  1. Click on Ports
  2. Note each VM attached to the segment has a port assigned. Other details, such as the MAC address and VLAN ID, are also displayed

  1. Click on the Hosts tab
  2. Note the TF-OC-Web-Segment is connected on each ESXi host in the transport zone. When a segment is created, it is accessible to all hosts in the transport zone.


Perform ICMP Ping between VM's

  1. Click on the summary tab for TF-OC-Apache-A
  2. Open a web console
  3. Login as ocuser/VMware123!


  1. Ping 10.1.10.19 (TF-OC-Apache-B) to test ICMP ping on the same subnet
  2. Ping 10.1.10.50 (TF-OC-MySQL) to test ICMP ping on different subnets

Discover the Network Topology

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Holodeck 5.1 folder in the bookmark bar then select Holo-Site-1->Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click the Networking tab
  5. Select Network Topology from the left-hand side menu
  6. Click Skip to close the mini-tour if needed.


  1. Locate the TF-OC-T1 on the topology view.


  1. Click on 2 VMs under TF-OC-Web-Segment to expand the view
  2. Note the two VMs previously configured on the TF-OC-Web-Segment
  3. Repeat the action for the TF-OC-DB-Segment


  1. Scroll to the top of the topology view. Note the Tier 0 router which connects to the customer core network using an ECMP connection with BGP route propagation. The Tier 0 is typically connected to the customer network when the VCF Workload Domain is initially deployed by the network engineering team. After that, the operations team can create tier 1 routers as needed with no additional network configuration required on the physical network.

Lab 1 Summary

This lab demonstrated how VCF virtualized networking can be utilized to quickly provision L2 and L3 services on existing infrastructure. This enables:

  • Easily delivering network services needed by an application with the application
  • Eliminating delays with traditional network provisioning processes, which can take days to months.
  • Empowering operations staff to deploy approved networks or retain control with the network admin team. 
  • Building a virtualized networking foundation that facilitates ease of workload migrations to other VMware Cloud properties for Disaster Recovery or Cloud Bursting activities. 

This lab usually takes less than 30 minutes to complete. How does this compare to your experience in getting a multi-tier network provisioned for VM use?

 

Lab 2: View packet flow within a host

This lab demonstrates the use of a powerful diagnostic and visualization tool in VCF networking known as Traceflow to view traffic moving between virtual machines on the same host and same segment. Subsequent labs will examine traffic between VMs running on different hosts on the same segment, and VMs communicating between segments on the same and different hosts.

Traceflow injects packets at the point where a VM connects to a vSphere distributed switch (VDS) port. It provides observation points along the packet’s path as it traverses physical and logical entities (such as ESXi hosts, logical switches, and logical routers) in the overlay and underlay network. This makes it possible to identify the path a packet takes to reach its destination, or where a packet is dropped along the way. Each entity reports the packet handling on input and output, allowing for ease of troubleshooting.

Keep in mind that Traceflow is not the same as a ping request/response that goes from guest-VM stack to guest-VM stack. What Traceflow does is observe a marked packet as it traverses the overlay network. Each packet is monitored as it crosses the overlay network until it reaches and is deliverable to the destination guest VM. However, the injected Traceflow packet is never actually delivered to the destination guest VM. This means that a Traceflow can be successful even when the guest VM is powered down.

Note: Until a VM has been powered on after attaching it to an NSX segment, the NSX control plane does not know which host to use to inject packets from that VM as a source, and the Traceflow test fails. After the initial power-on of the VM on any host on the segment, NSX Manager keeps track of the last host the VM ran on.
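The two behaviors just described — Traceflow needs a known injection point for the source VM, but never consults the destination VM's power state — can be sketched in a toy model. This is illustrative Python, not the NSX API; the hostnames are examples:

```python
# Toy model of the Traceflow behaviors described above (not the NSX API):
# packets are injected at the source VM's VDS port, so NSX must know which
# host that VM last ran on. The destination VM's power state is irrelevant
# because the marked packet is never delivered into the guest OS.

def traceflow(src_vm: str, dst_vm: str, last_known_host: dict) -> str:
    if src_vm not in last_known_host:
        # Source VM never powered on since attach: no injection point known.
        return "FAILED: unknown source location"
    # Note: destination power state is deliberately not checked here.
    return f"observed from host {last_known_host[src_vm]} toward {dst_vm}"

last_known_host = {"TF-OC-Apache-A": "esxi-1.vcf.sddc.lab"}  # example data

# Succeeds even if the destination were powered off.
assert traceflow("TF-OC-Apache-A", "TF-OC-Apache-B", last_known_host).startswith("observed")
# Fails for a source VM that has never powered on since attaching.
assert traceflow("New-VM", "TF-OC-Apache-B", last_known_host).startswith("FAILED")
```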

Step 1: Setup VMs for test

This step ensures the participant can observe packet flow between two VMs on the same host by moving one VM as necessary to co-locate it with the other. The subsequent lab will move the VM again to show communications between hosts.

  1. Launch the vSphere Client and login to the Mgmt vCenter using the username administrator@vsphere.local and the password of VMware123!
  2. From the Hosts and Clusters view, click on TF-OC-Apache-A to determine which ESXi host the VM is running on. In this example, TF-OC-Apache-A is running on host esxi-1.vcf.sddc.lab:


  1. Click on TF-OC-Apache-B to determine which ESXi host the VM is running on
  2. If the two VMs (TF-OC-Apache-B and TF-OC-Apache-A) are not on the same host, initiate a vMotion to move them to the same host.
  3. To perform the vMotion, right click on the VM and select Migrate.
  4. Click Next to change the compute resource only
  5. Select the ESXi host to migrate to, then click Next
  6. Click Next to accept the default network selection
  7. Click Next to accept the default vMotion priority
  8. Click Finish to perform the migration
  9. In this example, TF-OC-Apache-B is migrated to esxi-1.vcf.sddc.lab, the host TF-OC-Apache-A was shown on earlier


Step 2: Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B


  1. Scroll down and click Trace
  2. The path the packets take is shown on the resulting topology view.
  3. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the TF-OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • One packet was delivered
    • The physical hop count is zero, indicating that the packet did not leave the host 
    • The packet was injected at the network adapter for TF-OC-Apache-A virtual machine
    • It is then received at the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking, the packet is then forwarded on from the sending VDS port
    • The packet is then received on the distributed firewall at the receiving VDS port for TF-OC-Apache-B
    • With no rule blocking forwarding, the packet is then forwarded to the destination
    • The last step shows the packet being delivered to the network adapter for the TF-OC-Apache-B VM

Lab 2 Summary

Lab 2 shows two very specific capabilities. 

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks. You will use Traceflow in several other communication scenarios in the remainder of this lab.
  • The second point may not have been immediately obvious. While sending a packet between two hosts on the same subnet, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves subnets, or complex traffic steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module. 

 

Lab 3: View packet flow between hosts

This lab uses the Traceflow capability to view traffic moving between virtual machines on different hosts on the same segment.

Step 1: Setup VMs for test

  • The previous lab moved TF-OC-Apache-A and TF-OC-Apache-B to the same host for testing. This lab requires the virtual machines to be split across two different hosts.
  • Initiate a vMotion to move TF-OC-Apache-B to a different host. In this example, TF-OC-Apache-B is moved to esxi-3

Step 2: View packet flow

  1. Log into NSX Manager
  2. Click Plan and Troubleshoot 
  3. Click Traffic Analysis 
  4. Click Get Started on the Traceflow box
  5. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  6. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B
  7. Scroll down and click Trace
  8. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  9. Resize the observations window by dragging the center icon upward
  10. In the Observations panel, note the following
     
    • One packet was delivered as expected. There is no change in firewall behavior from the last example
    • Because TF-OC-Apache-B is now on a different host, the packet crosses the physical layer and increments the hop count
    • You can see the Local Endpoint IP and Remote Endpoint IP for esxi-3, and the opposite local and remote view from esxi-1. This is an example of NSX “Tunnel Endpoints” in use as opposite ends of an overlay network path between hosts.


Step 3: View Host TEP information

  1. Log into NSX Manager 
  2. On the top menu bar click System
  3. In the left menu click Fabric
  4. Click Hosts
  5. Expand the management cluster
  6. Notice the TEP IP Addresses column. Click on the “and 1 more” hyperlink to expand it. Each host has two TEP interfaces in the Host TEP VLAN. In the Holodeck lab configuration, Host TEP addresses are automatically allocated using DHCP on the 172.16.254.0/24 network.


  1. The NSX Manager is responsible for updating all transport nodes in the transport zone any time a VM powers on or is migrated. This provides a mapping of VM to TEP addresses used to send overlay traffic for a specific VM. As a tunnel endpoint, the NSX-prepared vSphere Distributed Switch is responsible for encapsulating traffic a VM sends onto the overlay and decapsulating overlay traffic destined for a VM. This is transparent to both the VM and the underlay network.

Lab 3 Summary

Lab 3 extends the concept of overlay networks to separate hosts. NSX Manager keeps all hosts participating in a transport zone up to date as to what Host TEP address to use to send packets to a specific VM. The concept of overlay networking is very powerful, as the IP network information of the virtual machines communicating is completely independent of the underlying transport network. In this example, the two ESXi hosts communicate over a 172.16.254.0/24 subnet for any overlay traffic on any segment running on these hosts. The underlying ESXi hosts could also be on different subnets due to being in different rows in a datacenter, buildings in a campus or datacenters in a local region. Overlay networking removes the artificial limits various datacenter IP strategies place on where a given workload can be run. 
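The control-plane bookkeeping this summary describes can be sketched as follows. This is a toy Python model of the VM-to-TEP mapping, not NSX's actual control-plane protocol, and the TEP addresses are assumed values on the lab's 172.16.254.0/24 network:

```python
# Sketch of the bookkeeping described above (illustrative only): NSX Manager
# distributes a VM -> current-host mapping to every transport node, and
# updates it when a VM powers on or is vMotioned. A sending host looks up
# the mapping to choose the outer (TEP) destination for overlay traffic.

host_teps = {"esxi-1": "172.16.254.65", "esxi-3": "172.16.254.67"}  # assumed values
vm_location = {"TF-OC-Apache-A": "esxi-1", "TF-OC-Apache-B": "esxi-3"}

def tep_for(vm: str) -> str:
    """Outer destination a sending host uses to reach this VM's current host."""
    return host_teps[vm_location[vm]]

assert tep_for("TF-OC-Apache-B") == "172.16.254.67"

# A vMotion simply updates the mapping; senders start using the new TEP,
# while the VM's own IP addressing never changes.
vm_location["TF-OC-Apache-B"] = "esxi-1"
assert tep_for("TF-OC-Apache-B") == "172.16.254.65"
```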

 

Lab 4: View L3 packet flow within a host

This lab uses the Traceflow feature in NSX to view traffic moving between virtual machines on the same host, on different segments. In most datacenters, the network path required to allow this communication involves packets moving through an external router at the top of rack, end of row, or datacenter core.

Step 1: Setup VMs for test

  • This lab requires TF-OC-Apache-A and TF-OC-MySQL to be on the same host.
  • Initiate a vMotion to move TF-OC-Apache-A to the same host as TF-OC-MySQL. This example uses esxi-3

Step 2: View Layer 3 communications in Traceflow

  1. Log into NSX Manager
  2. Click Plan & Troubleshoot 
  3. Click Traffic Analysis in the left navigation panel
  4. Click Get Started on the Traceflow box
  5. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  6. Click Trace
  7. Notice the communication path traverses the TF-OC-T1 router in the Traceflow topology diagram.


  1. In the Observations panel, review the following:
     
    • As before, with no firewall rules in place, one packet was delivered
    • Notice the packet is routed between the TF-OC-Web-Segment and TF-OC-DB-Segment via the TF-OC-T1 router, while never leaving the host, as the Physical Hop Count remains zero


Lab 4 Summary

Lab 4 is a very simple example of the power of distributed routing in VCF. In a traditional environment, L3 routing happens in the datacenter, somewhere away from the server. There are many different architectures but each effectively requires the packets to leave the host to be routed elsewhere in the datacenter to get back to a VM on the same host. With VCF distributed routing, the routing happens right at the host between different connected segments. 
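The host-resident routing decision Lab 4 demonstrates can be sketched in a few lines. Note this is purely illustrative: this section does not state the segments' prefix lengths, so the /27 networks below are assumptions chosen so that 10.1.10.18/.19 and 10.1.10.50 land on different segments:

```python
import ipaddress

# Sketch of the distributed-routing idea above. The /27 prefixes are
# assumptions for illustration; the lab's real subnet masks are not shown.
segments = {
    "TF-OC-Web-Segment": ipaddress.ip_network("10.1.10.0/27"),
    "TF-OC-DB-Segment": ipaddress.ip_network("10.1.10.32/27"),
}

def route_locally(dst_ip):
    """A host-resident Tier-1 router forwards between its connected
    segments without the packet ever leaving the hypervisor.
    Returns the destination segment, or None to hand off upstream."""
    dst = ipaddress.ip_address(dst_ip)
    for name, net in segments.items():
        if dst in net:
            return name
    return None  # not a connected segment: forward toward the Tier-0

# Web VM to DB VM: routed on the host, physical hop count stays zero.
assert route_locally("10.1.10.50") == "TF-OC-DB-Segment"
# Anything else leaves via the Tier-0 uplink.
assert route_locally("8.8.8.8") is None
```

The design point is that this lookup runs on every host, so inter-segment traffic between co-located VMs never needs to hairpin through a physical router.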

Lab 5: View L3 packet flow between hosts

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on different hosts and different segments. In most datacenters, the network path required to allow this communication involves packets moving through an external router. This example shows the power of distributed routing.

Step 1: Setup VMs for test

  • This lab requires TF-OC-Apache-A and TF-OC-MySQL to be on different hosts.
  • Initiate a vMotion to move TF-OC-Apache-A to a different host  than TF-OC-MySQL. This example uses esxi-2

Step 2: View Layer 3 communications in NSX Traceflow

  1. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  2. Click Trace
  3. In the Observations panel, review the following
     
    • One packet was delivered
    • The packet was injected at the network adapter for TF-OC-Apache-A virtual machine
    • It is then received at the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking, the packet is then forwarded on from the sending VDS port
    • The packet then hits the TF-OC-T1 router and is forwarded to the TF-OC-DB-Segment
    • Since TF-OC-Apache-A and TF-OC-MySQL are running on different ESXi hosts, the physical hop count increases
    • The packet is then received on the distributed firewall at the receiving VDS port for TF-OC-MySQL
    • With no rule blocking forwarding, the packet is then forwarded to the destination 
    • The last step shows the packet being delivered to the network adapter for the TF-OC-MySQL VM


Lab 5 Summary

Lab 5 demonstrates packets traveling between the VDS ports for two virtual machines running on different ESXi hosts across two segments connected by a Tier-1 router. The important distinction is this router functionality was distributed across all hosts versus a physical device cabled somewhere else in the datacenter. 

 

Lab 6: Test end to end communications

  1. On the Holo-Console, double click the PuTTY icon on the desktop to start the PuTTY application
  2. Enter 10.1.10.50 for the IP address to connect to. This is the IP address for the TF-OC-MySQL VM
  3. Click Open


  1. Click Accept to add the SSH host key to PuTTY’s cache and continue to connect


  1. If a login prompt does not appear, close the PuTTY window, and restart this step.
  2. Login with the username ocuser and the password VMware123!


  1. Successfully connecting from the Holo-Console to the TF-OC-MySQL VM verifies the entire SDN connection. In this lab configuration, the VCF Edge Cluster connects via ECMP to the Holodeck router where the Holo-Console is connected. SSH traffic from the Holo-Console flows to the Holodeck router, over ECMP links to the Tier-0 router, to the TF-OC-T1 router, to the TF-OC-MySQL VM on TF-OC-DB-Segment, and returns.


Lab 6 Summary

Lab 6 demonstrates adding new distributed routing and overlay networking with immediate access from outside of the VCF environment through the Tier 0 router configured by the network team. 

 

6.2 Holodeck 5.1.1 Exploring Zero Trust with Distributed Firewall

Prerequisites

  • Deploy SDN Exploration Infrastructure with Terraform completed

Lab 1: Tagging VMs and Grouping Workloads based on Tags

This lab explores the use of tagging to create groups of VMs to apply specific distributed firewall rules to. In small environments, creating groups based on VM name may suffice. However, as an environment grows, tagging may be a better alternative. This lab assumes the user has familiarity with the NSX interface.

Terminology and definitions:

Tags – A virtual machine is not directly managed by NSX; however, NSX allows attachment of tags to a virtual machine. This tagging enables tag-based grouping of objects. For example, a tag called AppServer can be associated with all application servers.

Security Groups – A security group is a collection of assets or grouping objects from your vSphere inventory.

Security Groups are containers that can contain multiple object types including logical switch, vNIC, IPset, and Virtual Machine (VM). Security groups can have dynamic membership criteria based on security tags, VM name or logical switch name. For example, all VMs that have the security tag web will be automatically added to a specific security group destined for Web servers. After creating a security group, a security policy is applied to that group.

Security Policies – A security policy is a set of Guest Introspection, firewall, and network introspection services that can be applied to a security group. The order in which security policies are displayed is determined by the weight associated with the policy. By default, a new policy is assigned the highest weight so that it is at the top of the table. However, you can modify the default suggested weight to change the order assigned to the new policy. Policies can be stateful or stateless. 

Note: Tagging in NSX is distinct from tagging in vCenter Server. At this time, vCenter Server tags cannot be used to create groupings in NSX. In larger, more automated environments, customers use a solution such as vRealize Automation to deploy virtual machines and containers with security tagging set at time of creation.
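The tag-and-group relationship described above can be sketched as follows. This is a toy Python model of dynamic membership, not the NSX object model; the VM names and tags match this lab, while the final "new web server" step illustrates the general principle rather than a lab task:

```python
# Toy model of tag-based dynamic group membership (not the NSX object model).
# A group's membership criterion selects every VM carrying a given tag, so
# newly tagged VMs join the group automatically, with no rule changes.

vm_tags = {
    "TF-OC-MySQL": {"TF-OC-DB-Tag"},
    "TF-OC-Apache-A": {"TF-OC-Web-Tag"},
    "TF-OC-Apache-B": {"TF-OC-Web-Tag"},
}

def group_members(required_tag):
    """Evaluate a group's dynamic membership criterion."""
    return sorted(vm for vm, tags in vm_tags.items() if required_tag in tags)

assert group_members("TF-OC-Web-Tag") == ["TF-OC-Apache-A", "TF-OC-Apache-B"]
assert group_members("TF-OC-DB-Tag") == ["TF-OC-MySQL"]

# Tagging a hypothetical new web server adds it to the group automatically;
# every firewall rule referencing the group picks it up with no edits.
vm_tags["TF-OC-Apache-C"] = {"TF-OC-Web-Tag"}
assert "TF-OC-Apache-C" in group_members("TF-OC-Web-Tag")
```

This automatic re-evaluation is why tag-based grouping scales better than name-based groups as the environment grows.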

To show the capability of tags, the Terraform script has set up TF-OC-Apache-A, TF-OC-Apache-B, and TF-OC-MySQL with the appropriate tags and security groups. The VM-tag-group mapping is as follows:

VM              IP Address   Tag            Security Group
TF-OC-MySQL     10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
TF-OC-Apache-A  10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
TF-OC-Apache-B  10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group

Verify Tags

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Holodeck-5.1.1 folder in the bookmark bar then select Holo-Site-1 -> Mgmt Domain -> Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123!
  4. Navigate to Inventory -> Tags
  5. Click in the Filter by Name, Path and more field
  6. Click Tag

 

  1. Search for Tags with the string “TF-OC” in the name and click OK

  1. Verify the two TF-OC tags created by the Terraform script are displayed

 

Verify virtual machines are mapped to tags.

  1. Select Inventory-Virtual Machines and click in the Filter area
  2. Scroll down in Basic Detail and select Tag
  3. Filter on TF-OC

 

 

  1. Select our two tags and click Apply

  1. Verify TF-OC-Apache-A, TF-OC-Apache-B  and TF-OC-MySQL are present

 

Verify Groups

  1. On the Inventory-Groups panel click in the Filter by Name, Path and More field

  1. Click on Name in the Basic Detail column

 

  1. Type TF-OC to filter for our group names
  2. Select the TF-OC-Web-Group and TF-OC-DB-Group groups
  3. Click Apply

 

 

  1. Click View Members for each group
  2. The following example shows the view when looking at the View Members details for the TF-OC-DB-Group 

At this point we have implemented the following:

 

VM              IP Address   Tag            Security Group
TF-OC-MySQL     10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
TF-OC-Apache-A  10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
TF-OC-Apache-B  10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group

Lab 1 Summary

Lab 1 shows tagging and grouping in NSX. This capability allows creation and management of a scalable set of distributed firewall rules.

Lab 2: Implementing zero trust with VCF Distributed Firewall

This lab shows how to implement a zero trust configuration with the distributed firewall, opening only the communications necessary to access our Opencart application. For the purposes of this lab, we will create the following rules. Note: this is a very simplified example and does not represent production security rules.

Name              Source                      Destination                      Port/Protocol  Action  Notes
HTTP-Allow        Any                         TF-OC-Web-Group                  HTTP (80)      Allow   Outside to web port 80
Web-DB            TF-OC-Web-Group             TF-OC-DB-Group                   3306 (MySQL)   Allow   Web to DB communications
ssh-admin         10.0.0.0/24 (Holo-Console)  TF-OC-DB-Group, TF-OC-Web-Group  SSH            Allow   SSH from Holo-Console network only
ICMP-Admin        10.0.0.0/24 (Holo-Console)  TF-OC-DB-Group, TF-OC-Web-Group  ICMP ALL       Allow   ICMP from Holo-Console network only
SDN-Lab-Deny-All  Any                         Any                              Any            Reject  Reject all other traffic

 

Keep in mind that this is all happening at the distributed firewall level, where firewall rules are implemented at the VM switch port, versus needing the services of a routed (perimeter) firewall. Since we created groups in the previous lab, we can now create access rules based on those groups.
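As a mental model of how first-match evaluation of this rule set plays out, the following sketch replays the lab's five rules in Python. This illustrates the rule logic only, not how the NSX datapath is implemented; the IPs and group memberships come from the tables in this lab:

```python
import ipaddress

# Toy first-match evaluation of the lab's rule table (a sketch of the
# logic, not the NSX datapath). Group membership and the 10.0.0.0/24
# admin network come from the lab tables.
WEB = {"10.1.10.18", "10.1.10.19"}     # TF-OC-Web-Group
DB = {"10.1.10.50"}                    # TF-OC-DB-Group
ADMIN = ipaddress.ip_network("10.0.0.0/24")

def in_admin(ip):
    return ipaddress.ip_address(ip) in ADMIN

RULES = [  # (match predicate, action), evaluated top to bottom
    (lambda s, d, svc: d in WEB and svc == "HTTP", "Allow"),                    # HTTP-Allow
    (lambda s, d, svc: s in WEB and d in DB and svc == "MySQL", "Allow"),       # Web-DB
    (lambda s, d, svc: in_admin(s) and d in WEB | DB and svc == "SSH", "Allow"),    # ssh-admin
    (lambda s, d, svc: in_admin(s) and d in WEB | DB and svc == "ICMP", "Allow"),   # ICMP-Admin
    (lambda s, d, svc: True, "Reject"),                                         # SDN-Lab-Deny-All
]

def evaluate(src, dst, service):
    for match, action in RULES:
        if match(src, dst, service):
            return action

assert evaluate("10.0.0.201", "10.1.10.18", "HTTP") == "Allow"   # outside to web
assert evaluate("10.1.10.18", "10.1.10.50", "MySQL") == "Allow"  # web to DB
assert evaluate("10.0.0.201", "10.1.10.18", "SSH") == "Allow"    # admin SSH
assert evaluate("10.1.10.18", "10.1.10.19", "SSH") == "Reject"   # lateral SSH blocked
assert evaluate("10.1.10.18", "10.1.10.19", "ICMP") == "Reject"  # lateral ping blocked
```

The lateral-traffic asserts at the end correspond to the SSH and ping tests you run later in this lab: anything not explicitly allowed falls through to the deny all rule.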

Test OpenCart VM’s

  1. Wait approximately five minutes after the Terraform script completes to allow the Opencart system to build the database as part of the installation
  2. Open a web browser to 10.1.10.18
  3. Verify the website is up and running

 

Review TF-OC Policy

  1. If necessary, open a new tab in the Chrome browser
  2. Click the Management NSX-T shortcut in the bookmark bar
  3. Log into NSX Manager as user: admin with the password: VMware123!VMware123!
  4. Navigate to Security -> Distributed Firewall in the NSX-T Console
  5. Open the policy named “TF-OC”

 

 

  1. Click on  2 Groups in the TF-OC policy.
  2. Note the policy is only applied to the groups in this lab 

Reset TF-OC Policy for lab

This step sets up the firewall for demonstrating zero trust in the lab.

  1. Disable the first four rules inside the TF-OC policy by setting the slider to off
  2. Set the action for TF-OC Deny All to Reject
  3. Click Publish to publish the rules

 

 

Test OpenCart VM’s

  1. Refresh the web browser to 10.1.10.18
  2. Verify the website is not accessible

 

Enable HTTP-Allow

  1. Enable the HTTP-Allow rule with the slider
  2. Click publish

 

Test OpenCart

  1. Refresh the web browser to 10.1.10.18
  2. The error should change as the web server is now accessible, but the web server cannot access the database server.

Add Web-DB rule

This step will allow communications from the Apache web servers to MySQL.

  1. Enable the Web-DB rule
  2. Click Publish

Test Opencart

  1. Open a web browser (or refresh an existing window) for 10.1.10.18 and/or 10.1.10.19
  2. Access to the web server should be restored

 

Add SSH-Admin Rule

This rule simulates allowing trusted access from a small set of hosts in the administrative area but blocking lateral SSH in the environment

  1. Enable the ssh-admin rule
  2. Click Publish

 

Test SSH access

  1. On Holo-Console, click the Start menu and then PuTTY
  2. Open an SSH session to 10.1.10.18
  3. Accept the security warning if needed

 

  1. Login as ocuser with password VMware123!
  2. You should successfully login

  1. Attempt to SSH laterally to 10.1.10.19. The connection should fail, because lateral SSH between the web servers is blocked

Add ICMP-Admin Rule

This rule simulates allowing ICMP Ping for troubleshooting from a small set of hosts in the administrative area but blocking lateral ping in the environment

  1. Enable the ICMP-Admin rule
  2. Click Publish

 

Test ICMP access

  1. On Holo-Console open a command window
  2. Ping 10.1.10.18 
  3. Ping from the Holo Console should work

  1. Open or reopen a putty session to 10.1.10.18.
  2. Login as ocuser with password VMware123!
  3. You should successfully login
  4. Attempt to ping 10.1.10.19. The ping should fail, because lateral ICMP between the web servers is blocked

Lab 2 Summary

Lab 2 shows the power of the distributed firewall capability in NSX. Using tagging and grouping, we were able to create a scalable set of rules for our Opencart application that only allow necessary communications for application operation, while blocking all other traffic. This was all done directly at the vSphere VDS switch port level, versus a piece of hardware elsewhere in the datacenter.  

Lab 3: View packet flow across the Distributed Firewall

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on the same segment through the distributed firewall. This shows how Zero Trust East/West security can be achieved on a single subnet.

Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  1. Scroll down and click Trace
  2. The path the packets would take is shown on the resulting topology view.
  3. In this example, the trace runs from TF-OC-Apache-A to TF-OC-Apache-B via the TF-OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port on TF-OC-Apache-A (note Rule ID 1036 in this example)

  1. Click Security, Distributed Firewall, and open the TF-OC policy for viewing
  2. In this example, the deny all rule is rule ID 1029 (Rule IDs will change between deployments, but conceptually this traffic was blocked by the Deny All rule)

View Firewall rule blocking

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  1. Scroll down and click Trace
  2. The path the packets take is shown on the resulting topology view.
  3. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port on TF-OC-Apache-A (note Rule ID 1029 in this example)

Lab 3 Summary

Lab 3 shows two very specific capabilities.

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks. 
  • The second point may not have been immediately obvious. While sending a packet between two hosts on the same subnet, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves separate subnets or complex traffic-steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module.

 

 

VCF Networking Exploring Segments and Distributed Routing

Prerequisites

  • Deploy SDN Exploration Infrastructure with Terraform completed

Lab 1: Tagging VMs and Grouping Workloads based on Tags

This lab explores the use of tagging to create groups of VMs to apply specific distributed firewall rules to. In small environments, creating groups based on VM name may suffice. However, as an environment grows, tagging may be a better alternative. This lab assumes the user has familiarity with the NSX interface.

Terminology and definitions:

Tags – A virtual machine is not directly managed by NSX; however, NSX allows attachment of tags to a virtual machine. This tagging enables tag-based grouping of objects. For example, a tag called AppServer can be associated with all application servers.

Security Groups – A security group is a collection of assets or grouping objects from your vSphere inventory.

Security Groups are containers that can include multiple object types, including logical switches, vNICs, IP sets, and virtual machines (VMs). Security groups can have dynamic membership criteria based on security tags, VM name, or logical switch name. For example, all VMs that have the security tag web are automatically added to a specific security group destined for web servers. After creating a security group, a security policy is applied to that group.

Security Policies – A security policy is a set of Guest Introspection, firewall, and network introspection services that can be applied to a security group. The order in which security policies are displayed is determined by the weight associated with the policy. By default, a new policy is assigned the highest weight so that it is at the top of the table. However, you can modify the default suggested weight to change the order assigned to the new policy. Policies can be stateful or stateless. 

Note: Tagging in NSX is distinct from tagging in vCenter Server. At this time, vCenter Server tags cannot be used to create groupings in NSX. In larger, more automated environments, customers use a solution such as vRealize Automation to deploy virtual machines and containers with security tagging set at time of creation.

To show the capability of tags, the Terraform script has set up TF-OC-Apache-A, TF-OC-Apache-B, and TF-OC-MySQL with the appropriate tags and security groups. The VM-Tag-Group mapping is as follows:

VM               IP Address   Tag            Security Group
TF-OC-MySQL      10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
TF-OC-Apache-A   10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
TF-OC-Apache-B   10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group
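In the Terraform sample code, a mapping like this can be expressed with the NSX provider's VM-tag and group resources. The following is a minimal sketch using the terraform-provider-nsxt resources nsxt_policy_vm_tags and nsxt_policy_group; the resource labels here are illustrative and may differ from the shipped demo package:

```hcl
# Look up an existing VM in the NSX inventory by display name
data "nsxt_policy_vm" "apache_a" {
  display_name = "TF-OC-Apache-A"
}

# Attach the web tag to the VM (NSX tagging, distinct from vCenter Server tags)
resource "nsxt_policy_vm_tags" "apache_a" {
  instance_id = data.nsxt_policy_vm.apache_a.instance_id
  tag {
    tag = "TF-OC-Web-Tag"
  }
}

# Group with dynamic membership: any VM carrying TF-OC-Web-Tag joins automatically
resource "nsxt_policy_group" "web" {
  display_name = "TF-OC-Web-Group"
  criteria {
    condition {
      key         = "Tag"
      member_type = "VirtualMachine"
      operator    = "EQUALS"
      value       = "TF-OC-Web-Tag"
    }
  }
}
```

Because membership is criteria-based, a new web server only needs the tag applied at deployment time to inherit every firewall rule scoped to the group.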

Verify Tags

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Holodeck-5.1.1 folder in the bookmark bar then select Holo-Site-1 -> Mgmt Domain -> Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123!
  4. Navigate to Inventory-Tags
  5. Click in the Filter by Name, Path and more field
  6. Click Tag

 

  1. Search for Tags with the string “TF-OC” in the name and click OK

  1. Verify the two tags created earlier are displayed

 

Verify virtual machines are mapped to tags

  1. Select Inventory-Virtual Machines and click in the Filter area
  2. Scroll down in Basic Detail and select Tag
  3. Filter on TF-OC

 

 

  1. Select our two tags and click Apply

  1. Verify TF-OC-Apache-A, TF-OC-Apache-B, and TF-OC-MySQL are present

 

Verify Groups

  1. On the Inventory-Groups panel click in the Filter by Name, Path and More field

  1. Click on Name in the Basic Detail column

 

  1. Type TF-OC to filter for our group names
  2. Select the TF-OC-Web-Group and TF-OC-DB-Group groups
  3. Click Apply

 

 

  1. Click View Members for each group
  2. The following example shows the view when looking at the View Members details for the TF-OC-DB-Group 

At this point we have implemented the following:

 

VM               IP Address   Tag            Security Group
TF-OC-MySQL      10.1.10.50   TF-OC-DB-Tag   TF-OC-DB-Group
TF-OC-Apache-A   10.1.10.18   TF-OC-Web-Tag  TF-OC-Web-Group
TF-OC-Apache-B   10.1.10.19   TF-OC-Web-Tag  TF-OC-Web-Group

Lab 1 Summary

Lab 1 shows tagging and grouping in NSX. This capability allows creation and management of a scalable set of distributed firewall rules.

Lab 2: Implementing zero trust with VCF Distributed Firewall

This lab shows how to implement a zero trust configuration with the distributed firewall, opening only the communications necessary to access our Opencart application. For the purposes of this lab, we will create the following rules. Note: this is a very simplified example and does not represent production security rules.

Name              Source                      Destination                      Port/Protocol  Action  Notes
HTTP-Allow        Any                         TF-OC-Web-Group                  HTTP (80)      Allow   Outside to web port 80
Web-DB            TF-OC-Web-Group             TF-OC-DB-Group                   3306 (MySQL)   Allow   Allow Web to DB comms
ssh-admin         10.0.0.0/24 (Holo-Console)  TF-OC-DB-Group, TF-OC-Web-Group  SSH            Allow   Allow SSH from Holo Console network only
ICMP-Admin        10.0.0.0/24 (Holo-Console)  TF-OC-DB-Group, TF-OC-Web-Group  ICMP ALL       Allow   Allow ICMP from Holo Console network only
SDN-Lab-Deny-All  Any                         Any                              Any            Reject  Reject all else

 

Keep in mind that this is all happening at the distributed firewall level, where firewall rules are implemented at the VM switch port, rather than requiring the services of a routed (perimeter) firewall. Since we created groups in the previous lab, we can now create access rules based on those groups.
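In the Terraform sample code, a rule set like this maps onto a single nsxt_policy_security_policy resource. The sketch below is abbreviated to two of the five rules and assumes the Lab 1 groups are managed as nsxt_policy_group resources named web and db (illustrative labels, not necessarily those in the demo package); it uses the provider's data source to reference the predefined HTTP service:

```hcl
# Predefined NSX service entry for HTTP (TCP 80)
data "nsxt_policy_service" "http" {
  display_name = "HTTP"
}

resource "nsxt_policy_security_policy" "tf_oc" {
  display_name = "TF-OC"
  category     = "Application"
  # Apply the policy only to the Opencart groups, not the whole domain
  scope        = [nsxt_policy_group.web.path, nsxt_policy_group.db.path]

  # Anyone may reach the web tier on port 80
  rule {
    display_name       = "HTTP-Allow"
    destination_groups = [nsxt_policy_group.web.path]
    services           = [data.nsxt_policy_service.http.path]
    action             = "ALLOW"
  }

  # Everything not explicitly allowed above is rejected
  rule {
    display_name = "SDN-Lab-Deny-All"
    action       = "REJECT"
  }
}
```

Rules are evaluated top to bottom within the policy, so the deny-all rule must remain last; the remaining Web-DB, ssh-admin, and ICMP-Admin rules would be additional rule blocks above it.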

Test OpenCart VMs

  1. Wait approximately five minutes after the Terraform script completes to allow the Opencart system to build the database as part of the installation
  2. Open a web browser to 10.1.10.18
  3. Verify the website is up and running

 

Review TF-OC Policy

  1. If necessary, open a new tab in the Chrome browser
  2. Click the Management NSX-T shortcut in the bookmark bar
  3. Log into NSX Manager as user: admin with the password: VMware123!VMware123!
  4. Navigate to Security -> Distributed Firewall in the NSX-T Console
  5. Open the policy named “TF-OC”

 

 

  1. Click on 2 Groups in the TF-OC policy.
  2. Note the policy is only applied to the groups in this lab 

Reset TF-OC Policy for lab

This step sets up the firewall for demonstrating zero trust in the lab.

  1. Disable the first four rules inside the TF-OC policy by setting the slider to off
  2. Set the action for TF-OC Deny All to Reject
  3. Click Publish to publish the rules
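The same reset could be driven from the Terraform sample code instead of the UI: the rule block in nsxt_policy_security_policy supports a disabled attribute and an action of REJECT (attribute names from the terraform-provider-nsxt documentation; the fragment below is illustrative, not the demo package's exact code):

```hcl
  # Inside the nsxt_policy_security_policy "TF-OC" resource:
  rule {
    display_name       = "HTTP-Allow"
    destination_groups = [nsxt_policy_group.web.path]
    services           = [data.nsxt_policy_service.http.path]
    action             = "ALLOW"
    disabled           = true   # slider off: rule stays defined but is not enforced
  }

  rule {
    display_name = "SDN-Lab-Deny-All"
    action       = "REJECT"     # refuse traffic explicitly rather than silently dropping it
  }
```

Using Reject rather than Drop returns an explicit refusal to the sender, so the browser in the next test fails immediately instead of waiting for a timeout.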

 

 

Test OpenCart VMs

  1. Refresh the web browser to 10.1.10.18
  2. Verify the website is not accessible

 

Enable HTTP-Allow

  1. Enable the HTTP-Allow rule with the slider
  2. Click publish

 

Test OpenCart

  1. Refresh the web browser to 10.1.10.18
  2. The error should change as the web server is now accessible, but the web server cannot access the database server.

Add Web-DB rule

This step allows communications from the Apache web servers to MySQL

  1. Enable the Web-DB rule
  2. Click publish

Test Opencart

  1. Open a web browser (or refresh an existing window) for 10.1.10.18 and/or 10.1.10.19
  2. Access to the web server should be restored

 

Add SSH-Admin Rule

This rule simulates allowing trusted access from a small set of hosts in the administrative area while blocking lateral SSH in the environment.

  1. Enable the SSH admin rule
  2. Click publish

 

Test SSH access

  1. On Holo-Console, click the Start menu and then PuTTY
  2. Open an SSH session to 10.1.10.18
  3. Accept the security warning if needed

 

  1. Log in as ocuser with the password VMware123!
  2. You should log in successfully

  1. Attempt to SSH laterally to 10.1.10.19. The connection should be refused, as SSH is allowed only from the Holo Console network

Add ICMP-Admin Rule

This rule simulates allowing ICMP ping for troubleshooting from a small set of hosts in the administrative area while blocking lateral ping in the environment.

  1. Enable the ICMP-Admin rule
  2. Click Publish

 

Test ICMP access

  1. On Holo-Console open a command window
  2. Ping 10.1.10.18 
  3. Ping from the Holo Console should work

  1. Open or reopen a PuTTY session to 10.1.10.18.
  2. Log in as ocuser with the password VMware123!
  3. You should log in successfully
  4. Attempt to ping 10.1.10.19. The ping should fail, as ICMP is allowed only from the Holo Console network

Lab 2 Summary

Lab 2 shows the power of the distributed firewall capability in NSX. Using tagging and grouping, we were able to create a scalable set of rules for our Opencart application that allow only the communications necessary for application operation, while blocking all other traffic. This was all done directly at the vSphere VDS switch port level, rather than on a piece of hardware elsewhere in the datacenter.

Lab 3: View packet flow across the Distributed Firewall

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on the same segment through the distributed firewall. This shows how Zero Trust East/West security can be achieved on a single subnet.

Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  1. Scroll down and click Trace
  2. The path the packets take is shown on the resulting topology view.
  3. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port on TF-OC-Apache-A (note Rule ID 1036 in this example)

  1. Click Security, Distributed Firewall, and open the TF-OC policy for viewing
  2. In this example, the deny all rule is rule ID 1029 (Rule IDs will change between deployments, but conceptually this traffic was blocked by the Deny All rule)

View Firewall rule blocking

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123! 
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  1. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  2. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B

  1. Scroll down and click Trace
  2. The path the packets take is shown on the resulting topology view.
  3. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the OC-Web-Segment

  1. Click the X to close the “multiple physical received observations” banner if displayed, as this is expected in a nested lab environment and it can be safely ignored
  2. In the Observations panel, observe the following
     
    • Zero packets were delivered.
    • The packet was dropped at the sending port on TF-OC-Apache-A (note Rule ID 1029 in this example)

 

Summary

This exercise shows two very specific capabilities.

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks.
  • The second point may not have been immediately obvious. While sending a packet between two hosts on the same subnet, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves separate subnets or complex traffic-steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module.
