Holodeck 5.1.1 Packet Flow

Utilizing Traceflow in NSX Manager

These exercises demonstrate the use of Traceflow, a powerful diagnostic and visualization tool in VCF networking, to view traffic moving between virtual machines. The labs examine multiple use cases: traffic between VMs running on the same host on the same segment, on different hosts on the same segment, and between VMs communicating across segments on the same and different hosts.

Traceflow injects packets at the point where a VM connects to a vSphere Distributed Switch (VDS) port. It provides observation points along the packet's path as it traverses physical and logical entities (such as ESXi hosts, logical switches, and logical routers) in the overlay and underlay network. This makes it possible to identify the path a packet takes to reach its destination, or where a packet is dropped along the way. Each entity reports its packet handling on input and output, simplifying troubleshooting.

Keep in mind that Traceflow is not the same as a ping request/response that travels from guest-VM stack to guest-VM stack. Instead, Traceflow observes a marked packet as it traverses the overlay network. Each packet is monitored as it crosses the overlay until it reaches, and is deliverable to, the destination guest VM. However, the injected Traceflow packet is never actually delivered to the destination guest VM, which means Traceflow can be used successfully even when the guest VM is powered down.
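For those who prefer to drive these tests programmatically, a Traceflow can also be requested through the NSX REST API. The following is a minimal sketch in Python using the requests library; the /api/v1/traceflows endpoints follow the NSX-T Manager API, but the exact payload fields (and the logical port ID of the source vNIC) should be verified against the API reference for your NSX version.

    # Minimal sketch: start a Traceflow via the NSX Manager REST API and
    # poll its observations. Endpoint paths and payload fields are based
    # on the NSX-T Manager API and may differ by version.
    import time
    import requests

    NSX_MANAGER = "https://nsx-mgmt.vcf.sddc.lab"  # hypothetical lab FQDN
    session = requests.Session()
    session.auth = ("admin", "VMware123!VMware123!")
    session.verify = False  # lab only: self-signed certificates

    # Logical port ID of the source VM's vNIC, discoverable via
    # GET /api/v1/logical-ports.
    body = {
        "lport_id": "<source-lport-uuid>",
        "packet": {
            "resource_type": "FieldsPacketData",
            "transport_type": "UNICAST",
            # eth_header/ip_header fields identifying the destination go here
        },
    }

    resp = session.post(f"{NSX_MANAGER}/api/v1/traceflows", json=body)
    resp.raise_for_status()
    traceflow_id = resp.json()["id"]

    # Observations accumulate as the marked packet crosses each component.
    time.sleep(5)
    obs = session.get(
        f"{NSX_MANAGER}/api/v1/traceflows/{traceflow_id}/observations"
    ).json()
    for step in obs.get("results", []):
        print(step.get("resource_type"), step.get("component_name"))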

Note 1: All of these exercises can be accomplished with either the manual SDN configuration or the Terraform configuration. Please select the appropriate VMs during the exercise.

Note 2: Until a VM has been powered on after attaching it to an NSX segment, the NSX control plane does not know which host to use to inject packets with that VM as the source, and the Traceflow test fails. After the initial power-on of the VM on any host on the segment, NSX Manager keeps track of the last location where the VM ran.

Note 3: If the Distributed Firewall has been configured, please make sure that it is currently set to accept all traffic.


 

Visualize packet flow within a host

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on the same host and same segment. 

Set up VMs for test

These steps ensure the participant can observe packet flow between two VMs on the same host by moving one VM, as necessary, to co-locate it with the other. The subsequent lab moves the VM to show communications between hosts.

  1. Launch the vSphere Client and log in to the Mgmt vCenter using the username administrator@vsphere.local and the password VMware123!
  2. From the Hosts and Clusters view, click on TF-OC-Apache-A to determine which ESXi host the VM is running on. In this example, TF-OC-Apache-A is running on host esxi-1.vcf.sddc.lab:


  3. Click on TF-OC-Apache-B to determine which ESXi host that VM is running on
  4. If the two VMs (TF-OC-Apache-B and TF-OC-Apache-A) are not on the same host, initiate a vMotion to move them to the same host
  5. To perform the vMotion, right click on the VM and select Migrate
  6. Click Next to change the compute resource only
  7. Select the ESXi host to migrate to, then click Next
  8. Click Next to accept the default network selection
  9. Click Next to accept the default vMotion priority
  10. Click Finish to perform the migration
  11. In this example, TF-OC-Apache-B is migrated to esxi-1.vcf.sddc.lab, the host where TF-OC-Apache-A was shown to be running earlier
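If you prefer to script the co-location rather than click through the wizard, the following is a minimal sketch using pyVmomi. The vCenter FQDN shown is an assumption for this lab; the VM and host names match the example above.

    # Sketch: compute-only vMotion with pyVmomi to co-locate two VMs.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only: self-signed certs
    si = SmartConnect(host="vcenter-mgmt.vcf.sddc.lab",  # assumed FQDN
                      user="administrator@vsphere.local",
                      pwd="VMware123!", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Return the first inventory object of this type and name, or None.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next((obj for obj in view.view if obj.name == name), None)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "TF-OC-Apache-B")
    host = find_by_name(vim.HostSystem, "esxi-1.vcf.sddc.lab")

    # Change only the compute resource; storage and networks stay in place.
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(host=host))
    print("vMotion task started:", task.info.key)
    Disconnect(si)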


 

Test packet flow

  1. On the Holo-Console, open a new tab in the Chrome browser
  2. Click the Managed bookmarks folder in the bookmark bar then select Mgmt Domain->Mgmt NSX
  3. Log into NSX Manager as the user admin with the password VMware123!VMware123!
  4. Click Plan and Troubleshoot on the top menu bar
  5. Click on Traffic Analysis
  6. Click Get Started on the Traceflow box

  7. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  8. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B


  9. Scroll down and click Trace
  10. The path the packets take is shown on the resulting topology view
  11. In this example, packets moved from TF-OC-Apache-A to TF-OC-Apache-B via the OC-Web-Segment

  12. Click the X to close the “multiple physical received observations” banner if displayed; this is expected in a nested lab environment and can be safely ignored
  13. In the Observations panel, observe the following
     
    • One packet was delivered
    • The physical hop count is zero, indicating that the packet did not leave the host
    • The packet was injected at the network adapter of the TF-OC-Apache-A virtual machine
    • It is then received by the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking it, the packet is forwarded on from the sending VDS port
    • The packet is then received by the distributed firewall at the receiving VDS port for TF-OC-Apache-B
    • With no rule blocking forwarding, the packet is forwarded to the destination
    • The last step shows the packet being delivered to the network adapter of the TF-OC-Apache-B VM
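The same observations can be read back programmatically using the API sketch shown earlier. The snippet below is a sketch that mirrors the rows of the Observations panel; the resource_type field names are assumptions based on the NSX-T Manager API and may vary by version.

    # Sketch: mirror the Observations panel for a completed traceflow.
    import requests

    NSX_MANAGER = "https://nsx-mgmt.vcf.sddc.lab"  # hypothetical lab FQDN
    session = requests.Session()
    session.auth = ("admin", "VMware123!VMware123!")
    session.verify = False  # lab only

    traceflow_id = "<traceflow-uuid>"  # returned by POST /api/v1/traceflows
    obs = session.get(
        f"{NSX_MANAGER}/api/v1/traceflows/{traceflow_id}/observations"
    ).json().get("results", [])

    for step in obs:
        # Types such as TraceflowObservationForwarded, ...Received and
        # ...Delivered correspond to rows in the Observations panel.
        print(step.get("resource_type"), step.get("component_name"))

    delivered = any(s.get("resource_type") == "TraceflowObservationDelivered"
                    for s in obs)
    print("Packet delivered:", delivered)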

Lab 2 Summary

Lab 2 shows two very specific capabilities.

  • Traceflow provides a powerful tool for visualizing and diagnosing VCF overlay networks. You will use Traceflow in several other communication scenarios in the remainder of this lab.
  • The second point may not have been immediately obvious. While sending a packet between two VMs on the same subnet and the same host, the packet crossed two firewalls. In traditional networking, forcing a packet to traverse a firewall involves separate subnets or complex traffic-steering rules. With VCF virtual networking, the distributed firewall is present on every VDS port. You will explore the distributed firewall in more detail in another module.

Lab 3: View packet flow between hosts

This lab uses the Traceflow capability to view traffic moving between virtual machines on different hosts on the same segment.

Step 1: Set up VMs for test

  • The previous lab moved TF-OC-Apache-A and TF-OC-Apache-B to the same host for testing. This lab requires the virtual machines to be split across two different hosts.
  • Initiate a vMotion to move TF-OC-Apache-B to a different host. In this example, the VMs end up on esxi-1 and esxi-3.

Step 2: View packet flow

  1. Log into NSX Manager
  2. Click Plan and Troubleshoot
  3. Click Traffic Analysis
  4. Click Get Started on the Traceflow box
  5. Select the dropdown for the Source VM Name and select TF-OC-Apache-A
  6. Select the dropdown for the Destination VM Name and select TF-OC-Apache-B
  7. Scroll down and click Trace
  8. Click the X to close the “multiple physical received observations” banner if displayed; this is expected in a nested lab environment and can be safely ignored
  9. Resize the observations window by dragging the center icon upward
  10. In the Observations panel, note the following
     
    • One packet was delivered as expected. There is no change in firewall behavior from the last example
    • Because TF-OC-Apache-B is now on a different host, the packet crosses the physical layer and increments the hop count
    • You can see the Local Endpoint IP and Remote Endpoint IP for esxi-3, and the opposite local and remote view from esxi-1. This is an example of NSX Tunnel Endpoints (TEPs) in use as opposite ends of an overlay network path between hosts.


Step 3: View Host TEP information

  1. Log into NSX Manager
  2. On the top menu bar click System
  3. In the left menu click Fabric
  4. Click Hosts
  5. Expand the management cluster
  6. Notice the TEP IP Addresses column. Click the “and 1 more” hyperlink to expand it. Each host has two TEP interfaces in the Host TEP VLAN. In the Holodeck lab configuration, Host TEP addresses are automatically allocated using DHCP on the 172.16.254.0/24 network.


  7. The NSX Manager is responsible for updating all transport nodes in the transport zone any time a VM powers on or is migrated. This provides the mapping of VM to TEP address used to send overlay traffic for a specific VM. As a tunnel endpoint, the NSX-prepared vSphere Distributed Switch is responsible for decapsulating overlay traffic destined for a VM and encapsulating traffic sent onto the overlay. This is transparent to both the VM and the underlay network.
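The same TEP data can be retrieved programmatically. The sketch below queries the NSX transport-node endpoints; the shape of the state payload is an assumption based on the NSX-T Manager API and may vary by version.

    # Sketch: list each transport node and its TEP addresses.
    import requests

    NSX_MANAGER = "https://nsx-mgmt.vcf.sddc.lab"  # hypothetical lab FQDN
    session = requests.Session()
    session.auth = ("admin", "VMware123!VMware123!")
    session.verify = False  # lab only

    nodes = session.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes").json().get("results", [])
    for node in nodes:
        state = session.get(
            f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/state").json()
        # Each host switch state carries the tunnel endpoints (TEPs).
        teps = [ep.get("ip")
                for sw in state.get("host_switch_states", [])
                for ep in sw.get("endpoints", [])]
        print(node.get("display_name"), "TEPs:", teps)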

Lab 3 Summary

Lab 3 extends the concept of overlay networks to separate hosts. NSX Manager keeps all hosts participating in a transport zone up to date on which Host TEP address to use to send packets to a specific VM. The concept of overlay networking is very powerful, as the IP network information of the communicating virtual machines is completely independent of the underlying transport network. In this example, the two ESXi hosts communicate over a 172.16.254.0/24 subnet for any overlay traffic on any segment running on those hosts. The underlying ESXi hosts could also be on different subnets due to being in different rows in a datacenter, buildings in a campus, or datacenters in a local region. Overlay networking removes the artificial limits various datacenter IP strategies place on where a given workload can run.

Lab 4: View L3 packet flow within a host

This lab uses the Traceflow feature in NSX to view traffic moving between virtual machines on the same host, on different segments. In most datacenters, this communication requires packets to travel through an external router at the top of rack, end of row, or datacenter core.

Step 1: Set up VMs for test

  • This lab requires TF-OC-Apache-A and TF-OC-MySQL to be on the same host.
  • Initiate a vMotion to move TF-OC-Apache-A to the same host as TF-OC-MySQL. This example uses esxi-3

Step 2: View Layer 3 communications in Traceflow

  1. Log into NSX Manager
  2. Click Plan & Troubleshoot
  3. Click Traffic Analysis in the left navigation panel
  4. Click Get Started on the Traceflow box
  5. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  6. Click Trace
  7. Notice the communication path traverses the OC-T1 router in the Traceflow topology diagram.


  8. In the Observations panel, review the following:
     
    • As before, with no firewall rules in place, one packet was delivered
    • Notice the packet routed between the TF-OC-Web-Segment and TF-OC-DB-Segment via the TF-OC-T1 router while never leaving the host, as the Physical Hop Count remains zero
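If you are reading observations through the API sketches above, the routed hop appears as a logical-router component in the observation list. The helper below is a sketch under that assumption; the component_type naming is not guaranteed across NSX versions.

    # Sketch: pick out logical-router hops from a traceflow observation
    # list (as fetched in the earlier API sketches). component_type naming
    # is an assumption and may vary by NSX version.
    def routed_components(observations):
        # Return the names of router components seen along the path.
        return [o.get("component_name", "?")
                for o in observations
                if "ROUTER" in str(o.get("component_type", "")).upper()]

    # Example with the shape of data the observations endpoint returns:
    sample = [{"component_type": "LOGICAL_ROUTER", "component_name": "OC-T1"}]
    print(routed_components(sample))  # ['OC-T1']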


Lab 4 Summary

Lab 4 is a very simple example of the power of distributed routing in VCF. In a traditional environment, L3 routing happens somewhere in the datacenter, away from the server. There are many different architectures, but each effectively requires packets to leave the host and be routed elsewhere in the datacenter, only to return to a VM on the same host. With VCF distributed routing, the routing happens right at the host, between the different connected segments.

Lab 5: View L3 packet flow between hosts

This lab demonstrates the use of Traceflow to view traffic moving between virtual machines on different hosts and different segments. In most datacenters, this communication requires packets to pass through an external router. This example shows the power of distributed routing.

Step 1: Set up VMs for test

  • The previous lab placed TF-OC-Apache-A and TF-OC-MySQL on the same host. This lab requires them to be on different hosts.
  • Initiate a vMotion to move TF-OC-Apache-A to a different host than TF-OC-MySQL. This example uses esxi-2

Step 2: View Layer 3 communications in NSX Traceflow

  1. Configure a Traceflow from TF-OC-Apache-A to TF-OC-MySQL
  2. Click Trace
  3. In the Observations panel, review the following
     
    • One packet was delivered
    • The packet was injected at the network adapter of the TF-OC-Apache-A virtual machine
    • It is then received by the distributed firewall at the VDS port for TF-OC-Apache-A
    • With no rule blocking it, the packet is forwarded on from the sending VDS port
    • The packet then hits the OC-T1 router and is forwarded to the OC-DB-Segment
    • Since TF-OC-Apache-A and TF-OC-MySQL are running on different ESXi hosts, the physical hop count increases
    • The packet is then received by the distributed firewall at the receiving VDS port for TF-OC-MySQL
    • With no rule blocking forwarding, the packet is forwarded to the destination
    • The last step shows the packet being delivered to the network adapter of the TF-OC-MySQL VM


Lab 5 Summary

Lab 5 demonstrates packets traveling between the VDS ports of two virtual machines running on different ESXi hosts, across two segments connected by a Tier-1 router. The important distinction is that this router functionality is distributed across all hosts, rather than residing in a physical device cabled somewhere else in the datacenter.

Lab 6: Test end-to-end communications

  1. On the Holo-Console, double click the PuTTY icon on the desktop to start the PuTTY application
  2. Enter 10.1.1.50 for the IP address to connect to. This is the IP address for the TF-OC-MySQL VM
  3. Click Open


  4. Click Accept to add the SSH host key to PuTTY’s cache and continue to connect


  5. If a login prompt does not appear, close the PuTTY window and restart this step.
  6. Login with the username ocuser and the password VMware123!


  7. Successfully connecting from the Holo-Console to the TF-OC-MySQL VM verifies the entire SDN connection. In this lab configuration, the VCF Edge Cluster connects via ECMP to the Holodeck router where the Holo-Console is connected. SSH traffic from the Holo-Console flows to the Holodeck router, over ECMP links to the Tier-0 router, to the TF-OC-T1 router, to the TF-OC-MySQL VM on TF-OC-DB-Segment, and returns.
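The same end-to-end check can be scripted. The sketch below uses the paramiko SSH library with the IP and credentials from this lab, auto-accepting the host key just as the PuTTY step does.

    # Sketch: verify end-to-end SDN connectivity over SSH with paramiko.
    import paramiko

    client = paramiko.SSHClient()
    # Lab only: auto-accept the host key, matching the PuTTY step above.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("10.1.1.50", username="ocuser", password="VMware123!")

    stdin, stdout, stderr = client.exec_command("hostname")
    print("Connected to:", stdout.read().decode().strip())
    client.close()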


Lab 6 Summary

Lab 6 demonstrates that newly added distributed routing and overlay networking is immediately reachable from outside the VCF environment through the Tier-0 router configured by the network team.
