VMware Cloud Foundation: Enabling Virtual Networking on Imported Workload Domains


2024-11-27

VMware Cloud Foundation (VCF) 5.2 introduces the VCF Import tool.  The import tool is a command-line tool that enables customers to easily and non-disruptively convert/import their existing vSphere infrastructure into a modern Cloud Foundation private cloud.


As is to be expected, this initial release of the VCF Import tool comes with some limitations, and while these limitations will be addressed over time they can present a challenge for early adopters of the tool.   For example, one notable limitation that affects both VCF 5.2 and 5.2.1 is that when converting/importing workload domains the ESXi hosts are not configured with the requisite Tunnel Endpoints (TEPs) needed to enable virtual networking within the NSX Fabric.   As such, before you can use the virtual networking capabilities provided by VCF Networking you need to complete some additional NSX configuration. 

In this blog I provide an overview of the steps to manually enable virtual networking for a converted/imported domain.  At a high level this involves three things:

  1. Creating an IP Pool
  2. Configuring an overlay Transport Zone (TZ)
  3. Updating the host's Transport Node Profile (TNP)

In this example I am using a vCenter Server instance named “vcenter-wld01.vcf.sddc.lab” that was imported as a Virtual Infrastructure (VI) domain named “wld01”.  During the import, a new NSX Manager instance was deployed and the hosts in my domain were configured for NSX with VLAN-backed port groups.  I will extend this default NSX configuration to include the configuration of TEPs on the ESXi hosts together with enabling an overlay Transport Zone (TZ).

Before updating the configuration, it’s good to verify that the domain was successfully imported and that there are no lingering alerts or configuration issues.  There are three places you should look to validate the health of an imported domain:  the SDDC Manager UI, the vSphere Client, and the NSX Manager UI.

Step 1:  Confirm Successful Domain Convert/Import

Begin by ensuring that the vCenter Server instance was successfully imported and NSX properly deployed.  The screenshot below shows the output of the VCF Import CLI tool.  Note the status of “pass” for the vCenter instance named vcenter-wld01.vcf.sddc.lab.


Next, verify the domain’s status in the SDDC Manager UI.  Here you’ll want to make sure that the domain’s status is “Active”.  You’ll also want to confirm that the NSX Manager cluster was successfully deployed and has an IP address assigned.

Procedure:

From the SDDC Manager UI

  • Navigate to Workload Domains -> wld01
  • Confirm the workload domain status shows Active and has an NSX Manager IP address assigned.


Step 2:  Verify Domain Is Healthy From The vSphere Client.  

In vCenter, verify there are no active alarms for the imported vCenter Server instance or any of the configured clusters.  Issues typically manifest as warning icons on the respective objects in the inventory view.  You can also select each object in the inventory tree and navigate to the “Monitor” tab, where you can see any configuration issues or alerts under “Issues and Alarms”.  I recommend you resolve any issues before proceeding.

Procedure:

From the vSphere Client

  • Navigate to the Inventory view
  • Expand the inventory
  • Navigate to the vSphere Cluster -> Monitor -> All Issues


Step 3: Verify Domain Is Healthy From The NSX Manager UI.  

Finally, from the NSX Manager UI, verify that all hosts have been successfully configured, as indicated by an ‘NSX Configuration’ state of ‘Success’ and a ‘Status’ of ‘Up’.

Note that in the example below there is a single vSphere cluster.   If you have multiple clusters in your domain, be sure to verify the NSX configuration of all hosts in each cluster.

Procedure:

From the NSX Manager UI

  • Navigate to System -> Fabric -> Hosts
  • Expand the cluster


With a healthy domain you are ready to proceed with updating the NSX configuration to enable virtual networking.   Again, this involves creating an IP Pool with a range of IP addresses to use for the Host TEPs, creating/updating an overlay TZ, and updating the ESXi host Transport Node Profile (TNP).

Step 4:  Create IP Pool for ESXi Host TEPs.

Before you enable virtual networking, you first need to configure Tunnel Endpoint (TEP) IP addresses on the ESXi hosts.  A separate TEP address is assigned to each physical NIC configured as an active uplink on the vSphere Distributed Switch (VDS) that will back your NSX virtual networks.  For example, if you have a VDS with two NICs, two TEPs will be assigned.  If you have a VDS with four NICs, four TEPs will be assigned.  Keep this in mind when considering the total number of addresses you will need to reserve in the IP pool.
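The sizing arithmetic above is worth working out before you carve out the pool range.  A minimal sketch (the host and uplink counts here are hypothetical, not from my lab):

```python
def teps_required(hosts, uplinks_per_host):
    # NSX assigns one TEP per active uplink NIC on each host,
    # so the pool must cover hosts x uplinks (plus headroom for growth).
    return hosts * uplinks_per_host

# Hypothetical sizing: a 4-host cluster with 2 uplinks per host
print(teps_required(4, 2))  # 8
```

A pool of fifty addresses, as used below, leaves comfortable headroom for adding hosts or uplinks later.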

Note:  NSX uses the Geneve protocol to encapsulate and decapsulate network traffic at the TEP.   As such, the TEP interfaces need to be configured with an MTU greater than 1600 (MTU of 9000 (jumbo frames) is recommended).     

To facilitate the assignment and ongoing management of the ESXi Host TEP IPs, create an “IP Pool” inside NSX.   To do this, you first need to identify a range of IP addresses to use for the Host TEPs and assign this range to the NSX IP Pool.

In this example, I am using the subnet 172.16.254.0/24 for my host TEPs.   I will create an IP Pool named “ESXi Host TEP Pool” with a range of fifty addresses (172.16.254.151-172.16.254.200).

Procedure:

From the NSX Manager UI:

  • Navigate to Networking -> IP Address Pools
  • Click ADD IP ADDRESS POOL

Enter the following:

  • Name:  ESXi Host TEP Pool
  • Subnets:
    • ADD SUBNET -> IP Ranges
      • IP Ranges:  172.16.254.151-172.16.254.200
      • CIDR:  172.16.254.0/24
      • Gateway IP:  172.16.254.1
      • Click ADD
    • Click APPLY

  • Click SAVE
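If you prefer to script this step, the same subnet definition can be pushed through the NSX Policy API (ip-pools live under `/policy/api/v1/infra/ip-pools/`).  The sketch below only builds and sanity-checks the payload; the object IDs are my own choices, not anything VCF requires:

```python
import ipaddress
import json

# Body for the pool's static subnet, mirroring the values entered in the
# UI above (a sketch; object IDs and the manager endpoint are up to you).
pool_subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.254.0/24",
    "gateway_ip": "172.16.254.1",
    "allocation_ranges": [
        {"start": "172.16.254.151", "end": "172.16.254.200"}
    ],
}

# Sanity-check that the range and gateway actually fall inside the CIDR.
net = ipaddress.ip_network(pool_subnet["cidr"])
for r in pool_subnet["allocation_ranges"]:
    assert ipaddress.ip_address(r["start"]) in net
    assert ipaddress.ip_address(r["end"]) in net
assert ipaddress.ip_address(pool_subnet["gateway_ip"]) in net

print(json.dumps(pool_subnet, indent=2))
```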


Step 5:  Update Default Overlay Transport Zone (TZ)

In NSX, a Transport Zone (TZ) controls which hosts can participate in a logical network.  NSX provides a default overlay transport zone named “nsx-overlay-transportzone”.  In this example I will use this TZ.  If you choose not to use the default overlay TZ but to instead create your own overlay TZ, I recommend you set the TZ you create as the default overlay TZ.

Start by adding two tags to the TZ.  These tags are used internally by the SDDC Manager to identify the TZ as being used by VCF.  Note that there are two parts to the tag, the tag name and the tag scope.  Also, tags are case sensitive so you will need to enter them exactly as shown.

  Tag Name    Scope
  VCF         Created by
  vcf         vcf-orchestration

VCF Transport Zone Tags

Procedure:

From the NSX Manager UI

  • Navigate:  System -> Fabric -> Transport Zones
  • Edit “nsx-overlay-transportzone”
  • Add two Tags:
    • Tag: VCF  Scope: Created by
    • Tag: vcf  Scope: vcf-orchestration
  • Click SAVE
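Because the tags are case sensitive, it can help to treat them as data rather than retype them.  In the NSX API, tags are `{"tag": ..., "scope": ...}` objects on the transport zone; a sketch of the body you might merge onto the TZ (the structure below is the standard NSX tag shape, applied to the values from the table above):

```python
import json

# Tags to add to "nsx-overlay-transportzone", exactly as in the table.
# SDDC Manager uses these to recognize the TZ as VCF-managed.
tz_tags = {
    "tags": [
        {"tag": "VCF", "scope": "Created by"},
        {"tag": "vcf", "scope": "vcf-orchestration"},
    ]
}
print(json.dumps(tz_tags, indent=2))
```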


Step 6:  Edit Transport Node Profile to Include Overlay TZ

Next, update the transport node profile to include the Overlay TZ.  When you add the Overlay TZ to the transport node profile, NSX will automatically configure the TEPs on the ESXi hosts.   

Procedure:

From the NSX Manager UI

  • Navigate:  System -> Fabric -> Hosts
  • Click the “Transport Node Profile” tab
  • Edit the Transport Node Profile


  • Click the number under “Host Switch”.  In this example I have one host switch. 


  • Edit the host switch where you want to enable the NSX overlay traffic.

In this example, there is only one host switch.  In the event you have multiple switches, make sure you edit the host switch with the uplinks that you want to configure as TEPs.


  • Click under ‘Transport Zone’ to add your overlay TZ (i.e. “nsx-overlay-transportzone”)
  • Under ‘IPv4 Assignment’ select ‘Use IP Pool’
  • Under IPv4 Pool select ‘ESXi Host TEP Pool’
  • Click ADD


When you save the Transport Node Profile, NSX automatically invokes a workflow to configure the TEPs on the ESXi hosts.
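For reference, the host-switch portion of the profile after this edit looks roughly like the sketch below: the overlay TZ sits alongside the existing VLAN TZ, and TEP IPv4 addresses are drawn from the pool created earlier.  The field names follow the NSX API (`StaticIpPoolSpec` is the pool-backed assignment type); the IDs are shown by name for readability, whereas the API uses paths/UUIDs:

```python
import json

# Sketch of one host switch entry in the Transport Node Profile after
# adding the overlay TZ and the TEP IP pool (illustrative IDs).
host_switch = {
    "transport_zone_endpoints": [
        {"transport_zone_id": "nsx-vlan-transportzone"},     # existing
        {"transport_zone_id": "nsx-overlay-transportzone"},  # added here
    ],
    "ip_assignment_spec": {
        "resource_type": "StaticIpPoolSpec",
        "ip_pool_id": "ESXi Host TEP Pool",
    },
}
print(json.dumps(host_switch, indent=2))
```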

To monitor the host TEP configuration:

From the NSX Manager UI

  • Navigate:  System -> Fabric -> Hosts
  • Click the Clusters Tab
  • Expand the cluster


Wait for the host TEP configuration to complete and the status for all hosts to return to “Success”.  This can take a few minutes to complete.

Step 7:  Verify TEPs on Each ESXi Host

Next, verify the host TEPs from the vSphere Client.  Each ESXi host in the cluster should now show three additional VMkernel adapters – vmk10, vmk11, and vmk50.  (Note that if you have more than two NICs on your host you will see additional VMkernel adapters.)  For each host, verify that the IP addresses assigned to the vmk10 and vmk11 VMkernel adapters are within the IP Pool range assigned in NSX.  Note that vmk50 is used internally and is assigned an internal IP address of 169.254.1.

Procedure:

From the vSphere Client

  • Navigate:  Hosts and Clusters -> Expand vCenter inventory

For each host in the cluster:

  • Select the ESXi host
  • Navigate:  Configure – Networking -> VMkernel adapters
  • Verify you see adapters:  vmk10, vmk11, and vmk50
  • Verify the IPs for vmk10 and vmk11 are valid TEP IPs assigned from the IP Pool in NSX.
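If you are checking many hosts, the "is this vmk IP inside the pool range" test is easy to script.  A small sketch (the sample addresses are hypothetical):

```python
import ipaddress

# Allocation range of the "ESXi Host TEP Pool" created in Step 4.
POOL_START = ipaddress.ip_address("172.16.254.151")
POOL_END = ipaddress.ip_address("172.16.254.200")

def is_valid_tep(ip):
    # True when the address falls inside the pool's allocation range.
    addr = ipaddress.ip_address(ip)
    return POOL_START <= addr <= POOL_END

print(is_valid_tep("172.16.254.151"))  # True: inside the pool range
print(is_valid_tep("169.254.1.1"))     # False: a link-local internal address
```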


Step 8:  Verify Host Tunnels in NSX

At this point I have updated the NSX configuration to enable virtual networking by creating an IP pool, updating the default overlay Transport Zone (TZ), and configuring host Tunnel Endpoints (TEPs) on my ESXi hosts.  The final step is to create a virtual network and attach some VMs to confirm everything is working correctly.

Begin by creating a virtual network (aka virtual segment) in NSX.

From the NSX Manager UI

  • Navigate:  Networking -> Segments
  • Click ADD SEGMENT
    • Name:  vnet01
    • Transport Zone: nsx-overlay-transportzone
    • Subnets:  10.80.0.1/24
  • Click SAVE
  • Click NO (when prompted to continue configuring).
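Segments can also be created declaratively; in the Policy API they live under `/policy/api/v1/infra/segments/`.  A sketch of the body matching the UI values above (the transport-zone path shown uses the default site/enforcement-point and the TZ name rather than its UUID, both assumptions for readability):

```python
import json

# Sketch of a Policy API segment body for "vnet01" on the overlay TZ.
# In NSX, the segment's "Subnets" value is the gateway address in CIDR form.
segment = {
    "display_name": "vnet01",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/nsx-overlay-transportzone"
    ),
    "subnets": [{"gateway_address": "10.80.0.1/24"}],
}
print(json.dumps(segment, indent=2))
```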


Note that when the virtual network is created in NSX, it is automatically exposed through the VDS on the vSphere cluster.   

Next, return to the vSphere Client and create two test VMs.  Note that an operating system for the test VMs is not needed to confirm the configuration of the TEP tunnels.

From the vSphere Client

  • Navigate: Hosts and Clusters, expand the vCenter inventory
  • Click Actions -> New Virtual Machine


  • Select “Create a new virtual machine”
  • Click NEXT
  • Enter Virtual machine name:  “testvm01”
  • Click NEXT
  • At the Select a compute resource screen, verify the compatibility checks succeed
  • Click NEXT
  • At the Select storage screen, select the datastore and verify the compatibility checks succeed
  • Click NEXT
  • At the Select compatibility screen, select a valid version (i.e. ESXi 8.0 U2 and later)
  • Click NEXT
  • Set the guest OS to your preference
  • Click NEXT
  • At the Customize hardware screen, set the network to vnet01
  • Click NEXT


Create a clone of the ‘testvm01’ VM named ‘testvm02’.  Verify ‘testvm02’ is also attached to the virtual network ‘vnet01’.  Power both VMs on and make sure they are running on separate hosts.  Use vMotion to migrate one of them if necessary.


Return to the NSX Manager and confirm that the tunnels are successfully created.

From the NSX Manager UI

  • Navigate: System -> Fabric -> Hosts
  • Click to expand the cluster and view all hosts.

Verify that you see tunnels created and that the tunnel status is ‘Up’ on the ESXi hosts where the ‘testvm01’ and ‘testvm02’ VMs are running.  You may need to click refresh a few times, as it can take a few minutes (e.g. 2 to 3) for the NSX UI to update the tunnel status.


Summary

In this post I showed how to update the NSX configuration of a converted or imported workload domain to enable virtual networking.   The procedure involves creating an IP Pool for the Host TEP IP addresses, manually configuring an overlay Transport Zone (TZ), and updating the host transport node profile.   Once completed you are able to enable virtual networking and logical switching in your converted/imported workload domains.