2024-11-27

VMware Cloud Foundation (VCF) 5.2 introduces the VCF Import tool. The Import tool is a command-line tool that enables customers to easily and non-disruptively convert/import their existing vSphere infrastructure into a modern Cloud Foundation private cloud.
As is to be expected, this initial release of the VCF Import tool comes with some limitations, and while these limitations will be addressed over time they can present a challenge for early adopters of the tool. For example, one notable limitation that affects both VCF 5.2 and 5.2.1 is that when converting/importing workload domains the ESXi hosts are not configured with the requisite Tunnel Endpoints (TEPs) needed to enable virtual networking within the NSX Fabric. As such, before you can use the virtual networking capabilities provided by VCF Networking you need to complete some additional NSX configuration.
In this blog, I provide an overview of the steps to manually enable virtual networking for a converted/imported domain. At a high level, this involves three things:

1. Creating an IP Pool with a range of IP addresses to use for the host TEPs
2. Creating or updating an overlay Transport Zone (TZ)
3. Updating the ESXi host Transport Node Profile (TNP)
In this example, I am using a vCenter Server instance named "vcenter-wld01.vcf.sddc.lab" that was imported as a Virtual Infrastructure (VI) domain named "wld01". During the import, a new NSX Manager instance was deployed and the hosts in my domain were configured for NSX with VLAN-backed port groups. I will extend this default NSX configuration to include the configuration of TEPs on the ESXi hosts together with enabling an overlay Transport Zone (TZ).
Before updating the configuration, it’s good to verify that the domain was successfully imported and that there are no lingering alerts or configuration issues. There are three places you should look to validate the health of an imported domain: the SDDC Manager UI, the vSphere Client, and the NSX Manager UI.
Begin by ensuring that the vCenter Server instance was successfully imported and NSX was properly deployed. The screenshot below shows the output of the VCF Import CLI tool. Note the status of "pass" for the vCenter instance named vcenter-wld01.vcf.sddc.lab.
Next, verify the domain's status in the SDDC Manager UI. Here you'll want to make sure that the domain's status is "Active". You'll also want to confirm that the NSX Manager cluster was successfully deployed and has an IP address assigned.
Procedure:
From the SDDC Manager UI
In vCenter, verify there are no active alarms for the imported vCenter Server instance or any of the configured clusters. Issues typically manifest as warning icons on the respective objects in the inventory view. You can also select each object in the inventory tree and navigate to the "Monitor" tab, where you can see any configuration issues or alerts under "Issues and Alarms". I recommend you resolve any issues before proceeding.
Procedure:
From the vSphere Client
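For larger inventories, the same alarm check can be scripted. The sketch below uses pyVmomi to walk the vCenter inventory and print any object with a triggered alarm; the vCenter hostname and credentials are placeholders from this example and must be adjusted for your environment.

```python
"""Sketch: list triggered alarms in the imported vCenter Server using pyVmomi."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter-wld01.vcf.sddc.lab",
                  user="administrator@vsphere.local", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Walk every managed entity and print any triggered alarms.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.ManagedEntity], True)
for entity in view.view:
    for alarm_state in (entity.triggeredAlarmState or []):
        print(f"{entity.name}: {alarm_state.alarm.info.name} ({alarm_state.overallStatus})")
view.Destroy()
Disconnect(si)
```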
Finally, from the NSX Manager UI, verify that all hosts have been successfully configured, as indicated by an 'NSX Configuration' state of 'Success' and a 'Status' of 'Up'.
Note that in the example below there is a single vSphere cluster. If you have multiple clusters in your domain, be sure to verify the NSX configuration of all hosts in each cluster.
Procedure:
From the NSX Manager UI
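If you prefer to script this check rather than click through the UI, the following is a minimal sketch against the NSX Manager REST API. It assumes the /api/v1/transport-nodes collection and its per-node /state sub-resource; the NSX Manager FQDN and credentials are placeholders.

```python
"""Sketch: report the NSX configuration state of each host transport node."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")             # replace with real credentials

session = requests.Session()
session.auth = AUTH
session.verify = False                   # lab only; use CA-signed certificates in production

# List all transport nodes and print the configuration state of each one.
nodes = session.get(f"{NSX}/api/v1/transport-nodes").json().get("results", [])
for node in nodes:
    state = session.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state").json()
    print(f"{node.get('display_name'):<30} {state.get('state')}")
```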
With a healthy domain you are ready to proceed with updating the NSX configuration to enable virtual networking. Again, this involves creating an IP Pool with a range of IP addresses to use for the host TEPs, creating/updating an overlay TZ, and updating the ESXi host Transport Node Profile (TNP).
Before you can enable virtual networking, you first need to configure Tunnel Endpoint (TEP) IP addresses on the ESXi hosts. A separate TEP address is assigned to each physical NIC configured as an active uplink on the vSphere Distributed Switch (VDS) that will back your NSX virtual networks. For example, if you have a VDS with two NICs, two TEPs will be assigned. If you have a VDS with four NICs, four TEPs will be assigned. Keep this in mind when considering the total number of addresses you will need to reserve in the IP Pool.
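As a quick sanity check on pool sizing, a back-of-the-envelope calculation looks like the sketch below; the host and uplink counts are hypothetical and should be replaced with your own.

```python
# Sketch: rough TEP pool sizing, assuming one TEP per active uplink per host.
hosts_per_domain = 8        # hypothetical total host count across all clusters
uplinks_per_host = 2        # active uplinks on the VDS backing NSX
growth_headroom = 1.5       # leave room for future expansion

teps_required = hosts_per_domain * uplinks_per_host
pool_size = int(teps_required * growth_headroom)
print(f"TEPs required now: {teps_required}, suggested pool size: {pool_size}")
```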
Note: NSX uses the Geneve protocol to encapsulate and decapsulate network traffic at the TEP. As such, the TEP interfaces need to be configured with an MTU greater than 1600 (MTU of 9000 (jumbo frames) is recommended).
To facilitate the assignment and ongoing management of the ESXi Host TEP IPs, create an “IP Pool” inside NSX. To do this, you first need to identify a range of IP addresses to use for the Host TEPs and assign this range to the NSX IP Pool.
In this example, I am using the subnet 172.16.254.0/24 for my host TEPs. I will create an IP Pool named “ESXi Host TEP Pool” with a range of fifty addresses (172.16.254.151-172.16.254.200).
Procedure:
From the NSX Manager UI:
Enter the following: a pool name (e.g. "ESXi Host TEP Pool"), the IP range (172.16.254.151-172.16.254.200), and the CIDR (172.16.254.0/24).
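If you would rather automate this step, here is a minimal sketch that creates the same pool through the NSX Policy API. The pool and subnet IDs are names I chose for illustration, and the NSX Manager FQDN and credentials are placeholders.

```python
"""Sketch: create the host TEP IP Pool via the NSX Policy API."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

# 1. Create (or update) the pool object itself.
s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/esxi-host-tep-pool",
        json={"display_name": "ESXi Host TEP Pool"}).raise_for_status()

# 2. Add a static subnet with the allocation range used in this example.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.254.0/24",
    "allocation_ranges": [{"start": "172.16.254.151", "end": "172.16.254.200"}],
}
s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/esxi-host-tep-pool/ip-subnets/range-1",
        json=subnet).raise_for_status()
```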
In NSX, a Transport Zone (TZ) controls which hosts can participate in a logical network. NSX provides a default overlay transport zone named "nsx-overlay-transportzone". In this example, I will use this TZ. If you choose not to use the default overlay TZ and instead create your own, I recommend you set the TZ you create as the default overlay TZ.
Start by adding two tags to the TZ. These tags are used internally by the SDDC Manager to identify the TZ as being used by VCF. Note that there are two parts to the tag, the tag name and the tag scope. Also, tags are case sensitive so you will need to enter them exactly as shown.
| Tag Name | Scope |
|---|---|
| VCF | Created by |
| vcf | vcf-orchestration |
VCF Transport Zone Tags
Procedure:
From the NSX Manager UI
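The same tags can also be applied through the NSX Policy API. The sketch below assumes the default policy path for transport zones and looks the TZ up by its display name; the NSX Manager FQDN and credentials are placeholders.

```python
"""Sketch: tag the default overlay Transport Zone via the NSX Policy API."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")
TZ_BASE = f"{NSX}/policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones"

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

# Find the default overlay TZ by display name.
tzs = s.get(TZ_BASE).json()["results"]
tz = next(t for t in tzs if t["display_name"] == "nsx-overlay-transportzone")

# Tags are case sensitive; they must match the table above exactly.
tags = [
    {"tag": "VCF", "scope": "Created by"},
    {"tag": "vcf", "scope": "vcf-orchestration"},
]
s.patch(f"{TZ_BASE}/{tz['id']}", json={"tags": tags}).raise_for_status()
```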
Next, update the transport node profile to include the Overlay TZ. When you add the Overlay TZ to the transport node profile, NSX will automatically configure the TEPs on the ESXi hosts.
Procedure:
From the NSX Manager UI
In this example, there is only one host switch. In the event you have multiple switches, make sure you edit the host switch with the uplinks that you want to configure as TEPs.
When you save the transport node profile, NSX automatically invokes a workflow to configure the TEPs on the ESXi hosts.
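For reference, the change can also be made programmatically against the transport node profile API. The sketch below is only an outline under several assumptions: it uses the legacy /api/v1/transport-node-profiles manager endpoint (newer releases also expose host transport node profiles under the policy API), it assumes a single profile and a single host switch, and it does not set the TEP IP Pool assignment, which you would still need to supply (the "IPv4 Pool" field in the UI).

```python
"""Sketch: add the overlay TZ to the host Transport Node Profile via the NSX API."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

# Look up the overlay TZ and the transport node profile applied to the cluster.
tz = next(t for t in s.get(f"{NSX}/api/v1/transport-zones").json()["results"]
          if t["display_name"] == "nsx-overlay-transportzone")
tnp = s.get(f"{NSX}/api/v1/transport-node-profiles").json()["results"][0]

# Add the overlay TZ to the (single) host switch; adjust if you have several switches.
host_switch = tnp["host_switch_spec"]["host_switches"][0]
host_switch.setdefault("transport_zone_endpoints", []).append(
    {"transport_zone_id": tz["id"]})
# Note: the host switch also needs a TEP IP assignment pointing at the IP Pool
# created earlier; that part is omitted here and must be added before saving.

s.put(f"{NSX}/api/v1/transport-node-profiles/{tnp['id']}", json=tnp).raise_for_status()
```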
To monitor the host TEP configuration:
From the NSX Manager UI
Wait for the host TEP configuration to complete and the status for all hosts to return to “Success”. This can take a few minutes to complete.
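If you want to script the wait, a simple polling loop against the transport node state endpoint (same assumptions and placeholders as the earlier sketch) might look like this:

```python
"""Sketch: poll transport node state until every host reports success."""
import time
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

while True:
    nodes = s.get(f"{NSX}/api/v1/transport-nodes").json()["results"]
    states = [s.get(f"{NSX}/api/v1/transport-nodes/{n['id']}/state").json().get("state")
              for n in nodes]
    print("Current states:", states)
    if all(state == "success" for state in states):
        break
    time.sleep(30)   # check again in 30 seconds
```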
Next, verify the host TEPs from the vSphere Client. Each ESXi host in the cluster should now show three additional VMkernel adapters: vmk10, vmk11, and vmk50. (Note that if you have more than two NICs on your hosts, you will see additional VMkernel adapters.) For each host, verify that the IP addresses assigned to the vmk10 and vmk11 VMkernel adapters are within the IP Pool range assigned in NSX. Note that vmk50 is used internally and is assigned an internal link-local (169.254.1.x) address.
Procedure:
From the vSphere Client
For each host in the cluster:
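Alternatively, the VMkernel adapters on every host can be listed in one pass with pyVmomi. The sketch below prints each adapter and its IP so you can confirm vmk10/vmk11 fall inside the pool range; the vCenter hostname and credentials are placeholders.

```python
"""Sketch: list VMkernel adapters and their IPs for every host in the inventory."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter-wld01.vcf.sddc.lab",
                  user="administrator@vsphere.local", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Print every VMkernel adapter (device name and IP) on every host.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        print(f"{host.name:<35} {vnic.device:<7} {vnic.spec.ip.ipAddress}")
view.Destroy()
Disconnect(si)
```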
At this point I have updated the NSX configuration to enable virtual networking by creating an IP Pool, updating the default overlay Transport Zone (TZ), and configuring Tunnel Endpoints (TEPs) on my ESXi hosts. The final step is to create a virtual network and attach some VMs to confirm everything is working correctly.
Begin by creating a virtual network (aka virtual segment) in NSX.
From the NSX Manager UI
Note that when the virtual network is created in NSX, it is automatically exposed through the VDS on the vSphere cluster.
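For completeness, here is a sketch that creates the same segment through the NSX Policy API. The segment ID "vnet01" matches this example, the transport zone path is looked up from the default overlay TZ, and the manager FQDN and credentials are placeholders.

```python
"""Sketch: create the 'vnet01' overlay segment via the NSX Policy API."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

# Look up the policy path of the default overlay transport zone.
TZ_BASE = f"{NSX}/policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones"
tz = next(t for t in s.get(TZ_BASE).json()["results"]
          if t["display_name"] == "nsx-overlay-transportzone")

segment = {
    "display_name": "vnet01",
    "transport_zone_path": tz["path"],
    # No gateway or subnet is required just to validate TEP tunnels between hosts.
}
s.patch(f"{NSX}/policy/api/v1/infra/segments/vnet01", json=segment).raise_for_status()
```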
Next, return to the vSphere Client and create two test VMs. Note that an operating system is not needed on the test VMs to confirm the configuration of the TEP tunnels.
From the vSphere Client
Create a clone of the 'testvm01' VM named 'testvm02'. Verify that 'testvm02' is also attached to the virtual network 'vnet01'. Power both VMs on and make sure they are running on separate hosts; use vMotion to migrate one of them if necessary.
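If you prefer to script the clone, power-on, and placement check, the pyVmomi sketch below outlines one way to do it. The VM names match this example, the connection details are placeholders, and the logic assumes both VMs live in the same cluster.

```python
"""Sketch: clone testvm01 to testvm02, power both on, and separate them onto different hosts."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter-wld01.vcf.sddc.lab",
                  user="administrator@vsphere.local", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm1 = find_by_name(vim.VirtualMachine, "testvm01")

# Clone testvm01 -> testvm02; no guest OS or customization is needed for the tunnel test.
clone_spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False)
WaitForTask(vm1.CloneVM_Task(folder=vm1.parent, name="testvm02", spec=clone_spec))
vm2 = find_by_name(vim.VirtualMachine, "testvm02")

# Power both VMs on.
for vm in (vm1, vm2):
    if vm.runtime.powerState != "poweredOn":
        WaitForTask(vm.PowerOnVM_Task())

# If the VMs landed on the same host, vMotion testvm02 to a different host in the cluster.
if vm1.runtime.host == vm2.runtime.host:
    target = next(h for h in vm1.runtime.host.parent.host if h != vm1.runtime.host)
    WaitForTask(vm2.MigrateVM_Task(
        host=target, priority=vim.VirtualMachine.MovePriority.defaultPriority))

Disconnect(si)
```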
Return to the NSX Manager and confirm that the tunnels are successfully created.
From the NSX Manager UI
Verify that you see tunnels created and that the tunnel status is 'Up' on the ESXi hosts where the 'testvm01' and 'testvm02' VMs are running. You may need to click refresh a few times, as it can take two to three minutes for the NSX UI to update the tunnel status.
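The tunnel status can also be pulled from the NSX Manager API. The sketch below assumes the per-transport-node /tunnels endpoint; exact field names can vary slightly between releases, and the manager FQDN and credentials are placeholders.

```python
"""Sketch: print TEP tunnel status for every host transport node."""
import requests

NSX = "https://nsx-wld01.vcf.sddc.lab"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "changeme")

s = requests.Session()
s.auth = AUTH
s.verify = False                          # lab only

for node in s.get(f"{NSX}/api/v1/transport-nodes").json()["results"]:
    tunnels = s.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/tunnels").json().get("tunnels", [])
    for t in tunnels:
        print(f"{node['display_name']:<30} {t.get('local_ip')} -> {t.get('remote_ip')}: {t.get('status')}")
```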
In this post I showed how to update the NSX configuration of a converted or imported workload domain to enable virtual networking. The procedure involves creating an IP Pool for the host TEP IP addresses, manually configuring an overlay Transport Zone (TZ), and updating the host transport node profile. Once completed, you can use virtual networking and logical switching in your converted/imported workload domains.