The enterprise workload engine.
It’s that time again! Time for another feature-rich update to vSphere 8. Introducing vSphere 8 Update 3. Read our vSphere 8 Update 3 announcement.
This update builds on the previous Update 2, Update 1, and the initial release. You can read up on those releases in the articles linked below.
This is a quick overview of the main areas of lifecycle management in vSphere and their new vSphere 8 Update 3 feature highlights.
vCenter Reduced Downtime
vSphere Lifecycle Manager
vSphere Configuration Profiles
With vSphere 8.0 Update 3, we can address critical bugs in the virtual machine execution environment (VMX) without the need to reboot or evacuate the entire host. Examples of fixes include those in the virtual device space.
Virtual machines are fast-suspend-resumed (FSR) as part of the host remediation process. This is non-disruptive to most virtual machines.
A virtual machine FSR is a non-disruptive operation and is already used in virtual machine operations when adding or removing virtual hardware devices to powered-on virtual machines.
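To make that concrete, here is a minimal PowerCLI sketch of a device hot-add, the kind of operation that already relies on FSR behind the scenes. The vCenter and VM names are placeholders for illustration.

```powershell
# Connect and locate a powered-on VM (names are hypothetical).
Connect-VIServer -Server vcenter.example.com
$vm = Get-VM -Name "app-vm-01"

# Hot-add a 20 GB virtual disk; the guest keeps running while the
# VM is fast-suspend-resumed under the covers.
New-HardDisk -VM $vm -CapacityGB 20
```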
The vSphere Lifecycle Manager compliance scan will report virtual machines that are incompatible with FSR and the reason why.
Here is an example of how a Live Patch is applied.
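The following is a hedged PowerCLI sketch of the scan step, not the exact UI flow: it assumes a cluster managed with a vSphere Lifecycle Manager image and the VMware.VumAutomation module bundled with PowerCLI; the cluster name is a placeholder.

```powershell
# Scan a vLCM-managed cluster for compliance against its image.
$cluster = Get-Cluster -Name "Prod-Cluster"
Test-Compliance -Entity $cluster

# Review the results; the scan also flags VMs that are incompatible with FSR.
Get-Compliance -Entity $cluster -Detailed
```

Once the scan results look good, remediation itself can be started from the vSphere Client.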
Some virtual machines are not compatible with FSR and need to be manually remediated. Virtual machines that do not support FSR include VMs configured for vSphere Fault Tolerance, VMs configured with VM DirectPath I/O devices, and vSphere Pods (container pods).
These VMs must be manually migrated using vSphere vMotion or power cycled to pick up a new, patched VMX instance.
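A quick sketch of both manual options in PowerCLI; the VM and host names are placeholders.

```powershell
$vm = Get-VM -Name "ft-protected-vm"

# Option 1: migrate the VM to an already remediated host with vMotion.
Move-VM -VM $vm -Destination (Get-VMHost -Name "esx02.example.com")

# Option 2: power cycle the VM so it restarts on a patched VMX instance.
Stop-VM -VM $vm -Confirm:$false
Start-VM -VM $vm
```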
Live Patch is not compatible with systems configured with TPM devices, or systems configured with DPUs using vSphere Distributed Services Engine.
Partial maintenance mode is an automatic state that each host will enter when performing a Live Patch remediation task.
This special state allows existing VMs to continue to run but disallows the creation of new VMs on the host or for VMs to be migrated to or from the host.
For more information on Live Patch, see this article.
https://blogs.vmware.com/cloud-foundation/2024/07/11/vmware-vsphere-live-patch/
vSphere Lifecycle Manager images can be further customized in vSphere 8 Update 3.
In the base ESXi version, the VMware Host Client (ESXi UI) and ESXi VM Tools (VMware Tools) components can be deleted from the image.
When a Vendor Addon is present, certain components belonging to the vendor addon can also be omitted from the final image. This also includes the ability to retain an existing driver version rather than adopting a newer driver in a newer vendor addon bundle.
Customers should validate with the vendor that retaining the existing driver is supported.
This allows final images to be reduced in size by removing some non-essential components. This is useful in remote and/or edge use cases to reduce the overall image payload that has to be transmitted over the network to the ESXi hosts.
vSphere Lifecycle Manager in vSphere 8 Update 3 includes support for dual DPU configurations. As with single DPU configurations, vSphere Lifecycle Manager will remediate the ESXi version on both DPUs and ensure all versions are kept the same.
vCenter Reduced Downtime Update supports all vCenter deployment topologies.
Automatic switchover is available when performing an update to vCenter using Reduced Downtime Update. The switchover phase will begin immediately and takes approximately 2 to 5 minutes of service downtime.
You can continue to manually initiate the switchover phase for control over exactly when the switchover, and its brief downtime, will occur.
For more information on vCenter Reduced Downtime Update, see these articles.
https://core.vmware.com/blog/vcenter-reduced-downtime-update-vsphere-8-u2
https://blogs.vmware.com/cloud-foundation/2024/07/11/vcenter-reduced-downtime-update-in-vmware-vsphere-8-update-3/
vSphere 8 Update 3 adds dual DPU support to vSphere Distributed Services Engine. Dual DPUs can be used in two configurations.
High Availability DPU Configuration
The first configuration utilizes two DPUs in an Active/Standby high availability pair. This configuration provides redundancy in the event one of the DPUs fails.
In the HA configuration, both DPUs are assigned to the same NSX backed vSphere Distributed Switch.
For example, DPU-1 is attached to vmnic0 and vmnic1 of the vSphere Distributed Switch and DPU-2 is attached to vmnic2 and vmnic3 of the same vSphere Distributed Switch.
Increased Network Offload Capacity
The second configuration utilizes two DPUs as independent DPUs. Each DPU is attached to a separate vSphere Distributed Switch.
There is no failover between DPUs in this configuration. Essentially, this configuration is the same as a single DPU configuration, except you can now have two DPUs, each attached to its own vSphere Distributed Switch, increasing offload capacity per ESXi host.
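For illustration, here is a hedged PowerCLI sketch of the independent layout, with each DPU’s uplinks assigned to its own vSphere Distributed Switch. The switch, host, and vmnic names are placeholders, and in NSX-backed environments the uplink assignment may be driven from NSX instead.

```powershell
$vmhost = Get-VMHost -Name "esx01.example.com"

# DPU-1 uplinks (vmnic0/vmnic1) to the first VDS.
$vds1 = Get-VDSwitch -Name "vds-dpu1"
Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0, vmnic1 |
    ForEach-Object { Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds1 -VMHostPhysicalNic $_ -Confirm:$false }

# DPU-2 uplinks (vmnic2/vmnic3) to a second, independent VDS.
$vds2 = Get-VDSwitch -Name "vds-dpu2"
Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2, vmnic3 |
    ForEach-Object { Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds2 -VMHostPhysicalNic $_ -Confirm:$false }
```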
Take advantage of the latest in hardware acceleration from Intel for high-performance computing workloads running on vSphere.
Accelerate AI/ML workloads and other high-performance computing (HPC) application demands with Intel Xeon CPU Max Series.
Intel Xeon CPU Max Series leverages high-bandwidth memory (HBM) embedded within the CPU itself.
The Intel Sapphire Rapids generation (including non-HBM SKUs) includes four discrete on-chip accelerators.
Currently, Intel has developed and provides two native vSphere drivers, specifically for QAT (QuickAssist Technology) and DLB (Dynamic Load Balancer).
In earlier vSphere versions, all NVIDIA vGPU workloads on an ESXi host must use the same vGPU profile type and GPU memory size. That isn’t the case anymore.
Now you can assign workloads with different vGPU profile types to the same physical GPU, helping to better consume GPU resources. The memory sizes of the profiles can also differ (new in vSphere 8 Update 3).
The GPU Media Engine (ME) can also be assigned to a vGPU profile (new in vSphere 8 Update 3).
In previous releases, the GPU Media Engine was only available when consuming the entire physical GPU. The Media Engine can now be presented to smaller MIG (Multi-Instance GPU) profiles.
In current hardware, there is typically only one Media Engine. Only one vGPU profile can utilize the Media Engine on a single physical GPU; the Media Engine cannot be shared between multiple vGPU profiles or vGPU VMs using the same physical GPU.
See GPU compute and GPU memory consumption at-a-glance in the vSphere Client.
vSphere Client displays a new tile on the cluster summary tab showing an overview of GPU resources currently being used and the total physical GPU devices available to the cluster.
The cluster performance overview charts display a historical and real-time view of GPU compute and GPU memory utilization for the cluster.
Easily activate VM mobility for vGPU enabled virtual machines
vSphere DRS settings for vGPU enabled virtual machines can be easily controlled in the cluster DRS settings.
Enable and define the machine stun time limit for automatic DRS migrations of vGPU enabled VMs.
VM mobility of vGPU enabled VMs streamlines lifecycle management of GPU enabled clusters by allowing for automatic evacuation of vGPU VMs from hosts during remediation events.
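The same toggles can be scripted. Below is a hedged PowerCLI sketch, assuming the PassthroughDrsAutomation DRS advanced option used for automatic DRS migration of passthrough/vGPU VMs; the cluster name is a placeholder, and the stun time limit itself is set alongside this option in the cluster’s DRS settings.

```powershell
$cluster = Get-Cluster -Name "gpu-cluster"

# Allow DRS to automatically migrate vGPU enabled VMs.
New-AdvancedSetting -Entity $cluster -Type ClusterDRS `
    -Name "PassthroughDrsAutomation" -Value 1 -Confirm:$false
```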
vSphere Cluster Services (vCLS) is rearchitected to use fewer resources, remove the storage footprint, and eliminate issues associated with vCLS deployment.
Embedded vCLS VMs have no storage footprint and run entirely in host memory. The ESXi host spins up the Embedded vCLS VM(s) directly. There is no OVF deployment pushed from vCenter and EAM (ESX Agent Manager) is no longer involved.
The number of vCLS VMs per cluster has also been reduced from up to three VMs to two VMs when using Embedded vCLS. A single node cluster will use a single Embedded vCLS VM and clusters of two or more hosts will use two Embedded vCLS VMs.
You can easily identify the Cluster Service type from the summary tab of the vSphere cluster.
A cluster using the new Embedded vCLS reports as such. A vSphere 8 U2 cluster, for example, reports a Cluster Service type of “vCLS”.
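You can also spot the agent VMs from PowerCLI. A minimal sketch, assuming the usual “vCLS” naming pattern for the agent VMs:

```powershell
# Count vCLS agent VMs per cluster; Embedded vCLS clusters use one or two,
# classic vCLS clusters use up to three.
Get-VM -Name "vCLS*" |
    Group-Object -Property { (Get-Cluster -VM $_).Name } |
    Select-Object Name, Count
```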
For more information on Embedded vSphere Cluster Services, see these articles.
https://core.vmware.com/blog/embedded-vsphere-cluster-services-overview
https://core.vmware.com/resource/embedded-vsphere-cluster-services-deep-dive
Virtual machines configured with vSphere Fault Tolerance now support stretched/metro clusters.
Simply check the box for “Enable Metro Cluster Fault Tolerance” when activating Fault Tolerance on the VM and choose the appropriate Host Group.
The primary FT VM is placed on the host group site and the secondary VM is automatically placed on the opposite site.
If the host running the primary FT VM fails, the secondary FT VM will take over as expected. Another host, within the same site as the failed FT VM, is selected to re-establish FT while two-site placement persists.
If an entire site fails, the affected VMs keep running without FT protection until the failed site recovers.
Energy efficiency is very important for Telco and VRAN (Virtualized radio access networks) infrastructure. vSphere 8 Update 3 allows physical CPU C-States to be virtualized and managed from within workloads.
Workloads can request that physical cores enter power-saving modes, such as C-State 6, when applications and processes are idle.
The CPU can be reactivated for maximum performance when the workload requests it. Cascade Lake and newer Intel CPUs and the Guest OS intel_idle driver are required.
OVF/OVA templates deployed from a content library can have their hardware customized during the deployment wizard instead of post-deployment.
This streamlines appliance and template virtual hardware customization and ensures the desired hardware components are added and configured for the workload.
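The equivalent flow has long been scriptable; here is a sketch with PowerCLI, with the library, item, host, and datastore names as placeholders. The new wizard now offers the same customization inline.

```powershell
# Deploy a VM from a content library item.
$item = Get-ContentLibraryItem -ContentLibrary "corp-library" -Name "ubuntu-template"
$vm = New-VM -Name "web-01" -ContentLibraryItem $item `
    -VMHost (Get-VMHost -Name "esx01.example.com") -Datastore "datastore1"

# Customize virtual hardware (previously a post-deployment step).
Set-VM -VM $vm -NumCpu 4 -MemoryGB 16 -Confirm:$false
```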
First and third-party solutions disable certain vSphere operations, such as migration operations, during certain activities. For example, a VM backup solution might disable a VM's ability to migrate using vMotion while the backup task is in progress, to prevent the task from failing.
The disabled method/operation should be reactivated once the task has completed. However, under certain circumstances, the method might not be reactivated.
In vSphere 8 Update 3, administrators can easily reactivate operations from the vSphere Client.
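If you want to find affected VMs in bulk first, the disabledMethod property on the VirtualMachine object shows what is currently blocked. A minimal sketch; the MigrateVM_Task filter is just an example.

```powershell
# List VMs that currently have vMotion disabled by a solution.
Get-VM |
    Where-Object { $_.ExtensionData.DisabledMethod -contains "MigrateVM_Task" } |
    Select-Object Name,
        @{ N = "DisabledMethods"; E = { $_.ExtensionData.DisabledMethod -join ", " } }
```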
vSphere now supports PingFederate as an external identity provider.
Throughout the lifespan of vSphere 7 and 8 we have been adding more ways to introduce modern authentication to vSphere. The latest addition to our collection is support for PingFederate, joining Entra ID, Okta, and ADFS support to make vSphere very flexible in dealing with identity and access control.
Quickly configure best practices and modern TLS ciphers using a profile-based approach, via API, Configuration Profiles, or PowerCLI.
Now there’s an easy way to configure the ciphers that will pass your audit, by just enabling a TLS profile.
There’s only one profile right now, called NIST_2024, and you can set it with an API call, through Configuration Profiles, or through a PowerCLI script, which, frankly, is the easiest way.
There are examples in the Security Configuration Guide for vSphere 8 on how to do this. And you will need to restart your ESXi host to have it take effect, so you can do it right before you patch!
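Here is a sketch of what that can look like per host from PowerCLI, assuming the esxcli system tls server namespace the Guide describes; the host name is a placeholder, and a reboot is still required afterwards.

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com") -V2

# Check the current TLS profile, then switch to NIST_2024.
$esxcli.system.tls.server.get.Invoke()
$esxcli.system.tls.server.set.Invoke(@{ profile = "NIST_2024" })
```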
The Security Configuration Guide is updated for vSphere 8 Update 3 and includes guidance for vSAN 8 Update 3.
Turn on data-in-transit protection, turn on data-at-rest encryption, and you’re done. In fact, that’s a huge difference between vSAN and other storage solutions: it’s so very easy to turn advanced security on. It’s a checkbox, not a multi-year project.
The SCG has a few new things in it, from the new TLS profiles, to some guidance about controlling VM boot options.
It also has easy-to-use comparisons between the STIG guidance, PCI DSS 4.0 guidance, and the baseline, too, so you can see exactly where the baseline differs for those compliance frameworks.
And there are now scripts to audit and remediate the majority of what’s in the Guide, so customers setting up new environments can get things done quickly, and auditors can capture the output for their records, too.
For everything new in vSphere IaaS control plane, see this article: What’s New in vSphere Update 3 for vSphere IaaS control plane?
For everything new in vSphere core storage, see this article: What’s New with vSphere Core Storage?
vSphere 8 initially released in October 2022 and saw two significant updates over the course of its first year. Continuing that trend of feature-rich updates, vSphere 8 Update 3 dropped on June 25th, 2024. It’s a good time to remind you that vSphere 7 is planned to enter end-of-general-support (EOGS) in 2025. It’s time to start planning upgrades and/or migrations to vSphere 8 if you have not already done so.
Check out the vSphere 8 upgrade activity path and Best Practices for Patching VMware vSphere articles for more.
Over time, features are deprecated in vSphere as technology changes and we adapt to customers’ needs. In addition to previously announced deprecations and removals in vSphere 8, Update 3 announces the deprecation of vSphere Trust Authority and the use of Storage DRS and Storage I/O Control with respect to I/O latency. See the vCenter and ESXi 8 Update 3 release notes for the complete list of deprecations.
Deprecated features remain supported in vSphere 8 but will not be supported in a future major version.