What's New With vSphere 6.5 Update 2

VMware recently announced vSphere 6.7, which brings many enhancements and new features. VMware has now released vSphere 6.5 Update 2, which includes some of the features of vSphere 6.7.

vCenter Server 6.5 Update 2

  • A Windows vCenter Server with custom HTTP and HTTPS ports is now supported during migration to the vCenter Server Appliance.
  • You can use the TLS Configuration utility to configure SSL tunnels on port 8089.
  • You can configure SSL settings for the lightweight CIM daemon, SFCB, with the TLS Configuration utility.
  • Backup and restore of an Embedded Linked Mode with replication deployment topology via API.
  • vMotion and cold migration of virtual machines across vCenter Server versions 6.0 Update 3 and later, including VMware Cloud on AWS.
  • During the GUI or CLI deployment process of the vCenter Server Appliance, you can customize the default network ports for the HTTP Reverse Proxy service. The default ports are 80 for HTTP and 443 for HTTPS.
  • Support for Enhanced Linked Mode (ELM) with an embedded PSC.

With vCenter Server 6.5 Update 2 you can use Enhanced Linked Mode (ELM) with an embedded Platform Services Controller (PSC). This feature really helps reduce the number of virtual machines to manage and removes the need for a load balancer for high availability; the maximum number of instances supported in an ELM configuration is 10. This feature is supported only on new installations; it cannot be used for upgrades or for additions to existing deployments.

vSphere 6.5 Update 2

  • Customization of default network ports for the HTTP Reverse Proxy service via the GUI or CLI during the deployment of the vCenter Server Appliance.
  • IPv6 support for the Key Management Server (KMS) of VMware vSphere Virtual Machine Encryption (VM Encryption).
  • Additional alarms for expiration of KMS certificates, missing hosts, and virtual machine keys.
  • Management of multiple namespaces compatible with the Non-Volatile Memory Express (NVMe) 1.2 specification and enhanced diagnostic logs.
  • Adding tags to the Trusted Platform Module (TPM) hardware version 1.2 on ESXi using ESXCLI commands.
  • New native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including the combination of NVMe, SAS, and SATA drives.
  • LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.
  • Updates to time zones in the Linux guest operating system customization: vCenter Server Linux guest operating system customization supports latest time zones.
  • Disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.

Limitations

  • Cloning of virtual machines between vCenter Server 6.0 and vCenter Server 6.5 is not supported.
  • Configuration of TLS protocols on clusters with mixed ESXi 6.0 and ESXi 6.5 hosts is not supported.
  • Upgrade from vSphere 6.5 U2 to vSphere 6.7 GA is not supported yet.
  • ELM with an embedded PSC is supported only on new installations.

vSphere 6.5 Update 2 Download Links

VMware vSphere Hypervisor (ESXi) 6.5 U2 - Download | Release Notes

VMware vCenter Server 6.5 U2 - Download | Release Notes



vSphere 6.5 What's New With High Availability

Many new features have been introduced with vSphere 6.5, and in this post we are going to discuss what's new with HA in vSphere 6.5. From vSphere 6.5 you can find the configuration under vSphere Availability, as shown in the image below.

Below are the points we are going to discuss, all of which are part of vSphere Availability.

  • Admission Control
  • Restart Priority enhancements
  • HA Orchestrated Restart
  • Proactive HA

Admission Control

Admission control has the same functionality as in version 6.0, but with 6.5 it is presented in an easier way with some more options. vSphere HA uses admission control to ensure that sufficient resources are reserved for virtual machine recovery when a host fails.

 

Navigate to Cluster -> Configure -> Services -> vSphere Availability, click Edit, and select Admission Control.

Cluster resource percentage

You can see "Host failures cluster tolerates" set to "1", which we selected in the current configuration, along with "Cluster Resource Percentage". If you scale up by adding nodes or scale down by removing nodes, the percentage value is automatically adjusted based on the selected number of failures you want to tolerate. In our current configuration we have four ESXi hosts in the cluster, and we can run the VM infrastructure with one host failure, worth 25% CPU and 57% memory.
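The automatic adjustment of the reserved percentage can be sketched as below. This is a minimal illustration assuming identical hosts; a real cluster computes the reservation from actual CPU and memory totals, which is why the memory figure above differs from the CPU one.

```python
def reserved_capacity_percent(num_hosts: int, failures_to_tolerate: int) -> float:
    """Percentage of cluster capacity HA reserves for failover,
    assuming every host contributes equal resources."""
    if failures_to_tolerate >= num_hosts:
        raise ValueError("cannot tolerate the failure of every host")
    return 100.0 * failures_to_tolerate / num_hosts

# Four hosts tolerating one failure -> 25% reserved
print(reserved_capacity_percent(4, 1))  # 25.0
```

Adding a fifth host with the same tolerance would automatically drop the reservation to 20%, matching the behavior described above.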

Another enhancement is "Performance degradation VMs tolerate". In this section you can specify the performance degradation you are willing to accept after a failure. By default it is set to 100%, but you can configure, for instance, 25% or 50%, depending on your business requirements. This is configured from the same dialog; you only need to enter the percentage, and DRS must be enabled to use this feature.

Example

If you reduce the threshold to 0%, a warning is generated when cluster usage exceeds the available capacity.

If you reduce the threshold to 20%, the performance reduction that can be tolerated is calculated as performance reduction = current utilization * 20%. When the current usage minus the performance reduction exceeds the available capacity, a configuration notice is issued.
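The two examples above boil down to a simple check. This is an illustrative sketch of the documented formula, not HA's actual implementation; the utilization and capacity figures are made up.

```python
def degradation_notice(current_utilization: float,
                       available_capacity: float,
                       threshold: float) -> bool:
    """Return True when a configuration notice should be issued:
    tolerated reduction = current utilization * threshold, and the
    utilization minus that reduction still exceeds the capacity
    remaining after a host failure."""
    reduction = current_utilization * threshold
    return current_utilization - reduction > available_capacity

# 0% threshold: any usage over the remaining capacity triggers a notice
print(degradation_notice(90.0, 80.0, 0.0))   # True
# 20% threshold: 90 - 18 = 72, which fits in 80, so no notice
print(degradation_notice(90.0, 80.0, 0.20))  # False
```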

Restart Priority Enhancements

This feature allows you to control the virtual machine startup priority when a host fails and HA is triggered. The priority can be set to one of five levels: lowest, low, medium, high, or highest (earlier versions offered only low, medium, and high).

Navigate to Cluster -> Configure -> Configuration -> VM Overrides and click Add.

Select the virtual machines and click OK

Next, select the VMs by clicking the add option (green plus icon) and then specify their relative startup priority. Here I selected 2 VMs and chose the "Lowest" option; the other available options are "Low", "Medium", "High", and "Highest".

After specifying the priority, you can also configure other settings if required, for example an additional delay before the next batch starts, or what triggers the next priority "group"; this could, for instance, be the VMware Tools guest heartbeat. Other options are "Resources allocated", which is used for scheduling the batch itself, the power-on event completion, or "App heartbeat" detection. The app heartbeat option depends entirely on App HA: it has to be enabled and services have to be defined, which requires detailed configuration. The heartbeat detection value can be modified (decreased or increased) as required. If there is no guest heartbeat, HA would otherwise wait indefinitely, so there is also a timeout setting.

This option is very useful in many cases, for example if you have 2 VMs (a server and an application): the server should be powered on first and then the application.
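The batching behavior described above can be sketched as follows. The VM names and mapping are illustrative, and real HA additionally honors the per-batch delay and readiness conditions discussed earlier.

```python
# Restart priority levels, highest first, as offered in the 6.5 UI.
PRIORITY_ORDER = ["highest", "high", "medium", "low", "lowest"]

def restart_batches(vms: dict) -> list:
    """Group VMs into restart batches ordered by startup priority."""
    batches = []
    for level in PRIORITY_ORDER:
        batch = sorted(vm for vm, prio in vms.items() if prio == level)
        if batch:
            batches.append(batch)
    return batches

vms = {"db01": "high", "app01": "low", "app02": "low", "dns01": "highest"}
print(restart_batches(vms))
# [['dns01'], ['db01'], ['app01', 'app02']]
```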

HA Orchestrated Restart

HA Orchestrated Restart can be configured by creating VM groups and assigning a VM rule associated with the requirement.

Navigate to Cluster -> Configure -> Configuration -> VM/Host Groups and click Add to create a new VM group.

Provide the name for the group, click Add, select the first VM, and click OK twice to close both windows.

Here we created the Test Server group and added the server VM to this primary group.

Follow the same steps to create another group for the dependent machine; here it is Test App.

After creating both groups, you can see them under the VM/Host Groups option.

Next, select the VM groups according to the requirement; here the server group comes first and then the application group.

Next, navigate to Cluster -> Configure -> Configuration -> VM/Host Rules, click Add, and provide the policy name.

Then choose the type of policy, add the VM groups, and click OK.

Here we chose the rule type "Virtual Machines to Virtual Machines", which vSphere HA uses to restart the virtual machines in the VM group Test Server first. When the cluster dependency restart condition has been met, the virtual machines in the VM group Test App are started afterwards.

There are four rule types you can use to set the policy rule; choose the appropriate one.

Proactive HA

Proactive HA is a function of DRS, not of vSphere HA, but it is part of the "Availability" section in the configuration. Proactive HA adds an additional layer of availability to your environment. It integrates with the server vendor's monitoring software (more on this later) via a Web Client plugin, which passes detailed server health status/alerts to DRS, and DRS reacts based on the health state of the host's hardware.

A Proactive HA failure occurs when a host component fails in a way that results in a loss of redundancy rather than a catastrophic failure; the functional behavior of the VMs residing on the host is not yet affected. For example, if a power supply on the host fails but other power supplies are available, that is a Proactive HA failure. When a Proactive HA failure occurs, the VMs on the affected host are evacuated to other hosts and the host is placed in either Quarantine mode or Maintenance mode.

Navigate to Cluster -> Configure -> Configuration -> vSphere Availability and click the Edit option.

Enable Proactive HA by selecting the checkbox Turn on Proactive HA.

Providers appear when their corresponding vSphere Web Client plugin has been installed and the providers monitor every host in the cluster. To view or edit the failure conditions supported by the provider, click the edit link.

Select the Proactive HA failures and responses options as below and click OK.

Automation Level - Automated / Manual

Remediation gives you three options for how Proactive HA will handle host-degradation alerts.

  • The first is to place a host into Quarantine Mode for any alert or degradation regardless of the severity.
  • The second is to place hosts into Quarantine Mode for moderate degradation, but Maintenance Mode for severe degradation.
  • The third option is to place hosts into Maintenance Mode for any alert or degradation regardless of severity.
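Those three remediation choices amount to a small decision table, sketched below. The state and policy names are paraphrased from the UI, not exact API identifiers.

```python
def remediation_action(severity: str, policy: str) -> str:
    """Map a host health degradation to the mode Proactive HA applies.

    severity: 'moderate' or 'severe'
    policy:   'quarantine'  - option 1, always Quarantine Mode
              'mixed'       - option 2, Quarantine for moderate,
                              Maintenance for severe
              'maintenance' - option 3, always Maintenance Mode
    """
    if policy == "quarantine":
        return "QuarantineMode"
    if policy == "maintenance":
        return "MaintenanceMode"
    if policy == "mixed":
        return "MaintenanceMode" if severity == "severe" else "QuarantineMode"
    raise ValueError(f"unknown policy: {policy}")

print(remediation_action("moderate", "mixed"))  # QuarantineMode
print(remediation_action("severe", "mixed"))    # MaintenanceMode
```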

Proactive HA triggers for different types of failures, such as power supply, memory, network, storage, and even fan failures. Whether a failure results in a severe or moderate state is up to the vendor; this logic is built into the Health Provider itself, which comes with the vendor's Web Client plugins.

The health provider reads all the sensor data, analyzes the results, and reports the state of the host up to vCenter Server. These states are "Healthy", "Moderate Degradation", "Severe Degradation", and "Unknown" (green, yellow, red). Once vCenter is informed, DRS can take action based on the state of the hosts in the cluster, and it can also take the state of a host into consideration when placing new VMs. The actions DRS can take are placing the host in Maintenance Mode or Quarantine Mode.

Maintenance Mode - All VMs will be migrated off the host.

Quarantine Mode - This mode will attempt to evacuate the host's running virtual machines, provided that:

  1. There is no impact on the performance of any virtual machine in the cluster
  2. None of the DRS Affinity/Anti-Affinity rules are violated

If the above conditions are satisfied, the VMs are evacuated and DRS avoids placing virtual machines on the quarantined host.

 


vSphere 6.5 Upgrade - VMware Official e-Book

VMware has officially released an eBook on upgrading to vSphere 6.5. It covers the three phases of the upgrade to ensure a successful process overall, whether you are upgrading from vSphere 5.5 or vSphere 6.0, and explains what steps should be taken to support and meet the needs of your organization. You can download the eBook from the link below.

The eBook also contains many reference links and is very useful for administering a VMware infrastructure.

The document contains the below topics:

  • What’s New in VMware vSphere 6.5
  • How to Make the Most of This eBook
  • Phase 1: Pre-Upgrade
  • Phase 2: Upgrade
  • Phase 3: Post-Upgrade
  • Resource Repository

 

You can also go to the VMware website and download the document.


Install and Configure Update Manager on Windows vCenter 6.5

We discussed Update Manager in a previous post. VMware Update Manager provides centralized patch and version management for ESXi hosts, virtual machines, and virtual appliances. Update Manager can be used to upgrade and patch ESXi hosts, install and update third-party software on hosts, and upgrade virtual machine hardware, VMware Tools, and virtual appliances. In this post I will explain the installation and configuration of Update Manager 6.5 on a Windows vCenter Server 6.5.

You can use the different Update Manager deployment models in different cases, depending on the size of your system.

You can use one of several common host-deployment models for Update Manager server:

All-in-one model

vCenter Server and Update Manager server are installed on one host and their database instances are on the same host. This model is most reliable when your system is relatively small.

Medium deployment model

vCenter Server and Update Manager server are installed on one host and their database instances are on two separate hosts. This model is recommended for medium deployments, with more than 300 virtual machines or 30 hosts.

Large deployment model

vCenter Server and Update Manager server run on different hosts, each with its own dedicated database server. This model is recommended for large deployments, when the datacenters contain more than 1,000 virtual machines or 100 hosts.
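Using the sizing guidance above, model selection can be summarized in a few lines. The thresholds are taken from the text; they are guidance for choosing a model, not hard product limits.

```python
def deployment_model(vms: int, hosts: int) -> str:
    """Pick an Update Manager deployment model from inventory size."""
    if vms > 1000 or hosts > 100:
        return "large"       # separate hosts, dedicated DB servers
    if vms > 300 or hosts > 30:
        return "medium"      # one host, two separate DB hosts
    return "all-in-one"      # vCenter, Update Manager, and DBs together

print(deployment_model(200, 20))    # all-in-one
print(deployment_model(500, 40))    # medium
print(deployment_model(2000, 150))  # large
```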

Software Considerations

  • Update Manager 6.5 requires vCenter Server 6.5.
  • Update Manager 6.5 requires Windows Server 2008 SP2 64-bit or later.
  • An external database must be Microsoft SQL Server 2008 R2 SP2 or later.

Hardware Considerations

  • The vCenter system should have a minimum of 8 GB RAM and 2 CPUs.
  • VMware recommends at least 120 GB of free space on the drive where the patching repository will be stored.
  • Update Manager connects to vCenter Server on TCP port 80, and ESXi hosts connect to Update Manager using ports 9084 and 902. Refer to the KB article for all port requirements for vCenter Update Manager.

 

Install and Configure Update Manager

Download VMware vCenter Server and modules for Windows from VMware Downloads; this includes VMware Update Manager 6.5. Mount the ISO and run autorun.exe from the mounted drive.

Note: Use Run as administrator if any error related to access privileges appears.

Select Server under the vSphere Update Manager menu and tick the option Use Microsoft SQL Server 2012 Express embedded database to include the Microsoft SQL Server 2012 Express installation, plus Microsoft .NET Framework 4.7 in case it is not installed already, and click Install.

Note: Do not select this box if you want to use an external database.

It will start extracting the files required for installing .NET 4.7.

Tick I have read and accept the license terms and click Install to complete the installation of .NET 4.7.

Once finished, click Finish to start the SQL Server 2012 database installation.

Once the SQL database installation is completed, the vCenter installation menu will pop up.

Select the installation language and click Ok, then click Next to begin the install wizard.

Accept the license agreement and click Next.

Verify the support information; if there is no internet access from the Update Manager instance, untick Download updates from default sources immediately after installation. Otherwise, click Next.

Enter the information for the vCenter Server and click Next.

Note: For an external database, select the Data Source Name for the database and click Next.

Select the host name or IP of the vCenter Server, accept the default ports, and configure a proxy server if required, then click Next.

Review the default installation locations, change if necessary, and click Next.

A warning will pop up if the installation destination has less than 120 GB of available space; click OK to continue.

Click Install to begin the installation.

The installation progress status bar will be displayed.

Once installation is complete click Finish.

Once installation is completed, log in to vCenter and verify that Update Manager is available.

Navigate to Cluster -> Update Manager


How to Configure RDM - Physical Compatibility Disk

We discussed RDM in a detailed post, and here I am going to discuss configuring an RDM as a physical compatibility disk for a virtual machine.

There are two types of RDM disks: virtual compatibility RDMs, which act the same as a virtual disk file, including the use of snapshots, and physical compatibility RDMs, which pass SCSI commands through to the mapped device. There are two ways we can create physical compatibility RDM disks: through the CLI and through the vSphere Web Client.

Create Physical Compatibility RDM Disks from CLI

For this procedure we are using vmkfstools; we need to know the NAA ID of the physical LUN we are going to add as an RDM disk to the virtual machine.

We can use the below command to find the NAA ID after logging in to the ESXi host through an SSH session:

#esxcli storage core path list

Next, change directory (cd) to the specific folder where you want to keep the RDM mapping file and execute the below command:

vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXXXXXXXXXXX   diskname.vmdk

Verify that there are two files inside the folder, diskname-rdm.vmdk and diskname.vmdk, using the ls command.

You can also check the value createType="vmfsPassthroughRawDeviceMap" inside the descriptor file.
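If you prefer to check the descriptor from a script rather than by eye, a small parser like the one below works. This is an illustrative sketch; the sample descriptor text is abbreviated.

```python
def rdm_create_type(descriptor_text: str) -> str:
    """Extract the createType value from a VMDK descriptor file's text."""
    for line in descriptor_text.splitlines():
        line = line.strip()
        if line.lower().startswith("createtype"):
            # createType="vmfsPassthroughRawDeviceMap"
            return line.split("=", 1)[1].strip().strip('"')
    raise ValueError("no createType entry found in descriptor")

sample = 'version=1\ncreateType="vmfsPassthroughRawDeviceMap"\n'
print(rdm_create_type(sample))  # vmfsPassthroughRawDeviceMap
```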

After creating the RDM mapping file using vmkfstools:

  • Edit the VM properties -> select "Existing Hard Disk" from the New Device drop-down and click Add (from vCenter).

Browse to the RDM mapping file, select the VMDK, and click OK.

The RDM disk appears similar to the below.

Next, log in to the machine and check whether the RDM is visible inside the guest OS.

Create Physical Compatibility RDM Disks using vSphere Web Client

Log in to vCenter Server using the vSphere Web Client or the ESXi Host Web UI.

Select the virtual machine in the vCenter Server inventory -> Edit Settings, then select RDM Disk from the New Device drop-down.

Select the required LUN to create it as an RDM disk in the virtual machine and click OK.

Select or change the compatibility mode to "Physical". You can also specify the location of the RDM mapping file: either store it with the virtual machine or select a specific datastore. Click OK.

Now log in to the virtual machine and verify the 100 GB uninitialized RDM; you can create a partition on the attached RDM disk and start using it.

 


How to Configure RDM - Virtual Compatibility Disk

We have discussed RDM in a detailed post, and here I am going to discuss configuring an RDM as a virtual compatibility disk for a virtual machine.

There are two types of RDM disks. Virtual compatibility RDM disks act the same as a virtual disk file, including the use of snapshots. There are two ways we can create virtual compatibility RDM disks: through the CLI and through the vSphere Web Client.

Create Virtual Compatibility RDM Disks from CLI

For this procedure we are using vmkfstools; we need to know the NAA ID of the physical LUN we are going to add as an RDM disk to the virtual machine.

We can use the below command to find the NAA ID after logging in to the ESXi host through an SSH session:

#esxcli storage core path list

Next, change directory (cd) to the specific folder where you want to keep the RDM mapping file and execute the below command:

vmkfstools -r /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXXXXXXXXXXX   diskname.vmdk

Note: A meaningful RDM mapping file name will be helpful in many situations, such as maintenance of the LUNs or disks.

Verify that there are two files inside the folder, diskname-rdm.vmdk and diskname.vmdk, using the ls command.

You can also check the value createType="vmfsRawDeviceMap" inside the descriptor file.

After creating the RDM mapping file using vmkfstools:

  • Edit the VM properties -> Add Hard Disk -> Existing Hard Disk (from an ESXi host).
  • Edit the VM properties -> select "Existing Hard Disk" from the New Device drop-down and click Add (from vCenter).

Browse to the RDM mapping file, select the VMDK, and click Save / OK.

The RDM disk appears similar to the below; click OK for the change to take effect.

Next, log in to the machine and check whether the RDM is visible inside the guest OS.

Create Virtual Compatibility RDM Disks using vSphere Web Client

Log in to vCenter Server using the vSphere Web Client or the ESXi Host Web UI.

Select the virtual machine in the vCenter Server inventory -> Edit Settings, then select RDM Disk from the New Device drop-down.

Select the required LUN to create it as an RDM disk in the virtual machine and click OK.

Select or change the compatibility mode to "Virtual". You can also specify the location of the RDM mapping file: either store it with the virtual machine or select a specific datastore. Click OK.

Now log in to the virtual machine and verify the 100 GB uninitialized RDM; you can create a partition on the attached RDM disk and start using it.


How to Reset vCenter SSO password of VCSA 6.5

We have already discussed resetting the root password of the vCenter Appliance in a previous post, and here I am sharing the details of how to reset the vCenter SSO password.

In any virtual environment, one of the major security items is the vCenter SSO password, and we have to keep it very secret since it carries the highest privileges. The initial password is set by the deployment engineer or administrator who configures vCenter; it is always recommended to reset the password after the initial deployment and handover.

From vSphere 6.0 onwards, the vCenter architecture changed compared to previous versions. In earlier versions there was a dedicated SSO server; from 6.0 onwards the Platform Services Controller (PSC) takes over the role of the vCenter SSO server. The Platform Services Controller handles identity management for administrators and applications that interact with the vSphere platform.

VMware Platform Services Controller provides a variety of identity and data services to vCenter Server and to integrated VMware products. When multiple Platform Services Controller instances are configured in a vCenter Single Sign-On domain, they replicate identity data and provide a resilient, highly available platform.

There are two ways you can configure the PSC with vCenter, and the password reset for each is as follows:

  • vCenter Server with embedded Platform Services Controller - the SSO password reset can be done from the vCenter Server Appliance.
  • vCenter Server with external PSC - the SSO password reset can be done by logging in to the PSC.

How to Reset vCenter SSO password for the VCSA appliance

First we need the root credentials of the PSC or vCenter Server Appliance to reset the vCenter SSO password. We will use vdcadmintool to reset the password.

Follow the below procedure

1. Find the type of PSC used with the vCenter Server.

2. Log in as root using SSH: to the Platform Services Controller if it is external to vCenter, or to the vCenter Server if the Platform Services Controller is embedded.

3. Run this command to access the Bash shell:

shell

Note: Here I am using an external PSC; the same procedure can be used for resetting the SSO password of a vCenter Server with an embedded Platform Services Controller.

4. Verify your SSO domain name by entering the below command; this command is the same for both scenarios (embedded or external PSC):

# /usr/lib/vmware-vmafd/bin/vmafd-cli get-domain-name --server-name localhost

Note: The default SSO domain is vsphere.local.

5. Type the below command to list the options associated with vdcadmintool:

# /usr/lib/vmware-vmdir/bin/vdcadmintool

6. Select option 3 - Reset account password. You will be prompted for the account UPN; enter your UPN and a new SSO password will be generated.

User@vSphere.DomainName 

In my case, it is administrator@vsphere.vmarena.local.

Note: If the vSphere domain name is a custom one, provide that, and a random password will be generated for the vCenter SSO admin account.

7. Log in to the vSphere Web Client using the vCenter SSO admin account with the generated password. Select the Change Password option under the logged-in username.

8. Enter the generated password in the current password field and your new password in the new password field, then click OK. Log out and log back in to the vCenter Server using the SSO user account with the new password.



vMotion Limits on Simultaneous Migrations

With vSphere 6.x, VMware enhanced the vMotion features, which we discussed in a previous post, and they are really helpful in a virtual infrastructure. But vCenter Server places limits on the number of simultaneous migration and provisioning operations. These limits apply to operations on each host, network, and datastore.

Each operation, such as a migration with vMotion or cloning a virtual machine, is assigned a resource cost. Each host, datastore, or network resource has a maximum cost that it can support at any one time. Any new migration or provisioning operation that would cause a resource to exceed its maximum cost does not proceed immediately; it is queued until other operations complete and release resources. Only once the resource limits are satisfied does the queued operation proceed.

vMotion without shared storage (migrating virtual machines to a different host and datastore simultaneously) is a combination of vMotion and Storage vMotion. This migration inherits the network, host, and datastore costs associated with those operations, and is equivalent to a Storage vMotion with a network cost of 1.
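The queuing behavior described above can be sketched as a simple admission check. This is an illustration of the cost model, not vCenter's actual scheduler; the 1GigE network figures (maximum cost 4, vMotion cost 1) are the documented values.

```python
class Resource:
    """A host, network, or datastore with a maximum concurrent cost."""
    def __init__(self, max_cost: int):
        self.max_cost = max_cost
        self.in_use = 0

    def try_claim(self, cost: int) -> bool:
        """Admit the operation only if it fits; otherwise it must queue."""
        if self.in_use + cost > self.max_cost:
            return False
        self.in_use += cost
        return True

    def release(self, cost: int) -> None:
        """A completed operation frees its cost for queued operations."""
        self.in_use -= cost

# A 1GigE vMotion network supports a maximum cost of 4;
# each vMotion costs 1, so a fifth concurrent vMotion waits.
net = Resource(max_cost=4)
print([net.try_claim(1) for _ in range(5)])
# [True, True, True, True, False]
```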

Follow the below details to understand the limits.

Network Limits

Network limits apply only to migrations with vMotion. Network limits depend on the version of ESXi and the network type. All migrations with vMotion have a network resource cost of 1.

Network Limits for Migration with vMotion

Operation   ESXi Version         Network Type   Maximum Cost
vMotion     5.0, 5.1, 5.5, 6.0   1GigE          4
vMotion     5.0, 5.1, 5.5, 6.0   10GigE         8

Datastore Limits

Datastore limits apply to migrations with vMotion and with Storage vMotion. A migration with vMotion has a resource cost of 1 against the shared virtual machine's datastore. A migration with Storage vMotion has a resource cost of 1 against the source datastore and 1 against the destination datastore.

Datastore Limits and Resource Costs for vMotion and Storage vMotion

Operation         ESXi Version         Maximum Cost Per Datastore   Datastore Resource Cost
vMotion           5.0, 5.1, 5.5, 6.0   128                          1
Storage vMotion   5.0, 5.1, 5.5, 6.0   128                          16

Host Limits

Host limits apply to migrations with vMotion, Storage vMotion, and other provisioning operations such as cloning, deployment, and cold migration. All hosts have a maximum cost per host of 8. For example, on an ESXi 5.0 host, you can perform 2 Storage vMotion operations, or 1 Storage vMotion and 4 vMotion operations.

Host Migration Limits and Resource Costs for vMotion, Storage vMotion, and Provisioning Operations

Operation                        ESXi Version         Derived Limit Per Host   Host Resource Cost
vMotion                          5.0, 5.1, 5.5, 6.0   8                        1
Storage vMotion                  5.0, 5.1, 5.5, 6.0   2                        4
vMotion without shared storage   5.1, 5.5, 6.0        2                        4
Other provisioning operations    5.0, 5.1, 5.5, 6.0   8                        1
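The worked example from the host-limits text (maximum host cost 8, Storage vMotion costing 4, vMotion costing 1) checks out arithmetically:

```python
HOST_MAX_COST = 8
COST = {"vmotion": 1, "storage_vmotion": 4}

def fits_on_host(operations: list) -> bool:
    """True if the combined host cost stays within the per-host maximum."""
    return sum(COST[op] for op in operations) <= HOST_MAX_COST

# 2 Storage vMotions: 4 + 4 = 8 -> fits
print(fits_on_host(["storage_vmotion"] * 2))                 # True
# 1 Storage vMotion + 4 vMotions: 4 + 4 = 8 -> fits
print(fits_on_host(["storage_vmotion"] + ["vmotion"] * 4))   # True
# One more vMotion would exceed the host's maximum cost of 8
print(fits_on_host(["storage_vmotion"] + ["vmotion"] * 5))   # False
```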



VMware vMotion in vSphere 6.5

We have already discussed vMotion in a previous post, and here we will learn more about the different types of vMotion.

vMotion of a virtual machine from host to host

When you migrate virtual machines with vMotion and choose to change only the host, the entire state of the virtual machine is moved to the new host. First, the memory content and all the information that defines and identifies the virtual machine are copied to the target host over the vMotion network. If the source virtual machine continues to change its memory, copying starts again, but only for the delta changes since the last copy. After all the copying completes, the source virtual machine's process shuts down and the target virtual machine resumes its activities. This switchover completes very quickly; the time you wait for vMotion to finish depends on the network, so a minimum 1G network is recommended for vMotion.

vMotion of a virtual machine from one host and datastore to another

When you choose to change both the host and the datastore, the virtual machine state is moved to a new host as mentioned above, and the virtual disk is moved to another datastore. vMotion migration to another host and datastore is possible in vSphere environments without shared storage.

Note: The memory content includes transaction data and the bits of the operating system and applications that are in the memory. The identification information stored in the state includes all the data that maps to the virtual machine hardware elements, such as BIOS, devices, CPU, MAC addresses for the Ethernet cards, chip set states, registers etc.

If errors occur during migration, the virtual machine reverts to its original state and location.

Long-distance vMotion

Long-distance vMotion can be referred to as cross-datacenter or cross-country vMotion. VMware vSphere 6.0 adds functionality to migrate virtual machines over long distances. You can now perform reliable migrations between hosts and sites that are separated by high network round-trip latency (150 milliseconds or less between hosts). To support long-distance vMotion you need an Enterprise Plus license.

The vMotion process keeps the virtual machine's historical data, such as events and alarms, and specific configuration details like HA properties and DRS affinity/anti-affinity rules that are associated with vCenter.

Requirements:

  • An RTT (round-trip time) latency of 150 milliseconds or less between hosts.
  • Your license must cover vMotion across long distances; the cross-vCenter and long-distance vMotion features require an Enterprise Plus license.
  • A vMotion network (L2 network).
  • A virtual machine network.
  • vCenter Server 6.0 at both locations.
  • Both vCenter Server instances must be time-synchronized.
  • In the vSphere Web Client, both vCenter Server instances must be in Enhanced Linked Mode and in the same vCenter Single Sign-On domain.
  • vCenter Server instances may exist in separate vSphere Single Sign-On domains when using the vSphere APIs/SDK.
  • The vMotion network must provide at least 250 Mbps.

It is possible to move VMs in the below scenarios:

  • From VSS to VSS
  • From VSS to VDS
  • From VDS to VDS

Note:

  • The source and destination VDS must be the same version.
  • With VSS, the network labels used for the virtual machine port groups must be consistent across hosts.

vMotion Across vCenters

This operation allows a virtual machine to move across vCenter Server instances, datacenter objects, and folder objects. It changes compute, storage, network, and vCenter. There are some requirements to perform this operation: the hosts must be licensed and configured for vMotion, and shared storage is required.

Useful Scenarios 

  • Balance workloads across clusters and vCenter Server instances.

  • Elastically expand or shrink capacity across resources in different vCenter Server instances in the same site or in another geographical area.

  • Move virtual machines between environments that have different purposes, for example from development to production.

  • Move virtual machines to meet different Service Level Agreements (SLAs) regarding storage space, performance, and so on.

Properties of vMotion across vCenter Server instances

  • The virtual machine UUID is maintained across vCenter Server instances
  • Alarms, events, tasks, and history are retained
  • HA properties are retained
  • DRS settings, including affinity/anti-affinity rules, are retained
  • VM resources (shares, reservations, limits) are retained
  • The MAC address of a virtual NIC is preserved across vCenters and always remains unique within a vCenter

Note:

A VM's MAC address is not reused after the VM leaves a vCenter. During the migration of a virtual machine to another vCenter Server system, the performance data that has been collected about the virtual machine is lost.

vMotion without shared storage

Moving a virtual machine without shared storage is available from vSphere 5.1 onwards. The process is the same as traditional vMotion (host and datastore): you need to choose the target host, datastore, and a priority level.

This feature is useful for performing cross-cluster migrations, when the target cluster machines might not have access to the source cluster's storage. vMotion migrates the virtual machines to a different compute resource and storage simultaneously. Unlike Storage vMotion, which requires a single host to have access to both the source and destination datastore, you can migrate virtual machines across storage accessibility boundaries.

Useful Scenarios

  • Host maintenance.
  • Storage maintenance and reconfiguration.
  • Storage load redistribution.

Note: If the amount of data is huge, it is better to shut down the virtual machine to speed up the migration process.

 Requirements:

  • The hosts must be licensed for vMotion.
  • The host version must be ESXi 5.1 or later.
  • The destination host must have access to the destination storage.
  • If you move a virtual machine with RDMs and do not convert them to VMDKs, the destination host must have access to the RDM LUNs.
  • The vMotion network must be available on the source and destination hosts.

Limitations

This type of vMotion counts against the limits for both vMotion and Storage vMotion, so it consumes both a network resource and 16 datastore resources.


 


How to reset root password of vCenter Server 6.5 - VCSA

There might be a situation where you can't log in to a vCenter system anymore, either because you no longer know the root password or because the system is not able to log you in. This can happen if the root mountpoint of the VCSA 6.5 appliance has filled up or the root password has been forgotten.

The new VCSA (vCenter Server Appliance) 6.5 is built on top of Photon OS, which does not allow you to change the password using the same standard procedure you may know from Debian, Ubuntu, or RedHat.

In such cases, follow the below procedure to reset the password:

  • Restart your vCenter appliance and wait for the Photon OS splash screen during boot.
  • Once booted, press the e key to enter the GNU GRUB boot menu editor.
  • Append the string rw init=/bin/bash to the line that starts with linux.
  • Press the F10 function key to boot the changed entry.

How to clean up the root partition

Check the root partition usage using the df -h command. Very often the log files grow large and fill up the partition.

Check the usage of audit.log in /var/log/audit with the following command:

ls -sh /var/log/audit

To remove the files and clean up the partition, run:

rm /var/log/audit/*.log

How to reset the root password

Please follow these steps:

passwd 

Enter a strong password twice and make sure you will remember it.

umount /

reboot -f