Join ESXi 6.x to Active Directory Using domainjoin-cli

vSphere 6.x includes the Likewise utility domainjoin-cli, which allows you to join a system to an Active Directory domain. Previously, if you wanted to join an ESXi host to an Active Directory domain, you had to configure it manually using the vSphere Client/Web Client.

Below are the steps to perform. First, start the Likewise service (lwsmd) and enable it so it starts on boot:

  1. /etc/init.d/lwsmd start
  2. chkconfig lwsmd on

To join your Active Directory domain, you have to specify the following parameters:

  • Join - Specifies that the operation is a join rather than a leave
  • AD Domain Name - Active Directory Domain to join
  • Username - Active Directory username to join to the domain
  • Password - Active Directory password to join to the domain (optional as you will be prompted if it is not specified)

Here is an example of what the command looks like joining my Active Directory Domain in my lab:

/usr/lib/vmware/likewise/bin/domainjoin-cli join vmarena.com administrator
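Putting the steps above together, here is a minimal sketch of a join-and-verify script. It uses the lab values from the example (vmarena.com, administrator); the guard around the Likewise binary is an assumption added so the script degrades gracefully when run off an ESXi host:

```shell
#!/bin/sh
# Sketch: join an ESXi 6.x host to Active Directory via Likewise.
# The domain and username below are the example lab values; substitute your own.
join_ad_domain() {
    domain=$1
    aduser=$2
    likewise_bin=/usr/lib/vmware/likewise/bin

    if [ -x "$likewise_bin/domainjoin-cli" ]; then
        # Start the Likewise service and enable it at boot
        /etc/init.d/lwsmd start
        chkconfig lwsmd on

        # Join the domain (you are prompted for the password since none is given)
        "$likewise_bin/domainjoin-cli" join "$domain" "$aduser"

        # Verify the join status
        "$likewise_bin/domainjoin-cli" query
    else
        # Guard (our addition): not an ESXi host, so do nothing destructive
        echo "domainjoin-cli not found: run this on an ESXi 6.x host"
        return 1
    fi
}

join_ad_domain vmarena.com administrator
```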

Additional Information

You can verify whether your ESXi host is joined to the domain using the steps below.

  • Go to /usr/lib/vmware/likewise/bin
  • Run ./domainjoin-cli query

The image below shows an ESXi host that is not part of a domain. After joining Active Directory using the steps above, you can run the query again to verify the status.


Restore ESXi 6.0 to Previous version


Here we explain a simple method to restore an older ESXi version. This scenario covers restoring ESXi 6.0 U2 to 6.0 U1b.

Note: Back up your configuration data before making any changes.


1. Reboot the ESXi host from the Direct Console User Interface (DCUI).

   In the DCUI, press F12 to view the shutdown options for the ESXi host, then press F11 to reboot.


2. When the Hypervisor progress bar starts loading, press Shift+R. A pop-up appears with the warning:

      Current hypervisor will permanently be replaced with build: X.X.X-XXXXXX. Are you sure? [Y/n]

3. Press Y to roll back the build, then press Enter to boot (or wait a few seconds and it will boot automatically).
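Before and after the rollback, you can confirm which build is running with the `vmware -vl` command from the ESXi Shell. A small sketch; the guard is our addition so the function also runs harmlessly on a non-ESXi machine:

```shell
#!/bin/sh
# Sketch: report the running ESXi version/build before and after a rollback.
show_esxi_build() {
    if command -v vmware >/dev/null 2>&1; then
        # On ESXi this prints the version and build, e.g. "VMware ESXi 6.0.0 build-..."
        vmware -vl
    else
        # Guard (our addition): not an ESXi host
        echo "vmware command not found: run this on an ESXi host"
    fi
}

show_esxi_build
```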

 

 


What’s New in vCenter Server Appliance(vCSA) 6.0


What’s New with vCenter Server Appliance (vCSA) Installation:

Deployment of vCSA 6.0 differs from previous versions. Prior to vSphere 6.0, the vCSA was deployed from an OVF template; vCSA 6.0 is deployed from an ISO image instead. You need to download the .iso installer for the vCenter Server Appliance and the Client Integration Plug-in.
vCSA 6.0 - Guided Installer
Install the Client Integration Plug-in, then double-click the “html” file in the software installer directory, which gives access to the VMware Client Integration Plug-in, and click Install or Upgrade to start the vCenter Server Appliance deployment wizard. You will be presented with various options during deployment, including the vCenter Server deployment type.

vCenter 6.0 Deployment Methods:

Embedded Platform Services Controller:
All services bundled with the Platform Services Controller are deployed on the same virtual machine as vCenter Server. vCenter Server with an embedded Platform Services Controller is suitable for smaller environments with eight or fewer product instances.
External Platform Services Controller:
The services bundled with the Platform Services Controller and vCenter Server are deployed on different virtual machines. You must deploy the VMware Platform Services Controller first on one virtual appliance and then deploy vCenter Server on another appliance. The Platform Services Controller can be shared across many products. This configuration is suitable for larger environments with nine or more product instances.
vCSA 6.0 - Deployment Types

 vCSA 6.0 Appliance Access:

Compared with previous versions of the vCSA, appliance access has been modified a bit. The vCSA no longer has the admin URL on port 5480 to control and configure the vCenter Server appliance. There are now three methods to access the vCSA appliance:
  • vSphere Web Client UI
  • Appliance Shell
  • Direct Console User Interface (DCUI)
With the DCUI added to the vCSA, the look and feel of the appliance is very similar to that of an ESXi host – a black-box model.

vCSA 6.0 Appliance Sizing:

During vCSA 6.0 deployment, you will be asked to select the deployment size of the vCSA appliance. There are four default out-of-the-box sizes available.
vCSA 6.0 - Appliance Sizes
vCSA 6.0 - Appliance Size

Comparison between vCenter 6.0 for Windows and vCSA 6.0

The vCSA now supports most of the features supported by the Windows version of vCenter Server. Below is a quick comparison between the vCenter Windows version and the vCenter Server Appliance with the embedded database.
vSphere 6.0 - Feature comparison between vCenter for Windows and vCSA

VMware Fault Tolerance (FT) - vSphere 6.0


 

vSphere 6.0 - FT_1

Benefits of Fault Tolerance

  • Continuous availability with zero downtime and zero data loss
  • No TCP connection loss during failover
  • Fault Tolerance is completely transparent to the guest OS
  • FT does not depend on the guest OS or applications
  • Instantaneous failover from the primary VM to the secondary VM in case of an ESXi host failure

What’s New in vSphere 6.0 Fault Tolerance

  • FT supports up to 4 vCPUs and 64 GB RAM
  • Fast Checkpointing, a new scalable technology, replaces Record-Replay to keep the primary and secondary VMs in sync
  • vSphere 6.0 supports vMotion of both the primary and secondary virtual machines
  • With vSphere 6.0, you can back up FT-protected virtual machines. FT supports the vStorage APIs for Data Protection (VADP), including leading VADP solutions on the market such as Symantec, EMC, and HP
  • With vSphere 6.0, FT supports all virtual disk types: eager-zeroed thick (EZT), thick, or thin-provisioned. vSphere 5.5 and earlier supported only eager-zeroed thick disks
  • Snapshots of FT-configured virtual machines are supported in vSphere 6.0
  • The new version of FT keeps separate copies of VM files such as the .vmx and .vmdk files to protect the primary VM from both host and storage failures. You can keep the primary and secondary VM files on different datastores
vSphere 6.0 - FT_2

Difference between vSphere 5.5 and vSphere 6.0 Fault Tolerance (FT)

Difference between FT 5.5 and FT 6.0

What’s New in vCenter Server 6.0

In vSphere 6.0, you will notice considerable changes when installing vCenter Server 6.0. As with previous versions of vCenter deployment, you can install vCenter Server on a host machine running Microsoft Windows Server 2008 SP2 or later, or you can deploy the vCenter Server Appliance (vCSA). With vSphere 6.0, there are two new vCenter deployment models:
  • vCenter with an embedded Platform Services Controller
  • vCenter with an external Platform Services Controller
One considerable change you will notice with the vCenter Server installation is the deployment models and the embedded database. The embedded database has changed from SQL Server Express to vFabric Postgres (vPostgres). The vPostgres database embedded with the vCenter installer is suitable for environments with up to 20 hosts and 200 virtual machines, and vCenter 6.0 continues to support Microsoft SQL Server and Oracle as external databases. During vCenter upgrades, installations where SQL Server Express was used will be converted to vPostgres. Let's review the system requirements to install vCenter 6.0:
Supported Windows Operating Systems for vCenter 6.0 Installation:
  • Microsoft Windows Server 2008 SP2 64-bit
  • Microsoft Windows Server 2008 R2 64-bit
  • Microsoft Windows Server 2008 R2 SP1 64-bit
  • Microsoft Windows Server 2012 64-bit
  • Microsoft Windows Server 2012 R2 64-bit
 Supported Databases for vCenter 6.0 Installation:
  • Microsoft SQL Server 2008 R2 SP1
  • Microsoft SQL Server 2008 R2 SP2
  • Microsoft SQL Server 2012
  • Microsoft SQL Server 2012 SP1
  • Microsoft SQL Server 2014
  • Oracle 11g R2 11.2.0.4
  • Oracle 12c

Components of vCenter Server 6.0:

There are two Major Components of vCenter 6.0:
  • vCenter Server: the vCenter Server group of services, containing vCenter Server, the vSphere Web Client, Inventory Service, vSphere Auto Deploy, vSphere ESXi Dump Collector, and vSphere Syslog Collector
  • VMware Platform Services Controller: contains all of the services necessary for running the products, such as vCenter Single Sign-On, the License Service, and VMware Certificate Authority

vCenter 6.0 Deployment Models:

vSphere 6.0 introduces vCenter Server with two deployment models: vCenter Server with an embedded Platform Services Controller and vCenter Server with an external Platform Services Controller.

vCenter with an embedded Platform Services Controller:

All services bundled with the Platform Services Controller are deployed on the same host machine as vCenter Server. vCenter Server with an embedded Platform Services Controller is suitable for smaller environments with eight or fewer product instances.
vCenter 6.0 with an embedded Platform Services Controller

vCenter with an external Platform Services Controller:

The services bundled with the Platform Services Controller and vCenter Server are deployed on different host machines. You must deploy the VMware Platform Services Controller first on one virtual machine or host and then deploy vCenter Server on another virtual machine or host. The Platform Services Controller can be shared across many products. This configuration is suitable for larger environments with nine or more product instances.
vCenter 6.0 with an External Platform Services Controller

vSphere 6.0- Configuration maximums


Metric vSphere 6.0
ESXi Host – Logical CPUs 480
ESXi Host – NUMA nodes 16
ESXi Host – Virtual CPUs 2 048 (Don’t know if the official PDF includes a typo since this was 4 096 in vSphere 5.5)
ESXi Host – Virtual CPUs per core 32 (based on performance required)
ESXi Host – RAM 6-12 TB (12 TB depending on partner)
ESXi Host – VMs 1 024
ESXi Host – FT Protected VMs 4 VMs or 8 vCPUs (whichever limit is reached first)
vCenter Server – ESXi Hosts 1 000
vCenter Server – ESXi hosts per Datacenter 500
vCenter Server – Powered on VMs 10 000
vCenter Server – Registered VMs 15 000
vCenter Server Content Library – Total items per VC (across all libraries) of vCenter Servers 200
vCenter Server Content Library – Total number of libraries per VC 20
vCenter Server Platform Service Controller (PSC) – Maximum PSCs per vSphere Domain 8
vCenter Server Platform Service Controller (PSC) – Maximum PSCs per site, behind a load balancer 4
vCenter Server Linked mode – Number of vCenter Servers 10
vCenter Server Linked mode – Number of ESXi hosts in linked mode vCenter Servers 4 000
vCenter Server Linked mode – Powered on VMs 30 000
vCenter Server Linked mode – Registered VMs 50 000
vSphere Cluster – Max ESXi hosts 64
vSphere Cluster – Number of VMs 8 000 (6 000 when using VSAN)
vSphere Cluster – Number of Powered On VM configuration files 2 048 (A FT protected VM counts for 2 VM configuration files)
vSphere Cluster – FT protected VMs 98
vSphere Cluster – FT VM vCPUs 256
VM – Virtual CPUs 128
VM – Virtual RAM 4 TB
VM – VMDK size 62 TB
VM – vNICs 10
VM – SCSI Controllers 4
VM – SCSI targets 60
VM – SATA Adapters 4
VM – SATA devices per SATA adapter 30
VM – Number of vCPUs for FT protected VM 4
VM – RAM per FT protected VM 64 GB
VM – Disks per FT protected VM 16
The official VMware vSphere 6.0 configuration maximum PDF can be downloaded here

Content Library - vSphere 6.0


Content Library is a new feature introduced with vSphere 6.0. vCenter's Content Library provides simple and effective management of VM templates, vApps, ISO images, and scripts for vSphere admins. The Content Library stores this content centrally, and it can be synchronized across sites and vCenter Servers in your organization. In many environments, an NFS mount has been used to store all ISO images and templates, but the Content Library simplifies the management of VM templates, vApps, and ISO images, backed by either an NFS mount or datastores. Contents of the Content Library are synchronized with other vCenter Servers, ensuring that workloads are consistent across your environment.

vSphere 6.0 - Content Library_1 (vCenter Content Library)

  • Content Library provides storage and versioning of files including VM templates, ISOs and OVFs.
  • You can publish a Content Library, or subscribe to a published library to keep it synchronized with another vCenter Content Library. Publishing and subscribing work between vCenter -> vCenter and vCD -> vCenter.

vSphere 6.0 -Content Library_2

  • A Content Library is backed by vSphere datastores or an NFS file system, which it uses to store library items such as VM templates, ISOs, and OVFs.

vSphere 6.0 -Content Library_3

  • You can deploy the contents stored in the library (templates, ISOs, and appliances) to hosts and clusters, and you can also deploy into a virtual datacenter.

VMware Virtual Volumes (VVols)


VMware Virtual Volumes is one of the new features in vSphere 6.0. Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives. Virtual volumes are stored natively inside a storage system that is connected through Ethernet or SAN. They are exported as objects by a compliant storage system and are managed entirely by hardware on the storage side. Typically, a unique GUID identifies a virtual volume.

Virtual volumes are not preprovisioned; they are created automatically when you perform virtual machine management operations such as VM creation, cloning, and snapshot creation. ESXi and vCenter Server associate one or more virtual volumes with a virtual machine.

VMware Virtual Volumes

Currently, all storage is LUN-centric or volume-centric, especially when it comes to snapshots, clones, and replication. VVols makes storage VM-centric. With VVols, most data operations can be offloaded to the storage arrays. VVols goes much further and makes storage arrays aware of individual VMDK files: virtual volumes encapsulate virtual disks and other virtual machine files and store them natively on the storage system.

 Virtual Volumes (VVols) Per Virtual Machine 

For every VM, a set of virtual volumes replaces the VM directory in the file system:

  • 1 config VVol  represents a small directory that contains metadata files for a virtual machine. The files include a .vmx file, descriptor files for virtual disks, log files, and so forth.
  • 1 VVol for every virtual disk (.VMDK)
  • 1 VVol for swap, if needed
  • 1 VVol per disk snapshot and 1 per memory snapshot

Additional virtual volumes can be created for other virtual machine components and virtual disk derivatives, such as clones, snapshots, and replicas.
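As a back-of-the-envelope check, the per-VM VVol count follows directly from the list above. A small sketch in shell arithmetic; the example VM (two disks, one snapshot of each disk, one memory snapshot) is hypothetical:

```shell
#!/bin/sh
# Sketch: count the VVols a VM consumes, per the rules listed above:
# 1 config VVol + 1 per virtual disk + 1 swap (if needed)
# + 1 per disk snapshot + 1 per memory snapshot
vvol_count() {
    disks=$1
    swap=$2        # 1 if a swap VVol is needed, else 0
    disk_snaps=$3
    mem_snaps=$4
    echo $(( 1 + disks + swap + disk_snaps + mem_snaps ))
}

# Hypothetical VM: 2 disks, swap VVol, a snapshot of each disk,
# and one memory snapshot -> 1 + 2 + 1 + 2 + 1 = 7 VVols
vvol_count 2 1 2 1
```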

Components of VMware Virtual Volumes (VVols)

There are three important objects related to VMware Virtual Volumes (VVols): the storage provider, the protocol endpoint, and the storage container. Let's discuss each of these three items:

Storage Providers:

  • A VVols storage provider, also called a VASA provider, is implemented through the VMware APIs for Storage Awareness (VASA) and is used to manage all aspects of VVols storage.
  • The storage provider delivers information from the underlying storage, so that storage container capabilities can appear in vCenter Server and the vSphere Web Client.
  • Vendors are responsible for supplying storage providers that can integrate with vSphere and provide support for VVols.

VMware Virtual Volumes

Storage Container:

  • VVols uses a storage container, which is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
  • The storage container logically groups virtual volumes based on management and administrative needs. For example, the storage container can contain all virtual volumes created for a tenant in a multitenant deployment, or a department in an enterprise deployment. Each storage container serves as a virtual volume store and virtual volumes are allocated out of the storage container capacity.
  • The storage administrator on the storage side defines storage containers. The number of storage containers and their capacity depend on a vendor-specific implementation, but at least one container for each storage system is required.

Protocol EndPoint (PE):

  • Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.
  • ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.
  • Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage system requires a very small number of protocol endpoints. A single protocol endpoint can connect to hundreds or thousands of virtual volumes.

VVols Datastore:

  • A VVols datastore represents a storage container in vCenter Server and the vSphere Web Client.
  • After vCenter Server discovers storage containers exported by storage systems, you must mount them to be able to use them. You use the datastore creation wizard in the vSphere Web Client to map a storage container to a VVols datastore.

VMware Virtual Volumes

  • The VVols datastore that you create corresponds directly to the specific storage container and becomes the container’s representation in vCenter Server and the vSphere Web Client.
  • From a vSphere administrator's perspective, the VVols datastore is similar to any other datastore and is used to hold virtual machines. Like other datastores, the VVols datastore can be browsed and lists virtual volumes by virtual machine name. Like traditional datastores, the VVols datastore supports unmounting and mounting. However, operations such as upgrade and resize are not applicable to the VVols datastore. The VVols datastore capacity is configured by the storage administrator outside of vSphere.
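On the host side, these objects can be inspected from the ESXi Shell with the `esxcli storage vvol` namespace available in ESXi 6.0. A guarded sketch; the wrapper function is our illustration, and the guard lets it run harmlessly on a non-ESXi machine:

```shell
#!/bin/sh
# Sketch: list the VVols objects known to an ESXi 6.0 host.
list_vvol_objects() {
    if command -v esxcli >/dev/null 2>&1; then
        esxcli storage vvol storagecontainer list   # storage containers
        esxcli storage vvol protocolendpoint list   # protocol endpoints
        esxcli storage vvol vasaprovider list       # registered VASA providers
    else
        # Guard (our addition): not an ESXi host
        echo "esxcli not found: run this on an ESXi host"
    fi
}

list_vvol_objects
```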

VM Component Protection (VMCP) Configuration - vSphere 6.0


vSphere 6.0 introduces a powerful new feature as part of vSphere HA called VM Component Protection (VMCP). VMCP protects virtual machines from storage related events, specifically Permanent Device Loss (PDL) and All Paths Down (APD) incidents.

Permanent Device Loss (PDL)

A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. A good example of this is a failed LUN, or an administrator inadvertently removing a WWN from the zone configuration. In the PDL state, the storage array can communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. When a PDL state is detected, the host stops sending I/O requests to the array, as it considers the device permanently unavailable and there is no reason to continue issuing I/O to it.
All Paths Down (APD)
If the vSphere host cannot access the storage device, and no PDL SCSI code is returned from the storage array, then the device is considered to be in an APD state. This is different from a PDL because the host does not have enough information to determine whether the device loss is temporary or permanent; the device may return, or it may not. During an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD Timeout is reached. Once the APD Timeout is reached, the host begins to fast-fail any non-virtual-machine I/O to the storage device. This is any I/O initiated by the host, such as mounting NFS volumes, but not I/O generated within the virtual machines, which is retried indefinitely. By default, the APD Timeout value is 140 seconds and can be changed per host using the Misc.APDTimeout advanced setting.
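The current Misc.APDTimeout value can be read (and changed) per host with esxcli. A guarded sketch; the wrapper function is our illustration, not an official tool:

```shell
#!/bin/sh
# Sketch: inspect the per-host APD timeout (default 140 seconds).
show_apd_timeout() {
    if command -v esxcli >/dev/null 2>&1; then
        esxcli system settings advanced list -o /Misc/APDTimeout
        # To change it (e.g. set it back to the 140 s default):
        #   esxcli system settings advanced set -o /Misc/APDTimeout -i 140
    else
        # Guard (our addition): not an ESXi host
        echo "esxcli not found: run this on an ESXi host"
    fi
}

show_apd_timeout
```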

VM Component Protection (VMCP)

vSphere HA can now detect PDL and APD conditions and respond according to the behavior that you configure. The first step is to enable VMCP in your HA configuration. This setting simply informs the vSphere HA agent that you wish to protect your virtual machines from PDL and APD events. In the spirit of keeping things dead simple, it is as easy as clicking a checkbox:
Cluster Settings -> vSphere HA -> Host Hardware Monitoring – VM Component Protection -> Protect Against Storage Connectivity Loss.
The next step is configuring the way you want vSphere HA to respond to PDL and APD events. Each type of event can be configured independently. These settings are found in the same window where VMCP is enabled, by expanding the Failure conditions and VM response section.
 
Response for Datastore with Permanent Device Loss (PDL)
There are three actions that can be taken in response to a PDL event. These choices are pretty simple, since a PDL event is black and white.
  • Disabled – No action will be taken on the affected VMs.
  • Issue events – No action will be taken on the affected VMs, but the administrator will be notified when a PDL event has occurred.
  • Power off and restart VMs – All affected VMs will be terminated on the host, and vSphere HA will attempt to restart them on hosts that still have connectivity to the storage device.
Response for Datastore with All Paths Down (APD)
There are a few more options available for an APD response, because the device state is unknown and may only be temporarily unavailable.
  • Disabled – No action will be taken on the affected VMs.
  • Issue events – No action will be taken on the affected VMs, but the administrator will be notified when an APD event has occurred.
  • Power off and restart VMs (conservative) – vSphere HA will not attempt to restart the affected VMs unless it has determined that another host can restart them. The host experiencing the APD communicates with the HA master to determine whether there is sufficient capacity to power on the affected workloads. If the master determines there is sufficient capacity, the host experiencing the APD terminates the VMs so they can be restarted on a healthy host. If the host experiencing the APD cannot communicate with the vSphere HA master, no action is taken.
  • Power off and restart VMs (aggressive) – vSphere HA will terminate the affected VMs even if it cannot determine that another host can restart them. The host experiencing the APD attempts to communicate with the HA master to determine whether there is sufficient capacity to power on the affected workloads. If the HA master is not reachable, it is unknown whether sufficient capacity is available to restart the VMs. In this scenario, the host takes the risk and terminates the VMs so they can be restarted on the remaining healthy hosts. However, if there is not sufficient capacity, vSphere HA may not be able to recover all of the affected VMs. This is common in a network partition scenario, where a host cannot communicate with the HA master to get a definitive answer on the likelihood of a successful recovery.
Delay for VM failover for APD
Once the APD Timeout has been reached (default: 140 seconds) VMCP will wait an additional period of time before taking action against the affected VMs. By default, the waiting period is 3 minutes. In other words, VMCP will wait 5m:20s before taking action against VMs. The sum of the APD Timeout and the Delay for VM Failover is also known as the VMCP Timeout.
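The timeout arithmetic above is easy to sketch: the VMCP Timeout is simply the APD Timeout (default 140 seconds) plus the Delay for VM failover (default 3 minutes):

```shell
#!/bin/sh
# Sketch: VMCP Timeout = APD Timeout + Delay for VM failover.
vmcp_timeout() {
    apd_timeout=$1     # seconds, default 140
    failover_delay=$2  # seconds, default 180 (3 minutes)
    echo $(( apd_timeout + failover_delay ))
}

vmcp_timeout 140 180   # prints 320, i.e. 5m20s after the storage failure
```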
Response for APD recovery after APD timeout
This setting will instruct vSphere HA to take a certain action if an APD event is cleared after the APD timeout was reached but before the Delay for VM failover has been reached.
  • Disabled – No action will be taken against the affected VMs.
  • Reset VMs – The VMs will be reset on the same host (hard reset).
This option is available because some applications or guest operating systems may be in an unstable condition after losing connection with storage services for an extended period of time. This setting will instruct vSphere HA how to handle this situation.
VMCP Recovery Workflow
VMCP Recovery Timeline
T=0s: A storage failure is detected. VMCP will start the recovery workflow.
T=0s: For a PDL event, the recovery process immediately starts. VMs will be restarted on healthy hosts in the cluster.
T=0s: For an APD condition, the APD Timeout timer starts.
T=140s: The host declares an APD Timeout and will begin to fast fail non-virtual machine I/O to the unresponsive storage device. By default, this is 140 seconds.
T=320s: The VMCP Timeout.  This is 3 minutes after the APD Timeout has been reached. vSphere HA will start the APD recovery response.
T=140s-T=320s: The period after an APD Timeout, but before the VMCP Timeout. VMs may become unstable after losing access to storage for an extended period of time. If an APD is cleared in this time period, the option to reset the VMs is available.