Test Your vSphere 6.5 Knowledge - VMware vSphere 6.5 Quiz

Test Your Knowledge of vSphere 6.5

Challenge yourself with this nine-question interactive quiz that explores the latest updates to vSphere. After you answer each question, we’ll share additional insights, which means you’ll learn more about the new features and capabilities that you only get with vSphere 6.5.


Start the vSphere 6.5 quiz now!



vSphere DRS (Distributed Resource Scheduler)

Distributed Resource Scheduler (DRS) is a resource management solution for vSphere clusters that provides optimized performance for application workloads. DRS helps you run workloads efficiently on the resources available in a vSphere environment.

Download the official DRS white paper: vSphere Resources and Availability

DRS determines the current resource demand of workloads and the current resource availability of the ESXi hosts that are grouped into a single vSphere cluster. DRS provides recommendations throughout the life cycle of a workload, from the moment it is powered on to the moment it is powered off.

DRS operations consist of generating initial placement and load balancing recommendations based on resource demand, business policies, and energy-saving settings. It is able to execute the initial placement and load balancing operations automatically without any human interaction, allowing IT organizations to focus their attention elsewhere.

DRS provides several additional benefits to IT operations:

  • Day-to-day IT operations are simplified as staff members are less affected by localized events and dynamic changes in their environment. Loads on individual virtual machines invariably change, but automatic resource optimization and relocation of virtual machines reduce the need for administrators to respond, allowing them to focus on the broader, higher-level tasks of managing their infrastructure.
  • DRS simplifies the job of handling new applications and adding new virtual machines. Starting up new virtual machines to run new applications becomes more a task of high-level resource planning and determining overall resource requirements than of reconfiguring and adjusting virtual machine settings on individual ESXi hosts.
  • DRS simplifies the task of decommissioning or removing hardware when it is no longer needed, or replacing older host machines with newer, larger-capacity hardware.
  • DRS simplifies the grouping of virtual machines to separate workloads for availability requirements, or uniting virtual machines on the same ESXi host for increased performance or to reduce licensing costs while maintaining mobility.

DRS generates recommendations for initial placement of virtual machines on suitable ESXi hosts during power-on operations and generates load balancing recommendations for active workloads between ESXi hosts within the vSphere cluster. DRS leverages VMware vMotion technology for live migration of virtual machines. DRS responds to cluster and workload scaling operations and automatically generates resource relocation and optimization decisions as ESXi hosts or virtual machines are added to or removed from the cluster. To enable the use of DRS migration recommendations, the ESXi hosts in the vSphere cluster must be part of a vMotion network. If the ESXi hosts are not part of a vMotion network, DRS can still make initial placement recommendations.

Clusters can consist of ESXi hosts with heterogeneous or homogeneous hardware configurations. ESXi hosts in a cluster can differ in capacity: DRS supports hosts with a different number of CPU packages or CPU cores, different memory or network capacity, and even different CPU generations. VMware Enhanced vMotion Compatibility (EVC) allows DRS to live migrate virtual machines between ESXi hosts with different CPU configurations of the same CPU vendor. DRS leverages EVC to manage placement and migration of virtual machines that have Fault Tolerance (FT) enabled.

DRS provides the ability to contain virtual machines on selected hosts within the cluster by using VM-to-Host affinity groups for performance or availability purposes. DRS resource pools allow compartmentalizing cluster CPU and memory resources. A resource pool hierarchy allows resource isolation between resource pools and, at the same time, optimal resource sharing within resource pools.
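
Resource pools can also be created programmatically. The sketch below is a rough illustration using pyVmomi; it assumes an existing connection from which a cluster object (vim.ClusterComputeResource) has already been retrieved, and the pool name and share levels are placeholder values.

    from pyVmomi import vim

    def _allocation():
        # No reservation, no limit (-1), expandable, with 'high' shares; the shares
        # number is only honoured when the level is 'custom'.
        return vim.ResourceAllocationInfo(
            reservation=0, limit=-1, expandableReservation=True,
            shares=vim.SharesInfo(level='high', shares=0))

    def create_resource_pool(cluster, name="Tier-1"):
        spec = vim.ResourceConfigSpec(cpuAllocation=_allocation(),
                                      memoryAllocation=_allocation())
        # Child pools are created under the cluster's root resource pool.
        return cluster.resourcePool.CreateResourcePool(name, spec)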

DRS Automation Levels

Three automation levels are available, determining how DRS delivers its initial placement and load balancing recommendations. DRS can operate in manual mode, partially automated mode, or fully automated mode, allowing the IT operations team to remain fully in control or to let DRS operate without the need for human interaction.
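
For reference, the cluster default can also be set through the vSphere API. The following is a minimal pyVmomi sketch, assuming a cluster object (vim.ClusterComputeResource) has already been retrieved elsewhere; the behavior strings are the API enum values.

    from pyVmomi import vim

    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            # Valid values: 'manual', 'partiallyAutomated', 'fullyAutomated'
            defaultVmBehavior='fullyAutomated'))

    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)

The same DrsConfigInfo object also carries the migration threshold (vmotionRate), which is mentioned again under the fully automated level below.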

Manual Automation Level

The manual automation level leaves the IT operations team in complete control. DRS generates initial placement and load balancing recommendations, and the team can choose to ignore or carry out any of them. If a virtual machine is powered on in a DRS-enabled cluster, DRS presents a list of mutually exclusive initial placement recommendations for the virtual machine. If a cluster imbalance is detected during a DRS invocation, DRS presents a list of recommended virtual machine migrations to improve the cluster balance. With each subsequent DRS invocation, the state of the cluster is recalculated and a new list of recommendations may be generated.
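
In manual mode, the pending recommendations can also be reviewed and applied through the API. A minimal pyVmomi sketch, assuming cluster is the vim.ClusterComputeResource object:

    # Review and, if appropriate, apply pending DRS recommendations.
    for rec in cluster.recommendation or []:
        print(rec.key, rec.rating, rec.reasonText)
        cluster.ApplyRecommendation(rec.key)  # apply only after reviewing the recommendation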

Partially Automated Level

DRS generates initial placement recommendations and executes them automatically, while load balancing recommendations are left for the IT operations team to review and execute. Please note that the introduction of a new workload can impact currently active workloads, which may result in DRS generating load balancing recommendations. It is recommended to review the DRS recommendation list after power-on operations if the DRS cluster is configured to operate in partially automated mode.

Fully Automated Level

DRS operates autonomously in fully automated mode and requires no human interaction. DRS generates initial placement and load balancing recommendations and executes them automatically. Please note that the migration threshold setting configures the aggressiveness of load balancing migrations.

Per-VM Automation Level

DRS allows a per-VM automation level that overrides the cluster's default automation level for individual virtual machines. This allows IT operations teams to still benefit from DRS at the cluster level while isolating particular virtual machines. This can be helpful if some virtual machines are not allowed to move due to licensing or strict performance requirements. DRS still considers the load and resource requirements of these virtual machines during load balancing and initial placement operations; it just does not move them around anymore.
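
A per-VM override can be applied through the API as well. This is a hedged pyVmomi sketch that assumes cluster (vim.ClusterComputeResource) and vm (vim.VirtualMachine) objects have already been looked up:

    from pyVmomi import vim

    override = vim.cluster.DrsVmConfigSpec(
        operation='add',  # use 'edit' to change an existing override
        info=vim.cluster.DrsVmConfigInfo(
            key=vm,
            enabled=True,
            behavior='manual'))  # DRS will only recommend, never automatically migrate, this VM

    spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[override])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)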

Reference


vCenter Update Manager

Update Manager enables centralized, automated patch and version management for VMware vSphere and offers support for VMware ESXi hosts, virtual machines, and virtual appliances.

With Update Manager, you can perform the following tasks:

  • Upgrade and patch ESXi hosts.

  • Install and update third-party software on hosts.

  • Upgrade virtual machine hardware, VMware Tools, and virtual appliances.

Update Manager requires network connectivity with VMware vCenter Server. Each installation of Update Manager must be associated (registered) with a single vCenter Server instance.

The Update Manager module consists of a server component and of a client component.

You can use Update Manager with either vCenter Server that runs on Windows or with the vCenter Server Appliance.

If you want to use Update Manager with vCenter Server, you have to perform Update Manager installation on a Windows machine. You can install the Update Manager server component either on the same Windows server where the vCenter Server is installed or on a separate machine. To install Update Manager, you must have Windows administrator credentials for the computer on which you install Update Manager.

If your vCenter Server system is connected to other vCenter Server systems by a common vCenter Single Sign-On domain, and you want to use Update Manager for each vCenter Server system, you must install and register Update Manager instances with each vCenter Server system. You can use an Update Manager instance only with the vCenter Server system with which it is registered.

The vCenter Server Appliance delivers Update Manager as an optional service. Update Manager is bundled in the vCenter Server Appliance.

In vSphere 6.5, it is no longer supported to register Update Manager to a vCenter Server Appliance during installation of the Update Manager server on a Windows machine.

The Update Manager client component is a plug-in that runs on the vSphere Web Client. The Update Manager client component is automatically enabled after installation of the Update Manager server component on Windows, and after deployment of the vCenter Server Appliance.

You can deploy Update Manager in a secured network without Internet access. In such a case, you can use the VMware vSphere Update Manager Download Service (UMDS) to download update metadata and update binaries.

Refer to the VMware documentation for more information.

Check this post to learn how to install and configure Update Manager on Windows vCenter 6.5.


What is Raw Device Mapping (RDM)

Raw device mapping (RDM) is an option in the vSphere environment that enables a storage LUN to be presented directly from the storage array to a virtual machine.

RDM is mostly used for configuring clusters (for example SQL Server or Oracle) between virtual machines, or between physical and virtual machines, and where SAN-aware applications run inside a virtual machine. Compared to VMFS, RDM produces similar I/O and throughput for random workloads. For sequential workloads with small I/O block sizes, RDM provides a slight increase in throughput over VMFS, but the performance gap decreases as the I/O block size increases. RDM also has a slightly better (lower) CPU cost for these workloads.

An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The virtual machine accesses the raw device directly through the RDM, while the RDM contains metadata for managing and redirecting disk access to the physical device. The mapping file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

Figure: A virtual machine has direct access to a LUN on the physical storage using a raw device mapping (RDM) file in a VMFS datastore.

RDM can be used in the following situations:

  • When SAN snapshot or other layered applications run in the virtual machine because RDM enables backup offloading systems by using features inherent to the SAN.

  • Clustering scenario that spans physical hosts, such as virtual-to-virtual clusters and physical-to-virtual clusters.

Two compatibility modes are available for RDMs: virtual compatibility mode, which lets the RDM behave largely like a virtual disk file (including support for snapshots), and physical compatibility mode, which passes most SCSI commands directly to the device for lower-level control by SAN-aware applications in the guest.
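
The mapping file itself is created when the LUN is attached to a virtual machine. Below is a rough pyVmomi sketch modelled on the community samples; vm is assumed to be an existing vim.VirtualMachine, the device path, controller key, and unit number are placeholders you would look up in your environment, and compatibility is either 'physicalMode' or 'virtualMode'.

    from pyVmomi import vim

    def add_rdm_disk(vm, device_name, compatibility='physicalMode',
                     controller_key=1000, unit_number=1):
        backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
            deviceName=device_name,        # e.g. /vmfs/devices/disks/naa.xxxxxxxx
            compatibilityMode=compatibility,
            diskMode='independent_persistent')
        disk = vim.vm.device.VirtualDisk(
            backing=backing,
            controllerKey=controller_key,  # key of an existing SCSI controller on the VM
            unitNumber=unit_number)
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))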

Considerations and Limitations

  • Direct-attached block devices or certain RAID devices will not support RDM. The RDM uses a SCSI serial number to identify the mapped device. Because block devices and some direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.

  • You cannot use the snapshot feature with RDMs in physical compatibility mode. Physical compatibility mode allows the virtual machine to manage its own, storage-based snapshot or mirroring operations.

  • The snapshot feature is available for RDMs in virtual compatibility mode.

  • You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.

  • For vMotion support with RDMs,  maintain consistent LUN IDs for RDMs across all participating ESXi hosts.

  • Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are supported with Flash Read Cache.

Benefits

RDM cannot be used in every situation, but it provides several benefits. A few of them are described below.

User-Friendly Persistent Names

Provides a user-friendly name for a mapped device. When you use an RDM, you do not need to refer to the device by its device name; you can refer to it by the name of the mapping file, for example:

/vmfs/volumes/Volume/VMDirectory/RawDisk.vmdk

Dynamic Name Resolution

Stores unique identification information for each mapped device. VMFS associates each RDM with its current SCSI device, regardless of changes in the physical configuration of the server because of adapter hardware changes, path changes, device relocation, and so on.

Distributed File Locking

Makes it possible to use VMFS distributed locking for raw SCSI devices. Distributed locking on an RDM makes it safe to use a shared raw LUN without losing data when two virtual machines on different servers try to access the same LUN.

File Permissions

Makes file permissions possible. The permissions of the mapping file are enforced at file-open time to protect the mapped volume.

File System Operations

Makes it possible to use file system utilities to work with a mapped volume, using the mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to the mapping file and are redirected to operate on the mapped device.

Snapshots

Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not available when the RDM is used in physical compatibility mode.

vMotion

vMotion is supported with RDM devices. The mapping file acts as a proxy that allows vCenter Server to migrate the virtual machine by using the same mechanism that exists for migrating virtual disk files.

SAN Management Agents

You can run some SAN management agents inside a virtual machine. Similarly, any software that needs to access a device by using hardware-specific SCSI commands can be run in a virtual machine. This kind of software is called SCSI target-based software. When you use SAN management agents, select a physical compatibility mode for the RDM.

N-Port ID Virtualization (NPIV)

NPIV is a technology that allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This ability makes the HBA port appear as multiple virtual ports, each having its own ID and virtual port name. Virtual machines can then claim each of these virtual ports and use them for all RDM traffic.

Note: You can use NPIV only for virtual machines with RDM disks.

VMware works with vendors of storage management software to ensure that their software functions correctly in environments that include ESXi. A few such applications are listed below:

  • SAN management software
  • Storage resource management (SRM) software
  • Snapshot software
  • Replication software

Such software uses a physical compatibility mode for RDMs so that the software can access SCSI devices directly.

Note: Various management products are best run centrally (not on the ESXi machine), while others run well on the virtual machines. VMware does not certify these applications or provide a compatibility matrix. To find out whether a SAN management application is supported in an ESXi environment, contact the SAN management software provider.

Reference


VMware Network Adapter Types

Depending on the guest operating system and its version, different network adapter types are available. This post discusses the different virtual network adapters.


The type of network adapters that are available depend on the following factors:

  • The virtual machine compatibility, which depends on the host that created or most recently updated it.
  • Whether the virtual machine compatibility has been updated to the latest version for the current host.
  • The guest operating system.

The following NIC types are supported:

E1000E

Emulated version of the Intel 82574 Gigabit Ethernet NIC. E1000E is the default adapter for Windows 8 and Windows Server 2012.

E1000

Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

Flexible

Identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher performance VMXNET adapter.

Vlance

Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in 32-bit legacy guest operating systems. A virtual machine configured with this network adapter can use its network immediately.

VMXNET

Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.

VMXNET 2 (Enhanced)

Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.

VMXNET 3

A paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
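
As an illustration, adding a VMXNET 3 adapter to an existing virtual machine can be scripted with pyVmomi. The sketch assumes vm (vim.VirtualMachine) and network (the vim.Network port group to attach to) have already been retrieved:

    from pyVmomi import vim

    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        network=network, deviceName=network.name)
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True, allowGuestControl=True)

    nic_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)

    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))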

PVRDMA

A paravirtualized NIC that supports remote direct memory access (RDMA) between virtual machines through the OFED verbs API. All virtual machines must have a PVRDMA device and should be connected to a distributed switch. PVRDMA supports VMware vSphere vMotion and snapshot technology. It is available in virtual machines with hardware version 13 and guest operating system Linux kernel 4.6 and later.

For information about assigning a PVRDMA network adapter to a virtual machine, see the vSphere Networking documentation.

SR-IOV passthrough

Representation of a virtual function (VF) on a physical NIC with SR-IOV support. The virtual machine and the physical adapter exchange data without using the VMkernel as an intermediary. This adapter type is suitable for virtual machines where latency might cause failure or that require more CPU resources.

SR-IOV passthrough is available in ESXi 5.5 and later for guest operating systems Red Hat Enterprise Linux 6 and later, and Windows Server 2008 R2 with SP2. An operating system release might contain a default VF driver for certain NICs, while for others you must download and install it from a location provided by the vendor of the NIC or of the host.

You can see the available adapter types in the image below.


If you're looking for compatibility information, you might want to check the VMware Compatibility Guide.

Reference 1

Reference 2


Enhanced vMotion Compatibility (EVC)

Enhanced vMotion Compatibility (EVC) is a vCenter Server cluster-level feature that allows virtual machines to vMotion, or migrate, across ESXi hosts equipped with dissimilar processors in the same cluster. EVC mode works by masking unsupported processor features, thus presenting a homogeneous processor front to all the virtual machines in a cluster. This means that a VM can vMotion to any ESXi host in a cluster irrespective of the host's micro-architecture, examples of which include Intel's Sandy Bridge and Haswell. One thing to remember is that all the processors must be from a single vendor, i.e. either Intel or AMD. You simply cannot mix and match.
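
For reference, the EVC baseline can also be queried and configured through the API. This is a hedged pyVmomi sketch; it assumes cluster is a vim.ClusterComputeResource and that the chosen mode key (for example 'intel-sandybridge') is supported by every host in the cluster.

    from pyVim.task import WaitForTask

    evc = cluster.EvcManager()  # per-cluster EVC manager
    print("Current EVC mode:", evc.evcState.currentEVCModeKey)
    print("Supported modes:", [mode.key for mode in evc.evcState.supportedEVCMode])

    WaitForTask(evc.ConfigureEvcMode_Task("intel-sandybridge"))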

Benefit

The main benefit is that you can add servers with the latest processors to your existing cluster(s) seamlessly and without incurring any downtime. More importantly, EVC provides you with the flexibility required to scale your infrastructure, lessening the need to decommission older servers prematurely, thus maximizing ROI. It also paves the way for seamless cluster upgrades once the decision to retire old hardware is taken.

Any Impacts

When a new family of processors is released to market, innovative microprocessor features and instruction sets are often included. These features include performance enhancements in areas such as multimedia, graphics, or encryption. With this in mind, try to determine in advance the type of applications you'll be running in your vSphere environment. This gives you a rough idea of the type of processors you'll need, which in turn allows you to predetermine the applicable EVC modes when mixing servers with processors from different generations. EVC modes also depend on the version of vCenter Server, as shown in Figure 1 below (Intel-based EVC modes).

Figure 1 - Intel based EVC modes (reproduced from VMware’s KB1003212)

Requirement

To enable EVC, you must make sure the ESXi hosts in your cluster satisfy the following.

  • Processors must be from the same vendor, either AMD or Intel.
  • Hosts must be properly configured for vMotion.
  • Hosts must be connected to the same vCenter Server.
  • Hardware virtualization features such as Intel VT-x or AMD-V must be enabled in the BIOS on all hosts.

Figure 3 - Enabling advanced CPU virtualization features


Use the VMware Compatibility Guide to assess your EVC options

The VMware Compatibility Guide is the best way to determine which EVC modes are compatible with the processors used in your cluster. The example below shows how to determine which EVC mode to use given three types of Intel processors.

The steps are as follows:

  • Select the ESXi version installed.
  • Hold down the CTRL key and select the type of processors from the CPU Series list.
  • Press the CPU/EVC matrix button to view the results.

Figure 4


The results tell us that we can only use the Merom or Penryn EVC modes. This means we have to sacrifice some features exclusive to the Intel i7 processor. This is the stage at which you have to decide whether you're better off getting new servers as opposed to adding older servers to the cluster.

Check How to enable EVC


vMotion Requirements

We have already explained vMotion.

Since vMotion is intervening in an active virtual machine without that virtual machine’s knowledge, certain conditions must be fulfilled so that the process can run without problems or failures:

  • CPU compatibility
  • vMotion interface (minimum 1 Gb adapter)
  • Shared storage
  • Same naming for virtual port groups
  • VMkernel port group with vMotion enabled (a quick check is sketched after this list)
  • Sufficient resources on the target host
  • At least one vSphere Essentials Plus license on the corresponding ESXi host
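
The VMkernel/vMotion requirement can be checked quickly from a script. This is a hedged pyVmomi sketch that assumes host is a vim.HostSystem object; the field names follow the HostVirtualNicManager API.

    # List VMkernel adapters currently selected for the 'vmotion' service.
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    if cfg and cfg.selectedVnic:
        enabled = [v.device for v in cfg.candidateVnic if v.key in cfg.selectedVnic]
        print("vMotion enabled on:", enabled)
    else:
        print("No VMkernel adapter is enabled for vMotion on this host")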

One of the main points that can sometimes present significant problems is CPU compatibility.

Since the ESXi host cannot predict which CPU instructions the virtual machine (or rather the guest operating system) will use, the user must either use identical CPUs or configure proper CPU masking.

VMware's CPU Identification Utility allows the user to determine which functionality the installed CPU has, including vMotion compatibility, EVC, and 64-bit support.

VMware Knowledge Base articles related to Intel and AMD explain which CPUs are compatible with which others:

Intel

AMD

Compatibility Guide 

There is another technology called VMware Enhanced vMotion Compatibility (EVC). This technology ensures vSphere vMotion compatibility for the different hosts in the cluster by creating a common CPU ID baseline for all the hosts within the cluster. All hosts then present the same CPU features to the VMs, even if their CPUs differ. Note, however, that EVC only works with different CPUs in the same family, for example with different AMD Opteron families. Mixing AMD and Intel processors is not allowed. Also note that EVC is a vCenter Server setting that is enabled at the cluster level, so it is not specific to DRS.

When enabled, this feature lets you migrate VMs among CPUs that would otherwise be considered incompatible. It works by forcing hosts to expose a common set of CPU features (baselines) to VMs. These features are supported by every host in the cluster. New hosts that are added to the cluster are automatically configured to the CPU baseline. Hosts that cannot be configured to the baseline are not permitted to join the cluster.

Enhanced vMotion Compatibility (EVC) processor support - KB

Enabling EVC on a cluster when vCenter Server is running in a virtual machine  - KB

More about EVC


VMotion

VMware vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data center.

vMotion allows users to:

  • Automatically optimize and allocate entire pools of resources for maximum hardware utilization, flexibility and availability.
  • Perform hardware maintenance without scheduled downtime.
  • Proactively migrate virtual machines away from failing or underperforming servers.

Working of vMotion

Live migration of a virtual machine from one physical server to another with VMotion is enabled by three underlying technologies.

First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS). VMware's clustered Virtual Machine File System (VMFS) allows multiple installations of ESX Server to access the same virtual machine files concurrently.

Second, the active memory and precise execution state of the virtual machine is rapidly transferred over a high speed network, allowing the virtual machine to instantaneously switch from running on the source ESX Server to the destination ESX Server. VMotion keeps the transfer period imperceptible to users by keeping track of on-going memory transactions in a bitmap. Once the entire memory and system state has been copied over to the target ESX Server, VMotion suspends the source virtual machine, copies the bitmap to the target ESX Server, and resumes the virtual machine on the target ESX Server. This entire process takes less than two seconds on a Gigabit Ethernet network.

Third, the networks being used by the virtual machine are also virtualized by the underlying ESX Server, ensuring that even after the migration, the virtual machine network identity and network connections are preserved. VMotion manages the virtual MAC address as part of the process. Once the destination machine is activated, VMotion pings the network router to ensure that it is aware of the new physical location of the virtual MAC address. Since the migration of a virtual machine with VMotion preserves the precise execution state, the network identity, and the active network connections, the result is zero downtime and no disruption to users.
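
Putting these pieces together, a live migration can also be triggered through the API. A minimal pyVmomi sketch, assuming vm and target_host were looked up elsewhere and that all vMotion prerequisites are met:

    from pyVim.task import WaitForTask

    # pool=None keeps the VM in its current resource pool; priority may also be
    # 'highPriority' or 'lowPriority'.
    task = vm.MigrateVM_Task(pool=None, host=target_host, priority='defaultPriority')
    WaitForTask(task)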

More Detailed Information about vMotion 

1. Storage Subsystem Plays the First Key Role

Instead of using local disks, the server's hard drives are virtualized and placed either on a Storage Area Network (SAN) via Fibre Channel or iSCSI, or mounted on a Network File System (NFS) volume on Network Attached Storage (NAS). While there are other technologies that VMware can use (vSAN, hyper-converged computing, or even storage migration), we are going to focus on this most common configuration.

With the server's disk virtualized and encapsulated on the network, VMware uses the clustered Virtual Machine File System (VMFS), which can be shared by multiple physical servers running VMware's ESXi bare-metal hypervisor. These physical servers all work together to share read-write access to this virtual hard drive. That alone is a pretty amazing feat when you think about the logistics of it!

Before the vMotion begins, a "shadow copy" of the VMware guest's OS configuration is created on the receiving ESXi host, but it is not yet visible in vSphere. This ghost is just a shell which will receive the memory contents next.

2. The Memory Manager Plays the Second Key Role

Because the virtualized computer's memory is also mapped and virtualized, VMware's vMotion can do something amazing. vMotion takes a snapshot of the system RAM, a copy if you will, and starts the rapid transfer of this memory over the Ethernet network to the chosen host computer. This includes the state of other system buffers, and even the video RAM. VMware engineers refer to this original snapshot as a "precopy".

While this snapshot is being transferred, vMotion maintains a change log buffer of any changes that have happened since the original snapshot was made. This is where faster network speeds are better, allowing a faster vMotion to occur.

vMotion will continue to make and copy these change buffers (and integrate them into memory on the receiving host) until the next set of change buffers is small enough that it can be transferred over the network in less than roughly 500 ms. When that occurs, vMotion halts the virtual CPU(s) of the guest virtual machine, copies the last buffer, and integrates it into the guest's virtual RAM. vMotion discontinues disk access on the sending host and starts it on the receiving host. Lastly, vMotion starts the virtual CPU(s) on the receiving machine.

3. The Virtual Switch Network Plays the Final Role

After the virtual CPU(s) are started, vMotion has one final task at hand. VMware ESXi runs and controls one or more virtual switches on the local network; this is what the virtual network adapter on the guest OS connects to. And just like any other network, physical or virtual, all of the switches maintain a map of network MAC addresses and the ports they are connected to. Since the machine is likely no longer connected to the same physical switch, vMotion instructs the ESXi host's network subsystem to send out a Reverse ARP (RARP) from the receiving host. This causes all of the switches, both physical and virtual, to update their mappings so that network traffic arrives at the new host rather than the old one.

Check the prerequisites for vMotion  

Reference

vMotion Support in vSphere 6.5


Install and Configure VMware ESXi 6.0

After understanding the basics of the VMware vSphere components, it's now time to install and configure VMware ESXi 6.0. Make sure you've gone through the various editions of vSphere before purchasing and installing it. There are a few system requirements that must be met before you can install an ESXi server.

  • Make sure the server hardware that you are going to install ESXi server on is supported by VMware vSphere. You can check that using VMware Compatibility Guide.
  • The physical server must have 64-bit processors with at least two CPU cores.
  • The physical server must have a minimum of 4 GB of RAM. You need at least 8 GB of memory to run virtual machines after the ESXi server is installed.
  • The NX/XD bit must be enabled in the BIOS, along with Intel VT-x for Intel processors or AMD-V for AMD processors.
  • The physical server must have one or more Gigabit Ethernet adapters.
  • Compatible disk storage.

Install and Configure VMware ESXi 6.0

There are different ways to install an ESXi server. You can use an interactive installation (CD/DVD, USB drive, or PXE boot), a scripted installation, or Auto Deploy. Here, I will use the interactive method with CD/DVD media to install the ESXi server. You can download the installation ISO image from VMware. Let's begin the installation. First, make sure the server is configured to boot from CD/DVD. Insert the CD/DVD into the DVD-ROM drive or map the ISO image to a virtual CD/DVD drive and boot the server from the ISO image.

Once you start the server with the ESXi installation media, you will be presented with the ESXi standard boot menu as shown above. Choose the ESXi standard installer to start the ESXi installer. Press [Tab] to toggle the selection and press [Enter] to confirm it. As you can see above, you also have the option to boot from the local disk.

Welcome screen appears as shown above. Press [Enter] to begin the installation of ESXi server.

Press F11 to accept the license agreement.

Choose the storage and press [Enter] to continue. As you can see above, the disk type is VMware Virtual S; this is because I am installing the ESXi server in a VMware Workstation virtual machine. You can press F1 to see more details about the disk. If you are installing the ESXi server on local SAS storage, it will be listed under remote devices.

Note: Here we have a RAID configuration, so it shows as remote.

Choose the keyboard layout. Press [Enter] to continue.

Enter the password for root user account. The password must be at least 7 characters long. Press [Enter] to continue.

To confirm the installation press F11.

The installation now begins.

Press [Enter] to reboot and complete the installation.

After the reboot, you can see the Direct Console User Interface (DCUI) shown above. You can see the ESXi build number, memory and processor information, and the IP address. By default, ESXi is set to receive an IP address from a DHCP server. You can press F2 to log in to the DCUI to change the IP address, DNS, hostname, and other information.


ESXi Initial Configuration - IP, Hostname, DNS

After installing an ESXi host, the very first thing you will want to do is configure the IP address and hostname of the ESXi server. By default, the hostname is localhost and the IP address is obtained from DHCP. So, in this post I will show the steps to configure the IP address and hostname in an ESXi server.

Configure IP Address and Hostname in ESXi Server

You need to have physical access to the server to configure the IP address in the ESXi server. Press F2 to log in to the Direct Console User Interface (DCUI) of the ESXi server. You can also configure DNS and the hostname using the vSphere client.


Enter the password for the root user account and press [Enter]. You will be presented with the System Customization menu as shown below.


As you can see above, the IP address 192.168.0.22 was acquired from the DHCP server. Select the Configure Management Network option and press [Enter].

IP Configuration

Here, you can configure IPv4, IPv6, DNS, DNS suffixes, and VLAN settings. To configure IPv4, select the IPv4 Configuration option and press [Enter].

IP Address

Choose the option Set static IPv4 address and network configuration. Press [Space Bar] to make the selection. Now, type the IP address, subnet mask, and gateway as shown above and press [Enter]. Then press the [Esc] key to go back.


You will be asked to confirm the changes and restart the network. Press [Y] to restart the management network.

NOTE: Disabling IPv6 requires a reboot of the ESXi host.

DNS Configuration

Now, select DNS Configuration option and press [Enter].

Configure the primary and alternate DNS servers and the hostname, then press [Enter]. Press [Esc] to go back; you will again be asked to confirm the changes and restart the management network.

As you can see above, the IP address and hostname have now changed. In this way you can configure the IP address and hostname in an ESXi server.
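
Once the host is reachable over the network, the same hostname and DNS settings can also be pushed from a script. This is a hedged pyVmomi sketch, assuming host is a vim.HostSystem object; all names and addresses below are placeholders.

    from pyVmomi import vim

    dns_config = vim.host.DnsConfig(
        dhcp=False,
        hostName="esxi01",
        domainName="lab.local",
        address=["192.168.0.1", "192.168.0.2"],  # primary and alternate DNS servers
        searchDomain=["lab.local"])

    host.configManager.networkSystem.UpdateDnsConfig(dns_config)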