What’s New in VMware vSphere 5.5

VMware vSphere 5.5 introduces many new features and enhancements that further extend the core capabilities of the vSphere platform. This paper discusses features and capabilities of the vSphere platform, including the vSphere ESXi Hypervisor, VMware vSphere High Availability (vSphere HA), virtual machines, VMware vCenter Server, storage, networking and vSphere Big Data Extensions.

This post covers the following five areas:

vSphere ESXi Hypervisor Enhancements

  • Hot-Pluggable SSD PCI Express (PCIe) Devices
  • Support for Reliable Memory Technology
  • Enhancements for CPU C-States

Virtual Machine Enhancements

  • Virtual Machine Compatibility with VMware ESXi 5.5
  • Expanded Virtual Graphics Support
  • Graphic Acceleration for Linux Guests

VMware vCenter Server Enhancements

  • VMware vCenter Single Sign-On
  • VMware vSphere Web Client
  • VMware vCenter Server Appliance
  • vSphere App HA
  • vSphere HA and VMware vSphere Distributed Resource Scheduler (vSphere DRS)
  • Virtual Machine–Virtual Machine Affinity Rules Enhancements
  • vSphere Big Data Extensions

vSphere Storage Enhancements

  • Support for 62TB VMDK
  • MSCS Updates
  • vSphere 5.1 Feature Updates
  • 16Gb End-to-End (E2E) Support
  • PDL AutoRemove
  • vSphere Replication Interoperability
  • vSphere Replication Multi-Point-in-Time Snapshot Retention
  • vSphere Flash Read Cache

vSphere Networking Enhancements

  • Link Aggregation Control Protocol Enhancements
  • Traffic Filtering
  • Quality of Service Tagging
  • SR-IOV Enhancements
  • Enhanced Host-Level Packet Capture
  • 40Gb NIC Support

vSphere ESXi Hypervisor Enhancements

Hot-Pluggable PCIe SSD Devices

The ability to hot-swap traditional storage devices such as SATA and SAS hard disks on a running vSphere host has been a huge benefit to systems administrators, reducing the amount of downtime for virtual machine workloads. Solid-state disks (SSDs) are becoming more prevalent in the enterprise datacenter, and this same capability has now been extended to SSD devices. As with SATA and SAS hard disks, users can now hot-add or hot-remove an SSD device while a vSphere host is running, and the underlying storage stack detects the operation.

Support for Reliable Memory Technology

The most critical component of the vSphere ESXi Hypervisor is the VMkernel, a purpose-built operating system (OS) for running virtual machines. Because the vSphere ESXi Hypervisor runs directly in memory, a memory error can potentially crash it and the virtual machines running on the host. To provide greater resiliency and to protect against memory errors, the vSphere ESXi Hypervisor can now take advantage of new hardware vendor–enabled Reliable Memory Technology, a CPU hardware feature through which a region of memory is reported from the hardware to the vSphere ESXi Hypervisor as being more “reliable.” This information is then used to optimize the placement of the VMkernel and other critical components, such as the initial thread, hostd and the watchdog process, and helps guard against memory errors.

Enhancements to CPU C-States

In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) is also used, providing additional power savings. Reduced power consumption can also bring increased performance, because turbo mode frequencies on Intel chipsets can be reached more quickly while other CPU cores in the physical package are in deep C-states.

Virtual Machine Enhancements

Virtual Machine Compatibility with VMware ESXi 5.5

vSphere 5.5 introduces a new virtual machine compatibility level with several new features, such as LSI SAS support for the Oracle Solaris 11 OS, enablement for new CPU architectures, and a new advanced host controller interface (AHCI). The new virtual SATA controller supports both virtual disks and CD-ROM devices, can connect up to 30 devices per controller, and allows a total of four controllers per virtual machine. This enables a virtual machine to have as many as 120 disk devices, compared to the previous limit of 60.
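The controller arithmetic above can be sketched as a quick sanity check. This is illustrative Python only; the constant names are ours, not part of any VMware API:

```python
# Device limits for the new virtual SATA (AHCI) controller in vSphere 5.5.
# The constants come from the text above; this is an illustrative check,
# not a VMware API.
DEVICES_PER_SATA_CONTROLLER = 30   # virtual disks or CD-ROM devices per controller
SATA_CONTROLLERS_PER_VM = 4        # maximum virtual SATA controllers per VM

max_sata_devices = DEVICES_PER_SATA_CONTROLLER * SATA_CONTROLLERS_PER_VM
print(max_sata_devices)  # 120 devices per VM, up from the previous limit of 60
```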

The table in the original paper summarizes the virtual machine compatibility levels supported in vSphere 5.5.

Expanded Virtual Graphics Support

vSphere 5.1 was the first vSphere release to provide support for hardware-accelerated 3D graphics—virtual shared graphics acceleration (vSGA)—inside of a virtual machine. That support was limited to NVIDIA-based GPUs. With vSphere 5.5, vSGA support has been expanded to include both NVIDIA- and AMD-based GPUs. Virtual machines with graphics-intensive workloads or applications that typically have required hardware-based GPUs can now take advantage of additional GPU vendors, makes and models. See the VMware Compatibility Guide for details on supported GPU adapters.

There are three supported rendering modes for a virtual machine configured with vSGA: automatic, hardware and software. Virtual machines can still leverage VMware vSphere vMotion technology, even across a heterogeneous mix of GPU vendors, without any downtime or interruption to the virtual machine. If automatic mode is enabled and a GPU is not available at the destination vSphere host, software rendering is enabled automatically. If hardware mode is configured and a GPU does not exist at the destination vSphere host, the vSphere vMotion migration is not attempted.
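The vMotion behavior just described amounts to a simple decision rule per rendering mode. A minimal sketch in plain Python (the function and return labels are our own, not a VMware API):

```python
def vmotion_outcome(render_mode: str, dest_has_gpu: bool) -> str:
    """Illustrative sketch of the vSGA vMotion rules described above."""
    if render_mode == "software":
        return "migrate"  # no GPU dependency at the destination
    if render_mode == "automatic":
        # Falls back to software rendering if the destination host lacks a GPU.
        return "migrate" if dest_has_gpu else "migrate-with-software-rendering"
    if render_mode == "hardware":
        # The migration is not attempted without a GPU at the destination.
        return "migrate" if dest_has_gpu else "not-attempted"
    raise ValueError(f"unknown rendering mode: {render_mode}")

print(vmotion_outcome("automatic", dest_has_gpu=False))  # migrate-with-software-rendering
print(vmotion_outcome("hardware", dest_has_gpu=False))   # not-attempted
```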

vSGA support can be enabled using both the vSphere Web Client and VMware Horizon View for the Microsoft Windows 7 and Windows 8 OSs. The following Linux OSs are also supported: Fedora 17 or later, Ubuntu 12 or later and Red Hat Enterprise Linux (RHEL) 7. Controlling vSGA use in Linux OSs is supported using the vSphere Web Client.

 

Graphic Acceleration for Linux Guests

With vSphere 5.5, graphic acceleration is now possible for Linux guest OSs. Leveraging a GPU on a vSphere host can help improve the performance and scalability of all graphics-related operations. In providing this support, VMware also is the first to develop a new guest driver that accelerates the entire Linux graphics stack for modern Linux distributions. VMware also is contributing 100 percent of the Linux guest driver code back to the open-source community. This means that any modern GNU/Linux distribution can package the VMware guest driver and provide out-of-the-box support for accelerated graphics without any additional tools or package installation.

The following Linux distributions are supported:

  • Ubuntu: 12.04 and later
  • Fedora: 17 and later
  • RHEL 7

With the new guest driver, modern Linux distributions are enabled to support technologies such as the following:

  • OpenGL 2.1
  • DRM kernel mode setting
  • Xrandr
  • XRender
  • Xv

VMware vCenter Server Enhancements

vCenter Single Sign-On

vCenter Single Sign-On server 5.5, the authentication service of VMware vCloud Suite, has been greatly enhanced to provide a richer experience that enables users to log in to vCloud Suite products in a true one-touch, single sign-on manner. The previous release of this feature presented challenges for users. As a result of extensive feedback, the following vCenter Single Sign-On enhancements have been made:

Simplified deployment – A single installation model for customers of all sizes is now offered.

Enhanced Microsoft Active Directory integration – The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multidomain environments.

Architecture – Built from the ground up, this architecture removes the requirement of a database and delivers a multimaster authentication solution with built-in replication and support for multiple tenants.

vSphere Web Client

The platform-agnostic vSphere Web Client, which replaces the traditional vSphere Client™, continues to exclusively feature all new vSphere 5.5 technologies and to lead the way in VMware virtualization and cloud management technologies.

Increased platform support – With vSphere 5.5, full client support for Mac OS X is now available in the vSphere Web Client, including the native remote console for a virtual machine. Administrators and end users can now access and manage their vSphere environment using the desktop platform they are most comfortable with. Fully supported browsers include both Firefox and Chrome.

Improved usability experience – The vSphere Web Client includes the following key new features that improve overall usability and provide the administrator with a more native application feel:
• Drag and drop – Administrators can now drag and drop objects from the center panel onto the vSphere inventory, enabling them to perform bulk actions with ease. Default actions begin when the “drop” occurs, helping accelerate workflow actions. For example, to move multiple virtual machines, grab and drag them to the new host to start the migration workflow.
• Filters – Administrators can now select properties on a list of displayed objects and apply filters to meet specific search criteria. Displayed objects are dynamically updated to reflect the selected filters. Using filters, administrators can quickly narrow down to the most significant objects. For example, two checkbox filters can enable an administrator to see all virtual machines on a host that are powered on and running Windows Server 2008.
• Recent items – Administrators spend most of their day working on a handful of objects. The new recent items navigation aid enables them to navigate with ease, typically by using one click between their most commonly used objects.
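The filter behavior above is conceptually ordinary list filtering. A sketch in plain Python over made-up inventory data (the vSphere Web Client evaluates filters against its own inventory; the VM records here are invented for illustration):

```python
# Made-up inventory records to illustrate combining two checkbox filters,
# as in the "powered on + Windows Server 2008" example above.
vms = [
    {"name": "vm01", "power": "poweredOn",  "guest": "Windows Server 2008"},
    {"name": "vm02", "power": "poweredOff", "guest": "Windows Server 2008"},
    {"name": "vm03", "power": "poweredOn",  "guest": "RHEL 7"},
]

# Each selected filter is a predicate; an object is shown only if all match.
filters = [
    lambda vm: vm["power"] == "poweredOn",
    lambda vm: vm["guest"] == "Windows Server 2008",
]

matches = [vm["name"] for vm in vms if all(f(vm) for f in filters)]
print(matches)  # ['vm01']
```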

vCenter Server Appliance

The popularity of vCenter Server Appliance has grown over the course of its previous releases. Although it offers API functionality matching that of the installable Windows version of vCenter Server, its wider adoption has been limited. One area of concern has been the embedded database, which was previously targeted at small datacenter environments. With the release of vSphere 5.5, the vCenter Server Appliance addresses this with a reengineered, embedded vPostgres database that can now support as many as 100 vSphere hosts or 3,000 virtual machines (with appropriate sizing). With new scalability maximums and simplified vCenter Server deployment and management, the vCenter Server Appliance offers an attractive alternative to the Windows version of vCenter Server when planning a new installation of vCenter Server 5.5.
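Whether the appliance's embedded database fits a given environment can be framed as a simple check against the stated maximums. The helper below is our own illustration, not a VMware sizing tool:

```python
# Stated vCenter Server Appliance 5.5 maximums with the embedded vPostgres
# database (from the text above). This sizing helper is illustrative only.
EMBEDDED_MAX_HOSTS = 100
EMBEDDED_MAX_VMS = 3000

def embedded_db_sufficient(hosts: int, vms: int) -> bool:
    """True if the environment fits within the embedded-database maximums."""
    return hosts <= EMBEDDED_MAX_HOSTS and vms <= EMBEDDED_MAX_VMS

print(embedded_db_sufficient(hosts=80, vms=2500))   # True
print(embedded_db_sufficient(hosts=120, vms=2500))  # False: consider an external database
```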

vSphere App HA

In versions earlier than vSphere 5.5, it was possible to enable virtual machine monitoring, which checks for the presence of “heartbeats” from VMware Tools™ as well as I/O activity from the virtual machine. If neither of these is detected in the specified amount of time, vSphere HA resets the virtual machine. In addition to virtual machine monitoring, users can leverage third-party application monitoring agents or create their own agents to work with vSphere HA using the VMware vSphere Guest SDK.

In vSphere 5.5, VMware has simplified application monitoring for vSphere HA with the introduction of vSphere App HA. This new feature works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected. It is possible to protect several commonly used, off-the-shelf applications. vSphere HA can also reset the virtual machine if the application fails to restart.
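The escalation path described above (restart the failing service first, then reset the virtual machine if the restart fails) can be sketched as follows. The function and callbacks are stand-ins of our own, not the vSphere App HA API:

```python
def remediate(service_restarted_ok: bool, restart_service, reset_vm) -> str:
    """Illustrative escalation: attempt a service restart first; if the
    restart fails, fall back to a vSphere HA virtual machine reset."""
    restart_service()
    if service_restarted_ok:
        return "service-restarted"
    reset_vm()
    return "vm-reset"

# Stand-in callbacks that record which remediation steps ran.
log = []
outcome = remediate(False, lambda: log.append("restart"), lambda: log.append("reset"))
print(outcome, log)  # vm-reset ['restart', 'reset']
```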

Architecture Overview

vSphere App HA leverages VMware vFabric Hyperic to monitor applications. Deploying vSphere App HA begins with provisioning two virtual appliances per vCenter Server: vSphere App HA and vFabric Hyperic. The vSphere App HA virtual appliance stores and manages vSphere App HA policies. vFabric Hyperic monitors applications and enforces vSphere App HA policies, which are discussed in greater detail in the following section. It is possible to deploy these virtual appliances to a cluster other than the one running the protected applications; for example, a management cluster. After the simple process of deploying the vFabric Hyperic and vSphere App HA virtual appliances, vFabric Hyperic agents are installed in the virtual machines containing applications that will be protected by vSphere App HA. These agents must be able to reliably communicate with the vFabric Hyperic virtual appliance.

vSphere App HA Policies

Policies define items such as the number of minutes vSphere App HA will wait for the service to start, the option to reset the virtual machine if the service fails to start, and the option to reset the virtual machine when the service is unstable. Policies can be configured to trigger vCenter Server alarms when the service is down and the virtual machine is reset. Email notification is also available.
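A policy bundles the settings listed above. As a sketch, it could be modeled as a simple record; the field names here are our own invention, not the product's schema:

```python
from dataclasses import dataclass

@dataclass
class AppHAPolicy:
    # Field names are illustrative, not vSphere App HA's actual schema.
    service_start_timeout_min: int    # minutes to wait for the service to start
    reset_vm_on_start_failure: bool   # reset the VM if the service fails to start
    reset_vm_when_unstable: bool      # reset the VM when the service is unstable
    trigger_vcenter_alarms: bool      # raise vCenter Server alarms on events
    email_notification: bool          # send email notifications

policy = AppHAPolicy(5, True, False, True, True)
print(policy.service_start_timeout_min)  # 5
```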

 

Enabling Protection for an Application Service

Application protection is enabled when a policy is assigned. Right-click the application service to assign a policy. vSphere HA virtual machine monitoring and application monitoring must be enabled.

vSphere HA and vSphere Distributed Resource Scheduler

Virtual Machine–Virtual Machine Affinity Rules

vSphere DRS supports affinity rules, which help maintain the placement of virtual machines on hosts within a cluster. Various rules can be configured. One such rule, a virtual machine–virtual machine affinity rule, specifies whether selected virtual machines should be kept together on the same host or kept on separate hosts. A rule that keeps selected virtual machines on separate hosts is called a virtual machine–virtual machine antiaffinity rule and is typically used to manage the placement of virtual machines for availability purposes.

In versions earlier than vSphere 5.5, vSphere HA did not detect virtual machine–virtual machine antiaffinity rules, so it might have violated one during a vSphere HA failover event. vSphere DRS, if fully enabled, evaluates the environment, detects such violations and attempts a vSphere vMotion migration of one of the virtual machines to a separate host to satisfy the antiaffinity rule. In a large majority of environments, this operation is acceptable and does not cause issues. However, some environments might have strict multitenancy or compliance restrictions that require consistent virtual machine separation. Another use case is an application with high sensitivity to latency; for example, a telephony application, where migration between hosts might cause adverse effects.
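Detecting an antiaffinity violation after a failover is conceptually a grouping check: do any two members of the rule share a host? A sketch in Python (the placement mapping and function are our own representation, not the DRS rule format):

```python
from collections import defaultdict

def antiaffinity_violations(placements: dict, rule_vms: set) -> list:
    """Return groups of rule members that ended up on the same host.

    `placements` maps VM name -> host name; this representation is
    illustrative, not how DRS stores placements internally.
    """
    by_host = defaultdict(set)
    for vm, host in placements.items():
        if vm in rule_vms:
            by_host[host].add(vm)
    return [vms for vms in by_host.values() if len(vms) > 1]

# After a simulated vSphere HA failover, two rule members landed on host-a:
placements = {"web1": "host-a", "web2": "host-a", "db1": "host-b"}
print(antiaffinity_violations(placements, {"web1", "web2"}))  # [{'web1', 'web2'}]
```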