How to Install Ruby vSphere Console (RVC) on Windows vCenter 6.5

Ruby vSphere Console (RVC) is a Ruby-based command-line interface for vSphere that can be used to manage VMware ESXi and vCenter. From the RVC console you can navigate the inventory and run commands against vCenter objects. With RVC, many administrative tasks can be completed more efficiently than with the vSphere Client.

RVC provides a shell-like way of navigating the vCenter inventory. It lays out vCenter objects in a filesystem-like hierarchy that mirrors the typical vCenter organizational model (similar to the Managed Object Browser). RVC uses *NIX-style commands for navigation: the cd command changes to another location in the hierarchy, and the ls command lists the items available at the current location.
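For example, once connected, a short navigation session might look like the following sketch (the vCenter FQDN and datacenter name are placeholders, not from this environment):

cd /vcenter.example.local/DC01   - move into a datacenter
ls   - list the objects it contains (typically storage, computers, networks, datastores, vms)
cd vms   - move into the VM folder
ls   - list virtual machines and folders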

You can download Ruby from the link below.

Installation of Ruby is straightforward; follow the steps below to install Ruby and start using RVC.

After downloading the Ruby software, run the file rubyinstaller-2.4.3-2-x64.exe

Accept the license and click Next

Go with the default options and click Next

Click Finish to complete the Ruby Setup Wizard; a new CLI window will pop up to continue the installation

You can enter options 1, 2, 3 and press Enter to continue

Click Next to continue the installation

The default location will be on C:; you may also specify a different location if needed, then click Next

If you want to give the shortcut a different name, enter a new one; otherwise click Next to continue the installation

Click Next to complete the maintenance tool installation

Once completed, click Finish.

After installation you will see a success message in the same CLI window, and it will automatically sync the required packages.

Once the sync is completed, press Enter to exit the installation window, then close the terminal window as well.
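This CLI window runs the RubyInstaller toolchain setup; if it was closed before the components finished installing, the same setup can be re-run later from a command prompt (assuming the installer added Ruby's ridk tool to the PATH):

ridk install   - re-runs the MSYS2/toolchain component installation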

Next, we have to install RVC. Open a Command Prompt as Administrator and run the commands below.

rvc - verify whether RVC is already available

gem install rvc - install RVC on the Windows server

After installation of all required gems, you can start accessing vCenter using RVC commands.
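As a minimal sketch, the install-and-verify sequence from an elevated Command Prompt is simply:

gem install rvc   - downloads RVC and its dependency gems from rubygems.org
gem list rvc   - confirms the rvc gem is installed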

How to Launch RVC 

You can launch RVC in two ways:

  • From the location “C:\Program Files\VMware\vCenter Server\rvc” (if vCenter is installed in a different location, use that path instead)
  • Open a Command Prompt as Administrator and simply type rvc

Launch the RVC batch file as an administrator from the vCenter installation directory: “C:\Program Files\VMware\vCenter Server\rvc”

Once the batch file runs, you have to enter the password for Administrator@vsphere.local.

Open CMD as Administrator

# Run the command rvc

# Enter the username@vCenter and password

Note: The logged-in Windows account should have access to vCenter.
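For example, a command-line login might look like the following (the vCenter FQDN is a placeholder; substitute your own server and SSO account):

rvc administrator@vsphere.local@vcenter.example.local

RVC then prompts for the account password and drops you at the RVC prompt, where the cd and ls commands described earlier can be used.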

Download Ruby vSphere Console Command Reference 

Check vSAN Administration with RVC -Windows vCenter 6.5


ESXi 5.5 Free - 32 GB RAM Hard Limit Removed

VMware has removed the RAM limitation from the free version of ESXi 5.5. The free ESXi 5.5 hypervisor is no longer restricted to 32 GB of physical RAM. VMware has introduced this with the new, enhanced version of VMware vSphere 5.5. For system admins, IT engineers, and anyone using the free version of ESXi for lab purposes, the 32 GB hard limit on free ESXi 5.1 made it very difficult to work with, so its removal is a welcome change.

The free ESXi 5.5 version of the VMware hypervisor is now available for download. Previously, for lab purposes you needed at least the vCenter Essentials Kit; by using the free version of ESXi 5.5 you can overcome that, and the free license key also unlocks some advanced features for testing.

ESXi 5.5 - Free License

 

Upgraded Supported Maximums

  • 320 physical CPUs per host
  • 4 TB of memory
  • 16 NUMA nodes
  • 4,096 vCPUs can be allocated to VMs

Features of the ESXi 5.5 Free version

  • 8 vCPU per VM limit
  • Virtual Hardware 10
    For a detailed list of all supported features of VM Hardware version 10, check the page
  • LSI SAS support for Solaris 11
  • Support for IDE; CD-ROM devices require the AHCI controller in a Mac guest OS
  • Support for 30 devices per controller, up to 4 controllers, for a total of 120 devices

 


New features and capabilities available in vSphere 5.5

There are many new features introduced with vSphere 5.5.

Doubled Host-Level Configuration Maximums

vSphere 5.5 is capable of hosting any size workload, a fact punctuated by the doubling of several host-level configuration maximums.  The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes has doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5.

Hot-pluggable PCIe SSD Devices

vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host.  With the increased adoption of SSD, having the ability to perform both orderly as well as unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.

Improved Power Management

ESXi 5.5 provides additional power savings by leveraging deep processor power states (C-states).  By leveraging the deeper CPU sleep states, ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity.  Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.

Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10)

ESXi 5.5 provides a new virtual machine compatibility level that includes support for a new virtual SATA Advanced Host Controller Interface (AHCI) with support for up to 120 virtual disk and CD-ROM devices per virtual machine.  This new controller is of particular benefit when virtualizing Mac OS X, as it allows you to present a SATA-based CD-ROM device to the guest.

VM Latency Sensitivity

Included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency.  When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.

Expanded vGPU Support

vSphere 5.5 extends VMware’s hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD.  The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads.  In addition, 5.5 enhances the “Automatic” rendering by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors, as well as between hosts that are limited to software-backed graphics rendering.

Graphics Acceleration for Linux Guests

vSphere 5.5 also provides out-of-the-box graphics acceleration for modern GNU/Linux distributions that include VMware’s guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost.

vCenter Single Sign-On (SSO)

In vSphere 5.5 SSO comes with many improvements.  There is no longer an external database required for the SSO server, which together with the vastly improved installation experience helps to simplify the deployment of SSO for both new installations as well as upgrades from earlier versions.  This latest release of SSO provides enhanced Active Directory integration, including support for multiple forests as well as one-way and two-way trusts.  In addition, a new multi-master architecture provides built-in availability that helps not only improve resiliency for the authentication service, but also helps to simplify the overall SSO architecture.

vSphere Web Client 

The web client in vSphere 5.5 also comes with several notable enhancements.  The web client is now supported on Mac OS X, to include the ability to access virtual machine consoles, attach client devices and deploy OVF templates.  In addition there have been several usability improvements to include support for drag and drop operations, improved filters to help refine search criteria and make it easy to find objects, and the introduction of a new “Recent Items” icon that makes it easier to navigate between commonly used views.

vCenter Server Appliance

With vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability.  I wasn’t able to officially confirm the max number of hosts and VMs that will be supported with the embedded DB.  They are targeting 100 hosts and 3,000 VMs but we’ll need to wait until 5.5 releases to confirm these numbers.  However, regardless of what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows-based vCenter.

vSphere App HA

App HA brings application awareness to vSphere HA helping to further improve application uptime.  vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine, and when issues are detected perform restart actions as defined by the administrator in the vSphere App HA Policy.

vSphere HA Compatibility with DRS Anti-Affinity Rules

vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines.  If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure.

vSphere Big Data Extensions (BDE)

Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions.  BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere web client.

Support for 62TB VMDK

vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note that the maximum VMFS volume size is 64TB, whereas the max VMDK file size is 62TB).  The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB.
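As a rough illustration (the datastore path and size below are hypothetical), a large thin-provisioned VMDK can be created from the ESXi shell with vmkfstools:

vmkfstools -c 61440G -d thin /vmfs/volumes/datastore1/bigvm/bigvm_1.vmdk   - creates a ~60TB thin-provisioned disk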

Microsoft Cluster Server (MSCS) Updates

MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.

16Gb End-to-End Support

In vSphere 5.5, 16Gb end-to-end FC support is now available.  Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.

Auto Remove of Devices on PDL

This feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state.  Each vSphere host is limited to 255 disk devices; removing devices that are in a PDL state prevents failed devices from occupying a device slot.
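The behavior is controlled by an advanced host setting; assuming the option name is /Disk/AutoremoveOnPDL (enabled by default in 5.5), its current value can be checked from the ESXi shell:

esxcli system settings advanced list -o /Disk/AutoremoveOnPDL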

VAAI UNMAP Improvements

vSphere 5.5 provides a new “esxcli storage vmfs unmap” command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once.
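For example (the datastore label and block count below are placeholders), space can be reclaimed in 200-block increments with:

esxcli storage vmfs unmap -l Datastore01 -n 200   - -l is the datastore label, -n the number of VMFS blocks reclaimed per pass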

VMFS Heap Improvements

vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes.  With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.

vSphere Flash Read Cache 

vSphere Flash Read Cache is a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.

Link Aggregation Control Protocol (LACP) Enhancements

With the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.

Traffic Filtering Enhancements

The vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).

Quality of Service Tagging 

vSphere 5.5 adds support for Differentiated Services Code Point (DSCP) marking.  DSCP marking support enables users to insert tags in the IP header, which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.

Single-Root I/O Virtualization (SR-IOV) Enhancements

vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.

Enhanced Host-Level Packet Capture

vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform.
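The tool in question is pktcap-uw, which runs in the ESXi shell; a simple capture of an uplink to a pcap file (the uplink name and output path are examples only) might look like:

pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap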

40Gb NIC Support

vSphere 5.5 provides support for 40Gb NICs.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.

vSphere Data Protection (VDP)

VDP has also been updated in 5.5 with several great improvements, including the ability to replicate backup data to EMC Avamar, direct-to-host emergency restore, the ability to back up and restore individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance.  For more information about these new features as well as more information about VDP vs.


vSphere 5.5

https://vmarena.com/new-features-and-capabilities-available-in-vsphere-5-5/

https://vmarena.com/new-in-vmware-vsphere-5-5/


What’s New in VMware vSphere 5.5

VMware vSphere 5.5 introduces many new features and enhancements to further extend the core capabilities of the vSphere platform. This post will discuss features and capabilities of the vSphere platform, including vSphere ESXi Hypervisor, VMware vSphere High Availability (vSphere HA), virtual machines, VMware vCenter Server, storage, networking and vSphere Big Data Extensions.

This post mainly explains the following five sections:

vSphere ESXi Hypervisor Enhancements

  • Hot-Pluggable SSD PCI Express (PCIe) Devices
  • Support for Reliable Memory Technology
  • Enhancements for CPU C-States

Virtual Machine Enhancements

  • Virtual Machine Compatibility with VMware ESXi 5.5
  • Expanded Virtual Graphics Support
  • Graphic Acceleration for Linux Guests

VMware vCenter Server Enhancements

  • VMware vCenter Single Sign-On
  • VMware vSphere Web Client
  • VMware vCenter Server Appliance
  • vSphere App HA
  • vSphere HA and VMware vSphere Distributed Resource Scheduler (vSphere DRS)
  • Virtual Machine–Virtual Machine Affinity Rules Enhancements
  • vSphere Big Data Extensions

vSphere Storage Enhancements

  • Support for 62TB VMDK
  • MSCS Updates
  • vSphere 5.1 Feature Updates
  • 16GB E2E support
  • PDL AutoRemove
  • vSphere Replication Interoperability
  • vSphere Replication Multi-Point-in-Time Snapshot Retention
  • vSphere Flash Read Cache

 vSphere Networking Enhancements

  • Link Aggregation Control Protocol Enhancements
  • Traffic Filtering
  • Quality of Service Tagging
  • SR-IOV Enhancements
  • Enhanced Host-Level Packet Capture
  • 40GB NIC support

vSphere ESXi Hypervisor Enhancements

Hot-Pluggable PCIe SSD Devices

The ability to hot-swap traditional storage devices such as SATA and SAS hard disks on a running vSphere host has been a huge benefit to systems administrators in reducing the amount of downtime for virtual machine workloads. Solid-state disks (SSDs) are becoming more prevalent in the enterprise datacenter, and this same capability has been expanded to support SSD devices. As with SATA and SAS hard disks, users are now able to hot-add or hot-remove an SSD device while a vSphere host is running, and the underlying storage stack detects the operation.

Support for Reliable Memory Technology

The most critical component to vSphere ESXi Hypervisor is the VMkernel, which is a purpose-built operating system (OS) to run virtual machines. Because vSphere ESXi Hypervisor runs directly in memory, an error in it can potentially crash it and the virtual machines running on the host. To provide greater resiliency and to protect against memory errors, vSphere ESXi Hypervisor can now take advantage of new hardware vendor–enabled Reliable Memory Technology, a CPU hardware feature through which a region of memory is reported from the hardware to vSphere ESXi Hypervisor as being more “reliable.” This information is then used to optimize the placement of the VMkernel and other critical components such as the initial thread, hostd and the watchdog process and helps guard against memory errors.

Enhancements to CPU C-States

In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) also is used, providing additional power savings. Another potential benefit of reduced power consumption is with inherent increased performance, because turbo mode frequencies on Intel chipsets can be reached more quickly while other CPU cores in the physical package are in deep C-states.

Virtual Machine Enhancements

Virtual Machine Compatibility with VMware ESXi 5.5

vSphere 5.5 introduces a new virtual machine compatibility with several new features such as LSI SAS support for Oracle Solaris 11 OS, enablement for new CPU architectures, and a new advanced host controller interface (AHCI). This new virtual-SATA controller supports both virtual disks and CD-ROM devices that can connect up to 30 devices per controller, with a total of four controllers. This enables a virtual machine to have as many as 120 disk devices, compared to the previous limit of 60.

(The original white paper includes a table summarizing the virtual machine compatibility levels supported in vSphere 5.5.)

Expanded Virtual Graphics Support

vSphere 5.1 was the first vSphere release to provide support for hardware-accelerated 3D graphics—virtual shared graphics acceleration (vSGA)—inside of a virtual machine. That support was limited to only NVIDIA-based GPUs. With vSphere 5.5, vSGA support has been expanded to include both NVIDIA- and AMD-based GPUs. Virtual machines with graphic-intensive workloads or applications that typically have required hardware-based GPUs can now take advantage of additional GPU vendors, makes and models. See the VMware Compatibility Guide for details on supported GPU adapters.

There are three supported rendering modes for a virtual machine configured with a vSGA: automatic, hardware and software. Virtual machines still can leverage VMware vSphere vMotion technology, even across a heterogeneous mix of GPU vendors, without any downtime or interruptions to the virtual machine. If automatic mode is enabled and a GPU is not available at the destination vSphere host, software rendering automatically is enabled. If hardware mode is configured and a GPU does not exist at the destination vSphere host, a vSphere vMotion instance is not attempted.

vSGA support can be enabled using both the vSphere Web Client and VMware Horizon View for Microsoft Windows 7 OS and Windows 8 OS. The following Linux OSs also are supported: Fedora 17 or later, Ubuntu 12 or later and Red Hat Enterprise Linux (RHEL) 7. Controlling vSGA use in Linux OSs is supported using the vSphere Web Client.

 

Graphic Acceleration for Linux Guests

With vSphere 5.5, graphic acceleration is now possible for Linux guest OSs. Leveraging a GPU on a vSphere host can help improve the performance and scalability of all graphics-related operations. In providing this support, VMware also is the first to develop a new guest driver that accelerates the entire Linux graphics stack for modern Linux distributions. VMware also is contributing 100 percent of the Linux guest driver code back to the open-source community. This means that any modern GNU/Linux distribution can package the VMware guest driver and provide out-of-the-box support for accelerated graphics without any additional tools or package installation.

The following Linux distributions are supported:

  • Ubuntu: 12.04 and later
  • Fedora: 17 and later
  • RHEL 7

With the new guest driver, modern Linux distributions are enabled to support technologies such as the following:

  • OpenGL 2.1
  • DRM kernel mode setting
  • Xrandr
  • XRender
  • Xv

VMware vCenter Server Enhancements

vCenter Single Sign-On

vCenter Single Sign-On server 5.5, the authentication services of VMware vCloud Suite, has been greatly enhanced to provide a richer experience that enables users to log in to vCloud Suite products in a true one-touch, single sign-on manner. This feature provided challenges for users in a previous release. As a result of extensive feedback, the following vCenter Single Sign-On enhancements have been made:

Simplified deployment 

A single installation model for customers of all sizes is now offered.

Enhanced Microsoft Active Directory integration 

The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multidomain environments.

Architecture 

Built from the ground up, this architecture removes the requirement of a database and now delivers a multimaster authentication solution with built-in replication and support for multiple tenants.

vSphere Web Client

The platform-agnostic vSphere Web Client, which replaces the traditional vSphere Client™, continues to exclusively feature all-new vSphere 5.5 technologies and to lead the way in VMware virtualization and cloud management technologies. Increased platform support – With vSphere 5.5, full client support for Mac OS X is now available in the vSphere Web Client. This includes native remote console for a virtual machine. Administrators and end users can now access and manage their vSphere environment using the desktop platform they are most comfortable with. Fully supported browsers include both Firefox and Chrome.

Improved usability experience – The vSphere Web Client includes the following key new features that improve overall usability and provide the administrator with a more native application feel:
• Drag and drop – Administrators now can drag and drop objects from the center panel onto the vSphere inventory, enabling them to quickly perform bulk actions. Default actions begin when the “drop” occurs, helping accelerate workflow actions. This enables administrators to perform “bulk” operations with ease. For example, to move multiple virtual machines, grab and drag them to the new host to start the migration workflow.
• Filters – Administrators can now select properties on a list of displayed objects and selected filters to meet specific search criteria. Displayed objects are dynamically updated to reflect the specific filters selected. Using filters, administrators can quickly narrow down to the most significant objects. For example, two checkbox filters can enable an administrator to see all virtual machines on a host that are powered on and running Windows Server 2008.
• Recent items – Administrators spend most of their day working on a handful of objects. The new recent items navigation aid enables them to navigate with ease, typically by using one click between their most commonly used objects.

vCenter Server Appliance

The popularity of vCenter Server Appliance has grown over the course of its previous releases. Although it offers matched API functionality to the installable vCenter Server version on Windows, administrators have found its widespread adoption prospects to be limited. One area of concern has been the embedded database that has previously been targeted for small datacenter environments. With the release of vSphere 5.5, the vCenter Server Appliance addresses this with a reengineered, embedded vPostgres database that can now support as many as 100 vSphere hosts or 3,000 virtual machines (with appropriate sizing). With new scalability maximums and simplified vCenter Server deployment and management, the vCenter Server Appliance offers an attractive alternative to the Windows version of vCenter Server when planning a new installation of vCenter Server 5.5.

vSphere App HA

In versions earlier than vSphere 5.5, it was possible to enable virtual machine monitoring, which checks for the presence of “heartbeats” from VMware Tools™ as well as I/O activity from the virtual machine. If neither of these is detected in the specified amount of time, vSphere HA resets the virtual machine. In addition to virtual machine monitoring, users can leverage third-party application monitoring agents or create their own agents to work with vSphere HA using the VMware vSphere Guest SDK.
In vSphere 5.5, VMware has simplified application monitoring for vSphere HA with the introduction of vSphere App HA. This new feature works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected. It is possible to protect several commonly used, off-the-shelf applications. vSphere HA can also reset the virtual machine if the application fails to restart.

Architecture Overview

vSphere App HA leverages VMware vFabric Hyperic to monitor applications. Deploying vSphere App HA begins with provisioning two virtual appliances per vCenter Server: vSphere App HA and vFabric Hyperic. vSphere App HA virtual appliance stores and manages vSphere App HA policies. vFabric Hyperic monitors applications and enforces vSphere App HA policies, which are discussed in greater detail in the following section. It is possible to deploy these virtual appliances to a cluster other than the one running the protected applications; for example, a management cluster. After the simple process of deploying the vFabric Hyperic and vSphere App HA virtual appliances, vFabric Hyperic agents are installed in the virtual machines containing applications that will be protected by vSphere App HA. These agents must be able to reliably communicate with the vFabric Hyperic virtual appliance.

vSphere App HA Policies

Policies define items such as the number of minutes vSphere App HA will wait for the service to start, the option to reset the virtual machine if the service fails to start, and the option to reset the virtual machine when the service is unstable. Policies can be configured to trigger vCenter Server alarms when the service is down and the virtual machine is reset. Email notification is also available.

 

Enabling Protection for an Application Service

Application protection is enabled when a policy is assigned. Right-click the application service to assign a policy. vSphere HA virtual machine monitoring and application monitoring must be enabled.

vSphere HA and vSphere Distributed Resource Scheduler

Virtual Machine–Virtual Machine Affinity Rules

vSphere DRS can configure DRS affinity rules, which help maintain the placement of virtual machines on hosts within a cluster. Various rules can be configured. One such rule, a virtual machine–virtual machine affinity rule, specifies whether selected virtual machines should be kept together on the same host or kept on separate hosts. A rule that keeps selected virtual machines on separate hosts is called a virtual machine–virtual machine antiaffinity rule and is typically used to manage the placement of virtual machines for availability purposes.
In versions earlier than vSphere 5.5, vSphere HA did not detect virtual machine–virtual machine antiaffinity rules, so it might have violated one during a vSphere HA failover event. vSphere DRS, if fully enabled, evaluates the environment, detects such violations and attempts a vSphere vMotion migration of one of the virtual machines to a separate host to satisfy the virtual machine–virtual machine antiaffinity rule. In a large majority of environments, this operation is acceptable and does not cause issues. However, some environments might have strict multitenancy or compliance restrictions that require consistent virtual machine separation. Another use case is an application with high sensitivity to latency; for example, a telephony application, where migration between hosts might cause adverse effects.