vSAN 6.7 All Flash Configuration

VMware Virtual SAN (vSAN) is a hypervisor-converged storage solution for your vSphere environment. In this post I will explain how to perform a vSAN all-flash configuration.

vSAN All Flash Architecture

In an all-flash configuration, one designated flash device is used for the cache tier while additional flash devices are used for the capacity tier. In an all-flash configuration, 100% of the cache tier is used as a write buffer; there is no read cache.


A standard vSAN cluster consists of a minimum of three physical nodes and can be scaled to 64 nodes. All the hosts in a standard cluster are commonly located at a single site and are well connected on the same Layer-2 network.

We are configuring vSAN with the setup below, running in a nested environment.

ESXi Version | Node | Flash Drive | Flash Drive
ESXi 6.7 | TEST-ESX1 | 50 GB | 20 GB
ESXi 6.7 | TEST-ESX2 | 50 GB | 20 GB
ESXi 6.7 | TEST-ESX3 | 50 GB | 20 GB

Enable vSAN on the Cluster

In my previous post I explained the requirements and configuration of vSAN in hybrid mode. You can follow the same article to configure prerequisites such as networking and HA. After creating a VMkernel adapter for vSAN traffic, you can enable vSAN on the cluster from the web client.

In addition to my previous post, you have to use all-flash drives for this configuration; in my case I marked both disks as flash from the web client.

Navigate to ESXi Host -> Configure -> Storage Devices, then select the disk and use the option "Mark the selected disk as flash".

After making these changes on all hosts, navigate to Cluster -> Configure -> General -> Configure.

You can see that vSAN is in the Turned OFF state. Click the Configure option on the right side.

Select the Deduplication and Compression check box (this feature is available only in a vSAN all-flash configuration) and click Next.

More detail about deduplication and compression can be found in the related blog post.

Note: We are not using a KMS or fault domains, so neither encryption nor fault domains are selected.

Next, network validation runs. It shows all VMkernel adapters with vSAN traffic enabled; if vSAN traffic is not enabled on any host, you will get an error, which you can fix and then retry.

The next page is for claiming the available disks on the ESXi hosts; as we are using an all-flash configuration, all disks will show as flash.

It is easier to understand the drive details by selecting "Group by Host". Claim the disks and click Next to continue.

Review the configuration and click Finish. It will take some time to complete, after which you can start using vSAN.

Verify the vSAN Status 

Navigate to Cluster -> Configure -> General

Verify the vSAN Configured devices 

Navigate to Cluster -> Configure -> Disk Management, where you can see all the disks configured for vSAN.

Next, you can see that a vSAN datastore is available on the cluster.

Navigate to Cluster -> Configure -> Datastores, and verify that the vsanDatastore is available in the console.

You can find more vSAN posts here.

WSFC Configuration with vSAN 6.7 iSCSI Target

In my previous post, I discussed iSCSI target configuration in vSAN 6.7 and mentioned the new support for Windows Server Failover Clusters (WSFC) using the vSAN iSCSI target service. In this post, you can find the configuration of a Windows Server Failover Cluster using the iSCSI target feature in a Windows Server 2012 virtual machine. This feature is also supported with physical Windows servers.

vSAN 6.7 fully supports transparent failover of LUNs with the vSAN iSCSI service when used in conjunction with WSFC. With this feature, customers no longer need to buy a dedicated storage array, which saves a lot of money.

Steps Involved in this Procedure

  • Enable iSCSI Initiator
  • Create iSCSI Initiator Group
  • Create iSCSI Target and LUN
  • Configure the iSCSI Target on Servers
  • Windows Server Failover Cluster Creation
  • Failover Testing


Number of Servers | 2 (based on license, you can use more hosts)
Network Cards | 3 per node
DNS Resolution | Required
IPs | Each server requires 2 public IPs, 2 heartbeat IPs, and 1 iSCSI IP
Cluster Name with FQDN | Required
Windows Update | Fully updated on both nodes
Quorum | iSCSI disk of at least 2 GB
iSCSI | Enabled on both servers
MPIO Feature | Enabled, with policy set to “Fail Over Only”


Windows Server Failover Clusters (WSFC)

A Windows Server Failover Cluster (WSFC) is a group of independent servers that work together to increase the availability of applications and services.

Components you must know in WSFC

Node – A server that is participating in a WSFC.

Cluster Resource – A physical or logical entity that can be owned by a node, brought online or offline, moved between nodes, and managed as a cluster object. A cluster resource can be owned by only one node at any point in time.

Role – A collection of cluster resources managed as a single cluster object to provide specific functionality. A role contains all the cluster resources that are required for an Availability Group (AG) or Always On Failover Cluster Instance (FCI), and failover and failback always act in the context of roles. A role will contain an IP address resource, a network name resource, and the resources for the role.

Network Name Resource  -  A logical server name that is managed as a cluster resource. A network name resource must be used with an IP address resource. These entries may require objects in Active Directory Domain Services and/or DNS.

Quorum – The quorum configuration in a failover cluster determines the number of node failures that the cluster can sustain.
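To make the quorum idea concrete, here is a minimal sketch (not tied to any WSFC API; the function name and vote counts are illustrative) of how majority voting determines how many failures a cluster can sustain, and why a small two-node cluster needs a disk witness:

```python
def quorum(total_votes: int) -> tuple[int, int]:
    """Return (votes needed for quorum, failures the cluster can sustain).

    Quorum requires a strict majority of all votes; the cluster stays up
    only while a majority of votes remains online.
    """
    needed = total_votes // 2 + 1          # strict majority
    sustainable = total_votes - needed     # votes that may be lost
    return needed, sustainable

# Two nodes alone cannot lose anyone (a majority of 2 votes is 2):
print(quorum(2))   # (2, 0)
# Adding a disk witness (such as our iSCSI quorum disk) adds one vote:
print(quorum(3))   # (2, 1) -> one node may now fail
```

This is why the two-node cluster in this post gets a small iSCSI quorum disk: the extra witness vote lets one node fail without the whole cluster going down.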

Enable iSCSI Initiator

Navigate to Server Manager -> Tools -> Select iSCSI Initiator

This will enable the iSCSI initiator on the server.

From the Configuration tab, collect the initiator name; it is required when configuring access.

Create iSCSI Initiator Group 

Navigate to Cluster ->Configure -> vSAN-> iSCSI Initiator Group

Provide a name for the group and add the initiator names you collected from the servers to the members list.

Now you can see the available members in the group; this lets you restrict access to the LUN to these members only.

Create iSCSI Target and LUN

Navigate to Cluster -> Configure -> vSAN -> iSCSI Targets, click the "+" Add button, fill in the required details, and click OK.

Add the alias and select the iSCSI VMkernel network and storage policy; from the same window you also have the option to create a LUN.

Here I am creating a quorum disk with LUN ID 5 and a size of 3 GB.

Click the Allowed Initiators tab and add the initiator group.

Configure the iSCSI  Target on Servers 

First, verify the iSCSI network from the hosts: navigate to ESXi Host -> Configure -> Networking -> VMkernel Adapters.

Open the iSCSI Initiator -> Discovery tab, click Discover Portal, and add the iSCSI VMkernel IP.

After adding it, move to the Targets tab, where you can see the targets in an Inactive state.

Select each target and click the Connect option, select the Enable multi-path option, and click OK.

Go to Disk Management and you can see the iSCSI LUN is available. Now you can bring the disk online and create a partition.

Enable the Failover Clustering Feature

You have to enable this feature on all nodes that need to be part of the WSFC.

Navigate to Server Manager -> Manage -> Select Add Roles and Features 

Follow the screen options with the defaults, and under Features select Failover Clustering -> Add Features, then continue through the remaining screens. It will take a while to finish the installation.

Windows Server Failover  Cluster Creation

After enabling the feature, you have to create a Windows cluster from the primary server.

You have two options: Validate Configuration and Create Cluster.

Validate Configuration – Validates that the cluster prerequisites are met and reports any warnings or errors on the servers. Any issues must be fixed before proceeding, and running validation is recommended. After validating, it also gives you the option to create the cluster.

Create Cluster – Starts cluster creation without validating the server configuration; you can validate the configuration after the cluster is created.

Proceed to create the cluster following the on-screen options.

You have to add both servers and the Windows cluster IP and name in the required steps.

Note – Do not select the option to add eligible disks to the cluster.

After finishing the Failover Cluster wizard, you can see the new cluster with the added node details.

Next, you can add the configured iSCSI storage to the cluster and configure the required roles.

Navigate to Failover Cluster -> Storage -> Disks and select the Add Disk option.

It will list the disks associated with the server; select the desired disk from there, and you can then see the added disk under Disks.

Configure a quorum for the cluster with the added 3 GB disk.

Navigate to Cluster -> Right Click -> More Actions ->Select Configure Cluster Quorum Settings

Follow the screen options; you will see the available disk to add as the quorum. Select the desired disk and continue to finish.


You can see the details under the Assigned To column as Disk Witness in Quorum.

Select the disk, and you can test failover with the Move storage options: Best Possible Node or Select Node.

Best Possible Node – Automatically selects the node, and the storage is moved.

Select Node – Pops up the available cluster nodes to which you can move the resource.

You can also shut down the active node and verify the failover status by logging in to the other node.

Now you can create the required roles after adding the required disks, for example for a database, file server, etc.

Reference: vSAN 6.7

Reference for SQL – Microsoft SQL Server 2014 on VMware vSAN 6 Hybrid

Refer to the Microsoft site for more details on failover clustering.

Configure iSCSI Target on vSAN 6.7

As we know, VMware released the iSCSI target feature with vSAN 6.5, and with the new vSAN 6.7 release they have enhanced it. vSAN 6.7 now supports iSCSI targets with SCSI-3 Persistent Reservations for iSCSI shared disks. We can present iSCSI shared disks from vSAN to virtual machines on the same vSAN cluster, with official support for Microsoft Windows Server Failover Clusters (WSFC). In this post, I will cover how to configure an iSCSI target in a vSAN 6.7 environment; WSFC will be discussed in another post as a continuation of this one.

The iSCSI target service enables hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore. This feature enables an iSCSI initiator on a remote host to transport block-level data to an iSCSI target on a storage device in the vSAN cluster.

After you configure the vSAN iSCSI target service, you can discover the vSAN iSCSI targets from a remote host. To discover vSAN iSCSI targets, use the IP address of any host in the vSAN cluster and the TCP port of the iSCSI target. To ensure high availability of the vSAN iSCSI target, configure multipath support for your iSCSI application. You can use the IP addresses of two or more hosts to configure multipathing.

Note: The vSAN iSCSI target service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mappings (RDMs).

How to Enable iSCSI Target

First, verify the iSCSI target service status as follows; by default this service is disabled.

Navigate to the vSAN cluster - > Configure -> vSAN -> iSCSI Target Service 

Click the Enable option; a new window appears with the option to enable the service.

Enable the iSCSI target service, add the required details, and click Apply.

You have to select the default network, TCP port, authentication method, and a vSAN storage policy.

You can monitor the progress under Recent Tasks.

Once the target service is enabled, you will see the options to configure iSCSI targets and perform other related configuration.

Create an iSCSI Target

Click the "+" Add button to create a new iSCSI target. From the same window you can create a LUN and assign it to the same target, or you can skip this portion by removing the tick. You can also use the vSphere Flex Client to do this configuration.

While creating the iSCSI target and LUN you have to fill in various details:

  • VMkernel interface
  • iSCSI target alias name
  • TCP port number
  • Authentication method (CHAP and Mutual CHAP supported)
  • The storage policy to be applied to the iSCSI target
  • LUN ID
  • Alias for the LUN
  • Size of the LUN
  • Storage policy for the vSAN storage allocation of the LUN; an example on the right-hand side shows what this looks like from a vSAN perspective


After you create the iSCSI target and LUN, you will see a configuration similar to the one below.

You have the option to configure access to the iSCSI target; you may specify individual IQNs, add them to a group, or allow everyone.

Additionally, you can configure iSCSI initiator groups to manage access to the targets.

Navigate to vSAN Cluster -> Configure -> vSAN -> iSCSI Initiator Groups to configure them.

From the vSAN side, the iSCSI target configuration and LUN mapping are now complete; next you can log in to the Windows machine and configure the iSCSI initiator.

Navigate to Administrative Tools -> iSCSI Initiator service -> Discovery Tab

and enter the IP address of your iSCSI target; in my scenario this is the iSCSI IP of host 1

If you are using CHAP/Mutual CHAP, you can configure that as well under the advanced settings.

After adding the IP of the iSCSI target, click the “Targets” tab; you will find the target IQN listed as “Inactive”. Click Connect and also select “Enable multi-path”.


Another important point: if you need MPIO for iSCSI, it has to be enabled. If it is not listed under "Administrative Tools", you can enable the feature from Server Manager -> Add Roles and Features.

If MPIO is enabled, make sure you have selected “Add support for iSCSI devices”; enabling this requires a reboot of the Windows machine. You also need to add another path (the IP of a VMkernel adapter on another host in the vSAN cluster).

You can then configure the MPIO policy from iSCSI Targets -> Properties -> MCS/Devices options.

After adding the iSCSI target and enabling MPIO, you will be able to see the devices in Device Manager, and you can enable and configure the new iSCSI disk in Windows.

Navigate to Computer Management -> Disk Management; you will see the newly added unpartitioned disk in offline mode.

You can change it to online mode, initialize it, format it, and start using the disk.


Additional Information 


CHAP

In CHAP authentication, the target authenticates the initiator, but the initiator does not authenticate the target.

Mutual CHAP

In mutual CHAP authentication, an extra level of security enables the initiator to authenticate the target.

iSCSI Targets

You can add one or more iSCSI targets that provide storage blocks as logical unit numbers (LUNs). vSAN identifies each iSCSI target by a unique iSCSI Qualified Name (IQN). You can use the IQN to present the iSCSI target to a remote iSCSI initiator so that the initiator can access the LUN of the target.

Each iSCSI target contains one or more LUNs. You define the size of each LUN, assign a vSAN storage policy to each LUN, and enable the iSCSI target service on a vSAN cluster. You can configure a storage policy to use as the default policy for the home object of the vSAN iSCSI target service.

iSCSI Initiator Groups

You can define a group of iSCSI initiators that have access to a specified iSCSI target. The iSCSI initiator group restricts access to only those initiators that are members of the group. If you do not define an iSCSI initiator or initiator group, then each target is accessible to all iSCSI initiators.

A unique name identifies each iSCSI initiator group. You can add one or more iSCSI initiators as members of the group. Use the IQN of the initiator as the member initiator name.
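The access rule described above can be sketched in a few lines (a hypothetical helper, not a vSAN API; the IQNs below are made up for illustration): an empty allow-list means the target is open to every initiator, otherwise only listed members get access.

```python
def target_accessible(initiator_iqn: str, allowed_initiators: set[str]) -> bool:
    """Apply the vSAN iSCSI access rule: no defined initiators or group
    means the target is accessible to all; otherwise membership is required."""
    if not allowed_initiators:
        return True            # no initiator/group defined -> open to all
    return initiator_iqn in allowed_initiators

# Hypothetical IQNs for illustration:
win1 = "iqn.1991-05.com.microsoft:win-node1"
win2 = "iqn.1991-05.com.microsoft:win-node2"
group = {win1, win2}

print(target_accessible(win1, group))         # True  (member of the group)
print(target_accessible("iqn.other", group))  # False (restricted)
print(target_accessible("iqn.other", set()))  # True  (no group defined)
```

This is why defining an initiator group is worthwhile even in a small lab: without one, any initiator that discovers the portal can connect to the LUN.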


It is recommended to read the iSCSI target configuration documentation in the VMware library and the iSCSI target usage guide for more information about using the vSAN iSCSI target service.


vSAN Erasure Coding – RAID 5 and RAID 6

When you first hear the term “erasure coding” you may be confused, so let’s clarify it. What is “erasure coding”? Erasure coding is a general term that refers to *any* scheme of encoding and partitioning data into fragments in a way that allows you to recover the original data even if some fragments are missing. Any such scheme is referred to as an “erasure code”, as clarified on the VMware blog.

RAID-5 and RAID-6 were introduced in vSAN to reduce the overhead of configuring virtual machines to tolerate failures. This feature is also termed “erasure coding”. RAID-5 or RAID-6 erasure coding is a policy attribute that you can apply to virtual machine components. It is available only on an all-flash vSAN cluster, and you cannot use it in a hybrid configuration.


Configuring RAID-5 or RAID-6 on vSAN has specific requirements on the number of hosts in the vSAN cluster: for RAID-5, a minimum of 4 hosts, and for RAID-6, a minimum of 6. Data blocks are placed across the storage on each host along with parity. There is no dedicated disk allocated for storing the parity; it uses distributed parity. RAID-5 and RAID-6 are fully supported with the deduplication and compression mechanisms in vSAN.

RAID-5 – a 3+1 configuration: 3 data fragments and 1 parity fragment per stripe.

RAID-6 – a 4+2 configuration: 4 data fragments, 1 parity, and 1 additional syndrome per stripe.
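The capacity cost of each scheme follows directly from the fragment counts above. A quick sketch of the raw-capacity arithmetic (illustrative only, not a VMware sizing tool):

```python
def capacity_required(data_gb: float, data_frags: int, parity_frags: int) -> float:
    """Raw capacity consumed when data is split into data_frags data
    fragments plus parity_frags parity fragments per stripe."""
    return data_gb * (data_frags + parity_frags) / data_frags

print(capacity_required(100, 3, 1))  # RAID-5 (3+1): ~133.3 GB
print(capacity_required(100, 4, 2))  # RAID-6 (4+2): 150.0 GB
# RAID-1 mirroring with FTT=1 simply stores n+1 full copies:
print(100 * (1 + 1))                 # 200 GB
```

The same arithmetic produces the 1.33x and 1.5x multipliers quoted later in this section.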

To learn more about RAID levels, see Standard RAID Levels.

You can use RAID 5 or RAID 6 erasure coding to protect against data loss and increase storage efficiency. Erasure coding can provide the same level of data protection as mirroring (RAID 1), while using less storage capacity.

RAID 5 or RAID 6 erasure coding enables vSAN to tolerate the failure of up to two capacity devices in the datastore. You can configure RAID 5 on all-flash clusters with four or more fault domains. You can configure RAID 5 or RAID 6 on all-flash clusters with six or more fault domains.

RAID 5 or RAID 6 erasure coding requires less additional capacity to protect your data than RAID 1 mirroring. For example, a VM protected by a Primary level of failures to tolerate value of 1 with RAID 1 requires twice the virtual disk size, but with RAID 5 it requires 1.33 times the virtual disk size. The following table shows a general comparison between RAID 1 and RAID 5 or RAID 6.

RAID Configuration | Primary level of Failures to Tolerate | Data Size | Capacity Required
RAID 1 (mirroring) | 1 | 100 GB | 200 GB
RAID 5 or RAID 6 (erasure coding) with four fault domains | 1 | 100 GB | 133 GB
RAID 1 (mirroring) | 2 | 100 GB | 300 GB
RAID 5 or RAID 6 (erasure coding) with six fault domains | 2 | 100 GB | 150 GB

RAID-5/6 (Erasure Coding) is configured as a storage policy rule and can be applied to individual virtual disks or an entire virtual machine. Note that the failure tolerance method in the rule set must be set to RAID5/6 (Erasure Coding).

Additionally, in a vSAN stretched cluster, the failure tolerance method RAID-5/6 (Erasure Coding) – Capacity applies only to the secondary level of failures to tolerate.

RAID 5 or RAID 6 Design Considerations

  • RAID 5 or RAID 6 erasure coding is available only on all-flash disk groups.
  • On-disk format version 3.0 or later is required to support RAID 5 or RAID 6.
  • You must have a valid license to enable RAID 5/6 on a cluster.
  • You can achieve additional space savings by enabling deduplication and compression on the vSAN cluster.

RAID-1 (Mirroring) vs RAID-5/6 (Erasure Coding).

RAID-1 (Mirroring) in Virtual SAN employs a 2n+1 host or fault domain algorithm, where n is the number of failures to tolerate. RAID-5/6 (Erasure Coding) in Virtual SAN employs a 3+1 or 4+2 host or fault domain requirement, depending on 1 or 2 failures to tolerate respectively. RAID-5/6 (Erasure Coding) does not support 3 failures to tolerate.
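These host-count rules can be captured in a small helper (a planning sketch under the rules stated above; the function name is made up, and it is not an official VMware sizing tool):

```python
def hosts_required(ftt: int, method: str) -> int:
    """Minimum hosts/fault domains for a given failures-to-tolerate (FTT)."""
    if method == "mirror":               # RAID-1: 2n+1 hosts
        return 2 * ftt + 1
    if method == "erasure":              # RAID-5 (3+1) or RAID-6 (4+2)
        if ftt == 1:
            return 4
        if ftt == 2:
            return 6
        raise ValueError("erasure coding supports an FTT of 1 or 2 only")
    raise ValueError(f"unknown method: {method}")

print(hosts_required(1, "mirror"))   # 3
print(hosts_required(2, "mirror"))   # 5
print(hosts_required(1, "erasure"))  # 4 (RAID-5)
print(hosts_required(2, "erasure"))  # 6 (RAID-6)
```

Note that these are bare minimums; as mentioned in the recommendations below, an extra host is needed to allow in-place rebuilds after a host loss.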


Erasure coding provides capacity savings over mirroring, but it requires additional overhead. As mentioned above, erasure coding is only supported in all-flash vSAN configurations, where the effects on latency and IOPS are negligible due to the inherent performance of flash devices.

Overhead on Write & Rebuild  Operations

The overhead of erasure coding in vSAN is not the same as RAID 5/6 in traditional disk arrays. When a new data block is written to vSAN, it is sliced up and distributed to each of the components along with additional parity information. Writing the data in a distributed manner along with the parity consumes more compute resources, and write latency also increases, since whole objects are distributed across the hosts in the vSAN cluster.

All the data blocks need to be verified and rewritten with each new write; it is also necessary to have a uniform distribution of data and parity for failure tolerance and the rebuild process. Writes are essentially a sequence of read and modify, along with recalculation and rewrite of parity. This write overhead occurs during normal operation and is also present during rebuild operations. As a result, erasure coding rebuild operations take longer and require more resources to complete than mirroring.

RAID-5 & RAID-6 Conversion to/from RAID-1

To convert from the mirroring failure tolerance method, first check that the vSAN cluster meets the minimum host or fault domain requirement. The online conversion process adds additional overhead to existing components when you apply the policy. It is always recommended to do a test conversion of virtual machines or their objects before performing this in production; it will help you understand the impact of the process so you can plan accordingly.

Because RAID-5/6 (Erasure Coding) offers guaranteed capacity savings over RAID-1 (Mirroring), any workload will see a reduced data footprint. It is important to consider the impact of erasure coding versus mirroring, particularly on performance, and whether the space savings are worth the potential impact. You can also refer to the VMware recommendations below.


  • Applications that are particularly sensitive to higher latencies and/or a reduction in IOPS such as ERP systems and OLTP applications should be thoroughly tested prior to production implementation.
  • Generally, read performance will see less of an impact from erasure coding than writes. Virtual SAN will first try to fulfill a read request from the client cache, which resides in host memory. If the data is not available in the client cache, the capacity tier of Virtual SAN is queried. Reads that come from the Virtual SAN capacity tier will generate a slight amount of resource overhead as the data is recomposed.
  • Workloads such as backups, with many simultaneous reads, could see better read performance when erasure coding is used in conjunction with larger stripe count rule in place. This is due to additional read locations, combined with a larger overall combined read IOPS capability. Larger clusters with more hosts and more disk groups can also lessen the perceived overhead.
  •  Ways to potentially mitigate the effects of the write overhead of erasure coding could include increasing bandwidth between hosts, use of capacity devices that are faster, and using larger/more queue depths. Larger network throughput would allow more data to be moved between hosts and remove the network as a bottleneck.
  • Faster capacity devices, capable of larger write IOPS performance, would reduce the amount of time to handle writes. Additional queue depth space through the use of controllers with larger queue depths, or using multiple controllers, would reduce the likelihood of contention within a host during these operations.
  • It is also important to consider that a cluster containing only the minimum number of hosts will not allow for in-place rebuilds during the loss of a host. To support in-place rebuilds, an additional host should be added to the minimum number of hosts.
  • It is a common practice in database workloads to mirror log disks and configure data disks for RAID-5. Because erasure coding is a storage policy, it can be independently applied to different virtual machine objects, providing simplicity and flexibility when configuring database workloads.

vSAN Deduplication and Compression

Deduplication and compression are two great space-efficiency features in vSAN. With the help of these techniques, you can reduce the overall storage consumption on vSAN. As we all know the concept of deduplication, since it has been in use for a long time by multiple storage and backup vendors, I am not going to explain much about it here, just a brief overview and how vSAN uses these features.

First, you have to understand that deduplication and compression work only on a vSAN cluster in all-flash mode (cache and capacity devices). vSAN keeps the most referenced data blocks in the cache tier while they are active/hot; as soon as the data is no longer active, it is moved to the capacity tier, and during this movement vSAN performs deduplication and compression.

Deduplication and compression processing occurs only when a data block is cold (no longer used) and is moved to the capacity tier. The advantage of this process is that applications write the same block, or over-write it multiple times, in the cache tier rather than the capacity tier, so those writes incur no deduplication and compression overhead.



If a block of data is already available in storage, a small reference is created to the existing block instead of writing the whole block again.

vSAN uses the SHA-1 hashing algorithm for deduplication, creating a “fingerprint” for each data block. This algorithm ensures that every data block is uniquely hashed, so identical blocks map to the same hash while different blocks do not. When a new data block comes in, it is hashed and compared against the existing table of hashes. If the data block is already present, vSAN adds a new reference to it; if not, a new hash entry is created and the block is persisted.


This helps squeeze more blocks into the same footprint. vSAN uses the LZ4 compression mechanism, which works on 4KB blocks and attempts to compress each 4KB block down to a size of 2KB.

If a new block is found to be unique, it goes through compression. If the LZ4 compression mechanism reduces the size of the block to 2KB or less, the compressed version of the block is committed to the capacity tier. If compression cannot reduce the size to less than 2KB, the full-sized block is stored unchanged.
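The destage pipeline described above (hash, deduplicate, then compress unique blocks) can be sketched as follows. This is an illustration, not vSAN's implementation: it uses zlib as a stand-in for LZ4 (LZ4 is not in the Python standard library) and an in-memory dict as the hash table; the 4KB block size and 2KB threshold follow the text.

```python
import hashlib
import zlib

BLOCK = 4096        # vSAN deduplicates at 4KB granularity
THRESHOLD = 2048    # store compressed only if it fits in 2KB or less

store: dict[str, bytes] = {}   # fingerprint -> persisted bytes
refs: dict[str, int] = {}      # fingerprint -> reference count

def destage(block: bytes) -> str:
    """Deduplicate a cold 4KB block, compressing it only if that pays off."""
    fp = hashlib.sha1(block).hexdigest()   # SHA-1 fingerprint, as vSAN uses
    if fp in store:
        refs[fp] += 1                      # duplicate: add a reference only
        return fp
    compressed = zlib.compress(block)      # stand-in for LZ4
    # keep the compressed form only if it meets the 2KB threshold
    store[fp] = compressed if len(compressed) <= THRESHOLD else block
    refs[fp] = 1
    return fp

a = bytes(BLOCK)          # a highly compressible block of zeros
destage(a)
destage(a)                # same block again: deduplicated, not re-stored
print(len(store), refs)   # one stored block, reference count 2
```

Duplicate blocks cost only a reference-count update, which is why repeated writes of the same data consume almost no extra capacity.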

Deduplication and Compression Points

  • The vSAN cluster must be all flash (cache and capacity devices).
  • Deduplication and compression must be enabled together; they cannot be enabled separately.
  • They are applied across the disks within the same disk group.
  • The SHA-1 hash algorithm is used for deduplication.
  • LZ4 is the mechanism used for compression.
  • Deduplication is performed at the 4KB block level.
  • vSAN compresses the deduplicated 4KB block down to 2KB or less; otherwise the original size remains.
  • A single device failure will make the entire disk group appear unhealthy.
  • Deduplication and compression are performed at the disk group level.
  • Deduplication is an IO-intensive operation, and most of the extra operations are performed during destaging.

How Deduplication Affects Read and Write IO

As I mentioned above, deduplication is an IO-intensive operation; the extra operations are performed during the destaging of data blocks.

Read – When performing a read, extra reads need to be sent to the capacity SSD in order to look up the logical address and find the physical address on the capacity SSD.

Write – During the destage process, extra writes are required to hash the data blocks from the cache tier. The hot data in the cache tier and the hash map tables help reduce the overhead. Nevertheless, this overhead has to be accounted for, and it stems from the 4KB block size being used.

Refer to the VMware docs for more information.

What’s New in VMware vSAN 6.7

In my previous post I mentioned the new vSAN version; let’s find out all the new features available with VMware vSAN 6.7.


4Kn drive support

vSAN 6.7 supports 4K Native disk drives.  4Kn drives provide higher capacity densities compared to 512n. This support enables you to deploy storage heavy configurations using 4Kn drives with higher capacity points.

vSphere and vSAN FIPS 140-2 validation

vSAN 6.7 encryption has been validated for the Federal Information Processing Standard 140-2. FIPS validated software modules have numerous advantages over special purpose hardware, because they can be executed on a general-purpose computing system, providing portability and flexibility. You can configure a vSAN host using any HCL-compatible set of drives in thousands of form factors, capacities and features, while maintaining data security using FIPS 140-2 validated modules.

HTML interface

The HTML5-based vSphere Client ships with vCenter Server alongside the Flex-based vSphere Web Client. The vSphere Client uses many of the same interface terminologies, topologies, and workflows as the vSphere Web Client. You can use the new vSphere Client, or continue to use the vSphere Web Client.

vRealize Operations within vCenter Server

The vSphere Client includes an embedded vRealize Operations plugin that provides basic vSAN and vSphere operational dashboards. The plugin provides a method to easily deploy a new vROps instance or specify an existing instance in the environment, one of which is required to access the dashboards. The vROps plugin does not require any additional vROps licensing.

vSAN on-disk format version 6

A data evacuation option is available with the upgrade to on-disk format version 6.

Windows Server Failover Clustering support

vSAN 6.7 supports Windows Server Failover Clustering by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.


Intelligent site continuity for stretched clusters

In the case of a partition between the preferred and secondary data sites, vSAN 6.7 will intelligently determine which site leads to maximum data availability before automatically forming quorum with the witness. The secondary site can operate as the active site until the preferred site has the latest copy of the data. This prevents the VMs from migrating back to the preferred site and losing locality of data reads.

Witness traffic separation for stretched clusters

You now have the option to configure a dedicated VMkernel NIC for witness traffic. The witness VMkernel NIC does not transmit any data traffic. This feature enhances data security by isolating the witness traffic from vSAN data traffic. It also is useful when the witness NIC has less bandwidth and latency compared to the data NICs.

Efficient inter-site resync for stretched clusters

Instead of resyncing all copies across the inter-site link for a rebuild or repair operation, vSAN 6.7 sends only one copy and performs the remaining resyncs from that local copy. This reduces the amount of data transmitted between sites in a stretched cluster.

Fast failovers when using redundant vSAN networks

When vSAN 6.7 is deployed with multiple VMkernel adapters for redundancy, failure of one of the adapters will result in immediate failover to the other VMkernel adapter. In prior releases, vSAN waits for TCP to timeout before failing over network traffic to healthy VMkernel adapters.

Adaptive resync for dynamic management of resynchronization traffic

Adaptive resynchronization speeds up time to compliance (restoring an object back to its provisioned failures to tolerate) by allocating dedicated bandwidth to resynchronization I/O. Resynchronization I/O is generated by vSAN to bring an object back to compliance. While minimum bandwidth is guaranteed for resynchronization I/Os, the bandwidth can be increased dynamically if there is no contention from the client I/O. Conversely, if there are no resynchronization I/Os, client I/Os can use the additional bandwidth.
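The policy described above can be sketched as a simple bandwidth-splitting rule. This is an illustration of the idea, not vSAN's scheduler; the 20% resync floor below is an assumption chosen for the example, not a documented constant.

```python
def allocate(total_mbps: float, resync_pending: bool,
             client_demand_mbps: float,
             resync_min_frac: float = 0.2) -> tuple[float, float]:
    """Split link bandwidth between resync and client I/O in the spirit of
    adaptive resync: guarantee a floor for resync under contention, but let
    either side absorb bandwidth the other is not using."""
    if not resync_pending:
        return 0.0, min(total_mbps, client_demand_mbps)  # clients get it all
    floor = total_mbps * resync_min_frac                 # guaranteed to resync
    client = min(client_demand_mbps, total_mbps - floor)
    resync = total_mbps - client                         # resync absorbs slack
    return resync, client

print(allocate(1000, True, 2000))   # heavy contention: (200.0, 800.0)
print(allocate(1000, True, 100))    # idle clients: (900.0, 100.0)
print(allocate(1000, False, 2000))  # no resync pending: (0.0, 1000.0)
```

The key property is the last two cases: resync I/O expands into bandwidth the clients leave idle, and disappears entirely when no resynchronization is pending.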

Consolidation of replica components

During placement, components belonging to different replicas are placed in different fault domains, due to the replica anti-affinity rule. However, when the cluster is running at high capacity utilization and objects must be moved or rebuilt, either because of maintenance operation or failure, enough FDs might not be available. Replica consolidation is an improvement over the point fix method used in vSAN 6.6. Whereas point fix reconfigures the entire RAID tree (considerable data movement), replica consolidation moves the least amount of data to create FDs that meet the replica anti-affinity requirement.

Host pinning for shared nothing applications

vSAN Host Pinning is a new storage policy that adapts the efficiency and resiliency of vSAN for next-generation, shared-nothing applications. With this policy, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment. You must work with your VMware representative to ensure the configuration is validated before deploying this policy.

Enhanced diagnostics partition (coredump) support

vSAN 6.7 automatically resizes the coredump partition on USB/SD media if there is free space on the device, so that coredumps and logs can be persisted locally. If there is insufficient free space or no boot device is present, then no re-partitioning is performed.
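You can inspect the diagnostic partition configuration on a host to confirm the result, for example:

```shell
# Show the active and configured coredump partition on this host
esxcli system coredump partition get

# List all partitions eligible to store a coredump
esxcli system coredump partition list
```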

vSAN destaging optimizations

vSAN 6.7 includes enhancements to improve the speed at which data is written from the caching tier to the capacity tier. These changes will improve the performance of VM I/Os and resynchronization speed.

Health check additions and improvements

vSAN 6.7 includes several new health checks and improvements to the health service for better proactive and reactive guidance.

vSAN Support insight. vSAN 6.7 has improved customer support by providing anonymized environmental data to VMware Global Support Services (GSS) for proactive support and faster troubleshooting. Customer enrollment in the Customer Experience Improvement Program (CEIP) is required to receive this benefit.

Swap object thin provisioning and policy inheritance improvements

VM swap files in vSAN 6.7 inherit the VM storage policy for all settings, including thin provisioning. In prior versions, the swap file was always thick provisioned.
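In prior versions, thin-provisioned swap could only be forced per host with an advanced setting; under vSAN 6.7 the VM storage policy governs it instead. For context, the pre-6.7 per-host workaround looked like this:

```shell
# Pre-6.7: allow thin-provisioned VM swap objects on vSAN (per-host advanced option)
esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1

# Verify the current value of the option
esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled
```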

VMware's official video walkthrough covers both the What's New highlights and a general technical overview.



Technical Overview


Since vSAN 6.7 was released only two days ago, I am digging deeper and will share more blog posts on it.

On-disk format version 6 in vSAN 6.7

vSAN 6.7 introduces a new on-disk format, version 6.0. After upgrading the vSAN infrastructure, you can upgrade the on-disk format to the latest version. Unlike the upgrade to on-disk format version 5.0, where no data evacuation was performed as the disks were reformatted, the upgrade to on-disk format version 6.0 does perform data evacuation.

Each disk group is removed, upgraded to on-disk format version 6.0, and added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, select Allow Reduced Redundancy from the vSphere Client.

What happens with Reduced Redundancy

When you choose Allow Reduced Redundancy, your VMs are unprotected for the duration of the upgrade. This method does not evacuate data to the other hosts in the cluster; it removes each disk group, upgrades the on-disk format, and then adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.

If you enable deduplication and compression during the upgrade to vSAN 6.7, you can select Allow Reduced Redundancy from the vSphere Client.

Navigate to Cluster -> vSAN -> General; the on-disk format version shows a warning message, and you can perform a pre-check to verify the upgrade status.

If the pre-check succeeds and confirms the cluster is ready to upgrade, you can perform the upgrade.


Based on the number of nodes in the cluster or your requirements, you can choose the Allow Reduced Redundancy option, then click Yes to proceed with the upgrade.

You can monitor the progress of the upgrade.

Once the upgrade is completed, you can check the disk format version; all disks will be on the latest version:

on-disk format version 6.0 
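The same check can be done per host from the command line; each vSAN-claimed disk reports its on-disk format version (the exact field name may vary slightly by release):

```shell
# List vSAN-claimed disks and filter for their on-disk format version
esxcli vsan storage list | grep -i "format version"
```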


vSAN On-Disk Format Upgrade

Missing vSAN Proactive Tests in vSphere 6.5

I have shared a post about what's new in vSAN 6.6.1, but I did not mention what happens to older features that are replaced by new ones. Recently I was looking into the Proactive Tests feature and could find only one test, the VM Creation Test. Previous versions included other tests that are missing in vSphere 6.5, so let's discuss why.

vSphere 6.5 with vSAN 

vSphere 6.0 with vSAN 

Multicast Performance Test 

vSAN 6.6.1 removed multicast in favor of unicast communication. Since multicast has been removed, the Multicast Performance Test is removed by default and no longer appears in the Proactive Tests list.
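You can confirm that a vSAN 6.6.1 or later cluster is communicating in unicast mode from any member host; each peer host appears as a unicast agent entry:

```shell
# List unicast agents; one entry per peer host means unicast mode is in use
esxcli vsan cluster unicastagent list
```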

Storage Performance Test 

VMware felt that benchmarking with arbitrary third-party tools may produce misleading results, and decided that HCIBench is the suitable solution for benchmarking. This confirms that the performance benchmark feature in the Proactive Tests section is deprecated.

vSAN 6.7 Announced

As we all know, vSphere 6.7 has been released, and along with it vSAN 6.7 is also released. This release offers features and capabilities that enable improved performance, usability, and consistency. The many improvements to the management and monitoring tools are matched by lower-level performance and application-consistency improvements.

vSAN 6.7 Features 

These are the new features supported in vSAN 6.7; I will discuss each of them in detail in upcoming blog posts.

  • HTML-5 User Interface support
  • Fast Network Failovers
  • Native vRealize Operations dashboards in the HTML-5 client
  • Support for Microsoft WSFC using vSAN iSCSI
  • New Health Checks
  • 4K Native Device Support
  • FIPS 140-2 Level 1 validation

Features Optimized

  • Adaptive Resync
  • Witness Traffic Separation for Stretched Clusters
  • Preferred Site Override for Stretched Clusters
  • Efficient Resync for Stretched Clusters
  • Efficient and consistent storage policies
  • Enhanced Diagnostic Partition


How To Configure Existing vSAN With New vCenter

In this post I am sharing how to configure an existing vSAN cluster with a new vCenter Server. This is very useful when vCenter has crashed and cannot be recovered, or in any special case where we need a new vCenter but must keep the existing vSAN cluster.

What will happen If vCenter Server is not online ?

vSAN will continue working and all virtual machines will keep running, but the full functionality of the vSphere Web Client will not be available. Even without vCenter, you can check the health of vSAN by logging in directly to any ESXi host in the vSAN cluster.
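For example, logging in to any member host over SSH, you can query cluster membership and run the host-side health checks without vCenter (output details vary by release):

```shell
# Show this host's view of the vSAN cluster (membership, master/backup roles)
esxcli vsan cluster get

# Run the vSAN health checks locally on the host
esxcli vsan health cluster list
```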

When vCenter Server is back online, we can get back to managing the environment using the vSphere Web Client. However, if vCenter Server is permanently lost, this might seem a bit scary. Fortunately, vSAN is resilient in this situation. A new vCenter Server can be deployed and the existing vSAN hosts can be added to the new vCenter Server—all without VM downtime.

Follow the steps below to configure a new vCenter Server with the existing vSAN cluster:

  1. Deploy a new vCenter Server Appliance (VCSA).
  2. Add all licenses to the new vCenter Server which you had in the previous vCenter Server.
  3. Create a data center.
  4. Create a cluster with vSAN enabled.
  5. Add all the existing vSAN hosts to the cluster.
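After the last step, you can verify from any host that it still sees the full vSAN cluster membership:

```shell
# Each host should report the same sub-cluster UUID and member count
esxcli vsan cluster get
```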

After adding all the nodes, navigate to Cluster -> Monitor -> vSAN -> Health.

You will see an error: "Failed - vCenter state is authoritative".

Each vSAN host maintains a record of its managing vCenter instance, and if it does not match, an error is triggered. This discrepancy in cluster configuration between the hosts and the new vCenter Server instance is reported in vSAN Health.

Cluster Health – vCenter State Is Authoritative 

This health check verifies that all hosts in the vSAN cluster are using the current managing vCenter Server as the source of truth for the cluster configuration, including the vSAN cluster membership list. During normal operation, the vCenter Server can publish the latest host membership list and update the configuration for all hosts in the cluster. This health check reports an error if the vCenter Server configuration is not synchronized with a member host and is no longer accepted as the source of truth. This issue can lead to auto-update failures for the configuration, including the vSAN cluster membership list.

What are the potential side effects of clicking Update ESXi configuration?

This action overrides the existing vSAN cluster configuration on all hosts in this vCenter Server cluster. Verify that all ESXi hosts from the vSAN cluster are added to the vCenter Server cluster; any host that is not part of this vCenter Server cluster is removed from the vSAN cluster.

Incorrect configuration settings can be pushed to all hosts in the vSAN cluster and can cause problems. Before you take action, verify that all vSAN configuration settings in vCenter Server are correct, such as fault domains, deduplication, encryption, and so on.


How to Fix this issue ?

To fix this problem, navigate to Cluster > Monitor > vSAN > Health. Select Cluster > vCenter state is authoritative, and click Update ESXi configuration.

This action synchronizes all hosts in the cluster with the current vCenter Server, and forces the vCenter Server configuration, including the vSAN cluster membership list, to all member hosts.

Note: Refer to the VMware knowledge base article to understand more about this error and the issues that could occur if the configuration of the vSAN hosts does not match the cluster configuration in vCenter Server.

I have tested this in my lab and it worked without any issue; vSAN is working fine and all VMs are running perfectly.


Reference VMware Blog