Learning Docker – Part 2

In my previous post, I explained how to install Docker on CentOS and covered a few important commands that can be used for verification and other basic operations. I also mentioned that the next topic would be the initial configuration tasks, which are required.

In this post, I will explain the initial configuration you have to perform after installing Docker.

How to Secure the Docker Access

The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the root user, and other users can only access it using sudo. The Docker daemon always runs as the root user.

We cannot share root credentials with anyone, so we have to give developers permission to run Docker. There are two options:

  • Provide sudo access to users
  • Add the user to the docker group

Configure Sudo Access


The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user. When users given access via this mechanism precede an administrative command with sudo, they are prompted to enter their own password. Once authenticated, and assuming the command is permitted, the administrative command is executed as if run by the root user.

Log in to the system as the root user.

Create a normal user account using the useradd command

# useradd username

Set a password for the new user using the passwd command

# passwd username

Let's try to run a docker command with the new user we created.

Now let's check the sudo permission.

We confirmed there is no sudo permission for this user, so we have to add the user to the sudoers file. You can add this user to an existing group already present in sudoers, or create a new group and configure that in sudoers.


  1. Edit the /etc/sudoers file using any suitable editor; here I use the vi editor.

Note:- The sudoers file defines the policies applied by the sudo command.

  2. Find the lines in the file that grant sudo access to users in the group wheel when enabled.

Wheel is a default group available in sudoers. Alternatively, you can create a new group and add it to the sudoers file, which can then be used for granting sudo permission.

# cat /etc/sudoers     - Use this command to check the contents of the sudoers file

## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

  3. Remove the comment character (#) at the start of the second line. This enables the configuration option.
  4. Save your changes and exit the editor.

Note:- admin is a group I have created for configuring sudo access.
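For example, for a group named admin like the one created here, the corresponding sudoers entry would look like this (a sketch; the group name is specific to this setup):

```
## Allows people in group admin to run all commands
%admin        ALL=(ALL)       ALL
```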

If you don’t want to use sudo when you use the docker command, add the users to the docker group, which is created when you install and enable Docker. When the Docker daemon starts, it makes the ownership of the Unix socket read/writable by the docker group.

Add user to Docker Group 

# usermod -aG docker username

I have added the vmarena user to the docker group. Now log out and log back in for the group membership to become active.

Verify that you can run docker commands without sudo.

$ docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Configure Docker to start on boot

You can use systemctl to manage which services start when the system boots .

# systemctl enable docker

To disable this behavior, use disable instead of enable:

# sudo systemctl disable docker

Note:- A user configured only with the docker group will not have permission to perform this; sudo permission is required, or you need to use the root account.

Options to Check  Docker Status

You have multiple options to check the Docker service status; find them below.

Use the docker info command to display system-wide information such as status, available containers, images, OS, resources, etc.

You can also use the operating system utilities: systemctl is-active docker or service docker status.

Finally, you can check the process list for the dockerd process, using commands like ps or top.
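Putting the options above together, a quick status-check session might look like this (output will vary by system):

```
$ sudo docker info | head -5     # system-wide info; fails if the daemon is down
$ systemctl is-active docker     # prints "active" when the service is running
$ ps -ef | grep dockerd          # look for the dockerd process
```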


The first two lessons only get Docker up and running; upcoming posts will help you understand how containers work in Docker, and you can follow along with the exercises. Stay tuned!

Suggested Posts

Docker Architecture and Components

How Container Differ from Physical and Virtual Infrastructure

Learning Docker – Part 1

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, YouTube

Learning Docker – Part 1

In my previous post I shared information about Docker architecture and components, with reference to the Docker documentation and training videos. Now it's time to start playing with Docker. This first learning part will cover the basic requirements to install Docker, how to install it, and some useful Linux commands you should be aware of while using Docker.

Let me share a bit about my lab, where I learn Docker. I use a nested setup of vSphere 6.5 with CentOS 7 virtual machines to learn Docker.

  • Hypervisor: vSphere 6.5
  • Operating System: CentOS 7 64-bit
  • Docker: Docker CE (Community Edition)


Requirement to Run Docker CE (Community Edition)

  • A maintained version of CentOS 7 64-bit
  • The centos-extras repository must be enabled.

Note: - This is enabled by default; in case it is disabled, you have to re-enable it.

Refer: - CentOS Wiki

  • The overlay2 storage driver is recommended.

Options to Install Docker

You can install Docker CE in three ways:

  • Set up Docker’s repositories and install from them. This makes installation and upgrade tasks easy and is the approach recommended by Docker; we will proceed with this option. It requires internet access.
  • Download the RPM package, install it manually, and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

Refer – Docker Docs to Understand this

  • Use automated convenience scripts to install Docker; these are used in testing and development environments.

Refer – Docker Docs to Understand this

Install Docker 

To install Docker CE, first we need to set up the Docker repository using YUM, and then we will install Docker from the repository. If you have experience with Linux, it will be very easy.

Setting up the Repository

Log in to your CentOS machine through the console or a PuTTY session.

  • Verify the operating system details of the machine where you are going to set up the repository and install Docker.

Follow the basic commands shown in the image to check the operating system, kernel, and flavor of Linux you are using.

The uname command can be used to view system information.

If you want to know more about the uname command, use uname --help.
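For instance, a minimal check of the kernel and distribution could look like this:

```shell
# Show kernel name, release and machine architecture in one line
uname -s -r -m
# Show the distribution name and version (present on CentOS 7 and most modern distros)
cat /etc/os-release
```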

  • Install required packages for setting up the repository

yum-utils provides the yum-config-manager utility; device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.

$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

Set up the stable repository using the command below. You always need the stable repository, even if you want to install builds from the edge or test repositories as well.

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Enabling the edge and test repositories is optional.

These repositories are included in the docker-ce.repo file, but they are disabled by default. You can enable them alongside the stable repository using the commands below.

$ sudo yum-config-manager --enable docker-ce-edge

$ sudo yum-config-manager --enable docker-ce-test

You can disable the edge or test repository by running the yum-config-manager command with the --disable flag. To re-enable it, use the --enable flag.

To disable the edge repository, use the command below:

$ sudo yum-config-manager --disable docker-ce-edge

Note: Starting with Docker 17.06, stable releases are also pushed to the edge and test repositories.

More about Repo

docker-ce.repo is located in /etc/yum.repos.d/; you can view the configuration of the file using the less, more, or cat command.

$ less /etc/yum.repos.d/docker-ce.repo 

If there is no enabled line in this configuration file, it means the repo is enabled. The image was taken before enabling the edge repository.

You can also check the enabled repos using the command below:

$ yum repolist enabled

This image shows the available repos before enabling the edge and test repositories.

$ yum repolist enabled

This image shows the available repos after enabling the edge and test repositories.

Now we are almost there; let's install Docker and understand some basic commands required for beginners.

Install Docker CE

You have two options to install Docker:

  • Install the latest version
  • Install a specific version of Docker from the repo

To install a specific version of Docker CE, first check the available versions in the repo, then select and install one.

List and sort the versions available in your repo. (The command sorts results by version number, highest to lowest.)

$ yum list docker-ce --showduplicates | sort -r

The list returned depends on which repositories are enabled, and is specific to your version of CentOS (indicated by the .el7 suffix in this example).

Install a specific version by its fully qualified package name, which is the package name (docker-ce) plus the version string (2nd column) up to the first hyphen, separated by a hyphen (-), for example, docker-ce-18.03.0.ce.

$ sudo yum install docker-ce-<VERSION STRING>
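For example, if the list shows 18.03.0.ce-1.el7.centos in the second column (the version shown here is illustrative; use whatever your repo lists), the full flow would be:

```
$ yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64    18.03.0.ce-1.el7.centos    docker-ce-stable
$ sudo yum install docker-ce-18.03.0.ce
```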

Note:- I am not going to perform this, because in my test setup I want to try the latest version.

Installing the latest version of Docker CE is very simple; just run the command below to get the latest version of Docker. :)

$ sudo yum install docker-ce

If prompted to accept the GPG key, verify that the fingerprint matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35, and if so, accept it.

Got multiple Docker repositories?

If you have multiple Docker repositories enabled, installing or updating without specifying a version in the yum install or yum update command always installs the highest possible version, which may not be appropriate for your stability needs.

Now Docker is installed on your CentOS machine, but it will not start automatically; you have to start it manually. A docker group is also created, but no users are added to it.

First, verify the status of the Docker service by running:

$ sudo systemctl status docker

Start the Docker using below command

$ sudo systemctl start docker

To check the Docker version available on the system:

$ docker version

Type just docker on the command line; it will list all the available management commands and other commands.

Next, verify that Docker is installed correctly by running an image. The question is: if you are new to Docker, how will you find available images?

As I mentioned above, you can use the command below to check available images, or browse hub.docker.com.

$ docker search <keyword>

Run an image; you can use any image you need, or follow the example below to test.

$ sudo docker run ubuntu

$ sudo docker image list    - List downloaded images

$ sudo docker rmi -f ubuntu     - Remove an Image

Note:- The image above may show the commands without sudo because they were run from a root login.

You can refer to the Docker CLI documentation to understand the commands that can be used with Docker. Stay tuned; the next post will cover more Docker commands for the initial configuration.

Reference: Installing Docker

Suggested Posts

Docker Architecture and Components

How Container Differ from Physical and Virtual Infrastructure

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, YouTube

Docker Architecture and Components

In my previous post I explained what containers are and how they differ from physical and virtual infrastructure. I also mentioned Docker in that post, but did not share more details, because I did not want you to get confused between containers and Docker.

In this post I will explain Docker: what is Docker, the Docker components, how containers are connected to Docker, and more. Further posts will follow soon to help you start playing with containers in your lab.

What is Docker?

Docker is an open-source platform used to package, distribute, and run applications. Docker provides an easy and efficient way to encapsulate an application and its dependencies into a single Docker image, which can be shared through a central Docker registry. The Docker image is used to launch a Docker container, which makes the contained application available from the host where the container is running.

In simple words, Docker is a containerization platform: an OS-level virtualization method used to deploy and run a distributed application and all its dependencies together in the form of a Docker container. The Docker platform removes the hypervisor layer; it runs directly on top of the bare-metal operating system. Using the Docker platform, you can run multiple isolated applications or services on a single host, all accessing the same OS kernel, and ensure that an application works seamlessly in any environment.

Containers can run on any bare-metal system running a supported version of Linux, Windows, or macOS, and on cloud instances; they can also run in virtual machines deployed on any hypervisor.

For developers it might be easy to understand the concept of Docker, but for system administrators it may be difficult. Don't worry; here I will explain the components of Docker and how they are used.

Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experimenting with container-based apps.

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business critical applications in production at scale.

Docker Components

What is Docker Engine?

Docker Engine is the core of the Docker system; it is the application installed on the host machine. The engine is a client-server application with the following components:

  • A server, which is a type of long-running program, called a daemon process (the dockerd command).
  • A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client (the docker command).

The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.

The daemon creates and manages Docker objects, such as images, containers, networks, and volumes

Docker architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
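Since the client and daemon communicate over a REST API on the local Unix socket, you can also query the daemon directly; for example, with a curl build that supports Unix sockets:

```
$ curl --unix-socket /var/run/docker.sock http://localhost/version
```

This returns the same version information the docker version client command displays, as JSON.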

Note: - Docker engine and Architecture information is from Docker Documentation

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

Docker Client

The Docker client is the key component many Docker users use to interact with Docker. When you run docker commands, the client sends them to dockerd, which carries them out. The docker command uses the Docker API, and a Docker client can communicate with more than one daemon.

Docker Registry

The Docker registry is the place where Docker images are stored; it can be a public registry or a local registry. Docker Hub and Docker Cloud are public registries available to everyone; the other option is to create your own private registry. Docker is configured to look for images on Docker Hub by default, and if you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).

How Docker Registry Works?

When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.

Docker Store allows you to buy and sell Docker images or distribute them for free.


You also have the option to buy a Docker image containing an application or service from a software vendor and use the image to deploy the application into your testing, staging, and production environments. You can upgrade the application by pulling the new version of the image and redeploying the containers.

Docker Environment

The Docker environment is the combination of the Docker Engine and Docker objects. I have explained the Docker Engine and some objects above; now let's look at the objects themselves. Docker objects are images, containers, networks, volumes, and plugins.


Images

An image is a read-only template with instructions for creating a Docker container. You can create an image with additional customization from a base image, or use images created by others and published in a registry.

Docker uses a smart layered file system, where the base layer is read-only and the top layer is writable. When you try to write to a base layer, a copy is created in the top layer, and the base layer remains unchanged. The base layer can be shared, since it is read-only and never changes.

For example, you might build an image based on the CentOS image that installs a web server and your application, as well as the configuration details needed to make your application run.

How to build Your Own Image

To build your own image, create a Dockerfile with a simple syntax defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers that have changed are rebuilt; this is what makes images so lightweight, small, and fast.
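As a sketch, a minimal Dockerfile for the CentOS-plus-web-server example above might look like this (the file names and image tag are hypothetical):

```
# Each instruction below produces one layer in the resulting image
FROM centos:7                       # base layer (read-only)
RUN yum install -y httpd            # layer: install the web server
COPY index.html /var/www/html/      # layer: add the application content
CMD ["httpd", "-DFOREGROUND"]       # default command when a container starts
```

Build it with docker build -t my-web-app . — on subsequent builds, only the changed layers are rebuilt.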


Containers

In simple words, a container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.


Volumes

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker.

Advantages of Volumes over Bind Mounts

  • Volumes are easier to back up or migrate than bind mounts.
  • You can manage volumes using Docker CLI commands or the Docker API.
  • Volumes work on both Linux and Windows containers.
  • Volumes can be more safely shared among multiple containers.
  • Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
  • New volumes can have their content pre-populated by a container.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.

Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes.
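A short named-volume session, with hypothetical names, might look like this:

```
$ docker volume create app-data                 # create a Docker-managed volume
$ docker run -d --name db -v app-data:/var/lib/data centos:7 sleep infinity
$ docker volume inspect app-data                # shows the mountpoint under /var/lib/docker/volumes
```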

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

Bind mounts

Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.

The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead. You can’t use Docker CLI commands to directly manage bind mounts.
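The difference is visible directly in the -v syntax (the paths and names here are hypothetical):

```
$ docker run -v /srv/app/config:/etc/app centos:7 ls /etc/app   # bind mount: explicit host path
$ docker run -v app-config:/etc/app centos:7 ls /etc/app        # named volume: Docker manages the location
```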


Networks

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality:

  • Bridge: The bridge is the default network driver in Docker; if you don’t specify a driver, a bridge network is created by default. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
  • Host: Using host networking with standalone containers removes the network isolation between the container and the Docker host.

Note:- For swarm services, host networking is only available on Docker 17.06 and higher.

  • Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
  • Macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
  • None: Disable all networking for the container. Usually used in conjunction with a custom network driver.

Note:- none is not available for swarm services.

  • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from the Docker Store or from third-party vendors.

Which Network driver is suitable?

  • User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
  • Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
  • Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
  • Third-party network plugins allow you to integrate Docker with specialized network stacks.
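For example, a user-defined bridge network with name-based discovery between two containers (the network and container names are hypothetical):

```
$ docker network create --driver bridge app-net
$ docker run -d --name web --network app-net nginx
$ docker run --rm --network app-net centos:7 curl -s http://web   # "web" resolves via the network's DNS
```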

Most of the above network modes apply to all Docker installations. However, a few advanced features are only available to Docker EE customers.

Docker EE networking features

Two features are only possible when using Docker EE and managing your Docker services using Universal Control Plane (UCP):

  • The HTTP routing mesh allows you to share the same network IP address and port among multiple services. UCP routes the traffic to the appropriate service using the combination of hostname and port requested by the client.
  • Session stickiness allows you to specify information in the HTTP header which UCP uses to route subsequent requests to the same service task, for applications that require stateful sessions.


Services

Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate using the Docker API. A service allows you to define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application. Docker Engine supports swarm mode in Docker 1.12 and higher.


Swarm

Docker also provides a high-availability cluster, called a swarm. By using a swarm you get features like scaling and load balancing; your apps need to be stateless for failover to happen automatically. There are many more features; you can find detailed information here.

In a swarm, you can deploy your app to a number of nodes running on a number of Docker Engines, and these engines can be on different machines, or even in different data centers, or some in Azure and some in AWS. If any one of the nodes crashes or disconnects, the other nodes automatically take over the load and create a new node to replace the missing one.
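As a single-node sketch of these ideas:

```
$ docker swarm init                                          # this engine becomes a manager
$ docker service create --name web --replicas 3 -p 8080:80 nginx
$ docker service ls                                          # desired vs. running replica counts
$ docker service scale web=5                                 # the swarm converges to the new desired state
```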

Note:- This is one of the important topics; it needs more detail than can be covered in this post, so I will explain it in another post with examples. In the meantime, you can find more details in the Docker Docs.

Docker Underlying Technology

Docker is written in Go and takes advantage of several features of the Linux kernel to deliver its functionality.


Namespaces

Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.

These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

Docker Engine uses namespaces such as the following on Linux:

  • The pid namespace: Process isolation (PID: Process ID).
  • The net namespace: Managing network interfaces (NET: Networking).
  • The ipc namespace: Managing access to IPC resources (IPC: InterProcess Communication).
  • The mnt namespace: Managing filesystem mount points (MNT: Mount).
  • The uts namespace: Isolating kernel and version identifiers. (UTS: Unix Timesharing System).
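You can see the pid namespace at work: the first process in a container sees itself as PID 1:

```
$ docker run --rm centos:7 sh -c 'echo "my PID inside the container is $$"'
my PID inside the container is 1
```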

Control groups

Docker Engine on Linux also relies on another technology called control groups (cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
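For example, on a cgroup v1 host such as CentOS 7, a memory limit set with -m is visible from inside the container:

```
$ docker run --rm -m 256m centos:7 cat /sys/fs/cgroup/memory/memory.limit_in_bytes
268435456
```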

Union file systems

Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.

Container Format

Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.

Refer Docker Documentation to understand more

Also Watch Docker Training Videos

Suggested Posts

How Container Differ from Physical and Virtual Infrastructure

Docker Learning Part 1

Docker Learning Part 2

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, YouTube

How Container Differ from Physical and Virtual Infrastructure

We all hear a lot about containers, but do you know what a container is and how it differs from the physical and virtual worlds? In this post I will share information about how containers differ from physical and virtual infrastructure, and about container platforms.

Physical World

In the beginning, we had physical machines; we installed an operating system on them and then deployed applications on top of that. Deploying an application in this world had a lot of challenges and was time-consuming, because every time we wanted to deploy a new application we had to buy a new physical server. Buying a server meant going through many processes: finding suitable hardware, coordinating with the vendor, the finance team, delivery, and installation, and then operating system installation, licensing, storage, the application, etc.

Moreover, if you needed more applications, you would require more physical machines, each with its own operating system installed and the application; the license and cost for this were very high.

And we were not able to utilize the complete resources of those physical machines: a waste of power, cooling, raw materials, data center floor space, etc.

How we solved this?

VMware came into the world with their innovative idea, the virtual machine, and this was a great opportunity to start moving the physical world to virtualization. That was the beginning: VMware released their hypervisor, which is installed on the physical machine. The hypervisor owns the computing resources and creates multiple virtual machines.


Virtual machines have their own virtual hardware, where we install the desired operating system and then install an application on top of the OS. By moving to virtualization, we were able to utilize the maximum resources with less power, cooling, space, etc. Now we have many hypervisors in the market: ESXi, Hyper-V, KVM, etc.

Still, we had challenges like hypervisor cost, support, virtual machine guest operating systems, licenses, etc.

How to overcome this?

Virtualization has been a booming technology for many years, and still is in many regions. But when we really look into actual needs, challenges remain, and containers came along to overcome them.

There are things common to all three models: computing resources, an operating system, OS patching, anti-virus, etc.

Finally, let's look at containers.

The basic architecture of containers is shown in the image below: we have a physical machine with an operating system running on the bare metal, then containers on top of the operating system, and within each container an application runs. Here the operating system owns and manages the physical machine's computing resources.

What is a Container?

A container consists of an entire runtime environment: an application, plus all its dependencies, libraries, other binaries, and the configuration files needed to run it, bundled into one package. Containers allow you to deploy quickly, reliably, and consistently regardless of the deployment environment.

Containers are lightweight and use less compute. Containers virtualize CPU, memory, storage, and network resources at the OS level, providing developers with a sandboxed view of the OS, logically isolated from other applications.

There are many container formats available. Docker is a popular, open-source container format that is supported on Google Cloud Platform and by Google Kubernetes Engine .

Suitable Linux distributions for Containers

Here I will share a few Linux distributions that are commonly used:

  • CoreOS — one of the first lightweight container operating systems built for containers.
  • RancherOS — a simplified Linux distribution built from containers, specifically for running containers.
  • Photon OS — a minimal Linux container host, optimized to run on VMware platforms.
  • Project Atomic Host — Red Hat's lightweight container OS has versions based on CentOS and Fedora, and a downstream enterprise version in Red Hat Enterprise Linux.
  • Ubuntu Core — the smallest Ubuntu version, used as a host operating system for IoT devices and large-scale cloud container deployments.

Is Linux the only OS suitable for containers?

The answer is no: Windows Server 2016 and Windows 10 can also run containers. These are Docker containers, and they can be managed from any Docker client or from Microsoft's PowerShell. Microsoft Hyper-V also supports containers: Windows containers running in a Hyper-V virtual machine for added isolation.

Windows containers can be deployed on Windows Server 2016, on the streamlined Server Core install, or on the Nano Server install option that is specifically designed for running applications inside containers or virtual machines.

More about Docker on Windows

Now you may be thinking about VMware; yes, of course, VMware also has its own container platforms:

  • Customers with an existing VMware infrastructure can run containers on its Photon OS container Linux.
  • vSphere Integrated Containers (VIC), which can be deployed directly to a standalone ESXi host, or deployed to vCenter Server and managed as if they were virtual machines.

In addition to Linux, Windows, and VMware, Docker also runs on popular cloud platforms including Amazon EC2, Google Compute Engine, Microsoft Azure, and Rackspace.

Don't be confused by the word Docker; we will explain it in upcoming blog posts.

What about using virtualization and containers together?

This is another good option, because containers can run on top of virtual machines to increase isolation and security. Another advantage: by using hardware virtualization, you can manage the infrastructure required to run containers from a single pane of glass, with many advanced hypervisor features.

At the end of the day it is all about your requirements; both methods have their benefits. Moreover, rather than replacing virtual machines with containers, I prefer to use containers within a virtual infrastructure to achieve availability, backup, easy management, and so on.

I will be sharing more posts on how to start using containers. Stay tuned!

Suggested Posts

Docker Architecture and Components

Docker Learning Part 1

Docker Learning Part 2

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, and YouTube.

Test Your vSphere 6.5 Knowledge - VMware vSphere 6.5 Quiz

Test Your Knowledge Level on vSphere 6.5?

Challenge yourself with this nine-question interactive quiz that explores the latest updates to vSphere. After you answer each question, we’ll share additional insights, which means you’ll learn more about the new features and capabilities that you only get with vSphere 6.5.

Challenge yourself with this short vSphere 6.5 quiz.

Start the vSphere 6.5 quiz now!



vSphere DRS (Distributed Resource Scheduler)

Distributed Resource Scheduler (DRS) is a resource management solution for vSphere clusters that provides optimized performance for application workloads. DRS helps you run workloads efficiently on the resources in a vSphere environment.

Download the official DRS white paper: vSphere Resources and Availability.

DRS determines the current resource demand of workloads and the current resource availability of the ESXi hosts that are grouped into a single vSphere cluster. DRS provides recommendations throughout the life cycle of the workload, from the moment it is powered on to the moment it is powered down.

DRS operations consist of generating initial placements and load balancing recommendations based on resource demand, business policies and energy-saving settings. It is able to automatically execute the initial placement and load balancing operations without any human interaction, allowing IT-organizations to focus their attention elsewhere.

DRS provides several additional benefits to IT operations:

  • Day-to-day IT operations are simplified as staff members are less affected by localized events and dynamic changes in their environment. Loads on individual virtual machines invariably change, but automatic resource optimization and relocation of virtual machines reduce the need for administrators to respond, allowing them to focus on the broader, higher-level tasks of managing their infrastructure.
  • DRS simplifies the job of handling new applications and adding new virtual machines. Starting up new virtual machines to run new applications becomes more of a task of high-level resource planning and determining overall resource requirements, than needing to reconfigure and adjust virtual machines settings on individual ESXi hosts.
  • DRS simplifies the task of extracting or removing hardware when it is no longer needed or replacing older host machines with newer and larger capacity hardware.
  • DRS simplifies grouping of virtual machines to separate workloads for availability requirements or unite virtual machines on the same ESXi host machine for increased performance or to reduce licensing costs while maintaining mobility.

DRS generates recommendations for initial placement of virtual machines on suitable ESXi hosts during power-on operations and generates load-balancing recommendations for active workloads between ESXi hosts within the vSphere cluster. DRS leverages VMware vMotion technology for live migration of virtual machines. DRS responds to cluster and workload scaling operations and automatically generates resource relocation and optimization decisions as ESXi hosts or virtual machines are added to or removed from the cluster. To enable the use of DRS migration recommendations, the ESXi hosts in the vSphere cluster must be part of a vMotion network. If the ESXi hosts are not connected to a vMotion network, DRS can still make initial placement recommendations.
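The placement logic described above can be illustrated with a deliberately simplified sketch. This is not VMware's actual algorithm (real DRS weighs demand, entitlements, policies, and migration cost), and the host names and capacity units are invented; it only shows the greedy idea of placing each workload on the host with the most free capacity:

```python
# Toy sketch of greedy initial placement, NOT the real DRS algorithm.
# Host names and capacity units are made up for illustration.

def place_vm(hosts, vm_demand):
    """Pick the host with the most free capacity that can fit the VM."""
    candidates = [h for h in hosts if h["capacity"] - h["used"] >= vm_demand]
    if not candidates:
        return None  # no host can satisfy the demand
    best = max(candidates, key=lambda h: h["capacity"] - h["used"])
    best["used"] += vm_demand  # account for the newly placed workload
    return best["name"]

hosts = [
    {"name": "esxi-01", "capacity": 100, "used": 70},
    {"name": "esxi-02", "capacity": 100, "used": 30},
]

print(place_vm(hosts, 20))  # → esxi-02, the least-loaded host
```

Real DRS recalculates the cluster state on every invocation, so each placement decision reflects the current demand rather than a one-time snapshot.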

Clusters can consist of heterogeneous or homogeneous hardware configured ESXi hosts. ESXi hosts in a cluster can differ in capacity size. DRS allows hosts that have a different number of CPU packages or CPU cores, different memory or network capacity, but also different CPU generations. VMware Enhanced vMotion Compatibility (EVC) allows DRS to live migrate virtual machines between ESXi hosts with different CPU configurations of the same CPU vendor. DRS leverages EVC to manage placement and migration of virtual machines that have Fault Tolerance (FT) enabled.

DRS provides the ability to contain virtual machines on selected hosts within the cluster by using VM-to-Host affinity groups for performance or availability purposes. DRS resource pools allow compartmentalizing cluster CPU and memory resources. A resource pool hierarchy allows resource isolation between resource pools and, at the same time, optimal resource sharing within them.

DRS Automation Levels

Three automation levels are available, allowing DRS to provide recommendations for initial placement and load-balancing operations. DRS can operate in manual, partially automated, or fully automated mode, allowing the IT operations team to remain fully in control or to let DRS operate without human interaction.

Manual Automation Level

The manual automation level expects the IT operations team to be in complete control. DRS generates initial placement and load-balancing recommendations, and the team can choose to ignore or carry out each recommendation. If a virtual machine is powered on in a DRS-enabled cluster, DRS presents a list of mutually exclusive initial placement recommendations for the virtual machine. If a cluster imbalance is detected during a DRS invocation, DRS presents a list of recommended virtual machine migrations to improve the cluster balance. With each subsequent DRS invocation, the state of the cluster is recalculated and a new list of recommendations may be generated.

Partially Automated Level

DRS generates initial placement recommendations and executes them automatically, while load-balancing recommendations are generated for the IT operations team to review and execute. Note that the introduction of a new workload can impact currently active workloads, which may result in DRS generating load-balancing recommendations. It is therefore recommended to review the DRS recommendation list after power-on operations if the cluster is configured to operate in partially automated mode.

Fully Automated Level

DRS operates autonomously in fully automated mode and requires no human interaction. DRS generates initial placement and load-balancing recommendations and executes them automatically. Note that the migration threshold setting configures the aggressiveness of load-balancing migrations.

Per-VM Automation Level

DRS allows a per-VM automation level to override the cluster's default automation level for individual virtual machines. This lets IT operations teams still benefit from DRS at the cluster level while isolating particular virtual machines, which can be helpful when some virtual machines are not allowed to move due to licensing or strict performance requirements. DRS still considers the load and demand of these virtual machines during load-balancing and initial placement operations; it just doesn't move them around anymore.



vCenter Update Manager

Update Manager enables centralized, automated patch and version management for VMware vSphere and offers support for VMware ESXi hosts, virtual machines, and virtual appliances.

With Update Manager, you can perform the following tasks:

  • Upgrade and patch ESXi hosts.

  • Install and update third-party software on hosts.

  • Upgrade virtual machine hardware, VMware Tools, and virtual appliances.

Update Manager requires network connectivity with VMware vCenter Server. Each installation of Update Manager must be associated (registered) with a single vCenter Server instance.

The Update Manager module consists of a server component and of a client component.

You can use Update Manager with either vCenter Server that runs on Windows or with the vCenter Server Appliance.

If you want to use Update Manager with vCenter Server, you have to perform Update Manager installation on a Windows machine. You can install the Update Manager server component either on the same Windows server where the vCenter Server is installed or on a separate machine. To install Update Manager, you must have Windows administrator credentials for the computer on which you install Update Manager.

If your vCenter Server system is connected to other vCenter Server systems by a common vCenter Single Sign-On domain, and you want to use Update Manager for each vCenter Server system, you must install and register Update Manager instances with each vCenter Server system. You can use an Update Manager instance only with the vCenter Server system with which it is registered.

The vCenter Server Appliance delivers Update Manager as an optional service. Update Manager is bundled in the vCenter Server Appliance.

In vSphere 6.5, it is no longer supported to register Update Manager to a vCenter Server Appliance during installation of the Update Manager server on a Windows machine.

The Update Manager client component is a plug-in that runs on the vSphere Web Client. The Update Manager client component is automatically enabled after installation of the Update Manager server component on Windows, and after deployment of the vCenter Server Appliance.

You can deploy Update Manager in a secured network without Internet access. In such a case, you can use the VMware vSphere Update Manager Download Service (UMDS) to download update metadata and update binaries.

Refer to the VMware documentation for more details.

Check this post to learn how to install and configure Update Manager on Windows vCenter 6.5.

What is Raw Device Mapping (RDM)

Raw device mapping (RDM) is an option in the vSphere environment that enables a storage LUN to be presented directly to a virtual machine from the storage array.

RDM is mostly used for configuring clusters (SQL, Oracle) between virtual machines or between physical and virtual machines, and where SAN-aware applications run inside a virtual machine. Compared to VMFS, RDM produces similar input/output (I/O) throughput for random workloads. For sequential workloads with small I/O block sizes, RDM provides a slight increase in throughput compared to VMFS, but the performance gap decreases as the I/O block size increases; RDM also has a slightly better CPU cost per I/O.

An RDM is mapped as a file in a separate VMFS volume that acts as a proxy for a raw physical storage device; the virtual machine can access the storage device directly through it. The RDM contains metadata for managing and redirecting disk access to the physical device. The mapping file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

Raw Device Mapping
A virtual machine has direct access to a LUN on the physical storage using a raw device mapping (RDM) file in a VMFS datastore.

RDM can be used in the following situations:

  • When SAN snapshot or other layered applications run in the virtual machine because RDM enables backup offloading systems by using features inherent to the SAN.

  • Clustering scenario that spans physical hosts, such as virtual-to-virtual clusters and physical-to-virtual clusters.

There are two compatibility modes for RDMs: virtual compatibility mode, in which the RDM behaves largely like a virtual disk file and supports features such as snapshots, and physical compatibility mode, which passes almost all SCSI commands through to the device for applications that need lower-level control.

Considerations and Limitations

  • Direct-attached block devices and certain RAID devices do not support RDM. The RDM uses a SCSI serial number to identify the mapped device; because block devices and some direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.

  • You cannot use the snapshot feature with RDMs in physical compatibility mode. Physical compatibility mode allows the virtual machine to manage its own storage-based snapshot or mirroring operations.

  • The snapshot feature is available for RDMs in virtual compatibility mode.

  • You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.

  • For vMotion support with RDMs,  maintain consistent LUN IDs for RDMs across all participating ESXi hosts.

  • Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are supported with Flash Read Cache.


RDM cannot be used in every situation, but it provides several benefits; a few of them are described below.

User-Friendly Persistent Names

Provides a user-friendly name for a mapped device. When you use an RDM, you do not need to refer to the device by its device name; you can refer to it by the name of the mapping file.


Dynamic Name Resolution

Stores unique identification information for each mapped device. VMFS associates each RDM with its current SCSI device, regardless of changes in the physical configuration of the server because of adapter hardware changes, path changes, device relocation, and so on.

Distributed File Locking

Makes it possible to use VMFS distributed locking for raw SCSI devices. Distributed locking on an RDM makes it safe to use a shared raw LUN without losing data when two virtual machines on different servers try to access the same LUN.

File Permissions

Makes file permissions possible. The permissions of the mapping file are enforced at file-open time to protect the mapped volume.

File System Operations

Makes it possible to use file system utilities to work with a mapped volume, using the mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to the mapping file and are redirected to operate on the mapped device.


Virtual Machine Snapshots

Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not available when the RDM is used in physical compatibility mode.


vMotion

vMotion is supported with RDM devices. The mapping file acts as a proxy to allow vCenter Server to migrate the virtual machine by using the same mechanism that exists for migrating virtual disk files.

SAN Management Agents

You can run some SAN management agents inside a virtual machine. Similarly, any software that needs to access a device by using hardware-specific SCSI commands can be run in a virtual machine; this kind of software is called SCSI target-based software. When you use SAN management agents, select physical compatibility mode for the RDM.

N-Port ID Virtualization (NPIV)

NPIV is a technology that allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This makes the HBA port appear as multiple virtual ports, each with its own ID and virtual port name. Virtual machines can then claim each of these virtual ports and use them for all RDM traffic.

Note: You can use NPIV only for virtual machines with RDM disks.

VMware works with vendors of storage management software to ensure that their software functions correctly in environments that include ESXi. A few such applications are listed below:

  • SAN management software
  • Storage resource management (SRM) software
  • Snapshot software
  • Replication software

Such software uses a physical compatibility mode for RDMs so that the software can access SCSI devices directly.

Note: Various management products are best run centrally (not on the ESXi machine), while others run well in virtual machines. VMware does not certify these applications or provide a compatibility matrix. To find out whether a SAN management application is supported in an ESXi environment, contact the SAN management software provider.


VMware Network Adapter Types

Depending on the guest operating system and its version, different network adapter types are available. In this post we discuss the different virtual network adapters.

VMware Network Adapter Types

The network adapter types that are available depend on the following factors:

  • The virtual machine compatibility, which depends on the host that created or most recently updated it.
  • Whether the virtual machine compatibility has been updated to the latest version for the current host.
  • The guest operating system.

The following NIC types are supported:


E1000E

Emulated version of the Intel 82574 Gigabit Ethernet NIC. E1000E is the default adapter for Windows 8 and Windows Server 2012.


E1000

Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.


Flexible

Identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher-performance VMXNET adapter.


Vlance

Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in 32-bit legacy guest operating systems. A virtual machine configured with this network adapter can use its network immediately.


VMXNET

Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.

VMXNET 2 (Enhanced)

Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.


VMXNET 3

A paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.


PVRDMA

A paravirtualized NIC that supports remote direct memory access (RDMA) between virtual machines through the OFED verbs API. All virtual machines must have a PVRDMA device and should be connected to a distributed switch. PVRDMA supports VMware vSphere vMotion and snapshot technology. It is available in virtual machines with hardware version 13 and guest operating system Linux kernel 4.6 and later.

For information about assigning a PVRDMA network adapter to a virtual machine, see the vSphere Networking documentation.

SR-IOV passthrough

Representation of a virtual function (VF) on a physical NIC with SR-IOV support. The virtual machine and the physical adapter exchange data without using the VMkernel as an intermediary. This adapter type is suitable for virtual machines where latency might cause failure or that require more CPU resources.

SR-IOV passthrough is available in ESXi 5.5 and later for guest operating systems Red Hat Enterprise Linux 6 and later, and Windows Server 2008 R2 with SP2. An operating system release might contain a default VF driver for certain NICs, while for others you must download and install it from a location provided by the vendor of the NIC or of the host.
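The adapter type ultimately ends up as a line in the VM's configuration file. As a hypothetical illustration (the device index and network name below are placeholders for your environment), a `.vmx` fragment selecting VMXNET 3 might look like this:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"        # "e1000" or "e1000e" would select those emulated types instead
ethernet0.networkName = "VM Network"    # the port group this vNIC connects to
```

In practice you would normally pick the adapter type in the vSphere Web Client rather than editing the `.vmx` file by hand.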

You can find the available adapter types in the image below.


If you're looking for compatibility information, check the VMware Compatibility Guide.



Enhanced vMotion Compatibility (EVC)

Enhanced vMotion Compatibility (EVC) is a vCenter Server cluster feature that allows virtual machines to vMotion, or migrate, across ESXi hosts equipped with dissimilar processors in the same cluster. EVC works by masking unsupported processor features, presenting a homogeneous processor front to all the virtual machines in a cluster. This means a VM can vMotion to any ESXi host in the cluster irrespective of the host's micro-architecture, examples of which include Intel's Sandy Bridge and Haswell. One thing to remember: all processors must be from a single vendor, either Intel or AMD. You simply cannot mix and match.
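Conceptually, an EVC baseline is close to the intersection of the feature sets of all hosts in the cluster: every host supports the baseline, so every host can accept any VM. The toy sketch below is a simplification (real EVC works on CPUID masks, not Python sets, and the feature lists here are illustrative):

```python
# Toy illustration of EVC-style feature masking; not VMware's implementation.

def evc_baseline(host_feature_sets):
    """The features every host supports: the intersection of all sets."""
    return set.intersection(*host_feature_sets)

# Illustrative per-generation feature sets (newer CPUs add features)
sandy_bridge = {"sse4.2", "aes", "avx"}
haswell = {"sse4.2", "aes", "avx", "avx2", "fma"}

baseline = evc_baseline([sandy_bridge, haswell])
print(sorted(baseline))  # only the features common to both hosts are exposed to VMs
```

This is also why adding an older host lowers the baseline: the intersection can only shrink, never grow.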


The main benefit is that you can add servers with the latest processors to your existing cluster(s) seamlessly and without incurring any downtime. More importantly, EVC provides you with the flexibility required to scale your infrastructure, lessening the need to decommission older servers prematurely, thus maximizing ROI. It also paves the way for seamless cluster upgrades once the decision to retire old hardware is taken.

Any Impacts

When a new family of processors is released to market, innovative microprocessor features and instruction sets are often included. These features include performance enhancements in areas such as multimedia, graphics or encryption. With this in mind try to determine in advance the type of applications you’ll be running in your vSphere environment. This gives you a rough idea of the type of processors you’ll be needing. This, in turn, allows you to predetermine the applicable EVC modes when mixing servers with processors from different generations. EVC modes are also dependent on the version of vCenter Server. This is shown in Figure 1 below (Intel based EVC modes)

Figure 1 - Intel based EVC modes (reproduced from VMware’s KB1003212)


To enable EVC, you must make sure the ESXi hosts in your cluster satisfy the following requirements:

  • Processors must be from the same vendor, either AMD or Intel.
  • Hosts must be properly configured for vMotion.
  • Hosts must be connected to the same vCenter Server.
  • Advanced virtualization features such as Intel VT and AMD-V must be enabled in the BIOS of every host.

Figure 3 - Enabling advanced CPU virtualization features

Use the VMware Compatibility Guide to assess your EVC options

The VMware Compatibility Guide is the best way to determine which EVC modes are compatible with the processors used in your cluster. The example below shows how to determine which EVC mode to use given three types of Intel processors.

The steps are as follows:

  • Select the ESXi version installed.
  • Hold down the CTRL key and select the type of processors from the CPU Series list.
  • Press the CPU/EVC matrix button to view the results.

Figure 4


The results tell us that we can use only the Merom or Penryn EVC modes. This means we have to sacrifice some features exclusive to the Intel i7 processor, and this is the stage at which you have to decide whether you're better off getting new servers rather than adding old ones to the cluster.

Check  How to enable EVC