XXX esx.problem.hyperthreading.unmitigated.formatonhost not found XXX Warning on ESXi 6.x

I recently applied the latest patches to vSphere 6.0, and after patching, the hosts displayed the following warning:

" XXX esx.problem.hyperthreading.unmitigated.formatonhost not found XXX ". This message appears after applying the patches from VMSA-2018-0020, which mitigate CVE-2018-3646 and introduce a new notification indicating the remediation status of the 'L1 Terminal Fault' (L1TF - VMM) vulnerability.

There are multiple options to resolve this using the CLI. If you are not experienced with the CLI, no worries; it is very easy to perform from the vSphere Client or Web Client using the steps below.

  1. Connect to the vCenter Server using either the vSphere Web or vSphere Client.
  2. Select an ESXi host in the inventory.
  3. Click the Manage tab of the vSphere 6.x host.
  4. Click the Settings sub-tab.
  5. Under the System heading, click Advanced System Settings.
  6. Click in the Filter box and search for VMkernel.Boot.hyperthreadingMitigation.
  7. Select the setting by name and click the Edit pencil icon.
  8. Change the configuration option to true (default: false).
  9. Click OK.
  10. Reboot the ESXi host for the configuration change to take effect.

Using ESXCLI to Perform this Operation

  1. SSH to an ESXi host or open a console where the remote ESXCLI is installed.
  2. Check the current runtime value of the HTAware Mitigation setting by running the command below:

# esxcli system settings kernel list -o hyperthreadingMitigation

  3. Enable HT-aware mitigation by running the command below:

# esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

  4. Reboot the ESXi host for the configuration change to take effect.

This is applicable to the following vSphere versions:

  • VMware vSphere ESXi 5.5
  • VMware vSphere ESXi 6.0
  • VMware vSphere ESXi 6.5
  • VMware ESXi 6.7

 

Reference - VMware KB

Reference - VMware Security Advisory



Learning Docker – Part 2

In my previous post, I explained how to install Docker on CentOS and covered a few important commands that can be used for verification and other basic operations. I also mentioned that the next topic would be the initial configuration tasks, which are required.

In this post, I will explain the initial configuration you have to perform after installing Docker.

How to Secure the Docker Access

The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The docker daemon always runs as the root user.

We cannot share root credentials with everyone, so we have to give developers permission to run Docker. There are two options:

  • Provide sudo access to users
  • Add user to Docker group

Configure Sudo Access

Sudo

The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user. When users given access via this mechanism precede an administrative command with sudo, they are prompted to enter their own password. Once authenticated, and assuming the command is permitted, the administrative command is executed as if run by the root user.
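As an illustration, sudoers policy lines look like the following (the %admin entry is a hypothetical custom group, matching the one created later in this post; always edit sudoers with visudo rather than directly):

```
## Allow members of group wheel to run any command as any user
%wheel   ALL=(ALL)   ALL

## Hypothetical custom group granted the same access
%admin   ALL=(ALL)   ALL
```

Each line reads: members of the group may run, on ALL hosts, as (ALL) target users, ALL commands.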

Log in to the system as the root user.

Create a normal user account using the useradd command

# useradd username

Set a password for the new user using the passwd command

# passwd username

Let's try to run a docker command with the new user we created.

Now let's check the sudo permissions.

We confirmed there is no sudo permission for this user, so we have to add the user to the sudoers file. You can add this user to an existing group already present in sudoers, or create a new group and configure that group in sudoers.

 

  1. Edit the /etc/sudoers file using any suitable editor; here I use the vi editor.

Note: The sudoers file defines the policies applied by the sudo command.

  2. Find the lines in the file that grant sudo access to users in the group wheel when enabled.

wheel is a default group available in sudoers; alternatively, you can create a new group and add it to the sudoers file to grant sudo permission.

# cat /etc/sudoers     - Use this command to check the contents of the sudoers file

## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

  3. Remove the comment character (#) at the start of the second line. This enables the configuration option.
  4. Save your changes and exit the editor.

Note: admin is a group I created for configuring sudo access.

If you don’t want to use sudo when you use the docker command, add the users to the docker group, which is created when you install and enable Docker. When the Docker daemon starts, it makes the Unix socket readable and writable by members of the docker group.

Add user to Docker Group 

usermod -aG docker username


I have added the vmarena user to the docker group. Now log out and log back in for the group membership to become active.
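If you want to double-check the membership, the id command lists a user's groups. This is a small sketch run for the current login session; the docker group only shows up for the new user after logging out and back in:

```shell
# Print the group names of the current user; after re-login, the list
# for the user added with "usermod -aG docker" should include "docker".
id -nG
```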

Verify that you can run docker commands without sudo.

$ docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Configure Docker to start on boot

You can use systemctl to manage which services start when the system boots.

# systemctl enable docker

To disable this, use disable instead of enable:

# sudo systemctl disable docker

Note: A user configured in the docker group will not have permission to perform this; sudo permission is required, or you need to use the root account.

Options to Check  Docker Status

You have multiple options to check the Docker service status; find them below.

Use the docker info command to display system-wide information such as status, available containers, images, OS, resources, etc.

You can also use the operating system utilities: systemctl is-active docker or service docker status.

Finally, you can check the process list for the dockerd process, using commands like ps or top.
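As a small sketch of the process-list option: pgrep exits non-zero when no matching process exists, so the fallback message prints on a host where Docker is stopped.

```shell
# Print the PID of the Docker daemon if it is running,
# otherwise report that it is not running.
pgrep -x dockerd || echo "dockerd is not running"
```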

 

The first two lessons only get Docker up and running; upcoming posts will help you understand how containers work in Docker, and you can follow along with the exercises. Stay tuned!

Suggested Posts

Docker Architecture and Components

How Container Differ from Physical and Virtual Infrastructure

Learning Docker – Part 1

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, and YouTube.


Linux on Azure App Service Environment is now GA

Nowadays we all hear a lot about Docker and containers. Containers started on the Linux platform, and now Microsoft has extended these capabilities to Linux in Azure: you can deploy a containerized web app in an Azure Virtual Network. The Azure App Service team announced the general availability of Linux on Azure App Service Environment (ASE), which combines the features of App Service on Linux and App Service Environment.

Now that ASE is generally available, Linux customers can take advantage of deploying Linux and containerized apps in an App Service Environment, which is ideal for deploying applications into a VNet for secure network access or for apps running at high scale.

What Do You Achieve by Deploying Linux on ASE?

Primarily, you can deploy your Linux web applications into an Azure virtual network (VNet), either by bringing your own custom container or by bringing just your code and using one of the built-in images.

  • You can bring a custom Docker container image from Docker Hub, Azure Container Registry, or your own private registry.
  • You can use one of the built-in images; many popular stacks are supported, such as Node, PHP, Java, and .NET Core, with more to come.

Windows, Linux, and containerized web applications can be deployed into the same ASE, sharing the same VNet. Remember that even though Windows and Linux web apps can be in the same App Service Environment, Windows and Linux web apps must be in separate App Service plans. With Linux on ASE, you will be using the Isolated SKU with Dv2 VMs and additional scaling capabilities (up to 100 total App Service plan instances, between Windows and Linux, in one ASE).

How do you decide which kind of ASE is best for your use case?

First, you have to choose the type of IP (public or private) you want to use to expose the apps hosted in your ASE. Depending on whether or not you want an Internet-accessible endpoint, there are two types of ASE you can create:

  • An external ASE with an Internet accessible endpoint.
  • An internal ASE with a private IP address in the VNet with an internal load balancer (ILB).

How to get started

You can create a Linux Web App into a new ASE by simply creating a new Web App and selecting Linux as the OS (built-in image), selecting Docker (custom container), or creating a new Web App for Containers (custom container).

If you need more detailed instructions, get started with creating your first Linux/containerized Web App in an ASE by referring to Create External ASE.

Pricing Update

Effective July 30, 2018, Linux and containerized apps deployed in an App Service Environment have returned to regular App Service on Linux and App Service Environment pricing. The 50 percent discount on the Linux App Service Plan from the public preview has been removed for general availability and is no longer being offered.

For more details, refer to Get started, and for more context, see How to configure networking for your ASE.

 


Learning Docker – Part 1

In my previous post, I shared information about Docker architecture and components, with reference to the Docker documentation and training videos. Now it is time to start playing with Docker. This first learning part covers the basic requirements for installing Docker, how to install it, and some useful Linux commands you should be aware of while using Docker.

Let me share details about the lab where I learn Docker: a nested setup of vSphere 6.5 with CentOS 7 virtual machines.

Hypervisor: vSphere 6.5
Operating System: CentOS 7 64-bit
Docker: Docker CE (Community Edition)

 

Requirement to Run Docker CE (Community Edition)

  • A maintained version of CentOS 7 (64-bit)
  • The centos-extras repository must be enabled.

Note: This is enabled by default; in case it is disabled, you have to re-enable it.

Refer: CentOS Wiki

  • The overlay2 storage driver is recommended.

Options to Install Docker

You can install Docker CE in three ways:

  • Set up Docker's repositories and install from them. This makes installation and upgrade tasks easy and is the approach recommended by Docker; it is the option we proceed with, and it needs internet access.
  • Download the RPM package, install it manually, and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

Refer – Docker Docs to understand this

  • Use automated convenience scripts to install Docker; these are used in testing and development environments.

Refer – Docker Docs to understand this

Install Docker 

To install Docker CE, we first need to set up the Docker repository using YUM and then install Docker from that repository. If you have experience with Linux, this will be very easy.

Setting up the Repository

Log in to your CentOS system through the console or a PuTTY session.

  • Verify the operating system details of the machine where you are going to set up the repository and install Docker.

Follow the basic commands shown in the image to check the operating system, kernel, and flavor of Linux you are using.

The uname command can be used to view system information.

If you want to know more about the uname command, use uname --help.
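For instance, the flags most useful for this check can be combined in one call (output will of course vary with your machine):

```shell
# -s: kernel name, -r: kernel release, -m: hardware architecture.
# Docker CE on CentOS 7 requires a 64-bit (x86_64) machine.
uname -s -r -m
```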

  • Install the required packages for setting up the repository.

yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.

$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

Set up the stable repository using the commands below. You always need the stable repository, even if you want to install builds from the edge or test repositories as well.
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo



Enable the edge and test repositories (optional).

These repositories are included in the docker-ce.repo file, but they are disabled by default. You can enable them alongside the stable repository using the commands below.

$ sudo yum-config-manager --enable docker-ce-edge


$ sudo yum-config-manager --enable docker-ce-test

You can disable the edge or test repository by running the yum-config-manager command with the --disable flag. To re-enable it, use the --enable flag.

To disable the edge repository, run the command below:

$ sudo yum-config-manager --disable docker-ce-edge

Note: Starting with Docker 17.06, stable releases are also pushed to the edge and test repositories.

More about Repo

docker-ce.repo is located in /etc/yum.repos.d/; you may view the file's configuration using the less, more, or cat command.

$ less /etc/yum.repos.d/docker-ce.repo 

If there is no enabled line in a repo's section of this configuration file, that repo is enabled. The image was taken before enabling the edge repository.
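This enabled-line logic can be sketched against a stand-in file (the path and contents below are illustrative, shaped like the real [docker-ce-stable] section):

```shell
# Write a minimal stand-in for one section of docker-ce.repo.
cat > /tmp/docker-ce-demo.repo <<'EOF'
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
EOF

# A section is active when it has enabled=1 (or no enabled line at all).
grep -q '^enabled=1' /tmp/docker-ce-demo.repo && echo "docker-ce-stable: enabled"
```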

You can also check the enabled repos using the command below.

$ yum repolist enabled

This image shows the available repos before enabling the edge and test repositories.

$ yum repolist enabled

This image shows the available repos after enabling the edge and test repositories.

Now we are almost there; let's install Docker and understand some basic commands a beginner needs.

Install Docker CE

You have two options for installing Docker:

  • Install the latest version
  • Install a specific version from the repo

To install a specific version of Docker CE, first check the available versions in the repo, then select and install one.

List and sort the versions available in your repo (the command sorts results by version number, highest to lowest):

$ yum list docker-ce --showduplicates | sort -r


The list returned depends on which repositories are enabled, and is specific to your version of CentOS (indicated by the .el7 suffix in this example).

Install a specific version by its fully qualified package name, which is the package name (docker-ce) plus the version string (2nd column) up to the first hyphen, separated by a hyphen (-), for example, docker-ce-18.03.0.ce.
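As a sketch, the fully qualified name can be assembled from one output line of the yum list command above (the sample line below is hypothetical):

```shell
# One line as it might appear in `yum list docker-ce --showduplicates`:
line="docker-ce.x86_64    18.03.0.ce-1.el7.centos    docker-ce-stable"

# Take the 2nd column (the version string), keep everything before the
# first hyphen, and prepend the package name:
version=$(echo "$line" | awk '{print $2}' | cut -d- -f1)
echo "docker-ce-${version}"   # → docker-ce-18.03.0.ce
```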

$ sudo yum install docker-ce-<VERSION STRING>

Note: I am not going to perform this, because in my test setup I want to use the latest version.

Installing the latest version of Docker CE is very simple; just run the command below to get the latest version of Docker. :)

$ sudo yum install docker-ce

If prompted to accept the GPG key, verify that the fingerprint matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35, and if so, accept it.

Got multiple Docker repositories?

If you have multiple Docker repositories enabled, installing or updating without specifying a version in the yum install or yum update command always installs the highest possible version, which may not be appropriate for your stability needs.

Docker is now installed on your CentOS system, but it will not start automatically; you have to start it manually. A docker group is also created, but no users are added to it.

First, verify the status of the docker service by running:

$ sudo systemctl status docker

Start Docker using the command below:

$ sudo systemctl start docker

To check the Docker version available on the system:

$ docker version

Typing just docker lists all the available management commands and other commands.

Next, verify that Docker is installed correctly by running an image. The question is: if you are new to Docker, how will you find available images?

As mentioned above, you can use one of the commands to check available images, or browse hub.docker.com.

$ docker search <keyword>

Run an image; you can use your own required image or follow the example below to test.

$ sudo docker run ubuntu

$ sudo docker image list    - List downloaded images

$ sudo docker rmi -f ubuntu     - Remove an image

Note: The image above may show the commands without sudo because they were run with the root login.

You can refer to the Docker CLI documentation to understand the commands that can be used with Docker. Stay tuned; the next post will cover more Docker commands for initial configuration.

Reference - Installing Docker

Suggested Posts

Docker Architecture and Components

How Container Differ from Physical and Virtual Infrastructure

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, and YouTube.


Docker Architecture and Components

In my previous post, I explained how containers differ from physical and virtual infrastructure. I also mentioned Docker in that post but did not share more details, because I did not want you to get confused between containers and Docker.

In this post, I will explain Docker: what Docker is, its components, how containers are connected to Docker, and so on. More posts will follow soon to help you start playing with containers in your lab.

What is Docker?

Docker is an open-source platform used to package, distribute, and run applications. Docker provides an easy and efficient way to decouple applications from infrastructure and run them as a single Docker image, which is shared through a central Docker registry. The Docker image is used to launch a Docker container, which makes the contained application available from the host where the container is running.

In simple words, Docker is a containerization platform: an OS-level virtualization method used to deploy and run a distributed application and all its dependencies together in the form of a Docker container. The Docker platform removes the hypervisor layer; it runs directly on top of the bare-metal operating system. Using the Docker platform, you can run multiple isolated applications or services on a single host, all accessing the same OS kernel, and ensure that an application works seamlessly in any environment.

Containers can run on any supported bare-metal Linux, Windows, or Mac system and on cloud instances; they can run in virtual machines deployed on any hypervisor as well.

For developers, the concept of Docker may be easy to understand, but for system administrators it can be difficult. Don't worry; here I will explain the components of Docker and how they are used.

Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experimenting with container-based apps.

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business critical applications in production at scale.

Docker Components

What is Docker Engine?

Docker Engine is the core of the Docker system; it is the application installed on the host machine. The Engine is a client-server application with the following components:

  • A server, which is a type of long-running program, called a daemon process (the dockerd command).
  • A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client (the docker command).

The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.

The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.

Docker architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.

Note: - Docker engine and Architecture information is from Docker Documentation

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

Docker Client

The Docker client is the key component that many Docker users use to interact with Docker. When you run docker commands, the client sends them to dockerd, which carries them out. The docker command uses the Docker API, and a Docker client can communicate with more than one daemon.

Docker Registry

The Docker registry is the place where Docker images are stored; it can be a public registry or a local registry. Docker Hub and Docker Cloud are public registries available to everyone, and the other option is to create your own private registry. Docker is configured to look for images on Docker Hub by default, and if you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).

How Docker Registry Works?

When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.

The Docker Store allows you to buy and sell Docker images or distribute them for free.

 

You also have the option to buy a Docker image containing an application or service from a software vendor, and to use the image to deploy the application into your testing, staging, and production environments. You can upgrade the application by pulling the new version of the image and redeploying the containers.

Docker Environment

The Docker environment is the combination of the Docker Engine and Docker objects. I have explained the Docker Engine and some objects above; now let's understand the objects. Docker objects are images, containers, networks, volumes, and plugins.

IMAGES

An image is a read-only template with instructions for creating a Docker container. You can create an image with additional customization from a base image or use those created by others and published in a registry.

Docker uses a smart layered file system, where the base layer is read-only and top layer is writable. When you try to write to a base layer, a copy is created in the top layer, and the base layer remains unchanged. This base layer can be shared since it is a read-only and never changes.

For example, you may build an image that is based on the CentOS image but installs a web server and your application, as well as the configuration details needed to make your application run.

How to build Your Own Image

To build your own image, create a Dockerfile, which has a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt; this is what makes images so lightweight, small, and fast.
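As an illustration, a minimal Dockerfile in the spirit of the CentOS-plus-web-server example above might look like this (the base image tag, package, and paths are illustrative, not a definitive recipe):

```dockerfile
# Base layer: the CentOS image (read-only, shared between images).
FROM centos:7

# New layer: install the Apache web server.
RUN yum install -y httpd

# New layer: copy your (hypothetical) application content into the image.
COPY ./site/ /var/www/html/

# Metadata: the port the container listens on and the startup command.
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Building this with docker build and later changing only the COPY step would rebuild only the layers from that instruction onward.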

CONTAINERS

In simple words, a container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

Volumes

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker.

Advantages of Volume over bind mounts

  • Volumes are easier to back up or migrate than bind mounts.
  • You can manage volumes using Docker CLI commands or the Docker API.
  • Volumes work on both Linux and Windows containers.
  • Volumes can be more safely shared among multiple containers.
  • Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
  • New volumes can have their content pre-populated by a container.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.

Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes.

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

Bind mounts

Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.

The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead. You can’t use Docker CLI commands to directly manage bind mounts.

Network

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core-networking functionality:

  • Bridge: The bridge driver is the default network driver in Docker. Docker has other network driver options, but if you don't specify a driver, a bridge network is created by default. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
  • Host: Using host networking with standalone containers, you can remove network isolation between the container and the Docker host.

Note: Host networking is only available for swarm services on Docker 17.06 and higher.

  • Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
  • Macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host's network stack.
  • None: Disables all networking for the container. Usually used in conjunction with a custom network driver.

Note: none is not available for swarm services.

  • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from the Docker Store or from third-party vendors.

Which Network driver is suitable?

  • User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
  • Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
  • Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
  • Third-party network plugins allow you to integrate Docker with specialized network stacks.

Most of the above network modes apply to all Docker installations. However, a few advanced features are only available to Docker EE customers.

Docker EE networking features

Two features are only possible when using Docker EE and managing your Docker services using Universal Control Plane (UCP):

  • The HTTP routing mesh allows you to share the same network IP address and port among multiple services. UCP routes the traffic to the appropriate service using the combination of hostname and port, as requested by the client.
  • Session stickiness allows you to specify information in the HTTP header, which UCP uses to route subsequent requests to the same service task, for applications that require stateful sessions.

SERVICES

Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate using the Docker API. A service allows you to define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application. Docker Engine supports swarm mode in Docker 1.12 and higher.

SWARM

Docker also provides a high-availability cluster, called a swarm. Using swarm mode, you get features like scaling and load balancing, and failover happens automatically (your apps need to be stateless for this), along with many more features; you can find detailed information here.

In a swarm, you can deploy your app to a number of nodes running on a number of Docker Engines, and these engines can be on different machines, or even in different data centers, with some in Azure and some in AWS. If any one of the nodes crashes or disconnects, the other nodes automatically take over the load and create a new node to replace the missing one.

Note: This is one of the important topics whose details cannot be fully explained in this post; I will cover it in another post with examples. In the meantime, you can find more details in the Docker Docs.

Docker Underlying Technology

Docker is written in Go and takes advantage of several features of the Linux kernel to deliver its functionality.

Namespaces

Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.

These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

Docker Engine uses namespaces such as the following on Linux:

  • The pid namespace: Process isolation (PID: Process ID).
  • The net namespace: Managing network interfaces (NET: Networking).
  • The ipc namespace: Managing access to IPC resources (IPC: InterProcess Communication).
  • The mnt namespace: Managing filesystem mount points (MNT: Mount).
  • The uts namespace: Isolating kernel and version identifiers. (UTS: Unix Timesharing System).
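You can observe these namespaces from the host: every process under /proc exposes the namespaces it belongs to (a sketch for a Linux host; the container name "web" is hypothetical):

```shell
# Find the container's main process ID as seen by the host
PID=$(docker inspect --format '{{.State.Pid}}' web)

# Each symlink here is one namespace (pid, net, ipc, mnt, uts, ...)
ls -l /proc/$PID/ns

# Or list all namespaces system-wide (lsns is part of util-linux)
lsns
```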

Control groups

Docker Engine on Linux also relies on another technology called control groups (cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
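For example, limiting the memory available to a specific container is a single flag, enforced by the memory cgroup (a sketch; the image and the limits are arbitrary):

```shell
# Cap the container at 256 MiB of RAM and half a CPU core
docker run -d --name capped --memory 256m --cpus 0.5 nginx

# Confirm the limits the kernel is enforcing
docker stats --no-stream capped
```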

Union file systems

Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.
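You can check which UnionFS variant your Docker Engine is using (a sketch; the output depends on your host):

```shell
# Show the storage driver in use (e.g. overlay2, aufs, devicemapper)
docker info --format '{{.Driver}}'

# The layered build of an image is visible in its history
docker image history nginx
```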

Container Format

Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.

Refer to the Docker Documentation to understand more.

Also Watch Docker Training Videos

Suggested Posts

How Container Differ from Physical and Virtual Infrastructure

Docker Learning Part 1

Docker Learning Part 2

Thank you for reading this post. Share the knowledge if you feel it is worth sharing.

Follow VMarena on Facebook, Twitter, and YouTube.


How Container Differ from Physical and Virtual Infrastructure

We all hear a lot about containers, but do you know what a container is and how it differs from the physical and virtual worlds? In this post I will share how containers differ from physical and virtual infrastructure, and cover container platforms.

Physical World

In the beginning, we had physical machines: install an operating system, then deploy the application on top. Deploying an application in this world was challenging and time consuming, because every new application meant buying a new physical server, and buying a server meant finding suitable hardware, coordinating with the vendor, finance, and delivery teams, then handling installation, the operating system, licenses, storage, the application, and so on.

Moreover, more applications meant more physical machines, each with its own operating system and application, and the license and hardware costs were very high.

And we were never able to fully utilize the resources of those physical machines, wasting power, cooling, raw materials, data centre floor space, and more.

How we solved this?

VMware came to market with its innovative virtual machine idea, and this was a great opportunity to start moving the physical world to virtualization. That was the beginning: VMware released its hypervisor, which installs on the physical machine. The hypervisor owns the computing resources and carves them into multiple virtual machines.

Virtualization

Virtual machines have their own virtual hardware, on which we install the desired operating system and then the application on top of the OS. By moving to virtualization we were able to make far better use of resources, with less power, cooling, space, etc. Today there are many hypervisors on the market: ESXi, Hyper-V, KVM, and so on.

Still, we have challenges: hypervisor cost, support, guest operating systems for virtual machines, licensing, etc.

How to overcome this?

Virtualization has been a booming technology for many years, and still is in many regions. But when we look at actual needs, challenges remain, and containers arrived to overcome them.

One thing is common to all three models: computing resources, an operating system, OS patching, anti-virus, and so on.

Finally, let's look at containers.

The basic architecture of containers is shown in the image below: a physical machine with an operating system running on bare metal, containers on top of the operating system, and an application running within each container. Here the operating system owns and manages the physical machine's computing resources.

What is a Container?

A container consists of an entire runtime environment: an application plus all its dependencies, libraries, other binaries, and the configuration files needed to run it, bundled into one package. Containers allow you to deploy quickly, reliably, and consistently regardless of the deployment environment.

Containers are lightweight and use fewer computing resources. They virtualize CPU, memory, storage, and network resources at the OS level, providing developers with a sandboxed view of the OS that is logically isolated from other applications.

There are many container formats available. Docker is a popular, open-source container format that is supported on Google Cloud Platform and by Google Kubernetes Engine.

Suitable Linux distributions for Containers

Here are a few Linux distributions commonly used for containers:

  • CoreOS: one of the first lightweight container operating systems built for containers.
  • RancherOS: a simplified Linux distribution built from containers, specifically for running containers.
  • Photon OS: a minimal Linux container host, optimized to run on VMware platforms.
  • Project Atomic Host: Red Hat's lightweight container OS, with versions based on CentOS and Fedora, and a downstream enterprise version in Red Hat Enterprise Linux.
  • Ubuntu Core: the smallest Ubuntu version, used as a host operating system for IoT devices and large-scale cloud container deployments.

Is Linux the only OS suitable for containers?

The answer is no: Windows Server 2016 and Windows 10 can run containers. These are Docker containers, and they can be managed from any Docker client or from Microsoft's PowerShell. Microsoft Hyper-V also supports containers: Hyper-V containers are Windows containers running in a Hyper-V virtual machine for added isolation.

Windows containers can be deployed on Windows Server 2016, including the streamlined Server Core install or the Nano Server install option that is specifically designed for running applications inside containers or virtual machines.

More about Docker on Windows

Now you may be thinking about VMware; yes, of course, VMware also has its own container platforms:

  • Customers with existing VMware infrastructure can run containers on its Photon OS container Linux.
  • vSphere Integrated Containers (VIC) can be deployed directly to a standalone ESXi host or to vCenter Server, where containers are managed as if they were virtual machines.

In addition to Linux, Windows, and VMware, Docker also runs on popular cloud platforms including Amazon EC2, Google Compute Engine, Microsoft Azure, and Rackspace.

Don't get confused by the word Docker; we will explain it in upcoming blog posts.

What about running virtualization and containers together?

This is another good option, because containers can run on top of virtual machines to increase isolation and security. Another important point: by using hardware virtualization you can manage the infrastructure that runs your containers from a single pane of glass, with many advanced hypervisor features.

At the end of the day it is all about your requirements; both methods have their benefits. Rather than replacing virtual machines with containers, I prefer to use containers within a virtual infrastructure to achieve availability, backup, easy management, and so on.

I will be sharing more posts on how to start using containers. Stay tuned!

Suggested Posts

Docker Architecture and Components

Docker Learning Part 1

Docker Learning Part 2



Virtualization Based Security (VBS)

Virtualization Based Security (VBS) is one of the best Microsoft Windows security features, available in Windows 10 and Windows Server 2016. Device Guard and Credential Guard are two security options that depend on Virtualization Based Security.

  • Device Guard: allows the system to block anything other than trusted applications.
  • Credential Guard: allows the system to isolate the lsass.exe process in order to block memory-read attempts from virus, spyware, trojan, or worm attacks.

Now you may be wondering what lsass.exe is; find a brief explanation below.

What is lsass.exe used for?

LSASS, the Local Security Authority Subsystem Service, is a process in Microsoft Windows operating systems that is responsible for enforcing the security policy on the system. It verifies users logging on to a Windows computer or server, handles password changes, and creates access tokens. It also writes to the Windows Security Log.

What happens if lsass.exe is terminated?

Terminating lsass.exe will cause the Welcome screen to lose its accounts and prompt a restart of the machine. Because lsass.exe is a crucial system file, it is frequently faked by malware. The legitimate lsass.exe used by Windows is located in %WINDIR%\System32.
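A quick way to check for an impostor is to list every running process named lsass and confirm its image path (a PowerShell sketch; run it from an elevated session so the Path property is readable):

```powershell
# List all processes named lsass together with their on-disk image path
Get-Process -Name lsass | Select-Object Name, Id, Path
# The only legitimate result should point to C:\Windows\System32\lsass.exe
```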

How do attackers fake lsass.exe?

Malicious developers may name a file so that, in certain display fonts, it looks identical to the system file, tricking users into installing or executing a malicious file instead of the trusted one.

Example: Isass.exe (with a capital "i") displays the same as lsass.exe (with a lowercase "L") in many fonts.

Also, beware: an lsass.exe in any other location is most likely a virus, spyware, trojan, or worm.

These new security features use hardware virtualization technologies such as Intel VT-x or AMD-V to offer strong segmentation between virtual machines, and probably more in the future. These technologies allow the Virtual Machine Monitor (VMM) to set different rights on physical pages using Extended Page Tables (EPT). A virtual machine can mark a physical page writable in its Page Table Entry (PTE), and the VMM can silently authorize or block this by setting the appropriate access right in its EPT.

Virtualization Based Security relies on the hypervisor, which runs VMs at different Virtual Trust Levels (VTLs). Hyper-V consists of a hypervisor, and every operating system, even the "main" one, is contained in a VM. This "main" operating system, Windows, is considered the root VM; Hyper-V trusts it and accepts management orders from it, such as controlling other VMs. Other VMs may be "enlightened" and, if so, send restricted messages to Hyper-V for their own management.

Virtual Trust Levels (VTL)

Two VTLs are defined, with the higher value being more privileged:

  • VTL0: the normal world, which basically consists of the standard Windows operating system.
  • VTL1: the secure world, which consists of a minimalized kernel and secured applications known as trustlets.

As mentioned above, the Credential Guard security feature leverages this technology by isolating the critical lsass.exe process in a VTL1 trustlet (lsaiso.exe, "Isolated LSA"), making it impossible for even the VTL0 kernel to access its memory. Only messages may be forwarded to the isolated process from VTL0, effectively protecting in-memory passwords and hashes from virus, spyware, trojan, or worm attacks.

The Device Guard security feature enforces W^X memory mitigation (a physical page cannot be both executable and writable) in the VTL0 kernel address space, and accepts a policy containing authorized code signers. If the VTL0 kernel wants to make a physical page executable, it must ask VTL1 for the change (HVCI, Hypervisor-enforced Code Integrity), which checks the signature against its policy. For user-mode code this is not enforced yet; the VTL0 kernel just asks for signature verification. Policies are loaded during boot and cannot be modified afterwards, which forces the user to reboot in order to load new policies.

Policies may also be signed: in that case, authorized signers are set in UEFI variables, and new policies are checked against these signers. The UEFI variables have their Setup and Boot flags set, which means they cannot be accessed or modified after startup. To clean up these variables, the local user must reboot using a custom Microsoft EFI bootloader, which removes them after user interaction (pressing a key).

Therefore, VBS relies heavily on Secure Boot: the bootloader's integrity must be checked, as it is responsible for loading the policies, Hyper-V, VTL1, etc.

VBS is now supported from vSphere 6.7 onwards; check Virtualization Based Security (VBS) in vSphere 6.7.



Windows Server 2016 Hyper-V on VMware ESXi 6.7

I have been testing a few backup products in my lab, mostly on VMware infrastructure, and now I thought of working on Hyper-V. I decided to create a Hyper-V server on my VMware infrastructure for testing. In this post I share the steps for installing Windows Server 2016 as a VM and enabling Hyper-V on a vSphere 6.7 platform.

Log in to the ESXi 6.7 host client from any browser using "https://<IP address or FQDN of ESXi>/ui"

Create a new virtual machine by selecting the Create/Register VM option.

Add the name for the Virtual Machine and select the options

  • Compatibility: Latest version of ESXi
  • Guest OS Family: Windows
  • Guest OS Version: Microsoft Windows Server 2016 (64 bit)

Note:- Enable Windows VBS if required; about VBS, check my previous post.

Click Next.

 

Select the datastore where the VM should be stored and click Next.

 

 

Proceed with the default configuration and continue by clicking Next.

Note:- Change the disk provisioning type if you need to save space.

Review the configuration options and click Finish to deploy the VM.

Once the VM is deployed, you can increase its resources by editing the settings. Also expand the CPU section and check the box Expose hardware-assisted virtualization to the guest OS.
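If you manage the host with PowerCLI, the same nested-virtualization flag can be set from the command line (a sketch; "HV-Test" is a hypothetical VM name, you must already be connected with Connect-VIServer, and the VM must be powered off):

```powershell
# nestedHVEnabled is the vSphere API property behind the host-client checkbox
# "Expose hardware-assisted virtualization to the guest OS"
$vm = Get-VM -Name 'HV-Test'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
```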

Next, map the Windows 2016 ISO from a local location, or if you have uploaded the ISO to a datastore, map that to the virtual machine.

Boot from that ISO and select Install now.

Select the desired operating system (Standard/Datacenter version) and click Next.

Follow the standard Windows installation process to complete the installation.

After installing the operating system, install VMware Tools, assign a static IP address, and join the machine to the domain.

Navigate to Server Manager -> Add Roles and Features

 

Follow the default steps, select the Hyper-V role, include the Add Features option as shown below, continue with the default options, and click Finish.

 

Reboot the virtual machine after the role has been installed.

Once the VM has rebooted, navigate to Hyper-V Manager from the Tools section of Server Manager or from the Windows Start menu.

Now you will be able to deploy VMs inside Hyper-V.
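If you prefer PowerShell over Server Manager, the whole role installation above collapses into a single cmdlet (run from an elevated session inside the guest):

```powershell
# Install the Hyper-V role plus its management tools, then reboot automatically
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```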



Virtualization Based Security (VBS) in vSphere 6.7

As we all know, VMware recently released vSphere 6.7, which brings many enhancements and new features. These days security is very important on every platform, and VMware has made fantastic improvements on the security side. One of the really cool security features in vSphere 6.7 is support for Microsoft Virtualization Based Security (VBS).

 

In this post I will share information about Microsoft Virtualization Based Security (VBS) and how to enable it on a Windows 2016 Hyper-V virtual machine in vSphere 6.7.

Virtualization-based security  ( VBS ) is a feature of the Windows 10 and Windows Server 2016 operating systems. It uses hardware and software virtualization to enhance Windows system security by creating an isolated, hypervisor-restricted, specialized subsystem. Microsoft Virtualization Based Security  (VBS)  uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. VBS uses the underlying hypervisor to create this virtual secure mode, and to enforce restrictions which protect vital system and operating system resources, or to protect security assets such as authenticated user credentials. Microsoft is using the hypervisor as a restricted memory space where sensitive information like credentials can be stored instead of  on the operating system itself. With the increased protections offered by VBS, even if malware gains access to the OS kernel the possible exploits can be greatly limited and contained, because the hypervisor can prevent the malware from executing code or accessing platform secrets.

Prerequisites

VBS reinforces the security of Microsoft Hyper-V, and you have to configure the following settings on your virtual machine:

  • Firmware type: UEFI
  • Enable UEFI Secure Boot: Enabled
  • Enable hypervisor applications in this virtual machine: Enabled
  • Enable IOMMU in this virtual machine: Enabled
  • Create a virtual machine that uses hardware version 14 or later and one of the following supported guest operating systems.

    • Windows 10 Enterprise, 64-bit

    • Windows Server 2016

  • To use Windows 2016 as the guest operating system, apply all Microsoft updates to the guest.

Note:- VBS might not function in a Windows 2016 guest without the most current updates.

Enabling Virtualization Based Security in Windows 2016 with vSphere 6.7

I am creating a Windows 2016 virtual machine in a nested ESXi 6.7 vSphere environment to configure VBS. The VM compatibility level should be ESXi 6.7, and you have two options for enabling VBS:

  • While creating the Virtual machine

  • After Creating the Virtual Machine

After booting the Windows 2016 Server VM, follow the steps below to enable Virtualization Based Security:

  • Enable the group policy setting first for VBS
  • Enable Hyper-V in Windows 2016 Server

Navigate to the Group Policy setting where VBS has to be enabled.

Open the Local Group Policy Editor by typing gpedit.msc in the Run menu, or search for Local Security Policy from the Start menu.

Navigate to Computer Configuration > Administrative Templates > System > Device Guard > Turn On Virtualization Based Security  

 

Set the policy to Enabled, select the options below from the drop-down menus, click OK, and reboot the server:

  • Select Platform Security Level: Secure Boot and DMA Protection
  • Virtualization Based Protection of Code Integrity: Enabled with UEFI lock
  • Credential Guard Configuration: Enabled with UEFI lock

Note:- The "Enabled without UEFI lock" option allows you to enable or disable this setting remotely.
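If you need to script this instead of clicking through gpedit, the same policy maps to documented registry values (a hedged sketch; values shown match the choices above, and a reboot is still required):

```powershell
$dg = 'HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard'

# Turn VBS on, requiring Secure Boot and DMA protection (3)
New-ItemProperty -Path $dg -Name EnableVirtualizationBasedSecurity -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path $dg -Name RequirePlatformSecurityFeatures -Value 3 -PropertyType DWord -Force

# Credential Guard: 1 = enabled with UEFI lock, 2 = enabled without lock
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name LsaCfgFlags -Value 1 -PropertyType DWord -Force
```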

 

Enable Hyper-V on Windows 2016 Server

Navigate to Server Manager -> Add Roles and Features

 

Click Next through the default options, and from Server Roles select Hyper-V, include the management tools, and click OK.

 

 

Continue with the default options and click Finish.

After enabling the Hyper-V feature, restart Windows.

 

How to Verify VBS is Enabled

Run msinfo32.exe from the Run menu; under System Summary you can find the Device Guard related entries.
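You can also verify from PowerShell by querying the Device Guard WMI class (a sketch; the exact property values depend on your configuration):

```powershell
# Status 2 in VirtualizationBasedSecurityStatus means VBS is enabled and running
Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus,
                  SecurityServicesConfigured,
                  SecurityServicesRunning
```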

More about VBS can be found here.

Check out more vSphere 6.7 posts.



Azure File Sync is now Generally Available

Azure File Sync

Azure File Sync provides customers with secure, centralized file share management in the cloud. Customers who install the File Sync agent on their Windows Servers can store less frequently accessed files in the cloud while keeping more frequently accessed data on local file shares, and can deliver consistent file share performance with no configuration or code changes. Centralizing file share management with File Sync can also lower the IT support requirements for branch or remote office locations, including centralized backup and multi-site replication.

Note:- Availability is limited to specific regions at launch; find the details here.

Azure File Sync replicates files from your on-premises Windows Server to an Azure file share. With Azure File Sync, you don’t have to choose between the benefits of cloud and the benefits of your on-premises file server - you can have both! Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

We have been creating Windows file servers and using file shares for a long time across different Windows versions, and they are useful for every organization. Now Microsoft has come up with a new option, Azure File Sync, with the goal of making it easy for you to reap the benefits of cloud storage. With Azure Files, Microsoft focused on building general-purpose file shares that can replace all the file servers and NAS devices in the organization.

Azure File Sync replicates files from your on-premises Windows Server to an Azure file share, just like you might have used DFS-R to replicate data between Windows Servers. Once you have a copy of your data in Azure, you can enable cloud tiering (the real magic of Azure File Sync) to store only the hottest and most recently accessed data on-premises. And since the cloud has a full copy of your data, you can connect as many servers to your Azure file share as you want, allowing you to establish quick caches of your data wherever your users happen to be. In simple terms, Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

Having a copy of the data in the cloud enables you to do more. For example, you can recover almost instantly from the loss of a server with the fast disaster recovery feature. No matter what happens to your local server (a bad update, damaged physical disks, or something worse), you can rest easy knowing the cloud has a fully resilient copy of your data. Simply connect your new Windows Server to your existing sync group, and your namespace will be pulled down right away for use.
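The sync group and endpoints described above can also be created with the Az.StorageSync PowerShell module rather than the portal (a hedged sketch; the resource names are hypothetical, the agent must already be installed, and $registeredServer would come from Register-AzStorageSyncServer):

```powershell
# Create the top-level Storage Sync Service and a sync group inside it
New-AzStorageSyncService -ResourceGroupName 'rg-files' -Name 'corp-sync' -Location 'westus2'
New-AzStorageSyncGroup -ResourceGroupName 'rg-files' -StorageSyncServiceName 'corp-sync' -Name 'corp-share'

# Add the on-premises server as a server endpoint, with cloud tiering
# configured to keep 20% of the local volume free
New-AzStorageSyncServerEndpoint -ResourceGroupName 'rg-files' -StorageSyncServiceName 'corp-sync' `
    -SyncGroupName 'corp-share' -Name 'fs01-d' -ServerResourceId $registeredServer.ResourceId `
    -ServerLocalPath 'D:\CorpShare' -CloudTiering -VolumeFreeSpacePercent 20
```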

Top innovations and enhancements since the Azure File Sync initial preview

  • Sync and cloud tiering performance, scale, and reliability improvements.

For general availability, Microsoft has increased Azure File Sync upload performance by 2x and fast disaster recovery by 4x to 18x (depending on hardware). The cloud tiering backend architecture was also modified to support faster and more reliable tiering, enabling tiering as soon as Azure detects that the used volume space exceeds your volume free space percentage threshold.

  • Enhanced Azure File Sync portal experience.

The portal experience has been revamped to more clearly display the progress of sync uploads and to surface only actionable error messages, keeping you focused on your day-to-day job and helping customers understand the state of the system.

  • Holistic disaster recovery through integration with geo-redundant storage (GRS).

Azure File Sync now integrates end to end with the geo-redundant storage (GRS) resiliency setting, which lets you confidently adopt Azure File Sync to protect your organization's most valuable data, including against a disaster affecting one of the datacenters serving an Azure region.

Azure File Sync Overview

General availability of Azure File Sync is just the start of the innovations Microsoft plans to bring to Azure Files and Azure File Sync. Microsoft is working on new features and incremental improvements to deliver throughout the summer and fall, including support for and tighter integration with Windows Server 2019.

Refer to the planning guide to learn more about Azure File Sync.
