Everything you Need to Know about Containers in VMware

All available options to run containers in VMware listed and explained, plus step-by-step instructions to use vSphere Integrated Containers.


Unquestionably, organizations today are transforming from traditional infrastructure and workloads, such as virtual machines, to containerized applications. However, making this transition isn’t always easy, as it often requires organizations to rethink their infrastructure, workflows, and development lifecycles, and to learn new skills. Are there ways to take advantage of the infrastructure already used in the data center today to run containerized workloads? For years, many companies have been using VMware vSphere for traditional virtual machines in the data center. So what are your options to run containers in VMware?

Why Shift to Containers?

Before we look at the options available to run containers in VMware, let’s take a quick look at why we are seeing a shift to running containers in the enterprise environment. There are many reasons, but consider a few of the primary drivers behind the change to containerized applications today.

One of the catalysts to the shift to containerized applications is the transition from large monolithic three-tier applications to much more distributed application architectures. For example, you may have a web, application, and database tier in a conventional application, each running inside traditional virtual machines. With these legacy three-tier architectures, the development lifecycle is often slow and requires many weeks or months to deploy upgrades and feature enhancements.

Upgrading such an application means lifting an entire tier to a new version of code, as changes must happen in lockstep across the monolithic unit. The layout of modern applications is very distributed, using microservice components running inside containers. With this architectural design, each microservice can be upgraded separately from the other application elements, allowing much faster development lifecycles, feature enhancements, upgrades, lifecycle management, and many other benefits.

Organizations are also shifting to a DevOps approach to deploying, configuring, and maintaining infrastructure. With DevOps, infrastructure is described in code, allowing infrastructure changes to be versioned like other development lifecycles. While DevOps processes can use virtual machines, containerized infrastructure is much more agile and more readily conforms to modern infrastructure management. So, the shift to a more modern approach to building applications offers benefits from both development and IT operations perspectives. To better understand containers vs. virtual machines, let’s look at the key differences.

Comparing Containers vs. Virtual Machines

Many have used virtual machines in the enterprise data center. How do containers compare to virtual machines? To begin, let’s define each. A virtual machine is a virtual instance of a complete installation of an operating system. The virtual machine runs on top of a hypervisor that typically virtualizes the underlying hardware of the virtual machine, so it doesn’t know it is running on a virtualized hardware layer.

Virtual machines are much larger than containers as a virtual machine contains the entire operating system, applications, drivers, and supporting software installations. Virtual machines require operating system licenses, lifecycle management, configuration drift management, and many other operational tasks to ensure they are fully compliant with the set of organizational governance policies decided.

Instead of containing the entire operating system, containers only package up the requirements to run the application. All of the application dependencies are bundled together to form the container image. Compared to a virtual machine with a complete installation of an operating system, containers are much smaller. Typical containers can range from a few megabytes to a few hundred megabytes, compared with the gigabytes of installation space required for a virtual machine with an entire OS.

One of the compelling advantages of running containers in VMware is that they can move between container hosts without worrying about the dependencies. With a traditional virtual machine, you must verify all the underlying prerequisites, application components, and other elements are installed for your application. As mentioned earlier, containers contain all the application dependencies and the application itself. Since all the prerequisites and dependencies move with the container, developers and IT Ops can move applications and schedule containers to run on any container host much more quickly.
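As a simple illustration of this portability, the same image can be pulled and run unchanged on any Docker-compatible container host; the image, container name, and port mapping below are only examples:

docker pull nginx:1.23
docker run -d --name web -p 8080:80 nginx:1.23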

Virtual machines still have their place. Installing traditional monolithic or “fat” applications inside a container is generally impossible. Virtual machines provide a great solution for interactive environments or other needs that still cannot be satisfied by running workloads inside a container.

Containers have additional benefits related to security. Managing multiple virtual machines can become tedious and difficult, primarily related to lifecycle management and attack surface. In addition, virtual machines have a larger attack surface since they contain a larger application footprint. The more software installed, the greater the possibility of attack.

Lifecycle management is much more challenging with virtual machines since they are typically maintained for the entire lifespan of an application, including upgrades. As a result, it can lead to stale software, old software installations, and other baggage brought forward with the virtual machine. Organizations also have to stay on top of security updates for virtual machines.

Containers in VMware also help organizations adopt immutable infrastructure. This means the containers running the current version of the application are not upgraded in place once deployed. Instead, businesses deploy new containers with new application versions. The result is a fresh application environment each time a new container is deployed.
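A minimal sketch of this immutable pattern, assuming a hypothetical registry and image name, is to replace the running container with one built from the new image version rather than patching it in place:

docker pull registry.example.com/myapp:2.0
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:80 registry.example.com/myapp:2.0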

Note the following summary table comparing containers and virtual machines.

                                           Containers   Virtual Machines
Small in size                              Yes          No
Contains all application dependencies      Yes          No
Requires an OS license                     No           Yes
Good platform for monolithic app installs  No           Yes
Reduced attack surface                     Yes          No
Easy lifecycle management                  Yes          No
Easy DevOps processes                      Yes          No

It is easy to think that it is either containers or virtual machines. However, most organizations will find that there is a need for both containers and virtual machines in the enterprise data center due to the variety of business use cases, applications, and technologies used. These two technologies work hand-in-hand.

Virtual machines are often used as “container hosts.” They provide the operating system kernel needed to run containers, and from a hypervisor perspective they benefit from features such as high availability and resource scheduling.

Kubernetes (K8s) is the Modern Key to Running Containers

Businesses today are looking at running containers and refactoring their applications for containerization, and most are looking to do so using Kubernetes. Kubernetes is the single most important component for running containers in business-critical environments.

Simply running your application inside a container does not satisfy the needs of production environments, such as scalability, performance, high availability, and other concerns. For example, suppose you have a microservice running in a single container that goes down. In that case, you are in the same situation as running the service in a virtual machine without some type of high availability.

Kubernetes is the container orchestration platform allowing businesses to run their containers much like they run VMs today in a highly-available configuration. Kubernetes can schedule containers to run on multiple container hosts and reprovision containers on a failed host onto a healthy container host.

While some companies may run simple containers directly in Docker and take care of scheduling using homegrown orchestration or other means, most are looking at Kubernetes to solve these challenges. Kubernetes is an open-source solution that manages containerized workloads and services and provides modern APIs for automation and configuration management.

Kubernetes provides:

  • Service discovery and load balancing – Kubernetes allows businesses to expose services using DNS names or IP addresses. It can also load balance between container hosts and distribute traffic between the containers for better performance and workload balance
  • Storage orchestration – Kubernetes provides a way to mount storage systems to back containers, including local storage, public cloud provider storage, and others
  • Automated rollouts and rollbacks – Kubernetes provides a way for organizations to perform “rolling” upgrades and application deployments, including automating the deployment of new containers and removing existing containers
  • Resource scheduling – Kubernetes can run containers on nodes in an intelligent way, making the best use of your resources
  • Self-healing – If containers fail for some reason, Kubernetes provides the means to restart, replace, or kill containers that don’t respond to a health check, and it doesn’t advertise these containers to clients until they are ready to service requests
  • Secret and configuration management – Kubernetes allows intelligently and securely storing sensitive information, including passwords, OAuth tokens, and SSH keys. Secrets can be updated and deployed without rebuilding your container images and without exposing secrets within the stack
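To illustrate a few of the capabilities above, the following minimal kubectl sketch (the deployment name and image tags are placeholders) creates a load-balanced, self-healing deployment of three replicas and then performs a rolling update to a new image version:

kubectl create deployment web --image=nginx:1.24 --replicas=3
kubectl expose deployment web --port=80 --type=LoadBalancer
kubectl set image deployment/web nginx=nginx:1.25
kubectl rollout status deployment/web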

Why Run Containers in VMware?

Why would you want to run containers in VMware when vSphere has traditionally been known for running virtual machines and is aligned more heavily with traditional infrastructure? There are many reasons for looking at running your containerized workloads inside VMware vSphere, and there are many benefits to doing so.

There have been many exciting developments from VMware over the past few years in the container space, with new solutions that allow businesses to keep pace with containerization and Kubernetes effectively. In addition, according to VMware’s own numbers, some 70+ million virtual machine workloads are running worldwide inside VMware vSphere.

It helps to get a picture of the vast number of organizations using VMware vSphere for today’s business-critical infrastructure. Retooling and completely ripping and replacing one technology for something new is very costly from a fiscal and skills perspective. As we will see in the following overview of options for running containers in VMware, there are many excellent options available for running containerized workloads inside VMware, one of which is a native capability of the newest vSphere version.

VMware vSphere Integrated Containers

The first option for running containers in VMware is to use vSphere Integrated Containers (VIC). So what are vSphere Integrated Containers? How do they work? The vSphere Integrated Containers (VIC) offering was released back in 2016 with vSphere 6.5 and is the first offering from VMware to allow organizations to have a VMware-supported solution for running containers side-by-side with virtual machines in VMware vSphere.

It is a container runtime for vSphere that allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Also, vSphere administrators can manage these workloads by using vSphere in a familiar way.

The VIC solution to run containers in VMware is deployed using a simple OVA appliance installation to provision the VIC management appliance, which allows managing and controlling the VIC environment in vSphere. The vSphere Integrated Containers solution is a more traditional approach that uses virtual machines as the container hosts with the VIC appliance. So, you can think of the VIC option to run containers in VMware as a “bolt-on” approach that brings the functionality to traditional VMware vSphere environments.

With the introduction of VMware Tanzu and especially vSphere with Tanzu, vSphere Integrated Containers is not the best option for greenfield installations to run containers in VMware. In addition, August 31, 2021, marked the end of general support for vSphere Integrated Containers (VIC). As a result, VMware will not release any new features for VIC.

Components of vSphere Integrated Containers (VIC)

What are the main components of vSphere Integrated Containers (VIC)? Note the following architecture:


Architecture overview of vSphere Integrated Containers (VIC)

  • Container VMs – contain characteristics of software containers, including ephemeral storage, a custom Linux guest OS, persisting and attaching read-only image layers, and automatically configuring various network topologies
  • Virtual Container Hosts (VCH) – The equivalent of a Linux VM that runs Docker, providing many benefits, including clustered pool of resources, single-tenant container namespace, isolated Docker API endpoint, and a private network to which containers are attached by default
  • VCH Endpoint VM – Runs inside the VCH vApp or resource pool. There is a 1:1 relationship between a VCH and a VCH endpoint VM.
  • The vic-machine utility – a command-line binary for Windows, Linux, and macOS used to manage the VCHs in your VIC environment

How to Use vSphere Integrated Containers

As an overview of the VIC solution, getting started using vSphere Integrated Containers (VIC) is relatively straightforward. First, you need to download the VIC management appliance OVA and deploy this in your VMware vSphere environment. The download is available from the VMware customer portal.


Download the vSphere Integrated Containers appliance

Let’s look at the deployment screens for deploying the vSphere Integrated Containers appliance. The process to deploy the VIC OVA appliance is the standard OVA deployment process. Choose the OVA file for deploying the VIC management appliance.


Select the OVA template file

Name the VIC appliance.


Name the VIC appliance

Select the compute resource for deploying the VIC appliance.


Select the compute resource for deploying the VIC appliance

Review the details of the initial OVA appliance deployment.


Review the details during the initial deployment

Accept the EULA for deploying the OVA appliance.


Accept the EULA during the deployment of the OVA appliance

Select the datastore to deploy the VIC appliance.


Select the storage for the VIC appliance

Select the networking configuration for the VIC appliance.


Choose your virtual network to deploy the VIC appliance

On the customize template screen, configure the OVA appliance configuration details, including:

  • Root password
  • TLS certificate details
  • Network configuration (IP address, subnet mask, gateway, DNS, DNS search order, and FQDN)
  • NTP configuration
  • Other configurations


Customize the VIC appliance template configuration

Review and finalize the configuration for the VIC appliance.


Finish the deployment of the VIC appliance

Once the VIC appliance is deployed, you can browse to the hostname you have configured for VIC. You will see the following configuration dialog displayed. Enter your vCenter Server information, connection details, and the password you want to configure for the VIC appliance.


Wizard to complete the VIC appliance installation

Accept the thumbprint for your vCenter Server.

Once the installation finishes, you will see the successful installation message. The dashboard provides several quick links to manage the solution. As you can see, you can also go to the vSphere Integrated Containers Management Portal to get started.

Installation of VIC is successful

Once you deploy the VIC appliance, you can download the vSphere Integrated Containers Engine Bundle to deploy your VIC container hosts. Once the container hosts are provisioned, you can deploy the container workloads you need for development.

The syntax to create the Virtual Container Host in VIC is as follows:

vic-machine-windows create --target vcenter_server_address --user "Administrator@vsphere.local" --password vcenter_server_password --bridge-network vic-bridge --image-store shared_datastore_name --no-tlsverify --force

Once you have configured the Virtual Container Host, you can create your Docker containers. For example, you can create a Docker container running Ubuntu with the following:

docker -H <VCH IP address>:2376 --tls run -it ubuntu
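Assuming the VCH was created with --no-tlsverify as in the example above, the same Docker API endpoint can also be queried to verify the environment and list running container VMs:

docker -H <VCH IP address>:2376 --tls info
docker -H <VCH IP address>:2376 --tls ps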

To learn more details on how to deploy vSphere Integrated Containers, take a look at the posts here:

VMware vSphere Integrated Containers – End of General Support

As noted above, vSphere Integrated Containers is now at the end of general support as of August 31, 2021. Why is VMware ending support? Again, due to the advancement in containerized technologies, including Tanzu, VMware is moving forward without VIC. The official answer from VMware on the End of General Support FAQ page for vSphere Integrated Containers (VIC) notes:

“VMware vSphere Integrated Containers (VIC) is a vSphere feature that VMware introduced in 2016 with the vSphere 6.5 release. It is one of the first initiatives that VMware had in the container space to bring containers onto vSphere.

In the last few years, the direction of both the industry and the cloud-native community has moved to Kubernetes, which is now the de facto orchestration layer for containers. During this time, VMware also made significant investments into Kubernetes and introduced several Kubernetes-related products including vSphere with Tanzu which natively integrates Kubernetes capabilities into vSphere. vSphere with Tanzu enables containers to be a first-class citizen on the vSphere platform with a much-improved user experience for developers, dev-ops (platform Op/SRE) teams and IT admins.

Given both the industry and community shift towards Kubernetes and the launch of vSphere with Tanzu, which incorporated many of the concepts and much of the technology behind VIC with critical enhancements such as the use of the Kubernetes API, we decided that it is time to end our support to VIC as more and more of our customers start moving towards Kubernetes.”

As mentioned on the End of Support FAQ page, VMware sees the direction moving forward with Kubernetes technologies. VMware Tanzu provides the supported solution moving forward, running Kubernetes-driven workloads in VMware vSphere.

VMware Embraces Kubernetes with vSphere 7

Organizations today are keen on adopting Kubernetes as their container orchestration platform. With VMware vSphere 7, VMware took a significant stride forward for native containerized infrastructure with the introduction of VMware Tanzu. In addition, VMware vSphere 7 has introduced native Kubernetes support, built into the ESXi hypervisor itself. It means running containers orchestrated by Kubernetes is not a bolt-on solution. Instead, it is a native feature found with a new component in the ESXi hypervisor.

In addition, vanilla Kubernetes can be difficult and challenging to implement. Tanzu provides an integrated and supported way forward for organizations to use the infrastructure they are already using today to implement Kubernetes containers moving forward.

Due to the seamless integration and many other key features of Tanzu, the new Tanzu Kubernetes offering is a far superior way to run containers in VMware in 2022 and beyond. For this reason, VMware is phasing out vSphere Integrated Containers in favor of moving forward with VMware Tanzu.

VMware Tanzu is an overarching suite of solutions first announced at VMworld 2019. It provides solutions allowing organizations to run Kubernetes across cloud and on-premises environments. For example, with vSphere with Tanzu (codenamed Project Pacific), businesses can run Tanzu Kubernetes right in the VMware vSphere hypervisor. However, it extends beyond vSphere with Tanzu and includes the following solutions:

  • Tanzu Kubernetes Grid
  • Tanzu Mission Control
  • Tanzu Application Service
  • Tanzu Build Service
  • Tanzu Application Catalog
  • Tanzu Service Mesh
  • Tanzu Data Services
  • Tanzu Observability

There are two types of Kubernetes clusters configured with vSphere with Tanzu architecture. These include the following:

  • Supervisor cluster – The Supervisor cluster uses the VMware ESXi hypervisor itself as a worker node by way of the Spherelet, which is essentially the equivalent of the kubelet. The advantage of the Spherelet is that it does not run inside a virtual machine but natively in ESXi, which is much more efficient.
  • Guest cluster – The guest cluster runs inside specialized virtual machines for general-purpose Kubernetes workloads. These VMs run a fully compliant Kubernetes distribution.


vSphere with Tanzu architecture
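As a rough sketch of the developer workflow against the Supervisor cluster, assuming the vSphere Plugin for kubectl is installed and that the Supervisor cluster address and namespace below are placeholders, authentication and context selection look like this:

kubectl vsphere login --server=<supervisor-cluster-ip> --vsphere-username administrator@vsphere.local
kubectl config use-context <vsphere-namespace>
kubectl get nodes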

To learn more about VMware Tanzu, take a look here:

VMware Tanzu Community Edition (TCE)

VMware Tanzu Community Edition (TCE) is a newly announced VMware Tanzu solution that makes Tanzu-powered containers available to the masses. The project is free and open source, yet it can also run production workloads using the same distribution of VMware Tanzu available in the commercial offerings. In addition, it is a community-supported project that allows the creation of Tanzu Kubernetes clusters for many use cases, including local development.

You can install VMware Tanzu Community Edition (TCE) in the following environments:

  • Docker
  • VMware vSphere
  • Amazon EC2
  • Microsoft Azure


Tanzu Community Edition installation options

Recently, VMware introduced the unmanaged cluster type with the Tanzu Community Edition (TCE) 0.10 release. The new unmanaged cluster type roughly halves the time needed to deploy a Tanzu Community Edition cluster and takes the place of the standalone cluster type found in previous releases.

The new unmanaged cluster is the best deployment option when:

  • You have limited host resources available
  • You only need to provision one cluster at a time
  • A local development environment is needed
  • Kubernetes clusters are temporary and are stood up and then torn down
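As a hedged example of how lightweight this can be, assuming the Tanzu CLI shipped with TCE 0.10 or later is installed and Docker is running locally, an unmanaged cluster can be created and torn down with a couple of commands (the cluster name is arbitrary):

tanzu unmanaged-cluster create dev-cluster
kubectl get pods -A
tanzu unmanaged-cluster delete dev-cluster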

When looking at options to run containers in VMware in 2022, Tanzu Community Edition (TCE) is a great option to consider as it may fit the use cases needed for running containers in VMware environments. In addition, it offers an excellent option for transitioning away from vSphere Integrated Containers (VIC) and allows organizations to take advantage of Tanzu for free. It also provides a great way to use VMware Tanzu Kubernetes for local development environments.

What is the Cluster API Provider vSphere?

Another interesting project to run containers in VMware vSphere is the Cluster API Provider vSphere (CAPV) project. The Cluster API gives organizations a declarative, Kubernetes-style API to manage cluster creation, configuration, and management. The CAPV project implements the Cluster API for vSphere. Since the API is shared, it allows businesses to have a truly hybrid deployment of Kubernetes across their on-premises vSphere environments and multiple cloud providers.
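At a high level, CAPV is consumed through the standard Cluster API tooling. A minimal sketch, assuming an existing management cluster, the clusterctl binary, and vSphere credentials exported as environment variables per the CAPV documentation, looks like this:

clusterctl init --infrastructure vsphere
clusterctl generate cluster workload-1 --infrastructure vsphere --kubernetes-version v1.24.0 > workload-1.yaml
kubectl apply -f workload-1.yaml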

You can download the CAPV project for running Kubernetes containers in VMware vSphere here:

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We continually work hard to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Is it Finally Time to Make the Switch?

With the tremendous shift to microservices in modern application architecture, businesses are rearchitecting their application infrastructure using containers. The monolithic three-tier application architecture days are numbered as businesses are challenged to aggressively release enhancements, updates, and other features on short development lifecycles. Containers provide a much more agile infrastructure environment compared to virtual machines. They also align with modern DevOps processes, allowing organizations to adopt Continuous Integration/Continuous Deployment (CI/CD) pipelines for development.

VMware has undoubtedly evolved its portfolio of options to run containers. Many organizations currently use VMware vSphere for traditional workloads, such as virtual machines. Continuing to use vSphere to house containerized workloads offers many benefits. While vSphere Integrated Containers (VIC) has been a popular option for organizations who want to run containers alongside their virtual machines in vSphere, it has reached the end of support status as of August 31, 2021.

VMware Tanzu provides a solution that introduces the benefits of running your containerized workloads with Kubernetes, which is the way of the future. The vSphere with Tanzu solution allows running Kubernetes natively in vSphere 7.0 and higher. This new capability enables organizations to use the software and tooling they have been using for years without retooling or restaffing.

VMware Tanzu Community Edition (TCE) offers an entirely free edition of VMware Tanzu that allows developers and DevOps engineers to use VMware Tanzu for local container development. You can also use it to run production workloads. In addition, both the enterprise Tanzu offering and VMware Tanzu Community Edition can be run outside of VMware vSphere, providing organizations with many great options for running Kubernetes-powered containers for business-critical workloads.

How to Protect VMware ESXi Hosts from Ransomware Attacks


Historically, and like most malware, ransomware has primarily targeted Windows operating systems. However, cases of Linux and macOS infections are being seen as well. Attackers are becoming more proficient and keep evolving their attacks by targeting critical infrastructure components, leading to ransomware attacks on VMware ESXi. In this article, you’ll learn how ransomware targets VMware infrastructure and what you can do to protect yourself.

What is Ransomware?

Ransomware is a category of malicious programs that takes the user’s data hostage and demands a hefty ransom for its release.

There are essentially 2 types of Ransomware (arguably 3):

    • Crypto Ransomware: Encrypts files so that the user cannot access them. This is the one we are dealing with in this blog.
    • Locker Ransomware: Locks the user out of their computer by encrypting system files.
    • Scareware: Arguably a third type of ransomware that is actually a fake as it only locks the screen by displaying the ransom page. Scanning the system with an Antivirus LiveCD will get rid of it quite easily.

A user computer on the corporate network is usually infected through infected USB drives or social engineering techniques such as phishing emails and shady websites. Another common vector is a brute-force attack against a publicly exposed remote access server.

The malware then uses a public key to encrypt the victim’s data, which can extend to mapped network drives as well. The victim is then asked to make a payment to the attacker in bitcoin or some other cryptocurrency in exchange for the private key to unlock the data, hence the term ransomware. If the victim doesn’t pay in time, the data will be lost forever.

As you can imagine, authorities advise against paying the ransom, as there is no guarantee the bad actor will deliver on his end of the deal, so you may end up paying the big bucks and not recovering your data at all.

Can Ransomware affect VMware?

While infecting a Windows computer may yield a reward if the attacker gets lucky, chances are the OS will simply be reinstalled, no ransom will be paid, and the company will start tightening its security measures. Game over for the bad guys.

Rather than burning bridges by locking a user’s workstation, they now try to make a lateral move from the infected workstation and target critical infrastructure components such as VMware ESXi. That way they hit a whole group of servers at once.

VMware ESXi ransomware impacts all the VMs running on the hypervisor

From the standpoint of an attacker, infecting a vSphere host, or any hypervisor for that matter, is an “N birds, 1 stone” type of gig. Instead of impacting one workstation or one server, all the virtual machines running on the host become unavailable. Such an attack will wreak havoc in any enterprise environment!

How does a Ransomware Attack Work?

In the case of targeted attacks, the bad actor works to gain remote access to a box on the local network (LAN), usually a user computer, and then makes a lateral move to access the management subnet and hit critical infrastructure components such as VMware ESXi.

There are several ways a ransomware attack on VMware ESXi can happen but reports have described the following process.

The ransomware attack on VMware ESXi described in this blog is broken down into 5 stages

Stage 1: Access local network

Gaining access to the LAN usually happens in one of 2 ways:

    • Malware is downloaded via a phishing email, from a website, or from an infected USB stick.
    • The attacker performs a brute-force attack against a remote access server exposed to the internet. This is more unusual as it involves more resources and knowledge of the target, and brute-force attacks are also often caught by DDoS protection mechanisms.

Ransomware spreads through malicious email attachments, websites, and USB sticks

Stage 2: Escalate privileges

Once the attacker has remote access to a machine on the local network, be it a workstation or a remote desktop server, he will try to escalate privileges to open doors for himself.

Several reports mentioned attackers leveraging CVE-2020-1472 (also known as Zerologon), a vulnerability in how Netlogon secure channel connections are established. The attacker uses the Netlogon Remote Protocol (MS-NRPC) to connect to a domain controller and gain domain administrator access.

Stage 3: Access management network

Once the bad actors have domain administrator privileges, they can already deal a large amount of damage to the company. In the case of a ransomware attack on VMware ESXi, they will use it to gain access to machines on the management network, in which the vCenter servers and vSphere ESXi servers live.

Note that they might even skip this step if the company made the mistake of giving user workstations access to the management network.

Stage 4: VMware ESXi vulnerabilities

When the attackers are in the management network, you can only hope that all the components in your infrastructure have the latest security patches installed and strong password policies. At this point, they are the last line of defense, unless a zero-day vulnerability is being leveraged in which case there isn’t much you can do about it.

Several remote code execution vulnerabilities have been exploited over the last year or so against VMware ESXi servers and vCenter servers.

The two critical vulnerabilities that give attackers access to vSphere hosts relate to the Service Location Protocol (SLP) used by vSphere to discover devices on the same network. By sending malicious SLP commands, the attacker can execute remote code on the host.

    • CVE-2019-5544: Heap overwrite issue in the OpenSLP protocol in VMware ESXi.
    • CVE-2020-3992: Use-after-free issue in the OpenSLP protocol in VMware ESXi.
    • CVE-2021-21985: Although no attack mentions it, we can assume the vCenter Plug-in vulnerability discovered in early 2021 can be a vector of attack as well. Accessing vSphere hosts is fairly easy once the vCenter is compromised.

They can then enable SSH to obtain interactive access and sometimes even change the root password or SSH keys of the hosts.
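As a side note, VMware’s published workaround for the OpenSLP issues is to disable the SLP service on hosts where it is not needed. A hedged sketch of that workaround, run from an ESXi shell and assuming no CIM/SLP-dependent tooling is in use (stop the daemon, block its firewall ruleset, and keep it from starting at boot):

/etc/init.d/slpd stop
esxcli network firewall ruleset set -r CIMSLP -e 0
chkconfig slpd off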

Note that the attacker may not even need to go through all that trouble if he manages to somehow recover valid vCenter or vSphere credentials, for instance, if they are stored in the web browser or retrieved from the memory of the infected workstation.

Stage 5: Encrypt datastore and request ransom

Now that the attacker has access to the VMware ESXi server, he will go through the following steps to lock your environment for good.

    • Uninstall Fault Domain Manager or fdm (HA agent) used to reboot VMs in case of failure.
    • Shut down all the virtual machines.
    • Encrypt all virtual machine files using an ELF executable, derived from an encrypting script that targets Linux machines. This file is usually named svc-new and stored in /tmp.
    • Write a ransom file to the datastore for the administrator to find.
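If you suspect a host has been hit, a quick (and hedged) indicator-of-compromise check based on the behavior described above is to look for the encryptor binary and its process from an ESXi shell; keep in mind the file name may differ between variants:

ls -la /tmp/ | grep svc-new
ps | grep svc-new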

Note that there are variations of the ransomware attack on VMware ESXi, which themselves are ever-evolving. Meaning the steps described above represent one way things can happen but your mileage may very well vary.

How to protect yourself from ransomware attacks on VMware ESXi

If you look online for testimonies, you will find that the breach never comes from a hooded IT mastermind in an ill-lit room who gets through your firewalls by frantically typing on his keyboard like in the movies.

The reality is nowhere near as exciting. 9 times out of 10, it will be an infected attachment in a phishing email or a file downloaded from a shady website. This is most often the doing of a distracted user who didn’t check the link and executed the payload without thinking twice.

Ensure at least the following general guidelines are being enforced in your environment to establish a first solid line of defense:

VMware environment-related recommendations

    • If you need to open internet access on your vCenter, enforce strong edge firewall rules and proxy access to specific domains. Do not expose vCenter on the internet!!! (Yes, it’s been done).
    • Avoid installing third party vCenter plugins.
    • Enable Secure Boot and vSphere Trust Authority on vSphere hosts.
    • Set VMware ESXi shell and SSH to manual start and stop.
    • Don’t use the same password on all the hosts and out-of-band cards.

Some recommend not to add Active Directory as an Identity Source in vCenter Server. While this certainly removes a vector of attack, configuring Multi-Factor Authentication also mitigates this risk.

Industry standards

    • Educate your users and administrators through educational campaigns.
    • Ensure the latest security patches are installed as soon as possible on all infrastructure components as well as backup servers, workstations, and so on.
    • Segregate the management subnets from other subnets.
    • Connect to the management network through a jump server. It is critical that the jump server:
      • Is secured and up to date
      • Is accessible only through multi-factor authentication (MFA)
      • Only allows connections from a specific IP range
    • Restrict network access to critical resources only to trained administrators.
      • Ensure AD is secured and users/admins are educated on phishing attacks.
      • Apply least privilege policy.
      • Use dedicated and named accounts.
      • Enforce strong password policies.
      • Segregate Admin and Domain admin accounts on AD.
      • Log out inactive users on Remote Desktop Servers.
    • Don’t save your infrastructure password in the browser.
    • Use Multi-Factor Authentication (MFA) where possible, at least on admin accounts.
    • Forward infrastructure logs to a syslog server for audit trails (see the example after this list).
    • Ensure all the workstations and servers have a solid antivirus with regularly updated definitions.
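For the log forwarding recommendation above, ESXi hosts can ship their logs to a central syslog target with a couple of esxcli commands; the destination below is only a placeholder:

esxcli system syslog config set --loghost='udp://syslog.example.com:514'
esxcli system syslog reload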

Where do backups fit in all this?

While there are decryption tools out there, they will not always work. In fact, they almost never will.

Restoring from backup is essentially the only way known to date that you can use to recover from a ransomware attack on VMware ESXi. You can use Altaro VM Backup to ensure your environment is protected.

Because attackers know this well, they will try to take down the backup infrastructure and erase all the files so that your only option left is to pay the ransom, which, as mentioned previously, is no guarantee that you will get your files back.

Because of this, it is paramount to ensure your backup infrastructure is protected and secured by following best practices:

    • Avoid Active Directory Domain integration or use multi-factor authentication (MFA).
    • Do not use the same credentials for access to the VMware and Backup infrastructures.
    • Test your backups regularly.
    • Keep the backup infrastructure on a dedicated network. Also called Network Air-Gap.
    • Maintain sufficient backup retention so you can restore from a point in time before the infection.
    • Maintain offsite read-only backups (air gap).

You can also check our dedicated blog for more best practice recommendations: Ransomware: Best Practices for Protecting Backups.

NIST controls for data integrity (National Institute of Standards and Technology)

VMware documents solutions for combatting ransomware by incorporating the National Institute of Standards and Technology (NIST) controls specific to data integrity. You can find VMware’s recommendations and implementation of the NIST in this dedicated document:

National Institute of Standards and Technology logo

The NIST framework is broken down into 5 functions: Identify, Protect, Detect, Respond, and Recover.

In the VMware document linked above, you will find Detect, Protect and Respond recommendations that apply to various environments such as private cloud, hybrid cloud or end-user endpoints.

So How Worried Should I be?

Ransomware has always been one of the scarier types of malware, as it can deal a great amount of damage to a company, up to the point of driving some into bankruptcy. However, let us not get overwhelmed by these thoughts, as you are not powerless against it. It is always better to act than to react.

In fact, there is no reason for your organization to get hit by ransomware as long as you follow all the security best practices and you don’t cut corners. It might be tempting at some point to add an ALLOW ALL/ALL firewall rule to test something, give a user or service account full admin rights, patch a server into an extra VLAN, or take whatever other action you know for a fact would increase your security officer’s blood pressure. In such a case, even if there is a 99.9% chance things are fine, think of the consequences it could have on the company as a whole should you hit that 0.1% lurking in the back.

If you are reading this and you have any doubts regarding the security of your infrastructure, run a full audit of what is currently in place and draw a plan to bring it into compliance with the current industry best practices as soon as possible. In any case, patch your systems as soon as possible, especially if you are behind!

Is Edge Computing a Gamechanger for vSphere?

Discover how low-latency operations are being addressed by Edge computing technology and what it means for the future of vSphere.


As organizations use business-critical data in new and exciting ways, where the infrastructure resides in relation to where data is generated is increasingly important, including for vSphere computing use cases. In addition, businesses are running increasingly latency- and performance-sensitive workloads where milliseconds count, which emphasizes the importance of compute and data locality when building out infrastructure.

The new business and technical requirements have led to a new buzzword, known as Edge Computing. VMware has developed and evolved many technologies, including vSphere computing, to help organizations leverage the power of Edge Computing and allow placing these latency-sensitive workloads as close to the Edge as possible, bringing many advantages with the likes of vSphere edge clusters.

Get ready for Edge Computing

Data is the new critical asset for organizations worldwide, with massive amounts of data generated by Internet of Things (IoT) devices. Additionally, businesses increasingly need to analyze the data as soon as possible, almost instantly in some cases, for business-critical processes. Organizations are also increasingly leveraging artificial intelligence (AI), requiring real-time local processing power. This real-time analysis requires instant access to compute resources. AI processes may suffer or not even work when compute resources are located a considerable distance from the data source.

Unfortunately, businesses cannot bend the laws of physics and the speed at which data can travel from remote locations to centralized data center locations. The solution to this increased performance and latency problem is physically moving the computing infrastructure closer or adjacent to the Edge where the data is generated.

Basic concepts of edge computing

The name “edge computing” comes from the notion of moving the compute power (such as vSphere computing) to the edge of the network, close to the device generating the data, such as an Internet of Things (IoT) device. After the computing infrastructure is physically moved adjacent to the devices generating the data, the data can be processed locally instead of hairpinning back to the data center or cloud for analysis. We already touched on the topic in our VMworld 2021 coverage.

Edge computing comes from the notion of moving the compute power to the edge of the network

Gartner has forecasted edge use cases to be the largest area of growth for computing resources for the foreseeable future:

“The global enterprise market for content distribution, security, and compute services at the Edge will reach nearly $19.3 billion in 2024, with a compound annual growth rate of 13.9%. Perimeter security will become the largest segment, and advanced edge services will exhibit the largest growth.”

Edge computing: What it is and why it’s important

As mentioned, edge computing is just that: computing moved to the Edge, helping to solve extremely sensitive latency and processing challenges where the data is generated. There are many reasons why edge computing is becoming extremely important for organizations worldwide. To understand why, we need to look at how massive data generation and analysis helps businesses use various technologies to solve critical business challenges today.

The world of the Internet of Things (IoT) has transformed how businesses generate and use data. Now, billions of IoT devices and sensors are placed in locations all over, including city streets, manufacturing facilities, hospitals, retail stores, and many others. In addition, these sensors generate massive amounts of data that can be used in exciting ways.

The introduction of 5G has also accelerated the need for edge computing, bringing much faster network connectivity and throughput possibilities to remote edge locations. In addition, it allows high-speed connectivity to IoT devices that may not have been possible otherwise.

You most likely have heard of the buzzword artificial intelligence (AI). Many businesses are incorporating artificial intelligence into their modern applications and platforms to make decisions that affect the business more quickly and intelligently. According to IBM, artificial intelligence (AI) leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Today, powerful artificial intelligence (AI) technologies include neural networks, expert systems, robotics, case systems, natural language processing, and many others. Among those technologies, neural networks dominate the AI space as the premier technology driving many AI solutions. Artificial neural networks are loosely modeled on how the brain works: compute nodes are connected and share data in a way that represents and mimics how neurons in the brain function.

Businesses are increasingly using artificial intelligence (AI) and machine learning (ML) in business processes to make split-second decisions that affect how processes and tasks are carried out. As you can imagine, AI requires a tremendous amount of data, and the processing of this data needs to happen as quickly as possible so that the insights are accurate, timely, relevant, and beneficial to the business. However, if AI processing is working from old data, the technology-driven decisions may not be up to date.

AI and ML applications directly benefit from the accelerated move to edge technologies as it places the compute technologies close to where IoT devices generate the data. Application deployments are accelerating. In the next five years, businesses will be deploying more applications and solutions. These applications will be increasingly deployed at the Edge instead of the enterprise data center.

An IDC blog post, “Edge Computing Not All Edges are Created Equal,” from June 2020 stated that more than 50% of new IT infrastructure will be deployed at the Edge by 2023. Additionally, in the post “Worldwide IT Industry 2021 Predictions,” IDC stated that the number of new operational processes deployed on edge infrastructure will grow from less than 20% today to over 90% in 2024.

Distributed architecture and applications

Enterprise architecture is changing rapidly and becoming more distributed. Traditionally, we are used to seeing a very organized stack of infrastructure in the enterprise data center, with endpoints in one location connecting to the enterprise data center or the cloud in another location. However, this organized stack of infrastructure and applications has become highly distributed.

Where data is produced, consumed, and served is completely distributed. The people consuming the data are distributed, and the endpoints are distributed. So how can organizations bring structure from a software perspective to help the service providers and enterprises meet their goals?

Data centers used to exist on a per-business unit basis. Businesses went through the push to centralize the data center a few decades back. Then, with the cloud revolution, enterprises have started migrating and are still migrating applications to the public cloud. Which model is better? A central theme that many follow is to distribute when you must, centralize when you can.

However, when the source of the data and the consumption are highly distributed, you can’t simply rely on centralized models to continue. There are diminishing benefits to using cloud resources at a certain point of distributed applications.

Many industries and use cases benefit from edge computing such as autonomous vehicles, eSports, Immersive retailing, work from anywhere, smart homes, industrial IoT, and many others. Again, Internet-connected devices, sensors, and other clients can consume and process the data in these environments locally, processed by AI and ML algorithms for quick analysis locally.

Advantages of Edge Computing

Edge computing allows businesses to take advantage of the now billions of Internet-connected devices worldwide. Especially in manufacturing, retail, healthcare, and other industries. By placing infrastructure, applications, and workloads at the Edge, organizations reap the following advantages and benefits:

    • Improved speed/reduced latency between interconnected devices and compute infrastructure
    • Ability to more effectively use artificial intelligence (AI) and machine learning (ML) for split-second decisions that can affect data-driven decisions
    • Decreased need for ultra-fast connections between edge locations and the central data center
    • Reliability and resiliency improvements as the decentralized sensors and IoT devices are not dependent on centralized data center connections
    • The Decentralized model helps to scale solutions
    • Some security benefits – with the decentralized model, you also decentralize the security risks. In the centralized model, attackers can concentrate their efforts on a single point of failure.
    • Reduced costs and other savings – With edge environments, costly high-bandwidth Internet pipes and dedicated circuits between private data centers or public cloud locations are no longer needed

Edge Computing Challenges

The challenges at the Edge are not easy. Historically, many organizations have built applications to run in the cloud or the enterprise data center, not at the Edge. So, it requires refactoring applications to run in edge locations. In addition, the attributes of the application model itself are different: it requires hardware abstraction, and state data must be handled differently when rewriting applications for edge use cases.

Security is also much different and more challenging. While many cloud service providers have effortless security built into the cloud IaaS, PaaS, and SaaS offerings, now businesses have to take on the challenge of physically locating workloads on-site with edge applications.

The Edge is a growing and accelerating market. The infrastructure needs to be able to scale and accommodate edge applications. Computing and other technologies, including GPU technologies, are moving to the Edge. With the accelerated move to edge deployments, security, scalability, and manageability will be critical priorities. Businesses need to manage edge deployments in a “fleet style” and secure their applications.

It will require retooling, rethinking, and redesigning infrastructure, applications, processes, support, communication, and many other facets of the infrastructure for edge locality and connectivity. Also, without simplifying edge deployments, organizations risk having “special snowflakes” at every edge location. Generally speaking, it is common for edge locations to have little to no IT staff. This means that edge infrastructure and applications need to be robust, fleet-managed, and adequately secured to ensure resiliency in environments apart from the centralized data center location. This is where edge vSphere computing can help.

VMware Edge announced at VMworld

At VMworld 2021, VMware showed they are betting big in this field with vSphere computing for edge use cases. In the announcements at VMworld 2021, VMware outlined its strategy and portfolio of products that help customers make the transition to the Edge with applications and infrastructure.

Part of the strategy for supporting accelerated edge deployments is the introduction of VMware Edge. What is VMware Edge or vSphere Edge? Rather than a new product, the new VMware Edge, as announced, is a portfolio of products that helps customers make the transition to edge-native applications that can exist wherever they need to be close to the data, including across multiple clouds.

VMware defined a new type of workload that is emerging with the shift to vSphere edge services – the edge-native app. The edge-native application is truly latency-sensitive and demands the utmost compute and latency performance between interconnected devices and the data analysis at the Edge.

Examples of edge-native apps as defined by VMware include AR/VR, connected vehicles, immersive gaming, collaborative robots, drone fleets, and other modern robotic and industrial devices.

Components of the VMware Edge portfolio

So, what are the components of the VMware Edge and vSphere edge cluster configurations, and how do they fit together? First of all, let’s consider how VMware defines an edge location. VMware describes the Edge as a distributed digital infrastructure for running workloads across many locations, placed close to users and devices producing and consuming data. Therefore, where workloads are placed in the Edge is a primary consideration to meet the requirements of edge-native applications.

VMware breaks down Edge into two primary categories:

    • Near Edge – Near Edge describes an edge-native workload, delivered as a service, that exists “between” a public cloud environment and a remote site
    • Far Edge – Far edge describes edge-native workloads placed as close to the endpoint as possible. These workloads may even be adjacent to the endpoint

VMware Edge portfolio of solutions consists of the following:

    • VMware Edge Compute Stack – The VMware computing stack has become much more versatile, with vSphere computing becoming more capable, fully-featured, and possessing the ability to run traditional and modern workloads. The vSphere computing stack is purpose-built to run VMs and containerized stacks that provide a robust platform for edge-native applications at the far Edge. This vSphere computing stack consists of VMware vSphere, VMware Tanzu, and VMware automation solutions. In addition, you can extend the platform with solutions such as VMware vSAN, VMware SD-WAN, and SASE solutions to provide the tools needed to meet various edge use cases.
    • VMware SASE (Secure Access Service Edge) – The VMware vSphere computing layer is complemented by the Secure Access Service Edge (SASE) solution that allows organizations to merge wide area networking, security, and compute through a cloud-delivered service. It enables connecting users with edge-native apps and traditional applications, regardless of the location. VMware SASE helps to deliver a secure and highly automated solution for access to applications and workloads. It uses software-defined networking and security to unify secure access from a single platform.

Below is a high-level deployment of infrastructure at the Edge with VMware SD-WAN as network connectivity.

VMware SD-WAN architecture from the central data center to the Edge

    • VMware Telco Cloud Platform – VMware’s Telco Cloud Platform is a unique configuration of solutions in the VMware portfolio, including VMware vSphere, VMware Tanzu, and VMware Telco Cloud Automation. It provides a centralized management plane for network and multi-cloud domains. The automation capabilities offered by the Telco Cloud Platform include infrastructure, CaaS, and network services, including network functions. VMware Tanzu is a proven Kubernetes distribution that can house modern edge-native apps. In addition, VMware vSphere computing provides familiar and robust capabilities for edge-native applications.

The VMware Telco Cloud Platform provides vital benefits to organizations in this space:

    • It helps deploy service provider and enterprise edge sites faster and manage these with automation and a unified management plane
    • You can integrate network modernization efforts using open standards ETSI, TMF, and O-RAN
    • You can run heterogeneous applications on a common platform, allowing network modernization and edge monetization
    • Telcos can realize many revenue streams by offering communications services and offerings
    • Telcos own the platform that hosts the services and have the freedom to use vendors that align with their business strategy

Below, vSphere computing at the Edge delivers many modern technologies to edge locations, enabling much faster analysis of the big data generated by IoT devices.

VMware Edge computing solutions empower industries

Which Sectors Can Benefit from Edge Computing?

Which business sectors can benefit from applying the features and advantages of edge computing? There are many different use cases, but note the following:

    • Retail – In the retail store environment, businesses can implement interactive digital media to improve the customer experience. Using edge computing, advanced artificial intelligence, and machine learning, organizations can enhance their customers' buying experience with targeted media and tailored recommendations as they shop. It also includes streamlining the checkout and payment experience.
    • Manufacturing – Increasingly, organizations are looking to have near real-time machine performance analytics in manufacturing facilities and digital twin solutions for real-time monitoring and control of machines and processes. They can also leverage AI and ML to overlay complex models on products and services.
    • Public safety – In the public safety sector, edge computing allows emergency response services to understand emergency events better. Streaming data analytics makes it possible to respond to events in real time and ensures first responders and disaster workers can be provided with sensitive data as events unfold.
    • Healthcare – Edge computing and artificial intelligence are helping to reshape the healthcare industry, making care cheaper, easier to deliver, and better for everyone. Edge computing devices can be used to monitor patients remotely and provide automated care delivery.

Below is the retail vSphere computing edge solution architecture from VMware, in conjunction with the Dell EMC vSAN Ready Node XE2420 and VMware SD-WAN, providing a solution to deliver secure, reliable, and performant applications to distributed edge locations.

VMware Retail edge solution architecture design

vSphere Computing Technologies Bolstering the Edge

 

VMware's portfolio of vSphere computing technologies gives organizations many options to provide cost-effective, capable compute infrastructure to empower edge environments. It includes the powerful vSAN two-node design, providing software-defined storage, networking, and computing for edge environments with a low cost, a small footprint, and the new resiliency capabilities found in vSphere 7.0 Update 3.

Customers have access to many vendor-provided options for implementing hyper-converged platforms. An example solution of a hyper-converged vSphere computing platform includes VMware/Dell solutions for rolling out cost-efficient edge solutions:

    • Dell EMC PowerEdge XE2420 vSAN Ready Nodes
    • Dell EMC Virtual Edge Platform 4600
    • VMware SD-WAN by VeloCloud, a software-defined WAN solution

The following overview diagram details the network design of a central data center and an edge site with a VMware vSAN two-node deployment. The vSAN witness is housed in the centralized data center, and networking is taken care of with the VMware SD-WAN appliance.

VMware vSAN two-node configuration with the witness node located at the central data center

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly and securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Is Edge the Future for all Low-Latency Demands?

The landscape of enterprise applications is changing and evolving considerably, and vSphere computing is one of the drivers of that change. Enterprises have moved beyond purely on-premises environments and are in the middle of cloud migrations and hybrid cloud adoption. As businesses use advanced technologies such as artificial intelligence (AI) and machine learning (ML) to analyze data generated at the edge, it is becoming increasingly essential to have computing resources as close to the devices generating the data as possible.

As discussed, edge computing essentially places compute resources close to the edge devices generating the data. It allows the data to be analyzed and processed as quickly as possible, without the added latency of traversing a WAN connection.

VMware offers many vSphere computing solutions that provide the platform to run edge solutions, such as the VMware vSAN two-node configuration. In addition, it provides software-defined storage, which companies can use with the software-defined networking connectivity of the VMware SD-WAN appliance. The newly announced portfolio, VMware Edge, combines the relevant technologies found in the VMware software stack to easily build edge infrastructure that scales with the customer’s needs.

The post Is Edge Computing a Gamechanger for vSphere? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/edge-vsphere/feed/ 0
All You Need to Know about vSphere Cloud Native Storage (CNS) https://www.altaro.com/vmware/vsphere-cloud-native-storage/ https://www.altaro.com/vmware/vsphere-cloud-native-storage/#respond Fri, 12 Aug 2022 12:51:39 +0000 https://www.altaro.com/vmware/?p=24607 Learn about data management solutions and how to provision persistent storage for stateful applications with vSphere Cloud Native Storage.

The post All You Need to Know about vSphere Cloud Native Storage (CNS) appeared first on Altaro DOJO | VMware.

]]>

In this article, we will have a look at how to provision storage for Kubernetes workloads directly on vSphere cloud native storage without resorting to an extra software layer in between.

In the infrastructure world, storage is the building block of data persistency, and it comes in many different shapes and forms, including vSphere cloud native storage. Shared storage lets you leverage clustering services and enables a number of data protection scenarios that are vital to most IT environments. Ensuring a solid storage backend will give IT departments much-appreciated peace of mind, as storage outages are among the most dreaded failure scenarios in any IT professional's mind. Despite the growing number of storage solutions available on the market, provisioning shared storage in a vSphere environment is something many now consider mainstream, as it is a tried and tested process. VMware vSphere environments offer several storage options such as VMFS, vSAN, vVols, and NFS to store your virtual machines and other resources consumed by the hypervisors.

In recent years, VMware has extended the reach of vSphere storage backends and the capabilities of the vSphere suite to integrate more closely with modern applications, in other words, container workloads and microservices that leverage vSphere cloud-native storage. This is an area VMware has invested in heavily since acquiring several cloud-native companies, such as Pivotal, to build its VMware Tanzu portfolio.

While the flexibility and customization potential of Kubernetes is unbeatable, its complexity means that the learning curve is fairly steep compared to other infrastructure solutions. Let’s see how vSphere Cloud Native Storage deals with that.

An introduction to VMware vSphere cloud native storage

First of all, what is Cloud Native? The term Cloud Native has been somewhat of a buzzword these last few years and it started appearing in more and more places. Cloud Native mostly refers to infrastructure agnostic container workloads that are built to run in the cloud. That means no more monolithic software architectures and separation of duties. Microservices are meant to be service-specific workloads interacting with each other in a streamlined fashion. Kubernetes is a container orchestrator platform that has been enabling this revolution and became the de-facto industry standard for running containers in enterprise settings.

Having said that, not all workloads running on Kubernetes can be stateless and ephemeral. We still need to store data, configs, and other resources permanently on backends such as vSphere cloud native storage for those stateful applications. That way, the data remains even after ruthlessly killing a bunch of pods. This is where persistent volumes (PVs) come in: Kubernetes resources that let you provision storage on a specific backend, like vSphere cloud native storage, to store data persistently.

VMware CNS supports most types of vSphere storage

Kubernetes PVs, PVCs and Pods

VMware Tanzu is an awesome product; however, it is easy for a vSphere admin to jump headfirst into it with no prior knowledge of Kubernetes just because it has the "VMware" label on it. This makes the learning process incredibly confusing and is not a great way to start on this journey. So, before we dig in, I'd like to cover a few Kubernetes terms for those who aren't too familiar with them. More will follow in the next chapter.

  • Pod: A pod is the smallest schedulable entity for workloads, you manage pods, not containers. A pod can contain one or more containers but a container is only in one pod. It contains information on volumes, networking and how to run the containers.
  • Persistent Volume (PV): A PV is an object to define storage that can be connected to pods. It can be backed by various sources such as temporary local storage, local folder, NFS or interact with an external storage provider through a CSI driver.
  • Persistent Volume Claim (PVC): PVCs are like storage requests that let you assign specific persistent volumes to pods.
  • Storage Class (SC): Those let you configure different tiers of storage or infrastructure-specific parameters to apply PVs backed by a certain type of storage without having to be too specific, much like storage policies in the vSphere world.

The vSphere Container Storage Interface driver

The terms described in the previous chapter are the building blocks of provisioning vSphere Cloud Native storage. Now we will quickly touch on what a Container Storage Interface (CSI) driver is. As mentioned earlier, persistent volumes are storage resources that let a pod store data onto a specific storage type. There are a number of built-in storage types to work with, but the strength of Kubernetes is its extensibility. Much like you can add third-party plugins to vCenter or array-specific Path Selection Policies to vSphere, you can interact with third-party storage devices in Kubernetes by using drivers distributed by the vendor, which plug into the Container Storage Interface. Most storage solution vendors now offer CSI drivers, and VMware is obviously one of them with the vSphere Container Storage Interface, or vSphere CSI, which enables vSphere cloud-native storage.

When a PVC requests a persistent volume on vSphere, the vSphere CSI driver translates the instructions into something vCenter understands. vCenter then instructs the creation of a cloud native volume (a virtual disk) that is attached to the VM running the Kubernetes node and then attached to the pod itself. The added benefit is that vCenter reports information about the container volumes in the vSphere client, with more or less detail depending on the version you are running. And this is what is called vSphere Cloud Native Storage.

vSphere cloud native storage lets you provision persistent volumes on vSphere storage

Now, in order to leverage vSphere cloud native storage, the CSI provider must be installed in the cluster. If you aren't sure or you are getting started with this, you can use CAPV or Tanzu Community Edition to fast track this step. Regardless, the configuration that instructs the CSI driver how to communicate with vCenter is contained in a Kubernetes secret (named csi-vsphere-config by default) that is mapped as a volume on the vSphere CSI controller. You can display the configuration of the CSI driver by reading that secret.

kubectl get secrets csi-vsphere-config -n kube-system -o jsonpath='{.data.csi-vsphere\.conf}'
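
The value returned by that jsonpath query is base64-encoded, as is all data stored in a Kubernetes secret. As a minimal sketch, assuming a Linux shell with the base64 utility available, you can decode it in the same command to view the plain-text vCenter connection settings:

# Secret data is base64-encoded; decode it to display the plain-text config
kubectl get secret csi-vsphere-config -n kube-system -o jsonpath='{.data.csi-vsphere\.conf}' | base64 -d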

 
The vSphere CSI driver communicates with vCenter to provision vSphere cloud native storage

vSphere cloud native storage features and benefits

Part of the job of an SRE (Site Reliability Engineer), or whatever title you give to the IT professional managing Kubernetes environments, is to work with storage provisioning. We are not talking about presenting iSCSI LUNs or FC zoning to infrastructure components here, we are working a level higher in the stack. The physical shared storage is already provisioned and we need a way to provide a backend for Kubernetes persistent volumes. vSphere Cloud native storage greatly simplifies this process with the ability to match vSphere storage policies with Kubernetes storage classes. That way when you request a PV in Kubernetes you get a virtual disk created directly on the datastore.

Note that these disks are not of the same type as traditional virtual disks that are created with virtual machines. This could be the topic of its own blog post but in a nutshell, those are called Improved Virtual Disk (IVD), First Class Disks (FCD) or managed virtual disk. This type is needed because it is a named virtual disk unassociated with a VM, as opposed to traditional disks that can only be provisioned by being attached to a VM.

The other benefit of using vSphere cloud native storage is better visibility of what’s being provisioned in a single pane of glass (a.k.a. vSphere web client). With vSphere CNS, you can view your container volumes in the vSphere UI and find out what VM (a.k.a. Kubernetes node) the volume is connected to along with extra information such as labels, storage policy… I will show you that part in a bit.

Note that support for vSphere CSI will depend on your environment and you may or may not be able to leverage it in full. This is obviously subject to change across versions so you can find the up to date list here.

Functionality – vSphere Container Storage Plug-in Support
vSphere Storage DRS – No
vSAN File Service on Stretched Cluster – No
vCenter Server High Availability – No
vSphere Container Storage Plug-in Block or File Snapshots – No
ESXi Cluster Migration Between Different vCenter Server Systems – No
vMotion – Yes
Storage vMotion – No
Cross vCenter Server Migration (moving workloads across vCenter Server systems and ESXi hosts) – No
vSAN, Virtual Volumes, NFS 3, and VMFS Datastores – Yes
NFS 4 Datastore – No
Highly Available and Distributed Clustering Services – No
vSAN HCI Mesh – No
VM Encryption – Yes
Thick Provisioning on Non-vSAN Datastores (for Virtual Volumes, it depends on capabilities exposed by third-party storage arrays) – No
Thick Provisioning on vSAN Datastores – Yes

A lot of features have been added over successive releases, such as:

  • Snapshot support for block volumes
  • Exposed metrics for Prometheus monitoring
  • Support for volume topology
  • Performance and resiliency improvements
  • Online volume expansion
  • vSphere Container Storage support on VMware Cloud on AWS (VMC)
  • ReadWriteMany volumes using vSAN file services
  • And others…
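
Since the exact feature set depends on the version of the vSphere Container Storage Plug-in running in your cluster, it can be worth checking what is actually deployed. This is a minimal sketch; the namespace and object names vary with how the driver was installed, so treat them as assumptions:

# List the CSI drivers registered in the cluster; the vSphere driver shows up as csi.vsphere.vmware.com
kubectl get csidrivers
# Locate the vSphere CSI controller and node pods (commonly in kube-system or vmware-system-csi)
kubectl get pods -A | grep -i vsphere-csi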

The transformation from VCP (vSphere Cloud Provider) to CSI (Container Storage Interface)

Originally, cloud provider-specific functionalities were integrated natively within the main Kubernetes tree, also called in-tree modules. Kubernetes is a fast-changing landscape with a community that strives to make the product scalable and as efficient as possible. The growing popularity of the platform meant more and more providers jumped on the train, which made this model hard to maintain and difficult to scale. As a result, vendor-specific functionalities must now be removed from the Kubernetes code and offered as out-of-tree plug-ins. That way, vendors can maintain their own software independently from the main Kubernetes repo.

This was the case with the in-tree vSphere Volume plugin that was part of the Kubernetes code, which is being deprecated and removed from future versions in favor of the current out-of-tree vSphere CSI driver. In order to simplify the shift from the in-tree vSphere volume plug-in to vSphere CSI, Kubernetes added a migration feature to provide a seamless procedure.

The migration will allow existing volumes using the in-tree vSphere Volume Plugin to continue to function, even when the code has been removed from Kubernetes, by routing all the volume operations to the vSphere CSI driver. If you want to know more, the procedure is described in this VMware blog.

vSphere cloud native storage includes additional and modern features with vSphere CSI driver compared to the in-tree vSphere volume plugin
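
If you are unsure which of your existing persistent volumes still rely on the legacy in-tree plugin, you can inspect the PersistentVolume objects themselves. As a rough check (assuming kubectl access to the cluster), CSI-provisioned PVs expose the driver name under .spec.csi, while in-tree vSphere volumes use a .spec.vsphereVolume section instead and therefore show an empty driver column here:

# PVs provisioned through the CSI driver show csi.vsphere.vmware.com; in-tree volumes show <none>
kubectl get pv -o custom-columns='NAME:.metadata.name,CSI-DRIVER:.spec.csi.driver,STORAGECLASS:.spec.storageClassName'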

vSAN Cloud Native Storage integration

I will demonstrate here how to provision vSphere cloud native storage on vSAN without going too much into the details. The prerequisite for this demonstration is a Kubernetes cluster running on a vSphere infrastructure with the vSphere CSI driver installed in it. If you want a head start and want to skip the step of installing the CSI driver, you can use CAPV or Tanzu Community Edition to deploy your Kubernetes cluster.

In order to use vSphere cloud native storage, we will create a Storage Class in our Kubernetes cluster that matches the vSAN storage policy, then create a Persistent Volume Claim using that Storage Class, attach it to a pod, and see how vCenter displays it in the vSphere client.

  • First, I create a Storage Class that matches the name of the vSAN storage policy which is “vSAN Default Storage Policy”. The annotation field means that PVCs will use this storage class unless specified otherwise. It will obviously depend on which vSAN storage policy you want to set as the default one.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-default-policy
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"

The storage class references the vSAN storage policy and the storage provisioner (vSphere CSI driver)
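
Assuming the manifest above is saved to a file (vsan-storageclass.yaml is just an example name), applying it and confirming it is registered as the default storage class only takes a couple of commands:

# Create the storage class, then verify it appears and is marked (default)
kubectl apply -f vsan-storageclass.yaml
kubectl get storageclass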

  • Then I create a persistent volume claim (PVC) that references the storage class. The storage request will be the size of the virtual disk backing the PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-altaro-blog
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: vsan-default-policy

The PVC creates a VMware CNS volume with a PV

  • You should now see a persistent volume provisioned by the PVC.

The PVC should automatically create a PV
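
You can confirm the same thing from the command line. As a quick check, the claim should report a Bound status and reference the dynamically created persistent volume:

# The PVC should show STATUS Bound, with the generated PV name in the VOLUME column
kubectl get pvc test-altaro-blog
kubectl get pv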

  • At this point you should see the vSphere cloud-native storage in the vSphere client by browsing to Cluster > Monitor > Container Volumes.

The volume name matches the name of the persistent volume claim, I also tagged it in Kubernetes to show how the tags are displayed in the vSphere client.

Cluster > Monitor > Container Volumes.

  • You can get details if you click on the icon to the left of the volume. You will find the Storage Policy, datastore and you’ll see that no VM is attached to it yet.

Storage Policy

  • In the Kubernetes objects tab, you will find information such as the namespace in use, the type of cluster…

Kubernetes objects tab

  • Then the Physical Placement tab shows you where the vSAN components backing this vSphere cloud-native storage are stored on the hosts.

Kubernetes

  • At this point the vSphere cloud native storage is created but it isn’t used by any pod in Kubernetes. I created a basic pod to consume the PVC.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-altaro
spec:
  volumes:
    - name: test-pv-altaro
      persistentVolumeClaim:
        claimName: test-altaro-blog
  containers:
    - name: test-cont-altaro
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: test-pv-altaro

 

Notice where the pod is scheduled, on node “test-clu-145-md-0-5966988d9d-s97vm”.

node
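
Assuming the pod manifest above was saved to a file such as test-pod.yaml (an example name), the same placement information is visible from kubectl once the pod has been created:

# Create the pod, then display the node it was scheduled on in the NODE column
kubectl apply -f test-pod.yaml
kubectl get pod test-pod-altaro -o wide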

  • At this point, the newly created pod gets the volume attached, and the vSphere client quickly shows the volume connected to the VM running the Kubernetes node where the pod is scheduled.

pod gets the volume attached

  • If you open the settings of said VM, you will find a disk attached which is the vSphere Cloud native storage created earlier.

vSphere Cloud native storage
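
You can also cross-check the attachment from the Kubernetes side: the CSI driver records it as a VolumeAttachment object that names both the persistent volume and the node (in this case, the VM) it is attached to:

# Lists CSI volume attachments; ATTACHED should show true for the node running the pod
kubectl get volumeattachments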

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Wrap up

Most IT pros will agree that the learning curve of Kubernetes is fairly steep, as it is a maze of components, plugins, and third-party products that can seem daunting at first. However, they will also agree that Kubernetes has been one of the fastest-growing technologies of the last 5 years. The big players in the tech industry have all jumped on the bandwagon and either developed their own product or added support or managed services for it somehow. VMware is one of them with its Tanzu portfolio, and vSphere Cloud native storage is a critical component of this stack, as it reduces complexity by offering vSphere storage to Kubernetes workloads. The cool thing about it is that it is easy to use thanks to the CSI driver plugin architecture and is tightly integrated with the vSphere web client for added visibility.

The post All You Need to Know about vSphere Cloud Native Storage (CNS) appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vsphere-cloud-native-storage/feed/ 0
Manage resources across sites with the VMware Content Library https://www.altaro.com/vmware/vmware-content-library/ https://www.altaro.com/vmware/vmware-content-library/#respond Fri, 05 Aug 2022 12:53:05 +0000 https://www.altaro.com/vmware/?p=24625 Publish and synchronize resources such as virtual machine templates, OVF files, ISO images, and others across your vSphere environment.

The post Manage resources across sites with the VMware Content Library appeared first on Altaro DOJO | VMware.

]]>

A VMware vSphere environment includes many components to deliver business-critical workloads and services. However, there is a feature of today’s modern VMware vSphere infrastructure that is arguably underutilized – the VMware Content Library. Nevertheless, it can be a powerful tool that helps businesses standardize the workflow using files, templates, ISO images, vApps, scripts, and other resources to deploy and manage virtual machines. So how can organizations manage resources across sites with the VMware Content Library?

What is the VMware Content Library?

Most VI admins will agree that, with multiple vCenter Servers in the mix, managing files, ISOs, templates, vApps, and other resources can be challenging. For example, have you ever been working on one cluster and realized you didn't have the ISO image copied to an accessible local datastore, and you had to "sneakernet" the ISO to where you could mount and install it? What about virtual machine templates? What if you want to have the virtual machine templates in one vCenter Server environment available to another vCenter Server environment?

The VMware Content Library is a solution introduced in vSphere 6.0 that allows customers to keep their virtual machine resources synchronized in one place and prevent the need for manual updates to multiple templates and copying these across between vCenter Servers. Instead, administrators can create a centralized repository using the VMware Content Library from which resources can be updated, shared, and synchronized between environments.

Using the VMware Content Library, you essentially create a container that can house all of the important resources used in your environment, including VM-specific objects like templates and other files like ISO image files, text files, and other file types.

The VMware Content Library stores the content as a “library item.” Each VMware Content Library can contain many different file types and multiple files. VMware gives the example of the OVF file that you can upload to your VMware Content Library. As you know, the OVF file is a bundle of multiple files. However, when you upload the OVF template, you will see a single library entry.

VMware has added some excellent new features to the VMware Content Library features in the past few releases. These include the ability to add OVF security policies to a content library. The new OVF security policy was added in vSphere 7.0 Update 3. It allows implementing strict validation for deploying and updating content library items and synchronizing templates. One thing you can do is make sure a trusted certificate signs the templates. To do this, you can deploy a signing certificate for your OVFs from a trusted CA to your content library.

Another recent addition to the VMware Content Library functionality introduced in vSphere 6.7 Update 1 is uploading a VM template type directly to the VMware Content Library. Previously, VM templates were converted to an OVF template type. Now, you can work directly with virtual machine templates in the VMware Content Library.

VMware Content Library types

VMware Content Library enables managing resources across sites using two different types of content libraries. These include the following:

    • Local Content Library – A local content library is a VMware Content Library used to store and manage content residing in a single vCenter Server environment. Suppose you work in a single vCenter Server environment and want to have various resources available across all your ESXi hosts to deploy VMs, vAPPs, install from ISO files, etc. In that case, the local content library allows doing that. With the local content library, you can choose to Publish the local content library. When you publish the Content Library, you are making it available to be subscribed to or synchronized.
    • Subscribed Content Library – The other type of Content Library is the subscribed content library. When you add a subscribed VMware Content Library type, you are essentially downloading published items from a VMware Content Library type that has published items as mentioned in the Local Content Library section. In this configuration, you are only a consumer of the VMware Content Library that someone else has published. It means when creating the Content Library, the publish option was configured. You can’t add templates and other items to the subscribed VMware Content Library type as you can only synchronize the content of the subscribed Content Library with the content of the published Content Library.
      • With a subscribed library, you can choose to download all the contents of the published Content Library immediately once the subscribed Content Library is created. You can also choose to download only the metadata for items in the published Content Library and download the entire contents of the items you need. You can think of this as a “files on-demand” type feature that only downloads the resources when these are required.

Below is an example of the screen when configuring a content library that allows creating either a Local Content Library or the Subscribed Content Library:

Choosing the content library type

Create a local or subscribed Content Library in vSphere 7

Creating a new VMware Content Library is a relatively straightforward and intuitive process you can accomplish in the vSphere Client. Let’s step through the process to create a new VMware Content Library. We will use the vSphere Web Client to manage and configure the Content Library Settings.

Using the vSphere Web Client to manage the Content Library

First, click the upper left-hand “hamburger” menu in the vSphere Client. You will see the option Content Libraries directly underneath the Inventory menu when you click the menu.

Choosing the Content Libraries option to create and manage Content Libraries

Under the Content Libraries screen, you can Create new Content Libraries.

Creating a new Content Library in the vSphere Client

It will launch the New Content Library wizard. In the Name and Location screen, name the new VMware Content Library.

New Content Library name and location

On the Configure content library step, you configure the content library type, including configuring a local content library or a subscribed content library. Under the configuration for Local content library, you can Enable publishing. If publishing is enabled, you can also enable authentication.

Configuring the Content Library type

When you configure publishing and authentication, you can configure a password on the content library.

Apply security policy step

Step 3 is the Apply security policy step. It allows applying the OVF default policy to protect and enforce strict validation while importing and synchronizing OVF library items.

Choosing to apply the OVF default policy

The VMware Content Library needs to have a storage location that will provide the storage for the content library itself. First, select the datastore you want to use for storing your content library. The beauty of the content library is that it essentially publishes and shares the items in the content library itself, even though they may be housed on a particular datastore.

Select the storage to use for storing items in the VMware Content Library

Finally, we are ready to complete the creation of the Content Library. Click Finish.

Finishing the creation of the VMware Content Library

Once the VMware Content Library is created, you can see the details of the library, including the Publication section showing the Subscription URL.

Viewing the settings of a newly created VMware Content Library

As a note, if you click the Edit Settings hyperlink under the Publication settings pane, you can go in and edit the settings of the Content Library, including the publishing options, authentication, changing the authentication password, and applying a security policy.

Editing the settings of a VMware Content Library

Creating a subscribed VMware Content Library

As we mentioned earlier, configuring a subscribed content library means synchronizing items from a published content library. In the New Content Library configuration wizard, you choose the Subscribed content library option to synchronize with a published content library. Then, enter the subscription URL for the published content library when selected. As shown above, this URL is found in the settings of the published content library.

You will need to also place a check in the Enable authentication setting if the published content library was set up with authentication. Then, enter the password configured for the published content library. Also, note the configuration for downloading content. As detailed earlier, you can choose to synchronize items immediately, meaning the entire content library will be fully downloaded. Or, you can select when needed, which acts as a “files on demand” configuration that only downloads the resources when needed.

Configuring the subscribed content library

Choose the storage for the subscribed Content Library.

Add storage for the subscribed VMware Content Library

Ready to complete adding a new subscribed VMware Content Library. Click Finish.

Ready to complete adding a subscribed VMware Content Library

Interestingly, you can add a subscribed VMware Content Library that is subscribed to the same published VMware Content Library on the same vCenter Server.

Published and subscribed content library on the same vCenter Server

What is Check-In/Check-Out?

A new feature included with VMware vSphere 7 is versioning with the VMware Content Library. Virtual machine templates are frequently changed, updated, and reconfigured. As a result, it can be easy to lose track of the changes made and the user making the modifications, and difficult to track the changes efficiently.

Now, VMware vSphere 7 provides visibility into the changes made to virtual machine templates with a new check-in/check-out process. This change embraces DevOps workflows with a way for IT admins to check in and check out virtual machine templates in and out of the Content Library.

Before the new check-in/check-out feature, VI admins might use a process similar to the following to change a virtual machine template:

    1. Convert a virtual machine template to a virtual machine
    2. Place a snapshot on the VM converted from the template
    3. Make whatever changes are needed to the VM
    4. Power the VM off and convert it back to a template
    5. Re-upload the VM template back to the Content Library
    6. Delete the old template
    7. Internally notify other VI admins of the changes

Now, VI admins can use a new capability in vSphere 7.0 and higher to make changes to virtual machine templates more seamlessly and track those changes effectively.

Clone as template to Library

The first step is to house the virtual machine template in the Content Library. Right-click an existing virtual machine to use the new functionality and select Clone as Template to Library.

Clone as Template to Library functionality to use the check-in and check-out feature

As a note, if you see the Clone to Library functionality instead of Clone as Template to Library, it means you have not converted the VM template to a virtual machine. If you right-click a VM template, you only get the Clone to Library option. If you select Clone to Template, it only allows cloning the template in a traditional way to another template on a datastore.

Right-clicking and cloning a VM template only gives the option to Clone to Library

Continuing with the Clone to Library process, you will see the Clone to Template in Library dialog box open. Select either New template or Update the existing template.

Clone to Template in Library

In the vCenter Server tasks, you will see the process begin to Upload files to a Library and Transfer files.

Uploading a virtual machine template to the Content Library

When you right-click a virtual machine and not a virtual machine template, you will see the additional option of Clone as Template to Library.

Clone as Template to Library

It then brings up a more verbose wizard for the Clone Virtual Machine To Template process. The first screen is the Basic information where you define the Template type (can be OVF or VM Template), the name of the template, notes, and select a folder for the template.

Configuring basic information for the clone virtual machine to template process

On the Location page, you select the VMware Content Library you want to use to house the virtual machine template.

Select the VMware Content Library to house the virtual machine template

Select a compute resource to house your cloned VM template.

Select the compute resource for the virtual machine template

Select the storage for the virtual machine template.

Select storage to house the VM template

Finish the Clone Virtual Machine to Template process.

Finish the clone of the virtual machine to template in the VMware Content Library

If you navigate to the Content Library, you will see the template listed under the VM Templates in the Content Library.

Viewing the VM template in the Content Library

Checking templates in and out

If you select the radio button next to the VM template, the Check Out VM From This Template button will appear to the right.

Launching the Check out VM from this template

When you click the button, it will launch the Check out VM from VM Template wizard. First, name the new virtual machine that will be created in the check-out process.

Starting the Check out VM from VM template

Select the compute resource to house the checked-out virtual machine.

Selecting a compute resource

Review and finish the Check out VM from VM template process. You can select to power on VM after check out.

Review and Finish the Check out VM from VM Template

The checked-out virtual machine will be cloned from the existing template in the Content Library. Also, you will see an audit trail of the check-outs in the Content Library. You are directed to navigate to the checked-out VM to make updates. Note that you then have the Check In VM to Template button available.

Virtual machine template is checked out and deployed as a virtual machine in inventory

If you navigate to the Inventory view in the vSphere Client, you will see the machine has a tiny blue dot in the lower left-hand corner of the virtual machine icon.

Viewing the checked-out VM template as a virtual machine in vSphere inventory

After making one small change, such as changing the virtual network the virtual machine is connected to, we see the option appear to Check In VM to Template.

Check In VM to Template

It will bring up the Check In VM dialog box, allowing you to enter notes and then click the Check In button.

Check In the VM

We see the audit trail of changes reflected in the Content Library with the notes we entered in the Check in notes.

Virtual machine template checked back in with the notes entered in the check-in process

You will also see a new Versioning tab displayed when you view the virtual machine template in the inventory view.

Viewing the versioning of a virtual machine template in the inventory view

VMware Content Library Roles

There are various privileges related to the Content Library. VMware documents the following privileges that can be assigned to a custom VMware Content Library role.

Privilege Name – Description – Required On
Content library.Add library item – Allows addition of items in a library. – Library
Content library.Add root certificate to trust store – Allows addition of root certificates to the Trusted Root Certificates Store. – vCenter Server
Content library.Check in a template – Allows checking in of templates. – Library
Content library.Check out a template – Allows checking out of templates. – Library
Content library.Create a subscription for a published library – Allows creation of a library subscription. – Library
Content library.Create local library – Allows creation of local libraries on the specified vCenter Server system. – vCenter Server
Content library.Create or delete a Harbor registry – Allows creation or deletion of the VMware Tanzu Harbor Registry service. – vCenter Server for creation; Registry for deletion
Content library.Create subscribed library – Allows creation of subscribed libraries. – vCenter Server
Content library.Create, delete or purge a Harbor registry project – Allows creation, deletion, or purging of VMware Tanzu Harbor Registry projects. – Registry
Content library.Delete library item – Allows deletion of library items. – Library (set this permission to propagate to all library items)
Content library.Delete local library – Allows deletion of a local library. – Library
Content library.Delete root certificate from trust store – Allows deletion of root certificates from the Trusted Root Certificates Store. – vCenter Server
Content library.Delete subscribed library – Allows deletion of a subscribed library. – Library
Content library.Delete subscription of a published library – Allows deletion of a subscription to a library. – Library
Content library.Download files – Allows download of files from the content library. – Library
Content library.Evict library item – Allows eviction of items. The content of a subscribed library can be cached or not cached. If the content is cached, you can release a library item by evicting it if you have this privilege. – Library (set this permission to propagate to all library items)
Content library.Evict subscribed library – Allows eviction of a subscribed library. The content of a subscribed library can be cached or not cached. If the content is cached, you can release a library by evicting it if you have this privilege. – Library
Content library.Import Storage – Allows a user to import a library item if the source file URL starts with ds:// or file://. This privilege is disabled for the content library administrator by default. Because an import from a storage URL implies import of content, enable this privilege only if necessary and if no security concern exists for the user who performs the import. – Library
Content library.Manage Harbor registry resources on specified compute resource – Allows management of VMware Tanzu Harbor Registry resources. – Compute cluster
Content library.Probe subscription information – Allows solution users and APIs to probe a remote library's subscription info, including URL, SSL certificate, and password. The resulting structure describes whether the subscription configuration is successful or whether there are problems such as SSL errors. – Library
Content library.Publish a library item to its subscribers – Allows publication of library items to subscribers. – Library (set this permission to propagate to all library items)
Content library.Publish a library to its subscribers – Allows publication of libraries to subscribers. – Library
Content library.Read storage – Allows reading of content library storage. – Library
Content library.Sync library item – Allows synchronization of library items. – Library (set this permission to propagate to all library items)
Content library.Sync subscribed library – Allows synchronization of subscribed libraries. – Library
Content library.Type introspection – Allows a solution user or API to introspect the type support plug-ins for the content library service. – Library
Content library.Update configuration settings – Allows you to update the configuration settings. No vSphere Client user interface elements are associated with this privilege. – Library
Content library.Update files – Allows you to upload content into the content library. Also allows you to remove files from a library item. – Library
Content library.Update library – Allows updates to the content library. – Library
Content library.Update library item – Allows updates to library items. – Library (set this permission to propagate to all library items)
Content library.Update local library – Allows updates of local libraries. – Library
Content library.Update subscribed library – Allows you to update the properties of a subscribed library. – Library
Content library.Update subscription of a published library – Allows updates of subscription parameters. Users can update parameters such as the subscribed library's vCenter Server instance specification and placement of its virtual machine template items. – Library
Content library.View configuration settings – Allows you to view the configuration settings. No vSphere Client user interface elements are associated with this privilege. – Library

 

Advanced Content Library settings

Several advanced configuration settings are configurable with the VMware Content Library. You can get to these by navigating to Content Libraries > Advanced.

Content Library advanced settings

These include the following settings as detailed by VMware:

Configuration Parameter – Description
Library Auto Sync Enabled – Enables automatic synchronization of subscribed content libraries.
Library Auto Sync Refresh Interval (minutes) – The interval between two consecutive automatic synchronizations of the subscribed content library, measured in minutes.
Library Auto Sync Setting Refresh Interval (seconds) – The interval after which the refresh interval for the automatic synchronization settings of the subscribed library is updated if it has been changed, measured in seconds. A change in the refresh interval requires a restart of vCenter Server.
Library Auto Sync Start Hour – The time of day when the automatic synchronization of a subscribed content library begins.
Library Auto Sync Stop Hour – The time of day when the automatic synchronization of a subscribed content library stops. Automatic synchronization stops until the start hour.
Library Maximum Concurrent Sync Items – The maximum number of items concurrently synchronizing for each subscribed library.
Max concurrent NFC transfers per ESX host – The maximum number of concurrent NFC transfers per ESXi host.
Maximum Bandwidth Consumption – The bandwidth usage threshold, measured in Mbps across all transfers, where 0 means unlimited bandwidth.
Maximum Number of Concurrent Priority Transfers – The concurrent transfer limit for priority files. Transfers are queued if the limit is exceeded. This thread pool is used only to transfer priority objects, such as OVF files. If you change the concurrent transfer limit for priority files, you must restart vCenter Server.
Maximum Number of Concurrent Transfers – The concurrent transfer limit. When exceeded, transfers are queued. If you change the concurrent transfer limit, a restart of vCenter Server is required.

 

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Wrapping up

The VMware Content Library provides a centralized repository that allows keeping required file resources, virtual machine templates, ISO images, vApps, and other files synchronized and available across the vSphere datacenter. In vSphere 7, the Content Library gives organizations a better way to keep up with and track changes to virtual machine templates. Using the new check-in/check-out process, VI admins can track changes made with each check-out and ensure these are documented and synchronized back to the Content Library.

It effectively provides a solution to remove the need to copy files between ESXi hosts or vSphere clusters and have what you need to install guest operating systems or deploy virtual machine templates. In addition, the subscribed Content Library allows synchronizing vCenter Server content libraries so that many other vCenter Servers can take advantage of the files already organized in the published Content Library.

The VMware Content Library is one of the more underutilized tools in the VI admin's toolbelt, and it can bring advantages in workflow, efficiency, and time spent finding and organizing files for deploying VMs and operating systems. In addition, the recent feature additions and improvements, such as check-ins/check-outs, have provided a more DevOps-style approach to tracking and working with deployment resources.

The post Manage resources across sites with the VMware Content Library appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-content-library/feed/ 0
4 Powerful VMware vSAN Blueprints for the SMB https://www.altaro.com/vmware/vsan-blueprints-smb/ https://www.altaro.com/vmware/vsan-blueprints-smb/#respond Fri, 29 Apr 2022 15:26:18 +0000 https://www.altaro.com/vmware/?p=24236 From 2-nodes to stretched clusters, find the architecture that best fits your organization’s size and needs with these four VMware vSAN blueprints.

The post 4 Powerful VMware vSAN Blueprints for the SMB appeared first on Altaro DOJO | VMware.

]]>

As many organizations enter a hardware refresh cycle, they must consider both the hardware and technologies available as they plan to purchase new hardware and provision new environments. Many businesses are now looking at software-defined storage as one of their first choices for backing storage for their enterprise data center and virtualization solutions.

VMware has arguably maintained an edge over the competition across its portfolio of solutions for the enterprise datacenter, including software-defined storage. VMware vSAN is the premier software-defined storage solution available to organizations today and offers many excellent capabilities. It is well suited for organizations of all sizes, ranging from small businesses and edge locations to large enterprise customers.

What is VMware vSAN?

Before looking at the available configuration options for deploying VMware vSAN to suit the needs of various deployments, let’s first look at the VMware vSAN technology itself. What is it, and how does it work? VMware vSAN is a software-defined enterprise storage solution that enables businesses to implement what is known as hyper-converged infrastructure (HCI).

Instead of the traditional 3-2-1 configuration where organizations have (3) hypervisor hosts, (2) storage switches, and (1) storage area network device (SAN), VMware vSAN enables pooling locally attached storage in each VMware ESXi host as one logical volume. In this way, there is no separately attached storage device providing storage to the vSphere cluster.

VMware vSAN software-defined storage solution

This feature is key to implementing hyper-converged infrastructure where compute, storage, and networking are "in the box." This more modern technology model for implementing virtualization provides many additional benefits, including the following:

    • Control of storage within vSphere – With vSAN, VI admins can provision and manage storage from within the vSphere client, without the need for the storage team. Traditional 3-2-1 architectures may rely on the storage team to configure and allocate storage for workloads, slowing down the workflow for turning up new resources.
    • Easy scalability – With vSAN, businesses can easily scale up and scale out storage by adding additional diskgroups and vSAN hosts
    • Automation – Using PowerCLI, organizations can fully automate vSAN storage alongside other automated tasks
    • Use vSphere policy-based management for storage – VMware’s software-defined policies allow storage to be controlled and governed in a granular way
    • Optimized for flash storage – VMware vSAN is optimized for flash storage. Businesses can run performance and latency-sensitive workloads while benefiting from all the other vSAN capabilities using all-flash vSAN.
    • Easy stretched clusters for zero data loss failovers and failbacks – The vSAN stretched Cluster provides an excellent option for site-level resiliency and workloads that require as little downtime as possible
    • Disaggregated compute and storage resources – Using vSAN HCI Mesh, compute and storage can be disaggregated and ensure that free storage is not landlocked within a particular cluster. Other clusters can take advantage of available storage found in a different vSAN cluster.
    • Integrated file services – Organizations can run critical file services on top of vSAN
    • iSCSI access to vSAN storage – VMware vSAN provides iSCSI storage that can be used for many different use cases, including Windows Server Failover Clusters
    • Two-node direct connect – Small businesses may have a limited budget for running virtualization clusters. The two-node direct connect option allows connecting two vSAN hosts in the two-node configuration without a network switch in between. This configuration is also a great option in remote office/branch office (ROBO) and edge environments.
    • Provides Data Persistence platform for modern workloads – VMware vSAN provides the tools needed for allocating storage for modern workloads and includes a growing ecosystem of third-party plugins

Pricing

VMware vSAN is a paid product license from VMware. The licensed editions include:

    • Standard
    • Enterprise – Adds Data-at-rest and data-in-transit encryption, stretched Cluster with local failure protection, file services, VMware HCI Mesh, Data Persistence platform for modern stateful services
    • Enterprise Plus – Adds vRealize Operations 8 Advanced

In addition to the paid licenses, VMware includes a vSAN license for free as part of other paid solutions, such as VMware Horizon Advanced and Enterprise.

Understanding Availability and vSAN Storage Policies Failures to Tolerate (FTT)

One of the primary requirements for understanding and designing your vSAN server clusters is how vSAN handles failures and maintains the availability of your data. It relies on the vSAN Storage Policies to determine these settings. It uses the term Failures to tolerate (FTT) to help understand the number of failures any vSAN configuration can handle and still maintain access to your data.

Failures to tolerate defines the number of host and device failures that a virtual machine can tolerate. Customers can choose anything from no data redundancy all the way to RAID-5/6 erasure coding or RAID-1 mirroring that tolerates up to three failures. VMware's stance on mirroring vs. RAID-5/6 erasure coding is that mirroring provides better performance, while erasure coding provides more efficient space utilization.

    • ***Note*** – It is essential to understand that a VM storage policy with FTT = 0 (No Data Redundancy) is not supported and can lead to data loss. This policy setting is not something you would configure for a production cluster.

There are two formulas to note for understanding the needed number of data copies and hosts with VMware vSAN VM storage policies and availability.

    • n equals the number of failures tolerated
    • Number of data copies required = n + 1
    • Number of hosts contributing storage required = 2n + 1

Applying these formulas, tolerating two failures (n = 2) with mirroring, for example, requires three copies of the data and five hosts contributing storage. The resulting configurations are as follows:

RAID Configuration – Failures to Tolerate (FTT) – Minimum Hosts Required
RAID-1 (Mirroring), the default setting – 1 – 2
RAID-5 (Erasure Coding) – 1 – 4
RAID-1 (Mirroring) – 2 – 5
RAID-6 (Erasure Coding) – 2 – 6
RAID-1 (Mirroring) – 3 – 7

vSAN Blueprints for the SMB

With the various VM Storage Policies and host configurations, VMware vSAN provides a wealth of configuration options, scalability, and flexibility to satisfy many different use cases and customer needs from small to large. Let’s consider some different VMware vSAN blueprints for implementing VMware vSAN storage solutions and which configurations and blueprints fit individual use cases.

We will consider:

    1. VMware vSAN 2-node and direct connect configuration
    2. VMware vSAN 3-node configuration
    3. VMware vSAN 4/5/6-node configuration
    4. VMware vSAN stretched Clustering

Blueprint #1 – VMware vSAN 2-node configuration

As mentioned earlier, the VMware vSAN two-node configuration is an excellent option for organizations looking to satisfy specific use cases. The beauty of the VMware vSAN two-node configuration is its simplicity and efficient use of hardware resources, which allows businesses to run business-critical workloads in remote sites, ROBO configurations, and edge locations without needing a rack full of hardware resources.

The VMware vSAN 2-node configuration is a specialized version of a vSAN stretched Cluster, including the three fault domains. Each ESXi node in the 2-node configuration comprises a fault domain. Then the witness host appliance, which runs as a virtual machine, comprises the third fault domain. As a side note, the ESXi witness host virtual machine appliance is the only production-supported, nested installation of VMware ESXi.

The witness host is a specialized ESXi server running as a virtual machine in a different VMware vSphere environment. Currently, the witness host is only supported to run in another vSphere environment. However, there have been rumblings that VMware will support running the ESXi witness node appliance in other environments such as in Amazon AWS, Microsoft Azure, or Google GCP. If this happens in a future vSAN release, customers can architect their 2-node configurations with even more flexibility.

One of the great new features around the witness node appliance, released in vSphere 7, is the new ability to use a single witness node appliance to house the witness node components for multiple vSAN 2-node clusters. Introducing this feature allows organizations to reduce the complexity and resources needed for the witness components since these can be shared between multiple clusters. Up to 64 2-node clusters can share a single witness appliance.

Primary Failures to Tolerate (PFTT)

From a data perspective, the witness host never holds data objects, only witness components that are minimal in size since these are basically small metadata files. The two physical ESXi hosts that comprise the 2-node cluster store the data objects.

With the vSAN 2-node deployment, data is stored in a mirrored data protection configuration. It means that one copy of your data is stored on the first ESXi node, and another copy is stored on the second ESXi node in the 2-node Cluster. The witness component is, as described above, stored on the witness host appliance.

In 2-node clusters running vSAN 6.6 or higher, PFTT may not be greater than 1. This is because 2-node clusters contain only three fault domains.

Enhancements with vSAN 7 Update 3

VMware vSAN 7 Update 3 introduces the ability to provide a secondary level of resilience in a two-node cluster. If each host runs more than one disk group, the cluster can tolerate multiple failures. With vSAN 7 Update 3, you can suffer a complete host failure, a subsequent failure of the witness, and a disk group failure. That is three major failures in a two-node cluster.

VMware vSAN 2-node configuration

Direct Connect

The two nodes are configured in a single site location and in most cases, will be connected via a network switch between them. However, in the “Direct Connect” configuration, released with vSAN 6.5, the ESXi hosts that comprise the 2-node vSAN cluster configuration can be connected using a simple network cable connection, hence, directly connected. This capability even further reduces the required hardware footprint.

With new server hardware, organizations can potentially have two nodes with 100 Gbit connectivity, using the direct connect option and 100 Gbit network adapters, all without network switching infrastructure in between. By default, the 2-node vSAN witness node needs to have connectivity with each of the vSAN data node’s VMkernel interfaces tagged with vSAN traffic.

However, VMware introduced a specialized command-line tool that allows specifying an alternate VMkernel interface designated to carry traffic destined for the witness node, separate from the vSAN tagged VMkernel interface. This configuration allows for separate networks between node-to-node and node-to-witness traffic.
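A hedged sketch of that configuration using the esxcli vsan network namespace exposed through PowerCLI is shown below. The host name and vmk1 interface are placeholders, and the exact argument key names can be confirmed with CreateArgs() before invoking.

    # Tag a dedicated VMkernel interface for witness traffic on one of the data nodes
    $esxcli   = Get-EsxCli -VMHost (Get-VMHost 'esx01.lab.local') -V2
    $vsanArgs = $esxcli.vsan.network.ip.add.CreateArgs()
    $vsanArgs.interfacename = 'vmk1'
    $vsanArgs.traffictype   = 'witness'
    $esxcli.vsan.network.ip.add.Invoke($vsanArgs)

    # Roughly equivalent ESXi shell command: esxcli vsan network ip add -i vmk1 -T=witness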

VMware vSAN 2-node Direct Connect configuration

Use case

The 2-node and 2-node Direct Connect configurations are excellent choices for SMB customers who need to conserve cost while maintaining a high level of availability. It is also a perfect choice for organizations of any size that need to place a vSAN cluster in an edge environment with minimal hardware requirements.

Blueprint #2 – VMware vSAN 3-node configuration

The next configuration we want to cover is the 3-node vSAN cluster host design. It is a more traditional design of a vSAN cluster. It provides an “entry-level” configuration for vSAN since it requires the least hardware outside the two-node design.

With a 3-node vSAN cluster design, you do not set up stretched Clustering. Instead, each host comes into the vSAN Cluster as a standalone host. Each is considered its own fault domain. With a 3-node vSAN configuration, you can tolerate only one host failure with the number of failures to tolerate set to 1. With this configuration, VMware vSAN saves two replicas to different hosts. Then, the witness component is saved to the third host.

A 3-node vSAN cluster default fault domains

If you attempt to configure a vSAN 3-node cluster with RAID-5/6 (Erasure coding), you will see the following error when assigning the RAID-5 erasure coding policy:

    • "Datastore does not match current VM policy. Policy specified requires 4 fault domains contributing all-flash storage, but only 3 found."

Configuring the VM Storage Policy on a 3-node cluster to RAID-5 or RAID-6

The same is true if you change to the 3 failures – RAID-1 (Mirroring) storage policy: once you select that configuration, the vSAN datastore will no longer show as compatible.

Viewing compatible datastores with the 3 failures – RAID-1 mirroring configuration

When you consider that you only have 3 nodes, creating additional mirrored copies of your data between the three hosts would not provide any additional resiliency. If you lose a single host, any extra copies stored on that host are lost along with it.

Limitations of the 3-node vSAN cluster configuration

There are limitations to note with the 3-node vSAN Cluster configuration that can create operational constraints and expose you to data loss during a failure. These include:

    • During a failure scenario of a host or other component, vSAN cannot rebuild data on another host or protect your data from another failure.
    • Another consideration with the 3-node Cluster is planned maintenance. When you need to take a host down, vSAN cannot evacuate data from the host to maintain policy compliance due to the limited number of nodes. Therefore, when entering maintenance mode on a 3-node cluster, you can only select the Ensure accessibility data evacuation option.
    • If you are in a situation where you already have a 3-node cluster with an inaccessible host or disk group, and you have another failure, your VMs will be inaccessible.
    • It is also worth considering that you will not be able to create a new snapshot on virtual machines running in a 3-node vSAN cluster when a host is down, even for planned maintenance. Snapshots require the VM storage policy to be met, so you run into a situation where not all the components needed to take a snapshot are available. This has a real-world impact: even backup jobs that run against a 3-node vSAN cluster with a host down will fail until the host is brought back online and all data components are available again.

Use case

What is the use case for the 3-node vSAN Cluster? The 3-node vSAN Cluster provides an entry-level cluster configuration that uses minimal resources and keeps the two data components and the witness component all in the same Cluster. Furthermore, it does this without needing the specialized external witness appliance.

However, with the 3-node Cluster, you don’t benefit from a “Direct Connect” configuration as you do with the 2-node Cluster. So, for the most part, while the 3-node vSAN cluster configuration is supported, it is not a recommended configuration. Outside of the 2-node vSAN cluster option, VMware recommends using at least a 4-node Cluster, as it provides the additional resources needed to perform maintenance operations without falling out of policy compliance.

A 3-node cluster is still an excellent option for businesses that need to have the resiliency, flexibility, and capabilities offered by vSAN without large hardware investments. As long as customers understand the limitations of the 3-node configuration, it can run production workloads with a reasonable degree of high availability.

Blueprint #3 – VMware vSAN 4/5/6-node configuration

Moving past the 3-node vSAN cluster configuration, you get into the vSAN Cluster configurations that offer the most resiliency for your data, performance, and storage space efficiency.

4-node vSAN Cluster

Starting with the 4-node vSAN Cluster configuration, customers have the recommended configuration of VMware vSAN implemented in their environments outside of the 2-node stretched Cluster. As mentioned, the 3-node vSAN Cluster provides the minimum number of hosts in the standard cluster configuration (non-2-node Cluster). With the 3-node configuration, as detailed earlier, when a host or disk group is down, vSAN is operating in a degraded state where you cannot withstand any other failures, and you are impacted operationally (backups can’t complete, etc.).

Moving to the 4-node configuration, you are not subject to the same constraints as the 3-node Cluster. You can continue to operate in a normal state as far as data availability when you have a host or disk group down. Operationally, you can still create snapshots, perform backups, and do other tasks.

In a 4-node vSAN Cluster, if you have a node or disk group fail or need to take a host down for maintenance, vSAN can perform “self-healing” and reprotect the data objects using the available disk space in the remaining three nodes of the Cluster. What do we mean by “self-healing?”

With vSAN, you always have the primary copy of the data, a secondary copy of the data, and finally, a witness component. These are always placed on separate hosts. In a 4-node vSAN cluster, you have enough hosts that when a node or component fails, there is another host on which the missing component can be rebuilt. Whereas in the 3-node cluster you are waiting on the third host to become available once again, in the 4-node cluster you have an extra host on which to immediately begin rebuilding data components or migrating data (in the case of a planned maintenance mode event).

In the 4-node configuration, you also can use the “Full data migration” option when placing hosts in maintenance mode to meet the FTT=1 requirement.
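As a rough illustration, the same operation can be driven from PowerCLI. The host name is a placeholder, and the -VsanDataMigrationMode parameter and its values (Full, EnsureAccessibility, NoDataMigration) should be confirmed with Get-Help Set-VMHost on your PowerCLI version.

    # Place a host into maintenance mode and fully evacuate its vSAN data to the remaining nodes
    Get-VMHost 'esx02.lab.local' | Set-VMHost -State Maintenance -VsanDataMigrationMode Full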

Choosing the vSAN data migration option

4-node vSAN introduces Erasure Coding

One of the other advantages of the 4-node vSAN Cluster compared to the 3-node Cluster is using Erasure Coding instead of data mirroring. What is Erasure Coding?

    • Erasure coding is a way to protect data by fragmenting it across physical boundaries in a way that maintains access to the data even if part of it is lost. In the case of VMware vSAN, the object storage stripes the data set, with parity, across multiple hosts.

With RAID-5 erasure coding, 4 components (3 data and 1 parity) are created for the dataset and spread across the hosts in the Cluster. For this reason, in a 4-node vSAN cluster you can only assign the RAID-5 erasure coding storage policy, not the RAID-6 storage policy. The RAID-6 storage policy requires at least 6 hosts in a vSAN cluster.

RAID-5 data placement

What is the advantage of using erasure coding instead of data mirroring? The erasure coding storage policies provide more efficient storage space utilization on the vSAN datastore. Whereas RAID-1 mirroring requires twice the capacity for a single data set, RAID-5 erasure coding requires only 1.33x the capacity for FTT=1. This results in significant space savings compared to the data mirroring policies.

    • FTT=1
    • 1.33x capacity compared to 2x with RAID-1
    • Requires 4 hosts

However, there is a tradeoff between more efficient space utilization and performance. Erasure coding introduces storage I/O amplification on writes, not reads: the current data and parity must be read and merged, and new parity written.

RAID-5/6 Erasure Coding Improvements in vSAN 7.0 Update 2

VMware has been working hard to improve the performance across the board with each release of VMware vSAN. One of the improvements noted with the release of VMware vSAN 7.0 Update 2 is performance enhancements related to RAID-5/6 erasure coding.

As mentioned above, part of the performance degradation with erasure coding is related to the I/O amplification around writing parity information. The new improvements in vSAN 7.0 Update 2 relate to how vSAN performs the parity calculation and optimizes how vSAN reads old data and parity calculations.

VMware has optimized the calculations without changing the data structures used in vSAN’s erasure codes. With the new optimizations, workloads with large sequential write bursts will see a performance benefit when using the RAID-5 and RAID-6 storage policies. In addition, CPU cycles per I/O are reduced, benefiting the virtual machines using the vSAN datastore. Finally, RAID-6 operations benefit even more than RAID-5 due to the greater overall amplification of RAID-6 compared to RAID-5.

Failures to tolerate (FTT)

As noted above, if you assign the RAID-5 erasure coding policy and one of the hosts in a 4-node vSAN Cluster fails, you are in a degraded state with no opportunity for a “self-healing” rebuild. It is similar to the 3-node vSAN cluster with the data mirroring policy. Once a failure happens in a 4-node vSAN Cluster running RAID-5 erasure coding, there is no spare hardware available to recreate the objects immediately.

Use Cases – 4-node vSAN Cluster

The 4-node vSAN Cluster is the recommended vSAN configuration for the FTT=1 mirror as it provides the ability to recover using self-healing. However, the 4-node Cluster is not the recommended configuration for RAID-5 erasure coding as you need at least a 5-node cluster for self-healing RAID-5 erasure coding.

5-node vSAN Cluster

The 5-node vSAN Cluster adds the advantage of having the extra host needed to provide immediate “self-healing” with RAID-5 erasure coding. With the 5-node vSAN Cluster configuration, you still don’t have the number of nodes needed for RAID-6 erasure coding since RAID-6 requires a minimum 6-node vSAN configuration.

6-node vSAN Cluster

The 6-node vSAN Cluster is required for RAID-6 erasure coding. With RAID-6 erasure coding, Failures to tolerate is 2. It uses 1.5x the storage space, compared to the 3x capacity required by RAID-1 mirroring for FTT=2, so it is much more cost-effective from a capacity perspective. With RAID-6 erasure coding, the storage I/O amplification is even greater than with RAID-5 since you are essentially writing double parity.
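A quick sketch of the raw-versus-usable capacity math, using the ratios mentioned above, helps when sizing these clusters:

    # Raw capacity needed for roughly 1 TB of usable data under each policy
    $usableGB = 1024
    foreach ($policy in @(
        @{ Name = 'RAID-1 (FTT=1)'; Factor = 2.0  },
        @{ Name = 'RAID-5 (FTT=1)'; Factor = 1.33 },
        @{ Name = 'RAID-1 (FTT=2)'; Factor = 3.0  },
        @{ Name = 'RAID-6 (FTT=2)'; Factor = 1.5  }
    )) {
        [pscustomobject]@{
            Policy      = $policy.Name
            RawGBNeeded = [math]::Round($usableGB * $policy.Factor)
        }
    }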

VMware vSAN RAID-6 erasure coding

Use cases for 6-node vSAN Cluster

The 6-node vSAN Cluster provides the ability to jump to the FTT=2 level of data resiliency, which is where most business-critical applications need to be. In addition, the 6-node vSAN Cluster allows taking advantage of the RAID-6 erasure coding storage policy with the resiliency benefits and space savings benefits from erasure coding. With the new improvements in vSAN 7.0 Update 2 from a performance perspective, RAID-5/6 erasure coding has become a more feasible option for performance-sensitive workloads.

Blueprint #4 – VMware vSAN Stretched Cluster

For the ultimate in resiliency and site-level protection, organizations can opt for the vSAN stretched Cluster. The Stretched Cluster functionality for VMware vSAN was introduced in vSAN 6.1 and is a specialized vSAN configuration targeting a specific use case – disaster/downtime avoidance.

VMware vSAN stretched clusters provide an active/active vSAN cluster configuration, allowing identically configured ESXi hosts distributed evenly between the two sites to function as a single logical cluster, each configured as their own fault domain. Additionally, a Witness Host is used to provide the witness component of the stretched Cluster.

As we learned earlier, the 2-node vSAN Cluster is a small view of how a stretched cluster works. Each node is a fault domain, with the Witness Host providing the witness components. Essentially with the larger stretched Cluster, we are simply adding multiple ESXi hosts to each fault domain.

One of the design considerations with the Stretched Cluster configuration is the need for a high bandwidth/low latency link connecting the two sites. The vSAN Stretched Cluster requires latency of no more than 5 ms RTT (Round Trip Time). This requirement can potentially be a “deal-breaker” when organizations consider using a Stretched Cluster configuration between two available sites if the latency requirements cannot be met.
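As a very rough first check, latency between sites can be sampled from a management workstation before committing to the design; the host name below is a placeholder, and the syntax assumes PowerShell 7 (Windows PowerShell 5.1 uses -ComputerName and a ResponseTime property instead). The vSAN network itself should still be validated with vmkping directly from the ESXi hosts.

    # Sample 20 pings to a host in the other site and report the average/maximum RTT in ms
    Test-Connection -TargetName 'esx-siteb-01.lab.local' -Count 20 |
        Measure-Object -Property Latency -Average -Maximum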

A few things to note about the vSAN Stretched Cluster:

    • X+Y+Z – This nomenclature describes the Stretched Cluster configuration, where X is the number of hosts at data site A, Y is the number of hosts at data site B, and Z is the number of witness hosts at site C
    • Data sites are where virtual machines are deployed
    • The minimum configuration is 1+1+1 (3 nodes)
    • The maximum configuration as of vSAN 7 Update 2 is 20+20+1 (41 nodes)

VMware vSAN Stretched Cluster

 

The virtual machines in the first site have data components stored in each site and the witness component stored on the Witness Host. Organizations use the fault domains and affinity rules to keep virtual machines running in the preferred sites.

Even in a failure scenario where an entire site goes down, the virtual machine still has a copy of the data in the remaining site, and “more than 50% of components” are available, so it remains accessible. The beauty of the Stretched Cluster is that customers minimize or even eliminate data loss. Failover and failback (which are often underestimated) are greatly simplified. At the point of failback, virtual machines can simply be moved back to the preferred site by DRS.

Within each site, vSAN uses mirroring to create the redundant data copies needed to protect data intrasite. In addition, the mirrored copies between the sites protect from data loss from a site-wide failure.

vSAN 7 Update 3 Stretched Cluster enhancements

In vSAN 7 Update 3, Stretched Clusters received major improvements to the data resiliency offered. If one of the data sites becomes unavailable, followed by a planned or unplanned outage of the witness host, data and VMs remain available, allowing tremendous resiliency in cases of catastrophic loss of Stretched Cluster components. This is a significant improvement over previous releases, where losing a site and the Witness Host would have left the remaining site with inaccessible virtual machines.

Use Cases for vSAN Stretched Clusters

The use case for vSAN Stretched Clusters is fairly obvious. The vSAN Stretched Cluster protects an organization and its data from a catastrophic loss that can occur with the failure of an entire site. In this case, you may think about a natural disaster that destroys an entire data center location or some other event that takes the site offline.

With Stretched Clustering, your RPO and RTO values are essentially real-time. The only outage experienced for workloads would be the HA event needed to restart virtual machines in the remaining data site. However, the important thing to consider is that the data is up-to-date and available. There is no data skew and no need to restore from backups.

It drastically reduces the administrative effort and time needed to bring services back online after a disaster. For businesses who have mission-critical services that depend on real-time failover, no data loss, and the ability to get services back online in minutes, Stretched Clustering fits this use case perfectly.

However, a Stretched Cluster will be more expensive from a hardware perspective. Per best practice, you would want to run at least a 4-node vSAN Cluster in each site that comprises a fault domain. This practice ensures the ability to have self-healing operations in both locations if you lose a host or disk group on either side.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to secure backup quickly and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Is vSAN the Future for Data Storage?

Organizations today are looking at modern solutions moving forward with hardware refresh cycles. Software-defined storage is a great option for businesses looking to refresh their backend storage along with hypervisor hosts. VMware vSAN is a premier software-defined storage solution on the market today in the enterprise. It provides many great features, capabilities, and benefits to businesses looking to modernize their enterprise data center.

VMware vSAN allows customers to quickly and easily scale their storage by simply adding more storage to each hypervisor host or adding more hosts to the Cluster. In addition, they can easily automate software-defined storage without the complexities of storage provisioning on a traditional SAN device. VMware vSAN can be managed right from within the vSphere Client, alongside the other everyday tasks performed by a VI admin. It helps to simplify change control requests and provisioning workflows, as VI admins can take care of storage administration along with their everyday vSphere duties without involving the storage team.

As shown by the aforementioned vSAN configurations, VMware vSAN provides a solution that can be sized to fit just about any environment and use case. The 2-node Direct Connect vSAN Cluster allows creating a minimal hardware cluster that can house business-critical workloads in an edge environment, ROBO, or remote datacenter without the need for expensive networking gear. The traditional 3-node vSAN cluster allows protecting VMs with little hardware investment and easily scales into the 4/5/6-node vSAN cluster configurations.

With the 4/5/6-node vSAN configurations, organizations benefit from expanded storage policies, including mirroring and RAID-5/6 erasure coding that helps to minimize the capacity costs of mirroring data. In addition, with the improvements made in vSAN 7 Update 2, performance has been greatly improved for RAID-5/6 erasure coding, helping to close the gap on choosing mirroring over erasure coding for performance reasons.

The vSAN software-defined storage solution helps businesses modernize and scale their existing enterprise data center solutions to meet their business’s current and future demands and does so with many different options and capabilities. In addition, by leveraging software-defined technologies, organizations can solve technical challenges that have traditionally been difficult to solve using standard technologies.

The post 4 Powerful VMware vSAN Blueprints for the SMB appeared first on Altaro DOJO | VMware.

Two-factor Authentication for vCenter is now Essential https://www.altaro.com/vmware/two-factor-authentication-vcenter/ https://www.altaro.com/vmware/two-factor-authentication-vcenter/#respond Thu, 31 Mar 2022 11:54:28 +0000 https://www.altaro.com/vmware/?p=24042 If you currently do not use 2FA for vCenter, stop what you're doing and do it now. Here's a rundown of the threats posed and how to set it up

The post Two-factor Authentication for vCenter is now Essential appeared first on Altaro DOJO | VMware.


More than ever, organizations need to focus on security in their environments. New cybersecurity threats are endless, and the bad guys are constantly trying new ways to hack into your network, business-critical services, and applications. One of the most common, age-old ways cybercriminals compromise networks and business-critical data is through compromised credentials.

If you think about it, if an attacker gets possession of a privileged user account, it is game over. Instead of needing to find an obscure vulnerability or zero-day attack, they can simply walk in the front door of your environment using stolen credentials. VMware vCenter is the heart of VMware vSphere implementations. It is a crucial piece of the vSphere infrastructure. If compromised, attackers essentially have the heart of your vSphere environment and your workloads.

Setting up two-factor authentication to protect user credentials, especially administrator accounts, is a great way to bolster the overall security of your user accounts. Is it possible to configure two-factor authentication on your vCenter Server? How is this accomplished, and what considerations need to be made?

Passwords Alone are not Enough

The traditional username and password have been around for decades now. Users have long had a username that is usually a combination of their first and last names, either truncated or full names with a period in between. A password is a string of characters that is not viewable or known by anyone but the user.

Despite decades of security evolution and much more powerful applications and enterprise services, surprisingly, the classic username and password are still primarily the way systems are secured today. Why is this worrisome? As mentioned, compromised credentials are one of the most common ways attackers get into environments today.

In the IBM Cost of a Data Breach Report 2021, it noted the following statistics regarding compromised credentials:

    • Compromised credentials initially cause at least 20% of breaches
    • They represent the most common initial attack vector
    • Breaches caused by stolen/compromised credentials took the longest to identify
    • Compromised credential breaches took 250 days to identify and 91 days to contain

Why are passwords so easy to compromise? It is due to many different reasons. However, end-users generally tend to choose weak or easily guessable passwords. In addition, many users choose passwords they can easily remember, and they may use this same password everywhere.

Today, this problem is magnified as organizations have users create logins to a myriad of services, both on-premises and in the cloud. Unfortunately, human nature being what it is, users often choose a password they can reuse across the services found in their digital workspace.

It leads to weak user logins associated with business-critical data used across many services. Even though administrators better understand why security is essential compared to normal end-users, they are also guilty of choosing weak passwords and reusing them across their privileged accounts in the environment, including VMware vCenter Server. In many environments, network segmentation is either poorly designed or non-existent, leading to attackers having easy lateral movement to compromise vCenter, ESXi, and other infrastructure.

Phishing and brute force attacks

Attackers are very cunning and use sophisticated ways of compromising credentials. The most common types of password attacks are:

    • Phishing attacks

Although one of the older types of attacks, phishing attacks are still surprisingly effective. Attackers craft very legitimate-looking emails and masquerade these as being from legitimate or known vendors or businesses the users are familiar with. The phishing email may request the user enter their current password to review their security information.

Once the user enters the current password, the attacker now has access to a legitimate password for a particular service or solution associated with the organization. Phishing attacks can easily harvest credentials used by an organization and then use these for malicious purposes.

If you manage Microsoft 365, a dedicated email security service is vital for companies to provide the most effective level of security. Hornetsecurity is the leading cloud email security provider and offers a free trial of its product range.

    • Brute force attacks

Brute force attacks try many different passwords against a user account, attempting to compromise accounts that use common, easily guessed, or previously breached passwords. Breached password lists, containing passwords obtained in actual breach events, exist on the dark web and even through legitimate channels.

Attackers know that even different users think alike. Therefore, breached passwords are often tried against other user accounts to find accounts using the same passwords. If enough user accounts are scanned, attackers generally will have success in finding other user accounts using the same character transformations, phrases, and strings.

    • Password spraying

Password spraying is another form of password attack where attackers choose a few common passwords and spray these against multiple accounts, even across different organizations. These attacks are often highly successful and do not trigger safety mechanisms such as account lockouts in Active Directory since they attempt very few passwords for each user account.

VMware vCenter Server Access is Often Linked to Active Directory

Attackers often target Microsoft Active Directory, the most popular enterprise directory service in use today. Microsoft Active Directory typically houses all the usernames and passwords used on-premises or federated to cloud environments. Linking services and solutions to a centralized identity directory has many advantages from a management perspective and security benefits, such as centralized password policies, etc.

IT admins commonly connect vCenter Server authentication with Active Directory. This approach allows one set of credentials to be used, both for Windows logins and accessing vCenter Server, among other services. However, the downside is if attackers compromise legitimate Active Directory credentials, they now have access to all the services using Active Directory authentication, including vCenter Server.

Also, instead of using role-based access where credentials have only the access needed in vSphere, admins may grant their Domain Admin account administrator privileges in vCenter Server. Attackers who compromise an account with both domain admin privileges and vSphere administrator permissions have the “keys to the kingdom.” Active Directory users delegated high-level permissions inside vSphere sweeten the deal for attackers who compromise their accounts.

Linking vSphere with Active Directory user accounts is not in itself a bad practice. Rather, the problem is failing to follow other best practices when doing so: using role-based access, not granting domain admin accounts vSphere permissions, and enabling multi-factor authentication.
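As a hedged PowerCLI sketch of the role-based approach, a dedicated AD group (the names below are placeholders) can be granted a vSphere role at the datacenter level instead of reusing Domain Admins. In practice, a custom role containing only the required privileges is preferable to the built-in Admin role.

    # Grant a dedicated AD group a vSphere role, propagating down the inventory tree
    Connect-VIServer -Server vcsa.lab.local
    New-VIPermission -Entity (Get-Datacenter 'LabDC') -Principal 'LAB\vSphere-Admins' -Role (Get-VIRole -Name 'Admin') -Propagate:$true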

Initial Access Brokers (IABs) and Ransomware

An extremely alarming trend on the dark web is a new sinister entity – the Initial Access Broker (IAB). What is it? The Initial Access Broker is a new criminal entity that specializes in selling legitimate and valid credentials to ransomware gangs and other hackers looking to launch a ransomware attack.

Initial Access Brokers take the “leg work” out of finding compromised credentials or infiltrating a network and doing this the hard way. Instead, IABs provide credentials for sale on the dark web. The credentials offered may include credentials to:

    • Virtual Private Network (VPN) connections
    • Remote Desktop Services (RDS) servers
    • VMware Horizon connections
    • Citrix
    • Cloud services
    • VMware vSphere

The IAB carries out the work of infiltrating networks or running phishing campaigns to harvest credentials. These credentials are then posted for sale online, stating the type of access and the user’s privilege level. IAB operators usually price the access credentials according to the privilege level of the access and the company’s revenue.

These pieces of information are essential for an attacker looking for the next target of a ransomware attack. Additionally, the size and revenue stream of the targeted organization will help determine the ransom demanded after a successful ransomware attack and the likelihood the business will be able to pay the ransom.

The IAB provides the credentials needed for an attacker to carry out the end goal – a ransomware attack. Ransomware is an increasingly dangerous plague for businesses today as attacks are increasing, and the damage they can bring is unprecedented.

Like IABs, there is another development on the dark web that is helping to facilitate the increase in attacks. Ransomware-as-a-Service (RaaS) has commoditized ransomware for criminals across the board. In the past, carrying out and operating a successful ransomware attack took a certain level of skill and prowess in first developing the malicious software, then carrying out the attack, and finally, collecting the ransom.

You can think of Ransomware-as-a-Service (RaaS), much like Google Workspace or Microsoft 365. It is Software-as-a-Service, except in the case of RaaS, it is malicious software. Nevertheless, the principle is the same. With RaaS, attackers who buy into the RaaS service don’t have to know the ransomware’s inner workings or all the technical details. These are handled by the ransomware group operating the RaaS service. Instead, the affiliate attacker can simply carry out an attack with proven, mature ransomware. The ransomware group receives a percentage of the ransom payment if the attack is carried out successfully.

Both the IAB and Ransomware-as-a-Service (RaaS) developments on the dark web have led to the proliferation of ransomware attacks seen today and the increase in successful attacks. Is VMware vSphere really vulnerable to a ransomware attack? How can a ransomware attack be carried out on a VMware vSphere environment?

Ransomware That Attacks VMware vSphere

It is no longer just a “theory” that ransomware can attack VMware vSphere environments. Undoubtedly, you may have started to see in the news, Reddit, and other places where vSphere admins have started to see firsthand ransomware that attacks VMware vSphere environments.

A thread that popped up last year on Reddit, receiving countless views and comments from concerned vSphere admins, is found here:

The post mortem to the above ransomware thread can be read here:

If you read through the post mortem of the above-mentioned ESXi ransomware account, you will find on step 3 of the attack post mortem:

    • Attackers gained access to hosts that had access to ESXi’s management subnet. They already had AD admin privileges.

In the attack, we can assume that the hackers had admin-level domain accounts with admin-level vSphere permissions, based on how the attack was carried out. Sophos also recently detailed this type of attack on ESXi servers. In details of the attack, Sophos noted:

    • The attackers broke into a computer using a compromised TeamViewer account
    • The computer was running under a domain administrator account
    • 10 minutes later, the attackers used Advanced IP Scanner to scan the network for targets
    • The SSH shell was running on the ESXi hosts
    • They installed Bitvise
    • Then, using a Python script, the virtual machine disk files (VMDKs) were encrypted at the datastore level

While the attacks noted made use of direct access to ESXi hosts, VMware vCenter Server makes a perfect target since, if vCenter is compromised, every ESXi host it manages is exposed. Additionally, it emphasizes the importance of protecting user accounts across the entire landscape of your infrastructure. Going back to the post from Sophos, they gave the following security advice:

“Administrators who operate ESXi or other hypervisors on their networks should follow security best practices. This includes using unique, difficult to brute-force passwords and enforcing the use of multi-factor authentication wherever possible.”

For years now, vSphere has not been on the radar of ransomware groups. However, it seems in the past year or so, vSphere environments have moved up quickly on the radar of ransomware groups and attackers in general. It often represents an easy target with poor password practices and other factors at play.

What is Two-Factor Authentication (2FA)?

First, we need to understand what two-factor authentication is and why it helps secure user accounts. Two-factor is one variety of multi-factor authentication (MFA). Multi-factor authentication (MFA) refers to an authentication scheme that requires more than one factor of information to authenticate. For example, a password is a single factor used to authenticate a user as being who they say they are.

Common authentication factors generally include three types:

    • Something you know
    • Something you are
    • Something you have

A password is something you know. A fingerprint is something you are. A one-time password delivered or generated using a smartphone is something you have.

The problem with a single factor is it only requires a single piece of information to establish and verify user identity. Enabling multi-factor authentication on a user account makes compromising the account exponentially more difficult as it requires multiple components of information to establish identity.

Two-factor authentication (2FA) is the most popular form of multi-factor authentication and effectively bolsters account security by combining two factors. Two-factor authentication requires something you know, a password, and something you possess, a one-time passcode. With two-factor authentication, you need the one-time passcode in addition to the correct password to authenticate successfully.

The most common implementation of two-factor authentication involves using a smartphone to provide a one-time passcode received through text message or generated using an authenticator app. The key benefit when using two-factor authentication is an attacker who compromises a user account password does NOT have all the required factors to complete a successful authentication. Without successfully authenticating, an attacker is limited in what they can do.

Just about any best practice guidance available today detailing how to bolster cybersecurity will include implementing two-factor authentication across your user accounts. With two-factor authentication enabled, the possibility of a successful ransomware attack is dramatically reduced. While it is not the only cybersecurity measure that needs to be taken to protect against ransomware, it is one of the most important.

In addition to the positive impact on your organization’s security, multi-factor authentication is required by compliance frameworks. Examples include compliance frameworks such as PCI DSS 3.2 and NIST 800-53 revision 4. So, there are many reasons for organizations to implement multi-factor authentication across the board, including vCenter Server.

Securing vCenter Login with 2FA

Prior to vSphere 7.0, vCenter Server relied on the built-in identity provider that VI admins have known and been familiar with for years now (since vSphere 6.0). By default, vCenter uses the “vsphere.local” domain (which can be changed) as an identity source. You could also configure the built-in identity provider to connect to:

    • Active Directory over LDAP/LDAPS
    • OpenLDAP/S
    • Integrated Windows Authentication (IWA)

Organizations could configure logging into vCenter Server with Active Directory accounts using this configuration. In vSphere 7, VMware is making it much easier to implement multi-factor authentication by introducing identity federation. Identity federation introduces the capability to connect vCenter Server to an external identity provider, which allows federating the authentication process for vCenter Server to the identity solutions in use in the enterprise today.

Below is a screenshot from the Single Sign On > Configuration > Identity Provider screen found in vCenter Server 7.

vCenter Server 7 Identity Provider configuration

You can click the Change Identity Provider link to change or view the current provider. Note the default Microsoft ADFS configured in the vCenter Server 7 configuration.

Viewing and configuring the identity provider in vSphere 7 vCenter Server

This new feature helps centralize the vCenter Server authentication process with identity federation solutions in today’s enterprise, such as Active Directory Federation Services (ADFS). More importantly, with the discussion around multi-factor authentication, this feature opens up capabilities such as multi-factor authentication, including the two-factor authentication approach.

The infographic below from VMware shows the workflow of the identity federation process in vCenter Server.

Identity Federation workflow found in vSphere 7 (courtesy of VMware)

    1. The vSphere Client connects to the Identity Provider
    2. The vSphere Client redirects logins to the Identity Provider’s login page
    3. The end-user logs in with their normal user credentials
    4. They will be prompted with multi-factor authentication if this is configured
    5. Once authenticated, the identity provider redirects the session back to the vSphere Client
    6. The session will have the authentication token provided from the identity provider that authorizes access
    7. The user will proceed normally in the vSphere Client session, now authenticated

 

Currently, the only identity provider natively supported at the time of this writing is Active Directory Federation Services (ADFS). However, VMware will no doubt extend the list of available identity providers natively supported in future versions of vSphere, as noted in the official blog post:

“vSphere 7 initially supports ADFS because it represents what a large portion of our customers have and can easily use. More options to come as we teach vSphere more authentication ‘languages.’”

VMware has built the new identity federation capability in line with standard protocols, which is great as it will allow a much wider variety of identity providers. The vSphere 7 identity federation feature uses industry-standard protocols, including OAuth 2.0 and OIDC. However, it will still take time to integrate various identity providers into vSphere because, even with open standards, each provider uses a different identity “schema.”

Available Options for vCenter – Are There Free Options?

As mentioned above, the option currently “included” with vCenter Server 7 identity federation is Active Directory Federation Services (ADFS). Additionally, VMware mentioned they included ADFS as the first identity federation option because it is the solution most of their enterprise customers are currently using.

However, Active Directory Federation Services (ADFS) may not be deployed in all customer environments, and deploying ADFS from scratch simply to enable 2FA on vCenter would involve a tremendous amount of complexity. ADFS comes with additional infrastructure requirements and its own configuration, troubleshooting, and lifecycle maintenance.

While there are no specific licenses for ADFS itself, Windows licensing is needed for the ADFS servers, and additional infrastructure resources are required to provision them. Going this route for 2FA in vCenter is a great option if ADFS is already in place, as it often is for federating user logins to Microsoft Office 365 and other cloud services.

Are there free options available for setting up 2FA with vCenter Server? You can set up two-factor authentication with vCenter Server without using the new identity federation functionality in vSphere 7. Duo Security offers a free version of their solution that allows creating a simple two-factor application that can introduce a two-factor prompt with the vSphere Client.

Does Two-Factor Authentication Impact Automation?

A question comes up with two-factor authentication and automated processes. How do the two coincide? It is a great question and one that needs to be considered when implementing two-factor authentication with automated processes running in the environment.

Two-factor authentication can potentially cause challenges for automated processes depending on how long the authentication token is maintained. The automated process would likely need to be reauthenticated each time the process runs. Some two-factor authentication solutions, such as Duo, OKTA, and others, allow admins to bypass two-factor prompts based on specific criteria, such as a user, an application, source networks, etc.

These specialized exemptions from two-factor prompts can be used judiciously throughout the environment for automated tasks and the user contexts they run under. However, it is a double-edged sword: opening or bypassing two-factor prompts creates chinks in the armor that two-factor authentication provides in the environment.

However, usually, there is a “sweet spot” of exemptions, bypasses, and other rules that can be put in place that still provide a good balance. A few best practices to think about with two-factor and automation include:

    • Never run automated processes under a normal interactive user login
    • Use special-purpose service or automation accounts
    • Rotate the passwords for the automated service accounts frequently
    • Combine automated tasks with secrets management from the likes of HashiCorp Vault or another solution so that credentials are retrieved in real time rather than hardcoded in automated tasks or processes (see the sketch after this list)
    • Have automated solutions positioned on their own segregated network and only accessible using a Privileged Access Workstation (PAW)
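As a hedged example of the secrets management point, the sketch below pulls a service account password from a HashiCorp Vault KV v2 engine at runtime before connecting to vCenter. The Vault address, secret path, and account name are placeholders.

    # Retrieve the credential from Vault (KV v2) instead of hardcoding it in the script
    $vaultAddr = 'https://vault.lab.local:8200'
    $headers   = @{ 'X-Vault-Token' = $env:VAULT_TOKEN }
    $secret    = Invoke-RestMethod -Uri "$vaultAddr/v1/secret/data/vsphere/svc-backup" -Headers $headers

    $password  = ConvertTo-SecureString $secret.data.data.password -AsPlainText -Force
    $cred      = New-Object System.Management.Automation.PSCredential('svc-backup@vsphere.local', $password)

    Connect-VIServer -Server vcsa.lab.local -Credential $cred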

vSphere Authentication with vCenter Single Sign-On and SAML

Another login mechanism in the enterprise for accessing the vSphere environment through vCenter Server is Single Sign-On (SSO). Does vSphere support logging in with Single Sign-On (SSO)? Yes, it does. VMware vCenter Single Sign-On protects your environment by seamlessly allowing vSphere components to communicate using a secure token mechanism. It is much more secure than requiring users to authenticate separately to each component.

As mentioned earlier, the Single Sign-On domain is the “built-in” identity source found in vSphere 6.0 and higher that defaults to vsphere.local during installation. You can see the Single Sign-On domain configured when you log in to the VAMI (vCenter Server Appliance Management Interface), under the Summary dashboard.

Viewing the vCenter Server Single Sign-On domain in the VAMI interface

The vCenter Single Sign-On solution uses a combination of:

    • STS (Security Token Service)
    • SSL (secure communication)
    • Active Directory or OpenLDAP for user authentication

You can also add an external SAML service provider to the vCenter Single Sign-On solution, or use a VMware-native SAML solution such as the one found in vRealize Automation.

How to Set up vCenter Server Two-Factor Authentication

Let’s look at the process to configure the ADFS connection for vCenter Server. In the ADFS management console, create a new Application Group. Use the “Server application accessing a Web API” template.

Creating a new application group in ADFS

In the screen below, you need to enter the Redirect URIs that point back to your vCenter Server. Also, copy the Client Identifier for use later.

Add the vSphere redirect URIs

Where do you get the redirect URIs? In the Identity Provider configuration, you can click the informational button beside the Change Identity Provider link and copy both for use in the ADFS configuration.

Gathering your redirect URIs from your vCenter Server

On the Configure Application Credentials screen, click the checkbox next to Generate a shared secret.

Generate a shared secret

Enter the client identifier that you copied from the Server application screen.

Configure the WebAPI

On the Apply Access Control Policy screen, click the Permit everyone and require MFA option. This is one of the key pieces of the configuration that enables MFA for your vCenter Server login.

Configure Access Control Policy in ADFS

Make sure the allatclaims and openid options are selected.

Configure application permissions

Review the options configured on the Summary screen and click Next.

Summary for creating the new application group

The new application group is created successfully.

Application group is created successfully in ADFS

Now, we need to add a few claim rules to the application. Navigate to the Properties of your Application Group we just created.

Viewing the properties of the ADFS application group

Navigate to the Issuance Transform Rules and select Add Rule.

Navigate to the Issuance Transform Rules and select Add Rule.

 

The three we add will be from the template Send LDAP Attributes as Claims. Choose Active Directory as the Attribute store. The first set of configurations for LDAP Attribute and Outgoing Claim Type are:

    • Token-Groups – Qualified by Long Name
    • Group

Create a new Claim Rule for groups

The next pair includes:

    • User-Principal-Name
    • Name ID

 

Create a new Claim Rule for Name ID

Finally, map the following:

    • User-Principal-Name
    • UPN

Create a new Claim Rule for UPN

Your Web API properties should look like the following.

New ADFS Web API properties for vCenter 2FA

We have all the information needed to populate the vCenter Server identity provider configuration except for the open-id configuration URL. To obtain that URL, use the cmdlet:

    • Get-AdfsEndpoint | Select FullUrl | Select-String openid-configuration

Make sure to only select the URL starting with the https:// and do not include the final “}” from the output.
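A slightly more targeted way to grab and sanity-check the URL is sketched below; the ADFS host name is a placeholder, and the /adfs/.well-known/openid-configuration path is the standard OIDC discovery endpoint ADFS exposes.

    # Extract just the discovery URL from ADFS and confirm the document is reachable
    $oidcUrl = (Get-AdfsEndpoint | Where-Object { $_.FullUrl -match 'openid-configuration' }).FullUrl.AbsoluteUri
    Invoke-RestMethod -Uri $oidcUrl | Select-Object issuer, authorization_endpoint, token_endpoint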

Getting your openid-configuration from your ADFS server

Now, let’s go back to the identity provider configuration and choose ADFS server. We can now populate the information needed, including:

    • Client identifier
    • Shared secret
    • OpenID Address

How to Configure vCenter Two-Factor Authentication in VMware

Now, to configure vCenter two-factor authentication in VMware using the ADFS functionality built into vSphere 7, we just need to point to the ADFS configuration group application we have configured for MFA in the ADFS configuration above. Navigate in the vSphere Client to Administration > Single Sign On > Configuration > Identity Provider > Identity Sources > Change Identity Provider.

Change Identity Provider in the Identity Provider configuration

Select the Microsoft ADFS option.

Choose the Microsoft ADFS option

Next, enter the relevant ADFS information from the new ADFS group application created earlier.

Populate your vCenter Server identity provider with the ADFS information

On the Users and Groups screen, populate the required information with a user that has permissions to search Active Directory.

Configure your users and groups for searching Active Directory

Click Finish to finalize the configuration of the identity provider for Active Directory Federation Services.

Review and confirm the ADFS identity provider

To ensure you have your ADFS configuration in place for multi-factor authentication, you can follow the Microsoft documentation here:

How to Manage Two-Factor Authentication for VMware

The management of two-factor authentication itself is handled at the ADFS layer and/or by Azure MFA. Basically, once the Identity Provider configuration is in place and pointed to ADFS, vCenter hands authentication over to ADFS. Once authentication is configured and verified in vSphere, you can manage the ADFS implementation using the official Active Directory Federation Services (ADFS) management console found under “Windows Administrative Tools”:

    • Microsoft.IdentityServer.msc

Azure MFA integration will be managed using your Azure Portal:

Troubleshooting 2FA in vCenter Server

Since the key to the new vCenter Server 7.0 Identity Provider 2FA solution is ADFS, troubleshooting 2FA in vCenter Server will revolve around ADFS troubleshooting. A good resource for troubleshooting ADFS login issues, including MFA, is found on the official Microsoft documentation site:

Some common ADFS errors you might encounter include:

    • ADFS error 180 and endpoints missing
    • ADFS 2.0 certificate error
    • ADFS 2.0 error 401
    • ADFS 2.0 error: This page cannot be displayed
    • ADFS 2.0 service fails to start

Specific documentation around Azure multi-factor authentication troubleshooting can be found here:

To protect your VMware environment, Altaro offers the ultimate VMware backup service to secure backup quickly and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

So, How Essential is 2FA?

A mountain of cybersecurity evidence, research, and best practice documentation points to the fact that enabling multi-factor authentication drastically decreases the likelihood of suffering a cyberattack. For example, ransomware attacks often start with stolen, leaked, or compromised credentials. As a result, the bad guys have an easy way into the network without elaborate schemes to hack into the network using other methods.

Multi-factor authentication (MFA) is a form of authentication that requires the user to prove their identity using multiple “factors” of information. These include something you know, something you are, and something you possess. Two-factor authentication is a popular combination of two factors of information, commonly something you know (password) and something you possess (a phone that receives or generates a one-time passcode).

Can ransomware affect VMware vSphere? Unfortunately, yes, it can. Ransomware groups are explicitly targeting vSphere environments using malicious Python scripts to encrypt virtual machines at the datastore level. As a result, many security companies see an alarming increase in attacks on ESXi in the enterprise.

Almost all the ransomware attacks on vSphere start with compromised credentials to some degree. Poor cybersecurity hygiene in many environments, lack of role-based permissions in vSphere, and domain admin credentials added to vSphere administrator permissions lead to easy vSphere targets.

A great way for vSphere administrators to bolster security for their vSphere environments is to implement security best practices in the environment. It includes securing the vSphere management network, turning off SSH access, using lockdown mode in ESXi, and also implementing two-factor authentication.

VMware vSphere 7 allows VI admins to add external identity sources to handle authentication requests. This new functionality makes it possible to connect vSphere environments to existing authentication providers that can already perform MFA. As shown in the walkthrough in this article, VI admins can now integrate with existing authentication providers, such as Active Directory Federation Services (ADFS). VMware plans to add support for additional external identity providers in the future.

Even without the new functionality, customers can add two-factor authentication to vSphere without configuring an external identity provider, which opens up the opportunity to use free tools to provide 2FA in vCenter Server.

The post Two-factor Authentication for vCenter is now Essential appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/two-factor-authentication-vcenter/feed/ 0
vSphere 7 Partition Layout Changes https://www.altaro.com/vmware/vsphere-7-partition-layout/ https://www.altaro.com/vmware/vsphere-7-partition-layout/#respond Fri, 18 Mar 2022 14:01:44 +0000 https://www.altaro.com/vmware/?p=24014 Discover vSphere 7 partitions layout, important differences with ESXi 6 and how to upgrade to ESXi 7 with a new partition layout

The post vSphere 7 Partition Layout Changes appeared first on Altaro DOJO | VMware.

]]>

With the release of vSphere 7, VMware changed the partition layout to make it more versatile and to allow additional VMware and third-party solutions to be installed on the boot media. The partition sizes in prior vSphere 6.x versions were fixed and static, which could prevent the installation of additional solutions such as vSAN, NSX-T, and Tanzu, as well as some third-party integrations. In response to these constraints, VMware modified the partition sizes in the vSphere 7 layout, increasing the size of the boot banks and making them easier to extend.

In this article, we’ll learn about the vSphere 7 ESXi boot media partitions, important differences between ESXi 6 and ESXi 7, ESXi 7 supported boot media and upgrading to ESXi 7 with a new partition layout. Let’s get into it!

vSphere 7 – ESXi Boot Media Partition

With the new partition schema in vSphere 7, the system boot partition is the only one that is fixed, at 100 MB. The rest of the partitions are dynamic, which means their size is determined by the size of the boot media. VMware also consolidated the layout, which now consists of four partitions.

  • System Boot: A FAT16 partition that stores the EFI components and boot loader. Like earlier vSphere versions, it is a fixed-size partition of 100 MB.
  • Boot-bank 0: A FAT16 partition that gives the system enough room to hold the ESXi boot components. It is a dynamic partition ranging from 500 MB to 4 GB.
  • Boot-bank 1: A FAT16 partition that gives the system enough room to hold the ESXi boot components. It is a dynamic partition ranging from 500 MB to 4 GB.
  • ESX-OSData: A VMFS-L partition that holds non-boot data and additional modules, such as system state and configuration as well as system VMs, and is created on high-endurance devices. It is also a dynamic partition with a capacity of up to 128 GB.

The ESX-OSData partition is separated into two high-level data types:

  • ROM-data: Data written infrequently, such as VMware Tools ISOs, configurations, and core dumps.
  • RAM-data: Frequently written data such as logs, VMFS global traces, vSAN EPD and traces, and active databases.

Note that a VMFS datastore is automatically created for storing virtual machine data if the boot media is larger than 128 GB.
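
If you want to see this layout on a running host, the ESXi shell makes the new schema visible. The commands below are a minimal sketch assuming you have shell access enabled; the device name passed to partedUtil is a made-up example and must be replaced with your own boot device identifier (visible under /vmfs/devices/disks/):

# List the volumes ESXi created, typically labeled BOOTBANK1, BOOTBANK2 and OSDATA
esxcli storage filesystem list

# Show the GPT partition table of the boot device (replace the device name with your own)
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0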

Figure: vSphere 7 Layout

The ESX-OSData partition is created on a high-endurance storage device such as an HDD or SSD. When the host boots from low-endurance media such as a USB or SD card and no high-endurance device is available, a VMFS-L locker partition is created on the USB or SD device, although it is used solely to store ROM-data. RAM-data is kept on a RAM disk.

Keep in mind that standalone USB and SD boot devices are deprecated starting with vSphere 7 Update 3, following a large number of issues encountered by customers.

Key Changes Between ESXi 6 and ESXi 7

The ESX-OSData partition change is an important one in the context of SD cards and USB devices, since all non-boot partitions (such as the small and large core-dump, locker, and scratch partitions) have been consolidated into this new VMFS-L partition.

 

Figure: VMware Partitions in vSphere 6.x and 7

High endurance persistent storage device required

Due to an increase in I/O requests delivered to the ESX-OSData partition, it must be created on a high-endurance persistent storage device. Several factors in ESXi 7.x result in higher I/O demands, including:

    • A higher number of probe requests are issued to examine the device’s condition and ensure that it is still serving I/O requests.
    • Scheduled routines that back up system state and timestamps contribute to the increased I/O demands in a minor way.
    • Additionally, new features and solutions use ESX-OSData to store their configuration data, necessitating its installation on a high-endurance, locally connected persistent storage device.

Increased storage minimums

ESXi could previously be installed on 1 GB USB sticks. ESXi 7.0, on the other hand, increases the minimum requirement to 3.72 GB of storage space.

However, the recommended storage capacity is 32 GB. What’s noteworthy is that, while the boot partition’s size (100MB) remains constant, the sizes of the other VMware partitions vary depending on the kind of installation media used.

    • A minimum of roughly 4 GB is required to install ESXi 7.0.
    • 32 GB is the recommended boot media capacity for installing ESXi 7.0.
    • 4 GB is required for upgrading to ESXi 7.0.

Dynamic partition sizes

The VMware examples demonstrate media sizes ranging from 4 GB to 128 GB and beyond. If you have a drive larger than 128 GB, the remaining space can be used to create a local VMFS datastore.

Figure: Changes in vSphere 7 Partitions

Supported Boot Media in vSphere 7 Layout

As you may be aware, starting with vSphere 7 Update 3, the use of standalone SD cards or USB devices is deprecated; the system will display warnings when you use them. It is suggested (and will eventually be mandatory) that you store the ESX-OSData partition on a locally attached persistent storage device.

A 32 GB disk is required when booting from a local drive, SAN, or iSCSI LUN in order to create the system storage volumes. Starting with ESXi 7 Update 3, a VMware Tools partition is created automatically on the RAM disk, and warnings appear to discourage you from placing ESXi partitions other than the boot bank partitions on flash media devices. Other ways of improving the performance of an ESXi 7.0 installation include:

    • A 138 GB or larger local drive for maximum ESX-OSData compatibility. The boot partition, ESX-OSData volume, and VMFS datastore are all located on the drive.
    • A device with an endurance rating of at least 128 terabytes written (TBW).
    • A device with a sequential write speed of at least 100 MB/s.
    • A RAID 1 mirrored device is recommended for resiliency in the event of device failure.

Upgrading to ESXi 7 U3 with SD card

We’ve already discussed that starting with vSphere 7 Update 3, the use of standalone SD cards or USB devices is deprecated. The system will continue to run with warnings if they are used, but it is best that you store the ESX-OSData partition on a locally attached persistent storage device.
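
Before planning an upgrade, it can help to confirm what the host currently boots from and how much space the existing boot volumes have. A minimal sketch, assuming ESXi shell access (volume labels and paths can vary between builds):

# Show which volumes the bootbank symlinks point to
ls -l /bootbank /altbootbank

# Show free space on the boot volumes, OSDATA and the ramdisks
vdf -h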

Upgrade procedure with SD card and additional disk

Please follow the procedure below to upgrade ESXi 6.7 running from a standalone SD card or USB device to ESXi 7 with an additional disk. If the ESXi 6.7 host does not have persistent storage:

    • On the ESXi 6.x host, add a high-endurance, locally attached persistent storage device.
    • Upgrade the ESXi host to ESXi 7.
    • If autoPartition=TRUE is set, the first unused boot device will be auto-partitioned and used as the ESX-OSData partition.
    • This guarantees that the system boot partition stays on the SD card or USB device, while the ESX-OSData partition is placed on the newly added storage device.

If the ESXi host has previously been updated to ESXi 7.x and is running from a USB or SD card:

    • Add a locally attached, high-endurance persistent storage device to the host.
    • Set autoPartition=TRUE on the ESXi host (see the command sketch after this list), and it will auto-partition the first unused boot device to be used as the ESX-OSData partition.
    • This guarantees that the system boot partition stays on the SD card or USB device, while the ESX-OSData partition is placed on the newly added storage device.
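
The autoPartition option referenced above can be toggled from the ESXi shell as a kernel setting. The following is a minimal sketch based on VMware’s documented autoPartition boot option; verify the exact option name and behaviour against VMware’s documentation for your build before relying on it:

# Enable automatic partitioning of the first unused local device on the next boot
esxcli system settings kernel set -s autoPartition -v TRUE

# Confirm the configured and runtime values
esxcli system settings kernel list -o autoPartition

# Reboot the host for the change to take effect
reboot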

ESXi 7.0 degraded mode

When a 4 GB boot device is used and no local disk is discovered, ESXi enters a state known as ‘degraded mode.’ In short, degraded mode is a condition in which logs and system state may not be persistent, and boot-up is delayed as a result.

Note that if the OSData partition is on an HDD or better media, the system will not enter degraded mode.

Figure: The only vSphere 7 layout that will remain supported is the use of persistent storage devices only.

A system alert (sysalert) appears if the host enters degraded mode:

ALERT: No persistent storage for system logs and data is available. Because ESX has a limited amount of system storage capacity, logs and system data will be lost if the server is rebooted. To fix this, you’ll need to install a local disk or flash device and follow the steps in KB article 77009.
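
If you suspect a host is running in degraded mode, checking where logs and the scratch location currently point can confirm whether they sit on persistent storage. A minimal sketch assuming ESXi shell access; a ramdisk path in the output (for example under /tmp) suggests the data is not persistent:

# Show where persistent logs are currently written
esxcli system syslog config get

# Show the current scratch location
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation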

If you don’t want to use an SD card or USB device anymore, you can:

    • Use a locally attached persistent storage device.
    • Reinstall ESXi 7.x on a locally attached storage device.
    • This ensures that all partitions are kept on a locally attached storage device with good endurance.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Conclusion

While the new vSphere 7 layout will certainly bring hardship to customers with a large fleet of hypervisors installed on SD cards, it also introduces more flexibility to improve the integration of VMware and third-party solutions into the vSphere hypervisor.

With the introduction of vSphere 7 Update 3, VMware is discontinuing support for boot configurations that rely only on an SD card or USB drive without a persistent storage device.

Because these configurations will not be supported in future vSphere versions, customers are encouraged to stop using SD cards and USB devices entirely. If that isn’t possible right now, make sure you have at least an 8 GB SD card or USB drive on hand, as well as a locally attached high-endurance device of at least 32 GB for the ESX-OSData partition.

The post vSphere 7 Partition Layout Changes appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vsphere-7-partition-layout/feed/ 0
Setting up Enhanced Linked Mode in vCenter 7.0 https://www.altaro.com/vmware/enhanced-linked-mode/ https://www.altaro.com/vmware/enhanced-linked-mode/#comments Fri, 07 Jan 2022 17:10:05 +0000 https://www.altaro.com/vmware/?p=23519 Simplify the management of your SDDCs and reduce operational overhead with vCenter server Enhanced Linked mode on VMware vCenter 7

The post Setting up Enhanced Linked Mode in vCenter 7.0 appeared first on Altaro DOJO | VMware.

]]>

VMware vCenter Enhanced Linked Mode (ELM) allows virtual infrastructure admins to connect and manage multiple vCenter Server instances together, through a single pane of glass.

By joining vCenter Servers together in Enhanced Linked Mode, they become part of the same Single Sign-On (SSO) domain, allowing administrators to log in to any of the linked vCenter Servers with a single set of credentials and manage the inventories of all of them.

As well as roles and permissions, ELM also enables the sharing of tags, policies, and search capabilities across the inventories of all linked vCenter Servers from the vSphere Client.

An example of a common ELM setup is linking the management and workload vCenter Servers from the primary and secondary sites (four in total), improving ease of administration and usability.

Example vCenter Enhanced Linked Mode Setup

What is the Difference Between Enhanced Linked Mode and Hybrid Linked Mode?

Hybrid Linked Mode is concerned with linking your on-premises vCenter Server with a cloud vCenter Server. The key difference is that Hybrid Linked Mode does not join the same SSO domain, but instead maps through the connection using either a Cloud Gateway Appliance or an LDAP Identity Source.

You can set up on-premises vCenter Servers in Enhanced Linked Mode, and still connect these to a cloud vCenter Server using Hybrid Linked Mode. An example of this is a hybrid cloud setup with VMware Cloud on AWS providing the cloud vCenter, linked with vCenter Servers in your data centre(s).

Example vCenter Hybrid Linked Mode Setup

What are the Requirements for Enhanced Linked Mode in vCenter 7.0?

    • An embedded Platform Services Controller (PSC) deployment
    • vCenter Server Standard licensing, ELM is not included with vCenter Server Foundation or Essentials
    • All vCenter Servers must be running the same version

If you are running vCenter 7.0, note that both the Windows-based vCenter Server and the external Platform Services Controller deployment models are no longer supported.

For previous versions, or non-compliant deployment types, review the following steps:

    • vCenter 6.0 – vSphere 6.0 is out of support, whilst ELM was available with vCenter 6.0, it required external PSC node(s), which is also no longer a supported deployment option in vCenter 7.0. Upgrade to vSphere 6.5 or 6.7 first, and then upgrade to vCenter 7.0.
    • vCenter 6.5/6.7 – ELM is supported with the embedded PSC from vCenter 6.5 Update 2 and later. However, due to the end of support approaching on October 15 2022 for both vSphere 6.5 and 6.7, you should still consider upgrading to vCenter 7.0.
    • Windows vCenter – Windows vCenter Servers are not supported with ELM or with vCenter 7.0. During the upgrade process, you can migrate all your configuration and historical data to the vCenter Server Appliance from the vCenter 7.0 upgrade UI.
    • External PSC – The external PSC deployment model is not supported with vCenter 7.0. During the upgrade process, you can consolidate your external PSC(s) into the embedded model using the converge tool built into the vCenter 7.0 upgrade UI.

How to Configure Enhanced Linked Mode for Existing vCenter Server Appliances

If you have existing vCenter Server deployments in separate SSO domains, then you can still join the vCenter Servers together in Enhanced Linked Mode using the SSO command line utility.

First, confirm your vCenter Server instance is not already using Enhanced Linked Mode as part of an existing SSO domain:

    • Log into the vSphere Client
    • Select the vCenter Server (top level) from the inventory
    • Click the Linked vCenter Server Systems tab
    • If you cannot see this option, click the … icon to reveal more
    • Review the list of linked vCenter Server systems
    • If the list is blank, then ELM is not setup

The steps below will demonstrate repointing a source vCenter, not already in ELM, to an existing target SSO domain. You will need to amend the syntax with the following values:

    • --src-emb-admin Administrator
      • The source SSO domain administrator, account name only. The default is administrator.
    • --replication-partner-fqdn FQDN_of_destination_node
      • The Fully Qualified Domain Name (FQDN) of the target vCenter Server.
    • --replication-partner-admin SSO_Admin_of_destination_node
      • The target SSO domain administrator, account name only. The default is administrator.
    • --dest-domain-name destination_SSO_domain
      • The target SSO domain name, the default is vsphere.local.

Additionally, please note that:

    • Whilst ELM is supported with vSphere 6.5 Update 2 and later, SSO domain repointing is only supported with vCenter 6.7 Update 1 onwards
    • The command line utility requires the Fully Qualified Domain Name (FQDN) of the vCenter Server and will not work with the IP address
    • The source vCenter Server is unavailable during domain repointing
    • Ensure you have taken a file-based backup of the vCenter Server to protect against data loss

First, SSH onto the source vCenter Server. During the repointing exercise, you can migrate tags, categories, roles, and privileges.

Check for any conflicts between the source and destination vCenter Servers using the pre-check command:

cmsso-util domain-repoint -m pre-check --src-emb-admin Administrator --replication-partner-fqdn FQDN_of_destination_node --replication-partner-admin SSO_Admin_of_destination_node --dest-domain-name destination_SSO_domain
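
For illustration, a filled-in pre-check might look like the following, where vcsa02.lab.local and the default vsphere.local domain are placeholder lab values rather than anything specific to your environment:

cmsso-util domain-repoint -m pre-check --src-emb-admin Administrator --replication-partner-fqdn vcsa02.lab.local --replication-partner-admin Administrator --dest-domain-name vsphere.local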

To migrate any data generated during pre-check, and repoint the vCenter Server to the target domain, run the execute command:

cmsso-util domain-repoint -m execute --src-emb-admin Administrator --dest-domain-name destination_SSO_domain

If you did not run the pre-check then run the full execute syntax:

cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-fqdn FQDN_of_destination_node --replication-partner-admin SSO_Admin_of_destination_node --dest-domain-name destination_SSO_domain

You can validate ELM using the Linked vCenter Server Systems view in the vSphere client, outlined above. Alternatively, you can use the following command:

./vdcrepadmin -f showpartners -h FQDN_of_vCenter -u administrator -w SSO_Admin_Password
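
The vdcrepadmin utility is not in the default path; on the vCenter Server Appliance it is typically found under /usr/lib/vmware-vmdir/bin. A filled-in example, using a placeholder FQDN and password, might look like this:

cd /usr/lib/vmware-vmdir/bin
./vdcrepadmin -f showpartners -h vcsa01.lab.local -u administrator -w 'SSO_Admin_Password'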

How to Configure Enhanced Linked Mode with vCenter 7.0

To configure Enhanced Linked Mode, a vCenter Server with an existing SSO domain must already be in place. This may be an existing vCenter in your environment, or one deployed from scratch.

If you are deploying a greenfield environment then install vCenter Server as normal, creating a new SSO domain by default as part of the process.

Follow the process outlined below to configure Enhanced Linked Mode with your second, or further vCenter Servers in the environment.

    • Follow stage 1 of the vCenter Server 7.0 install process as normal.
    • Stage 1 deploys the appliance to your target host and datastore, whilst configuring the appliance size and network settings.
    • Once stage 1 is complete you are prompted to continue to stage 2.
    • The SSO domain configuration is done during stage 2 configuration.

vCenter Server Stage 2 Install

    • Click next. Verify the network, time, and SSH settings, click next again.
    • On the SSO Configuration page, change the default option from creating a new SSO domain to joining an existing SSO domain.

vCenter Server Join Existing SSO Domain

    • Enter the details of the vCenter Server for the target SSO domain, along with the existing administrator password.
    • Click next. Configure the Customer Experience Improvement Program (CEIP) accordingly and click next again.
    • Review the settings and click finish to finalise the deployment.
    • Once complete, log into vCenter Server as normal.
    • You should now see the vCenter along with any linked vCenter Servers from the vSphere Client.
    • You can further validate the ELM configuration by selecting the vCenter Server (top level) from the inventory and clicking the Linked vCenter Server Systems tab.
    • The linked vCenter Servers will now be listed.

vCenter Server Configured Enhanced Linked Mode

Wrap Up

I hope that you enjoyed this article and that you now have a better idea of how to properly set up Enhanced Linked Mode in vCenter 7.0. If there are any questions, please let me know in the comments below.

The post Setting up Enhanced Linked Mode in vCenter 7.0 appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/enhanced-linked-mode/feed/ 2
VMware Project Capitola: The vSAN of Host Memory? https://www.altaro.com/vmware/vmware-project-capitola/ https://www.altaro.com/vmware/vmware-project-capitola/#respond Fri, 24 Dec 2021 14:59:48 +0000 https://www.altaro.com/vmware/?p=23468 At VMworld 2021, VMware introduced VMware Project Capitola, a software-defined memory solution. What is it? How does it work?

The post VMware Project Capitola: The vSAN of Host Memory? appeared first on Altaro DOJO | VMware.

]]>

VMware has continued to innovate in the enterprise data center with cutting-edge products that are now household names, such as vSAN and NSX. Over the years, it has transformed how we look at compute, storage, and networking by creating an abstraction layer on top of physical hardware that makes these resources much easier to consume, manage, and tier.

While memory and compute are often tied together, VMware has unveiled a new technology set to bring similar advantages to the world of server memory that VMware vSAN has brought about in the world of server storage. VMware Project Capitola is a new software-defined memory solution unveiled at VMworld 2021 that will revolutionize server memory in the data center.

What is Software-Defined Memory?

With the various memory challenges and business problems facing the enterprise, the idea of a software-defined memory solution comes into focus. At the outset we drew a parallel to VMware vSAN, and the comparison is a useful one: vSAN takes the physical storage assigned to each vSAN host and pools it logically at the cluster level.

This software-defined approach to physical storage provides tremendous advantages in flexibility, scalability, and storage tiering, giving customers the tools needed to solve modern storage problems. However, while VMware has pushed the envelope in most of the major areas of the data center (compute, storage, and networking), memory, although virtualized and presented to VMs, has so far remained a simple hardware resource assigned to the guest OSes.

What if we had a solution that aggregated available memory installed in physical ESXi hosts and the types of memory installed in the host? Software-defined memory allows organizations to make intelligent decisions on how memory is used across the environment and assigned to various resources. In addition, memory can be pooled and tiered in the environment for satisfying different SLAs and performance use cases, like VMware vSAN allows today.

Memory Types Explained (DRAM, PMEM, NVMe)

Currently, there are three types of memory technologies widely used in the data center today. These are:

    • DRAM
    • PMEM
    • NVMe

DRAM

DRAM (Dynamic Random-Access Memory) is the standard type of memory common in servers and workstations today. It is very durable and extremely fast in terms of access times and latency. However, it has one major downside: it cannot retain data without power. This characteristic of DRAM is known as volatility.

When DRAM loses power for any reason, the data contained in the DRAM modules is lost and must be retrieved from physical disk storage.

PMEM

PMEM (Persistent Memory) is a type of memory technology that is non-volatile. It retains the data, even after a power loss. It is high-density and has low latency access times like DRAM. PMEM still lacks the speed of DRAM. However, it is much faster than flash memory, such as used in SSDs.

Intel® Optane™ is a 3D XPoint memory technology that is gaining momentum at the enterprise server level as an extremely performant memory technology with the advantages of non-volatility. In addition, Intel® Optane™ provides excellent performance, even with multiple write operations running in parallel, something that SSDs and other memory-based storage technologies lack. This type of memory is also referred to as “storage-class memory.”

At this time, Intel® Optane™ is not meant to be a replacement for DRAM. Instead, it complements existing DRAM memory, providing excellent performance and high reliability. It is seen as a secondary tier of memory that is used for various use cases and is much cheaper than DRAM. Whereas DRAM is around $7-$20/GB, storage-class memory like Intel® Optane™ is around $2-$3/GB.

NVMe

Rather than a type of memory technology, NVMe is an interface for SSD drives; you can think of an NVMe drive as a PCIe-attached SSD. As a result, NVMe drives are much faster than traditional SATA SSDs. NVMe storage is becoming a mainstream technology in the data center, especially in the area of high-speed storage devices. However, it is also fast enough to be used as a slower memory tier in certain use cases.

The Consumers and Use-cases for Pooled and Tiered Memory

With infrastructure hardware in the data center, many organizations are becoming memory-bound with their applications. Memory is also a significantly expensive component of physical server infrastructure costs today. Memory can comprise as much as 50% of the price of a two-socket physical server.

Data needs are expanding significantly. Many organizations using large database servers find that the memory initially allocated to database workloads grows over time. Many companies are leveraging in-memory databases, and as these grow, so does the demand for host memory. Some even find this demand doubling every 18-24 months.

In addition, memory is often intentionally over-provisioned from a hardware perspective because of maintenance operations. Why is this? During maintenance operations, the overall capacity of a virtualization cluster is reduced, so the remaining hosts must absorb the memory footprint of the host in maintenance. Note the comments of an IT admin at a major US airline:

“I am running mission-critical workloads; I need 35% excess memory capacity at the cluster level, which I am not even using most of the time.”

Even larger cloud service providers running cloud services are challenged with memory contention. Note the comments from a cloud service provider:

“Our cloud deployment instances are also getting memory bound and losing deals due to lack of large memory instances.”

There is no question that organizations across the board are feeling the challenge of meeting the demands of customers and business stakeholders around satisfying the memory requirements of their applications and business-critical workloads.

The Challenges of Exponential Data Growth

A trend across the board in the enterprise is that data is growing exponentially. Businesses are collecting, harnessing, and using the power of data for many different use cases in the enterprise. Arguably, data is the most important asset of today’s businesses. As a result, data has been referred to as the business world’s new “gold” or new “currency.”

The reason for the data explosion is data allows businesses to make better and more effective decisions. For example, pinpointed data helps companies see where they need to invest in their infrastructure, the demographics of their customers, trends in sales, and other essential statistics. The data explosion among businesses is a macro trend that shows no signs of changing.

Data doesn’t only help run the business. Data itself is a commodity that companies buy and sell, in some cases as a primary revenue stream. According to Gartner, by 2022, 35% of large organizations will be sellers or buyers of data via formal online marketplaces, up from 25% in 2020.

Storing the data is only part of the challenge for businesses; they have to make something useful from the data that is harvested. Another related trend is that modern organizations want to make use of the data they collect faster, which means data must be processed more quickly. A study by IDC predicts that nearly 30% of global data will be real-time by 2025, underscoring the need for faster processing. Data not processed in time declines in value exponentially.

The challenges around data are driving various customer needs across the board. These include:

    • Infrastructure needs to scale to accommodate the explosive data growth – This includes scaling compute, memory, storage, and networking to meet these challenges. All hardware areas are seeing the demands of data processing grow. As more data needs to be processed, it places stress on compute, which is why GPUs are becoming more mainstream for data-processing offload. The network is now seeing 100 Gbit connections become mainstream, and all-NVMe storage is also being more widely used to help meet the demands of expedient data processing.
    • For ultra-quick data processing, in-memory applications are needed
    • Memory is expensive – It is one of the most expensive components in your infrastructure. Customers are challenged to reduce costs and at the same time keep an acceptable level of performance.
    • Consistent Day-0 through Day 2 experience – Customers need acceptable experience from an operations and monitoring perspective.

The digital transformation resulting from the global pandemic has been a catalyst to the tremendous growth of data seen in the enterprise. Since the beginning of 2020, businesses have had to digitalize everything and streamline manual processes into fully digital processes to streamline business operations and allow these to be completed safely.

Application designs are changing as a result. Organizations are designing applications that must work with ever-increasing datasets across the board. Even though the datasets are growing, the expectation is that applications can process the data faster than ever.

This includes applications that rely on database backends such as SAP, SQL Server, and Oracle. In addition, artificial intelligence (AI) and machine learning (ML) are becoming more mainstream in the enterprise. SLAs also require ever-larger data sets to be constantly available.

Virtual Desktop Infrastructure (VDI) instances continue as a business-critical service in the enterprise today. However, the cost per VDI instance continues to be a challenge for businesses today. As organizations continue to scale their VDI infrastructure, the demand for memory continues to grow. As mentioned, memory is one of the most expensive components in a modern server. As a result, memory consumption is one of the primary price components of VDI infrastructure.

In-memory computing (IMC) is a growing use case for memory consumption. Organizations are accelerating their adoption of memory-based applications such as SaaS and high-velocity time-series data. In addition, 5G and IoT Mobile Edge use cases require real-time data processing that depends on the speed of in-memory processing.

Due to the memory demands needed by modern applications and the price of standard DRAM, many organizations are turning to alternative technologies for memory utilization. NVMe is being considered and used in some environments for memory use cases. Although slower than standard DRAM, it can provide a value proposition and ROI for companies in many use cases.

Summary of Modern Memory Challenges

To summarize, the variety of challenges organizations encounter directly related to memory requirements and constraints includes:

    • Memory is expensive – The cost of memory is a significant part of the overall hardware investment in the data center
    • Deployments are memory-bound – Memory is becoming the resource that is most in-demand and in short supply relative to other system resources
    • Hardware incompatibility and heterogeneity – Up to this point, memory is tied to and limited by the physical server host. This constraint creates challenges for applications with memory resources beyond what a single physical server host can provide.
    • Performance SLA and monitoring – Businesses will continue to have performance demands while continuing to need more memory to keep up with the resource demands of applications and data processing
    • Availability and recovery – On top of the performance demands, businesses still need to ensure applications and data are available and can be quickly recovered
    • Operational complexity – To keep up with the demands of memory and other resources, applications are becoming more complex to work around the memory demands.

These challenges result in unsustainable costs to meet business needs, both from an infrastructure and application development perspective.

What is VMware Project Capitola?

With the growing demands on memory workloads in the enterprise, businesses need new ways to satisfy memory requirements for data processing and modern applications. VMware has redefined the data center in CPU, storage, and networking with products that most are familiar with and use today – vSphere, vSAN, and NSX. In addition, VMware is working on a solution that will help customers solve the modern challenges associated with memory consumption. At VMworld 2021, VMware unveiled a new software-defined memory solution called VMware Project Capitola.

What is VMware Project Capitola? VMware has very much embraced the software-defined approach to solving challenges associated with traditional hardware and legacy data center technologies. VMware Project Capitola extends the software-defined approach to managing and aggregating memory resources. VMware notes the VMware Project Capitola Mission as “flexible and resilient memory management built in the infrastructure layer at 30-50% better TCO and scale.”
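
To put the 30-50% figure in rough perspective, consider a purely illustrative back-of-the-envelope calculation using mid-range prices from the ranges cited earlier (around $10/GB for DRAM and $2.50/GB for storage-class memory; both are assumptions, not quoted VMware pricing). A host provisioned with 1 TB of DRAM would cost roughly 1,024 GB x $10 = $10,240 for memory alone. Splitting the same 1 TB evenly between DRAM and a cheaper second tier would cost roughly (512 x $10) + (512 x $2.50) = $5,120 + $1,280 = $6,400, a saving of about 37%, which falls inside the TCO range VMware is targeting.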

VMware Project Capitola is a technology preview that has been described as the “vSAN of memory” as it performs very similar capabilities for memory management as VMware vSAN offers for storage. It will essentially allow customers to aggregate tiers of different memory types, including:

    • DRAM
    • PMEM
    • NVMe
    • Other future memory technologies

It enables customers to implement these technologies cost-effectively and allows memory to be delivered intelligently and seamlessly to workloads and applications. Thus, VMware Project Capitola helps meet the challenges faced by both operations teams and application developers.

    • Enterprise operations – VMware Project Capitola allows tiers of memory to be scaled seamlessly based on demand and unifies heterogeneous memory types into a single platform for consumption
    • Application developers – Using VMware Project Capitola, application developers are provided the tools to consume the different memory technologies without using APIs

The memory tiers created by VMware Project Capitola are aggregated into logical memory. This capability allows consuming and managing memory across the platform as a capability of VMware vSphere. It increases overall available memory intelligently using specific tiers of memory for workloads and applications. In addition, it prevents consuming all memory within a particular tier. Instead, this is now shifted to a business decision based on the SLAs and performance required of the applications.

VMware Project Capitola details currently known

VMware Project Capitola will be tightly integrated with current vSphere features and capabilities such as Distributed Resource Scheduler (DRS), which bolsters the new features provided with VMware Project Capitola with the standard availability and resource scheduling provided in vSphere.

VMware mentions VMware Project Capitola will be released in phases. It will be implemented at the ESXi host level, and then features will be extended to the vSphere cluster. VMware details that VMware Project Capitola will be implemented in a way that preserves current vSphere memory management workflows and capabilities. It will also be available in both vSphere on-premises and cloud solutions.

As expected, VMware is working with various partners, including memory and server vendors (Intel, Micron, Samsung, Dell, HPE, Lenovo, Cisco). In addition, they are working with service providers and various ISV partners in the ecosystem and internal VMware business divisions (Hazelcast, Gemfire, and Horizon VDI) to integrate the solution seamlessly with native VMware solutions. VMware is collaborating with Intel initially as a leading partner with technologies such as Intel® Optane™ PMem on Intel® Xeon™ platforms.

Value proposition

    1. Software-defined memory for all applications provides frictionless deployments without retooling applications and allows addressing memory-bound deployments with large memory footprints. It can also lead to faster recovery from failures.
    2. Operational Simplicity – No changes in the way vSphere works. It provides the flexibility to tune performance and applications, and it reduces infrastructure customization for specific workloads.
    3. Technology agnostic – A pay-as-you-grow model that allows you to tune performance as needed for specific applications and brings pooled and disaggregated memory to your server fabric.

How does VMware Project Capitola work?

Phase 1 of VMware Project Capitola delivers local tiering within a cluster. ESXi, installed on top of the physical server hardware, is where the memory tiers are created, while management of the tiering happens at the cluster level. When VMs are created in the environment, they will have access to the various memory tiers.

Future capabilities of VMware Project Capitola will undoubtedly have the ability to control memory tiers based on policies, much like vSAN storage today. All current vSphere technologies, such as vMotioning a VM, will remain available with VMware Project Capitola. It will be able to maintain the tiering assignments for workloads as these move from host to host.

Overview of VMware Project Capitola architecture

In phase 2 releases of VMware Project Capitola, the tiering capabilities will become cluster-wide. In other words, if a workload cannot get the tier of memory it needs locally on its ESXi host, it will get the memory from another node in the cluster or from a dedicated memory device.

VMware Project Capitola enables transparent tiering

The memory tiering enabled by VMware Project Capitola is called transparent tiering. The virtual machine simply sees the memory that is allocated to it in vSphere. It is oblivious to where the actual physical memory is coming from on the physical ESXi host. VMware vSphere takes care of the appropriate placement of memory paging in the relative physical memory.

A simple two-tier memory layout may look like:

    • Tier 1 – DRAM
    • Tier 2 – Cheaper and larger memory (Optane, NVMe, etc)


The ESXi host sees the sum of all the memory available to it across all memory tiers. At the host level, host monitoring and tier sizing determine the tier allocation budget given to a particular VM. This decision is based on various metrics, including:

    • Memory activity
    • Memory size
    • Other factors

The underlying VMware Project Capitola mechanisms decide when and where active pages sit in faster tiers of memory or slower tiers of memory. Again, the virtual machine is unaware of where memory pages actually reside in physical memory. It simply sees the amount of memory it is allocated. This intelligent transparent tiering will allow businesses to solve performance and memory capacity challenges in ways not possible before.

What Project Capitola Means for the Future of Memory Management

VMware Project Capitola is set to change how organizations can solve challenging problems in managing and allocating memory across the environment for business-critical workloads and applications. Today, organizations are bound by physical memory constraints related to physical hosts in the data center. VMware Project Capitola will allow customers to pool memory from multiple hosts in much the same way that vSAN allows pooling storage resources.

While it is currently only shown as a technology preview, VMware Project Capitola already looks extremely interesting and will provide powerful features enabling innovation and flexibility for in-memory and traditional applications across the board.

Learn more about VMware Project Capitola in VMware’s technology preview resources.

The post VMware Project Capitola: The vSAN of Host Memory? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-project-capitola/feed/ 0