Everything you Need to Know about Containers in VMware
Altaro DOJO | VMware, Thu, 29 Sep 2022
All available options to run containers in VMware, listed and explained, plus step-by-step instructions to use vSphere Integrated Containers.

The post Everything you Need to Know about Containers in VMware appeared first on Altaro DOJO | VMware.


Unquestionably, organizations today are transforming from traditional infrastructure and workloads, including virtual machines, to containerized applications running in containers. However, this transition isn't always easy, as it often requires organizations to rethink their infrastructure, workflows, and development lifecycles, and to learn new skills. Are there ways to take advantage of the infrastructure already used in the data center today to run containerized workloads? For years, many companies have used VMware vSphere to run traditional virtual machines in the data center. So what are your options to run containers in VMware?

Why Shift to Containers?

Before we look at the options available to run containers in VMware, let's briefly consider why we are seeing a shift to containers in the enterprise environment. There are many reasons, but a few primary ones stand out.

One of the catalysts to the shift to containerized applications is the transition from large monolithic three-tier applications to much more distributed application architectures. For example, you may have a web, application, and database tier in a conventional application, each running inside traditional virtual machines. With these legacy three-tier architectures, the development lifecycle is often slow and requires many weeks or months to deploy upgrades and feature enhancements.

Upgrading such an application means lifting an entire tier to a new version of code, because the upgrade must happen in lockstep as a monolithic unit. Modern applications, by contrast, are highly distributed, built from microservice components running inside containers. With this architectural design, each microservice can be upgraded independently of the other application elements, allowing much faster development lifecycles, feature enhancements, upgrades, and lifecycle management, among many other benefits.

Organizations are also shifting to a DevOps approach to deploying, configuring, and maintaining infrastructure. With DevOps, infrastructure is described in code, allowing infrastructure changes to be versioned like other development lifecycles. While DevOps processes can use virtual machines, containerized infrastructure is much more agile and more readily conforms to modern infrastructure management. So, the shift to a more modern approach to building applications offers benefits from both development and IT operations perspectives. To better understand containers vs. virtual machines, let’s look at the key differences.

Comparing Containers vs. Virtual Machines

Many have used virtual machines in the enterprise data center. How do containers compare to virtual machines? To begin, let's define each. A virtual machine is a virtual instance of a complete operating system installation. It runs on top of a hypervisor that virtualizes the underlying hardware, so the guest typically does not know it is running on a virtualized hardware layer.

Virtual machines are much larger than containers because a virtual machine contains an entire operating system plus applications, drivers, and supporting software installations. Virtual machines require operating system licenses, lifecycle management, configuration drift management, and many other operational tasks to keep them compliant with the organization's governance policies.

Instead of containing the entire operating system, containers only package up the requirements to run the application. All of the application dependencies are bundled together to form the container image. Compared to a virtual machine with a complete installation of an operating system, containers are much smaller. Typical containers can range from a few megabytes to a few hundred megabytes, compared with the gigabytes of installation space required for a virtual machine with an entire OS.

One of the compelling advantages of running containers in VMware is that they can move between container hosts without worrying about the dependencies. With a traditional virtual machine, you must verify all the underlying prerequisites, application components, and other elements are installed for your application. As mentioned earlier, containers contain all the application dependencies and the application itself. Since all the prerequisites and dependencies move with the container, developers and IT Ops can move applications and schedule containers to run on any container host much more quickly.
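
As a quick illustration of this portability, a container image built once can be exported on one host and loaded on another with its dependencies intact. This is a sketch using the standard Docker CLI; the image name `myapp` and the hostname `otherhost` are placeholders:

```shell
# Build the image once; all application dependencies are baked into the image.
docker build -t myapp:1.0 .

# Move the image to another container host without reinstalling prerequisites.
docker save myapp:1.0 | gzip > myapp-1.0.tar.gz
scp myapp-1.0.tar.gz otherhost:/tmp/
ssh otherhost "gunzip -c /tmp/myapp-1.0.tar.gz | docker load && docker run -d myapp:1.0"
```

No package installation or prerequisite checking happens on the target host; the image carries everything the application needs.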

Virtual machines still have their place. Traditional monolithic or "fat" applications generally cannot be installed inside a container. Virtual machines remain a great solution for interactive environments and other needs that cannot yet be satisfied by running workloads inside a container.

Containers have additional benefits related to security. Managing multiple virtual machines can become tedious and difficult, primarily because of lifecycle management and attack surface. Virtual machines have a larger attack surface since they carry a larger software footprint: the more software installed, the greater the possibility of attack.

Lifecycle management is much more challenging with virtual machines since they are typically maintained for the entire lifespan of an application, including upgrades. As a result, it can lead to stale software, old software installations, and other baggage brought forward with the virtual machine. Organizations also have to stay on top of security updates for virtual machines.

Containers in VMware help organizations adopt immutable infrastructure: containers running the current version of the application are never upgraded in place once deployed. Instead, businesses deploy new containers with the new application version, so every deployment produces a fresh application environment.

Note the following summary table comparing containers and virtual machines.

|                                           | Containers | Virtual Machines |
|-------------------------------------------|------------|------------------|
| Small in size                             | Yes        | No               |
| Contains all application dependencies     | Yes        | No               |
| Requires an OS license                    | No         | Yes              |
| Good platform for monolithic app installs | No         | Yes              |
| Reduced attack surface                    | Yes        | No               |
| Easy lifecycle management                 | Yes        | No               |
| Easy DevOps processes                     | Yes        | No               |

It is easy to think the choice is either containers or virtual machines. However, most organizations will find they need both in the enterprise data center due to the variety of business use cases, applications, and technologies in play. The two technologies work hand-in-hand.

Virtual machines are often used as "container hosts." They provide the operating system kernel needed to run containers, and as VMs they benefit from hypervisor features such as high availability and resource scheduling.

Kubernetes (K8s) is the Modern Key to Running Containers

Businesses today looking at running containers and refactoring toward containerized applications are overwhelmingly doing so with Kubernetes. Kubernetes is the single most important aspect of running containers in business-critical environments.

Simply running your application inside a container does not satisfy the needs of production environments, such as scalability, performance, high availability, and other concerns. For example, suppose you have a microservice running in a single container that goes down. In that case, you are in the same situation as running the service in a virtual machine without some type of high availability.

Kubernetes is the container orchestration platform allowing businesses to run their containers much like they run VMs today in a highly-available configuration. Kubernetes can schedule containers to run on multiple container hosts and reprovision containers on a failed host onto a healthy container host.

While some companies may run simple containers directly on Docker or containerd and handle scheduling with homegrown orchestration or other means, most are looking to Kubernetes to solve these challenges. Kubernetes is an open-source solution for managing containerized workloads and services, and it provides modern APIs for automation and configuration management.

Kubernetes provides:

  • Service discovery and load balancing – Kubernetes allows businesses to expose services using DNS names or IP addresses. It can also load balance between container hosts and distribute traffic between the containers for better performance and workload balance
  • Storage orchestration – Kubernetes provides a way to mount storage systems to back containers, including local storage, public cloud provider storage, and others
  • Automated rollouts and rollbacks – Kubernetes provides a way for organizations to perform “rolling” upgrades and application deployments, including automating the deployment of new containers and removing existing containers
  • Resource scheduling – Kubernetes can run containers on nodes in an intelligent way, making the best use of your resources
  • Self-healing – If containers fail for some reason, Kubernetes provides the means to restart, replace, or kill containers that don’t respond to a health check, and it doesn’t advertise these containers to clients until they are ready to service requests
  • Secret and configuration management – Kubernetes allows intelligently and securely storing sensitive information, including passwords, OAuth tokens, and SSH keys. Secrets can be updated and deployed without rebuilding your container images and without exposing secrets within the stack
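
Several of these capabilities map directly to everyday kubectl commands. The following is a hedged sketch, assuming a working cluster and kubectl; the deployment name `web` and image tags are placeholders:

```shell
# Resource scheduling: a Deployment with three replicas placed across nodes.
kubectl create deployment web --image=nginx --replicas=3

# Service discovery and load balancing: a stable DNS name in front of the pods.
kubectl expose deployment web --port=80

# Automated rolling upgrade to a new image, and a rollback if it misbehaves.
kubectl set image deployment/web nginx=nginx:1.25
kubectl rollout undo deployment/web

# Secret management: update credentials without rebuilding the container image.
kubectl create secret generic db-creds --from-literal=password='s3cret'

# Self-healing: delete a pod and watch the Deployment controller replace it.
kubectl delete pod -l app=web
kubectl get pods -l app=web --watch
```

Each command is declarative at heart: Kubernetes continuously reconciles the cluster toward the desired state rather than executing one-off imperative steps.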

Why Run Containers in VMware?

Why would you want to run containers in VMware when vSphere has traditionally been known for running virtual machines and is aligned more heavily with traditional infrastructure? There are many reasons for looking at running your containerized workloads inside VMware vSphere, and there are many benefits to doing so.

There have been many exciting developments from VMware over the past few years in the container space, with new solutions that allow businesses to keep pace with containerization and Kubernetes effectively. In addition, according to VMware's own numbers, some 70+ million virtual machine workloads run worldwide inside VMware vSphere.

It helps to get a picture of the vast number of organizations using VMware vSphere for today’s business-critical infrastructure. Retooling and completely ripping and replacing one technology for something new is very costly from a fiscal and skills perspective. As we will see in the following overview of options for running containers in VMware, there are many excellent options available for running containerized workloads inside VMware, one of which is a native capability of the newest vSphere version.

VMware vSphere Integrated Containers

The first option for running containers in VMware is to use vSphere Integrated Containers (VIC). So what are vSphere Integrated Containers? How do they work? The vSphere Integrated Containers (VIC) offering was introduced back in 2016 with the vSphere 6.5 release and was the first VMware-supported solution for running containers side-by-side with virtual machines in VMware vSphere.

It is a container runtime for vSphere that allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Also, vSphere administrators can manage these workloads by using vSphere in a familiar way.

The VIC solution to run containers in VMware is deployed using a simple OVA appliance installation to provision the VIC management appliance, which allows managing and controlling the VIC environment in vSphere. The vSphere Integrated Containers solution is a more traditional approach that uses virtual machines as the container hosts with the VIC appliance. So, you can think of the VIC option to run containers in VMware as a “bolt-on” approach that brings the functionality to traditional VMware vSphere environments.

With the introduction of VMware Tanzu and especially vSphere with Tanzu, vSphere Integrated Containers is not the best option for greenfield installations to run containers in VMware. In addition, August 31, 2021, marked the end of general support for vSphere Integrated Containers (VIC). As a result, VMware will not release any new features for VIC.

Components of vSphere Integrated Containers (VIC)

What are the main components of vSphere Integrated Containers (VIC)? Note the following architecture:


Architecture overview of vSphere Integrated Containers (VIC)

  • Container VMs – have the characteristics of software containers, including ephemeral storage, a custom Linux guest OS, persisting and attaching read-only image layers, and automatic configuration of various network topologies
  • Virtual Container Hosts (VCH) – The equivalent of a Linux VM running Docker, providing many benefits, including a clustered pool of resources, a single-tenant container namespace, an isolated Docker API endpoint, and a private network to which containers are attached by default
  • VCH Endpoint VM – Runs inside the VCH vApp or resource pool. There is a 1:1 relationship between a VCH and a VCH endpoint VM.
  • The vic-machine utility – It is the utility binary for Windows, Linux, and OSX to manage your VCHs in the VIC environment

How to Use vSphere Integrated Containers

As an overview of the VIC solution, getting started using vSphere Integrated Containers (VIC) is relatively straightforward. First, you need to download the VIC management appliance OVA and deploy this in your VMware vSphere environment. The download is available from the VMware customer portal.


Download the vSphere Integrated Containers appliance

Let’s look at the deployment screens for deploying the vSphere Integrated Containers appliance. The process to deploy the VIC OVA appliance is the standard OVA deployment process. Choose the OVA file for deploying the VIC management appliance.


Select the OVA template file

Name the VIC appliance.


Name the VIC appliance

Select the compute resource for deploying the VIC appliance.


Select the compute resource for deploying the VIC appliance

Review the details of the initial OVA appliance deployment.


Review the details during the initial deployment

Accept the EULA for deploying the OVA appliance.


Accept the EULA during the deployment of the OVA appliance

Select the datastore to deploy the VIC appliance.


Select the storage for the VIC appliance

Select the networking configuration for the VIC appliance.


Choose your virtual network to deploy the VIC appliance

On the customize template screen, configure the OVA appliance configuration details, including:

  • Root password
  • TLS certificate details
  • Network configuration (IP address, subnet mask, gateway, DNS, DNS search order, and FQDN)
  • NTP configuration
  • Other configurations
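
For repeatable deployments, the same OVA can also be deployed from the command line with the open-source govc CLI instead of the vSphere Client wizard. This is a sketch only: the OVA filename, appliance name, datastore name, and the property keys inside options.json are assumptions to verify against your environment:

```shell
# Point govc at vCenter Server (credentials are placeholders).
export GOVC_URL='vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='...'

# Generate a JSON spec of the OVA's configurable properties, then edit it
# (root password, TLS details, network configuration, NTP, and so on).
govc import.spec vic-appliance.ova > options.json

# Deploy the appliance to a datastore with the edited options, then power it on.
govc import.ova -name vic-appliance -ds shared_datastore -options options.json vic-appliance.ova
govc vm.power -on vic-appliance
```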


Customize the VIC appliance template configuration

Review and finalize the configuration for the VIC appliance.


Finish the deployment of the VIC appliance

Once the VIC appliance is deployed, you can browse to the hostname you have configured for VIC. You will see the following configuration dialog displayed. Enter your vCenter Server information, connection details, and the password you want to configure for the VIC appliance.


Wizard to complete the VIC appliance installation

Accept the thumbprint for your vCenter Server

Once the installation finishes, you will see the successful installation message. The dashboard provides several quick links to manage the solution. As you can see, you can also go to the vSphere Integrated Containers Management Portal to get started.

Installation of VIC is successful

Once you deploy the VIC appliance, you can download the vSphere Integrated Containers Engine Bundle to provision your VIC container hosts. Once the container hosts are provisioned, you can deploy the container workloads needed for development.

The syntax to create the Virtual Container Host in VIC is as follows:

vic-machine-windows create \
  --target vcenter_server_address \
  --user "Administrator@vsphere.local" \
  --password vcenter_server_password \
  --bridge-network vic-bridge \
  --image-store shared_datastore_name \
  --no-tlsverify \
  --force

Once you have configured the Virtual Container Host, you can create Docker containers. For example, you can create a container running Ubuntu as follows:

docker -H <VCH_IP_address>:2376 --tls run -it ubuntu
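
A couple of follow-on commands against the same Docker API endpoint can verify the VCH is working. This is a hedged sketch; the VCH IP is a placeholder, and whether port mapping is available depends on your VCH network configuration:

```shell
# List container VMs running on the VCH endpoint.
docker -H <VCH_IP_address>:2376 --tls ps

# Run nginx detached and map port 8080 on the VCH to port 80 in the container.
docker -H <VCH_IP_address>:2376 --tls run -d -p 8080:80 nginx
```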

To learn more details on how to deploy vSphere Integrated Containers, take a look at the posts here:

VMware vSphere Integrated Containers – End of General Support

As noted above, vSphere Integrated Containers is now at the end of general support as of August 31, 2021. Why is VMware ending support? Again, due to the advancement in containerized technologies, including Tanzu, VMware is moving forward without VIC. The official answer from VMware on the End of General Support FAQ page for vSphere Integrated Containers (VIC) notes:

“VMware vSphere Integrated Containers (VIC) is a vSphere feature that VMware introduced in 2016 with the vSphere 6.5 release. It is one of the first initiatives that VMware had in the container space to bring containers onto vSphere.

In the last few years, the direction of both the industry and the cloud-native community has moved to Kubernetes, which is now the de facto orchestration layer for containers. During this time, VMware also made significant investments into Kubernetes and introduced several Kubernetes-related products including vSphere with Tanzu which natively integrates Kubernetes capabilities into vSphere. vSphere with Tanzu enables containers to be a first-class citizen on the vSphere platform with a much-improved user experience for developers, dev-ops (platform Op/SRE) teams and IT admins.

Given both the industry and community shift towards Kubernetes and the launch of vSphere with Tanzu, which incorporated many of the concepts and much of the technology behind VIC with critical enhancements such as the use of the Kubernetes API, we decided that it is time to end our support to VIC as more and more of our customers start moving towards Kubernetes.”

As mentioned on the End of Support FAQ page, VMware sees the direction moving forward with Kubernetes technologies. VMware Tanzu provides the supported solution moving forward, running Kubernetes-driven workloads in VMware vSphere.

VMware Embraces Kubernetes with vSphere 7

Organizations today are keen on adopting Kubernetes as their container orchestration platform. With VMware vSphere 7, VMware took a significant stride forward for native containerized infrastructure with the introduction of VMware Tanzu. In addition, VMware vSphere 7 has introduced native Kubernetes support, built into the ESXi hypervisor itself. It means running containers orchestrated by Kubernetes is not a bolt-on solution. Instead, it is a native feature found with a new component in the ESXi hypervisor.

In addition, vanilla Kubernetes can be difficult and challenging to implement. Tanzu provides an integrated and supported way forward for organizations to use the infrastructure they are already using today to implement Kubernetes containers moving forward.

Due to the seamless integration and many other key features, the new Tanzu Kubernetes offering is a far superior way to run containers in VMware in 2022 and beyond. For this reason, VMware is phasing out vSphere Integrated Containers in favor of VMware Tanzu.

VMware Tanzu is an overarching suite of solutions first announced at VMworld 2019. It provides solutions allowing organizations to run Kubernetes across cloud and on-premises environments. For example, with vSphere with Tanzu (codenamed Project Pacific), businesses can run Tanzu Kubernetes right in the VMware vSphere hypervisor. However, it extends beyond vSphere with Tanzu and includes the following solutions:

  • Tanzu Kubernetes Grid
  • Tanzu Mission Control
  • Tanzu Application Service
  • Tanzu Build Service
  • Tanzu Application Catalog
  • Tanzu Service Mesh
  • Tanzu Data Services
  • Tanzu Observability

There are two types of Kubernetes clusters configured with vSphere with Tanzu architecture. These include the following:

  • Supervisor cluster – The Supervisor cluster uses the VMware ESXi hypervisor itself as a worker node by way of the Spherelet, essentially the equivalent of the kubelet. The advantage of the Spherelet is that it runs not inside a virtual machine but natively in ESXi, which is much more efficient.
  • Guest cluster – The guest cluster runs inside specialized virtual machines for general-purpose Kubernetes workloads. These VMs run a fully compliant Kubernetes distribution.
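
As a rough sketch of the developer workflow against a Supervisor cluster (hedged: the server address, namespace, and the tkc.yaml manifest are placeholders, and this assumes the vSphere plugin for kubectl is installed):

```shell
# Authenticate to the Supervisor cluster with the vSphere plugin for kubectl.
kubectl vsphere login --server=<supervisor-address> \
  --vsphere-username administrator@vsphere.local

# Switch to the vSphere Namespace provisioned by the vSphere administrator.
kubectl config use-context my-namespace

# Provision a guest (Tanzu Kubernetes) cluster declaratively, then list it.
kubectl apply -f tkc.yaml
kubectl get tanzukubernetesclusters
```

The guest cluster is described as a Kubernetes custom resource, so it is created, scaled, and upgraded with the same declarative tooling used for the workloads themselves.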


vSphere with Tanzu architecture

To learn more about VMware Tanzu, take a look here:

VMware Tanzu Community Edition (TCE)

VMware Tanzu Community Edition (TCE) is a newly announced VMware Tanzu solution that makes Tanzu-powered containers available to the masses. The project is free and open source, yet it can run production workloads using the same distribution of VMware Tanzu available in the commercial offerings. It is a community-supported project that allows the creation of Tanzu Kubernetes clusters for many use cases, including local development.

You can install VMware Tanzu Community Edition (TCE) in the following environments:

  • Docker
  • VMware vSphere
  • Amazon EC2
  • Microsoft Azure


Tanzu Community Edition installation options

Recently, VMware introduced the unmanaged cluster type in Tanzu Community Edition (TCE) 0.10. The new unmanaged cluster roughly halves the time needed to deploy a Tanzu Community Edition cluster and replaces the standalone cluster type found in previous releases.

The new unmanaged cluster is the best deployment option when:

  • You have limited host resources available
  • You only need to provision one cluster at a time
  • A local development environment is needed
  • Kubernetes clusters are temporary and are stood up and then torn down
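
For those use cases, an unmanaged cluster can be stood up and torn down in a handful of commands. This is a sketch assuming TCE 0.10 or later with Docker running locally; `dev-cluster` is a placeholder name:

```shell
# Create a throwaway local cluster for development.
tanzu unmanaged-cluster create dev-cluster

# Work against it with standard kubectl, then tear it down when finished.
kubectl get nodes
tanzu unmanaged-cluster delete dev-cluster
```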

When looking at options to run containers in VMware in 2022, Tanzu Community Edition (TCE) is a great option to consider as it may fit the use cases needed for running containers in VMware environments. In addition, it offers an excellent option for transitioning away from vSphere Integrated Containers (VIC) and allows organizations to take advantage of Tanzu for free. It also provides a great way to use VMware Tanzu Kubernetes for local development environments.

What is the Cluster API Provider vSphere?

Another interesting project for running containers in VMware vSphere is the Cluster API Provider vSphere (CAPV) project. The Cluster API gives organizations a declarative, Kubernetes-style API for cluster creation, configuration, and management, and CAPV implements that API for vSphere. Because the API is shared, businesses can run a truly hybrid deployment of Kubernetes across their on-premises vSphere environments and multiple cloud providers.
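
A hedged sketch of the CAPV workflow follows, assuming an existing management cluster, the clusterctl CLI, and the vSphere credential environment variables the provider expects; the cluster name and Kubernetes version are placeholders:

```shell
# Credentials for the vSphere provider (values are placeholders).
export VSPHERE_USERNAME='administrator@vsphere.local'
export VSPHERE_PASSWORD='...'

# Install the Cluster API core components plus the vSphere infrastructure provider.
clusterctl init --infrastructure vsphere

# Generate a declarative workload-cluster manifest and apply it.
clusterctl generate cluster workload-1 \
  --infrastructure vsphere \
  --kubernetes-version v1.24.0 > workload-1.yaml
kubectl apply -f workload-1.yaml
```

From that point, the workload cluster is just another set of Kubernetes objects: scaling or upgrading it means editing and reapplying the manifest.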

You can download the CAPV project for running Kubernetes containers in VMware vSphere here:

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Is it Finally Time to Make the Switch?

With the tremendous shift to microservices in modern application architecture, businesses are rearchitecting their application infrastructure using containers. The monolithic three-tier application architecture days are numbered as businesses are challenged to aggressively release enhancements, updates, and other features on short development lifecycles. Containers provide a much more agile infrastructure environment compared to virtual machines. They also align with modern DevOps processes, allowing organizations to adopt Continuous Integration/Continuous Deployment (CI/CD) pipelines for development.

VMware has undoubtedly evolved its portfolio of options to run containers. Many organizations currently use VMware vSphere for traditional workloads, such as virtual machines. Continuing to use vSphere to house containerized workloads offers many benefits. While vSphere Integrated Containers (VIC) has been a popular option for organizations who want to run containers alongside their virtual machines in vSphere, it has reached the end of support status as of August 31, 2021.

VMware Tanzu provides a solution that introduces the benefits of running your containerized workloads with Kubernetes, which is the way of the future. The vSphere with Tanzu solution allows running Kubernetes natively in vSphere 7.0 and higher. This new capability enables organizations to use the software and tooling they have been using for years without retooling or restaffing.

VMware Tanzu Community Edition (TCE) offers an entirely free edition of VMware Tanzu that allows developers and DevOps engineers to use VMware Tanzu for local container development. You can also use it to run production workloads. In addition, both the enterprise Tanzu offering and VMware Tanzu Community Edition can be run outside of VMware vSphere, providing organizations with many great options for running Kubernetes-powered containers for business-critical workloads.

Next-gen VMware Architecture with SmartNICs – Are You On Board?
Altaro DOJO | VMware, Thu, 15 Sep 2022
Next-generation architecture with SmartNICs is helping solve the challenges of next-generation applications. Read all about it here.


Server hardware, virtualization technologies, and modern data center infrastructure are transforming. As a result, modern-day applications are no longer the conventional three-tier applications. Instead, they are highly distributed and contain many containerized microservices. The shift to modern, containerized applications has created new and unique infrastructure challenges in the data center. Next-generation architecture with SmartNICs is helping to solve the challenges of next-generation applications. In addition, VMware’s Project Monterey, which was announced during VMworld 2020, is poised to help companies take advantage of the new SmartNIC architecture and disaggregated infrastructure.

The Challenges of Hybrid Infrastructure and Modern Applications

The traditional approach to server infrastructure with the central processing unit (CPU) provides a core processing “brain,” controlling all the server’s processing capabilities and allowing it to be used for a variety of use cases. These use cases include general-purpose servers, network or security appliances, storage appliances, etc.

The traditional server was also built for monolithic workloads, not distributed applications or workloads. So, no matter the workload on a general-purpose server, the capabilities, architecture, scaling, and other characteristics remain the same.

Modern applications present new and unique challenges to organizations designing and using traditional infrastructure. For example, modern distributed applications involve unstructured data, including images, log files, and text. Standard central processing units (CPUs) found in conventional servers are not well suited for these shifts in applications and the highly distributed nature of these new workload types. As a result, much of the CPU compute processing power is relegated to infrastructure services instead of applications.

Traditional server infrastructure with a standard CPU runs and processes multiple types of payloads in the same processing unit, which is less than ideal. These include:

  • Management payloads – These are core-critical processes allowing the management and control of the infrastructure. These generally run between embedded and userspace payloads
  • Userspace payloads – These are applications and the data they rely on and require
  • Embedded payloads – These payload types are generally included in the core operating system and can run privileged operations in the operating system kernel
Organizations attempting to satisfy the demands of new distributed microservices have created new infrastructure silos. It has also led to inconsistencies in how businesses manage and operationalize their infrastructure. What are some of the challenges that modern microservices application architectures create for traditional infrastructure services?
  • Artificial intelligence and machine learning (AI/ML) – Organizations today are using AI/ML to process the mass of data collected from IoT and other devices. The AI/ML compute infrastructure used by organizations today is often specialized and unique assets used as separate infrastructure resources in the data center. It results in increased complexity and cost for the organizations using them.
  • Complex server scale-out costs – Scalability is an increasing challenge with traditional server infrastructure. As businesses need to scale data center clusters, including CPU and network-based scaling, adding traditional nodes becomes more complex and inefficient. Data processing units like SmartNICs allow scaling infrastructure very granularly by adding the data processing units into the environment to handle specific use cases. In addition, as modern apps become more highly distributed and rely on modern infrastructure technologies as the underpinning for hybrid technologies, a larger percentage of the server capacity is used for infrastructure services and technologies. As a result, it is increasingly difficult to project capacity and scale-out costs for additional capacity.
  • Increasing security concerns – As was shown by the Spectre and Meltdown vulnerabilities, cybersecurity concerns can exist at the CPU hardware layer. It becomes increasingly risky to run infrastructure and application services on the same CPU. The more isolation provided for each application stack layer, the more secure the application data is from current and future threats. Today’s security requirements are increasingly stringent as cybersecurity risks continue to grow, and there is a need for zero-trust separation of workloads, management, and applications. Due to cloud or virtualized environments, it is crucial to have intrinsic platform security. With the way traditional servers are designed, the entire server is one unit, with all the hardware components required to process data and run applications. This tightly coupled hardware unit can make it challenging to maintain platform security and meet other requirements, such as scalability and lifecycle management. It leads to an entire server or set of servers needing to be replaced simultaneously to deliver hardware security upgrades.

New technologies and redesigned hardware and software isolations are needed to satisfy the needs of highly distributed modern applications. However, with the contemporary developments using data processing units such as SmartNICs, this goal is now achievable.

With the variety of processing tasks required in today’s highly distributed processing environments, data processing units like SmartNICs can offload many tasks from the central processing unit (CPU). These offloaded processing tasks help to improve the processing capabilities and efficiencies required by today’s modern applications.

IDC also refers to these data processing units as function-offload accelerators (FAs). It is worth noting that many prominent service providers are already pairing DPUs or FAs with traditional platform architectures. As DPUs and FAs are adopted in the enterprise data center, we will see a paradigm shift in how infrastructure is delivered and software is composed. This shift facilitates the decentralization of services that accompanies the move to microservices, and it helps drive the disaggregation of hardware in the data center.

Benefits of Shifting Infrastructure Services to SmartNICs

The challenges we have noted so far help highlight the physical infrastructure’s role in the transition to scalable, secure, and efficient modern applications. The disaggregated and decentralized approach using data processing units like SmartNICs is becoming very attractive to organizations looking to transition to modern apps running across disaggregated environments.

SmartNIC data processing units solve many of the challenges associated with traditional CPU-centric servers. These data processors offload specific workloads and provide separation of processing tasks. The benefits of a modern data center architecture utilizing DPUs such as SmartNICs include:

  • Freeing CPU and memory used for infrastructure tasks – Once infrastructure tasks and processing are offloaded from the CPU, CPU cycles are freed up for business-critical applications. Organizations no longer have to balance resources between critical applications and the equally critical infrastructure services needed to run them.
  • Standalone control plane – Running infrastructure services on SmartNICs or data processing units provides a standalone control plane for access control and infrastructure services. This separate control plane offers many benefits from a security and operational perspective.
  • Secure, zero-trust computing – There are tremendous security benefits in separating infrastructure services from applications. The operating system and any virtualization platform gain an additional layer of protection against rogue and malicious exploits and code.
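The first benefit above lends itself to a quick back-of-the-envelope estimate. The sketch below is purely illustrative: the 25% infrastructure share and the 64-core host are assumptions for the example, not measured figures from any vendor.

```python
# Rough estimate of CPU capacity reclaimed by offloading infrastructure
# services (SDN, storage, security) to a DPU such as a SmartNIC.
# All numbers here are illustrative assumptions, not measurements.

def reclaimed_capacity(total_cores: int, infra_share: float) -> dict:
    """Estimate cores freed when infrastructure work moves to a SmartNIC.

    total_cores -- physical cores in the host
    infra_share -- fraction of CPU currently consumed by infrastructure
                   services (e.g. 0.25 means 25%)
    """
    infra_cores = total_cores * infra_share
    return {
        "cores_freed": infra_cores,
        "app_cores_before": total_cores - infra_cores,
        "app_cores_after": total_cores,  # infra now runs on the DPU
    }

# Example: a 64-core host where ~25% of CPU goes to infrastructure services.
estimate = reclaimed_capacity(total_cores=64, infra_share=0.25)
print(estimate)
# {'cores_freed': 16.0, 'app_cores_before': 48.0, 'app_cores_after': 64}
```

In other words, under those assumed numbers, a quarter of the host comes back to business-critical workloads without buying another server.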

What is a SmartNIC?

First of all, what exactly is a SmartNIC? A SmartNIC is an enhanced network interface card (NIC) with its own onboard data processing unit, making it a standalone intelligent processor for data center networking, security, and storage tasks.

New generations of discrete data processing units (DPUs), including SmartNICs, GPUs (graphics processing units), and FPGAs (field-programmable gate arrays), are increasingly used for application-specific processing. For years now, we have seen growing use of graphics processing units (GPUs) for a vast number of use cases, including accelerated graphics offloading, but also for specific processing domains such as artificial intelligence (AI) and machine learning (ML). This trend toward GPUs and other discrete "smart" processing units shows the industry's direction regarding how tomorrow's mass of data will be processed.

Foundational NICs (traditional network interface cards) have interconnected computers in Ethernet networks for years, and networking remains a critical component of the modern data center. However, software-defined networking (SDN) is one of the major consumers of compute cycles in the data center, a growing trend as more modern applications and technologies use software-defined networking overlays. Beyond SDN, many other capabilities and features brought about by virtualization and modern microservice architectures tax even current CPUs with additional processing demands, robbing cycles from business-critical applications.

Devices based on the SmartNIC architecture are being developed by a wide range of companies with different approaches to their implementation. These include being implemented by the following technologies:

  • FPGAs – offer good flexibility but are difficult to program, expensive, and not as performant as dedicated ASICs.
  • Dedicated ASICs – provide the best performance and are inexpensive to produce at volume, but offer little programmability once manufactured.
  • System-on-chip (SoC) – SoC designs blend dedicated ASIC-style acceleration with programmable cores, offering a balance of performance and programming flexibility. However, they are the most expensive.

An example of modern SmartNIC technology is the NVIDIA ConnectX-7 400G SmartNIC. It is designed to deliver accelerated networking for cloud-native workloads, artificial intelligence, and traditional workloads. In addition, it offers software-defined, hardware-accelerated storage, networking, and security capabilities to help modernize current and future enterprise data center infrastructure.

NVIDIA ConnectX-7 SmartNIC (image courtesy of NVIDIA)

It provides 400Gb/s bandwidth, accelerated switching and packet processing, advanced RoCE, NVIDIA GPUDirect Storage, and in-line hardware acceleration for TLS/IPsec/MACsec encryption/decryption.

Note these additional features:

    • Accelerated software-defined networking with line-rate performance with no CPU penalty
    • Enhanced storage performance and data access with RoCE and GPUDirect Storage and NVME-oF over RoCE and TCP
    • Enhanced security with hardware-based security engines to offload encryption/decryption processing of TLS, IPsec, and MACsec
    • Accurate, hardware-based time synchronization for applications in the data center

In addition to NVIDIA, Intel also produces SmartNICs and SmartNIC platforms. Intel refers to its solution as the Infrastructure Processing Unit (IPU) for specific infrastructure applications and SmartNICs. Infrastructure processing units accelerate network infrastructure and help free up CPU cores for improved application performance.

An example of the Intel IPU SmartNIC is the Intel IPU C5000X-PL Platform card. It provides a high-performance cloud infrastructure acceleration platform with 2×25 GbE network interfaces and can support cloud infrastructure workloads such as Open vSwitch, NVMe over Fabrics, and RDMA over Converged Ethernet v2 (RoCEv2).

Intel IPU C5000X-PL (Image courtesy of Intel)

The Intel IPU platform codenamed Oak Springs Canyon is the next-generation high-performance cloud infrastructure acceleration platform. It provides 2x100GbE network interfaces and supports the workloads above, including Open vSwitch.

The Intel FPGA SmartNIC N6000PL is an example of Intel's high-performance Agilex FPGA-based SmartNICs, providing 2x100GbE connectivity and supporting many programmable functions, including acceleration of Network Functions Virtualization infrastructure (NFVi) and virtualized radio access networks (vRAN) for 4G/5G deployments.

The Silicom FPGA SmartNIC N5010 is the first hardware-programmable 4x100GbE FPGA-accelerated SmartNIC, enabling servers to meet the performance needs of next-generation firewall solutions.

Silicom FPGA SmartNIC N5010 (Image courtesy of Intel)

VMware on SmartNICs Accelerates Virtualization

As we have detailed, the shift to modern applications is leading to a change in how organizations will be able to provide infrastructure to meet the requirements of the enterprise data center. More processing and compute cycles are spent on infrastructure services needed to connect the hybrid data center across many verticals.

In addition, new security challenges continue to mount as cybersecurity risks grow and the boundaries of the enterprise data center blur with the integration of many cloud technologies and solutions. Businesses need a consistent operating model that unifies traditional and modern apps, better compute resource utilization for workloads without increased infrastructure cost, and security that provides robust isolation between infrastructure services and workloads.

What if you could run VMware not in the traditional way, but rather on a SmartNIC where the ESXi hypervisor is isolated from the applications? Announced at VMworld 2020, Project Monterey is a new solution to meet the modern challenges facing businesses pivoting to modern applications running in distributed environments in the hybrid cloud. What is it?

VMware Project Monterey Unveiled with ESXi on SmartNIC

Project Monterey from VMware reimagines infrastructure as a distributed architecture where data processing units (DPUs) form the backbone of infrastructure management and services, including networking, security, storage, and host management. Instead of running the ESXi hypervisor, storage services, and networking on top of traditional server infrastructure, organizations run them on data processing units (DPUs).

It brings many benefits to managing and operationalizing infrastructure and infrastructure services:

    • Unifies workload management across traditional, cloud-native, and bare-metal operations, reducing operational cost
    • Provides composable software-defined infrastructure to future-proof investments
    • Improves performance by accelerating network, storage, and security services on the DPU, freeing up CPU cycles to achieve better workload consolidation at a lower total cost of ownership (TCO)
    • Enhances zero-trust security with air-gapped isolation between tenants and workloads, including an enterprise-wide security policy that applies uniformly across existing and modern apps
    • Lets IT admins take advantage of the skills and tools they have used in the vSphere ecosystem for years

VMware Brings Security to SmartNICs

As mentioned, software-defined networking and other infrastructure services are significant consumers of CPU and memory resources in traditional servers. In conjunction with VMware Project Monterey, VMware has also announced plans to run distributed firewalls through the NSX-T Data Center Service-defined Firewall directly on SmartNICs.

This would effectively offload the compute and memory requirements of software-defined networking from the traditional CPU onto the SmartNIC data processing units. When you consider that the resource requirements of many infrastructure virtual machines are not insignificant, it underscores the benefits of the transition to the SmartNIC architecture in the data center.

The Future of VMware and SmartNICs

The direction VMware is taking with Project Monterey and SmartNIC support is clear. With the massive shift to disaggregated applications and workloads, the traditional infrastructure model becomes less relevant and less efficient. In addition, the popularity of GPUs in recent years for offloading CPU-intensive AI/ML tasks shows the benefits of these special-purpose data processing units, or co-processors.

VMware is providing a solution for organizations confronting the limits of traditional infrastructure as they adopt microservice application architectures. With Project Monterey, businesses can embrace SmartNICs. By offloading infrastructure services processing and resources to SmartNICs, companies can free up the CPU for the all-important task of running business-critical applications.

Undoubtedly, the combination of VMware and SmartNICs will continue to develop and help solve challenges related to the new "server sprawl" coming from the virtualization movement. It will help address the resource consumption of infrastructure virtual machines and other management VMs that run simply to process infrastructure-related traffic such as software-defined networking.

If you think about it, the traditional management cluster in the VMware vSphere world may now be replaced by SmartNICs running your critical infrastructure services, scaled simply by adding additional DPUs. It will be interesting to see whether future management clusters will reside in the new Project Monterey vSphere clusters. Decentralized, disaggregated infrastructure services are the future for VMware, and they will be exposed through the familiar VMware management and operational tools. How so?

VMware’s Familiar Tools and Operations

Despite major changes underneath, VMware has done a great job keeping the management and operational tools the same for VMware vSphere and related services. Look at solutions like VMware vSphere 7.0 with VMware Tanzu baked in, a.k.a. vSphere with Tanzu: VMware has added all the features and functionality to the existing vSphere Client. With vSphere with Tanzu, VI admins can now run modern applications inside Kubernetes-orchestrated containers right beside the traditional virtual machines that VMware has run for years.

One of the strengths of the VMware vSphere solution is the management platform with vCenter that hasn’t significantly changed for IT admins despite the introduction of new features and capabilities. It is arguably one of the reasons for the platform’s success, providing stability and consistency that admins need for Day 0, 1, and 2 operations.

With Project Monterey, VMware will keep implementing new features and underlying capabilities in vSphere running on SmartNIC data processing units. This will let IT teams operationalize modern operations on disaggregated hardware using vSphere.

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy. To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it's free).

What Does it all Mean?

Modern applications are shifting away from monolithic designs running on traditional servers in the data center toward microservices architectures. With this shift in application architecture, organizations are encountering the limits of conventional physical server configurations. As microservice architectures are adopted, businesses face challenges with scalability, security, and running modern workloads like AI/ML.

SmartNICs will undoubtedly change the infrastructure services landscape in the modern data center by allowing organizations to scale infrastructure services in ways that are not possible with traditional server technologies built around a standard central processing unit (CPU).

VMware's Project Monterey will help organizations using VMware vSphere take advantage of these new SmartNIC data processing units. In addition, it will help modernize the approach to infrastructure services and free up the CPU in traditional server architecture for the business-critical applications it was intended to run. It will be interesting to see how DPUs, including SmartNICs, transform the enterprise and cloud data center landscape.

The post Next-gen VMware Architecture with SmartNICs – Are You On Board? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/next-gen-smartnics/feed/ 0
Creating the Perfect Homelab for VMware Admins https://www.altaro.com/vmware/perfect-homelab-vmware/ https://www.altaro.com/vmware/perfect-homelab-vmware/#comments Fri, 10 Jun 2022 14:01:53 +0000 https://www.altaro.com/vmware/?p=24721 6 VMware professionals explain their homelab setups, what they use them for, configurations, limitations, budget, scalability & more!

The post Creating the Perfect Homelab for VMware Admins appeared first on Altaro DOJO | VMware.

]]>

Working in infrastructure has been a blast since I went down that route many years ago. One of the most enjoyable things in this line of work is learning about cool tech and playing around with it in a VMware homelab project for instance. Running a homelab involves sacrificing some of your free time and dedicating it to learning and experimenting.

Now, it is obvious that learning without a purpose is tricky, as motivation tends to fade quickly. For that reason, it is best to work towards a goal: use your own hardware to conduct a VMware homelab project that earns you a certification, yields material for interesting blog posts, automates things in your home, or follows a learning path aimed at a specific job or a different career track. When interviewing for engineering roles, companies are receptive to candidates who push the envelope to sharpen their skills and don't fear investing time and money to get better.

This article is a bit different than usual as we, at Altaro, decided to have a bit of fun! We asked our section editors, authors, as well as third-party authors to talk about their homelabs. We set a rough structure regarding headlines to keep things consistent but we also wanted to leave freedom to the authors as VMware homelab projects are all different and serve a range of specific purposes.

Brandon Lee

https://www.virtualizationhowto.com/

In my honest opinion, a home lab is one of the best investments I have made in my learning and career goals. However, as the investment isn't insignificant, why would I recommend owning and running a home lab environment? What do you use it for? What considerations should you make when purchasing equipment and servers?

Around ten years ago, I decided that having my own personal learning environment and sandbox would benefit all the projects and learning goals I had in mind. So, the home lab was born! Like many IT admins out there, my hobby and my full-time job are geeking out on technology. So, I wanted to have access at home to the same technologies, applications, and server software I use in my day job.

Why do you have a lab?

Like many, I started with a “part-time” VMware homelab project running inside VMware Workstation. So, the first hardware I purchased was a Dell Precision workstation with 32 gigs of memory. Instead of running vSphere on top of the hardware, I ran VMware Workstation. I believe this may have been before the VMUG Advantage subscription was available, or at least before I knew about it.

I would advise anyone thinking of owning and operating a home lab to start small. Running a lab environment inside VMware Workstation, Hyper-V, Virtualbox, or another solution is a great way to get a feel for the benefits of using a home lab environment. It may also be that a few VMs running inside VMware Workstation or another workstation-class hypervisor is all you need.

For my purposes, the number of workloads and technologies I wanted to play around with outgrew what I was able to do inside VMware Workstation. So, after a few years of running VMware Workstation on several other workstation-class machines, I decided to invest in actual servers. The great thing about a home lab is you are only constrained in its design by your imagination (and perhaps funds). Furthermore, unlike production infrastructure, you can redesign and repurpose along the way as you see fit. As a result, the home lab can be very fluid for your needs.

What’s your setup?

I have written quite a bit about my home lab environment, detailing hardware and software. On the hardware side, I am a fan of Supermicro servers. I have found the Supermicro kits to be very stable and affordable, and many are supported on VMware's HCL for installing vSphere.

Enclosure

    • Sysracks 27U server enclosure

Servers

I have the following models of Supermicro servers:

    • (4) Supermicro SYS-5028D-TN4T
      • Mini tower form factor
      • (3) are in a vSAN cluster
      • (1) is used as a standalone host in other testing
    • (1) SYS-E301-9D-8CN8TP
      • Mini 1-U (actually 1.5 U) form factor
      • This host is used as another standalone host for various testing and nested labs

Networking

    • Cisco SG350-28 – Top of rack switch for 1 gig connectivity with (4) 10 gig SFP ports
    • Ubiquiti – Edgeswitch 10 Gig, TOR for Supermicro servers
    • Cisco SG300-20 – Top of rack IDF

Storage

    • VMFS datastores running on consumer-grade NVMe drives
    • vSAN datastore running on consumer-grade NVMe drives, (1) disk group per server
    • Synology Diskstation 1621xs+ – 30 TB of useable space

In terms of license requirements, I cannot stress enough how incredible the VMUG Advantage subscription is for obtaining real software licensing to run VMware solutions. It is arguably the most "bang for your buck" in software you will purchase for your VMware homelab project. For around $200 (you can find coupons most of the year), you get access to the full suite of VMware solutions, including vSphere, NSX-T, VMware Horizon, vRealize Automation, vRealize Operations, etc.

The VMUG Advantage subscription is how I started with legitimate licensing in the VMware home lab environment and have maintained a VMUG Advantage subscription ever since. You can learn more about the VMUG advantage subscription here: » VMUG Advantage Membership.

I use Microsoft Evaluation Center licensing for Windows, which is valid for 180 days, generally long enough for most of my lab scenarios.

What software am I running?

The below list is only an excerpt, as there are too many items, applications, and solutions to list. As I mentioned, my lab is built on top of VMware solutions. In it, I have the following running currently:

    • vSphere 7.0 Update 3d with latest updates
    • vCenter Server 7.0 U3d with the latest updates
    • vSAN 7.0 Update 3
    • vRealize Operations Manager
    • vRealize Automation
    • vRealize Network Insight
    • VMware NSX-T
    • Currently using Windows Server 2022 templates
    • Linux templates are Ubuntu Server 21.10 and 20.04
    • Running Gitlab and Jenkins for CI/CD
      • I have a CI/CD pipeline process that I use to keep VM templates updated with the latest builds

Nested vSphere labs:

    • Running vSAN nested labs with various configurations
      • Running vSphere with Tanzu with various containers on top of Tanzu
      • Running Rancher Kubernetes clusters

Do I leverage the cloud?

Even though I have a VMware homelab project, I do leverage the cloud. For example, I have access to AWS and Azure and often use these to build out PoC environments and services between my home lab and the cloud to test real-world scenarios for hybrid cloud connectivity for clients and learning purposes.

What does your roadmap look like?

I am constantly looking at new hardware and better equipment across the board on the hardware roadmap. It would be nice to get 25 gig networking in the lab environment at some point in the future. Also, I am looking at new Supermicro models with the refreshed Ice Lake Xeon-D processors.

On the software/solutions side, I am on a continuous path to learning new coding and DevOps skills, including new Infrastructure-as-Code solutions. Also, Kubernetes is always on my radar, and I continue to use the home lab to learn new Kubernetes skills. I want to continue building new Kubernetes solutions with containerized workloads in the home lab environment, which is on the agenda this year in the lab environment.

Any horror stories to share?

One of the more memorable homelab escapades involved accidentally wiping out an entire vSAN datastore as I had mislabeled two of my Supermicro servers. So, when I reloaded two of the servers, I realized I had rebuilt the wrong servers. Thankfully, I am the CEO, CIO, and IT Manager of the home lab environment, and I had backups of my VMs 😊.

I like to light up my home lab server rack

One of the recent additions to the VMware homelab project this year has been LED lights. I ran LED light strips along the outer edge of my server rack and can change the color via remote or have the lights cycle through different colors on a timer. You can check out a walkthrough of my home lab environment (2022 edition with lights) here: VMware Home Lab Tour 2022 Edition Server Room with LED lights at night! A geek's delight! – YouTube

Rack servers for my VMware homelab project

Xavier Avrillier

VMware | DOJO Author & Section Editor

http://vxav.fr

Why do you have a lab?

When I started my career in IT, I didn't have any sort of lab and relied exclusively on the environment I had at work to learn new things and play around with tech. This got me started with running virtual machines in VMware Workstation at home, but computers back then (10 years ago) didn't commonly come with 16GB of RAM, so I had to get crafty with resources.

When studying for the VCP exam, things started to get a bit frustrating, as running a vCenter with just two vSphere nodes on 16GB of RAM is cumbersome (and slow). At this point, I was lucky enough to be able to use a fairly good test environment at work to delay the inevitable, and I managed to get the certification without investing a penny in hardware or licenses.

I then changed employers and started technical writing, so I needed capacity to play around with, and resources pile up fast when you add vSAN, NSX, SRM, and other VMware products into the mix. For that reason, I decided to get myself a homelab dedicated to messing around. I started with Intel NUC mini-PCs like many of us and then moved to the more solid Dell rack server that I am currently running.

I decided to go the second-hand route as it was so much cheaper, and I don't really care about official support; newer software usually works unless the hardware is truly ancient. I got a great deal on a Dell R430. My requirements were pretty simple, as I basically needed lots of cores, plenty of memory, a fair amount of storage, and an out-of-band management card for when I'm not at home and need to perform power actions on it.

What’s your setup?

I currently run my cluster labs nested on the R430 and run workloads natively in VMs when possible. For instance, the DC, NSX Manager, VCD, and vCenter run in VMs on the physical host, but a nested vSAN cluster with NSX-T networking is managed by that same vCenter Server. This is the most consolidated setup I could think of while still offering flexibility.

    • Dell R430
    • VMware vSphere ESXi 7 Update 3
    • 2 x Intel Xeon E5-2630 v3 (2 x 8 pCores @2.40GHz)
    • 128GB of RAM
    • 6 x 300GB 15K rpm in RAID 5 (1.5TB usable)
    • PERC H730 mini
    • Dual 550W power supply (only one connected)
    • iDRAC 8 enterprise license
    • I keep the firmware up to date with Dell OME running in a VM in a workstation on my laptop that I fire up every now and again (when I have nothing better to do).
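As a sanity check on the array above, RAID 5 capacity is easy to work out: one disk's worth of space goes to parity. A minimal sketch (the function name is mine, purely illustrative):

```python
# Usable capacity of a RAID 5 set: (disk_count - 1) disks' worth of space,
# since one disk's capacity is consumed by distributed parity.

def raid5_usable_gb(disk_count: int, disk_size_gb: int) -> int:
    """Return usable capacity in GB for a RAID 5 set (needs >= 3 disks)."""
    if disk_count < 3:
        raise ValueError("RAID 5 requires at least 3 disks")
    return (disk_count - 1) * disk_size_gb

# The R430 above: 6 x 300GB 15K SAS in RAID 5.
print(raid5_usable_gb(6, 300))  # 1500 GB, i.e. the ~1.5TB of the lab
# Adding two more 300GB disks grows the set by their full size:
print(raid5_usable_gb(8, 300) - raid5_usable_gb(6, 300))  # 600
```

That second figure is why expanding a RAID 5 set two disks at a time only buys a modest amount of headroom.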

On the side, I also have a Gigabyte mini-PC running. That one is installed with Ubuntu Server with K3s (Kubernetes) running on it. I use it to run a bunch of home automation stuff that is managed by ArgoCD from a private GitHub repository (GitOps); that way, I can track my changes through commits and pull requests. I also use it with CAPV to quickly provision Kubernetes (and Tanzu TCE) clusters in my lab.

    • Gigabyte BSi3-6100
    • Ubuntu 20.04 LTS
    • Core i3 6th gen
    • 8GB of ram

I also have an old Synology DS115j NAS (Network Attached Storage) that participates in the home automation stuff. It is also a target for vCenter backups and, via Altaro VM Backup, for a few VMs I don't want to have to rebuild. It's only 1TB, but I am currently considering my options to replace it with a more powerful model with more storage.

Network-wise, all the custom stuff happens nested with OPNsense and NSX-T; I try to keep my home network as simple as possible and not complicate it any further.

I currently don’t leverage any cloud services on a daily basis but I spin up the odd instance or cloud service now and again to check out new features or learn about new tech in general.

I try to keep my software and firmware as up-to-date as possible. However, it tends to depend on what I’m currently working on or interested in. I haven’t touched my Horizon install in a while but I am currently working with my NSX-T + ALB + VCD + vSAN setup to deploy a Kubernetes cluster with Cluster API.

VMware homelab project architecture

What do you like and don’t like about your setup?

I like that I have a great deal of flexibility by having a pool of resources that I can consume with nested installs or native VMs. I can scrap projects and start over easily.

However, I slightly underestimated storage requirements and 1.5TB is proving a bit tricky as I have to really keep an eye on it to avoid filling it up. My provisioning ratio is currently around 350% so I don’t want to hit the 100% used space mark. And finding spare 15K SAS disks isn’t as easy as I’d hope.
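That 350% provisioning ratio is worth watching with a bit of arithmetic. The sketch below is illustrative only: the 1.2TB "used" figure is an assumed example, not a number from the lab.

```python
# Thin-provisioning headroom check: given the datastore capacity, how much
# overcommit is in play and how much can still be written before 100% used.
# The 1.2 TB used figure below is an assumption for illustration.

def overcommit_ratio(provisioned_tb: float, capacity_tb: float) -> float:
    """Provisioned-to-capacity ratio (1.0 = no overcommit)."""
    return provisioned_tb / capacity_tb

def headroom_tb(capacity_tb: float, used_tb: float) -> float:
    """Free space left on the datastore, in TB."""
    return capacity_tb - used_tb

capacity = 1.5                 # TB of VMFS capacity on the R430
provisioned = capacity * 3.5   # the ~350% provisioning ratio mentioned above
print(round(overcommit_ratio(provisioned, capacity), 2))  # 3.5
print(round(headroom_tb(capacity, used_tb=1.2), 2))       # 0.3
```

With thin provisioning, the ratio itself is harmless; it is the shrinking headroom figure that decides when VMs start failing writes, which is why it needs a regular eye on it.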

What does your roadmap look like?

As mentioned, I'm reaching a point where storage can become a bottleneck, as interoperable VMware products require more and more resources (NSX-T + ALB + Tanzu + VCD …). I could add a couple of disks, but that would only add 600GB of storage, and I would have to find 15K rpm 300GB disks with caddies, which are not an easy find. For that reason, I'm considering getting a NAS that I can use as an NFS or iSCSI storage backend with SSDs.

Things I am currently checking out include VMware Cloud Director with NSX-T and ALB integration and Kubernetes on top of all that. I’d also like to get in touch with CI/CD pipelines and other cloud-native stuff.

Any horror stories to share?

The latest to date: my physical ESXi host ran on a consumer-grade USB key plugged into the internal USB port, and the key got fried after a few months of usage. My whole environment was running on this host, and I had no backup at the time. Luckily, I was able to reinstall ESXi on a new USB key (plugged into the external port) and re-register all my resources one by one manually.

Also, note that I am incredibly ruthless with my home lab. I only turn it on when needed. So, when I am done with it, none of that proper shutdown sequence, thanks very much. I trigger the shut down of the physical host from vCenter which takes care of stopping the VMs, sometimes I even push the actual physical button (yes there’s one). While I haven’t nuked anything that way somehow, I would pay to see my boss’s face should I stop production hypervisors with the button!

Ivo Beerens

https://www.ivobeerens.nl/

Why do you have a lab?

The home lab is mainly used for learning, testing new software versions, and automating new image releases. My first home lab dates back to the Novell NetWare 3.11 era, and I acquired it with my own money, no employer subsidy 😊

My main considerations and decision points for what I decided to purchase were low noise, low power consumption for running 24×7, room for PCI-Express cards and NVMe support.

What’s your setup?

From a hardware standpoint, computing power is handled by two Shuttle barebone machines with the following specifications:

      • 500 W Plus Silver PSU
      • Intel Core i7 8700 with 6 cores and 12 threads
      • 64 GB memory
      • Samsung 970 EVO 1 TB m.2
      • 2 x 1 GbE Network cards
      • Both barebones are running the latest VMware vSphere version.

In terms of storage, I opted for a separate QNAP TS-251+ NAS with two Western Digital (WD) Red 8 TB disks in a RAID-1 configuration. The barebone machines have NVMe drives with no RAID protection.

The bulk of my workloads are hosted on VMware vSphere and for the VDI solution, I run VMware Horizon with Windows 10/11 VDIs. Cloud-wise, I use an Azure Visual Studio subscription for testing IaaS and Azure Virtual Desktop services.

I manage the environments by automating as much as possible using Infrastructure as Code (IaC). I automated the installation process of almost every part so I can start over from scratch whenever I want.

What do you like and don’t like about your setup?

I obviously really enjoy the flexibility that automation brings to the table. However, the limited resources (max 128 GB) can sometimes be a constraint. I also miss having remote management boards such as HPE iLO, Dell iDRAC or a KVM switch to facilitate hardware operations.

What does your roadmap look like?

I currently plan to upgrade to a 10 GbE switch and bump the memory to 128 GB per barebone.

Paolo Valsecchi

https://nolabnoparty.com/

Why do you have a lab?

I am an IT professional and I often find myself in the situation of implementing new products and configurations without having the right knowledge or tested procedures at hand. Since it is a bad idea to experiment with things directly on production environments, having a lab is the ideal solution to learn, study, and practice new products or test new configurations without the hassle of messing up critical workloads.

Because I’m also a blogger, I study and test procedures to publish them on my blog. This required a better test environment than what I had. Since my computer didn’t have enough resources to allow complex deployments, in 2015 I decided to invest some money and build my own home lab.

It was clear that the ideal lab was not affordable due to high costs. For that reason, I decided to start with a minimum set of equipment to extend later. It took a while before finding the configuration that met the requirements. After extensive research on the Internet, I was finally able to complete the design by comparing other lab setups.

My requirements for the lab were simple: low power, cost-effective hardware, acceptable performance, at least two nodes, one external storage device, compatibility with the platforms I use, and component size.

What’s your setup?

While my lab still meets my requirements, it is starting to become a little obsolete. My current lab setup is the following:

    • PROD Servers: 3 x Supermicro X11SSH-L4NF
      • Intel Xeon E3-1275v5
      • 64GB RAM
      • 2TB WD Red
    • DR Server: Intel NUC NUC8i3BEH
      • Intel Core i3-8109U
      • 32GB RAM
      • Kingston SA1000M8 240G SSD A1000
    • Storage PROD: Synology DS918
      • 12TB WD Red RAID5
      • 250GB read/write cache
      • 16GB RAM
    • Storage Backup: Synology DS918
      • 12TB WD Red RAID5
      • 8GB RAM
    • Storage DR: Synology DS119j + 3TB WD Red
    • Switch: Cisco SG350-28
    • Router: Ubiquiti USG
    • UPS: APC 1400

The lab is currently composed of a three-node cluster running VMware vSphere 7.0.2 with vSAN as the main storage. Physical shared storage devices are configured with RAID 5 and connected to vSphere or backup services via NFS or dedicated LUNs.

The installed Windows Servers run version 2016 or 2019, while the Linux VMs span different distributions and versions.

My lab runs different services, such as:

    • VMware vSphere and vSAN
    • Active Directory, ADFS, Office 365 sync
    • VMware Horizon
    • Different backup solutions (at least 6 different products including Altaro)

In terms of cloud services, I use cloud object storage (S3 and S3-compatible) solutions for backup purposes. I also use Azure to manage services such as Office 365, Active Directory and MFA. Due to high costs, workloads running on AWS or Azure are only created on demand for specific tests.

I try to keep the software up to date with in-place upgrades, except for Windows Server, which I always reinstall. Only once did I have to wipe the lab due to a hardware failure.

What do you like and don’t like about your setup?

With my current setup, I’m able to run the workloads I need and do my tests. Let’s say I’m satisfied with my lab, but…

vSAN disks are not SSDs (only the cache is), RAM installed on each host is limited to 64GB, and the network speed is 1 Gbps. These constraints affect performance and cap the number of running machines, which demand ever more resources.

What does your roadmap look like?

To enhance my lab, the replacement of HDDs with SSDs is the first step in my roadmap. Smaller physical servers to better fit in my room as well as a 10 Gbps network would be the icing on the cake. Unfortunately, this means replacing most of the installed hardware in my lab.

Any horror stories to share?

After moving my lab from my former company to my house, the air conditioning in use during the very first days was not great, and a hot summer was fatal to my hardware: the storage holding all my backups failed, losing a lot of important VMs. A pity that I had deleted those very VMs from the lab just days before. I spent weeks re-creating them! I now have a better cooling system and a stronger backup strategy (3-2-1!).

Mayur Parmar

https://masteringvmware.com

Why do you have a lab?

I use my Home LAB primarily for testing various products to explore new features and functionality that I’d never played with before. This greatly helps me in learning about the product as well as testing it.

I decided to go for a Home Lab 4 years ago because of the complete flexibility and control you have over your own environment. You can easily (or not) deploy, configure and manage things yourself. I bought my Dell Workstation directly from Dell by customizing its configuration according to my needs and requirements.

The first thing I considered was whether it should be bare metal with Rack servers, Network Switches and Storage devices or simply nested virtualization inside VMware Workstation. I went for the nested virtualization route for flexibility and convenience and sized the hardware resources according to what I needed at the time.

What’s your setup?

My home lab is pretty simple: it is made up of a Dell workstation, a TP-Link switch and a portable hard drive.

Dell Workstation:

    • Dell Precision Tower 5810
    • Intel Xeon E5-2640v4 10 Core processor
    • 96 GB of DDR4 Memory
    • 2x1TB of SSDs
    • 2 TB of Portable hard drive
    • Windows 10 with VMware Workstation

At the moment I run a variety of VMs such as ESXi hosts, AD-DNS, backup software, a mail server and a number of Windows and Linux boxes. Because all VMs run on VMware Workstation, no additional network configuration is required, as all VMs can interact with each other on virtual networks.

Since my home lab is on VMware Workstation, I have the flexibility to keep up-to-date versions alongside older ones, to test and compare features for instance. Because it runs in VMware Workstation, I often get to wipe out and recreate the complete setup. Whenever newer versions are released, I always upgrade to try out new features.

What do you like and don’t like about your setup?

I like the flexibility VMware Workstation gives me to set things up easily and scratch them just as easily.

On the other hand, there are a number of things I can’t explore, such as setting up solutions directly on a physical server, working on firmware, configuring storage and RAID levels, networking, routing and so on.

What does your roadmap look like?

Since I bought my Dell Workstation, I constantly keep an eye on the resources to avoid running out of capacity. In the near future, I plan to continue with that trend but I am considering buying a new one to extend the capacity.

However, I am currently looking at buying a NAS device to provide shared storage capacity to the compute node(s). While I don’t use any just now, my future home lab may include cloud services at some point.

Any horror stories to share?

A couple of mistakes I made in the home lab include failing to create DNS records before deploying a solution, a messed-up vCenter upgrade that required deploying a new vCenter Server, and a failed Standard Switch to Distributed Switch migration that caused a network outage and forced me to reset the whole networking stack.

Simon Cranney

https://esxsi.com/

Why do you have a lab?

A couple of years ago I stood up my first proper VMware home lab project. I had messed about with running VMware Workstation on a gaming PC in the past, but this time I wanted something I could properly get my teeth into and have a VMware vSphere home lab without resource contention.

Prior to this, I had no home lab. Many people fortunate enough to work in large enterprise infrastructure environments may be able to fly under the radar and play about with technologies on work hardware. I can neither confirm nor deny that this was something I used to do! But hey, learning and testing new technologies benefits the company in the long run.

What’s your setup?

Back to the current VMware home lab then, I had a budget in mind so ended up going with a pair of Intel NUC boxes. Each with 32 GB RAM and a 1 TB PCIe NVMe SSD.

The compute and storage are used to run a fairly basic VMware vSphere home lab setup. I have a vCenter Server as you’d expect, a 2-node vSAN cluster, and vRealize Operations Manager, with a couple of Windows VMs running Active Directory and some different applications depending on what I’m working on at any given point in time.

My VMware home lab licenses are all obtained free of charge through the VMware vExpert program but there are other ways of accessing VMware home lab licenses such as through the VMUG Advantage membership or even the vSphere Essentials Plus Kit. If you are building a VMware home lab though, why not blog about it and shoot for the VMware vExpert application?

In terms of networking, I’ve put in a little more effort! It’s slightly out of scope here, but in a nutshell:

    • mini rack with the Ubiquiti UniFi Dream Machine Pro
    • UniFi POE switch
    • And a number of UniFi Access Points providing full house and garden coverage

I separate out homelab and trusted devices onto an internal network, partner and guest devices onto an external network, and smart devices or those that like to listen onto a separate IoT network. Each network is backed by a different VLAN and associated firewall rules.

What do you like and don’t like about your setup?

Being 8th Generation, the Intel NUC boxes caused me some pain when upgrading to vSphere 7. I used the Community Network Driver for ESXi Fling and played about adding some USB network adapters to build out distributed switches.

I’m also fortunate enough to be running a VMware SD-WAN (VeloCloud) Edge device, which plugs directly into my works docking station and optimizes my corporate network traffic for things like Zoom and Teams calls.

What does your roadmap look like?

In the future, I’d like to connect my VMware home lab project to some additional cloud services, predominantly in AWS. This will allow me to deep dive into technologies like VMware Tanzu, by getting hands-on with the deployment and configuration.

Whilst VMware Hands-on Labs are an excellent resource, like many techies I do find that the material sticks and resonates more when I have had to figure out integrations and fixes in a real-life environment. I hope you found my setup interesting. I’d love to hear in the comments section if you’re running VMware Tanzu in your home lab and from any other UniFi fans!

Get More Out of Your Homelab

It is always fun to discuss home labs and discover how your peers do it. It’s a great way to share “tips and tricks” and to learn from the successes and failures of others. Hardware is expensive, and so are electricity, the real estate to store it, and so on.

Learn how to design on a budget for the VMware homelab building process

For these reasons and many others, you should ask yourself a few questions before even looking at home lab options to better steer your research towards something that will fit your needs:

    • Do I need hardware, cloud services or both? On-premises hardware involves investing a chunk of money up front, but it means you are in total control of the budget, as electricity will be the only variable from then on. On the other hand, cloud services let you pay only for what you use. They can be very expensive, but they could also be economical under the right circumstances. Some of you will only require Azure services because it’s your job, while I, for one, couldn’t run VMware Cloud Director, NSX-T and ALB in the cloud.
    • Do you have limited space or noise constraints? Rack and tower servers are cool, but they are bulky and loud. A large number of IT professionals went for small, passive and silent mini-PCs such as the Intel NUC. These grew in popularity after William Lam from VMware endorsed them and network drivers for USB adapters were released as Flings. These small form factor machines are great and offer pretty good performance with i3, i5 or i7 processors. You can get a bunch of them to build a cluster that won’t use much energy and won’t make a peep.
    • Nested or Bare-Metal? Another question that is often asked is if you should run everything bare-metal. I personally like the flexibility of nested setups but it’s also because I don’t have the room for a rack at home (and let’s face it, I would get bad looks!). However, as you saw in this blog, people go for one or the other for various reasons and you will have to find yours.
    • What do you want to get out of it? If you are on the VMware DOJO, you most likely are interested in testing VMware products, meaning vSphere will probably be your go-to platform. In that case you will have to think about licenses. Sure, you can use evaluation licenses, but you’ll have to start over every 60 days, which is not ideal at all. The vExpert program and the VMUG Advantage program are your best bets in this arena. On the other hand, if you are only playing with open-source software, you can install Kubernetes, OpenStack or KVM on bare metal and you won’t have to pay for anything.
    • How many resources do you need? This question goes hand in hand with the next one. Playing around with vSphere, vCenter or vSAN won’t set you back that much, but if you want to get into Cloud Director, Tanzu, NSX-T and the like, you will find that they literally eat CPU, memory and storage for breakfast. So, look at the resource requirements of the products you want to test to get a rough idea of what you will need.
    • What is your budget? Now the tough question: how much do you want to spend, both on hardware and on energy (which links back to small form factor machines)? It is important to set yourself a budget and not just start buying stuff for the sake of it (unless you have the funds). Home lab setups are expensive and, while you might get a 42U rack full of servers for cheap on the second-hand market, your energy bill will skyrocket. On the other hand, a very cheap setup will still cost you a certain amount of money, but you may not get anything out of it due to hardware limitations. So set yourself a budget and try to find the sweet spot.
    • Check compatibility: Again, don’t jump in guns blazing at the first offer. Double-check that the hardware is compatible with whatever you want to evaluate. Sure, it is likely to work even if it isn’t in the VMware HCL, but it is always worth it to do your research to look for red flags before buying.
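
The sizing and budget questions above lend themselves to quick back-of-the-envelope arithmetic. The sketch below (Python, with purely illustrative per-product figures, not official requirements) totals the footprint of a planned stack and checks it against a candidate host, allowing some CPU overcommit while keeping RAM and disk at 1:1:

```python
# Rough homelab sizing helper. The per-product figures are illustrative
# assumptions -- always check each vendor's published minimums before buying.
PRODUCTS = {
    "vcenter":  {"vcpu": 2, "ram_gb": 12, "disk_gb": 50},
    "nsx_mgr":  {"vcpu": 4, "ram_gb": 16, "disk_gb": 300},
    "alb_ctrl": {"vcpu": 4, "ram_gb": 12, "disk_gb": 128},
}

def total_footprint(stack):
    """Sum vCPU/RAM/disk needs for a list of product names."""
    return {
        metric: sum(PRODUCTS[p][metric] for p in stack)
        for metric in ("vcpu", "ram_gb", "disk_gb")
    }

def fits(host, stack, cpu_overcommit=4.0):
    """CPU can be overcommitted in a lab; RAM and disk cannot."""
    need = total_footprint(stack)
    return (need["vcpu"] <= host["cores"] * cpu_overcommit
            and need["ram_gb"] <= host["ram_gb"]
            and need["disk_gb"] <= host["disk_gb"])

if __name__ == "__main__":
    host = {"cores": 6, "ram_gb": 64, "disk_gb": 1000}
    stack = ["vcenter", "nsx_mgr", "alb_ctrl"]
    print(total_footprint(stack))
    print("fits" if fits(host, stack) else "does not fit")
```

Plug in your own product list and candidate hardware; the point is simply to do the addition before spending the money.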

Those are only a few key points I could think of but I’d be happy to hear about yours in the comments!

Is a VMware Homelab Worth it?

We think that getting a home lab is definitely worth it. While the money aspect might seem daunting at first, investing in a home lab is investing in yourself. The wealth of knowledge you can get from 16 cores/128GB servers is lightyears away from running VMware Workstation on your 8 cores/16GB laptop. Even though running products in a lab isn’t real-life experience, this might be the differentiating factor that gets you that dream job you’ve been after. And once you get it, the $600 you spent for that home lab will feel like money well spent with a great ROI!

VMware Homelab Alternatives

However, if your objective is to learn about VMware products in a guided way and you are not ready to buy a home lab just yet for whatever reason, fear not, online options are there for you! You can always start with the VMware Hands-on Labs (HOL), which offer a large number of learning paths covering most of the products sold by the company. Many of them you couldn’t even test in your home lab anyway (especially the cloud ones like Carbon Black or Workspace ONE). Head over to https://pathfinder.vmware.com/v3/page/hands-on-labs and register for Hands-on Labs to start learning instantly.

The other option to run a home lab for cheap is to install VMware Workstation on your local machine if you have enough resources. This is, in almost all cases, the first step before moving to a more serious and expensive setup.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to secure backup quickly and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

What Homelab Set Up is Right for You?

I think we will all agree that our work doesn’t fit within the traditional 9-to-5 as keeping our skills up is also part of the job and it can’t always be done on company time. Sometimes we’ll be too busy or it might just be that we want to learn about something that has nothing to do with the company’s business. Home labs aren’t limited to VMware or Azure infrastructure and what your employer needs. You can put them to good use by running overkill wifi infrastructures or by managing your movie collection with an enterprise-grade and highly resilient setup that many SMBs would benefit from. The great thing about it is that it is useful on a practical and personal level while also being good fun (if you’re a nerd like me).

Gathering testimonies about VMware homelab projects and discussing each other’s setups has been a fun and very interesting exercise. It is also beneficial to see what is being done out there and identify ways to improve and optimize our own setups. I now know that I need an oversized shared storage device in my home (this will be argued)!

Now we would love to hear about your VMware homelab project that you run at home, let’s have a discussion in the comments section!

The post Creating the Perfect Homelab for VMware Admins appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/perfect-homelab-vmware/feed/ 4
The VMware Admin’s Guide to Windows Server 2022 Templates https://www.altaro.com/vmware/windows-server-template/ https://www.altaro.com/vmware/windows-server-template/#respond Fri, 20 May 2022 12:00:13 +0000 https://www.altaro.com/vmware/?p=24214 All the tips and tricks on how to create a Windows Server 2022 VMware template, including settings, optimization, tools and best practices

The post The VMware Admin’s Guide to Windows Server 2022 Templates appeared first on Altaro DOJO | VMware.

]]>

Except in very specific use cases, every organization running a VMware infrastructure deploys virtual machines such as Windows Server 2022 as part of its operations to provide the business with the resources it needs. The easiest way to ensure fast and efficient delivery is through the use of virtual machine templates. Instead of creating a new virtual machine, mounting the ISO, and installing Windows Server 2022 and the usual software on top of it, deploying from a template allows you to skip these steps and be ready in minutes.

In this article, we will demonstrate how to create a Windows Server 2022 template with a few best practices. Note that you can go further by leveraging tools such as vRealize Automation, Terraform and other automation solutions.

What VMware Admins Need to Know About Windows Server 2022

First and foremost, let us start with a quick look at the main enhancements brought by Windows Server 2022 (released in September 2021):

    • Secured-core server: Advanced protection achieved with Hardware TPM 2.0, protected firmware and Virtualization-based Security (VBS). Note that it requires certain OEM specifications and capabilities.
    • System-Guard: Windows Defender feature that helps defend end-user PCs against the likes of rootkits and bootkits.
    • Windows Admin Center: a modern replacement for the old Server Manager console (which still exists).
    • Windows Server 2022 Azure Edition Hotpatching: A new way of installing Windows updates in Windows Server Azure Edition virtual machines that does not require a reboot after installation.
    • Windows Server Azure Edition: A version of Windows Server 2022 that is designed to run as a VM within Microsoft Azure or on top of Azure Stack HCI on-prem.
    • No more free Hyper-V Server: After 13 years, Microsoft declared the end of the free Hyper-V and Azure Stack HCI as the way forward. Not everyone was pleased with that.
    • MsQuic: Microsoft’s QUIC implementation which will power HTTP/3 and improve SMB file transfers

Note that we only skimmed the surface of the main changes here. For more details on Windows Server 2022, check out our dedicated articles in the DOJO Hyper-V section.

How to Download and Install Windows Server 2022

The first step in creating a template is to download the Windows Server 2022 ISO and install it on a new virtual machine.

Browse to the Windows Server 2022 download link, check Download the ISO and click Continue. You don’t need an account or anything else to download the Windows Server 2022 ISO.

Windows Server 2022 download to prepare the template VM


Note that if English is not your preferred language, you can download the new Languages and Optional Features ISO to add language packs.

Windows Server 2022 still comes with an evaluation period that expires in 180 days like previous versions.

The installation process is similar to any other Windows Server. Create a new virtual machine, mount the ISO, boot on it and install Windows Server 2022. In this example, we install Desktop Experience.

Creating a Virtual Machine Template

To create a template, you first need to provision the virtual machine on which you will install Windows Server 2022. Here are a few things to consider when doing so:

    • Disk types (thin or thick): What is your company policy when it comes to disk types? Thin provisioning is the go-to choice for many but you may have to configure this otherwise.
    • SCSI Controller: LSI Logic
    • vCPU and Memory allocation: Capacity planning and resource allocation have always been tricky problems for vSphere admins. It is recommended to provision your template with the minimum requirements; that way, new VMs are not oversized when their resources aren’t tuned. 4GB of RAM and 2 vCPUs (2 sockets x 1 core) is usually the recommended choice for mixed workload setups.
    • VMXNet3: Set the network card of the virtual machine to VMXNET3 which offers better performance than E1000.

VMXNET3 network controllers offers better performance than E1000


    • VMware Hardware Level: Consider the compatibility level of the VM and use EVC to ensure compatibility. This will be an “it depends” type of thing, but you usually want to use the level of the oldest host (across clusters or not, that’s for you to decide).
    • Remove unused devices: It is recommended to remove virtual hardware devices that won’t be used by most VMs, such as floppy, serial and parallel ports.
    • IP address: It is best to configure the server with DHCP or put it in a non-production network to avoid the risk of a duplicate IP with a production VM.

Additionally, you can check our blog from a few years back about creating vSphere VM templates.
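
The provisioning checklist above is easy to encode and validate before a VM is promoted to a template. Here is a minimal Python sketch; the field names are illustrative (this is not a VMware API), but it shows how the recommendations can be enforced consistently:

```python
# Encode the template-provisioning recommendations as a checklist and
# warn about deviations. Field names are illustrative, not a VMware API.
RECOMMENDED = {
    "nic_type": "vmxnet3",                          # outperforms E1000
    "max_vcpus": 2,                                 # keep templates small
    "max_ram_gb": 4,
    "removed_devices": {"floppy", "serial", "parallel"},
}

def check_template(spec):
    """Return a list of warnings for settings deviating from the checklist."""
    warnings = []
    if spec.get("nic_type") != RECOMMENDED["nic_type"]:
        warnings.append("use a VMXNET3 network adapter")
    if (spec.get("vcpus", 0) > RECOMMENDED["max_vcpus"]
            or spec.get("ram_gb", 0) > RECOMMENDED["max_ram_gb"]):
        warnings.append("template oversized; start from 2 vCPU / 4 GB")
    missing = RECOMMENDED["removed_devices"] - set(spec.get("removed_devices", ()))
    if missing:
        warnings.append(f"remove unused devices: {sorted(missing)}")
    if spec.get("ip_assignment") != "dhcp":
        warnings.append("prefer DHCP (or an isolated network) for templates")
    return warnings

oversized = {"nic_type": "e1000", "vcpus": 8, "ram_gb": 32,
             "removed_devices": ["floppy"]}
for w in check_template(oversized):
    print("WARNING:", w)
```

A check like this can run in whatever pipeline builds your templates, so deviations are caught before the first clone is deployed.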

Considerations for Template Creation

How specific should the template be?

There are several approaches to maintaining VMware templates:

    • Some prefer a limited number of templates that are as generic as possible, with a configuration common to all workloads in the environment: easier maintenance but more post-deployment tasks.
    • Others maintain multiple templates tailored to different types of workloads: more maintenance overhead but fewer post-deployment tasks.

In this article, we will demonstrate the first approach, as it is what will work for most readers.

Do not join the template to AD

This question pops up every so often on forums or Reddit. The answer is no, you shouldn’t join your template to Active Directory. This step should be performed when deploying new servers, in fact, it is part of the Customization Specifications.

You may need to temporarily join the template to AD to receive the updates from WSUS or SCCM but you should take it out once it is done.

Should I Sysprep my VMware template?

There is no need to Sysprep your Windows Server 2022 installation as it can be done during the deployment process as part of the customization spec if you use it.

Customization Specs can run Sysprep on deployed virtual machines automatically


Preparation of the Windows Server 2022 Template

Once the OS is installed on the VM, we can start preparing it. Your mileage may vary here but the following steps should apply to most environments.

1 – VMware Tools

The first thing to do after you install Windows Server 2022 is to install the VMware Tools to ensure the best performance. VMware Tools include drivers for the virtual hardware (VMXNET3, paravirtual…) as well as memory reclamation mechanisms, tighter integration with vCenter, better mouse support and so on.

Installing the VMware Tools is one of the first things to do for a new machine


Installing the VMware Tools is very easy and requires a restart of the virtual machine. Find the procedure on how to install the tools in our complete guide on the topic.

You may also want to enable “Check and upgrade VMware Tools before each power on” to ensure they are automatically up to date.

Check and upgrade VMware Tools before each power on keeps your VM Tools up to date


2 – Windows Update

You will find this recommendation in every single blog and documentation out there because it is an important one.

Although your machines are most likely managed by WSUS or SCCM, keeping Windows updates as recent as possible in your templates will minimize the post-deployment overhead of downloading updates, installing them and rebooting Windows Server 2022 several times.

Ensure that there are no available updates for Windows Server 2022 before you turn it into a template


3 – Other Windows Settings

Note that some of these may very well be replaced by your organization’s policies but they may prove valuable.

You can go into Server Manager and disable IE Enhanced Security Configuration for both administrators and users. You may also want to check that the correct time zone is configured.

IE enhanced Security Configuration

You can also go into the Diagnostics & Feedback settings to disable everything and set the feedback frequency to Never. While you’re at it, you can also click on Inking & Typing personalization and disable it.

Power options to High Performance

Then open the Control Panel by typing Control Panel in the Run window (Win+R) and set the power plan to High Performance.


4 – vTPM and Secured-Core server (Optional)

We mentioned earlier that Secured-core server is a new feature of Windows Server 2022. Although it is not fully taken advantage of yet, you may want to prepare your VMs for it if you run a highly secured environment.

In order to do so, you will need to enable virtual Trusted Platform Module (vTPM) on your template. Note that several requirements exist to enable it.

5 – VMware OS Optimization Tool (Optional)

If you want to go as far as you can in the preparation and optimization of the OS you install in your template, you can have a look at VMware OS Optimization Tool. Although it is aimed at Horizon desktops, it can be leveraged regardless.

This used to be a Fling for tuning VMware Horizon golden images, which then made its way into the final product (it was productized). It has even included a companion Microsoft Deployment Toolkit plugin since June 2021.

Keep in mind that many of the changes you make using the VMware OS Optimization Tool may be overridden by your organization’s GPO (Group Policy Object).

VMware OS Optimization Tool is a great tool to optimize Windows Server 2022


6 – Tools you may want to consider installing

This step is very specific to each environment, as not all organizations use the same tools. Note that you should try to keep your templates as lean as possible and avoid cluttering them with the likes of 7-Zip, Notepad++, etc. These are fine for client OSes such as Windows 11 but shouldn’t really be installed on servers.

The software usually found in templates should be common to all workloads, such as:

    • Windows Admin Center
    • Monitoring agents
    • Antivirus agents
    • Inventory agents
    • BGInfo

Automate Template Creation with Packer

Packer is an application distributed by Hashicorp that gives IT Pros the ability to automate their VM template builds in order to save time and enforce compliance. You can refer to our dedicated blog on the topic to learn more about it.
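
To give a flavor of what such a build looks like, here is a trimmed HCL sketch of Packer’s `vsphere-iso` source for a Windows Server 2022 template. Every name, path and credential below is a placeholder, and the option set shown is only a subset; consult the Packer vSphere plugin documentation for the authoritative list:

```hcl
# Trimmed illustration only -- all names/paths below are placeholders.
variable "vcenter_password" {
  type      = string
  sensitive = true
}

source "vsphere-iso" "ws2022" {
  vcenter_server      = "vcsa.lab.local"
  username            = "administrator@vsphere.local"
  password            = var.vcenter_password
  insecure_connection = true

  cluster       = "Cluster01"
  datastore     = "Datastore01"
  vm_name       = "tpl-ws2022"
  guest_os_type = "windows2019srvNext_64Guest"

  CPUs = 2        # matches the "minimum requirements" sizing discussed above
  RAM  = 4096

  disk_controller_type = ["lsilogic-sas"]
  storage {
    disk_size             = 61440
    disk_thin_provisioned = true
  }
  network_adapters {
    network      = "VM Network"
    network_card = "vmxnet3"
  }

  iso_paths           = ["[Datastore01] ISO/windows_server_2022.iso"]
  convert_to_template = true
}

build {
  sources = ["source.vsphere-iso.ws2022"]
  # Provisioners (Windows Update, VMware Tools checks, cleanup) would go here.
}
```

With `convert_to_template = true`, the VM is marked as a template at the end of the build, so every run produces a fresh, consistently configured template.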

Organization is key

Whether you are the best in your field or a beginner, everyone will agree that documentation and organization are key to smooth operations. You can make your life and your colleagues’ lives easier with a few simple steps.

Add notes to Template

It is best practice to keep notes of what was done, and by whom, on a specific template. For instance, you may want to record the date of the latest change, what was done, and the user who performed the operation.

That way you will know when a template needs updating if it hasn’t been done in a long time.
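
A consistent note format makes that history easy to scan (and to parse later). As a small illustration, this hypothetical Python helper builds the string you would paste into the template’s Notes field; the format is a suggested convention, not a VMware requirement:

```python
from datetime import date

# Build a consistent, parseable note for the template's "Notes" field.
# The layout is a suggested convention, not a VMware requirement.
def template_note(user, change, patched_through=None, when=None):
    when = when or date.today()
    lines = [f"[{when.isoformat()}] {user}: {change}"]
    if patched_through:
        lines.append(f"  patched through: {patched_through}")
    return "\n".join(lines)

print(template_note("jdoe", "VMware Tools upgraded, Windows Update run",
                    patched_through="2022-05", when=date(2022, 5, 20)))
```

The same convention works just as well if you set the note via automation instead of the UI.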

VM Notes help keep track of changes and improve teamwork


Use the vSphere Content Library

Instead of keeping your templates and ISOs in VM-specific folders with no version tracking, it is recommended to use vSphere Content Libraries. There are several benefits to leveraging the Content Library feature:

    • Operators can deploy VMs from a single pane where all templates are maintained and consolidated.
    • Other vCenter instances, either local or remote, can subscribe to a published library. That way the resources are kept up to date across the board.
    • Better change tracking and versioning if the feature is used correctly.

The vSphere Content Library is a great way to manage templates and ISO files


You may also want to keep a Windows Server 2022 ISO in there for when you need to add features to a server.

Use customization specifications

Unless you use another mechanism to deploy your workloads, it is highly recommended to leverage Customization Specifications to configure the new VM as part of the deployment process.

This will save you a lot of time and avoid errors down the line. You can also check out our blog on how to create a GUI tool to deploy VMs with PowerCLI.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Are VMware Templates for Windows Server 2022 Worth it?

Windows Server 2022 brings a lot of value, especially to companies leveraging cloud services or hybrid cloud implementations. While the trend adopted by software vendors is to move their management plane to the cloud, IT departments will still be deploying virtual machines in their environment and Windows Server 2022 will be no exception.

While automation takes many forms in the current IT landscape with various self-provisioning tools getting more and more sophisticated, moderate size organizations cannot always afford to go to these lengths. In such instances, maintaining healthy IT hygiene requires ensuring that Windows Server 2022 templates are kept up to date and follow best practices. It can take a bit of time but it’s definitely worth it in the long run especially if you’re frequently spinning up new VMs.

The post The VMware Admin’s Guide to Windows Server 2022 Templates appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/windows-server-template/feed/ 0
What Is VMware Horizon and How Does It Work? https://www.altaro.com/vmware/vmware-horizon/ https://www.altaro.com/vmware/vmware-horizon/#respond Fri, 21 Jan 2022 12:54:53 +0000 https://www.altaro.com/vmware/?p=23589 VMware Horizon is a robust brokering technology, providing access to critical resources for remote workers essential in the modern workplace

The post What Is VMware Horizon and How Does It Work? appeared first on Altaro DOJO | VMware.

]]>

Businesses today have been forced to switch to remote working to ensure business continuity. After the pandemic began in early 2020, it caused a shift to a majority remote workforce, seemingly overnight. With the change to a distributed workforce, new requirements have emerged for businesses around availability, security, and flexibility.

Virtual Desktop Infrastructure (VDI) is a solution that allows connecting remote workers with virtual desktops and applications running in a corporate data center. VMware Horizon is a VDI solution offered by VMware that provides a robust feature set and capabilities for remote workers. So what is VMware Horizon, and how does it work?

What is VMware Horizon?

Today, the work from anywhere model is no longer optional for businesses. Providing accessibility, flexibility, and connectivity from anywhere for the distributed workforce allows remote employees to remain productive no matter where they are located.

As the pandemic escalated, businesses quickly found that legacy on-premises desktop and app virtualization platforms, which predated the widespread use of the cloud, were not equipped for current challenges. This left many companies struggling to provide the distributed workforce with fast and reliable access to the apps they need for business productivity.

VMware Horizon is an end-to-end solution for managing and delivering virtualized or physical desktops and virtual application delivery to end-users. It allows creating and brokering connections to Windows & Linux virtual desktops, Remote Desktop Services (RDS) applications, and desktops. It can also deliver Linux-hosted applications.

VMware Horizon is a Virtual Desktop Infrastructure (VDI) solution, a core component of VMware’s digital workspace for businesses looking to deliver virtual desktops and applications to their workforce. It provides the tooling and capabilities that enable access from any device and is deeply integrated with other VMware solutions and services such as VMware NSX, VMware Workspace One, vSAN, and others.

VMware Horizon provides secure and robust connectivity for remote workers

Recent VMware Horizon versions have evolved to provide desktop resources on-premises, in the cloud, hybrid clouds, and multi-cloud environments.

VMware Horizon Editions

VMware Horizon is provided in three editions:

    • Horizon Standard
    • Horizon Advanced
    • Horizon Enterprise

All three editions provide the components needed for end-to-end virtual desktop deployment.

What are the key capabilities and features of VMware Horizon?

    • VMware Horizon is a flexible and agile hybrid cloud platform.
    • It enables businesses to utilize existing datacenter-based resources, including transforming on-premises desktop and app environments without redeploying.
    • It provides the ability to leverage the cloud for additional capacity and use cases.
    • It lets you choose if and when you transition workloads to optimize performance and lower the cost of on-premises environments.
    • It lets you leverage cloud-native control plane services. As a result, it reduces costs, improves productivity, and shifts IT focus from manual tasks to automated processes.
    • It allows you to manage and monitor your deployment from one central management GUI.
    • It offers the ability to meet remote user needs, keeping employees connected to desktops and apps from anywhere and any device with a single login. It doesn't matter where the data resides, on-premises or in the cloud.
    • The Horizon control plane delivers the ability to deploy, manage, and scale virtual desktops and apps across hybrid cloud environments.
    • Horizon is a modern platform for securely delivering virtual desktops and apps across the hybrid cloud, keeping employees connected, productive, and engaged, anytime and anywhere.

Deliver applications and desktops automatically and in real-time

One of the key benefits and use cases of VMware Horizon is to deliver applications and desktops automatically and in real-time. Today, many organizations are using VMware Horizon as the vehicle that allows remote workers to connect to virtual machine resources or physical workstations in the corporate network, without VPN, or exposing an RDP server to the outside world.

Administrators configure desktop pools consisting of a single desktop or multiple desktops that end-users can connect to and utilize. When there are multiple virtual machines or physical desktops in a single pool, users will be placed on an available desktop resource in the pool.

Desktop pools consist of:

    • Automated desktop pools – An automated desktop pool uses a vCenter Server template or virtual machine snapshot to generate new machines. The machines can be created when the pool is created or generated on demand based on pool usage.
    • Manual desktop pools – A manual desktop pool provides access to an existing set of machines. Any machine that can install the VMware Horizon agent is supported. These include both vCenter virtual machines and physical desktops.
    • RDS Desktop pools – A Microsoft RDS desktop pool provides RDS sessions as machines to Horizon users. The Horizon Connection Server manages the RDS sessions in the same way as normal machines. Microsoft RDS hosts are supported on vCenter virtual machines and physical computers.

Viewing VMware Horizon Desktop Pools

Application Pools provide remote workers with access to published applications, either from a desktop pool or RDS farm.

Viewing a published application in VMware Horizon

The Horizon Administration Console also allows quickly performing maintenance tasks such as enabling or disabling specific Horizon Connection Servers and performing backup operations. You can also add vCenter Server environments and integrate your Unified Access Gateways into the environment.

Performing maintenance operations in the VMware Horizon Administration Console

Simplify management and maintenance tasks

One of the key areas that VMware Horizon provides quick time to value is the area of management and maintenance. The VMware Horizon Administration Console is an HTML 5 web console that is quick and intuitive. All of the tasks are very wizard-driven with natural workflows.

In the VMware Horizon Administration Console, administrators can easily see:

    • Problem vCenter VMs
    • Problem RDS hosts
    • Events
    • System Health

The VMware Horizon Monitoring dashboard quickly shows the overall system health, sessions, workload, VDI desktops, RDSH desktops, RDSH applications, and other information.

Viewing the VMware Horizon monitoring dashboard

Keep sensitive data safe and enforce endpoint compliance

Several tools and VMware Horizon configurations help keep business-critical and sensitive data safe and enforce endpoint compliance. For example, the Endpoint Compliance Checks feature is part of the Unified Access Gateway (UAG) and provides a layer of security for clients accessing Horizon resources. It helps verify end-user client compliance with predefined policies, such as antivirus or encryption policies on endpoints.

Currently, two endpoint compliance check providers offer the ability to check the compliance of endpoints:

    • OPSWAT – The OPSWAT MetaAccess persistent agent or the OPSWAT MetaAccess on-demand agent on the Horizon Client communicates the compliance status to an OPSWAT instance. It can then enforce policies related to the health of the endpoint and the allowed access to Horizon resources

OPSWAT Endpoint Compliance Checks

    • Workspace ONE Intelligence (Risk Analytics) – The Workspace ONE Intelligence platform has a risk analytics feature. It can assess both user and device risk by identifying behaviours that affect security and calculating a risk score for each device and user. Based on the risk score, policies can define whether or not clients can connect and access resources.

End-user components

Only a couple of components are required on end-user clients for VMware Horizon: you can use either a browser to connect to the Horizon environment or the VMware Horizon Client. Most modern clients feature an HTML5-capable browser that allows connecting to VMware Horizon.

While you can connect to VMware Horizon-enabled endpoints using a web browser, the most robust connection experience is provided with the VMware Horizon Client. However, a question often comes up with the VMware Horizon Client – is it free?

The VMware Horizon Client is indeed a free download from the VMware Customer Connect portal. Also, there is no need to provide an email address and sign up for an account. You can find the most recent download of the VMware Horizon Clients here:

Downloading the VMware Horizon Client

The availability and ease of downloading the VMware Horizon Client help to ensure remote workers can easily download, install, and connect to VMware Horizon resources. Another great feature built into the VMware Horizon Client is checking for and updating the client directly from the interface.

Checking for updates to VMware Horizon Client

When remote workers browse to the public URL of the Unified Access Gateway, the UAG presents the Horizon Connection Server web page, allowing users to download the client or connect to their assigned resources using the VMware Horizon HTML access link.

Browsing to the VMware Horizon web access

VMware Workspace ONE UEM additional components

Organizations using cloud-based VMware Workspace ONE can simplify access to the cloud, mobile, and enterprise applications from various types of devices. Workspace ONE Unified Endpoint Management (UEM) is a single solution for modern, over-the-air management of desktops, mobile, rugged, wearables, and IoT.

It manages and secures devices and apps, taking advantage of native MDM capabilities in iOS and Android and the mobile-cloud management efficiencies found in modern versions of Windows, macOS, and Chrome OS.

Supported devices with Workspace ONE UEM

Managing clients with Workspace ONE UEM requires that the Workspace ONE UEM agent be installed on the managed devices. It can be installed manually, via scripted installations, or by using GPOs. Organizations can also make use of the Workspace ONE Intelligent Hub for an easily integrated digital workspace solution designed to improve employee engagement and productivity through a single app.

Read more about VMware Workspace ONE Intelligent Hub here:

The New Naming Format for VMware Horizon 8

VMware has departed from the naming convention associated with legacy versions of VMware Horizon. While older versions were named according to a "major.minor" scheme, VMware has adopted a "YYMM" naming convention denoting the year and month of the release, much like other software vendors have in the last couple of years.

VMware Horizon 8 is denoted with a new naming convention in the YYMM format

If you see a VMware Horizon version that starts with at least a "20," it is synonymous with VMware Horizon 8 across various documentation.
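As a quick illustration of the convention, a hypothetical helper can decode a YYMM release name and flag Horizon 8-era releases (the cutoff of 2006, the first Horizon 8 release, follows the rule of thumb above; the helper names are my own, not VMware's):

```python
def is_horizon_8(release: str) -> bool:
    """A release name following the YYMM scheme maps to Horizon 8
    when it is '2006' (the first Horizon 8 release) or later.
    Legacy 'major.minor' names like '7.13' return False."""
    return release.isdigit() and len(release) == 4 and int(release) >= 2006

def release_year_month(release: str) -> tuple:
    """Decode a YYMM release name, e.g. '2111' -> (2021, 11)."""
    return 2000 + int(release[:2]), int(release[2:])
```

So "2006", "2111", and "2209" all denote Horizon 8 builds, while "7.13" is a legacy Horizon 7 release.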

Is VMware Horizon a VPN?

There are many ways that enterprise organizations have traditionally delivered access to internal resources for remote employees. Virtual Private Network (VPN) has historically been a prevalent and familiar way for end-users to access business-critical resources that reside on the internal corporate network from the Internet.

While VPN is more secure than simply placing internal resources accessible directly from the Internet (not recommended), it also has its share of security issues. With VPN connections, a VPN client is loaded on the client workstation, laptop, or other devices, creating a secure, encrypted tunnel between the client and a VPN terminator, such as a firewall or other VPN device.

VPNs traditionally have been used for remote connectivity

While this secures and encrypts the communication between the client and the internal network, it essentially makes the end-user device part of the network. You can think of a VPN connection as simply a “long patch cable” between the corporate network switch and the client. There are ways to secure VPN connections and scope down the resources the external clients can see. However, it opens the door to potentially connecting a client with malware to the corporate network. It also creates the possibility of easy data exfiltration from the corporate network to the client.

VPN connections are also notoriously complex and cumbersome to manage and maintain. Admins must manage each VPN client individually in most cases. In addition, each VPN connection is its own tunnel to the corporate network, creating the need for tedious management of multiple tunnels.

VMware Horizon provides a solution that is not VPN-based and solves the challenges mentioned above with traditional VPN connections. Note the following:

    • Remote users connect to virtual or physical desktops that are provisioned inside the corporate network. This means the end-user remote client is not directly connected to the corporate network.
    • While the Horizon Client is recommended for the most robust experience connecting to the VMware Horizon environment, end-users can also connect to provisioned resources over a simple web browser connection, with no client required.
    • VPNs may not work with all types of devices. VMware Horizon connectivity, either via the Horizon Client or a web browser connection, means almost any modern device with web connectivity can allow a user to connect to VMware Horizon resources.
    • Admins have a consolidated and centrally managed set of infrastructure as a connectivity point, either with the Unified Access Gateways (recommended for secure external connectivity) or the Horizon Connection Servers.
    • Combined with VMware NSX-T Data Center, administrators can easily secure the connectivity between VMware Horizon resources and control which resources users can reach, making it an identity-driven solution.

VMware Anywhere Workspace

VMware Horizon is a core component of the VMware Anywhere Workspace. What is the VMware Anywhere Workspace? It is a holistic solution that combines multiple components required for effective and efficient secure remote access, including:

    • Digital workspace solution – Provided by VMware Horizon cloud services or on-premises resources
    • Endpoint security – Organizations can seamlessly secure their remote worker interface with VMware NSX-T Data Center and VMware Carbon Black.
    • Secure Access Service Edge (SASE) – A platform that converges industry-leading cloud networking and cloud security to deliver flexibility, agility, security, and scale for enterprise environments of all sizes.

Note how VMware Horizon fits into the various aspects of VMware Anywhere Workspace:

    • It helps to manage multi-modal employee experience – With the VMware Anywhere Workspace, VMware Horizon can help deliver a familiar desktop and application experience across workspace locations and devices.
    • Security and the distributed edge – VMware Horizon delivers access to desktops and applications to any endpoint.
    • Anywhere Workspace Integrations – Workspace Security brings Carbon Black together with Workspace ONE UEM and VMware Horizon.

VMware Horizon Architecture and Logical Components

VMware Horizon has a robust architecture composed of many different components that make up the end-to-end solution. The components of the VMware Horizon architecture include:

    • Horizon Client – The client is the piece that forms the protocol session connection to a Horizon Agent running in a virtual desktop, RDSH server, or physical machine
    • Unified Access Gateway (UAG) – It provides secure edge services for the Horizon Client. The Horizon Client authenticates to a Connection Server through the Unified Access Gateway and then forms a protocol session connection through the UAG to the Horizon Agent running in a virtual desktop or RDSH server.
    • Horizon Connection Server – The Connection Server brokers and connects users to the Horizon Agent installed on VMs, physical hosts, and RDSH servers. The Connection Server authenticates user sessions through Active Directory, and grants access to the proper entitled resource.
    • Horizon Agent – The agent is installed in the guest OS of the target VM or system. It allows the machine to be managed by the Connection Servers and allows a Horizon Client to connect using the protocol session to the Horizon Agent.
    • RDSH Server – Microsoft Remote Desktop Servers that provide access to published applications and session-based remote desktops to end-users.
    • Virtual Machine – Virtual machines can be configured as persistent or non-persistent desktops. Persistent desktops are usually assigned in a 1-to-1 fashion to a specific user. Non-persistent desktops are assigned in desktop pools that can be dynamically provisioned to users as needed.
    • Physical Desktop – Counterintuitively, VMware Horizon can be used as a secure and efficient way to deliver connectivity to physical desktops to end-users. Starting with VMware Horizon 7.7, VMware introduced the ability to broker physical desktop machines with RDP. In Horizon 7.12, support was added for Blast protocol connectivity to physical desktops.
    • Virtual Application – Horizon can be used with RDSH servers to provide virtual application delivery. Using the functionality of the published application in RDSH, VMware Horizon can deliver the published applications to assigned users.

Logical Components

There are other components of Horizon architecture that are considered to be logical components of the solution. Some of the components listed below are not absolutely required. However, they can be used to enhance a Horizon deployment and scale the capabilities, security, and performance of the solution.

    • Workspace ONE Access – VMware Workspace ONE provides the solution for enterprise single sign-on (SSO) for the enterprise. It simplifies the access to apps, desktops, and other resources to the end-user. It can integrate with existing identity providers and provide a seamless login experience to create a smooth access workflow. It also offers application provisioning, a self-service catalogue, and conditional access.
    • App Volumes Manager – VMware App Volumes Manager coordinates and orchestrates the delivery of applications by managing assignments of application volumes. These include packages and writable volumes that can easily assign applications to users, groups, and target computers.
    • Dynamic Environment Manager – User profiles are also challenging in dynamic environments with multiple resources accessed by a single user. Dynamic Environment Manager enables seamless profile management by capturing user settings for the operating system and also end-user applications.
    • VMware vSAN™ storage – VMware vSAN is a software-defined storage solution that offers many advantages in the enterprise. It can deliver high-performance, highly-scalable storage that can be seamlessly managed from the vSphere Client as part of the native VMware solution. It does this by aggregating locally attached storage in each ESXi host in the vSphere cluster and presenting it as a logical volume for virtual machines and modern workloads. When it comes to VMware Horizon environments that are mission-critical, you want to have highly-resilient storage that is scalable and performant. VMware Horizon environments backed by VMware vSAN work exceptionally well for this use case.
    • VMware NSX-T Data Center – Another consideration for VMware Horizon environments and end-user computing is security. VMware NSX-T Data Center provides the network-based security needed in EUC environments. It allows easily creating secure, resilient, and software-defined networks that allow admins to take advantage of micro-segmentation for VMware Horizon workloads. Each virtual desktop can be isolated from all other virtual desktops using VMware NSX-T Data Center, bolstering security and protecting other critical Horizon infrastructure, such as the Connection Servers.
    • Microsoft SQL Servers – It is recommended to have a dedicated Microsoft SQL Server to house the event databases required by VMware Horizon. Plan your VMware Horizon deployment accordingly.

Horizon Hybrid and Multicloud Architecture

VMware Horizon can be deployed in many different architecture designs. These include on-premises, in the cloud, or a combination of hybrid and multi-cloud architectures.

In a VMware Horizon hybrid deployment, infrastructure can run in an on-premises datacenter with the Horizon control plane running in the cloud, or workloads can be deployed both on-premises and in the public cloud and joined together. In addition, organizations can connect their existing Horizon 7 or Horizon 8 implementations to the Horizon Cloud Service using the Horizon Cloud Connector appliance.

The VMware Horizon Control Plane Services are designed to meet modern challenges for remote workers and connectivity. Organizations whose VDI implementations span on-premises and cloud environments benefit in particular, as the Horizon Control Plane allows managing all hybrid and multi-cloud deployments and configurations from a single place.

VMware Horizon hybrid architecture with the Horizon Control Plane

It provides many benefits outside of management, including:

    • Universal brokering
    • Image management
    • Application management
    • Monitoring
    • Lifecycle management

The Horizon Control Plane Services

Just-in-time desktops and apps

VMware Horizon technology allows organizations to provision "just-in-time" desktops and applications. Using a technology VMware calls Instant Clone Technology, entire desktops can be provisioned just-in-time. Instant Clone Technology allows the rapid cloning of virtual machines in just a few seconds, provisioning, on average, one clone per second.

The Instant Clone Technology is really a radical evolution of what VMware Composer clones could do previously. With Instant Clone Technology, the steps required to provision a clone with VMware Composer are dramatically reduced. Note the comparison of the two processes below:

Comparing VMware Horizon Composer with Instant Clone Technology

The VMware Instant Clone Technology was born from a project called “vmFork” that uses rapid in-memory cloning of a running parent virtual machine and copy-on-write to deploy the virtual machines to production rapidly.

    • Copy-on-write – Copy-on-write is an optimization strategy in which tasks share the same data until one of them modifies it; the modifying task first creates a separate private copy so its changes do not become visible to the other tasks. With this technique, the parent VM is quiesced and then forked. The forking process creates two branches of the running machine, and the resulting clones receive unique MAC addresses, UUIDs, and other identity information.

Using the Instant Clone Technology with VDI provisioning is perfect for the just-in-time desktop and applications use case. New workstations can quickly be provisioned, just in time for the user to log into the environment. Then, using VMware App Volumes to attach AppStacks to the just-in-time desktops dynamically, you can have fully functional workstations with dynamically assigned applications in a matter of seconds, fully customized for each user.
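The copy-on-write idea behind instant clones can be modelled with a toy Python sketch (purely illustrative: real instant clones share memory pages and disk blocks, not dictionary entries, and the hostnames below are made up):

```python
class CowClone:
    """Toy model of copy-on-write forking: clones share the parent's
    state for reads and only materialise a private copy of an item
    when they first write to it, so the parent and sibling clones
    never see each other's changes."""

    def __init__(self, parent_state: dict):
        self._parent = parent_state  # shared; never mutated by clones
        self._private = {}           # this clone's modified entries

    def read(self, key):
        return self._private.get(key, self._parent[key])

    def write(self, key, value):
        self._private[key] = value   # copy-on-write: parent stays untouched

# Fork two clones from one running parent; each receives a unique identity.
parent = {"hostname": "win2022-parent", "mac": "00:50:56:aa:bb:cc"}
clone_a, clone_b = CowClone(parent), CowClone(parent)
clone_a.write("hostname", "vdi-desktop-01")
clone_b.write("hostname", "vdi-desktop-02")
```

Because nothing is copied until a clone diverges, forking is near-instant and cheap, which is what makes the one-clone-per-second provisioning rate possible.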

Should you be using VMware Horizon?

VMware Horizon is a powerful remote connectivity solution that allows businesses today to solve the challenges of remote workers and connectivity needs. In addition, it enables businesses to scale their deployments with modern architectures, including hybrid cloud deployments and multi-cloud architectures.

With the new VMware Horizon Control Plane services, organizations can manage multiple VMware Horizon deployments across sites, clouds, and different infrastructures from the cloud. In addition, it opens up the possibility for organizations to use heterogeneous implementations of virtual desktops that may exist across on-premises and public cloud environments and aggregate these services for end-users.

VMware provides a rich set of additional solutions and services that seamlessly integrate with VMware Horizon and extend the solution’s capabilities, scalability, security, and management. These include VMware vSAN, VMware NSX-T Data Center, VMware Workspace ONE, Workspace ONE UEM, and VMware Anywhere Workspace.

For end-user clients, connecting to Workspace ONE or native VMware Horizon resources is as simple as browsing the solution’s service URLs. While the VMware Horizon Client provides the most robust connectivity experience for end-user clients, users can also use the HTML client to connect to virtual machines, physical desktops, and applications using a simple web browser.

The Instant Clone Technology provided by VMware Horizon allows just-in-time desktops and applications to be provisioned in seconds, a feat that is amazing to see and provides businesses with the capability to have exponentially more scale in providing virtual desktops to end-users. In addition, the dynamic capabilities offered by VMware Horizon allow companies to elastically scale up and scale down virtual desktops, even with on-premises infrastructure.

The post What Is VMware Horizon and How Does It Work? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-horizon/feed/ 0
Setting up Enhanced Linked Mode in vCenter 7.0 https://www.altaro.com/vmware/enhanced-linked-mode/ https://www.altaro.com/vmware/enhanced-linked-mode/#comments Fri, 07 Jan 2022 17:10:05 +0000 https://www.altaro.com/vmware/?p=23519 Simplify the management of your SDDCs and reduce operational overhead with vCenter server Enhanced Linked mode on VMware vCenter 7

The post Setting up Enhanced Linked Mode in vCenter 7.0 appeared first on Altaro DOJO | VMware.

]]>

VMware vCenter Enhanced Linked Mode (ELM) allows virtual infrastructure admins to connect and manage multiple vCenter Server instances together, through a single pane of glass.

By joining vCenter Servers together in Enhanced Linked Mode, they become part of the same Single Sign-On (SSO) domain, allowing administrators to log into any of the linked vCenter Servers with a single set of credentials and manage the inventories of all of them simultaneously.

As well as roles and permissions, ELM also enables the sharing of tags, policies, and search capabilities across the inventories of all linked vCenter Servers from the vSphere Client.

An example of a common ELM setup is the management and workload vCenter Servers from the primary and secondary sites (for a total of 4) linked together, improving ease of administration and usability.

Example vCenter Enhanced Linked Mode Setup


What is the Difference Between Enhanced Linked Mode and Hybrid Linked Mode?

Hybrid Linked Mode is concerned with linking your on-premises vCenter Server with a cloud vCenter Server. The key difference is that Hybrid Linked Mode does not join the same SSO domain, but instead maps through the connection using either a Cloud Gateway Appliance or an LDAP Identity Source.

You can set up on-premises vCenter Servers in Enhanced Linked Mode, and still connect these to a cloud vCenter Server using Hybrid Linked Mode. An example of this is a hybrid cloud setup with VMware Cloud on AWS providing the cloud vCenter, linked with vCenter Servers in your data centre(s).

Example vCenter Hybrid Linked Mode Setup


What are the Requirements for Enhanced Linked Mode in vCenter 7.0?

    • An embedded Platform Services Controller (PSC) deployment
    • vCenter Server Standard licensing; ELM is not included with vCenter Server Foundation or Essentials
    • All vCenter Servers must be running the same version

If you are running vCenter 7.0, note that both the Windows-based vCenter Server and the external Platform Services Controller are deprecated and no longer supported deployment options.

For previous versions, or non-compliant deployment types, review the following steps:

    • vCenter 6.0 – vSphere 6.0 is out of support. Whilst ELM was available with vCenter 6.0, it required external PSC node(s), which are no longer a supported deployment option in vCenter 7.0. Upgrade to vSphere 6.5 or 6.7 first, and then upgrade to vCenter 7.0.
    • vCenter 6.5/6.7 – ELM is supported with the embedded PSC from vCenter 6.5 Update 2 and later. However, due to the end of support approaching on October 15 2022 for both vSphere 6.5 and 6.7, you should still consider upgrading to vCenter 7.0.
    • Windows vCenter – Windows vCenter Servers are not supported with ELM or with vCenter 7.0. During the upgrade process, you can migrate all your configuration and historical data to the vCenter Server Appliance from the vCenter 7.0 upgrade UI.
    • External PSC – The external PSC deployment model is not supported with vCenter 7.0. During the upgrade process, you can consolidate your external PSC(s) into the embedded model using the converge tool built into the vCenter 7.0 upgrade UI.

How to Configure Enhanced Linked Mode for Existing vCenter Server Appliances

If you have existing vCenter Server deployments in separate SSO domains, then you can still join the vCenter Servers together in Enhanced Linked Mode using the SSO command line utility.

First, confirm your vCenter Server instance is not already using Enhanced Linked Mode as part of an existing SSO domain:

    • Log into the vSphere Client
    • Select the vCenter Server (top level) from the inventory
    • Click the Linked vCenter Server Systems tab
    • If you cannot see this option, click the … icon to reveal more
    • Review the list of linked vCenter Server systems
    • If the list is blank, then ELM is not setup

The steps below will demonstrate repointing a source vCenter, not already in ELM, to an existing target SSO domain. You will need to amend the syntax with the following values:

    • --src-emb-admin Administrator
      • The source SSO domain administrator, account name only. The default is administrator.
    • --replication-partner-fqdn FQDN_of_destination_node
      • The Fully Qualified Domain Name (FQDN) of the target vCenter Server.
    • --replication-partner-admin SSO_Admin_of_destination_node
      • The target SSO domain administrator, account name only. The default is administrator.
    • --dest-domain-name destination_SSO_domain
      • The target SSO domain name, the default is vsphere.local.

Additionally, please note that:

    • Whilst ELM is supported with vSphere 6.5 Update 2 and later, SSO domain repointing is only supported with vCenter 6.7 Update 1 onwards
    • The command line utility requires the Fully Qualified Domain Name (FQDN) of the vCenter Server and will not work with the IP address
    • The source vCenter Server is unavailable during domain repointing
    • Ensure you have taken a file-based backup of the vCenter Server to protect against data loss

First, SSH onto the source vCenter Server. During the repointing exercise, you can migrate tags, categories, roles, and privileges.

Check for any conflicts between the source and destination vCenter Servers using the pre-check command:

cmsso-util domain-repoint -m pre-check --src-emb-admin Administrator --replication-partner-fqdn FQDN_of_destination_node --replication-partner-admin SSO_Admin_of_destination_node --dest-domain-name destination_SSO_domain
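For example, with hypothetical values filled in (the administrator names, FQDN, and domain below are placeholders for illustration, not values to rely on), you can build and print the pre-check command for review before running it from an SSH session on the source appliance:

```shell
#!/bin/sh
# Hypothetical example values -- substitute your own environment details.
SRC_ADMIN="Administrator"
PARTNER_FQDN="vcsa02.lab.local"   # assumed FQDN of the target vCenter
PARTNER_ADMIN="Administrator"
DEST_DOMAIN="vsphere.local"

# Echo the pre-check invocation so it can be reviewed; drop the 'echo'
# to actually run it on the source vCenter Server Appliance.
echo cmsso-util domain-repoint -m pre-check \
  --src-emb-admin "$SRC_ADMIN" \
  --replication-partner-fqdn "$PARTNER_FQDN" \
  --replication-partner-admin "$PARTNER_ADMIN" \
  --dest-domain-name "$DEST_DOMAIN"
```

Printing the command first is a simple way to double-check the substituted values before committing to the repoint.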

To migrate any data generated during pre-check, and repoint the vCenter Server to the target domain, run the execute command:

cmsso-util domain-repoint -m execute --src-emb-admin Administrator --dest-domain-name destination_SSO_domain

If you did not run the pre-check then run the full execute syntax:

cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-fqdn FQDN_of_destination_node --replication-partner-admin SSO_Admin_of_destination_node --dest-domain-name destination_SSO_domain

You can validate ELM using the Linked vCenter Server Systems view in the vSphere client, outlined above. Alternatively, you can use the following command:

./vdcrepadmin -f showpartners -h FQDN_of_vCenter -u administrator -w SSO_Admin_Password
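On the vCenter Server Appliance, the vdcrepadmin utility typically lives under /usr/lib/vmware-vmdir/bin; the path, FQDN, and password below are assumptions for illustration only:

```shell
#!/bin/sh
# Hypothetical values -- replace with your environment's details.
VMDIR_BIN="/usr/lib/vmware-vmdir/bin"   # typical vCSA location (assumed)
VC_FQDN="vcsa01.lab.local"              # placeholder vCenter FQDN
SSO_PASS="VMware1!"                     # placeholder SSO administrator password

# Echo the validation command for review; drop the 'echo' to run it
# from a shell session on the appliance.
echo "$VMDIR_BIN"/vdcrepadmin -f showpartners -h "$VC_FQDN" -u administrator -w "$SSO_PASS"
```

The output of showpartners should list the replication partner vCenter Servers if ELM is configured.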

How to Configure Enhanced Linked Mode with vCenter 7.0

To configure Enhanced Linked Mode, a vCenter Server with an existing SSO domain must already be in place. This may be an existing vCenter in your environment, or one deployed from scratch.

If you are deploying a greenfield environment then install vCenter Server as normal, creating a new SSO domain by default as part of the process.

Follow the process outlined below to configure Enhanced Linked Mode with your second, or further vCenter Servers in the environment.

    • Follow stage 1 of the vCenter Server 7.0 install process as normal.
    • Stage 1 deploys the appliance to your target host and datastore, and configures the appliance size and network settings.
    • Once stage 1 is complete you are prompted to continue to stage 2.
    • The SSO domain configuration is done during stage 2 configuration.

vCenter Server Stage 2 Install

    • Click next. Verify the network, time, and SSH settings, then click next again.
    • On the SSO Configuration page change the default option from the new SSO domain, to join an existing SSO domain.

vCenter Server Join Existing SSO Domain

    • Enter the details of the vCenter Server for the target SSO domain, along with the existing administrator password.
    • Click next. Configure the Customer Experience Improvement Program (CEIP) accordingly and click next again.
    • Review the settings and click finish to finalise the deployment.
    • Once complete, log into vCenter Server as normal.
    • You should now see the vCenter along with any linked vCenter Servers from the vSphere Client.
    • You can further validate the ELM configuration by selecting the vCenter Server (top level) from the inventory and clicking the Linked vCenter Server Systems tab.
    • The linked vCenter Servers will now be listed.

vCenter Server Configured Enhanced Linked Mode

Wrap Up

I hope that you enjoyed this article and that you now have a better idea of how to properly set up Enhanced Linked Mode in vCenter 7.0. If there are any questions, please let me know in the comments below.

The post Setting up Enhanced Linked Mode in vCenter 7.0 appeared first on Altaro DOJO | VMware.

How to use VMware Converter for P2V (Physical to Virtual)

Fri, 26 Nov 2021 13:45:29 +0000

VMware Converter provides an easy way to perform P2V and V2V operations. In this article, you'll find the processes explained in full detail.


Most organizations are well along on their virtualization journey in the enterprise data center, and here we will discover how VMware Converter for P2V can help complete it. Virtual workloads account for the majority of servers running in most environments today. However, many businesses may still have physical workloads running in their data center for various reasons.

As the hardware lifecycle reaches its end for physical workloads, most businesses will look to virtualize physical server workloads and run them inside a virtual machine. VMware Converter has long been a solution to allow easily virtualizing physical workloads and transitioning these to virtual machines. So let’s see how to perform the all-important VMware Converter for P2V conversion.

What is P2V?

Intuitively, P2V stands for “physical to virtual” and represents the process of converting and migrating a physical computer image into a virtual machine (VM). Unlike a migration where you take the applications and data from one computer and copy them to an entirely new platform, with VMware Converter for P2V, you take an exact image-level copy of the physical computer and transform it into a virtual machine.

The virtual machine then retains the same state as the physical computer, including the operating system, applications, configuration, data, and even assigned resources. However, after a P2V operation, the assigned resources, such as CPU and memory, can easily be adjusted or changed.

What is the purpose of P2V operations?

The VMware Converter for P2V operation allows organizations to achieve the objective of server consolidation. Since the onset of the server virtualization movement, modern hypervisors have enabled much more efficient use of the powerful technology and hardware in current enterprise servers.

A single workload installed on a bare metal server generally does not use modern CPU, memory, and storage resources to their fullest potential. By running a hypervisor on top of the bare metal server instead, organizations can run multiple workloads on the same hardware and utilize the underlying resources far more effectively.

Instead of having to “lift and shift” physical servers to virtual machines using challenging migration processes, organizations can use the VMware Converter for P2V process to simply take the Server as-is and seamlessly convert it to a virtual machine. It helps organizations realize the objective of their server consolidation projects.

In addition, as physical server hardware ages and nears the end of its lifecycle, organizations can use the VMware Converter for P2V process to move workloads off servers that are no longer supported to virtual machines running in supported hypervisor environments, such as VMware vSphere.

What is VMware Converter for P2V?

VMware Converter, now known as VMware vCenter Converter Standalone, is a tool used in VMware vSphere environments to convert physical machines to VMware virtual machines, using the P2V process. You can also use VMware Converter to perform V2V (virtual to virtual) conversions, turning virtual machines running in one type of virtual environment into VMware virtual machines. For the purposes of this post, we will refer to it simply as VMware Converter.

While this is a supported VMware tool, note that the last build of VMware Converter was released in 2018. VMware has not yet announced any plan to deprecate the product, and there is no comparable VMware tool for performing similar conversions.

 

Version Release Date Build Number Installer Build Number
Converter Standalone 6.2.0.1 2018-05-22 8466193 N/A
Converter Standalone 6.2 2017-12-14 7348398 N/A
Converter Standalone 6.1.1 2016-02-16 3533064 N/A

 

Supported Types of Migration

You can use VMware Converter to perform a physical to virtual conversion of a powered-on machine, whether it is:

    • the local machine where VMware Converter is running
    • a remote Windows host
    • a remote Linux host

Using VMware Converter to convert a powered-on machine

You can also convert machines that are powered off in the following environments:

    • VMware Infrastructure virtual machine (vSphere)
    • VMware Workstation or other VMware virtual machine
    • Hyper-V Server

Using VMware Converter to convert powered-off virtual machines

Hot vs. Cold cloning

The above options illustrate the difference between what is known as a hot clone vs. a cold clone. Hot denotes the source machine for the VMware Converter for P2V or V2V is powered on, whereas cold refers to a powered-off machine. So when would you want to perform a cold clone vs. a hot clone?

Many server or workstation types may do well with a hot clone process while the machine is in a powered-on state. The hot clone process may work fine if the source does not have dynamically changing data and is relatively static. However, workloads that house database applications are a much better fit for the cold clone process.

It can be problematic with most database applications to have the database still running, servicing the application with data changing while running a hot clone. In addition, the hot clone copy of the database server can have missing or corrupted data. The cold clone with the machine powered off ensures no changes are made to the database, so the new VM copy contains all the data without any chance of corruption.

In the screenshots above, you may note there is no cold or powered-off option for physical Windows or Linux machines. Many versions ago with VMware Converter, VMware made a “Cold-Clone CD” available that allowed booting a Live CD and running the conversion process on the workload.

This CD or ISO is no longer made available in the latest versions of VMware Converter. An alternative to running a Cold-Clone of a server or workstation with database applications is stopping all the database services and ensuring all critical services are quiesced before running a hot migration of the powered-on workload. If you have an old workload, circa Windows Server 2003, you can still find the old VMware Cold-Clone CD floating around the web for download.

Since hot cloning, also called live cloning, converts a source machine while the operating system is running and processes continue to run during the conversion, the resulting virtual machine is not an exact copy of the source machine.

Data synchronization after hot cloning

As mentioned above, stopping services helps to overcome many of the challenges associated with hot cloning. However, when cloning Windows machines, VMware Converter can perform this operation automatically and synchronize data between the source and destination operating system, using “changed blocks” synchronization to ensure data is an exact match before powering down the source machine.

Copying changed blocks is used in many processes, including storage migration and backups by third-party data protection solutions. You can configure the settings of VMware Converter to stop selected Windows services, so no critical changes occur on the source machine during the data synchronization process. VMware Converter can automatically shut down the source machine and power on the destination machine after completing the data synchronization process.

Using this cloning automation provided by VMware Converter allows seamless conversion with little downtime of the workload. When the cloning process is complete, the newly created virtual machine is booted, taking over the identity of the source machine with the least possible downtime.
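As a toy sketch of the changed-block idea described above (purely illustrative, not VMware's implementation), a synchronization pass checksums each fixed-size block of the source and destination and recopies only the blocks that differ:

```shell
#!/bin/sh
# Toy sketch of changed-block synchronization -- illustrative only,
# not VMware Converter's actual code.
BS=4                                # block size in bytes (tiny, for the demo)
src=$(mktemp); dst=$(mktemp)
printf 'aaaabbbbcccc' > "$src"      # "source machine" disk contents
printf 'aaaaXXXXcccc' > "$dst"      # earlier copy; the middle block has changed

blocks=$(( $(wc -c < "$src") / BS ))
copied=0
i=0
while [ "$i" -lt "$blocks" ]; do
  s=$(dd if="$src" bs="$BS" skip="$i" count=1 2>/dev/null | cksum)
  d=$(dd if="$dst" bs="$BS" skip="$i" count=1 2>/dev/null | cksum)
  if [ "$s" != "$d" ]; then
    # Recopy only the block whose checksum differs
    dd if="$src" of="$dst" bs="$BS" skip="$i" seek="$i" count=1 conv=notrunc 2>/dev/null
    copied=$((copied + 1))
  fi
  i=$((i + 1))
done

sync_ok=0
cmp -s "$src" "$dst" && sync_ok=1
echo "blocks recopied: $copied (in sync: $sync_ok)"
rm -f "$src" "$dst"
```

Here only the single changed middle block is recopied, which is why a final synchronization pass after a hot clone is far cheaper than re-cloning the whole disk.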

Prerequisites to perform P2V & V2V of Powered on Windows Machine and Linux

Several prerequisites need to be met to use the VMware Converter cloning process. First, let’s look at the platforms, data cloning modes, required VMware converter ports, etc. Keep in mind the most recent documentation provided by VMware dates back to May 2018. There are operating systems missing from the list that may likely work just fine with VMware Converter. However, you will need to test your specific environment and use case thoroughly.

Supported source types

The supported source types KB details the following source types that can be used as a VMware Converter conversion source:

Powered on machines:
    • Remote Windows physical machines
    • Remote Linux physical machines
    • Local Windows physical machines
    • Powered on VMware virtual machines
    • Powered on Hyper-V Server virtual machines
    • Powered on virtual machines running under Red Hat KVM or RHEL XEN

Note: VMware standalone converter does not support para-virtualized kernels.

VMware vCenter virtual machines: For information about the interoperability between powered-off VMware vCenter virtual machines and vCenter Converter Standalone, see VMware Product Interoperability Matrices.

Note: VMware standalone converter 6.2.x does not support virtual hardware versions above 11. For selected hardware versions above 11, features are limited to the features in version 11.

VMware virtual machines: For information about the interoperability between vCenter Converter Standalone and powered-off hosted VMware Workstation and VMware Fusion virtual machines, see VMware Product Interoperability Matrices.

Hyper-V Server virtual machines: For Hyper-V Server versions distributed with Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows 10, and Windows Server 2016, powered-off virtual machines with the following guest operating systems:
    • Windows Vista SP2 (32-bit and 64-bit) (except Home editions)
    • Windows Server 2008 SP2 (32-bit and 64-bit) (except Home editions)
    • Windows 7 (32-bit and 64-bit) (except Home editions)
    • Windows Server 2008 R2 (64-bit)
    • Windows 8 (32-bit and 64-bit)
    • Windows Server 2012 (64-bit)
    • Windows 8.1 (32-bit and 64-bit)
    • Windows Server 2012 R2 (64-bit)
    • Windows 10 (32-bit and 64-bit) (except Home editions)
    • Windows Server 2016 (64-bit)

For other Hyper-V Server sources, perform the procedure for powered-on source machines.

 

Supported destination types

The following can serve as conversion destinations:

    • VMware hosted products: VMware Workstation, VMware Fusion™, and VMware Player
    • Virtual machines running on an ESX or ESXi instance that vCenter Server manages
    • Virtual machines running on unmanaged ESX or ESXi hosts

Supported data cloning modes

Note the following types of data cloning operations supported by VMware Converter:

    • Volume-based – Copies volumes from the source machine to the destination machine. Volume-based cloning can be slow, and file-level cloning is slower than block-level cloning. VMware Converter converts dynamic disks into basic volumes on the target virtual machine.
    • Disk-based – Creates copies of powered-off source machines for all types of basic and dynamic disks. You cannot select which data to copy, but disk-based cloning is faster than volume-based cloning.
    • Linked clone – Used to check the compatibility of non-VMware images quickly. The linked clone is corrupted for specific third-party sources if you power on the source machine after the conversion. Linked cloning is the fastest (but incomplete) cloning mode that VMware standalone converter supports.

 

Ports required for VMware converter p2v

Different VMware Converter ports are required for communication, depending on whether you are converting a Windows or Linux host. Also, compared to the ports required for VMware Converter P2V, V2V operations require fewer ports.

Windows host

    • Converter Standalone server to the powered-on source machine – TCP 445, 139, 9089; UDP 137, 138. If the source computer uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. When in doubt, make sure that none of the ports are blocked.
    • Converter Standalone server to vCenter Server – TCP 443. Required only if the conversion destination is a vCenter Server.
    • Converter Standalone client to vCenter Server – TCP 443. Required only if the Converter Standalone server and client components are on different machines.
    • Converter Standalone server to the destination ESX/ESXi – TCP 902. The VMware Converter server always requires access to ESX/ESXi at port 902.
    • Powered-on source machine to ESX/ESXi – TCP 443, 902. If the conversion destination is vCenter Server, only port 902 is required. If the proxy mode feature is on, port 902 is not required.

 

Linux host

    • Converter Standalone server to the powered-on source machine – TCP 22. Used to establish an SSH connection between the Converter Standalone server and the source machine.
    • Converter Standalone client to Converter Standalone server – TCP 443. Required only if the Converter Standalone server and client components are on different machines.
    • Converter Standalone server to vCenter Server – TCP 443. Required only if the conversion destination is a vCenter Server.
    • Converter Standalone server to ESX/ESXi – TCP 902. The VMware Converter server requires access to ESX/ESXi at port 902.
    • Converter Standalone server to helper virtual machine – TCP 443, 902. If the conversion destination is vCenter Server, only port 902 is required. Likewise, if the proxy mode feature is on, port 902 is not required.
    • Helper virtual machine to the powered-on source machine – TCP 22. Used to establish an SSH connection between the helper virtual machine and the source machine. By default, the IP address of the helper virtual machine is assigned by DHCP. However, if no DHCP server is available on the destination network, you must manually assign the helper virtual machine an IP address.

 

Ports for V2V operations

    • Converter Standalone server to file share path – TCP 445, 139; UDP 137, 138. Required only for standalone virtual machine sources or destinations. If the computer hosting the source or destination path uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. When in doubt, make sure that none of the ports are blocked.
    • Converter Standalone client to Converter Standalone server – TCP 443. Required only if the Converter Standalone server and client components are on different machines.
    • Converter Standalone server to vCenter Server – TCP 443. Required only if the conversion destination is a vCenter Server.
    • Converter Standalone server to ESX/ESXi – TCP 443, 902. If the conversion destination is a vCenter Server, only port 902 is required.
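Before starting a conversion, it can save time to sanity-check that the destination ports above are reachable from the Converter Standalone server. A minimal sketch using nc follows; the host names are placeholders, and the probes are printed as a dry run rather than executed:

```shell
#!/bin/sh
# Hypothetical targets -- replace with your own vCenter and ESXi host names.
VCENTER="vcenter.lab.local"
ESXI="esxi01.lab.local"

# Print one nc probe per required destination port; remove 'echo' to run them.
# nc -z only tests that the TCP port accepts connections; -w sets a timeout.
for target in "$VCENTER:443" "$ESXI:443" "$ESXI:902"; do
  host=${target%:*}
  port=${target#*:}
  echo nc -z -w 3 "$host" "$port"
done
```

A failed probe against port 902, for example, usually points at a firewall rule rather than a Converter problem.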

 

Installation Types

When you install VMware Converter, you can choose between two different installation types. These are:

    • Local installation – The local installation of VMware Converter installs Converter on a local machine only. You can use this option to create and manage conversion tasks from the local machine it is installed. For example, you can install VMware Converter for P2V locally to process the local machine itself.
    • Client-Server installation (advanced) – In the Client-Server installation, you can configure a client-server model for Converter to have a centralized approach to using VMware Converter across an environment where there is a centralized VMware Converter server that provides conversion capabilities to IT admins. In addition, it allows local and multiple remote clients to access the Converter server. The Client-Server installation includes three different components for installation:
      • Converter server – Provides centralized management for all conversions. In addition, the Converter server handles communication between clients and the Converter agent.
      • Converter agent – Allows the local machine to be a source for conversion
      • Converter client – Connects to the Converter server and provides a graphical user interface for setting up and managing conversions

VMware Converter – When to use and not use?

VMware Converter is a robust tool for performing standalone conversions from physical hardware to virtual machines or from virtual machines to virtual machines. However, there are typical use cases and those that are non-typical for use with VMware Converter. Let’s highlight some of these.

Typical use cases:

    • Server consolidation
    • Migrating from old server hardware
    • Changing disk sizes
    • Migrating from ESXi VM to VMware Workstation
    • Migrating from VMware Workstation to ESXi VM

Non-typical use cases, or ones that will not work:

    • Cloning a domain controller – this is something that is frowned upon and can cause major issues, no matter what cloning tool you use.
    • If applications depend on specific underlying hardware (serial numbers, device manufacturers)
    • If applications depend on a specific MAC address
    • Cloning FAT32 volumes

The following limitations apply to Windows and Linux source machines.

Windows:
    • When you convert UEFI sources, Converter Standalone does not copy any UEFI variables to the destination.
    • Synchronization is supported only for volume-based cloning at the block level.

Linux:
    • Only volume-based cloning at the file level is supported.
    • Only managed destinations are supported.
    • Converting multiboot virtual machines is only supported if GRUB is installed as the boot loader. LILO is not supported.
    • Converter Standalone copies only the current UEFI boot entry option to the destination when you convert UEFI sources.
    • Simultaneous cloning of multiple disks and volumes is supported only when converting a virtual Linux source.
    • Installing VMware Tools on Linux guest operating systems is not supported.

 

Cloning Powered-on Windows Physical and Virtual Machines Overview

What does the process look like to clone a powered-on physical or virtual machine? The beauty of the VMware Converter cloning process is it does not modify the source physical or virtual machine, aside from the VMware Converter agent installation (which can be removed).

The VMware Converter clone process involves copying the source disks or volumes to the destination virtual machine. Part of the copy process is transferring the data that exists on the source hard disk to the destination virtual disk. One of the abilities of VMware Converter is to modify the disk layout as part of the conversion process. This capability is actually one of the use cases mentioned above.

You can change the destination virtual disk to have a different size, file layout, and other characteristics. As part of the conversion process, VMware Converter’s system reconfiguration introduces the drivers necessary so the migrated operating system continues to function on the new virtual hardware. One of the important drivers needed is the storage driver, which allows the guest to recognize the VMware storage controller types.

Overview of the VMware Converter conversion workflow

Cloning Powered-on Linux Source Machines Overview

As you can imagine, the process to clone powered-on Linux source machines differs from Windows. Unlike Windows, where an agent is installed on the source machine, no agent is installed on a Linux source. Instead, a helper virtual machine is deployed.

The helper virtual machine provides the target of the data and system configuration copy of the source Linux machine. All connections and data copies use SSH.

Overview of converting Linux source machines using VMware Converter

When the conversion process is complete, the destination Linux virtual machine shuts down and becomes the complete copy of the source. Once powered up, it will represent the source machine.

Installing VMware Converter

The process to install VMware Converter is a “next, next, finish” process. First, download the latest version of VMware Converter from the VMware website. The latest version at the time of this writing is 6.2.0.1. Download the installer and run the .EXE file.

After executing the installer, click Next on the welcome screen.

Beginning the installation of VMware Converter for P2V

Accept the patent agreement.

VMware Converter patent agreement

Agree to the terms of the EULA.

Agree to the VMware Converter EULA

Select the destination folder for the installation.

Select the destination folder

Next, select the Setup Type for the installation, choosing between the local installation and the Client-Server installation for advanced setup. For the demo in this post, we are using the Local installation of VMware Converter to showcase its functionality and features.

Choosing the installation type for VMware Converter

Opt-in or out for the VMware CEIP program.

VMware CEIP options

Finally, your VMware Converter installation is ready to proceed. Click Install to begin the installation of VMware Converter.

Begin the install of VMware Converter

After a few moments, VMware Converter installation is completed.

VMware Converter installation completes successfully

After clicking Finish, you can choose to launch VMware Converter to begin your first P2V or V2V conversion.

Launching VMware standalone converter

Using VMware Converter for Converting a Windows Host

The following walkthrough demonstrates using VMware Converter to convert a running Windows host to a virtual machine. The source of the following walkthrough is a physical Windows Server machine that will target a VMware vSphere cluster environment.

Below, we are selecting a remote Windows machine and entering the guest OS credentials to install the VMware Converter agent.

Selecting a remote Windows machine for cloning

You can select how you want the agent to be uninstalled once the conversion is complete. The default selection is to automatically uninstall the files when import succeeds.

Select how the agent will be uninstalled

VMware Converter will begin installing the agent on the remote Windows host.

VMware Converter agent is installed on the remote Windows host

Select the destination type of the VMware Converter conversion. The choices are:

    • VMware Infrastructure virtual machine
    • VMware Workstation or other VMware virtual machine

Choose the destination system for the conversion

Enter the credentials needed to connect to the destination type selected in VMware Converter.

Enter the destination system and credentials for the conversion

Choose to ignore the certificate warning for the target vSphere environment

The next step is to select the destination VM name and location. This is not a rename of the guest operating system. Rather it is the virtual machine inventory name.

Select the destination VM name and folder

Select from the available storage and select the virtual machine version.

Select the storage and virtual machine version

On the Options screen, you have the opportunity to configure the parameters for the conversion task. This step allows you to change the resulting virtual machine’s configuration to be different from the source, including data to copy, devices, networks, services, and advanced options. Click the Edit link next to each section to change the configuration.

Configure the conversion options for the conversion process

Click Finish on the summary screen to begin the conversion process using VMware Converter.

Begin the conversion operation for the Windows host

The conversion progress begins and will display progress in the status column of VMware Converter.

Windows host conversion progressing in VMware Converter

As a note, the first conversion process failed for a Windows Server 2019 host targeting a vSphere 7.0 Update 2 environment when I left the default for the Virtual Machine version set to Version 19. However, when converting the machine the second time, I chose Version 11, and the conversion process completed successfully. So, it would seem with this test, the newest virtual machine versions are too new for VMware Converter targeting the most recent vSphere versions.

The workaround is simple. Choose a lower version such as version 11, as this was successful.

conversion process completed successfully

Using VMware Converter for Converting a Linux Host

The process of converting a running Linux server is not much different in the VMware Converter UI. Here, we select Remote Linux machine and enter the connection information for the remote Linux server.

Select the remote Linux server and enter connection information

Accept the certificate warning coming from the remote Linux server via the SSH connection.

Accept the certificate warning for the remote Linux server

Note that when using VMware Converter with a powered-on remote Linux host, we can only target a VMware infrastructure server. If we select a powered-off machine, we can choose other targets.

Choose the destination system for the Linux machine

Choose the virtual machine name for the vSphere inventory.

Choose the virtual machine name for the vSphere inventory

Choose virtual machine storage for the resulting virtual machine.

Select the storage and virtual machine version

On the Options page, edit the configuration options if needed for the Linux virtual machine.

Edit the configuration options on the Options screen for the Linux VM

Begin the conversion process by clicking Finish.

Begin the conversion of the Linux host with VMware Converter

My Thoughts on VMware Converter

VMware Converter is an excellent tool for converting both physical hosts and virtual machines. You can target a wide variety of environments with the tool, and it automates much of the process, making the conversion seamless. As shown, VMware Converter can convert both Windows and Linux hosts that are powered on, and it can also convert virtual machines that are powered off in both VMware and Hyper-V environments.

Take note of the prerequisites and the use cases that will not work. Also, as the conversion results show, VMware Converter has not been updated since 2018, over three years ago. Therefore, depending on the virtual machine version and other configuration options selected, it may not work without error when targeting the newest VMware solutions, such as vSphere 7.0 Update 2 and higher.

Learn more about VMware Converter here: vCenter Converter: P2V Virtual Machine Converter | VMware

The post How to use VMware Converter for P2V (Physical to Virtual) appeared first on Altaro DOJO | VMware.

VMworld 2021 Headlines – Cloud Services, Tanzu, and More! (14 October 2021)

The announcements at VMworld 2021 have huge implications for the future of the company and admins. The key takeaways and talking points are here.



We’re tying the bow on VMworld 2021 which was packed with a dizzying number of announcements. While we can’t cover every single one of them, we will talk about the ones that really struck us as well as those high-visibility strategic announcements.

VMware’s CEO Raghu Raghuram speaking at VMworld 2021

Like last year, VMworld 2021 was an online event with free registration for everyone. The event was organized into 8 different “booths” from which attendees could pick and choose sessions. The bulk of the innovations, though, were on the multi-cloud and app modernization fronts.

VMworld 2021

This year’s VMworld 2021 guests included none other than Michael J. Fox and Will Smith, who treated us to really inspiring messages and views on life in general outside of the tech space.

As for the technical side of things, on top of all the other areas that were talked about, the agenda was packed with multi-cloud and App modernization (Tanzu) topics. Without further ado, let’s dive into the VMworld 2021 announcements.

VMware Cross-cloud services

According to VMware’s CEO Raghu Raghuram during VMworld 2021, “Multi-cloud is the digital business model for the next 20 years, as entire industries reinvent themselves”. The plan to help organizations with the shift to multi-cloud was set in motion some time ago and has been the topic of several announcements ever since.

VMworld 2021 is no exception and takes the concept a little further with VMware Cross-Cloud services, a group of integrated services allowing customers to deal with apps with “freedom and flexibility” across clouds. The goal of these multi-cloud services is to accelerate the move to the cloud and make it cheaper and more flexible.

VMware Cross-Cloud services help organizations shift to multi-cloud

The new VMware cross-cloud services offering will revolve around the following areas. Keep in mind that these span multiple clouds (this is where the value really is). You can pick and choose which service you want on which cloud.

    1. Building and deploying cloud-native apps (VMware Tanzu Application Platform).
    2. Operating and running apps (VMware Cloud, Project Arctic).
    3. Management of performance and cost across clouds (VMware vRealize Cloud, Project Ensemble).
    4. Security and Networking (Carbon Black, NSX Cloud, Service Mesh).
    5. Deploy and manage edge-native apps (VMware Workspace One and VMware Edge Compute Stack).

Not all organizations will benefit from this offering just yet, as most IT departments will first need to wrap their heads around it, find use cases, analyze the TCO… However visionary, things certainly seem to be moving in that direction, and VMware is paving the way.

VMware Sovereign Cloud

Data sovereignty refers to countries’ jurisdiction over data and how it relates to the concept of ownership: who is authorized to store data, how it can be used, protected, and stored, and what would happen should the data be used ill-intentionally.

The discussions around data and cloud sovereignty are becoming more frequent and will most likely become a critical selling point for large customers such as government entities. As more and more companies resort to cloud computing, it is becoming increasingly important to establish a way of ensuring the data stored with these cloud providers is handled fairly.

For instance, the principality of Monaco recently unveiled a Monegasque sovereign cloud where all the shareholders are Monegasque with the state owning a controlling stake in it.

VMware Sovereign Cloud will help ensure regulatory compliance

VMware is addressing this issue with VMware Sovereign Cloud. The aim of this initiative is to partner with cloud providers to deliver multi-cloud services with the “VMware Cloud Verified” seal of approval.

In order for this to happen, a VMware Sovereign Cloud framework will be put in place and only cloud providers who abide by it will be able to slap the “VMware Cloud Verified” seal of approval on their services. They must also self-attest on the design, build, and operations of their cloud environments and their capability to offer a sovereign digital infrastructure.

If cloud providers decide to play ball, this should open the door to juicy contracts with government entities such as the European Union in the years to come.

More information is in the press release from VMworld 2021 announcements.

VMware Cloud on AWS Outpost

AWS Outposts is a managed service offering where AWS delivers and installs the Outpost physically, so you get the AWS experience on compute capacity located on-premise or in any datacenter or co-location facility near you. It is managed, so you don’t have to take care of its lifecycle. Use cases for AWS Outposts include low-latency requirements, data sovereignty, local data processing…

During VMworld 2021, VMware introduced VMware Cloud on AWS Outposts with the hope that it will boost the adoption of VMware Cloud on AWS. The adoption process is the same as for an AWS Outpost, after which AWS sets up the VMware SDDC (VCF) stack, VMware makes sure everything checks out, and the environment is handed to you through the VMware Cloud Service Portal.

VMware Cloud on AWS is a tight partnership between the two entities

At the moment it is limited to 42U racks with i3en.metal instances but it may evolve over time. Looking at the pricing it is actually cheaper than I would have expected considering the resources in the i3en.metal instances and the VCF stack in the bundle.

The bundle includes:

    • AWS Outposts 42u rack
    • AWS managed dedicated Nitro-based i3en.metal EC2 instance with local SSD storage
    • VMware HCX
    • VMware Cloud Console
    • Support by VMware SREs
    • Supply chain, shipment logistics, and onsite installation by AWS
    • Ongoing hardware monitoring with break/fix support.

You can now get the benefits of VMware Cloud on AWS closer to your organization

Note that it is only available in the US at the moment.

More info in this technical deep dive on VMware Cloud on AWS outpost.

DR-as-a-Service (DRaaS) Enhancements

A bunch of enhancements to the DRaaS offering were unveiled among the VMworld 2021 announcements. The product was first announced at VMworld 2020. As a reminder, DRaaS allows customers to replicate workloads to cheap cloud storage and restore them to a VMware Cloud on AWS SDDC that you can spin up on-demand to improve TCO.

Among the enhancements to the cloud disaster recovery solutions were:

    • 30-minute RPO

This offers more frequent snapshots for critical apps with higher change rates, giving you up to 48 recovery points per day. The combination of that higher granularity and the air-gapped Scale-out Cloud File System will help reduce the impact of ransomware attacks.

30-minute RPO offers much finer recovery granularity

    • Accelerated Ransomware recovery with File-level recovery

On top of the Scale-out Cloud File System (SCFS), VMware DRaaS will let you extract recent, uncorrupted files or folders from various VM snapshots without powering the VMs up. You can then inject them into a clean recovery restore point.

Ransomware recovery is simplified with File-level recovery

    • Integrated and simple data protection for VMware Cloud on AWS

In order to protect those critical pieces of software that run your organization, VMware Cloud on AWS will now offer the possibility to leverage Cloud DR as a unified DR, ransomware, and foundational backup-restore solution.

Once you select and configure VM protection, Cloud DR creates immutable, encrypted backup copies stored on the air-gapped Scale-out Cloud File System. You can then restore at the file, folder, or VM level.

Integrated data protection for VMC on AWS simplifies the data protection process
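As a quick sanity check of the 30-minute RPO arithmetic mentioned above, a snapshot every 30 minutes does indeed yield 48 recovery points per day (illustrative only):

```python
# With a 30-minute RPO, a snapshot is taken every 30 minutes,
# so the number of recovery points per day is 24 * 60 / 30 = 48.
MINUTES_PER_DAY = 24 * 60
rpo_minutes = 30

recovery_points_per_day = MINUTES_PER_DAY // rpo_minutes
print(recovery_points_per_day)  # 48
```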

VMware Tanzu Community Edition

One of the biggest hurdles in getting into VMware Tanzu so far was the complexity and resources required. VMware Tanzu Community Edition is a free, open source, and community supported distribution of VMware Tanzu. The best thing is that it is full featured and you can deploy it to various environments:

    • Locally on your workstation in Docker
    • vSphere infrastructure (vCenter server)
    • Amazon EC2
    • Microsoft Azure

VMware Tanzu Community Edition is full-featured

This new product is a platform for “learners and users”, as VMware puts it, especially for small-scale and preproduction environments. As of October 2021, the product hasn’t reached v1 yet, so it may not be the smartest move to start running your production on it.

The other big selling point of VMware Tanzu Community Edition is the pluggability of the product, in that it includes additional packages to cover all aspects of the modern app’s lifecycle.

VMware Tanzu Community Edition makes installing packages easy and pain-free

This new VMware Tanzu Community Edition aims to simplify the deployment process with a Docker-based kind bootstrap cluster, provisioned through the Tanzu CLI, that will in turn deploy either:

    • A management cluster to manage multiple workload clusters.
    • A standalone, all-in-one workload cluster. An even quicker way to get started.

The deployment of the management or standalone cluster can be done in a user-friendly web UI that automatically generates the associated deployment configuration file and the kube-config file. But we’ll get into all that in another dedicated blog.
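As an illustration of what that deployment configuration file contains, a minimal Docker-based cluster config might look roughly like the sketch below. The key names follow common Tanzu Kubernetes Grid configuration variables, but treat the exact set of keys and values as a hypothetical sketch rather than a reference; the web UI generates the real file for you, which you would then feed to something like `tanzu management-cluster create --file mgmt.yaml`.

```yaml
# Illustrative Tanzu Community Edition cluster config (sketch only);
# the deployment UI generates the actual file.
CLUSTER_NAME: tce-mgmt           # hypothetical cluster name
CLUSTER_PLAN: dev                # "dev" = single control-plane node
INFRASTRUCTURE_PROVIDER: docker  # run locally in Docker
CNI: antrea
```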

You can find more info on the VMware Tanzu Community Edition website.

VMware Cloud with Tanzu Services

VMware aims at facilitating the shift to app modernization and the adoption of Kubernetes with their Tanzu offering. However, managing your own on-premise Kubernetes/Tanzu infrastructure may not be in the cards for a variety of reasons such as time constraints, complexity, CAPEX…

Managed Tanzu Kubernetes Grid Service

VMware Cloud with Tanzu Services offers a managed multi-cloud model where the underlying infrastructure and capacity required for Kubernetes workloads are fully managed by VMware, so your teams don’t have to worry about dealing with vSphere with Tanzu on-premise.

Managed TKS lets you focus on what really matters

VI admins get to keep using their good old vCenter Server interface to manage Kubernetes operations. The VMware Cloud console lets VI admins provision Tanzu Kubernetes Grid (TKG) clusters and deliver role-based access and capacity to developer teams seamlessly.

Tanzu Mission Control Essentials

Tanzu Mission Control Essentials is a component included in Tanzu services. It is a SaaS solution that acts as a management plane for Kubernetes clusters.

Platform operations are centralized through Tanzu Mission Control Essentials, which can leverage VMware Cloud to deliver that holy grail of multi-cloud deployment. Tanzu Mission Control provides global visibility across clusters and clouds and automates operational tasks such as access and security management at scale.

Tanzu Mission Control Essentials is a component included in Tanzu services

Tanzu Mission Control Starter

VMware Tanzu Mission Control is a multi-cloud SaaS management platform that facilitates Kubernetes operations across private and public clouds, implements security, provisions TKG clusters, and offers troubleshooting capabilities, IAM, data protection… The list goes on; you get it, it’s a great tool when you are heavily involved with Kubernetes.

Tanzu Mission Control Starter

During VMworld 2021, VMware unveiled a free tier with Tanzu Mission Control Starter, which will include a set of core Kubernetes management features like centralized visibility and policy control for any compatible Kubernetes cluster, be it on-premise or in the cloud.

There isn’t much info on it yet but it should be a solid free alternative when paired with Tanzu Community Edition. You can register here if you want to receive updates on Tanzu Mission Control Starter.

Other VMware Tanzu announcements

Other Tanzu announcements were made during VMworld 2021, such as:

    • Tanzu Service Mesh Enterprise: Advanced, end-to-end connectivity and security for applications across end-users, microservices, APIs, and data.
    • VMware Tanzu Standard for VMware Cloud Universal: You can now leverage VMware Tanzu Standard as part of the Cloud Universal Program if that’s what you are into.
    • TKG New features: Support for Windows containers (experimental), GPU workload support, …
    • Tanzu Application Platform adds new capabilities.

VMware vSphere 7 Update 3

Although it was released a few days before VMworld 2021, vSphere 7 Update 3 is worth mentioning here since it is a significant update. We won’t go through a complete what’s-new, as that would make for a dedicated blog; instead, we will touch on the main announcements:

    • Enhanced performance stats visibility for persistent memory.
    • Support for NVMe over TCP.
    • vCenter Server plug-in for NSX.
    • Simplified deployment process of VMware vSphere with Tanzu, especially network-wise.

Configuring vSphere with Tanzu is much easier in vSphere 7 Update 3

    • Improved maintenance operations with vSphere Distributed Resource Scheduler (DRS).
    • Use of SD and USB drives as boot media is deprecated, with a warning of a “degraded” boot volume if used.
    • Improvements to lifecycle management (depot editing, drive firmware support, vSAN witness management).
    • vCenter server reduced downtime upgrade (Cloud technology on-premise).
    • Future Linux distributions will have VMware Tools preinstalled.
    • I/O Trip Analyzer to get an overview of the vSAN I/O path.

As you can tell, vSphere is no longer just a hypervisor. It is shapeshifting into the foundation bricks of a complete ecosystem of multi-cloud and modern apps.

vSphere 7 is no longer just a hypervisor

Refer to the vSphere 7 Update 3 release notes for the full list.

Refer to vSAN 7 Update 3 release notes for the news in vSAN.

VMware Edge Compute Stack

Whether at VMworld 2021 or elsewhere, VMware Cloud environments have been getting lots of love and marketing exposure these last few years, while on-premise solutions keep getting better with age, like good wine. However, edge computing is gaining in popularity and maturity as use cases for AI/ML (Artificial Intelligence and Machine Learning) continue growing. Edge computing refers to scenarios where you need compute capacity as close to the endpoint as possible. In such cases, you can’t afford a round trip to the datacenter or cloud for each operation; therefore, some sort of capacity must be on-site to run the app.

VMware Edge Compute Stack will come in three editions

One of the sticking points of edge computing is the heavy work required to refactor apps and processes to run workloads at the edge. VMware Edge Compute Stack, one of the VMworld 2021 announcements, aims at simplifying that move. It is a purpose-built, integrated stack offering HCI and SDN for small-scale VM and container workloads, effectively extending your SDDC to the edge.

Edge compute use cases will solve a wide variety of challenges

While this is still cutting edge, there is little doubt we will witness an explosion of use cases in the coming years, and VMware will have a bundled and licensed solution ready for those customers ready to jump in.

Project Announcements

Just as Tanzu Kubernetes Grid once was Project Pacific, a number of projects currently in the works were discussed in one of the VMworld 2021 sessions (A look inside VMware’s innovation engine [VI3091]).

VMworld 2021 announcements included many projects currently in the works

Project Santa Cruz

The VMworld 2021 announcements introduced an integrated offering: a single device that combines edge compute and SD-WAN. It connects edge sites to centralized management planes for cloud-native and networking teams, and it can run containers and cloud services.

Project Santa Cruz extends SDDC capabilities to the edge

Project Tanzu Bring your own Host (Santa Cruz)

If you don’t want the VMware box, Project Santa Cruz also includes a Cluster API provider that supports customers bringing their own infrastructure, to cover cases such as Hyper-V or environment-specific kernel tuning scenarios. You can register bare-metal servers as capacity for TKG clusters. Note that it is also integrated with Tanzu Mission Control.

Project Radium

This AI-oriented project builds upon VMware Bitfusion to expand the feature set over Ethernet to other architectures, such as AMD, Graphcore, Intel, Nvidia, and other hardware vendors, for AI/ML workloads. That way, users will be able to leverage a multitude of AI accelerators, attachable dynamically regardless of whether they run on-premise, in the cloud, or at the edge.

AI/ML workloads will benefit from a wider range of hardware offload devices

Project Cryptographic agility

Crypto algorithms and standards have a lifecycle and become weaker as compute capability advances. What took 6 months to crack 15 years ago may take only a few hours or even minutes nowadays. The goal of this project is to offer crypto agility through increased control over configurations and the ability to switch between standards and libraries.
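The idea behind crypto agility can be illustrated in a few lines of Python (a hypothetical sketch, not VMware's implementation): the algorithm is a configuration value rather than hard-coded, so rotating a weakened algorithm out becomes a config change instead of a code change.

```python
import hashlib

# Hypothetical sketch of crypto agility: the hash algorithm is
# configuration, not hard-coded, so it can be rotated when a
# standard weakens.
CONFIG = {"hash_algorithm": "sha256"}

def fingerprint(data: bytes) -> str:
    # hashlib.new() selects the algorithm by name at runtime
    return hashlib.new(CONFIG["hash_algorithm"], data).hexdigest()

print(fingerprint(b"hello"))           # SHA-256 digest
CONFIG["hash_algorithm"] = "sha3_256"  # switch standards via config
print(fingerprint(b"hello"))           # same call, different algorithm
```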

Project Ensemble

Following the footsteps of VMware cross-cloud services, Project Ensemble will simplify and accelerate the adoption of multi-cloud.

Ensemble streamlines multi-cloud operations through app-centric views of multiple clouds and focuses on how different personas in the organization, such as cloud providers and cloud consumers, interact with the applications.

Project IDEM

A very powerful move towards VMware’s multi-cloud vision, this multi-cloud management automation project aims at simplifying management for customers leveraging several cloud providers by automating any management task on any cloud. Project IDEM can run tasks synchronously or asynchronously through a wide range of cloud APIs that dynamically adapt to new versions through automatic discovery. You can think of it as Desired State Configuration across multiple clouds.
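The desired-state model behind tools like Project IDEM can be sketched in a few lines of Python (hypothetical illustration, not IDEM's actual code): declare the state you want, compute the difference against the actual state, and apply only what is missing, so running the same task twice changes nothing.

```python
# Hypothetical sketch of an idempotent, desired-state reconcile loop:
# declare what you want, diff it against reality, apply the difference.
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the changes needed to move `actual` to `desired`."""
    return {key: value for key, value in desired.items()
            if actual.get(key) != value}

desired = {"vm_count": 3, "region": "eu-west-1"}
actual = {"vm_count": 2, "region": "eu-west-1"}

changes = reconcile(desired, actual)
print(changes)                      # {'vm_count': 3}
actual.update(changes)              # "apply" the changes
print(reconcile(desired, actual))   # {} -- idempotent: nothing left to do
```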

Project Capitola

Project Capitola is an impressive software-defined memory implementation announced during VMworld 2021 that aggregates tiers of different memory types, such as DRAM, PMEM, NVMe, and other future technologies, into logical memory for easy consumption, managed in the backend by VMware vSphere.

This model will be beneficial for memory-intensive apps and should prove cost-effective since you can leverage memory types at different price points according to your performance needs and it will work with DRS. VMware is currently partnering with Intel and their Optane devices to pioneer this new tech.

Tiered memory will offer cost-effective solutions to memory-heavy apps
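The tiering idea can be sketched in a few lines (a hypothetical greedy placement, not VMware's actual algorithm; the tier sizes and costs are made-up numbers): fill the fastest tier first, then spill the remainder to cheaper, slower tiers.

```python
# Hypothetical sketch of tiered memory placement: fastest tier first,
# spill the rest to cheaper tiers. Capacities and $/GB are illustrative.
TIERS = [
    ("DRAM", 128, 10.0),
    ("PMEM", 512, 4.0),
    ("NVMe", 2048, 1.0),
]

def place(demand_gb: int) -> list:
    """Greedily spread a memory demand across tiers, fastest first."""
    placement = []
    remaining = demand_gb
    for name, capacity, _cost_per_gb in TIERS:
        take = min(remaining, capacity)
        if take:
            placement.append((name, take))
            remaining -= take
    if remaining:
        raise MemoryError("demand exceeds aggregate capacity")
    return placement

print(place(600))  # [('DRAM', 128), ('PMEM', 472)]
```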

Project Arctic and Cascade

Arctic: Addressing OPS who deliver resources

Currently at the technology-preview stage, Project Arctic will bring cloud connectivity into vSphere in order to open the cloud door to all those customers relying on on-premise environments. By making vSphere “cloud-aware”, Project Arctic will make hybrid cloud the default operating model. Organizations will be able to instantly access VMware Cloud capacity and deploy VMware Cross-Cloud Services. One use case would be the ability to enable DRaaS in a few clicks.

Cascade: Addressing Devs and DevOps who rapidly develop and deploy apps

Also a technology preview, Project Cascade will provide a unified Kubernetes interface for both on-demand infrastructure and containers across VMware Cloud through CLI, API, and graphical interface. The VM service that was introduced in vSphere with Tanzu to manage VMs from Kubernetes will be ported to VMware Cloud as part of Project Cascade.

Project Arctic and Project Cascade will address the needs of IT OPS and DevOps.

VMworld 2021 in Review

Well, the VMworld 2021 announcements came in truckloads and were once again of high quality. You could clearly see the company’s long-term vision and how it goes about tackling problems we don’t even know we are going to have, or already have. It is remarkable to witness how the vision initiated around 10 years ago came true with the shift to the cloud. While we can’t ignore that VMware was a little late to the app modernization table with Tanzu, it is now closing the gap with huge investments in that space and tons of use cases being covered.

It seems this year’s VMworld 2021 spotlights were mostly on multi-cloud, with a tightening of the partnerships with providers, as well as on app modernization with open-source products such as Tanzu Community Edition, which we surely appreciate.

However, we are also greatly looking forward to seeing where Edge computing is going to take us with really interesting use cases and announcements that are paving the way for years to come.


What is VMware Cloud on AWS (VMC on AWS)? (6 August 2021)

Learn more about what VMware Cloud on AWS is, its use cases, and how it can help your organization extend to a hybrid cloud to be more agile.



VMware Cloud on AWS, also referred to as VMC on AWS, is a hybrid cloud service launched back in 2017 for organizations that want to run VMware in AWS, and it has never ceased to grow. Everyone in the tech industry acknowledges that cloud solutions have changed the IT landscape and are here to stay, never mind thriving.

VMware Cloud On AWS

However, shifting to the cloud is not something you do overnight, and it simply does not apply in a number of cases. Many IT folks don’t have the means, needs, or possibility to migrate all of their workloads to the cloud, however beneficial it would be. In these instances, a hybrid cloud is a great compromise to smooth the transition, especially with VMware in AWS, which simplifies the process significantly.

For additional details beyond this article, you can also refer to the official VMware Cloud on AWS features roadmap, in which you will find the development status of each and every feature. For instance, you will find that Cloud Native Storage is now utilized on VMware Cloud on AWS with Tanzu Kubernetes Grid Plus in all regions, or that vSAN File Services on VMware Cloud on AWS is currently in the planning state and should find its way into the product sometime in the future.

Cloud technicalities

Hybrid cloud

For a complete rundown on hybrid cloud, be sure to check out our guide to VMware hybrid cloud. Here we will just touch on the different ways to use the cloud and where hybrid implementations sit:

      • On-Premise: Using an infrastructure hosted and operated in-house incurs significant up-front investments (CAPEX) and skills to manage. In this instance you have full control, meaning you also have to manage everything.
      • Public cloud: Run your services directly in a cloud provider such as AWS or Azure (SaaS). The infrastructure is mutualized and operated by the provider. There is no up-front cost as you pay for what you consume (OPEX).
      • Hybrid cloud: A mix of the above, linking your on-premise infrastructure to an SDDC running in the cloud provider’s datacenters (PaaS or IaaS). You don’t need to worry about managing the hardware or the management components. Note that VMware also partnered with DellEMC to offer VMware on DellEMC Cloud.

Hybrid cloud implementations offer a great deal of possibilities such as workload mobility, disaster recovery, elastic/burst capacity with no up-front investment costs (up-front payment of subscription excluded).

IaaS, PaaS, SaaS

Even though you may have seen these terms everywhere on the internet over the past 10 years, I wanted to quickly explain what they mean for those who are not familiar with the terminology. “aaS” stands for “as-a-service” and describes the parts of the IT environment that are offered to you as a service by the cloud provider. VMware has shifted significantly from a product to a service business model, and that is the case with VMware in AWS.

Now, the relevant thing here is that the service can be offered at various levels, ranging from the infrastructure, where you get hands-on management of the hypervisor, down to the actual service, where you only manage the configuration (syslog, Apache, MySQL…). Anyway, a picture is worth a thousand words:

The type of cloud services you choose will give you more or less control over the underlying components

VMware Cloud on AWS

VMware in AWS is available in most AWS regions of the world and runs the whole SDDC stack on Amazon Elastic Compute Cloud (Amazon EC2). It is based on the VMware Cloud Foundation framework which integrates management (vCenter), compute (vSphere), storage (vSAN) and network (NSX-T).

VMC on AWS offers an SDDC in the cloud, closer to AWS services, improving data gravity

VMware in AWS doesn’t only provide vSphere hosts running in AWS, it includes a plethora of other VMware cloud services and offerings. Refer to the roadmap section of the VMware Cloud on AWS page for an exhaustive list of the available and in-development features.

Here are a few important ones that are worth mentioning:

Elastic DRS

Elastic DRS automatically adds and removes vSphere hosts to ensure an optimal number in the cluster to satisfy demand, kind of like a cluster auto-scaler if you like. It works by monitoring demand and applying an algorithm that produces scale-out (add) or scale-in (remove) recommendations.

The decision to add or remove vSphere hosts depends on the Elastic DRS policy you selected, which will be more or less conservative (eventually impacting cost). Note that the Rapid Scale-out policy was recently added; it provisions multiple hosts simultaneously to cover scenarios like VDI boot storms or host failures.

Elastic DRS offers 3 scale-in / scale-out policies to choose from
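The policy-driven recommendation logic described above can be sketched roughly as follows. The thresholds and the decision rule are hypothetical, purely to illustrate the scale-out/scale-in idea, not VMware's actual algorithm:

```python
# Hypothetical sketch of an Elastic DRS-style policy: compare cluster
# utilization against policy thresholds and emit a recommendation.
def recommend(utilization: float, scale_out_at: float = 0.80,
              scale_in_at: float = 0.40) -> str:
    """Return 'scale-out', 'scale-in', or 'hold' for a utilization ratio."""
    if utilization >= scale_out_at:
        return "scale-out"   # add a host to satisfy demand
    if utilization <= scale_in_at:
        return "scale-in"    # remove a host to cut cost
    return "hold"

print(recommend(0.92))  # scale-out
print(recommend(0.25))  # scale-in
print(recommend(0.60))  # hold
```

A more conservative policy would simply raise `scale_out_at` and lower `scale_in_at`, trading responsiveness for cost.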

Disaster Recovery

Disaster recovery is critically important but not all organizations can afford a second site to replicate workloads to. VMware in AWS can help those companies by offering DR solutions in the cloud. There are currently 2 main ways offered by VMware Cloud on AWS to do this.

VMware Cloud Disaster Recovery aka DRaaS – SaaS

Announced during VMworld 2020, DRaaS is a SaaS VMware cloud service providing cost-optimized, on-demand disaster recovery for VMware in AWS. Instead of paying for hosts as a replication destination, replicas are stored on relatively cheap cloud storage and restored to a cloud SDDC that is spun up on-demand to improve TCO.

Because restoring involves automatically provisioning an SDDC, which takes a bit of time, the solution is characterized as warm DRaaS. However, it is possible to run a light-footprint SDDC, called a live pilot light, to restore a number of critical workloads in a timely fashion.

The solution will support up to 1,500 VMs across multiple SDDC clusters with DR health checks

Find out more about DRaaS in our dedicated blog on the topic.

VMware Site Recovery – IaaS

Also a VMware Cloud service; however, as opposed to DRaaS, VMware Site Recovery is a hot DRaaS solution, meaning the recovery infrastructure is ready to go with no SDDC provisioning required. It is built on Site Recovery Manager (SRM) and leverages vSphere Replication to copy replicas to the destination SDDC running VMware in AWS.

The workloads are replicated to vSphere hosts running in AWS. The upside is that you don't need to own a DR infrastructure while benefiting from the best possible RPO/RTO. However, this is reflected in the cost, as it is more expensive than the SaaS option.

VMware Site Recovery lets you replicate your workloads to a vSphere-backed cloud SDDC

Hybrid Linked Mode and workload mobility

One of the main selling points of hybrid cloud is workload mobility. vCenter Hybrid Linked Mode links your on-premise SDDC to VMware in AWS. By doing this, you get to manage both environments from a single pane of glass, share tags, and migrate virtual machines using vMotion.

Maximum latency for Hybrid Linked Mode is 100 ms round-trip time

It can be configured in any of the following 2 ways:

      • On-Premise to Cloud: In this model, the Cloud Gateway Appliance acts as a bridge between your on-premises infrastructure and the cloud SDDC. The identity source is already taken care of as the SSO configuration is mapped to VMware in AWS. You manage the hybrid SDDC by logging into the VMC gateway.
      • Cloud to On-Premise: No need for a VMC Gateway here as you will link directly from the cloud vCenter to the on-premise one. You need to use the cloud vSphere client to manage your hybrid environment. In this scenario, you must add your on-premise identity source to the vCenter in AWS.

The VMC Gateway lets you link your on-premise SDDC to the cloud SDDC

Once the VPN connection along with firewall rules, SSO, and permissions are configured and Hybrid Linked Mode is connected, you can start migrating VMs between your on-premise and cloud SDDC. Nothing new here as it uses the tried and tested vSphere vMotion.
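Given the 100 ms round-trip limit for Hybrid Linked Mode noted above, a quick latency pre-check toward the cloud vCenter can save troubleshooting later. This is a rough sketch: the hostname in the usage comment is a placeholder, and TCP connect time is only an approximation of the round-trip time the feature actually experiences:

```python
# Rough pre-check against the documented 100 ms RTT maximum for
# Hybrid Linked Mode. TCP connect time is only a proxy for real RTT.
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=5):
    """Measure one TCP connection setup time in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

def meets_hlm_requirement(rtt_ms, max_rtt_ms=100):
    """True if the measured RTT is within the documented HLM limit."""
    return rtt_ms <= max_rtt_ms

# Example (placeholder hostname):
# rtt = tcp_rtt_ms("vcenter.sddc-placeholder.example.com")
print(meets_hlm_requirement(42))   # True
print(meets_hlm_requirement(180))  # False
```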

VMware Horizon on VMware Cloud on AWS

Granted the name of this feature is a bit of a mouthful. I assume it is to differentiate it from “Horizon Cloud”, a separate SaaS offering hosted on IBM Cloud or Azure in which you only manage the desktop pools.

In VMware Horizon on VMC on AWS, you deploy your Horizon infrastructure components in your cloud SDDC just like you would in your on-premise environment. You can then add it to the Cloud Pod Architecture (CPA) of your on-premise environment, or you could decide to run all your VDI workloads in VMware in AWS.

Horizon Cloud pod architecture for VMware Cloud on AWS

A number of use cases can motivate the choice for this architecture such as:

    • Datacenter expansion: Expand the capacity of your VDI infrastructure without investing in new hardware. Burst capacity such as seasonal activities may benefit from it greatly.
    • Application locality: Put your VDI closer to your published AWS services to reduce application latency to a minimum (Data Gravity).
    • Business Continuity / Disaster Recovery: Adding a Horizon pod in AWS to your CPA will open the doors to BC and DR to recover quickly from a failure in your on-premise SDDC.

VMware Tanzu Kubernetes Grid Plus on VMware Cloud on AWS

Tanzu Kubernetes Grid Plus (TKG+) is VMware's upstream Kubernetes runtime, which provides open-source technologies and an automation solution to deploy scalable, multi-cluster Kubernetes environments.

VMware in AWS now lets you deploy an SDDC in the cloud that contains all the components required to leverage Tanzu Kubernetes Grid. You benefit from elastically scalable resources in the cloud for your containerized workloads.

Tanzu Kubernetes Grid (TKG) can now span to VMware Cloud on AWS

VMware Cloud on AWS Outposts

As mentioned, VMware has been going full steam ahead with the cloud, tightening its partnership with AWS by integrating even further with its product offering. As part of this, VMware Cloud on AWS on AWS Outposts was announced during VMworld 2021.

AWS Outposts is a managed service in which AWS delivers and physically installs the "Outpost" hardware at your location, meaning you get the AWS compute experience on-premise or in any datacenter or co-location of your choosing. It is managed by AWS, so you don't need to worry about software updates or any of the nitty-gritty of infrastructure lifecycle management. Use cases for AWS Outposts include low-latency requirements, local data processing, and many more.

Data sovereignty was a significant driver in the adoption of VMware Cloud on AWS Outposts, as the number of large organizations and government bodies looking to protect their data against foreign legislation is growing at a rapid pace. VMware launched the VMware Sovereign Cloud initiative to address these customer needs.

Getting started with VMC on AWS

Planning your hybrid cloud journey

Planning your shift to hybrid cloud is an important step in the journey, especially making sure the network aspect is correctly configured and doesn’t contain security issues.

Rather than listing requirements and prerequisites that change quite regularly, I would rather point you to the VMware Cloud Launchpad, described in VMware's words as "A One-Stop-Shop for all VMware Cloud Solutions and Infrastructure".

It is clear and well organized; you will find guidance and a lot of learning material to get started with VMware in AWS. Again, you will also find some information in our guide to hybrid cloud.

The VMware Cloud Launchpad helps you plan and prepare for your hybrid cloud journey

Deploying virtual machines

Deploying a VM directly to your AWS SDDC is fairly similar to what you would do in your on-premise environment and can be done in several ways. VMware actually redirects to the regular vSphere documentation when it comes to it.

  • Creating a new VM from scratch.
  • Cloning existing VMs or templates.
  • Deploying an OVF or OVA template.
  • Deploying a VM from an uploaded OVF or OVA file.

Because the SDDC runs VMware in AWS, some operations available in your on-premise environments won’t be possible in the cloud SDDC such as RDM, SCSI BUS sharing, Hyperthreading, virtual disk types… You can find the complete list of unsupported features in the VMware Documentation.

Content libraries let you synchronize resources from the on-premise datacenter to the cloud SDDC

Note that operations will be significantly facilitated if you leverage vSphere Content Libraries. You can publish a library from your on-premise environment and have the vCenter running on VMware in AWS subscribe to it. That way you get to manage your ISO and templates from a single place.

Migrating virtual machines

Most companies committing to a hybrid cloud model will almost surely get to the discussion of migrating workloads between environments, be it from or to the SDDC running in AWS. We call it a Hybrid migration.

The fact is there are again multiple ways to migrate virtual machines to VMware in AWS:

      • VMware HCX

VMware HCX is an application mobility platform that facilitates workload mobility across environments without requiring a reboot or network interruption. It is particularly relevant in bulk migration scenarios where hundreds of VMs have to be moved.

      • vMotion (cold)

You can also move VMs in a powered-off state when downtime is not an issue. Cold migration sidesteps CPU compatibility constraints, and VMs connected to standard switches can be moved.

      • vMotion (live)

The one and only vSphere vMotion can be used to relocate your workload (vDS networking only) between your on-premise and cloud SDDCs. It will obviously move the storage of the VM as well and maintain its active state. It can be done from the vSphere client as long as Hybrid linked mode is enabled and your SDDC runs supported vSphere versions (vSphere 6.7U2/6.5U3 or higher).

Note that EVC is disabled in the Cloud SDDC. Hence, it is recommended to enable Per-VM EVC or set your on-premise SDDC to Broadwell. This will ensure that you can migrate live workloads between your SDDCs.

Per-VM EVC ensures CPU compatibility for workload migrations across SDDCs

Accessing AWS services

While we are talking about VMware in AWS, I also wanted to touch on native AWS services. When deploying an SDDC with VMC on AWS, a high-speed, low-latency link is created with your Amazon VPC.

This means your workloads run close to cloud services such as EC2 or S3, offering LAN-like communications. This is called data gravity and is highly beneficial for latency-sensitive applications accessing cloud services.

Pricing

The pricing model for VMware in AWS is based on the number and type of hosts in your cloud SDDC. You can either pay on-demand ($/host/hour) or opt for a 1- or 3-year subscription ($/host/year). Paying upfront for a subscription will save you money over time, but the investment is significant.

If you want to know more, head over to the VMware Cloud on AWS pricing calculator to estimate the costs.

Number of hosts

The number of hosts you run will depend on your needs but there are minimums. Production environments can start with as little as 2 hosts backed with i3.metal servers or 3 hosts backed with i3en.metal servers. You can then scale up as demand increases.

Note that a time-bound low-cost single-host option is also available for organizations willing to try the service to see if it works with their environment and adds value. Be mindful that if you don’t scale up the cluster within 30 days, the SDDC is deleted along with the data stored on it. It starts at $7/hour, which is ok, but watch out as it will set you back $5,110 per month if it runs for the full 30 days!
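The monthly figure quoted above follows directly from the hourly rate: cloud billing typically assumes an average month of about 730 hours (8,760 hours per year divided by 12). A quick sketch using the $7/hour figure from the text:

```python
# How $7/hour becomes roughly $5,110/month for the single-host option.
HOURS_PER_MONTH = 8760 / 12  # average month = 730 hours

def monthly_cost(rate_per_hour, hours=HOURS_PER_MONTH):
    """On-demand monthly cost for one host at a given hourly rate."""
    return rate_per_hour * hours

print(monthly_cost(7.0))  # -> 5110.0
```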

Types of hosts

When planning for your cloud SDDC, you can choose from 2 server configurations for which the cost will vary.

VMC on AWS server configurations as of April 2021

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We continually work hard to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Conclusion

In the last few years, it's been fascinating to witness VMware's vision "Any app, any cloud" come to life thanks to a series of acquisitions and partnerships with major tech companies like AWS. After four years of continuous improvements, VMware in AWS is gaining traction and customers are getting on board.

While VMware in AWS might appear, and rightly so, like a pretty expensive service, it brings much-needed breathing space to IT departments that struggle to balance CAPEX management and innovation. By shifting some of those large up-front acquisitions to an OPEX model, you no longer need to worry about amortization, hardware, cabling, patching, upgrades, and so on.

VMware also thought about vSphere administrators as your knowledge and skills are transferable to VMware Cloud on AWS thanks to it using the same management tools.

If you want to give it a go, the single-host option lets you test the service for 30 days for about $7 per hour. Remember not to store any important data on it if you are not going to scale up the SDDC, as it will be deleted at the 30-day mark.

Alternatively, you can have a glimpse at VMware in AWS in the dedicated hands-on-labs offered for free by VMware.

The post What is VMware Cloud on AWS (VMC on AWS)? appeared first on Altaro DOJO | VMware.

What is the difference between VDI desktop virtualization and virtual machines
https://www.altaro.com/vmware/vdi-desktop-virtualization/
Fri, 09 Jul 2021 07:13:44 +0000

What is the difference between VDI, desktop virtualization, and virtual machines? This guide will help you find out.


There is no question that virtualization has changed the world of computing as we know it. Server virtualization has revolutionized how enterprise organizations run business-critical workloads. With server virtualization, organizations have consolidated servers at very dense ratios on physical hardware, bringing about new efficiencies and capabilities.

Virtualization has not stopped at the server. It has also revolutionized desktop computing, providing many of the same benefits for the desktop as server virtualization does in management, lifecycle operations, security, and other areas. When looking at virtualization in the realm of the desktop, many different terms come up. What is the difference between VDI, desktop virtualization, and virtual machines? This comparison will detail the differences between these terms and technologies.

What is the difference between VDI, desktop virtualization, and virtual machines?

When considering the various technologies that comprise enterprise virtual desktops, many terms and technologies are mentioned and described, regardless of the solution used. Three of those technologies and terms include:

    • VDI
    • Desktop virtualization
    • Virtual Machines

Let’s define these and see how they fit in the solutions to deliver virtual desktops to end-users.

Why is understanding the differences important?

As we will see in the guide to follow, the terms listed above are all related and interconnected in the world of virtual desktops. However, understanding the different technologies is essential when designing, architecting, and using virtual desktops to empower remote end-users. Choosing the right technologies is extremely important in ensuring the best solutions for remote work environments.

Each of the terms listed carries different implications and dependencies as organizations dive into the world of virtual desktops. Understanding how the terms relate, how the various technologies work, and their different nuances can steer businesses toward the right solutions for various use cases.

Virtual Desktop Infrastructure (VDI)

The acronym VDI stands for Virtual Desktop Infrastructure, a term that describes the infrastructure dedicated to running virtual desktops in an enterprise environment. VDI uses virtual machines to provide virtual desktops to end-users connecting from many different devices, including PC, Mac, Linux, tablet, or mobile devices.

VMware vSphere provides a common hypervisor platform for VDI

The concept of VDI is relatively simple. A user connects to the VDI environment and is given a desktop by the VDI broker out of a pool of available desktops. However, to make this relatively simple concept come to life, some rather complicated software and hardware requirements must be satisfied to deliver a seamless experience to remote users.

Virtual Desktop Infrastructure (VDI) relies on a software layer that brokers connections from end-users to the VDI environment, which is remote to the user. It is essential to understand that with VDI, the virtual environment used to carry out business-critical operations is not running locally on an end-user device. The VDI broker and virtual machines comprising the VDI environment all reside in an on-premises or cloud data center. VMware Horizon and Citrix Virtual Apps and Desktops are modern examples of VDI solutions that organizations are using today.

VMware Horizon VDI

The locality of infrastructure and data in a VDI solution has many advantages in lifecycle management, performance, and security as business-critical data does not leave the confines of the sanctioned data center environment. Additionally, the virtual machine environment is adjacent to backend resources needed for business applications.

What are the benefits of Virtual Desktop Infrastructure (VDI)?

Virtual Desktop Infrastructure (VDI) brings about many advantages both for organizations and end-users. What are these?

    • Work from home – VDI provides an excellent remote access platform for remote workers. With VDI solutions, remote workers can connect to remote work environments that look and feel like working on a computer in the office. VDI desktops can be customized to the needs of the specific end-user connecting.
    • Mobility – With VDI technologies, mobility is vital. No longer is a user limited to working on a PC or laptop dedicated to running business apps. With VDI, users can access their business environment from many different devices, including mobile phones, tablets, thin clients, etc.
    • Secure access – Today, cybersecurity is critical. VDI keeps business-critical, sensitive data housed in the data center where it belongs. It helps minimize the danger of data exfiltration and malicious attack from placing an infected remote workstation on the network using a VPN. By using additional security solutions such as VMware NSX-T, data can be further protected with security policies and micro-segmentation.


VMware NSX-T provides a robust micro-segmentation platform for VDI

    • Central management and monitoring – With VDI, IT can manage and monitor the environment from a central location since server-side resources reside in the data center. It also helps ease the burden of troubleshooting since, generally, the IT team can quickly triage the VDI environment if there is an issue.

Types of VDI implementations

There are generally two different VDI implementations that allow organizations to effectively provide virtual desktop resources to remote employees. These include:

    • On-premises VDI
    • Cloud-based VDI

Traditionally, on-premises VDI is the more common implementation between the two different types of VDI environments. With on-premises VDI, organizations typically provision, configure and manage their own physical VDI infrastructure in an on-premises data center. What does a typical on-premises VDI implementation include?

  1. Hypervisor hosts – The hypervisor hosts are the physical server hosts that provide the virtual machines' hardware resources, including compute and memory.
  2. Network gear – The network gear includes the physical network switches, physical cabling, and other network hardware required.
  3. Storage – The virtual machines configured as targets for remote users require storage for provisioning. Additionally, organizations must decide how to store and maintain user data.
  4. Hypervisor software layer – Virtual Desktop Infrastructure (VDI) solutions today run on top of a hypervisor such as VMware vSphere or Citrix Hypervisor.
  5. Virtual Desktop Infrastructure (VDI) broker and other software – The VDI connection broker component of most VDI solutions performs the brokering and placement of users on the assigned VDI desktop pools.
  6. Desktop operating system – Users typically connect to desktop operating system sessions, which require a desktop operating system.
  7. "Golden" image – This refers to the preconfigured operating system settings, applications, and other customizations specific to the needs of the users connecting to the VDI solution.
  8. Cloning mechanism – VDI solutions generally work on the premise of cloning the golden image for end-users. There are new ways of cloning desktops that drastically reduce the time required for this operation. Administrators define this operation in the type of desktop pool configured. By selecting an automated desktop pool, the VDI solution (VMware Horizon shown below) uses a virtual machine template to generate new virtual machines on which to place users.

Creating an automated desktop pool in VMware Horizon

  9. Desktop pools – The desktop pool is the group of desktop workstations used as the target for end-users connecting to the VDI environment.
  10. Entitlements and assigning users to desktop pools – Users are "entitled" to the target desktop pools. The entitlement provides the permissions and assignment required so the connection broker "knows" where to place the user.

Increasingly popular today are cloud-based options for Virtual Desktop Infrastructure (VDI). Cloud SaaS VDI solutions, like other cloud SaaS solutions, such as G Suite and Microsoft Office 365, abstract the underlying hardware and physical infrastructure and allow organizations to consume the VDI solution. This abstraction enables businesses to instantly provision VDI environments without the usual complexities of purchasing, provisioning, configuring, and managing VDI infrastructure.

One of the popular offerings in this space is the Microsoft Windows Virtual Desktop solution on Azure. Windows Virtual Desktop (WVD) is a desktop and app virtualization service that runs on the Microsoft Azure cloud and is an “as-a-Service” offering that allows organizations to quickly provision a VDI environment for their users with the infrastructure residing in Microsoft Azure datacenters.

Windows Virtual Desktop provides excellent features, including:

    • Multi-session Windows 10 deployments (this is not possible with Windows 10 installed in on-premises environments)
    • Virtualize Microsoft 365 applications and have those optimized to run in the WVD environment
    • Ability to virtualize both desktops and applications
    • It allows publishing an unlimited number of host pools for remote end-users
    • You can bring your image from on-premises and run this in WVD
    • You can pick a WVD image from the Azure Gallery
    • Deploying a WVD image is quickly done from the Azure portal, PowerShell, and REST interfaces.
    • Users can be assigned to the pools of desktops configured in WVD
    • Users can connect using either the native WVD application on their devices or using the Windows Virtual Desktop HTML5 web client

As you can see below, you can start with Windows Virtual Desktop for free and with the click of a button.

Windows Virtual Desktops VDI-as-a-Service

VDI desktop types

With VDI, there are usually two types of desktops configured in a typical VDI environment. These include:

    • Persistent desktops
    • Non-persistent desktops

Persistent desktops have been referred to as “stateful” desktops as these are desktops customized and configured with user settings and configuration that persists between login sessions. Any changes to the configuration or settings are saved and available on the next login session. The persistent VDI configuration aligns with the experience users are accustomed to with a physical desktop working in the office.

When they log in, they see their customized configuration settings and the desktop's personalized look and feel. This stateful behavior is generally accomplished by the VDI solution creating a one-to-one relationship between an end-user and a full virtual machine stored in the virtual environment. When organizations start out using VDI solutions, this is typically the configuration many gravitate towards. Today's VDI solutions can also use physical desktops as the target for remote users, allowing administrators to place remote users on the same workstation they use when physically working in the office.

Persistent desktops are an excellent option for businesses with power users who need access to custom applications and more processing power. These may include engineers, graphic artists, developers, etc. Persistent desktops provide customized virtual machines that fit the needs of power users connecting to the remote environment.

Non-persistent desktops can be referred to as “stateless” desktops. Typically in a non-persistent desktop configuration, the desktop does not retain any settings or configuration changes made to the desktop once a user logs out. Some solutions synchronize and maintain user data when users log out of their remote desktop. These include Citrix Appsense and VMware Dynamic Environment Manager.

With non-persistent desktops, the VDI solution generally uses a cloning process to rapidly clone subsequent desktops in desktop pools using a master image. Desktop clones are provisioned to satisfy the incoming user connection requests. It provides many advantages, including management, security, and other lifecycle benefits as the image can be updated in one place. All desktops receive the changes and updates using the cloning process.

Non-persistent desktops are often used in organizations with many task workers who may perform a limited number of repetitive tasks and don’t need a customized desktop. The software needs of a standard office worker can usually be satisfied with a standard image provided using a non-persistent desktop. Non-persistent desktops also save on storage and other resources in the VDI infrastructure as these share a standard base disk that is then cloned. Users are placed on the “delta disks” of the clone. It, in turn, results in a cheaper solution when compared to persistent desktops.
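The storage savings from sharing a base disk can be sketched with rough arithmetic. The image and delta sizes below are illustrative assumptions, not figures from any vendor:

```python
# Rough storage comparison: persistent full clones vs. non-persistent
# linked clones sharing one base disk. Sizes are illustrative assumptions.

def full_clone_gb(desktops, image_gb):
    return desktops * image_gb              # every desktop carries a full copy

def linked_clone_gb(desktops, image_gb, delta_gb):
    return image_gb + desktops * delta_gb   # one shared base + per-user delta

desktops, image, delta = 200, 40, 4
print(full_clone_gb(desktops, image))           # 8000 GB for full clones
print(linked_clone_gb(desktops, image, delta))  # 840 GB with a shared base
```

Even with generous per-user delta growth, the shared-base model stays an order of magnitude smaller, which is where the cost advantage of non-persistent pools comes from.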

VDI application publishing

Another strong use case of Virtual Desktop Infrastructure (VDI) that may not be as obvious is virtual application delivery or “app publishing.” What is virtual application delivery or app publishing? Application publishing makes an application available instead of a full desktop session. When the user launches the VDI-backed application, it looks identical to the same application loaded locally. The difference is the application is streamed across the network.

It offers many benefits in the correct use cases compared to full desktop sessions. Many, if not most, users need access to applications and not a full desktop session. The main reason a user may need to log in to a desktop is to launch applications. With application publishing using virtual application delivery, the desktop is no longer required and the user gets direct access to the application. The footprint is drastically reduced when VDI infrastructure delivers applications and not desktops, leading to much greater user density when compared to full VDI desktops.

Organizations can use a hybrid implementation between VDI-based desktops for power users and use application publishing through virtual app delivery. This combination provides power users with the desktop they need and delivers the required apps for task workers and office employees who need access to a broad set of business productivity software.

As a note, the virtual application publishing provided by VMware Horizon and others relies on the capabilities found in the Remote Desktop Services capabilities offered in Windows Server. VMware Horizon can publish the apps that are presented by the RDS environment.

Are VDI and Desktop Virtualization the same thing?

Some references to virtual desktop solutions use the terms desktop virtualization and VDI interchangeably. Are VDI and desktop virtualization the same? No, VDI is a form of desktop virtualization that uses a hypervisor running on a cluster of physical hypervisor hosts to broker and provision virtual machines for end-users connecting to the environment.

Desktop virtualization is a much broader term that includes VDI and other virtual desktop solutions such as remote desktop services (RDS). It encompasses all technologies that provide a virtual desktop by various means to end-users.

Is VDI the same as VM?

A virtual machine provides all the identical constructs as a physical machine, including a processor, memory, storage, and network. Using the hypervisor, the operating system installed in the guest virtual machine can communicate with the underlying physical hardware running on the hypervisor host as if the hardware is dedicated to the guest operating system. The guest operating system is unaware its hardware is virtualized. However, the hypervisor handles all the interactions with the physical processor, memory, and other hardware.

Do the terms VDI and VM refer to the same thing? No, VDI and VM are two closely related technologies but are not the same thing. Virtual Desktop Infrastructure (VDI) generally relies on VMs (virtual machines) to deliver desktops to remote users. A VDI broker listens for incoming connection requests. Once a connection request is received, the broker places the user on an available VM. The virtual machine is usually running a client operating system like Microsoft Windows 10. The underlying virtual machines with VDI can also run a server operating system like Windows Server 2019, publishing applications the VDI platform presents to end-users.
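The brokering behavior described above can be illustrated with a minimal toy: a pool of VMs, with each incoming user placed on a free one. This is a conceptual sketch only, not how Horizon or Citrix brokers are actually implemented, and the VM names are placeholders:

```python
# Toy sketch of a VDI connection broker: place users on free pool VMs.

class ConnectionBroker:
    def __init__(self, pool):
        self.free = list(pool)   # available desktop VMs
        self.assigned = {}       # user -> VM

    def connect(self, user):
        if user in self.assigned:        # reconnect to the existing session
            return self.assigned[user]
        if not self.free:
            raise RuntimeError("no desktops available in pool")
        vm = self.free.pop(0)
        self.assigned[user] = vm
        return vm

    def disconnect(self, user):
        vm = self.assigned.pop(user)
        self.free.append(vm)             # non-persistent: VM returns to the pool

broker = ConnectionBroker(["win10-vdi-01", "win10-vdi-02"])
print(broker.connect("alice"))  # win10-vdi-01
print(broker.connect("bob"))    # win10-vdi-02
```

A persistent pool would instead pin each user to a dedicated VM permanently rather than returning it to the free list on disconnect.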

What is the difference between VDI and Remote Desktop?

VDI and Remote Desktop Services (RDS) are both part of the group of technologies that make up desktop virtualization. However, they are different technologies and provide virtual desktops to end-users in different ways. Remote Desktop Services (RDS) is a traditional solution that has long been a capability of the Windows Server operating system, known in legacy versions of Windows Server as Terminal Services. Remote Desktop Services allows multiple remote users to log in to the same instance of the operating system. However, each end-user who logs in gets their own desktop session.

Microsoft implemented a thin-client architecture in Windows Server software that makes this possible. Clients can access the Windows Server desktop using the Remote Desktop Protocol (RDP). RDS can provide organizations with an excellent option for remote access, especially for those already using and heavily invested in Microsoft Windows.

One downside to RDS is that multi-user sessions are only possible with the Windows Server operating system and not Windows clients such as Windows 10. Windows Virtual Desktop (WVD), described earlier, is an exception, as it allows multi-user sessions on Windows 10 WVD targets. For on-premises RDS, this is limited to Windows Server. Certain applications may not run correctly on Windows Server operating systems, only on clients. Organizations must keep this in mind when considering RDS as an option for remote user connectivity and productivity. It may also come into play when users accustomed to Windows 10 clients are placed on Windows Server to launch applications.

Is VDI the same as RDS? There are many similarities and nuances to consider between VDI and RDS. However, VDI provides multiple virtual desktops by way of numerous virtual machines. As discussed earlier, the VDI connection broker places incoming connections on assigned pools of VMs for the particular user. The VDI environment may use a cloning process to provide the needed VMs for the end-users.

An RDS server provides multiple sessions on the same virtual machine instance. Similar to VDI, RDS servers can be configured as a pool of available RDS hosts. Microsoft’s RDS infrastructure generally uses what is called the Remote Desktop Gateway and Remote Desktop Connection Broker. The Gateway allows tunneling RDP over HTTPS for additional security. The RD Connection Broker load balances users across available RDSH servers. RDS offers similar load balancing and placement features as VDI but accomplishes this with desktop sessions and not using dedicated virtual machines.

Windows Server Remote Desktop Services configuration

Below, we are configuring RDP to use a Remote Desktop Gateway to connect to an RDS environment.


Configuring RDP to use a Remote Desktop Gateway to connect to an RDS environment
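These gateway settings can also be saved in a .rdp connection file so users do not have to enter them manually each time. A minimal example, with placeholder hostnames, might look like this:

```
full address:s:rdsh01.contoso.local
gatewayhostname:s:rdgw.contoso.com
gatewayusagemethod:i:1
gatewaycredentialssource:i:0
gatewayprofileusagemethod:i:1
promptcredentialonce:i:1
```

Here `gatewayusagemethod:i:1` tells the client to always route the connection through the Remote Desktop Gateway, and `promptcredentialonce:i:1` reuses the same credentials for both the gateway and the RDSH host.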

This leads organizations to consider which technology makes the most sense. What factors are important? VDI is known to be one of the most hardware-demanding technologies of its kind. It requires very capable hardware that can deliver the high IOPS needed for cloning processes and for the "boot storms" that occur when many workers log into the VDI environment at the beginning of the day. VDI environments may require all-flash storage arrays to deliver the IOPS needed for acceptable performance.
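A rough back-of-envelope calculation shows why boot storms are so punishing. The per-VM figures below are illustrative assumptions only; real numbers vary widely by image, application mix, and optimization:

```python
# Back-of-envelope VDI storage sizing (illustrative numbers, not benchmarks).
DESKTOPS = 500           # concurrent desktops logging in at start of day
BOOT_IOPS_PER_VM = 50    # assumed per-VM IOPS during boot/login
STEADY_IOPS_PER_VM = 10  # assumed per-VM IOPS at steady state

boot_storm_iops = DESKTOPS * BOOT_IOPS_PER_VM
steady_iops = DESKTOPS * STEADY_IOPS_PER_VM

print(f"Peak boot-storm demand: {boot_storm_iops:,} IOPS")  # 25,000 IOPS
print(f"Steady-state demand:    {steady_iops:,} IOPS")      # 5,000 IOPS
```

Even with these modest assumptions, the login peak is five times the steady-state demand, which is why all-flash arrays are frequently specified for VDI backends.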

Dell all-flash SAN commonly used for VDI storage backends

RDSH servers do not demand the extreme levels of storage performance required for VDI. However, RDS has the limitation mentioned earlier: on-premises multi-user sessions work only with Windows Server versions. It is also important to understand that with RDS, the operating system and applications are shared between connected end-users. If a user needs an isolated and customized environment, this is much more difficult to achieve with RDS than with VDI, where a customized desktop image can be assigned to specific end-users.

What is the difference between VDI and Citrix?

Citrix is one of the best-known vendors delivering applications to remote end-users through application virtualization. Citrix also includes a VDI solution known as Citrix Virtual Apps and Desktops, formerly Citrix XenDesktop. Citrix Virtual Apps and Desktops is available in different editions offering varying levels of capabilities and functionality.

Citrix provides a well-known desktop virtualization platform

While the terms Citrix and VDI are sometimes used interchangeably, Citrix is simply a vendor providing a specific implementation of Virtual Desktop Infrastructure (VDI). It is essential to understand that Citrix has its own way of implementing VDI and virtual apps.

What is the difference between VPN and VDI?

Another acronym that may be confused with VDI is VPN, likely because both are associated with remote workers and remote work productivity. What is a VPN? A VPN is a virtual private network that allows a remote end-user to connect to the corporate network through a secure, encrypted tunnel. VPN connections place a remote client on the corporate network so it can access business-critical resources and applications.

Windows 10 VPN settings

VPN connections typically mean that the end-user device connects directly to the remote resources rather than to a remote desktop. VPNs can expose organizations to security concerns precisely because the remote client joins the corporate network: any malware or other security threats on the end-user client can then reach corporate resources. Data exfiltration can also become an issue.

VDI connections generally do not need or use VPN connections to reach the virtual desktop pool in the VDI environment. Most VDI solutions provide a means for external devices to connect from the Internet through specialized external gateway appliances, without any special network connectivity such as a VPN.

Concluding Thoughts

Virtual Desktop Infrastructure (VDI) is an excellent solution that gives remote end-users access to the applications they need to carry out business-critical tasks from desktops, laptops, tablets, and mobile devices. VDI is known for stringent hardware and performance requirements that can drive up the cost of implementation. However, it is a robust solution that can satisfy organizations looking to empower both power users and general office and task workers with the tools and applications needed for their daily work.

Many other terms are often associated with VDI, including RDS, VPN, Citrix, etc. As shown in the guide, there are nuances and differences in the various terminology and how they relate to the delivery of virtual desktops. By understanding these differences, a business can choose the right solution for their particular use case.

As organizations transition to more cloud-native applications and “as-a-Service” offerings in public cloud environments, cloud-based VDI is becoming increasingly popular. It allows businesses to quickly implement and take advantage of VDI solutions to empower remote employees without being concerned with the often complex and challenging implementation of VDI from the ground up.

Virtual Desktop Infrastructure (VDI) technologies and solutions are only going to continue to improve. As remote work and hybrid work technologies have come to the fore since the onset of the global pandemic, organizations rely heavily on VDI and other powerful technologies to empower their hybrid workforce.

The post What is the difference between VDI desktop virtualization and virtual machines appeared first on Altaro DOJO | VMware.
