All You Need to Know about vSphere Cloud Native Storage (CNS)

Learn about data management solutions and how to provision persistent storage for stateful applications with vSphere Cloud Native Storage.

In this article, we will have a look at how to provision storage for Kubernetes workloads directly on vSphere cloud native storage without resorting to an extra software layer in between.

In the infrastructure world, storage is the building block of data persistency, and it comes in many different shapes and forms, including vSphere cloud native storage. Shared storage lets you leverage clustering services and enables a number of data protection scenarios that are vital to most IT environments. A solid storage backend gives IT departments much-appreciated peace of mind, as storage outages are among the most dreaded failure scenarios for any IT professional. Despite the growing number of storage solutions available on the market, provisioning shared storage in a vSphere environment is something many now consider mainstream, as it is a tried and tested process. VMware vSphere environments offer several storage options such as VMFS, vSAN, vVols and NFS to store your virtual machines and other resources consumed by the hypervisors.

In recent years, VMware has extended the reach of vSphere storage backends and the capabilities of the vSphere suite to integrate more closely with modern applications, in other words, container workloads and microservices that leverage vSphere cloud-native storage. This is an area VMware has invested in heavily since acquiring several cloud-native companies, such as Pivotal, to build its VMware Tanzu portfolio.

While the flexibility and customization potential of Kubernetes is unbeatable, its complexity means that the learning curve is fairly steep compared to other infrastructure solutions. Let’s see how vSphere Cloud Native Storage deals with that.

An introduction to VMware vSphere cloud native storage

First of all, what is Cloud Native? The term has been somewhat of a buzzword these last few years and has started appearing in more and more places. Cloud Native mostly refers to infrastructure-agnostic container workloads that are built to run in the cloud. That means moving away from monolithic software architectures towards a separation of duties: microservices are meant to be service-specific workloads interacting with each other in a streamlined fashion. Kubernetes is a container orchestration platform that has been enabling this revolution and has become the de facto industry standard for running containers in enterprise settings.

Having said that, not all workloads running on Kubernetes can be stateless and ephemeral. We still need to store data, configs and other resources permanently on backends such as vSphere cloud native storage for those stateful applications. That way the data will remain even after ruthlessly killing a bunch of pods. This is where persistent volumes (PVs) come in: Kubernetes resources that let you provision storage on a specific backend, such as vSphere cloud native storage, to store data persistently.

“VMware CNS supports most types of vSphere storage”

Kubernetes PVs, PVCs and Pods

VMware Tanzu is an awesome product; however, it is easy for a vSphere admin to jump headfirst into it with no prior knowledge of Kubernetes just because it has the "VMware" label on it. This makes the learning process incredibly confusing and is not a great way to start on this journey. So, before we dig in, I'd like to cover a few Kubernetes terms for those who aren't too familiar with them. More will follow in the next chapter.

  • Pod: A pod is the smallest schedulable entity for workloads; you manage pods, not containers. A pod can contain one or more containers, but a container belongs to only one pod. The pod contains information on volumes, networking and how to run the containers.
  • Persistent Volume (PV): A PV is an object to define storage that can be connected to pods. It can be backed by various sources such as temporary local storage, local folder, NFS or interact with an external storage provider through a CSI driver.
  • Persistent Volume Claim (PVC): PVCs are like storage requests that let you assign specific persistent volumes to pods.
  • Storage Class (SC): These let you configure different tiers of storage or infrastructure-specific parameters so that PVs can be provisioned on a certain type of storage without you having to specify the backend details each time, much like storage policies in the vSphere world (a quick way to inspect these objects with kubectl is sketched right after this list).
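
As a quick way to get familiar with these objects, they can all be listed with kubectl once you have access to a cluster. This is a minimal sketch; the commands only read state, and the PVC name is a placeholder.

# List the objects described above
kubectl get pods
kubectl get pvc
kubectl get pv
kubectl get storageclass

# Inspect a single object in detail, for example a PVC
kubectl describe pvc <pvc-name>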

The vSphere Container Storage Interface driver

The terms described in the previous chapter are the building blocks of provisioning vSphere Cloud Native storage. Now we will quickly touch on what a Container Storage Interface (CSI) driver is. As mentioned earlier, persistent volumes are storage resources that let a pod store data on a specific storage type. There are a number of built-in storage types to work with, but the strength of Kubernetes is its extensibility. Much like you can add third-party plugins to vCenter or array-specific Path Selection Policies to vSphere, you can interact with third-party storage devices in Kubernetes by using drivers distributed by the vendor, which plug into the Container Storage Interface. Most storage solution vendors now offer CSI drivers, and VMware is obviously one of them with the vSphere Container Storage Interface, or vSphere CSI, which enables vSphere cloud-native storage.

When a PVC requests a persistent volume on vSphere, the vSphere CSI driver translates the instructions into something vCenter understands. vCenter then instructs the creation of a vSphere cloud native storage volume, which is attached to the VM running the Kubernetes node and then attached to the pod itself. The added benefit is that vCenter reports information about the container volumes in the vSphere client, with more or less detail depending on the version you are running. This is what is called vSphere Cloud Native Storage.

“vSphere cloud native storage lets you provision persistent volumes on vSphere storage”

Now, in order to leverage vSphere cloud native storage, the CSI provider must be installed in the cluster. If you aren't sure how to do it or you are just getting started, you can use CAPV or Tanzu Community Edition to fast-track this step. Regardless, the configuration that tells the CSI driver how to communicate with vCenter is contained in a Kubernetes secret (named csi-vsphere-config by default) that is mapped as a volume on the vSphere CSI controller. You can display the config of the CSI driver by opening it:

k get secrets csi-vsphere-config -n kube-system -o jsonpath='{.data.csi-vsphere\.conf}'
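
Note that data stored in a Kubernetes secret is base64-encoded, so the jsonpath output above needs to be decoded to be readable. A minimal sketch, assuming the default secret name and namespace used above; the commented example below only illustrates the general shape of the file with placeholder values, not the exact format of every driver version:

# Decode the CSI configuration (secret data is base64-encoded)
kubectl get secret csi-vsphere-config -n kube-system \
  -o jsonpath='{.data.csi-vsphere\.conf}' | base64 -d

# The decoded file typically contains sections along these lines (placeholder values):
# [Global]
# cluster-id = "my-k8s-cluster"
#
# [VirtualCenter "vcenter.example.com"]
# user = "administrator@vsphere.local"
# password = "*****"
# datacenters = "my-datacenter"

# You can also confirm that the vSphere CSI driver is registered in the cluster
kubectl get csidrivers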

 

“The vSphere CSI driver communicates with vCenter to provision vSphere cloud native storage”

vSphere cloud native storage features and benefits

Part of the job of an SRE (Site Reliability Engineer), or whatever title you give to the IT professional managing Kubernetes environments, is to work with storage provisioning. We are not talking about presenting iSCSI LUNs or FC zoning to infrastructure components here, we are working a level higher in the stack. The physical shared storage is already provisioned and we need a way to provide a backend for Kubernetes persistent volumes. vSphere Cloud native storage greatly simplifies this process with the ability to match vSphere storage policies with Kubernetes storage classes. That way when you request a PV in Kubernetes you get a virtual disk created directly on the datastore.

Note that these disks are not of the same type as traditional virtual disks that are created with virtual machines. This could be the topic of its own blog post, but in a nutshell, these are called Improved Virtual Disks (IVD), First Class Disks (FCD) or managed virtual disks. This type is needed because an FCD is a named virtual disk that is not associated with a VM, as opposed to traditional disks, which can only be provisioned by being attached to a VM.
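
If you are curious about these First Class Disks outside of Kubernetes, the govc CLI can list them directly against vCenter. This is only a sketch: it assumes govc is installed, the standard GOVC_* environment variables point at your vCenter, and the datastore name is a placeholder.

# Point govc at vCenter (placeholder values)
export GOVC_URL='vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='*****'

# List First Class Disks (FCD / improved virtual disks) on a datastore
govc disk.ls -ds my-vsan-datastore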

The other benefit of using vSphere cloud native storage is better visibility of what’s being provisioned in a single pane of glass (a.k.a. vSphere web client). With vSphere CNS, you can view your container volumes in the vSphere UI and find out what VM (a.k.a. Kubernetes node) the volume is connected to along with extra information such as labels, storage policy… I will show you that part in a bit.

Note that support for vSphere CSI features will depend on your environment, and you may or may not be able to leverage it in full. This is obviously subject to change across versions, so you can find the up-to-date list here.

vSphere Container Storage Plug-in support by functionality:

  • vSphere Storage DRS: No
  • vSAN File Service on Stretched Cluster: No
  • vCenter Server High Availability: No
  • vSphere Container Storage Plug-in Block or File Snapshots: No
  • ESXi Cluster Migration Between Different vCenter Server Systems: No
  • vMotion: Yes
  • Storage vMotion: No
  • Cross vCenter Server Migration (moving workloads across vCenter Server systems and ESXi hosts): No
  • vSAN, Virtual Volumes, NFS 3, and VMFS Datastores: Yes
  • NFS 4 Datastore: No
  • Highly Available and Distributed Clustering Services: No
  • vSAN HCI Mesh: No
  • VM Encryption: Yes
  • Thick Provisioning on Non-vSAN Datastores (for Virtual Volumes, it depends on capabilities exposed by third-party storage arrays): No
  • Thick Provisioning on vSAN Datastores: Yes

A lot of features have been added over successive releases, such as:

  • Snapshot support for block volumes (see the sketch after this list)
  • Exposed metrics for Prometheus monitoring
  • Support for volume topology
  • Performance and resiliency improvements
  • Online volume expansion
  • vSphere Container Storage support on VMware Cloud on AWS (VMC)
  • ReadWriteMany volumes using vSAN file services
  • And others…
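
As an illustration of the block volume snapshot support mentioned in the list above, here is a minimal sketch of a VolumeSnapshotClass and a VolumeSnapshot pointing at the PVC created later in this article. It assumes the snapshot CRDs and the snapshot feature of the vSphere CSI driver are enabled in your cluster, and the object names are arbitrary.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vsphere-csi-snapclass
driver: csi.vsphere.vmware.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-altaro-snapshot
spec:
  volumeSnapshotClassName: vsphere-csi-snapclass
  source:
    persistentVolumeClaimName: test-altaro-blog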

The transformation from VCP (vSphere Cloud Provider) to CSI (Container Storage Interface)

Originally, cloud provider-specific functionalities were integrated natively within the main Kubernetes tree, in what are called in-tree modules. Kubernetes is a fast-changing landscape with a community that strives to make the product as scalable and efficient as possible. The growing popularity of the platform meant more and more providers jumped on the train, which made this model hard to maintain and difficult to scale. As a result, vendor-specific functionalities must now be removed from the Kubernetes code and offered as out-of-tree plug-ins. That way, vendors can maintain their own software independently from the main Kubernetes repo.

This is the case with the in-tree vSphere Volume plugin that was part of the Kubernetes code, which is being deprecated and removed from future versions in favor of the current vSphere CSI driver (out-of-tree). To simplify the shift from the in-tree vSphere volume plug-in to vSphere CSI, Kubernetes added a migration feature to provide a seamless procedure.

The migration will allow existing volumes using the in-tree vSphere Volume Plugin to continue to function, even when the code has been removed from Kubernetes, by routing all the volume operations to the vSphere CSI driver. If you want to know more, the procedure is described in this VMware blog.

“vSphere cloud native storage includes additional and modern features with vSphere CSI driver compared to the in-tree vSphere volume plugin”

vSAN Cloud Native Storage integration

I will demonstrate here how to provision vSphere cloud native storage on vSAN without going too much into the details. The prerequisite for this demonstration is a Kubernetes cluster running on a vSphere infrastructure with the vSphere CSI driver installed. If you want a head start and want to skip installing the CSI driver, you can use CAPV or Tanzu Community Edition to deploy your Kubernetes cluster.

Anyway, in order to use vSphere cloud native storage, we will create a Storage Class in our Kubernetes cluster that matches the vSAN storage policy, then create a Persistent Volume Claim using that storage class, attach it to a pod, and see how vCenter displays it in the vSphere client.

  • First, I create a Storage Class that references the vSAN storage policy named "vSAN Default Storage Policy". The annotation field means that PVCs will use this storage class unless another one is specified. Which policy you reference will obviously depend on which vSAN storage policy you want to set as the default one.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-default-policy
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
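
Assuming the manifest above is saved to a file (the file name below is arbitrary), it can be applied and verified like this:

kubectl apply -f vsan-storageclass.yaml

# The (default) marker confirms the is-default-class annotation was picked up
kubectl get storageclass vsan-default-policy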

“The storage class references the vSAN storage policy and the storage provisioner (vSphere CSI driver)”

  • Then I create a persistent volume claim (PVC) that references the storage class. The storage request will be the size of the virtual disk backing the PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-altaro-blog
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: vsan-default-policy
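
Applying the claim (again, the file name is arbitrary) should result in a bound PVC and a dynamically provisioned PV:

kubectl apply -f test-altaro-blog-pvc.yaml

# The PVC should reach the Bound state once the volume has been provisioned
kubectl get pvc test-altaro-blog
kubectl get pv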

 

“The PVC creates a VMware CNS volume backed by a PV”

  • You should now see a persistent volume provisioned by the PVC.

“The PVC should automatically create a PV”

  • At this point you should see the vSphere cloud-native storage in the vSphere client by browsing to Cluster > Monitor > Container Volumes.

The volume name matches the name of the persistent volume claim. I also tagged it in Kubernetes to show how the tags are displayed in the vSphere client.

  • You can get details if you click on the icon to the left of the volume. You will find the storage policy and datastore, and you'll see that no VM is attached to it yet.

  • In the Kubernetes objects tab, you will find information such as the namespace in use, the type of cluster…

  • Then the Physical Placement tab shows you where the vSAN components backing this vSphere cloud-native storage are stored across the hosts.

  • At this point the vSphere cloud native storage is created but it isn’t used by any pod in Kubernetes. I created a basic pod to consume the PVC.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-altaro
spec:
  volumes:
    - name: test-pv-altaro
      persistentVolumeClaim:
        claimName: test-altaro-blog
  containers:
    - name: test-cont-altaro
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: test-pv-altaro
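
Applying the pod manifest and checking where it was scheduled can be done as follows (the file name is arbitrary):

kubectl apply -f test-pod-altaro.yaml

# The NODE column shows which Kubernetes node, and therefore which VM, runs the pod
kubectl get pod test-pod-altaro -o wide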

 

Notice where the pod is scheduled, on node “test-clu-145-md-0-5966988d9d-s97vm”.

  • At this point, the newly created pod gets the volume attached, and this is quickly reflected in the vSphere client, where you can see the VM running the node on which the pod is scheduled.

  • If you open the settings of said VM, you will find a disk attached which is the vSphere Cloud native storage created earlier.

To properly protect your VMware environment, use Altaro VM Backup to securely backup and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Wrap up

Most IT pros will agree that the learning curve of Kubernetes is fairly steep as it is a maze of components, plugins and third-party products that can seem daunting at first. However, they will also agree that Kubernetes has been one of the fastest-growing technologies in the last 5 years. The big players in the tech industry have all jumped on the bandwagon and either developed their own product or added support/managed services for it somehow. VMware is one of them with their Tanzu portfolio and vSphere Cloud native storage is a critical component of this stack as it reduces the complexity by offering vSphere storage to Kubernetes workloads. The cool thing about it is that it is made easier to use thanks to the CSI driver plugin architecture and tightly integrated with the vSphere web client for added visibility.

VMware Cloud Services: Should you Take the Jump?

Deep dive into VMware Cloud Services with Cross-Cloud Services, Software-as-a-Service (SaaS), and Cloud Services Portal (CSP)

Over 20 years ago, VMware changed the IT industry, and the way applications were delivered, by pioneering virtualisation, paving the way for today's VMware cloud services.

For customers, it meant that individual operating systems were no longer tied to individual servers. The Virtual Machine (VM) became abstracted from its underlying hardware, and to an extent didn’t care which hardware vendor it was running on.

Compute virtualisation nowadays is a core capability on which most cloud providers are built.

Later, VMware would go further and virtualise other aspects of the data centre, from basic storage and networking to advanced networking and security services.

Cloud computing blossomed and the rise of AWS (Amazon) and Azure (Microsoft) saw increased demand for VMware cloud services.

With many customers wanting increased agility and simplicity, but with reduced costs and simpler operating models, delivering a virtual data centre in software is now an attractive option.

If the first chapter for VMware was virtualisation leader, then the second was private cloud leader through their Software-Defined Data Centre (SDDC). The next chapter is becoming a multi-cloud and applications leader, commoditising the underlying cloud provider to deliver choice, flexibility, and consistency.

In this article, we'll look at VMware Cross-Cloud Services, diving into VMware Cloud, VMware Software-as-a-Service (SaaS), and the VMware Cloud Services Portal (CSP).

Multi-cloud flexibility with VMware Cross-Cloud Services

At VMworld 2021, VMware announced its Cross-Cloud Services portfolio. VMware Cross-Cloud Services provides customers with the following benefits:

    • Speed – accelerating the journey to the cloud
    • Spend – improving cost efficiencies
    • Freedom – cross-cloud choices for maximum flexibility

VMware Cross-Cloud Services isn’t a single product or bundle, but a collection of services that deliver VMware’s strategic priorities. This includes:

    • VMware Tanzu – a modern application platform for building and running cloud-native applications
    • VMware Cloud – cloud-based infrastructure for running and modernising enterprise applications
    • VMware vRealize Cloud – cloud-based management for managing and monitoring multi-cloud applications
    • VMware Carbon Black and VMware NSX Cloud – networking and security across multi-cloud operations for all applications
    • VMware Workspace ONE and VMware Edge Compute Stack – enables the distributed workforce, and edge-native applications

Having said that, this article will focus specifically on VMware cloud services.

VMware Cloud

Does VMware have a Cloud?

VMware is predominantly a software company. Many of their services can now be consumed as SaaS (Software-as-a-Service), where VMware host and maintain the management plane. However, the services do not run in VMware’s own cloud, they use a public cloud infrastructure back-end, such as AWS, with a VMware cloud services branded front-end.

In the case of VMware cloud services, VMware are providing the software overlay, and the service wrap around. The physical hosting facility, hardware, and connectivity are provided by a public cloud hyperscaler or partner.

The customer has flexibility over which cloud provider or location they run their applications in, along with other benefits like operational consistency. Processes and tools can be standardised, and workloads can move seamlessly between platforms.

So, whilst VMware do not have their own specific cloud, they do run a variety of SaaS, IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service) offerings that make use of other cloud provider capabilities under the hood.

What is VMware Cloud?

VMware Cloud is a modern software-defined infrastructure that virtualises nearly all aspects of the data centre and provides a consistent operating platform as an overlay for commodity hardware or public cloud IaaS.

The core building blocks that make up the Software Defined Data Centre (SDDC) are:

    • vSphere & vCenter – Server virtualisation and management of virtual machines
    • vSAN – Storage virtualisation
    • NSX-T – Network virtualisation and security

When deployed together this stack is also known as VMware Cloud Foundation (VCF). VCF is the key component to an enhanced and consistent operating experience, across a variety of hardware and locations. As well as providing the digital base for VMware Cloud, VCF can also be used by customers with their own self-managed hardware.

Is VMware Cloud IaaS or SaaS?

As well as transitioning to subscription-based licensing, VMware now offers many of its solutions in SaaS form as VMware cloud services. The Software-as-a-Service model provides end-users and VI admins with hosted, or managed, versions of the same VMware software they know and love.
Some examples include:

    • vRealize Network Insight Cloud
    • Workspace ONE

This is not a comprehensive list of VMware cloud services (SaaS) solutions but gives you an idea of what to expect. In each of the cases above VMware manages the underlying hosting and infrastructure, including lifecycle management.

VI admins still control the configuration for their end-users, typically through the same admin interfaces as if the product was deployed on-premises. However, they do not need to worry about installation, patching or upgrades, high availability, backups, monitoring, load balancing, ingress, egress, and so on.

Since the IT team have no visibility into the underlying hosting platform, and simply utilise the software as a service, it is deemed a SaaS solution.

Infrastructure-as-a-Service (IaaS) is slightly different. Although the IT team still doesn’t need to concern themselves with maintaining the underlying hardware, they do have responsibility for how the virtual machines are set up.

VMware Cloud services in VMware’s Cross-Cloud Services portfolio is about delivering a modern cloud-based infrastructure. It can be complemented with other SaaS solutions, vRealize Suite being a great example, but the branding of VMware Cloud is predominantly concerned with running the VMware SDDC (Software-Defined Data Centre) on some form of IaaS (Infrastructure-as-a-Service) or self-managed infrastructure (in the case of private cloud).

Although VMware partner with thousands of different cloud providers to deliver their multi-cloud portfolio, they have several first-party solutions with partners AWS and Dell specifically that are sold and supported directly through VMware.

These solutions fall under the VMware Cloud branding, let’s take a closer look.

VMware Cloud on AWS

VMware Cloud on AWS is a jointly engineered solution between VMware and AWS that was first launched in 2017. The solution utilises the software referenced above as an overlay for AWS bare-metal hardware. The customer receives dedicated servers and storage in the form of hyper-converged nodes located at AWS data centres, with a fully managed service wrap around.

VMware are responsible for hardware maintenance and firmware upgrades, as well as the patching and lifecycle management of the VMware software stack.

The customer consumes the same VMware technologies they already know and love as a service, and can utilise this technology as a quick, low-risk method of migrating to the cloud, scaling out their data centres, or adding capacity for disaster recovery or one-time use cases.

In addition to software-defined compute, storage, and networking, VMware Cloud services include HCX (Hybrid-Cloud Extension). HCX allows on-premises and cloud-based vCenter Servers to be paired, and L2 networks to be extended between sites. This network stretch capability is what allows virtual machines to be migrated, or live-migrated, without changing IP address settings.

The use of third-party tools and existing processes continues, ensuring operational stability for things like change and incident management, backups, monitoring, anti-virus, and security. These areas can be improved over time as contracts expire or requirements change.

Furthermore, customers can start to integrate native AWS services to modernise existing applications where it makes sense to do so or to complement infrastructure services, for example using AWS S3 (Simple Storage Service) as a backup target.

VMware Cloud on AWS example setup

VMware Cloud on AWS Outposts

VMware Cloud on AWS Outposts brings the same VMware Cloud on AWS hardware, software, and operating model to the customer's data centre. A fully operational rack of AWS hardware is wheeled into the customer data centre or site, and managed by VMware in the same way as if it were in an AWS location.

VMware Cloud on AWS Outposts is ideal for edge locations with extremely low latency requirements, or regulated environments where services or data needs to be kept in a specific physical location.

VMware Cloud on AWS Outposts example setup

VMware Cloud on Dell EMC

VMware Cloud on Dell EMC brings the local cloud operating model to the customer's data centre. It is a similar concept to VMware Cloud on AWS Outposts, the difference being that this rack is made up entirely of Dell hardware, including Dell VxRail hyper-converged nodes.

All hardware, software, and the service wrap around are managed by VMware, or Dell depending on the commercial model. The customer provides the physical location for the rack to sit, the power source to plug into, and the core networking to patch through to the Top of Rack (ToR) switches.

VMware and Dell carry out a site survey, and then make up the rack and hardware to the customer requirements, before delivering to the site ready for use. IT teams can now focus on applications, projects, and modernising services or processes, rather than the operational overhead of infrastructure administration.

This type of local cloud operating model allows customers to subscribe to flexible 1- or 3-year terms, where previously they would need to procure and take ownership of hardware typically depreciated over a 5-year period. A ‘shadow’ or ‘dark’ node is included at no additional cost to ensure the cluster is always at full capacity, during planned maintenance or an unplanned host outage.

Use cases include customers switching to revenue-based IT funding, customers requiring an ‘on-premises’ solution for third party licensing constraints, managed VDI services, hybrid cloud, edge applications, low latency requirements, and highly regulated industries.

VMware Cloud Check Point

Let’s recap what we’ve seen so far:

VMware Cloud Foundation – Software-Defined Data Centre (SDDC) deployment onto a wide range of customer or partner-managed hardware, at on-premises, edge, and cloud locations

VMware Cloud on AWS – SDDC deployment onto AWS hardware in AWS regions

VMware Cloud on AWS Outposts – SDDC deployment onto AWS hardware in customer locations

VMware Cloud on Dell EMC – SDDC deployment onto Dell EMC hardware in customer locations

A comprehensive set of modern infrastructure services, all sold and supported directly through VMware, leveraging some of their longest-serving partnerships in Dell and AWS.

That’s great, but we know from VMware Cross-Cloud Services that VMware’s vision for running its software is all about flexibility and portability across all hyperscalers. In the same way that VMware commoditised server hardware with compute virtualisation many years ago, it aims to do the same with cloud IaaS.

Further VMware Cloud Partners

This is where VMware’s extensive partnerships come in. Beyond the services and partnerships with AWS and Dell that we’ve talked about so far, the VMware Cloud Foundation ‘franchise’ extends out much further.

Each of the examples below runs the VMware Software-Defined Data Centre on the referenced cloud provider, albeit with subtle differences:

    • Azure VMware Solution – Microsoft Azure
    • Google Cloud VMware Engine – Google Cloud
    • Oracle Cloud VMware Solution – Oracle Cloud
    • Alibaba Cloud VMware Solution – Alibaba Cloud
    • IBM Cloud for VMware Solutions – IBM Cloud

Furthermore, there are over 4000 additional partners worldwide that provide local cloud or hosting services through the following programs:

    • VMware Cloud Provider Partners (VCPP)

The key difference in this section is that the services are supported and maintained by the partner, rather than VMware. Should VMware support be needed to troubleshoot deeper issues, the partner will manage the support case and relationship with VMware on the customer's behalf. This isn't to say one service is better than another; each of those referenced above is a VMware partner and has a VMware Cloud Verified environment.

The VMware Cloud Verified accreditation is an assurance to the customer that they are working with a VMware partner validated for providing cloud and hosting services with VMware's best-in-class software.

Let’s take a closer look at a couple of the most popular options.

Customer, VMware, and Provider managed VMware Clouds

Azure VMware Solution

Azure VMware Solution (AVS) delivers the VMware SDDC, based on VMware Cloud Foundation, to Microsoft Azure. In a similar model to VMware Cloud on AWS, the core software building blocks of vSphere, vSAN, NSX-T, and HCX are deployed to bare metal hardware in the data centres used for Microsoft Azure services.

AVS is installed directly from the Azure Portal. Customers can use the same Azure Portal to manage virtual machines, or carry on using VMware vCenter Server. This gives IT teams flexibility and the best of both worlds as they transition not only applications and technology, but skills, processes, and third-party tools into the cloud.

In much the same way as VMware Cloud on AWS integrates with native AWS services, Azure VMware Solution has a similar private connection into the cloud providers’ backbone network. This connectivity enables hybrid applications and gradual refactoring of services. Quick wins for migration to other managed Azure services often include database and file shares, while the front end or application servers may continue to run in AVS.

With Azure VMware Solution customers can also take advantage of Microsoft Windows and SQL hybrid licensing benefits with extended security updates.

Azure VMware Solution example setup

Google Cloud VMware Engine

How VMware software is run on a public cloud provider is hopefully by now starting to make sense. With Google Cloud VMware Engine (GCVE) we’re following the exact same model of VMware Cloud services, whereby the VMware SDDC is deployed onto bare metal hardware in Google’s data centres.

Google are maintaining the service and all operational elements of the hardware, firmware, and VMware software. A private connection is provided into Google’s 100Gbps backbone network for integrating with native Google services.

Google Cloud commercialised many of their big data and machine-learning tools used internally. Some of these services are built to return billions of search results and YouTube videos daily. Start-ups or organisations wanting to innovate quickly will find a lot of value in using Google Cloud services, and GCVE allows them to do that without refactoring their entire back catalogue of applications in one go.

Google Cloud VMware Engine example setup

How Much Does VMware Cloud Cost?

Each of the VMware Cloud options mentioned in this article, such as VMware Cloud on AWS, and each of the VMware public cloud IaaS options, such as Azure VMware Solution, is priced per node. The latest pricing can be obtained from VMware, or directly from the public cloud provider, depending on the desired solution.

There are some very slight differences in terms of things like the node size (CPU, RAM, and raw storage), and licensing, for example Microsoft Services Provider Licensing Agreement (SPLA), but in general the cost includes:

    • Hyperconverged node: with the CPU, RAM, and raw storage specification listed
    • VMware software: vCenter, vSphere, vSAN, NSX-T, and HCX
    • Managed service wrap around: hardware, firmware, VMware software maintenance and lifecycle management
    • Hosting and facilities costs: such as building, power, cooling, racks, networking equipment and any other hardware

Typically, there is a 3-node minimum requirement per cluster, although this is continuously changing, and VMware Cloud on AWS also has the option for a 2-node cluster. Nodes can be purchased on-demand, per host per hour, or using a 1- or 3-year commitment known as reserved instances.

Here are some examples of additional costs that may need to be factored in:

    • Egress costs: public cloud providers will charge you per/GB for data taken out of the cloud
    • Private connection: a Direct Connect (AWS), Express Route (Microsoft), Cloud Interconnect (Google), or SD-WAN solution may be required as an alternative to VPN
    • Additional software licensing: there may be licensing stipulations, such as Microsoft or Oracle, depending on your environment
    • Native services: you may need, or want, to make use of additional cloud-based services in AWS, Azure, Google, etc. which are not included in the node price

VMware can assist with sizing, TCO (Total Cost of Ownership) comparisons, and choosing the right VMware Cloud services or IaaS solution for your business through their multi-cloud teams, regardless of whether you already have a preferred public cloud partner. For VMware Cloud Provider Partners or Sovereign Cloud providers, the software included and the pricing will vary by company and region.

What is the VMware Cloud Portal?

Finally let’s look at the VMware Cloud Portal, or CSP (Cloud Services Portal) as it’s also known. The CSP brings together all the VMware Cloud services into a single web-based user interface.

This includes the first-party VMware Cloud services we've looked at, such as VMware Cloud on AWS, along with a number of other VMware SaaS services that are beyond the scope of this article, arranged into the following categories:

    • Multi-cloud management
    • Application modernisation
    • Data and insight
    • Digital workspace
    • Intrinsic security
    • Virtual cloud network

From the VMware Cloud Portal customers can, for example, deploy the SDDC for their VMware Cloud on AWS environment, and enable or trial operational add-ons such as VMware vRealize Operations Cloud, Skyline Advisor, VMware Cloud Disaster Recovery, and so on, as well as manage their VMware Cloud services.

After the announcement of Project Arctic at VMworld 2021, it is expected that the long-term goal for the CSP is to bring in all available VMware IaaS partner offerings, like the ones we’ve examined in this article, including Azure VMware Solution and Google Cloud VMware Engine.

If achieved, this would make the VMware Cloud Portal a true multi-cloud enabler for organisations looking to migrate and scale across any public cloud provider on demand.

VMware Cloud Services Portal

To protect your VMware environment, Altaro offers the ultimate VMware backup service to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Wrap up

VMware has moved aggressively to fulfil its multi-cloud ambitions. The VMware Cloud Provider Partner (VCPP) program was first initiated in 2008, bringing managed VMware Cloud services to customers. Capabilities jumped up a notch following the 2017 partnership with AWS, after which VMware moved to partner with all major public cloud providers inside just 3 years.

VMware Multi-Cloud Strategy

Acknowledging that not all applications will be best suited to virtual machines, VMware also bet heavily on Kubernetes and other DevOps tooling throughout the multi-cloud timeline you see above.

Thinking back to VMware Cross-Cloud Services at the start of this article, we can see how the VMware Tanzu portfolio of services for modern applications complements VMware Cloud services to provide flexibility for both Virtual Machine and container-based workloads.

In summary, VMware has managed to pivot its services from being data centre focused, to now giving customers genuine use cases for continuing to run its software in the cloud. Perhaps more surprisingly, VMware has given industry competitors compelling reasons to partner with them on jointly engineered solutions, ultimately to the benefit of the customer with VMware Cloud services.

Enabling Proactive Intelligence and Support with VMware Skyline Advisor Pro

Learn how VMware Skyline Advisor Pro provides businesses with automated support intelligence for complex hybrid cloud environments

Today, most organizations have extensive infrastructures that span on-premises data centers, remote sites, and the cloud. Gone are the days when manual support efforts effectively ensured the infrastructure was operating normally. As virtualized environments have proliferated throughout the enterprise, workloads run on top of sophisticated and advanced infrastructure, requiring proactive monitoring and support processes.

VMware Skyline Advisor offers organizations a proactive support solution for VMware products and services and allows organizations to embrace an automated approach to troubleshooting, root-cause analysis, and support.

What is VMware Skyline?

VMware Skyline is a solution that allows businesses to save both time and money troubleshooting issues across their environments. It is a proactive self-service support solution integrated directly with VMware Global Support Services. It automatically collects and analyzes data about the VMware products and solutions in your environment and proactively identifies issues, vulnerabilities, and misconfigurations.

VMware Skyline also helps VMware support engineers identify root causes of issues more quickly with the logging functionality built into the solution. So, all the way around, VMware Skyline is a great tool that VMware customers can use in their environments to minimize the support tickets they need to place and continually tweak and tune their environment to run according to best practice recommendations.

VMware Skyline Advisor collects data from your SDDC and offers recommendations after processing them in the cloud

While professional support services are needed in production environments, the VMware Skyline solution provides automated intelligence that can help quickly pinpoint and resolve issues, often without escalating issues into the support queue.

What's more, VMware Skyline Advisor is a free solution for paid Production and Premier Services subscription customers, as well as vRealize Cloud Universal and Success 360 customers, so it does not require an additional paid license or yearly subscription fee.

VMware Skyline Advisor is one of the "no-brainer" products that VMware environments should be running, since it is free and can reduce the SLAs for root-cause analysis, break-fix scenarios, and preventative analysis. However, VMware has upped the game of what VMware Skyline can do with VMware Skyline Advisor Pro.

Overview of VMware Skyline Advisor Pro

Recently, VMware announced VMware Skyline Advisor Pro, building on the foundations of VMware Skyline Advisor. In addition, VMware has evolved the Skyline solution in the Pro release with much deeper intelligence and data analysis. It is also much quicker than its predecessor, VMware Skyline Advisor. These benefits lead to an even more advanced proactive support and analysis tool for VMware environments that helps customers quickly get to a resolution.

Benefits of VMware Skyline Advisor Pro

Note some of the benefits to the solution:

    1. Much faster analysis of data
    2. Additional Insights
    3. More flexibility and ease of use

1. Much faster analysis of data

One of the downsides of the original VMware Skyline Advisor solution is it can take up to 48 hours to surface issues that may be present or have developed in the environment. This time frame could lead to major issues being present in the environment for two entire days before the original VMware Skyline Advisor platform notifies you.

New with the Skyline Advisor Pro platform are accelerated analysis capabilities that allow surfacing issues and inventory changes within 4 hours, a 12X improvement. It is significantly faster than its predecessor and allows remediations to be put in place much more quickly. It also allows admins to see inventory changes, additions, and deletions reflected much more quickly, ensuring the view of your environment is up-to-date.

This enhancement is a significant improvement when you consider that proactive support and intelligence is getting the information more quickly, leading to faster resolution times. If your proactive support is slow to respond or give you this information, it offsets the benefits. The new VMware Skyline Advisor Pro solution puts proactive responsiveness in line with what you would expect. VMware will no doubt be looking at improving the responsiveness of the solution even further over time.

Benefits of the improved speed of Skyline Advisor Pro:

    • View the latest critical issues and security vulnerabilities as soon as possible
    • Validate any remediation responses as quickly as possible after these are introduced in the environment
    • Make sure your reporting is as up-to-date as possible

2. Additional insights

VMware has designed the new VMware Skyline Advisor Pro to be a more intelligent and insightful solution than its predecessor. When Skyline Advisor Pro is smarter, you as a customer can make smarter decisions based on the information presented. In addition, it provides additional insights to help customers better understand potential environmental issues.

First of all, VMware Skyline Advisor Pro supports a wide range of VMware production products and solutions found in customer environments. These include:

    • VMware vSphere
    • VMware vRealize Automation
    • VMware Cloud Foundation

It also identifies VxRail and VMware Validated Design solution deployments. In addition, a new feature found in the new Skyline Advisor Pro solution is End of Life Insights. As many who have managed production environments know, it can be complex and challenging to keep up with every product approaching end-of-life status and to keep track of support dates.

The new VMware Skyline Advisor Pro has a feature that alerts customers to installed solutions no longer receiving General Support and Technical Guidance from VMware. When solutions may no longer receive new patches, upgrades, and bug fixes, it is good to know. Skyline Advisor Pro allows you to keep this information visible across your VMware environment.

The last thing you want is to be in an unsupported condition in your production environment without having sufficient time to plan your upgrades to minimize any disruptions these may cause in your environment.

Another great new feature built into the new VMware Skyline Advisor Pro is Historical Insights. Developing trends in your environment is key to understanding the root causes of events, problems, and issues as these come up. It is another area where the new benefits of VMware Skyline Advisor Pro come to light.

With historical insights, you have visibility of key events in the environment and how these relate to findings and recommendations triggered or remediated by a change, over a configurable amount of time. In other words, it might be difficult to correlate a configuration change to an issue that happens three days later, or to understand what may have triggered a new finding that does not seem to be directly related to a configuration change.

VMware Skyline Advisor Pro’s historical insights provide this deep understanding and trending of changes, actions, and other environmental triggers and how these relate to new findings and remediations.

Another value-added benefit of VMware Skyline Advisor Pro's offering is Proactive Insights Reports for Success 360 customers. So what is VMware Success 360? It is an offering that continually guides VMware customers through all the stages of their journey with VMware solutions. As a result, it helps businesses consistently realize value and achieve the results they are looking for across the portfolio of VMware product offerings.

The Proactive Insights report is exclusive to Success 360 customers and is delivered by a dedicated team of VMware professionals explicitly assigned to your business. It includes a bi-weekly check-in to go over “health checks” and other information on issues and remediations discovered and actioned in the environment. It helps businesses achieve transparency from IT to business stakeholders, assisting with future planning.

3. More flexibility and ease of use

VMware Skyline Advisor Pro has been made even easier to use and includes even more flexibility to interact with the platform. VMware has introduced a new Insights API that provides a programmatic way to interact with Skyline Advisor Pro, allowing customers to integrate Skyline’s findings and recommendations with third-party tools.

Customers can create customized integrations with ticketing systems and other automation tooling to create automated workflows based on the Skyline Advisor Pro’s findings.
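
As an illustration of that programmatic access, the sketch below shows the usual VMware Cloud Services pattern of exchanging an API token for an access token before calling an API. The Skyline Insights API path at the end is a placeholder only, not the real endpoint; refer to the official Insights API documentation for the actual routes and payloads.

# Exchange a VMware Cloud Services API token for a short-lived access token
ACCESS_TOKEN=$(curl -s -X POST \
  'https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "refresh_token=${CSP_API_TOKEN}" | jq -r '.access_token')

# Call the Skyline Insights API (placeholder path, see the official documentation)
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  'https://skyline.vmware.com/<insights-api-endpoint>'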

The information available – Skyline Advisor Pro vs. Skyline Advisor

  • Account, Environment, and Collector details: available in both Skyline Advisor Pro and Skyline Advisor.
  • Inventory details for VMware vSphere™, NSX-V, NSX-T, Horizon 7, vRealize Operations Manager, VMware Cloud Foundation, vRealize Suite Lifecycle Manager, and vRealize Automation: available in both.
  • All findings discovered by Skyline, with both individual and consolidated reporting: Insights Reports (OSR 2.0) in Skyline Advisor Pro, Operational Summary Reports (OSR) in Skyline Advisor.
  • Historic Findings: Skyline Advisor Pro only.
  • Streamlined support bundle upload capabilities with Skyline Log Assist: available in both.
  • Theme: Skyline Advisor Pro only. You can use the Light and Dark theme toggle to move to the thematic view you prefer; click the toggle option next to Settings at the top to switch between Light and Dark themes.

Deploy Skyline Collector and configure the connection to Cloud Services

To take advantage of VMware Skyline Advisor Pro, organizations must deploy the VMware Skyline Collector. The Skyline Collector is a specialized OVA appliance, downloaded from VMware, that centrally collects information and data from the on-premises environment and streams it to the VMware Cloud for analysis.

Starting with the Skyline Advisor November 2021 release, Skyline Advisor Pro is introduced. To activate the VMware Skyline Advisor Pro version of the solution, you need to ensure you install or upgrade to the latest Skyline Collector 3.0. Let’s take a look at the deployment process of the Skyline Collector 3.0.

First, we need to download the VMware Skyline Collector 3.0.0 appliance. Log in to your VMware Customer Connect portal and download the OVA file.

Downloading the VMware Skyline Collector v3.0 appliance from VMware

Beginning the deployment of the VMware Skyline Collector v3.0 appliance

Name and select the folder for the VMware Skyline Collector appliance

Select the compute resource

Review the initial deployment details

Accept the EULA

Select the storage and storage policy for the Skyline Collector appliance

Select the network for the Skyline Appliance

An important screen during the deployment of the VMware Skyline Collector v3.0 appliance is the Customize template screen. On this screen, you customize the root password for the appliance and configure the network settings.

Customize the template for the VMware Skyline Collector

Ready to complete the OVA deployment wizard and begin the deployment.

Ready to begin the deployment of the OVA appliance
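
If you would rather script the appliance deployment than click through the vSphere client wizard, ovftool can push the OVA to vCenter. This is only a rough sketch: the OVA file name, credentials, inventory path, datastore and network names are placeholders, and the appliance's OVF properties (root password, network settings) would still need to be supplied or configured afterwards.

# Deploy the downloaded OVA with ovftool (all values below are placeholders)
ovftool \
  --acceptAllEulas \
  --name='skyline-collector' \
  --datastore='my-datastore' \
  --network='VM Network' \
  ./VMware-Skyline-Collector.ova \
  'vi://administrator%40vsphere.local@vcenter.example.com/MyDatacenter/host/MyCluster'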

Configure the connection to cloud services

After deploying the VMware Skyline Collector v3.0, the next step is to connect the appliance to the VMware Cloud Services. The collector is “headless” on its own as it requires cloud analysis and intelligence to analyze the data captured from the on-premises environment.

After deploying the VMware Skyline Collector v3.0, we need to run through a wizard to connect the appliance to the VMware Cloud Services portal. This process ties your appliance to your VMware Cloud Services organization, your support entitlement, and your licensed environment. Browse to your VMware Skyline Collector IP address or FQDN in a web browser to start this process and log in with the default credentials: admin/default.

Log in to the VMware Skyline Collector

You will be prompted to change your password.

Change your password for the VMware Skyline Collector

It will begin the initial configuration of the Skyline Collector. The configuration wizard that follows links your Collector to the VMware Cloud Services portal and performs the initial setup of the appliance. It also allows you to link your on-premises VMware technologies with VMware Skyline.

Network configuration and testing

Decide if you want to opt into the Customer Experience Improvement Program (CEIP).

Make your choice on the CEIP program

The next step is Collector registration. On this screen, you have to enter a registration token into your Skyline Collector, which you obtain from your VMware Cloud Services account. Leave the browser tab connected to your Skyline Collector open, then open a new window or tab and log into your VMware Cloud Services account. We will return to the initial setup wizard in just a moment to enter the Collector registration token.

Collector Registration Token request

After logging into the VMware Cloud Services portal, initiate adding a new Skyline collector.

Add a new Skyline collector to your environment

You will see a registration token automatically generated on the screen that follows. You can copy the token from the page and also generate a new registration token.

Generating a registration token for registering the Skyline Collector

Now that the Collector registration token is copied, switch back to the window or tab with your initial Collector setup and paste it into the Collector Registration token field of the Collector Registration step.

Registering the new VMware Skyline Collector

On the Continue Configuration screen, you will see a note about the successful registration of the VMware Collector and the need to have all your pertinent account information ready for your other VMware services and solutions. After this screen, the Initial Configuration wizard will connect the Skyline Collector to the different VMware solutions you have running in your environment.

Continue configuration of VMware Skyline collector

Next, assign a friendly name to your new Skyline Collector.

Assigning a friendly name to your Skyline Collector

You can configure how you want the VMware Skyline Collector appliance to check for and apply upgrades.

Enable Upgrade configuration settings

The first important connection to make is connecting VMware Skyline to vCenter Server. This is a mandatory connection that must be made to continue configuring the VMware Skyline Collector.

Configure the vCenter Server connection

The remaining connections are optional. These include connections to NSX-V, NSX-T, VMware Horizon, VMware Cloud Foundation, vRealize Operations, and vRealize Automation.

Configure NSX-v connection from the Skyline Collector

Configure NSX-T connection

Configure VMware Horizon View connection

Configure a connection with vRealize Operations

Configure a connection to VMware Cloud Foundation

Configure a connection to vRealize Suite Lifecycle Manager

Configure a connection to vRealize Automation

The Final Step screen shows the connections you have made during the Initial Configuration and gives a summary of the connection information.

View the connections made from VMware Skyline

After clicking Finish, you will see the System Status for the VMware Skyline Collector and the connections made to on-premises solutions.

VMware Skyline Collector overview

After deploying the VMware Skyline Collector v3.0, you will have the option to upgrade your environment to the VMware Skyline Advisor Pro offering. The process to do this is simple. You just need to click the Activate Advisor Pro button.

Option to upgrade to VMware Skyline Advisor Pro

As mentioned, you will need to have all your Skyline Collectors upgraded to v3.0 collectors before you will be able to activate the Skyline Pro offering. If you have legacy collectors, you will see the message below.

Error activating Skyline Pro due to legacy collectors

After upgrading your collectors, you will be able to perform the upgrade to Skyline Advisor Pro.

The VMware Skyline Advisor Pro dashboard

How to Implement VMware Skyline for Proactive Support

One of the tremendous benefits of VMware Skyline is the proactive support capabilities it brings to the table. These features include issue findings and automatic log uploads that can shortcut the process of getting VMware technical support. You can see the immediate benefits that VMware Skyline brings in terms of proactive support: admins are alerted to immediate and active findings in the environment, each assigned a criticality rating.

Findings displayed in VMware Skyline Advisor Pro

There is an option in the Log Assist section to Auto Approve Log Requests. It is a per-user setting that will auto-approve transfer requests for Support Requests.

Proactive log gathering from VMware Skyline Advisor Pro

To protect your VMware environment, Altaro offers the ultimate VMware backup service, letting you quickly back up and replicate your virtual machines. We work continually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Is it Worth it?

Organizations today have a myriad of hardware, software solutions, and application stacks that make it challenging to monitor and maintain using manual processes and tasks. Proactive, automated solutions to surface issues provide quick time to value for most organizations.

VMware Skyline Advisor and VMware Skyline Advisor Pro provide proactive, automated intelligence for your VMware software stack to understand and surface issues, analyze trends, and verify remediation. It is also available at no added cost to current VMware customers and Success 360 subscribers.

The VMware Skyline Advisor solution performs the heavy lifting of proactive best-practice and issue discovery and resolution in VMware environments. In addition, it gives customers automated support intelligence that leverages the VMware Cloud Services platform to analyze, categorize, and assign criticality ratings to issues.

Organizations of any size realize benefits with VMware Skyline Advisor and Skyline Advisor Pro by taking the manual effort out of surfacing issues, verifying remediation of problems, and producing historical and end-of-life reporting and insights. You can learn more about VMware Skyline Advisor from the official VMware resource here: VMware Skyline | Support

The post Enabling Proactive Intelligence and Support with VMware Skyline Advisor Pro appeared first on Altaro DOJO | VMware.

VMware Sovereign Cloud and How Legislation Affects Your Data https://www.altaro.com/vmware/sovereign-cloud/ https://www.altaro.com/vmware/sovereign-cloud/#respond Fri, 14 Jan 2022 14:43:56 +0000 https://www.altaro.com/vmware/?p=23713 Find out where data sovereignty fits in the current IT landscape and how VMware helps ensure legislations are enforced by cloud providers

The post VMware Sovereign Cloud and How Legislation Affects Your Data appeared first on Altaro DOJO | VMware.


VMware Sovereign Cloud is an initiative by the company to show customers that data sovereignty and compliance in the cloud are being worked on, and to ensure that those customers can rely on VMware's services to safely store their data and workloads with openness, transparency, data protection, security, and portability in mind.

The concept of data sovereignty is not new per se but it has organically become an important topic to consider among large organizations and government entities following the rise of commodity cloud computing, cyber security threats, the Snowden leaks…

VMware’s own definition of sovereignty is the following:

"Sovereignty is the power of a state to do everything necessary to govern itself, such as making, executing, and applying laws; imposing and collecting taxes; making war and peace; and forming treaties or engaging in commerce with foreign nations."

"Data sovereignty refers to data being subject to the privacy laws and governance structures within the nation where data is collected."

Data Sovereignty: The Challenge of the Data Decade

You may be familiar with Moore’s law that was formulated around 1970 which stated that CPU speeds will double every year and hasn’t been discredited in 2021, over 51 years later. While the global data growth doesn’t follow the same dramatic trend, it does evolve exponentially. In fact, back in 2018, IDC estimated that over 175 zettabytes will be generated each year by 2025.

Annual size of the global datasphere – Sponsored by Seagate from IDC

Environments that store all of their data on-premises know where the data is, when it leaves the network, where it goes, and how it is used. However, the advantages of the cloud are no longer subject to debate; it is an accepted fact that cloud computing solves many a challenge, and most companies leverage it in some way or another.

With that said, storing data in the cloud means it is no longer under your control but the cloud provider's, and that provider could be in another country that abides by different laws; this is where the discussion begins. As you can see in the trend below, the amount of data stored in the cloud is growing.

Data storage is shifting from on-premise data centers to public cloud providers

Enter data and cloud sovereignty. Data sovereignty (and indirectly cloud sovereignty) refers to countries’ jurisdiction on data compliance and how it relates to the concepts of ownership, who is authorized to store data, how it can be used, protected, stored and what would happen should the data be used ill-intentionally. With the growth of data storage in the cloud, public entities, large enterprises and government bodies are eager to ensure that their cloud-based data is treated right and that they don’t need to worry about it.

Among recent examples of sovereign cloud initiatives, we find:

    • The principality of Monaco recently unveiled a sovereign cloud where all the shareholders are residents along with the state owning a controlling stake in it.
    • The European Commission is spearheading the Franco-German Gaia-X project to create a federated and secure data infrastructure. The goal is an open, transparent and secure digital ecosystem, where data and services can be made available, collated and shared in an environment of trust.

The European cloud market was allegedly worth €53 billion in 2020 and is expected to be worth between €300 billion and €500 billion by 2027-2030, hence VMware’s eagerness to be ahead in the cloud sovereignty market.
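
For context, a rough growth-rate sanity check on those quoted figures can be sketched in a few lines of Python; the 2020 value and the 2027-2030 range come from the paragraph above, while the arithmetic (and the assumption of smooth compound growth) is ours.

    base = 53  # EUR billion, 2020
    for target, year in [(300, 2027), (500, 2030)]:
        cagr = (target / base) ** (1 / (year - 2020)) - 1
        print(f"EUR {target}B by {year} implies roughly {cagr:.0%} growth per year")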

Introducing the New VMware Sovereign Cloud Initiative

Up until recently, data sovereignty was ensured by cloud providers through clauses in contracts regarding several areas of the data lifecycle. While large enterprises have departments with dedicated people to deal with all of this, smaller structures can’t necessarily afford the overhead or simply don’t have the resources internally to understand the risks and benefits associated with data sovereignty.

VMware Sovereign Cloud streamlines the process of ensuring data sovereignty with cloud providers

One needs to ensure at the very least that:

    1. The cloud infrastructure is secured, modern and kept up to date at all times.
    2. Customers’ data sovereignty is assured and guaranteed.

It is with these challenges in mind that VMware sovereign cloud aims to simplify and streamline the process of cloud sovereignty by offering its customers a certified cloud offering through partnerships with cloud providers. The VMware Sovereign Cloud Initiative is built on a framework comprised of a number of rules to abide by in order to be a certified cloud provider. VMware Sovereign Cloud providers must meet applicable geographic-specific sovereign cloud requirements, regulations, or standards where their Sovereign Cloud is made available.

In fact, you can already review the list of VMware Sovereign Cloud providers on cloud.vmware.com, where you find all VMware cloud solutions. As of the time of this writing, there are currently 9 VMware Sovereign Cloud providers, but the list will grow as others get on board. Once a provider checks all the boxes of the VMware Sovereign Cloud Initiative framework, it will get the VMware Sovereign Cloud designation.

VMware Sovereign Cloud providers can be filtered in cloud.vmware.com

Ensuring data privacy and compliance

Sovereignty has become an important part of national policy, and customers are starting to get on board with this train of thought. VMware Sovereign Cloud is here to help them navigate these waters, with verified VMware Sovereign Cloud providers ensuring data remains where the workloads run.

Environmental, Social & Governance (ESG) are the 3 VMware Sovereign Cloud strategies

In order to certify providers as Sovereign Cloud Providers, VMware is developing a two-phase approach to tackle the problem:

VMware Sovereign Cloud Framework

This framework developed by VMware includes guiding principles, best practices, and technical architecture requirements to adhere to the data sovereignty requirements of the specific jurisdiction in which that cloud operates. For instance, France requires data to be stored in the European Union while Germany requires localization either in Germany or the EU depending on the level of data sovereignty.

The framework is built around 5 principles:

      • Data sovereignty and jurisdiction control
      • Data access and integrity
      • Data security and compliance
      • Data independence and mobility
      • Data innovation and analytics

VMware Sovereign Cloud Initiative

The VMware Sovereign Cloud initiative is a designation for Providers that self-attest and meet all the requirements of the VMware Sovereign Cloud framework. They must complete an assessment on their Cloud environment (design, build, operations…) and attest that they check all the boxes based on the VMware Sovereign Cloud framework. Among other things, VMware Sovereign Cloud providers must follow the VMware Validated Designs (VVD) for Cloud providers to be VMware Cloud verified.

Promoting VMware Multi-Cloud Offerings

Although it wasn’t made obvious during VMworld 2021 or in the official communications, pushing VMware Sovereign Cloud may also be a way to open the door to multi-cloud offerings. Organizations and public bodies with data sovereignty concerns aren’t likely to go through all the hoops of data sovereignty compliance with several cloud providers for sports.

VMware Cross-Cloud Services will simplify the adoption of multi-cloud architectures

Embracing cloud computing isn’t necessarily easy at first. Leveraging several cloud providers for specific features or redundancy reasons multiplies the hurdles along the way. This is why VMware introduced their new multi-cloud offerings with VMware cross-cloud services and communicated so much about it. Now add data sovereignty to the mix and you get a tangled mess that will be tricky to make sense of.

From a business perspective, cloud services are a very lucrative business since they bring recurring revenue and centralize customers while consolidating the maintenance and support effort on the VMware side of things. With the VMware Sovereign Cloud initiative, I believe it will lift a load off decision makers' shoulders, as they will only have to select among the available VMware Sovereign Cloud providers and choose whatever service they are interested in, such as VMware Disaster Recovery as a Service (DRaaS).

To protect your VMware environment, Altaro offers the ultimate VMware backup service, letting you quickly back up and replicate your virtual machines. We work continually to give our customers confidence in their backup strategy.

The Road Ahead

It is no wonder we are in what is referred to as the "data decade" given the volume of data currently generated, and projected to be generated, by consumers, enterprises, and public entities. While cloud adoption was rather slow at the beginning of the last decade, the emergence of use cases and cloud providers in the last few years has made it an integral part of the modern digital ecosystem. VMware's global strategy is a testament to this trend, given the resources they've invested in developing their multi-cloud offering and partnerships with various providers.

VMware Sovereign Cloud is one of the components in this global strategy but it will certainly be an important one given the customers concerned by these problems. Those include government bodies and highly regulated large entities that usually allocate large chunks of their budget towards securing their data which at the end of the day boils down to data sovereignty.

With the VMware Sovereign Cloud Initiative, the company is positioning itself at the forefront of this topic by removing the complexity of cloud sovereignty to promote multi-cloud offerings. Securing a large customer base on this solution will likely generate significant revenue streams, and customers will be unlikely to switch unless they have a very good reason, given the importance of compliance nowadays.

The post VMware Sovereign Cloud and How Legislation Affects Your Data appeared first on Altaro DOJO | VMware.

Introduction to VMware Tanzu https://www.altaro.com/vmware/vmware-tanzu/ https://www.altaro.com/vmware/vmware-tanzu/#respond Fri, 03 Dec 2021 12:50:42 +0000 https://www.altaro.com/vmware/?p=23448 What is VMware Tanzu, and how does it help solve the complex challenges of app modernization? Find out in this extensive article!

The post Introduction to VMware Tanzu appeared first on Altaro DOJO | VMware.


The cloud revolution has brought about many changes in the enterprise. Following the microservices cloud model, many organizations are looking to modernize their business-critical applications to have more agility, scalability, high availability, and ease of deployment. However, manually managing containers and containerized infrastructure can be complex and challenging. Kubernetes helps to solve many of the container management challenges. However, Kubernetes can be challenging to configure and maintain as well.

VMware Tanzu is a solution that helps to take the complexity out of managing containers and containerized applications using Kubernetes. However, it also includes a rich, robust ecosystem of solutions to extend modern application development. What technologies are associated with app modernization, and what challenges are faced with using containerized workloads? What is VMware Tanzu? How does it help customers realize the end goal of app modernization? Where does it fit in the DevOps cycle? Let's dive right in!

App Modernization

Businesses today are moving at a rapid pace and need to have the agility to deploy applications more quickly and efficiently. Using modernized infrastructure allows organizations to achieve the agility and capabilities required to meet their current and future business demands and modernize their applications.

When we look back over the past 20 years, there have been several revolutions in enterprise technology. The virtualization revolution certainly was the beginning of this shift in modernizing applications and using more modern and abstracted approaches to solve business problems.

Most recently, the cloud revolution has once again changed how businesses are using technology to solve problems. Using cloud technologies has allowed businesses to accelerate how they build, configure, and deploy infrastructure and applications. It has also enabled building applications effectively using microservices. Legacy monolithic applications are large, complex, and difficult to deploy at scale and with any agility. App modernization often involves breaking these monolithic applications down into microservices architectures that make it much easier to develop applications with the speed and agility needed.

Applications can be deployed and updated much more quickly and with DevOps processes using the microservices approach. Application modernization involves updating older software for newer computing approaches. It includes new languages, frameworks, and modern infrastructure. In addition, it helps businesses to introduce efficiencies into current processes and solutions filled with technical debt.

Why do businesses want or need to modernize their applications?

When applications are modernized, organizations can reap many benefits, including reducing the number of resources required to run a business-critical application, increasing the frequency of deployments, realizing the benefits of continuous integration/continuous delivery (CI/CD), and providing better resiliency against failures.

Much of the app modernization process that allows businesses to break down applications into microservices architectures requires something smaller and more agile than virtual machines. Modern microservices architectures rely on containers. What are containers?

Containers

Containers are a key technology in application modernization. They are a cloud-centric method for packaging, deploying, and operationalizing applications and workloads. Containers are focused on applications and contain all the requirements needed for an application to run. Containerized applications can be moved or deployed on a new container host, and the application is unaffected.

Containers, similar to virtual machines, provide an abstraction layer. Containers are an abstraction at the application layer that combines apps and dependencies. Containers share the OS kernel with other containers, each running as isolated processes in userspace. In addition, multiple containers can run on the same container host. As a result, containers take up less space than VMs. Where virtual machines can take up several gigabytes worth of disk space, container images are typically tens of MBs in size, so they are much leaner than VMs. It means using containers instead of VMs allows running more applications and requires fewer VMs and operating systems.

The difference in the architecture of containers and virtual machines (Image courtesy of Docker)

Docker notes the following points regarding containers:

    • A container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another
    • They are available for both Linux and Windows-based applications.
    • Containerized software will always run the same, regardless of the infrastructure
    • Containers isolate software from its environment and ensure that it works uniformly despite differences, for instance, between development and staging

Kubernetes

On their own, containers do not have an orchestration engine that "pulls strings" behind the scenes to spin up new containers for scaling up workloads or to account for a failed container host. Kubernetes is the orchestration engine that provides the automation behind the scenes, allowing businesses to use containers in the way we have been using VMs with vSphere and other hypervisors for years now. In addition, it provides the management and orchestration layer that can manage the underlying container infrastructure so your applications can be resilient to downtime.

If a container fails, it is much more efficient to have an automated system process that can automatically spin up another container and reprovision the application. This exact use case is the “bread and butter” of Kubernetes.
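
To make that concrete, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes); the Deployment name, labels, and image are placeholders chosen for illustration, not values from this article. Declaring three replicas is what lets Kubernetes replace a pod automatically if one fails.

    from kubernetes import client, config

    config.load_kube_config()  # uses your current kubeconfig context

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps three pods running; a failed pod is rescheduled
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)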

According to the official documentation found on Kubernetes.io:

    • Kubernetes is “a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.”

What problems does Kubernetes solve?

    • It allows applications running on containers to be highly available – You can lose a container host or process without seeing downtime or disruption in your application
    • It provides easy elasticity in your infrastructure – Kubernetes can control when new pods are scheduled for new resources and when resources are idle and need to be spun down.
    • It provides the backend scheduling of where resources are best suited to run and on which container host(s)
    • It allows quickly adding new container hosts to the Kubernetes cluster
    • It allows developers to interact with containers via an API interface

Below is a look at the components of Kubernetes architecture.

Kubernetes architecture components (Image courtesy of Kubernetes.io)

Challenges

For many businesses, re-architecting their applications to use containers provides a robust platform for application modernization. However, containers have their own complexities and natively lack the management and orchestration needed for the required availability and elasticity. While Kubernetes solves many of the management challenges, it can be challenging to configure, maintain, and support.

Businesses shifting to modernized applications running on top of containers may find the tooling, configuration, and infrastructure much different from what they are familiar with coming from the world of virtual machines. What if organizations could use their existing virtual machine infrastructure and the management tools and capabilities they are already familiar with to manage their containerized infrastructure, orchestrated by Kubernetes?

What is VMware Tanzu?

At VMworld 2019 US, in August 2019, VMware unveiled a suite of products that helps organizations solve the many challenges of modernizing their applications – VMware Tanzu. To understand what it is exactly we can draw similarities with the VMware vRealize Suite of products. However, it is not a single solution or product, but rather it is multiple products under the name of VMware Tanzu. The solutions contained in VMware Tanzu include the following:

    • Tanzu Application Service – VMware Tanzu Application Service is a modern application platform for enterprises that want to continuously deliver and run microservices across clouds, providing runtimes for Java, .NET, and other platforms such as Node apps
    • Tanzu Build Service – automates container creation, management, and governance at enterprise scale
    • Tanzu Application Catalog – A curated catalogue of production-ready open source software from the Bitnami collection
    • Tanzu Data Services – Simplify your migration to the cloud with VMware Tanzu Data Services. It’s a portfolio of on-demand caching, messaging, and database software on VMware Tanzu for development teams building modern applications. It includes GemFire, RabbitMQ, SQL and Greenplum.
    • Tanzu Kubernetes Grid – The Enterprise Kubernetes runtime built into vSphere
    • Tanzu Mission Control – a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across teams and clouds
    • Tanzu Observability – Monitor everything from full-stack applications to cloud infrastructures with metrics, traces, span logs, and analytics
    • Tanzu Service Mesh – Monitor and secure the microservices driving your business across any runtime and any cloud with an enterprise-class service mesh

VMware Tanzu

VMware Tanzu, in short, allows organizations to run Kubernetes-powered containers across cloud environments and even natively in their VMware vSphere environments. Running VMware Tanzu natively in vSphere brings about many benefits to customers. First, it allows much more easily configuring, managing, and operationalizing Kubernetes-powered containers.

Second, it enables existing VMware vSphere customers who use vSphere as the underlying hypervisor in their environment to use the same set of tools and management interface to manage their existing infrastructure comprised of virtual machines and modernized applications running on Kubernetes.

As shown by the list of solutions contained under the umbrella of VMware Tanzu, it is more than just Kubernetes. It is the entire package of services and solutions for building, deploying, running, and managing modern applications. The focus is applications and not infrastructure.

How does VMware Tanzu work?

Taking a closer focus on how VMware Tanzu works in a VMware vSphere environment, it is essential to understand that VMware has re-engineered VMware vSphere 7 from the ground up to have the native functionality built into the hypervisor to run Kubernetes. It means there is no "bolt-on" product or third-party solution needed to run Kubernetes in VMware natively.

Formerly known as "Project Pacific," it uses Kubernetes to change vSphere, extending its functionality by embedding Kubernetes inside the vSphere control plane. Containers now appear in the vSphere Client as "first-class citizens" along with VMs and are managed accordingly. VM and container runtimes are converged using vSphere Pods. VMware vSphere Native Pods provide many benefits, including being lightweight and secure. VMware touts these can be even faster than bare-metal containers due to the efficient way vSphere handles CPU scheduling.

Kubernetes native vSphere platform

With Kubernetes embedded into the control plane of vSphere, it allows container compute, storage, and networking resources to be managed alongside the traditional VM. It provides tremendous benefits from a management and operational standpoint. It means IT operations can manage Kubernetes container objects from the vSphere Client. The native VMware vSphere Native Pods allow all the traditional PowerCLI scripts, third-party tools, and other tools and mechanisms to work with Kubernetes as it does with VMs in VMware vSphere.

VMware implements what is known as a Supervisor cluster that is a special kind of Kubernetes cluster that uses the ESXi host as a worker node. It implements what is called a Spherelet (a special kind of Kubelet) into ESXi. It runs not in a VM but in ESXi itself.

The vSphere Supervisor cluster is a Kubernetes cluster of ESXi

Guest Clusters are created to run general-purpose Kubernetes workloads. The guest clusters run inside virtual machines on the Supervisor Cluster and are a fully upstream compliant Kubernetes distribution which allows full compatibility with existing Kubernetes applications.

The guest cluster control plane in the Supervisor Cluster

Project Pacific is the new architecture in ESXi that brings VMware Tanzu to vSphere and is a component of the much broader VMware Tanzu solution.

What is VMware Tanzu Kubernetes Grid?

Arguably, the central component to the VMware Tanzu solution is VMware Tanzu Kubernetes Grid (TKG). The VMware Tanzu Kubernetes Grid solution is the specialized Kubernetes distribution tested, signed, and supported by VMware. It includes the following supporting components:

    • Registry
    • Networking
    • Monitoring
    • Authentication
    • Ingress control
    • Logging services

All of the above components are required for production-ready Kubernetes clusters. In addition, it provides organizations with the consistent, upstream-compatible, regional Kubernetes distribution that is ready to host all Kubernetes workloads that can run inside a Kubernetes cluster.

Tanzu Kubernetes Grid can be deployed across both on-premises and cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. Take a look below at the Tanzu Kubernetes Grid instance architecture.

Tanzu Kubernetes Grid architecture (Image courtesy of VMware)

VMware Tanzu Kubernetes Grid Service

Closely related to Tanzu Kubernetes Grid is the Tanzu Kubernetes Grid Service, or TKGS. The Tanzu Kubernetes Grid Service (TKGS) is crucial in the VMware Tanzu portfolio of products. It allows creating and operating Tanzu Kubernetes clusters natively in vSphere with Tanzu. In addition, the service can be invoked using the Kubernetes CLI.
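
As an illustration of that Kubernetes-native workflow, the hedged sketch below submits a TanzuKubernetesCluster custom resource through the Kubernetes API using the Python client instead of kubectl. The API group/version and spec fields reflect the v1alpha1 CRD as commonly documented and may differ between releases; the context, cluster name, namespace, VM classes, and storage class are placeholders.

    from kubernetes import client, config

    # Context created by "kubectl vsphere login" against the Supervisor Cluster (placeholder)
    config.load_kube_config(context="supervisor-context")

    tkc = {
        "apiVersion": "run.tanzu.vmware.com/v1alpha1",
        "kind": "TanzuKubernetesCluster",
        "metadata": {"name": "demo-cluster", "namespace": "dev-namespace"},
        "spec": {
            "distribution": {"version": "v1.20"},
            "topology": {
                "controlPlane": {"count": 1, "class": "best-effort-small",
                                 "storageClass": "vsan-default-storage-policy"},
                "workers": {"count": 3, "class": "best-effort-small",
                            "storageClass": "vsan-default-storage-policy"},
            },
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="run.tanzu.vmware.com", version="v1alpha1",
        namespace="dev-namespace", plural="tanzukubernetesclusters", body=tkc)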

What is vSphere with Tanzu?

In reading about and looking at VMware Tanzu, you may see references to VMware Tanzu and VMware vSphere with Tanzu. What is the difference? When VMware Tanzu was initially released, it required the full modern VMware SDDC stack powered by VMware Cloud Foundation, vSAN, and VMware NSX-T for the software-defined networking component.

However, for most VMware environments, many customers use traditional VMware vSphere implementations without a combination of VMware Cloud Foundation, VMware vSAN, and VMware NSX-T. While VMware notes the full SDDC experience is best consumed with VMware Cloud Foundation, this requirement left the majority of VMware's customer base, running some 70+ million workloads, unable to take advantage of VMware Tanzu.

With the release of VMware vSphere 7.0 Update 1, VMware officially solved this glaring problem for customers. With that release, VMware officially implemented vSphere with Tanzu. What is vSphere with Tanzu, and how can it benefit VMware customers wanting to take advantage of what VMware Tanzu has to offer?

VMware vSphere with Tanzu is the native vSphere offering that allows deploying the VMware Tanzu solution directly into vSphere, without the requirement of having VMware Cloud Foundation, VMware vSAN, or VMware NSX-T networking. In addition, VMware vSphere with Tanzu allows customers to bring their own storage and networking to the VMware Tanzu solution, which lifts many of the restrictions found previously with VMware Tanzu.

The new vSphere with Tanzu offering allows customers to drop in Kubernetes to their vSphere 7.0 Update 1 and higher environments and administer Kubernetes from the same familiar vSphere Client interface. Note the following benefits of vSphere with Tanzu:

    • Allows customers to consume enterprise-grade Kubernetes with existing network configurations and block or file storage
    • Customers can use the native vSphere Distributed Switch for Kubernetes clusters networking
    • Customers can choose a load balancer between either the HAProxy or the NSX Advanced Load Balancer solution
    • It allows implementing role-based access to the vSphere-powered Kubernetes cluster in minutes and takes the heavy lifting out of the security configuration

The new vSphere with Tanzu solution is enabled using the Workload Management dashboard found natively in the new vSphere Client UI.

Enabling Workload Management in vSphere with Tanzu

Below is a screenshot of a vSphere with Tanzu environment running a Workspace Cluster along with a traditional virtual machine. It helps to illustrate the seamless nature of managing containerized infrastructure running modern applications and the conventional virtual machines running for the past decade or more.

VI admins, system admins, and others view the entire landscape, including containerized and conventional infrastructure. This single-pane-of-glass interface that most sysadmins are accustomed to can drastically help with adoption, day two operations, and other tasks.

VMware vSphere with Tanzu containers and virtual machines

With vSphere with Tanzu, all the low-level Kubernetes commands can be used to view information about the supervisor cluster, control plane, and guest clusters.
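
For example, rather than raw kubectl commands, the same information can be pulled with the Kubernetes Python client; this is only a sketch, and the context name below is a placeholder for whatever kubectl vsphere login created in your kubeconfig.

    from kubernetes import client, config

    config.load_kube_config(context="demo-cluster")  # placeholder guest cluster context
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        print("node:", node.metadata.name)

    for pod in v1.list_namespaced_pod(namespace="kube-system").items:
        print("pod:", pod.metadata.name, pod.status.phase)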

Viewing the provisioning of the Supervisor Cluster control plane

Logging into the vSphere with Tanzu Guest Cluster and viewing pods

VMware Tanzu Editions

With the release of VMware vSphere 7.0 Update 1, VMware also introduced Tanzu Editions. Each of the Tanzu Editions provides various features and functionality to meet multiple use cases in the enterprise. However, one key decision made by VMware was to include several important characteristics in all editions. These include:

    • They are open source aligned
    • Multi-cloud environments are supported
    • DevOps processes are enabled and supported with each offering

So, regardless of the edition of Tanzu, organizations benefit from many of the same cloud, automation, and DevOps capabilities. Let’s take a closer look at the features and capabilities found in each respective Tanzu version.

    • Tanzu Basic – The Tanzu Basic offering targets the current VI and System admin currently managing VMware vSphere environments today. With Tanzu Basic, organizations have access to the most affordable and accessible Tanzu solution. In addition, it allows businesses to run containerized off-the-shelf (COTS) workloads with ease, using the familiar VMware vSphere tooling. It allows both containers and VMs to run side-by-side in the environment. With the simple process of installing a license in their existing vSphere 7 environments, customers can start taking advantage of running VMware Tanzu workloads.
    • Tanzu Standard – For businesses that have used the off-the-shelf containerized workloads but now need to scale and deploy consistent Kubernetes workloads both on-premises and in the public cloud, Tanzu Standard provides the means to do this with a global control plane to manage them all. It includes a policy engine that provides access management, backups for Kubernetes clusters, and even groups of Kubernetes clusters. Monitoring is also enabled using both Prometheus and Grafana dashboards. It targets the infrastructure lead and cloud architect.
    • Tanzu Advanced – The Tanzu Advanced offering provides more of the extended capabilities found in the VMware Tanzu solution. These expanded capabilities include additional DevOps and security features. As a result, the Tanzu Advanced release targets DevOps and Platform Ops teams.
    • Tanzu Enterprise – The Tanzu Enterprise edition provides the full features and capabilities found in the Tanzu portfolio. It helps provide the tools and features needed for developers and improves the developer experience and the velocity of the deployment process. Tanzu Enterprise has been described as creating a “superhighway” between developers, IDEs, and production environments.

The different editions of VMware Tanzu provide flexibility for various use cases needed by different environments and business requirements in the enterprise. In addition, each of the Tanzu editions is a superset of the one before it so that customers can start with a particular edition and easily step up to a higher edition if additional capabilities are needed in the future.

Where does it fit in the DevOps Cycle?

VMware Tanzu allows developers to have full access to Kubernetes APIs and consistently create production-ready container images that run on Kubernetes and across clouds in a self-service type manner. In addition, it enables automating source code to container workflows across all development frameworks.

At the same time, it allows VI admins and operations engineers to maintain policies and other role-based access control in the environment so that both teams can operate effectively and efficiently. As a result, it enables maintaining proper security and other controls without impeding development workflows and processes.

Why do I need VMware Tanzu?

As with any technology used to serve business-critical processes, data, and services, a certain amount of complexity is involved. Kubernetes has not been known for being easy to deploy, configure, and manage using manual means. However, it is the de facto standard in the industry for orchestrating and automating container deployments in production.

VMware Tanzu allows organizations to get up and running quickly and easily with Kubernetes-powered containers without the steep learning curve required using manual Kubernetes deployments. It also allows businesses to step into Kubernetes-managed containers in a supported fashion, with VMware support assisting with any deployment, configuration, or management issues. The support aspect is huge, especially for production use, as any downtime can be disastrous.

As mentioned earlier, VMware Tanzu allows businesses to utilize the same familiar tools they are accustomed to using with vSphere and use these same tools to manage their containerized infrastructure. As a result, avoiding the need for new management interfaces, tools, processes, and other solutions can pay tremendous dividends in operationalizing Kubernetes-powered containers.

The vSphere with Tanzu offering allows businesses to use their existing standard vSphere implementations without VMware Cloud Foundation, VMware vSAN, or VMware NSX-T, which means they can simply install a license key and start configuring Workload Management using vSphere with Tanzu.

Businesses looking to modernize their applications stand to benefit from the VMware Tanzu offering as it provides the tools and solutions to configure Kubernetes-orchestrated containerized infrastructure quickly and in a supported way. In addition, VMware has positioned the licensing to make it easy to start with the Tanzu Basic edition and move up if needed.

My Thoughts on VMware Tanzu

Organizations today are accelerating their applications and application development using hybrid and cloud technologies. In addition, those on the path of application modernization find it requires using cloud-native technologies like containers to break monolithic applications into microservices for rapid deployments, upgrades, and feature enhancements.

Containers provide a much smaller footprint than VMs and can often be ephemeral, able to spin up and down as needed. However, containers in themselves have no native way for orchestration and automation. Kubernetes is the industry standard to manage production-ready containerized environments. Both containers and Kubernetes can present a steep learning curve for organizations looking to introduce these manually.

VMware Tanzu is an entire suite of solutions that allows businesses to deploy, configure, and manage Kubernetes using a fully supported Kubernetes distribution produced by VMware. Organizations can run Tanzu Kubernetes Grid solution natively in vSphere, Microsoft Azure, and Amazon AWS. Additionally, with vSphere with Tanzu, businesses can run VMware Tanzu in very traditional vSphere environments, without VMware Cloud Foundation, VMware vSAN, or VMware NSX-T.

VMware Tanzu is a powerful application modernization platform assisting businesses in making the entire digital transformation, allowing them to modernize their applications for cloud-native and hybrid cloud technologies more efficiently.

You can learn more about VMware Tanzu from the official VMware Tanzu site at VMware here.

The post Introduction to VMware Tanzu appeared first on Altaro DOJO | VMware.

Disaster Recovery as a Service (DRaaS) in VMware – The Full Picture https://www.altaro.com/vmware/disaster-recovery-as-a-service/ https://www.altaro.com/vmware/disaster-recovery-as-a-service/#respond Fri, 12 Nov 2021 15:44:30 +0000 https://www.altaro.com/vmware/?p=23282 Disaster Recovery is an essential consideration for all business-critical operations and there are several DRaaS tools available in VMware

The post Disaster Recovery as a Service (DRaaS) in VMware – The Full Picture appeared first on Altaro DOJO | VMware.


Disaster Recovery as a Service (DRaaS) is a type of service offering that provides Disaster Recovery (DR) capabilities in the cloud. You may have read about what is disaster recovery as a service in our dedicated blog during VMworld 2020.

Traditionally, organizations have distributed critical systems across multiple sites or locations to protect against failures. This approach has been effective but expensive: buying the same hardware multiple times to stand up identical infrastructure. Disaster recovery as a service provides the orchestration and replication software required to fail over services to standby or on-demand services in the cloud. In this article, we will run down the basics of DR and break down the DRaaS options available in VMware. Let's get to it!

Using DRaaS to prepare for a Disaster

    • The main benefit of DRaaS is removing the need for dedicated additional data centers or hosting facilities, along with duplication of hardware.
    • The resources required for failover are maintained and allocated by the service provider, who will typically have a global footprint with a fully resilient setup.
    • The service provider should provide the replication and orchestration capability to restore services into the cloud.
    • Ideally, further value such as compliance checks and restore tests should also be added.
    • Standardization of recovery plans for multiple sites, removing the heavy lifting in creating a dedicated disaster recovery plan.

Using cloud services also presents several generic benefits; as organizations move away from racking and stacking hardware on-premises, they can benefit from:
    • Quicker time-to-market or project delivery, by freeing up staff from maintaining the underlying infrastructure.
    • Economies of scale, using cheaper commodity infrastructure or paying for on-demand consumption.
    • Shifting from large Capital Expenditure to predictable, reoccurring Operating Expenditure funding for IT.

What are RPO and RTO?

Any kind of disaster recovery needs to be measured with Service Level Agreements (SLAs) along with Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs).

    • Recovery Point Objective (RPO): The data or application state at a particular point in time, for which recovery is provided. For example, an RPO for a system with critical and changing stateful data may need to be a few minutes, a system where not much change takes place could be 4 hours, or a non-critical system that can incur data loss could be 1 day or even longer.
    • Recovery Time Objective (RTO): Amount of time taken to recover. Again, a mission-critical system that cannot afford downtime may need a low RTO, whereas a test system may allow for an RTO of days or even weeks before it is available again.

There will be multiple RPO and RTO values for the different services within each organization. There is typically a trade-off between cost and recovery time. When examining on-premise DR or disaster recovery as a service, the RPO and RTO offerings should be in alignment with the business needs.
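
As a tiny worked example, the Python sketch below checks a DR offering's RPO/RTO against per-tier business SLAs; the tiers, hour values, and the offering's figures are made up purely for illustration.

    requirements = {                 # required RPO/RTO in hours, per service tier
        "mission-critical": {"rpo": 0.5, "rto": 1},
        "standard":         {"rpo": 4,   "rto": 8},
        "non-critical":     {"rpo": 24,  "rto": 72},
    }
    offering = {"rpo": 0.5, "rto": 4}  # e.g. a 30-minute RPO, 4-hour RTO service

    for tier, req in requirements.items():
        ok = offering["rpo"] <= req["rpo"] and offering["rto"] <= req["rto"]
        print(f"{tier}: {'meets' if ok else 'does not meet'} the SLA")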

How Does Disaster Recovery as a Service Work?

As a typical service model, Disaster Recovery as a Service will replicate (and convert if required) physical or virtual servers to cloud-hosted infrastructure. In the event of a disaster or event that impacts the uptime of the on-premises service, failover to the cloud-based copy is initiated to maintain business continuity.

Example disaster recovery as a service High-Level Setup

The low-level details vary depending on the exact service and provider. The most common hypervisor for server virtualization in the data center is VMware. Let’s look at some of the Disaster Recovery as a Service options available for VMware workloads.

VMware Cloud Disaster Recovery (VCDR)

VMware Cloud Disaster Recovery (VCDR) is perhaps managed DRaaS in its truest form. It works as follows:

    • The DRaaS Connector VM snapshots workloads from the on-premises VMware environment into a cloud-based scale-out file system.
    • The customer pays for the number of VMs they are protecting, and the total amount of storage they have used.
    • The Software-as-a-Service (SaaS) orchestrator and control plane allows the customer to specify exactly how many recovery points they would like to retain, at what frequency, and for how long.
    • Should DR need to be invoked, the scale-out file system is mounted to dedicated VMware Cloud on AWS nodes, and the workloads powered on.
    • The recovery nodes can either be already running or deployed automatically on-demand.
    • When the protected site or hardware is available again, a delta-based failback can be scheduled.

You’ll notice that other than policy customizations, the provider is managing all the failover infrastructure. At the time of writing VCDR is only available for AWS, with VMware Cloud on AWS as the recovery site. In the future, this will be extended out to other VMware-based disaster recovery as a service providers, such as Microsoft Azure (Azure VMware Solution) and Google Cloud (Google Cloud VMware Engine).

VMware Cloud Disaster Recovery Setup

As well as the typical disasters that spring to mind such as power outages, natural disasters, hardware failures, or human error, VCDR is great for ransomware protection.

The normal VM-based replication model for disaster recovery will replicate any kind of corruption or encryption installed by ransomware, rendering the replicas useless. Furthermore, ransomware will most likely seek out backups as the first point of attack.

VCDR uses the following methods to provide ransomware recovery:

    • User-defined snapshot frequency and retention points, for a deep history of data and application state.
    • Immutable backups/snapshots that cannot be changed.
    • Instant VM power-on for faster experimentation.

VMware Cloud Disaster Recovery was first announced at VMworld 2020, following the company’s acquisition of Datrium. Functionality is likely to grow at a fast pace, and at VMworld 2021 the following new features were announced:

    • 30-minute RPOs, for critical applications with higher change rates, providing a restore point every 30 minutes.
    • File-level recovery, accelerate ransomware recovery by restoring files or folders without powering on the VM.
    • Integrated VMware Cloud on AWS protection, enabling region or site failover.

VMware Site Recovery Manager (SRM)

VMware Site Recovery Manager (SRM) has long been used by VI admins on-premises to provide VM failover between sites. It utilizes vSphere Replication, which is included in vCenter licensing, to replicate VMs between sites with corresponding vCenter instances.

Site Recovery Manager can run custom scripts, re-IP virtual machines, check dependencies, and run failover tests. The big difference between Site Recovery Manager and a solution like disaster recovery as a service is that SRM requires the recovery site to be online and available to replicate the VM data and host the placeholder VMs ready for recovery.

Site Recovery Manager can be used the same way it was on-premises, to restore into the cloud by installing SRM at both sites. In this type of setup, the customer is responsible for the SRM installation and configuration at both sites, with the cloud provider maintaining the underlying infrastructure.

Whilst this may not be fully managed disaster recovery as a service, and could perhaps be described as self-service DRaaS, it does provide flexibility and use cases for:

    • Recovery to Microsoft Azure (Azure VMware Solution)
    • Recovery to Google Cloud (Google Cloud VMware Engine)
    • Recovery to Oracle Cloud (Oracle Cloud VMware Solution)
    • Recovery to other VMware Cloud Provider Partners, such as IBM Cloud
    • Recovery to managed service providers, and private, local, or sovereign clouds

VMware Site Recovery Manager Setup

VMware Site Recovery

VMware Site Recovery provides the same functionality and benefits as SRM, except that the solution is provided in Software-as-a-Service (SaaS) form.

Currently, VMware Site Recovery is only available with VMware Cloud on AWS, or a hybrid site pairing between on-premises and VMware Cloud on AWS. These disaster recovery models could be termed as being on the spectrum between assisted DRaaS and self-service DRaaS.

In the former example, VMware Site Recovery is enabled from the VMware Cloud console without any installation. VM level failover can be configured between VMware Cloud on AWS Availability Zones or regions.

In the latter example, the customer installs SRM on-premises and enables VMware Site Recovery for the VMware Cloud on AWS recovery site. The hybrid setup works as follows:

    • The customer installs Site Recovery Manager at their primary or protected site. The customer is responsible for maintaining this side of the Site Recovery installation.
    • The customer enables Site Recovery from the VMware Cloud Services Portal (CSP) for the recovery site. Site Recovery for the recovery site is SaaS, so no further installation is needed.
    • A site pairing is created between the on-premises site and the VMware Cloud on AWS instance.
    • Protection policies and recovery plans are created to define the VMs in scope for failover, and any mappings or dependencies.

VMware Site Recovery Setup

Is Disaster Recovery as a Service Right for You?

As businesses responded digitally to the Covid-19 pandemic, cloud computing has accelerated and DRaaS is no exception. Organizations starting out in the cloud initially host test and development workloads, with many opting to add disaster recovery as a use case with a cloud-based second or third site. As a result of demand and an increase in recovery scenarios, such as ransomware, the Disaster Recovery as a Service market continues to grow.

Right now, VMware Cloud Disaster Recovery and VMware Site Recovery Manager are the mainstream options. The best method for disaster recovery will be down to each individual organization, and there are plenty of alternatives:

    • Azure JetStream will back up VMs to blob storage and restore them into Azure VMware Solution (AVS).
    • Azure Site Recovery (ASR) converts VMware VMs to Azure VMs for disaster recovery.
    • AWS CloudEndure replicates physical or virtual machines into low-cost EC2 staging instances and converts them to production in the event of a failover.
    • VMware vCloud Availability is a tool for service providers and VMware Cloud Provider Partners, enabling multi-tenant recovery between sites.
    • VMware vSphere Replication on its own provides an asynchronous replication engine for VMs, which could be complemented with third-party software.
    • Backup-as-a-Service (BaaS) provides data backup to the cloud, and whilst it doesn’t provide restoration of infrastructure services, it could be worked into a disaster recovery plan with other solutions.

Making a Choice

In summary, understanding your requirements and existing services is the first step to identifying whether Disaster Recovery as a Service is the right option for you, and how you can use DRaaS to prepare and protect against a disaster. The next step is to understand the available options and align them with your own SLAs, RPOs, and RTOs, as well as any dependencies and regulatory requirements.

The post Disaster Recovery as a Service (DRaaS) in VMware – The Full Picture appeared first on Altaro DOJO | VMware.

VMworld 2021 Headlines – Cloud Services, Tanzu, and More! https://www.altaro.com/vmware/vmworld-2021-headlines-cloud-services-tanzu-and-more/ https://www.altaro.com/vmware/vmworld-2021-headlines-cloud-services-tanzu-and-more/#respond Thu, 14 Oct 2021 15:37:58 +0000 https://www.altaro.com/vmware/?p=23079 The announcements at VMworld 2021 have huge implications for the future of the company and admins. The key takeaways and talking points here

The post VMworld 2021 Headlines – Cloud Services, Tanzu, and More! appeared first on Altaro DOJO | VMware.


We’re tying the bow on VMworld 2021 which was packed with a dizzying number of announcements. While we can’t cover every single one of them, we will talk about the ones that really struck us as well as those high-visibility strategic announcements.

VMware's CEO Raghu Raghuram speaking at VMworld 2021

Like last year, VMworld 2021 was an online event with free registration for everyone. The event was organized into 8 different "booths" from which attendees could pick and choose sessions. It seems the bulk of the innovations were on the multi-cloud and app modernization fronts, though.

VMworld 2021

This year's VMworld 2021 guests included none other than Michael J. Fox and Will Smith, who treated us to really inspiring messages and views on life in general outside of the tech space.

As for the technical side of things, on top of all the other areas that were talked about, the agenda was packed with multi-cloud and App modernization (Tanzu) topics. Without further ado, let’s dive into the VMworld 2021 announcements.

VMware Cross-cloud services

According to VMware’s CEO Raghu Raghuram during VMworld 2021, “Multi-cloud is the digital business model for the next 20 years, as entire industries reinvent themselves”. The plan to help organizations with the shift to multi-cloud was set in motion some time ago and has been the topic of several announcements ever since.

VMworld 2021 is no exception and brings the concept a little bit further with VMware Cross-Cloud services, a group of several integrated services allowing customers to deal with apps with “freedom and flexibility” across clouds. The goal of these multi-cloud services is to accelerate the move to the cloud, make it cheaper and more flexible.

VMware Cross-Cloud services help organizations shifting to multi-cloud

The new VMware cross-cloud services offering will revolve around the following areas. Keep in mind that these span multiple clouds (this is where the value really is). You can pick and choose which service you want on which cloud.

    1. Building and deploying cloud-native apps (VMware Tanzu Application Platform).
    2. Operating and running apps (VMware Cloud, Project Arctic).
    3. Management of performance and cost across clouds (VMware vRealize Cloud, Project Ensemble).
    4. Security and Networking (Carbon Black, NSX Cloud, Service Mesh).
    5. Deploy and manage edge-native apps (VMware Workspace One and VMware Edge Compute Stack).

Not all organizations will benefit from this offering just yet as most IT departments will first need to wrap their head around it, find use cases, analyze the TCO… However visionary, things certainly seem to be moving in that direction and VMware is paving the way.

VMware Sovereign Cloud

Data sovereignty refers to countries’ jurisdiction on data and how it relates to the concepts of ownership, who is authorized to store data, how it can be used, protected, stored and what would happen should the data be used ill-intentionally.

The discussions around data and cloud sovereignty are becoming more frequent and will most likely become a critical selling point for large customers such as government entities. As more and more companies resort to cloud computing, it is becoming increasingly important to establish a way of ensuring the data stored with these cloud providers is treated squarely.

For instance, the principality of Monaco recently unveiled a Monegasque sovereign cloud where all the shareholders are Monegasque with the state owning a controlling stake in it.

VMware Sovereign Cloud will ensure regulatory compliance

VMware is addressing this issue with VMware Sovereign Cloud. The aim of this initiative is to partner with cloud providers to deliver multi-cloud services bearing the “VMware Cloud Verified” seal of approval.

In order for this to happen, a VMware Sovereign Cloud framework will be put in place and only cloud providers who abide by it will be able to slap the “VMware Cloud Verified” seal of approval on their services. They must also self-attest on the design, build, and operations of their cloud environments and their capability to offer a sovereign digital infrastructure.

If cloud providers decide to play ball, this should open the door to juicy contracts with government entities such as the European Union in the years to come.

More information is available in the VMworld 2021 press release.

VMware Cloud on AWS Outpost

AWS Outposts is a managed service offering where AWS delivers and physically installs the Outpost so you get the AWS experience on compute capacity located on-premise, in any datacenter, or in a co-location near you. It is managed, so you don’t have to take care of its lifecycle. The use cases related to AWS Outposts include low-latency requirements, data sovereignty, local data processing…

During VMworld 2021, VMware introduced VMware Cloud on AWS Outposts with the hope that it will boost the adoption of VMware Cloud on AWS. The adoption process is the same as for an AWS Outpost, after which AWS sets up the VMware SDDC (VCF) stack, VMware makes sure everything checks out, and the environment is handed to you through the VMware Cloud Service Portal.

VMware Cloud on AWS is a tight partnership between the two entities

At the moment it is limited to 42U racks with i3en.metal instances but it may evolve over time. Looking at the pricing it is actually cheaper than I would have expected considering the resources in the i3en.metal instances and the VCF stack in the bundle.

The bundle includes:

    • AWS Outposts 42u rack
    • AWS managed dedicated Nitro-based i3en.metal EC2 instance with local SSD storage
    • VMware HCX
    • VMware Cloud Console
    • Support by VMware SREs
    • Supply chain, shipment logistics, and onsite installation by AWS
    • Ongoing hardware monitoring with break/fix support.

You can now get the benefits of VMware Cloud on AWS closer to your organization

Note that it is only available in the US at the moment.

More info in this technical deep dive on VMware Cloud on AWS outpost.

DR-as-a-Service (DRaaS) Enhancements

A bunch of enhancements to the DRaaS offering were unveiled among the VMworld 2021 announcements. The product was first announced at VMworld 2020. As a reminder, DRaaS allows customers to replicate workloads to cheap cloud storage and restore them to a VMware Cloud on AWS SDDC that can be spun up on-demand to improve TCO.

Among the enhancements to the cloud disaster recovery solutions were:

    • 30-minute RPO

This offers more frequent snapshots for critical apps with higher change rates, giving you up to 48 recovery points per day. The combination of that higher granularity and the air-gapped Scale-out Cloud File System helps reduce the impact of ransomware attacks.

30-Minutes RPO offers much finer recovery granularity
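
To put a number on that granularity, a quick back-of-the-envelope calculation (a simple illustration, not tied to any VMware tooling) shows where the 48 recovery points per day come from:

```python
# Back-of-the-envelope check of the recovery-point count implied by a 30-minute RPO.
MINUTES_PER_DAY = 24 * 60
rpo_minutes = 30

recovery_points_per_day = MINUTES_PER_DAY // rpo_minutes
print(f"A {rpo_minutes}-minute RPO yields up to {recovery_points_per_day} recovery points per day")
# -> A 30-minute RPO yields up to 48 recovery points per day
```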

    • Accelerated Ransomware recovery with File-level recovery

On top of Scale-out Cloud File System (SCFS), VMware DRaaS will let you extract recent, uncorrupted files or folders from various snapshots in VMs without powering them up. You can then inject them into a clean recovery restore point.

Ransomware recovery is simplified with File-level recovery

    • Integrated and simple data protection for VMware Cloud on AWS

In order to protect those critical pieces of software that run your organization, VMware Cloud on AWS will now offer the possibility to leverage Cloud DR as a unified DR, ransomware, and foundational backup-restore solution.

Once you select and configure VM protection, Cloud DR creates immutable, encrypted backup copies stored on the Scale-out Cloud File System (air-gapped). You can then restore at the file, folder, or VM level.

Integrated data protection for VMC on AWS simplifies the data protection process

VMware Tanzu Community Edition

One of the biggest hurdles in getting into VMware Tanzu so far was the complexity and resources required. VMware Tanzu Community Edition is a free, open source, and community supported distribution of VMware Tanzu. The best thing is that it is full featured and you can deploy it to various environments:

    • Locally on your workstation in Docker
    • vSphere infrastructure (vCenter server)
    • Amazon EC2
    • Microsoft Azure

VMware Tanzu Community Edition is full-featured

This new product is a platform for “learners and users” as VMware puts it, especially for small-scale and preproduction environments. As of October 2021, the product hasn’t reached v1 yet, so it may not be the smartest move to start running your prod on it.

The other big selling point of VMware Tanzu Community Edition is the pluggability of the product, in that it includes additional packages to cover all aspects of the modern app’s lifecycle.

VMware Tanzu Community Edition makes installing packages easy and pain-free

This new VMware Tanzu Community Edition aims at facilitating the deployment process with a Docker-based kind bootstrap cluster, provisioned through the Tanzu CLI, that will, in turn, deploy either:

    • A management cluster to manage multiple workload clusters.
    • A standalone, all-in-one workload cluster. An even quicker way to get started.

The deployment of the management or standalone cluster can be done in a user-friendly web UI that automatically generates the associated deployment configuration file and the kube-config file. But we’ll get into all that in another dedicated blog.
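
As a teaser, here is a minimal sketch of kicking off that bootstrap from a workstation. It assumes the tanzu CLI and Docker are already installed; the command names mirror those documented for the early Tanzu Community Edition releases and may change, so treat this as an illustration rather than a reference.

```python
# Minimal sketch: driving the Tanzu Community Edition bootstrap from Python.
# Assumes the `tanzu` CLI and Docker are installed locally; command names follow
# early TCE documentation and should be verified against the current docs.
import subprocess

def create_management_cluster() -> None:
    # Launches the browser-based installer, which generates the deployment
    # configuration file and kube-config mentioned above.
    subprocess.run(["tanzu", "management-cluster", "create", "--ui"], check=True)

def create_standalone_cluster(name: str) -> None:
    # The quicker, all-in-one workload cluster path (no separate management cluster).
    subprocess.run(["tanzu", "standalone-cluster", "create", name, "--ui"], check=True)

if __name__ == "__main__":
    create_management_cluster()
```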

You can find more info on the VMware Tanzu Community Edition website.

VMware Cloud with Tanzu Services

VMware aims at facilitating the shift to app modernization and the adoption of Kubernetes with their Tanzu offering. However, managing your own on-premise Kubernetes/Tanzu infrastructure may not be in the cards for a variety of reasons such as time constraints, complexity, CAPEX…

Managed Tanzu Kubernetes Grid Service

VMware Cloud with Tanzu Services is a multi-cloud managed offering where the underlying infrastructure and capacity required for Kubernetes workloads are fully managed by VMware, so your teams don’t have to worry about dealing with vSphere with Tanzu on-premise.

Managed TKS lets you focus on what really matters

VI admins will get to keep using their good old vCenter Server interface to manage Kubernetes operations. The VMware Cloud console will let VI admins provision Tanzu Kubernetes Grid (TKG) clusters and deliver role-based access and capacity to the developer teams seamlessly.

Tanzu Mission Control Essentials

Tanzu Mission Control Essentials is a component included in Tanzu services. It is a SaaS solution that acts as a management plane for Kubernetes clusters.

Platform operations are centralized through the use of Tanzu Mission Control Essentials, which will be able to leverage VMware Cloud to deliver that coveted multi-cloud deployment. Tanzu Mission Control provides global visibility across clusters and clouds and automates operational tasks such as access and security management at scale.

Tanzu Mission Control Essentials is a component included in Tanzu services

Tanzu Mission Control Starter

VMware Tanzu Mission Control is a multi-cloud SaaS management platform that facilitates Kubernetes operations across private and public clouds, implements security, provisions TKG clusters, and offers troubleshooting capabilities, IAM, data protection… The list goes on; you get it, it’s a great tool when you are heavily involved with Kubernetes.

During VMworld 2021, VMware unveiled a free tier with VMware Tanzu Mission Control Starter, which will include a set of core Kubernetes management features like centralized visibility and policy control for any compatible Kubernetes cluster, be it on-premise or in the cloud.

There isn’t much info on it yet but it should be a solid free alternative when paired with Tanzu Community Edition. You can register here if you want to receive updates on Tanzu Mission Control Starter.

Other VMware Tanzu announcements

Other Tanzu releases were made among the VMworld 2021 announcements, such as:

    • Tanzu Service Mesh Enterprise: Advanced, end-to-end connectivity and security for applications across end-users, microservices, APIs, and data.
    • VMware Tanzu Standard for VMware Cloud Universal: You can now leverage VMware Tanzu Standard as part of the Cloud Universal Program if that’s what you are into.
    • TKG New features: Support for Windows containers (experimental), GPU workload support, …
    • Tanzu Application Platform adds new capabilities.

VMware vSphere 7 Update 3

Although it was released a few days before VMworld 2021, it is worth mentioning vSphere 7 Update 3 here since it is a significant update. We won’t go through a complete what’s new here as it would make for a dedicated blog; instead, we will touch on the main announcements:

    • Enhanced performance stats visibility for persistent memory.
    • Support for NVMe over TCP.
    • vCenter Server plug-in for NSX.
    • Simplified deployment process of VMware vSphere with Tanzu, especially network-wise.

Configuring vSphere Tanzu is much easier in vSphere 7 Update 3

    • Improved maintenance operations with vSphere Distributed Resource Scheduler (DRS).
    • Use of SD and USB drives as boot media deprecated and warning of “degraded” boot volume if used.
    • Improvements to lifecycle management (depot editing, drive firmware support, vSAN witness management).
    • vCenter server reduced downtime upgrade (Cloud technology on-premise).
    • Future Linux distributions will have VMware Tools preinstalled.
    • I/O Trip Analyzer to get an overview of the vSAN I/O path.

As you can tell, vSphere is no longer just a hypervisor. It is shapeshifting into the foundation of a complete ecosystem of multi-cloud and modern apps.

vSphere 7 is no longer just a hypervisor

Refer to the vSphere 7 Update 3 release notes for the full list.

Refer to vSAN 7 Update 3 release notes for the news in vSAN.

VMware Edge Compute Stack

VMworld 2021 aside, VMware Cloud environments have been getting lots of love and marketing exposure these last few years, while on-premise solutions keep getting better with age like good wine. However, edge computing is gaining in popularity and maturity as use cases for AI/ML (Artificial Intelligence and Machine Learning) keep growing. Edge computing refers to scenarios where you need compute capacity as close to the endpoint as possible. In such cases, you can’t afford a round trip to the datacenter or the cloud for every operation; therefore, some sort of capacity must be on-site to run the app.

VMware Edge Compute Stack will come in three editions

One of the sticking points when leveraging edge computing is the heavy work required to refactor apps, processes, and such to run the workloads at the edge. VMware Edge Compute Stack was one of the VMworld 2021 announcements and aims at simplifying that move. It is a purpose-built and integrated stack offering HCI and SDN for small-scale VM and container workloads to effectively extend your SDDC to the edge.

Edge compute use cases will solve a wide variety of challenges

While this is still considered cutting edge, there is no doubt that we will witness an explosion of use cases in the coming years, and VMware will have a bundled and licensed solution ready for those customers ready to jump in.

Project Announcements

Just like Tanzu Kubernetes Grid once was Project Pacific, a number of projects currently in the works have been discussed in one of the VMworld 2021 sessions (A look inside VMware’s innovation engine [VI3091]).

VMworld 2021 announcements including many projects currently in the works

Project Santa Cruz

The VMworld 2021 announcements introduced an integrated offering that combines edge compute and SD-WAN in one device. It connects edge sites to centralized management planes for cloud-native and networking teams. It can run containers and cloud services.

Project Santa Cruz extends SDDC capabilities to the edge

Project Tanzu Bring your own Host (Santa Cruz)

If you don’t want the VMware box, Project Santa Cruz also includes a Cluster API provider that supports customers bringing their own infrastructure to comply with cases such as Hyper-V or specific environment-driven kernel tuning scenarios. You can register bare metal servers as capacity to TKG clusters. Note that it is also integrated with Tanzu Mission Control.

Project Radium

This AI-oriented project builds upon VMware Bitfusion to expand the feature set over Ethernet to other architectures such as AMD, Graphcore, Intel, Nvidia, and other hardware vendors for AI/ML workloads. That way, users will be able to leverage a multitude of AI accelerators. Those accelerators will be attachable dynamically regardless of whether they run on-premise, in the cloud, or at the edge.

AI/ML workloads will benefit from a wider range of hardware offload devices

Project Cryptographic agility

Crypto algorithms and standards have a lifecycle and become weaker as compute capability advances. What took 6 months to crack 15 years ago may take only a few hours or even minutes nowadays. The goal of this project is to offer crypto agility through increased control over configurations and the ability to switch between standards and libraries.

Project Ensemble

Following in the footsteps of VMware Cross-Cloud services, Project Ensemble will simplify and accelerate the adoption of multi-cloud.

Ensemble streamlines multi-cloud operations through app-centric views of multi-clouds and focuses on how different personas in the organization, such as cloud providers and cloud consumers, interact with the applications.

Project IDEM

A very powerful move towards VMware’s multi-cloud vision, this multi-cloud scaled management automation project aims at simplifying management for customers leveraging several cloud providers by automating any management task on any cloud. Project IDEM can run tasks synchronously or asynchronously across the entirety of cloud APIs, dynamically adapting to new versions through automatic discovery. You can think of it as Desired State Configuration across multiple clouds.

Project Capitola

Project Capitola is an impressive software-defined memory implementation announced during VMworld 2021 that aggregates tiers of different memory types such as DRAM, PMEM, NVMe, and other future technologies into a pool of logical memory that is easy to consume and managed in the backend by VMware vSphere.

This model will be beneficial for memory-intensive apps and should prove cost-effective since you can leverage memory types at different price points according to your performance needs and it will work with DRS. VMware is currently partnering with Intel and their Optane devices to pioneer this new tech.

Tiered memory will offer cost-effective solutions to memory-heavy apps

Project Arctic and Cascade

Arctic: Addressing OPS who deliver resources

Currently at the stage of technology preview, Project Arctic will bring cloud connectivity into vSphere in order to open the cloud door to all those customers relying on on-premise environments. By making vSphere “cloud-aware”, Project Arctic will make hybrid cloud the default operating model. Organizations will be able to instantly access VMware Cloud capacity and deploy VMware Cross-Cloud Services. One use case would be the ability to enable DRaaS in a few clicks.

Cascade: Addressing Devs and DevOps who rapidly develop and deploy apps

Also a technology preview, Project Cascade will provide a unified Kubernetes interface for both on-demand infrastructure and containers across VMware Cloud through CLI, API, and graphical interface. The VM service that was introduced in vSphere with Tanzu to manage VMs from Kubernetes will be ported to VMware Cloud as part of Project Cascade.

Project Arctic and Project Cascade will address the needs of IT OPS and DevOps

VMworld 2021 in Review

Well, the VMworld 2021 announcements came in truckloads and were once again of high quality. You could clearly see the company’s long-term vision and how they go about tackling problems we either already have or don’t even know we are going to have. It is remarkable to witness how the vision initiated around 10 years ago came true with the shift to the cloud. While we can’t ignore that VMware was a little late to the app modernization table with Tanzu, they are now closing the gap with huge investments in that space and tons of use cases being covered.

It seems this year’s VMworld 2021 spotlights were mostly on multi-cloud, with a tightening of the partnerships with providers, as well as app modernization with open-source products such as Tanzu Community Edition, which we surely appreciate.

However, we are also greatly looking forward to seeing where Edge computing is going to take us with really interesting use cases and announcements that are paving the way for years to come.

The post VMworld 2021 Headlines – Cloud Services, Tanzu, and More! appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmworld-2021-headlines-cloud-services-tanzu-and-more/feed/ 0
What is VMware Cloud on AWS (VMC on AWS)? https://www.altaro.com/vmware/vmware-cloud-aws/ https://www.altaro.com/vmware/vmware-cloud-aws/#respond Fri, 06 Aug 2021 09:36:53 +0000 https://www.altaro.com/vmware/?p=22615 Learn more about what VMware Cloud on AWS is, use cases and how it can help your organization extend to a hybrid cloud to be more agile.

The post What is VMware Cloud on AWS (VMC on AWS)? appeared first on Altaro DOJO | VMware.

]]>

VMware Cloud on AWS is a hybrid cloud service that was launched by the company back in 2017 to address organizations that want to run VMware in AWS, and it has never ceased to grow (it is also referred to as VMC on AWS, VMC standing for VMware Cloud). Everyone in the tech industry acknowledges the fact that cloud solutions have changed the IT landscape and are here to stay, never mind thriving.

VMware Cloud on AWS

However, shifting to the cloud is not something you do overnight and simply does not apply in a number of cases. Many IT folks don’t have the means, needs, or possibility to migrate all of their workloads to the cloud, however beneficial it would be. In these instances, a hybrid cloud is a great compromise to smooth the transition, especially with VMware in AWS, which simplifies the process significantly.

In addition to this article, you can also refer to the official VMware Cloud on AWS features roadmap, in which you will find the development status of each and every feature. For instance, you will find that Cloud Native Storage is now utilized on VMware Cloud on AWS with Tanzu Kubernetes Grid Plus in all regions, or that vSAN File Services on VMware Cloud on AWS is currently in the planning state and should find its way into the product sometime in the future.

Cloud technicalities

Hybrid cloud

For a complete rundown on hybrid cloud, be sure to check out our guide to VMware hybrid Cloud. Here we will just touch base on the different ways to use the cloud and where Hybrid implementations sit:

      • On-Premise: Using an infrastructure hosted and operated in-house requires significant up-front investment (CAPEX) and the skills to manage it. In this instance you have full control, meaning you also have to manage everything.
      • Public cloud: Run your services directly in a cloud provider such as AWS or Azure (SaaS). The infrastructure is mutualized and operated by the provider. There is no up-front cost as you pay for what you consume (OPEX).
      • Hybrid cloud: A mix of the above linking your on-premise infrastructure to an SDDC running in the cloud provider’s datacenters (PaaS or IaaS). You don’t need to worry about managing the hardware nor the management components. Note that VMware also partnered with DellEMC to offer VMware on DellEMC Cloud.

Hybrid cloud implementations offer a great deal of possibilities such as workload mobility, disaster recovery, elastic/burst capacity with no up-front investment costs (up-front payment of subscription excluded).

IaaS, PaaS, SaaS

Even though you may have seen these words everywhere on the internet over the past 10 years, I wanted to quickly explain what they mean for those who are not familiar with the terminology. “aaS” stands for “as-a-service” and describes the parts of the IT environment that are offered to you as a service by the cloud provider. VMware has shifted significantly from a product to a service business model, and that is the case with VMware in AWS.

Now, the relevant thing here is that the service can be offered at various levels, ranging from the infrastructure, where you get hands-on management of the hypervisor, down to the actual service, where you only manage the configuration (syslog, apache, mysql…). Anyway, a picture is worth a thousand words:

The type of cloud services you choose will give you more or less control over the underlying components

VMware Cloud on AWS

VMware in AWS is available in most AWS regions of the world and runs the whole SDDC stack on Amazon Elastic Compute Cloud (Amazon EC2). It is based on the VMware Cloud Foundation framework which integrates management (vCenter), compute (vSphere), storage (vSAN) and network (NSX-T).

VMC on AWS offers an SDDC in the cloud, closer to AWS services, improving data gravity

VMware in AWS doesn’t only provide vSphere hosts running in AWS, it includes a plethora of other VMware cloud services and offerings. Refer to the roadmap section of the VMware Cloud on AWS page for an exhaustive list of the available and in-development features.

Here are a few important ones that are worth mentioning:

Elastic DRS

Elastic DRS automatically adds and removes vSphere hosts to ensure an optimal number in the cluster in order to satisfy the demand, kind of like a cluster auto-scale if you like. It is achieved by monitoring the demand and applying an algorithm that will produce scale-out (adding) or scale-in (removing) recommendations.

The decision to add or remove vSphere hosts will depend on the Elastic DRS policy you selected, which will be more or less conservative (eventually impacting the cost). Note that the Rapid Scale-out policy was recently added, which provisions multiple hosts simultaneously to cover scenarios like VDI boot storms or host failures. A simplified sketch of this kind of decision logic follows the screenshot below.

Elastic DRS policies offer 3 scale-in / scale-out policies to choose from
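
To make the idea concrete, here is a simplified, illustrative model of how a policy could turn utilization readings into scale-out or scale-in recommendations. The thresholds and structure are made up for the example and do not reflect VMware’s actual algorithm or policy values.

```python
# Illustration only: a toy threshold model of Elastic DRS-style recommendations.
# Thresholds are invented for the example; VMware's real algorithm differs.
from dataclasses import dataclass

@dataclass
class ElasticPolicy:
    scale_out_cpu: float  # recommend adding a host above this average CPU utilization
    scale_in_cpu: float   # recommend removing a host below this average CPU utilization
    min_hosts: int
    max_hosts: int

def recommend(policy: ElasticPolicy, host_count: int, avg_cpu: float) -> str:
    if avg_cpu > policy.scale_out_cpu and host_count < policy.max_hosts:
        return "scale-out: add a host"
    if avg_cpu < policy.scale_in_cpu and host_count > policy.min_hosts:
        return "scale-in: remove a host"
    return "no change"

conservative = ElasticPolicy(scale_out_cpu=0.90, scale_in_cpu=0.40, min_hosts=3, max_hosts=16)
print(recommend(conservative, host_count=4, avg_cpu=0.93))  # -> scale-out: add a host
```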

Disaster Recovery

Disaster recovery is critically important but not all organizations can afford a second site to replicate workloads to. VMware in AWS can help those companies by offering DR solutions in the cloud. There are currently 2 main ways offered by VMware Cloud on AWS to do this.

VMware Cloud Disaster Recovery aka DRaaS – SaaS

Announced during VMworld 2020, DRaaS is a SaaS VMware cloud service providing cost-optimized, on-demand disaster recovery for VMware in AWS. Instead of paying for hosts as a replication destination, replicas are stored on relatively cheap cloud storage and restored to a cloud SDDC that is spun up on-demand to improve TCO.

Because restoring involves automatically provisioning an SDDC, which takes a bit of time, the solution is characterized as warm DRaaS. However, it is possible to run a light footprint SDDC called live pilot-light to restore a number of critical workloads in a timely fashion.

The solution will support up to 1,500 VMs across multiple SDDC clusters with DR health checks

Find out more about DRaaS in our dedicated blog on the topic.

VMware Site Recovery – IaaS

Also a VMware cloud service, VMware Site Recovery, as opposed to DRaaS, is a hot DRaaS solution, meaning the recovery infrastructure is ready to go with no on-demand SDDC provisioning required. It is built on Site Recovery Manager (SRM) and leverages vSphere Replication to copy the replicas to the destination running VMware in AWS.

The workloads will be replicated to vSphere hosts running in AWS. The upsides will be that you don’t need to own a DR infrastructure while benefiting from the best RPO/RTO possible. However, this will obviously be reflected in the cost as it is more expensive than the SaaS option.

VMware Site Recovery lets you replicate your workloads to a vSphere backed cloud SDDC

Hybrid linked mode and Workload mobility

One of the main selling points of hybrid cloud is workload mobility. vCenter hybrid linked mode will link your on-premise SDDC to VMware in AWS. By doing this you get to manage both environments from a single pane of glass, share tags and migrate virtual machines using vMotion.

Maximum latency for Hybrid Linked mode is 100ms roundtrip time

It can be configured in any of the following 2 ways:

      • On-Premise to Cloud: In this model, the Cloud Gateway Appliance acts as a bridge between your on-premises infrastructure and the cloud SDDC. The identity source is already taken care of as the SSO configuration is mapped to VMware in AWS. You manage the hybrid SDDC by logging into the VMC gateway.
      • Cloud to On-Premise: No need for a VMC Gateway here as you will link directly from the cloud vCenter to the on-premise one. You need to use the cloud vSphere client to manage your hybrid environment. In this scenario, you must add your on-premise identity source to the vCenter in AWS.

The VMC Gateway lets you link your on-premise SDDC to the cloud SDDC

Once the VPN connection along with firewall rules, SSO, and permissions are configured and Hybrid Linked Mode is connected, you can start migrating VMs between your on-premise and cloud SDDC. Nothing new here as it uses the tried and tested vSphere vMotion.

VMware Horizon on VMware Cloud on AWS

Granted the name of this feature is a bit of a mouthful. I assume it is to differentiate it from “Horizon Cloud”, a separate SaaS offering hosted on IBM Cloud or Azure in which you only manage the desktop pools.

In VMware Horizon on VMC on AWS, you deploy your Horizon infrastructure components in your cloud SDDC just like you would in your on-premise environment. You can then add it to the Cloud Pod Architecture (CPA) of your on-premise environment or you could decide to run all your VDI workloads in VMware in AWS for some reason.

Horizon Cloud pod architecture for VMware Cloud on AWS

A number of use cases can motivate the choice for this architecture such as:

    • Datacenter expansion: Expand the capacity of your VDI infrastructure without investing in new hardware. Burst capacity such as seasonal activities may benefit from it greatly.
    • Application locality: Put your VDI closer to your published AWS services to reduce application latency to a minimum (Data Gravity).
    • Business Continuity / Disaster Recovery: Adding a Horizon pod in AWS to your CPA will open the doors to BC and DR to recover quickly from a failure in your on-premise SDDC.

VMware Tanzu Kubernetes Grid Plus on VMware Cloud on AWS

Tanzu Kubernetes Grid Plus (TKG+) is VMware’s upstream Kubernetes runtime which provides open-source technologies and an automation solution to deploy scalable, multi-cluster Kubernetes environments.

VMware in AWS now lets you deploy an SDDC in the cloud that contains all the components required to leverage Tanzu Kubernetes Grid. You benefit from elastically scalable resources in the cloud for your containerized workloads.

Tanzu Kubernetes Grid (TKG) can now span to VMware Cloud on AWS

VMware Cloud on AWS Outpost

As mentioned, it is no secret that VMware has been going full steam ahead with the cloud and tightening the partnership with AWS by integrating even more with their product offering. In doing so, VMware Cloud on AWS was made available for AWS Outposts, as announced during VMworld 2021.

AWS Outposts is a managed service offering proposed by AWS where the company delivers and physically installs the “Outpost” in your location, meaning you get the AWS experience on compute capacity except it is located on-premise or in any datacenter or co-location of your choosing. It is obviously managed by AWS, so you don’t need to worry about software updates or any of the nitty-gritty of infrastructure lifecycling. The use cases related to AWS Outposts include low-latency requirements, local data processing, and many more.

Data sovereignty was a significant driver in the adoption of VMware Cloud on AWS outpost as the number of large organizations and government bodies looking to protect their data against foreign legislations is growing at a rapid pace. VMware actually launched the VMware Sovereign Cloud initiative to address these customer needs.

Getting started with VMC on AWS

Planning your hybrid cloud journey

Planning your shift to hybrid cloud is an important step in the journey, especially making sure the network aspect is correctly configured and doesn’t contain security issues.

As opposed to listing requirements and prerequisites that change quite regularly, I would rather point you to the VMware Cloud Launchpad, described in VMware’s words as “A One-Stop-Shop for all VMware Cloud Solutions and Infrastructure”.

It is clear and well organized; you will find guidance and a lot of learning material to get started with VMware in AWS. Again, you will also find some information in our guide to hybrid cloud.

The VMware Cloud Launchpad helps you plan and prepare for your hybrid cloud journey

Deploying virtual machines

Deploying a VM directly to your AWS SDDC is fairly similar to what you would do in your on-premise environment and can be done in several ways. VMware actually redirects to the regular vSphere documentation when it comes to it.

  • Creating a new VM from scratch.
  • Cloning existing VMs or templates.
  • Deploying an OVF or OVA template.
  • Deploying a VM from an uploaded OVF or OVA file.

Because the SDDC runs VMware in AWS, some operations available in your on-premise environments won’t be possible in the cloud SDDC such as RDM, SCSI BUS sharing, Hyperthreading, virtual disk types… You can find the complete list of unsupported features in the VMware Documentation.

Content libraries let you synchronize resources from the on-premise datacenter to the cloud SDDC

Note that operations will be significantly facilitated if you leverage vSphere Content Libraries. You can publish a library from your on-premise environment and have the vCenter running on VMware in AWS subscribe to it. That way you get to manage your ISO and templates from a single place.

Migrating virtual machines

Most companies committing to a hybrid cloud model will almost surely get to the discussion of migrating workloads between environments, be it from or to the SDDC running in AWS. We call it a Hybrid migration.

The fact is there are again multiple ways to migrate virtual machines to VMware in AWS:

      • VMware HCX

VMware HCX is an application mobility platform that facilitates workload mobility across environments without requiring a reboot or network interruption. It is particularly relevant in bulk migration scenarios where hundreds of VMs have to be moved.

      • vMotion (cold)

You can also move VMs in a powered-off state when VM downtime is not an issue. That way you ensure CPU compatibility, and VMs connected to standard switches can be moved.

      • vMotion (live)

The one and only vSphere vMotion can be used to relocate your workload (vDS networking only) between your on-premise and cloud SDDCs. It will obviously move the storage of the VM as well and maintain its active state. It can be done from the vSphere client as long as Hybrid linked mode is enabled and your SDDC runs supported vSphere versions (vSphere 6.7U2/6.5U3 or higher).

Note that EVC is disabled in the Cloud SDDC. Hence, it is recommended to enable Per-VM EVC or set your on-premise SDDC’s EVC mode to Broadwell. This will ensure that you can migrate live workloads between your SDDCs.

Per-VM EVC ensures CPU compatibility for workload migrations across SDDCs

Accessing AWS services

While we are talking about VMware in AWS, I also wanted to touch on AWS’s native services. When deploying an SDDC with VMC on AWS, a high-speed, low-latency link is created with your Amazon VPC.

This means your workloads will run closer to your cloud services such as EC2 or S3, offering LAN-like communications. This is called data gravity and is highly beneficial for latency-sensitive applications accessing cloud services.

Pricing

The pricing model for VMware in AWS is based on the number and type of hosts that you will use in your cloud SDDC. You can either choose to pay on-demand ($/host/hour) or go for a 1- or 3-year(s) subscription ($/host/year). Paying upfront for a subscription will obviously save you money over time but the investment is significant.

If you want to know more, head over to the VMware Cloud on AWS pricing calculator to estimate the costs.

Number of hosts

The number of hosts you run will depend on your needs but there are minimums. Production environments can start with as little as 2 hosts backed with i3.metal servers or 3 hosts backed with i3en.metal servers. You can then scale up as demand increases.

Note that a time-bound, low-cost single-host option is also available for organizations willing to try the service to see if it works with their environment and adds value. Be mindful that if you don’t scale up the cluster within 30 days, the SDDC is deleted along with the data stored on it. It starts at around $7/hour, which sounds reasonable, but watch out as it will set you back about $5,110 if it runs for the full 30 days!
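
The math behind that warning is simple enough to sanity-check yourself. The rate used below is only the approximate single-host figure quoted above, not current pricing; use the official pricing calculator for real numbers.

```python
# Rough cost sanity check for the single-host option (~$7/host/hour as quoted above).
# Illustrative only; consult the VMware Cloud on AWS pricing calculator for real rates.
HOURS_PER_DAY = 24

def on_demand_cost(rate_per_hour: float, hosts: int = 1, days: int = 30) -> float:
    return rate_per_hour * HOURS_PER_DAY * days * hosts

print(f"Single host for a full month: ${on_demand_cost(7.10):,.0f}")
# -> Single host for a full month: $5,112 (in line with the ~$5,110 mentioned above)
```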

Types of hosts

When planning for your cloud SDDC, you can choose from 2 server configurations for which the cost will vary.

VMC on AWS server configurations as of April of 2021

To properly protect your VMware environment, use Altaro VM Backup to securely backup and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Conclusion

In the last few years, it’s been fascinating to witness VMware’s vision “Any app, any cloud” come to life thanks to a series of acquisitions and partnerships with major tech companies in the industry like Amazon AWS. After four years of continuous improvements, VMware in AWS is getting traction and customers are getting on board.

While VMware in AWS might appear, and rightly so, like a pretty expensive service, it will bring some much-needed breathing space to IT departments that struggle to balance CAPEX management and innovation. By shifting some of those large up-front acquisitions to an OPEX model, you don’t need to worry about amortization, hardware, cabling, patching, upgrades… anymore.

VMware also thought about vSphere administrators as your knowledge and skills are transferable to VMware Cloud on AWS thanks to it using the same management tools.

If you want to give it a go, the single-host option lets you test the service for 30 days for about $7 per hour. Remember not to store any important data on it if you are not going to scale up the SDDC as it will be deleted at the 30 days mark.

Alternatively, you can have a glimpse at VMware in AWS in the dedicated hands-on-labs offered for free by VMware.

The post What is VMware Cloud on AWS (VMC on AWS)? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-cloud-aws/feed/ 0
NSX-T vs NSX-v: What is the difference? https://www.altaro.com/vmware/nsx-t-vs-nsx-v/ https://www.altaro.com/vmware/nsx-t-vs-nsx-v/#respond Fri, 19 Feb 2021 07:25:55 +0000 https://www.altaro.com/vmware/?p=20995 What is VMware NSX? What is the difference between NSX-V and NSX-T? What advantages does NSX-T offer over NSX-V? Get the answers by reading the article.

The post NSX-T vs NSX-v: What is the difference? appeared first on Altaro DOJO | VMware.

]]>

There have been many advancements in modern IT infrastructure. Virtualization has totally revolutionized the way that organizations view compute, storage, and networking. The notion of “virtualizing” the modern datacenter was a paradigm shift in many areas of IT infrastructure and datacenter technology. Workloads abstracted from the physical hardware have opened up tremendous efficiencies, and advantages in the way businesses can provide digital resources.

Along with server virtualization that allowed businesses to abstract running operating systems from the physical hardware, network virtualization has brought tremendous networking advantages. Much as they were in the area of server virtualization, VMware has been a pioneer in the area of network virtualization. VMware NSX is well-known in network virtualization and is a powerful solution that enables network virtualization, both in the data center, public cloud, and multi-cloud environments.

What challenges exist in data centers still leveraging traditional networking? What is VMware NSX? What is the difference between NSX-V and NSX-T? What advantages does NSX-T offer over NSX-V? What is the migration process to get from NSX-V to NSX-T? What features does NSX-T offer today to empower modern workloads?

Traditional data center networking challenges

VMware’s Software-Defined Data Center (SDDC) vision incorporates next-generation virtualization technologies. It allows organizations to realize automated, non-disruptive deployments of business-critical infrastructure in a way that helps reduce operational complexity and extend technical agility to deliver applications. By now, most organizations have virtualized most of their server infrastructure in their data centers and are also taking advantage of software-defined storage technologies.

Datacenter networks have historically been extremely slow to respond to the changing needs of the enterprise. Networking is often too rigid, complicated and presents many barriers to innovation and realizing the full potential of virtualizing other data center components such as servers and storage. Traditional networking technologies constrain the advantages gained by virtualizing servers and storage.

Traditional networking presents the following challenges:

  • Provisioning new routers, switches, and other technologies is slow
  • Proprietary technologies from specific networking vendors have historically locked traditional networking in
  • Automated network configuration is generally non-existent
  • Changes generally require manual interaction
  • Even for experienced network engineers, network changes are error-prone
  • Many traditional network constructs such as VLANs, firewalls, load balancers, ACLs, and others present roadblocks to fast-paced development and DevOps-style infrastructure
  • Traditional networking depends on workload placement
  • Workload mobility is limited
  • Firewall rule sprawl
  • VLAN and IP topology sprawl

What if the network could be abstracted from the underlying physical network infrastructure and placed into the software layer? VMware NSX allows eliminating the challenges mentioned above with traditional physical networks.

What is VMware NSX?

VMware NSX is a robust software-defined networking (SDN) technology that solves complex networking challenges in the modern data center environment. It enables organizations to move rapidly to deploy new networks, change existing network designs, and effectively automate networks in code. It allows businesses to connect their virtual cloud networks and protect applications across on-premises data centers, multi-cloud environments, bare-metal workloads, and modern container infrastructure with ease. Aside from delivering software-defined networking capabilities to the enterprise, VMware NSX empowers businesses with an L2-L7 security virtualization solution. With VMware NSX, companies can manage their virtual networking and network security from a single pane of glass UI with the management and security tools in a seamless interface.

VMware NSX brings both networking and security constructs closer to where the application lives. Applications can reside inside virtual machines, bare-metal physical servers, and modern containerized applications. Regardless of where the application lives or the underlying physical network, networks can be provisioned and managed independently. Since VMware NSX is a software-defined solution and does not rely on physical networking gear, it provides logical networking and security capabilities, including:

  • Logical switching – VMware NSX provides logical switching capabilities that extend Layer 2 switching boundaries across a routed Layer 3 fabric. The extensions can include both within and across data center environments and public/private clouds.
  • Routing – With VMware NSX, organizations have a much more modern approach to Layer 3 routing distributed in the hypervisor kernel.
  • Gateway firewall – The software-defined gateway firewall provides stateful firewall capabilities up to Layer 7, with NSX providing app identification and distributed FQDN whitelisting. Again this is distributed with centralized policy and management.
  • Distributed firewall – Similar to the gateway firewall, the distributed firewall as part of the VMware NSX solution provides stateful Layer 7 firewall capabilities with app ID and distributed FQDN whitelisting
  • Load balancing – Organizations can use the VMware NSX load balancer to provide L4-L7 load balancing features with SSL offloading. Other features such as server health checks and passive health checks and API interaction are part of the solution.
  • Virtual Private Network (VPN) – Site-to-Site VPN, remote-access VPN, and cloud gateway services are possible with VMware NSX VPN
  • NSX Gateway – You can bridge physical Layer 2 VLANs from the physical network with NSX overlay networks using the NSX Gateway
  • NSX Intelligence – The NSX Intelligence platform uses automated artificial intelligence (AI) and machine learning (ML) to provide continuous monitoring and visualization for network traffic flows to recognize malicious traffic and intent
  • NSX Distributed IDS/IPS – VMware NSX has evolved to provide centralized advanced threat detection and prevention engine that allows detecting and preventing east-west movement of malicious threats. It provides a distributed architecture and application context in software that can replace the functionality provided by discrete security appliances.
  • Federation – For organizations managing multiple VMware NSX environments, the Federation capability allows managing and configuring numerous environments with a single pane of glass using centralized policy and enforcement
  • Virtual Routing and Forwarding (VRF) – For multi-tenant environments, VMware NSX provides complete data plan isolation using the NSX Tier 0 gateway that provides separate routing tables, NAT, and edge firewall support in each VRF.
  • NSX Data Center API – Developers and DevOps automation tools have access to RESTful APIs that allow interacting with VMware NSX programmatically (see the short sketch after this list).
  • Operations – VMware NSX includes native tools such as traceflow, overlay logical SPAN, and IPFIX and also allows easy integration with other tools such as vRealize Network Insight (vRNI).
  • Quality of Service (QoS) – Define software-based QoS features to applications
  • Context-aware micro-segmentation – Security groups and policies with VMware NSX can automatically be created and updated based on various environmental attributes outside of the typical network constructs such as IP address, port, and others.
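
As a quick illustration of the API point above, the sketch below lists logical segments through the NSX-T Policy REST API using Python. The manager address and credentials are placeholders, and the endpoint should be checked against the API reference for your NSX-T version.

```python
# Minimal sketch: listing NSX-T logical segments via the Policy REST API.
# Manager URL and credentials are placeholders; verify the endpoint against
# the NSX-T API reference for your version.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"  # placeholder
AUTH = ("admin", "changeme")                       # placeholder credentials

def list_segments() -> None:
    resp = requests.get(
        f"{NSX_MANAGER}/policy/api/v1/infra/segments",
        auth=AUTH,
        verify=False,  # lab only; use proper certificate validation in production
    )
    resp.raise_for_status()
    for segment in resp.json().get("results", []):
        print(segment["id"], segment.get("display_name"))

if __name__ == "__main__":
    list_segments()
```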

The logical, software-defined architecture allows easily provisioning networking non-disruptively over existing physical networks. VMware NSX logical networks can extend across data centers, public and private cloud environments, containers, and bare-metal servers.

How does VMware NSX work?

Software-defined network solutions, including VMware NSX, make use of an underlay and an overlay network. They provide the ability to separate the control and data planes between the two. Let’s see how both the underlay and overlay networks play a part in network communication with a software-defined network (SDN) solution.

  • Underlay – The underlay network includes the physical network infrastructure that enables the transmission of packets. The underlay network also consists of the routing protocols needed to allow for IP connectivity between locations. Routing protocols including OSPF, IS-IS, and BGP are examples of common routing protocols for this purpose.
  • Overlay – The overlay network is where the “magic” of a software-defined network happens. The overlay network is formed “on top of” the underlay physical network architecture. Both the data plane traffic and control plane signalling are controlled within the virtualized network. Multiple virtual networks can overlay on top of a single physical network. Overlay networks use overlay protocols such as VXLAN, NVGRE, OTV, and GENEVE

A high-level overview of the Overlay and Underlay network in software-defined networking

VMware NSX key benefits

VMware NSX provides many critical benefits to organizations looking to modernize networking operations in their environments. These include the following:

  • Micro-segmentation – The notion of having a “trusted” internal network is no longer practical with new-age threats and the way attackers are compromising networks via east-west attacks
  • Automated network provisioning – The ability to automate network provisioning, configuration, and security policy management allows businesses to be much more agile in their operations
  • Consistent management of networking and security policies – Since logical networks can be controlled through code, it allows much more consistent management of networking and security policies
  • Built-in network visualization and monitoring – VMware NSX provides monitoring and visualization of application topologies, security policies, and flow monitoring
  • Advanced east-west threat prevention and distributed IPS/IDS – To bolster the built-in micro-segmentation capabilities of VMware NSX, distributed IPS/IDS provides automated threat protection and prevention capabilities. The benefits include elastic throughput, reduced false positives, and improved utilization of computing capacity.

VMware NSX use cases

These are alluded to in the key benefits covered above. However, what are the specific use cases for using VMware NSX solutions? These include the following:

  • Security
  • Multi-cloud networking
  • Network automation
  • Networking and security for cloud-native applications

Security

Arguably the most obvious use case for VMware NSX is security. There is a new cybersecurity best practice model known as “Zero-Trust.” The traditional network operates on the notion of an “untrusted” zone, typically the Internet, and a “trusted” zone, which has historically included the internal LAN. With new threats that have emerged, such as ransomware and other malicious tools used by attackers, the “trusted” network is no longer a practical approach to security.

Using the Zero-Trust approach, all network traffic is viewed as untrusted, regardless of where the traffic originates. In the Zero-Trust model, even if two servers share the same network, they should not implicitly trust all network traffic communicated between them. Using micro-segmentation, distributed IPS/IDS, and context-aware firewalling, VMware NSX allows organizations to have the tools to implement a Zero-Trust model in their networks effectively. It helps to prevent attackers from compromising internal resources due to lateral east-west movement.

Multi-cloud networking

Traditional networking in a single on-premises data center can be difficult, let alone networking between data centers and even on-premises and cloud environments. With VMware NSX software-defined solutions, networking and security boundaries can be extended between heterogeneous sites. It allows stretching sites and moving workloads between on-premises and cloud environments without disruption.

Traditional physical networking cannot achieve the mobility and flexibility that VMware NSX provides for workloads. It decouples the requirements that a physical network exists in a particular location and allows networks to be placed where logically needed to solve challenging technical and business use cases.

Network Automation

One of the compelling capabilities afforded by the VMware NSX platform is the ability to automate the solution. The deployment of full-stack solutions can be accomplished in code without entering a CLI interface or deploying physical appliances. VMware NSX exposes various APIs that allow interacting with the solution through RESTful API calls. You can also integrate VMware NSX with other automation solutions such as Ansible, Terraform, and vRealize Automation, automation solutions commonly used within the enterprise.

Networking and security for cloud-native applications

VMware NSX allows your organization to provide both networking and security capabilities for modern workloads and containerized applications. You can do this with a very granular policy based on each container. It allows applying the same micro-segmentation capabilities for virtual machines to containers.

VMware NSX-V vs. VMware NSX-T

If you have been keeping up with the evolution of VMware NSX, you will be quick to note VMware NSX has evolved in the past few years from the early days of its initial releases. VMware NSX now comes in two different versions of the product. There are VMware NSX-V and VMware NSX-T. Each version of VMware NSX has specific use cases and characteristics. It is essential to recognize the differences between the two solutions and understand which version you should deploy. Let’s take a detailed comparison between VMware NSX-V and VMware NSX-T to see how the solutions are different, why NSX-T is an improvement over NSX-V, and the migration path from NSX-V to NSX-T.

What is VMware NSX-V?

VMware introduced the original VMware NSX product after VMware’s purchase in 2012 of a company called Nicira. VMware integrated Nicira’s R&D teams and shortly after that introduced the first version of VMware NSX. This became the mainstream VMware NSX-V. The “V” in the NSX-V solution stands for “vSphere.” VMware NSX-V is a vSphere-only solution. It is VMware’s software-defined networking platform supported to run in a vSphere environment. Currently, the solution is marketed as NSX Data Center for vSphere. Installing VMware NSX-V requires you have a vCenter Server in the environment. When VMware NSX-V is installed, it registers with your vCenter Server, and the solution integrates into vSphere through the connection with vCenter.

The vCenter Server is defined as the compute manager for the VMware NSX-V solution. VMware NSX-V connections are made through the vCenter Server APIs to interact with vSphere and onboard ESXi hosts. One of the reasons that VMware NSX-V is reliant on vCenter Server is the vSphere Distributed Switch (VDS) requirement for the more advanced VMware NSX-V functionality, including logical switches, etc. The vSphere Distributed Switch is a vCenter Server construct that requires a vCenter Server in the environment. Unlike the vSphere Standard Switch that resides on the ESXi host itself, vCenter Server maintains the vSphere Distributed Switch (VDS). The vCenter Server synchronizes the VDS switches with the ESXi hosts.

As described earlier with software-defined networking, an overlay network creates the virtual network on top of the underlay network or the physical network that transmits the packets. To create the overlay network, VMware NSX-V uses the VXLAN network encapsulation protocol. What is VXLAN? VXLAN is short for Virtual Extensible LAN. It is an encapsulation protocol that provides connectivity between data centers through tunneling. It effectively allows connecting two Layer 2 segments over Layer 3. VXLAN is not a VMware-only technology. It is an open standard used in many different vendor technologies, including EVPN, Cisco ACI, etc.

VXLAN uses packet encapsulation similar to VLANs that creates VXLAN tunnels between VXLAN tunnel endpoints (VTEPs). The problem with VLANs is they were developed with a fixed 12-bit field, which means there are roughly 4000 VLANs that can be provisioned in a single environment. However, VXLAN overcomes this issue as each VXLAN segment uses a 24-bit segment ID known as the VXLAN Network Identifier (VNI) for identification. The 24-bit segment ID allows up to 16 million unique VXLAN segments in the same administrative domain as opposed to the 4000 with VLANs.
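
The difference in scale follows directly from the width of the identifier field, as this quick calculation shows:

```python
# VLAN IDs are 12 bits wide; VXLAN Network Identifiers (VNIs) are 24 bits wide.
vlan_id_bits = 12
vni_bits = 24

print(f"Possible VLAN IDs:   {2 ** vlan_id_bits:,}")  # -> 4,096
print(f"Possible VXLAN VNIs: {2 ** vni_bits:,}")      # -> 16,777,216
```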

Through VXLAN, VMware NSX-V can create logical, virtual networks and the ability to “stretch” and architect networks in a way that solves very complex problems. It also allows creating virtual constructs such as the L2 logical switch, distributed logical routers, load balancers, and other features.

Why is it being deprecated?

As organizations have transitioned from mainly on-premises data centers to leverage the cloud for many workloads, it became apparent a new version of VMware NSX was needed. A modern network virtualization solution needs to scale beyond VMware vSphere and allow organizations to use network virtualization with modern cloud-native platforms. For quite some time, VMware NSX-V was VMware’s preferred network virtualization solution for VMware vSphere. However, VMware introduced a new version of VMware NSX, known as VMware NSX-T (which will be described in detail to follow).

The early stages of the VMware NSX-T release lacked many of the features that enterprise customers had with VMware NSX-V and lacked the seamless installation process with VMware NSX-V. Since VMware NSX-V has been around much longer than NSX-T, there were many more third-party integrations with VMware NSX-V than NSX-T, as you would expect. In the early stages of VMware NSX-T, customers with many third-party integration requirements were better suited to install VMware NSX-V.

VMware NSX-V at this point is considered the legacy VMware NSX solution in the portfolio of VMware NSX Data Center solutions. VMware NSX-T is a much more robust and fully-featured modern implementation of VMware NSX that is no longer limited to the confines of VMware vSphere. VMware is steering customers with greenfield implementations of VMware NSX to install VMware NSX-T, even in VMware vSphere environments. They have also created a migration process with the technical tools needed to migrate from the VMware NSX-V platform to NSX-T.

VMware is committed to supporting NSX-V environments until the end of general support date. The end of general support date is January 16, 2022. The end of technical guidance given by VMware for VMware NSX-V follows on January 16, 2023.

What is VMware NSX-T?

VMware NSX-T is the new, modern release of VMware NSX Data Center. NSX-T is the solution VMware network virtualization is moving forward with, covering all platforms, including VMware vSphere. The “T” in NSX-T stands for “Transformers.” Intuitively, NSX-“Transformers” signals that the solution transforms beyond the initial use case of network virtualization for VMware vSphere and into the realm of public cloud and modern containerized workloads.

VMware NSX-T is a very flexible solution that businesses can implement with VMware vSphere, KVM hypervisors, bare-metal servers, and containerized workloads. It is the network virtualization platform VMware has chosen for its VMware Cloud on AWS IaaS offering, and it also runs under the hood of VMware Cloud on AWS Outposts.

VMware NSX-T is VMware's solution for modernizing networking and security in enterprise and cloud-native environments and everything in between. You can think of VMware NSX-T as a multi-cloud solution that allows organizations to stitch networking together in an effective, efficient, and seamless way. This type of solution is needed because modern applications may span many different infrastructure components, including virtual machines, containers, and even bare-metal workloads.

Businesses need a solution that is API-driven and flexible, with intrinsic security and streamlined operations, to solve the challenges posed by the diverse infrastructure and cloud environments that back modern applications. These are the types of challenges VMware NSX-T is purpose-built to address.

VMware NSX-T removes the requirement for VMware vCenter Server to deploy the solution. You can onboard ESXi hosts as NSX-T transport nodes without a vCenter Server altogether.

Adding a VMware NSX-T ESXi transport node

Compatibility and interoperability with VMware vSphere remain very strong. VMware vCenter Server is now referred to as a Compute Manager. You add vCenter as a Compute Manager to allow easy integration with ESXi hosts if you want to onboard them en masse.

Adding vCenter Server as a Compute Manager in NSX-T

This is because VMware NSX-T is a standalone solution in its own right. It does not depend on a particular hypervisor compute manager such as VMware vCenter Server to function. However, many of the basic concepts highlighted with VMware NSX-V still apply to NSX-T. VMware NSX-T uses an encapsulation protocol to create an overlay network on top of the physical network underlay.

Adding virtual network segments in NSX-T

Instead of the very common VXLAN, VMware NSX-T has moved to the GENEVE network encapsulation protocol. What is GENEVE? GENEVE is short for Generic Network Virtualization Encapsulation. Compared with other popular network encapsulation protocols, GENEVE is widely regarded as the modern choice going forward. It helps solve many of the problems and limitations found in earlier encapsulation protocols such as VXLAN. While it works almost identically to VXLAN on the data path, it offers more flexibility in its implementation because of its control plane independence.

GENEVE is an open standard that does not include information or specification for the control plane. The IETF draft states this:

Although some protocols for network virtualization have included a control plane as part of the tunnel format specification (most notably, the original VXLAN spec prescribed a multicast learning-based control plane), these specifications have largely been treated as describing only the data format. The VXLAN frame format has actually seen a wide variety of control planes built on top of it.

There is a clear advantage in settling on a data format: most of the protocols are only superficially different, and there is little advantage in duplicating effort. However, the same cannot be said of control planes, which are diverse in very fundamental ways. The case for standardization is also less clear given the wide variety in requirements, goals, and deployment scenarios.

Another slight variation, when compared to VXLAN, is the terminology difference between the tunnel endpoints. While VXLAN tunnel endpoints are referred to as VTEPs, the GENEVE tunnel endpoints are simply called tunnel endpoints (TEPs). With GENEVE under the hood of VMware NSX-T, NSX-T has “future-proofing” built into the solution from an encapsulation protocol perspective.

While VMware NSX-V uses a more traditional approach to routing, VMware NSX-T introduces a two-tier routing architecture, built around Tier-0 and Tier-1 gateways, that is better suited to multi-tenant environments and to scaling today's very complex cloud architectures.
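
To make the two tiers concrete, the sketch below uses NSX-T's declarative Policy REST API to attach a hypothetical Tier-1 gateway to an existing Tier-0 and connect an overlay segment to it. The manager FQDN, credentials, and object names are placeholders, and the script assumes a Tier-0 called t0-gw and a default overlay transport zone already exist; treat it as a minimal sketch rather than a production script.

```python
import requests

NSX = "https://nsx-manager.example.com"      # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")                 # basic auth, for the sketch only

def patch(path: str, body: dict) -> None:
    """PATCH a declarative Policy API object (creates it if it does not exist)."""
    r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()

# Tier-1 gateway linked to an existing Tier-0 (north-south routing stays on the Tier-0)
patch("/infra/tier-1s/t1-tenant-a", {
    "display_name": "t1-tenant-a",
    "tier0_path": "/infra/tier-0s/t0-gw",    # assumes a Tier-0 named 't0-gw' already exists
})

# Overlay segment attached to that Tier-1, carried over GENEVE between transport nodes
patch("/infra/segments/seg-web", {
    "display_name": "seg-web",
    "connectivity_path": "/infra/tier-1s/t1-tenant-a",
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
})
```

The same declarative pattern extends to other constructs such as distributed firewall rules, which is a large part of what makes the platform automation-friendly.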

Why should organizations migrate to VMware NSX-T?

If you are running VMware NSX-V, beyond the end of life looming in 2022, why should you migrate to VMware NSX-T from a feature perspective? VMware NSX-T provides an extremely robust solution with modern features and capabilities that align with the cloud-native applications organizations are using today. VMware cites several feature-related reasons that organizations should migrate from NSX-V to NSX-T. These include:

  • Scale-out networking, including NSX Federation – VMware NSX-T provides the means to federate and manage numerous installations of VMware NSX across multiple locations.
  • Full-stack networking for modern distributed applications – This sets VMware NSX-T apart from VMware NSX-V. NSX-T is purpose-built to handle modern applications, including containerized workloads, something NSX-V was never designed to do.
  • NSX Intelligence with best-in-class security – NSX Intelligence is a modern AI and ML-driven solution that provides proactive security intelligence in your environment to find and prevent cybersecurity attacks.
  • Networking and security automation – NSX-T provides a robust API-driven interface that helps to simplify network automation.
  • More intuitive dashboard and monitoring capabilities

New features in VMware NSX-T 3.1

VMware NSX-T 3.1 contains many new features, including the following:

  • Cloud scalability improvements
  • Simplified operations
  • East-west traffic security improvements
  • Distributed IDS/IPS
  • NSX Intelligence 1.2
  • NSX-V to NSX-T Migration for large scale networks

Cloud scalability improvements

VMware NSX-T 3.1 contains many new cloud scalability improvements. These include NSX-T auto-scaling built into the platform, allowing the deployment to grow as your network needs change. NSX-T 3.1 also provides clustering support for the NSX Global Manager, simplified disaster recovery workflows, a Terraform provider for better automation, and improved scalability for large deployments.

New enhancements to multicast traffic have also been added, enabling multicast in multi-tenant environments. New tenant multicast deployments can be automated through APIs and no longer require changes to the network's underlying configuration.

Simplified operations

VMware NSX-T 3.1 provides simplified operations for both private and public cloud environments. It includes deep integration with vRealize Network Insight (vRNI) to enhance network modeling, configuration, and intent verification, which helps you understand the impact of network changes and improves network planning overall.

With the vSphere 7.0 and later releases, VMware introduced the new vSphere Lifecycle Manager (vLCM). With vLCM, organizations get simplified, NSX-aware lifecycle management across their environment, including lifecycle management of NSX-T itself.

VMware NSX-T provides a better dashboard for monitoring and viewing traffic, compliance, and other reports, allowing admins to get information about the network quickly.

VMware NSX-T monitoring and compliance report

The VMware NSX-T dashboard is searchable and displays information about the virtual network environment and security in a single intuitive dashboard.

VMware NSX-T provides an intuitive, seamless dashboard for viewing the environment

East-west traffic security improvements

VMware has introduced the ability to purchase the Internal Firewall and Advanced Threat Prevention (ATP) independently of the networking features in VMware NSX-T 3.1. The Advanced Threat Prevention capabilities include Distributed IDS/IPS, Network Traffic Analysis with Network Detection and Response, and Network Sandboxing.

VMware NSX-T provides network introspection capabilities for east-west traffic

Distributed IDS/IPS

VMware NSX-T 3.1 introduces the world’s first distributed IDS/IPS solution that provides the ability to detect and stop east-west lateral threat movement across your environment. It also helps to replace discrete hardware appliances and strengthens compliance capabilities.

The IDS/IPS's virtual patching capability helps shield vulnerable workloads at the network level until they can be properly patched, without relying on endpoint security agents or signatures on the workload itself. From a performance perspective, the overhead is minimal.

VMware NSX-T Distributed IDS/IPS

NSX Intelligence 1.2

With NSX Intelligence 1.2, VMware has extended NSX Intelligence to cover physical servers and improved recommendations across the entire environment, including VMs and bare-metal servers. This release also adds L7 context profile recommendations with App-ID support, and visualizations now display user- and process-level context from workloads.

VMware NSX-T NSX Intelligence provides a modern security solution

VMware NSX-T application context

NSX-V to NSX-T Migration for large scale networks

VMware has added integration with vRealize Automation and delivered Migration Coordinator enhancements, all to help organizations accelerate their migrations from NSX-V to NSX-T. The Migration Coordinator's new capabilities allow customers to lift and shift configuration such as firewall rules from NSX-V environments over to NSX-T.

NSX-T migration coordinator

Comparing NSX-V with NSX-T

Any way you look at it, VMware NSX is a revolutionary technology, and it has evolved into an even more powerful solution since its inception in 2012. Let's review by comparing the differences between the two solutions.

Reliance on VMware vCenter

VMware NSX-V is a VMware vSphere-only solution. It is VMware's initial NSX offering, based on the original code acquired from Nicira. VMware NSX-V requires a connection to vCenter Server for integration with the ESXi hosts. Another reason for the vCenter Server dependency is the requirement for the vSphere Distributed Switch (VDS), which is needed for the advanced functionality VMware NSX-V provides, such as logical switching.

VMware NSX-T does not require a vCenter Server; it can interact with ESXi hosts directly and onboard them as transport nodes. VMware vCenter Server can still be used as a compute manager to integrate with multiple ESXi hosts at once. The key point is that VMware NSX-T is not a vSphere-specific network virtualization platform.

Overlay technology

VMware NSX-V uses Virtual Extensible LAN (VXLAN) as the overlay technology that creates the virtualized network infrastructure. VXLAN is a vendor-neutral protocol whose 24-bit segment ID provides roughly 16 million possible virtual network segments, far more than the traditional VLAN, which offers roughly 4,000 usable network segments.

VMware NSX-T uses the Generic Network Virtualization Encapsulation (GENEVE) as the overlay network encapsulation protocol. GENEVE is regarded as an even more modern encapsulation protocol that helps overcome some of the limitations of more traditional network encapsulation protocols like VXLAN.

Routing

VMware NSX-V uses a more traditional routing architecture. VMware NSX-T introduces a multi-tier routing architecture using what are known as Tier-0 and Tier-1 gateways, a much better approach for today's multi-tenant environments.

Multi-cloud capabilities

In terms of multi-cloud capabilities, VMware NSX-V is limited. Since it is limited to VMware vSphere environments, it is not considered a multi-cloud platform. Even “VMware” cloud environments such as VMware Cloud on AWS do not use VMware NSX-V, but rather VMware NSX-T.

VMware NSX-T is a multi-cloud network virtualization technology. Since it is not a VMware vSphere-only technology and is designed for modern workloads, VMware NSX-T is a true multi-cloud platform. It allows organizations to leverage the power of virtual networking on-premises as well as in the cloud.

Feature parity

In the early days of the solution, VMware NSX-T did not have feature parity with VMware NSX-V. However, now, VMware NSX-T has effectively surpassed VMware NSX-V in terms of what it can do and the powerful integrations with other solutions like NSX Intelligence.

Lifecycle and End of Life

VMware NSX-V goes end of life in 2022, with extended technical guidance ending in 2023. Considering this fact alone, VMware NSX-T is the solution organizations will be installing for greenfield deployments moving forward, and environments currently running VMware NSX-V will need to migrate to VMware NSX-T.

VMware NSX-T is the modern solution going forward for both VMware vSphere and multi-cloud environments, and new features and functionality will land there. VMware NSX-V will continue to be maintained and patched, and it would not be surprising to see a few new features added, but for the most part VMware NSX-T will receive the majority of new capabilities.

The post NSX-T vs NSX-v: What is the difference? appeared first on Altaro DOJO | VMware.

What’s new in VMware Horizon 8? https://www.altaro.com/vmware/horizon-8/ https://www.altaro.com/vmware/horizon-8/#respond Thu, 11 Feb 2021 14:51:04 +0000 https://www.altaro.com/vmware/?p=20977 VMware Horizon has gone through many iterations since its launch in 2009. This article will walk you through all the news and changes in Horizon version 8.

While VMware Horizon was already one of the most advanced EUC solutions on the market, its popularity gained even more traction in 2020. The lockdowns imposed by governments all over the world to counter the spread of the pandemic forced most companies to allow their employees to work from home. Unfortunately, many executives were still reluctant about teleworking due to a lack of trust, and many organizations were simply not prepared for it.

This is where VMware Horizon came in handy and helped some of those businesses offer a solid and flexible infrastructure for their employees to log in and work remotely. Others went the more traditional Microsoft RDS route or opted for a mix of RDP and VDI to increase the maximum number of concurrent users.

VMware Horizon has gone through many iterations since its launch in 2009 as View and was renamed Horizon View with version 6.1.0 at the end of 2015. In 2020, VMware changed the naming convention once again and opted for a 'YYMM' format (i.e., year and month of the release), similar to Microsoft's approach, in order to align with industry versioning standards. The latest version was released as Horizon 8 2006 (i.e., June 2020). This new format applies to Horizon Server, Horizon Client, and Horizon Agent.

Now let’s see what the new features are.

Parallel upgrade

The Horizon Cloud Pod Architecture allows running multiple pods to scale out the VDI infrastructure. The upgrade process used to be done one pod at a time, which was inconvenient for companies with large infrastructures as it would take a very long time. With Horizon 8, it is now possible to upgrade up to 3 pods at a time.

CBRC 2.0

The CBRC feature has been part of vSphere for many releases now. It is a host RAM-based caching solution that aims at improving read operations during boot storms in VDI environments. CBRC 1.0 had a maximum cache size of 2GB. Since vSphere 6.5, CBRC 2.0 is the default mechanism and offers up to 32GB of cache. It is the only available choice as of vSphere 7.0 and is supported in Horizon 8.

IC Parents alarms

If you like a clean vCenter and don't like false positives, you will be happy to know that it is now possible to disable those pesky memory alerts on the IC (instant clone) parent VMs.

Deployment Options

In Horizon 8, you are now offered a choice as part of the connection server installation process when deploying a new pod. You can choose from several scenarios, such as a typical on-premises deployment or a public cloud running a VMware SDDC service.

VMware Horizon Connection Server

Digital Watermark

A new watermark feature was added to protect ownership and ensure the authenticity of intellectual property. It is only available with Blast Extreme and PCoIP (not RDP). It is configured via GPO, where you can adjust the text with variables such as username and computer name, as well as the image, rotation, and opacity.

Horizon 8 Digital Watermark

REST API

Some environments interact with Horizon through REST APIs in scenarios including automation, reporting or monitoring. A bunch of new endpoints have been added in the latest release to offer control over authentication, configuration, inventory, monitoring and so on.

Horizon 8 REST API

They even added a Swagger UI accessible via a web browser at https://fqdn/rest/swagger-ui.html, where you can explore and test the API more easily and interactively. A paper is available if you want to learn more about how to use the API.
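
As a quick illustration of how these endpoints are consumed, the hedged Python sketch below logs in to a connection server and lists desktop pools. The server name and credentials are placeholders, and the exact paths used here (/rest/login and /rest/inventory/v1/desktop-pools) should be checked against the Swagger page for your Horizon version.

```python
import requests

CS = "https://connection-server.example.com"   # placeholder connection server FQDN

# Authenticate and obtain a bearer token
login = requests.post(f"{CS}/rest/login", json={
    "domain": "corp", "username": "svc-horizon", "password": "changeme"
}, verify=False)
login.raise_for_status()
token = login.json()["access_token"]

# List desktop pools using the bearer token
pools = requests.get(f"{CS}/rest/inventory/v1/desktop-pools",
                     headers={"Authorization": f"Bearer {token}"}, verify=False)
pools.raise_for_status()
for pool in pools.json():
    print(pool.get("name"), "-", pool.get("type"))
```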

Smart Provisioning

When working with instant clone pools, a parent VM is created on each host to speed up the desktop creation process. Those parent VMs use memory and disk, which can add up quite a bit depending on the number of pools you have and the size of the master VM. It is now possible to provision instant clones without parent VMs. In this scenario, the desktop is cloned directly from the replica, which is a little slower; however, this is offset by the space and memory savings.

Horizon 8 Smart Provisioning

The new smart provisioning feature lets Horizon decide automatically whether to create parent VMs, taking into account the density of VMs in the cluster and factors such as vGPU, vTPM, Linux OSes, and mixed vSphere/vCenter versions. Note that a pool or farm can contain desktops provisioned in both ways; a simplified sketch of the decision logic follows the list below.

  • Low density of VMs: Instant clones created without Parent VMs.
  • High density of VMs: Instant clones created with Parent VMs.
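
Conceptually, the choice boils down to a clone-density threshold per host. The snippet below is a purely illustrative Python sketch of that rule; the threshold value and function name are made up for illustration, and Horizon's real heuristic also weighs vGPU, vTPM, OS type, and mixed vSphere versions as noted above.

```python
def use_parent_vms(clones_per_host: int, density_threshold: int = 12) -> bool:
    """Illustrative only: Horizon's actual heuristic is internal and more nuanced.

    Low density  -> clone directly from the replica (no parent VMs, saves RAM and disk).
    High density -> keep a powered-on parent VM per host to speed up cloning.
    """
    return clones_per_host >= density_threshold

for density in (4, 50):
    mode = "with parent VMs" if use_parent_vms(density) else "without parent VMs"
    print(f"{density:>3} clones/host -> instant clones {mode}")
```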

Horizon Console Update

The console has been improved with extra information such as client versions in the session grid, display names finally supported on global entitlements, and added details on desktop pools and the network display. A new admin role has also been added that cannot grant admin permissions to others. While these aren't major changes, they are quality-of-life improvements for operations teams.

Although not the most exciting news, you can now send direct in-product feedback to the product teams from the top right of the console. You can, of course, opt out, just as in vSphere.

Client restriction for desktop pools

This feature was already available for RDSH and has been extended to Windows 10 desktop pools. It allows administrators to restrict access to a given desktop pool to approved clients such as Privileged Access Workstations (PAWs), meaning you can create an entitlement that both the user and the client device must satisfy in order to receive a desktop from the pool.

Miscellaneous

A whole bunch of other changes obviously happened in this version which you can find in the changelog.

To name a few notable ones regardless:

  • HTML Access enabled by default on pools and farms; it is no longer a separate component to install
  • Location-based printing available in a new UI with VMware Integrated Printing
  • H.265 support in Horizon Agent for webcam usage and video conferencing
  • Blast Extreme improvements with a new HEVC driver and support for up to two 8K monitors, if you are a unicorn
  • New Linux versions supported, along with multi-session on Red Hat and Ubuntu
  • Optimizations to video and desktop sharing in Microsoft Teams through GPO

Feature Mapping

As with every major version step, a number of features don't survive the transition. These include persistent disks (made somewhat redundant by DEM and App Volumes), linked clones, the JMP server, Persona Management, the Flex admin console, ThinPrint, and the Security Server, which has already been replaced by UAGs in most environments. Note that linked clones and persistent disks are deprecated, meaning they will actually be removed in a future release.

vSphere Horizon Feature Mapping

Note that Windows versions up to Windows Server 2008 and Windows 8 are no longer supported. All of these deprecated and removed features have been replaced by more modern and integrated alternatives to make the product leaner and less confusing.

Horizon 8 deprecated persistent disks

A few caveats apply to the deprecation of linked clones, as there is a slight feature gap with instant clones. The latter do not support multiple NICs, Sysprep, statically assigned computer names, or unique BIOS IDs. These will hopefully be addressed in future versions.

VMware published a document on Techzone to help customers move away from the deprecated features.

Conclusion

With 2020 becoming the year of teleworking, this latest version of VMware's VDI platform arrived at a strategic time. It offers existing customers a path to a more modern infrastructure and allows newcomers to start on a current version, meaning they will have time to build confidence in the product without facing a major upgrade a few months down the line.

The post What’s new in VMware Horizon 8? appeared first on Altaro DOJO | VMware.
