Manage resources across sites with the VMware Content Library
Altaro DOJO | VMware (https://www.altaro.com/vmware/vmware-content-library/), published Fri, 05 Aug 2022

Publish and synchronize resources such as virtual machine templates, OVF files, ISO images, and others across your vSphere environment.


A VMware vSphere environment includes many components to deliver business-critical workloads and services. However, there is a feature of today’s modern VMware vSphere infrastructure that is arguably underutilized – the VMware Content Library. Nevertheless, it can be a powerful tool that helps businesses standardize the workflow using files, templates, ISO images, vApps, scripts, and other resources to deploy and manage virtual machines. So how can organizations manage resources across sites with the VMware Content Library?

What is the VMware Content Library?

Most VI admins will agree that with multiple vCenter Servers in the mix, managing files, ISOs, templates, vApps, and other resources can be challenging. For example, have you ever been working on one cluster and realized the ISO image you needed wasn't copied to an accessible local datastore, so you had to "sneakernet" it somewhere you could mount and install it? What about virtual machine templates? What if you want the virtual machine templates in one vCenter Server environment to be available in another vCenter Server environment?

The VMware Content Library is a solution introduced in vSphere 6.0 that allows customers to keep their virtual machine resources synchronized in one place, avoiding manual updates to multiple templates and copying them between vCenter Servers. Instead, administrators can create a centralized repository using the VMware Content Library from which resources can be updated, shared, and synchronized between environments.

Using the VMware Content Library, you essentially create a container that can house all of the important resources used in your environment, including VM-specific objects like templates and other files like ISO image files, text files, and other file types.

The VMware Content Library stores the content as a “library item.” Each VMware Content Library can contain many different file types and multiple files. VMware gives the example of the OVF file that you can upload to your VMware Content Library. As you know, the OVF file is a bundle of multiple files. However, when you upload the OVF template, you will see a single library entry.

VMware has added some excellent new features to the VMware Content Library in the past few releases. These include the ability to apply OVF security policies to a content library. The OVF security policy, added in vSphere 7.0 Update 3, enables strict validation when deploying and updating content library items and synchronizing templates. For example, you can ensure that templates are signed by a trusted certificate by deploying a signing certificate for your OVFs from a trusted CA to your content library.
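For a sense of how that validation works under the covers, recall that an OVF package ships with a manifest (.mf) file listing a cryptographic digest for each file in the bundle, and a signed package adds a certificate covering that manifest. The sketch below (a simplified illustration, not VMware's implementation) parses manifest entries and verifies file contents against them:

```python
import hashlib
import re

def parse_manifest(manifest_text):
    """Parse OVF manifest lines of the form 'SHA256(file.vmdk)= <hex digest>'."""
    digests = {}
    for line in manifest_text.splitlines():
        m = re.match(r"(SHA\d+)\((.+)\)=\s*([0-9a-fA-F]+)", line.strip())
        if m:
            algo, filename, digest = m.groups()
            digests[filename] = (algo.lower(), digest.lower())
    return digests

def verify_file(filename, data, digests):
    """Check one file's bytes against its manifest entry, if present."""
    if filename not in digests:
        return False
    algo, expected = digests[filename]
    return hashlib.new(algo, data).hexdigest() == expected

# Build a tiny fake package in memory to demonstrate the check.
disk_bytes = b"fake vmdk contents"
manifest = "SHA256(demo.vmdk)= " + hashlib.sha256(disk_bytes).hexdigest()
digests = parse_manifest(manifest)
print(verify_file("demo.vmdk", disk_bytes, digests))        # True
print(verify_file("demo.vmdk", b"tampered bytes", digests)) # False
```

Signature validation over the manifest (the .cert file) is the additional step the OVF security policy enforces; the digest check above is only the integrity half.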

Another recent addition to the VMware Content Library functionality introduced in vSphere 6.7 Update 1 is uploading a VM template type directly to the VMware Content Library. Previously, VM templates were converted to an OVF template type. Now, you can work directly with virtual machine templates in the VMware Content Library.

VMware Content Library types

VMware Content Library enables managing resources across sites using two different types of content libraries. These include the following:

    • Local Content Library – A local content library stores and manages content within a single vCenter Server environment. Suppose you work in a single vCenter Server environment and want various resources (VM templates, vApps, ISO files, etc.) available across all your ESXi hosts. The local content library allows you to do that. You can also choose to Publish a local content library, making it available for other libraries to subscribe to and synchronize with.
    • Subscribed Content Library – The other type is the subscribed content library. When you add a subscribed content library, you download items from a content library that has publishing enabled, as described in the Local Content Library section. In this configuration, you are only a consumer of a content library that someone else has published. You can't add templates and other items to a subscribed library; you can only synchronize its content with that of the published library.
      • With a subscribed library, you can choose to download the entire contents of the published library immediately once the subscribed library is created. Alternatively, you can download only the metadata for items in the published library and fetch the full content of individual items as you need them. You can think of the latter as a "files on-demand" feature that only downloads resources when they are required.
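The relationship between the two library types, and the "files on-demand" download option, can be sketched with a small model. The class and method names here are hypothetical; in a real environment, vCenter Server performs the synchronization:

```python
class PublishedLibrary:
    """A local content library with publishing enabled."""
    def __init__(self):
        self.items = {}  # item name -> content bytes

    def add_item(self, name, content):
        self.items[name] = content

class SubscribedLibrary:
    """A subscribed library: metadata always syncs, content optionally on demand."""
    def __init__(self, source, download_on_demand=True):
        self.source = source
        self.download_on_demand = download_on_demand
        self.metadata = set()
        self.cache = {}
        self.sync()

    def sync(self):
        # Metadata for every published item is always synchronized.
        self.metadata = set(self.source.items)
        if not self.download_on_demand:
            self.cache = dict(self.source.items)  # full download up front

    def get(self, name):
        # On-demand mode fetches an item's content the first time it is used.
        if name not in self.cache:
            self.cache[name] = self.source.items[name]
        return self.cache[name]

pub = PublishedLibrary()
pub.add_item("ubuntu-22.04.iso", b"...iso bytes...")
sub = SubscribedLibrary(pub, download_on_demand=True)
print(sorted(sub.metadata))  # ['ubuntu-22.04.iso'] - item known immediately
print(len(sub.cache))        # 0 - nothing downloaded yet
sub.get("ubuntu-22.04.iso")
print(len(sub.cache))        # 1 - content fetched on first use
```

Passing `download_on_demand=False` models the "immediately" option, where the whole library is copied as soon as the subscription is created.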

Below is an example of the screen when configuring a content library that allows creating either a Local Content Library or the Subscribed Content Library:

Choosing the content library type

Create a local or subscribed Content Library in vSphere 7

Creating a new VMware Content Library is a relatively straightforward and intuitive process you can accomplish in the vSphere Client. Let's step through the process of creating a new VMware Content Library and configuring its settings.

Using the vSphere Client to manage the Content Library

First, click the upper left-hand “hamburger” menu in the vSphere Client. You will see the option Content Libraries directly underneath the Inventory menu when you click the menu.

Choosing the Content Libraries option to create and manage Content Libraries

Under the Content Libraries screen, you can Create new Content Libraries.

Creating a new Content Library in the vSphere Client

It will launch the New Content Library wizard. In the Name and Location screen, name the new VMware Content Library.

New Content Library name and location

On the Configure content library step, you configure the content library type, including configuring a local content library or a subscribed content library. Under the configuration for Local content library, you can Enable publishing. If publishing is enabled, you can also enable authentication.

Configuring the Content Library type

When you configure publishing and authentication, you can configure a password on the content library.

Apply security policy step

Step 3 is the Apply security policy step. It allows applying the OVF default policy to protect and enforce strict validation while importing and synchronizing OVF library items.

Choosing to apply the OVF default policy

The VMware Content Library needs to have a storage location that will provide the storage for the content library itself. First, select the datastore you want to use for storing your content library. The beauty of the content library is that it essentially publishes and shares the items in the content library itself, even though they may be housed on a particular datastore.

Select the storage to use for storing items in the VMware Content Library

Finally, we are ready to complete the creation of the Content Library. Click Finish.

Finishing the creation of the VMware Content Library

Once the VMware Content Library is created, you can see the details of the library, including the Publication section showing the Subscription URL.

Viewing the settings of a newly created VMware Content Library

As a note, if you click the Edit Settings hyperlink under the Publication settings pane, you can edit the settings of the Content Library, including the publishing options, authentication, the authentication password, and the applied security policy.

Editing the settings of a VMware Content Library

Creating a subscribed VMware Content Library

As we mentioned earlier, configuring a subscribed content library means synchronizing items from a published content library. In the New Content Library wizard, choose the Subscribed content library option, then enter the subscription URL of the published content library. As shown above, this URL is found in the settings of the published content library.

You will need to also place a check in the Enable authentication setting if the published content library was set up with authentication. Then, enter the password configured for the published content library. Also, note the configuration for downloading content. As detailed earlier, you can choose to synchronize items immediately, meaning the entire content library will be fully downloaded. Or, you can select when needed, which acts as a “files on demand” configuration that only downloads the resources when needed.

Configuring the subscribed content library

Choose the storage for the subscribed Content Library.

Add storage for the subscribed VMware Content Library

On the Ready to complete screen, review the settings for the new subscribed VMware Content Library and click Finish.

Ready to complete adding a subscribed VMware Content Library

Interestingly, you can add a subscribed VMware Content Library that is subscribed to the same published VMware Content Library on the same vCenter Server.

Published and subscribed content library on the same vCenter Server

What is Check-In/Check-Out?

A new feature included with VMware vSphere 7 is versioning within the VMware Content Library. Virtual machine templates are frequently changed, updated, and reconfigured, and it can be easy to lose track of what was changed, who made the modifications, and when.

Now, VMware vSphere 7 provides visibility into the changes made to virtual machine templates with a new check-in/check-out process. This change embraces DevOps workflows with a way for IT admins to check in and check out virtual machine templates in and out of the Content Library.

Before the new check-in/check-out feature, VI admins might use a process similar to the following to change a virtual machine template:

    1. Convert the virtual machine template to a virtual machine
    2. Place a snapshot on the converted VM
    3. Make whatever changes are needed to the VM
    4. Power the VM off and convert it back to a template
    5. Re-upload the VM template to the Content Library
    6. Delete the old template
    7. Internally notify other VI admins of the changes

Now, VI admins can use a new capability in vSphere 7.0 and higher to make changes to virtual machine templates more seamlessly and track those changes effectively.
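The check-out/check-in cycle that replaces those manual steps can be modeled as a simple versioned store. This is an illustrative sketch with hypothetical names, not vCenter's implementation; it just shows the workflow and the audit trail the feature provides:

```python
import datetime

class TemplateItem:
    """Minimal model of a Content Library template with check-in/check-out versioning."""
    def __init__(self, name):
        self.name = name
        self.versions = []        # audit trail: (version, user, notes, timestamp)
        self.checked_out_by = None

    def check_out(self, user):
        """Deploy a working VM from the template and lock the template."""
        if self.checked_out_by is not None:
            raise RuntimeError(f"already checked out by {self.checked_out_by}")
        self.checked_out_by = user
        return f"{self.name}-workingcopy"   # the VM deployed for editing

    def check_in(self, user, notes):
        """Fold the edited VM back into the template as a new version."""
        if self.checked_out_by != user:
            raise RuntimeError("check-in must come from the user who checked out")
        version = len(self.versions) + 1
        self.versions.append((version, user, notes, datetime.datetime.now()))
        self.checked_out_by = None
        return version

tmpl = TemplateItem("win2022-base")
vm = tmpl.check_out("vi-admin")
# ... make changes to the checked-out VM here ...
v = tmpl.check_in("vi-admin", "Moved to the DMZ port group")
print(vm, v)   # win2022-base-workingcopy 1
```

Each check-in appends to the version list, which is what the Versioning tab in the vSphere Client surfaces for the real feature.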

Clone as template to Library

The first step is to house the virtual machine template in the Content Library. To use the new functionality, right-click an existing virtual machine and select Clone as Template to Library.

Clone as Template to Library functionality to use the check-in and check-out feature

As a note, if you see the Clone to Library option instead of Clone as Template to Library, it means you have not converted the VM template to a virtual machine. Right-clicking a VM template only gives you the Clone to Library option, and selecting Clone to Template simply clones the template in the traditional way to another template on a datastore.

Right-clicking and cloning a VM template only gives the option to Clone to Library

Continuing with the Clone to Library process, you will see the Clone to Template in Library dialog box open. Select either New template or Update the existing template.

Clone to Template in Library

In the vCenter Server tasks, you will see the process begin to Upload files to a Library and Transfer files.

Uploading a virtual machine template to the Content Library

When you right-click a virtual machine and not a virtual machine template, you will see the additional option of Clone as Template to Library.

Clone as Template to Library

It then brings up a more verbose wizard for the Clone Virtual Machine To Template process. The first screen is Basic information, where you define the Template type (OVF or VM Template), the name of the template, notes, and a folder for the template.

Configuring basic information for the clone virtual machine to template process

On the Location page, you select the VMware Content Library you want to use to house the virtual machine template.

Select the VMware Content Library to house the virtual machine template

Select a compute resource to house your cloned VM template.

Select the compute resource for the virtual machine template

Select the storage for the virtual machine template.

Select storage to house the VM template

Finish the Clone Virtual Machine to Template process.

Finish the clone of the virtual machine to template in the VMware Content Library

If you navigate to the Content Library, you will see the template listed under the VM Templates in the Content Library.

Viewing the VM template in the Content Library

Checking templates in and out

If you select the radio button next to the VM template, the Check Out VM From This Template button will appear to the right.

Launching the Check out VM from this template

When you click the button, it will launch the Check out VM from VM Template wizard. First, name the new virtual machine that will be created in the check-out process.

Starting the Check out VM from VM template

Select the compute resource to house the checked-out virtual machine.

Selecting a compute resource

Review and finish the Check out VM from VM template process. You can select to power on VM after check out.

Review and Finish the Check out VM from VM Template

The checked-out virtual machine is cloned from the existing template in the Content Library, and you will see an audit trail of check-outs in the Content Library. You are directed to navigate to the checked-out VM to make updates. Note that you then have the Check In VM to Template button available.

Virtual machine template is checked out and deployed as a virtual machine in inventory

If you navigate to the Inventory view in the vSphere Client, you will see the machine has a tiny blue dot in the lower left-hand corner of the virtual machine icon.

Viewing the checked-out VM template as a virtual machine in vSphere inventory

After making one small change, such as changing the virtual network the virtual machine is connected to, we see the option appear to Check In VM to Template.

Check In VM to Template

It will bring up the Check In VM dialog box, allowing you to enter notes and then click the Check In button.

Check In the VM

We see the audit trail of changes reflected in the Content Library with the notes we entered in the Check in notes.

Virtual machine template checked back in with the notes entered in the check-in process

You will also see a new Versioning tab displayed when you view the virtual machine template in the inventory view.

Viewing the versioning of a virtual machine template in the inventory view

VMware Content Library Roles

There are various privileges related to the Content Library. VMware documents the following privileges, which can be assigned to a custom Content Library role.

Privilege Name | Description | Required On
Content library.Add library item | Allows addition of items in a library. | Library
Content library.Add root certificate to trust store | Allows addition of root certificates to the Trusted Root Certificates Store. | vCenter Server
Content library.Check in a template | Allows checking in of templates. | Library
Content library.Check out a template | Allows checking out of templates. | Library
Content library.Create a subscription for a published library | Allows creation of a library subscription. | Library
Content library.Create local library | Allows creation of local libraries on the specified vCenter Server system. | vCenter Server
Content library.Create or delete a Harbor registry | Allows creation or deletion of the VMware Tanzu Harbor Registry service. | vCenter Server for creation. Registry for deletion.
Content library.Create subscribed library | Allows creation of subscribed libraries. | vCenter Server
Content library.Create, delete or purge a Harbor registry project | Allows creation, deletion, or purging of VMware Tanzu Harbor Registry projects. | Registry
Content library.Delete library item | Allows deletion of library items. | Library. Set this permission to propagate to all library items.
Content library.Delete local library | Allows deletion of a local library. | Library
Content library.Delete root certificate from trust store | Allows deletion of root certificates from the Trusted Root Certificates Store. | vCenter Server
Content library.Delete subscribed library | Allows deletion of a subscribed library. | Library
Content library.Delete subscription of a published library | Allows deletion of a subscription to a library. | Library
Content library.Download files | Allows download of files from the content library. | Library
Content library.Evict library item | Allows eviction of items. The content of a subscribed library can be cached or not cached. If the content is cached, you can release a library item by evicting it if you have this privilege. | Library. Set this permission to propagate to all library items.
Content library.Evict subscribed library | Allows eviction of a subscribed library. The content of a subscribed library can be cached or not cached. If the content is cached, you can release a library by evicting it if you have this privilege. | Library
Content library.Import Storage | Allows a user to import a library item if the source file URL starts with ds:// or file://. This privilege is disabled for the content library administrator by default. Because an import from a storage URL implies import of content, enable this privilege only if necessary and if no security concern exists for the user who performs the import. | Library
Content library.Manage Harbor registry resources on specified compute resource | Allows management of VMware Tanzu Harbor Registry resources. | Compute cluster
Content library.Probe subscription information | Allows solution users and APIs to probe a remote library's subscription info, including URL, SSL certificate, and password. The resulting structure describes whether the subscription configuration is successful or whether there are problems such as SSL errors. | Library
Content library.Publish a library item to its subscribers | Allows publication of library items to subscribers. | Library. Set this permission to propagate to all library items.
Content library.Publish a library to its subscribers | Allows publication of libraries to subscribers. | Library
Content library.Read storage | Allows reading of content library storage. | Library
Content library.Sync library item | Allows synchronization of library items. | Library. Set this permission to propagate to all library items.
Content library.Sync subscribed library | Allows synchronization of subscribed libraries. | Library
Content library.Type introspection | Allows a solution user or API to introspect the type support plug-ins for the content library service. | Library
Content library.Update configuration settings | Allows you to update the configuration settings. No vSphere Client user interface elements are associated with this privilege. | Library
Content library.Update files | Allows you to upload content into the content library. Also allows you to remove files from a library item. | Library
Content library.Update library | Allows updates to the content library. | Library
Content library.Update library item | Allows updates to library items. | Library. Set this permission to propagate to all library items.
Content library.Update local library | Allows updates of local libraries. | Library
Content library.Update subscribed library | Allows you to update the properties of a subscribed library. | Library
Content library.Update subscription of a published library | Allows updates of subscription parameters. Users can update parameters such as the subscribed library's vCenter Server instance specification and the placement of its virtual machine template items. | Library
Content library.View configuration settings | Allows you to view the configuration settings. No vSphere Client user interface elements are associated with this privilege. | Library
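In practice, the table boils down to a mapping from each privilege to the object it must be granted on. A small helper for sanity-checking a custom role against that mapping might look like this (privilege names are taken from the table above; the checking logic itself is illustrative and not part of any VMware API):

```python
# Where each Content Library privilege must be granted, per the table above
# (a representative subset, not the full list)
REQUIRED_ON = {
    "Content library.Add library item": "Library",
    "Content library.Check in a template": "Library",
    "Content library.Check out a template": "Library",
    "Content library.Create local library": "vCenter Server",
    "Content library.Create subscribed library": "vCenter Server",
    "Content library.Delete library item": "Library",
    "Content library.Sync subscribed library": "Library",
}

def missing_privileges(role_grants, needed):
    """Return privileges a role is missing, or holds on the wrong object type.

    role_grants: dict of privilege -> object type it is granted on
    needed: list of privileges an operation requires
    """
    problems = []
    for priv in needed:
        required_obj = REQUIRED_ON[priv]
        if role_grants.get(priv) != required_obj:
            problems.append((priv, required_obj))
    return problems

# A role that can check templates in/out but was granted sync at the wrong level
role = {
    "Content library.Check in a template": "Library",
    "Content library.Check out a template": "Library",
    "Content library.Sync subscribed library": "vCenter Server",
}
print(missing_privileges(role, [
    "Content library.Check in a template",
    "Content library.Sync subscribed library",
]))   # [('Content library.Sync subscribed library', 'Library')]
```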

 

Advanced Content Library settings

Several advanced configuration settings are configurable with the VMware Content Library. You can get to these by navigating to Content Libraries > Advanced.

Content Library advanced settings

These include the following settings as detailed by VMware:

Configuration Parameter | Description
Library Auto Sync Enabled | Enables automatic synchronization of subscribed content libraries.
Library Auto Sync Refresh Interval (minutes) | The interval between two consecutive automatic synchronizations of a subscribed content library, measured in minutes.
Library Auto Sync Setting Refresh Interval (seconds) | The interval after which the refresh interval for the automatic synchronization settings of a subscribed library is updated if it has been changed, measured in seconds. A change to this refresh interval requires a restart of vCenter Server.
Library Auto Sync Start Hour | The time of day when automatic synchronization of a subscribed content library begins.
Library Auto Sync Stop Hour | The time of day when automatic synchronization of a subscribed content library stops. Synchronization remains stopped until the start hour.
Library Maximum Concurrent Sync Items | The maximum number of items synchronized concurrently for each subscribed library.
Max concurrent NFC transfers per ESX host | The maximum number of concurrent NFC transfers per ESXi host.
Maximum Bandwidth Consumption | The bandwidth usage threshold, measured in Mbps across all transfers, where 0 means unlimited bandwidth.
Maximum Number of Concurrent Priority Transfers | The concurrent transfer limit for priority files, such as OVF descriptors. Transfers are queued if the limit is exceeded. This thread pool transfers only priority objects. Changing this limit requires a restart of vCenter Server.
Maximum Number of Concurrent Transfers | The concurrent transfer limit. When exceeded, transfers are queued. Changing this limit requires a restart of vCenter Server.
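To see how the auto-sync settings interact, the sync decision can be sketched as a simple scheduling check (an illustration only; vCenter Server enforces these settings internally, and the default values below are arbitrary):

```python
def sync_due(now_hour, minutes_since_last_sync,
             enabled=True, refresh_interval=240, start_hour=20, stop_hour=6):
    """Decide whether an automatic sync should run.

    A sync runs only when auto sync is enabled, the refresh interval has
    elapsed, and the current hour falls inside the start/stop window
    (which may wrap past midnight, e.g. 20:00 -> 06:00).
    """
    if not enabled or minutes_since_last_sync < refresh_interval:
        return False
    if start_hour <= stop_hour:
        in_window = start_hour <= now_hour < stop_hour
    else:  # window wraps around midnight
        in_window = now_hour >= start_hour or now_hour < stop_hour
    return in_window

print(sync_due(now_hour=23, minutes_since_last_sync=300))  # True: inside window
print(sync_due(now_hour=12, minutes_since_last_sync=300))  # False: outside window
print(sync_due(now_hour=23, minutes_since_last_sync=60))   # False: interval not elapsed
```

The wrap-around branch mirrors the common practice of letting synchronization run overnight (start hour in the evening, stop hour in the morning) to keep bandwidth free during the workday.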

 

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Wrapping up

The VMware Content Library provides a centralized repository that keeps required file resources, virtual machine templates, ISO images, vApps, and other files synchronized and available across the vSphere datacenter. In vSphere 7, the Content Library gives organizations a better way to keep up with and track changes to virtual machine templates. Using the new check-in/check-out process, VI admins can track the changes made with each check-out and ensure these are documented and synchronized back to the Content Library.

It effectively removes the need to copy files between ESXi hosts or vSphere clusters, ensuring you have what you need to install guest operating systems or deploy virtual machine templates. In addition, the subscribed Content Library allows synchronizing content libraries across vCenter Servers so that many other vCenter Servers can take advantage of the files already organized in the published Content Library.

The VMware Content Library is one of the more underutilized tools in the VI admin's toolbelt that can bring about advantages in workflow, efficiency, and time spent finding and organizing files for deploying VMs and OSes. In addition, the recent feature additions and improvements, such as check-ins/check-outs, have provided a more DevOps approach to tracking and working with deployment resources.

Achieve Event-Driven Automation with VMware Event Broker Appliance (VEBA)
Altaro DOJO | VMware (https://www.altaro.com/vmware/vmware-event-broker-appliance/), published Fri, 06 May 2022

Learn how to provide event-driven automation using the VMware Event Broker Appliance (VEBA) for your VMware vSphere SDDC.


Organizations worldwide are in the middle of a paradigm shift in provisioning, managing, monitoring, and configuring their infrastructure. The cloud revolution has prompted a change in the way businesses think about infrastructure, leading to Infrastructure-as-Code and DevOps processes that blur the lines between development and IT operations.

While cloud operations have been driven by DevOps processes for some time now, on-premises tools and techniques have lagged behind the cloud. As a result, VMware admins are on a journey of adopting cloud-native processes on-premises.

Overview of VMware Event Broker Appliance (VEBA)

So, what is the VMware Event Broker Appliance (VEBA)? VEBA is released as a VMware Fling. VMware Flings are unofficial VMware solutions created by VMware software engineers without production support. However, they are powerful and often used in production environments for various use cases.

VMware Flings

Popular and helpful VMware Flings often eventually make it into production-supported solutions. A great example is the VMware OS Optimization Tool, used for years by Horizon administrators, which became officially supported with the release of VMware Horizon 2111. Case in point: VMware Flings often produce viable solutions that eventually become production-supported.

The VMware Event Broker Appliance (VEBA) is a VMware community open-source project released as a VMware Fling in 2019. It enables customers to easily create event-driven automation using vCenter Server events. The idea behind the Fling is simple: bring modern technologies and innovations out of the cloud-native world, like Kubernetes, to help cloud admins build event-driven automation based on vCenter Server events.

Take Advantage of Event-Driven Automation in a VMware vSphere Environment

Event-driven automation with vCenter Server is compelling. Today, vCenter Server ships with more than 1,800 events out of the box. These include events that represent state changes, such as a virtual machine being created, a host being added, or a VM being powered on or off. There are also vCenter Server alarms, which are more metric-based: alarms driven by thresholds, such as a CPU running above a certain utilization or a datastore dropping under a certain percentage of free space.

This information has been in vCenter Server for years and can drive basic event-driven automation that carries out tasks. For example, using the built-in event automation directly in vCenter Server, you can send an email or an SNMP trap, or run a command, in response to a vCenter Server event.

The capabilities provided around vCenter events have not changed much in the past few years. However, organizations today are looking for more robust and modernized automation tools and capabilities to fit in with current cloud-driven automation capabilities.

The VEBA solution takes the traditional events supplied by vCenter Server through the vSphere Client and augments the in-box capabilities with a cloud-native approach. What use cases does the VEBA solution help customers solve?

VMware Event Broker Appliance (VEBA) Benefits and Applications

VEBA targets many different use cases in customer environments. Common use cases that are solved using VEBA include:

Notification:

    • Customers can receive alerts through modern alert and notification platforms, including SMS, Microsoft Teams, Slack, and others with modern API endpoints
    • Receive real-time updates for business-critical objects contained in your vSphere SDDC
    • Provide real-time monitoring of resources that are infinitely customizable to suit the needs of your business

Automation:

    • VEBA can also provide robust automation capabilities based on virtual machine activities in the environment, including applying vSphere tags, security settings, and other configurations to both VMs and hosts.
    • Triggered health monitoring jobs such as checking for long-running processes, including vSphere snapshots

Integration:

    • Automatically trigger API calls to third-party solutions providing remote API endpoints and trigger the API calls based on vSphere infrastructure events
    • VEBA can integrate with ticketing solutions used in the enterprise today, including Jira Service Desk, PagerDuty, ServiceNow, and Salesforce, to raise incidents for events such as a workload or hardware failure
    • You can integrate VEBA with AWS EventBridge and other public cloud services

Remediation:

    • Proactively perform tasks based on certain types of events in your VMware vSphere environment. For example, request additional capacity if you see threshold metrics peaking.
    • Allows automation teams and site reliability engineers (SREs) to code run books for providing automated resolution

Audit:

    • Track your VMware vSphere assets based on VM creation and deletion events and have these events trigger CMDB database updates
    • Use VEBA as a powerful security auditing tool allowing the forwarding of security-related events (logins and resource access) to in-house security teams for compliance and security investigation if needed
    • Provide an accurate log of configuration changes to assist with troubleshooting and debugging

Analytics:

    • Reduce the load on your vCenter Server as VEBA allows shipping events to an external database that can be queried instead of viewing events directly on vCenter Server
    • Easily identify trends in the vSphere environment, including event duration, specific users who are generating unnecessary events, and visibility to other workflows happening in the environment

In addition to the official use cases listed above, imagination is the only limitation to designing functions that carry out tasks in customer VMware environments. When you look at the VEBA community site, you see functions that can do the following:

    • Attach tags containing desired configuration settings to a VM and have it automatically reconfigure the next time it powers down
    • Automatically tag a VM upon a vCenter event
    • Disable alarm actions on a host when entering maintenance mode and re-enable alarm actions on a host after it has exited maintenance mode.
    • Send an email notification when warning/error threshold is reached for Datastore Usage Alarm in vSphere
    • Send an email listing VMs restarted due to a host failure in an HA enabled cluster
    • Limit the scope of other functions by allowing filtering of events by vCenter Inventory paths using standard regex
    • Automatically synchronize VM tags to NSX-T
    • Add Custom Attribute to a VM upon a vCenter event
    • Send a Slack notification triggered by a VMware Horizon Event
    • Send a Microsoft Teams notification triggered by a VMware Cloud Notification Gateway SDDC Event
    • Accepts an incoming webhook from the vRealize Network Insight Databus, constructs a CloudEvent, and sends it to the VMware Event Router

VMware Event Broker Appliance (VEBA) Architecture

The VMware Event Broker Appliance has been designed to be highly modular. Its core components include:

    • Kubernetes and containers – Provides robust platform capabilities, including self-healing, secrets and configuration management, resource management, and modularity/extensibility
    • Photon OS – An open-source Linux container host optimized for cloud-native applications, including support for Kubernetes out-of-the-box.
    • VMware Event Router – The event router supports multiple event stream sources, including VMware vCenter, VMware Horizon, and incoming webhooks. It also supports multiple event stream processors, including Knative, OpenFaaS, and AWS EventBridge.
    • Contour – An ingress controller for Kubernetes that deploys Envoy proxy as a reverse proxy and load balancer

VMware Event Broker Appliance

The VMware Event Broker Appliance is deployed as a single virtual machine in its current form, with no option for scale-out or high availability. However, it is built on the principles of a microservices architecture running on Kubernetes. Individual services communicate using TCP/IP. Most of the communication happens “inside the box” of the VM.

High availability options such as Kubernetes cluster capabilities appear to be on the roadmap for future VMware Event Broker Appliance releases. However, while this increases the availability of the capabilities VEBA provides, it will also increase the architectural complexity of the solution.

VEBA uses Knative – What is it?

One of the powerful components of the VMware Event Broker Appliance is its use of the Knative project. What is Knative? It is a project that provides a more approachable abstraction over Kubernetes. As anyone who has worked with Kubernetes for any length of time knows, it is a robust platform that has transitioned from a container orchestration tool into a powerful API surface for cloud-native architectures.

Many feel that working with Kubernetes requires very low-level interactions, including YAML files, deployments, ingress controllers, and other components. Knative abstracts away these lower layers of Kubernetes and keeps the focus on the application.

It allows deploying and managing modern serverless workloads using two core components – Serving (Knative Service) and Eventing (Broker, Channel, etc.).
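To make the abstraction concrete, here is a minimal Knative Service manifest (a sketch only; the service name and sample image are illustrative, not taken from the VEBA repo). Knative Serving expands this single resource into the Deployment, revisioned routing, and autoscaling configuration you would otherwise write by hand:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello               # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # public Knative sample image
          env:
            - name: TARGET
              value: "VEBA"
```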

Deploying the VMware Event Broker Appliance (VEBA)

The process to deploy the VMware Event Broker Appliance (VEBA) is straightforward. The first thing you need to do is download the appliance OVA file from the VMware Flings site. You can do that here:

    • VMware_Event_Broker_Appliance_v0.7.1.ova – filename at the time of writing
    • File size – 2GB

 

Download the VMware Event Broker Appliance (VEBA) from the VMware Flings site

The deployment process is the standard OVA deployment process using the vSphere Client.

Point the OVA deployment to the OVA file downloaded from the VMware Fling site

Specify the name of the appliance and the folder in your vSphere inventory.

Configure the name and folder location in vSphere inventory

Select a compute resource for the OVA appliance deployment in your vSphere SDDC.

Select the compute resource to deploy your VEBA OVA

Review the initial details of the OVA deployment for the VMware Event Broker Appliance.

Review the details of the initial OVA deployment

Select the vSphere datastore you want to house your VMware Event Broker Appliance deployment and set the VM Storage policy.

Select the storage for your VEBA OVA deployment

Select the virtual network to connect your VMware Event Broker Appliance.

Select a vSphere virtual network for your VEBA appliance

The Customize template page is the configuration step you want to review carefully. Here, you set the following:

    • Hostname
    • IP Address
    • Subnet
    • Gateway
    • DNS
    • NTP
    • vSphere credentials for communicating with vCenter Server
    • vSphere credentials for installing the vSphere Client plugin
    • Pointing to an existing event processor

Customize template configuration deploying the VMware Event Broker Appliance

Another section of the Customize template screen shows the vCenter Server and VMware Horizon configuration.

Customize template screen showing the VCSA and VMware Horizon configuration

Review the settings and finish the deployment of the appliance.

Finish the deployment of the VEBA OVA appliance

The VMware Event Broker Appliance deployment allows deploying a vSphere Client plugin that provides seamless integration with the vSphere Client and VEBA.

The VMware Event Broker Plugin successfully deployed requiring a refresh of the browser

After refreshing your browser session, you will see the VMware Event Broker option listed under your Menu options in the vSphere Client. The VMware Event Broker plugin allows deploying functions using the vSphere GUI instead of the VEBA command line.

The VMware Event Broker plugin installed and listed in the vSphere Client

Now that we have the VEBA appliance deployed, we can clone down the official VEBA repo, providing the example code and functions for deploying to a local VEBA instance.

Cloning the VEBA Repository

The official open-source code for the VMware Event Broker Appliance (VEBA) is hosted on GitHub. To get started with event-driven automation in your local vSphere SDDC, you first need to clone the repository locally. Doing so lets you view the code examples and use the prebuilt examples as templates for deploying your own event-driven notifications.

To view the official code repository for VEBA, you can navigate to the GitHub page here:

    1. https://github.com/vmware-samples/vcenter-event-broker-appliance

To clone the repo from the command line, enter the following:

    1. git clone https://github.com/vmware-samples/vcenter-event-broker-appliance.git

Cloning the official VEBA repo to pull down the examples

Building a Docker Container

The workflow that we will perform to get to the point of testing the function for event-driven automation involves:

    1. Cloning the VEBA repo – This is the step above that we have completed
    2. Building the Docker container – Using the included Dockerfile, we need to build the Docker container and upload the container to the Docker registry or a local registry if you have one
    3. Testing it locally – Using the supplied test script in the cloned repo files, we can test the functionality locally before applying the configuration to the Kubernetes cluster running in the VEBA appliance
    4. Applying the configuration to the VMware Event Broker Appliance – Finally, we deploy the configuration to the VEBA appliance

For example, let’s customize and upload a function to the VMware Event Broker Appliance. It will involve creating a Docker container with a function to send an email for the triggered alert. First, let’s look at building the Docker container. In the cloned VEBA repo, you will find the knative > powershell > kn-ps-email folder under the examples folder in the parent vcenter-event-broker-appliance folder. Before customizing the files included in the root of the kn-ps-email folder, we can go ahead and test how the function works using the test folder.

Viewing the Dockerfile for the email function for VEBA

The test folder contains files with which we can test the functionality. It is a great way to get familiar with the VEBA process of building containers to deploy functions. To see how this works, we first need to build a default container. You don't have to worry about customizing anything for your email server just yet.

To build the container with Docker, use the following command:

    • docker build -t <Docker username>/kn-ps-email:1.0 .

We use a Docker Hub username in the command above to name and tag the container. ***Note*** don't forget the trailing dot, which specifies the build context. The container builds successfully. At this point, it is still a local image since we have not pushed it to the Docker registry.

Building a test Docker container to develop VEBA functions locally

After building the container image, we can push it to the Docker registry. As we will detail later, you will most likely be fine using the default container referenced in the function.yaml file; the upload step is only needed if you require a custom container. Uploading to Docker Hub is the option many will want to use if no local registry is running. To push your container to the Docker registry, use the command:

    • docker push <Docker username>/kn-ps-email:1.0

Now that we have the Docker container built from the Dockerfile and uploaded to the Docker registry, we can use the sample test file to test the notification alert payload sent to the function. Change directory into the test folder from the command line. You will then want to edit your docker-test-env-variable file to point to a real email server. You can even use something like smtp.gmail.com to test the email delivery.

Update the following variable names within the docker-test-env-variable file

    • SMTP_SERVER – Address of Email Server
    • SMTP_PORT – Port of Email Server
    • SMTP_USERNAME – Username to send email (use empty string for no username)
    • SMTP_PASSWORD – Password to send email (use empty string for no password)
    • EMAIL_SUBJECT – Subject to use for email
    • EMAIL_TO – Recipient email address (comma separated)
    • EMAIL_FROM – Sender email address to use
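As a sketch, a filled-in docker-test-env-variable file might look like this (every value below is a placeholder to replace with your own mail server details):

```
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=veba-test@example.com
SMTP_PASSWORD=your-app-password
EMAIL_SUBJECT=VEBA Event Notification
EMAIL_TO=admin@example.com
EMAIL_FROM=veba@example.com
```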

I am using VS Code, as it makes it easy to work with two split PowerShell windows. Below, I run the send-cloudevent-test.ps1 file from one window and start the Docker container from the other. When you run the Docker container, you start it with the --env-file parameter and pass the docker-test-env-variable file to it.
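The two windows run roughly the following commands (a sketch; the image tag matches the build step earlier, and the PORT mapping of 8080 is an assumption to adjust if your function listens elsewhere):

```shell
# Window 1: start the function container locally with the test environment file
docker run -e PORT=8080 -p 8080:8080 --env-file docker-test-env-variable <Docker username>/kn-ps-email:1.0

# Window 2: send a simulated CloudEvent payload to the listening container
./send-cloudevent-test.ps1
```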

Testing the Docker container and event-driven automation with the test PowerShell script

You should get a successful test with a StatusCode 200. This result means the email was sent successfully. I received the email to my Gmail account, as you can see below.

Test email is received using the VEBA email event test script

Now, we can finish out the configuration to deploy the function code to your local VEBA.

Deploy the Event-Driven Function to your VEBA Appliance

To interact with your VMware Event Broker Appliance, you need to have connectivity to the VEBA Kubernetes cluster using kubectl. As you may already know from working with other Kubernetes infrastructure, kubectl communicates with the destination cluster using the Kubernetes config file. To get this file, you can connect to your VEBA appliance using your SSH/SCP tool of choice and copy over the Kubernetes config file from the VEBA appliance.

You will copy the config file to your local development workstation, into the special .kube folder where kubectl looks for the configuration file. In Windows, this folder is created under the user profile directory.

Copy the Kubernetes config file from your VEBA appliance to your admin workstation

Once you have this file copied down, you should be able to run normal kubectl commands against your VMware Event Broker Appliance, like you would any other Kubernetes cluster.
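A quick sanity check looks something like this (the file locations are the conventional kubectl defaults, not VEBA-specific paths):

```shell
# Point kubectl at the copied VEBA kubeconfig (or place it at ~/.kube/config)
export KUBECONFIG=~/.kube/config

# List the function pods running inside the appliance
kubectl get pods -n vmware-functions
```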

Viewing pods on the VEBA appliance

We have already pushed our Docker container to the Docker Hub registry earlier in the walkthrough. This step is now crucial because the VMware Event Broker Appliance function needs to reach out and pull down the container.

Next, you need to update your email_secret.json file with your email configuration and create a Kubernetes secret that can be accessed from the function.

Now, to create the Kubernetes secret, use the following commands:

# Create the secret using your customized email-secret file with your mail server config

    • kubectl -n vmware-functions create secret generic email-secret --from-file=EMAIL_SECRET=email_secret.json

# Update the label for the secret to show up in VEBA UI

    • kubectl -n vmware-functions label secret email-secret app=veba-ui

Next, you need to edit the function.yaml file with the name of the container image if you changed this from the default. As the documentation mentions, the default VMware container image will work for most. However, if you have changed the name and uploaded a custom Docker container to the registry, it must be referenced in the function.yaml file.

By default, the function deployment will filter on the VmRemovedEvent vCenter Server Event. If you wish to change this, update the subject field in the function.yaml to the desired event.
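For illustration, the relevant portion of a function.yaml resembles the following (field names follow the Knative Eventing Trigger schema; the exact manifest in the cloned repo may differ slightly):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: veba-ps-email-trigger     # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      subject: VmRemovedEvent     # change this to react to a different vCenter event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: kn-ps-email
```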

Finally, deploy the function to the VMware Event Broker Appliance (VEBA) using the command:

    • kubectl -n vmware-functions apply -f function.yaml

VEBA Fixes and Other Improvements

The latest version of VEBA has introduced many fixes to the platform as it continues to evolve. The December 2021 release includes:

    • Fix special character handling for VEBA vSphere UI plugin
    • Fix imagePullPolicy for knative-contour in air-gap deployment
    • Improved website documentation
    • More Knative Function Examples

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Wrapping Up

The VMware Event Broker appliance truly allows customers to unlock the hidden potential of events in their vSphere-powered SDDC and create event-driven automation. It includes support for both vCenter Server and VMware Horizon events. As shown, it is easily deployed using the prebuilt OVA appliance, and functions are readily available in the community.

Be sure to check out the official sites for VEBA, including:

The post Achieve Event-Driven Automation with VMware Event Broker Appliance (VEBA) appeared first on Altaro DOJO | VMware.

vSphere 7 Partition Layout Changes https://www.altaro.com/vmware/vsphere-7-partition-layout/ https://www.altaro.com/vmware/vsphere-7-partition-layout/#respond Fri, 18 Mar 2022 14:01:44 +0000 https://www.altaro.com/vmware/?p=24014 Discover the vSphere 7 partition layout, important differences with ESXi 6, and how to upgrade to ESXi 7 with a new partition layout


With the release of vSphere 7, VMware changed the partition layout to make it more versatile and to allow additional VMware and third-party solutions to be installed. The partition sizes in prior vSphere 6.x versions were fixed and static, which could prevent the installation of solutions such as vSAN, NSX-T, and Tanzu, as well as some third-party integrations. In response to these constraints, VMware modified the partition sizes in vSphere 7, increasing the size of the boot banks and making the partitions easier to extend.

In this article, we'll look at the vSphere 7 ESXi boot media partitions, the important differences between ESXi 6 and ESXi 7, the boot media supported by ESXi 7, and upgrading to ESXi 7 with the new partition layout. Let's get into it!

vSphere 7 – ESXi Boot Media Partition

With the new partition scheme in vSphere 7, the system boot partition is the only one with a fixed size, at 100 MB. The rest of the partitions are dynamic, meaning their size is determined by the size of the boot media. VMware also consolidated the layout, which now consists of four partitions.

  • System Boot: The EFI components and boot loader are stored in a FAT16 partition called system boot. Like earlier vSphere versions, it’s a fixed-size partition of 100 MB.
  • Boot-bank 0: A FAT16 partition that gives the system enough room to hold the ESXi boot components. It's a dynamic partition ranging from 500 MB to 4 GB.
  • Boot-bank 1: A second FAT16 partition of the same kind, used as the alternate boot bank. It is also dynamic, ranging from 500 MB to 4 GB.
  • ESX-OSData: A VMFS-L partition that holds non-boot items and additional modules, such as system state and configuration as well as system VMs, and it must reside on a high-endurance device. It's also a dynamic partition, with a capacity of up to 128 GB.

The ESX-OSData partition is separated into two high-level data types:

  • ROM-data: Infrequently written data, such as VMware Tools ISOs, configurations, and core dumps.
  • RAM-data: Frequently written data, including logs, VMFS global traces, vSAN EPD and traces, and active databases.

Note that a VMFS datastore is automatically created for storing virtual machine data if the boot media is larger than 128 GB.

Figure: vSphere 7 Layout

When booting from low-endurance media such as USB or SD cards, the ESX-OSData partition must be created on a high-endurance storage device such as an HDD or SSD. When no high-endurance device is available, a VMFS-L Locker partition is created on the USB or SD device, although it is used solely to store ROM-data; a RAM disk is used to store RAM-data.

Keep in mind that standalone USB and SD devices are deprecated starting with vSphere 7 Update 3, following a large number of issues encountered by customers.

Key Changes Between the ESXi 6 And ESXi 7

The ESX-OSData partition change is an important one in the context of SD cards and USB devices, since all non-boot partitions (such as the small and large core-dump, locker, and scratch partitions) have been consolidated into this new VMFS-L partition.

 

Figure: VMware Partitions in vSphere 6.x and 7

High endurance persistent storage device required

Due to an increase in I/O requests delivered to the ESX-OSData partition, it must be created on a high-endurance persistent storage device. Multiple factors in ESXi 7.x have resulted in higher I/O request rates, including:

    • A higher number of probe requests were issued to examine the device’s condition and ensure that it was still serving IO requests.
    • Scheduled routines to back up system state and timestamps contribute to the increased IO demands in a minor way.
    • Additionally, new features and solutions use ESX-OSData to store their configuration data, necessitating its installation on a high-endurance, locally connected persistent storage device.

Increased storage minimums

ESXi could previously be installed on 1 GB USB sticks. ESXi 7.0, on the other hand, increases this requirement to roughly 4 GB (3.72 GB of storage space, to be precise).

However, the recommended storage capacity is 32 GB. What's noteworthy is that, while the boot partition's size (100 MB) remains constant, the sizes of the other partitions vary depending on the kind of installation media used.

    • 4 GB minimum required to install ESXi 7.0
    • 32 GB recommended for installing ESXi 7.0
    • 4 GB required for upgrading to ESXi 7.0

Dynamic partition sizes

The VMware examples demonstrate media sizes ranging from 4 GB to 128 GB and beyond. As you can see, if you have an SSD larger than 128 GB, the remaining space is used to create a local VMFS datastore.

Figure: Changes in vSphere 7 Partitions

Supported Boot Media in vSphere 7 Layout

As you may be aware, starting with vSphere 7 Update 3, the use of standalone SD cards or USB devices is deprecated, and the system will display warnings when you use them. It is suggested (and will eventually be mandatory) that you store the ESX-OSData partition on a locally attached persistent storage device.

A 32 GB disk is required when booting from a local drive, SAN, or iSCSI LUN in order to create the system storage volumes. Starting with ESXi 7 Update 3, a VMware Tools partition is created automatically on the RAM disk, and warnings appear to prevent you from placing ESXi partitions other than the boot bank partitions on flash media devices. Other ways to improve the performance of an ESXi 7.0 installation include:

    • A 138 GB or bigger local drive for maximum ESX-OSData compatibility. The boot partition, ESX-OSData volume, and VMFS datastore are all located on the drive.
    • A device with an endurance rating of at least 128 terabytes written (TBW).
    • A device with a sequential write speed of at least 100 MB/s.
    • A RAID 1 mirrored device is recommended for resiliency in the event of device failure.

Upgrading to ESXi 7 U3 with an SD card

We've already discussed that, starting with vSphere 7 Update 3, the use of standalone SD cards or USB devices is deprecated. The system will continue to run with warnings if they are used, but it is best to store the ESX-OSData partition on a locally attached persistent storage device.

Upgrade procedure with SD card and additional disk

Follow the procedure below to upgrade an ESXi 6.7 host booting from a standalone SD card or USB device to ESXi 7 with an additional disk, if the ESXi 6.7 host does not have persistent storage:

    • Add a high-endurance, locally connected persistent storage device to the ESXi 6.x host.
    • Upgrade the host to ESXi 7, ensuring it meets the ESXi 7 requirements.
    • If autoPartition=True is set, the first unused boot device will be auto-partitioned and used as the ESX-OSData partition.
    • This guarantees that the system boot partition remains on the SD card or USB device, while the ESX-OSData partition is stored on the newly added storage device.

If the ESXi host has already been upgraded to ESXi 7.x and is running from a USB or SD card:

    • Add a high-endurance, locally connected persistent storage device.
    • Set autoPartition=True on the ESXi host; it will auto-partition the first unused boot device and use it as the ESX-OSData partition.
    • This guarantees that the system boot partition remains on the SD card or USB device, while the ESX-OSData partition is stored on the newly added storage device.
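As a sketch of how the setting is typically applied on the host (verify the exact procedure for your build against KB article 77009 before relying on it):

```shell
# Enable auto-partitioning of the first unused local boot device
esxcli system settings kernel set --setting=autoPartition --value=TRUE

# Confirm the current value of the kernel setting
esxcli system settings kernel list | grep autoPartition
```

After a reboot, the first unused local disk should then be partitioned and used for ESX-OSData.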

ESXi 7.0 degraded mode

When a 4 GB boot device is used and no local disk is discovered, ESXi enters a state known as 'degraded mode.' In summary, degraded mode is a condition in which logs and system state may not be persistent, and boot-up is delayed as a result.

Note that if the OSData partition is on an HDD or better media, the system will not enter degraded mode.

Figure: The only vSphere 7 layout that will remain supported is the use of persistent storage devices only.

A sysalert appears if the host enters degraded mode:

ALERT: No persistent storage is available for system logs and data. Because ESX has a limited amount of system storage capacity, logs and system data will be lost if the server is rebooted. To fix this, install a local disk or flash device and follow the steps in KB article 77009.

If you no longer want to use an SD card or USB device, you can:

    • Use a locally connected persistent storage device.
    • Reinstall ESXi 7.x on the locally connected storage device.
    • This ensures that all partitions are kept on a locally connected, high-endurance storage device.


Conclusion

While the new vSphere 7 layout will certainly bring hardship to customers with a large fleet of hypervisors installed on SD cards, it also introduces more flexibility and improves the integration of VMware and third-party solutions into the vSphere hypervisor.

With the introduction of vSphere 7 Update 3, VMware is discontinuing support for boot configurations that use only an SD card or USB drive without a persistent storage device.

Because these configurations will not be supported in future vSphere versions, customers are encouraged to stop using SD cards and USB devices entirely. If that isn't possible right now, make sure you use at least 8 GB SD cards or USB drives, along with a minimum 32 GB locally connected high-endurance device for the ESX-OSData partition.

How to Use Tanzu for Free! https://www.altaro.com/vmware/vmware-tanzu-community-edition/ https://www.altaro.com/vmware/vmware-tanzu-community-edition/#comments Fri, 11 Mar 2022 12:16:22 +0000 https://www.altaro.com/vmware/?p=23736 Earlier this year, VMware released Tanzu Community Edition at VMworld. Learn how you can use VMware Tanzu for free with this edition.


Kubernetes is one of the hottest technologies across the enterprise space as organizations move forward with designing modern applications. Many businesses are looking at Kubernetes-powered technologies to redesign in-house applications, shift to microservices, improve scalability, and modernize development across the board.

VMware Tanzu is a powerful enterprise Kubernetes platform that allows businesses to have the tools needed to run Kubernetes across clouds. This year at VMworld, VMware announced the release of VMware Tanzu Community Edition. What is VMware Tanzu Community Edition (TCE)? What is the difference between TCE and VMware Tanzu? How does it work?

What is VMware Tanzu Community Edition?

Let’s understand what VMware Tanzu Community Edition is exactly. VMware Tanzu Community Edition (TCE) is a version of VMware Tanzu that is free to download and use. It is an open-source release of VMware Tanzu with the same features and functionality as the enterprise edition of VMware Tanzu.

Tanzu Community Edition is free to use, no strings attached, open-source, and community-supported. There is no registration or time limitation with TCE. It has a Kubernetes-friendly installer and is easier to provision than a “do it yourself” implementation based on technology used across the Tanzu platforms.

It is a fully featured Kubernetes distribution with added functionality. Tanzu Community Edition is the edition for learning and using VMware Tanzu for free, without limitations on the core product.

Community Edition is the platform VMware will use to evaluate early-stage technology for the Tanzu platform. It includes early evaluations of Kubernetes .0 releases along with experimental and alpha releases of applications.

One of the main differences between Tanzu Community Edition and the commercial editions, including Standard, Advanced, and Enterprise, is community vs. commercial support. Commercial editions also include other services, such as Tanzu Mission Control and Tanzu Observability.

How does Tanzu Community Edition compare to other easy Kubernetes learning platforms, such as Kind and Minikube? Tanzu Community Edition is based on VMware Tanzu technology. A key aspect of Tanzu Community Edition is that it provides a curated selection of packages, which brings many advantages.

Where can you use Tanzu Community Edition? It can be deployed to AWS, Azure, vSphere, desktop hypervisors, and Docker on Linux, Mac, or Windows. Support is planned for more platforms down the road. In addition, the size of the Kubernetes cluster is configurable to match your needs and available resources.

Benefits of VMware Tanzu Community Edition

VMware Tanzu Community Edition will open up many new benefits and capabilities for developers, DevOps engineers, and others who want to learn and play around with VMware Tanzu and Kubernetes in general. Other use cases and benefits of the TCE solution include:

    • Short-lived, disposable K8s clusters
    • Dev & Test environments that can be spun up quickly and work in isolation
    • An easy platform that can be provisioned for workshops, personal training, demonstrations, etc.
    • No license costs for using fully-featured VMware Tanzu features
    • Local development environments using the same VMware Tanzu distribution used in production

Tanzu Community Edition Package Management System

VMware integrated an open-source package management system into Tanzu Community Edition that is akin to package managers users are already familiar with, such as yum, apt, and winget. The package management solution is built on the open-source project kapp-controller, which manages a custom resource that VMware refers to as a package repository.

What is kapp-controller? From the kapp-controller website:

“kapp-controller’s declarative APIs and layered approach enables you to build, deploy, and manage your own applications. It also helps package your software into easily distributable packages and enables your users to discover, configure, and install these packages on a Kubernetes cluster.”

Everything in the kapp-controller package management solution in VMware Tanzu Community Edition ends up as an OCI bundle. OCI bundles are the format typically used to package container images and push them to registries like Docker Hub and Harbor. VMware pushes the package repository up to an OCI registry, and kapp-controller inspects the package repository registered in the cluster to discover which packages the bundle makes available.

Once it sees the packages are available, it pulls them down and makes them available in the cluster. If you have ever run apt update or added a repository through a package manager and then listed the packages that could be installed locally on a Linux system, the way TCE makes use of the package repository is similar.

Finally, a custom resource is applied to the cluster to declare the intent to install a particular package. When you declare that intent, for example with kubectl, kapp-controller understands it needs to retrieve the configuration for installing the package. That generally means pulling YAML manifests: services, deployments, config maps, ingress objects, and so on.
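As a sketch of what such a declaration looks like, here is a hypothetical kapp-controller PackageInstall manifest you would apply with kubectl; the package name, version, and service account below are illustrative:

```yaml
# Hypothetical example: declares intent to install a package from the
# repository; kapp-controller reconciles it by pulling the package's
# manifests and applying them to the cluster.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: cert-manager
  namespace: default
spec:
  serviceAccountName: default-sa   # account kapp-controller uses to apply objects
  packageRef:
    refName: cert-manager.community.tanzu.vmware.com
    versionSelection:
      constraints: 1.5.3
```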

Overview of the TCE package management from the client-side

Secure chain of trust

By using the OCI assets in the package manifest, VMware creates a chain of integrity that is unique to the package management solution. From the definition of the package and its configuration onward, all objects are referenced by the SHA digest of the OCI bundle itself. Going deeper, inside the configuration, all container images that will eventually run in the cluster are also referenced by the SHA digests of the images themselves.

This creates a verifiable chain of trust based on SHAs: from the package definition to the container image you run, there is a level of integrity for the software you are deploying. While you can use kubectl as you would in other clusters, VMware also provides the Tanzu CLI, which allows listing and installing packages along with working with repositories.
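For instance, the package workflow from the Tanzu CLI looks roughly like this in TCE (a sketch; exact subcommands and flags vary between releases, and the package name is illustrative):

```shell
# List the package repositories registered in the cluster
tanzu package repository list --all-namespaces

# List the packages the repository makes available
tanzu package available list

# Install a package (name and version are illustrative)
tanzu package install cert-manager \
  --package-name cert-manager.community.tanzu.vmware.com \
  --version 1.5.3
```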

While TCE provides a robust set of packages out of the gate, VMware also enables users to bring the custom package repositories they are already using in their environments today.

Bootstrap cluster

Tanzu and Tanzu Community Edition are built on the foundations of Cluster API. Cluster API is an open-source project built on the principle of taking the declarative API and reconciliation model of Kubernetes and applying it to bootstrapping and managing clusters themselves. It simplifies provisioning, upgrading, and operating multiple Kubernetes clusters. VMware asked the question: in the same way that you kubectl apply a manifest to get an Nginx server, why can’t you do something similar to declare a cluster and get clusters in a target environment like AWS or vSphere?

When you create a Tanzu cluster, a bootstrap cluster is initially created with kind, which runs a minimal Docker-based local cluster. Once the bootstrap cluster is provisioned, it is injected with objects that declare the desired state in the target provider (AWS, vSphere, local machine). It then creates a management cluster on your specified provider. Finally, it initializes management components inside the cluster to communicate with the provider to create infrastructure, including networking, and to initialize Kubernetes.

Management Cluster

VMware then pivots the configuration of the bootstrap cluster into the initial management cluster that is created. This process creates the fully instantiated management cluster, which is then responsible for the management of any new workload clusters. The management cluster can create, manage, and delete any number of workload clusters. This model is called managed clusters. You can think of this as the more production-ready version of cluster bootstrapping in TCE.
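In practice, the managed-cluster workflow is driven from the Tanzu CLI; a hedged sketch (cluster names are illustrative, subcommands as in TCE v0.9-era releases):

```shell
# Launch the guided installer to create the long-running management cluster
tanzu management-cluster create --ui

# Once the management cluster exists, create workload clusters from it
tanzu cluster create my-workload-cluster

# ...and scale or delete them through the same CLI
tanzu cluster scale my-workload-cluster --worker-machine-count 3
tanzu cluster delete my-workload-cluster
```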

Managed clusters in TCE

Standalone cluster

There is another cluster model that VMware is experimenting with in TCE, called standalone clusters. At this point, standalone clusters are highly experimental and only partially implemented. However, they provide the fastest way to get a functioning workload cluster with fewer resources than managed clusters, as they do not require a long-running management cluster.

The current implementation of the standalone cluster still instantiates a bootstrap cluster, and then the bootstrap cluster instantiates the workload cluster. However, instead of the bootstrap cluster pivoting over and injecting resources into the resulting cluster, it is kept as a normal cluster, and the bootstrap cluster simply dies off.

When you want to change the cluster (scale, delete, etc.), the bootstrap cluster is instantiated again and changes the cluster in the way you have declared using the CLI. The bootstrap cluster is ephemeral and is stopped once the operation completes.

The Standalone Tanzu Community Edition model

VMware Tanzu Community Edition Prerequisites

There are a few prerequisites for installing VMware Tanzu Community Edition. These requirements may differ depending on the platform on which you are installing TCE. The supported platforms are Linux, Mac, and Windows. For the remainder of the walkthrough, we will look at the installation of TCE on the Linux platform.

Hardware requirements

The hardware requirements for TCE are fairly similar between the platforms, with VMware documenting more memory required on a Windows platform. Note the following hardware requirements.

Platform    RAM (GB)    CPUs
Linux       6           2
Mac         6           2
Windows     8           2

Software requirements

There are a few software requirements to be aware of that are prerequisites to installing VMware Tanzu Community Edition. Note the following software components needed:

    • Docker – In the Linux deployment, you must create the docker group and add your user to it before you attempt to create a standalone or management cluster.
    • Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge

Installing VMware Tanzu Community Edition in Linux

Let’s look at the VMware Tanzu Community Edition installation process in a Linux environment. The steps consist of the following:

    1. Download and install kubectl
    2. Install Docker
    3. Check your cgroup
    4. Download and install the TCE package
    5. Run the GUI-assisted cluster creation

1. Download and install kubectl

To download kubectl to your Linux distribution, use the following command to pull down the latest version of kubectl:
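The command itself lived in the screenshot below; per the official Kubernetes documentation, the latest stable Linux (amd64) build can be pulled down with:

```shell
# Fetch the latest stable kubectl release for Linux (amd64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```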

Once kubectl is downloaded, you can install it by running:

    • sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Download and install kubectl

2. Install Docker

Read the official documentation from Docker on installing Docker in your specific distribution here: Install Docker Engine | Docker Documentation.

Installing Docker in Linux

3. Check your cgroup

Verify your Linux distribution is configured to use cgroup v1. You can verify this is the case after you install Docker by running the command:

    • docker info | grep -i cgroup

Checking the cgroup configuration in your Linux distribution

4. Install Tanzu Community Edition

    • There are a couple of ways to install Tanzu Community Edition. The first is using a package management tool, such as Homebrew in Linux and macOS or Chocolatey in Windows. The second is to pull down the package from the official GitHub releases page, extract it, and install it.

Below, I am using Homebrew to install Tanzu Community Edition in Linux. You can do this with the Homebrew command:

    • brew install vmware-tanzu/tanzu/tanzu-community-edition

Installing Tanzu Community Edition (TCE) with Homebrew

After the Homebrew installation of TCE completes, you need to run a final post-installation configuration script. This can be completed using the command:

    • /home/linuxbrew/.linuxbrew/Cellar/tanzu-community-edition/v0.9.1/libexec/configure-tce.sh

Running the post-installation script after installing TCE using Homebrew

Configuring a Tanzu Community Edition Standalone cluster

Once you have Tanzu Community Edition installed, you are ready to use the tanzu CLI tools to create your clusters. First, let’s see the process of using the Tanzu Community Edition CLI to provision a new standalone cluster, which is currently the experimental offering with TCE.
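The launch command appears in the screenshot below; in the TCE v0.9-era CLI it is along these lines (hedged, as subcommands have changed between releases):

```shell
# Launch the browser-based installer for an experimental standalone cluster
tanzu standalone-cluster create --ui
```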

Launching the Tanzu Community Edition cluster creation GUI

The Tanzu Community Edition installer launches in a web browser tab once you run the above command. As you can see from the screenshot, you have the option to deploy the standalone cluster in:

    • Docker
    • VMware vSphere
    • Amazon EC2
    • Microsoft Azure

Here I am choosing Docker to proceed.

Choosing the environment to deploy the standalone cluster

Once you choose your deployment target for the standalone cluster, the Deploy Standalone Cluster wizard will begin. Click Next.

Beginning the deployment of the standalone cluster in Docker

Next, give the standalone cluster a name.

Naming your Tanzu Community Edition standalone cluster

In step 3, you can customize the Kubernetes network settings. For most environments, you can accept the default settings unless you have a network that overlaps with another cluster.

Configuring the Tanzu Community Edition standalone cluster network settings

Click the Review Configuration button to see the configured settings for your standalone cluster.

Review the Tanzu Community Edition cluster configuration

For learning purposes, the wizard shows the command you can use to perform the operation using the CLI. In case you are wondering what the YAML file looks like for the configuration:

CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tce-standalone
ENABLE_MHC: "false"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: docker
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: ""
OS_NAME: ""
OS_VERSION: ""
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"

Click the Deploy Standalone Cluster button.

Beginning the deploy standalone cluster operation

After beginning the deployment, you will see the graphical output of the operation, displaying the logs in real time, much as if you were tailing the deployment logs.

Tanzu Community Edition standalone cluster deploying

After just a few minutes, the Tanzu Community Edition standalone cluster deployment finishes successfully.

The Tanzu Community Edition standalone cluster deployment finishes successfully

Now that the Tanzu Community Edition standalone cluster is deployed successfully, we can point the kubectl context at the standalone cluster, use that context, and start looking at pods, nodes, and other information.

Use the following commands:

    • kubectl config set-context <your cluster name>-admin@<your cluster name>
    • kubectl config use-context <your cluster name>-admin@<your cluster name>

Setting and using the context for the standalone Kubernetes cluster in Tanzu Community Edition

We can now interact with the cluster using standard Kubernetes commands to view the nodes in detailed view and shortened form.
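For example, with standard kubectl (nothing TCE-specific here):

```shell
# Detailed view: adds internal IPs, OS image, container runtime, etc.
kubectl get nodes -o wide

# Shortened form, using the resource short name
kubectl get no
```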

Using the kubectl command to view the Tanzu Community Edition Kubernetes nodes

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly back up and replicate your virtual machines. We continually work hard to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Concluding thoughts

VMware Tanzu is a game-changer for VMware. It gives existing vSphere customers the ability to deploy Kubernetes without restructuring their environment, adopting different tools, or deploying another platform, as VMware has brought Kubernetes into ESXi itself. Kubernetes has traditionally been complicated to deploy, configure, troubleshoot, and operationalize without specialized skills.

VMware Tanzu commoditizes Kubernetes for all its existing customers and organizations who decide to use VMware Tanzu for their on-premises and cloud Kubernetes platform. It provides a commercialized Kubernetes offering managed by familiar VMware tools used across the enterprise. It also provides additional tooling and solutions that allow businesses to extend and use Kubernetes more intelligently and with visibility and service integration.

VMware Tanzu Community Edition (TCE) is a free edition of VMware Tanzu, providing the same VMware Tanzu core as the commercial offering. With Kubernetes now an integral part of modern application architecture, developers and IT Ops engineers alike have the need to become familiar with and use Kubernetes. VMware Tanzu Community Edition provides a distribution of VMware Tanzu, allowing developers and DevOps engineers to provision a local Kubernetes lab or sandbox quickly.

The unique Community Edition can be installed in Docker, vSphere, AWS, and Azure and offers an excellent way for developers to use, test, and learn using the same Kubernetes technology as the commercial VMware Tanzu offering. Additionally, VMware is offering Tanzu Community Edition as a freely available download, without any registration required or time limitations to play around with VMware Tanzu in this form.

Tanzu Community Edition allows provisioning both standalone clusters (experimental), and managed clusters in a unique way that self-provisions the solution. Users can use familiar tools such as kubectl to manage nodes, pods, and other aspects of the K8s clusters, along with the Tanzu CLI.

VMware Tanzu Community Edition is a great new addition to the VMware Tanzu fleet and an excellent solution for learning and development. The project is rapidly evolving, so keep pace with the latest developments directly from the VMware Tanzu Community Edition documentation site.

The post How to Use Tanzu for Free! appeared first on Altaro DOJO | VMware.

Traceability and Auditing with VMware vRealize Log Insight Cloud — https://www.altaro.com/vmware/vrealize-log-insight-cloud/ — Fri, 25 Feb 2022 — Find out how vRealize Log Insight Cloud can help you improve traceability and auditing across various public and private clouds.


The landscape of IT as we know it has always been shaped by innovation, driven by the vision of modern tech organizations. Through these innovations, environments gain in complexity and inter-dependencies, making auditing and traceability a critical component of the IT environment. This is where vRealize Log Insight Cloud comes into play and helps solve this problem.

Over the last few years, VMware has been following the vision set out by Pat Gelsinger back in the early 2010s. The company made a shift towards cloud computing, and it was still the most prominent topic during VMworld 2021. Through this shift, VMware is offering more and more of its products as cloud services, and this applies to vRealize Log Insight, which used to be an on-premise appliance-only solution. With vRealize Log Insight Cloud, the on-premise appliance is a proxy that acts as a syslog target and relays the data to your VMware Cloud service.

What is VMware vRealize Log Insight Cloud?

Formerly known as VMware Log Intelligence, VMware vRealize Log Insight Cloud is a cloud service that offers a managed solution to gain visibility across various public and private clouds through log forwarding. You will find the features of any respectable syslog server, such as log aggregation, analytics, dashboards, and custom alerting.

vRealize Log Insight Cloud

The great thing about vRealize Log Insight is that it includes content packs with all the intelligence and experience gathered by VMware and their customers over the years. That way you don’t have to download third-party plugins or manually create a bunch of rules to get a deep insight into your products.

vRealize Log Insight Cloud architecture and ingestion options

VMware vRealize Log Insight Cloud is built in a way that allows multiple services to forward logs to it so that the IT department can correlate data across their SDDC and cloud services.

vRLI Cloud architecture

The vSphere integration for an on-premise SDDC is based on an appliance named the Cloud Proxy. This appliance collects the logs from various on-premise sources and forwards them to VMware vRealize Log Insight Cloud in a compressed and encrypted state.

vRealize Log Insight Cloud connects various public and private Clouds to consolidate log aggregation


In this article, we will describe how to get started with VMware vRealize Log Insight Cloud by using the 30-day trial period.

Ingestion options

A wide variety of sources are currently supported out of the box with associated content packs such as:

    • Agents: Cloud Proxy, Fluentd, Fluent Bit, Log Insight Agent, LogStash
    • Applications: Apache, Docker, HAProxy, Kubernetes, IIS, SQL Server, TKG, NGINX, Github, Gitlab…
    • Cloud Providers: AWS, Azure, GCP (Google Cloud Platform), VMC on AWS
    • Third-party forwarders: syslog-based forwarders such as Rsyslog, over TCP and UDP.

Note that these are the log sources that encompass all use cases. You will find a more specific array of solutions in the Content Packs with products like vRealize Orchestrator, vRealize Automation, SRM, Dell iDRAC, Active Directory, you name it.

Content Packs with products like vRealize Orchestrator, vRealize Automation, SRM, Dell iDRAC, Active Directory

How to set up VMware vRealize Log Insight Cloud

As the name implies, VMware vRealize Log Insight Cloud is a cloud service so you need to have a VMware Cloud Services account in order for you to enable it. If you don’t have an account, you can create one here.

The onboarding process will be different if you are a VMware Cloud (VMC) user


Although VMware vRealize Log Insight Cloud is a paid VMware Cloud service, a 30-day trial period is offered for free to test the product before going with a paid subscription. We will use this free trial period in this example.

Port requirements

Let’s cover a few networking prerequisites laid out by VMware before jumping in. There is a page in the official documentation with a “getting started” checklist. Most of its points are covered in this article; however, the part about networking ports should be addressed before starting.

The Cloud Proxy appliance we will deploy will need the following network ports:

Source                                 | Destination                | Port | Protocol | Service Description
Standard system log                    | Remote Cloud Proxy         | 514  | TCP, UDP | Syslog data over TCP or UDP
vRealize Log Insight Agents or Server  | Remote Cloud Proxy         | 9000 | TCP      | vRealize Log Insight log data in JSON format (CFAPI)
Remote Cloud Proxy                     | vRealize Log Insight Cloud | 443  | TCP      | vRealize Log Insight Cloud data over HTTPS

Step 1: Request Trial access

The first step is to request access to the service within the trial period. It took less than 15 minutes for me to receive the activation email so it should be pretty quick.

    • Log in to the VMware Cloud Services console > Go to Services > Search for “log insight” > Click on REQUEST ACCESS.

https://console.cloud.vmware.com

VMware Cloud Services console

    • You will be redirected to the vRealize Log Insight product page. Here click on REQUEST FREE CLOUD TRIAL.


    • In step 1 of the registration window, type in your details and click NEXT.


    • In step 2, you may or may not put in your real information (a thought goes to all these fake VMware accounts to download evaluation products…). Then click NEXT.


    • In step 3, you can add extra details and choose which communications you want to receive. Finish the wizard with the captcha and click SUBMIT.


    • At this point, you will receive a notification email letting you know the request has been received. You will then need to wait a bit for the activation email to come through.


    • Once you get the activation email, click on ACTIVATE SERVICE.


You should receive the confirmation email pretty quickly after requesting access to the trial.

    • An organization with the details you filled in earlier should be pre-created and checked. Click CONTINUE here.


    • This will take you to a page where you can review the subscription tiers and start the trial with START MY TRIAL.


At this point, you have enabled your free 30-day trial period and have access to the vRealize Log Insight Cloud console at https://www.mgmt.cloud.vmware.com/li/. In the next steps, we will deploy the Cloud Proxy to link our on-premise environment to the VMware Cloud service.

Step 2: Deploy vRealize Log Insight Cloud Proxy

Now that we have access to the console, we need to deploy the Cloud Proxy in our on-premise environment to gather the logs and forward them to vRealize Log Insight Cloud. The download of the Cloud Proxy appliance happens in the Cloud console (not on my.vmware).

Cloud proxies establish the connection between your on-premise SDDC and vRealize Log Insight Cloud


    • The Cloud Proxies page in the console brings up a popup where you can download the appliance by clicking DOWNLOAD OVA.

The download may take some time but you can come back to this page by following the same path so don’t worry if you close it or get your session disconnected after a while. Note that the key specified below will be used when we deploy the OVA to link it with the Cloud portal.

The Cloud Proxy appliance must be downloaded from your vRealize Log Insight Cloud console


    • Once the appliance is downloaded, go ahead and deploy it in your vSphere environment. I won’t go into the details of deploying an OVA; I will only point out that you need to paste the key mentioned earlier in the Customize template pane, under the VMware Cloud Services One Time Key (OTK) section.

The key will pair your Cloud Proxy instance with your VMware Cloud account


    • After you start the appliance, wait a couple of minutes and the Cloud Proxy should appear in the VMware vRealize Log Insight Cloud console like so.
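Once the proxy is up, you can sanity-check its listening syslog ports from a prospective log source using netcat; the proxy hostname below is illustrative:

```shell
# Verify the Cloud Proxy accepts syslog connections on TCP 514
nc -vz cloud-proxy.lab.local 514

# And the CFAPI port used by vRealize Log Insight agents
nc -vz cloud-proxy.lab.local 9000
```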

Click on the Cloud Proxy’s name to display extra details about it


Enable Content Packs

We now have an on-premise Cloud Proxy that is linked to the VMware Cloud Services console but no logs are being forwarded just yet. First, we’ll need to enable content packs.

    • Go to Content Packs > Public > Enable those that apply to your environment.

In my case, I enabled VMware Cloud > General.


As well as VMware Products > VMware vSAN, VMware vSphere.

There are quite a few similar ones so it can be tricky to know which one to enable. I chose to pick the latest one.

Content packs are available for a wide variety of source products


 

Step 3: Connect vCenter Server

We now need to connect our on-premise infrastructure to vRealize Log Insight Cloud. In order to do so properly, we will create a vSphere role with just enough permissions (don’t go using the SSO admin account!).

vSphere role creation

    • Log in to your vCenter Server and go to Administration > Access Control > Roles > click on the Read-only role > Clone it, give it a reasonable name such as vRealize Log Insight Cloud, and click OK.

The read-only role must be cloned to create a dedicated role for vRLI Cloud


    • Edit the role and add the following privileges under the Host subcategory:
      • Configuration.Advanced settings
      • Configuration.Change settings
      • Configuration.Network configuration
      • Configuration.Security profile and firewall

Note that these host privileges are required for vRealize Log Insight to automatically configure the hosts, otherwise you would have to do it all manually.

Host privileges allow vRealize Log Insight Cloud to configure the hosts for log forwarding.

    • Then create a new user. Whether it is in AD, OpenLDAP, or vsphere.local doesn’t matter as long as it follows your internal security policy. I created vrli-cloud@vsphere.local for the purpose of this demonstration.
    • Then select the top vCenter object > Permissions > Add the user we created with the role we created and enable Propagate to children.

Set the permission at the root of the vCenter instance


vCenter connection and vSphere host logs forwarding

    • Once this is done, go back to the VMware Cloud console in Configuration > vSphere Integration > ADD VCENTER SERVER and type in the connection details. Check both checkboxes to configure the hosts and forward events to it, then click SAVE.

It is recommended to check the boxes to ensure proper host configuration


    • Once this is successfully completed, you should see the configured hosts on the vSphere Integration page.


If you look at the Advanced Settings of a host that was reconfigured, you will find the Syslog.global.logHost value set to the Cloud Proxy appliance.

vSphere hosts are automatically configured for log forwarding.
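You can run the same check from the host’s shell; esxcli exposes the syslog configuration (a sketch, assuming a standard ESXi install):

```shell
# Show the current syslog configuration on an ESXi host,
# including the remote host set by vRealize Log Insight Cloud
esxcli system syslog config get

# Inspect the advanced setting directly
esxcli system settings advanced list -o /Syslog/global/logHost
```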

vCenter server logs forwarding (Optional)

This step is optional but we will quickly see how to configure the vCenter appliance to forward its syslog activity to vRealize Log Insight Cloud.

    • Log in to the vCenter Server VAMI at https://<vcenter>:5480, go to Syslog > CONFIGURE, configure it as follows, and click SAVE:
Server Address: FQDN or IP address of the Cloud Proxy appliance
Protocol: TCP
Port: 514

The default protocol and port used for syslog forwarding here are TCP and 514.

    • There is a SEND TEST MESSAGE feature that sends a specific message to make sure it is picked up by whatever Syslog solution is in the background.

The vCenter VAMI lets you send a test message to ensure a successful connection.

    • You can check for this live in vRealize Log Insight Cloud by going to Live Trail, setting the filter to look for “syslog test message”, for instance, and sending it from the VAMI; it should pop up in the live trail. If it doesn’t come up, then I’m afraid it’s time to start troubleshooting.
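If you would rather send a test message from an arbitrary Linux machine pointed at the Cloud Proxy, util-linux logger can do it; the proxy hostname below is illustrative:

```shell
# Send a test syslog message over TCP 514 to the Cloud Proxy
logger --server cloud-proxy.lab.local --port 514 --tcp "syslog test message from $(hostname)"
```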

Live Trail lets you observe logs in real time as they come in.

Step 4: Start using it

The rest is up to you to tailor vRealize Log Insight Cloud to your needs and start skimming through logs to find anomalies you wouldn’t normally pick up.

Customize your dashboards and queries to gain visibility in your environment


The user interface is pretty self-explanatory and intuitive to use. You can create custom dashboards, move things around, create complicated queries…

The content packs you enable will give you access to a number of dashboards such as the one below for “vCenter Server – Events”.


I particularly liked KB Insights, a feature that uses indexing and machine-learning techniques to identify anomalies and pair them with suggested solutions from a knowledge base created by customers and field experts for similar problems solved in the past. That way, you save time by letting the engine do the research work for you and propose KB articles or VMTN posts that may hold the solution to the issue.

KB Insights proposes KB articles or VMTN community posts as potential solutions to anomalies found in the logs.

Additional config

While there is no definite answer as to how you should use the solution going forward, it is highly recommended to configure the email settings so you can receive an alert whenever a condition is met.

Email notifications are a must-have in any respectable SDDC environment.

You can also have a look at the Access control settings of vRealize Log Insight Cloud. Three roles exist out-of-the-box that will be enough in most instances:

    • Organization Owner
    • vRealize Log Insight Cloud Admin
    • vRealize Log Insight Cloud User

Free Tier and Premium Subscription

The 30-day free trial period has no restrictions in terms of features, so you can review the solution as you would use it in production. Log retention is 10 days and the following data limits apply:

    • VMware Cloud on AWS users: 50 GB per day.
    • Non-VMware Cloud on AWS users: 10 GB per day.

Once you reach the end of the trial period, the following happens:

    • VMware Cloud on AWS users: 15-day grace period, then conversion to the VMware Cloud core subscription or upgrade to a premium subscription.
    • Non-VMware Cloud on AWS users: You must upgrade to a standalone premium subscription to continue using it.

You can find up-to-date information on features and subscription specifics in the official documentation.

Switching to a paid subscription is made easy: just go to Configuration > Subscriptions > ADD PAYMENT METHOD. You can then choose a plan, which will be more or less financially attractive depending on your commitment to the program.

The vRealize Log Insight Cloud Premium subscription will vary in price according to the time commitment.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to secure backup quickly and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Make sure you come back regularly to our VMware DOJO section to keep up with the latest VMware articles and news!

So, Should You Be Using vRealize Log Insight Cloud?

In most organizations, the VMware infrastructure is managed by a handful of administrators but accessed by a variety of users for specific purposes linked to their role in the company. In such instances, vRealize Log Insight Cloud helps you identify user actions for traceability and auditing purposes. Some software vendors may also ask customers to retain logs to prove that their CPU- or VM-based license is not being overused.

vRealize Log Insight Cloud offers robust Syslog capabilities for VMware products and third-party software through content packs. Because the logs are stored in the cloud, they remain available even in the event of a full site or storage failure, which helps in the troubleshooting effort.

The post Traceability and Auditing with VMware vRealize Log Insight Cloud appeared first on Altaro DOJO | VMware.

VMware Sovereign Cloud and How Legislation Affects Your Data

Published Fri, 14 Jan 2022 (https://www.altaro.com/vmware/sovereign-cloud/)

Find out where data sovereignty fits in the current IT landscape and how VMware helps ensure legislation is enforced by cloud providers.


VMware Sovereign Cloud is an initiative by the company to show customers that data sovereignty and compliance in the cloud are being actively worked on, and to ensure that those customers can rely on VMware’s services to safely store their data and workloads, with openness, transparency, data protection, security, and portability in mind.

The concept of data sovereignty is not new per se but it has organically become an important topic to consider among large organizations and government entities following the rise of commodity cloud computing, cyber security threats, the Snowden leaks…

VMware’s own definition of sovereignty is the following:

“Sovereignty is the power of a state to do everything necessary to govern itself, such as making, executing, and applying laws; imposing and collecting taxes; making war and peace; and forming treaties or engaging in commerce with foreign nations.”

“Data sovereignty refers to data being subject to the privacy laws and governance structures within the nation where data is collected.”

 

Data Sovereignty: The Challenge of the Data Decade

You may be familiar with Moore’s law, formulated around 1970, which stated that CPU speeds would double every year and which still hadn’t been discredited in 2021, over 51 years later. While global data growth doesn’t follow the same dramatic trend, it does evolve exponentially. In fact, back in 2018, IDC estimated that over 175 zettabytes will be generated each year by 2025.

Annual size of the global datasphere – Sponsored by Seagate from IDC

Environments that store all of their data on-premises know where the data is when it leaves the network, where it goes, and how it is used. However, the advantages of the cloud are no longer subject to debate; it is an accepted fact that cloud computing solves many a challenge, and most companies leverage it in some way or another.

With that said, storing data in the cloud means it is no longer under your control but under the cloud provider’s, meaning it could sit in another country that abides by different laws, and this is where the discussion begins. As you can see in the trend below, the amount of data stored in the cloud is growing.

Data storage is shifting from on-premises data centers to public cloud providers.

Enter data and cloud sovereignty. Data sovereignty (and indirectly cloud sovereignty) refers to countries’ jurisdiction over data compliance: who owns the data, who is authorized to store it, how it can be used, protected, and stored, and what would happen should the data be used with ill intent. With the growth of data storage in the cloud, public entities, large enterprises, and government bodies are eager to ensure that their cloud-based data is treated right and that they don’t need to worry about it.

Among recent examples of sovereign cloud initiatives, we find:

    • The Principality of Monaco recently unveiled a sovereign cloud in which all the shareholders are residents and the state owns a controlling stake.
    • The European Commission is spearheading the Franco-German Gaia-X project to create a federated and secure data infrastructure. The goal is an open, transparent and secure digital ecosystem, where data and services can be made available, collated and shared in an environment of trust.

The European cloud market was allegedly worth €53 billion in 2020 and is expected to be worth between €300 billion and €500 billion by 2027-2030, hence VMware’s eagerness to be ahead in the cloud sovereignty market.

Introducing the New VMware Sovereign Cloud Initiative

Up until recently, data sovereignty was ensured by cloud providers through clauses in contracts regarding several areas of the data lifecycle. While large enterprises have departments with dedicated people to deal with all of this, smaller structures can’t necessarily afford the overhead or simply don’t have the resources internally to understand the risks and benefits associated with data sovereignty.

VMware Sovereign Cloud streamlines the process of ensuring data sovereignty with cloud providers.

One needs to ensure at the very least that:

    1. The cloud infrastructure is secure, modern, and kept up to date at all times.
    2. Customers’ data sovereignty is assured and guaranteed.

It is with these challenges in mind that VMware Sovereign Cloud aims to simplify and streamline the process of cloud sovereignty by offering customers a certified cloud offering through partnerships with cloud providers. The VMware Sovereign Cloud Initiative is built on a framework comprising a number of rules a provider must abide by in order to become certified. VMware Sovereign Cloud providers must meet the applicable geography-specific sovereign cloud requirements, regulations, or standards where their Sovereign Cloud is made available.

In fact, you can already review the list of VMware Sovereign Cloud providers on cloud.vmware.com, where you will find all VMware cloud solutions. As of the time of this writing, there are 9 VMware Sovereign Cloud providers, but the list will grow as others get on board. Once a provider checks all the boxes of the VMware Sovereign Cloud Initiative framework, it receives the VMware Sovereign Cloud designation.

VMware Sovereign Cloud providers can be filtered in cloud.vmware.com.

Ensuring data privacy and compliance

Sovereignty has become an important part of national policy, and customers are starting to get on board with this train of thought. VMware Sovereign Cloud is here to help them navigate these waters, and verified VMware Sovereign Cloud providers ensure data remains where the workloads run.

Environment, Social & Governance (ESG) are the three VMware Sovereign Cloud strategies.

In order to certify providers as Sovereign Cloud Providers, VMware is developing a two-phase approach to tackle the problem:

VMware Sovereign Cloud Framework

This framework developed by VMware includes guiding principles, best practices, and technical architecture requirements to adhere to the data sovereignty requirements of the specific jurisdiction in which that cloud operates. For instance, France requires data to be stored in the European Union while Germany requires localization either in Germany or the EU depending on the level of data sovereignty.
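As a concrete illustration of such jurisdiction rules, here is a hypothetical sketch; the rule table and function are my own invention for illustration, not part of the VMware framework:

```python
# Hypothetical residency rules mirroring the examples above:
# France requires data to stay in the EU; Germany accepts Germany or the EU,
# depending on the level of data sovereignty.
RESIDENCY_RULES = {
    "FR": {"EU"},
    "DE": {"DE", "EU"},
}

def is_compliant(customer_country: str, storage_region: str) -> bool:
    """True if storing data in storage_region satisfies the customer's residency rules."""
    allowed = RESIDENCY_RULES.get(customer_country, set())
    return storage_region in allowed

print(is_compliant("FR", "EU"))  # True
print(is_compliant("FR", "US"))  # False
```

A real framework encodes many more dimensions (operations, personnel, legal jurisdiction), but the principle of checking a workload’s location against its home jurisdiction is the same.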

The framework is built around 5 principles:

      • Data sovereignty and jurisdiction control
      • Data access and integrity
      • Data security and compliance
      • Data independence and mobility
      • Data innovation and analytics

VMware Sovereign Cloud Initiative

The VMware Sovereign Cloud initiative is a designation for providers that self-attest that they meet all the requirements of the VMware Sovereign Cloud framework. They must complete an assessment of their cloud environment (design, build, operations…) and attest that they check all the boxes of the framework. Among other things, VMware Sovereign Cloud providers must follow the VMware Validated Designs (VVD) for Cloud Providers to be VMware Cloud Verified.

Promoting VMware Multi-Cloud Offerings

Although it wasn’t made obvious during VMworld 2021 or in the official communications, pushing VMware Sovereign Cloud may also be a way to open the door to multi-cloud offerings. Organizations and public bodies with data sovereignty concerns aren’t likely to go through all the hoops of data sovereignty compliance with several cloud providers for sport.

VMware Cross-Cloud Services will simplify the adoption of multi-cloud architectures.

Embracing cloud computing isn’t necessarily easy at first. Leveraging several cloud providers for specific features or redundancy reasons multiplies the hurdles along the way. This is why VMware introduced their new multi-cloud offerings with VMware cross-cloud services and communicated so much about it. Now add data sovereignty to the mix and you get a tangled mess that will be tricky to make sense of.

From a business perspective, cloud services are a very lucrative business since they bring recurring revenue and centralize customers while consolidating the maintenance and support effort on VMware’s side. I believe the VMware Sovereign Cloud initiative will lift a load off decision makers’ shoulders; they will only have to select among the available VMware Sovereign Cloud providers and choose whatever service they are interested in, such as VMware Disaster Recovery as a Service (DRaaS).

The Road Ahead

It is no wonder we are in what is referred to as the “data decade”, given the rate of current and projected data being generated by consumers, enterprises, and public entities. While cloud adoption was rather slow at the beginning of the last decade, the emergence of use cases and cloud providers in the last few years has made the cloud an integral part of the modern digital ecosystem. VMware’s global strategy is a testament to this trend, given the resources the company has invested in developing its multi-cloud offering and partnerships with various providers.

VMware Sovereign Cloud is one of the components of this global strategy, but it will certainly be an important one given the customers concerned by these problems. Those include government bodies and highly regulated large entities that usually allocate large chunks of their budget to securing their data, which, at the end of the day, boils down to data sovereignty.

With the VMware Sovereign Cloud Initiative, the company is positioning itself at the forefront of this topic by removing the complexity of cloud sovereignty to promote its multi-cloud offerings. Securing a large customer base on this solution will likely generate significant revenue streams, and customers will be unlikely to switch unless they have a very good reason, given the importance of compliance nowadays.

The post VMware Sovereign Cloud and How Legislation Affects Your Data appeared first on Altaro DOJO | VMware.

VMware Project Capitola: The vSAN of Host Memory?

Published Fri, 24 Dec 2021 (https://www.altaro.com/vmware/vmware-project-capitola/)

At VMworld 2021, VMware introduced VMware Project Capitola, a software-defined memory solution. What is it? How does it work?

The post VMware Project Capitola: The vSAN of Host Memory? appeared first on Altaro DOJO | VMware.


VMware has continued to innovate in the enterprise datacenter with cutting-edge products that are now household names, such as vSAN and NSX. Over the years, it has transformed how we look at compute, storage, and networking, creating an abstraction layer on top of physical hardware to make these resources much easier to consume, manage, and tier.

While memory and compute are often tied together, VMware has unveiled a new technology set to bring similar advantages to the world of server memory that VMware vSAN has brought about in the world of server storage. VMware Project Capitola is a new software-defined memory solution unveiled at VMworld 2021 that will revolutionize server memory in the data center.

What is Software-Defined Memory?

With the various challenges and business problems mentioned above, the idea of a software-defined memory solution comes into focus. We mentioned at the outset, as a parallel to VMware vSAN, the notion of software-defined memory. VMware vSAN can take physical storage assigned to a specific physical vSAN host and pool this logically at the cluster level.

This software-defined approach to physical storage provides tremendous advantages in terms of flexibility, scalability, and storage tiering, giving customers the tools needed to solve modern storage problems. However, while VMware has pushed the envelope in most of the major areas of the data center (compute, storage, and networking), memory, while virtualized for the guest, has so far remained a simple hardware resource assigned to the underlying guest OSes.

What if we had a solution that aggregated the memory installed in physical ESXi hosts, across the different types of memory installed in each host? Software-defined memory allows organizations to make intelligent decisions on how memory is used across the environment and assigned to various resources. In addition, memory can be pooled and tiered in the environment to satisfy different SLAs and performance use cases, much like VMware vSAN allows today.

Memory Types Explained (DRAM, PMEM, NVMe)

There are currently three memory technologies widely used in the data center:

    • DRAM
    • PMEM
    • NVMe

DRAM

DRAM (Dynamic Random-Access Memory) is the standard type of memory common in servers and workstations today. It is very durable and extremely fast in terms of access times and latency. However, it has one major downside: it cannot retain data without power. This characteristic of DRAM is known as volatility.

When DRAM loses power for any reason, the data contained in the DRAM modules is lost and must be retrieved from physical disk storage.

PMEM

PMEM (Persistent Memory) is a type of memory technology that is non-volatile: it retains data even after a power loss. It is high-density and has low-latency access times like DRAM. PMEM still lacks the speed of DRAM; however, it is much faster than the flash memory used in SSDs.

Intel® Optane™ is a 3D XPoint memory technology that is gaining momentum at the enterprise server level as an extremely performant memory technology with the advantages of non-volatility. In addition, Intel® Optane™ provides excellent performance even with multiple write operations running in parallel, something that SSDs and other memory-based storage technologies lack. This type of memory is also referred to as “storage-class memory.”

At this time, Intel® Optane™ is not meant to be a replacement for DRAM. Instead, it complements existing DRAM, providing excellent performance and high reliability. It is seen as a secondary tier of memory used for various use cases and is much cheaper than DRAM: whereas DRAM is around $7-$20/GB, storage-class memory like Intel® Optane™ is around $2-$3/GB.
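A quick back-of-the-envelope sketch makes the economics tangible. The prices below are mid-range points picked from the figures above, purely for illustration:

```python
def memory_cost(dram_gb, scm_gb, dram_price=13.5, scm_price=2.5):
    """Estimated memory cost in dollars, using mid-range $/GB figures from the text."""
    return dram_gb * dram_price + scm_gb * scm_price

all_dram = memory_cost(1024, 0)  # 1 TB provisioned as DRAM only
tiered = memory_cost(256, 768)   # 256 GB DRAM + 768 GB storage-class memory
print(all_dram)  # 13824.0
print(tiered)    # 5376.0 -> roughly 60% cheaper for the same capacity
```

The trade-off, of course, is that the cheaper tier is slower, which is exactly why intelligent tiering matters.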

NVMe

Rather than a type of memory technology, NVMe is an interface for SSDs; you can think of an NVMe drive as a PCIe-attached SSD. As a result, NVMe drives are much faster than traditional SATA SSDs. NVMe storage is becoming a mainstream technology in the data center, especially in the area of high-speed storage devices, and it is fast enough to be used as a slower memory tier in certain use cases.

The Consumers and Use-cases for Pooled and Tiered Memory

Many organizations are becoming memory-bound with their applications. Memory is also a significantly expensive component of physical server infrastructure today: it can comprise as much as 50% of the price of a two-socket physical server.

Data needs are expanding significantly. Many organizations using large database servers find that the memory initially allocated to database workloads grows over time. Many companies are leveraging in-memory databases; as these grow, so does the demand for host memory. Some even find this demand doubling every 18-24 months.

In addition, memory is often intentionally over-provisioned from a hardware perspective because of maintenance operations. Why is this? During maintenance, the overall capacity of a virtualization cluster is reduced, so the remaining hosts must assume the memory footprint of the host in maintenance. Note the comments of an IT admin at a major US airline:

“I am running mission-critical workloads; I need 35% excess memory capacity at the cluster level, which I am not even using most of the time.”

Even large cloud service providers are challenged with memory contention. Note the comments from one such provider:

“Our cloud deployment instances are also getting memory bound and losing deals due to lack of large memory instances.”

There is no question that organizations across the board are feeling the challenge of meeting the demands of customers and business stakeholders around satisfying the memory requirements of their applications and business-critical workloads.

The Challenges of Exponential Data Growth

A trend across the board in the enterprise is that data is growing exponentially. Businesses are collecting, harnessing, and using the power of data for many different use cases. Arguably, data is the most important asset of today’s businesses; as a result, it has been referred to as the business world’s new “gold” or new “currency.”

The reason for the data explosion is that data allows businesses to make better and more effective decisions. For example, pinpointed data helps companies see where they need to invest in their infrastructure, the demographics of their customers, trends in sales, and other essential statistics. The data explosion among businesses is a macro trend that shows no signs of changing.

Data doesn’t only help run the business. The data itself is a commodity that companies buy and sell, sometimes accounting for their main revenue stream. According to Gartner, by 2022, 35% of large organizations will be sellers or buyers of data via formal online marketplaces, up from 25% in 2020.

Storing the data is only part of the challenge for businesses; they have to make something useful from the data that is harvested. Another related trend is that modern organizations want to make use of the collected data faster, meaning data must be processed more quickly. A study by IDC predicts that nearly 30% of global data will be real-time by 2025, underscoring the need for faster processing: data not processed in time declines in value exponentially.

The challenges around data are driving various customer needs across the board. These include:

    • Infrastructure needs to scale to accommodate the explosive data growth – This includes scaling compute, memory, storage, and networking. All hardware areas are seeing the demands of data processing grow: as more data needs to be processed, stress is placed on compute, which is why GPUs are becoming more mainstream for data-processing offload. The network is now seeing 100 Gbit connections becoming mainstream, and all-NVMe storage is being more widely used to help meet the demands of expedient data processing.
    • For ultra-quick data processing, in-memory applications are needed
    • Memory is expensive – It is one of the most expensive components in your infrastructure. Customers are challenged to reduce costs and at the same time keep an acceptable level of performance.
    • Consistent Day-0 through Day-2 experience – Customers need an acceptable experience from an operations and monitoring perspective.

The digital transformation resulting from the global pandemic has been a catalyst for the tremendous growth of data seen in the enterprise. Since the beginning of 2020, businesses have had to digitalize everything, turning manual processes into fully digital ones to streamline business operations and allow them to be completed safely.

Application designs are changing as a result. Organizations are designing applications that must work with ever-increasing datasets across the board. Even though the datasets are growing, the expectation is that applications can process the data faster than ever.

This includes applications that rely on database backends such as SAP, SQL, and Oracle. In addition, artificial intelligence (AI) and machine learning (ML) are becoming more mainstream in the enterprise, and SLAs require exponentially larger data sets to be constantly available.

Virtual Desktop Infrastructure (VDI) continues to be a business-critical service in the enterprise, yet the cost per VDI instance remains a challenge. As organizations scale their VDI infrastructure, the demand for memory grows. As mentioned, memory is one of the most expensive components in a modern server; as a result, memory consumption is one of the primary cost components of VDI infrastructure.

In-memory computing (IMC) is a growing use case for memory consumption. Organizations are accelerating their adoption of memory-based applications such as SaaS and high-velocity time-series data. In addition, 5G and IoT Mobile Edge use cases require real-time data processing that depends on the speed of in-memory processing.

Due to the memory demands needed by modern applications and the price of standard DRAM, many organizations are turning to alternative technologies for memory utilization. NVMe is being considered and used in some environments for memory use cases. Although slower than standard DRAM, it can provide a value proposition and ROI for companies in many use cases.

Summary of Modern Memory Challenges

To summarize, organizations encounter a variety of challenges directly related to memory requirements and constraints:

    • Memory is expensive – The cost of memory is a significant part of the overall hardware investment in the data center
    • Deployments are memory-bound – Memory is becoming the resource that is most in-demand and in short supply relative to other system resources
    • Hardware incompatibility and heterogeneity – Up to this point, memory is tied to and limited by the physical server host. This constraint creates challenges for applications with memory resources beyond what a single physical server host can provide.
    • Performance SLA and monitoring – Businesses will continue to have performance demands while continuing to need more memory to keep up with the resource demands of applications and data processing
    • Availability and recovery – On top of the performance demands, businesses still need to ensure applications and data are available and can be quickly recovered
    • Operational complexity – To keep up with the demands of memory and other resources, applications are becoming more complex to work around the memory demands.

These challenges result in unsustainable costs to meet business needs, both from an infrastructure and application development perspective.

What is VMware Project Capitola?

With the growing demands on memory workloads in the enterprise, businesses need new ways to satisfy memory requirements for data processing and modern applications. VMware has redefined the data center in CPU, storage, and networking with products that most are familiar with and use today – vSphere, vSAN, and NSX. In addition, VMware is working on a solution that will help customers solve the modern challenges associated with memory consumption. At VMworld 2021, VMware unveiled a new software-defined memory solution called VMware Project Capitola.

What is VMware Project Capitola? VMware has very much embraced the software-defined approach to solving challenges associated with traditional hardware and legacy data center technologies, and VMware Project Capitola extends that approach to managing and aggregating memory resources. VMware describes Project Capitola’s mission as “flexible and resilient memory management built in the infrastructure layer at 30-50% better TCO and scale.”

VMware Project Capitola is a technology preview that has been described as the “vSAN of memory” as it performs very similar capabilities for memory management as VMware vSAN offers for storage. It will essentially allow customers to aggregate tiers of different memory types, including:

    • DRAM
    • PMEM
    • NVMe
    • Other future memory technologies

It enables customers to implement these technologies cost-effectively and to deliver memory intelligently and seamlessly to workloads and applications. Thus, VMware Project Capitola helps to meet operational challenges as well as those faced by application developers.

    • Enterprise operations – VMware Project Capitola allows seamlessly scaling tiers of memory based on demand and enables unifying heterogeneous memory types in a single platform for consumption
    • Application developers – Using VMware Project Capitola, application developers are provided the tools to consume the different memory technologies without using APIs

The memory tiers created by VMware Project Capitola are aggregated into logical memory. This allows memory to be consumed and managed across the platform as a capability of VMware vSphere. It intelligently increases the overall available memory by using specific tiers of memory for workloads and applications, and it prevents consuming all memory within a particular tier. Instead, tier usage becomes a business decision based on the SLAs and performance required of the applications.

VMware Project Capitola details currently known

VMware Project Capitola will be tightly integrated with current vSphere features and capabilities such as Distributed Resource Scheduler (DRS), which bolsters the new features provided with VMware Project Capitola with the standard availability and resource scheduling provided in vSphere.

VMware mentions VMware Project Capitola will be released in phases. It will be implemented at the ESXi host level, and then features will be extended to the vSphere cluster. VMware details that VMware Project Capitola will be implemented in a way that preserves current vSphere memory management workflows and capabilities. It will also be available in both vSphere on-premises and cloud solutions.

As expected, VMware is working with various partners, including memory and server vendors (Intel, Micron, Samsung, Dell, HPE, Lenovo, Cisco). In addition, they are working with service providers and various ISV partners in the ecosystem and internal VMware business divisions (Hazelcast, Gemfire, and Horizon VDI) to integrate the solution seamlessly with native VMware solutions. VMware is collaborating with Intel initially as a leading partner with technologies such as Intel® Optane™ PMem on Intel® Xeon™ platforms.

Value proposition

    1. Software-defined memory for all applications – Provides frictionless deployments without retooling applications and allows addressing memory-bound deployments with large memory footprints. It can also lead to faster recovery from failures.
    2. Operational simplicity – No changes in the way vSphere works. It provides the flexibility to tune performance and applications, and it reduces infrastructure customization for specific workloads.
    3. Technology agnostic – A pay-as-you-grow model that allows tuning performance as needed for specific applications, bringing pooled and disaggregated memory to your server fabric.

How does VMware Project Capitola work?

Phase 1 of VMware Project Capitola provides local tiering within a cluster. ESXi, installed on top of the physical server hardware, is where the memory tiers are created, and management of the tiering happens at the cluster level. When VMs are created in the environment, they have access to the various memory tiers.

Future capabilities of VMware Project Capitola will undoubtedly have the ability to control memory tiers based on policies, much like vSAN storage today. All current vSphere technologies, such as vMotioning a VM, will remain available with VMware Project Capitola. It will be able to maintain the tiering assignments for workloads as these move from host to host.

Overview of VMware Project Capitola architecture

In phase 2 releases of VMware Project Capitola, the tiering capabilities will be a cluster-wide feature. In other words, if a workload cannot get the tier of memory locally on the native ESXi host, it will get the memory from another node in the cluster or dedicated memory device.

VMware Project Capitola enables transparent tiering

The memory tiering enabled by VMware Project Capitola is called transparent tiering. The virtual machine simply sees the memory allocated to it in vSphere; it is oblivious to which physical tier on the ESXi host the memory actually comes from. VMware vSphere takes care of placing memory pages in the appropriate physical tier.

A simple two-tier memory layout may look like:

    • Tier 1 – DRAM
    • Tier 2 – Cheaper and larger memory (Optane, NVMe, etc.)

VMware Project Capitola enables transparent tiering

The ESXi host sees the sum of all the memory available to it across all memory tiers. At the host level, monitoring and tier sizing determine the tier allocation budget given to a particular VM. This budget is based on various metrics, including:

    • Memory activity
    • Memory size
    • Other factors

The underlying VMware Project Capitola mechanisms decide when and where active pages sit in faster tiers of memory or slower tiers of memory. Again, the virtual machine is unaware of where memory pages actually reside in physical memory. It simply sees the amount of memory it is allocated. This intelligent transparent tiering will allow businesses to solve performance and memory capacity challenges in ways not possible before.
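As a loose illustration of this idea, the sketch below ranks pages by recent activity and keeps the hottest ones in the DRAM tier. It is purely hypothetical Python — the function, its inputs, and the activity counts are invented for illustration and are in no way VMware's implementation:

```python
# Hypothetical sketch of transparent two-tier page placement.
# Hot pages stay in DRAM (tier 1); colder pages spill to the larger,
# slower tier 2. The VM itself never sees this distinction.
def place_pages(pages, dram_capacity):
    """pages: dict of page_id -> activity count. Returns (tier1, tier2) sets."""
    by_activity = sorted(pages, key=pages.get, reverse=True)
    tier1 = set(by_activity[:dram_capacity])  # most active pages fit in DRAM
    tier2 = set(by_activity[dram_capacity:])  # the rest go to slower memory
    return tier1, tier2

pages = {"a": 90, "b": 5, "c": 40, "d": 1}
hot, cold = place_pages(pages, dram_capacity=2)  # DRAM can hold 2 pages here
```

A real implementation would of course work on live access statistics and rebalance continuously; the point is only that the guest-visible memory is the union of both tiers.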

What Project Capitola Means for the Future of Memory Management

VMware Project Capitola is set to change how organizations can solve challenging problems in managing and allocating memory across the environment for business-critical workloads and applications. Today, organizations are bound by physical memory constraints related to physical hosts in the data center. VMware Project Capitola will allow customers to pool memory from multiple hosts in much the same way that vSAN allows pooling storage resources.

While it is currently only shown as a technology preview, VMware Project Capitola already looks extremely interesting and will provide powerful features enabling innovation and flexibility for in-memory and traditional applications across the board.

Learn more about VMware Project Capitola in the following resources:

The post VMware Project Capitola: The vSAN of Host Memory? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-project-capitola/feed/ 0
What is VSAN HCI mesh compute cluster in vSphere 7 Update 2? https://www.altaro.com/vmware/vsan-hci-mesh-compute-cluster/ https://www.altaro.com/vmware/vsan-hci-mesh-compute-cluster/#respond Fri, 18 Jun 2021 12:46:05 +0000 https://www.altaro.com/vmware/?p=22574 HCI Mesh allows vSAN clusters to remotely mount the datastore of another (remote) vSAN cluster, hence sharing the storage capacity. Read more about them in this article.

The post What is VSAN HCI mesh compute cluster in vSphere 7 Update 2? appeared first on Altaro DOJO | VMware.

]]>

Since the rise of Hyperconverged Infrastructure (HCI) and storage virtualization several years ago, VMware has been at the forefront of the movement with vSAN, alongside major players like Nutanix, SimpliVity (now part of HPE), and VxRail.

The main point of hyperconvergence is to consolidate multiple stacks that traditionally run on dedicated hardware into the same server chassis through software virtualization. NSX-T virtualizes the network, while vSAN takes care of the storage over high-speed networking, meaning you don't need third-party appliances such as storage arrays.

Each VSAN node includes local storage that makes up a virtualized shared datastore


If you aren’t too familiar with VSAN, make sure to check our blog How it Works: Understanding vSAN Architecture Components, which will give you a better understanding of what it is and how it can help your organization.

Context

VMware VSAN

VMware vSAN is enabled at the cluster level. Each node contains a cache and a capacity tier that make up the shared datastore. VMware offers vSAN Ready Nodes which are certified hardware configurations verified by VMware and the server vendor.

However, you may have already identified how it can become tricky to efficiently scale and maintain a homogeneous cluster. For instance:

    • If you run low on compute capacity, you will need to add nodes, which means you also add storage capacity that may go unused.
    • Conversely, if you run low on storage and can't add disks, you'll have to add nodes, which brings unneeded additional compute resources.
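To make the trade-off concrete, here is a small illustrative calculation (the per-node figures are invented, not sizing guidance) showing how satisfying a compute requirement in a homogeneous cluster can strand storage capacity:

```python
# Homogeneous HCI scaling: every node added for compute also adds storage.
def nodes_needed(required, per_node):
    return -(-required // per_node)  # ceiling division

cpu_nodes = nodes_needed(required=100, per_node=32)     # cores needed, per-node cores
storage_nodes = nodes_needed(required=20, per_node=10)  # TB needed, per-node TB
nodes = max(cpu_nodes, storage_nodes)                   # must satisfy both
stranded_tb = nodes * 10 - 20                           # storage bought but unused
```

With these numbers, compute drives the cluster to 4 nodes even though 2 would cover the storage need, leaving 20 TB of paid-for capacity idle.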

vSAN HCI mesh clusters

Back in vSAN 7 Update 1, VMware introduced a new feature called “HCI mesh clusters” to mitigate this issue by allowing organizations to make better use of their capacity.

In a nutshell, HCI Mesh allows vSAN clusters to remotely mount the datastore of another (remote) vSAN cluster, hence sharing the storage capacity and spanning its usage across a wider pool of compute resources.

HCI Mesh allows multiple VSAN clusters to share their datastores remotely


Why run multiple vSphere clusters

Now, you may wonder, “Why not run one single cluster and throw everything into it?”

Most virtual environments actually start off with a single cluster and grow it as demand increases. However, past a certain point, it is relevant to reflect on the design and consider splitting the capacity into multiple clusters.

    • Management plane: More and more resources get eaten up as you add core solutions such as NSX-T, vRealize Automation, Tanzu, etc. into your environment. It is good practice to isolate these components into a dedicated cluster that does not share resources with the workloads, as they are critical for smooth operations.
    • Workloads: All these mixed workloads provide a service to your internal users or to other services. North-south security is tightened up, and ideally, these workloads shouldn't expose services to the outside world.
    • DMZ: As the services you expose to the internet grow, so too will the workloads serving them. It then becomes relevant to expose the network traffic (VLAN, VXLAN, Overlay…) only to vSphere hosts in a dedicated cluster.
    • Tenants: If you have a big client renting resources in your environment, chances are they will not be happy getting a resource pool in a cluster shared with other tenants. In that case, dedicating a cluster to the client may become a clause in the contract.
    • Other use cases: There are plenty of other cases for dedicated clusters, such as VDI, large Big Data VMs, PKIs…

Adopting such a segregation of your workloads at the cluster level not only improves the organization of resources but also enhances security overall.

vSAN HCI Mesh Compute Clusters

In vSphere 7 Update 2, VMware listened to its customers and built on the HCI Mesh feature by extending it to regular (compute) clusters. The great thing about it is that vSAN HCI Mesh compute clusters can be enabled on any cluster, and no vSAN license is required!

Once the remote vSAN datastore is mounted, virtual machines can be migrated between clusters using regular vMotion.

Non-vSAN cluster can now mount remote vSAN datastores


Just like HCI Mesh clusters, HCI Mesh compute clusters use RDT as opposed to other protocols like iSCSI or NFS. RDT (Reliable Datagram Transport) works over TCP/IP and is optimized to send very large files. This is to ensure the best performance and rock-solid reliability.

Considerations

While you could already export a vSAN datastore using iSCSI or NFS, using the vSAN protocol offers value at several levels: you maintain SPBM management, lower overhead, end-to-end monitoring, a simpler implementation, and more.

vSAN HCI mesh cluster

vSAN HCI mesh cluster

Before starting with vSAN HCI mesh clusters, consider the following requirements and recommendations:

    • vSAN Enterprise license on the cluster hosting the remote datastore.
    • HA configured to use VMCP with “Datastore with APD”.
    • 10Gbps minimum for vSAN vmkernel.
    • Maximum of 5 clusters per datastore and 5 datastores per cluster.
    • No support for Stretched and 2-node clusters.
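Those two scale limits are easy to encode as a pre-flight check. The sketch below is illustrative only — the function and its arguments are invented, not a vSAN API:

```python
# Documented HCI Mesh limits: 5 client clusters per datastore,
# 5 remote datastores per client cluster.
MAX_CLIENTS_PER_DATASTORE = 5
MAX_DATASTORES_PER_CLUSTER = 5

def can_mount(datastore_client_count, cluster_datastore_count):
    """Counts are those *before* the new mount is added."""
    return (datastore_client_count < MAX_CLIENTS_PER_DATASTORE
            and cluster_datastore_count < MAX_DATASTORES_PER_CLUSTER)
```

A mount is only possible while both the datastore and the client cluster are below their respective limits.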

Cross-cluster networking recommendations

    • High speed, reliable connectivity (25Gbps recommended).
    • Sub-millisecond latency recommended. An alert is issued during setup if greater than 5ms.
    • Support for both L2 and L3 connectivity (gateway override needed on vSAN vmkernel for routing in case of layer 3).

Note that stretched clusters are not supported as of vSAN 7 Update 2, meaning it is not recommended at this time to mount a remote vSAN datastore over a high-speed WAN.

How to enable vSAN HCI Mesh compute cluster

I will show you here how to remotely mount a vSAN datastore using HCI Mesh Compute Cluster.

In this example, “LAB01-Cluster” has vSAN enabled with a vSAN datastore creatively renamed “LAB01-VSAN”, while the vSphere host in “LAB02-Cluster” has no local storage but does have a vSAN-enabled vmkernel adapter in the vSAN subnet for the sake of simplicity.

Again, you do not need a vSAN license in the client cluster.

Let’s dig in!

    1. First, navigate in the Configure pane of the cluster on which you want to mount a remote vSAN datastore. Then scroll down to vSAN services and click Configure vSAN.

Enable vSAN services in the configuration pane


    2. This will bring up the vSAN configuration wizard, where you will click on the new option vSAN HCI Mesh Compute Cluster, then Next.

vSAN HCI Mesh Compute Clusters is available as of vSphere 7.0 Update 2


    3. The next window simply finishes the wizard. As you can see, enabling it is as easy as it gets; it essentially only enables the vSAN services on the host.

Enabling vSAN HCI Mesh Compute Cluster is a simple 2-step process


    4. It is now time to mount the remote datastore. No configuration is required on the target cluster. Still in the vSAN services pane of the compute cluster, click on Mount Remote Datastores.

Once enabled, remote datastores can be mounted


    5. Then again, click on Mount Remote Datastore in the Remote Datastores pane.

The remote datastore pane will get you started


All compatible vSAN datastores should appear in the list. If they don’t, make sure the remote cluster is running a supported version of vSAN. As you can see below, the list contains “LAB01-Cluster”. Select it and click Next.

All compatible vSAN clusters are listed for selection


    6. The next page runs a series of checks to ensure the environment is suitable.

A number of requirements must be met to validate a remote mount


    7. After the process finishes, the datastore should appear in the Remote Datastores pane. As you can see, the client and server clusters appear in the list, which is useful to understand what's going on at a glance.

The client (remote) cluster displays the list of mounted datastores


Note that this is also displayed in the vSAN configuration pane of the server cluster (LAB01), except the datastore appears as “Local”.

The server (local) cluster offers the same information


    8. The datastore should also appear in the datastore list of the hosts in the client cluster (“LAB02-Cluster” in our case).

Mounted datastores should appear on the client hosts


    9. You can then try to migrate a virtual machine from the server cluster to the client cluster (remember the client cluster has no storage in our case).

If the VM was already stored on the vSAN datastore, you can execute a simple vMotion to the client cluster. Note that the mounted vSAN datastore appears as “Remote” in the datastore list when performing a storage vMotion.

Mounted datastores appear as remote when moving a VM


    10. Once the relocation is complete, you get a virtual machine with vSAN objects stored on a remote datastore.

vSAN network cluster


Policy-Based Management

Another nifty feature added alongside HCI Mesh compute clusters is storage rules. Just like RAID levels, these are specified at the VM storage policy level. They add storage services as an extra layer of restriction on compatible datastores when applying the policy to a VM.

Storage rules extend datastore compatibility checks to storage services


Unmount a remote vSAN datastore

If you need to unmount a remote datastore, you must ensure that it is not used by any VM on the client cluster, or the operation won't be possible. This also applies to the vSphere Cluster Services (vCLS) VMs.

The client cluster must have no resource on the mounted datastore to unmount it


Failure scenarios

While very handy and flexible, such a setup raises a number of questions regarding availability and tolerance to failures. vSAN-related failures such as failed disks and nodes incur the same consequences and actions as in a regular vSAN cluster. In this section, we will cover two cases that apply specifically to HCI Mesh compute clusters.

Failure of the link between client and server clusters

Now, in the case of an HCI Mesh architecture, what would happen if the link that connects the hosts in the client and server clusters were to fail?

Inter-cluster link failure will result in loss of access to storage


It is recommended to configure vSphere HA with “Datastore with APD” set to either conservative or aggressive, which will result in the following chronology of events:

    1. Failure of the link between client and server cluster.
    2. 60 seconds later: All paths down (APD) event declared “lost access to volume …”.
    3. 180 seconds later: APD response triggered – Power off.

After the APD timers are reached, the response is triggered by powering off the VM which appears “inaccessible” until the link is restored.
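Reading both delays as offsets from the moment the link fails, the chronology above can be sketched as follows (illustrative Python mirroring the timings in this article, not VMware code):

```python
# Timeline of an inter-cluster link failure with VMCP configured (seconds).
def apd_timeline(failure_at=0):
    return {
        "apd_declared": failure_at + 60,   # "lost access to volume ..." event
        "apd_response": failure_at + 180,  # VMCP response: VM powered off
    }

events = apd_timeline()
```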

APD event at the client cluster level: VM powered off


Failure of the vSAN link on a host in the client cluster

If a host in the client cluster loses its vSAN link, the behavior will also be similar to a traditional APD event (except here the delay is 180 seconds instead of 140).

Loss of access to vSAN by a host will result in loss of storage


    1. Failure of the vSAN link on the host.
    2. 60 seconds later: All paths down (APD) event declared “lost access to volume …”.
    3. 180 seconds later: APD response triggered – Restart VM on another host.

Once the APD response timer is reached, the virtual machine is restarted on a host that still has access to the datastore.

APD event at the client host level: VM restarted


Wrap-up

Ever since its original launch back in 2014, VMware vSAN has dramatically improved with each release cycle to become a major player in the hyperconvergence game. It offers a wide variety of certified architectures spanning multiple sites, ROBOs, and 2-node implementations (direct-connect or not). These make vSAN a highly versatile product fitting most environments.

vSAN HCI Mesh Compute Clusters is yet another option brought to you to leverage your existing vSAN environment and make better use of these datastores.

Deploying a vSAN cluster may not be such a huge investment compared to a traditional SAN infrastructure. However, scaling it up is not always financially straightforward, as the cost of additional vSAN nodes is greater than that of regular compute nodes. HCI Mesh Compute Clusters offer SMBs and smaller environments the flexibility to present an already existing vSAN datastore to up to 5 clusters!

The post What is VSAN HCI mesh compute cluster in vSphere 7 Update 2? appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vsan-hci-mesh-compute-cluster/feed/ 0
The Ten Commandments of Backup https://www.altaro.com/vmware/the-ten-commandments-of-backup/ https://www.altaro.com/vmware/the-ten-commandments-of-backup/#respond Wed, 09 Dec 2020 17:48:18 +0000 https://www.altaro.com/vmware/?p=20796 We run down the 10 most essential concerns for any backup strategy. How many are you taking into consideration?

The post The Ten Commandments of Backup appeared first on Altaro DOJO | VMware.

]]>

In honour of the initial publication of The Backup Bible, I’ve extracted the top 10 most important messages from the book and compiled them into a handy reference.

The Backup Bible is a free eBook I wrote for Altaro that covers everything you need to know about planning, deploying and maintaining a secure and reliable backup and disaster recovery strategy. Download the Backup Bible Complete Edition now!

Plan for the Worst-Case Scenario

We have lots of innovative ways to protect our data. Using HCI or high-end SANs, we can create insanely fault-tolerant storage systems. We can drag files into a special folder on our computer and it will automatically create a copy in the cloud. Many document-based applications have integrated auto-saves and disk-backed temporary file mechanisms. All of these are wonderful technologies, but they can generate a false sense of security.

One specific theme drives all of my writing on backup: you must have complete, safe, separate duplicates. Nothing else counts. Many people think, “What if my hard drive fails?” and plan for that. That’s really one of your least concerns. Better questions:

  • What if I make a mistake in my document and don’t figure it out for a few days?
  • What if the nice lady in the next cubicle tries to delete her network files, but accidentally deletes mine?
  • What if someone steals my stuff?
  • What if my system has been sick but not dead for a while, and all my “saved” data got corrupted?
  • What if I’m infected by ransomware?

Even the snazziest first-line defences cannot safeguard you from any of these things. Backups keep a historical record, so you can sift through your previous versions until you find one that didn’t have that mistake. They will also contain those things that should have never been removed. Backups can (and should) be taken offline where malicious villains can’t get to them.

Plan for the Worst Case-Scenario #Backup10Commandments #BackupBible – Tweet this

Use all Available Software Security and Encryption Options

Once upon a time, no one really thought about securing backups. The crooks realized that and started pilfering backup tapes. Worse, ransomware came along and figured out how to hijack backup programs to destroy that historical record as well.

Backup vendors now include security measures in their products. Put them to good use.

Use all Available Software Security and Encryption Options #Backup10Commandments #BackupBible – Tweet this

Understand the Overlap Between Active Data Systems and Backup Retention Policies

The longer you keep a backup, the taller the media stack gets. That means that you have to pay more for the products and the storage. You have to spend more time testing old media. You have to hold on to archaic tape drives and disk bus interfaces or periodically migrate a bunch of stale data. You might have ready access to a solution that can reduce all of that.

Your organization will establish various retention policies. In a nutshell, these define how long to keep data. For this discussion, let’s say that you have a mandate to retain a record of all financial transactions for a minimum of ten years. So, that means that you need to keep backup data until it’s ten years old, right? Not necessarily.

In many cases, the systems used to process data have their own storage mechanisms. If your accounting software retains information in its database and has an automatic process that keeps data for ten years and then purges it, then the backup that you captured last night has ten-year-old data in it.

Database and Backup Retention Comparison

Does that satisfy your retention policy? Perhaps, perhaps not. Your retention policy might specifically state that backups must be kept for ten years, which does not take the data into consideration. Maybe you can go to management and get the policy changed, but you might also find out that it is set by law or regulation. Even if you are not bound by such restrictions, you might still have good reason to continue keeping backups long-term. Since we're talking about a financial database, what if someone with tech skills and a bit too much access deletes records intentionally? Instead of needing to hide their malfeasance for ten years, they only need to wait out whatever shortened schedule you come up with. Maybe accounting isn't the best place to try out this space-saving approach.
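The distinction is worth encoding explicitly. This toy check (a hypothetical function with a deliberately simplified policy model, not taken from any backup product) shows why a policy about data age and a policy about backup age give different answers:

```python
# Does the combination of in-app retention and backup retention satisfy
# a 10-year policy? It depends on what the policy actually targets.
def policy_satisfied(policy_years, policy_targets,
                     app_retention_years, backup_retention_years):
    """policy_targets: 'data' (how old the recoverable data is) or
    'backups' (how long backup media themselves must be kept)."""
    if policy_targets == "data":
        # A fresh backup of a 10-year database already holds 10-year-old data.
        return app_retention_years + backup_retention_years >= policy_years
    return backup_retention_years >= policy_years

data_ok = policy_satisfied(10, "data", app_retention_years=10,
                           backup_retention_years=1)
backup_ok = policy_satisfied(10, "backups", app_retention_years=10,
                             backup_retention_years=1)
```

With one year of backup retention on a ten-year database, a data-age policy is satisfied while a backup-age policy is not.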

Understand the Overlap Between Active Data Systems and Backup Retention Policies #Backup10Commandments #BackupBible – Tweet this

High Availability is a Goal, Not a Technology

We talk a lot about our high availability tech and how this is HA and that is HA. Really, we need to remember that “high availability” is a metric. How about that old Linux box running that ancient inventory system that works perfectly well but no one can even find? If it didn’t reboot last year, then it had 100% uptime. That fits the definition of “highly available”.

You can use a lot of fault-tolerant and rapid recovery technologies to boost availability, but a well-implemented backup and disaster recovery plan also helps. All of the time that people spend scrounging for tapes and tape drive manuals counts against you. Set up a plan and stick to it, and you can keep your numbers reasonable even in adverse situations.
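Because availability is a number, it is worth remembering how cheaply it is computed — every hour spent scrounging for tapes shows up in it directly. A minimal illustrative sketch:

```python
# Availability as a measured outcome: fraction of the year a service was up.
HOURS_PER_YEAR = 365 * 24  # 8760

def availability(downtime_hours_per_year):
    return 1 - downtime_hours_per_year / HOURS_PER_YEAR

# 8.76 hours of downtime a year works out to "three nines" (99.9%).
three_nines = availability(8.76)
```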

High Availability is a Goal, Not a Technology #Backup10Commandments #BackupBible – Tweet this

Backup and Disaster Recovery Strategies are Not the Same Thing

If your disaster recovery plan is, “Take backups every night,” then you do not have a disaster recovery plan.

Backup is a copy of data and the relevant technologies to capture, store, and retrieve it. That’s just one piece of disaster recovery. If something bad happens, you will start with whatever is leftover and try to return to some kind of normal state. That means people, buildings, and equipment as much as it means important data.

The Backup Bible goes into much more detail about these topics.

Backup and Disaster Recovery Strategies are Not the Same Thing #Backup10Commandments #BackupBible – Tweet this

Backup Applies to Everyone in an Organization, so Include Everyone

The servers and backup systems live in the IT department (or the cloud), but every department and division in the organization has a stake in its contents and quality. Keep them invested and involved in the state of your backup and disaster recovery systems.

Backup Applies to Everyone in an Organization, so Include Everyone #Backup10Commandments #BackupBible – Tweet this

One Backup is Never Enough

I said in the first commandment that for a proper backup, you must have complete, safe, separate duplicates. A single duplicate is a bare minimum, but it’s not enough. Backup data gets corrupted or stolen just as readily as anything else. You need multiple copies to have any real protection.

Whether you take full backups every week or every month, take them frequently. Keep them for a long time.

One Backup is Never Enough #Backup10Commandments #BackupBible – Tweet this

One Size Does Not Fit All

It would be nice if we could just say, “Computer, back up all my stuff and keep it safe.” Maybe someday soon we’ll be able to do that for our personal devices. It’s probably going to be a bit longer before we can use that at the enterprise scale. In the interim, we must do the work of figuring out all the minutiae. Until we have access to a know-it-all-program and a bottomless storage bucket, we need to make decisions about:

  • Using different retention policies on different types of data
  • Using different storage media and locations
  • Overlapping different backup applications to get the most out of their strengths

As an example of the last one, I almost always configure Microsoft SQL to capture its own backups to a network location and then pull the .bak files with a fuller-featured program. Nobody really backs up and restores Microsoft SQL as well as Microsoft, but just about everyone has better overall backup features. I don't have to choose.

One Size Does Not Fit All #Backup10Commandments #BackupBible – Tweet this

Test It. Then Test again. And Again…

Your backup data is, at best, no better than it was the last time that you tested it. If you’ve never tested it, then it might just be a gob of disrupted magnetic soup. Make a habit of pulling out those old backups and trying to read from them. Your backup program probably has a way to make this less tedious. Set bi-annual or quarterly reminders to do this.

Test It. Then Test again. And Again… #Backup10Commandments #BackupBible – Tweet this

Backup and Disaster Recovery Planning is a Process, Not a One-Time Event

The most important and most often overlooked pitfall in backup and disaster recovery planning is employing a “set and forget” mentality. Did you set up a perfect backup and disaster recovery plan five years ago? Awesome! How much of what was true then is still true now? If it's less than 100%, your plan needs some updating. Make a scheduled recurring event to review and update the backup process. Remember the 6th commandment. Hint: If you feed them, they will come.

Backup and Disaster Recovery Planning is a Process, Not a One-Time Event #Backup10Commandments #BackupBible – Tweet this

Free eBook – The Backup Bible Complete Edition

I’d love to be able to tell you that creating a backup and disaster recovery strategy is simple, but I can’t. It takes time to figure out your unique backup requirements, business continuity needs, software considerations, operational restrictions, etc., and that’s just the start. I’ve been through the process many, many times, and as such, Altaro asked me to put together a comprehensive guide to help others create their own plan.


The Backup Bible Complete Edition features 200+ pages of actionable content divided into 3 core parts, including 11 customizable templates enabling you to create your own personalized backup strategy. It was a massive undertaking, but hopefully it will help a lot of people protect their data properly and ensure I hear fewer data-loss horror stories from the community!

Download your free copy

The post The Ten Commandments of Backup appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/the-ten-commandments-of-backup/feed/ 0
Esxtop: Uses and Performance Troubleshooting https://www.altaro.com/vmware/esxtop-uses-troubleshooting/ https://www.altaro.com/vmware/esxtop-uses-troubleshooting/#respond Thu, 18 Jun 2020 16:04:02 +0000 https://www.altaro.com/vmware/?p=20383 Learn about the practical uses of Esxtop, a native VMware command tool created for troubleshooting and resolving performance issues

The post Esxtop: Uses and Performance Troubleshooting appeared first on Altaro DOJO | VMware.

]]>

Although VMware hosts are usually highly reliable, things can and sometimes do go wrong, and it is in such cases that Esxtop shines. When this happens, it is important to have troubleshooting tools that can help you quickly resolve the issue. One especially helpful tool is VMware's Esxtop. This article runs down how using Esxtop to collect performance statistics can help you solve production issues.

Esxtop is a command-line tool that is natively included on your VMware hosts. Here we will demonstrate how to troubleshoot with Esxtop. To get started, connect an SSH session to the host server that you wish to examine. PuTTY works well for this purpose, but there are other tools available as well.

Once you have logged in, the first thing that you will need to do is to retrieve the VMIDs for your virtual machines. The Esxtop utility identifies VMs by their VMID, so creating a list of VMIDs ahead of time will help you to better understand the information that is provided to you. The easiest way to retrieve the VMIDs is to use this command:

vim-cmd vmsvc/getallvms

You can see what this looks like here:

VMIDs

The first column displays the VMIDs of the virtual machines on this host.
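If you want that mapping in a script, something like the sketch below could parse the command's output. This is a hypothetical helper — it assumes the column layout shown above (real output also includes datastore, file, guest OS, and version columns) and VM names without spaces:

```python
# Parse `vim-cmd vmsvc/getallvms` output into a {vmid: name} dict.
def parse_getallvms(output):
    vms = {}
    for line in output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if fields and fields[0].isdigit():    # data rows start with the Vmid
            vms[int(fields[0])] = fields[1]   # Vmid -> Name
    return vms

# Sample output shaped like the screenshot (values are made up).
sample = (
    "Vmid   Name     File                      Guest OS\n"
    "1      web-01   [ds1] web-01/web-01.vmx   centos7-64\n"
    "4      db-01    [ds1] db-01/db-01.vmx     centos7-64\n"
)
vmids = parse_getallvms(sample)
```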

Now, enter the esxtop command to access the Esxtop interface shown here:

Esxtop commands

This is the information that is displayed when you run the esxtop command.

As you can see in the screen above, the Esxtop tool provides a wealth of information about the host's workload. Although this information might initially seem somewhat convoluted, it can be used to help track down performance issues.

CPU Load

The very first line of Esxtop output shows CPU contention. It provides information about the host's CPU usage through multiple metrics. You will notice that this line concludes with a statement of CPU load averages (0.01, 0.05, and 0.15 in the example above). The first number displays the load average for the last five seconds; the remaining numbers display load averages over longer periods of time (one minute, five minutes, and fifteen minutes).

The load averages should ideally be around 1.00. Lower values mean that the CPUs are being underutilized, while higher values mean that the CPUs are being overutilized. If the load average reaches 2.00, it means that the CPUs are seriously overloaded and that you need to either upgrade your host’s hardware or move some VMs to another host.
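That rule of thumb can be summed up in a few lines of illustrative Python (thresholds taken from the guidance above; the helper itself is hypothetical):

```python
# Interpret an esxtop CPU load average per the rule of thumb above.
def classify_load(avg):
    if avg >= 2.0:
        return "seriously overloaded"  # upgrade the host or migrate VMs
    if avg > 1.0:
        return "overutilized"
    if avg < 1.0:
        return "underutilized"
    return "balanced"
```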

The VMware host displayed above is a lab machine with very low CPU usage, but if the load averages had been excessively high then the next logical question is which VMs are consuming the most CPU resources.

The easiest way to determine which VMs are currently using the most CPU resources is to look at the %Used column. This column reflects the percentage of the host's physical CPU resources that a virtual CPU is using. Looking at the %Ready column can also be telling. This column reflects the percentage of time during which a vCPU was ready to execute an instruction but had to wait for physical CPU resources to become available. Ideally, the %Ready value should never exceed 5%.
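As a quick illustration of the 5% rule of thumb, a helper like the one below (hypothetical, with made-up sample values) could flag offending VMs from collected %Ready samples:

```python
# Flag VMs whose CPU ready time exceeds the 5% rule of thumb.
def flag_cpu_ready(samples, threshold=5.0):
    """samples: dict of vm_name -> %RDY value taken from esxtop."""
    return sorted(vm for vm, rdy in samples.items() if rdy > threshold)

flagged = flag_cpu_ready({"web-01": 1.2, "db-01": 7.9, "app-01": 5.4})
```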

Switching Modes

The Esxtop tool is able to display resource usage data for more than just CPU resources. If you press the H key, you will be taken to a help menu that lists the various commands supported by the Esxtop tool. If you look at the bottom of the next screengrab, you can see a section labeled Switch Displays. The commands shown in this section can be used to look at other types of performance metrics. For example, pressing M displays memory data. Similarly, pressing N displays networking data.

Esxtop commands tool

The help menu lists the Esxtop tool's various modes.

Memory

Press m to access the VMware host memory status. This screen shows you the current virtual machine memory size (MEMSZ), as well as how much memory has been granted to each VM (GRANT). You can also see how much swap memory is currently being used (SWCUR), as shown below:

VMware host memory status

The Esxtop tool provides memory usage statistics.

The main things to check for on the memory display are memory depletion and excessive swapping. While some swapping can be expected on a heavily loaded host, excessive swapping indicates that the host doesn't have enough physical memory and leads to performance problems. In this situation, you should add more memory to the host or migrate some of the VMs to another host.
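A minimal sketch of that swap check, again with invented VM names and values standing in for esxtop's memory-screen columns:

```python
# Hypothetical per-VM memory stats mirroring the MEMSZ, GRANT and
# SWCUR columns on esxtop's memory screen (values in MB, made up).
vm_mem_stats = [
    {"name": "web01", "memsz": 4096, "grant": 4096, "swcur": 0},
    {"name": "db01",  "memsz": 8192, "grant": 6144, "swcur": 512},
]

def find_swapping_vms(stats):
    """Flag VMs with a nonzero current swap level (SWCUR); sustained
    swapping suggests the host is short on physical memory."""
    return [vm["name"] for vm in stats if vm["swcur"] > 0]

print(find_swapping_vms(vm_mem_stats))  # → ['db01']
```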

Disks

Storage performance issues have always been a tough area for vSphere admins, as they have a high impact on virtual machine performance and can be tricky to diagnose. High storage latency renders virtual machines sluggish and harms the performance of the applications running in them.

The issue can lie anywhere in the IO path from the virtual machine itself to the disks in the storage array, passing through the server HBA, SAN switch ports, switch load, storage array controllers, RAID type, and disk speed. Add to those design choices such as the sizing and number of LUNs, path selection policies, and the number of virtual machines per datastore, and you end up with an overwhelming number of possibilities when trying to find the root cause of a VM storage performance issue.

All these potential causes each have a specific metric, and a specific way to tell whether its value is too high or too low: you want high bandwidth and IOPS, for instance, but low latency. Of course, not all of them can be observed in vSphere, as some are only available inside the virtual machine itself or on the storage array through its own troubleshooting tools. However, you can already get a good amount of information solely from Esxtop, as we are about to see. We will look especially at latency metrics, as these make up the bulk of storage performance issues in virtualized environments (not only vSphere, mind you).

Note that there are three different disk views in Esxtop: d for disk adapters (storage controllers), u for disk devices (volumes), and v for per-VM disk activity.

Starting with v, you will get details about each VM's disks: the latency a VM observes on read (LAT/rd) and write (LAT/wr) operations, as well as bandwidth metrics (MBREAD/s and MBWRTN/s) and IOPS (CMDS/s, READS/s, and WRITES/s).

VM’s disks

Esxtop gives VM performance details with v

Then, pressing d will show information for each disk adapter or vmhba. This is particularly useful for identifying bottlenecks on your server and will make it quite obvious if something is wrong with a specific HBA. You get similar metrics as above, with more detailed latency data (xAVG/cmd), which we explain further in the next section.

vmhba

Esxtop gives disk adapter performance details with d

Finally, pressing u gets you valuable data for each volume or LUN presented to the host. This is probably my go-to display, as performance issues are usually observed at the volume or array level, making this the most granular place to identify them. Here you also get data about queue length (a deeper, more advanced level of troubleshooting), along with the same xAVG/cmd latency metrics.

pressing U gets you valuable data

Esxtop gives volume performance details with u

CMDS/s: The total number of commands per second, which includes IOPS as well as other SCSI commands (SCSI reservations, locks, vendor string requests, unit attention commands, etc.) being sent to or coming from the device or virtual machine being monitored.
DAVG/cmd: Average latency in milliseconds per command being sent to the volume (the device level).
KAVG/cmd: Average latency caused by the host's VMkernel.
GAVG/cmd: Latency as observed by the guest OS in the VM, calculated as GAVG = DAVG + KAVG.
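The relationship between these latency counters is a simple sum. As a quick sketch (the ~25 ms and ~2 ms thresholds in the comments are commonly cited rules of thumb, not values from this article or official VMware limits):

```python
def guest_latency_ms(davg_ms: float, kavg_ms: float) -> float:
    """GAVG/cmd is the latency seen by the guest OS:
    GAVG = DAVG (device) + KAVG (VMkernel)."""
    return davg_ms + kavg_ms

# Commonly cited rules of thumb (not hard limits): a DAVG consistently
# above ~25 ms points at the storage array or fabric, while a KAVG
# above ~2 ms points at the host itself (e.g. queuing in the VMkernel).
print(guest_latency_ms(12.0, 0.5))  # → 12.5
```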

Getting More Data

Finally, one of the most important things to know about Esxtop is that its displays are highly customizable. Press the f key and you will see a list of the columns Esxtop can display for the current mode. The next screengrab, for example, shows the columns available in memory mode. Columns marked with an asterisk are currently enabled, while the others are disabled. To toggle a column on or off, simply press the corresponding letter.

To display a column, simply press the corresponding letter

Esxtop allows you to toggle columns on and off.

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We work continuously to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Conclusion

In a world filled with dashboards and UIs, sometimes all you need is a command-line tool that instantly provides the details you need. Esxtop gives those managing VMware hosts access to valuable and often insightful information that can help drive critical decisions.

Duncan Epping wrote a blog post over a decade ago that is still relevant to this day, detailing the metrics in esxtop. The post has been updated over time to keep pace with software evolutions in vSphere.

The post Esxtop: Uses and Performance Troubleshooting appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/esxtop-uses-troubleshooting/feed/ 0