The Top 24 VMware Open Source Projects
Altaro DOJO | VMware, Fri, 22 Jul 2022
https://www.altaro.com/vmware/open-source-projects/
VMware is a major contributor to the community. We run through the most exciting VMware open source projects right now.

The post The Top 24 VMware Open Source Projects appeared first on Altaro DOJO | VMware.


While you may think that everything happens behind closed doors at a private company dealing with shareholders, turnover, profits and everything that comes with it, VMware open-source projects are very much a thing: the company is a major contributor to the community through multiple projects and organizations.

The term project may throw you off, but the VMware Open-Source program encompasses a number of products that are most likely already used in your most critical production systems. I’m thinking of Photon OS, which powers the vCenter Server Appliance (VCSA).

In this article, we will look at a number of VMware Open-Source projects and what they are all about. Some of them are well-known products while others are a bit more obscure and will likely mature towards GA releases.

What is Open-Source?

First things first, let’s quickly touch base on what Open-Source is about for those that aren’t too familiar with it.

Open-Source software is software whose code is public and available for everyone to view, modify and distribute. The point of Open-Source projects is to take a collaborative, community-based approach. Everyone can propose changes to the code, which are reviewed by peers who can reject, edit or approve them. Because these projects are developed and supported by the community, they are often cheaper and have a better shelf life than products developed internally by commercial companies (note that this is not a written rule).

This decentralization allows companies to build communities around a product they contribute to, improving the quality of the work and increasing the pace of development. It also comes with a mindset of community and helping each other.

When a company like VMware uses open-source projects in a commercial product, it is only fair that they contribute to the upstream project to give back to the community. Similarly, the company that opens its code to the public will appreciate contributions from others to improve the product.

What is an Upstream project?

If you start looking at VMware Open-Source projects, you may encounter the term “Upstream”. Upstream projects refer to the source repository or source project where the community contributes. Other projects based on the upstream project are then called downstream. The upstream project is like the trunk of the tree and downstream projects are the branches and leaves.

For instance, the Linux kernel, originally developed by Linus Torvalds, is the upstream project, and the hundreds of distributions such as Ubuntu, Debian and Fedora are downstream projects.

Most of these projects have community meetings that you can attend with a recurrence that varies according to the effort that goes into it as well as a Slack channel to communicate with the contributors. Below is an example with the Cluster API vSphere Provider.

Contributors to Open-Source projects usually sync up in meetings and in dedicated Slack channels

How Serious is VMware About Open Source?

Like many companies, VMware is involved in and maintains a lot of Open-Source projects. They are not all necessarily related to virtualization, but a sizeable share of the effort is focused on Cloud Native. A good chunk of the VMware workforce contributing full time to these projects are employees from companies that were acquired by VMware such as Pivotal, Avi Networks, Bitnami, Heptio and many others.

You can find news and announcements on the VMware Open-Source blog.

The VMware open-source projects are, of course, maintained in GitHub repositories where you can find them all. Note that some projects are worked on in an internal Jira system and then pushed to GitHub, but I digress.

Here are the main GitHub organizations where VMware open-source projects are maintained:

    • Main VMware projects on the VMware organization.
    • Cloud Native and Modern Apps initiatives with Tanzu Labs, which is the software consulting branch (formerly Pivotal Labs) and Tanzu where you will find the projects that most likely run out there in production environments.
    • There are a few other repositories that weren’t rebranded, such as Spring, RabbitMQ (yep, it also belongs to VMware) and SaltStack.

The point is, there are hundreds of Open-Source projects maintained here and there by thousands of contributors. Note that some projects are still around but deprecated, like vSphere Integrated Containers, which was superseded by the Tanzu portfolio.

For instance, the main VMware organization currently has 195 repositories:

VMware Open-Source maintains a large number of projects

Take Tanzu Community Edition for instance: it has 110 contributors as of March 2022. Proof of a project with a solid community and high-quality code.

Tanzu Community Edition

Tanzu Community Edition is one of the most famous VMware Open-Source projects at the moment

Open Source Projects by VMware

Enough ramblings for now, let’s get to the good stuff and see what cool VMware Open-Source projects we can find and what they do.

Note that I will link the GitHub repositories here. Not all projects have a marketing page, and they are not as current as the repository anyway. However, you will often find a website for the project in the repository. Open-Source projects are moving targets.

The following VMware open-source projects are organized in no particular order.

Tanzu Community Edition

Tanzu Community Edition repository

Released in 2021, TCE is a full-featured Kubernetes platform available for free and super quick to start with. You can deploy it to Docker, vSphere or a cloud provider. It uses Cluster API in the background to provision the infrastructure components and offers a kapp-controller to easily install packages such as Prometheus or Velero. This project has a strong community of contributors backing it and is often updated and improved.

It even includes a web interface to deploy Kubernetes clusters for those that want to get straight to the point.

Carvel

Carvel project

Carvel provides a set of reliable, single-purpose, composable tools that aid in modern application building, configuration, and deployment to Kubernetes. A number of tools are associated with the Carvel Project such as:

    • ytt – Template and overlay Kubernetes configurations (think Kustomize).
    • kapp – Manage multiple Kubernetes resources as one application.
    • kbld – Build or reference container images in Kubernetes configurations in an immutable way.
    • imgpkg – Bundle and relocate app configurations via Docker registries.
    • kapp-controller – Sort of an app marketplace.
    • vendir – Declaratively define files that should be in a directory.
    • secretgen-controller – CRDs specifying secrets that should be on a cluster.
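Of these, ytt is often the first one people meet. Its real engine works on YAML documents with Starlark-style annotations, but the core overlay idea can be sketched with plain Python dicts standing in for parsed YAML (a conceptual illustration, not ytt itself):

```python
# Conceptual sketch (not ytt itself): an overlay tool merges a patch document
# on top of a base configuration. Plain dicts stand in for parsed YAML here.

def overlay(base, patch):
    """Return base with patch merged on top, recursing into nested mappings."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)   # deep-merge mappings
        else:
            merged[key] = value                          # scalars/lists: replace
    return merged

base = {"spec": {"replicas": 1, "image": "nginx:1.21"}}
patch = {"spec": {"replicas": 3}}                        # production override
print(overlay(base, patch))  # {'spec': {'replicas': 3, 'image': 'nginx:1.21'}}
```

The point is that the base stays untouched and environment-specific differences live in small patch documents, which is exactly the workflow ytt encourages.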

Octant


I actually talked about Octant in a blog a while back because I really like this project. Octant was the brainchild of Heptio, a company acquired by VMware. It is a dashboard UI for Kubernetes that you typically run on the workstation where you would normally use kubectl, as it uses whatever kubeconfig file you feed it to connect to a target cluster. That way you can visualize the various resources in place and execute basic actions.

It offers a slick UI where you can find tons of information about your Kubernetes resources. Similarly to how you would add content packs to vRealize Log Insight, it can be extended with plugins to get info on the likes of Antrea, for instance.

Photon


If you use vCenter Server Appliance then you use Photon OS! It is a lightweight Linux container host optimized for cloud-native applications, vSphere and hyperscalers. It powers various VMware appliances such as vCenter Server Appliance, vRealize Automation, vRealize Orchestrator and so on.

Photon OS is a very mature project optimized for vSphere and ESXi. Its key benefits are support for containers with the Docker daemon, compatibility with the likes of Mesos or Kubernetes, easy lifecycle management, and hardened security out of the box.

NSX Container Plug-in

NSX Container Plugin (NCP)

Another one for the Kubernetes ecosystem: a plugin that provides integration between NSX-T and Kubernetes, as well as PaaS products such as OpenShift or Tanzu Application Service (TAS). NCP runs as a container on each node and communicates with NSX Manager and the K8s control plane. It monitors changes to Kubernetes resources and reconciles them by calling the NSX API.
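NCP's actual implementation is far more involved, but the watch-and-reconcile pattern it follows can be sketched in a few lines of Python. The event shapes and the "NSX client" below are stand-ins for illustration, not NCP's real API:

```python
# Minimal sketch of the watch/reconcile pattern NCP uses (not NCP's real code):
# watch Kubernetes resource events and translate each into an NSX API call.

def reconcile(events, nsx):
    """Translate Kubernetes namespace events into NSX logical-segment calls."""
    for event in events:
        kind, name = event["type"], event["namespace"]
        if kind == "ADDED":
            nsx.append(("create_segment", name))   # stand-in for an NSX API call
        elif kind == "DELETED":
            nsx.append(("delete_segment", name))
    return nsx

# Fake event stream standing in for a Kubernetes watch:
events = [{"type": "ADDED", "namespace": "team-a"},
          {"type": "ADDED", "namespace": "team-b"},
          {"type": "DELETED", "namespace": "team-a"}]
print(reconcile(events, []))
```

In the real product the same loop runs continuously, so every namespace, service or ingress change in Kubernetes ends up mirrored as NSX-T objects.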

NCP has many capabilities to integrate your Kubernetes environment into NSX-T such as implementing LoadBalancer service types and integrating layer 7 ingress with it, separating logical network for each Kubernetes namespace, allocating IP and MAC addresses and the list goes on. However, it is worth noting that not all industry experts vouch for NCP because of its complexity and the fact that NSX-T may not be the best-suited product for cloud-native workloads.

Harbor


Harbor is a cloud-native registry project that stores, signs, and scans content. Having a registry close to the environment speeds up image transfers. Often used alongside Kubernetes, it adds value by offering features such as security, identity and management. It also supports replication of images between registries, which mitigates the risk of a single container registry becoming a single point of failure.

Harbor is a very mature project currently in version 2.0 and is hosted by the Cloud Native Computing Foundation (CNCF).

Antrea


This project is also a really cool one. Antrea is a CNI (Container Network Interface) that is a little less known than the big names like Calico or Flannel. However, Antrea offers lots of interesting capabilities such as a LoadBalancer service type working on layer 2 and integrating with the latest versions of NSX-T to increase visibility in the environment. Just like any other CNI, it is straightforward to install in any Kubernetes cluster.

Herald

Herald Proximity

By now we’ve talked a fair bit about Kubernetes and Cloud-Native-related projects but I don’t want you to get bored so let’s switch a bit. The goal of the Herald Proximity project is to offer a range of APIs that will let software developers build applications that rely on regular distance proximity calculation and the exchange of data between devices (VMware’s words).

To simplify this pretty barbaric description, the use cases for this project include situational awareness apps, communication apps, healthcare applications for patient tracking or vitals monitoring, and safety apps to record an employee’s exposure to hazardous environments. A very topical project during a pandemic.

Pinniped


Back to Cloud Native with Pinniped, a project that provides identity services to Kubernetes. If you’ve worked with the container orchestration platform, you’ll know that identity management isn’t the most straightforward of things, all the while being a critical one that can make or break the environment’s overall security.

The principal purpose of Pinniped is to allow users to access Kubernetes clusters with a unified login experience. Following the same idea as identity sources in vCenter, Pinniped lets you plug in external identity providers into Kubernetes such as Active Directory, OpenLDAP and other OIDC providers.

Avi Kubernetes Operator (AKO)


We talked a little bit about LoadBalancer service types in the Antrea section, which offers a built-in LoadBalancer, but that’s probably not sustainable in an intense production scenario. Avi Kubernetes Operator is a Kubernetes operator that communicates with the Kubernetes API and the Avi Controller (now NSX Advanced Load Balancer). By doing so, creating LoadBalancer services in Kubernetes will integrate with NSX ALB and create service engines that you can then easily find in the user interface.

Salt Project


SaltStack, the company behind the Salt project, was acquired by VMware in 2020. Salt is intelligent, event-driven automation software used to deploy and configure complex IT systems. It is based on remotely executing commands and is used to manage large infrastructures with thousands of servers.

Container Service Extension


Container Service Extension (CSE) is a VMware Cloud Director extension that helps tenants create and work with Kubernetes clusters to achieve on-premises Kubernetes-as-a-Service (KaaS?). CSE works on a client (vcd-cli plugin) / server (VCD API extension) model. Users can then provision Kubernetes clusters in the Cloud Director interface like they would a virtual machine.

PowerCLI example scripts


This one may appear a bit odd in this list, but VMware distributes a selection of sample PowerCLI scripts for everyone to consume and learn from. We find big names from VMware in the contributors list, such as Alan Renouf and William Lam. You will find modules for various products as well as a number of random scripts.

vCheck

What started as a flexible reporting script written by Alan Renouf himself ended up becoming almost a full-fledged product with over 60 contributors bringing plugins to expand the capabilities of the script.

vCheck is a nifty PowerCLI reporting script that will create dashboards and issue alerts based on thresholds in your VMware environment. The great thing is that it was written to allow anyone to propose plugins for it.
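vCheck and its plugins are written in PowerShell, but the idea each plugin implements, comparing collected metrics against thresholds and reporting anything out of bounds, can be sketched in a few lines of Python. The metric names and limits here are illustrative, not vCheck's:

```python
# Sketch of the vCheck plugin idea (vCheck itself is PowerShell): compare
# collected metrics against thresholds and report whatever is out of bounds.
# Datastore names and the 80% limit below are made-up examples.

def check_thresholds(metrics, thresholds):
    """Return (name, value, limit) for every metric over its threshold."""
    return [(name, value, thresholds[name])
            for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

datastore_usage = {"DS01": 92, "DS02": 40, "DS03": 85}   # percent used
alerts = check_thresholds(datastore_usage, {"DS01": 80, "DS02": 80, "DS03": 80})
print(alerts)  # [('DS01', 92, 80), ('DS03', 85, 80)]
```

Each vCheck plugin is essentially one such check, and the framework assembles the results into an HTML report it can email on a schedule.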

Flowgate


Flowgate is a project that helps enterprises consolidate data from various sources, like facility and IT systems, to form a single view of their operations. For instance, it can help with workload placement since it fetches metadata and runtime metrics from facility systems (power supply, cooling capacity, temperature/humidity) and IT systems and correlates them in one pane of glass. Imagine moving your workloads off a server rack when the temperature rises drastically or power fluctuations happen; that would be great for big organizations!

vSphere Integrated Containers


At first, I had mixed feelings about whether I should cover this one since it is officially deprecated in favor of the Tanzu product line. However, I thought it made sense, if only to make that statement. vSphere Integrated Containers comprises a container runtime that allows you to deploy containers alongside virtual machines on vSphere, and a vSphere Client plugin that offers visibility and deployment capabilities. It also contains a container registry (Harbor) that stores and distributes container images, as well as a Container Management Portal (VMware Admiral) where you can manage the solution more in-depth.

With that said, while you may still see fairly recent commits to the GitHub repo, it has been said several times that the project is deprecated.

Weathervane


Weathervane is a stress-test tool that benchmarks the performance of your on-premises and cloud-based Kubernetes clusters. It achieves that by deploying applications on the cluster and then loading those applications. You can configure it to follow different profiles to fit your environment, such as a steady load, or varying the number of users to find the breaking point that violates quality-of-service (QoS) requirements.
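The "find the breaking point" mode boils down to a simple search: ramp the user count until the measured response time violates the QoS limit, and report the last passing load. Weathervane does this against real deployed applications; the sketch below only illustrates the search itself, with a made-up latency model:

```python
# Sketch of Weathervane's breaking-point search (not its implementation): ramp
# the user count until the response time violates QoS, report the last pass.

def find_breaking_point(latency_at, qos_ms, max_users=10_000, step=100):
    passing = 0
    for users in range(step, max_users + 1, step):
        if latency_at(users) > qos_ms:      # QoS violated at this load
            break
        passing = users                      # last load that met QoS
    return passing

# Toy latency model: 50 ms base, growing quadratically with load.
latency = lambda users: 50 + (users / 100) ** 2
print(find_breaking_point(latency, qos_ms=500))  # 2100 with this toy model
```

In practice the "latency" comes from driving real HTTP traffic at the applications Weathervane deploys, which is why the runs take a while.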

pyvmomi

You may have seen this term here and there in your VI admin career without paying too much attention to it if you never needed it. pyvmomi is a Python Software Development Kit (SDK) for the VMware vSphere API that allows you to manage vSphere and vCenter. While most IT admins coming from the Microsoft world are familiar with PowerShell, and PowerCLI by extension, those coming from the Linux world will have more experience with Python as a scripting language. pyvmomi will be a solid addition to their toolbelt should they need to interact with and automate tasks in the organization’s VMware environment.
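To give a flavour of what a pyvmomi script looks like: the sketch below shows a typical session (the connection part is commented out since it needs a live vCenter, and the host/user/password are placeholders), plus a small filtering helper that works on anything exposing the same attributes as pyvmomi VM objects:

```python
# Sketch of a pyvmomi session. The connection snippet is commented out because
# it requires a reachable vCenter; credentials below are placeholders.
#
#   from pyVim.connect import SmartConnect
#   from pyVmomi import vim
#   si = SmartConnect(host="vcenter.lab.local",
#                     user="administrator@vsphere.local", pwd="***")
#   content = si.RetrieveContent()
#   view = content.viewManager.CreateContainerView(
#       content.rootFolder, [vim.VirtualMachine], True)
#   vms = view.view

def powered_on(vms):
    """Filter objects exposing runtime.powerState, as pyvmomi VM objects do."""
    return [vm for vm in vms if vm.runtime.powerState == "poweredOn"]

# Stand-in objects for illustration (a real run would pass pyvmomi VMs):
from types import SimpleNamespace as NS
vms = [NS(name="web01", runtime=NS(powerState="poweredOn")),
       NS(name="db01", runtime=NS(powerState="poweredOff"))]
print([vm.name for vm in powered_on(vms)])  # ['web01']
```

From there, the same object model exposes pretty much everything PowerCLI does: power operations, snapshots, reconfiguration tasks and so on.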

Contour


In Kubernetes, Ingress resources are sets of rules that define how external traffic is routed to an application inside of a cluster. Contour is a lightweight and opinionated Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. It also includes an ingress API (HTTPProxy) implemented via a CRD (Custom Resource Definition). Contour helps run workloads at scale on Kubernetes in a smooth and efficient manner.
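To make the routing idea concrete, here is a conceptual sketch of what ingress rules express: map a request path to a backend service by longest matching prefix. This is an illustration of the concept, not Contour or Envoy code, and the rule/service names are made up:

```python
# Conceptual sketch of prefix-based ingress routing (not Contour/Envoy code):
# pick the backend service whose path prefix is the longest match.

def route(path, rules):
    """rules: {path_prefix: service}; returns the longest-prefix match."""
    best = None
    for prefix, service in rules.items():
        if path.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, service)
    return best[1] if best else None

rules = {"/": "default-svc", "/api": "api-svc", "/api/v2": "api-v2-svc"}
print(route("/api/v2/users", rules))  # api-v2-svc
print(route("/blog", rules))          # default-svc
```

Contour's HTTPProxy CRD adds much more on top (virtual hosts, TLS, header matching, weighted backends), but path-to-service mapping is the core of it.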

Velero


Like Contour, Velero (formerly known as Heptio Ark) ended up in VMware’s shopping basket after the acquisition of Heptio. This product is an important one since it addresses the problem of backing up and restoring Kubernetes cluster resources as well as persistent volumes. It can run on-premises or in public clouds. Velero allows you to take backups of cluster resources (etcd) and restore them in case of loss. Backups, scheduled backups and restores take the form of custom resource definitions (CRDs) in Kubernetes.

You can also migrate resources to other clusters or replicate a cluster to development or testing clusters, for instance. Quite a handy feature if you ask me.

Sonobuoy


Sonobuoy is a diagnostic and reporting tool aimed at simplifying your understanding of the current state of a Kubernetes cluster. It achieves that through CNCF conformance tests (e2e) running in an accessible and non-destructive manner. It will also simplify workload debugging and custom data collection.

VMware Event Broker Appliance (VEBA)


A project that was released as a Fling, with William Lam as one of the main contributors, and ended up almost becoming its own product. The VMware Event Broker Appliance (VEBA) is a VMware open-source project that enables customers to easily create event-driven automation using vCenter Server events. The idea behind the Fling is simple: bring modern technologies and innovations from the cloud-native world, like Kubernetes, to help cloud admins build event-driven automation based on vCenter Server events.

This tool should open new doors to VI admins who have been frustrated for many years by the limitations of the vCenter alerts capabilities. A blog on the topic will arrive soon on the VMware dojo.

Cluster API vSphere Provider

Cluster API Provider vSphere

Cluster API is an open-source project that offers an API to extend the capabilities of Kubernetes with CRDs and operators, letting you manage clusters in a declarative way, like you would pods, deployments, services and so on. It does so by plugging in “providers”, which are like plugins that know how to interact with infrastructure platforms such as AWS, Azure, OpenStack, vSphere and many others. CAPV is the vSphere provider for Cluster API. Once you initialize a management cluster with CAPV (regardless of whether it’s a kind cluster or an actual cluster), you can start deploying Kubernetes clusters to your vSphere environment by applying a YAML manifest describing them.

CAPV is maintained under a Kubernetes SIG (Special Interest Group). It isn’t clear whether the project is maintained by VMware themselves, but a number of VMware employees are among the main contributors. Regardless, this VMware open-source project is already used by Tanzu Kubernetes Grid.

Concourse


Concourse is a VMware open-source project that provides a CI/CD tool for Cloud Foundry. Concourse is based on the mechanisms of resources, tasks, and jobs to automatically update and patch software, as well as test code and commits before and after a deployment. The product is built around three principles: expressive, versatile and safe. The learning curve is apparently steeper compared to other CI/CD products, but it is supposed to be beneficial in the long run in terms of improving productivity and reducing stress levels. Who wouldn’t want that?

Open VM Tools


As you know, VMware Tools are essential to efficiently run virtual machines on vSphere for various reasons, including memory reclamation mechanisms and virtual hardware drivers, among other things. As with older versions of Linux distributions, the VMware Tools for Windows workloads are stored on vSphere by default and mounted as an ISO on the VM.

The primary purpose of open source VMware tools is to enable OS/virtual appliance vendors and communities to bundle VMware Tools into their product releases. That way, in later Linux distributions, the open-source VMware tools are already included and there is no need to install them manually.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly secure backups and replicate your virtual machines. We work hard to continually give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Wrap up

The importance and reach of the open-source community is ever-growing, and most big tech companies have teams that contribute, VMware included. We talked a lot about Kubernetes here as it is a very hot topic, but other areas of the IT landscape are also represented.

While we covered a fair number of VMware open-source projects, this is only part of what is currently being worked on by the various teams, as we can’t mention all of them here. Refer to the GitHub repositories mentioned previously to review them. Many of these projects come from companies acquired by VMware, and anyone can contribute, including you.

Drive Workforce Adoption with Workspace ONE Office 365
Altaro DOJO | VMware, Fri, 24 Jun 2022
https://www.altaro.com/vmware/workspace-one-office-365/
This overview goes over VMware's Enterprise Mobility Management (EMM) solution with Workspace ONE Office 365 Management.

The post Drive Workforce Adoption with Workspace ONE Office 365 appeared first on Altaro DOJO | VMware.


For several years now, the IT landscape has been increasingly favourable to remote working and flexibility in terms of devices, apps and location. In fact, VMware Workspace ONE has been spearheading the company’s user endpoint management strategy to achieve an ecosystem that is “any app, any device, anywhere”.

While this shift in ways of working was already well in motion, the pandemic and its multiple lockdowns significantly accelerated adoption by organizations, and VMware Workspace ONE was among the leading offerings on the market.

2021 Gartner Magic Quadrant for Unified Endpoint Management (UEM) Tools

A recurrent problem with the digital workspace is that users usually have multiple devices that are out of the organization’s control, opening the door to all sorts of security breaches. Add on top of that the need to manage all sorts of modern and legacy apps, and you have yourself a right mess to untangle. This is where VMware Workspace ONE adds value by consolidating management.

If you manage a Microsoft 365 environment, you’ll surely want to know what the best security configurations are to secure your data and deter attacks. Learn more about our upcoming webinar on June 28 & 30, 6 Must-Have Microsoft 365 Security Configurations Every Admin Needs to Know.

What is VMware Workspace ONE?

If the term AirWatch rings a bell, then you are in the right place. AirWatch was a company distributing its own Enterprise Mobility Management software (different from MDM) and was acquired by VMware in 2014. After a couple of renames to AirWatch by VMware and VMware AirWatch, the company settled on rebranding the product to VMware Workspace ONE in 2018.

A bit of confusion exists when it comes to what Workspace ONE is. Because it is in the realm of user and device management, most vSphere administrators don’t get exposed to it; it’s not something they deal with on a daily basis.

In the end, Workspace ONE is a powerful unified endpoint management system that aims at consolidating management across mobile devices, desktops, virtual desktops and apps, all while offering automation capabilities and zero-trust access control.

VMware Workspace ONE supports several deployment models but it must be connected to a directory infrastructure

To quote VMware themselves: “Workspace ONE is a digital platform that delivers and manages any app on any device by integrating access control, application management, and unified endpoint management.”

5 main components make up VMware Workspace ONE:

    • Workspace ONE Access: (Formerly vIDM) Provides SSO capabilities to SaaS and Horizon desktops and application access control based on specific policies.
    • Workspace ONE UEM: (Formerly AirWatch) The EMM software that powers the solution to deliver apps in a secure way to build mobile workspaces. It integrates with public app stores or Office 365.
    • Workspace ONE Intelligence: AWS Cloud service aimed at simplifying the user experience and providing insight into the entire environment.
    • VMware Unified Access Gateway: If you work with Horizon View, you may already be familiar with UAG, a virtual appliance that replaced the Windows-based Security servers to access internal resources from the outside in a secure manner without the need for VPN access.
    • Workspace ONE Intelligent Hub: End-user application to access the resources distributed through Workspace ONE.

VMware Workspace ONE Pricing

The licensing associated with Workspace ONE may be tricky to ascertain as there are several ways to tackle the pricing. You can pay a monthly subscription based on the number of users or devices. You can also purchase perpetual licenses. It gets a little more convoluted when looking at the various Editions available for you and their limitations which involve features sets, app storage space, support…

There are no less than 7 editions of Workspace ONE to choose from

The best way to ensure that you are choosing the best fit for your organization is to get the assistance of your VMware reseller. They will help you define what features you need and, in turn, find the best pricing model.

In the meantime, the easiest way to get a sense of the pricing is to look at the Standard, Advanced and Enterprise feature sets. You will find the full comparison in this exhaustive table.

Workspace ONE subscription prices as of January 2022

Workspace ONE Office 365 Management

As you may already know, Altaro is more involved than ever in Office 365, especially since the acquisition by Hornetsecurity, distributor of 365 Total Protection. Because of this, we thought Workspace ONE Office 365 Management would be a great product overview to cover for our audience.

Anyway, that was a rather lengthy run-up to this blog’s topic, but no less important in order to set some context for Workspace ONE Office 365 Management. Even more so given that when mentioning Office 365, many people usually think about the end-user apps such as Word and Excel, but there are also all these enterprise services like SharePoint and email.

The Workspace ONE app offers SSO capabilities and lets users install company apps through an administrator-managed store

Mobile Device Management

Mobile devices are supported by VMware Workspace ONE through the Workspace ONE Intelligent Hub, which provides an easy way for end-users to access apps and confidential company information. Integrated workflows let users install a device profile, which lets the organization manage the device and automatically install company apps to facilitate BYOD use cases.

With Workspace ONE Intelligent Hub, the self-service portal lets users install other apps, such as Office 365 apps. It has the added benefit of offering employees one-touch SSO (Single Sign-On) across all of their work apps and web apps. Work-protected apps co-exist alongside the employee’s personal apps; however, data cannot be transferred in between them (copy/paste is disabled, for instance).

Work-protected data is automatically encrypted and copy/paste capabilities are disabled to prevent confidential data leaks

Secured simplicity and flexibility

Nowadays, users want to access their Office apps from any device, whether it is a desktop, a mobile phone, OWA or an app, you name it. In a nutshell, simplicity and flexibility are the main keywords here. It is similar to how you would use the Horizon client or the web client to access resources served by a VMware Horizon infrastructure (apps or desktops).

Regardless, while you want to give your users the best experience, ensuring secure access remains paramount to any organization. On top of that, Data Loss Prevention (DLP) mechanisms prevent users from grabbing confidential company data (willingly or not) and exporting it through copy/paste or file transfers across private and professional environments.

Workspace ONE offers flexibility to clients while enforcing user entitlement and mode of access

Workspace ONE adds value to the user experience but also reduces administrator overhead. It allows you to automatically deploy Office 365 email and apps from your own custom app stores. At the same time, Workspace ONE offers SSO authentication and will transparently allow access to Office 365 only to licensed users and revoke access from those who are not authorized; think of simplified offboarding processes, for instance.

Modern authentication

While I’m not a security expert, if you followed the tech news over the last several years, you probably came across the term “passwordless” a number of times. As you know, human error, or should I say a lack of discipline, is often the reason for cybersecurity incidents. I mean, you will get the chills just looking at the most common passwords of 2021 (if yours is in the list, please change it now!).

Modern authentication architectures are based on certificates and private-key mechanisms. For instance, interacting with a Kubernetes cluster is typically done through a certificate embedded in a kubeconfig file.

Anyway, Workspace ONE Office 365 Management supports passwordless authentication with a certificate-based mechanism extending to Azure AD or other identity solutions. Restricted access through security policies to Office 365 apps and services can then be supplemented with checks based on compliance groups and device types (web, mobile, desktop…). This is what is referred to as adaptive access.

Workspace ONE Office 365 Management leverages secured access to cloud services

If you want to learn more about security in Microsoft 365, check out our dedicated FAQ on the topic.

Office 365 Graph API

Now you may be wondering: how does Workspace ONE interact so tightly with Office 365? Well, we are currently in the API decade, right? Enter Microsoft's Graph API. Graph API exposes a number of Microsoft 365 resources for Microsoft-based or third-party products to leverage and interact with.

Graph API opens the gates for Workspace ONE to interact with Microsoft 365

I will not attempt to go into the details of Graph API, as I am nowhere near qualified enough to try, but you can find an introduction to it here.
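To make the idea concrete, here is a minimal, hedged sketch of what a Graph API call looks like once you have an OAuth 2.0 access token (token acquisition via Azure AD is omitted, and the token below is a placeholder, not a real credential). The `/users` endpoint is a real Graph resource; everything else is illustrative.

```python
# Minimal sketch of building a Microsoft Graph GET request, assuming an
# OAuth 2.0 access token was already obtained (e.g. via Azure AD client
# credentials). The token value is a placeholder.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(resource: str, token: str) -> tuple:
    """Return the URL and headers for a Graph API GET request."""
    url = f"{GRAPH_BASE}/{resource.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {token}",  # bearer token from Azure AD
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_graph_request("/users", "PLACEHOLDER_TOKEN")
print(url)  # https://graph.microsoft.com/v1.0/users
```

From there, any HTTP client can send the request; Workspace ONE performs equivalent authenticated calls under the hood.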

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly back up and replicate your virtual machines. We work hard perpetually to give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Essential Webinar for all M365 Admins – Must-Have M365 Security Settings

If you run a Microsoft 365 environment, you will of course want to make sure you're optimizing your security. And while M365 has tons of built-in security options and settings for admins, it's easy to miss some that would provide a significant boost to your setup. In this upcoming free webinar on 28 and 30 June, IT Consultant Paul Schnackenburg and fellow Microsoft MVP Andy Syrewicze will demo critical security features, as well as some underrated ones, that hit hard and provide significant protection for your M365 tenant. Learn more and save your seat.

 

So, Should I be Using VMware Workspace ONE Then?

As you may know, VMware Workspace ONE is a bit of a special snowflake among VMware products. Even internally at VMware, a dedicated team manages the customers of the solution. Whether they go through with it or not, many organizations running a Horizon View infrastructure take an interest in Workspace ONE at some point in their IT lifecycle, as several concepts, such as distributing resources to end users, translate to it.

Evaluating VMware Workspace ONE isn't as straightforward as it is with other cloud products like vRealize Log Insight Cloud, as there are a lot of intricacies and requirements. The easiest way to get your hands on VMware Workspace ONE is to start with the online Hands-On Lab (HOL), which offers a complete environment to play with.

Regardless, we suggest you check out 365 Total Protection from Hornetsecurity, a protection product specifically developed for Microsoft 365 and seamlessly integrated to provide comprehensive protection for Microsoft cloud services.

The post Drive Workforce Adoption with Workspace ONE Office 365 appeared first on Altaro DOJO | VMware.

Top 10 Features in VMware vRealize Operations Manager https://www.altaro.com/vmware/top-10-vrops-features/ https://www.altaro.com/vmware/top-10-vrops-features/#comments Fri, 27 Aug 2021 10:54:15 +0000 https://www.altaro.com/vmware/?p=22732 Become less reactive and more proactive in your monitoring with our Top 10 features in VMware vRealize Operations Manager (vROPS)


It won't come as a surprise, but vRealize Operations Manager, also called vROPS, does exactly what it says on the tin: “manage operations”. Now, although you probably don't need to, let's ponder the term “operations” for a bit and what it means for the sake of this blog.

vRealize Operations Manager

Operations are often referred to as “RUN” while Transformation is called “BUILD”, two terms that pop up all over the place in the IT world. BUILD teams aim at driving innovation and implementation of new projects while the RUN department ensures that the existing environment runs smoothly according to agreed SLAs. As you probably figured, vROPS falls under the umbrella of the latter.

The boundary between BUILD and RUN doesn't always fall in the same place depending on the organization's setup (semantics also get in the way). For instance, some RUN teams will install and configure infrastructure components such as vRealize Operations, vCenter, or vSphere, while in another organization they may only deal with N2/N3 support and capacity planning.

However, it is still common for SMBs and some medium-sized businesses not to differentiate RUN and BUILD, in which case the IT department splits its time between project work and day-to-day operations. While organizations of all sizes leverage vRealize Operations Manager, those smaller organizations will benefit greatly from vROPS, as it takes some of the heavy lifting of infrastructure operations off their hands!

What is vROPS?

vRealize Operations Manager comes as a virtual appliance that is to be deployed in your management cluster if you have one. It can be installed in a number of ways, tailored to your environment’s size and complexity. The easiest scenario consists of embedding all the components in a single virtual appliance, while more complex architectures will require that you deploy the components independently running as separate VMs which opens the door to HA implementations and larger collection sets.

The vROPS components can be deployed in separate appliances to account for large environments and facilitate scalability

vRealize Operations Manager collects data from the environment and processes it to make recommendations, identify issues, trigger policy-based automation as well as a whole lot of analytical goodness to improve operations’ efficiency.

vROPS also offers a pluggable architecture to extend the monitoring to third-party products through what are called management packs. More on that later.

My Top 10 vROPS Features

My top 10 will probably differ from yours, as each environment has its quirks and specifics. So, let's say we will cover 10 features of vRealize Operations Manager that we deemed worthy of making this list. There is obviously a plethora of other high-value features in vROPS that we haven't mentioned here; you can find them all in the official VMware documentation.

Feel free to leave a comment with the features that are most interesting to your organization!

Policy Creation and Management

Policies are applied to objects or groups and let you configure which metrics and properties are gathered, which alerts and symptoms are enabled, capacity and compliance settings as well as workload automation.

A default policy is created when you connect an endpoint, from which you can create inherited policies. You then get to tailor each policy to the population of objects it will be applied to.

vRops Policy Creation and Management

For instance, you may want to apply rather aggressive thresholds to your dev and test workloads, as you don't really care if it gets toasty there. However, production VMs will get more conservative settings to ensure the best possible SLA.

You may also want to ensure the environment associated with a specific customer is compliant with whatever industry-standard they must comply with by contract such as ISO, PCI, HIPAA…

You can also use policies to control what data vROPS collects and reports on for specific objects, to avoid wasting storage, bandwidth, and compute on useless data.

Create inherited policies that you can modify and then apply to specific groups of objects

Note that a fair number of default policies are already baked into vRealize Operations Manager when deploying the appliance. Those policies were designed by VMware to fit most environments and offer a good level of visibility to get started without a great level of knowledge of the product.

By creating inherited policies, you can change the state of inherited symptoms and alerts or even disable them on a subset of objects.
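vROPS policies are configured in the UI or API, but the inheritance behaviour described above can be modelled simply: a child policy is its parent's settings with a handful of explicit overrides. The setting names below are invented for illustration.

```python
# Illustrative model of vROPS policy inheritance: a child policy inherits every
# setting from its parent and only overrides what is explicitly changed.
# Setting names here are made up, not actual vROPS policy fields.

def effective_policy(base: dict, overrides: dict) -> dict:
    """Child policy = parent settings with explicit overrides applied."""
    merged = dict(base)      # start from the inherited (parent) settings
    merged.update(overrides) # apply only the child's explicit changes
    return merged

default_policy = {"cpu_alert_threshold": 90, "capacity_buffer_pct": 10,
                  "snapshot_age_alert": True}
prod_policy = effective_policy(default_policy,
                               {"cpu_alert_threshold": 75,
                                "capacity_buffer_pct": 20})

print(prod_policy["cpu_alert_threshold"])  # 75 (overridden)
print(prod_policy["snapshot_age_alert"])   # True (inherited)
```

The same pattern explains why disabling a symptom in an inherited policy affects only the subset of objects that policy targets.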

vROPS Workload Optimization

If you work with VMware products, chances are you rely heavily on vSphere DRS and Storage DRS (Distributed Resource Scheduler) to make sure the demand of your virtual machines is met. While it may appear so, DRS is not a load-balancing feature. Its goal isn't to have all hosts at the same resource utilization level; its objective is to ensure that the virtual machines have enough resources to run. For instance, if one host is running at 50% with 30 VMs while others are cruising at 5%, DRS won't make a move if the VMs are fine.

You can get closer to achieving actual load balancing with vRealize Operations Manager thanks to a feature called Workload Optimization. It works in concert with vSphere DRS to optimize VM placement in your environment according to a threshold.

The management pane shows the current optimization status, the operation, and business intents

As with DRS, you get to set a threshold that will either balance the workloads across all hosts or consolidate them on as few as possible to reduce licensing or electricity bills, for instance. Where it gets interesting is that you can set a cluster headroom value to implement a resource buffer and account for demand spikes.
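The effect of a headroom buffer is simple arithmetic: reserving a percentage of the cluster's capacity shrinks what Workload Optimization considers usable for placement. The figures and function below are illustrative, not a vROPS formula.

```python
# Hedged sketch of how a cluster headroom value reduces usable capacity.
# Numbers are invented; vROPS applies this idea per resource (CPU, RAM, etc.).

def usable_capacity(total_ghz: float, headroom_pct: float) -> float:
    """Capacity available for placement after reserving a headroom buffer."""
    return total_ghz * (1 - headroom_pct / 100)

total = 100.0  # cluster CPU capacity in GHz
print(usable_capacity(total, 20))  # 80.0 -> 20% kept free for demand spikes
```

A 20% headroom on a 100 GHz cluster leaves 80 GHz for placements, so spikes land in the reserved buffer instead of causing contention.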

Just like with DRS, a cursor lets you select an optimization profile

On top of that, Workload Optimization works with tags to let you enforce VM placement on hosts or clusters with the “Business Intent” pane. For instance, all VMs with the tag “MSFT” are placed on the cluster assigned the same “MSFT” tag. This will come in handy for various purposes such as licensing, geographical locations, hardware types… Consequently, it does mean that vRealize Operations will automatically create and manage DRS rules; as a result, all conflicting user-created DRS rules will be disabled.

VM placement is achieved by assigning the same tag to VMs and hosts. Categories can be customized as well

Note that you can obviously choose to run it manually with the “Optimize Now” button or automatically either following a schedule or in real-time when an alert pops up. You can go even further and tie it with predictive DRS to get a tight resource management automation system.

Note that all the clusters in the datacenter must be configured with DRS in fully automated mode.
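The tag-matching rule above can be modelled in a few lines: a VM is eligible only for clusters carrying at least one of its tags. Tags and cluster names below are made up for the example.

```python
# Illustrative model of tag-based "Business Intent" placement: a VM may only
# land on clusters that share one of its tags. All names/tags are invented.

def eligible_clusters(vm_tags: set, clusters: dict) -> list:
    """Return the clusters sharing at least one tag with the VM."""
    return [name for name, tags in clusters.items() if vm_tags & tags]

clusters = {
    "cluster-a": {"MSFT"},
    "cluster-b": {"ORACLE"},
    "cluster-c": {"MSFT", "DMZ"},
}

print(eligible_clusters({"MSFT"}, clusters))    # ['cluster-a', 'cluster-c']
print(eligible_clusters({"ORACLE"}, clusters))  # ['cluster-b']
```

In vROPS the equivalent constraint is then enforced automatically through generated DRS rules.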

Management packs

You can extend vROPS' monitoring capabilities to other VMware products or third-party products thanks to “management packs”. Those are like plug-ins you install in vRealize Operations Manager that open an interface to new endpoints. A number of packs are already installed in vROPS; some of them, such as standards compliance, ping, and service monitoring, are deactivated by default.

There is also a plethora of management packs available for download in the VMware Marketplace, which offers plugins for other products such as vRealize Log Insight, vRealize Automation… They are distributed either by VMware or by the vendors of the products themselves.

Filter out the display to get vRealize Operations packs only in the left pane

Note that Management Packs can be free or subject to licensing by the vendor. Refer to their website for additional information.

Management Packs will let you extend the capabilities of vROPS outside of the virtual environment such as PostgreSQL, SAP, Exchange, physical servers, storage arrays, you name it…

Example of a Pure Storage FlashArray dashboard included in the management pack

Once you’ve downloaded a management pack you get a *.pak file that you need to upload to the vROPS appliance in Administration > Solutions > Repository.

Depending on how the plugin is made, new resources such as dashboards, views, symptoms, and alerts will be made available to manage the environment. In the example below, you can see all the new dashboards brought by a Dell EMC management pack I installed. I don't have a Dell EMC system at home to show you what it looks like, but you get the gist.

Management Packs usually bring valuable dashboards, reports, alerts, and symptom definitions

Cloud providers integration

Most companies nowadays have integrated cloud services in their infrastructure to some degree. Whether you leverage SaaS workloads or pay for IaaS capacity such as VMware Cloud on AWS, chances are you will want to monitor whatever you are running in there.

vRealize Operations offers management packs for the biggest cloud providers:

    • Google Cloud Platform (GCP)
    • Microsoft Azure
    • Amazon Web Services (AWS)
    • VMware Cloud on AWS (VMC on AWS)

vROPS can collect metrics from the main Cloud providers and display the information in dashboards included in the associated management packs

These are incredibly easy to set up. For instance, in order to monitor AWS services, simply go to IAM in the management console and create a user with programmatic access, which will provide you with an access key ID and secret access key pair. You will then use this pair to connect your AWS account in vROPS. You should start seeing data coming in after a few minutes of collection.

AWS dashboards provide a holistic view of your instances with object relationships

The screenshot above depicts a t2.micro EC2 instance (free tier) that I run in AWS. As you can see, similarly to your on-premises components, you get the relationships between the objects (subnet, NIC, EBS volume…) as well as usage metrics such as CPU, RAM, disk, and network.

Rightsizing recommendations

IT pros who aren't well versed in virtualization usually benefit a great deal from running vROPS in their environment for a few weeks and analyzing the results with a consultant. They are often surprised by the outcome, as it may sometimes seem counter-intuitive. A common recommendation made by vROPS is to downsize virtual machines, not only to save capacity but also to improve overall performance. However, you also get valuable recommendations on how to efficiently scale up your workloads.

Undersized VMs

While you may figure out by yourself that a bunch of VMs are running hot and struggling to keep up with demand, it will not always be that obvious whether you should actually add resources, based on trends, spikes, maintenance windows, etc., and how much.

vROPS will help you with that, as it will tell you which VMs would benefit from an increase and, more importantly, how much to add. There is no point in throwing 20GB of RAM at a VM if it's not likely to use more than 10GB.

Oversized VMs

Virtual machine sizing is often done on a generic basis from a template, and VMs are scaled up when demand increases. However, more often than not, people will bump a VM from 2 vCPUs to 8 “because it'll run better” when it would only need 4.

Put it this way: how tricky would it be to get 8 seats next to each other on a Saturday night at the movies when there are plenty of empty groups of 2 or 4 seats? The problem is the same with oversizing VMs' CPUs. It can actually harm performance, as the host's VMkernel will have a hard time scheduling the VM on the physical cores of the CPU(s), while smaller VMs will easily get a free spot. If you want to learn more about this phenomenon, check out the co-stop CPU metric in esxtop.

Downsizing virtual machines will usually improve overall performance

vRealize Operations Manager will help you a great deal with sizing your VMs, as it makes recommendations that you can choose to follow or disregard, and tells you how many resources you can save. In the example above, these recommendations would reclaim 104GB of allocated RAM and 22 provisioned vCPUs.
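The reclaim figures are simply the sum, across oversized VMs, of allocated minus recommended resources. The sketch below illustrates that arithmetic with invented VM names and sizes; it is not vROPS's actual algorithm.

```python
# Back-of-the-envelope sketch of the reclaim totals vROPS reports:
# sum of (allocated - recommended) per VM. All VM data here is invented.

def reclaimable(vms: list) -> tuple:
    """Total reclaimable vCPUs and GB of RAM across oversized VMs."""
    cpu = sum(v["vcpu"] - v["rec_vcpu"] for v in vms)
    ram = sum(v["ram_gb"] - v["rec_ram_gb"] for v in vms)
    return cpu, ram

vms = [
    {"name": "app01", "vcpu": 8,  "rec_vcpu": 4,  "ram_gb": 32, "rec_ram_gb": 16},
    {"name": "db01",  "vcpu": 16, "rec_vcpu": 10, "ram_gb": 64, "rec_ram_gb": 48},
]

print(reclaimable(vms))  # (10, 32) -> 10 vCPUs and 32 GB of RAM reclaimable
```

vROPS derives its recommended sizes from observed demand over time, which is the hard part the tool does for you.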

Now, don't power through and apply everything blindly. Environment-specific reasons may dictate otherwise. For instance, this is a lab I run at home and I know I underutilize pretty much everything; however, I want to keep it as is to meet hardware requirements.

Resize Actions

On top of making recommendations, vROPS also offers the possibility to resize the virtual machines for you. It can be triggered instantly or scheduled to run at a later time. It will initiate a guest OS shutdown using the VMware Tools, reconfigure the VM’s hardware, and power it back on.

 


vROPS can initiate the resize operations on virtual machines instantly or on a schedule

Automation and Actions

Indeed, vRealize Operations is primarily a monitoring and capacity tool; however, you can automate tasks from within vROPS so you don't have to switch between management consoles.

Actions

You can execute actions on most objects in the inventory. The available set will obviously change according to the object type. Below are actions for a cluster and a virtual machine.

Actions can be initiated on an object from within vROPS

Execute a script

Note the “Execute Script” choice in the section above. This will execute a script inside the guest OS, like you would do with PowerCLI. If you click Execute Script, you have to type valid OS credentials, and you will then get the choice to type commands manually or upload a script to run.

You can run commands or scripts on virtual machine objects

Automation central

This feature, available in the “Home” pane, lets you automate tasks on a schedule and display them in an easy-to-use calendar. It works by selecting an action, a scope, and a schedule. A limited set of actions is available for now, but it covers the most common operational tasks.

Schedule your common operational tasks in an easy-to-use calendar

Remediation

On top of what we've seen so far, you can also run actions based on triggered alerts. A fair number of actions are built into vROPS. Note that if you want to build on this feature to achieve a greater level of automation, you can leverage vRealize Orchestrator to create custom recommendations thanks to the vRealize Orchestrator Management Pack. You will need to download it from the VMware Marketplace, upload it to the appliance, and configure an account to connect to it.

Once this is done you can configure vRealize Orchestrator workflows as remediation to a vROPS alert. This can be valuable if your workflows are tightly integrated with your IT organization such as a ticketing system.

Compliance enforcement

Ensuring environments are compliant with a given policy is a critical part of an IT department's job. There are various industry standards, and making sure all the requirements are applied is far from straightforward and can be time-consuming.

Implementing the recommendations may actually be the easy part here, what makes it tricky is to ensure that it stays that way. Environments and configurations tend to drift from their original baseline as time goes by and operations get in the way.

vROPS offers incredible value in compliance enforcement through the use of dashboards, views, symptoms, and alerts that you get from industry-standard compliance management packs (mostly U.S. ones) that aren't embedded or activated out of the box.

The major industry standards are covered by management packs provided by VMware.

Here are the industry standards that have management packs in vRealize Operations:

    • PCI: The Payment Card Industry security standards hardening guide addresses the growing threat to consumer payment information. PCI is important to companies that accept, process, or receive payments, to prevent, detect, and respond to cyber-attacks that can lead to breaches.
    • DISA: The Defense Information Systems Agency is a part of the Department of Defense (DoD), and is a combat support agency. Failure to stay compliant with guidelines issued by DISA can result in an organization being denied access to DoD networks.
    • FISMA: The Federal Information Security Management Act is United States legislation that defines a comprehensive framework to protect government information, operations, and assets against natural or man-made threats.
    • ISO 27001: ISO/IEC 27001 is the best-known standard in the ISO/IEC 27000 family of standards providing requirements for an information security management system (ISMS).
    • HIPAA: (Health Insurance Portability and Accountability Act of 1996) provides data privacy and security provisions for safeguarding medical information.
    • CIS: CIS Controls and CIS Benchmarks provide global standards for internet security and are a recognized global standard and best practices for securing IT systems and data against attacks.
    • vSphere Hardening Guide: Now called Security Configuration Guide, it provides prescriptive guidance for customers on how to deploy and operate VMware products in a secure manner.

Once you activate one of these management packs, you can enable it in the compliance view and get the state of your environment. As you can tell, my lab does not comply with ISO 27001 recommendations.

You can enable several compliance reports to check your infrastructure against

You then get the list of alerts about objects that aren’t compliant in the bottom right pane so you can start working on the remediation.

Alerts management

Labeling this as a feature might be subject to interpretation, but I find the alert management system useful enough to include it here.

We already talked a little bit about this in another article; however, I still wanted to touch on this topic, as it remains the bread and butter of vROPS and ties directly into the discussions around monitoring and visibility of the environment.

What happens in too many cases is that the monitoring throws so many false positives that admins end up tuning them out, rendering the whole thing useless as the approach becomes reactive rather than proactive.

Alerts provide all the relevant information and recommendations in an easy-to-read display

vRealize Operations brings a lot of value in the sense that the pre-defined symptoms and alert definitions are relevant, and they come with a level of importance and recommendations on how to fix the issue.

Take this alert, for instance. Its name makes it obvious which rule was violated. You can easily find exactly which symptoms were triggered and get recommendations on how to fix them. In the case of this alert, you can trigger the deletion of the snapshots directly from this page. You also get some background info on the “why” in the second recommendations pane.

Shortened screenshot of the ‘Potential Evidence’ tab of an alert

When troubleshooting any issue in any environment, the first question I ask myself is “what was changed at that time?”, which often helps identify the culprit. For that reason, the “Potential Evidence” tab is a personal favorite of mine, as it will tell you what happened around the same time that could be related to the issue at hand, with a blend of events, property changes, and anomalous metrics. Pretty sweet if you ask me.

There are also a few things you can do from within vROPS to narrow down the search like displaying the list of processes after typing in your credentials. This will come in handy to quickly troubleshoot a heavy hitter for instance.

The Get Top Processes tool is useful for quick troubleshooting on VMs with an abnormal demand

To tie up this section, I will quickly finish with notifications. On top of the more common emails and SNMP traps, you can push alert notifications to various destination types through plugins such as Slack, ServiceNow, or a webhook if you integrate with a third-party app.
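For the webhook case, the receiver simply gets a JSON body describing the alert. The sketch below builds such a payload; the field names are invented, not the actual vROPS webhook schema, so adapt them to whatever your receiver expects.

```python
# Hypothetical sketch of a JSON body a webhook receiver might get for an
# alert notification. Field names are invented, not the vROPS schema.
import json

def alert_payload(alert_name: str, severity: str, resource: str) -> str:
    """Serialize a minimal alert notification as JSON."""
    return json.dumps({
        "alert": alert_name,
        "severity": severity,
        "resource": resource,
        "source": "vROPS",
    })

body = alert_payload("VM has CPU contention", "critical", "vm-app01")
print(body)
```

An HTTP POST of that body to the third-party endpoint is all a webhook integration amounts to at its core.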

The plugins in the following screenshot are included with vROPS but others may be added when you add a management pack as a solution.

Additional outbound plugins can be added with management packs

Trending and Capacity planning

One of the pain points of any infrastructure is accounting for future growth, also called capacity planning. Although it's also true for cloud workloads to some degree, it mostly applies to on-premises SDDCs, as you can't scale capacity as flexibly as in the cloud.

vRealize Operations will help you in that respect by analyzing trends of resource consumption in your environment and making predictions as to where it is going. It will obviously need at least several months' worth of data to produce somewhat reliable recommendations. Refer to the documentation for more details on the analytics.

The engine uses a combination of usable capacity and demand to calculate the time and capacity remaining, deriving recommendations from these

Although obvious, I will also point out that this depends on the business. If your organization signed a big customer that will require far more resources by the end of the year than you currently have at hand, good for you; however, this is purely business-related and there is no way to predict it with monitoring.

Capacity planning will be most accurate in environments where overall resource usage is somewhat steady (spikes excluded). If your resource consumption is completely random and goes up and down all over the place, vROPS won't do a good job of planning future growth. Such patterns may make it worth looking into moving workloads to the cloud if possible, as it may save you some cash.
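The core idea behind trend-based "time remaining" can be illustrated with a straight-line fit: estimate the daily growth rate from historical samples and extrapolate to the capacity ceiling. vROPS's real analytics are far more sophisticated; this only demonstrates the principle.

```python
# Simplified illustration of trend-based capacity planning: fit a least-squares
# line to daily usage samples and extrapolate to the capacity ceiling.

def days_until_full(usage: list, capacity: float) -> float:
    """Least-squares slope over the samples, then days until the ceiling."""
    n = len(usage)
    mean_x, mean_y = (n - 1) / 2, sum(usage) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(usage)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return float("inf")  # flat or shrinking usage never hits the ceiling
    return (capacity - usage[-1]) / slope

# Usage growing ~2 GB/day, 100 GB ceiling, currently at 58 GB:
print(days_until_full([50, 52, 54, 56, 58], 100))  # 21.0
```

With steady growth the estimate is meaningful; with erratic usage the slope (and thus the prediction) becomes unreliable, which is exactly the caveat above.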

On the Homepage, you get an overview of the capacity in each data center that will display the time remaining until a resource runs out and recommendations on how to avoid getting there. I used a screenshot from VMware which is more interesting than what my lab environment shows.

Trend predictions will help you understand when you need to scale up

Note that you can obtain the same kind of capacity predictions for specific objects such as VMs and hosts, in which case you get capacity and time remaining. In the following screenshot, my vROPS appliance is showing a concerning CPU usage trend (if it were a production environment).

Individual objects also benefit from time and capacity remaining calculations

I wanted to tie that up with the “What-if Analysis” feature. While I get why some may find it a bit gimmicky, it fits right into the discussion around capacity planning, especially the use case we mentioned earlier where the business signed a big customer that will bring a large number of extra workloads. With What-if Analysis, you can simulate adding workloads to your environment to see if it would hold up or if you need to add capacity.

I went a bit crazy with my simulation; it turns out it wouldn't be a good idea to provision 100 virtual machines in my lab. The output also gives you an estimate of how much it would set you back to run the same workloads with various cloud providers (not cheap).

What-if Analysis lets you simulate a scenario where you add a number of workloads in the environment

Note that there are several What-if scenarios you can run, not only adding VMs:

    • Workload Planning: Traditional
    • Workload Planning: Hyperconverged and VMC on AWS
    • Infrastructure Planning: Traditional
    • Infrastructure Planning: Hyperconverged
    • Migration Planning: VMware Cloud
    • Migration Planning: Public Cloud
    • Datacenter Comparison: Private Cloud
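A "Workload Planning" scenario boils down to checking whether N extra VMs of a given profile fit in the remaining capacity. The toy check below shows that logic; every figure is invented.

```python
# Toy version of a workload-planning what-if check: do N extra VMs of a given
# profile fit within the cluster's remaining CPU and RAM? Figures are invented.

def fits(n_vms: int, vm_ghz: float, vm_gb: float,
         free_ghz: float, free_gb: float) -> bool:
    """True if the added demand stays within both remaining resources."""
    return n_vms * vm_ghz <= free_ghz and n_vms * vm_gb <= free_gb

# 100 VMs at 2 GHz / 8 GB each against 150 GHz / 600 GB of headroom:
print(fits(100, 2.0, 8.0, 150.0, 600.0))  # False (would need 200 GHz)
print(fits(50, 2.0, 8.0, 150.0, 600.0))   # True
```

vROPS layers cost estimates and per-cloud pricing on top of this kind of feasibility check.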

Application monitoring

While vRealize Operations is the best tool to monitor VMware environments, it is also a great contender when it comes to application monitoring. Established software products such as Nagios and Zabbix are very powerful in that regard, but vROPS holds its ground, as it can offer clear visualization of your application flows if you put in the time and effort to set it up.

Straight away, you can start by using the Service Discovery feature, which works by querying VMware Tools to identify a set of supported services. You can then enable their monitoring on the objects themselves. While this is a great start, you can achieve better in-depth visibility with Application Monitoring.

25 applications are available out of the box in vROPS

The feature leverages the Telegraf agent for Windows Server, Linux (rpm), AIX, Solaris, Oracle Linux, and Photon. It can be installed on virtual or physical machines. It supports a number of applications out of the box, comes equipped with sets of metrics, and can be expanded with custom script monitoring.

Application monitoring enables access to relevant metrics and displays the flows visually

As you can tell, vRealize Operations Manager is a very powerful product built on the experience VMware has acquired over the years since the first release of vCenter Operations Manager back in 2013. Although this blog was pretty lengthy, we barely scratched the surface of what vROPS can do and how it can help any organization become more proactive and achieve a better SLA.

If you are interested in giving vRealize Operations a shot, keep in mind that it is available in three license levels that give you access to more or fewer features.

Licensing

Consider the different licensing levels and their price points before getting started

I will finish by saying this: don't expect to configure everything perfectly straight away. vROPS is a very complicated product and some things may be confusing at first. We suggest you take it slowly. Start by deploying the appliance, connect your vCenter, and browse the menus to see what you get out of the box. Then, when you feel you are picking up how it works and getting comfortable with it, sit down with your colleagues and identify what must be monitored, when, with which thresholds, and which actions to take.

How to Replace Site Recovery Manager SSL Certificates https://www.altaro.com/vmware/replace-srm-ssl-certificates/ https://www.altaro.com/vmware/replace-srm-ssl-certificates/#respond Thu, 30 Apr 2020 16:07:08 +0000 https://www.altaro.com/vmware/?p=20156 Learn the steps needed in order to create a certificate request, as well as how to replace the signed certificate on the SRM servers


In this blog post, we will go through the steps needed to create the certificate request and replace the signed certificate on the SRM servers. Note that you will need OpenSSL to create the certificate request.

Replacing certificates is part of the job of managing an IT environment. Most products don't require installing certificates signed by a trusted CA (Certificate Authority), but rather offer a quicker route by leveraging self-signed certificates, and Site Recovery Manager (SRM) is no exception. However, installing custom certificates is usually regarded as a best practice to increase security in the infrastructure.

VMware‘s disaster recovery solution SRM works with a management server in each of the protected and recovery sites that pair and connect to the vCenter servers. The SRM server certificate establishes the identity and secures the communication between SRM servers and clients.

Creation of the Certificate

  1. Browse to your OpenSSL directory and create a config file like the one below, changing the environment-specific fields (key file name, SubjectAltName entries, organization and common name) to match your environment. In this example, I save it as “srm1.cfg“.

Note that you can put the server’s IP in the certificate but it is not required if you work with DNS names (which everyone should do).

In the “SubjectAltName” put the names the server is referred to (usually FQDN and short name).

If you look in openssl.cfg you will find more fields to populate in the last part (Country, city, email…).

[ req ]
default_bits = 2048
default_keyfile = mg-p-srm11.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:mg-p-srm11.mgmtdom.intra, DNS:mg-p-srm11

[ req_distinguished_name ]
O.organizationName = GROLAND
organizationalUnitName = GRO
commonName = srm1
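If you are generating request configs for several SRM servers, the SubjectAltName line is the only part that changes per host. Here is a small helper sketch that builds it from the list of names the server answers to (the function name is my own choice, not part of OpenSSL):

```python
def subject_alt_name_line(names):
    """Build the subjectAltName line of the OpenSSL request config from
    the names the server is known by (usually FQDN and short name)."""
    return "subjectAltName = " + ", ".join("DNS:" + name for name in names)

print(subject_alt_name_line(["mg-p-srm11.mgmtdom.intra", "mg-p-srm11"]))
# subjectAltName = DNS:mg-p-srm11.mgmtdom.intra, DNS:mg-p-srm11
```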

  2. Open an elevated command prompt, browse to the OpenSSL directory again, and generate the certificate request using the config file.

openssl req -new -nodes -out srm1.csr -keyout srm1-orig.key -config srm1.cfg

Command prompt OpenSSL certificate request

  3. Convert the generated private key to RSA format.

openssl rsa -in srm1-orig.key -out srm1.key


  4. Provide your certificate authority with the certificate request (.csr file) to sign the certificate.

(Note: consult this guide if you are signing it with a Microsoft CA and encounter an error regarding a missing template.)

  5. Once you have received the certificate from your CA, convert it to PKCS12 with the RSA private key, which is required by the SRM installer.

We will need OpenSSL one last time to do this operation. The signed certificate is named “srm1.cer” and the PKCS12 “srm1.p12” in this example.

openssl pkcs12 -export -in srm1.cer -inkey srm1.key -name "xav-win-srm" -passout pass:Password123 -out srm1.p12

Convert the certificate to PKCS12

Replacement of the certificate in Site Recovery Manager

  1. On the SRM Manager server, open the “Uninstall or change a program“ wizard, select “VMware vCenter Site Recovery Manager“ and click “Change“.

Replacement of the certificate in Site Recovery Manager

If you encounter an error due to UAC being enabled, follow this procedure.

VCenter Site Recovery Manager

  • Go to Start > Run, type regedit and click OK. The Registry Editor window opens.

  • Navigate to HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows > CurrentVersion > policies > system.

  • Modify DWORD EnableLUA from 1 to 0.

  • Restart the Windows machine and run the modify installation.

  2. Select the “Modify“ option.

VCenter SRM Modify

  3. Type in the credentials of the service account connecting to vCenter. The Username is pre-populated.

vCenter SRM Credentials

  4. Not much to do here except click “Next“.

vCenter Server Address click next

  5. In this pane, the “Local Host“ field is pre-populated with the IP address of the server. This field should match one of the SANs (Subject Alternative Names) of the certificate.

This means that if the IP is not present in the SANs of your certificate, change these fields to the FQDN of the SRM server.

Site Recovery Manager Extension Local Host
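In other words, whatever ends up in the “Local Host“ field must appear among the certificate’s DNS SANs. A minimal sketch of that check in Python — the function name and the SAN tuple shape (which mirrors what Python’s `ssl.getpeercert()` returns) are illustrative choices, not part of the SRM installer:

```python
def name_covered_by_sans(cert_sans, host):
    """Return True if `host` matches one of the certificate's DNS SANs.
    `cert_sans` uses the shape ssl.getpeercert()["subjectAltName"] returns,
    e.g. (("DNS", "mg-p-srm11.mgmtdom.intra"), ("DNS", "mg-p-srm11"))."""
    return any(kind == "DNS" and value.lower() == host.lower()
               for kind, value in cert_sans)

sans = (("DNS", "mg-p-srm11.mgmtdom.intra"), ("DNS", "mg-p-srm11"))
print(name_covered_by_sans(sans, "mg-p-srm11.mgmtdom.intra"))  # True
print(name_covered_by_sans(sans, "10.0.0.11"))                 # False: IP not in the SANs
```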

  6. Select “Use a PKCS#12 certificate file“ in the next pane.


  7. Browse to the .p12 file and type the password you chose when converting it in OpenSSL. It was "Password123" in my example.

Browse to the .p12 file and type the password

  8. Type the password of the user that connects to the database.

Type the password embedded database configuration

  9. Leave the next pane alone unless you want to change the account under which the SRM service runs.

vCenter SRM Use Local System Account

  10. Finish the wizard and click “Install“.

VCenter SRM install wizard

  11. It should complete successfully. If it doesn’t, review the installation logs.

InstallShield Wizard Completed

  12. Once the wizard is closed, you can verify that the replacement was done correctly by browsing to port 9086 of the SRM server over HTTPS.

Open the certificate presented and ensure it is the new one.

Certificate replacement verification

Install Certificate

SRM site pairing connection

Now if you look at the SRM plugin after replacing the certificate on the protected site (xav-win-vc in the example below), you will see that the recovery site (xav-win-vc2 below) is disconnected.


  13. Then replace the certificate of the recovery site (xav-win-vc2) by following the same procedure as before. You will notice that it is now the protected site (xav-win-vc) that is disconnected.

Replace the certificate of the recovery site xav win vc2

  14. To fix it, open the Site Recovery Manager plug-in in vCenter, go to “Sites“ and click on “Reconfigure Pairing“.

vSphere Web Client Reconfigure pairing

  15. The remote site of the one selected is pre-populated. Click “Next“ here.

Reconfigure Site Recovery Manager Server Pairing

  16. Type the password of the account used for the pairing.

Type the password of the account used for the pairing.

You will notice a task “Repair Connection” which should succeed.

Repair Connection Task

  17. Ensure that both the protected and recovery sites are now in the green “Connected“ state.

Ensure protected and recovery sites are in green Connected

The SRM Manager certificates are now successfully replaced.

Wrap up

Although not the most attractive part of a SysAdmin’s job, certificate management is critical to ensuring compliance with the company’s security policies. It is also very important that customers have a way to closely monitor the expiry dates of all the certificates installed in their environments. Whether this is handled manually by a crypto team or by a monitoring tool like Zabbix or Nagios, alarms should be configured to warn the infrastructure team in advance (two months is a safe margin). To monitor the SSL certificates of Site Recovery Manager, you can run SSL checks on port 9086 of each server, as seen earlier in this blog.
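If you don’t have a monitoring tool handy, such a check can be scripted. Here is a minimal Python sketch of the idea — the function names, the 60-day threshold, and the live-check example host are my own illustrative choices:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Parse a notAfter string in the format ssl.getpeercert() returns,
    e.g. 'Jun  1 12:00:00 2030 GMT', and return the number of days left."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

def check_srm_cert(host, port=9086, warn_days=60):
    """Fetch the certificate served on the SRM port and warn if it expires
    within `warn_days` (the two-month margin suggested above)."""
    ctx = ssl.create_default_context()  # validates the chain, so this works once the CA-signed cert is in place
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            left = days_until_expiry(tls.getpeercert()["notAfter"])
    if left < warn_days:
        print(f"WARNING: certificate on {host}:{port} expires in {left} days")
    return left

# Offline demo of the date parsing (no network needed):
print(days_until_expiry("Jan  1 00:00:00 2099 GMT") > 0)  # True
# Live check (uncomment with a real SRM host):
# print(check_srm_cert("srm1.yourdomain.tld"))
```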

The post How to Replace Site Recovery Manager SSL Certificates appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/replace-srm-ssl-certificates/feed/ 0
Understanding LDAP Channel Binding and LDAP Signing in 2020 https://www.altaro.com/vmware/understanding-ldap-binding-signing/ https://www.altaro.com/vmware/understanding-ldap-binding-signing/#respond Thu, 05 Mar 2020 20:05:23 +0000 https://www.altaro.com/vmware/?p=20163 The Microsoft update in security of network communications has a purpose to prevent a man-in-the-middle attack on an LDAP server. Read more about it here

The post Understanding LDAP Channel Binding and LDAP Signing in 2020 appeared first on Altaro DOJO | VMware.

]]>

Back in the summer of 2019, Microsoft announced a change to increase the security of network communications between Active Directory Domain Services (AD DS) or Active Directory Lightweight Directory Services (AD LDS) servers and their clients. This hardening update changes the default behaviour of Active Directory Domain Controllers (AD DC) to enforce LDAP channel binding and LDAP signing. Its purpose is to prevent an attacker from performing a man-in-the-middle attack on an LDAP server.

The statement was accompanied by a Windows support article setting the rollout date of the update to January 2020. Although that may seem like plenty of time to prepare for such a change, it turns out many software vendors did not react to it and didn’t provide any recommendations for proactive remediation. Other solutions simply offer no support for protocols other than unsigned LDAP. For these reasons, Microsoft delayed the rollout to March 2020, both to give customers and vendors more time to get ready and to steer clear of the 2019 holiday season, since many customers restrict configuration changes during the holidays and need more time to prepare and test.

It took VMware some time, a few VMTN posts and support tickets from several concerned customers before publishing official communication on the topic. The fact that no documentation was made available was a little worrying as such a change could harm the SLA of many production environments. In this blog, we will go over a few of the main VMware products and see how we can prepare for this Microsoft update.

Important: This blog contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure you know what you are doing and back up the registry before modifying it so you can restore it if a problem occurs. Changes to the registry on production domain controllers should be done with great care.

Identifying clients that could be impacted

To identify the clients that could experience a negative effect from the update, we need to enable LDS diagnostic event logging on the domain controller. After this procedure, any client that relies on unsigned SASL LDAP binds or on LDAP simple binds over a non-SSL/TLS connection will generate an event with ID 2889 every time it makes a request.

Change the following registry key on the domain controller (No restart is required).

HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics

  • “16 LDAP Interface Events“: set it to 2

Once this key has been edited, the event viewer will start logging diagnostic events under the “Directory Service” log.

Directory Service Log

Whenever a client makes an unprotected request, a 2889 event such as this one will appear. I displayed it in PowerShell to show all of its information. It shows the IP of the machine making the request and which domain account was used. You may find software other than VMware popping up in this list, which you can proactively remediate as well.

2889 error screen

vCenter server

The single-sign-on (SSO) component of vCenter leverages identity sources to allow users to connect using their AD or OpenLDAP credentials. We are obviously only dealing with AD here and the two (and a half) different ways to implement it as an identity source.

Active Directory (Integrated Windows Authentication)

Impact of the Microsoft Update: None

The machine on which the vCenter Single Sign-On service is running must be in an Active Directory domain to use this option. VMware released a blog post on 13 January 2020 stating that identity sources configured with Integrated Windows Authentication (IWA) will not be impacted by the Microsoft update.

Integrated Windows Authentication (IWA) has also been tested by VMware Engineering and verified to be compatible with these changes. IWA uses different protocols and mechanisms to interact with Active Directory and is not affected by changes to the Active Directory LDAP servers.

One odd thing about it is that it still generates 2889 events even though it works with the LDAP hardening settings enabled.

Active Directory as an LDAP Server

This option is available for backward compatibility and requires that you specify the domain controller and other information. It is also useful if you need to connect to a different domain than the one the vCenter Server is a member of. This mode can be configured for TLS-encrypted communications with the domain controllers (LDAPS) or unencrypted communications (LDAP).

Encrypted (LDAPS)

Impact of the Microsoft Update: None according to VMware.

If your identity source is already configured with LDAPS you don’t need to change anything.

Unencrypted (LDAP)

Impact of the Microsoft Update: Login failure.

Unencypted LDAP vCenter Single Sign in

If you have an identity source configured for unencrypted LDAP, you will face failed logins for any user on that domain. Any solution connecting to vCenter using AD accounts (usually service accounts) will be negatively affected once the change is rolled out by Microsoft. As a result, it is highly recommended to configure LDAPS on these identity sources.

Any system that connects to Active Directory via LDAP without using TLS will be negatively affected by this change. This includes VMware vSphere.

To proactively remediate this situation, you need to retrieve the domain controllers’ certificates and enable encryption on the identity source.

1. List all domain controllers in the domain (replace xav.test with your domain’s FQDN). Run the following command on your workstation.

nltest /dclist:xav.test

You will get the list of the domain controllers on the left, with a mention of which one is the PDC (primary domain controller). I only have one DC in my lab, but you will likely see a line for each of your DCs.

1. List all domain controllers in the domain

2. Retrieve the certificates of each domain controller. Run the following command on your workstation against each domain controller.

openssl s_client -connect xav-win-dc.xav.test:636 -showcerts

The output will contain the certificate to use to validate the identity when using LDAPs in vCenter.

2. Retrieve the certificates of each domain controller

3. Copy and paste the content of the certificate into a notepad. To get the full certificate chain, remove the extra text between the server certificate and the root/intermediate CA certs. Save it as xav-win-dc.xav.test.cer so you know which is which, and open it to check that it is correct.

3. Copy and paste the content of the certificate in a notepad
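If you have many domain controllers, the manual cleanup in step 3 can be scripted. A small sketch, assuming you saved the raw `openssl s_client` output to a file (the function name and abbreviated sample output are mine):

```python
import re

PEM_BLOCK = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", re.DOTALL)

def extract_pem_certs(s_client_output):
    """Return every PEM certificate block found in `openssl s_client
    -showcerts` output, dropping the session text around them."""
    return PEM_BLOCK.findall(s_client_output)

# Abbreviated sample of what s_client prints around a certificate:
raw = """depth=0 CN = xav-win-dc.xav.test
-----BEGIN CERTIFICATE-----
MIIB...base64 payload...
-----END CERTIFICATE-----
Server certificate
subject=CN = xav-win-dc.xav.test"""
print(len(extract_pem_certs(raw)))  # 1
```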

4. Log in to vCenter using an SSO admin (administrator@vsphere.local) > Administration > Configuration > Identity sources.

4. Log in vCenter using an SSO admin

5. Edit the LDAP source and enable LDAPS on the identity source by checking “Protect LDAP communication using SSL certificate (LDAPS)”, then click “Next”.

If you use “Connect to any DC in the domain” and an “ldap://xxx” value is shown under the greyed-out server URL field, check the other box, clear the field, and check the first box again. Otherwise, the menu will incorrectly report ldap instead of ldaps.

5. Edit the LDAP source

6. Checking that box will add a pane to provide the domain controller certificates that we gathered earlier. Click the “+” button to add all the certificates. Click “Next” and finish.

A quick reminder that you need to add the DC server certificates, not the root CA certificate.

Click “Next” and finish

7. Complete the wizard and ensure that the menu is now showing ldaps.

7. Complete the wizard

Horizon View

VMware’s virtualized desktop infrastructure (VDI) solution binds with Active Directory to perform a number of tasks. VMware released KB76062 which states that the solution is compatible with the Microsoft update. The Horizon manager UI doesn’t offer any configuration for LDAPS anyway.

  • Horizon Enterprise uses secure Generic Security Services Application Program Interface (GSSAPI) LDAP binds, with both signing and sealing enabled.
  • Horizon Enterprise supports Active Directory Domain Controllers that require signing.
  • Horizon Enterprise supports Microsoft Security Update as described in ADV190023.

However, do make sure that the vCenter servers paired with Horizon are correctly configured. Failing to do so will result in vCenter showing an error in Horizon Administrator, because Horizon cannot connect to it once the domain controller rejects insecure communications.

vCenter servers paired with Horizon

App Volumes

Warning: App Volumes versions up to 2.18 and 4.0 are not compatible with LDAP channel binding. If the registry key LdapEnforceChannelBinding is set to 2, a “login failure” will occur. VMware is currently working on a fix. The currently recommended actions are described in KB77093.

If you use App Volumes in your VDI environment, you will need to pay attention to how it binds to your Active Directory. If you use insecure LDAP you will once again face failed logins.

App Volumes

To proactively remediate we need to enable LDAPS.

1. Connect to App Volumes Manager and go to the AD Domains configuration pane. Select your insecure domain binding and click “Edit“.

App Volumes Manager

2. Under “Security” select “Secure LDAP (LDAPS)“.

Secure LDAP LDAPS

Note that disabling certificate validation is not recommended, but the connection will still work if you do.

To enable certificate validation, you will need to save the certificate of your root CA (and all intermediate CAs) in a file named “adCA.pem“ stored in “C:\Program Files (x86)\Cloud Volumes\Manager\config“. Finally, restart the “App Volumes Manager“ service.
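Assembling that file is just a concatenation of the PEM certificates. A sketch, assuming your root and intermediate CA certs are already exported as PEM files (the function name and demo file names are mine):

```python
import tempfile
from pathlib import Path

def build_adca_pem(ca_cert_files, out_path):
    """Concatenate root and intermediate CA certificates (PEM) into the
    single adCA.pem file that App Volumes reads from its config folder."""
    pem = "\n".join(Path(f).read_text().strip() for f in ca_cert_files)
    Path(out_path).write_text(pem + "\n")

# Self-contained demo with throwaway files; in production the inputs would be
# your exported CA certs and the output would live under
# C:\Program Files (x86)\Cloud Volumes\Manager\config\adCA.pem
tmp = Path(tempfile.mkdtemp())
(tmp / "root.pem").write_text("-----BEGIN CERTIFICATE-----\nROOT\n-----END CERTIFICATE-----\n")
(tmp / "intermediate.pem").write_text("-----BEGIN CERTIFICATE-----\nINT\n-----END CERTIFICATE-----\n")
build_adca_pem([tmp / "root.pem", tmp / "intermediate.pem"], tmp / "adCA.pem")
print((tmp / "adCA.pem").read_text().count("BEGIN CERTIFICATE"))  # 2
```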

Enforcing LDAP signing and Channel Binding

You can temporarily enforce LDAP signing and channel binding even before the update is distributed, if you want to test your setup and see what breaks in a controlled environment, or just want to see for yourself in a lab.

LDAP Signing

Channel Binding

It is safe to assume that these settings can be set back to their original values after the update is deployed, in case of an emergency where some software stops working and you need a quick fix. However, it is of course not recommended to delay security patching. If there’s more you wish to know about LDAP channel binding and LDAP signing, let me know in the comments below and I’ll get back to you!

The post Understanding LDAP Channel Binding and LDAP Signing in 2020 appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/understanding-ldap-binding-signing/feed/ 0
How to Manage VMware VMs with Azure Arc https://www.altaro.com/vmware/azure-arc-management/ https://www.altaro.com/vmware/azure-arc-management/#respond Fri, 24 Jan 2020 08:49:52 +0000 https://www.altaro.com/vmware/?p=20078 Learn everything you need in order to manage VMware VMs with Azure Arc - a management Godzilla that may very well set Azure apart from other cloud providers

The post How to Manage VMware VMs with Azure Arc appeared first on Altaro DOJO | VMware.

]]>

In this blog post, we’ll be talking about Azure Arc and how VMware admins can leverage it for management purposes.

At Microsoft Ignite 2019, Microsoft announced the public preview of Azure Arc. This new service is the management Godzilla that may very well set Azure apart from other cloud providers. It extends the feature set of Azure Resource Manager to servers and Kubernetes clusters and provides a centralized management platform for these endpoints, whether they reside on-premises or even in other public clouds like AWS and GCP.

As of right now, Azure Policy Guest Configuration and Log Analytics are the only services available with Azure Arc managed servers. There is also no pricing scheme set up for the service while it is in preview. While this new and shiny service is in its infancy, Azure Arc can be a fantastic way to manage all resources through a single pane of glass, especially for VMware administrators running a hybrid cloud infrastructure. In order to onboard servers into Azure Arc, an agent must be installed on each server. Below are the steps for getting started.

Requirements and Limitations

Azure Arc is currently compatible with the following server OSes:

  • Windows Server 2012 or newer

  • Ubuntu 16.04 and 18.04

By default, you can only have 800 servers per resource group, so keep this in mind when planning. Azure Arc requires an outbound connection to the Azure Arc services and also works with an HTTP proxy, so for any networking configuration be sure to check out the network configuration guide.

Installing the Azure Arc Providers

Before we can start using Azure Arc, we need to register the providers. The easiest way to do this is to open up a CloudShell environment or login to Azure CLI and type in the following commands:

az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'

Installing the Azure Arc Providers

Now that we have both required providers registered we are ready to connect a Windows and Linux server in our VMware environment to the Azure Portal by deploying the agent to them.

Deploying Agent On-Premise

The quickest way to get some agents connected is with PowerCLI. To do this, we will need to create a service principal, which will be used to authenticate with our Azure subscription and onboard our VMs. The fastest way to create an SP is by pasting the following command into Azure CLI. We will be assigning our SP the “Azure Connected Machine Onboarding” role. This is a role that Microsoft has made specifically for onboarding VMs to Azure Arc, and it has a very limited scope of permissions:

az ad sp create-for-rbac --name sp-lukelab-azurearc --role "Azure Connected Machine Onboarding"

Deploying Agent On Premise

Take note of the AppID and Password. We will need to feed these into our PowerCLI script. If you don’t have PowerCLI installed on your machine, run the following command:

Install-Module VMware.PowerCLI -Force

Then we need to connect to our vCenter environment with the following syntax. In the example, my vCenter server’s name is “vcenter.lukelab.lcl”. Input the credentials for vCenter to successfully connect:

Connect-VIServer -Server vcenter.lukelab.lcl

Connect to VCenter environment

Now let’s install an agent on Ubuntu and Windows. Microsoft provides a script for each OS; we will use PowerCLI to invoke the script on each one.

Installing on Windows

Fill in the service principal ID, secret, resource group, subscription ID, and tenant ID with your own details:

$installscript = @'
# Download the package
Invoke-WebRequest -Uri https://aka.ms/AzureConnectedMachineAgent -OutFile AzureConnectedMachineAgent.msi

# Install the package
msiexec /i AzureConnectedMachineAgent.msi /l*v installationlog.txt /qn | Out-String

& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
  --service-principal-id "962a8e5f-7e3e-43bd-aba8-22sdfs234b43" `
  --service-principal-secret "O31FafVA:mL_=zZoLo[JeMV8vLXrjWF3" `
  --resource-group "rg-LukeLab-OnPremise" `
  --tenant-id "24e2975c-af72-454e-8dc0-572345151" `
  --location "WestUS2" `
  --subscription-id "f7c32571-81bd-4e97-977b-7e2234323"

'@

Invoke-VMScript -vm web1 -scripttype powershell -scripttext $installscript -GuestCredential (get-credential)

Paste the script into the PowerShell window that is already connected to VCenter through PowerCLI:

Script pasting into Powershell window

Now the script will run; it will take a few minutes to install and for the server to show up in the portal.

Installing on Ubuntu

Just like with our Windows script, fill in the service principal ID, secret, resource group, subscription ID, and tenant ID with your own details:

$installscriptlinux = @"
# Download the installation package
wget https://aka.ms/azcmagent -O ~/Install_linux_azcmagent.sh

# Install the connected machine agent. Omit the '--proxy "{proxy-url}"' parameters if proxy is not needed
bash ~/Install_linux_azcmagent.sh

azcmagent connect \
  --service-principal-id "962a8e5f-7e3e-43bd-aba8-22sdfs234b43" \
  --service-principal-secret "O31FafVA:mL_=zZoLo[JeMV8vLXrjWF3" \
  --resource-group "rg-LukeLab-OnPremise" \
  --tenant-id "24e2975c-af72-454e-8dc0-572345151" \
  --location "WestUS2" \
  --subscription-id "f7c32571-81bd-4e97-977b-7e2234323"

"@
 
Invoke-VMScript -vm web4 -scripttype bash -scripttext $installscriptlinux -GuestCredential (get-credential)

Then we invoke it with Invoke-VMScript:

Invoke-VMScript

It will take a few minutes to install. Then the server will appear in the portal.

Managing Servers in the Azure Arc Portal

When we check the portal, we can see that both our Windows and Linux servers residing on-premises are now added into Azure. We can now use Guest Configuration and Log Analytics with them. Let’s assign a policy to Web1:

Managing Servers in the Azure Arc Portal

We select Web1 and choose “Assign Policy”:

Web1 policy assignation

Select the resource group for the scope, and select the policy definition “Configure time zone on Windows machines”. This is currently the only configuration policy that can be enforced on guest VMs. More will be added in the future, but Microsoft wanted to start with something nonintrusive at first, like setting the time zone:

Assign policy

We will configure Web1 to always have the “Hawaii” time zone set. We will also check “Create a remediation task” to enforce this policy on the resource group right now:

Create a remediation task Configure time zone on Windows machines

During the remediation process we can see that it is evaluating the assigned scope:

Remediation process

In a few minutes, our resource is now compliant:

Resource compliant

When we check Web1 we can see that the time zone has been changed:

Time zone change

Just the Beginning

Microsoft has big plans for Azure Arc. More and more policy definitions will become available bringing more functionality to the service. This is a big play on Microsoft’s part and really shows their stance on multi/hybrid cloud environments. It could potentially become the answer for solving the management headaches that come with the cloud and could make compliance and governance with VMware and Azure much easier.

 

The post How to Manage VMware VMs with Azure Arc appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/azure-arc-management/feed/ 0
How to use Azure DSC and Terraform to Customize vSphere VMs https://www.altaro.com/vmware/azure-dsc-terraform-customize-vsphere-vms/ https://www.altaro.com/vmware/azure-dsc-terraform-customize-vsphere-vms/#respond Thu, 11 Jul 2019 16:47:57 +0000 https://www.altaro.com/vmware/?p=19637 This guide shows you how to use Desired State Configuration to provide another level of customization to VMs. Let's get started!

The post How to use Azure DSC and Terraform to Customize vSphere VMs appeared first on Altaro DOJO | VMware.

]]>

In our previous article, we successfully provisioned a VM in vSphere using Terraform while saving our state files to Terraform Enterprise. Now, we will take it a step further and use Desired State Configuration to customize our VM even further. Terraform can easily be mistaken for another form of configuration management; however, it’s not the same as products like Ansible, Chef, DSC, or Puppet. Terraform is classified as an “orchestration tool”, used to define, deploy, and organize infrastructure, while configuration management software is used to deploy and manage the OS and hosted software. We will add an additional element to our VM deployment process: adding our VM into Azure Automation‘s Configuration Management and applying a configuration to the node.

Creating an Azure Automation Account

To get started, we need to create an Azure Automation Account in order to use Azure DSC with our on-prem VMs. If you do not already have an Azure account, sign up for the free trial. Azure DSC gives us some great tools for managing our DSC nodes. The service is free for all VMs that reside in Azure; for on-prem nodes, your first 5 nodes are free, and after that it’s $6 USD per node a month, which in the grand scheme of things isn’t a bad deal. You can also adjust the pull intervals to reduce that cost even further. To create the Automation Account, search for the service in the search bar at the top of the screen, then select Add to create your account:
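As a back-of-the-envelope check on those numbers — the figures are the preview-era ones quoted above (first 5 on-prem nodes free, then $6 USD per node per month), so treat them as illustrative:

```python
def arc_monthly_cost(onprem_nodes, free_nodes=5, rate_usd=6.0):
    """Estimated monthly Azure Arc cost for on-prem nodes under the
    preview-era pricing quoted in this article: first 5 free, $6/node after."""
    return max(onprem_nodes - free_nodes, 0) * rate_usd

print(arc_monthly_cost(5))   # 0.0  - covered by the free tier
print(arc_monthly_cost(25))  # 120.0
```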

azure automation accounts

Now that our Automation Account has been created, we can upload a DSC configuration and compile it in order to apply a “desired state” to our VM deployment. For our example, I’m going to upload a simple DSC configuration to install the web server role. To do this, we’ll save the below configuration to a .ps1. I named mine “webserver.ps1”:

Configuration WebserverConfig {

    # Import the module for dsc
    Import-DscResource -ModuleName PsDesiredStateConfiguration

    Node 'localhost' {

        #ensures that the Web-Server (IIS) feature is enabled.
        WindowsFeature WebServer {
            Ensure = "Present"
            Name   = "Web-Server"
        }

     
    }
}

To upload the configuration, navigate to the Automation Account we created and select State Configuration (DSC) on the left-hand side. Then select the Configurations tab and choose Add:

state configuration

Select the .ps1 file we just created, in my example it was saved as “Webserver.ps1”. Click OK to import it:

Importing configurations

Now that our configuration shows in the list, select the WebServerConfig configuration:

configurations

Now we are presented with a menu for compiling our configuration. This is the process where a MOF file is generated from the configuration we just uploaded. Click on Compile. The Completed status will show once it has finished; it can take a few minutes:

webserverconfig

Note the name of our newly compiled configuration; we will use it in our Terraform configuration to specify the name of the config to assign to the VM after deployment:

compiled configuations

Integration with vSphere VM Templates

In order to add Windows on-premises servers into Azure Configuration Management, we need to generate a meta MOF file that tells the Local Configuration Manager (LCM) to report into our Azure Automation Account. After we generate this meta MOF file, we will store it on our VMware template and use Terraform’s OS customization options to run a PowerShell script that configures the LCM when the VM gets built.

To generate our meta mof, we will need the AZ cmdlets. If you don’t have them you can install them on an administrative PowerShell console with the following command:

Install-Module Az -Force

Next, we’ll use the Connect-AZAccount command to connect to our Azure tenant:

Connect-AzAccount

Once connected, we will generate our meta MOF file into the “C:\DSCConfigs” directory. We will use the Get-AzAutomationDscOnboardingMetaConfig cmdlet and specify our Automation Account name and resource group. We will also use the computer name localhost, because we are going to move this meta MOF file to the VM template and configure the LCM directly from the template on deployment:

Get-AzAutomationDscOnboardingMetaconfig -ResourceGroupName 'LukelabDSC-RG' -AutomationAccountName 'LukeLabDSC' -ComputerName localhost -OutputFolder "C:\DSCConfigs"

My “localhost.meta.mof” file has been created:

Now we will need to create a script that will apply the meta configuration from the file. So copy the command below and save it as a .ps1. I saved mine as “ConfigureLCM.ps1”:

Set-DSCLocalConfigurationManager -path 'C:\DSCConfigs' -force

Now, we can transfer our meta MOF file and the ConfigureLCM.ps1 script to our VM template. In vCenter, right-click the VM template and choose Convert to Virtual Machine:

We’ll power up the template VM and transfer our two files to the C:\DSCConfigs folder. Note that the .ps1 calls the meta MOF file from that folder, so if you want to place them elsewhere you will need to change the path in the .ps1:

Next, power off the VM and convert it back to a template:

converting to template

One last .ps1 file to create: we must make a script for Terraform to execute that assigns the VM to the specified compiled DSC configuration. I’ve created a service principal account and assigned it the appropriate permissions to assign configurations to DSC nodes. The script will use that account to connect to Azure and perform our node assignments. Note that, to keep the example simple, I’ve hardcoded the password for the service principal account into the script. I highly recommend taking another approach, such as encrypting a text file with the password and retrieving it with a designated service account. I saved this script as “C:\Scripts\AddNodeToDSCConfig.ps1” in my example. We’ll use the parameters to pass through the VM name and the compiled configuration to apply:

param(
    [string]$ServerName,
    [string]$DscConfig
)

# Automation Account info
$resourceGroup         = "LukeLabDSC-RG"
$automationAccountName = "LukeLabDSC"

# Service principal credentials (hardcoded here for demonstration only)
$password   = ConvertTo-SecureString 'P@ssw0rd' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('a38b9074-e00e-41d8-b0e1-dbd59250e120', $password)

# Connect to the Azure account as the service principal
Connect-AzAccount -Credential $credential -Tenant "24e2975c-af72-454e-8dc0-579d886a1532" -ServicePrincipal

# Wait for the VM to appear in State Configuration as a node
do {
    $checkNode = Get-AzAutomationDscNode -Name $ServerName -ResourceGroupName $resourceGroup -AutomationAccountName $automationAccountName
    Start-Sleep -Seconds 10
} until ($checkNode.Name -eq $ServerName)

# Assign the node the compiled DSC configuration
Set-AzAutomationDscNode -NodeConfigurationName $DscConfig -ResourceGroupName $resourceGroup -Id $checkNode.Id -AutomationAccountName $automationAccountName -Force
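As noted above, hardcoding the password is for demonstration only. One common alternative on Windows is to store the credential encrypted per-user with DPAPI via Export-Clixml and import it in the script. A sketch, where the .xml path is just an example:

```powershell
# One-time setup: run as the same account that will execute the script
Get-Credential | Export-Clixml -Path 'C:\Scripts\dsc-sp.cred.xml'

# In AddNodeToDSCConfig.ps1, replace the hardcoded credential lines with:
$credential = Import-Clixml -Path 'C:\Scripts\dsc-sp.cred.xml'
Connect-AzAccount -Credential $credential -Tenant "24e2975c-af72-454e-8dc0-579d886a1532" -ServicePrincipal
```

Export-Clixml encrypts the password with the Windows Data Protection API, so the file can only be decrypted by the same user on the same machine, which keeps the secret out of your script and out of source control.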

The hard part is over; now we just need to edit our Terraform configuration file.

Deploying a VM with Terraform

We only need to add a few lines to our configuration from the previous article. First, we will modify the VM customization to include a step that runs the script configuring our VM to report into Azure Automation:

customize {
      windows_options {
        computer_name  = "Web1"
        workgroup      = "home"
        admin_password = "${var.admin_password}"

        auto_logon = "true"
        auto_logon_count = "1"
        run_once_command_list = ["powershell C:/DSCconfigs/ConfigureLCM.ps1"]
      }

Next, we will add a section at the end of the VM resource to run our script that assigns the webserverconfig.localhost configuration to the node:

provisioner "local-exec" {
    command = "C:\\Scripts\\AddNodeToDSCConfig.ps1 -servername web1 -dscconfig webserverconfig.localhost"
    interpreter = ["PowerShell"]
  }

The full Terraform configuration looks like the following. Keep in mind I have all the variables set in a terraform.tfvars file:

variable "username" {}
variable "password" {}
variable "admin_password" {}

terraform {
  backend "remote" {
    organization = "LukeLab"

    workspaces {
      name = "VM-Web1"
    }
  }
}

provider "vsphere" {
  user           = "${var.username}"
  password       = "${var.password}"
  vsphere_server = "192.168.0.7"
  version = "~> 1.11"

  # If you have a self-signed cert
  allow_unverified_ssl = true
}

#Data Sources
data "vsphere_datacenter" "dc" {
  name = "LukeLab"
}

data "vsphere_datastore" "datastore" {
  name          = "ESXi1-Internal"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_compute_cluster" "cluster" {
  name          = "Luke-HA-DRS"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "VM Network"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "VMTemp"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

#Virtual Machine Resource
resource "vsphere_virtual_machine" "web1" {
  name             = "Web1"
  resource_pool_id = "${data.vsphere_compute_cluster.cluster.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 4096
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"
  firmware = "efi"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "vmxnet3"
  }

  disk {
    label            = "disk0"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      windows_options {
        computer_name  = "Web1"
        workgroup      = "home"
        admin_password = "${var.admin_password}"

        auto_logon = "true"
        auto_logon_count = "1"
        run_once_command_list = ["powershell C:/DSCconfigs/ConfigureLCM.ps1"]
      }

      network_interface {
        ipv4_address = "192.168.0.46"
        ipv4_netmask = 24
      }

      ipv4_gateway = "192.168.0.1"
    }
  }

  provisioner "local-exec" {
    command     = "C:\\Scripts\\AddNodeToDSCConfig.ps1 -servername web1 -dscconfig webserverconfig.localhost"
    interpreter = ["PowerShell"]
  }
}
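For reference, the terraform.tfvars file holding those variables would look something like this (the values shown are placeholders, not real credentials):

```hcl
username       = "administrator@vsphere.local"
password       = "YourVSpherePassword"
admin_password = "YourWindowsAdminPassword"
```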

We run our terraform apply and see the build process start:


Once complete, we can see that our Web1 VM node has been deployed and assigned a DSC node configuration:


We can verify IIS is installed by navigating to the VM’s IP in a web browser:

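The same check can also be scripted. Assuming the default IIS site is up on the address we assigned in the Terraform customization, a simple request should come back with HTTP 200:

```powershell
# Query the new web server (IP from the Terraform customization above)
$response = Invoke-WebRequest -Uri 'http://192.168.0.46' -UseBasicParsing
$response.StatusCode   # 200 means IIS is serving the default site
```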

Wrap-Up

The combination of Terraform and a configuration management tool has endless possibilities. You can get as granular as you want with a VM build. The next step would be to put all of the code for this VM into some sort of source control for versioning, which we’ll cover in a future blog post.

What about you? Do you find this type of deployment management helpful? Let us know in the comments section below!

Want to make sure your VMs are optimized at all times? Here’s how to make your own ESXi dashboard using PowerShell

The post How to use Azure DSC and Terraform to Customize vSphere VMs appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/azure-dsc-terraform-customize-vsphere-vms/feed/ 0
How to Change the IP Configuration of vCenter Server Appliance https://www.altaro.com/vmware/ip-configuration-vcenter-server-appliance/ https://www.altaro.com/vmware/ip-configuration-vcenter-server-appliance/#comments Thu, 21 Feb 2019 17:43:45 +0000 https://www.altaro.com/vmware/?p=19290 As of vSphere 6, we can now change the IP address configuration of vCenter in a few short steps as long as the following prerequisites are met.

The post How to Change the IP Configuration of vCenter Server Appliance appeared first on Altaro DOJO | VMware.

]]>

Whether you’re undergoing a data center move or implementing a new network solution, as a VMware administrator you may run into the scenario where the IP address of vCenter needs to be changed after it has been deployed. Back in the good old days, we would have had to rebuild vCenter with the new IP address, but vCenter has come a long way since then. As of vSphere 6, we can change the IP address configuration of vCenter in a few short steps, as long as the following prerequisites are met:

  • The system name of the appliance must be an FQDN; it cannot be an IP address. If the system name is set to an IP address, you will be unable to change any of the IP address settings, because the system name is hardcoded into the appliance and used as a network identifier. So if your vCenter is configured to use the IP address as the system name, you may either get the error “Management network configuration not allowed” or your IP address changes simply won’t save.
  • The user account that is used to change the IP address settings of vCenter must be a member of the SystemConfiguration.Administrators group in vCenter Single Sign-On.

Once the prerequisites are met, there are several ways to change the IP address of the vCenter Server Appliance. Below I will demonstrate how to change the IP configuration of the VCSA through the vSphere web client and through the VM console.

Changing the IP Address from the vSphere Client

Using the vSphere client to change the IP address of vCenter is VMware’s documented and preferred method, and it takes only a few simple steps. Log into the “flash” version of the vSphere client by going to the following URL:

http://addressofvcenterserver/vsphere-client

Once logged in, select the Administration menu from the home page and on the left-hand side select System Configuration:


Next, select Nodes and then choose the vCenter Server whose IP address you would like to change. Select the Manage tab and then, under Networking, click the Edit button:

You can now enter your desired networking configurations. Expand DNS or the network adapter to modify each one. Click OK to save your changes:

As soon as you click OK, the IP address will be changed and you will need to log back into vCenter under the new IP address. Below I have confirmed that the address of my vCenter has been successfully changed to 192.168.0.17:
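If you have PowerCLI installed, you can also confirm the appliance answers on its new address from the command line. This is just a quick sanity check; the IP matches the example above:

```powershell
# Connect to vCenter on the new IP; a successful connection confirms the change took effect
Connect-VIServer -Server 192.168.0.17

# Tidy up the session afterwards
Disconnect-VIServer -Server 192.168.0.17 -Confirm:$false
```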

Changing the IP Address from the Console

What happens if I need to change the IP address of vCenter and no longer have access to the server because the IP space has drastically changed? This wouldn’t be the ideal process, but sometimes unplanned events happen, and a major part of working in IT is jumping through hoops. You can change the IP address of the vCenter Server Appliance through the console. Simply right-click the VM itself and open a console session to it:

Now, here’s the tricky part that hangs up a lot of people: in order to get access to the screen that provides the “usual ESXi” options for configuration, you must hold ALT and press F2. This will switch you over to the familiar interface. Notice that it is blue instead of the typical yellow color we usually see:

Press F2 for Customize System, then log in with an account that meets the privileges stated in the prerequisites above. Select Configure Management Network, then select IP Configuration to change the IP address of the server, or DNS Configuration to change any of the DNS servers. When you’re done, hit Enter and then back out of the menu by pressing ESC. At the prompt “Apply changes and restart management network?”, select Yes:

NOTE: As stated in the prerequisites above, if vCenter is configured with the system name set to the IP address, you will get the “Management network configuration not allowed” message like below. If this is the case, you’re out of luck: you will not be able to change the IP address of vCenter and will need to deploy a new one:

Wrap Up

There are many scenarios where changing the IP address of vCenter is required; one of the most common is when changing out DNS servers. Changing the IP address of the vCenter Server Appliance can be relatively easy. However, do your due diligence: ensure vCenter is in a healthy state and create a backup of vCenter before making any changes. It’s better to have multiple backups and not need them than to have no backups and need them. Also, let me know in the comments below about any situations you’ve run into where you had to change the IP configuration of vCenter, and whether it was a successful and easy change.

Thanks for reading!


The post How to Change the IP Configuration of vCenter Server Appliance appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/ip-configuration-vcenter-server-appliance/feed/ 1
3 High-Value Reasons to use vRealize Operations Manager https://www.altaro.com/vmware/vrealize-operations-manager/ https://www.altaro.com/vmware/vrealize-operations-manager/#respond Thu, 07 Feb 2019 20:56:08 +0000 https://www.altaro.com/vmware/?p=19227 3 core reasons why you would want to use vRealize Operations Manager, and why you couldn't do the same thing with vCenter

The post 3 High-Value Reasons to use vRealize Operations Manager appeared first on Altaro DOJO | VMware.

]]>

In our previous post, we discussed some of the tools that VMware includes in the vRealize Suite such as vRealize Operations Manager, one of the most popular ones, also commonly referred to as vROPs. We briefly discussed some of the benefits in that post, but I wanted to deep dive into this particular tool just a bit more as I think a lot of businesses can benefit from it.

One of the most common complaints about the tool is that it can be expensive, and that if it’s pulling information from vCenter, why not just use vCenter to manage and monitor your environment? In this article, I’m going to give you three core reasons why you would want to use this tool, and why you can’t do the same thing with vCenter.

What is vRealize Operations Manager?

The answer to what vRealize Operations Manager is, is simple: it takes some of vCenter’s functions and makes them better. It’s designed to provide you, the admin, with more actionable information at your fingertips and even automate some of the actions. vRealize Operations Manager comes as a virtual appliance that you deploy in your environment in standalone or HA mode. You can find the procedure in our short vRealize Operations Manager installation guide for Horizon.

There are many useful applications for vRealize Operations Manager, as it is a complete and complex product. Note that we are only skimming the surface of what is possible in this blog post; you can find other exciting features in our top 10 vROPs features blog.

vROPs Alerts


Frequent alerts, or alert storms, happen when tools flood administrators with non-pertinent information, and ironically this is always right in the middle of a major issue. Your phone just keeps going off, even though you’re already aware of the problem! This extends the time it takes to correct issues, resulting in longer periods of downtime.

vRealize Operations Manager includes a Smart Alerts feature that identifies the root cause of an issue, alerts administrators to that issue while filtering out extraneous notifications, and makes a recommendation for remediation. The result is a faster time to problem remediation.

One of the great benefits is that it gives you a tip on what the cause might be, eliminating some of the troubleshooting!

It is very much a proactive tool, whereas vCenter is more reactive: you actually have to know what you want to monitor for vCenter alerts to give you what you want.

You can see in the image to the left that it tells you the issue and then explains what you can do to fix it. In this particular example, it’s possible that the DRS automation level is set too conservatively and needs to be changed, or maybe you’re running the cluster in manual or partially automated mode and fully automated mode could relieve some of those contention issues. It helps point you in the right direction! vCenter by itself doesn’t have the ability to make these observations and offer educated suggestions based on the information at hand.

Policy-Based Reporting and More!

First, a definition (from the official documentation) of what a policy in vRealize Operations Manager is:

A policy is a set of rules that you define for vRealize Operations Manager to use to analyze and display information about the objects in your environment. You can create, modify, and administer policies to determine how vRealize Operations Manager displays data in dashboards, views, and reports.


After the initial installation, a default policy is created in vRealize Operations Manager, containing settings that most organizations would find useful. This policy is initially linked to all the objects that are monitored. However, you can create additional policies and link them to vCenter objects that are grouped together in a custom group (note: you first have to create these custom groups). Like vCenter, vRealize uses a tree-like structure and supports inheritance.

(We’ll be discussing how to manage vRealize policies in a future post!)

When selecting and managing these policies, there are a few controls (shown in the image) throughout the interface worth mentioning.

The Base Policy shows you a policy preview and contains information on what metrics & properties, alarm definitions, symptom definitions, and custom profiles are inherited from the base policy. It also shows you what configurations are defined in the current policy.

Under Analysis Settings, you are given the option to configure the way objects are analyzed in your environment. By default, you get the settings that are inherited from the base settings. You can change them if you need to.

Workload Automation contains settings that are used in the Workload Balance dashboard. You are able to optimize the load across your clusters. (More on this in a future post as well!)

Collect Metrics and Properties contains the properties and metrics that are collected by vRealize. Metrics are dynamic values that change every 5 minutes, while properties contain static information.

If you want to set certain alerts on symptoms, you can do that under the Alert / Symptom Definitions section.

Custom Profiles let you add custom profiles to a policy. A custom profile is used for capacity planning purposes in vRealize. For example, you can define a specific standard size for a virtual machine or ESXi host.

Applying Policy to Groups will let you link the policy to one or more custom groups that you’ve defined.

Rightsizing recommendations

As a vSphere environment grows in terms of virtual machines, it becomes more and more complicated to keep VM sizes optimized. A virtual machine that required 8 vCPUs a couple of years ago may only need 2 after a big project is finished. On the other hand, a project ramping up slowly may come to require a lot more resources than it was originally deployed with.

vRealize Operations Manager analyzes your environment with all the virtual machines running in it and will make recommendations regarding undersized and oversized virtual machines. You can then choose to take action to reclaim or increase resources either manually or through automatic resize actions.

Note that you also get recommendations on the storage used in your environment, something many companies struggle with, only to realize they’ve been wasting terabytes of space in VMs that no one ever decommissioned.

Compliance Enforcement

This is a really neat feature and one of my favorites. The Configuration and Compliance category caters to administrators who are responsible for managing configuration drift within a virtual infrastructure. Configuration drift is the concept that, over time, settings and configs in a datacenter slowly change through work and modifications from different admins. Ideally, in a virtual environment, settings and configs stay as uniform as possible. In short, vRealize Operations Manager Compliance Enforcement helps with this.

Since most of the issues in a virtual infrastructure are a result of inconsistent configurations, dashboards in this category highlight the inconsistencies at various levels such as Virtual Machines, Hosts, Clusters, and Virtual Networks.


You can view a list of configuration improvements that helps you to avoid problems that are caused by misconfiguration.

Additionally, from a security perspective, vRealize Operations Manager can now analyze vSphere hosts and virtual machines to ensure they are as hardened as possible.

Issues requiring mitigation are reported to administrators. It will inform you if you’re violating any configuration or security best practice and, best of all, can tell you how to remediate the issue.

Workload optimization

If you have worked with vCenter, you have probably heard of, or maybe used and configured, vSphere DRS and SDRS (Distributed Resource Scheduler and its storage counterpart). This is the feature that ensures the resource demand of your virtual machines is satisfied. Even though it looks like DRS is a load-balancing feature, it is not. Its purpose is to ensure that the virtual machines have enough resources to run, nothing else. If one host is running at 70% with 80 VMs while another is at 5% with all its VMs working fine, DRS has no reason to move VMs around.

vRealize Operations Manager has a feature called vROPS Workload Optimization which offers load balancing features. It works with DRS to find the best VM placement in the environment according to a threshold you set.


vRealize Operations Manager offers proper load balancing capabilities for workloads across hosts in clusters.

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines. We continually work hard to give our customers confidence in their VMware backup strategy.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Wrap Up

As you can see, vRealize Operations Manager does more than just monitor vCenter. It gives you more control and insight into your vSphere environment. Also worth mentioning: it can monitor other tools you might have, like Horizon. It’s well worth having it run in your environment! The longer it is able to analyze your environment, the better and smarter it becomes. Having the ability to see an issue alongside a recommendation for fixing it is invaluable. Most SMBs are always looking to streamline alerts and notifications, and this solution fills that void.

What about you? Have you been looking at vRealize Operations Manager? What looks enticing? What types of vRealize Operations Manager content would you like to see more of? Let us know in the comments section below!

The post 3 High-Value Reasons to use vRealize Operations Manager appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vrealize-operations-manager/feed/ 0
Introduction to the VMware vRealize Suite https://www.altaro.com/vmware/introduction-vrealize-suite/ https://www.altaro.com/vmware/introduction-vrealize-suite/#respond Thu, 24 Jan 2019 20:12:08 +0000 https://www.altaro.com/vmware/?p=19226 vRealize is like a consultant in your environment pro-actively monitoring things. This article explains why you should use vRealize and a breakdown of the various apps it includes

The post Introduction to the VMware vRealize Suite appeared first on Altaro DOJO | VMware.

]]>

Before we dive into what this tool is and what it’s useful for, let’s talk about its origins. It was formerly called the vCenter Operations Management Suite. Since the tool does far more than just manage vCenter, I’m guessing VMware decided to do away with that old name and came up with vRealize. So prior to 2014, that’s what it was called. Good old marketing! This cloud management suite is made up of several components, including vRealize Automation, vRealize Operations, vRealize Log Insight, and vRealize Business for Cloud.

Why use vRealize?

My students always ask me why they should deploy this tool. If vRealize just points to vCenter, why not just look in vCenter for the details? This is the most common comment I get. My best analogy is that it’s like a consultant in your environment proactively monitoring things. Yes, you can manage and monitor through vCenter, but the main issue is that you have to know what to look for and manually set alarms; vCenter is more reactive. vRealize, by contrast, is constantly looking for issues. If a virtual machine acts up and isn’t performing like it once was, you’ll know about it. Shoot, it will even tell you what’s wrong and how to go about fixing it!

In the screen capture below, you can see at a high level what Operations Manager is trying to do. Optimizing environments is one of its best features: it looks for savings opportunities, assesses capacity, attempts to reclaim wasted resources, and so on. If you think about how many people ask for VMs, there is bound to be some kind of waste in your environment. VM sprawl is one of the negatives of virtualization: we’ve made it so easy to get our hands on new machines that we forget about the underlying resources they need and consume. With vRealize you can plan and project resources for future projects. Additionally, if you’re having performance issues, troubleshooting is built right in. You can also monitor that VMs are staying within certain compliance levels!


There are three different editions of this platform: Standard Edition, Advanced Edition and Enterprise Edition. Standard Edition includes vRealize Operations, vRealize Log Insight and vRealize Business for Cloud. It’s definitely a solid buy for SMBs. Advanced Edition adds vRealize Automation and an enhanced version of vRealize Business for Cloud. Enterprise adds vRealize Automation for applications and application monitoring.

There are additional add-ons worth mentioning as well: vRealize Code Stream, vRealize Orchestrator, and vRealize Infrastructure Navigator. Code Stream is a pretty useful tool in itself: it allows you to automate the building and testing of developer code in order to release it into production environments, and it tracks code artifacts and versions. You create a pipeline that runs actions to build, deploy, test, and release your software. vRealize Code Stream runs your software through each stage of the pipeline until it is ready to be released to production.

There are some GREAT dashboards in vRealize as well. If you’ve got the product in front of you, you can click through and see some of them. I’ll list some of my favorites below. The dashboards are probably my favorite feature of the product.

Operations Overview

This dashboard provides an overview of your virtual data centers, clusters, hosts, and datastores. It breaks this overview down by showing you the number of running virtual machines vs. powered-off virtual machines. You’ll also notice a “Top 15” list of VMs that might be experiencing performance issues.

Troubleshooting

If you have a problem in your datacenter, this is a great place to go. It will show specific hosts and look at stress, overall health, and recommendations on how to fix them. Like I said above, it’s a consultant!

vSAN

If you use vSAN, these are a must. It will monitor the cluster, hosts, cache disks, capacity disks, VMs running on vSAN, and overall IOPS and latency!

Application

From an application standpoint, as of the 6.6 release you can now have application-specific dashboards. You are able to drill down into an application and monitor its performance, so it’s great for tier-1 applications.

Capacity Overview

This is where you drill down into specific infrastructure resources like CPU capacity, memory capacity, storage capacity, and more. It also shows you whether you have reclaimable capacity.

Heavy Hitter VMs

This one is great. If you have large machines, they’ll show up on this dashboard, where you can look at the CPU demand and memory demand of your largest machines.

Wrap Up

To wrap up, this tool is far more than just a monitoring tool. Forecasting comes in useful if your environment is growing quickly and you need to gauge just how much additional hardware will cost in the future. I love the proactive nature of this tool! Sometimes I’ve heard people say it’s TMI (too much information), but I’m not sure I’d agree with that. As your environment gets bigger, it becomes harder to troubleshoot, and the vRealize Suite helps you manage it effectively.

What about you? Have you used this suite of tools to your advantage? Have you tried it and found it too much? Let us know in the comments section below!


The post Introduction to the VMware vRealize Suite appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/introduction-vrealize-suite/feed/ 0