Enhance Your Own Private Cloud Governance with vRealize Automation

Learn how to control governance requirements using VMware vRealize Automation and its benefits to infrastructure automation.


Organizations worldwide are having to rethink the way they deliver the technology needs of the business. COVID-19 has changed the way companies operate and end-users access resources and carry out business productivity. To meet the demand for increased digital resources, IT teams have had to improve the efficiency of their operations. However, security and governance are still top priorities that businesses cannot neglect.

Most organizations see a hybrid strategy for the foreseeable future, with a mix of public and private cloud. Automated processes help IT meet the increased demand for today's digital resources, and they help IT do so within the defined boundaries of the business and any regulatory requirements.

vRealize Automation – Purpose, Components, Resources, and Licensing

Many organizations today power their private cloud infrastructure with VMware vSphere. It is a powerful, robust platform that provides many capabilities and features to the enterprise. However, many organizations are also using public cloud environments. VMware vRealize Automation (vRA) is a component of the vRealize Cloud Suite that provides a modern infrastructure automation platform that increases productivity and agility, not only in VMware vSphere but also across public cloud infrastructure.

It does this by taking highly manual administrative tasks and enabling IT teams to automate them. The public cloud has shifted expectations for how businesses provision infrastructure. With vRealize Automation, companies can introduce similar automated workflows to provision environments and resources in a public cloud-like experience.

VMware vRealize Automation provides the following key benefits:

    • Easy to set up and simple to use – VMware provides an Easy Installer that offers an easy way to stand up the necessary components of the vRealize Cloud Suite, including vRealize Suite Lifecycle Manager, Workspace ONE, and vRealize Automation
    • Secure and compliant – As businesses provide consistent orchestration using self-service processes with controlled, automated workflows, they can maintain governance across a multi-cloud environment
    • Agility – Businesses can deliver fast and agile service delivery with Infrastructure as Code (IaC) and vRealize Automation
    • Faster time to market – Organizations can deliver software releases much quicker
    • High availability and reliability – It enables consistent automation throughout the entire lifecycle of an application
    • Any app on any cloud – Provision and run apps, virtual machines, containers, and other resources across multi-cloud environments

Components of vRealize Automation

vRealize Automation is made up of three main services – Cloud Assembly, Code Stream, and Service Broker – plus the now-bundled vRealize Orchestrator:

    • Cloud Assembly – It is a multi-cloud provisioning service that offers the ability to create a private cloud. Cloud Assembly is essentially an API layer that the Blueprint Engine uses and supports vRealize Orchestrator workflows and event-broker subscriptions
    • Code Stream – Provides a CI/CD pipeline for DevOps. It automates application and infrastructure delivery with pipeline management. It also provides out-of-the-box integrations for existing tools and processes.
    • Service Broker – It aggregates content from different platforms, including Cloud Assembly, vRealize Orchestrator, and provides the product catalog for self-service delivery. It also provides the policies to help organizations enforce governance.
    • vRealize Orchestrator – Historically a separate product, VMware vRealize Orchestrator is a modern workflow automation platform that simplifies and automates complex data center tasks and is now included when you install vRA.

Overview of vRealize Automation components

Resources

When performing a vRealize Automation installation, the Easy Installer provisions the vRA resources in your VMware vSphere environment. The deployed resources include the following:

    • vRealize Lifecycle Manager
    • vRealize Automation
    • VMware Identity Manager

The Easy Installer is an ISO file downloaded from the VMware portal. It provides a wizard-driven deployment of your vRealize Automation installation.

Using the vRealize Automation Easy Installer to install vRA, vRLCM, and VMware Identity Manager

What are the system requirements for the three VMs provisioned by the Easy Installer during a vRealize Automation installation?

    • vRealize Suite Lifecycle Manager – 2 vCPU, 6 GB RAM, 78 GB total disk
    • VMware Identity Manager – 8 vCPU, 16 GB RAM, 100 GB total disk
    • vRealize Automation (Medium Profile) – 12 vCPU, 42 GB RAM, 246 GB total disk (single-node installation only)
    • vRealize Automation (Extra Large Profile) – 24 vCPU, 96 GB RAM, 246 GB total disk (single-node installation only)

For both vRealize Automation profiles, the maximum network latency is 5 ms between each cluster node, and the maximum storage latency is 20 ms for each disk I/O operation from any vRA node.

Licensing

VMware vRealize Automation licensing is part of the vRealize Suite of products from VMware. VMware vRealize Suite is licensed using Portable License Units (PLUs), which offer the flexibility to manage workloads on-premises and in the cloud. There is no license switching or conversion required between on-premises and cloud infrastructure. One PLU allows you to use vRealize Suite to manage unlimited operating system instances (OSIs) deployed on-premises on one vSphere CPU, or up to 15 OSIs deployed in the public cloud.
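As a quick worked example based on the ratios above: licensing a two-CPU on-premises vSphere host consumes two PLUs regardless of how many VMs it runs, while managing 30 OSIs in a public cloud also consumes two PLUs (15 OSIs per PLU).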

There are no limits on the number of VMs you can manage using vRealize Suite on a vSphere CPU. However, that vSphere CPU must be licensed for vRealize Suite or vCloud Suite. VMware vRealize Automation licensing is found in the Advanced Edition of the VMware vRealize Suite, which supports IT automation and IaaS use cases. Note the solutions found in the various editions of VMware vRealize Suite.

vRealize Automation licensing and components of VMware vRealize Suite editions

You can find more information on how to purchase vRealize Automation on the VMware "How to Buy" resource page.

Integration with cloud services

VMware vRealize Automation provides integration with a wide variety of cloud services and includes integration with most cloud service providers organizations are using today. In VMware vRealize Automation, integrating with various cloud services is as simple as adding a new cloud account. Once a cloud account is added in vRealize Automation, vRA can extend automation features to the various environments.

What cloud accounts are available within vRealize Automation for integration? These include:

    • Amazon Web Services
    • Google Cloud Platform
    • Microsoft Azure
    • NSX-T Manager
    • NSX-V Manager
    • vCenter Server
    • VMware Cloud Director
    • VMware Cloud Foundation
    • VMware Cloud on AWS

Adding a new cloud account in vRA

You may have seen references to vRealize Automation Cloud. What is this? VMware vRealize Automation Cloud was formerly known as VMware Cloud Automation Services. With vRealize Automation Cloud, customers get a fully managed vRealize Automation solution hosted in the VMware Cloud as a SaaS offering. The vRealize Automation infrastructure is fully managed, and you simply consume the automation services provided by the product without worrying about the underlying infrastructure.

vRO vs. vRA – What’s the difference?

As mentioned above, when you install a current installation of vRealize Automation, it includes vRealize Orchestrator (vRO) as part of the solution. Both vRA and vRO provide automation benefits to your environments. VMware vRealize Automation provides a self-service experience and the capability to build out blueprints for infrastructure resources. In addition, it provides the tools for IT admins to define their infrastructure and provide the self-service and governance needed for end-users and consumers.

VMware vRealize Orchestrator provides a workflow engine that complements the features of vRA with more powerful automation capabilities. The focus of vRO is workflows, which can drive solutions like vRealize Operations Manager via their APIs. In addition, it can perform standalone automation tasks externally to vRA.

Most common use cases

Many use cases are satisfied by using vRealize Automation. However, consider the following use cases that vRealize can accomplish:

    • Create a self-service portal where users are delegated the workflows needed to provision infrastructure
    • Offer other services beyond infrastructure, for example—PaaS, XaaS
    • The requirement to integrate with CMDB or ITSM tools to track activities when creating resources such as new virtual machines
    • Integration with an IPAM system for obtaining network addressing for a virtual machine
    • Advanced governance capabilities
    • Deployment of resources across hybrid cloud environments

How does it compare to other solutions like Terraform or Ansible?

Most IT admins will want to know and understand how vRealize Automation compares to other automation tools they have heard about or used. Two of these tools that come to mind are Terraform and Ansible. What are these?

    • Terraform

Terraform is a popular Infrastructure as Code (IaC) solution. It allows writing declarative Infrastructure as Code in the Hashicorp Configuration Language (HCL) that can run in DevOps pipelines. Like vRealize Automation, Terraform enables organizations to interact with and build infrastructure across clouds using automation.

Terraform is freely available for download at no cost and is a simple command-line tool. VMware vRealize Automation is a GUI tool that provides many of the same features as Terraform and arguably much better integration with vSphere environments. However, it is a paid product. Out of the box, vRealize Automation provides more robust tooling for configuring a self-service environment with governance requirements.

    • Ansible

Ansible is another prevalent automation framework. However, Ansible differs in purpose from Terraform and vanilla vRealize Automation. Ansible is a configuration management framework that is focused more on remediating configuration drift than on provisioning infrastructure. It can provision infrastructure, but this is not its strong suit. Conversely, Terraform can perform some post-provisioning configuration management tasks, but this is not its strength either.

VMware recently introduced vRealize Automation SaltStack Config, a modern configuration management solution that is a separate download integrated into vRealize Automation. It gives organizations the tools to extend the infrastructure automation capabilities of vRealize Automation with the configuration management features of SaltStack Config.

Who is likely to leverage which?

As mentioned, Terraform, Ansible, and many other automation platforms are popular in the enterprise today. Terraform, Ansible, and vRealize Automation can all successfully automate your environment. However, each has its strengths and weaknesses. So, what makes the difference between choosing Terraform, Ansible, or vRealize Automation?

Both Terraform and Ansible are free downloads that are readily available to begin automating from the command line. However, to have a GUI interface and other governance features in a supported way with Terraform and Ansible, you must upgrade to the paid versions of the tools with Terraform Enterprise and Ansible Tower. VMware vRealize is a paid product only. There is no free version you can download, aside from a time-limited trial version.

Organizations already heavily invested in VMware technologies will benefit from the seamless integration between vRealize Automation and VMware technologies. However, as mentioned, it also has strong capabilities in cloud environments. Therefore, many who are VMware shops will likely see benefits to investing in vRealize Automation.

Terraform and Ansible will likely draw many from VMware environments due to their open-source nature, easy learning curves, and robust capabilities. In addition, both have modules for VMware vSphere. However, they lack the seamless integration and strong governance capabilities provided by vRealize Automation. To get role-based access control and governance workflows comparable to vRealize Automation, organizations will need to invest in the paid versions of Terraform and Ansible.

Again, it is common to see organizations using a combination of tools. It is unlikely that one single tool will fit absolutely every use case of everyone in a single industry or business sector. SMBs, large IT departments, and cloud providers will have their favorite tools for automation and configuration management. VMware vRealize Automation again will appeal to SMBs, IT departments, and cloud providers who are invested in VMware technologies and familiar with the VMware ecosystem. The additional cloud capabilities of vRA are icing on the cake.

Organizations may find themselves using a combination of vRA and other tools. The great thing about vRA is it supports PowerShell, Terraform, Salt, and other configuration languages. So, vRA can be the engine organizations are using that easily provides role-based access and governance capabilities and the ability to incorporate other scripting and configuration languages.

What is IT governance, and why is it important?

IT governance has been described as the formal framework that allows organizations to ensure IT processes and procedures align with the business's overall objectives and other requirements. It helps ensure that IT activities:

    • Support business strategies and goals
    • Meet legal and regulatory obligations
    • Deliver reliable and uniform processes
    • Comply with corporate governance requirements
    • Mitigate risks associated with security concerns

A large part of IT governance is making decisions in a repeatable, structured manner to support investment in and use of IT to achieve an organization’s goals. It requires a framework or structure that defines roles and responsibilities, processes, policies, and criteria that help business stakeholders make sound decisions.

How does vRealize Automation allow businesses to enhance cloud governance?

As mentioned, organizations must make repeatable, structured decisions and have the processes and tools to support these requirements. VMware vRealize Automation provides the means to produce an automated framework to overcome the challenges of IT governance in several ways.

    • Self-service provisioning with consistent governance and compliance – vRealize Automation provides fine-grained governance capabilities that allow admins to apply policies and approval workflows to provide the security and guardrails needed for consistent provisioning. In addition, it provides users with a content catalogue that includes blueprints, templates, and images from multiple clouds and platforms.
    • It enables multi-cloud automation with governance – Extending the on-premises capabilities of vRealize Automation, it can provide the same benefits to multi-cloud environments with public clouds, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
    • Kubernetes automation – Using vRealize Automation, companies can implement self-service automation and governance for Kubernetes clusters and application deployment. With vRA, organizations can manage and govern Kubernetes clusters and namespaces and import their existing clusters while doing this within the governance boundaries defined by the business.
    • DevOps for infrastructure – Most businesses are adopting agile development methodologies. VMware vRA allows businesses to support on-premises developers with a range of sandbox development environments and CI/CD pipeline process while enforcing governance.
    • Multi-cloud governance – Many may think of vRealize Automation as a VMware vSphere-only product. However, VMware has evolved vRealize Automation into a robust multi-cloud tool that provides tighter integrations with multi-cloud provisioning and governance across multiple public cloud environments, including Amazon AWS, Google Cloud, Microsoft Azure, and VMware Cloud.
    • Personalized policies – In most environments, each user or consumer requires a personalized service for specific business use cases. VMware vRealize Automation provides this ability using policies. Fine-grained policies work with personalized services offered in vRA. A developer may need a development environment spun up in a public cloud such as Amazon AWS. Another consumer may need a similar service, but personalized so that it gets deployed into the private cloud with the appropriate approvals in place. All of this is possible using vRealize Automation.
    • Network automation – One of the most difficult types of infrastructure to automate is the network. However, with software-defined network technologies like VMware NSX, IT operations can deliver agile network operations via code. Using vRA, organizations can provide governance around network automation. Instead of governance issues being a blocker to operations, these are simply handled behind the scenes with vRA.
    • Security framework – One of the most common issues with security is inconsistent operations, configuration, and ensuring security measures are implemented consistently across the infrastructure landscape. With vRA, businesses can ensure appropriate security guardrails are baked into the deployment of infrastructure with code as vRA handles this automatically.

Create a quick workflow with governance using vRealize Automation

Once the vRealize Automation installation is complete and the product is up and running, the Quickstart provides an easy way to create the first workflow you can assign to an end-user or other consumer.

Select the account type to add during the vRA Quickstart wizard

Select the content to enable during the Quickstart. It includes VM template images.

Adding VM templates and specifying the settings for the first cloud template

Skipping ahead to step 5, Policies, you will see the ability to configure governance policies for self-service applications. Note how easily you can define an approval workflow, a lease time for the resources, and a naming convention for the newly created VM resources.

Defining governance settings during the vRA Quickstart

Accept the settings configured on the Summary screen and run the Quickstart.

Running the vRA Quickstart

The power of vRA includes assigning Active Directory users to the projects that are defined for deploying infrastructure. In addition, it allows you to create a self-service workflow that includes the governance settings defined.

Adding an Active Directory user to a vRA project

Note the various constraints that you can define for a specific vRA project assigned to a user. These include constraints related to:

    • Network
    • Storage
    • Extensibility

You can also define resource tags, custom properties, and custom naming.

Configuring constraints for a vRA project

The governance settings and configuration possibilities with vRA are robust and allow organizations to control and constrain how resources are provisioned in the environment. In addition, they help align the workflows with the governance requirements of the business.

Final Thoughts

VMware vRealize Automation is a powerful solution that provides the tools needed to meet and exceed the governance requirements defined by the business. Governance is an essential topic in organizations today, with growing demands on companies to meet regulatory, security, and other needs.

As businesses continue to implement and use hybrid cloud solutions spanning on-premises and cloud environments, they need solutions that are well-versed in both on-premises technologies and cloud services. VMware vRealize Automation has matured into a robust solution that is equally capable in cloud environments as it is in VMware vSphere. Organizations can use vRealize Automation to empower teams to automate infrastructure deployment in a self-service way. In addition, it provides built-in functionality to enable role-based access and governance constraints to ensure infrastructure is deployed appropriately.

Is vRealize Automation worth the investment for organizations today? It comes down to the standard answer of “it depends” for most organizations. Some businesses may already have preferred tooling for infrastructure automation and may have another means to enforce governance constraints.

Even with that being the case, companies do well to investigate the features and functionality provided by vRealize Automation. It provides one of the best out-of-the-box workflows and role-based access experiences you will find on the market. VMware vRealize Automation also allows integrating the tools you already use, like Terraform, and extending these with the robust integrations made possible by vRA.

VMware vRealize Automation allows organizations to easily stand up a self-service portal and cloud-like service catalog to provide the same rich public cloud experience on-premises. With the rich cloud integrations found out of the box, businesses can easily connect to the cloud services they are most likely already using today, including AWS, Azure, GCP, and others.

Learn more about vRealize Automation and how it can extend your automation and governance needs at the official vRealize and vCloud Suite page.

Octant: The VMware Open Source Dashboard for Kubernetes

Discover how the Octant Kubernetes UI dashboard helps you increase visibility into your Kubernetes clusters without using up cluster resources.


If you follow the trends of the IT industry, you will know that Kubernetes had been creeping along in the background for several years after Google released it as an open-source project, but it has really boomed in the last couple of years, with everyone talking about it. One particularity of Kubernetes is that it is a bit complicated and the learning curve is steep. However, by leveraging Octant, Kubernetes becomes a little more user-friendly thanks to a clear Kubernetes UI. In this article, you'll get an overview of Octant, its benefits, and how to get started with the tool. Let's jump right in!

An Overview of Using Octant with Kubernetes

Octant is an open-source project that was originally developed by a company named Heptio, which was acquired by VMware in late 2018. Following this acquisition, VMware has continued to fund work on the projects developed by Heptio and is slowly integrating them into the Tanzu portfolio. Although the project is still simply called "Octant," one can reasonably refer to it as VMware Octant since VMware maintains it.

The traditional way of managing Kubernetes clusters is through the kubectl command-line tool. kubectl is comparable to PowerCLI in the sense that your human-readable commands are translated into API calls that the Kubernetes api-server understands. VMware Octant does the same thing by interacting with the api-server, except it presents the output in a graphical interface.
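If you want to see those underlying API calls for yourself, a quick way (assuming kubectl is already configured on your workstation, and using the dev-01 namespace that appears later in this article) is to raise kubectl's log verbosity:

# Verbosity level 6 logs the HTTP requests kubectl sends to the api-server
kubectl get pods -n dev-01 -v=6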

Octant is a Dashboard Kubernetes UI software that you typically run on the workstation on which you would normally use kubectl to manage your Kubernetes and Tanzu clusters. That way you can visualize the various resources in place and execute a few actions.

Benefits of Octant:

    • The Octant Kubernetes UI runs locally on the user workstation: meaning it won't use up expensive Kubernetes node resources.
    • Octant is based on the kubeconfig file: meaning you don't need to worry about authentication or permissions, as long as your Kubernetes users have kubeconfig files with the correct role. You can even give read-only access to those who want a first look at Kubernetes (see the example after this list).
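For instance, read-only access can be granted with Kubernetes' built-in view ClusterRole before handing out a kubeconfig. A minimal sketch, assuming a hypothetical user jane and the dev-01 namespace used later in this article:

# Bind the built-in read-only "view" ClusterRole to the user in one namespace
kubectl create rolebinding jane-view --clusterrole=view --user=jane --namespace=dev-01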

Getting started with Octant Kubernetes

Getting the Octant Kubernetes UI up and running takes about two minutes (unless you have a really slow internet connection), and that is what's great about it!

Download Octant from the official website

    • The version you see will probably be different since these things change all the time. In my case, we are currently on v0.24.0. At the bottom of the page, you will find a list of assets to choose from.

On Windows, for instance, you can download the installer (Octant.Setup.0.24.0.exe) which will display the dashboard in a local (lightweight) app, or you can download the portable version (octant_0.24.0_Windows-64bit.zip) in which case you access the Octant Kubernetes UI locally in your browser (http://127.0.0.1:7777).

In my case I selected the installer so it’s easier to launch on a daily basis when I want to have a look at my cluster.

The rest of it isn’t worth talking about really… Just launch the .exe file and off you go.

If you are greeted with the “Add a new kubeconfig” page it means Octant cannot find your configuration file. By default, it will look in “%USERPROFILE%\.kube\config”. In which case you can either paste one in there or fix it in your operating system but this is off-topic.

Octant is based on Kubernetes kubeconfig files

Navigating the Interface with Octant Kubernetes

The Octant Kubernetes UI is intuitive and links related elements together, much like vCenter and the vSphere Client do when you look at the datastores connected to a host, then the VMs on a datastore, the portgroups on a VM, and so on.

The Octant Kubernetes UI is clear and intuitive

    1. List of applications in the cluster (pods and deployments). Note that you can click on each of them to display their relationships with other objects.

Note that those depend on the namespace you selected in the top right corner (7).

Applications appear in the Applications view of the UI

    2. Namespaced resources. Meaning all the resources that enter the scope of a namespace such as deployments, pods, pvc…

Note that you can get a list of all namespaced objects in command line with:

kubectl api-resources --namespaced=true

Below is an example with the list of the persistent volume claims in my dev-01 namespace.

Namespace overview objects will change according to the selected namespace

    3. Non-namespaced resources such as pv, node…

Note that you can get a list of all non-namespaced objects in command line with:

kubectl api-resources --namespaced=false

Below is a list of all my persistent volumes in the cluster. Those will appear in the list regardless of the namespace.

Cluster overview objects will not change regardless of the selected namespace

    4. Plugins to expand the capabilities of the Octant Kubernetes UI; more on this in the documentation.

Plugins let you expand the capabilities of Octant

    5. List of resource types associated with the far-left panel.

Here is where you select which resources you want to display in the main center panel. For instance, I can show all the namespaces in the cluster. Note how the labels are displayed in the list view.

Most resources support labels that help you filter and select them

    6. Switch between the contexts defined in the kubeconfig file.

You can easily switch between contexts in the top right corner. You can for instance connect to a Kubernetes cluster in Azure or a Tanzu cluster.

You can easily switch between contexts in the kubeconfig file

    7. Switch between namespaces.

You will most likely have multiple namespaces in your Kubernetes cluster. You can quickly switch between them in the Octant Kubernetes UI as well.

Namespaces can be selected to display the associated resources

    8. Apply a YAML manifest like you would with "kubectl apply -f my_manifest.yaml".

You can even apply YAML manifests directly from the Octant Kubernetes UI if that is something you need at a particular moment. You have the possibility to either paste your YAML content in there or browse to a manifest file.

Push changes to the environment by applying a YAML manifest

    9. Various settings related to the Octant tool such as light/dark theme, page size and a few other things like that.

Not the most interesting pane, but you may like the option to switch to dark mode, or the reminder of the kubeconfig path if you forgot it.

A dark theme is available for you night owls

    10. Note also that when you close Octant in Windows it won't actually close the app as it will stay in the Windows System Tray (bottom right of the taskbar).

You get access to the context switching utility, the Octant tool logs…

Closing Octant will reduce it to the Windows system tray

Octant: Local and Real-Time Dashboard for Kubernetes Workloads

A really cool thing about the Octant Kubernetes UI is that it is real-time so you can see the changes taking place and they are colour-coded, making them easy to identify.

For instance, in the screenshot below, I nuked a pod from kubectl. And because it is part of a deployment, Kubernetes automatically creates a new one and you can see the operation in the dashboard.

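If you want to reproduce this from the command line, a minimal sketch looks like the following (the pod name below is hypothetical, and the namespace is the dev-01 one used earlier):

# Delete one pod from the deployment, then watch its ReplicaSet recreate it
kubectl delete pod nginx-deployment-7c5ddbdf54-abcde -n dev-01
kubectl get pods -n dev-01 --watch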

Another example is if you want to scale a deployment by changing the number of replicas. Head over to the deployment and click EDIT. Notice the single pod running in there.


Then type in the number of replicas you want this deployment to run and hit SUBMIT.

You will see in real-time a bunch of pods being created to comply with the updated replicaSet.


You can obviously go the opposite direction and scale the deployment down. In the screenshot below, I went from 5 replicas back down to 1.

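The same scaling operations can be done with kubectl if you prefer. A hedged example, assuming a deployment simply named nginx-deployment in the dev-01 namespace:

# Scale up to 5 replicas, then back down to 1
kubectl scale deployment nginx-deployment -n dev-01 --replicas=5
kubectl scale deployment nginx-deployment -n dev-01 --replicas=1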

Troubleshooting with Octant Kubernetes

When using Octant, Kubernetes is presented in a clear graphical interface in which you can quickly identify issues thanks to colour coding.

For instance, you can see in the screenshot below that one of my deployments has issues.


Let’s drill into the problem to try and figure out what’s going on. I am going to click on the deployments.


I now get a view of the pod(s), among other things, in which I see that one of them is yellow. Let's click on it. I can already see "ImagePullBackOff" in the status field, so you may already know where I am going with this.


Oops, it looks like I misspelt nginx as mginx in my deployment manifest, so it can't be found in the repositories. The Octant Kubernetes UI lets me set it back to something that actually exists and see if that fixes the issue.

I am going back to the deployment and into the YAML tab. I will then fix the mistake and click Update to apply the manifest.


The deployment will be fixed before you know it. It will do a rolling update since we changed only the image.
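The kubectl equivalent of that image fix would be something like the following (the deployment and container names are assumptions for the example):

# Point the container back at a valid image; Kubernetes performs a rolling update
kubectl set image deployment/nginx-deployment nginx=nginx:latest -n dev-01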


You can get your rollout history with the kubectl command line. As far as I know, you can't visualize it in the Octant Kubernetes UI yet.
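For reference, a hedged example of checking (and, if needed, reverting) the rollout from the command line, again assuming a deployment named nginx-deployment:

# List the revisions of the deployment, then optionally roll back to the previous one
kubectl rollout history deployment/nginx-deployment -n dev-01
kubectl rollout undo deployment/nginx-deployment -n dev-01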


Note that editing the YAML manifest in the Octant tool is fine for troubleshooting and testing but make sure that it doesn’t get in the way of your configuration tracking tool (Gitlab…).

My Thoughts on Octant Kubernetes

The Octant Kubernetes UI will not fit the bill for everyone, but it is a very nice addition to have in your tool belt if you are exposed to Kubernetes. It is also great for offering visibility into the environment to users who aren't knowledgeable about Kubernetes. And because it is linked to the kubeconfig file, all you need to do to give someone access is to create a profile for them and apply the correct permissions with role-based access control (RBAC).

Getting started with vRealize Orchestrator 8 (vRO)

Automate your IT infrastructure for free by getting started with the VMware vRealize Orchestrator 8 virtual appliance and its powerful workflows.
Getting started with vRealize Orchestrator 8 (vRO) https://www.altaro.com/vmware/vrealize-orchestrator-8-vro/ https://www.altaro.com/vmware/vrealize-orchestrator-8-vro/#respond Fri, 20 Aug 2021 07:23:12 +0000 https://www.altaro.com/vmware/?p=22824 Automate your IT infrastructure for free by getting started with the VMware vRealize Orchestrator 8 virtual appliance and its powerful workflows.


Task automation has always been an important discussion topic in IT departments, as it offers faster-than-human delivery, ensures consistency, saves time, and reduces the risk of error. vRealize Orchestrator is one of the many workflow engine platforms out there. Contrary to what many people think, vRealize Orchestrator is not limited to VMware integration, as it can automate almost anything.

Check out our blog for more infrastructure automation topics.

What is vRealize Orchestrator

vRealize Orchestrator is an automation platform that comes in the form of a virtual appliance. You can automate various infrastructure tasks using workflows that can be combined in a number of ways. While it is perfectly integrated with VMware vSphere and offers extensive content for it, you can automate almost any IT process through partner-provided workflows for tasks outside of the VMware world, such as Active Directory, SQL Server, REST…

Now you may wonder what is the difference between vRealize Orchestrator and vRealize Automation?

    • vRealize Orchestrator: vRO is a workflow engine that offers runbook automation with the ability to automate almost any IT task and not only VMware centric ones as some might think.
    • vRealize Automation: On the other hand, vRA is an IT service delivery platform with governance and control features. vRealize Automation actually contains an embedded version of vRealize Orchestrator for workflow execution.

vRealize Orchestrator can be used as a standalone but it is embedded within vRealize Automation

vRealize Orchestrator 8

What’s new

You may have come across our older blog on vRealize Orchestrator 7.3, the previous major version of the product. As of the time of this writing, we are currently on vRealize Orchestrator 8.4.2, which brought many changes and improvements.


Among other new features and fixes you will find the following:

    • New Kubernetes-based virtual appliance architecture; the components actually run in containers.
    • DevOps-friendly approach with Git integration (vRealize Automation license required).
    • Web-based Orchestrator Client only; the Java-based client is no longer available.
    • Support for PowerShell, Node.js, Python (vRealize Automation license required).
    • New viewer role with read-only permissions in the Orchestrator Client (vRealize Automation license required).
    • Enhanced search and filtering capabilities.

Plug-ins

The following plugins are embedded in the vRealize Orchestrator appliance.

    • vCenter Server Plug-In 6.5.0
    • Mail Plug-In 7.0.1
    • SQL Plug-In 1.1.4
    • SSH Plug-In 7.1.1
    • SOAP Plug-In 2.0.0
    • HTTP-REST Plug-In 2.3.4
    • Plug-In for Microsoft Active Directory 3.0.9
    • AMQP Plug-In 1.0.4
    • SNMP Plug-In 1.0.3
    • PowerShell Plug-In 1.0.13
    • Multi-Node Plug-In 8.0.0
    • Dynamic Types 1.3.3
    • vCloud Suite API (vAPI) Plug-In 7.5.0

As you can tell you can already automate a fair bunch of things with these. However, if it is not enough for you and you need additional product integration, you can refer to the VMware Marketplace in which you will find a whole lot of VMware and third-party provided plugins to extend vRealize Orchestrator’s capabilities.

Plug-ins extend the automation capabilities of vRealize Orchestrator

Migration from vRealize Orchestrator 7

Note that upgrading in place from vRealize Orchestrator 7.x to vRealize Orchestrator 8.x is not supported; instead, it has to be a migration. You can perform such a migration if a number of conditions are met:

    • Running standalone vRealize Orchestrator 7.3 or above.
    • Standalone version (not vRA embedded).
    • Source deployment running in non-clustered mode.
    • Using vSphere authentication.

Step 0 – Check compatibility and requirements

Before starting any project, it is always a good idea to ensure that all the components that will be interacting with each other are cross-compatible to avoid unpleasant surprises and fruitless troubleshooting sessions.

You can start by checking the VMware interoperability matrix to ensure that the latest version of vRealize Orchestrator is compatible with your version of vSphere, vCenter, vRA or whatever product you are running. In my case, I am running the latest versions which will, unsurprisingly, happily work together.

Check the VMware interoperability matrix to ensure product compatibility

Step 1 – Deployment of the appliance

The deployment of the vRealize Orchestrator appliance is straightforward. If the deployment fails against the vCenter server, you can deploy it using the VMware OVF Tool or by connecting directly to the vSphere client of an ESXi host. We will demonstrate the latter as it is the easiest.

Note that even though you are connected to a vSphere host, the VM will be automatically registered in the vCenter inventory.

    1. First things first, we need to download the appliance itself from my.vmware.com. Select the latest version, 8.4.2 in my case, and download the OVA. Make sure you grab the virtual appliance, not the update repository.

The deployment of the vRealize Orchestrator appliance

    2. Then, open the vSphere client on one of your ESXi hosts and click on Create/Register VM.

Create Register VM

    3. Click Deploy a VM […] and Next.

Deploy a VM

    4. Choose a name for the VM and browse to the OVA file you downloaded.

Choose a name for the VM and browse to the OVA file you downloaded

    5. Select a datastore with enough space. The requirements state 200 GB, but you can get away with 40 GB to start with if you use thin-provisioned disks.

Select a datastore with enough space

    6. Accept the EULA that you'll have read, obviously.

Accept the EULA

    7. Select a portgroup on your management network that has access to vCenter server at the very least. Choose Thin or Thick according to your storage policies and leave Power on automatically checked.

Select a portgroup on your management network

    8. In the last pane you need to configure the appliance according to your organization’s network.

Make sure you put the FQDN in the hostname sections as this is what you will use to connect to the server.

Leave the Kubernetes sections as is.

I suggest you double-check everything for typos if you don’t want to re-deploy the appliance.

configure the appliance according to your organization’s network

    9. Finally, review the settings and hit Finish to start the deployment.

review the settings and hit Finish to start the deployment

    10. Note that by default the root password will expire in 365 days. If you want to set it to never expire, connect to the appliance via SSH as root with the password you configured and run the following command.
passwd -x 99999 root

Set the root password to never expire in SSH

Step 2 – Initial configuration in the Control Center

Now that the appliance is deployed, we need to perform its initial configuration in the Control Center. Note that it will take a few minutes before the web interface is available, so go make yourself a coffee and press F5.

The Control Center is comparable to the VAMI on vCenter Appliances. It is where you configure the appliance itself such as authentication, clustering, certificates…

    1. Browse to this URL and make sure you use the same name (FQDN recommended) you used in the hostname section during the vApp deployment. The URL still shows vco as the product used to be called “vCenter Orchestrator” in early versions. Then click on START THE CONTROL CENTER.

https://<fqdn-vro>/vco

START THE CONTROL CENTER

    2. Click on Configure Authentication Provider. This is where we will connect our vCenter to use vCenter Single Sign-On authentication.

Configure Authentication Provider

    3. Select vSphere as the Authentication Mode, type your vCenter server’s FQDN in the Host Address section and click Connect.

Select vSphere as the Authentication Mode

    4. Accept the certificate thumbprint if the vCenter server’s certificate’s root CA is not trusted by the vRO appliance and save the change.

Accept the certificate thumbprint

    5. You are then asked to type your credentials to connect to vCenter.

type your credentials to connect to vCenter

    6. Finally, you have to set a user group that will have administrator permissions on vRealize Orchestrator. In this case I set it to an AD group named “vRO Administrators” but you can use the SSO domain if you prefer.

set a user group

    7. Once you finish the Authentication Provider wizard, the server will restart automatically after 2 minutes. Go have yourself another coffee and log back into the UI. Go back to the Control Center and click on Validate Configuration.

Validate Configuration

    8. Make sure that everything is green. If some are still red, wait a couple of minutes and hit refresh.

Every time you make a change to the configuration, you need to come back here to check that it has been applied correctly. It usually takes a few minutes or so.

Make sure that everything is green

At this point we are done with the control center. We don’t need to touch any of the other settings to get started with it.

Step 3 – Add a vCenter instance in the Orchestrator Client

Now that the initial configuration of the appliance is done, we can start looking at the vRealize Orchestrator client. This is where everything interesting happens. If the control center is comparable to the VAMI in vCenter, the Orchestrator Client is the equivalent of the vSphere Client.

    1. To get access to the Orchestrator client, head back to the same URL as before and click START THE ORCHESTRATOR CLIENT.

https://<fqdn-vro>/vco

    2. Type the credentials of a user that is a member of the vRO Admin group. You need to use the “user@domain” syntax and not “domain\user” which does not seem to work.

Use the ‘user@domain’ syntax in order for the logon to work

      Note: if the vRealize Orchestrator Client is empty like in the screenshot below, it means you connected using a user that isn’t a member of the admin group.

Interface displayed to a user that is not a member of the admin group

      Below is what the vRealize Orchestrator Client should look like. The left-hand navigation pane gives you access to the workflow library, the tasks in the Activity pane, all your plugins, as well as several administration options.

Typical interface displayed to a vRO admin user

    3. Under Library > Workflows, type add a vcenter and it should narrow down the result to the Add a vCenter Server instance workflow. Click Run.

Add a vCenter Server instance workflow

    4. In the Set the vCenter Server instance properties tab, type in the FQDN of the vCenter server to connect in the first field. You can then check to ignore the certificate warning and click the Set the connection properties tab. Don’t hit RUN yet.

Set the vCenter Server instance

    5. Type in the credentials of the user that will connect to vCenter to run the workflows with the domain it uses (AD, vSphere SSO…) and click RUN. I used the administrator account in this screenshot.

Type in the credentials of the user that will connect to vCenter

    6. The workflow engine will display a diagram of the workflow’s execution. There is a bit of red in there due to response times but it shows Completed in green in the top left corner.

The workflow engine will display a diagram of the workflow’s execution

    7. You can verify that the connection completed successfully by browsing to Administration > Inventory and expanding the vCenter Plug-in view. It should show your vCenter instance along with the inventory objects managed by it.

Administration > Inventory and expand the vCenter Plug-in view

Step 4 – Execute Workflows

Now that vRealize Orchestrator is connected to vCenter server, you can start executing embedded workflows.

In the following example, I will use the simple use case of taking a VM snapshot.

    1. Type a string that is relevant to your action in the search field. In my case it is “snapshot”.

As you can tell, there are a bunch of workflows available for snapshot tasks. The first one, “Create a snapshot”, is the workflow I am after.

vRealize Create a snapshot

    2. The next step will depend on the workflow you choose. This one is as simple as it gets but your custom workflows can be as complicated as you want them to be. The field is configured to accept VM objects, click on it to open the wizard.

vRealize choose the VM snapshot

    3. Expand the tree view and check the virtual machine(s) you want to snapshot and click Select.

Expand the tree view and check the virtual machine(s) you want to snapshot and click Select

    4. Fill in the mandatory field(s) at least and hit Run.

Fill in the mandatory field(s) at least and hit Run

    5. Wait for the task to complete. It should show up green at the top.

Wait for the task to complete. It should show up green at the top

    6. You will find the same task in the vSphere Client, run under the user you were logged in as.

You will find the same task in the vSphere client that ran under the user you were logged in as
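If you prefer to double-check from the command line, here is a quick hedged PowerCLI example (the VM name is a placeholder; it assumes the VMware.PowerCLI module and an existing Connect-VIServer session):

# List the snapshots created on the VM by the workflow
Get-Snapshot -VM "My-Test-VM" | Select-Object VM, Name, Created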

Note that you can find the history of the workflows that were run in the vRealize Orchestrator Client under Activity > Workflow Runs.

vRealize Orchestrator Client under Activity > Workflow Runs

Note on vCenter plug-in integration

In older versions of Orchestrator, you could integrate it into the vSphere Client as a plug-in, but this feature is not supported yet with the HTML5-based GUI. Although the “Register vCenter Orchestrator as a vCenter Server extension” workflow is there and you can run it, it won’t bring vRealize Orchestrator into your vSphere web client.

If you feel you really need it, work has been done on a Beta version which is obviously not supported yet so we won’t describe it here.

Wrap up

In these days of modern apps, cloud, and automation, it is becoming increasingly relevant to invest in a workflow engine platform to start automating more infrastructure tasks and get closer to an Infrastructure as a Service (IaaS) approach to SDDC management.

While vRealize Automation has an incredible amount of features and possibilities, vRealize Orchestrator offers a nice compromise with an easy-to-use Orchestrator Client and compatibility for a wide array of solutions at no cost.

How to build a GUI tool for VMware PowerCLI

The GUI-based script presented in this article shows you how to get started with the most common use case of deploying a virtual machine. Read more to learn about automating with PowerCLI.


If you are a vSphere administrator, you and your team most likely have a bunch of manual tasks that could be significantly sped up through automation in some shape or form. When it comes to it, VMware PowerCLI and PowerShell are a great option, and they make it easy to add a GUI, which is beneficial for various reasons:

    • You may not feel comfortable handing the keys to the shell to inexperienced scripters who could, involuntarily, make mistakes in production. “With great power comes great responsibility!”
    • You don’t want to fire up the shell and go through the usual Connect, change dir etc… every time you want to use this script.

A great way for offering quick and easy access to a custom script is through the use of a Graphical User Interface (GUI). In this example, we will use PowerShell forms to create a very simple GUI tool that will deploy a virtual machine based on a Template.

Download the tool from our Github repository.

Altaro Gui Tool

The tool we are going to build is purposefully simple and shouldn’t be used as-is in production as it has almost no checks and has no OS customization capabilities.

Windows Forms

In order to build a graphical interface, we are going to use Windows Forms. They are based on .NET and have been around since the dawn of time; they are what powers most of the windows you interact with. There are different ways to create a GUI:

    • Online tools such as PoshGUI let you design your GUI and generate the PowerShell code for you. It is a time-saver; however, it has required a paid monthly subscription since March 2021, which will set you back around $7 per month.
    • Visual Studio with PowerShell Pro Tools. Also a paid option, starting at $10 per month. It is the most relevant option if you deal with advanced and complicated UIs.
    • Manually within PowerShell ISE. Slightly more labor-intensive option but sufficient for simple UIs.

In this blog, we are building the GUI manually as there isn’t much to it. It will leverage a few types of GUI components such as labels, textboxes, comboboxes, etc… However, there are plenty of other item types that you can use to enhance your graphical interface. Refer to the Microsoft documentation for more information.

Launching the script quickly

We can have the script behave like installed software by launching it with a double-click on an icon. In this case, we will create a short .bat file in the same folder as the .ps1 file and add the following line.

powershell.exe -WindowStyle Hidden -file .\%~n0.ps1

Whenever you launch this file, it will only open the GUI. You can then place it on one of your organization’s network shares and instruct your colleagues to always use this copy so they always get the latest version should you make changes to it.

Step 1 – Make sure your script works

You will not build a GUI for the sake of it. Before starting with Windows Forms, you need to ensure that the script you want to build a GUI for has a use for it and is 100% functional. Building a GUI around a bad script is not advised. At the end of the day, a GUI is nothing but a wrapper for cmdlets and parameters.

For the sake of this demonstration, we are deploying a virtual machine from a template with only a few parameters. As you can see we don’t even customize it.

    • VM name
    • Number of CPU(s)
    • Template
    • Datastore
    • Whether the VM should be powered on or not at the end

In terms of PowerCLI commands, this can be achieved with the following:

Connect-VIServer -Server core-vc.lab.priv

$NewVM = New-VM -Name "My-Test-VM" -VMHost "r430.lab.priv" -Datastore "RAID5-15K" -Template "Ubuntu20LTS"

Set-VM -VM $NewVM -NumCpu 2

Start-VM -VM $NewVM

There are only 4 lines of code, which could easily be parameterized and run interactively. Meaning, the rest of the code in the script we are writing is dedicated to the GUI.

Obviously, in a real-life scenario, there would be a lot more parameters, checks and error handling.
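To give an idea of what that parameterization might look like, here is a minimal, hedged sketch that wraps the four cmdlets above in a function (the function name and parameter set are just examples, not the article's final tool):

function New-LabVM {
    param(
        [Parameter(Mandatory)][string]$Name,
        [int]$NumCpu = 2,
        [string]$Template,
        [string]$Datastore,
        [string]$VMHost,
        [switch]$PowerOn
    )
    # Same logic as the four lines above, driven by parameters
    $vm = New-VM -Name $Name -VMHost $VMHost -Datastore $Datastore -Template $Template
    Set-VM -VM $vm -NumCpu $NumCpu -Confirm:$false | Out-Null
    if ($PowerOn) { Start-VM -VM $vm }
}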

Step 2 – Building the GUI

In this section, we are breaking down the creation of the GUI component by component.

Note that I will only describe each type of component once along with properties of interest. You can easily figure out the rest of the script by reading through it and changing values to see what happens as it is rather simple.

    1. Create the main form. Amend the following code to fit your needs.

You can change the size of the window with $Form.ClientSize. Make sure that $Form.ShowDialog() is at the very bottom of the script.

The AcceptButton property of the $Form item lets you specify which button is highlighted by default. Meaning you can press Enter to connect instead of having to click on it.

I also added a line to disconnect the vCenter session when closing the form. This isn’t mandatory but it is a good measure, especially when working in PowerShell ISE as it retains active sessions.

Add-Type -AssemblyName System.Windows.Forms

[System.Windows.Forms.Application]::EnableVisualStyles()

$Form = New-Object system.Windows.Forms.Form

$Form.ClientSize = '400,400'

$Form.text = "My First GUI Tool"

$Form.TopMost = $false

$Form.MaximizeBox = $false

$Form.FormBorderStyle = 'Fixed3D'

$Form.Font = 'Microsoft Sans Serif,10'

# Press Enter to click the “Connect” button.

$Form.AcceptButton = $vcenterButton

# Disconnect vCenter when closing the form.

$Form.add_FormClosing({if ($VIServer.IsConnected) {Disconnect-VIServer $VIServer -Confirm:$false}})

# Display main form. To put at the end.

$Form.ShowDialog()

Altaro Gui Main Form

    2. LABEL, TEXTBOX and BUTTON – We now add a label, a text box and a button to connect to vCenter.

You can move the objects by changing the coordinates in system.drawing.point(xx,yy).

Whenever you add a GUI component, you need to declare it in $Form.controls.addrange(@(…)).

You need to use the Add_Click() method on button items to set an action on click. In this case, when the button is clicked, the “Invoke-vCenterButton” function is triggered (you can try it by replacing it with something like “Get-Service | ogv” for instance).

$vcenterLabel = New-Object system.Windows.Forms.Label

$vcenterLabel.text = "vCenter"

$vcenterLabel.AutoSize = $true

$vcenterLabel.width = 25

$vcenterLabel.height = 10

$vcenterLabel.location = New-Object System.Drawing.Point(17,18)

$vcentertextbox = New-Object system.Windows.Forms.TextBox

$vcentertextbox.multiline = $false

$vcentertextbox.width = 200

$vcentertextbox.height = 20

$vcentertextbox.location = New-Object System.Drawing.Point(105,14)

$vcenterButton = New-Object system.Windows.Forms.Button

$vcenterButton.text = "Connect"

$vcenterButton.width = 75

$vcenterButton.height = 20

$vcenterButton.location = New-Object System.Drawing.Point(314,14)

$vcenterButton.Font = 'Microsoft Sans Serif,9'

$Form.controls.AddRange(@($vcenterButton,$vcentertextbox,$vcenterLabel))

$vCenterButton.Add_Click({Invoke-vCenterButton})

GUI tool Vcenter Connection

    3. COMBOBOX – You probably use normalized CPU counts on your VMs to ensure optimal NUMA placement. Let’s add a drop-down list to only allow certain CPU counts.

I used a hard-coded list of values that I pipe into the Items.Add() method in order to add them to the menu. Note that we will also use combobox items for VMHosts and Templates.

As specified previously, don’t forget to add the new items to $Form.controls.addrange(@(…)).

You will also notice that I disable the CPU drop-down list at the beginning. It will be unlocked once vCenter is connected. This is also applicable to other items such as datastores, which depend on the selected host or cluster.

The selected value is then stored in $cpuComboBox.Text.

$cpucnt_Label = New-Object system.Windows.Forms.Label

$cpucnt_Label.text = "CPU count"

$cpucnt_Label.AutoSize = $true

$cpucnt_Label.width = 25

$cpucnt_Label.height = 10

$cpucnt_Label.location = New-Object System.Drawing.Point(17,94)

$cpucnt_Label.Font = 'Microsoft Sans Serif,10'

$cpuComboBox = New-Object system.Windows.Forms.ComboBox

$cpuComboBox.width = 200

$cpuComboBox.height = 20

$cpuComboBox.location = New-Object System.Drawing.Point(105,94)

$cpuComboBox.DropDownStyle = "DropDownList"

@(1,2,4,8,12) | ForEach-Object {[void] $cpuComboBox.Items.Add($_)}

$cpuComboBox.SelectedItem = $cpuComboBox.Items[2]

$cpuComboBox.enabled = $false

GUI CPU Count

    4. CHECKBOX – Let’s add a checkbox for the sake of it. We can use it to specify whether the VM starts after the deployment or not.

The label area is included in the checkbox item which simplifies its use. Use the “Checked” property in your conditional statements.

Like any other item, this should be added to $Form.Controls.AddRange

$PwrCheckbox = new-object System.Windows.Forms.checkbox

$PwrCheckbox.Location = new-object System.Drawing.Size(17,254)

$PwrCheckbox.Size = new-object System.Drawing.Size(250,50)

$PwrCheckbox.Text = "Power on new VM"

$PwrCheckbox.Checked = $false

GUI, Deploy New VM

    5. Complete the form with the items required for the parameters of the cmdlet you will run when you hit "Deploy".

Keep in mind that this is a simplistic example aimed at providing understanding. It does not fulfill the requirements for a production-ready VM deployment tool.
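As a rough sketch of what that last step could look like, here is how the deploy button and the final registration of the remaining controls might be written. The names, coordinates and sizes below are assumptions chosen to match the function names used later in this article, not an excerpt from the full script:

# Deploy button (sketch only): disabled until vCenter is connected, wired to the deploy function described below.
$deployButton = New-Object system.Windows.Forms.Button
$deployButton.text = "Deploy"
$deployButton.width = 100
$deployButton.height = 30
$deployButton.location = New-Object System.Drawing.Point(150,320)
$deployButton.Enabled = $false
$deployButton.Add_Click({Invoke-DeployButton})

# Register the remaining controls with the form.
$Form.controls.AddRange(@($cpucnt_Label,$cpuComboBox,$PwrCheckbox,$deployButton))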

Step 3 – Utility functions

Triggering functions when interacting with an item is a great way to make the script more readable and easier to maintain, and it is best practice in general. You shouldn’t put processing tasks in the main Form section.

In this section, I will describe the purpose and actions of these functions.

GUI Utility functions
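One helper used by the functions below, Invoke-WarningPopup, is not shown in the snippets of this article. A minimal sketch, assuming it is nothing more than a wrapper around a standard Windows Forms message box with a title and a body, could look like this:

Function Invoke-WarningPopup {
    param($WarningTitle,$WarningBody)

    # Display a simple modal warning dialog with an OK button and a warning icon.
    [void][System.Windows.Forms.MessageBox]::Show($WarningBody,$WarningTitle,'OK','Warning')
}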

Connect button: Invoke-vCenterButton

Before doing anything else, we want the user to connect to vCenter to be able to pull information about the environment.

    • SECTION A: Try to connect to vCenter and put the output in a variable.
    • SECTION B: If the connection succeeded, disable the vCenter fields.
    • SECTION C: Then enable the other fields in the form.
    • SECTION D: Populate the vmhost and template comboboxes.
    • SECTION E: If the connection didn’t succeed, display a warning popup with the error message and change the button text to “Retry”.
Function Invoke-vCenterButton {

# SECTION A

$VIServer = Connect-VIServer -Server $vcenterTextBox.Text

if ($VIServer.IsConnected) {

 

# SECTION B

$vcenterButton.Enabled = $false

$vcenterButton.Text = "Connected"

$vcenterTextBox.Enabled = $false

# SECTION C

$deployButton.Enabled = $true

$VmName_textbox.Enabled = $true

$vmhComboBox.Enabled = $true

$templateComboBox.Enabled = $true

$cpuComboBox.enabled = $true

# SECTION D

$vmhost = Get-VMHost -Server $VIServer | where connectionstate -eq connected

$vmhComboBox.Items.Clear()

$vmhost.Name | Sort | ForEach-Object {[void] $vmhComboBox.Items.Add($_)}

$Templates = Get-Template

$templateComboBox.Items.Clear()

$Templates.Name | Sort | ForEach-Object {[void] $templateComboBox.Items.Add($_)}

} else {

 

# SECTION E

Invoke-WarningPopup -WarningTitle "Connection failed" -WarningBody $error[0].exception.message

$vcenterButton.text = "Retry"

}

}

Case of an unsuccessful vCenter connection.
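One thing to keep in mind: $VIServer is assigned inside the function, so in the full script you will want the connection object to be reachable from the other event handlers (the FormClosing handler and the functions below also use it). One way to do that, assuming you keep the structure shown above, is to give the variable script scope:

# Script-scoped so the FormClosing handler and the other functions can reuse the same connection (sketch).
$script:VIServer = Connect-VIServer -Server $vcenterTextBox.Text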

Datastore button: Invoke-DatastoreButton

The datastore section could also be a drop-down menu, however, we made it a button here for the sake of the example.

The button item will be enabled once a host is selected. When a datastore is selected, the choice is stored in a label instance next to it. Every time the selected host changes, the datastore label is cleared as not all hosts will have the same connected datastores.

The function associated with the datastore button is a one-liner that displays the list of available datastores on the host.

It uses Out-GridView with the -PassThru switch, which lets you select a record, and stores the result in the associated label item.

Function Invoke-DatastoreButton {

$datastoreLabel.text = Get-VMHost $vmhComboBox.Text | Get-Datastore -Server $VIServer | where state -eq available | Out-GridView -PassThru | select -ExpandProperty Name

}
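The clearing of the datastore label when the host selection changes, mentioned above, is not shown in the snippets either. A minimal way to wire it up could be the following; the control names are assumptions based on the rest of the article:

# Reset the datastore choice and enable the datastore button whenever the selected host changes.
$vmhComboBox.add_SelectedIndexChanged({
    $datastoreLabel.Text = ""
    $datastoreButton.Enabled = $true
})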

Deploy button: Invoke-DeployButton

Since this script is simplified, the deploy button only runs a few checks and proceeds to the deployment. In a production-ready tool, you would need a function dedicated to running a wide range of checks and verifications to ensure a safe deployment: compute resources, free storage, provisioned storage, and so on.

    • SECTION A: Series of checks to make sure all the fields are populated and the VM name is not already used. A variable is populated with the parameters of the New-VM cmdlet.
    • SECTION B: If something isn’t right, a warning popup is invoked including the list of missing items.
    • SECTION C: The button is disabled during the deployment and the VM is deployed with the parameter object.
    • SECTION D: If necessary, the CPU count is updated on the VM.
    • SECTION E: Power on the new VM if the checkbox is checked.
Function Invoke-DeployButton {

 

# SECTION A

$deployparams = @{Server = $VIServer}

if (!$templateComboBox.Text) {$WarningBody += "Template not set`n"} else {$deployparams.Add('Template',$templateComboBox.Text)}

if (!$vmhComboBox.Text) {$WarningBody += "Host not set`n"} else {$deployparams.Add('VMHost',$vmhComboBox.Text)}

if (!$datastoreLabel.Text) {$WarningBody += "Datastore not set`n"} else {$deployparams.Add('Datastore',$datastoreLabel.Text)}

if (!$cpuComboBox.Text) {$WarningBody += "CPU not set`n"}

if (!$VmName_textbox.Text) {$WarningBody += "VM name not set`n"}

elseif (Get-VM $VmName_textbox.Text) {$WarningBody += "$($VmName_textbox.Text) already used`n"}

else {$deployparams.Add('Name',$VmName_textbox.Text)}

# SECTION B

if ($WarningBody) {

 

$WarningBody = "Issues:`n$WarningBody"

Invoke-WarningPopup -WarningTitle "Missing fields" -WarningBody $WarningBody

} else {

# SECTION C

$deployButton.Text = "Deploying"

$deployButton.enabled = $False

$NewVM = New-VM @deployparams

# SECTION D

if ($NewVM.NumCpu -ne $cpuComboBox.Text) {Set-VM -VM $NewVM -NumCpu $cpuComboBox.Text -confirm:$false}

# SECTION E

if ($PwrCheckbox.Checked) {Start-VM -VM $NewVM}

$deployButton.enabled = $True

$deployButton.Text = "Deploy"

}

}

Case of missing items.

Wrap up

Writing PowerCLI scripts is a refreshing task for a vSphere administrator, and a favorite of mine, as it involves a great deal of creativity and problem-solving. However, adding a simple graphical user interface can open up a script to a wider population of users who may not have scripting experience.

If you have a use case and want to get started with PowerShell Forms, you can start by building on the example of this article to add features and customize it.

The GUI-based script presented in this article shows you how to get started with the most common use case of deploying a virtual machine. However, keep in mind that this is a simplified version. Including error handling and a wide range of checks is paramount to ensure the script is used within certain boundaries and cannot cause any harm in the environment.

If you want to learn more about PowerCLI, you can get our free ebook PowerCLI – The Aspiring Automator’s Guide or watch our free on-demand webinar How to Become a PowerCLI Superhero.

The post How to build a GUI tool for VMware PowerCLI appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/build-a-gui-tool-powercli/feed/ 0
Your PowerCLI Questions Answered https://www.altaro.com/vmware/powercli-questions/ https://www.altaro.com/vmware/powercli-questions/#respond Thu, 29 Apr 2021 15:39:54 +0000 https://www.altaro.com/vmware/?p=22510 We gathered your questions about VMware PowerCLI and provided the answers you need to automate your vSphere tasks more efficiently. Let's have a look!

The post Your PowerCLI Questions Answered appeared first on Altaro DOJO | VMware.

]]>

Working with VMware PowerCLI is one of those things many IT professionals leave aside because they don’t know where to start or what problems it can solve for their organization. Although there is a learning curve, it is not as steep as one might think. Because PowerCLI is built on PowerShell, its syntax makes it a great language to start your scripting journey with. In this article, we answer your questions about VMware PowerCLI to help you automate your vSphere tasks efficiently.

The questions gathered below were generated from a webinar in which I teamed up with the author of “PowerCLI: The Aspiring Automator’s Guide”, Xavier Avrillier, to provide an introduction to VMware PowerCLI and demonstrate use cases on how it can help at different levels.

    • Installing PowerCLI on a Non-Windows Operating System
    • Testing PowerCLI Code with vCenter and Docker
    • HTML Reporting with PowerCLI
    • 3rd Party Rest APIs and PowerCLI
    • Building PowerCLI Tools

If you didn’t attend the session, you can watch the recording of that webinar right now. 

powercli superhero webinar

You can also download the resources demonstrated during the webinar on our Github repo.

PowerCLI Questions Answered

Q. Is it good practice to always install the latest version of PowerShell or remain a few point versions behind?

A. It is best practice to run at least the current major version of Windows PowerShell (5.1), which is mainly tied to your Windows version anyway. However, always using the latest version of VMware PowerCLI is best, as cmdlets are added and others are deprecated across versions.

Q. We are currently running VMware vSphere 5.5. Could we use PowerCLI with it?

A. Absolutely. Although vSphere 5.5 is no longer supported and an upgrade is highly recommended, you can still connect to it via PowerCLI up until version 10.2.0. Refer to VMware’s interoperability matrix and unselect “Hide Legacy Releases”.

Q. What free tools such as Chef/Puppet leveraging PowerCLI can further enhance its automation potential and span to large scale use cases? (hundreds of VM deployments, re-ip, snapshotting) …

A. PowerCLI is a way to connect to an endpoint and manage it through a set of cmdlets leveraging its API. Automation tools such as Chef and Puppet do the same thing through a different interface and language. The point is that you don’t need PowerCLI at all with those tools, except for very specific use cases where you have to trigger a script from a PowerShell source.

You may want to check out vRealize Orchestrator which is included in the vCenter license.

Q. How can I use PowerCLI to determine what users are connected via VMRC?

A. There is currently no reliable way to determine which VMRC sessions are in use. You can query the events of each VM and look for the string "A ticket for * of type webmks on * has been acquired". This will tell you when a VMRC was opened and by whom; however, you won’t know if the console is still open, as there is no event logged when the user closes it.

You can get started with the following piece of code, which looks for this event in the past 30 minutes with 300 samples per VM for execution speed’s sake. Meaning that if the VMRC open event is the 301st event, it won’t show up.

Get-VM | ForEach-Object {$_ | Get-VIEvent -Start (get-date).AddMinutes(-30) -MaxSamples 300 | where Fullformattedmessage -like "A ticket for * of type webmks on * has been acquired"} | select createdtime,username,@{l="VM";e={$_.vm.name}}

Q. So PowerCLI is PowerShell for the VMware hypervisor?

A. PowerCLI is a collection of PowerShell modules provided by VMware. They allow you to connect to VMware endpoints and offer cmdlets to interact with them. Note that PowerCLI isn’t limited to vSphere (Hypervisor). You can use it to connect to various types of endpoints such as SRM, Horizon, VMC on AWS, NSX…

Q. Is root really needed to run pwsh on Linux?

A. No, pwsh will run fine as a regular user.

Q. How can I find the path of an OVF Template which I could use in Ansible?

A. When exporting to an OVF template, it is downloaded in your computer’s Downloads folder by default.

Q. Can we use the deployment script (GUI – demonstrated during the webinar) to integrate with CI/CD tools like Jenkins or Buildkite?

A. No. The deployment script that was demonstrated is only a wrapper, in the form of a GUI, for a collection of commands that will safely deploy a VM.

Altaro product related questions

Q. Can a Banking customer rest assured that doing backup only on the DR remote site thru Altaro is good for banking regulatory compliance?

A. A disclaimer first, prior to answering this question: I am NOT a compliance expert, and any answer given here should be verified with your company’s compliance officer or legal counsel.

That said, if I’m understanding the question correctly, it sounds like you intend to only run backups at the remote end of the replication target? If that’s the case I would recommend against that. Best practice for any organization would include running local backups at the production site as well. Not only will these “onsite” backups provide you with your day to day restoration capabilities without having to pull files across a WAN, they are often the most time-efficient and current of backups as well.

In short, I think running backups against the replicated data at the remote site is fine and probably a good idea for an organization with sensitive compliance requirements, but I would also take the step of running local backups at the production location as well.

The post Your PowerCLI Questions Answered appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/powercli-questions/feed/ 0
How to test scripting in PowerCLI with vCenter simulator (VCSIM) https://www.altaro.com/vmware/powercli-scripting-vcsim/ https://www.altaro.com/vmware/powercli-scripting-vcsim/#respond Thu, 22 Apr 2021 17:35:38 +0000 https://www.altaro.com/vmware/?p=22410 VCSIM is a great container-based tool to get started with PowerCLI in minutes, with little resources in hand and minimum risks. Read more about it.

The post How to test scripting in PowerCLI with vCenter simulator (VCSIM) appeared first on Altaro DOJO | VMware.

]]>

If you want to start on your scripting journey with PowerCLI, you need a target to run your commands against. If you are running low on resources, be it memory, CPU or storage, you will struggle to get a usable virtual environment up and running. If this is your case, this article is made for you. However before we get started, if you’re interested in learning about PowerCLI in a more interactive format, be sure to check out our webinar on PowerCLI below!

Powercli Superhero Webinar

Ok, let’s get started!

vCenter simulator (VCSIM) is a tool that has been around since VCSA 5.1. It helps beginners practice without the need for a full-scale environment by simulating changes in a vSphere environment. A contributor going by the alias Nimmis wrapped VCSIM inside a container and released it on Docker hub.

Prerequisites

The great thing about VCSIM is that there are almost no prerequisites. All you need is a Docker engine and PowerCLI. If you need some help with the prerequisites you can check our comparison VMware Workstation vs VirtualBox or our article on Client Hyper-V in Windows 10 to create your Linux virtual machine and our beginner’s guide for PowerCLI.

Additionally, if you prefer an eBook format, take a look at our PowerCLI eBook for more information!

Installation of the Docker engine

You can use whichever Linux distribution is supported by Docker, but for the sake of this demonstration we will use Ubuntu 20.04 LTS (Long Term Support).

Note that some of the commands we will perform are specific to Ubuntu. If you are running a different distribution, you will need to refer to the documentation to find the equivalent commands.

  • As usual, it is always good measure to start by updating your package index

sudo apt-get update

  • You will then need to install a set of dependencies required for Docker

sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

dependencies required for Docker

  • You can add Docker’s GPG key. This is optional but then you don’t have to worry about the validation of the signatures

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  • Now we need to install the Docker repository. Note that we set it to stable but you could choose test or nightly

echo \

"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \

$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

  • Update the package index once again. You should get hits on the docker repo.

sudo apt-get update

Update the package index

  • We can then go ahead and install the Docker Engine

sudo apt-get install docker-ce docker-ce-cli containerd.io

install the Docker Engine

systemctl status docker

Once this is done the docker engine should be ready to run some containers.

Docker Engine Active/Running

Deployment of the VCSIM container

What makes this simulator so easy to use is that you only have to deploy it from the public repo. You can then destroy it and create a new one in a matter of seconds like any other container.

  • Run the following command to automatically pull the binaries from the repo and run the VCSIM container:

sudo docker run -d -p 443:443 nimmis/vcsim

Run the VCSIM container

  • Check that the container is now running with the “ps” command.

sudo docker container ps

container is now running with the “ps” command

Note that, by default, the container will not start automatically when you reboot the machine. To start it again, find its ID and then use that ID to start it.

sudo docker container ls -a

sudo docker start <ID>

Start the Container
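Alternatively, if you would rather have the simulator come back up on its own after a reboot, you can run it with a standard Docker restart policy from the start. This is plain Docker behaviour, nothing specific to VCSIM:

sudo docker run -d --restart unless-stopped -p 443:443 nimmis/vcsim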

Once the container is running you should have access to it on port 443. However, it is solely an API endpoint so you won’t get anything out of it in a web browser as there is no vSphere-client service running.

Connection Port 443

PowerCLI and VCSIM

You can connect to the simulated vCenter in a similar way as you would connect to any other VI instance by using the “Connect-VIServer” cmdlet.

  • Username: u

  • Password: p

Connect-VIServer @IPvCenter -user u -password p

The connection takes longer than with a traditional vCenter endpoint. You can then start playing around with PowerCLI cmdlets without the risk of breaking something. Below is a short overview of the default simulated provisioned environment.

Get-VMHost

Get-VMHost cmdlet

Get-VM

Get-VM cmdlet

Cloning a VM

Cloning a VM

You will find that many commands won’t work as they would in a normal environment as not everything can be simulated. You can get the list of supported methods by browsing to https://@IPvCenter/about

List of supported commands methods

Difference with a real vCenter

The main drawback of the tool is that it doesn’t always react like an actual vCenter. For instance, the “New-VM” and “New-Template” cmdlets output a string instead of VM or Template objects, which you cannot pipe into another cmdlet such as “Start-VM”.

VCSIM:

VCSIM

vCenter:

vCenter
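A simple workaround for this quirk is to look the object up again by name before piping it further. Here is a rough sketch, assuming the default names of the simulated inventory; adjust them to whatever Get-VM and Get-VMHost return in your instance:

# New-VM returns a string under VCSIM, so fetch the real VM object by name before acting on it.
$CloneName = "clone01"
New-VM -Name $CloneName -VM "DC0_H0_VM0" -VMHost "DC0_H0" | Out-Null
Start-VM -VM (Get-VM -Name $CloneName)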

Conclusion

VCSIM is a great, easy to use, container-based tool to get started with PowerCLI in a matter of minutes with very little resources at hand and no risk at all when running commands. If you break something or want to go back to the original environment, just destroy the container and recreate it.

However, if you encounter problems, you will need to check whether what you are trying to do is actually supposed to work with the simulator. If one of your scripts doesn’t work, it doesn’t necessarily mean that there is an issue with it: it might be that whatever you are trying to do isn’t supported by VCSIM, or that it doesn’t return the correct object type.

Do you find this tool helpful? Does it address some of your testing needs? Do let us know in the comments section below and If you’d like to see a live demo of this tool be sure to check out our on demand PowerCLI webinar as we cover this tool in detail in a lab environment!

Thanks for reading!

The post How to test scripting in PowerCLI with vCenter simulator (VCSIM) appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/powercli-scripting-vcsim/feed/ 0
Getting Started with VMware PowerCLI – A Beginner’s Guide https://www.altaro.com/vmware/vmware-powercli-guide/ https://www.altaro.com/vmware/vmware-powercli-guide/#comments Fri, 19 Mar 2021 18:52:11 +0000 https://www.altaro.com/vmware/?p=21094 Did you know that VMware PowerCLI allows IT Pros to not only perform most vSphere admin tasks but also automate them? Learn how to get started with it.

The post Getting Started with VMware PowerCLI – A Beginner’s Guide appeared first on Altaro DOJO | VMware.

]]>

VMware PowerCLI is a collection of PowerShell modules providing many cmdlets to manage a wide range of VMware products. It allows IT Pros to not only perform most vSphere administrative tasks but also automate them.

Take a chunky vSphere cluster made up of 50 nodes, for instance, on which you need to detach one or more LUNs from every host. You could either spend a few hours making the change on each host manually in the vSphere Client, or you could use PowerCLI to execute the change on all the nodes in a matter of minutes. Granted, such a change must be performed carefully and only if you know what you are doing.

Pushing changes to hosts or VMs isn’t the only benefit VMware PowerCLI brings to the table. You can also use it to collect data that is relevant to a specific use case in a single place. There are ready-to-use scripts available, such as Alan Renouf’s vCheck, which acts as a framework to email HTML reports based on what you want to keep an eye on. This script is backed by VMware themselves.

Automation has always been a crucial part of IT operations and has grown into a category of its own since solutions like Terraform gained exponential traction. With VMware PowerCLI and PowerShell becoming more advanced with each release, we are able to automate more and more. PowerCLI also provides integration with applications like vRealize Operations, NSX, vSAN, Horizon, VMware Cloud platforms, and more.

PowerCLI Licensing Limitations

It is important to understand that there are some limitations to VMware PowerCLI based on which type of licenses are installed in your vSphere environment.

Hosts that are licensed with the free hypervisor version can only be queried by PowerCLI in “read only” mode. This means that you can only use commands that collect information. Commands that are used to make changes like Set-*, Add-*, New-*, or Remove-* will not work. Any paid license of vSphere will be enough to provide full access to all of PowerCLI’s functions and features.

Prerequisites for VMware PowerCLI

Powershell

VMware PowerCLI 12.2.0 is compatible with the following PowerShell versions:

  • Windows PowerShell 5.1
  • PowerShell 7

To verify which version of PowerShell is installed on your system, simply open up a PowerShell prompt and display the content of $PSVersionTable.PSVersion. In the following example, you can see that PowerShell 5.1 is installed.

Getting Started with VMware PowerCLI -1

The $PSVersionTable variable contains information about PowerShell.
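For reference, the check itself is a one-liner you can paste into any PowerShell prompt:

$PSVersionTable.PSVersion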

If the version installed on your system is outdated you will need to install the Windows Management Framework 5.1. Refer to the appropriate documentation for the installation of Powershell 7 on Linux distributions.

.Net Framework

As well as running a recent version of PowerShell, you also need to ensure that .Net Framework is installed in a supported version:

Getting Started with VMware PowerCLI – 1A

You can quickly check which version of .Net Framework is installed:

Registry Editor

  1. Left click on the Start Menu and select Run.
  2. Enter regedit.exe to open up the Registry Editor.
  3. In the Editor, navigate to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4.
  4. Display the content of the "Client" subkey to find the exact version.
Getting Started with VMware PowerCLI – 2

Get the installed .Net version with the registry key

PowerShell

You can also get the installed version with a PowerShell one-liner that will check the same registry key:

Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"
Getting Started with VMware PowerCLI – 3

Display the .Net registry key in PowerShell

Updating .Net

Note that a newer version may already be installed on your system according to the Windows build you are running. Details on the .Net version per OS build in the Microsoft documentation.

If the version you are currently running is outdated, download and install the latest version on Microsoft’s repository.

Getting Started with VMware PowerCLI – 4

Download and install the latest .Net version online.

To learn more about the interoperability between PowerCLI and VMware products, refer to the official interoperability matrix.

How to Install VMware PowerCLI

VMware PowerCLI used to be a standalone software to install up until version 6.5 R1. You need to uninstall any such version from your system prior to installing the latest version. You can uninstall it like any installed software in “Programs and Features“.

The installation procedure will be slightly different with regards to whether the machine you are installing it on has internet access or not. We will cover both scenarios.

Machine with Internet Access

Installing

The installation procedure has been simplified since the modules have been added to the PowerShell Gallery on version 6.5.1 in April 2017 and is now straightforward.

  • Open a PowerShell prompt and install the modules using:
Install-Module VMware.PowerCLI -Scope CurrentUser

The modules will be automatically downloaded and stored in the correct folder. Note that you can use the -Scope parameter to make the PowerCLI modules available to AllUsers.

Updating

Although it will work, it is recommended to avoid using the Update-Module cmdlet to update PowerCLI as it will not remove any files rendered obsolete by the new version. Therefore,

  • Uninstall the existing version using:
Get-module VMware.* -listAvailable | Uninstall-Module -Force
  • Next, install the new version by following the install procedure outlined previously

Machine with no Internet access

Installing

If your system does not have Internet access you need to download PowerCLI as a zip file from the VMware website or with the “Save-Module” cmdlet and copy the content into the modules folder of the offline system. Unlike many of the VMware products, you don’t need to be logged in to download PowerCLI.

  • Head over to VMware code and select the latest version of PowerCLI
  • Download the zip file
Getting Started with VMware PowerCLI – 5

Download PowerCLI online to install it on offline systems

  • Transfer the file to your offline machine and copy all the folders present in the zip to your PowerShell Modules folder. Again, choose the location accordingly to make it available to everyone or to yourself only:
  • Current User: %USERPROFILE%\Documents\WindowsPowerShell\Modules
  • All Users: C:\Program Files\WindowsPowerShell\Modules

Updating

To update PowerCLI, delete the existing PowerCLI module folders and follow the procedure outlined above.

Execution policy

Execution policies are a security mechanism that determines whether files such as config files, modules, and scripts can be loaded in PowerShell. You can find your current execution policy by running:

Get-ExecutionPolicy

You may need to change the default execution policy to be able to run scripts you wrote. Unless a GPO changed the default setting, you won’t need to if you are on a Windows Server OS. However, it is required on a Windows client OS (e.g. Windows 10), where the default is set to Restricted; you will need to change it to RemoteSigned with the following command:

 Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Some highly secured environments may require that all the scripts (even those written in-house) are digitally signed by the enterprise PKI. This means the execution policy must be set to AllSigned if not already done via GPO. In which case you will have to digitally sign your scripts prior to running them.

Connecting to a VMware instance

Once the PowerCLI modules are installed you can start using it from within your PowerShell prompt, as opposed to older versions of VMware PowerCLI where you had to either launch the software or import the snap-ins.

You also don’t need to manually import the module prior to using PowerCLI, they will be automatically loaded and autocompleted in your prompt.

Getting Started with VMware PowerCLI – 6

Get-PowerCLIVersion

Note that if a warning regarding CEIP is displayed whenever the PowerCLI modules are loaded, you can get rid of it by explicitly enabling or disabling CEIP ($true or $false) with:

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true

Although it may have seemed useless to enable CEIP before, it has become more interesting in the last few vSphere versions as it enables online checks and facilitates support tickets which can prove useful in the lifecycle management of your environment.

Getting Started with VMware PowerCLI – 7

CEIP related warning that can be disabled

Finally, we can now connect to our VMware environment. We can connect to a single ESXi host or a vCenter server.

  1. To establish a connection, we simply use the Connect-VIServer cmdlet with the -Server parameter.
Connect-VIServer -Server 192.168.10.11

Note that you don’t have to specify the -Server parameter as it has the default position 1 applied to it so you can just type the IP or FQDN right after the cmdlet.

Server parameter of the Connect-VIServer cmdlet

  2. A prompt for login credentials may appear; input the correct credentials and click OK. Note that if the Windows user you are currently logged in as has permissions on vCenter (AD account), you will be connected automatically.
Input credentials to connect

  3. The connection info will be displayed and you will now have an established PowerCLI session to the vCenter Server.
$DefaultVIServer contains information about the connected instance

The output you see is the content of the “$DefaultVIServer” variable which is the vCenter or ESXi object you are connected to.

The procedure is the same whether you are connecting to a host or a vCenter server. Below you see the difference in the “ProductLine” property if you are connected to a host or a vCenter.

Content of $DefaultVIServer when connected to a vCenter and a standalone host

How to use VMware PowerCLI

If you have no experience with PowerShell or other command-line tools, you might feel a little disoriented as to where to start and what to type in. Worry not: PowerCLI is known to have a shallow learning curve and includes a plethora of help pages similar to Linux’s "man" pages. To quote Jeffrey Snover, PowerShell Chief Architect, "It’s like programming with hand grenades". We recommend getting comfortable with Get-* commands before using Set-*, Remove-*, and the like.

Here are a few tips and tricks to help you get started on your PowerCLI journey. Note that, while those will serve you with VMware PowerCLI, they apply to PowerShell in general.

Using Get-Help on PowerCLI

VMware PowerCLI has its own built in help system which will save you some googling time. It works by typing Get-Help followed by the name of a cmdlet.

Note that it is always good measure to update the help in PowerShell by running the Update-Help cmdlet. You should do it at least once when you launch PowerShell for the first time, and it is not a bad idea to run the command now and again to get the most up-to-date help content. Check out the Save-Help cmdlet if your system doesn’t have a direct internet connection.

Updating the help with Update-Help

In the example below we look up the help information about the Get-Command cmdlet. Note that Help is an alias of Get-Help.

Get-Help Get-Command
Obtain help on any cmdlet

This gives us information such as the different parameters, the value types that they take as well as the description of what the cmdlet does. You can also get examples on how to use the cmdlet by appending the example switch. Note that some of the help won’t be available if you never ran Update-Help.

Get-Help Get-Command -Example
Display examples on how to use a command

More often than not you will need to display the complete help information including the examples, description, parameters, etc. You can do that simply with the –full switch:

Get-Help Get-Command -Full
Obtain the full help content of a cmdlet

All of the information appears right in the shell display without having to open up a web browser and manually search for the information.  If you really want to improve your skills with PowerCLI and PowerShell in general, it is a good idea to use the help system first before “Googling” it. The more familiar you are with using the help system, the better you will become at using VMware PowerCLI.

How to use Get-Command

Get-Command is a great way to sift through the different cmdlets when trying to decide how you want to accomplish a task. For instance, let’s say you want to delete a snapshot on a VM but you don’t know which command to use.

A way to figure out which cmdlet will perform this action for us is to use Get-Command with the Name parameter and search for any commands that match the keyword Snapshot with the wildcard symbol (*). The syntax is as follows:

Get-Command -Name *snapshot*
Display the list of cmdlets that match a string

The output is a list of cmdlets that perform actions on snapshots. However, you should look at the "ModuleName" column to find which cmdlets belong to the VMware modules. Based on the screenshot above, we will choose the Remove-Snapshot cmdlet.
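As a quick illustration of the cmdlet we just found, this is what a call could look like; the VM and snapshot names below are placeholders:

# Remove a specific snapshot from a specific VM without prompting for confirmation.
Get-VM -Name "My-Test-VM" | Get-Snapshot -Name "Before-Patching" | Remove-Snapshot -Confirm:$false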

In order to display all the cmdlets available within the VMware modules we can use the Module parameter and specify any module that starts with “VMware” by using the following syntax:

Get-Command -Module VMware*
Display the list of cmdlets within a module

If you count the number of cmdlets with Measure-Object on the output you will realize that there are quite a few.

Get-Command -Module VMware* | Measure-Object
Count the number of objects in a collection

Using Out-GridView

Out-GridView is a nifty little cmdlet that will display the output in an interactive GUI window that you can sort through, filter… In order to use it, simply pipe (|) your commands into Out-GridView (Alias ogv). In the example below, we use it to get all the cmdlets that contain the word “Snapshot”. Note that we use gcm which is an alias for Get-Command.

Gcm *snapshot* | Out-GridView
Open the output of a command in a useful GUI

You can then use the window to filter and sort the information displayed. Let’s say we want to narrow down the search to cmdlets that contain "Remove" in their name. We can do this by simply clicking the Add Criteria button, checking the Name check box, clicking Add, and typing your filter:

Narrow down the search with criteria

This example is pretty simple, however, if you were filtering through large amounts of data it can prove beneficial. The screenshot below shows an example where I narrowed down to the running VMs that have 2 or more CPUs:

You can stack several filters

If you ever get stuck and don’t know how to get around a specific problem, you can always ask the PowerCLI VMTN community. One of its many members will try and help you out. It is a great resource for documents and discussions that can provide assistance with any question you might have.

Scheduled scripts

Those have been around since the dawn of modern IT and they are still as relevant today as they ever were. Our updated ebook will show you how to work with scheduled tasks. The main pointers are the following:

  • Test your script extensively before putting it on a schedule.
  • The account running the script must have the right permissions on the destination (vCenter…).
  • You can run the script from a simple batch file which can log all the output in a log file or run PowerShell straight from the scheduled task.
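As a rough illustration of the last point, a tested script can be put on a schedule straight from PowerShell with the built-in ScheduledTasks module. The path, time and account below are placeholders you would adapt to your environment:

# Register a daily scheduled task that runs a PowerCLI report script (placeholder values).
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Daily-Report.ps1"'
$Trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "PowerCLI Daily Report" -Action $Action -Trigger $Trigger -User "DOMAIN\svc-powercli" -Password "********"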

Customizing your profile

PowerShell offers the possibility to create a customized profile in which you can execute commands that will be run when you start PowerShell.

Creation of the profile

It comes in the form of a .ps1 file that doesn’t exist by default so you need to create it:

  1. Check if the profile file already exists.
Test-Path $PROFILE
  2. If the output is "False", you need to create it.
New-Item -Type File -Force $PROFILE
  3. You can now edit the newly created profile file. You can get the path to the profile by displaying the content of $PROFILE.

Prompt customization

One of the most popular uses of the PowerShell profile is to customize the prompt. In order to do that, you need to create a function named "Prompt" and place it inside the Microsoft.PowerShell_profile.ps1 file. Whatever you put into this function will be executed every time the prompt is displayed.

In the example below we show you how to enrich the prompt by displaying the username you’re running as and the system on which you are connected like in a Linux shell. Another nice addition that I particularly like is to display the vCenter servers you are currently connected to. I almost consider it to be a security measure to avoid running commands against an environment you forgot you were still connected to.

Here is the difference between a vanilla and customized PowerShell prompt.

Example of a customized prompt that displays the connected vCenter(s)

Here is the code that will achieve this result. It needs to be placed in Microsoft.PowerShell_profile.ps1.

Function Prompt {

write-host ""

# Display list of connected vCenter servers.

If ($global:DefaultVIServers) {

Write-Host "Connected to: $([string]($global:DefaultVIServers | where isconnected -eq true).name -replace " "," , ")"

}

# Display username@computername in color.

Write-Host $env:USERNAME -ForegroundColor Yellow -NoNewline

Write-Host "@" -NoNewline

Write-Host "$env:COMPUTERNAME " -ForegroundColor Magenta -NoNewline

"PS> "

}

Other applications

While customizing the prompt is fun and can be a good prank at the office, PowerShell profiles can serve many purposes.

Here are a few ideas of what you can do with it:

  • Set a different default location.
  • Add aliases for often used commands.
  • Display the welcome message of your organization.
  • Display whether the prompt is elevated or not.
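A couple of the ideas above as one-liners you could drop into the same profile file; these are purely illustrative, and the paths and alias name are placeholders:

# Start every session in a scripts folder and add a short alias for a frequently used cmdlet.
Set-Location -Path "C:\Scripts"
Set-Alias -Name cvi -Value Connect-VIServer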

As you can see your imagination is the limit but try not to get carried away with too many actions or your shell will become less responsive.

Custom modules

One of the strengths of PowerShell is the flexibility offered by the use of modules. Those are files that contain collections of cmdlets to extend the reach of your shell. Major software and hardware vendors distribute PowerShell modules to simplify interactions with their products’ APIs, PowerCLI is one of them by the way.

You can write your own modules in which you will put your homemade functions. You will find more details on functions in our updated ebook on PowerCLI with examples covering datastores, vCenter HA, RDM disks…

You can create your own modules containing your cmdlets

Writing your own functions is a great exercise that allows you to condense a set of actions that would require many commands or GUI interactions into a single cmdlet.
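For instance, a small homemade function like the sketch below (the name and the retention threshold are purely illustrative) could live in your own module and then be used like any other cmdlet:

Function Get-OldSnapshot {
    # List snapshots older than a given number of days across all VMs.
    param([int]$DaysOld = 7)

    Get-VM | Get-Snapshot |
        Where-Object {$_.Created -lt (Get-Date).AddDays(-$DaysOld)} |
        Select-Object VM, Name, Created, SizeGB
}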

Get in the scripting mindset

Here I wanted to touch base on a less technical approach to PowerCLI. Whether it is VMware PowerCLI or some other framework, scripting is not something you learn overnight. Like most learning processes, it takes time, it is a trial-and-error path and you need to make mistakes, fix them and start over to get better.

Don’t feel bad about taking time to script

An unspoken truth that is regularly verified among our peers is that some IT managers only value work that can be quantified, like closing tickets or deploying VMs; RUN activities are a great example of that. Spending time working on scripts may be seen differently by less hands-on people, who could think you are not doing “real work”. As one wise man used to say: “You are never thanked for the problems you don’t have”.

It may be the admin’s role to explain that spending time learning, testing and writing scripts is an investment both in yourself and in the company. Sure, it may take a few days or even weeks of brainstorming to understand how to interact with such and such in VMware PowerCLI.

Automation will free up time from RUN tasks to work on BUILD projects

But this won’t be a problem if you can demonstrate that the time you spent doing it will free up valuable hours over the course of (a) year(s) during which you can work on projects, other automation topics or even those pesky RUN tasks that will always require a human being behind the keyboard. The diagram above is an attempt at depicting the effects of implementing automation in your processes.

Identify the actions you perform often

Automation is all about improving efficiency and saving time. At the end of the day, you could probably automate just about anything as long as you can throw in enough man-hours. However, the point is: Automating a task should save you time in the long run, not the opposite.

For instance, it makes very little sense to spend 2 weeks automating a task that takes a day or two a year. Although you will have learned some things in the process, the return on invested time will be very poor. Instead, try and identify tasks on which you spend a significant amount of time.

All of this to say that, even though an automation project may sound cool and appealing, you should always review it before starting.

Challenge your own ideas

Starting an automation project isn’t always easy. We often start with a specific goal in mind and write code in order to get there as quickly as possible. Some very experienced professionals will get it right the first time; however, that isn’t the case for most of us. I find that it is unusual for me to get it spot-on on the first go.

When I finish a PowerCLI automation project or even a function, I try and challenge myself to find what I could do better and cleaner. For instance:

  • Can I reduce it in size to optimize the code tidiness? with a cleaner loop maybe?
  • Can I turn repetitive occurrences into a function?
  • Can/Should I parameterize some of the variables?
  • Can I simplify it to make it more flexible and render it functional in various environments?
  • Where should I add comments so I understand it 6 months down the line?
Keep challenging your own scripts to improve them

Now, the quality of a script is a subjective concept. While we recommend going through a few passes of improvement, it will also depend on how much time you can spend on it. At the end of the day, if it works, fulfils its purpose and you’re happy with it, it’s as good a script as any.

Wrap up

You will probably find at some point, or maybe you can already relate to the fact, that the first PowerCLI scripts are usually quite long and monolithic. Writing scripts is like playing a musical instrument: everyone is a lifelong student, as there is so much to it and there will always be something new to learn and do better.

Most of the concepts we reviewed so far are not only applicable to VMware PowerCLI but to any scripting language for that matter. Fortunately, PowerCLI is quite an easy scripting language compared to others and you don’t need to be an expert to work with it.

So, get started now with our updated free ebook on PowerCLI and start writing your very own scripts.

 

 

The post Getting Started with VMware PowerCLI – A Beginner’s Guide appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/vmware-powercli-guide/feed/ 1
What is VMware vCloud, and Why Should You Use it https://www.altaro.com/vmware/vmware-vcloud/ https://www.altaro.com/vmware/vmware-vcloud/#respond Fri, 07 Aug 2020 08:43:33 +0000 https://www.altaro.com/vmware/?p=20446 This article walks you through the latest iterations of VMware vCloud and Cloud Director. Is vCloud an essential tool for VMware admins? Let's find out

The post What is VMware vCloud, and Why Should You Use it appeared first on Altaro DOJO | VMware.

]]>

It’s easy to become somewhat confused by precisely what vCloud means, so let’s take a brief look at the evolution of vCloud, and where it sits in the market today.

What is VMware vCloud?

VMware first introduced the vCloud tag at the Las Vegas 2008 VMworld conference. In the early days, there were many iterations, from vCloud Pavilion through to vCloud Hybrid Service and vCloud Air. The latter provided public Infrastructure-as-a-Service (IaaS) running VMware vSphere and was eventually acquired in 2017 by French cloud computing company OVH.

Over the last few years, VMware has shifted its focus towards cloud-agnostic software, and the integration of its products with leading cloud providers from Amazon, Microsoft, Google, IBM, and Oracle.

Furthermore, VMware aims to bring the benefits of cloud computing to customers’ existing data centers through private and hybrid cloud deployments, as well as to provide platforms for cloud-native application development.

Although VMware still partners with OVH on go-to-market solutions and customer support for vCloud Air, the acquisition suggested a move away from VMware itself being a cloud provider, and more towards engineering the building blocks for deployment and management of multi-cloud platforms.

VMware now classifies vCloud Suite as a cloud infrastructure management solution, and VMware Cloud Director (VCD) a cloud-service delivery platform for Cloud Providers.

According to VMware’s Public Cloud Solution Service Definition, VMware Cloud Providers are a global network of ‘service providers who have built their cloud and hosting services on VMware software.’

  • VMware powered private clouds, service provider-managed or unmanaged, use VMware vSphere with the vRealize Suite, which forms VMware vCloud Suite.

  • VMware powered public clouds use VMware vSphere, with VMware Cloud Director, and generally with vCloud Application Programming Interfaces (APIs) exposed to its tenants.

The original vCloud Air is available through OVH as a hosted private cloud with enterprise support including vSphere, vCenter, and NSX.

OVH cloud, hosted private cloud

vCloud Suite

VMware vCloud Suite is the combination of enterprise-proven virtualization platform vSphere, and multi-cloud management solution vRealize. VMware vSphere includes the hypervisor ESXi, providing server virtualization, and vCenter Server, which centralizes the management of physical ESXi hosts and Virtual Machines, as well as enabling some of the enterprise features like High Availability.

Included with vSphere in the vCloud Suite is vRealize, delivering automation, orchestration, and intelligent IT operations for multi-cloud management and modern applications.

The vRealize Suite contains the following products:

  • vRealize Automation: for self-service provisioning, service catalog, governance, and policy enforcement, with aligned orchestration to automate runbooks and workload deployments.

  • vRealize Operations: offers Machine Learning (ML) powered and self-driving operational capabilities, monitoring, automated remediation, performance optimization, capacity management and planning, usage metering, service pricing, and chargeback.

  • vRealize Log Insight: enables centralized log management and intelligent log analytics for operational visibility, troubleshooting, and compliance.

  • vRealize Suite Lifecycle Manager: provides a comprehensive application lifecycle management solution for vCloud Suite.

Additionally, vCloud Suite fully supports vSphere with Kubernetes and integrates seamlessly with other Software-Defined Data Center components such as NSX and vSAN.

With multi-tenancy, each vRealize Automation tenant can have its own branding, services, and fine-grained permissions. The following screenshot shows an example of tenant branding at the login page:

vRealize Automation tenant

In the screenshot below the vRealize Automation design canvas is shown, administrators drag and drop the relevant components for automated builds with corresponding catalog items:

vRealize Automation design canvas

The following screenshot shows the vRealize Automation self-service catalog:

vRealize Automation self-service catalog

A VMware powered hybrid cloud can be formed by connecting the private cloud with either a public VMware cloud offering or another public cloud service. With vCloud Suite, infrastructure administrators can integrate private and public clouds to deliver and manage modern infrastructure across many environments. Developers can consume infrastructure services through APIs, Command Line Interface (CLI), or the service catalog Graphical User Interface (GUI).

VMware Cloud Director

VMware Cloud Director is VMware’s flagship cloud services platform, empowering cloud providers with an API-driven cloud infrastructure control plane for managing global VMware Cloud estates. Available through the VMware Cloud Provider Program (VCPP), VMware Cloud Director allows cloud service providers to automate the provisioning and management of compute resources and services.

As the portfolio of Software-as-a-Service (SaaS) offerings in the VMware Cloud brochure continues to grow, the formerly named vCloud Director became VMware Cloud Director in v10.1 to align with VMware’s branding direction.

The key features VMware Cloud Director delivers are as follows:

  • Resource pooling of compute into virtual data centers providing Software-Defined Data Centre operations with a range of tenancy options. 

  • Cloud-native development of modern applications with enterprise-grade Kubernetes and lifecycle management.

  • Automation of service-ready cloud stacks as code with the VMware Cloud Director Terraform provider.

  • Policy-driven approach to cloud resource management, tenancy, security, compliance, and independent role-based access control.

  • A centralized suite of services for integrating with leading storage, network, security, data protection, and other software vendors, or custom applications.

  • Single pane of glass management and monitoring for enterprise-scale multi-SDDC environments, with deep visibility and predictive remediation.

These features allow cloud providers to upscale from IaaS hosting to a profitable portfolio of cloud-based services, providing the following key benefits:

  • VCPP Cloud Providers:

    • Operational efficiency of deploying and maintaining cloud infrastructure for tenants across multi-cloud environments.

    • A unified management plane for the entire service portfolio.

    • Reduced time-to-market for new and expanding services.

    • Additional revenue streams from publishing custom service suites and integration with Independent Software Vendors (ISVs).

    • VCD is one of the main steps towards becoming Cloud Verified, providing an industry-standard mark of recognition.

  • VMware Cloud Customers:

    • VMware Cloud-as-a-Service consumption model of the full VMware Software-Defined Data Center, as a managed service or with a complete set of self-service controls.

    • Ease of provisioning and scaling cloud services and partner services from a single web interface or set of APIs.

    • The fastest available path to hybrid cloud services and workload migration, whether that be for portability between cloud platforms, or backup and evacuation of existing data centers.

    • Leverage Infrastructure-as-Code (IaC) capabilities across various cloud platforms with native container services and Platform-as-a-Service (PaaS) for Kubernetes and Bitnami.

Many of the benefits above work in turn for both parties, alongside taking advantage of economies of scale to facilitate business growth with minimal operational overhead.

You can try both vCloud Suite (vSphere with vRealize) and VMware Cloud Director using VMware Hands on Labs. At the time of writing the Cloud Director lab is still running v9.7, so is still branded vCloud:

vCloud Suite, vCloud Director

vCloud Connector

Accompanying VMware Cloud Director, vCloud Air customers can make use of vCloud Connector, a vSphere plugin that connects up to 10 private and public clouds. Using vCloud Connector, customers can harness the full power of hybrid cloud from a single interface to help with private data center extension and migration to a public cloud, or management of hybrid cloud setups.

One of the great features of managing distributed environments from the vCloud Connector plugin is the content sync, creating a single content library across the entire cloud environment for increased operational efficiency and simplified source catalog management.

The vCloud Connector itself has been available as a free download since v2.6. Although the latest version of the product is v2.8.2, updated in March 2016, it remains available to support vCloud Air customers with multi-cloud management.

Summary

To summarise, in this article, we have taken a journey through the vCloud brand from its early days as an IaaS provider, which is still available today through vCloud Air and vCloud Connector, to the present-day iteration of vCloud Suite for multi-cloud management. With the modern vCloud Suite, we can standardize, automate, and monitor distributed vSphere environments with vCenter Server and vRealize Suite.

We observed that VMware Cloud Director, previously vCloud Director, remained a staple of the vCloud brand, underpinning global cloud deployments for a community of cloud service providers up to the present day. The VMware Cloud family continues to grow across private and public clouds, with customers creating hybrid clouds, and VMware Cloud Director enables the automation of these deployments at scale.

VMware’s cloud-agnostic slogan, Any App, Any Device, Anywhere, aims to keep the company’s existing market-leading products, and recent acquisitions, relevant for customers with cloud and multi-cloud strategies. By embedding further native PaaS services for developers building modern applications, and a wide range of additional SaaS offerings, both vCloud Suite and VMware Cloud Director are crucial elements of this vision.

The post What is VMware vCloud, and Why Should You Use it appeared first on Altaro DOJO | VMware.

7 Benefits of Adopting Infrastructure as Code for VMware https://www.altaro.com/vmware/infrastructure-as-code/ https://www.altaro.com/vmware/infrastructure-as-code/#respond Thu, 12 Dec 2019 17:57:13 +0000 https://www.altaro.com/vmware/?p=20033 Infrastructure as Code (IaC) is one of the major benefits of migrating to the cloud - here are 7 reasons to adopt an IaC approach for your VMware environment now


In this post, we’ll be covering 7 critical benefits of adopting an Infrastructure as Code (IaC) approach for your VMware environment.

With the increasing hype around the cloud, we are seeing more and more buzz about the benefits of migrating to it. Infrastructure as Code is one of those benefits; however, it is not just a cloud term. Infrastructure as Code can be implemented on-premises too, even by those who are not yet ready to migrate to the cloud.

What is Infrastructure as Code? It is the phrase coined for taking a developer’s approach to defining infrastructure components like storage, compute, and networking. We essentially adopt the concepts a developer would use to build an application and apply those methods to building our infrastructure. Just as a developer stores their application’s code in source control, we store the code for our infrastructure in source control as well, such as Git. The benefits of Infrastructure as Code are so powerful that we are starting to see companies separate themselves from their competitors by using this approach in their IT environment. Below are some of the benefits gained from adopting the IaC model.

Want to jump right in with IaC and VMware? Take a look at our article about Terraform and VMware!

Increased Site Reliability

The traditional IT operating model consists of groups or teams that run monthly or daily checks on devices to ensure the environment is healthy. I’ve been a part of several teams in my System Administrator days that were tasked with this very role. Running health checks manually on systems is never going to be 100 percent reliable because there are too many human variables like high workload, sick days, or holidays. When our environment is defined in code, we can enforce that code using configuration management tools like Ansible, Puppet, or Chef. The daily or monthly checks that used to be done by an entire team can now be done every 15 minutes, 24/7, 365 days a year. We also get more than the “node up” type of insight you would get from a monitoring platform. With configuration management you get fine-grained insight into the environment, and because you define the code yourself, the possibilities are almost limitless. If SQL isn’t configured according to our standards, we will get an alert and know about it. If Windows doesn’t have feature X installed, we will know about it and install it. Having this sort of power over the configuration of our environment gives us site reliability on a whole new level.
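To make that last point concrete, here is a minimal sketch using PowerShell Desired State Configuration as a stand-in for the configuration management tools mentioned above; the feature name and output path are placeholders, and the same idea applies to Ansible, Puppet, or Chef.

# Declare the desired state once; the Local Configuration Manager re-checks and enforces it on a schedule.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # If the feature is missing it gets installed; if it is already present, nothing changes.
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration to a MOF file and apply it.
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose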

Agility and Efficiency In Deploying Infrastructure

Back in the day, we did manual deployments of infrastructure, procuring server hardware for each application and deploying everything by hand. Nowadays, with virtual machines, we can move much more quickly, and most companies have some sort of automated VM deployment process. With Infrastructure as Code, the efficiency of our deployments is taken to the next level. We don’t have to maintain complex ad-hoc scripts that require hundreds of lines of code to deploy and configure a VM. With tools like Packer, we can define the creation and configuration of our VMware templates entirely in code and then work in a SecOps process, such as scanning those newly created templates with a vulnerability scanner to ensure we are deploying up-to-date templates. With an IaC tool like Terraform, we can define our VM deployment in under 100 lines of code and even turn our deployment code into a module that can be reused over and over again. Then, with Puppet, we can enforce the configuration on our VM and ensure that nothing changes unless we want it to.
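As a rough illustration of the deployment step that a tool like Terraform turns into declarative, version-controlled code, here is a hedged PowerCLI sketch of the same action; the vCenter address, template, cluster, datastore, and VM names are placeholders, and this is not a substitute for a real Terraform configuration.

# Deploy a VM from an existing template; in an IaC workflow this step lives in source-controlled code.
Connect-VIServer -Server 'vcsa.lab.local'

New-VM -Name 'web01' `
       -Template (Get-Template -Name 'win2019-template') `
       -ResourcePool (Get-Cluster -Name 'Prod-Cluster') `
       -Datastore (Get-Datastore -Name 'vsanDatastore')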

Disaster Recovery and Migration

Migrating or rebuilding infrastructure that has already been defined in code quickly becomes a trivial task. The code for the system is already there; you just have to re-deploy it and, poof, everything is there again. We no longer have to mount a Windows ISO or find the golden template to deploy from, because it is already defined in the code. This can be extremely powerful, and a great real-life example is a company in California that was using Puppet. They had a natural disaster in one of their data centers due to a forest fire and had to evacuate resources elsewhere. It turns out they were able to rebuild everything in another data center in under an hour because they had defined the resources in code with Puppet.

Change Tracking

Because we are storing our configuration and infrastructure code in source control, we now get the ultimate benefit of source control: change tracking. During an outage, everyone is scrambling to find out what has changed. With IaC, we can see in detail every change that was made to the system. Not only is change tracking on infrastructure great for troubleshooting, it’s also amazing for rolling back a system. Imagine making a devastating change to 100 nodes; with IaC, you just revert the code and redeploy it. This can be a lifesaver for companies that lose thousands during an unplanned outage.
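Assuming the infrastructure code lives in a Git repository, the rollback workflow described above could look like the following sketch; the commit hash and branch name are hypothetical, and the final step depends on whichever IaC tool re-applies the code.

# Identify and revert the offending change in source control (hash and branch are placeholders).
git log --oneline -5
git revert 3f2a9c1 --no-edit
git push origin main
# Re-run the IaC pipeline (Terraform apply, Puppet run, etc.) so the reverted definition is enforced again.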

Remove Skill Set Silos

Once the installation and configuration of an application have been defined in code, we no longer need to depend on specific teams or employees who specialize in installing and configuring that software. If we automate the deployment of SQL or Citrix, anyone on the team can deploy it with the tools, and it will be configured the same way every time. The person who used to do all the SQL installs can now focus on other things that provide more value to the company.

Process Synergy for Hybrid Cloud or Future Cloud Endeavors

IaC is now the recommended way to manage cloud environments like AWS, Azure, and GCP. Because IaC tools like Terraform are cloud-agnostic, the process for deploying and managing infrastructure doesn’t change regardless of where the infrastructure is housed. Don’t get this part confused: the process won’t change, but the code that gets deployed will, as there are too many differences between on-premises and the various clouds for one configuration to be reused across all of them. But if the team is already familiar with Terraform and how to deploy and manage configurations, they will have an extremely easy time with hybrid cloud or any future cloud adoption. Packer, another cloud-agnostic IaC tool, automates the creation of virtual machine images across VMware, AWS, Azure, and GCP. The great part is that with Packer, an image can be created and deployed to each of these environments at the same time!

Gain Time Back

Investing in IaC makes the team more efficient and gives employees their time back. Because of the increased site reliability, employees aren’t getting the weekend phone calls. They are also no longer performing tedious tasks like clicking through application installs, spot-checking systems, or running reports. Instead, engineers can focus on being innovative and working on projects that provide more benefit to the company.

How to Get Started with Infrastructure as Code

Now that we’ve gone over some of the benefits of IaC with VMware, where do we start? First, we need to look into some of the toolsets out there. The typical strategy at the moment is to use an orchestration tool like Terraform or Pulumi combined with a configuration management tool like Ansible, Puppet, Chef, or SaltStack. The orchestration tool does the high-level provisioning of the infrastructure components, while the configuration management tool handles their configuration and enforcement. Each tool has its pros and cons, and which one is best really depends on your environment. Most IaC tools are open source, so getting started and doing some hands-on testing is pretty easy. I recommend taking it one step at a time: start by deploying something simple and evolve from there. If you’re interested in playing around with Terraform, be sure to check out our article on how to get started with Terraform on VMware. IaC has a big role to play in modernizing your on-premises infrastructure; done correctly, it provides benefits like infrastructure agility and deserves a serious look from any VMware administrator.

 

The post 7 Benefits of Adopting Infrastructure as Code for VMware appeared first on Altaro DOJO | VMware.

The Lazy Admin’s Guide to Site Recovery Manager and PowerCLI https://www.altaro.com/vmware/site-recovery-manager-powercli/ https://www.altaro.com/vmware/site-recovery-manager-powercli/#comments Fri, 29 Nov 2019 10:26:15 +0000 https://www.altaro.com/vmware/?p=20053 Site Recovery Manager is VMware's disaster recovery orchestration tool and while there is an HTML5 management UI, you can be LAZY and automate everything...


Most administrators know that PowerCLI is intuitive and easy to use in most cases, and we’ve covered a great many PowerCLI topics on the Altaro VM Blog. However, we’ve only covered the traditional vSphere datacenter operations and some of the newer Infrastructure as Code capabilities that exist on the market today. There are also modules for many other VMware products, like Horizon View, NSX, vROps, and Site Recovery Manager (SRM). I figured we could cover SRM in today’s post. Site Recovery Manager is VMware’s offering for disaster recovery orchestration; SRM is the brain, while the muscles are vSphere Replication and array-based replication.

First, a question: why and when would you use Site Recovery Manager? It really comes down to a few points.

  1. You would use SRM if you only need basic replication capabilities
  2. Your backup/DR provider has no in-box replication capabilities, such as Altaro’s WAN-Optimized Replication

To many, it makes sense to have the backup/DR provider’s software take care of replication and DR, but if your provider lacks those capabilities, or you want to de-couple DR from the backup/recovery process for some reason, SRM may be an option.

As you can see in the image below, SRM is designed to replicate VMs from one site to another.

Site Recovery Manager

While there is an HTML5 management UI you can use to manage the solution, what if you managed the solution with PowerCLI? What if you wanted to be LAZY and automate everything? I chose to cover this module in particular because it differs from the core ones as it only provides cmdlets to connect and disconnect SRM servers. After that everything is done via the exposed API with PowerShell methods and properties. It is a more complicated approach but remains a great exercise to improve your skills and understanding of PowerCLI, and once you’ve learned it, you can take the Lazy-Admin approach and use your automation skills to do the work for you!
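If you want to verify that for yourself, a quick way is to list the cmdlets exposed by the SRM module; the module name below is the one shipped with recent PowerCLI releases and may differ on older installations.

# List the cmdlets in the SRM module; only the connect/disconnect pair is exposed, the rest is API-driven.
Get-Command -Module VMware.VimAutomation.Srm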

PowerCLI

Let’s start with some resources and documentation!

NOTE: If you need more preliminary info on PowerCLI in general, we have a great eBook on the topic here.

SRM Help and Documentation

API developer’s guide

You can find the official documentation about the SRM SDK in the API developer’s guide for SRM, which I recommend you save somewhere if you plan on working with SRM in PowerCLI. It is a great source of information, as it presents the data structure clearly with extended information about each object and what can be done with it.

SRM SDK

For some reason, when I checked at the time of this writing, VMware seemed to have locked down access to some of the documentation, so you might need to request access to it.

Community sample code

Because this module is harder to use, since everything is done through the API, VMware offers community-made SRM cmdlets for download. These let people get started without having to go down the rabbit hole of writing their own functions for everything. You will find the cmdlets and documentation here.

While this is a great addition for those who don’t want to bother with the API, we are here to learn how SRM works with PowerCLI! So, let’s get to it!

Connecting to Site Recovery Manager with PowerCLI

Permissions

Unless the user has the ‘Administrator’ role applied, the account you use to connect to SRM needs the Administration > Access Control > Global Permissions rights on both vCenter Servers. If you want to be more restrictive, or if you want a dedicated user for this purpose, you can assign one of the built-in SRM roles.

Connection

Since SRM 6.0, the Platform Services Controller (PSC) and vCenter Servers are associated with the local and remote SRM instances, meaning you can easily connect to SRM without specifying its address or name, as long as your session is connected to vCenter. Note that we are describing the most common way to connect to SRM; your mileage may vary according to your environment and needs. I’ve also included a home-made PowerShell function below that helps with the SRM connection as well.

Standard method

  • Connect to the vCenter server in the protected site (this makes for better scripting opportunities).
  • Store your SRM credentials in a variable for faster connection.
$Creds = Get-Credential
  • Connect to the protected and remote SRM instances. You need to specify credentials (as shown above), as the cmdlet doesn’t pick up your session credentials; a minimal sketch of this step is shown below.
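The steps above don’t show the exact connection commands, so here is a hedged sketch of what they could look like; the vCenter address is a placeholder, and parameter availability can vary between PowerCLI versions.

# Connect to the vCenter Server of the protected site first (placeholder address).
Connect-VIServer -Server 'vcenter-protected.lab.local' -Credential $Creds

# Connect to the local SRM instance and authenticate against the remote site in one call.
Connect-SrmServer -Credential $Creds -RemoteCredential $Creds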

Home-made function

Below is a small function that makes connecting to SRM slightly quicker and easier. You only have one set of credentials to specify, and the user field is already populated with your current session username. You can already get a taste of how the SRM module works by looking at how the remote site is connected using the LoginRemoteSite method of the ExtensionData property.

Function Connect-SRMPair {
    param(
        # Prompt for credentials, pre-filled with the current session username.
        [PSCredential]$Credentials = (Get-Credential $env:username)
    )
    Write-Host "Connecting to local SRM server..." -ForegroundColor DarkCyan
    # Connect-SrmServer returns an SrmServer object on success, so it can be used as a condition.
    if (Connect-SrmServer -Credential $Credentials) {
        Write-Host "Connecting to remote SRM server..." -ForegroundColor DarkCyan
        # Authenticate against the paired (remote) site through the exposed API method.
        $DefaultSrmServers.ExtensionData.LoginRemoteSite($Credentials.UserName, $Credentials.GetNetworkCredential().Password, $null)
    }
}

$DefaultSrmServers

Just as when you connect to vCenter, once you have established a connection with your SRM instance, a new global variable called $Global:DefaultSrmServers is created, and it is the starting point for interacting with SRM. All the properties, sub-properties, and methods associated with SRM are stored in the ExtensionData property of the $DefaultSrmServers variable. Let’s run a Get-Member command on it to see what’s inside:

$DefaultSrmServers.extensiondata | Get-Member

Notice that the property is of type SrmServiceInstance and contains a number of properties and methods that we will describe to make it easier to understand.

Properties

You can probably already tell where this is going by looking at the last three properties in the Get-Member output:

  • Protection: Management of everything related to the protection groups, VMs, datastores…
  • Recovery: Management of everything related to recovery plans.
  • Storage: Reserved to run devices discovery in Array-Based Replication (ABR) environments.

Those are really the three properties in the $DefaultSrmServers variable. The properties of virtual machines, protection groups, etc. will come as output of the various methods.

Below is an extract of the API developer’s guide we mentioned earlier that shows a tree view of the SRM methods where you find the same three properties pointing to all the methods they offer. For instance, if you run the DiscoverDevice method on the SrmStorage object, you will get a DiscoverDevicesTask object on which you can run the GetDiscoverDevicesTaskFailures and the IsDiscoverDevicesTaskComplete methods.

SRM storage protection recovery

How to use SRM with PowerCLI

Now that we know more about the structure of the SRM module, we can start working with it and retrieve information about the environment. We will use a fairly easy example to demonstrate how it works and we will do it step by step. The objective is to return the VM objects protected by a specific protection group.

You can then explore the module according to your operational needs.

1 – List all protection groups

You can quickly find in the chart where you have to go to list the protection groups. You can confirm this by running Get-Member (alias gm) on $DefaultSrmServers.extensiondata.Protection to inspect it.

If you look at the definition field of the Get-Member output you will see that no argument is needed, which is usually the case with List commands.

  • Let’s run that command to see what we get. We start by storing the output in a variable so we can easily work with it later.
$PG = $DefaultSrmServers.extensiondata.Protection.ListProtectionGroups()

As you can see at first glance we only get the MoRef IDs of the protection groups.

  • Explore the first protection group with Get-Member.

NOTE: When exploring in PowerShell or testing in general, I usually do it on one object to start with and then move on to the whole list.

$PG[0] | Get-Member

Just like the chart showed us earlier, we find here a number of methods that offer to retrieve information about the protection group. In this example, we want to display the name of the protection group.

  • Invoke the method GetInfo on the protection group object.
$PG[0].GetInfo()

Type “san” stands for Array-based replication (ABR).

  • Run the method on all protection groups.
$PG.GetInfo()

We can now list our protection groups which is a good start. In the next step, we will list the VMs of a protection group based on its name.

2 – List the VMs in a protection group based on its name

In the previous example, we demonstrated how to obtain the name of a protection group. We will now use this information to filter one protection group that we want to inspect.

  • Filter the protection group based on its name and store it in a variable.

Note that it supports wildcards with the -like or -match operators.

$TestPG02 = $PG | where {$_.GetInfo().name -eq "Test-PG-02"}

  • Use the ListProtectedVMs method on the protection group and store it in a variable.

As specified in the API guide, the ListAssociatedVms method that you see in Get-Member is only for vSphere Replication; it does not apply to array-based replication (ABR).

$PG02VMs = $TestPG02.ListProtectedVms()

There is a small gotcha here. You get some interesting information about the protected VM, but the output is slightly disappointing because SRM works with IDs rather than human-readable data. The VM property contains a virtual machine managed object with the bare minimum of properties, not even the name.

Again we are working with the first record for cleaner output.

  • Convert this managed object to an inventory object (VM) by using the Get-VIObjectByVIView cmdlet.
$VM = Get-VIObjectByVIView $PG02VMs[0].vm

We now have a proper VM object. We can even compare the object types to see the difference.

Obviously, you can now run the command against all the VMs in the protection group to get the full VM list.
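As a quick illustration, here is a hedged one-liner that converts every protected VM in the group at once; the variable names follow the ones used above, and the selected properties are standard VM object properties.

# Convert all the managed objects in one go and keep a few useful properties.
$PG02VMList = Get-VIObjectByVIView $PG02VMs.vm | Select-Object Name, PowerState
$PG02VMList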

3 – Automate the process in a function

We proved that we can obtain VM objects based on the protection group they belong to. However, it was a rather cumbersome operation that you don’t want to repeat every time, which makes it a perfect opportunity to write a function and end up with a nice, parameterized cmdlet.

There is not much else to explain here as we demonstrated every command so I will just put the function below for you to inspect.

Note 1: I used the verb Get because List is not an approved PowerShell verb.

Note 2: This function assumes that you have an active session with an SRM instance.

Note 3: This function only takes one protection group for the sake of the example. Working with multiple protection groups would require returning one object per protection group with their respective lists of VMs as a property of type array.

Function Get-ProtectedVMs {
param(
    [string]$ProtectionGroupName
)
# Checking if connected to an SRM instance.
if (!$DefaultSrmServers.IsConnected) {Write-Warning "Not connected to SRM instance"; break}
# Retrieving the protection group object by name.
$PG = $DefaultSrmServers.extensiondata.Protection.ListProtectionGroups() | Where {$_.GetInfo().name -eq $ProtectionGroupName}
# Retrieving the list of VMs as managed objects.
$PGVMs = $PG.ListProtectedVms()
# Converting managed objects to inventory objects and returning the output.
Get-VIObjectByVIView $PGVMs.VM
}

The function returns standard VM inventory objects for the VMs in the specified protection group; an example call is shown below.
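The protection group name used here is the illustrative one from the earlier steps.

# Example usage of the function above (the protection group name is illustrative).
Get-ProtectedVMs -ProtectionGroupName 'Test-PG-02'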

Wrap-Up

Again, working with SRM in an automated fashion will allow you to use it more quickly and effectively in the future. While what we covered above was very example-driven, you can use these same steps in your own environment to begin working with SRM and PowerShell.

Also, if you’re interested in learning more about working with PowerCLI and VMware, check out our free eBook on the subject!


Also if you have any questions, comments, or feedback, do let us know in the comments section below!

Thanks for reading!

The post The Lazy Admin’s Guide to Site Recovery Manager and PowerCLI appeared first on Altaro DOJO | VMware.
