4 Reasons Why MSPs Should Be Using Containers

I recently co-hosted an AMA-style webinar with Ben Armstrong from the Hyper-V team at Microsoft. The AMA was centered specifically on containers and Docker in the Microsoft ecosystem, and it was a fantastic, insightful session. If you have the time, I highly recommend you give it a watch here.

Once the webinar was over and I was fashioning a new blog post out of the numerous questions we got throughout the AMA, I started thinking: what about containers for MSPs? The last time I worked directly for an MSP was in 2015, and back then containers were just starting to become a thing at all, let alone the big industry buzzword that they are today. I’ve found over the last several months that containers are NOT just another industry buzzword; they can have a very real impact on the businesses that use them. Nor are they something reserved solely for developers anymore. For example, I remember attending a session at Microsoft Ignite last year where MetLife talked about how they achieved massive infrastructure improvements by moving a chunk of their applications from virtual machines to containers. This was not a developer-only process; IT pros played just as large a part in the endeavor. Another thing that struck me about the session was that I was watching what VMs did to physical devices all over again, only this time VMs were on the losing end of the battle!

Don’t get me wrong, VMs are going to be around for a long time to come, but containerization is here NOW and getting better all the time. So, if I put my MSP hat back on, do containers make sense for the MSP? The answer, to me, is a resounding YES! Let’s list the reasons why…

1. Density

The entire MSP business model thrives on the idea of doing as much as possible with as little as possible. VMs have been great for years, as they’ve allowed us to make better, more efficient use of hardware. However, we now live in a very cloud-centric world, and we’re finding out that the VM model doesn’t work as well when you simply lift and shift it into the cloud or a privately hosted datacenter. There is a LOT of wasted resource at the operating-system layer in datacenters today. How many instances of Windows are running in your datacenter? How much memory is that consuming? How much CPU? Storage? It adds up, and it adds up quickly, especially if you’re hosting customers in your own datacenter. Even worse, what if you’re paying for colocation or hosting services in someone else’s datacenter? Wouldn’t you want to get as much as you can out of that investment?

Containers lend themselves really well to solving this problem by removing chunks of the OS from the equation. Containerization like this can be thought of as “kernel virtualization”: while each workload is unique, parts of the host kernel are shared with the container to improve efficiency and speed. Each container still requires a “container image”, but those images are optimized for containerized workloads and are heavily stripped down. Newer versions of the Windows Server Core image and the Nano Server image are 3.5GB and 230MB in size respectively! That’s a MASSIVE improvement over previous image sizes! So not only are you saving resources from a CPU/memory perspective due to kernel sharing, you are also saving quite a bit on storage versus what we’ve had with traditional VMs to date.
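
To get a feel for this, here’s a quick sketch of pulling one of these stripped-down base images and checking its footprint with the standard Docker CLI. Treat the image name and tag as examples; the exact names and tags vary by Windows release:

```
# Pull a stripped-down Windows base image (tag varies by Windows release)
docker pull mcr.microsoft.com/windows/nanoserver:1809

# List local images along with their on-disk sizes
docker images
```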

[Image: Container Architecture]

One good example of this stemmed from a question we received during the AMA with Ben: “How many containers could you run on a server with 4 CPUs and 16GB of memory?” The answer was roughly 40. With each one of those containers hosting some sort of workload, that’s MUCH higher density than you’d get with VMs.

2. Multi-Platform

Ever have a VM running on VMware or Xen that you wanted to simply move over to Hyper-V, or vice-versa? Sure, you can do a V2V conversion, but that takes time and potentially means downtime. Containers are truly multi-platform: you can move them from one platform to another with ease. With Docker at the core of most container technologies out there, you can basically run a container anywhere Docker is present (see the sketch after the list below). This includes:

  • Windows Server
  • Windows Client
  • Linux OSs
  • VMware vSphere
  • and much, much more
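
As a rough sketch of what that portability looks like in practice, the workflow below pushes an image to a registry from one Docker host and runs it on another. The image and registry names here are hypothetical placeholders:

```
# On the source host: tag the image and push it to a registry
docker tag myapp:1.0 myregistry.example.com/myapp:1.0
docker push myregistry.example.com/myapp:1.0

# On any other Docker host: pull the same image and run it
docker pull myregistry.example.com/myapp:1.0
docker run -d --name myapp myregistry.example.com/myapp:1.0
```

The one caveat to keep in mind is that the image has to match the host OS family, but the Docker commands themselves are identical everywhere.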

This flexibility is extremely important for MSPs. Your customers depend on you to keep them agile in today’s rapidly changing IT ecosystem, and with business interests so closely tied to IT, this is one way that you can help ensure that your customers maintain as much flexibility as possible.

3. Multi-Cloud

What is considered a cloud today? Only Azure, AWS, and the other big players? Not necessarily. The “cloud” can also encompass private clouds hosted on-prem and service-provider clouds. Basically, for the purposes of this discussion, a “cloud” is a supported collection of computing resources that can be used to host a workload, or, more specifically to this topic, a place to run containers. Building on the multi-platform topic above, I’ll go on to say that containers are also multi-cloud. This extends not only to public cloud platforms such as Azure and AWS, but also to your own privately hosted clouds.

Have a customer that wants to start on-prem and then slowly move things up into Azure? No problem. Container service architecture makes that very doable, and many cloud vendors today support container hosting, including:

  • Microsoft Azure
  • Amazon Web Services
  • Google Cloud Platform
  • Service-provider and private clouds running their own container hosts
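
As a simple illustration of that on-ramp, here’s a hedged sketch of taking the image from the earlier example and running it in Azure Container Instances with the Azure CLI. The resource group, names, and sizing are placeholders, and a private registry would additionally need credential flags:

```
# Run the same container image as a hosted instance in Azure
az container create \
  --resource-group MyMSP-RG \
  --name myapp \
  --image myregistry.example.com/myapp:1.0 \
  --cpu 1 \
  --memory 1.5
```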

Factor in that platform flexibility along with the added capabilities that container services on these platforms offer, and you have the ability to run basically any workload, anywhere, at any time. That is something that will set you apart from the MSP competitors in your market.

4. Quality of Life Improvements for Patching

Ask any MSP out there to list their top 5 pain points for managing customers, and patching will likely fall within that list. Patching issues range from the amount of time it takes to roll out a patch to troubleshooting bugs and failures. Containerization has some added benefits in this space. Because containers are ultra-lightweight and contain only the components necessary to run the contained workload, they boot quickly and are highly mobile. As a patching example, due to their containerized nature, you could have one container with the x.5 version of an application and another with the x.6 version. Let’s say you put x.6 into place and, despite proper testing, it still manages to break something. You simply turn down the x.6 container and put the x.5 one back into place.
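
To make that concrete, here’s a minimal sketch of that rollback using plain Docker commands, with `myapp:1.5` and `myapp:1.6` standing in for the x.5 and x.6 versions above. The names and ports are hypothetical:

```
# Deploy the new 1.6 version of the application
docker run -d --name myapp-v16 -p 8080:80 myapp:1.6

# Something breaks despite testing? Turn down 1.6...
docker stop myapp-v16

# ...and put the known-good 1.5 container back into place
docker run -d --name myapp-v15 -p 8080:80 myapp:1.5
```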

Some might say that by utilizing snapshots and checkpoints with VMs we could do something similar, but while snapshots are tied to the VM they’re associated with, each container in the above example is completely independent. Maybe after noticing the failure of x.6 you want to move it back to your test environment. You can do that easily with a container, and not so easily with a VM; it’s possible, but requires a bit more work.
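
A hedged sketch of that move might look like the following: capture the failed container’s state as an image, export it to a file, and load it on a test host. The container, image, and file names are placeholders:

```
# Capture the current state of the failed container as a new image
docker commit myapp-v16 myapp:1.6-failed

# Export the image to a portable tar archive
docker save -o myapp-1.6-failed.tar myapp:1.6-failed

# Copy the archive to the test host, then load and run it there
docker load -i myapp-1.6-failed.tar
docker run -d --name myapp-debug myapp:1.6-failed
```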

What About Downsides?

So many of you will ask, “Gee Andy, this all sounds great, but there have to be some downsides, right? It sounds too good to be true!”

While yes, it does sound amazing, there are some caveats, just like with any new technology. That said, I think you’ll find these aren’t that bad and can be worked around with proper planning.

Isolation – I mentioned earlier that containers share part of the running host’s kernel. This can be a cause for concern in security-centric organizations and hosting companies, which need to ensure complete separation between workloads; a standard Windows container will not provide that. To address this concern, Microsoft has introduced Hyper-V containers. Essentially, the container runs in the same kind of environment, but inside a stripped-down Hyper-V VM with its own running kernel. There is, of course, a resource trade-off here, and Hyper-V containers are licensed like VMs from a Microsoft perspective, so save these types of containers for when you REALLY need them.
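
On Windows hosts that support it, switching to Hyper-V isolation is just a flag on `docker run`, so you can reserve the heavier isolation for the workloads that truly need it. The image name below is a placeholder:

```
# Run a standard Windows Server container (shared host kernel)
docker run -d --name myapp-std myapp:1.6

# Run the same image as a Hyper-V container (dedicated kernel in a utility VM)
docker run -d --name myapp-hv --isolation=hyperv myapp:1.6
```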

[Image: Hyper-V Container Architecture]

Learning Curve – Just like any new technology, there is going to be a learning curve, and it seems to be a bit steeper with containers. This is primarily because they are such a massive departure from VM architecture that it really takes some time to wrap your head around how they work. The good news is that there are plenty of resources out there, starting with Microsoft’s own container documentation, to get your technical team started on their journey with containers.

Not every workload is a fit for a container – This is true of any solution; nothing is the be-all and end-all answer for every need, and that applies to containers as well. Your application will not run inside a container if:

  • It requires a GUI – containers are intended to be headless
  • It requires an older version of Windows – Windows containers require Windows Server 2016 or newer

Wrap-Up

Hopefully, it’s become clear that containers could be a big help to your MSP. They’ll allow you to maintain that market edge and keep a one-up on your competition when it comes to service delivery; you simply need to have your team take some time to learn how containers work and where they fit.

Have you already been looking into container services as an offering? Does your tech team have any concerns about this new technology? Let us know in the comments section below! We’d love to hear from you!

Thanks for reading!
