Understanding Backup and Recovery with CSV Disks for Failover

If you have deployed a Windows Server Failover Cluster in the past decade, you have probably used Cluster Shared Volumes (CSV).  A CSV is a type of shared cluster disk that multiple nodes can write to simultaneously, with those writes coordinated to avoid disk corruption.  It was not an easy journey for CSV to become widely adopted as the recommended disk configuration for clustered virtual machines (VMs) and Scale-Out File Servers (SOFS).  The technology faced many challenges to keep up with the constantly evolving Windows Server OS, its File Server role, and industry storage enhancements.

Software partners, particularly backup and antivirus providers, continually struggled to support the latest versions of CSV.  Now Cluster Shared Volumes and its partner ecosystem are thriving as millions of virtual machines worldwide use this technology.

This blog post will provide an overview of how CSV works so that you can understand how to optimize your backup and recovery process.

Virtual Machine Challenges with Traditional Cluster Disks

When Windows Server Failover Clustering was in its infancy, Hyper-V did not yet exist.  Clusters were usually small in size and hosted a few workloads.  Each workload required a dedicated disk on shared storage, which was managed by the host that ran the workload.  If a clustered application failed over to a different node, the ownership of that disk also moved, and its read and write operations were then managed by that new host.

However, this paradigm no longer worked once virtualization became mainstream as clusters could now support hundreds of VMs. This meant that admins needed to deploy hundreds of disks, causing a storage management nightmare.  Some applications and storage vendors even required a dedicated drive letter to be assigned to each disk, arbitrarily limiting the number of disks (and workloads) to 25 or fewer per cluster.

While it was possible to deploy multiple VMs and store their virtual hard disks (VHDXs) on the same cluster disk, this meant that all of those VMs had to reside on the same node.  If one of the VMs had to fail over to a different node, its disk had to be moved and remounted on the new node.  Every VM whose virtual hard disk lived on that disk had to be saved and moved along with it, causing downtime (this was before the days of live migration).  Cluster Shared Volumes (CSV) was born out of necessity to support Hyper-V.  It was an exciting time for me to be on the cluster engineering team at Microsoft to help launch this revolutionary technology.

Cluster Shared Volumes (CSV) Fundamentals

Cluster Shared Volumes were designed to support the following scenarios:

  • Multiple VHDs belonging to different VMs could be stored on a single shared disk.
  • The VMs could simultaneously run on any node in the cluster.
  • All Hyper-V features would be supported, such as live migration.
  • Disk traffic to VHDs could be rerouted across redundant networks for greater resiliency.
  • A single node would coordinate access to that shared disk to avoid corruption.
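
To make this concrete, here is a minimal PowerShell sketch of how a clustered disk becomes a CSV disk and how the resulting volumes appear.  It assumes the Failover Clustering PowerShell module is installed on the node, and the disk name "Cluster Disk 1" is a hypothetical placeholder.

    # Convert an available clustered disk into a Cluster Shared Volume
    # ("Cluster Disk 1" is a placeholder for your clustered disk resource name)
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # List all CSV disks; each one is exposed on every node as a mount point
    # under C:\ClusterStorage
    Get-ClusterSharedVolume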

Even with the emergence of this new technology, there was still an important principle that remained unchanged – all traffic must be written to the disk in a coordinated fashion.  If multiple VMs write to the same part of a disk at the same time, it can cause corruption, so write access still had to be carefully managed.  The way that CSV handles this is by splitting storage traffic into two classes: direct writes from the VM to blocks on the disk and file system metadata updates.

Metadata traffic is any operation that changes the structure of the file system or the identifiers of the blocks of data on disk, such as:

  • Starting a VM
  • Extending a file
  • Shrinking a disk
  • Renaming a file path

Any changes to the disk’s metadata must be carefully coordinated, and all applications writing to that disk need to know about this change.

When any type of metadata change request is made, the node coordinating access to that disk will:

  1. Temporarily pause all other disk traffic.
  2. Make the changes to the file system.
  3. Notify the other nodes of the changes to the file system structure.
  4. Resume the traffic.

This “coordinator node” is responsible for controlling the distributed access to a single disk from across multiple nodes.  There is one coordinator node for each CSV disk, and since coordinators require additional CPU cycles to process and manage all of this traffic, they are usually balanced across the cluster nodes.  The coordinator is also highly available, so it can move to any healthy node in the cluster, just like any other clustered resource.
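
If you want to see which node is currently coordinating each CSV disk, or move that role by hand, a short PowerShell sketch like the following should work from any cluster node (the disk and node names are hypothetical):

    # Show the coordinator (owner) node for each CSV disk
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State

    # Manually move the coordinator role for one CSV disk to another node
    Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "Node2"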

Data traffic, on the other hand, is simply classified as standard writes to a file or block, known as Direct I/O.  As long as a disk does not incur any metadata updates, the location of each VM’s virtual hard disk (VHDX) on the shared disk remains static.  This means that multiple VMs can write to multiple VHDs on a single disk without the risk of corruption, because they are always writing to separate parts of that same disk.  Whenever a metadata change is requested (a VHDX size increase, for example), all the VMs will:

  1. Pause their Direct I/O traffic.
  2. Wait for the changes to the file system to complete.
  3. Synchronize their updated disk blocks for their respective VHDs.
  4. Resume their Direct I/O traffic to the new location on the disk.

Another benefit of using CSV is increased resiliency in the event of a transient failure of the storage connection between a VM and its VHD.  Previously, if a VM lost access to its disk, it would fail over to another node and then try to reconnect to the storage.  With CSV, it can instead reroute its storage traffic through the coordinator node to its VHD, known as Redirected I/O.  Once the connection between the VM and its VHD is restored, traffic automatically reverts back to Direct I/O.  This rerouting process is significantly faster and less disruptive than a failover.
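
On Windows Server 2012 and later, you can check from PowerShell whether each node is currently accessing a CSV disk via Direct I/O or some form of redirected I/O, which is useful when diagnosing unexpected performance drops:

    # Show the current CSV access mode (Direct, FileSystemRedirected, or
    # BlockRedirected) for every volume on every node
    Get-ClusterSharedVolumeState | Format-Table Name, Node, StateInfo -AutoSize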

The Cluster Shared Volumes feature has since expanded its support from only Hyper-V workloads to also providing distributed access to files with a Scale-Out File Server (SOFS) and certain configurations of SQL Server; however, the details of these are beyond the scope of this blog.

Backup and Recovery using Cluster Shared Volumes (CSV)

There are different ways in which you can effectively back up your VMs and their VHDs from a CSV disk.  You can use either a Microsoft backup solution, such as Windows Server Backup or System Center Data Protection Manager (DPM), or a third-party solution like Altaro VM Backup.  Beyond CSV being supported only for specific workloads and requiring either the NTFS or ReFS file system, there are few additional restrictions placed on the backup.  When a backup is initiated on a CSV disk, the backup requestor will locate the coordinator node for that particular disk and manage the backup or recovery from there.
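
Before configuring backups, it can be worth confirming the file system and coordinator node of each CSV disk.  The following PowerShell sketch assumes the SharedVolumeInfo property exposed by the Failover Clustering module, which surfaces the underlying partition details:

    # Summarize each CSV disk: its coordinator node, mount path, and file system
    Get-ClusterSharedVolume | ForEach-Object {
        [PSCustomObject]@{
            Volume      = $_.Name
            Coordinator = $_.OwnerNode
            Path        = $_.SharedVolumeInfo.FriendlyVolumeName
            FileSystem  = $_.SharedVolumeInfo.Partition.FileSystem
        }
    }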

Volume-Level Backups

Taking a copy of the entire CSV disk that contains your virtual machines is the easiest solution, yet it is an all-or-nothing operation: you must back up (and recover) all the VMs on that disk at once, so VM-specific backups are not possible.  You may be able to successfully back up a single VM on the CSV disk, but you will see an error in the event log because this is technically unsupported.
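
Because a volume-level backup captures everything on the disk, it helps to know exactly which VMs will be included.  A rough PowerShell sketch along these lines can list them, assuming a hypothetical CSV mount point named Volume1 (run it on each Hyper-V node, or add -ComputerName to query nodes remotely):

    # List every local VM whose virtual hard disks live on a given CSV volume,
    # so you know what a volume-level backup (and restore) will include
    Get-VM | Where-Object {
        $_.HardDrives.Path -like "C:\ClusterStorage\Volume1\*"
    } | Select-Object Name, State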

When the backup request is initiated, the coordinator node will temporarily suspend any metadata updates and take a copy of the shared disk, although Direct I/O is still permitted.  The backup is crash-consistent, so it should always be recoverable.  However, since it does not verify the state of the VMs, it is not application-consistent, meaning that each VM will be restored in exactly the same state it was in when the backup was taken.  If a VM had crashed or was corrupt on the disk, it will be in the same bad state upon recovery.

Application-Level Backups

A better solution is to install your CSV-aware backup software (VSS writer) on each node, which allows you to back up and recover a specific VM.  If the VSS writer is not present on a node, the coordinator resource can be moved to a different node that has the VSS writer to initiate the backup.  During the backup, the coordinator node will suspend any metadata updates to the disk until the backup is complete.  This allows for a backup that is both application-consistent and crash-consistent.
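
As a quick sanity check, you can confirm that the Hyper-V VSS writer is registered on a node before relying on per-VM backups.  The wbadmin line below is a hypothetical example of a single-VM backup with Windows Server Backup; the VM name and backup target are placeholders, and your backup product will typically drive the VSS writer for you instead:

    # Confirm that the Hyper-V VSS writer is registered on this node
    vssadmin list writers | Select-String "Hyper-V"

    # Example: back up a single VM ("VM01" and the target share are hypothetical)
    wbadmin start backup -backupTarget:\\backupserver\vmbackups -hyperv:"VM01" -quiet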

It is also recommended that you restore any VM to the same node to maintain application consistency, although most vendors now support cluster-wide restoration.  Most backup software providers will also let you back up at the file level or at the block level, allowing you to trade off faster block-level backups against more granular file-level recovery options.  You should still be able to back up multiple VMs simultaneously, so application-level backups are generally recommended over volume-level backups.

In Summary

Now that you understand how CSV works, you can hopefully appreciate why it requires special consideration from your backup vendor to ensure that any backups are taken in a coordinated fashion.  Before selecting a backup provider, you should check that their solution explicitly supports Cluster Shared Volumes.  Next, make sure that you have the latest software patches for both CSV and your backup provider, and do some quick online research to see if there are any known issues.

Make sure that you test both backup and recovery thoroughly before you deploy your cluster into production with a CSV disk.  If you encounter an error, I also recommend looking it up online before spending extensive time troubleshooting it, as CSV issues are well-documented by vendors and Microsoft support.  Now you should have the knowledge to understand how CSV works with your backup provider.
