Hyper-V Storage Articles - Altaro DOJO | Microsoft Hyper-V blog
https://www.altaro.com/hyper-v
Hyper-V guides, how-tos, tips, and expert advice for system admins and IT professionals

Azure File Sync: End of the Road for Traditional File Servers?
https://www.altaro.com/hyper-v/azure-file-sync/ | Thu, 07 Oct 2021
Is Azure File Sync the ultimate data synchronizing tool for system admins? Learn what it does, how it works, and the pros and cons compared to a traditional file server.


In this article, I will focus on Azure File Sync, explaining the service and its use cases. I won't be able to cover everything about Azure File Sync, but I will include the most important points and point you to the documentation and additional resources for the rest.

What is Azure File Sync Service?

When I try to explain Azure File Sync, I normally start with: “Think of Windows Server Distributed File System Replication on drugs, or Office 365 OneDrive for servers.”

So, to understand Azure File Sync, we first need to understand its on-premises equivalent and its client equivalent.

Microsoft Office 365 OneDrive

With OneDrive, users can access and store files from various devices like Windows clients, mobile phones and web browsers. The access is built to be easy and secure. Users can collaborate with others on files within an organization or outside of it, and they can share those files using the Microsoft Content Delivery Network. To access OneDrive you need either a web browser or the OneDrive client, which integrates into the operating system of your client or mobile device.

OneDrive is primarily built for User files like those you would classically put into a user fileserver home directory. It’s not meant to be used as a classic file share.

Distributed File System Replication

Distributed File System Replication, or DFS-R, is a role service within a Windows fileserver that was introduced in Windows Server 2008 and has been part of every Windows Server release since. DFS-R is the successor of the File Replication Service (FRS). It was built to replace FRS as the replication engine for DFS Namespaces as well as for the Windows Server Active Directory Domain Services (Windows ADDS) SYSVOL folder, which contains the Active Directory domain information for a domain and forest. You can enable DFS-R to replace FRS starting with the Windows Server 2008 domain functional level or later.

DFS-R is a pretty good Windows Service, sometimes a bit hard and clunky to set up and stabilize, but it does its job.

By comparison, you can think of Azure File Sync as a cloud-managed DFS-R service that uses an Azure Storage backend in addition to your local file servers.

How it works

Unlike Distributed File System Replication, Azure File Sync works with a sync client, much like OneDrive.

The client syncs all data to an Azure Storage Fileshare. Azure Storage acts as a central repository for all attached File Servers.

[Image: Distributed File System Replication]

You can use Azure File Sync to sync file shares and servers running Windows Server 2012 R2 and newer, including both fileserver clusters and stand-alone servers.

This gives you a great option to set up global file shares using DFS Namespaces and to add redundancy without the hassle of cluster hardware.

[Image: Azure File Sync syncing file shares and servers]

Along with the file shares and files, the Access Control Lists (ACLs) are also migrated between servers. If you only want to use a local fileserver, you do not need to do anything in addition, but there is also the option to use the Azure file share as a target directly, without a fileserver. That is a great scenario if you are already in Azure and want to save some money, or if you have a branch or datacenter very near an Azure region; the latency should be below 12 milliseconds.

[Image: Azure file share and Azure region]

There is only one small catch: Azure Files is based on Azure AD users, groups and permissions. Every user or computer that wants to use the file share must be hybrid joined or cloud synced. To achieve that, your Windows Active Directory must be synced with Azure Active Directory. A guide to configuring the hybrid connectivity can be found below.

Azure AD Connect sync: Understand and customize synchronization | Microsoft Docs

Here you can find more information about the File Sync and Identity integration.

Introduction to Azure File Sync | Microsoft Docs

Another feature worth mentioning is the cloud tiering and cloud-side backup possible with Azure Files and Azure File Sync.

Cloud Tiering

With cloud tiering, you enable a feature that caches only the most frequently accessed files on-prem on the local storage. Other files are transferred to Azure and only kept locally as a link, and those files can be downloaded on demand. You control how much data stays local by limiting the local disk space the cache is allowed to use on the fileserver.

For more information about cloud tiering, please visit the Cloud tiering overview.

You can also set up different tiering policies. You will find a detailed guide about these policies as well as the requirements here: Choose Azure File Sync cloud tiering policies | Microsoft Docs

Cloud-Side Backup

We all know the struggles of backing up a fileserver, especially when you want to avoid impacting users by dragging down fileserver performance during the backup.

Normally you only have a small timeframe to back up your file server, mostly outside business hours during the night. That brings up two issues:

    • If the backup fails, you will see it the next morning and you probably won’t have a backup from the past day.
    • As the fileserver grows, you can run out of time when backing up many changed or new files.

With Azure Backup combined with Azure Files and Azure File Sync, you avoid those issues. Azure Backup runs against the Azure file share, so it can run at any time without any performance impact on your on-premises fileservers.

It also enables a quicker restore. If you need to restore files, you can restore them on Azure and the files will be replicated to all connected fileservers automatically.

Implementation

I don’t want to go too deep into the implementation because there is a very detailed tutorial in the Microsoft Azure docs, but let me explain what you basically need (a minimal scripted sketch of the storage pieces follows the list):

    • Create an Azure storage account with an Azure file share and download the sync client
    • Install the client on your fileserver and connect it to the file share
    • If you want to implement a hybrid identity, implement and configure Azure AD Connect
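If you prefer to script the storage side instead of clicking through the portal, the sketch below shows roughly what step one could look like in Python with the Azure SDK (azure-identity, azure-mgmt-storage and azure-storage-file-share). The subscription ID, resource group, account name and share name are placeholders I made up for the illustration, the resource group is assumed to already exist, and registering the server and sync group still follows the Microsoft guide linked below.

```python
# pip install azure-identity azure-mgmt-storage azure-storage-file-share
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.fileshare import ShareClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-filesync-demo"          # placeholder, assumed to exist already
ACCOUNT_NAME = "filesyncdemo001"             # must be lowercase and globally unique
SHARE_NAME = "company-files"                 # placeholder

credential = DefaultAzureCredential()
mgmt = StorageManagementClient(credential, SUBSCRIPTION_ID)

# 1. Create a general-purpose v2 storage account to hold the Azure file share.
account = mgmt.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {"location": "westeurope", "kind": "StorageV2", "sku": {"name": "Standard_LRS"}},
).result()

# 2. Create the file share that Azure File Sync will later use as its cloud endpoint.
keys = mgmt.storage_accounts.list_keys(RESOURCE_GROUP, ACCOUNT_NAME)
conn_str = (
    "DefaultEndpointsProtocol=https;"
    f"AccountName={ACCOUNT_NAME};AccountKey={keys.keys[0].value};"
    "EndpointSuffix=core.windows.net"
)
ShareClient.from_connection_string(conn_str, share_name=SHARE_NAME).create_share()
print(f"Created file share '{SHARE_NAME}' in account '{account.name}'")
```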

As said, you will find all guidance needed following the link below.

Planning for an Azure File Sync deployment | Microsoft Docs

Use Cases

I don’t want to go through all the possible use cases but will concentrate on the most common ones.

Fileserver Replication

As already discussed above, the most common use case for Azure File Sync is to replace file replication between servers and branches, regionally or globally. Customers use Azure to cache and replicate their file storage, or to seed new fileservers from Azure Storage.

Azure File Sync replicates the files to Azure Storage and pulls changes back down when files change. The intelligence that manages duplicates and access is handled by the Azure File Sync service component in Azure.

[Image: Azure File Sync storage]

So, nothing really fancy and uncommon in that use case.

Reduction of Storage Cost for Branches and Datacenters

Another pretty common use case is to reduce storage costs in a datacenter or branch. The reasons are obvious: on-prem you normally pay around 23 cents per gigabyte in storage costs, while in the cloud it’s around 2 cents, plus maybe another 5 cents in bandwidth to access the files.

So it is a no-brainer that you want to replicate rarely accessed data to the cloud instead of keeping it on disks in your server.

To be honest, that scenario only makes sense when you can fulfil the following requirements:

    • You have cold data which can be uploaded to Azure
    • You have enough bandwidth to download cold files if they need to be accessed for some reason

If you want to access files in Azure live, without downloading them first, you should also fulfil the following requirement:

    • Low-latency access to the Azure region storing the data. I normally prefer a latency lower than 12 milliseconds; above that, you could run into issues.

That scenario has some great benefits:

    • Reducing storage, electricity, and operational costs
    • Optimizing your economic and ecological footprint
    • Reducing the space needed in the datacenter and branch for equipment
    • Freeing up resources for other operations or projects

You can also use Azure Backup or Altaro Backup to back up your files directly in the cloud.

Back up Azure file shares in the Azure portal – Azure Backup | Microsoft Docs

That optimizes resource usage of the hardware still on-prem, and you can perhaps move the fileservers onto hyperconverged systems such as Azure Stack HCI or others.

Azure Stack HCI solution overview – Azure Stack HCI | Microsoft Docs

The next scenario would be File Migration and Hardware replacement.

Fileserver Migration

In the past, I had customers who used Azure File Sync to migrate their fileservers to other hardware, virtual machines or locations. They simply set up Azure File Sync to sync all data over to Azure and then down to the new fileserver.

[Image: Fileserver migration]

After the migration, they simply remove the old fileserver along with the Azure File Sync configuration and agent.

This way they save license costs and the development time for migration tools and scripts, and they only pay Azure costs for the duration of the migration.

Fileserver Cleanup

For Fileserver cleanup, there are two options. The first one is to reduce the amount of storage used and the other is removing a Fileserver completely from a site.

Storage reduction

To reduce the amount of storage used on your fileserver, you basically use the cloud tiering feature we discussed before. There are two approaches to reduce the amount of storage:

    • Silent removal: You use the cloud tiering feature, keep your data on-prem, and wait for the tiering feature to kick in. The data is silently moved to Azure and removed from local storage depending on usage.
    • Big Bang: You replicate all data to Azure, connect Azure to a new server or folder on the server, and keep all data in Azure unless it is needed. If needed, the data is downloaded on demand.

Fileserver removal

As previously said, another common scenario is to remove the Fileserver completely from a branch or datacenter. Here you replicate the data to Azure and then disconnect the Fileserver as an endpoint. Afterward, you mount the fileshare directly from Azure to your clients using DFS-N or direct link and shut down the original Fileserver.

So Azure File Sync is a nice tool for shutting down traditional Fileservers.

Is it REALLY the Future then?

As we already learned in the previous blog about Azure Files, it might not be the right tool or service for everyone. If you fit into the limitations and scenarios, it’s an awesome service to work with. Azure File Sync removes the issues we have all faced, for example with DFS Replication timeouts or SMB transfers over wide area networks.

It also brings more value to an enterprise than tools like OneDrive or Dropbox, which are sometimes considered fileserver replacements. Those tools are great for personal data, but if you have classic applications that need SMB or NFS, I would still stick with fileservers.

It also adds security, as the sync traffic is encapsulated in TLS encryption.

Azure Storage Encryption for data at rest | Microsoft Docs

I would highly recommend going through the Microsoft documentation and building yourself a lab to test it.

Azure File Sync documentation | Microsoft Docs

The lab guide can be found here:

Tutorial – Extend Windows file servers with Azure File Sync | Microsoft Docs

Overall, it’s a great addition to every infrastructure engineer’s or administrator’s toolbox.

Your Windows Server Software-Defined Storage Questions Answered
https://www.altaro.com/hyper-v/windows-server-storage-faq/ | Fri, 27 Aug 2021
We answer community questions on Windows Server Storage including QUIC, Ceph support, ReFS, Intel Optane, and more!


For all IT professionals, storage infrastructure is very much at the core of the services we provide in our datacenters. That said, there seems to be an industry misconception that there has been little innovation in the Windows Server stack in terms of storage. Nothing could be further from the truth! There has been a TON of innovation in Windows Server storage, including things like Storage Spaces Direct, ReFS, SMB over QUIC, and lots more!

With all new technologies come questions. We recently had the opportunity to answer a number of Windows Server Storage-related questions during an exclusive webinar I hosted along with fellow Microsoft MVP Didier Van Hoye on the subject. You can find that list of questions further down the page. However, if you’re interested in watching our webinar on the subject, you can do so via the link below.

Before we get to the questions, however, I’d like to share a video that Didier and I recorded where we discuss some of the questions and expand on them with our own thoughts and additional detail.

Resources 

Is working with permissions in ReFS different than NTFS?

Nope! The process is pretty uniform across the two filesystems. You won’t notice any difference at all.

Andy showed his lab environment during the webinar. Can we get a lab breakdown?

Absolutely! In fact, we’ll be recording a dedicated video in the future specifically on this and what the process of setting up Storage Spaces Direct on Windows Server 2022 looks like. At a high level, this is what my lab environment looks like: 

2 Physical Nodes 

  – Server-class motherboard

  – 6-core Xeon CPUs

  – 32 GB of memory (64 recommended)

  – 4x 1TB spinning disks in each node for capacity tier storage

  – 2x 400GB NVMe devices in each node for read/write cache

  – 1x 250GB SSD for the host OS in each node

  – 1 quad-port 1Gbps Intel I350 NIC

  – Mellanox ConnectX-3 10Gbps NIC directly connected between the nodes

Can QUIC be used for printer sharing as well?

While it certainly is an interesting use-case, at this time QUIC cannot be used for printers.

Is Ceph supported on Hyper-V?

First of all, you should be asking yourself, why do you want to run Ceph with Hyper-V? There are certainly more native options that are better. However, if you already have Ceph in your environment and you want to leverage it for your VMs, you certainly can. You can use Ceph as an iSCSI target for your Hyper-V Hosts if you want, and there is even a method of using Ceph directly on Hyper-V Hosts (Link in Resources under the video above). If you want more details on this, Didier and I discuss it at length in the video.

Will SMB over QUIC be available for on-prem use cases?

While we don’t know if it will always be like this, currently SMB over QUIC is only available in Windows Server 2022 Azure Edition.

Are there currently any competitors to Intel Optane?

While there certainly are some smaller companies that are entering the space much of the technology in play is still highly experimental and is currently “struggling to get out of the lab” so to speak. For consumable persistent memory that you can use today, Intel is one of the only real options currently, and it comes at a premium price as well.

Are there any ReFS use cases for VMware VMs?

Not officially, no, and not in any way that is supported. Sure, you could use ReFS and Storage Spaces to host an NFS share for VMware, but that is in no way recommended or supported. Will it work? Probably. Is it supported? No.

How does cloud storage fit into this whole storage discussion?

Just like any technology, Cloud storage options are there to give you more options. Cloud storage is useful when you have workloads already living in the cloud or maybe you have people from multiple locations that need to access a unified repository. The cloud also lends itself well to backup storage. Didier and I discuss this in more detail in the video.

How do I get started learning all this storage stuff? There are so many acronyms!

Yes, there is a lot to learn. I would suggest watching our video, as Didier and I spent quite a bit of time on this topic. The short answer is that you need to learn by doing. Set up lab environments; break them, fix them, tear them down, and set them back up a different way. Find a community to learn more from (like the Altaro DOJO!) and don’t give up! Rome wasn’t built in a day!

Additional Resources

If you haven’t had a chance to watch the full webinar where Didier and I discuss all these storage technologies at length be sure to check that out! 

[Image: On-Demand Webinar - Windows Server Storage]

Finally, if you have any additional questions that you would like answered on storage technologies in the Windows Server stack, be sure to include them in the comments section below this article and we’ll be sure to get you an answer! 

Thanks for reading! 

Are you Using these Windows Server Storage Features? You Should.
https://www.altaro.com/hyper-v/windows-server-storage/ | Fri, 06 Aug 2021
We break down the essentials of Windows Server Storage to optimize your storage infrastructure including ReFS, SMB, QUIC, and more!


Storage technologies are always changing and evolving while at the same time bringing immense benefit to our datacenters. We have come a long way from IDE spinning rust drives, iSCSI/FC protocols, and FAT32/NTFS. We have observed disk options grow in choice, capacity, and speed. We witnessed ethernet become a first-class storage protocol beyond iSCSI, and we were around to see file systems emerge and mature.

In this article, we’ll briefly talk about 2 such technologies, and leave you with some resources to learn more, including a webinar focused on modern-day storage technologies.

Let’s start with ReFS.

What is ReFS?

The ReFS file system was introduced in Windows Server 2012 and has evolved since then in terms of reliability and capabilities. Apart from scalability, ReFS offers some other capabilities that are very interesting for backup workloads.

NOTE: Want to know how ReFS stacks up to NTFS?

The block cloning capabilities of ReFS are convenient when it comes to synthetic operations in backups (depending on your backup vendor). Merging files, deleting old restore points, or creating full synthetic backups becomes lightning fast as you perform metadata operations instead of copying the data. That’s because ReFS can reference the existing blocks of data already on disk to create new files as needed.


Figure 1: The magic of block cloning: storing 30TB worth of data and consuming only 12TB (image by Didier Van Hoye)

In addition to the incredible speed gains, the block cloning capabilities save space on disk. Instead of using copies of existing files to create a synthetic full, ReFS mainly needs to reference existing file data blocks. Those capacity savings add up.


Figure 2: ReFS – Block cloning and Integrity Streams (image by Didier Van Hoye)

Finally, in addition to the speed and space efficiencies, we can detect data corruption due to bit rot. For this, you need to turn on data integrity streams. While it does impact performance somewhat, ReFS offers the potential of auto-repairing bit rot or file data block corruption. For this, you need to leverage a redundant implementation of Storage Spaces: either stand-alone Storage Spaces with mirroring or Storage Spaces Direct with mirroring. That allows ReFS to grab the file block copies it needs to replace the corrupt ones detected via data integrity streams. All of that happens transparently to the workload.

How Does Altaro Leverage These Features?

While Altaro supports conducting backups from ReFS volumes and restoring to ReFS volumes, we handle the inline block deduplication, storage efficiencies, and auto-repair within our own software in order to provide a unified experience whether you are leveraging ReFS for your backup storage or an older file system such as NTFS.

More information on Altaro VM Backup requirements and features can be found here.

SMB over QUIC

SMB over QUIC is new in Windows Server 2022 Azure Edition. Essentially it allows us to access SMB file shares over port 443, the same port HTTPS uses. While that might sound scary, let me elaborate a bit on the reasons, the benefits, and how this might be safer and easier than a VPN in certain use cases.


Figure 3: Next to TCP and RDMA, we now have QUIC for use with SMB (image by Didier Van Hoye)

The use case for SMB over QUIC

Microsoft already offers SMB over the internet with Azure File Shares over port 445. However, port 445 is more often than not blocked for inbound and outbound traffic at the edge firewalls. That is for a good reason, but it does mean that some handy and legitimate scenarios do not work. It also means that we need a VPN to access file shares from a client outside the corporate network. That introduces extra complexity and maintenance, and adds to the workload support workers have to handle. All that for something that is second nature to people: accessing a file share. Likewise, we need a VPN or ExpressRoute to access Azure File Shares.

SMB over QUIC addresses these challenges. By allowing secure access to file shares over port 443, SMB over QUIC eliminates a lot of complexity and removes the showstopper that firewalls and ISPs mostly block port 445, which is often beyond your control. It also makes the process of accessing a file share identical no matter where a user is. Whether in the office, on the road, or at home, it remains the same experience without needing a VPN connection. Azure File Shares will also support SMB over QUIC and solve that challenge.

QUIC requires TLS 1.3, and Windows Server 2022 supports this and has it enabled by default.

Without a line of sight to a domain controller, authentication will happen over NTLMv2. Therefore, to avoid introducing NTLMv2 dependent workloads, you can and should implement one or more KDC proxy servers. These will handle the Kerberos authentication for you when you have no connectivity to a domain controller.

The Kerberos Key Distribution Center Proxy Protocol

Here’s the gist of the challenges a KDC proxy solves. A Kerberos client requires connectivity to a Key Distribution Center (KDC) server to authenticate. In practice, that means a domain controller. Now, what if that is not possible? When you are outside of the corporate network and don’t have a VPN connection, what can you do? That is where the Kerberos Key Distribution Center Proxy Protocol (KKDCP) provides a solution. It allows clients to use a KKDCP server to obtain Kerberos service tickets securely.

 


Figure 4: Overview of the KDC proxy service (image by Didier Van Hoye)

The Kerberos client must be configured with the KDC proxy servers (via GPO or the registry), and when the standard ways of authenticating fail, it becomes a KKDCP client that sends Kerberos messages over HTTPS to the KKDCP server.

The KKDCP server locates a KDC (domain controller) for the request and sends the request to the KDC on behalf of a KKDCP client. From the KDC’s perspective, nothing is different; it receives Kerberos messages and is otherwise not involved in the KKDCP. When the KKDCP server gets the response from the KDC, it sends the Kerberos message over HTTPS back to the KKDCP client.

When accessing a file share on the corporate network, this happens over TCP/445 by default. The negotiation for TCP/445 starts before SMB over QUIC does, so TCP typically wins. However, that will not work outside the corporate network because TCP/445 is blocked, and SMB over QUIC will kick in. When the client has no line of sight to a domain controller, standard Kerberos authentication fails. If you have configured that client with one or more KDC proxies, those come into play when all else fails, and Kerberos authentication is handled for you by a KDC proxy.

Note that you need to configure the KDC proxy to know about the trusted SAN certificate SMB over QUIC uses. That implies that you must have an internet reachable CRL/OCSP for this certificate. There are many more details to this, but the good news is that Windows Admin Center makes it super easy to set up SMB over QUIC for file shares and configure a KDC Proxy.

Critique about QUIC

When QUIC in general was first introduced, many security people and vendors got into a frenzy, proclaiming that it reduces security because firewalls and other security appliances could not handle QUIC and became blind to it. By now, any vendor should have addressed that and turned it into a sales pitch. Many of the other objections hold equally true for TLS 1.3 in general or for different ways of accessing file shares. While security is essential, the industry does love its drama and spectacle over new technologies. But change affects us all, and even the security industry has to adapt.

Would you Like to Learn More?

On August 11 we’re hosting a webinar that covers everything here plus lots more Windows Server Storage goodness. As always, we’ll be presenting it live twice on the day to enable as many people to join as possible and ask your questions. Session one is at 14:00 CEST, 08:00 EDT, and 05:00 PDT. Session 2 is at 19:00 CEST, 13:00 EDT, and 10:00 PDT. I (Didier Van Hoye) will be presenting the event with long-time Altaro webinar host Andy Syrewicze.

You can register to watch it live (before August 11) or on-demand at Altaro Webinar: Unlock your Storage Potential with these Powerful Built-in Windows Server Features.


That title gives us a vast scope, and we will cover many subjects, so get ready for a whirlwind journey through storage devices, protocols, technologies, file systems, and more. Finally, we’ll glance over what Microsoft did with all these over the years to arrive at where we are today in the era of Windows Server 2022.

HDD, SSD, NVMe, PMEM… SCSI, IDE, SATA, SAS… iSCSI, FC, FCoE, NVMe-oF, NVMe over FC, SMB 3… FAT, FAT32, NTFS, ReFS, NFS… welcome to just a tiny part of the acronym soup in the IT world! Add to that local, shared, and software-defined storage, and that’s what we’ll be touching on in this webinar. See you there!

Should I be using Azure Files?
https://www.altaro.com/hyper-v/azure-files/ | Fri, 29 Jan 2021
Ever used Azure Files before? If not, read on for a complete introduction to Azure Files and Azure File Sync, as well as scenarios where to use them.


Welcome to my new article for Altaro Software. I want to give you an introduction to Azure Files and Azure File Sync, as well as scenarios where you might use them.

What is Azure Files?

Before we can speak about Azure Files use cases, we need to learn a few more things about Azure Files in general.

Azure Files is a Microsoft Azure managed file share. It can be accessed with standard protocols like Server Message Block (SMB) or Network File System (NFS), and it can be mounted either from on-premises or from the cloud directly.

You can access Azure Files from Windows, Linux and macOS. The following table gives an overview of the protocol and the possible operating systems.

Operating system | Azure Files SMB share | Azure Files NFS share
Windows | Yes | No
Linux | Yes | Yes
macOS | Yes | Yes

You also have the option to cache Azure Files SMB shares on a Windows Server using Azure File Sync. That enables your users to access regularly used files faster by storing them close to the user. The technology is comparable to Windows Server Distributed File System Replication (DFS-R) but much easier to set up, more reliable, and more advanced in terms of features.

Azure Files SKUs and Limits

As you can imagine, Microsoft Azure Files has different limitations and costs which are reflected in the SKUs. The tables below show you the limitations and SKUs. The current SKUs are Standard and Premium File Shares.

Resource | Standard file shares* | Premium file shares
Minimum size of a file share | No minimum; pay as you go | 100 GiB; provisioned
Maximum size of a file share | 100 TiB**, 5 TiB | 100 TiB
Maximum size of a file in a file share | 1 TiB | 4 TiB
Maximum number of files in a file share | No limit | No limit
Maximum IOPS per share | 10,000 IOPS**, 1,000 IOPS or 100 requests in 100 ms | 100,000 IOPS
Maximum number of stored access policies per file share | 5 | 5
Target throughput for a single file share | Up to 300 MiB/sec**, up to 60 MiB/sec | See premium file share ingress and egress values
Maximum egress for a single file share | See standard file share target throughput | Up to 6,204 MiB/s
Maximum ingress for a single file share | See standard file share target throughput | Up to 4,136 MiB/s
Maximum open handles per file or directory | 2,000 open handles | 2,000 open handles
Maximum number of share snapshots | 200 share snapshots | 200 share snapshots
Maximum object (directories and files) name length | 2,048 characters | 2,048 characters
Maximum pathname component (in the path \A\B\C\D, each letter is a component) | 255 characters | 255 characters
Hard link limit (NFS only) | N/A | 178
Maximum number of SMB Multichannel channels | N/A | 4

* The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction optimized, hot, and cool.

** Default on standard file shares is 5 TiB, see Enable and create large file shares for the details on how to increase the standard file shares scale up to 100 TiB.

What is Azure File Sync?

With Azure File Sync, Azure customers get the opportunity to centrally organize their file shares with Azure Files. With Azure Files and the Azure Storage backend, you can gain a flexible, performant and overall very compatible environment for your file server backend. Using Azure File Sync, a Windows Server becomes a local data cache for your branch to provide SMB, NFS and FTPS file access.

With Azure Storage synchronization and the Azure edge network, you can set up many caches around the globe, wherever necessary, depending on your office footprint.

The picture below shows a simple example of what such a setup could look like.

[Image: Azure File Sync]

Within the next part of the blog post, I want to give a brief intro on how an Azure File Share works.

I will not configure a fileshare within the blog post but if you need a detailed guide, please visit the full deployment guide.

As you can imagine, the current Azure File Sync has some limitations. You can see them below.

Resource | Target | Hard limit
Storage Sync Services per region | 100 Storage Sync Services | Yes
Sync groups per Storage Sync Service | 200 sync groups | Yes
Registered servers per Storage Sync Service | 99 servers | Yes
Cloud endpoints per sync group | 1 cloud endpoint | Yes
Server endpoints per sync group | 100 server endpoints | Yes
Server endpoints per server | 30 server endpoints | Yes
File system objects (directories and files) per sync group | 100 million objects | No
Maximum number of file system objects (directories and files) in a directory | 5 million objects | Yes
Maximum object (directories and files) security descriptor size | 64 KiB | Yes
File size | 100 GiB | No
Minimum file size for a file to be tiered | V9 and newer: based on the file system cluster size (double the file system cluster size); for example, if the cluster size is 4 KB, the minimum file size will be 8 KB. V8 and older: 64 KiB | Yes

If it is not a hard limit, it can be changed via Microsoft Support.

How to enable Azure Files?

Enabling Azure Files is simple. You just create an Azure storage account as a StorageV1 or StorageV2 account. Afterwards, you just add a file share.

[Image: Azure Files]

[Image: Azure Files file share]

After you have created the share, you can access it via a network mount or synchronize it via the Azure File Sync agent.

[Image: Azure File Sync agent]

Microsoft published a very detailed guide on how to connect a Windows File Server with the File Sync agent. Deploy Azure File Sync | Microsoft Docs
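Besides mounting the share or syncing it with the agent, you can also work with it programmatically. As a rough sketch, assuming the azure-storage-file-share package, a share named demo-share that already exists, and a connection string copied from the storage account's Access keys blade:

```python
# pip install azure-storage-file-share
from azure.storage.fileshare import ShareClient

# Placeholder connection string; take the real one from the storage account's Access keys blade.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

share = ShareClient.from_connection_string(CONN_STR, share_name="demo-share")

# Create a directory in the share and upload a small file into it.
reports = share.get_directory_client("reports")
reports.create_directory()                      # raises ResourceExistsError if it already exists
reports.get_file_client("hello.txt").upload_file(b"Hello from Azure Files!")

# List what sits in the root of the share.
for item in share.list_directories_and_files():
    print(item.name)
```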

What is the difference between Azure Files and OneDrive?

Now you may wonder and think: “Why should I use Azure Files? Microsoft already offers Microsoft Office OneDrive. Can’t I use OneDrive for enterprise file shares as well?”

First of all, OneDrive is individual file storage with certain limitations in sharing and storage capacity. OneDrive has no centrally manageable access management and is based on SharePoint Online, while Azure Files is based on SMB/NFS file sharing.

Let me give you a deeper comparison with the table below.

Feature | OneDrive | Azure Storage
Target | Targets individual users | Targets classic fileserver workloads
Maximum storage | 5 TB of storage per user | 500 TB for a single storage account
Backup | Does not offer any backup | Backup optional via the Azure Backup service
Offline work | Yes | Yes, but needs a fileserver with Azure File Sync as a cache
Redundancy | Comes as a redundant SaaS service | Storage can be replicated locally within one Azure region, between availability zones in a region, or globally with geo-replication to the paired Azure region

As already explained, OneDrive is built to give an individual user a personal file share, comparable to a classic “\\homedrive\user.user” home share. Azure Files is a classic fileserver offered by Azure as a cloud service. You can also use it for home drives or user profiles, but it is normally built to replace classic file shares or to offer file shares for applications which still rely on them.

Usage Scenarios

Within the next part of the post, I want to go through some usage scenarios which are pretty common with Azure Customers.

Fileserver for Azure Workloads

One of the most common scenarios for the usage of Azure Files is as File Server Backend for Azure Workloads, Virtual Machines and Services like Windows Virtual Desktop.

At the moment, the most common architecture is for virtual machines. Virtual machines are deployed in a virtual network, and an Azure storage account with Azure Files is connected to a separate subnet using Azure Private Link. Azure Files then presents a file share to the virtual machines.

The architecture could look like below.

[Image: Fileserver for Azure workloads]

You can also access file shares via the public Azure endpoint, but most customers prefer Private Link for that scenario now that it is available.

Fileserver for On Premises

When using Azure Files from on-premises, you should first test your latency and round trip to the service. If your round trip is longer than about 22 ms, it makes little sense to use Azure Files directly. Remember, we are still using the SMB and NFS protocols; neither is WAN optimized, and both produce too much overhead to be performant over high latency. In those scenarios, you should choose the Azure File Sync approach and put a cache on a file server on-premises.

There is an easy way to get an estimate using Azure Speed, a community tool which uses Azure Storage to estimate the Roundtrip between your client and Azure Regions.
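If you want a quick, scriptable estimate of your own in addition to Azure Speed, the small Python snippet below times TCP connection setup from your client to a storage endpoint. The account name is a placeholder, and a TCP handshake is only a rough stand-in for the full SMB round trip, so treat the number as an indication rather than a measurement.

```python
import socket
import statistics
import time

HOST = "mystorageaccount.file.core.windows.net"  # placeholder storage account endpoint
PORT = 443                                       # use 445 to test the SMB port instead
SAMPLES = 10

def connect_time_ms(host: str, port: int) -> float:
    """Time a single TCP connection setup in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

times = [connect_time_ms(HOST, PORT) for _ in range(SAMPLES)]
print(f"median connect time: {statistics.median(times):.1f} ms "
      f"(min {min(times):.1f} / max {max(times):.1f})")
```

If the median is well above the roughly 22 ms mark mentioned above, the Azure File Sync caching approach is probably the better fit.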

The connection to an Azure file share can be made through the public endpoint of Azure Storage (“storageaccount.file.core.windows.net”) over the internet, with encryption in transit.

[Image: Fileserver for on-premises]

Another way would be using Azure ExpressRoute with Microsoft Peering and also accessing the same Storage Account.

[Image: Azure ExpressRoute with Microsoft peering]

The third method would be using a VPN or Azure ExpressRoute to access the file shares via Azure Private Link.

[Image: Azure Private Link]

When you have an Azure region within a round trip of less than 22 ms, Azure Files is a great way to replace your current fileservers.

Hybrid Filestorage for On Premises Fileserver

There is one issue we all know: storage space in a fileserver, especially in a branch. Normally you have a bunch of disks and storage in a server, and to reduce the amount of storage used, you must use expensive technology for deduplication and compression.

What would you say if you could use Azure Files as a hybrid storage space and reduce the storage used on-prem?

There are currently two options which I will briefly introduce below.

Microsoft Azure Stack Edge

The first option is pretty much out of the box. You can order an Azure Stack Edge via the Azure portal. Azure Stack Edge comes with a preconfigured solution to connect to Azure Storage and provide a file share to the network.

The required agents are already on the Edge and can be managed via Azure Portal. Azure Stack Edge Pro – FPGA share management | Microsoft Docs

That makes this solution pretty easy to deploy and use, but you do not own the hardware. It’s a rental, pay-as-you-go model where you pay around 560€ to 800€ per month per device depending on the device type. Pricing – Azure Stack Edge | Microsoft Azure

Microsoft Azure File Sync

Another, more customizable option is the use of Azure File Sync. Here you take a standard file server like a Dell PowerEdge R640 with a bunch of disks and a simple SAS controller. You can also choose a virtual machine instead of a physical server.

You only need a supported Windows Server OS. Currently, the following Windows Server versions are supported.

Version | Supported SKUs | Supported deployment options
Windows Server 2019 | Datacenter, Standard, and IoT | Full and Core
Windows Server 2016 | Datacenter, Standard, and Storage Server | Full and Core
Windows Server 2012 R2 | Datacenter, Standard, and Storage Server | Full and Core

Now you can install the Azure File Sync Agent on a Windows Server and connect the Azure File Share to the server. Afterwards, you can configure the cache and sync options. You can find the guides to deploy below.

Deploy Azure File Sync | Microsoft Docs
Choose an Azure solution for data transfer | Microsoft Docs

You can also use that type of deployment to clean up fileservers but I will explain that in the “fun fact for admins” part at the end of the blog post.

Using DFS Namespaces

When you work with different file shares in different locations, e.g. on a synced file server and in Azure, connecting to the right file share can be a problem. There is a pretty simple and classic tool you can use to solve the issue.

Maybe you know about Windows Server Distributed File System Namespaces? This sneaky little service has been available for almost 20 years, released with Windows Server 2003, so it is bulletproof. 🙂

One of my co-workers at Microsoft wrote a pretty good guide on how to deploy Azure File Sync with DFS-N. You can find the link below. Azure File Sync: Integration with DFS Namespaces – Microsoft Tech Community

That’s the end of the technical part of my blog post. I will leave you with some closing thoughts and some admin fun facts about Azure Files.

Fun fact for Admins

Do you know the situation? Your users store a bunch of files on your fileservers and never go through them again. I normally call that WORN: write once, read never. How do you normally solve that? You buy a bunch of very costly storage appliances that do cool things like deduplication, compression and storage tiering, and you also buy lots of tapes to back up your data.

That is pretty expensive over time, and you still need to back up all that stuff your users are storing. As you may know, Azure Storage is pretty cheap in comparison, at about 2 cents per gigabyte.

With Azure File Sync you can use a pretty easy trick to migrate your files to Azure and clean up your storage. Azure File Sync can, much like OneDrive, present files that are located in remote Azure storage and download them when they are accessed. So what you can do is upload all your files to Azure and set up a new file share. After you upload the files, you connect the Azure file share to the on-premises file server with Azure File Sync. Now only the files users actually need will be downloaded. Files placed on your fileserver with File Sync will, depending on your tiering strategy, eventually disappear from the fileserver and only be stored in Azure; they leave behind a link and are downloaded on demand.

That helps you to keep the footprint on-premises pretty small and will enable centralized backup and recovery within Azure, which reduces administrative effort too.

If you want to learn about the implementation, please visit the documentation.

Closing

I hope that after going through the above you have gained more knowledge about Azure Files and why you should be using it. If you have any additional questions, do not hesitate to leave a comment.

Azure Blob Storage: Data protection and Recovery capabilities
https://www.altaro.com/hyper-v/azure-blob-storage/ | Fri, 15 Jan 2021
Learn what Blob Storage is, how it works, how to design resiliency and data protection based on your business scenarios, and how to recover from disasters.


Storing data is a “killer application” for the public cloud and it was one of the services adopted early by many businesses. For your unstructured data such as documents, video/audio files, and the like, Azure Blob storage is a very good solution.

In this article, we’ll look at what Blob Storage is, how it works, how to design resiliency and data protection based on your business scenarios, and how to recover from outages and disasters.

Azure Storage Overview

It all starts with a storage account, inside of which you can have one or more containers, each of which can store one or more blobs (Binary Large Objects). Account and container names need to be in lowercase, and because accounts are reachable through a URL by default, account names need to be globally unique. Blobs can be block blobs for text and binary data, and each one can be up to 4.75 TiB, with a new limit of 190.7 TiB in preview. Append blobs are similar to block blobs but are optimized for append/logging operations. Page blobs are used to store random-access files up to 8 TiB and are used for virtual hard drive (VHD) files for VMs. In this article, we’re focusing on block and append blobs.
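To make the blob types a little more concrete, here is a minimal Python sketch using the azure-storage-blob SDK that appends log lines to an append blob; the connection string, container and blob names are placeholders for the example.

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; take the real one from the storage account's Access keys blade.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("logs")
container.create_container()        # raises ResourceExistsError if the container already exists

# Append blobs are optimized for add-only workloads such as logging.
log_blob = container.get_blob_client("app-2021-01-15.log")
log_blob.create_append_blob()
log_blob.append_block(b"2021-01-15T07:44:33Z service started\n")
log_blob.append_block(b"2021-01-15T07:45:02Z first request handled\n")
```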

Throughout this article and in Azure’s documentation Microsoft uses tebibyte (TiB), which is equivalent to 2^40 or 1,099,511,627,776 bytes, whereas a terabyte (TB) is 10^12 bytes or 1,000,000,000,000 bytes. You know that fresh 4 TB drive that you just bought and formatted and only got 3.6 TB of usable storage from? This is why these newer names (kibi, mebi, gibi, pebi, exbi, zebibytes) are more accurate.

Storage accounts also provide Azure Files (think Platform as a Service managed file shares in the cloud) and Azure File Sync, which lets you connect your on-premises file servers to Azure and keep only frequently used files locally while syncing cold data to Azure. Neither of these fantastic solutions is the topic of this article.

There are two generations of storage accounts, general purpose V1 and V2. In most scenarios, V2 is preferred as it has many more features.

To get your data from on-premises to the cloud over the network you can use AzCopy, Azure Data Factory, Storage Explorer (an excellent free, cross-platform tool for managing Azure storage), and Blobfuse for Linux. For offline disk transfers, there is Azure Data Box, -Disk, and -Heavy, along with Azure Import/Export where you supply your own disks.


Azure Storage Explorer – manually setting access tier
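If you are moving data from your own application code rather than with the tools above, the same SDK uploads block blobs directly. A hedged sketch, with a made-up connection string, container and file name:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("documents")   # container is assumed to exist

# Upload a local file as a block blob; overwrite=True replaces any existing blob of the same name.
with open("quarterly-report.pdf", "rb") as data:
    container.upload_blob(name="reports/quarterly-report.pdf", data=data, overwrite=True)
```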

Blob storage is hard-drive-based, but there is an option for premium block blob storage accounts, which are SSD-based and optimized for smaller, kilobyte-range objects and high transaction rates / low-latency storage access.

Resilience

One of the best features of Azure Blob storage is that you won’t lose your data. When designing storage solutions on-premises, building high availability is challenging and requires good design, WAN or MAN replication, and other costly technical solutions. In the cloud, it’s literally a few tick boxes. Picking the right level of data protection and recovery capabilities does require you to understand the options available to you and their cost implications.

Note that this article looks at Blob storage for an application that you’re developing in-house. For VM resiliency, look at this blog post on Azure Availability Sets and Zones; if you’re looking at using Blob storage for long-term archiving of data, look here; and if you need a tutorial on setting up storage, look here. You can also use Blob storage to serve images or documents directly in a browser, stream video or audio, write to log files, store backup, DR and archive data, or store data for Big Data analysis.


Creating Storage Account Replication options

The simplest level of data protection is Locally redundant storage (LRS) which keeps three copies of your data in a single region. Disk and network failures, as well as power outages, are transparently hidden from you and your data is available. However, a failure of a whole datacenter will render your stored data unreachable. Zone redundant storage (ZRS) will spread your three copies across different datacenters in the same region and all three copies have to be written for the write to be acknowledged. Since each datacenter has separate power, cooling, and network connections, your data is more resilient to failures. This is reflected in the guaranteed durability, LRS gives you 99.999999999% (11 nines) over a given year, whereas ZRS gives you 99.9999999999% (12 9’s). Not all regions support zones and ZRS yet. In the event of a large-scale natural disaster taking out all datacenters in an entire region however you need even better protection.

Geo-redundant storage (GRS) keeps three copies in your primary region and also copies them asynchronously to a single location in a secondary, paired region. In regions where zones are supported, you can use geo-zone-redundant storage (GZRS) instead, which uses ZRS in your primary region and again copies it asynchronously to a single location in the secondary region. There’s no guaranteed SLA for the replication, but “Azure Storage typically has an RPO of less than 15 minutes”. Both GRS and GZRS give you 99.99999999999999% (16 9’s) durability of objects over a given year. This provides excellent protection against a region failing, but what if you’d like to do something with the replicated data, such as periodic backups, analysis, or reporting?

To be able to do this, you need to choose read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). This provides the same durability as GRS/GZRS with the addition of the ability to read the replicated data. Predictably the cost of storage increases as you pick more resilient options. Unless Microsoft declares a region outage you have to manually fail over a storage account, see below.

Providing Access

As mentioned, each storage account has a URL but data isn’t public and you need to set up authentication correctly to ensure that the right people have access to the appropriate data, and no one else. When you create a Storage account you can set up which networks it can be accessed from. You can pick from a public endpoint – all networks (suitable if you must provide access to users from the internet), public endpoint – selected networks (pick vNet(s) in your subscription that can access the account), or a private endpoint.

Each HTTPS request to a storage account must be authorized, and there are several options for controlling access. You can use a Shared Key: each storage account has a primary and a secondary key. The problem with this approach is that the access is very broad, and until you rotate the key, anyone with the key has access to the data. Another method is shared access signatures (SAS), which provide very specific access at the container or blob level, including time-limited access. The problem is again that someone else could obtain the SAS and use it to access data. The recommended method today is to use Azure Active Directory (AAD) to control access. For blob storage, you can also provide anonymous public read access, which of course is only suitable for a few business scenarios.
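As an illustration of the SAS option, the snippet below generates a read-only SAS for a single, hypothetical blob that expires after one hour; the account name, key, container and blob name are placeholders, and as noted above, AAD-based access is the preferred approach where you can use it.

```python
# pip install azure-storage-blob
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT = "<account>"          # placeholders
ACCOUNT_KEY = "<account-key>"
CONTAINER = "documents"
BLOB = "reports/quarterly-report.pdf"

# Read-only token, valid for one hour.
sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas}"
print(url)   # anyone holding this URL can read the blob until the token expires
```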

Tiering

Blob storage accounts let you tier your data to match the lifecycle that most data go through. In most cases, data is accessed frequently when it’s just created and for some time after that, after which access decreases as it ages. Some data is just dumped in the cloud and rarely accessed, whereas other data is modified frequently over its entire lifetime.

These three tiers are hot, cool, and archive. The cool tier is the same hard disk-based storage as the hot tier, but you pay less for storing data at this tier, provided you don’t access it frequently. An example would be relatively recent backup data, you’re unlikely to access it unless you need to do a restore. The archive tier on the other hand is tape-based and rehydrating/retrieving the data can take up to 15 hours, but it is the cheapest storage tier.

Storage account lifecycle rule move to cool tier

Storage account lifecycle rule move to cool tier

You can set the tier of a blob programmatically in your application or you can use lifecycle management policies. This lets you do things such as transition blobs from hot to cool, hot to archive, or cool to archive based on when it was last accessed, delete blobs and versions/snapshots at the end of their lifecycle, and apply these rules at the container level or on a subset of blobs.
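For the programmatic route, a minimal data-plane sketch with the azure-storage-blob SDK might look like the following (placeholder connection string, container and blob name); lifecycle management policies themselves are defined on the storage account, for example in the portal, rather than in code like this.

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container="backups", blob="2020-12-31-full.bak")

# Move an aging backup from the hot tier down to cool ("Hot", "Cool" and "Archive" are the valid tiers).
blob.set_standard_blob_tier("Cool")

print("blob is now on the", blob.get_blob_properties().blob_tier, "tier")
```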

Data Protection

Now that we’ve looked at the basics of storage accounts, blobs, tiering, and geographical resilience, let’s look at the plethora of features available to manage data protection.

Blob versioning is a fairly new feature (for general purpose V2 only) that creates a new version of a blob whenever it’s modified or deleted. There’s also an older feature called blob snapshots that likewise creates read-only copies of the state of a blob when it’s modified. Both features are billed in the same way, and you can use tiering with versions or snapshots, for instance keeping current data on the hot tier and the older versions on the cool tier. The main difference between the two is that snapshots are a manual process that you have to build into your application, whereas versioning is automatic once you enable the feature. Another big difference is that if you delete a blob, its versions are not deleted automatically, whereas with snapshots you have to delete them before you can delete the blob. There’s no limit on the number of snapshots/versions you can have, but Microsoft recommends fewer than 1,000 to minimize the latency when listing them.
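As a small illustration of the manual snapshot route (versioning, by contrast, happens automatically once it is enabled on the account), here is a sketch with placeholder names:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container="documents", blob="contract.docx")

# Take a read-only, point-in-time snapshot before changing the blob.
snapshot = blob.create_snapshot()
print("snapshot id:", snapshot["snapshot"])

# Overwrite the current blob; the snapshot above still holds the old content.
blob.upload_blob(b"updated contents", overwrite=True)

# A snapshot is addressed as the same blob name plus its snapshot id.
old = service.get_blob_client(container="documents", blob="contract.docx",
                              snapshot=snapshot["snapshot"])
print(old.download_blob().readall())
```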

To protect you against users deleting the wrong document or blob by mistake, you can enable soft delete for blobs and set the retention period between 1 and 365 days. Protecting entire containers against accidental deletion is also possible; currently, it’s in preview. Note that neither of these features helps if an entire storage account is deleted, but a built-in feature in Azure called resource locks allows you to stop accidental deletions (or changes) to any resource, including a storage account.

To keep track of every change to your blobs and blob metadata, use the change feed feature. It stores an ordered, guaranteed, durable, immutable, read-only changelog in Apache Avro format.

If you have soft delete, change feed and blob versioning enabled, you can use point-in-time restore for block blobs, which is useful in accidental deletion, corruption or data testing scenarios.


Creating a Storage Account Data Protection Options

Also for block blobs only is the Object replication feature. This lets you asynchronously copy block blobs from one storage account to another. This could be for a geo-distributed application that needs low latency access to a local copy of the blobs, or data processing where you distribute just the results of the process to several regions. It requires that Change feed and Blob versioning are enabled. The difference between this and GRS / GZRS is that this is granular as you create rules to define exactly which blobs are replicated, whereas geo-replication always covers the entire storage account. If you’re using blob snapshots be aware that they’re not replicated to the destination account.

If you have any of the geo-replicated account options, you should investigate exactly what’s involved in a manual failover that you control and include it in your Disaster Recovery plan. If there’s a full region outage and Microsoft declares it as such, they’ll do the failover but there are many other situations that might warrant you failing over, which typically takes about an hour. Be aware that storage accounts with immutable storage (see below), premium block blobs, Azure File Sync, or ADLS Gen2 cannot be failed over.

All storage written to Azure after 20 October 2017 is encrypted; for older data, you can check whether it’s encrypted or not. If you have data from different sources in the same account, you can use the new encryption scopes (preview) feature to create secure boundaries between data using customer-managed encryption keys.


Creating a Storage Account Advanced Settings

If you have a regulatory need to provide Write Once, Read Many (WORM) or immutable storage, you can create a legal hold (in force until it’s lifted) or time-based retention policies, during which no blobs can be deleted or changed, even if you have administrative privileges. This can be set at the container level and works across all access tiers (hot, cool, and archive).

It’s interesting to note that with all of these built-in data protection features for Disaster Recovery, including geographical replication, there’s no built-in backup solution for blob storage. Backup, as opposed to DR, comes into play when you have an application error for instance and data has been corrupted for some time and you need to “go back in time”. There are ways to work around this limitation.

Azure Blob Storage features

There are several other features that contribute to data protection and resiliency, such as the network routing preference. Normally, traffic to and from your clients on the internet is routed to the closest point of presence (POP) and then travels on Microsoft’s global network to and from the storage account endpoint, maximizing network performance at the cost of network traffic charges. Using this preview feature, you can instead ensure that both inbound and outbound traffic is routed through the POP closest to the storage account (and not closest to the client), minimizing network transfer charges.


Creating a Storage Account Network Settings

If you have REALLY big files, blob storage now supports up to 190.7 TiB blobs.

To understand what data you have in your storage accounts, use the new blob inventory report preview feature to see total data size, age, encryption status, etc. Managing large numbers of blobs becomes easier with blob index tags, which let you dynamically tag blobs using key-value pairs that you can then use when searching the data, or with lifecycle management to control the shifting of blobs between tiers.
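As a sketch of how blob index tags can be set and queried from code, with made-up tag names, values and filter expression:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container="documents", blob="reports/quarterly-report.pdf")

# Attach key-value index tags to the blob.
blob.set_blob_tags({"department": "finance", "status": "processed"})

# Search across the account for blobs matching a tag filter
# (freshly written tags can take a moment to show up in the index).
for match in service.find_blobs_by_tags("\"status\" = 'processed'"):
    print(match.name)
```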

Azure Data Lake Store Gen2

No conversation around Azure storage is complete without mentioning ADLS Gen2. Traditional data lakes are optimized for big data analytics but lack features such as file system semantics / hierarchical namespaces and file-level security. ADLS Gen2 builds on Azure Blob storage and provides these features, along with many others, to deliver a low-cost, tier-aware, highly resilient platform for building enterprise data lakes. There are some features available in Blob storage accounts that are not yet available for ADLS Gen2. To optimize your application to retrieve only exactly the data it needs, use the new Query Acceleration feature, available for both Blob storage and ADLS Gen2.

Conclusion

Azure Blob storage provides a multitude of features to ensure the protection and recoverability of your data in one comprehensive platform. Good luck in designing optimized Azure Blob storage solutions for your business needs.

Azure Availability Sets and Zones https://www.altaro.com/hyper-v/azure-availability-sets-zones/ https://www.altaro.com/hyper-v/azure-availability-sets-zones/#respond Thu, 07 Jan 2021 16:25:58 +0000 https://www.altaro.com/hyper-v/?p=19450 Azure Availability Sets and Azure Availability Zones are part of a plethora of technologies you can use to ensure your applications stay up at all times.

If there’s one thing that’s a lot easier to achieve in the cloud than on-premises, it’s High Availability (HA). This might sound strange given that when you move to public cloud, you give up a lot of control over your infrastructure but as we’ll show in this article – Azure Availability Sets and Azure Availability Zones are part of a plethora of technologies you can use to ensure your applications stay up.

High Availability on Premises

There are some fundamental concepts that contribute to HA in computer systems. At the server level on-premises we have redundancy built-in (dual or triple power supplies in each server, RAID for disk storage, multiple NICs connected to separate switches). Networks can be built to be redundant with multiple paths, switches and routers, eliminating single points of failure. Once we bring virtualization into the picture, clustering multiple physical servers together becomes feasible, automatically restarting VMs on other hosts if a physical server fails. And if you have particularly business-critical applications, we look to stretched clusters where nodes in a cluster are separated by some kilometres of distance. As you can appreciate, cost and complexity increase as you implement more of these technologies, with the most critical applications reserved for the really expensive solutions. Also notice that we go from guarding against a single component in a server failing, to a whole server or network, to a whole site.

That’s HA, keeping an application available for users through eliminating single points of failure and storing data in a redundant fashion, which is separate to Disaster Recovery (DR) preparedness.

DR applies similar concepts, generally by replicating data from one location to another to make it possible to restore services should a whole site fail. The main difference between HA and DR is that the latter isn’t instantaneous, and some downtime is expected. It’s tempting to get caught up in technical solutions to HA and DR as you imagine scenarios of hurricanes and floods taking out your servers but as much as this is a technical issue, it’s a people and process problem. Nearly all outages are caused by people making mistakes or processes not being thought through, not by unexpected natural disasters.

If you outsource any part of the system described above, your provider generally offers a Service Level Agreement (SLA) under which they'll refund part of your cost if they don't provide the agreed uptime. Uptime is expressed as an availability percentage, generally per month; for example, 99.9% uptime equates to 43.2 minutes of downtime in a month.
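The downtime-per-month figures quoted throughout this article are simple arithmetic on a 30-day month, for example:

```python
def monthly_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of allowed downtime per month for a given availability SLA."""
    return days * 24 * 60 * (1 - sla_percent / 100)

for sla in (95.0, 99.5, 99.9, 99.95, 99.99):
    print(f"{sla}%  ->  {monthly_downtime_minutes(sla):.1f} minutes/month")
```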

The good news is that an understanding of these concepts (which most IT Pros know off by heart) transfers very well to the public cloud.

High Availability in Azure

The main difference between running your workloads on-premises and Azure when planning for HA is that you don’t have access to the underlying infrastructure which means less work as well as less control. We’ll start by looking at IaaS VMs, where a single VM running on Standard HDD Managed Disk has an SLA of 95%, Standard SSD Managed Disk gives you 99.5% and a VM running on Premium / Ultra SSD gives you 99.9%. These SLAs only apply if all disks (OS and data disks) are running on the required storage type.

So, if your business has a critical application that can only run in a single VM, Azure can at most provide 99.9%. Note that this covers only downtime Microsoft is responsible for; if your application in the VM crashes, is attacked by malware, or your network infrastructure or ISP has an outage, those don't count. If the SLA is breached, and you go through the work of proving to support that it was, you'll get service credits, which is small comfort if your business application was down for hours or days.

Just as on-premises, a single VM really isn't enough for a proper HA strategy, hence you need to use Availability Sets (here's a good primer). There are two new terms to cover here: Update Domains (UD) and Fault Domains (FD). The former means that if two VMs are spread across UDs, only one will be restarted at a time when Microsoft updates their Hyper-V hosts. Be aware that Microsoft has put in a lot of work to limit the times that hosts actually need to be rebooted for updates; mostly they can apply updates to a running OS without a reboot. An FD is a rack with separate power and networking connections and is the smallest unit of fault isolation in Azure.

Fault and Update Domains in Azure (courtesy of Microsoft)

Take a canonical three-tier application with two web front-end servers, two middle-tier application layer VMs and two backend database servers. When creating this in Azure you'd put each tier in an Azure Availability Set. This tells Azure to separate, for instance, the two web VMs into two distinct FDs; if a rack fails, your second VM is still servicing clients as it's running in a separate rack. Also note that three copies of the managed disks for each VM are spread across the storage infrastructure. When creating an Azure Availability Set the default number of FDs is 2 (max 3, depending on region) and of UDs is 5 (max 20). VMs are spread across the FDs and UDs in round-robin fashion, so if you have seven VMs in an Azure Availability Set with two FDs and five UDs, two of the UDs will hold two VMs each, while one FD will hold three VMs and the other four. You cannot pick which VM goes in which UD or FD.
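The actual placement is opaque, but the round-robin behaviour described above is easy to visualise with a few lines of code (purely illustrative; Azure decides the real placement for you):

```python
def spread_vms(vm_count: int, fault_domains: int = 2, update_domains: int = 5):
    """Illustrative round-robin spread of VMs across FDs and UDs."""
    return [
        {"vm": f"vm{i}", "fault_domain": i % fault_domains, "update_domain": i % update_domains}
        for i in range(vm_count)
    ]

for placement in spread_vms(7):
    print(placement)
# With 7 VMs: FD0 ends up with 4 VMs, FD1 with 3,
# and two of the five UDs hold 2 VMs each.
```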

Creating an Azure Availability Set

Once you have your application deployed in one or more Azure Availability Sets, you get an SLA of 99.95% (21.6 minutes downtime). If your business has strict regulatory requirements and has opted for an Azure Dedicated Host they provide the same SLA as an Azure Availability Set.

Note that the concept of an Azure Availability Set lets Azure know that these VMs are "related" and need to be kept separate; it's up to you to make sure that the applications running in the VMs are using guest clustering appropriately. For instance, if you have two VMs running as Active Directory Domain Controllers (DCs) in the same domain they'll automatically replicate, and if one of them is on an FD that fails, the other DC will still be available for other VMs to authenticate against. If you have a SQL Server backend, you'll need to set up database clustering in a guest cluster so that your application continues to be able to access data, even if one SQL VM is unavailable.

The opposite (sort of) to Azure Availability Sets is Proximity placement groups where you have a need to keep VMs very close to each other to support latency-sensitive applications.

Azure Availability Zones

In the ongoing “battle of worlds” between Azure and AWS, Microsoft proudly proclaims that they have more regions than AWS and GCP combined, whereas AWS claims more zones per region.

Currently, there are over 60 regions in Azure, and the service is available in more than 140 countries. Each region comprises one or more datacenters, and in each geography (apart from Brazil) regions are paired so that you can replicate data from one region to another whilst still complying with your country's data residency laws. Paired regions are typically separated by at least 300 miles to reduce the likelihood that a natural disaster in one region affects the other.

Azure Availability Zones and Regions

Within a region, there are multiple datacenters that have separate cooling, power and network infrastructure, providing isolation should an entire datacenter fail; these are known as Azure Availability Zones. In regions that provide Availability Zones you can create VMs and distribute them across zones, which gives you a 99.99% SLA (4.3 minutes of downtime per month).

At the time of writing 12 regions support Azure Availability Zones with four more coming soon.

Here I’m creating a VM in Australia East and picking zone 2 to house it.

Creating a virtual machine

If you need several VMs created from a single image with your application already installed, and spread automatically across zones, use a VM Scale Set that spans Azure Availability Zones.

You cannot create an Azure Availability Set that spans Azure Availability Zones. Where Availability Zones are concerned, some services are Zonal, meaning each instance is "pinned" to a specific Availability Zone (VMs, public IP addresses or managed disks), while others are Zone-redundant, where Azure takes care of spreading them across zones. An example is Zone Redundant Storage (ZRS), which automatically spreads copies of your data across three zones, giving you 99.9999999999% (12 nines) durability for the stored data. There are also non-regional services in Azure that do not have a dependency on a particular region, making them resilient to both zone and region-wide outages. These tables list the Zonal and Zone-redundant services for each region.

When it comes to DR you can use Azure Site Recovery (ASR) to replicate disk writes on VM disks in one region to disks in another region. This is asynchronous replication, so the copy might be slightly out of date; here's a table showing the latency between different Azure regions, but you'll get up-to-date latency figures from your location to each region on this site.

Azure Speed Test network latency results

You can also use ASR to replicate VMs from one Availability Zone to another. This has the distinct disadvantage that a natural disaster affecting several datacenters might take out all your zones but there are also benefits. Networking is much simpler as you can reuse the same virtual network, subnet, Network Security Groups (NSGs), private and public IP addresses and load balancer across zones. Latency will also be less but be aware that this feature is only available in five regions at the time of writing.

So far, we’ve been looking at IaaS VMs but ultimately, you’ll get the best cloud computing has to offer with PaaS services. Services such as Service Fabric, Data Lake, Firewall, Load Balancer, VPN Gateway, Cosmos DB, Event Hubs and Event Grid, Azure Kubernetes Services, and Azure Active Directory Domain Services all support zones today, giving you good building blocks for your HA architecture.

As you can see, Azure offers many different options for building resilient applications, and compared to managing multiple on-premises clusters or datacenters with redundant LAN and WAN infrastructure, using what's provided in the cloud is both easier and far more cost-effective.

Bringing Azure Availability Sets and Availability Zones together

Let’s make this real with an example application. A customer-facing, business-critical application needs to be moved to Azure and here’s an example architectural solution.

I’d pick a region to host the application, based on the lowest latency to the highest number of end-users, if this was a global application that needs to be distributed worldwide, we’d need to involve Azure Front Door and perhaps Cosmos DB but, in this scenario, let’s assume we’re looking at a single region.

If we need VMs for the front end we'd host multiple ones across each Availability Zone in a region and then use Azure Load Balancer to spread incoming traffic across the VMs. As an alternative we might look to App Service Environment (ASE, a PaaS version of web hosting), which lets us pin an ASE to a zone; we'd need at least two ASEs. Be aware that the Load Balancer, being a zone-aware PaaS service, is itself highly available, while also keeping the application VMs / resources behind it highly available.

For the application logic layer, we’d put one VM in each zone and use an Internal Load Balancer to manage the traffic coming from the web front end to this layer. Depending on the database layer used for the application today we may need to have multiple backend VMs with SQL server (again, spread across zones) in a guest cluster. Alternatively, if possible, switching to zone aware Azure SQL or Cosmos DB as PaaS database services would minimize infrastructure management.

This application is now resilient to host, networking hardware and storage failures, as well as an entire datacenter failing. To ensure timely recovery in the case of an entire region outage (DR), we’d use ASR to replicate the VMs to the paired region, and SQL or Cosmos DB to replicate the data to that region as well. In ASR we’d create (and test regularly) a Recovery Plan with all required steps to bring the application up quickly.
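To put rough numbers on a design like this, you can compose the published SLAs of the individual pieces. The figures below are only an example (the load balancer and database values are assumptions; check the current SLA documents for the actual services you choose):

```python
def chained_availability(*slas: float) -> float:
    """Availability of a chain where every component must be up."""
    result = 1.0
    for sla in slas:
        result *= sla
    return result

web_tier = 0.9999   # VMs spread across Availability Zones
app_tier = 0.9999   # VMs spread across Availability Zones
balancer = 0.9999   # assumed SLA for the zone-redundant load balancer
database = 0.9995   # assumed SLA for the database tier

total = chained_availability(web_tier, balancer, app_tier, database)
print(f"End-to-end availability ~ {total:.4%}")  # roughly 99.92%
```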

Conclusion

Most HA concepts that we've been using in IT for decades translate very well to the public cloud; creating resilient applications doesn't require relearning from scratch, just tweaking your thinking. Azure Availability Sets and Availability Zones give you solid building blocks to lay a great foundation for your mission-critical applications.

Cross Region Restore CRR for VMs https://www.altaro.com/hyper-v/cross-region-restore/ https://www.altaro.com/hyper-v/cross-region-restore/#respond Thu, 17 Dec 2020 20:40:23 +0000 https://www.altaro.com/hyper-v/?p=19381 Making sure you have verified backups of your data and VMs in Azure is critical. Cross Region Restore (CRR) is a new feature that helps you with this.

Making sure you have verified backups of your data and VMs in Azure is critical. But backup is more than just copying data; it's part of wider Disaster Recovery (DR) preparedness, and as Azure becomes a platform for your business, your DR plan needs to be solid. In this article, we'll look at how this can best be achieved, how to handle business-critical workloads, and the best way to use a new feature, Cross Region Restore (CRR).

A quick note – all Azure regions are made up of one or more datacenters, and each datacenter has separate power, cooling and networking infrastructure. Each region also has a paired region in the same country / geographical area, ensuring that you can comply with data residency requirements whilst also providing optional replication in case of a region outage.

Azure Backup

Would you believe that in the early days of IaaS VMs becoming available in Azure, there was no platform backup system on offer? The recommendation was to run System Center Data Protection Manager (DPM) in a VM and back up your other VMs to it.

Times have definitely changed and Azure Backup is now a very capable enterprise data protection solution that safeguards much more than your Azure VMs. In fact, you can use Azure Backup to protect Linux and Windows Azure VMs, SQL server and SAP HANA VMs, Azure File shares and on-premises VMs using either the Microsoft Azure Recovery Services (MARS) agent or the Microsoft Azure Backup Server (MABS) option.

Production VMs protected in Azure Backup

Let’s start with Azure VMs, the first step is creating a vault to store backups. Each vault can hold up to 1000 VMs (a total of 2000 data sources) and you can back up each Azure VM once a day. Each region needs its own vault (if you have deployments globally) and VMs can only be backed up to a vault in its region. Each disk can be up to 32 TB in size and in total the disks for a VM can be up to 256 TB. Windows VM backups are application-aware, whereas Linux VMs are file consistent, unless you use custom scripts.

The first choice is which underlying type of Azure storage you’re going to use because once you’ve started protection this can’t be changed. You can pick from Locally Redundant Storage (LRS), three copies of your data in a single region, or Geo Redundant Storage (GRS), three copies in the local region and three additional copies in the paired region. Currently only UK South and South East Asia support the third option, Zone Redundant Storage (ZRS) for backup which spreads copies of your data across different datacenters in the same region. The default and recommended option is GRS.

Once you’ve created the vault, simply define one or more policies that specify when to backup and how long to keep the backups for. For SQL Server (in a VM) you can define log backups up to every 15 minutes.

SQL Server backup policy

When the time comes to restore (which is the point really – nobody wants backup for its own sake; what you want is the successful recovery of the VM or the data) you have several options. If you need to restore individual files, a recovery point (by default the latest one) will be mounted as a local drive through a script that you download, allowing you to browse the file system and grab the files you need, as you can see here:

Script mounting drives for file recovery

When it comes time to restore a corrupted VM (or just to test your DR plan – something that you should do regularly) you can create a new VM, specifying the Resource Group (RG), virtual network (VNet) and storage account. This new VM must be created in the same region as the source (but see CRR below). You can also just restore a VM's disk(s), which will give you a template as well that you can customize to create a new VM based on the restored disks. A third option is to replace an existing VM, while the fourth option is CRR.

Backup jobs reporting in Azure Backup

If you have VMs on-premises that you'd like to back up to the cloud you have three options. The first is the MARS agent, which lets you back up any Windows server, anywhere, to Azure. If you have a handful of servers this is definitely an easy option (essentially replacing Windows Server Backup with a similar tool that includes support for Azure as a destination). MARS supports files, folders and system state and backs up twice a day, but if you have more than a few servers, MABS is a better option.

MABS is a “free” version of System Center Data Protection Manager (DPM), which doesn’t support backing up to tape, nor protecting one DPM server with another. With MABS you don’t pay for the license of the server itself, instead, you pay for each protected instance. The beauty of MABS is that you first protect workloads on premises to local disk (as often as every 15 minutes if you need it) and it then synchronizes recovery points to Azure up to three times a day. This makes most recoveries much faster as data doesn’t have to be downloaded from Azure. The third option is to use DPM, with the addition of Azure as a secondary backup storage location (replacing tapes).

Note that restore operations from the cloud to on-premises are free. You don’t pay the normal data egress charges as data is downloaded.

Azure Site Recovery

Backup is essential and it's what you need when everything else has failed. But recovering from a large-scale outage, either in the Azure platform or due to an attack such as ransomware, by just restoring backups is a time-consuming proposition. Some business-critical workloads require more than a mere backup; a full DR plan is required. This can be in the form of High Availability by spreading workloads across Availability Zones, using a load balancer to provide multi-server redundancy, distributing the data to multiple regions using Cosmos DB, or putting Front Door in front of a global web application. Here, we're going to look at Site Recovery (ASR). Symon looked at ASR in the context of on-premises, geo-distributed Hyper-V clusters in this blogpost.

Where Azure Backup is "copy your VM / data to a separate storage location on a regular cadence", ASR is "replicate VM (and physical server) disk changes on a continuous basis to a separate location" for very fast recovery. They're not mutually exclusive; having tamper-proof historical backup recovery points is going to save your behind when ransomware strikes or a super important document folder was deleted two weeks ago. But replication is what's going to make you the hero when a region in Azure is down and you can bring up the replicated VMs in the paired region in minutes. Be aware that replication is continuous (with recovery points kept for 24 hours by default and app-consistent snapshots generated every 4 hours by default), so if a file server VM is infected with ransomware, ASR will dutifully replicate the encrypted files to the target region almost instantaneously. This is why Azure Backup and ASR need to be used together.

ASR can replicate on-premises Hyper-V, VMware VMs or physical servers for DR and provides recovery plans to orchestrate complex applications (for example: bring up the DCs first, then the database servers, stop for a manual step to run a script to check database consistency, then start the front end servers), along with many other features. The other way to use ASR is to replicate from one Azure region to another.

You can group up to 16 VMs together into replication groups so that all VMs that make up an application also share application and crash-consistent recovery points. You can also use recovery plans, including adding automation runbooks to ensure that your VMs are started in the right order and recovery tasks are automated.

Whether for on-premises to Azure, or Azure to Azure DR, you don't pay for VMs in the target location, just a per-VM replication cost plus storage costs. Only when you do a test failover or a real failover (which creates VMs) do you pay VM running costs. And replication for each VM is free for its first 31 days.

Cross Region Restore

If you’re using Azure Backup to protect VMs in one region and you’ve configured the vault(s) to use GRS, you might assume that you could restore them in the secondary region at will. Not so, unless Microsoft declares a disaster in your primary region. Cross Region Restore (CRR), currently in preview, changes this dynamic and lets you decide when to restore a VM in the secondary, paired region, perhaps for testing purposes or because something’s happening to your resources in the primary region, but the problem isn’t large enough for Azure to declare an outage.

If you already have a Recovery Services vault that’s using GRS, you can enable CRR under Properties. This action cannot be undone, so you can’t turn a CRR enabled vault back to a GRS vault. Note that if you have a vault that’s using LRS and already has protected data in it you’ll need to perform some workarounds.

Enable Cross Region Restore for a vault

Currently, CRR supports Azure VMs (with disks smaller than 4 TB), SQL databases hosted on Azure VMs and SAP HANA databases in Azure VMs. Encrypted VM disks are supported for restore, including the built-in Storage Service Encryption (SSE) as well as Azure Disk Encryption (ADE).

Conclusion

CRR is based on customer feedback and it makes a lot of sense for Microsoft to provide more control for customers as to when, where and how they restore their workloads. There could be regulatory or audit reasons to test restores and CRR also obviates any waiting time for Microsoft to declare a disaster for the primary region.

Remember, just because it’s in the cloud doesn’t mean you can forget about backup and DR, your VMs are still your responsibility.

Why you should be using Azure Archive Storage https://www.altaro.com/hyper-v/azure-archive-storage/ https://www.altaro.com/hyper-v/azure-archive-storage/#respond Thu, 10 Dec 2020 19:40:38 +0000 https://www.altaro.com/hyper-v/?p=19356 Azure Archive Storage is a capable, flexible, and affordable storage solution which performs within advertised SLA and data durability figures. Learn more.

Azure Archive Storage is cloud-based storage. That sounds like a simple statement; however, think of cloud-based storage as a set of hard drives at the end of a wire. Since there's some distance between you and those drives, there are things you cannot achieve, like direct block-based access; on the other hand, there are capabilities far beyond what we're able to achieve by ourselves. One of those capabilities is Archive Storage as part of a scalable object store. In this post, we will explore what Azure Archive Storage is, and to do that we will take a detour through the basics and then move on to how to start using it.

Before we dive into the depths of archive storage and what we can do with it, we will start with the very basics.

Azure Archive Storage Account

We cannot provision Azure Archive Storage without first creating an Azure Storage account. When we create one of these, we can configure several parameters, one of which is the type of account we need.

Azure Storage Account Details

Unless we need to be restrictive about what we are doing, we would use a General-purpose V2 storage account type, which supports 6 different services including Blob, File, Queue, Table, Disk, and Data Lake Gen2. For the sake of storage tiering, we need blob storage within a general-purpose V2 account.

How many Azure Archive Storage copies are enough?

As we think through our storage requirements, we need to consider how many copies of our data we need, where those copies are, as well as what kind of availability Service Level Agreement (SLA) is associated with the various choices.

Azure Archive Storage Deployment Model

I’ll summarize these for us below:

Locally-redundant storage (LRS) contains 3 copies of your data within a single physical location or datacenter, with a durability of 11 nines, that is 99.999999999%. Choose this option if you don’t require any special storage recoverability outside of the chosen Azure region

Zone-redundant storage (ZRS) contains the 3 copies spread across the Azure availability zones in the chosen region. Availability zones are physically separate locations which don't share power, cooling or networking, giving a durability of 12 nines, that is 99.9999999999%. Choose this option:

  1. to safeguard against a single Azure datacenter or physical location failure that could be caused by a fire, flood or natural disaster
  2. to create redundancy within the geography or chosen country

Geo-redundant storage (GRS) contains 3 copies of your data within a single physical location or datacenter, plus 3 more copies in a second location hundreds of miles away (stored there using LRS). That means 6 copies across two distinct geographies, with a durability of 16 nines, that is 99.99999999999999%.

Geo-zone-redundant storage (GZRS) copies your data across three Azure availability zones in the chosen region and replicates it to a second geography. The durability of GRS and GZRS is the same: 16 nines. Choose this as your belt-and-braces option should you be concerned that your country could fail or disappear and you still need a copy of your data.

Choosing your Tier of Storage

If you’re testing and you’re not massively concerned about the availability of your data then LRS will do. As the criticality of your data or your requirement to recover from data increases, consider moving from LRS to GRS. As of the time of writing ZRS, GZRS or RA-GZRS (that is globally zone redundant with an additional read-only copy) is not supported for Azure storage archive tiers.

NOTE: some fine print applies here in terms of when your data is available after it has been written and replicated. The following has been extracted from the SLA for Storage Accounts which may shape your decision-making process based on the definition of the terminology and replication times:

Azure Archive Tier of Storage

SLA definitions are an important part of our storage planning, as well as understanding which options are available.

As you create the storage account you have the option to define a default Blob access tier, as either Hot or Cool, but not Archive. Archive Storage tiering for specific items is set after the creation of the storage account, but more on that later.

Azure Archive Hot Cool Storage Account

Putting your Data on ice

For the sake of this article, I have created a storage account, in which I have created a blob container and uploaded a small text document. I can change the storage tiering for this document within the Storage Explorer GUI by right-clicking on the file and choosing the Change Access Tier option.

Azure Archive Storage Explorer GUI

Note the option to change from Hot to Cool, but not to Archive

Azure Archive Storage Change Tier Hot Cool Archive

Changing the storage tier presents us with a warning that our blob will be inaccessible until we rehydrate it to another storage tier, i.e. Cool or Hot.

Changing Azure Archive Storage Tier

The Access Tier status is reflected in the GUI immediately

Access Tier Status GUI Azure Archive Storage

Once the tier is changed, we cannot interact with the file any longer. Attempting to download the file in Storage Explorer leads to the following error message.

Storage Explorer Error Message

Trying to Change the tier after we have commenced the rehydration operation also results in an error:

Azure Storage tier change rehydration operation error

To restore access to the file, we need to rehydrate the blob from Archive to either Hot or Cool. This operation is not quick and is something you need to plan for, as a standard-priority rehydrate can take up to 15 hours to complete.

Azure Storage restoring access

Note that we can set the rehydrate priority for the file as well; however, the higher priority attracts significant extra cost. For smaller blobs, a high-priority rehydrate can bring the object back in under an hour.

Azure Storage Rehydrate Priority
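The same tier change and rehydration can be done from code. Here's a minimal sketch with the azure-storage-blob Python SDK; the connection string, container and blob names are placeholders for the example:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("archive-demo", "report.txt")

# Send the blob to the Archive tier - it becomes unreadable until rehydrated
blob.set_standard_blob_tier("Archive")

# Later: rehydrate back to Hot. Standard priority can take hours;
# High priority is faster for small blobs but costs more.
blob.set_standard_blob_tier("Hot", rehydrate_priority="High")

# The blob stays offline until rehydration finishes - poll archive_status
props = blob.get_blob_properties()
print(props.blob_tier, props.archive_status)
```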

Automating Storage Tiering

While using the GUI for a few files or a few operations is fine, it certainly doesn't scale to hundreds, thousands, or millions of files. We can automate blob tiering using programmatic methods via API calls and/or PowerShell, as well as using storage Lifecycle Management policies.

Automating Azure Storage Tiering
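And here's a rough sketch of a lifecycle management policy set through the azure-mgmt-storage Python SDK. The rule structure mirrors the lifecycle-policy JSON used by the REST API; the resource names and day thresholds are placeholders, and the exact dict/model shape can differ between SDK versions, so check the SDK reference before using it:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

policy = {
    "policy": {
        "rules": [{
            "name": "age-out-backups",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["backups/"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                        "delete": {"daysAfterModificationGreaterThan": 2555},
                    }
                },
            },
        }]
    }
}

# Lifecycle policies are managed as a single "default" policy per account
client.management_policies.create_or_update("my-rg", "mystorageaccount", "default", policy)
```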

Understanding Tiering

Tiering is available in three options: Hot, Cool and Archive. Below I cover how long data needs to age in each tier.

Hot tier – Use this tier for active data with the fastest possible response time within the SLA for standard or premium storage.

Cool tier – Use this tier for data that can be classified as cool for at least 30 days. Use this for data that isn’t accessed frequently, as well as part of a tiered backup strategy.

Archive tier – as the cheapest storage tier it is also the most restrictive to use. Data needs to remain in this tier for at least 180 days. While data is in this tier, it's offline for active use unless you rehydrate it. Think of it as the equivalent of an offsite tape backup. Bear in mind that if you move or delete data before the 180 days are up you will incur an early deletion charge, and rehydrating at high priority attracts an additional cost. Use this tier for data that you want to park or archive, as the name suggests.

Earlier in this article, we discussed various account options including LRS and GRS storage account types. I have extracted the following table which compares the availability of tiers

Azure Storage Tiering Differences

Azure Archive storage vs the rest

Azure Archive storage is a relatively seamless and well-integrated storage solution which may be consumed as part of a cloud storage strategy, or a cloud-integrated application strategy hosted completely on Azure services. While there are many other pure cloud-based storage solutions on the market, like Wasabi, Backblaze B2, etc., I'm offering a brief comparison against two other global hyper-scale cloud providers, which offer IaaS and PaaS services including an archive-equivalent class of storage.

Amazon Glacier – an S3-compatible object store offering two storage classes, with retrieval times ranging from a few minutes for Amazon S3 Glacier up to 48 hours for the Amazon S3 Glacier Deep Archive storage class. Durability is offered at 11 nines, or 99.999999999%.

Google Coldline – Durability is offered at 11 nines, or 99.999999999%; retrieval latency sets this service apart from its competitors with sub-second data retrieval, albeit with a comparatively low availability SLA of 99%.

Estimating the cost of storage

Storage costs have several factors to consider. One is the pure cost of storage; the next is the cost of accessing the data once it's stored in the cloud.

Let’s start with how to calculate the storage costs. For the sake of this article, I’m using the UK South as the region and USD as the currency. Below we can clearly see storage becoming cheaper as pricing moves from Premium class (SSD) to standard Hot, Cool and Archive tiers

Estimating the cost of Azure storage

However, for cloud costing, storage capacity is not the only concern; how the data is used and how often it's accessed is the next factor. Pricing storage can be tricky if we don't know the I/O profile of a given application or use case; however, lifecycle management features within Azure help to smooth out the cost of moving blobs to the Archive tier.

Azure Storage Operations and data transfer prices
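As a back-of-the-envelope illustration of how the tiers compare, here's the kind of quick sum worth doing before committing data to a tier. The per-GB prices below are placeholders, not current Azure list prices; look up your region and currency on the pricing page before relying on the output:

```python
# Hypothetical per-GB monthly prices (USD) - substitute the real figures
# from the Azure pricing page for your region.
PRICE_PER_GB = {"hot": 0.0184, "cool": 0.0100, "archive": 0.0010}

data_gb = 10 * 1024  # 10 TB of backup data

for tier, price in PRICE_PER_GB.items():
    monthly = data_gb * price
    print(f"{tier:>7}: ~${monthly:,.2f}/month at ${price:.4f}/GB")
```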

Speaking of backup

One of the natural use cases for this class of storage is backup. With that in mind, Microsoft has documented a rich partner ecosystem of vendors that are able to use Azure Archive Storage directly.

Azure Archive Storage Backup Partners

Conclusion

Azure Archive storage is a capable, flexible, and affordable storage solution which performs within its advertised SLA and data durability figures. It is a natural extension to Azure Storage tiering, which includes hot, cool and archive tiers, and does not require a separate storage GUI to configure. Data may be tiered using the Azure Resource Manager GUI, PowerShell, and automated lifecycle policies.

One of the primary use cases is long-term backup retention, along with any other kind of long-term storage and archiving requirement. Microsoft and its partners offer direct support for backup solutions able to write data directly to the Azure Archive storage tier.

The Ten Commandments of Backup https://www.altaro.com/hyper-v/ten-commandments-backup/ https://www.altaro.com/hyper-v/ten-commandments-backup/#respond Wed, 09 Dec 2020 17:23:17 +0000 https://www.altaro.com/hyper-v/?p=19264 We run down the 10 most essential concerns for any backup strategy. How many are you taking into consideration?

In honour of the publication of The Backup Bible, I’ve extracted the top 10 most important messages from the book and compiled them into a handy reference.

The Backup Bible is a free eBook I wrote for Altaro that covers everything you need to know about planning, deploying and maintaining a secure and reliable backup and disaster recovery strategy. Download the Backup Bible Complete Edition now!

Plan for the Worst-Case Scenario

We have lots of innovative ways to protect our data. Using HCI or high-end SANs, we can create insanely fault-tolerant storage systems. We can drag files into a special folder on our computer and it will automatically create a copy in the cloud. Many document-based applications have integrated auto-saves and disk-backed temporary file mechanisms. All of these are wonderful technologies, but they can generate a false sense of security.

One specific theme drives all of my writing on backup: you must have complete, safe, separate duplicates. Nothing else counts. Many people think, “What if my hard drive fails?” and plan for that. That’s really one of your least concerns. Better questions:

  • What if I make a mistake in my document and don’t figure it out for a few days?
  • What if the nice lady in the next cubicle tries to delete her network files, but accidentally deletes mine?
  • What if someone steals my stuff?
  • What if my system has been sick but not dead for a while, and all my “saved” data got corrupted?
  • What if I’m infected by ransomware?

Even the snazziest first-line defences cannot safeguard you from any of these things. Backups keep a historical record, so you can sift through your previous versions until you find one that didn’t have that mistake. They will also contain those things that should have never been removed. Backups can (and should) be taken offline where malicious villains can’t get to them.

Use all Available Software Security and Encryption Options

Once upon a time, no one really thought about securing backups. The crooks realized that and started pilfering backup tapes. Worse, ransomware came along and figured out how to hijack backup programs to destroy that historical record as well.

Backup vendors now include security measures in their products. Put them to good use.

Understand the Overlap Between Active Data Systems and Backup Retention Policies

The longer you keep a backup, the taller the media stack gets. That means that you have to pay more for the products and the storage. You have to spend more time testing old media. You have to hold on to archaic tape drives and disk bus interfaces or periodically migrate a bunch of stale data. You might have ready access to a solution that can reduce all of that.

Your organization will establish various retention policies. In a nutshell, these define how long to keep data. For this discussion, let’s say that you have a mandate to retain a record of all financial transactions for a minimum of ten years. So, that means that you need to keep backup data until it’s ten years old, right? Not necessarily.

In many cases, the systems used to process data have their own storage mechanisms. If your accounting software retains information in its database and has an automatic process that keeps data for ten years and then purges it, then the backup that you captured last night has ten-year-old data in it.

Database and Backup Retention Comparison

Does that satisfy your retention policy? Perhaps, perhaps not. Your retention policy might specifically state that backups must be kept for ten years, which does not take the data into consideration. Maybe you can go to management and get the policy changed, but you might also find out that it is set by law or regulation. Even if you are not bound by such restrictions, you might still have good reason to continue keeping backups long-term. Since we're talking about a financial database, what if someone with tech skills and a bit too much access deletes records intentionally? Instead of needing to hide their malfeasance for ten years, they only need to wait out whatever shortened retention schedule you come up with. Maybe accounting isn't the best place to try out this space-saving approach.

High Availability is a Goal, Not a Technology

We talk a lot about our high availability tech and how this is HA and that is HA. Really, we need to remember that “high availability” is a metric. How about that old Linux box running that ancient inventory system that works perfectly well but no one can even find? If it didn’t reboot last year, then it had 100% uptime. That fits the definition of “highly available”.

You can use a lot of fault-tolerant and rapid recovery technologies to boost availability, but a well-implemented backup and disaster recovery plan also helps. All of the time that people spend scrounging for tapes and tape drive manuals counts against you. Set up a plan and stick to it, and you can keep your numbers reasonable even in adverse situations.

Backup and Disaster Recovery Strategies are Not the Same Thing

If your disaster recovery plan is, “Take backups every night,” then you do not have a disaster recovery plan.

Backup is a copy of data and the relevant technologies to capture, store, and retrieve it. That’s just one piece of disaster recovery. If something bad happens, you will start with whatever is leftover and try to return to some kind of normal state. That means people, buildings, and equipment as much as it means important data.

The Backup Bible goes into much more detail about these topics.

Backup Applies to Everyone in an Organization, so Include Everyone

The servers and backup systems live in the IT department (or the cloud), but every department and division in the organization has a stake in its contents and quality. Keep them invested and involved in the state of your backup and disaster recovery systems.

One Backup is Never Enough

I said in the first commandment that for a proper backup, you must have complete, safe, separate duplicates. A single duplicate is a bare minimum, but it’s not enough. Backup data gets corrupted or stolen just as readily as anything else. You need multiple copies to have any real protection.

Whether you take full backups every week or every month, take them frequently. Keep them for a long time.

One Size Does Not Fit All

It would be nice if we could just say, “Computer, back up all my stuff and keep it safe.” Maybe someday soon we’ll be able to do that for our personal devices. It’s probably going to be a bit longer before we can use that at the enterprise scale. In the interim, we must do the work of figuring out all the minutiae. Until we have access to a know-it-all-program and a bottomless storage bucket, we need to make decisions about:

  • Using different retention policies on different types of data
  • Using different storage media and locations
  • Overlapping different backup applications to get the most out of their strengths

As an example of the last one, I almost always configure Microsoft SQL to capture its own backups to a network location and then pull the .bak files with a fuller program. Nobody really backs up and restores Microsoft SQL as well as Microsoft, but just about everyone has better overall backup features. I don’t have to choose.

Test It. Then Test again. And Again…

Your backup data is, at best, no better than it was the last time that you tested it. If you’ve never tested it, then it might just be a gob of disrupted magnetic soup. Make a habit of pulling out those old backups and trying to read from them. Your backup program probably has a way to make this less tedious. Set bi-annual or quarterly reminders to do this.

Backup and Disaster Recovery Planning is a Process, Not a One-Time Event

The most important and most often overlooked aspect of all backup and disaster recovery planning is employing a “set and forget” mentality. Did you set up a perfect backup and disaster recovery plan five years ago? Awesome! How much of the things that were true then are true now? If it’s less than 100%, your plan needs some updating. Make a scheduled recurring event to review and update the backup process. Remember the 6th commandment. Hint: If you feed them, they will come.

Free eBook – The Backup Bible Complete Edition

I’d love to be able to tell you creating a backup and disaster recovery strategy is simple but I can’t. It takes time to figure out your unique backup requirements, business continuity needs, software considerations, operational restrictions, etc. and that’s just the start. I’ve been through the process many, many times and as such Altaro asked me to put together a comprehensive guide to help others create their own plan.

Free eBook - The Backup Bible Complete Edition


The Backup Bible Complete Edition features 200+ pages of actionable content divided into 3 core parts, including 11 customizable templates enabling you to create your own personalized backup strategy. It was a massive undertaking but hopefully, it will help a lot of people protect their data properly and ensure I hear fewer data-loss horror stories from the community!

Download your free copy

How to Use Storage Migration Service for Windows Server and Azure https://www.altaro.com/hyper-v/storage-migration-windows-server-azure/ https://www.altaro.com/hyper-v/storage-migration-windows-server-azure/#respond Thu, 03 Sep 2020 16:31:45 +0000 https://www.altaro.com/hyper-v/?p=19010 This article reviews the scenarios, features, requirements, and best practices for using Storage Migration Services for Windows Server and Azure

Storage migration projects are often among the most daunting tasks that IT administrators face. These projects are risky due to potential data loss or misconfigured identity permissions. Migrations are unrewarding, as end-users rarely notice a difference, but perhaps more challenging is that many migration tools are of substandard quality with a limited support matrix. In the past, migration was a lower-priority initiative for Microsoft, but things are changing with the Storage Migration Service (SMS).

About a decade ago, I worked on Windows Server as an engineer and designed several of its migration technologies, even earning a patent. However, these projects were often rushed, had features cut and used a limited test matrix. This surprised a lot of people since migration should be viewed as an important tool for bringing users to the latest versions. But from the business perspective, a Windows Server to Windows Server migration was generally a one-time operation by already-paying customers. It made more sense to invest engineering resources in building new features and bringing new customers to the platform. If an existing customer had a subpar migration experience, although not ideal, it was acceptable. Migrating customers from a different platform (like VMware or AWS) is different and has always been treated as a high priority by the company as it generates new revenue.

However, Microsoft has recently (Windows Server 1809) released a first-class Windows Server to Windows Server (or to Azure VMs or Azure Stack) migration solution. Storage Migration Service (SMS) provides a new storage migration technology which is managed using Windows Admin Center (WAC) or Remote Server Administration Tools (RSAT). This GUI-based utility is straightforward and allows file servers (including their data, shares, permissions and associated metadata) to be migrated from older versions of Windows Server to Windows Server 2019 servers and clusters. SMS supports most Windows Server versions and various Linux distributions running Samba. After the migration, the identity of the file server can also be migrated so that users and applications do not lose access.

This article will review the scenarios, features, requirements and best practices for using Storage Migration Service, and includes the latest updates from releases 1903 (May 2019) and 1909 (November 2019).

How Storage Migration Service Works

Storage Migration Service is fairly straightforward and follows other migration processes. First, you need to open a few firewall ports, as described in the Configuring Firewall Settings section of this article. Next, you will install Storage Migration Service in Windows Admin Center and open the tool.

To start the migration, you will select your source servers which will be inventoried and display a list of volumes and folders. You will select the storage that you wish to copy and specify some other settings. Next, you can pick the destination servers and volumes, and map them to the source volumes.

You are also given the option of migrating the entire file server, including the server's identity and its network settings, or just the shares and their data. It is possible to use this technology as a basic asynchronous file replication solution. The data will then be copied from the source volumes to destination volumes using the SMB protocol. The copy operations may run directly between the pair of servers or may be routed through an intermediary Orchestrator server which manages the migration operation.

If the identity of the file server is also migrated, users will be able to connect to their storage once Active Directory and DNS records throughout the infrastructure are updated. There may be slight disruptions in service, but all of their files and settings should be retained. The old servers will enter a maintenance state and not be available to users, and they can be repurposed.

Figure 1 – Storage Migration Service Overview

Note that a step-by-step guide for using the Storage Migration Service can be found in Microsoft's official documentation.

Planning for Storage Migration Service

This section provides an overview of different considerations based on the hardware and software requirements for the various migration servers. Processing the migration can be resource-intensive, so it is recommended that the Orchestrator and any destination server have at least 2 GB of memory and two (v)CPU cores. Conventional infrastructure enhancements will also speed up the process, such as providing a dedicated high-bandwidth network and using fast storage disks.

To make the migration faster, you can use hardware which has been optimized for SMB traffic. This can include using multiple NICs which support Remote Direct Memory Access (RDMA), SMB3 multichannel, Receive Side Scaling (RSS) and NIC teaming. On the server, you can try to maximize the memory and QPI speed, disable C-State, and enable NUMA.

Windows Server as a Source Server

The source server hosting the original storage must use one of the following versions of Windows Server:

  • Windows Server, Semi-Annual Channel
  • Windows Server 2019, 2016, 2012 / R2, 2008 / R2, 2003 / R2
  • Windows Small Business Server 2011, 2008, 2003 R2
  • Windows Server 2012 / R2, 2016, 2019 Essentials
  • Windows Storage Server 2016, 2012 / R2, 2008 / R2

Migration from Failover Clusters running Windows Server 2012 / R2, Windows Server 2016 and Windows Server 2019 is also supported.

Linux Servers using Samba as a Source Server

Storage Migration Service makes it easy to migrate from legacy Linux server using Samba. Samba is a suite of programs for Linux and UNIX which provides file server interoperability with Windows Server. It allows file shares to be managed like they are running on Windows by providing compatibility with the SMB/CIFS protocol. It supports Active Directory, but when migrating from a Linux server you will enter additional Linux and Samba credentials, including a private key or SSH password.

Samba 4.8, 4.7, 4.3, 4.2, and 3.6 is supported on the following Linux distributions:

  • CentOS 7
  • Debian GNU/Linux 8
  • RedHat Enterprise Linux 7.6
  • SUSE Linux Enterprise Server (SLES) 11 SP4
  • Ubuntu 16.04 LTS, 12.04.5 LTS

Windows Server as a Destination Server

It is generally recommended to migrate to the latest version of Windows Server (currently WS19), as this operating system will be supported for longer and has performance optimizations for SMB file transfers. With SMS, using Windows Server 2019 as the destination server makes transfers run about twice as fast as with older versions of Windows Server, because it can function as both the Orchestrator Server and the destination. This is because data can be transferred directly to the destination, rather than routed through another intermediary Orchestrator server. However, Windows Server 2016 and Windows Server 2012 R2 are also supported.

Failover Clusters

Windows Server Failover Clusters are supported as source and destination servers, provided that they are running Windows Server 2012 / R2, Windows Server 2016 or Windows Server 2019. It is possible to migrate storage between two clusters, from a standalone server/VM to a cluster, or from a cluster to a standalone server/VM. Failover clusters are also supported for consolidating multiple standalone hosts onto a single cluster by having each migrated file server become a clustered file server workload.

Microsoft Azure Stack

Microsoft Azure Stack can be used as a destination server, with the storage being migrated to VMs running on Azure Stack. Azure Stack is deployed as a failover cluster, so it can also be used for consolidating multiple standalone hosts onto a single piece of hardware.

Microsoft Azure

Storage Migration Services can migrate storage, identity and network settings to a file server running inside a Microsoft Azure Virtual Machine (VM). Simply deploy your Azure Active Directory-connected file server and access it like you would any on-premises file server.

Azure File Sync Integration

Azure File Sync is a technology which optimizes how an on-premises file server syncs its data with Microsoft Azure. It allows Windows Server to function as a local cache of the Azure file share. It integrates with Storage Migration Server and can optimize performance after the migration.

Active Directory Considerations

Storage Migration Service requires that both the source and destination server are within the same Active Directory domain. All of the source servers, destination servers and any Orchestrator Server must have a migration account with administrative access to all systems. If you use migration credentials, the domains must be within the same AD Forest. Any Linux servers running Samba are also required to be managed within the same domain.

When using Windows Server Essentials or Windows Small Business Server you likely have your domain controller (DC) on the source server. For this reason, you likely will not be able to migrate the identity settings as the DC must remain online throughout the process. You can still inventory and transfer files from these servers. If you have two or more domain controllers this should not be an issue, and you can promote the domain controller on the source server after the cutover.

Workgroup migration is not supported.

Installing Storage Migration Service

The Storage Migration Service feature will appear in your Windows Admin Center feed. SMS can also be installed using PowerShell. Installing the Storage Migration Service feature on the management server makes it the Orchestrator Server. Install the Storage Migration Service Proxy on your destination host(s) to maximize performance, as this enables them to copy data directly from the source servers. You can optionally install the Storage Migration Service Tools if you are using an independent management server.
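For reference, a minimal PowerShell sketch of the installation is shown below. It assumes the feature names SMS and SMS-Proxy and a destination server called DEST-FS01; confirm the exact feature names on your build with Get-WindowsFeature *SMS* before running it.

    # Install the Orchestrator feature on the management server
    Install-WindowsFeature -Name SMS -IncludeManagementTools

    # Install the proxy on the destination server to enable direct copies
    # ("DEST-FS01" is a placeholder for your destination server)
    Install-WindowsFeature -Name SMS-Proxy -ComputerName "DEST-FS01"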

Figure 2 – Installing the Storage Migration Service Features

Storage Migration Service (Orchestrator Server)

This feature is installed on the primary server running the migration, known as the Orchestrator Server. This server manages the migration process. It can run on any server or VM that is part of the same domain. The Orchestrator Server can run directly on the Windows Server 2019 destination server or an independent server. It is a good practice to always copy the migration events and logs from this server to track the migration progress.
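To keep those events, you can list whatever Storage Migration Service log channels exist on the Orchestrator and archive them as part of your migration records; the discovery sketch below makes no assumptions about the exact channel names.

    # List any Storage Migration Service event log channels on this server;
    # export the ones you want to keep (e.g. with wevtutil) after each run.
    Get-WinEvent -ListLog "*StorageMigrationService*" -ErrorAction SilentlyContinue |
        Select-Object LogName, RecordCount, IsEnabled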

Figure 3 – Installing Storage Migration Service through Windows Admin Center

Storage Migration Service Proxy

The Storage Migration Service Proxy is a feature that can be installed from Server Manager, Windows Admin Center or PowerShell. Installing it on the Windows Server 2019 destination server roughly doubles the transfer speed, because it lets the source and destination servers copy data directly between each other. Without the proxy, the files must first be copied to the Orchestrator server and then copied again to the destination server, which takes about twice as long because the Orchestrator server acts as a bottleneck. Installing the proxy on a Windows Server 2019 host automatically opens the necessary firewall ports on that server.

Storage Migration Service Tools

These are the management tools, which can be installed through Windows Admin Center or as part of the Remote Server Administration Tools (RSAT).

Configuring Firewall Settings

When installing the Storage Migration Service Proxy, the proper firewall settings will be configured. The source and destination servers must have the following firewall rules enabled for inbound traffic:

  • File and Printer Sharing (SMB-In)
  • NetLogon Service (NP-In)
  • Windows Management Instrumentation (DCOM-In)
  • Windows Management Instrumentation (WMI-In)

The Orchestrator Server must have the inbound File and Printer Sharing (SMB-In) firewall rule enabled.
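If you prefer to enable these rules with PowerShell rather than the GUI, the sketch below loops over the display names listed above. Rule display names can vary slightly by OS version and locale, so verify them with Get-NetFirewallRule if a name is not found.

    # Enable the inbound firewall rules SMS needs on a source or destination server
    $rules = @(
        "File and Printer Sharing (SMB-In)",
        "Netlogon Service (NP-In)",
        "Windows Management Instrumentation (DCOM-In)",
        "Windows Management Instrumentation (WMI-In)"
    )
    foreach ($rule in $rules) {
        Enable-NetFirewallRule -DisplayName $rule
    }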

Inventory Storage Volumes

One of the first steps performed by Storage Migration Service is to inventory the storage selected for migration. This lists details about each of the components to be copied, including the volumes, shares, configuration settings and network adapters. This information is also retained in the migration reports.

Figure 4 – Storage Migration Service will Scan a Server to Inventory its Volumes

Map Source and Destination Servers and Volumes

During the migration, you will get to match each volume on the source server with a volume on the destination server. After selecting your source server(s), SMS will scan them and present a list of volumes. You can select any or all of the drives you wish to migrate, and you will map each to a volume on the destination server which has enough capacity. You must also migrate between the same file system type (NTFS to NTFS or ReFS to ReFS).
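A quick way to gather what you need for this mapping is to list each candidate volume's file system and free space up front, as in the sketch below. Run it on the destination server, or add -CimSession to query a remote one.

    # List drive letters, file systems and capacity to help map source volumes
    # to destination volumes of the same file system with enough free space.
    Get-Volume | Where-Object DriveLetter |
        Select-Object DriveLetter, FileSystemType,
            @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } },
            @{ n = 'FreeGB'; e = { [math]::Round($_.SizeRemaining / 1GB, 1) } } |
        Sort-Object DriveLetter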

Figure 5 – Mapping Source and Destination Servers using Storage Migration Service

Consolidate File Servers on a Failover Cluster

Many administrators want to use Storage Migration Service as a consolidation tool, allowing them to merge several older file servers onto a single destination file server. This scenario is only supported by migrating each legacy file server to a clustered file server. This is permitted because a failover cluster can run multiple file servers as a native cluster workload or as virtualized file servers inside VMs.

Migration Using Storage Migration Service

This section provides details about what happens during the migration.

Validate Migration Settings

Once the source and destination servers are mapped, click Validate. This will run several tests to verify that a unique destination exists, its proxy is registered, the SMB connection is healthy, and that the credentials work with administrative privileges.

Figure 6 – Validate Migration Settings

Migrate Data

Once the transfer begins, data will be copied from each volume on the source server to its mapped volume on the destination server. If there is already data in a share on the destination server, that existing content will be backed up as a safety measure before the first migration. This backup only happens the first time, not on subsequent transfers. If the storage migration is repeated, any identical folders and files will not be copied again, to avoid duplication.

Migrate Storage Settings

The following settings (if available) are migrated to the destination server.

  • Availability Type
  • CA Timeout
  • Caching Mode
  • Concurrent User Limit
  • Continuously Available
  • Description
  • Encrypt Data
  • Folder Enumeration Mode (also known as Access-Based Enumeration, or ABE)
  • Identity Remoting
  • Infrastructure
  • Leasing Mode
  • Name
  • Path
  • Scoped
  • Scope Name
  • Security Descriptor
  • Shadow Copy
  • Share State
  • Share Type
  • SMB Instance
  • Special
  • Temporary

One important component which is not copied during migration is Previous Versions made with the Volume Shadow Copy Service (VSS). Only the current version of the file will be migrated.
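If you want to confirm that share-level settings such as caching mode and access-based enumeration carried over, you can compare them with Get-SmbShare after the transfer. A minimal sketch, with the share and server names as placeholders:

    # Compare a share's settings on the source and destination servers
    # ("Data", "SRC-FS01" and "DEST-FS01" are placeholders).
    Get-SmbShare -Name "Data" -CimSession "SRC-FS01"  | Format-List *
    Get-SmbShare -Name "Data" -CimSession "DEST-FS01" | Format-List *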

Migrate Local Users and Groups

During the migration, you are given the option to copy the account settings for local users and groups. This allows current users to reconnect to the file server without any additional configuration, which is ideal if the server identity is also migrated. If you decide to migrate the local users and groups, you can either keep these accounts the same or force them to be reset with a more secure password. You would not select this option if you plan on keeping your existing file servers in production, as there would be duplicate and conflicting file servers in your infrastructure.

If you are running the migration to set up or seed a DFS Replication server, you must skip migrating the local users and groups.

Skip Critical Files and Folders

Since the migration process runs against a live operating system, it is important that any critical files or folders that are in use are protected. Storage Migration Service will skip these files and folders and add a warning to the log.

The following files and folders will automatically be skipped:

  • Windows files, including: Windows, Program Files, Program Files (x86), ProgramData, Users
  • System files, including: pagefile.sys, hiberfil.sys, swapfile.sys, winpepge.sys, config.sys, bootsect.bak, bootmgr, bootnxt
  • Computer-specific files and folders, including: $Recycle.bin, Recycler, Recycled, System Volume Information, $UpgDrv$, $SysReset, $Windows.~BT, $Windows.~LS, Windows.old, boot, Recovery, Documents and Settings
  • Any files or folders on the source server that conflict with reserved folders on the destination server.

Multi-Threaded Migration

SMS allows multiple copy jobs to run simultaneously because it uses a multi-threaded engine. By default, SMS copies 8 files at a time within a job. This can be changed to anywhere from 1 to 128 simultaneous files by editing the FileTransferThreadCount registry value under HKEY_LOCAL_MACHINE\Software\Microsoft\SMSProxy. It is best not to raise this unless you have hardware and networking built for high SMB throughput, as more threads increase processing overhead, and network bandwidth or disk speed is usually the limiting factor.
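A sketch of that change is shown below; run it on the server where the proxy is installed and restart the Storage Migration Service Proxy service (or reboot) afterwards so the new value takes effect.

    # Raise the per-job copy thread count from the default of 8 to 16
    Set-ItemProperty -Path "HKLM:\Software\Microsoft\SMSProxy" `
        -Name "FileTransferThreadCount" -Value 16 -Type DWord

    # Restart the proxy service so the new value is picked up
    # (the display name below is an assumption; list services with Get-Service *SMS*)
    Get-Service -DisplayName "*Storage Migration Service Proxy*" | Restart-Service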

View Post-Migration Information

It is a best practice to keep track of your SMS migration errors, transfers and jobs. There are a few different ways to track this information with Storage Migration Service.

Error Log

Any files or folders which cannot be transferred will be noted as warnings in the Error Log, such as those being used by the running operating system. This error log will also describe any other types of warnings and errors.

Transfer Log

To keep track of all of your migrations, download the Transfer Log as a CSV (spreadsheet) file. This information is overwritten every time you run a migration, so you may want to create an automated task in Task Scheduler that copies the file each time Storage Migration Service completes a job, so the information is always captured.
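As a starting point, a minimal sketch of such a scheduled task is shown below. The task name, schedule and paths are placeholders, and it assumes you save the exported CSV to C:\SMS\TransferLog.csv and keep a small archive script at C:\SMS\Archive-TransferLog.ps1.

    # Register a daily task that archives the exported transfer log CSV
    $action  = New-ScheduledTaskAction -Execute "powershell.exe" `
        -Argument "-NoProfile -File C:\SMS\Archive-TransferLog.ps1"
    $trigger = New-ScheduledTaskTrigger -Daily -At 6pm
    Register-ScheduledTask -TaskName "Archive SMS Transfer Log" `
        -Action $action -Trigger $trigger

    # C:\SMS\Archive-TransferLog.ps1 could contain something like:
    # Copy-Item "C:\SMS\TransferLog.csv" `
    #     ("C:\SMS\Archive\TransferLog-{0:yyyyMMdd-HHmm}.csv" -f (Get-Date))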

Jobs Log

There is also a log that tracks all of the SMS jobs; it is generally not needed by the administrator, so it is hidden. You can find it under C:\ProgramData\Microsoft\StorageMigrationService. If you are migrating a large number of files, you may want to delete this database to reduce the space it takes up on disk. Additional information can be found in Microsoft's official documentation.
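Before deciding whether the database is worth cleaning up, you can check how much space it actually consumes with the sketch below; if you do delete anything, stop the Storage Migration Service services first (Get-Service *SMS* lists them).

    # Measure the size of the SMS jobs database folder on the Orchestrator
    Get-ChildItem "C:\ProgramData\Microsoft\StorageMigrationService" -Recurse -File |
        Measure-Object -Property Length -Sum |
        Select-Object @{ n = 'SizeMB'; e = { [math]::Round($_.Sum / 1MB, 1) } }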

Migrate & Cut Over the Identity of the File Servers

Once the data has been copied to your destination server, you have the option to migrate the identity of the file server itself. This allows users to continue to access their files on the new hardware with minimal disruption. After completing the migration, select the Cut Over to the New Servers option and enter your Active Directory credentials. You can rename the server, but most likely you will keep the same file server name. If you do not copy the identity, users will continue to access their files on the source server.

Migrate Network Adapter & IP Address Identity

When you migrate the File Server identity you will be given the option to map each of the network adapters from the source server to network adapters on the destination server. This will allow you to move the IP address during the cutover, whether it uses a static IP address or DHCP address. If you use a static IP address, make sure that the subnets on the source and destination server are also identical. You can also skip the network migration.
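It is worth recording the source server's network configuration before the cutover so you can verify the mapping afterwards; a simple capture sketch (the output file name is just an example):

    # Capture the current adapter and IP configuration before the cutover
    Get-NetIPConfiguration |
        Select-Object InterfaceAlias, IPv4Address, IPv4DefaultGateway, DNSServer |
        Out-File ("IPConfig-{0}-{1:yyyyMMdd}.txt" -f $env:COMPUTERNAME, (Get-Date))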

Figure 7 – Configuring the Network Migration for a Cutover

Migrate & Cut Over a Failover Cluster

If you are migrating to a failover cluster, you may also need to provide credentials which allow you to remove a cluster from the domain and rename it. This is required any time a cluster node is renamed.

Antivirus Considerations

Make sure that the antivirus versions and settings are the same on the source and destination servers, particularly the included and excluded folders for scanning. You may need to temporarily disable antivirus scans during the migration to ensure files are not locked while being scanned.
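If Microsoft Defender Antivirus happens to be the product in use, real-time scanning can be paused for the transfer window with Set-MpPreference, as in the sketch below; other antivirus products have their own management tooling.

    # Temporarily pause Defender real-time scanning during the transfer window
    Set-MpPreference -DisableRealtimeMonitoring $true

    # ... run the Storage Migration Service transfer ...

    # Re-enable real-time scanning as soon as the transfer completes
    Set-MpPreference -DisableRealtimeMonitoring $false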

Summary

Storage migration projects can be overwhelming, but if you plan on using Storage Migration Service for Windows Server and Azure, I hope the scenarios, features, requirements and best practices described here prove useful. As always, if you have any questions or concerns, let me know in the comments below.
