There are many file services out there, but how do you know which one is best for your business? NetApp’s Guide to File Services in the Cloud contains everything you need to know. Inside this guide you’ll learn:
- All the challenges of using a file share service.
- What the major cloud providers are offering: Amazon EFS and FSx, Azure Files, GCP Cloud Filestore, and others.
Challenges and Solutions of File Services in the Cloud Architecture. Source: NetApp
Content Summary
Executive Summary
Introduction to File Services in the Cloud
File Services in the Cloud Today
Conclusion
Executive Summary
File shares support some of the most important workloads that enterprise businesses rely on, and the resources of the public cloud have created interesting new possibilities. Every major public cloud provider now offers its own shared file service, each with its own target workloads and considerations. But not every enterprise will find what they’re looking for in a fully-managed, all-cloud service.
How can you find the best option for you? In this guide to file services in the cloud we’ll give you a short introduction to shared file storage technologies, including the challenges of running a file service in the cloud. We’ll profile each of the major file service offerings available today, including Amazon EFS and FSx, Azure Files, GCP’s Cloud Filestore, and more. We’ll also give you a full view of what NetApp’s Cloud Volumes ONTAP offers file users, spotlighting important performance, availability, and data protection features, examples of how to get started, and case studies of enterprise businesses that rely on Cloud Volumes ONTAP to meet all their file service requirements.
Introduction to File Services in the Cloud
What is File Storage?
Moving your file share services to the cloud gives you unlimited scalability while transparently addressing concerns over high availability and resilience to system failure. NFS and SMB / CIFS file shares allow a file system to be accessed concurrently by hundreds or thousands of client machines, and cloud file sharing services extend that model to a wider range of use cases, such as media processing, off-site backup, home directories, data analytics, and more.
As can be expected, there are a number of things to consider when choosing a cloud file sharing service. Each service may be fully managed or require a certain amount of setup, and each supports different access protocols and provides different backup facilities. Finding the best fit for your particular requirements usually takes a certain amount of research, proof of concept, and trial and error.
Two File Sharing Protocols to Consider
What is SMB?
SMB stands for Server Message Block. It is a network protocol that allows shared file access. CIFS stands for Common Internet File System. CIFS is an SMB dialect that was developed by Microsoft to access Windows files. In time, CIFS and SMB became two names for the same thing. For reference, we will use the terms CIFS protocol (the old name) and SMB protocol (the original and now new-again name) interchangeably.
What is NFS?
NFS stands for Network File System, a protocol used primarily by Linux and UNIX systems that lets users access files on remote computers over a network in the same way they would access their own locally-stored files.
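For reference, mounting either kind of share from a Linux client is a one-line operation per protocol. Below is a minimal Python sketch that wraps the standard mount commands; the server names, export paths, and username are hypothetical placeholders, and the commands require root privileges and the nfs/cifs client utilities installed.

```python
import subprocess

# Minimal sketch: mount an NFS export and an SMB/CIFS share on a Linux client.
# Server names, export paths, and credentials are hypothetical placeholders.
def mount_nfs(server, export, mountpoint):
    # Equivalent to: mount -t nfs server:/export /mnt/point
    subprocess.run(["mount", "-t", "nfs", f"{server}:{export}", mountpoint], check=True)

def mount_smb(server, share, mountpoint, username):
    # Equivalent to: mount -t cifs //server/share /mnt/point -o username=...
    subprocess.run(["mount", "-t", "cifs", f"//{server}/{share}", mountpoint,
                    "-o", f"username={username}"], check=True)

mount_nfs("nfs.example.com", "/exports/data", "/mnt/nfs")
mount_smb("smb.example.com", "shared", "/mnt/smb", "fileuser")
```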
Challenges of File Services in the Cloud
What kind of challenges and requirements need to be considered when it comes to using shared file services in the cloud? In the following section we’ll take a look at each of the considerations that users need to address when choosing a file service:
Availability
Shared file storage provides access to a vast number of users and it needs to be available on a constant basis. When using the major cloud offerings, configuring the file share’s availability is on the user. This requires complex manual configurations for supporting automatic failover and failback, especially when it comes to using NAS storage. Many enterprise file share-based workloads require strict SLAs of minimal downtime (RTO<60 seconds) and no data loss (RPO=0). In those cases, any loss of data or downtime will be too costly—in terms of lost revenue, reputation, customer churn, legal exposure, and more—to absorb.
Accessibility
To meet the demands of both Linux/Unix and Windows workloads, a file share solution should enable access with both NFS and SMB / CIFS protocols, in any of those protocols’ various versions or flavors. With the major cloud providers, there isn’t a single, native solution that is able to provide this multi-protocol access. Configuring an in-house solution can also be prohibitively expensive and time consuming.
Data Protection
There are several points to consider with data protection for file shares. Snapshots are key to guaranteeing point-in-time recovery points for cases where data is corrupted, infected, or accidentally deleted, and it should be easy and fast to restore an up-to-date copy from them. Cloud provider snapshots load lazily, which means not all the data may be ready when you need it, and the costs for creating the initial copy can be high. Another challenge relates to application-aware snapshots: the snapshot mechanism should be able to guarantee consistent recovery for databases or any other application. A further aspect of data protection is disaster recovery (DR). The DR solution needs to ensure reliable failover and failback processes, automatic syncs to keep the secondary copy up to date, and regular testing. All this needs to be done while keeping the cost of the copy reasonable, as the DR copy is a complete copy of the primary share.
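To make the application-aware snapshot requirement concrete, here is a minimal sketch of the quiesce-snapshot-resume pattern. It assumes a Linux host with the standard fsfreeze utility; snapshot_volume() is a hypothetical placeholder for whatever snapshot API your platform exposes.

```python
import subprocess

def snapshot_volume(volume_id):
    """Hypothetical placeholder: call your storage platform's snapshot API here."""
    print(f"snapshot requested for {volume_id}")

def consistent_snapshot(mountpoint, volume_id):
    # Flush and freeze the file system so no writes land mid-snapshot...
    subprocess.run(["fsfreeze", "--freeze", mountpoint], check=True)
    try:
        snapshot_volume(volume_id)  # the point-in-time copy is now consistent
    finally:
        # ...and always unfreeze, even if the snapshot call fails.
        subprocess.run(["fsfreeze", "--unfreeze", mountpoint], check=True)

consistent_snapshot("/mnt/shared", "vol-0123")
```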
Performance
Shared file services serve important workloads that require high, consistent performance and low latency. Data must be immediately usable no matter where it is requested from. It is important to be able to scale out or up on demand, and to move data between tiers non-disruptively and without causing performance issues. In case of an uptick in usage, the file service should be able to move to a more performant tier at a reasonable cost.
Backup & Archive
Preventing data loss requires a sufficient method for backing up file data, and data that must be kept for longer periods or for compliance purposes requires an archiving solution. Creating and restoring backups should not affect production-level performance. Backups also need to be available for use at any time, consistent, and easy to restore. Restore granularity should also be possible, so that a single file can be recovered without restoring the rest of the volume or data set.
Storage Footprint and Costs
Since file storage is typically used to support massive data sets such as media libraries or home directories, the overall storage footprint and costs can be a considerable challenge even for the most established organizations. Huge cloud bills can be a detriment to further scaling or investment in new developments.
Scalability and Agility
Shared file storage capacity needs to be able to scale with the massive datasets enterprise file storage requires. File storage serves use cases that can see sudden, dramatic increases and decreases in usage. The ability to scale both up and down to meet those demand peaks and quiet periods is key.
API and Automation
File storage requires that users be able to carry out complex tasks and workflows, such as managing volumes, snapshots, and clones and setting up replication, via automation and orchestration tools, as sketched below.
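As an illustration of the kind of workflow such APIs should support, this sketch automates a daily snapshot rotation. The StorageClient here is a hypothetical, in-memory stand-in; in practice you would swap in your provider’s SDK calls.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Snapshot:
    id: str
    created: datetime

class StorageClient:
    """Hypothetical in-memory stand-in for a storage platform's snapshot API."""
    def __init__(self):
        self._snaps = {}

    def create_snapshot(self, volume, name):
        self._snaps.setdefault(volume, []).append(
            Snapshot(id=name, created=datetime.now(timezone.utc)))

    def list_snapshots(self, volume):
        return list(self._snaps.get(volume, []))

    def delete_snapshot(self, volume, snapshot_id):
        self._snaps[volume] = [s for s in self._snaps[volume] if s.id != snapshot_id]

def rotate_snapshots(client, volume, keep_days=7):
    # Take today's snapshot with a timestamped name...
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    client.create_snapshot(volume, name=f"daily-{stamp}")
    # ...then prune anything older than the retention window.
    cutoff = datetime.now(timezone.utc) - timedelta(days=keep_days)
    for snap in client.list_snapshots(volume):
        if snap.created < cutoff:
            client.delete_snapshot(volume, snap.id)

client = StorageClient()
rotate_snapshots(client, "projects-share")
```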
Cloud Migration
Working with a cloud-based file service in many cases requires the ability to move file data from on-prem or other data repositories without having to refactor or re-architect your existing applications and processes (the lift-and-shift approach), which could otherwise be costly and time consuming.
Data Replication and Sync
Users need to be able to replicate file shares between various repositories and keep them synced for use cases such as DR, data collaboration, offline testing, offline analytics, and more. The costs of data replication and sync, in terms of both storage and traffic, also need to be considered, as massive amounts of data may need to be kept up to date between repositories.
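For a rough sense of what keeping a secondary copy in sync involves, here is a minimal sketch that mirrors a share to a DR host with rsync over SSH. The host name and paths are hypothetical, and a real deployment would schedule this and monitor for failures.

```python
import subprocess

# Minimal sketch: keep a DR copy of a file share in sync with rsync over SSH.
# -a preserves permissions and timestamps, -z compresses in transit,
# and --delete mirrors deletions to the secondary copy.
def sync_to_dr(source_dir, dr_host, dr_dir):
    subprocess.run(
        ["rsync", "-az", "--delete", f"{source_dir}/", f"{dr_host}:{dr_dir}/"],
        check=True,
    )

sync_to_dr("/mnt/shared", "dr.example.com", "/mnt/shared-replica")
```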
Security
Sending sensitive data to the cloud and making it accessible to vast numbers of users requires that the data be protected with encryption, efficient key management, and role-based access restrictions.
Multi-cloud and Hybrid
The native cloud service providers each have their own attractive offerings for file usage, but not every enterprise will be willing to completely let go of their trusted on-premises data center or go all-in with just one cloud. Managing a file share between deployments in one or more clouds and an on-premises data center can be a challenge in terms of data synchronization, management, cost control, and more.
Kubernetes Integration
Kubernetes is the most popular way that developers can orchestrate their container usage in the cloud today. However, unless containers are deployed to the same pod, sharing data between containers or between Kubernetes clusters can be challenging.
NFS makes it much easier to attach storage to pods and reduces the administrative overhead of working with persistent storage. To do this, a file solution needs to be able to work with a persistent volume provisioner. Resizing NFS persistent volumes, mounting persistent volumes as ReadWriteMany, creating separate storage classes for different mount parameters, protecting data with instant snapshots, and other requirements must also be supported.
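As a concrete example of that last point, the following sketch uses the official Kubernetes Python client to register an NFS-backed PersistentVolume and a matching ReadWriteMany claim. The server address, export path, and sizes are hypothetical placeholders, and the NFS server must already be reachable from the cluster.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the target cluster
v1 = client.CoreV1Api()

# A PersistentVolume backed by an NFS export, mountable ReadWriteMany so that
# many pods (across nodes) can read and write the same share concurrently.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="shared-nfs-pv"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteMany"],
        nfs=client.V1NFSVolumeSource(server="10.0.0.4", path="/exports/data"),
    ),
)
v1.create_persistent_volume(body=pv)

# A claim that pods can reference; an empty storageClassName targets
# statically provisioned volumes like the one above.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-nfs-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```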
File Services in the Cloud Today
Solutions available in the market
AWS EFS (for NFS)
Amazon EFS provides a scalable and highly-available solution for creating cloud-based NFS file shares. The setup process is very straightforward, allowing you to create a new file system through the wizard-based UI within minutes.
These file systems grow and shrink automatically as required, with file data redundantly distributed across multiple Availability Zones. Use of multiple nodes also helps to provide greater aggregate throughput for data access. Amazon EFS file systems are primarily meant for access by Amazon EC2 and make use of security groups to act as a kind of firewall that manages network access. To access the file system from an on-premises server, AWS Direct Connect must be used to reach the share over a non-internet-based connection, as AWS VPN connections are not supported.
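The console wizard is the typical route, but the same setup can be scripted. Here is a minimal boto3 sketch, assuming hypothetical subnet and security group IDs; a mount target is created in each Availability Zone where clients will run.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create the file system; the creation token makes the call idempotent.
fs = efs.create_file_system(
    CreationToken="shared-files-demo",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# Expose it in one subnet per Availability Zone via mount targets.
# Subnet and security group IDs below are hypothetical placeholders.
for subnet in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet,
        SecurityGroups=["sg-0ccc3333"],
    )
```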
Each file system is billed according to the amount of storage used each month. As storage use is normally not static, and can vary within any month, billing is calculated based on a more granular measure of capacity used per hour, known as GB-hours. A worked example can be found in the AWS documentation.
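To make the GB-hours metric concrete, here is a small worked example with hypothetical usage figures; the per-GB-month rate below is illustrative only, as actual rates vary by region.

```python
# Hypothetical month: 100 GB stored for the first 15 days (360 hours),
# then growth to 250 GB for the remaining 15 days (360 hours).
gb_hours = 100 * 360 + 250 * 360          # = 126,000 GB-hours
hours_in_month = 720
avg_gb_month = gb_hours / hours_in_month  # = 175 GB-months of usage

illustrative_rate = 0.30                  # USD per GB-month; placeholder only
print(f"Billed for {avg_gb_month:.0f} GB-months ~ ${avg_gb_month * illustrative_rate:.2f}")
```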
One of the main considerations when moving to Amazon EFS is the lack of a built-in backup or snapshot mechanism for protecting live data. Though AWS Data Pipeline can be used to perform an AWS EFS backup to a secondary file system, this AWS EFS-to-EFS backup solution must be set up manually. As snapshots are not supported, these Amazon EFS backups to the secondary file system could potentially double storage usage, and therefore double Amazon cloud storage costs.
Another consideration is the relationship between capacity and throughput performance. A system of burst credits is used to determine the highest level of performance a file system can be expected to achieve, which directly relates to the size of the data being stored. Small, actively-used file systems that use up all their allocated credits drop down to a base level of performance that may not be acceptable in all cases.
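At the time of writing, the documented baseline for bursting throughput is roughly 50 MiB/s per TiB of data stored, which makes the small-file-system problem easy to quantify:

```python
# EFS bursting mode: baseline throughput scales with stored data,
# roughly 50 MiB/s per TiB (i.e., 50 KiB/s per GiB) at the time of writing.
def baseline_mib_per_s(stored_gib):
    return stored_gib * 50 / 1024

for size_gib in (10, 100, 1024):
    print(f"{size_gib:>5} GiB stored -> ~{baseline_mib_per_s(size_gib):.1f} MiB/s baseline")
```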
Benefits:
- Very easy to set up.
- Fully-managed cloud service.
- Horizontally scalable, with multi-AZ availability.
Considerations:
- Performance levels/IO for small systems.
- Supports NFSv4 and later only.
- No built-in backup or snapshot system; may result in additional AWS EFS costs.
AWS FSx for Windows File Server (for SMB / CIFS)
In late 2018, AWS finally addressed the need for shared file support for SMB / CIFS workloads with the release of Amazon FSx. This fully-managed file service is targeted at third-party file systems, such as Windows File Server and Lustre. Because it is built on Windows, Amazon FSx is able to fully integrate with Microsoft technologies such as Active Directory (AD), Windows NTFS, and the Distributed File System (DFS). However, it should be noted that AD is only accessible via the native AWS directory service and cannot be used without it.
Unlike Amazon EFS, Amazon FSx also offers enterprise-grade performance and IO (2 GB/second throughput). This shared storage service is also highly accessible, providing concurrent access globally. As FSx targets enterprise users, it also comes with a data migration capability for lifting and shifting existing workloads to the cloud with minimal effort.
A unique feature of Amazon FSx is that it enables throughput capacity to be set for individual volumes, regardless of how large the volume may be. There is no fee associated with setting up the service; charges are applied monthly based on the amounts of throughput capacity, storage, and backup storage used. Capacity for the service is currently limited to between 300 GB and 64 TB.
Security needs are met by Amazon FSx through the use of encryption for data both in transit and at rest. When it comes to data protection, unlike Amazon EFS, Amazon FSx is equipped with a snapshot feature. These snapshots can create backups of files on a daily basis, or they can be created by the user manually. Since the snapshots are incremental, storage consumed for their retention will only be based on the changes made to the original data.
To maintain availability in a multi-AZ setup, Amazon FSx requires use of Distributed File System (DFS) replication, which can be an additional cost factor as it essentially doubles the cost for the service.
Deployments on Amazon FSx can be started through the use of the AWS CLI and AWS SDK developer tools directly, or through the easy-to-use AWS Management Console GUI.
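As a sketch of the SDK route, the boto3 call below creates a Windows file system. The subnet, security group, and directory IDs are hypothetical placeholders, and the ActiveDirectoryId must reference an AWS-managed directory per the note above.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Create a Windows file system; IDs below are hypothetical placeholders.
resp = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                  # GB, the service minimum
    SubnetIds=["subnet-0aaa1111"],
    SecurityGroupIds=["sg-0ccc3333"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",
        "ThroughputCapacity": 32,         # MB/s, set independently of volume size
        "AutomaticBackupRetentionDays": 7,
    },
)
print(resp["FileSystem"]["FileSystemId"])
```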
Benefits:
- Easy to set up.
- Fully-managed file service.
- AD integration, and support for Windows NTFS and DFS.
Considerations:
- Backups limited to daily automatic or manual incremental snapshots.
- Multi-AZ availability only through DFS replication.
- Availability limited to these AWS regions: US West (Oregon), US East (N. Virginia), US East (Ohio), and Europe (Ireland).
- Possible weekly maintenance (and downtime).
- Requires the AWS directory service.
- Three separate costs: throughput, storage, and backup.
- 300 GB to 64 TB capacity range.
Azure Files
Azure Files enables users to create SMB v3.0 file shares in the Microsoft cloud, in a similar way to Amazon EFS. Creating a new file share is a very straightforward procedure through the UI and can also be performed through PowerShell or the Azure CLI.
Though the SMB protocol is usually used with Microsoft Windows, these shares can also be mounted for reading and writing on Linux and macOS systems. Support for the newer version of SMB enables features such as encryption in transit, which can also be achieved by using the REST interface over HTTPS.
Azure File Sync allows Azure Files to be fully integrated with on-premises systems. By running the Azure File Sync agent on an on-premises Windows Server machine, Azure Files data can be cached locally for faster access, with all writes transparently synchronized back to Azure. Azure File Sync also ensures resiliency of your data and end-to-end integration with Geo-Redundant Storage (GRS).
Multiple servers can be configured in this way to provide uniform access in different regional areas. Share snapshots are another feature, allowing read-only snapshots of a file share to be created for Azure Files backup purposes.
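Programmatic access follows the same pattern as the portal and PowerShell. A minimal sketch with the azure-storage-file-share Python SDK, assuming a placeholder connection string and share name, creates a share and takes a share snapshot:

```python
from azure.storage.fileshare import ShareClient

# Connection string and share name are placeholders for your storage account.
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
share = ShareClient.from_connection_string(conn_str, share_name="team-files")

share.create_share(quota=100)       # quota in GB, up to the 5 TB share limit
snapshot = share.create_snapshot()  # read-only, point-in-time copy of the share
print(snapshot["snapshot"])         # timestamp identifying the snapshot
```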
Azure Files costs are split into two components: the Azure storage cost of the share itself, and access costs, for example for listing the contents of a directory or accessing a file. Additional costs would also be incurred for using features such as Azure File Sync.
Benefits:
- Very quick to get started
- Fully-managed solution
- File Sync and File Share Snapshots
Considerations:
- Size limited to 5 TB
- No built-in support for NFS
- Integration with Microsoft Azure AD for SMB share authentication is in preview and can be implemented only by using Azure AD Domain Services
GCP: Cloud Filestore
On GCP, Cloud Filestore provides a fully managed, high-performance file system for NFS. Users have two performance options available to best match their workload: Standard, with up to 5,000 IOPS, and Premium, with up to 30,000 IOPS.
When it comes to the size of the file share, Cloud Filestore requires a minimum size of 1 TB, with a maximum ceiling of 63.9 TB. An instance of Cloud Filestore is available in only one GCP zone and does not include any way to fail over if the zone where it resides becomes unavailable. That means, should there be an outage, users can expect downtime. Backups also need to be performed by the user, as Cloud Filestore currently has no snapshot feature. However, average availability for either service option stands at a solid 99.9%.
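Creating an instance is a single CLI call. The sketch below shells out to gcloud; the instance, zone, share, and network names are placeholders, and depending on your SDK version the command group may still live under gcloud beta while the service is in beta.

```python
import subprocess

# Minimal sketch: create a Filestore instance with the gcloud CLI.
# Instance, zone, share, and network names are hypothetical placeholders;
# note the 1 TB minimum capacity for a Standard-tier instance.
subprocess.run([
    "gcloud", "filestore", "instances", "create", "shared-nfs",
    "--zone=us-central1-c",
    "--tier=STANDARD",
    "--file-share=name=vol1,capacity=1TB",
    "--network=name=default",
], check=True)
```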
It should be noted that this service is still in a beta release, which may be a factor to consider when looking to deploy an enterprise-level workload. Full integration with other GCP cloud services may make it attractive to users who are already in that cloud.
Benefits:
- Managed service by GCP
- Ready-to-use NFSv3-based NAS storage in the cloud
- Standard and Premium performance options
Considerations:
- Size limited to 63.9 TB
- Backup facility is completely manual
- Beta release, no guaranteed SLAs
IBM SoftLayer File Storage
IBM Cloud’s File Storage offers highly available NFS file share services with a sophisticated feature set. Setup is performed through the platform’s web-based UI, which allows for disk performance to be specified in terms of IOPS per GB. Users can choose between the Endurance Tier, selecting one of three fixed levels of included IOPS, and the Performance Tier, where they can manually enter the level of performance they need.
Storage management features include space-efficient snapshots, data replication, and even volume cloning, which rapidly creates writable copies of existing shares. Data is also encrypted at rest to ensure information security. These features, however, are not currently available in all regions.
Pricing depends on the level of IOPS per GB required. At the low end this can be very cost effective; at the opposite end of the spectrum, however, it can be the most expensive option among the services reviewed here. Another thing to note is the limit on storage capacity: currently, only shares of between 20 GB and 12 TB are supported.
Benefits:
- Highly-available platform for NFS file shares
- Ability to tune performance based on requirements
- Sophisticated storage management features
Considerations:
- Storage management features not currently available in all regions
- Limits on storage capacity mean the platform may not be suitable for all use cases
Open-Source Solutions
Not every solution for file storage in the cloud is fully-managed. There is the option of configuring your own file service based on open-source technology that can take advantage of public cloud storage and compute.
GlusterFS
One such solution is GlusterFS, which can be used to distribute a file share across multiple virtual and physical machines in order to provide scalability and resilience against failure. Though GlusterFS is open source, commercial support is available from Red Hat. Gluster storage supports a wide range of storage configurations, including distributed, striped, replicated, dispersed, and a variety of combinations of those. The file system also supports backups through snapshots, as well as snapshot clones, and can serve out data over NFS, SMB, and even iSCSI through the use of different drivers and add-ons.
As setting up the GlusterFS open-source platform is on the end user, this is not a solution for the faint-hearted: you’ll be expected to roll up your sleeves to set the system up and resolve any issues that arise on your own. It should also be noted that this solution is not specific to any cloud, meaning it can be deployed on AWS, Azure, or GCP.
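To give a flavor of the do-it-yourself work involved, here is a minimal sketch that creates and starts a three-way replicated Gluster volume. Host names and brick paths are hypothetical, and it assumes the Gluster packages are installed and the peers have already been probed into a trusted pool.

```python
import subprocess

# Minimal sketch: create and start a replica-3 GlusterFS volume across
# three nodes. Host names and brick paths are hypothetical placeholders.
bricks = [
    "node1.example.com:/data/brick1",
    "node2.example.com:/data/brick1",
    "node3.example.com:/data/brick1",
]
subprocess.run(["gluster", "volume", "create", "shared-vol",
                "replica", "3", *bricks], check=True)
subprocess.run(["gluster", "volume", "start", "shared-vol"], check=True)

# Clients can then mount the volume with the native FUSE client:
#   mount -t glusterfs node1.example.com:/shared-vol /mnt/gluster
```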
Avere vFXT
Another self-managed option for cloud file services is Avere vFXT, which can act as a caching proxy on top of Google Cloud Storage, AWS, or Azure. Clients are able to access data over NFS and CIFS, leaving the filer to manage the actual persistence to the cloud object store in the back end. Avere virtual appliances can be clustered and offer many advanced features, such as integrating with your on-premises Avere NAS devices to create a global namespace. This solution is more suitable for high-end systems, as is reflected in the pricing.
Benefits:
- Variety of solutions available
- Support for both NFS and CIFS
- GlusterFS support for snapshot backups and clones
Considerations:
- Setup, scripting, domain expertise, and administration of GlusterFS may be very technical
- Avere vFXT may be out of scope for most users’ requirements
Conclusion
The cloud is transforming the way that file storage works. Enterprise businesses can leverage the fully-managed file services offered by the cloud providers, or they may opt to build in-house file systems based on open-source technology, which requires an extensive amount of technical skill and maintenance to use.
Source: NetApp