Considerations for Distributed Kubernetes From the Data Center to the Edge
Table of contents
Variety of Deployment Models
Network Issues and Multiple Kubernetes Sites
Local Data Processing
Security Considerations
Centralized Management of Multiple Environments
Focus on Your Core Business Objectives
Kubernetes is widely recognized as the platform of choice for running efficient, distributed, containerized applications. It’s also common to think of Kubernetes in terms of a single, large cluster or set of clusters running in a data center. This is certainly a common deployment approach, but it’s not the only one.
Variety of Deployment Models
Kubernetes can be deployed in many kinds of environments. The platform is well suited to run in micro data centers that are closer to the edge. A branch office may need only a small cluster to support its remote operations, and that kind of deployment can typically run on hardware that fits into a single rack. Kubernetes can also run at point-of-presence sites. For example, retailers may deploy Kubernetes clusters to physical stores and distribution centers to run applications, store data locally, and coordinate operations with centralized processes.
Kubernetes may also run at edge locations to support Internet of Things (IoT) systems. A manufacturer may deploy Kubernetes in multiple locations within a manufacturing facility to collect IoT data and perform preliminary processing and analysis. This kind of processing close to the environment can help compensate for unreliable networks and long latencies that can reduce the effectiveness of highly centralized processing.
It’s clear there’s a spectrum of cluster deployments. When you’re considering and planning your Kubernetes strategy, it’s important to understand where your deployment falls on that spectrum because there are requirements particular to each. A data center cluster, for example, may have ample resources to scale up the number of pods in a deployment, while a micro data center is more constrained.
In the case of Kubernetes deployed at the edge, you should consider how continuous integration/continuous deployment (CI/CD) will work with potentially unreliable networking. The number of sites can quickly become a factor you need to consider. Updating a single cluster in a data center is challenging enough—updating hundreds of point-of-presence sites is even more difficult.
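To make this concrete, the sketch below shows one way a push-based update could fan out to many sites with retries and backoff. The context names, deployment name, and image are hypothetical, and it assumes each site is registered as a context in a local kubeconfig.

```python
# Minimal sketch: push an image update to many edge clusters, tolerating
# unreachable sites. Assumes each site is a context in the local kubeconfig.
import time
from kubernetes import client, config

SITES = ["edge-site-1", "edge-site-2", "edge-site-3"]  # hypothetical contexts
NEW_IMAGE = "registry.example.com/sensor-api:1.4.2"    # hypothetical image

def update_site(context: str) -> None:
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "sensor-api", "image": NEW_IMAGE}]}}}}
    api.patch_namespaced_deployment("sensor-api", "default", patch)

pending = list(SITES)
for attempt in range(5):                    # retry loop for flaky links
    failed = []
    for site in pending:
        try:
            update_site(site)
        except Exception:                   # network outage, timeout, etc.
            failed.append(site)
    if not failed:
        break
    pending = failed
    time.sleep(2 ** attempt)                # exponential backoff before retrying
```

In practice, a pull-based GitOps agent running in each cluster often tolerates unreliable links better than a central push, since each site reconciles itself whenever connectivity allows.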
Network Issues and Multiple Kubernetes Sites
When deploying Kubernetes clusters to multiple data centers and remote sites, the quality and capacity of network infrastructure can impact the overall performance of the platform.
Data centers typically have high-bandwidth connectivity. Clusters are composed of servers connected by high-speed links and typically span multiple racks. The combination of high-bandwidth networking and the ability to distribute pods across multiple racks provides the optimal environment for performant and reliable Kubernetes clusters.
That level of network capacity extends beyond single data centers, too. Hybrid clouds composed of resources in a data center and in one or more public clouds can have high-bandwidth, dedicated direct connections between sites.
Micro data centers and point-of-presence deployments typically won’t have the same network bandwidth available in data centers and within hybrid clouds. Edge processors and IoT devices are even more constrained in terms of bandwidth. This is one of the reasons it’s advantageous to deploy Kubernetes to multiple locations—with remotely deployed clusters, the processing is brought close to where the data is being generated. Local processing reduces the amount of data that must be sent to the data center and gives local sites the ability to function autonomously in the event of a network outage.
This highlights another factor to consider when planning your Kubernetes strategy: there may be periods of extended outage. Short outages in well-architected deployments won’t significantly disrupt operations. Longer outages, however, will cause clusters to fall out of sync. Changes to data accumulate in the clusters isolated by the outage; when connectivity is restored, recovery can begin and data can be synced. Depending on the duration of the outage, that recovery may take long enough to impact performance and service delivery.
Figure 1: Canary deployments ensure that if something goes wrong, only a small number of customers who used the new version would experience any problems
Local Data Processing
The ability to process data locally is a key advantage of having multiple Kubernetes deployments. This approach, however, does make it more difficult to deploy services to multiple clusters. Consider, for example, the various ways you can deploy updates and new applications.
Canary deployments release an update to a small number of servers (Figure 1). A small percentage of workload traffic is routed to the newly deployed version. If the new version functions as expected, then it can be rolled out to all users. In the event something goes wrong, only a small number of customers who used the new version would experience any problems.
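As a rough illustration, a canary split can be approximated in plain Kubernetes by running stable and canary Deployments behind one Service and weighting traffic by replica count. The names `myapp-stable` and `myapp-canary` and the 10% split below are assumptions, not a prescribed setup.

```python
# Minimal sketch: approximate a canary traffic split by scaling two
# Deployments that sit behind the same Service selector.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

TOTAL, CANARY_PERCENT = 10, 10
canary = max(1, TOTAL * CANARY_PERCENT // 100)   # 1 of 10 pods -> ~10% traffic

def scale(name: str, replicas: int) -> None:
    apps.patch_namespaced_deployment_scale(
        name, "default", {"spec": {"replicas": replicas}})

scale("myapp-canary", canary)        # hypothetical Deployment names
scale("myapp-stable", TOTAL - canary)
```

A service mesh or an ingress controller with weighted routing gives much finer-grained control than replica ratios, but the replica approach works with no additional infrastructure.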
Figure 2: Rolling deployments are a variation of canary deployments: a new version of an application or service is rolled out to a small group of users, and additional groups receive it incrementally as long as no problems are encountered
A variation on the canary deployment is the rolling deployment (Figure 2). As with a canary deployment, this starts with releasing a new version of an application or service to a small number of users. After a period of time passes without problems, another group of users can be routed to the new version. This incremental process continues until all users are using the new application or service, or a problem is discovered and the change is rolled back.
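Kubernetes Deployments support this pattern natively through the RollingUpdate strategy. The sketch below assumes a Deployment named `myapp`; the surge and unavailability limits shown are illustrative.

```python
# Minimal sketch: rely on Kubernetes' built-in RollingUpdate strategy to
# replace pods in small batches rather than all at once.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0},
        },
        "template": {"spec": {"containers": [
            {"name": "myapp", "image": "registry.example.com/myapp:2.0.0"}]}},
    }
}
apps.patch_namespaced_deployment("myapp", "default", patch)
# Kubernetes now replaces pods one at a time; a failing readiness probe on
# the new version halts the rollout, and `kubectl rollout undo` reverts it.
```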
Figure 3: With blue/green deployments, a duplicate environment is created running the new version of the software, which is tested thoroughly before all users are switched over
The third deployment model is the blue/green deployment (Figure 3). In this approach, a duplicate environment is established and the new version of the software runs there. Because the environment is fully duplicated, all users can be switched over from the old version (blue) to the new version (green).
This has the advantage of making the new service available to all users at once. This approach works well when the green deployment can be thoroughly tested before the switchover and there are sufficient compute and storage resources to duplicate the entire production environment.
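One minimal way to implement the switchover in Kubernetes is to repoint a Service’s label selector from the blue pods to the green pods. The names, namespace, and `track` label below are assumptions for illustration.

```python
# Minimal sketch: cut all traffic over from blue to green by repointing the
# Service selector. Both Deployments run simultaneously, labeled
# track=blue / track=green.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.patch_namespaced_service(
    "myapp", "prod",
    {"spec": {"selector": {"app": "myapp", "track": "green"}}})
# Rollback is the same patch with "track": "blue".
```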
In addition to rolling out new versions of software, you’ll need to consider how the underlying bare metal servers are provisioned and managed. There are multiple options. For example, you could run virtual machines (VMs) on VMware, but this requires a vCenter in each location, and someone would have to log in to each system manually to manage the site.
Alternatively, using OpenStack for VM orchestration with a single management plane, such as Platform9, provides the same benefits of central management and visibility for both your VMs and containers. Still another option is KubeVirt, an emerging technology that runs VMs on Kubernetes itself, letting you deploy your services without VMware.
Stateful services, such as databases, bring another set of challenges to managing multiple Kubernetes clusters. These services need persistent storage, so you’ll need to understand how to architect the cluster to deliver the needed read and write performance. To enable some level of autonomy within the cluster, plan for graceful degradation of services when the network is down. For example, data stored on a remote cluster could be cached locally so that data is available to local processes. When the network is available, the databases can sync and caches can refresh.
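The sketch below illustrates this degradation pattern in simplified form: reads fall back to a local cache when the central database is unreachable, and writes are queued for later synchronization. The `remote_read` and `remote_write` functions are hypothetical stand-ins for calls to the central database.

```python
# Minimal sketch of graceful degradation: serve reads from a local cache and
# queue writes while the central database is unreachable, then sync later.
import queue
import time

local_cache: dict[str, str] = {}
pending_writes: queue.Queue = queue.Queue()

def remote_read(key: str) -> str:
    raise ConnectionError("stand-in: central database unreachable")

def remote_write(key: str, value: str) -> None:
    raise ConnectionError("stand-in: central database unreachable")

def read(key: str) -> str | None:
    try:
        value = remote_read(key)       # prefer fresh data from the center
        local_cache[key] = value       # refresh the local cache on success
        return value
    except ConnectionError:
        return local_cache.get(key)    # degrade to possibly stale local data

def write(key: str, value: str) -> None:
    local_cache[key] = value           # local processes see the change now
    pending_writes.put((key, value))   # a real system would persist this queue

def sync_pending() -> None:
    """Drain queued writes; re-queue and back off if the link is still down."""
    while not pending_writes.empty():
        key, value = pending_writes.get()
        try:
            remote_write(key, value)
        except ConnectionError:
            pending_writes.put((key, value))
            time.sleep(30)             # wait for connectivity to return
            return
```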
Security Considerations
Security operations need to be coordinated across all environments, especially encryption for data at rest and key management.
Encryption at rest is required to comply with a wide variety of regulations, especially when personally identifiable information (PII) or other sensitive data is stored. There may be multiple levels of encryption, starting with the storage device.
Middleware, such as databases, may also provide for encryption. For example, some relational database management systems allow data modelers to specify that particular columns of data should be encrypted. Applications can also provide for their own encryption policies and methods. Regardless of the combination of encryption options you may employ, they need to be coordinated across all Kubernetes environments.
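As a simplified illustration of application-level column encryption, the sketch below encrypts a single sensitive field before it is persisted, using a symmetric key. In production the key would come from a key management service rather than being generated in place.

```python
# Minimal sketch of application-level column encryption: only the sensitive
# column is encrypted before the record is written to storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, fetched from a key manager
f = Fernet(key)

record = {"order_id": 1042, "card_number": "4111111111111111"}
record["card_number"] = f.encrypt(record["card_number"].encode()).decode()
# ...persist `record`; the ciphertext is what reaches disk...
plaintext = f.decrypt(record["card_number"].encode()).decode()
```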
Key management is another security process that will need to be managed across environments. Key management services can provide all the required functionality, but you’ll still need to define policies and monitor operations. For example, you’ll want to define policies for key rotation and be able to verify that rotations actually occur.
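A rotation check might look something like the sketch below; `fetch_key_metadata` is a hypothetical stand-in for whatever inventory API your key management service exposes, and the 90-day policy is just an example.

```python
# Minimal sketch: verify that every key has been rotated within policy.
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)          # example rotation policy

def fetch_key_metadata() -> list[dict]:
    # Stand-in data; a real check would query the key management service.
    return [{"id": "db-at-rest",
             "created": datetime(2024, 1, 1, tzinfo=timezone.utc)}]

def overdue_keys() -> list[str]:
    now = datetime.now(timezone.utc)
    return [k["id"] for k in fetch_key_metadata()
            if now - k["created"] > MAX_KEY_AGE]

for key_id in overdue_keys():
    print(f"key {key_id} exceeds rotation policy")  # alert in a real system
```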
Centralized Management of Multiple Environments
Multiple environments can be a challenge to manage, especially as the number of sites grows. Some clusters will be in the data center and can be managed to some degree with existing tools. Clusters at point-of-presence sites and on the edge need to be monitored and managed to maintain the necessary quality of service.
Fortunately, Kubernetes has auto-healing capabilities that reduce the need for human intervention. Unhealthy pods are replaced automatically, without requiring a DevOps engineer to log in to a cluster, identify the failing pods, and replace them. Auto-healing also promotes autonomy: if the network is down, the cluster can continue to function and correct for some failures within the system.
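Auto-healing is driven largely by probes. The sketch below shows a container definition with a liveness probe; the endpoint, port, and thresholds are illustrative. Because the kubelet performs the restart locally, this works even when the site is cut off from central management.

```python
# Minimal sketch: a liveness probe is what lets Kubernetes replace unhealthy
# containers automatically. The /healthz endpoint and port are assumptions.
from kubernetes import client

container = client.V1Container(
    name="myapp",
    image="registry.example.com/myapp:2.0.0",
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,   # give the app time to start
        period_seconds=15,          # probe every 15 seconds
        failure_threshold=3,        # restart after 3 consecutive failures
    ),
)
# When the probe fails repeatedly, the kubelet restarts the container locally,
# with no connectivity to a central control plane required.
```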
Focus on Your Core Business Objectives
Kubernetes is moving beyond the data center to micro data centers, point-of-presence facilities, and even the edge. Managing Kubernetes is difficult when it’s isolated to a data center, and multiple deployments in different environments compound your management challenges. Centralizing the management of all of those environments reduces that operational burden, freeing your teams to focus on core business objectives rather than on infrastructure.
Source: Platform9