
Best practices for Modernizing Windows Application Workloads Using Containers on AWS

Servers alone are no longer a sustainable way to host complex, internet-based applications. Virtual machines and microservices were one answer; however, they can be inefficient and difficult to scale with your applications. There had to be another way to simplify application deployment without compromising scalability. For many, that solution was Docker, a user-friendly container deployment methodology.

Containers have transformed the way applications are created and deployed. AWS offers infrastructure resources optimized for running containers, as well as a set of orchestration services that make it easy for you to build and run containerized applications in production.

You may already know what containers are, but do you know the best practices to get started and which container strategy is right for you?

Read our whitepaper to learn how containers can support your next wave of growth, which best practices to follow, and how you can easily get started using them on AWS.

Read this article to learn:

  • The different container offerings on AWS and best practices for planning your container strategy
  • How to migrate your Windows workloads to AWS using containers
  • How AWS is optimized for Kubernetes and other container orchestration services

Content Summary

Understanding Containers on AWS
Why Do Containers Matter?
Before Containers
Microservices and Virtual Machines
Containers and Microservices
Containers and Cloud-Nativeness Go Hand-in-Hand
Container versus Container Orchestration
Containers on AWS
Amazon Elastic Container Service (ECS): Born and Raised in AWS
AWS Fargate: the Marriage of Serverless and Containers
Amazon Elastic Container Service for Kubernetes (EKS): Easily Operating Kubernetes on AWS
Amazon EC2-native Workloads: Specific Requirements Need Specific Tools
Containers on Windows
All Container Workloads are supported on AWS

Understanding Containers on AWS

Every so often there is a game changer in technology that completely disrupts how people operate. Over the last six years that game changer has been cloud computing; over the past two years it has been containers (often referred to interchangeably as Docker).

Strictly speaking, containerization is not a new concept; it dates back to 1979 and chroot, which isolates a Unix process (and its children) by restricting its view of the file system to a new root directory.

Docker offered an ease of adoption that brought containers into current technology. Around since 2014, Docker has been adopted into business strategy rather than being relegated to just a technological trend.

Why Do Containers Matter?

A conversation about containers usually starts with the technology, but quickly turns into a conversation about change management, safe and scalable collaboration across different groups, community-driven initiatives, and other disciplines that are normally kept separate. These days, data scientists are talking about deploying their data workloads on Kubernetes, while machine learning specialists are actively contemplating how to generate and train their predictive models efficiently by leveraging container orchestration tools. Containers have begun to underpin the evolution of the different distributed computing disciplines.

While containers as a packaging solution have had a strong impact on the modern “DevOps” culture, they’ve also gained popularity because they go hand-in-hand with the continuous evolution of software architecture.


Before Containers

Monolithic applications were once the norm.
As servers were tedious and difficult to configure, it was worthwhile to reduce the pain by using as few servers as possible—combining front-end user interfaces, business logic, and backend services into one single package deployed to a single server. Every major upgrade translated directly to downtime. Servers were expensive both in terms of upfront investment and ongoing maintenance costs; less was more.

While fewer servers with larger installations looked reasonable on paper, as the complexity of the system increased and demand for new features hastened, the development and testing of such complex applications broke down and easily ground to a halt. The more complex the system, the more time was required for testing. Releases became slower and larger in scope, with more features slotted in for fear of “missing the boat,” and production rollouts became multi-hour or multi-day affairs.

These methods were unsustainable. As Internet-based business became more prominent, user expectations meant that this haphazard approach to massive deployments had to be completely eradicated. The “microservices” paradigm was introduced to address this problem. The complex installation of all features in every release was broken down into independent releases of much smaller components. As long as the agreed-upon interfaces did not change, it became the prerogative of individual component owners to update and change the logic as they saw fit, instead of worrying about every single test case in the test suite.

Microservices and Virtual Machines

While microservices were a good response to these complex applications, they were not free from challenges. For example, with each microservice on an independent release cycle having slightly different dependencies or running on different versions of an operating system, it was necessary to provision dedicated servers for each of the services. By breaking monolithic applications into microservices, the number of servers grew from the handful of the monolithic days to tens or hundreds.

Better Internet connectivity also meant users and systems were no longer siloed. Peak usage no longer meant hundreds or more calls within the hour, but within a minute. The tens or hundreds of servers serving the full collection of microservices now had to be increased to hundreds or thousands.

The rise of virtual machines helped solve this problem. Sitting on top of the hardware, the hypervisor is able to create and run one or more virtual machines on the same physical machine. Each of the virtual machines can operate independently on the shared resources and can be configured using configuration management tools. This increases the number of servers available on the same hardware, so more microservices can run on it.

Running microservices on virtual machines is not a perfect solution for every use case. Managing a fleet of VMs and hypervisors introduces its own set of overhead challenges around managing load, machine density, horizontal and vertical scaling, as well as configuration drift and OS maintenance. Configuration management tools such as Chef, Puppet, and Ansible, coupled with platforms such as OpsWorks and AWS SSM, can often eliminate or greatly reduce these challenges for many use cases. For some applications, this overhead is a fair trade for the flexibility required, especially for large but still-evolving enterprise applications. However, as applications evolve further towards the segmentation of microservices, this balance between management overhead and hosting flexibility can skew unfavorably. For developers focused on maintaining a large set of many smaller systems, a new solution was required.

Containers and Microservices

For many, that solution was Docker.
Docker represents a user-friendly container deployment methodology. By adopting Docker, multiple applications can run on the same virtual machine or bare metal server. Since Docker packages all of an application’s dependencies within a single image, services with conflicting dependencies can coexist. As long as the services share the same kernel as the host machine, the different Docker processes run harmoniously with one another. The hundreds or thousands of machines can now drop back drastically without sacrificing the release independence and integrity of the applications.

Another advantage of containers is immutability and therefore consistency. Upgrading a containerized application is equivalent to stopping an existing process and starting a new one based on a newer image. This removes the potential drift in configurations. The removal of configuration ambiguities also helps introduce a more streamlined process. Since the dependencies are already packaged within the container image, the overhead is drastically reduced. This is analogous to compiled binary applications where all dependencies are encapsulated at build time.
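
To make the upgrade-by-replacement model concrete, here is a minimal sketch using the Docker SDK for Python. The image name, tag, and port are hypothetical placeholders; the point is simply that an upgrade means stopping the old container and starting a new one from a newer immutable image.

    # Minimal sketch: upgrading an immutable containerized service by replacing
    # the running container with one built from a newer image.
    # Assumes a local Docker daemon and a hypothetical "myapp" image repository.
    import docker

    client = docker.from_env()

    # Pull the newer immutable image; nothing is patched in place.
    client.images.pull("myapp", tag="2.0")

    # Stop and remove the container running the old image.
    for old in client.containers.list(filters={"name": "myapp"}):
        old.stop()
        old.remove()

    # Start a fresh container from the new image; all dependencies are already
    # baked into the image, so no host-level configuration is required.
    client.containers.run("myapp:2.0", name="myapp", detach=True,
                          ports={"8080/tcp": 8080})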

Containers and Cloud-Nativeness Go Hand-in-Hand

Another movement that helped promote adoption of containers is cloud computing. With the advent of cloud, “cloud-native” technologies became the building blocks for developing cloud-based applications. Containers are an enabling technology, facilitating encapsulation of independent services, fully utilized compute infrastructure, scalability, and rapid development.

Why do containers work well with the cloud? One prominent feature of cloud computing is elasticity: cloud infrastructure can scale up and down based on demand. Spinning up a new virtual machine to service the demand works, but it takes minutes, and those minutes can translate into significant loss of business. Scaling up containers takes seconds instead of minutes, meeting scaling demands much more efficiently. The speed of container deployment aligns with a cloud environment that demands rapid change.

Because containers run on an abstraction layer on top of virtual machines, they are further separated from the underlying compute resources. A Docker image that can run on premises can also run on AWS and in other environments. Since the cloud transcends physical locations and service providers, containers work well with it. Cloud-native architectural patterns state that scaling horizontally (more servers) is preferable to scaling vertically (more powerful servers), and Docker provides a way to run applications across different servers easily. Because containers are immutable, they incur lower operational costs with a smaller margin of error. As demand increases for hybrid cloud computing for disaster recovery and high availability purposes, a deployment mechanism that promises to work across different physical environments is certainly very attractive.

Container versus Container Orchestration

While running one container is indeed easy, managing a number of containers – or generally what is known as “container orchestration” – is a lot more complex.

Container orchestration can be a heavy operational overhead. Unless your core business is managing container processes, mastering the management and orchestration of containers often does not improve the bottom line. The good news is that, because of the popularity of containers, a number of people have been trying to solve these problems, and providers like AWS have been introducing a variety of solutions to help orchestrate containers.

Containers on AWS

While it has always been possible to run containers directly on Amazon EC2 instances, AWS recognised the need to relieve users from undifferentiated operating activities, allowing businesses to focus on their core products and services, as opposed to focusing on mastering container orchestration.

There are a number of managed container orchestration services on AWS, such as Amazon EKS (Elastic Container Service for Kubernetes) and AWS Fargate, both of which were announced during re:Invent 2017. Along with Amazon Elastic Container Service (ECS), these services cover the diverse needs of AWS users.

To the right is a summary diagram of the various container options available to AWS customers. Each has its own specific advantages and is best suited for particular use cases. Together, they serve the full spectrum of container needs on AWS and help prove AWS’ commitment to the wide assortment of container approaches being adopted by customers.

Amazon Elastic Container Service (ECS): Born and Raised in AWS

The first managed container service on AWS was ECS, which was announced in 2014 and has since gone through many changes.

Elastic Container Service provides a rich experience for running containers. A product born and raised in AWS, ECS has tight integration with other AWS services embedded in its DNA. Logging needs can be answered by the awslogs option that points straight to CloudWatch Logs, autoscaling and monitoring are concisely integrated with CloudWatch, while permissions and security are heavily backed by IAM and security groups. Given that container instances are specialized EC2 instances, any needs for spot instances and spot fleets are also supported by ECS. On top of that, if you’re looking for a CI/CD experience within AWS, Elastic Beanstalk can be used to manage ECS workloads easily.

ECS is well integrated with the rest of the AWS ecosystem. If your workload is fully dependent on AWS native services, ECS can be a great tool for further ease and simplicity.
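
As an illustration of that integration, here is a minimal, hedged sketch using boto3 to register an ECS task definition whose container logs flow straight to CloudWatch Logs via the awslogs driver. The family name, image, log group, and region are placeholders for your own resources.

    # Minimal sketch: an ECS task definition wired to CloudWatch Logs.
    # All names (family, image, log group, region) are illustrative placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.register_task_definition(
        family="web",
        containerDefinitions=[
            {
                "name": "web",
                "image": "myrepo/web:latest",   # hypothetical image in your registry
                "memory": 512,
                "essential": True,
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "/ecs/web",
                        "awslogs-region": "us-east-1",
                        "awslogs-stream-prefix": "web",
                    },
                },
            }
        ],
    )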

Requirement comparison between All-in AWS Services and Cross-Provider

AWS Fargate: the Marriage of Serverless and Containers

AWS Fargate was announced as the container orchestration tool with no infrastructure management. Currently supported for Elastic Container Service (ECS), while EKS support is pending, Fargate provisions and configures compute and network infrastructure while developers simply specify CPU and memory requirements.

In a nutshell, Fargate is serverless deployment for containers. Instead of developing function code in one of the supported languages, as when running Lambda functions, you can now just run the image directly in a serverless fashion. Existing ECS task definitions can be reused to set up Fargate workloads. Fargate workloads run containers driven by schedules, event-driven patterns, or automation-driven instantiation for on-demand workloads. The user is liberated from the overhead of managing the entire container hosting platform, aside from ensuring the sizing and access requirements of the container itself. The containers run in VPCs in AWS accounts, and users can set up Network Load Balancers and Application Load Balancers pointing directly to a target group consisting of the Fargate containers in order to run a load-balanced workload.
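
As a concrete, hedged illustration, the following boto3 sketch launches an existing, Fargate-compatible ECS task definition. The cluster name, task definition, subnet, and security group IDs are placeholders for your own resources.

    # Minimal sketch: launching an existing ECS task definition on Fargate.
    # Cluster, task definition, subnet, and security group values are placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.run_task(
        cluster="default",
        launchType="FARGATE",
        taskDefinition="web:1",   # an existing Fargate-compatible task definition
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )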

Fargate is by far the simplest way to start running containers on AWS, as it removes the overhead of managing and scaling the containers and the underlying instances. The open question is whether there are specific workloads best suited for Fargate. Lambda is best suited for event-driven or scheduled short-duration workloads. Given the price point and features currently supported, Fargate appears to be following the same pattern of serving scheduled and short-term workloads.

Amazon Elastic Container Service for Kubernetes (EKS): Easily Operating Kubernetes on AWS

Originating from an internal container platform at Google named Borg, Kubernetes has grown into a widely adopted and growing ecosystem that has won the support of a vast community of engineers. It is so popular that every cloud provider on the 2018 Gartner Magic Quadrant provides some level of support for it.

Running Kubernetes on AWS is attractive for numerous reasons. What many people choose Kubernetes for is its open architecture: because of the pluggable provider interface, Kubernetes can run both on premises and across different cloud providers, making the application management code reusable regardless of the underlying infrastructure. It is a feature-rich container orchestration platform that supports widely different workloads, ranging from highly fluctuating stateless services to containerized persistent backends and the tools supporting such workloads. Furthermore, it fully aligns with Linux security paradigms to create a highly scalable yet secure container deployment experience.

While Kubernetes has internal support for container scaling, service discovery, and config management within the platform, running Kubernetes on AWS allows the platform to leverage some key AWS services, such as cluster autoscaler integration with Auto Scaling groups, Elastic Load Balancing (ELB), Amazon Elastic Block Store (EBS) volumes, Amazon Route 53, and Parameter Store/Secrets Manager. For an organization that is already operating on AWS, many of the existing operating paradigms and provisioning tools can be extended to also run Kubernetes with relative ease. Running Kubernetes on AWS is an opportunity to take advantage of a feature-rich container platform and robust, scalable, on-demand infrastructure.

Many organisations have adopted Kubernetes; however, engineers are often flummoxed by the steep learning curve. While tools such as Kops and Kubeadm can help ease the creation and upgrade of Kubernetes clusters on AWS, configuring the master nodes, troubleshooting network errors, managing etcd backup and restoration, and performing other control-plane tasks takes a lot of education. Engineers can quickly find themselves descending into a rabbit hole if they adopt Kubernetes without fully appreciating the complexity of maintaining a highly available Kubernetes control plane. Many engineers working on Kubernetes spend too much time reading documentation and user blogs, working through tutorials and source code, and catching up on the latest messages in Kubernetes GitHub issues and Slack channels. This, coupled with the rapidly changing nature of the Kubernetes platform and its multitude of features, enhancements, and add-ons, makes navigating the Kubernetes world even more difficult.

That is why Amazon EKS is a compelling option for users who have conventional orchestration needs. With the managed Kubernetes control plane, the indispensable, and often most challenging, pieces of the Kubernetes cluster are in the hands of AWS managed services, giving the team more time to focus on developing applications and services, and thereby reducing the time to market of the differentiating products and services. Meanwhile, application developers and operators still retain full control over the nodes where the containers run.

With official support on AWS, the plugins running on the platform are also more closely aligned with the AWS core design. Traffic across pods on different worker nodes closely follows the VPC network and security model, making the process very efficient. Logging and analytics are tightly integrated with AWS platform tools such as Amazon CloudWatch. Plus, certain industry-specific compliance requirements are addressed by using EKS: currently EKS is HIPAA-eligible as well as ISO and PCI DSS Level 1 compliant.

By becoming an active contributor to the Kubernetes project, AWS has addressed potential concerns that EKS will diverge from Kubernetes development. Releases onto EKS follow the official Kubernetes releases very closely. If a workload runs on Kubernetes on premises on a version that is supported by EKS, it is expected to run smoothly on EKS too.
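
To show what a managed control plane means in practice, here is a minimal, hedged boto3 sketch that asks EKS to provision a cluster. The cluster name, IAM role ARN, and subnet IDs are placeholders; worker nodes are added separately (for example, via an Auto Scaling group of EKS-optimized AMIs).

    # Minimal sketch: provisioning a managed Kubernetes control plane with EKS.
    # The role ARN and subnet IDs below are hypothetical placeholders.
    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    eks.create_cluster(
        name="demo-cluster",
        roleArn="arn:aws:iam::123456789012:role/eks-service-role",
        resourcesVpcConfig={
            "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        },
    )

    # The control plane (API servers and etcd) is now managed by AWS;
    # poll until it becomes ACTIVE before configuring kubectl against it.
    status = eks.describe_cluster(name="demo-cluster")["cluster"]["status"]
    print(status)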

Amazon EC2-native Workloads: Specific Requirements Need Specific Tools

Running container workloads on EC2 instances is analogous to running databases on AWS; take SQL Server as an example. If you need a SQL Server database that fits within the operating parameters of RDS for SQL Server, you should leverage RDS; but if you have specific requirements for SQL Server Analysis or Reporting Services, the ideal solution is still running SQL Server on EC2 instances. It is the usual balance between control and delegation. There are more limitations with managed services, while the user gains peace of mind; conversely, with flexibility the user has to assume more operational responsibility. The same logic applies to running container workloads on EC2 directly, be it through Kubernetes or other container orchestration platforms.

When working on a greenfield project, it is easy to decide between the most popular choices, i.e. ECS or EKS; however, there are considerably more constraints when managing a publicly released product that serves critical workloads and has service level agreement obligations. Before Kubernetes became arguably the cross-platform standard, workloads were deployed onto Mesosphere (or DC/OS), Docker Swarm, OpenShift, and Rancher for a variety of reasons, such as add-on support and hybrid deployment requirements. Each of these solutions has its merits and shortcomings. Philosophical debates aside, production workloads still need to be supported while migration decisions to either EKS or ECS are being made. That means running those container workloads on EC2.

Even in the case of Kubernetes, if a generic Kubernetes cluster works for you, the most effective way to run it is EKS. If you have specific compliance, performance, or management requirements, or anything that requires setting up specific features of the cluster by passing parameters to the Kubernetes API server, Kubernetes on EC2 is still the optimal solution.

As technologies progress, what is the standard today may well shift because of evolving needs. What container workloads on EC2 provide is the fundamental assurance that the workload will function on AWS. It may mean more management overhead, or more flexibility in configuration; the key is that it is supported and it can be done.

Containers on Windows

Modern containerization is largely an evolution of isolation approaches that emerged from various Unix-like operating systems. As such, it is a common (though decreasingly so) misconception that containerization in the cloud is restricted to Linux environments. Sensing the importance of the technology, Microsoft began experimenting with containerization in the early versions of Windows Server 2016.

Throughout the 2016 lifecycle, and especially with the late-2018 release of Windows Server 2019, customers heavily invested in the Windows and/or .NET ecosystems can enjoy robust, first-class support for containerization. In supported versions of Windows Server (and Windows 10), Docker containers based on Windows Server images are supported in much the same way as they are on Linux systems. Windows offers a similar process-level isolation, called “Windows Server Containers,” where containers share the kernel of the underlying host, with other resources abstracted through the container runtime. This gives Windows containers the same behavior and management patterns through Docker that are available on Linux, and allows Windows containers to enjoy a vast array of hosting and management options in the cloud.

Microsoft did take the evolution one step further by offering a second isolation option, known as Hyper-V isolation, which runs each container in an optimized VM, removing the need to share the OS kernel and offering VM-level security protections to a container that may need such isolation. The isolation model is a runtime feature, so the containers themselves do not differ across models. We’ll cover this in a little more detail, but the key takeaway is that Windows containers are just Docker containers running Windows base images, and they offer two different isolation options, with Windows Server Containers (process isolation) being the most similar to the model Docker uses on Linux.
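
Because the isolation model is just a runtime setting, the same image can be run either way. The following hedged sketch uses the Docker SDK for Python on a Windows container host; the Nano Server tag is illustrative and should match your host OS version.

    # Minimal sketch: running the same Windows base image under the two isolation
    # modes on a Windows container host. The image tag is illustrative.
    import docker

    client = docker.from_env()

    # Windows Server Containers: process-level isolation, kernel shared with the
    # host (closest to the Linux container model).
    client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:1809",
        command="cmd /c echo hello from process isolation",
        isolation="process",
        remove=True,
    )

    # Hyper-V isolation: the same image, run inside a lightweight utility VM,
    # so the container no longer shares the host kernel.
    client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:1809",
        command="cmd /c echo hello from hyperv isolation",
        isolation="hyperv",
        remove=True,
    )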

With differing operating systems, there are a few minor considerations to take into account when deploying Windows containers. The main limitation comes from the fact that containerization is largely an abstraction offered by the OS. As such, containers are mostly limited to running on a host with the same OS architecture: Linux containers need to run on Linux hosts, and Windows containers need to run on Windows hosts. Fortunately, many of the management and hosting platforms have made this consideration largely a solved issue and handle the node OS on the backend, allowing containers to be managed together. Windows container images can also be quite large, due again to differences in OS architecture. This is improving, with various versions of Windows Server being offered for this use case. In Server 2019 there are three Windows versions: Server, Server Core, and Nano Server. In past versions, Core and Nano were somewhat similar, but in 2019 Nano has been repositioned to be the clear best base image for Windows containers, Core offers a UI-less, streamlined server that is perfect for use as a container host, and Server offers the traditional UI-based server experience Windows Server is known for. There are also some additional platform restrictions or differences, covered briefly in the following sections. However, outside of this small list, containers on Windows provide the same benefits, usage model, and management options as their Linux brethren.

As Windows containers are first-class citizens in the Docker/container ecosystem, many of the same options we’ve already discussed are available. Similar to Linux, when specialized workloads require it, a Docker system can be built on top of an EC2 instance (or fleet of instances) directly. In the Windows world, this offers one unique item of flexibility. As mentioned before, Windows offers a second level of isolation, known as Hyper-V isolation, which provides VM-level security for containers. As its name suggests, this is a core feature of the hypervisor and requires direct access to the hardware, which many cloud options abstract away. However, using Amazon EC2 bare metal instances, you can build your own Hyper-V nodes running Docker and take advantage of Hyper-V isolation in the cloud. This option also gives you the ability to run Linux containers on Windows nodes, as Hyper-V isolation provides a complete VM and can therefore support native Linux workloads. This is a highly specialized use case, but another great example of the flexibility of the AWS platform.

Amazon ECS has provided support for Windows containers since 2016, now providing various OS options (2016, 2019) and optimized AMIs for a streamlined deployment. Amazon ECS has a few caveats with Windows containers, mostly aligning to the general restrictions described earlier. Additionally, some of the AWS integration features provided by the host proxy software on the node may have limitations or operational differences in a Windows environment.

At the time of this writing (early 2019), ECS Fargate does not support Windows containers.

Kubernetes introduced beta support for Windows containers in v1.9. As of version 1.14, Windows container support is a production-level feature. In Kubernetes, containers must run on a node with a matching OS. Beyond that limitation, all core features work the same as they do with Linux containers, with minor implementation differences in hardware-level features such as storage and networking. Adding Windows containers to a Kubernetes cluster is as simple as adding a Windows-based node to the cluster. Kubernetes control nodes and master components must run on Linux, and the roadmap does not suggest Windows support for cluster management. At the time of this writing (early 2019), Amazon EKS supports Windows containers in beta-level deployments, as EKS is currently running Kubernetes 1.12. AWS has pledged continued support for future versions of Kubernetes and the features supported by those releases.
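
To illustrate how a workload is steered onto a Windows node in a mixed cluster, here is a minimal sketch using the official Kubernetes Python client. The pod name and IIS image are illustrative, and on clusters older than 1.14 the OS label may be beta.kubernetes.io/os rather than kubernetes.io/os.

    # Minimal sketch: scheduling a Windows container onto a Windows node by
    # matching the node OS label. Pod name and image are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig
    core_v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="iis-sample"),
        spec=client.V1PodSpec(
            # On clusters older than 1.14 this label may be "beta.kubernetes.io/os".
            node_selector={"kubernetes.io/os": "windows"},
            containers=[
                client.V1Container(
                    name="iis",
                    image="mcr.microsoft.com/windows/servercore/iis",
                )
            ],
        ),
    )

    core_v1.create_namespaced_pod(namespace="default", body=pod)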

One last item to consider when planning a Windows container strategy on AWS is the applications themselves. For legacy Windows applications that don’t have active development, or modern, evolving applications heavily based on .NET Framework runtimes, Windows containers are a fantastic and flexible option. However, for applications leveraging .NET but not Windows APIs or Framework-only libraries, a lot of customers are choosing .NET Core as the first step in their modernization plans. .NET Core offers all the options we’ve covered here, but adds a few interesting new options in the cloud ecosystem. .NET Core is platform independent and will run well in Linux-based containers. For some, this provides a great path towards better scaling options without the limitations of Windows Server licensing. Microsoft SQL Server also now runs on Linux, which can further offer licensing independence where applicable. Also, .NET Core is a first-class language in AWS Lambda, the workhorse of the AWS serverless platform. Modernizing to .NET Core on containers opens a path towards full serverless application design.

With this, developers can continue to select “the best tool for the job”, moving use cases that are better served by serverless approaches to those platforms and ensuring the container deployments are optimized and modernized to best serve the applications that can best benefit from this technology.

All Container Workloads are supported on AWS

As is the norm for AWS, every service in the portfolio serves a specific workload. With the container orchestration services – Amazon EKS, Amazon ECS, and AWS Fargate – AWS is doubling down on making sure that all containerized workloads are welcome on AWS. There are different services for different workflows; just pick the one that works best for your solution.

Source: Onica
