
Microservices Approach For Modernization of Monoliths Using Strangler Pattern

This article recommends the use of the Strangler pattern for the modernization of monoliths, while recognizing that every organization’s application landscape and situation is unique.


While Strangler is certainly a common and recommended approach, other patterns may better suit the needs of a given organization’s modernization journey. The goal of this article is essentially to advocate a specific point of view for a specific situation, with a deployment targeting an AWS-based environment. With that in mind, the key focus of this white paper is an architectural approach combining the Strangler pattern, sidecar injection for addressing common concerns, and the saga pattern for distributed transactions, along with recommendations on how to effectively use EKS for orchestrating and operationalizing a scalable Kubernetes deployment on AWS.

Table of contents

Drivers for Microservices Driven Architecture
Common Goals for Modernization
#1: Vision of Current Landscape
#2: Future State Goals for Application Modernization
Microservices Driven Methodology
#1: Tactical Domain-Driven Design
#2: Microservices Design Pattern: Strangler
Chassis based Approach
#1: Building the Foundational Chassis
#2: Workflow Orchestration for Microservices
#3: API Gateway for Integration
#4: Distributed Transaction Handling
Candidate Solution Architecture
Operationalization of EKS

Drivers for Microservices Driven Architecture

Organically grown organizations are typically stacked up with disparate applications and systems across siloed departments within the IT organization such as Security & Governance, Architecture, Data Management, Development etc. A possible path for modernization in such businesses could be to build a microservices foundational chassis to achieve platform independence and provide a path from monolithic to modern microservices-based architecture. The use of foundational microservices chassis framework will create a well-orchestrated development environment and a uniform way to secure, connect, observe, trace and extend the underlying microservices and enable an easy and agile development practice.

The microservices chassis framework will achieve separation of concerns and provide two distinct advantages:

  • Platform Independence: The harmonized way to support and sustain existing monolithic applications, while transitioning to modern architecture and platforms with a progressive refactoring approach.
  • Speed to Market: Remove hurdles and delays by building a chassis foundation of consistent, common microservices-driven components such as identity and access management, messaging, security, logging etc., and help build self-organizing teams that can get off to a rapid start on any new microservices development.

Compunnel Digital has extensive experience in building microservices driven ecosystem for its diverse clientele with primary considerations focused on:

  • Provisioning a highly scalable infrastructure: Setting up and configuring a reliable and scalable AWS PaaS platform.
  • Faster development & release cycles: Containerized backend services on Docker containers orchestrated using Elastic Kubernetes Services with CI/CD automation for efficient build and release management.
  • Reduced maintenance cost: Cloud-native solutions such as App Services and API Gateway do not require any heavy maintenance.
  • IaC: Leveraging templated Infrastructure as Code to consistently create the right environment for deployment and release.

Common Goals for Modernization

#1: Vision of Current Landscape

Most organizations’ current application landscape represents dependence on monolithic applications that power the entire business. Every organization faces challenges due to its current state, such as:

  • Monolithic Applications: Monolithic applications have organically grown over the last several years and are critical to the business, but their technology is outdated or difficult to support.
  • Fast-changing requirements: With an increased need for integrations and digital transformation, there is a need to constantly add features and functionality to core business systems as well as be able to make changes to the core applications. Current monolithic applications pose a challenge to respond quickly and in an agile manner.
  • Legacy to Modernization Harmony: While undertaking steps towards much-needed modernization, organizations face challenges in keeping the current system running to support the business.

Consistency needs to be brought into the process of building microservices so that you follow a pattern and process, ensuring that they are built in a fast, reliable and scalable manner while ensuring developer productivity by avoiding duplication of commonly needed features across the architecture.

#2: Future State Goals for Application Modernization

The vision for the future state of technical landscape requires the following aspects to be addressed while planning for the modernization of applications:

  • Phased Approach: While approaching a complete modernization of all applications that power the operations of the entire business, it is important to break it down into smaller phases or functional components to ensure less or no impact on the business as usual.
  • Keep the lights on: While a modern solution is being developed in a phased approach, it is critical to continue supporting and maintaining the monoliths.
  • Microservices Approach: Taking the microservices approach will help ensure that complete, independent parts of the application are built with a cloud-native modern approach in a phased and agile manner. It is also critical that the components of the application are broken down by business domain, so multiple smaller groups can take on individual pieces of the microservices-based application and build them in parallel to accelerate the time to market for overall modernization.
  • Strong Consistent Foundation: Since multiple groups will be involved in building individual microservices pertinent to their business functions, it is essential that there is a consistency maintained in the approach and that there are no duplicate efforts undertaken for the most common functions within the microservices, such as authentication & access, messaging, logging, error handling etc. So, it is important to build a strong foundation that will cater to common needs as well as provide a platform for consistency across different business domain-specific development groups that are building their parts of the modern application with microservices.

Keeping the future state goals in mind, in the journey towards modernization using a microservices-based architecture, the key goals as part of our recommended approach could be:

  • Complete architectural modernization with a microservices-based approach for a faster but reliable and scalable path to modernization.
  • Overall application should remain unaffected by the failure of a single module by ensuring fault isolation.
  • Microservices provide the flexibility to try out a new technology stack on an individual service. This helps eliminate vendor or technology lock-in and allows the best-suited technology to be selected for your business needs.
  • Empower the development teams to rapidly build on top of readily available components in a microservices chassis foundation (e.g. identity and access management, security, connectivity, messaging, logging, error handling etc.) so the development focus can be on critical business logic and reducing the overall development time. This approach will also help multiple smaller groups engage in developing individual microservices while maintaining consistency.
  • Simplify and expedite the deployment and support of microservices with the implementation of DevOps and CI/CD processes that include components for release management, automated deployments, continuous build, deployment pipelines, source control management and provisioning.

Microservices Driven Methodology

#1: Tactical Domain-Driven Design

A domain-driven approach is advocated to design the microservices around business functions and capabilities, as opposed to horizontal layers such as identity & access management, data access, messaging, logging etc.

Analyze Domain > Define bounded contexts > Define entities, aggregates, and services > Identify Microservices.

Defining bounded contexts

The approach ensures that all microservices are cohesive, have a single clearly defined purpose bound to a business context, and are loosely coupled so that each is independent of other domains and business functions, providing a clear separation of business contexts and concerns. This helps contextualize the design to real entities and terms related to the business and provides clear separation by business function to reduce complexity.

Identifying entities and aggregates

Once the bounded contexts have been defined, the next step is the tactical work of breaking down each bounded context and defining the business entities and aggregates. This ensures clear identification and representation of all unique business entities, their relationships and dependencies. The definition of aggregates helps model the transactional invariants and data, which in a traditional application would have been implemented using database transactions. In a domain-driven microservices design, aggregates help handle such distributed transactions spanning multiple data stores to maintain data consistency.
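As a concrete illustration, the aggregate idea can be sketched in Python with a hypothetical Order aggregate whose root enforces a transactional invariant (a credit limit) over its order lines; the entity names and the rule are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OrderLine:
    """Value object inside the aggregate: one product and its quantity."""
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    """Aggregate root: all changes to order lines go through it, so the
    invariant (total never exceeds the credit limit) holds at every commit."""
    order_id: str
    credit_limit: float
    lines: list = field(default_factory=list)

    @property
    def total(self) -> float:
        return sum(l.quantity * l.unit_price for l in self.lines)

    def add_line(self, line: OrderLine) -> None:
        # The root rejects any change that would violate the invariant.
        if self.total + line.quantity * line.unit_price > self.credit_limit:
            raise ValueError("order total would exceed credit limit")
        self.lines.append(line)

order = Order(order_id="ord-1", credit_limit=100.0)
order.add_line(OrderLine(sku="A", quantity=2, unit_price=30.0))  # total 60.0
```

Because external code can only mutate the order through the root, the invariant never has to be re-checked by callers.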


Identifying microservices

While this approach to microservices design uses a domain-driven model with clearly bound contexts, it is also important to understand the criticality of addressing the microservices boundaries so that they are “not too big and not too small”. A possible approach to address these can be by ensuring:

  • Every microservice has a single responsibility and avoids chatty calls and lock-step dependencies on other microservices.
  • Each service is small enough for a single team to design and build independently.
  • Microservices are loosely coupled and can be built, supported, and maintained independently.
  • Service boundaries should not create data inconsistencies across the application, while still allowing each service to own and maintain its own version of the truth.

The next step in the microservices design approach recommends the best practices for architectural patterns and the overall strategy for the microservices framework roadmap.

#2: Microservices Design Pattern: Strangler

One of the most popular design patterns recommended for the modernization of monoliths using a microservices-driven architecture is the Strangler pattern. The Strangler pattern approaches modernization of the monolith through progressive refactoring, with a dispatcher proxy routing application requests between the monolith and the microservices as they become available, as illustrated below:

Microservices Design Pattern: Strangler


Why Strangler? While building microservices that will typically be decentralized, loosely coupled units of execution, the Strangler pattern helps maintain separation of concerns while supporting progressive refactoring of the monolith into self-contained and scalable core microservices that carry their own independent version of the truth.

The proposed method is to incrementally extract specific pieces of functionality into separate domains, each hosted as a separate service, one domain at a time. This lets each microservice evolve and leaves room for enhancements and modifications along the way without compromising or affecting the functionality of other components. The result is two separate applications in the same URI space. The new system gradually replaces, or strangles, the original application until the monolith is ready to be shut down.
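A minimal sketch of the dispatcher proxy's routing decision, in Python with hypothetical service endpoints: the route table grows as domains are strangled out of the monolith, while unmatched paths continue to fall through to it.

```python
# Strangler dispatcher sketch: a route table decides, per URL prefix,
# whether a request still goes to the monolith or to a migrated
# microservice. Prefixes move from the monolith to the service map one
# domain at a time; the URI space seen by clients never changes.
# All hostnames below are placeholders.

MONOLITH = "http://legacy-app.internal"

# Domains already carved out of the monolith (hypothetical services).
MIGRATED = {
    "/orders": "http://orders-svc.internal",
    "/billing": "http://billing-svc.internal",
}

def resolve_backend(path: str) -> str:
    """Return the backend that should serve this request path."""
    for prefix, backend in MIGRATED.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return MONOLITH  # everything not yet migrated stays on the monolith

print(resolve_backend("/orders/42"))   # routed to the new microservice
print(resolve_backend("/reports/q3"))  # still served by the monolith
```

In practice this decision would live in a reverse proxy or the ALB ingress rules rather than application code, but the routing logic is the same.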

Chassis based Approach

#1: Building the Foundational Chassis

In a foundational chassis-based approach, each microservice should be designed to cater to its core functionality, with a sidecar injection that deploys ancillary components of an application as a separate container or process to provide isolation and encapsulation. In this pattern, the sidecar is attached to a core microservice and provides the common features across all parts of the modernized application.

Building the Foundational Chassis


This approach also helps address one of the key objectives for the modernization journey which requires supporting common functionalities, such as identity & access management, state management, monitoring, logging, configuration, and networking services. All the required peripheral activities will be implemented as separate components that keep the Core Microservices running smoothly to build the foundational chassis layer.

Using an attached sidecar component in the microservices architecture provides the flexibility for each component to have its own dependencies and language-specific libraries for accessing the underlying platform and any resources shared with the main application. As a result, the close interdependence between the component and the application is handled seamlessly.

Advantages of using a sidecar alongside the Strangler pattern include:

  • Developer Productivity: The key benefit of the chassis implementation using a sidecar injection approach is to boost developer productivity by providing the necessary underlying plumbing required for common functionalities.
  • Programming language independence: A sidecar is independent of its primary application, i.e. Core Microservices attached to it in terms of the runtime environment and programming language, so there is no need to develop one sidecar per programming language.
  • Component sharing: The Core Microservices application and the sidecar share the same resources, which means the sidecar can monitor system resources used by both.
  • Low latency: Because the sidecar is deployed in close proximity to the Core Microservices, two-way communication between them incurs little to no latency.
  • Extensively used across the entire Ecosystem: The sidecar can also be used to extend the functionality of any application ecosystem in its sub-container.
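As a rough Python illustration of the separation the chassis provides, the sketch below wraps a core business handler with common logging and error-handling concerns. In a real deployment these concerns would run in the sidecar container next to the service; the decorator and the handler name are hypothetical and only illustrate the division of responsibilities.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chassis")

def chassis(handler):
    """Wrap a core-business handler with the chassis's common concerns
    (request logging and uniform error handling), keeping the plumbing
    out of the business code itself."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        log.info("request -> %s", handler.__name__)
        try:
            result = handler(*args, **kwargs)
        except Exception:
            # Uniform error handling: log and return a standard shape.
            log.exception("unhandled error in %s", handler.__name__)
            return {"status": "error"}
        log.info("request <- %s", handler.__name__)
        return result
    return wrapper

@chassis
def get_invoice(invoice_id: str) -> dict:
    # Core business logic only: plumbing stays in the chassis layer.
    return {"status": "ok", "invoice_id": invoice_id}
```

The same idea scales to authentication, metrics, and tracing: each is added once in the chassis rather than re-implemented per service.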

#2: Workflow Orchestration for Microservices

Since the future state application will feature a series of microservices with a defined workflow between them, it is imperative to build an orchestration mechanism for the process flow between the microservices. While the microservices could communicate peer to peer for interdependent communications, scaling peer-to-peer task orchestration becomes a challenge as the number of services grows. Therefore, a possible recommended approach is to orchestrate the process workflows between the microservices with the help of SNS, Step Functions, and Simple Queue Service (SQS), as shown below:

Workflow Orchestration for Microservices

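As an illustrative (not prescriptive) sketch, such a workflow can be expressed in Amazon States Language, with Step Functions invoking each microservice's Lambda in turn and routing failures to an SQS-backed retry queue. The workflow shape and all ARNs below are placeholders, not part of the proposed design.

```python
import json

# Hypothetical order-processing workflow in Amazon States Language (ASL).
# Step Functions runs the states in order; a failure in validation is
# caught and routed to an SQS queue via the service integration.
state_machine = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:validate-order",
            "Catch": [{
                "ErrorEquals": ["States.ALL"],
                "Next": "SendToRetryQueue",
            }],
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:charge-payment",
            "Next": "NotifyCustomer",
        },
        "NotifyCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:REGION:ACCOUNT:order-events",
                "Message.$": "$.summary",
            },
            "End": True,
        },
        "SendToRetryQueue": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage",
            "Parameters": {
                "QueueUrl": "https://sqs.REGION.amazonaws.com/ACCOUNT/order-retry",
                "MessageBody.$": "$",
            },
            "End": True,
        },
    },
}

# This JSON string is what would be passed to Step Functions
# (e.g. boto3's create_state_machine) when provisioning the workflow.
definition = json.dumps(state_machine)
```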

#3: API Gateway for Integration

It may also be necessary to implement an API gateway for publishing APIs to external and internal consumers, using message queues and events to decouple the underlying services for greater scalability and reliability.

API Gateway for Integration


#4: Distributed Transaction Handling

The saga design pattern could be adopted for message brokering within the sidecar design, providing data consistency across microservices in situations requiring distributed transaction handling. The saga pattern works by initiating a sequence of local transactions, where each transaction updates one service and triggers the next step via an event message. Upon a failure, the saga executes compensating transactions to retract the preceding transactions. This pattern addresses distributed transaction needs that traditional applications handle in a single database commit, which is not possible in a microservices architecture since the recommended design follows a “one database per microservice” model.

Distributed Transaction Handling

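A minimal orchestration-style saga can be sketched in Python as follows. The two steps (stock reservation and card charge) and their compensations are invented for illustration; in a real system each call would be a message to another microservice rather than a local function.

```python
# Saga sketch: each step pairs a transaction with a compensating
# transaction. On failure, the saga runs the compensations for all
# completed steps in reverse order, restoring consistency without a
# distributed database commit.

def run_saga(steps, ctx):
    """steps: list of (do, undo) callables; ctx: shared saga state."""
    done = []
    for do, undo in steps:
        try:
            do(ctx)
            done.append(undo)
        except Exception:
            for compensate in reversed(done):  # retract prior transactions
                compensate(ctx)
            return "rolled back"
    return "committed"

# Hypothetical local stand-ins for two microservices' transactions.
def reserve_stock(ctx): ctx["stock"] -= ctx["qty"]
def release_stock(ctx): ctx["stock"] += ctx["qty"]

def charge_card(ctx):
    if ctx["card_declined"]:
        raise RuntimeError("payment failed")
    ctx["charged"] = True

def refund_card(ctx): ctx["charged"] = False

order = {"stock": 10, "qty": 3, "card_declined": True}
outcome = run_saga([(reserve_stock, release_stock),
                    (charge_card, refund_card)], order)
# payment fails, so the stock reservation is compensated: stock back to 10
```

This is the orchestration variant of the pattern; a choreography variant would instead have each service react to the previous service's event.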

Candidate Solution Architecture

Candidate architecture (representative only): Note that this is an approach based on the high-level requirements that were shared; the detailed design will be solidified during the envisioning/discovery process.

Candidate Solution Architecture


High-Level Architecture

The high-level candidate architecture illustrated above integrates the following aspects for the deployment and operationalization of the microservices for monolith modernization.

AWS Identity and Access Management (IAM): IAM will restrict users before they can access application APIs. Privileged Identity Management (PIM) is a recommended solution for this system. The architecture recommends setting up accounts for on-premises users; other system users can gain access through AWS Direct Connect once authenticated by the identity manager. The proposed system will provide reliable, fast, and secure access to the underlying services.

AWS API Gateway: This AWS service abstracts the application’s backend services and provides a gatekeeper front end for HTTP-based endpoints. The proposed system will set up API-level security such as request throttling, logging, and key protection.
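As an illustration of one of these controls, per-API-key request throttling is commonly implemented as a token bucket: each key accrues tokens at a steady rate up to a burst cap, and a request is admitted only if a token is available. The sketch below is a generic Python version, not API Gateway's actual algorithm or default limits.

```python
import time

class TokenBucket:
    """Token-bucket throttle: `rate_per_sec` tokens accrue per second,
    capped at `burst`; each admitted request consumes one token."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, up to the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond 429 Too Many Requests

bucket = TokenBucket(rate_per_sec=1, burst=2)
results = [bucket.allow() for _ in range(3)]  # burst of 2 allowed, third throttled
```

A gateway would keep one bucket per API key, which is what ties throttling to key protection.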

API Publisher: It allows the API Gateway instance to create and manage policies, which can be defined down to the operation level. The following policies can be set in the publisher portal:

  • Inbound Message: For all incoming messages
  • Backend: Inbound messages before routing to the actual backend service
  • Outbound: Policies for the outbound messages
  • On-error: In case an error occurs at any stage, the request skips the remaining steps.

The system will define the inbound policy for on-premises users; in this way, only authorized users or services can access AWS services such as Step Functions, based on the API key.

Developer Portal: Developers use the portal to browse the published APIs, read the documentation, and subscribe to APIs. They can also try out APIs through an interactive interface and view usage information. The portal will provide not only information about the APIs but also other information of use to the development community, offering details about each API’s technical composition and usage.

Elastic Kubernetes Service (EKS): Microservices are built into Docker images and deployed as Kubernetes pods. The pods are capable of auto-scaling based on CPU and memory requirements.

ALB Ingress Controller: The gatekeeper used to configure ingress rules for triaging and routing requests to microservices, providing a reverse proxy, configurable routing, and TLS termination for Kubernetes services.

AWS DevOps For Automation: To reduce the operational complexity of a microservices architecture, we need to automate operational tasks such as build, deployment, error reporting, alerting, monitoring, and auto-scaling. Leveraging AWS DevOps and our best practices (continuous integration, continuous build, automated provisioning) removes many error-prone human tasks so you can concentrate on code and test cycles. Combining those practices with AWS CodePipeline provides a standard framework for designing, building, and deploying services, which saves time and effort.

AWS Lambda: AWS Lambda runs small units of backend API code created for specific tasks. Its compute-on-demand delivery model ensures that computing resources are available to users as demanded. The cloud service maintains these resources, so users do not have to worry about provisioning or managing them and can instead focus on their business logic.

AWS Step Functions: This cloud service will schedule, automate, and orchestrate various business processes. Below are some examples of workflows that AWS Step Functions can automate:

  • Process and route requests across on-premises systems and cloud services
  • Send notifications based on events
  • Upload files from a local system to Elastic File System
  • Monitor trends for a specific subject
  • Create alerts for reviews
  • Exception handling

AWS Direct Connect: AWS Direct Connect will create private connections between AWS and infrastructure on-premises. Connectivity can be established from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider. AWS Direct Connect offers more reliability, faster speeds, consistent latencies, and higher security than typical connections over the Internet.

In the proposed approach, using AWS Direct Connect between the on-premises enterprise application and the cloud-based workflows and service bus for data transfer is highly recommended.

CloudWatch Application Insights: CloudWatch Application Insights helps you monitor your applications that use Amazon EC2 instances along with other AWS application resources. It identifies and sets up key metrics, logs, and alarms across your application resources and technology stack. The proposed system adopts all default features for application monitoring and configures alerts wherever needed.

Simple Notification Service (SNS): SNS helps in event-driven applications by taking care of event ingestion, delivery, security, authorization, and error handling. We can leverage SNS routing rules to determine where to send data, building application architectures that react in real time to all of your data sources.

We will be using this service to handle the events data coming through on-premise applications or other microservices.

Simple Queue Service (SQS): A fully managed message queuing service, SQS helps decouple applications and services and offers a reliable and secure platform for asynchronous transfer of data and state. In the proposed system, SQS will be used to communicate between Step Functions, long-running data services, and asynchronous AWS Lambda functions.
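A rough sketch of this decoupling, using the shape of the boto3 SQS client API (`send_message`, `receive_message`, `delete_message`); an in-memory stand-in replaces the real client so the sketch runs without AWS credentials, and the queue URL is a placeholder.

```python
import json

class InMemorySQS:
    """Stand-in mimicking the subset of the boto3 SQS client used below."""

    def __init__(self):
        self._queues = {}

    def send_message(self, QueueUrl, MessageBody):
        self._queues.setdefault(QueueUrl, []).append(MessageBody)

    def receive_message(self, QueueUrl, MaxNumberOfMessages=1):
        msgs = self._queues.get(QueueUrl, [])
        batch = [{"Body": b, "ReceiptHandle": str(i)}
                 for i, b in enumerate(msgs[:MaxNumberOfMessages])]
        return {"Messages": batch}

    def delete_message(self, QueueUrl, ReceiptHandle):
        self._queues[QueueUrl].pop(int(ReceiptHandle))

QUEUE = "https://sqs.REGION.amazonaws.com/ACCOUNT/data-jobs"  # placeholder
sqs = InMemorySQS()  # with AWS access this would be boto3.client("sqs")

# Producer (e.g. a Step Functions task) enqueues work and moves on,
# without waiting for the long-running worker.
sqs.send_message(QueueUrl=QUEUE, MessageBody=json.dumps({"job": "reindex"}))

# Worker drains the queue on its own schedule, deleting only on success
# so a crashed worker leaves the message available for redelivery.
resp = sqs.receive_message(QueueUrl=QUEUE)
for msg in resp["Messages"]:
    job = json.loads(msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE, ReceiptHandle=msg["ReceiptHandle"])
```

The producer and worker never call each other directly, which is exactly the decoupling the queue provides.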

ElastiCache: ElastiCache provides an in-memory data store and improves the performance and scalability of applications that use backend data stores heavily. It helps process large volumes of application requests by keeping frequently accessed data in server memory, where it can be written and read quickly.
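The behavior described here follows the common cache-aside pattern, sketched below in Python; a plain dict with a TTL stands in for ElastiCache, and the backend lookup is a hypothetical placeholder for a database or service call.

```python
import time

CACHE = {}          # key -> (value, expires_at); ElastiCache's role here
TTL_SECONDS = 300   # illustrative staleness bound

def slow_backend_lookup(key):
    # Placeholder for a database query or downstream service call.
    return f"value-for-{key}"

def get(key):
    """Cache-aside read: serve from cache when fresh, else fall through
    to the backend and populate the cache for subsequent reads."""
    hit = CACHE.get(key)
    if hit and hit[1] > time.time():
        return hit[0]                    # cache hit
    value = slow_backend_lookup(key)     # cache miss: hit the backend
    CACHE[key] = (value, time.time() + TTL_SECONDS)
    return value

get("customer:42")   # miss: populates the cache
get("customer:42")   # hit: no backend call
```

Writes would invalidate or overwrite the cached entry, and the TTL bounds how stale a read can be if an invalidation is missed.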

CloudWatch: Amazon CloudWatch enables you to collect, access, and correlate monitoring data on a single platform from across all your AWS resources, applications, and services, whether they run on AWS or on on-premises servers.

Salient Features of the Architecture:

  • Secure handling of sensitive assets
  • Fault-tolerance support
  • Autoscale support for heavily used resources
  • Managed workflow support for distributed business processes
  • Event-driven, loosely coupled architecture
  • Advanced application health monitoring and usage analytics reporting
  • Fast search engine for big data
  • Real-time notifications and alerts
  • Reliable communication system
  • API documentation for easy API usage
  • Enterprise-level service bus support to handle high usage
  • Advanced application profiling and exception handling
  • Reliable backup and recovery system

Architecture Component and Technology Stack:

  • API Gateway: AWS API Gateway
  • Authentication Service: Identity and Access Management (IAM)
  • Service Discovery: ALB Ingress Controller
  • Workflow and Orchestration: Step Functions
  • Microservices: .NET Core, Java, Python
  • Monitoring, Logging and Auditing: CloudWatch
  • Microservices Testing: Visualization-Management Console, TDD
  • Container Ecosystem: Docker, Elastic Kubernetes Service (EKS)
  • Circuit Breaker: .NET Core
  • Project Management: JIRA / Confluence
  • IaC: CloudFormation

Operationalization of EKS

Use of EKS for Kubernetes orchestration and operational efficiency

Another recommendation is to use Elastic Kubernetes Service (EKS) for Dockerized container deployments and their orchestration. The effort to operationalize EKS can be quite complex and requires diverse expertise across the development and DevOps teams. Such diversity in expertise is hard to assemble within any organization, so an established managed services partner can provide the breadth of skills required for operationalizing EKS. A recommended deployment plan for operationalizing EKS follows a four-step plan, as shown below:

Planning and Onboarding:

  • Identify goals & objectives
  • Onboard DevOps team
  • Create product backlog
  • IaC for AWS for foundational components
  • Baseline architecture: RBAC, network, and security
  • Define Deployment roadmap

Development Team Integration:

  • Onboard development team
  • Define best practices & patterns for containers
  • Define CI/CD processes for builds & releases
  • Integrate key AWS services such as Key Management Service, RDS, etc.

Operationalize The Platform:

  • Establish SLAs
  • Define the process for pushing updates/rollbacks
  • Establish standard operating procedures (SOPs) for EKS deployments
  • Define DevOps value stream maps
  • Establish processes for outage handling

Enable Cloud-Native Development:

  • Leverage EKS to Scale
  • Define & embrace best practices, patterns & guidelines for cloud-native development
  • Shift workload processes to automation using CI/CD
  • Establish methods for continuous improvement & optimization

Elastic Kubernetes Service (EKS) Benefits

The following are some of the benefits offered by EKS:

  • Efficient resource utilization: The fully managed EKS offers easy deployment and management of containerized applications with efficient resource utilization that elastically provisions additional resources without the headache of managing the Kubernetes infrastructure.
  • Increased developer agility and faster time-to-market: Developers spend much of their time on bug-fixing. EKS reduces debugging time by handling patching, auto-upgrades, and self-healing, and it simplifies container orchestration, freeing developers to focus on developing their apps and remain more productive.
  • Security and compliance: Cybersecurity is one of the most important aspects of modern applications and businesses. EKS integrates with Identity and Access Management (IAM) and offers on-demand access to users to reduce threats and risks greatly. EKS is also completely compliant with the standards and regulatory requirements such as System and Organization Controls (SOC), HIPAA, ISO, and PCI DSS.
  • Quicker development and integration: Elastic Kubernetes Service (EKS) supports auto-upgrades, monitoring, and scaling and helps in minimizing the infrastructure maintenance that leads to comparatively faster development and integration. It also supports provisioning additional compute resources in Serverless Kubernetes within seconds without worrying about managing the Kubernetes infrastructure.


This article has recommended the use of the Strangler pattern for the modernization of monoliths, even though every organization’s application landscape and situation is unique. While Strangler is certainly a common and recommended approach, other patterns may better suit the needs of a given organization’s modernization journey. The goal of this white paper was essentially to advocate a specific point of view for a specific situation, with a deployment targeting an AWS-based environment. With that in mind, an architectural approach combining the Strangler pattern, sidecar injection for addressing common concerns, and the saga pattern for distributed transactions formed the core of this white paper, along with recommendations on how to effectively use EKS for orchestrating and operationalizing a scalable Kubernetes deployment on AWS.

Source: Compunnel Digital

Alex Lim is a certified IT Technical Support Architect with over 15 years of experience in designing, implementing, and troubleshooting complex IT systems and networks. He has worked for leading IT companies, such as Microsoft, IBM, and Cisco, providing technical support and solutions to clients across various industries and sectors. Alex has a bachelor’s degree in computer science from the National University of Singapore and a master’s degree in information security from the Massachusetts Institute of Technology. He is also the author of several best-selling books on IT technical support, such as The IT Technical Support Handbook and Troubleshooting IT Systems and Networks. Alex lives in Bandar, Johore, Malaysia with his wife and two children. You can reach him at [email protected].
