Source: https://www.dsm.net/it-solutions-blog/public-cloud-repatriation-offers-cost-savings-control-and-compliance

Repatriation: From Microservices on Cloud to On-premise Monolith

Sriharsha Polimetla
Published in NEW IT Engineering
4 min read · Nov 12, 2019


Repatriation, in the cloud context, means bringing applications that run in the cloud back to the good old on-premise server. While the software development industry is gradually moving its applications to the cloud, whether repatriation should be considered an anti-pattern is still up for debate.

This is happening in a lot of industries where physical control over the data center is desired for data security. There are also cases where the public cloud turns out to be more expensive than an on-premise solution. This post covers my experience of migrating cloud applications to on-premise servers, a move that resulted from a management decision about data security compliance.

Architecture of existing Cloud Applications

The application architecture was based on AWS Lambda functions triggered by API Gateway. Each Lambda function had a dedicated endpoint, and the functions were kept as small as possible in terms of their dependencies to reduce latency. Typically, 2–3 Lambda functions with related functionality were packed into a microservice, which was deployed using AWS CodePipeline; each microservice had a dedicated CodePipeline. An example of this can be found in my Lambda-API repository, where the Lambda functions of a microservice are deployed within a single environment and triggered via API Gateway.

Source: https://ordina-jworks.github.io/cloud/2018/10/01/How-to-build-a-Serverless-Application-with-AWS-Lambda-and-DynamoDB.html
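To make the starting point concrete, here is a minimal sketch of what one of those Lambda functions could look like, assuming the API Gateway Lambda proxy integration; the route, parameter names and response are illustrative and not taken from the actual project.

```javascript
// handler.js - illustrative Lambda handler behind an API Gateway endpoint.
// The event shape assumes the Lambda proxy integration.
exports.handler = async (event) => {
  // API Gateway passes URL path parameters on the event object.
  const userId = event.pathParameters && event.pathParameters.id;

  if (!userId) {
    return {
      statusCode: 400,
      body: JSON.stringify({ message: 'Missing user id' }),
    };
  }

  // In the real functions this would query DynamoDB; here we simply echo.
  return {
    statusCode: 200,
    body: JSON.stringify({ id: userId }),
  };
};
```

Keeping each handler this small, with only the dependencies it really needs, is what kept the latency low.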

Our actual deployment, however, was done using a cross-account CodePipeline spanning three AWS accounts, each with its own environment scope: Development, UAT and Production. Promoting from one stage to the next required a manual approval, which was given after successful automated integration tests and/or once the manual testers gave their green light.
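As an illustration of that promotion gate (not our actual cross-account setup), a pipeline with a manual approval before the next environment could be sketched with the AWS CDK roughly like this, assuming CDK v2 and a CodeCommit source; the integration-test stages are omitted.

```javascript
// pipeline.js - single-account sketch of a CodePipeline with a manual approval gate.
const cdk = require('aws-cdk-lib');
const codecommit = require('aws-cdk-lib/aws-codecommit');
const codebuild = require('aws-cdk-lib/aws-codebuild');
const codepipeline = require('aws-cdk-lib/aws-codepipeline');
const actions = require('aws-cdk-lib/aws-codepipeline-actions');

const app = new cdk.App();
const stack = new cdk.Stack(app, 'PipelineStack');

const repo = new codecommit.Repository(stack, 'Repo', { repositoryName: 'my-service' });
const sourceOutput = new codepipeline.Artifact();

const pipeline = new codepipeline.Pipeline(stack, 'Pipeline', {
  pipelineName: 'my-service-pipeline',
});

// Source stage: watch the repository for changes.
pipeline.addStage({
  stageName: 'Source',
  actions: [
    new actions.CodeCommitSourceAction({
      actionName: 'Source',
      repository: repo,
      output: sourceOutput,
    }),
  ],
});

// Build stage: runs the buildspec.yml found in the source (lint, tests, packaging).
pipeline.addStage({
  stageName: 'Build',
  actions: [
    new actions.CodeBuildAction({
      actionName: 'Build',
      project: new codebuild.PipelineProject(stack, 'BuildProject'),
      input: sourceOutput,
    }),
  ],
});

// Manual approval gate: a human promotes the build to the next environment.
pipeline.addStage({
  stageName: 'ApproveUAT',
  actions: [new actions.ManualApprovalAction({ actionName: 'PromoteToUAT' })],
});
```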

The Brainstorming

Now these microservices were to be migrated to an on-premise server with the least effort possible. Having grown fond of the AWS Developer Tools and the CI/CD pipelines, we (as a team) still wanted to automate and minimize the operations tasks as much as possible during this repatriation.

We decided that dockerizing the microservices was the better way to ship the applications: we wanted a Docker image that could easily be deployed on an on-premise server.

At that time we had close to 20 microservices, and we were expected to keep developing new microservices alongside the migration. If each microservice were deployed as its own Docker container, we would have ended up with at least 20 Docker images that had to be deployed manually across environments, because we were not allowed to use a proper orchestration framework like Kubernetes to manage the containers. That is a lot of operations overhead, and that is when we decided to pack all the existing and new microservices into a monolith. Yes, you read that right: going from microservices to a monolith. Another anti-pattern? Well, maybe; hopefully, by the time you reach the end, you will see it differently.

The New Architecture

Given that we had a couple of months during which we both migrated and kept developing, we wanted to make use of the AWS Developer Tools to build images upon code changes. At the end of the migration period, we wanted to hand the latest Docker image and the source code over to the client's operations team so that they could deploy it.

Since our microservices were all REST APIs, we chose a web framework, namely Fastify, for our monolith. Note that choosing Fastify was not related to the migration; any web framework for Node.js (e.g. Express) would do the job. The running container thus has a single entry point which routes each API request to the respective service (a plugin in Fastify).

Source: https://survivejs.com/blog/fastify-interview/
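A minimal sketch of that single entry point, assuming Fastify 3-style plugins; the service names and routes below are illustrative:

```javascript
// server.js - the monolith's single entry point.
const fastify = require('fastify')({ logger: true });

// Each former microservice becomes a Fastify plugin mounted under its own prefix.
fastify.register(require('./services/users'), { prefix: '/users' });
fastify.register(require('./services/orders'), { prefix: '/orders' });

fastify.listen(3000, '0.0.0.0', (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

// services/users.js - one plugin wrapping the routes of a former microservice.
module.exports = async function usersService (fastify, opts) {
  fastify.get('/:id', async (request, reply) => {
    // This used to be a dedicated Lambda function behind API Gateway.
    return { id: request.params.id };
  });
};
```

Each former microservice keeps its own module boundary as a plugin, so the codebase stays internally modular even though it ships as a single container.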

As for the pipeline code, in this repo I only have the steps up to the point where the Docker image is built. In our setup we had an extra deploy stage which deployed the container produced in the build step to an EC2 instance (mocking on-premise) that served as our development environment. This kept the developers at ease: we deployed at least once a day and tested the APIs ourselves.

When the migration period was over, we handed the production-ready Docker image over to operations. We also had persistent data in DynamoDB tables, which we migrated to a MySQL database. The exact details of that migration are out of scope for this post, but it is still good to know that data migration was also part of the effort.

Observations

Since code quality is not something one should compromise on, we kept the linter check and the unit tests running during the build stage. Our microservices together had a lot of unit tests, which we expected to run much longer once packed into a monolith. That turned out not to be true: at the end of the migration period, we had close to 300 unit tests running within 5 seconds. A couple of extra seconds of delay per deployment was not really an issue.
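For illustration, such a test could look roughly like this, assuming tap as the test runner and a hypothetical buildApp() factory that registers all plugins without calling listen():

```javascript
// test/users.test.js - illustrative unit test against the monolith's routes.
const { test } = require('tap');
const buildApp = require('../app'); // hypothetical factory returning a configured Fastify instance

test('GET /users/:id returns the requested id', async (t) => {
  const app = buildApp();

  // inject() dispatches the request in-process, without binding to a port,
  // which is what keeps a few hundred tests down to a few seconds.
  const response = await app.inject({ method: 'GET', url: '/users/42' });

  t.equal(response.statusCode, 200);
  t.same(JSON.parse(response.payload), { id: '42' });

  await app.close();
});
```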

The term monolith often carries a negative connotation while the software development world is moving towards microservices. Our application might be a good example where using a monolith made more sense than individual microservices for a single application.

Final Note

This post neither recommends nor opposes microservice or monolithic architecture. It is simply a description of my migration experience and a use case showing that monoliths can also make good applications.

PS: StackOverflow is one of the websites I use at least thrice a week while programming something new, and it is built on a monolithic architecture.
