How will you design, break apart and migrate a database to match microservices?

“Code is easy, state is hard” – Edson Yanaga

Microservices architecture is now a widely adopted and celebrated cornerstone of modern software design. It has proved its worth to many leading technology organizations by helping them expand their business, release rapidly and deliver value-added services. We are usually able to explain to our customers the approach we will take to grow their business with the help of technology. Very often our customers are meticulous and careful about their data, and then they come up with the question below:

“Ok, we got the essence of microservices. But how will you break apart and migrate our monolithic database to match microservices?”

In the rest of this article I will draw on my experience to help you answer this question.

First of all, data is naturally the most important asset of an organization. That is why the database requires very thoughtful baby steps.

Step 1 – Database should also follow Domain-Driven Design

The break-up of an organization's database should also follow the Domain-Driven Design style of bounded contexts and aggregates. DDD (the acronym for Domain-Driven Design) is a popular and very efficient style of modelling data and architecture around the core domain of the business. I strongly recommend readers check out Domain-Driven Design Distilled by Vaughn Vernon. We should start looking at our database in terms of bounded contexts and aggregates. Here the identification of seams is the important thing. A seam is a very loosely coupled entity or component in the database. There are a few tools, such as SchemaSpy, which will help you identify loosely coupled seam tables in the database. If such a table schema matches an already identified microservice functionality, then we can start developing the corresponding microservice without much effort; but when the identified microservice has some tight coupling involved, we need to break that structure apart into loosely coupled tables.
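
To make the seam hunt a little more concrete, here is a minimal sketch in Java that lists tables with no foreign-key relationships at all, which are often good seam candidates. It assumes a MySQL-compatible information_schema and placeholder connection details; adapt the query and schema name to your own database.

```java
import java.sql.*;

// Minimal sketch: list tables that participate in no foreign-key relationship at all --
// likely "seam" candidates that can be carved out with little effort.
// Assumes a MySQL-compatible information_schema and hypothetical credentials.
public class SeamFinder {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/monolith_db"; // placeholder connection details
        String sql =
            "SELECT t.table_name " +
            "FROM information_schema.tables t " +
            "WHERE t.table_schema = 'monolith_db' " +
            "  AND t.table_name NOT IN (" +
            "    SELECT table_name FROM information_schema.key_column_usage " +
            "    WHERE referenced_table_name IS NOT NULL) " +   // not an FK source
            "  AND t.table_name NOT IN (" +
            "    SELECT referenced_table_name FROM information_schema.key_column_usage " +
            "    WHERE referenced_table_name IS NOT NULL)";     // not an FK target
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println("Seam candidate: " + rs.getString(1));
            }
        }
    }
}
```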

Step 2 – Break apart the monolith structure into loosely coupled tables

There are three established patterns to achieve this kind of loose coupling. They are as below:

  • VM per Service
  • Schema per Service
  • Table per Service

Here a VM per microservice becomes too expensive and too complicated an option to operate and manage, which is why Schema per Service and Table per Service look like more convincing options. Table per Service suits the initial migration plan very well because it matches the need for rinse-and-repeat migration steps and it is a fast option too. This approach is a very adaptive and reliable option for an iterative database break-up strategy.

The next big challenge is to execute the identified migration steps. It is very important to perform the schema migration in careful, non-destructive baby steps, and we should verify every attempted step to have confidence in the migration.

There are a few tools available which can help us apply database migrations, for example Liquibase, Flyway and Sqitch. Our migration plan also requires a transition period in which both the old and the new schemas co-exist in all staging environments; that means we should write to both schemas and read from the newer tables, and there should be a proper backup of the database before the migration. One important point to repeat is that there must not be any destructive changes in the migration steps which can lead to any kind of data loss, such as table truncates, delete commands, column drops or column renames.
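
As a minimal sketch of such a non-destructive step, here is a hypothetical Flyway Java-based migration that only adds a new table and backfills it from the existing one. The table and column names are made up for illustration and the SQL is MySQL-flavoured; nothing is dropped, renamed or truncated, so the old schema keeps working during the transition period.

```java
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;
import java.sql.Statement;

// Minimal sketch of a non-destructive "expand" step as a Flyway Java migration.
// Table and column names are hypothetical; no drop/rename/truncate is issued,
// so both old and new schemas can co-exist during the transition period.
public class V2__Add_customer_email_table extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (Statement st = context.getConnection().createStatement()) {
            // Create the new table alongside the old one instead of altering it destructively.
            st.execute("CREATE TABLE IF NOT EXISTS customer_email ("
                     + " customer_id BIGINT NOT NULL,"
                     + " email VARCHAR(255) NOT NULL,"
                     + " PRIMARY KEY (customer_id))");
            // Backfill from the existing monolith table; existing reads stay untouched.
            st.execute("INSERT INTO customer_email (customer_id, email)"
                     + " SELECT id, email FROM customer"
                     + " ON DUPLICATE KEY UPDATE email = VALUES(email)");
        }
    }
}
```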

 

Step 3 – Treat migration scripts as application code

We should also treat migration scripts like application code. They should be properly versioned in a version control system, and there should be a proper CI/CD pipeline to execute the migration. We should avoid running migrations every time the application server starts; a migration should be explicitly triggered and should be a one-time activity. Each migration should be verified from time to time, and we should smoke-test the migration each time it is done.
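
Below is a minimal sketch of a one-shot migration runner that a CI/CD job could invoke, using Flyway's public Java API. The connection details and script location are placeholders, and the short report at the end stands in for a real smoke test.

```java
import org.flywaydb.core.Flyway;
import org.flywaydb.core.api.MigrationInfo;

// Minimal sketch of a one-shot migration runner, meant to be triggered from a
// CI/CD job rather than on every application start-up. Credentials are placeholders.
public class MigrationRunner {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:mysql://localhost:3306/monolith_db", "user", "password")
                .locations("classpath:db/migration")   // versioned scripts live in source control
                .load();
        flyway.migrate();

        // Simple smoke check: print what has actually been applied.
        for (MigrationInfo info : flyway.info().applied()) {
            System.out.println(info.getVersion() + " " + info.getDescription()
                    + " -> " + info.getState());
        }
    }
}
```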

 

Step 4 – Migrate data in small shards instead of a big-bang migration

While transforming existing data into the newly created tables we need to take special care about performance. We can perform the migration in small batches or chunks of data, which are called migration shards. This is a very efficient option because a whole-database migration can take hours to days when your datasets are humongous, and as a side effect the database acquires locks on tables, which can lead to read downtime for your applications or to latent queries. Small incremental migration shards can also be verified as each shard completes, and the shards can be rolled back whenever any one of them fails. We can achieve logical transactional behaviour by adopting this technique.
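
Here is a minimal sketch of such a sharded backfill using plain JDBC and keyset pagination; the table names, chunk size and MySQL-flavoured SQL are illustrative assumptions. Each shard is committed in its own transaction, so locks stay short and every shard can be verified, or redone, on its own.

```java
import java.sql.*;

// Minimal sketch of a sharded backfill: copy rows in small, keyset-paginated chunks
// so table locks stay short and each "shard" commits independently.
// Table/column names and the shard size are hypothetical.
public class ShardedBackfill {
    private static final int SHARD_SIZE = 1_000;

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/monolith_db"; // placeholder connection details
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            con.setAutoCommit(false);
            long lastId = 0;
            while (true) {
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO customer_email (customer_id, email)"
                      + " SELECT id, email FROM customer"
                      + " WHERE id > ? ORDER BY id LIMIT ?")) {
                    ps.setLong(1, lastId);
                    ps.setInt(2, SHARD_SIZE);
                    int copied = ps.executeUpdate();
                    con.commit();                     // one transaction per shard
                    if (copied == 0) break;           // nothing left to migrate
                }
                try (PreparedStatement max = con.prepareStatement(
                        "SELECT MAX(customer_id) FROM customer_email");
                     ResultSet rs = max.executeQuery()) {
                    rs.next();
                    lastId = rs.getLong(1);           // keyset cursor for the next shard
                }
            }
        }
    }
}
```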

Step 5 – Drop referential integrity constraints from tables

Foreign keys and other referential integrity constraints introduce tight coupling between services and defeat the purpose of high cohesion and loose coupling between microservices.

After carefully following all the above migration practices we can safely consider dropping referential integrity constraints such as foreign keys. With this we take on the additional responsibility of managing referential integrity in application code, for example by adopting an event-driven architecture with state machines or a similar proven approach.
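
As a minimal sketch of what that responsibility can look like, the snippet below validates a reference in application code through a hypothetical CustomerClient instead of relying on a database foreign key. The class and method names are made up for illustration, not a real API.

```java
// Minimal sketch: once the customer_id foreign key is dropped, the order service
// checks the reference in application code (via a hypothetical CustomerClient)
// instead of relying on the database. Names are illustrative assumptions.
public class OrderService {
    public interface CustomerClient {            // e.g. a REST/gRPC client to the customer service
        boolean exists(long customerId);
    }

    private final CustomerClient customers;

    public OrderService(CustomerClient customers) {
        this.customers = customers;
    }

    public void placeOrder(long customerId, String item) {
        if (!customers.exists(customerId)) {     // referential check moved out of the database
            throw new IllegalArgumentException("Unknown customer: " + customerId);
        }
        // ... persist the order in the order service's own table ...
    }
}
```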

Step 6 – Explore alternatives such as NoSQL databases for suitable services

One of the beauties of the microservices architecture is that it smoothly supports polyglot storage. We can choose another suitable database technology, such as MongoDB, for mostly-read tables. We can cache datasets by adopting caching technologies such as Redis. CQRS is also a very suitable pattern, widely adopted these days with proven results. I deliberately kept this part for last because a stable RDBMS migration should be our first priority; we can adopt a polyglot architecture slowly and on a need basis.
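
For instance, a minimal cache-aside sketch with the Jedis client for Redis could look like the snippet below; the key naming, TTL and the loadFromDatabase helper are illustrative assumptions, not a prescribed design.

```java
import redis.clients.jedis.Jedis;

// Minimal sketch of cache-aside reads with Redis (Jedis client) in front of a
// mostly-read table. Key naming, TTL and the loadFromDatabase helper are hypothetical.
public class ProductCache {
    private static final int TTL_SECONDS = 300;

    private final Jedis jedis = new Jedis("localhost", 6379); // placeholder connection

    public String findProduct(long productId) {
        String key = "product:" + productId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                           // cache hit
        }
        String fromDb = loadFromDatabase(productId); // cache miss: fall back to the RDBMS
        jedis.setex(key, TTL_SECONDS, fromDb);       // populate the cache with a TTL
        return fromDb;
    }

    private String loadFromDatabase(long productId) {
        // hypothetical JDBC/ORM lookup returning a serialized product representation
        return "{\"id\":" + productId + "}";
    }
}
```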

 

Step 7 – Consider other Database Integration Patterns to achieve eventual consistency

In the microservices world, transactions are managed a bit differently because local transactions are not sufficient to achieve the scalability and availability of microservices. So we mostly need to introduce eventual consistency, either by using state machines or by using other options like Eventuate, the Axon framework, etc.
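
One widely used way to get there is the transactional-outbox style shown in the sketch below: the local state change and the event destined for other services are committed in the same local transaction, and a separate relay (not shown) publishes the outbox rows. Table names, columns and the JSON payload are illustrative assumptions; frameworks such as Eventuate and Axon provide richer machinery around the same idea.

```java
import java.sql.*;
import java.util.UUID;

// Minimal sketch of a transactional-outbox write: the order row and the event row
// become visible together, or not at all; a separate relay publishes outbox rows
// to other services later. Table names, columns and payload are hypothetical.
public class OrderOutboxWriter {
    public void createOrder(Connection con, long orderId, long customerId) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement insertOrder = con.prepareStatement(
                 "INSERT INTO orders (id, customer_id, status) VALUES (?, ?, 'CREATED')");
             PreparedStatement insertEvent = con.prepareStatement(
                 "INSERT INTO outbox (id, aggregate_type, payload) VALUES (?, 'Order', ?)")) {
            insertOrder.setLong(1, orderId);
            insertOrder.setLong(2, customerId);
            insertOrder.executeUpdate();

            insertEvent.setString(1, UUID.randomUUID().toString());
            insertEvent.setString(2, "{\"event\":\"OrderCreated\",\"orderId\":" + orderId + "}");
            insertEvent.executeUpdate();

            con.commit();      // local state and outgoing event committed atomically
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}
```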

There are other database integration patterns that you should explore, analyse and match against your specific eventual consistency requirements. I have some standard suggestions, which are listed below:

  • ETL tools like Informatica etc.
  • Materialized Views
  • Database Triggers
  • Data virtualization tools

Last but not least, I would like to introduce you to a fantastic book on this subject by Edson Yanaga, called Migrating to Microservice Databases.

So, I have tried hard to answer the commonly asked customer question above: how will you design, break apart and migrate a database to match microservices?

So let's not be grim about it next time it is asked. Finally, all the best and adieu!
