Microservices are gaining traction, but it is not straightforward to adopt a microservices architecture.
Nic Jackson, an Engineering Evangelist at NotOnTheHighStreet.com, gave a talk about this at a London Dev community meetup in October 2016.
The video of the talk can be seen on YouTube (https://youtu.be/g-1oAKSBBJM), and what follows is a summary of what Nic said. If you haven’t seen the talk yet, now is the time (you can always skip it and read this digest if you are in a hurry).
Four different ways to fail when implementing a microservice architecture
Not enabling change
This part of the talk suggests that only organisations with the right support from the leadership team can succeed in taking a monolithic app to the microservices world. Without this support, developers and architects can behave overcautiously and be encouraged to remain in their comfort zone (the five- or ten-year-old codebase they all know, got used to, and learned to love).
Of course, no leader is going to be supportive without a reason to believe microservices are a great idea. Companies need to ask themselves why they think microservices are good for them. Hopefully, the answer is not that they are chasing the shiny, as Nic explained, but that they are seeking a way to stay relevant in a fast-paced and competitive landscape.
Not modeling your domain
While the first point of failure was more of a “sociopolitical” one, this one focuses on the practicalities of the business and its model.
It was the norm (and sadly still is in some organisations) to evolve platforms around the database, with layer upon layer of new functionality adding more dependent code and contributing to the monolith (a.k.a. the big ball of mud).
Domain-Driven Design proposes a different approach, where software architects draw (sometimes imaginary) boundaries that separate different business processes and logic. Instead of adding to one central place, we now have different specialised components contributing to a bigger system. This system exists only to sustain the core service, which is what differentiates one company from another. Whereas the core business can’t be delegated (or at least it shouldn’t be), the speaker suggests that some other services could be, for instance by using third-party services running somewhere else, or by running bought, already-developed software alongside our main service.
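As a small illustration of those boundaries (the context and class names below are invented for this sketch, not taken from the talk), two parts of a system can each keep their own model of the same business event and translate explicitly at the boundary, so neither leaks its internals into the other:

```python
from dataclasses import dataclass

# Ordering context: its own model of an order, including pricing details.
@dataclass
class Order:
    order_id: str
    total_pence: int
    delivery_postcode: str

# Shipping context: a different, narrower view of the same business event.
@dataclass
class Shipment:
    reference: str
    postcode: str

def to_shipment(order: Order) -> Shipment:
    """Explicit translation at the boundary: Shipping never sees pricing."""
    return Shipment(reference=order.order_id, postcode=order.delivery_postcode)

order = Order(order_id="ORD-1", total_pence=4999, delivery_postcode="SW1A 1AA")
shipment = to_shipment(order)
print(shipment)
```

The translation function is the whole point: each bounded context can evolve its own model independently, because the only coupling is that one small, explicit mapping.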
Not having the right tooling
Not having the right tools for the job is also a reason for failure. Nic describes three categories of tooling you need to get right in order not to fail in this microservices venture:
- Failure to understand how important automation is
- If developers are encouraged to create new microservices based on their preferences and experience, we need to provide them with tools that make setting up these microservices easy, so that adding unit tests, CI and various other tools doesn’t consume half of the company’s time budget.
- Similarly with deployment: we need to automate absolutely everything to save time, avoid mistakes and be truly scalable. This can only be achieved with a consistent set of tools for project scaffolding, building and deployment, and although tools for microservice management are still in their infancy, we can rely on popular DevOps tools such as Ansible, Chef or Terraform to perform these system tasks against microservices as well.
- Failure to realise your test coverage is not good enough
- It is probably easier to test a single codebase in a monolithic app, where all the dependencies live together and resilience to dependent services being down was, in most cases, out of scope. We are seeing the benefits of a microservices architecture, but they come at the cost of testing more complex interactions across disparate systems.
- Tests should be run at different stages, with some of them even running against production after a deployment, as a final sanity check.
- Failure to realise that you do not have enough logging
- And last but not least, the importance of logging. It should not be the customer or the support department telling us something is down or broken. With tools like New Relic for general health checking, Datadog for live profiling and Logstash for collecting and searching application logs, we should be in a position to know better than anyone, and before anyone, what’s going on in our systems at any time.
- We can also use all these metrics and KPIs to our advantage, creating automatic alerts based on general or business-specific rules.
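To make the scaffolding point concrete, here is a minimal sketch (the file names and skeleton contents are invented for illustration, not from the talk) of the kind of tool that gives every new service the same CI-ready, tested starting point:

```python
import tempfile
from pathlib import Path

# Hypothetical scaffolder: generates the same skeleton for every new service,
# so unit tests and CI are wired up from day one rather than bolted on later.
SKELETON = {
    "README.md": "# {name}\n",
    "Dockerfile": "FROM python:3.12-slim\nCOPY . /app\n",
    ".ci.yml": "stages: [test, build, deploy]\n",
    "tests/test_health.py": "def test_placeholder():\n    assert True\n",
}

def scaffold(root: Path, name: str) -> Path:
    """Create a new service directory populated from the skeleton."""
    service = root / name
    for rel, body in SKELETON.items():
        path = service / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body.format(name=name))
    return service

service = scaffold(Path(tempfile.mkdtemp()), "basket-service")
print(sorted(p.name for p in service.rglob("*") if p.is_file()))
```

In a real setup the skeleton would be a versioned template repository, but the principle is the same: creating service number twenty should cost no more than creating service number two.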
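The post-deployment sanity check mentioned above can be as small as probing a health endpoint. This sketch stands up a throwaway local server in place of a real production instance (the /health path and JSON shape are assumptions for the example):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in service exposing a health endpoint; in reality this would be the
# freshly deployed production instance.
class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()

def smoke_test(base_url: str) -> bool:
    """Post-deployment sanity check: is the service up and reporting healthy?"""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return resp.status == 200 and json.load(resp)["status"] == "ok"

ok = smoke_test(f"http://127.0.0.1:{server.server_port}")
print(ok)  # True
server.shutdown()
```

A deployment pipeline would run exactly this kind of probe as its final stage and roll back automatically if it fails.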
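And for the alerting point, a sketch of rule-driven alerts over collected metrics (the metric names and thresholds are invented; in practice a tool like Datadog evaluates rules like these for you):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical alert rules: one general (error rate), one business-specific
# (orders suddenly stop coming in).
@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]

RULES = [
    Rule("high-error-rate", lambda m: m["errors_per_min"] > 10),
    Rule("checkout-revenue-drop", lambda m: m["orders_per_min"] < 1),
]

def evaluate(metrics: dict) -> list[str]:
    """Return the names of every rule that should fire an alert."""
    return [r.name for r in RULES if r.predicate(metrics)]

print(evaluate({"errors_per_min": 25, "orders_per_min": 7}))
# ['high-error-rate']
```

The business-specific rule is the interesting one: zero orders per minute with zero errors is exactly the kind of breakage that logging alone won’t surface.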
Not breaking up the monolith
The talk ends with a presentation of two distinct ways in which we can tackle the exodus to microservices. The big-bang approach is about developing a shiny new microservices architecture in one go. That’s a daunting task that only a few companies and products have managed to complete. The good news is there is no need to take that risky route. Sam Newman, in his book Building Microservices, proposes a three-step approach that starts by splitting the database into multiple schemas according to their function, then does the same at the application code level, until all the microservices are created. Although not mentioned during the talk, this is also consistent with what Martin Fowler describes in his article Monolith First.
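The first of those steps, splitting the database by function, can be sketched with SQLite standing in for separate schemas (the table names are invented for illustration):

```python
import sqlite3

# The monolith's single database, owning tables for several business areas.
mono = sqlite3.connect(":memory:")
mono.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER)")
mono.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
mono.execute("INSERT INTO orders VALUES (1, 4999)")

# Attach a fresh database that will become the ordering function's own store,
# copy its tables across, and remove them from the monolith.
mono.execute("ATTACH DATABASE ':memory:' AS ordering")
mono.execute("CREATE TABLE ordering.orders AS SELECT * FROM orders")
mono.execute("DROP TABLE main.orders")  # the monolith no longer owns orders

rows = mono.execute("SELECT total FROM ordering.orders").fetchall()
print(rows)  # [(4999,)]
```

Once each schema has a single owner like this, moving the owning code into its own service (Newman’s later steps) no longer requires untangling shared tables.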
The iterative process of moving code into microservices doesn’t happen by magic, as Nic rightly pointed out. It is software engineers who have to do the tedious preparatory refactoring before the application is finally split into many smaller ones. It is sometimes hard to find the willingness to take a monolithic application and convert it to microservices when we all know how hard and error-prone this can be, given the lack of proper unit tests so often found in legacy software and large codebases. Nic mentions that the ability to refactor brittle code is also a valuable skill to have.
To conclude the talk, Nic briefly introduces some of the problems of communicating between systems via the now-common HTTP and RESTful standards. He reminds us that there are other, sometimes more appropriate, means of communication, such as asynchronous messaging based on events (publish/subscribe style with message queues such as RabbitMQ) and binary protocols like Thrift and Protocol Buffers that can outperform HTTP and even HTTP/2 in some circumstances.
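As a tiny illustration of the publish/subscribe style (everything in one process here; a broker such as RabbitMQ would sit between real services, and the topic name is invented):

```python
from collections import defaultdict

# Minimal in-process publish/subscribe sketch. A message broker plays this
# role between real services, but the shape of the interaction is similar.
class Bus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fire-and-forget: the publisher doesn't wait on any consumer.
        for handler in self._subscribers[topic]:
            handler(event)

bus = Bus()
received = []
bus.subscribe("order.placed", received.append)   # e.g. the email service
bus.subscribe("order.placed", lambda e: None)    # e.g. the warehouse service
bus.publish("order.placed", {"order_id": "ORD-1"})
print(received)  # [{'order_id': 'ORD-1'}]
```

The key property is that the publisher knows nothing about its consumers: new services can subscribe to order.placed without the ordering service changing at all.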
A great talk, full of good advice and great pointers for those considering a migration or experimenting with one at the moment.