Previously on my blog: in the previous post we spoke about what microservices are, the pros and cons of having a network in the architecture (fallacies of distributed systems, CAP theorem…) and the problem of changes, but most of the discussion was related to Domain Driven Design.

Second day of the course. After a recap of the first day, we started speaking about communication patterns. The basic point-to-point interaction, indeed, creates what the teacher called a “web of components”. Those webs have a big availability problem: if (all the instances of) one component dies, your web collapses and you get a loss of functionality at least, or a complete outage. The first pattern he introduced is the circuit breaker: if your component A sends messages to B and B becomes unavailable, A can avoid hammering B on every request and can instead mark B as unavailable for a period of time, after which it checks again. This is the minimal example of a circuit breaker.
A nice and easy post I found on that topic is this. Anyway, the teacher suggested introducing Hystrix, a full-featured circuit breaker solution that is open source and developed by Netflix.
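To make the idea concrete, here is a minimal sketch of such a breaker in Java (my own illustration, not code from the course or from Hystrix): it counts consecutive failures and, once a threshold is crossed, it stops calling B and returns a fallback until a cool-down period has passed.

```java
// Minimal circuit breaker sketch. remoteCall and fallback are whatever the caller supplies.
// After too many consecutive failures the breaker "opens" and fails fast for a cool-down
// period, so component A stops hammering the unavailable component B.
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class SimpleCircuitBreaker<T> {
    private final int failureThreshold;
    private final Duration openDuration;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    public T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(openDuration))) {
                return fallback.get();           // breaker open: fail fast, don't call B
            }
            openedAt = null;                     // cool-down elapsed: let one call probe B again
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;             // success: reset the failure counter
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();        // too many failures in a row: open the breaker
            }
            return fallback.get();
        }
    }
}
```

A real solution like Hystrix adds thread isolation, metrics and a half-open probing state, but the core state machine is this one.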

Every component that accesses another component usually does that to access its data. The question is “why split this data?” and the answer is “because we think in terms of resources”. But as we said, webs are fragile. A solution is to keep enough data inside one component for it to fulfil its duties, so that views and transactions are aggregated in the same component. These two models (data separated across a web versus data aggregated, and eventually replicated, to fulfil one duty) embody the CAP theorem: webs are consistent, while replication leads to higher availability, because if one component dies, all the others can still work without problems.

We then started speaking about two different approaches to communication: point to point and with a broker. Solutions for point-to-point communication are: HTTP, RPC, Finagle, RMI, XML-RPC, REST, SOAP/WS-*, Protobuf, CORBA, Avro… but also ZeroMQ, WebSocket and gRPC, which have a concept of schema for the data they transfer. In these models one important concept is expectation: the component that sends a request expects some kind of response in some kind of format (and if the format changes, it is a problem). The data format can also be unclear (except with Protobuf, Avro and ZeroMQ). Moreover, one request with multiple responses is a particular case that is not addressed by most of these protocols (only by WebSockets or ZeroMQ…).
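As a concrete illustration of that expectation, here is what a point-to-point call typically looks like with the plain JDK 11+ HTTP client; the service URL and the payload are invented for the example.

```java
// Point-to-point HTTP call: A must know B's address and already assumes the response format.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PointToPointClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A needs B's exact location (no location transparency)...
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://service-b.internal/orders/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // ...and expects exactly one response, in a format it has hard-coded expectations about.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // If B changes its status codes or JSON fields, this logic silently becomes wrong.
        if (response.statusCode() == 200) {
            System.out.println("Order payload: " + response.body());
        } else {
            System.out.println("Unexpected answer from B: " + response.statusCode());
        }
    }
}
```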
For brokers we have XMPP, SQS, AMQP (RabbitMQ), SNS, JMS, service meshes (like linkerd). These have the advantage of achieving location transparency: you need to know where the broker is, but you don’t need to know where the services receiving your request are. A service mesh in particular achieves location transparency by using proxies. Last but not least, with brokers your communication is asynchronous.
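For comparison, this is a sketch of the producer side with the RabbitMQ Java client (the broker host, queue name and message are invented): the producer only knows where the broker is, not who will consume the event or when.

```java
// Broker-based communication: publish an event to RabbitMQ and forget about the consumers.
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class OrderEventPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("broker.internal");        // only the broker's location is known

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Durable queue; consumers can attach to it from anywhere, now or later.
            channel.queueDeclare("orders", true, false, false, null);

            String event = "{\"orderId\": 42, \"status\": \"CREATED\"}";
            channel.basicPublish("", "orders", null, event.getBytes(StandardCharsets.UTF_8));

            System.out.println("Event published; whoever consumes it can live anywhere.");
        }
    }
}
```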

So, when you are thinking about your microservice architecture, you have to think about the contract among your components: the data schema and the interaction model (meaning messages and URIs). One good approach is to follow Postel’s law of robustness: “be tolerant in what you accept and strict in what you return”.
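The “tolerant in what you accept” half is often implemented as a tolerant reader. A small sketch with Jackson (the Invoice class and the JSON payload are invented): the consumer binds only the fields it actually needs and ignores anything extra the provider may add later.

```java
// Tolerant reader: unknown fields in the payload are ignored instead of breaking deserialization.
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TolerantReaderExample {

    @JsonIgnoreProperties(ignoreUnknown = true)   // be tolerant in what you accept
    static class Invoice {
        public String id;
        public long amountCents;
    }

    public static void main(String[] args) throws Exception {
        // The provider added "currency" and "issuedAt"; this reader simply doesn't care.
        String payload = "{\"id\":\"INV-1\",\"amountCents\":990,"
                       + "\"currency\":\"EUR\",\"issuedAt\":\"2018-03-01\"}";

        Invoice invoice = new ObjectMapper().readValue(payload, Invoice.class);
        System.out.println(invoice.id + " -> " + invoice.amountCents);
    }
}
```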
Sooner or later you will anyway have an upgrade of your API that breaks the communication between two components. Among the solutions you can consider, there is also the possibility to add a proxy that accepts an old-fashioned request and converts it into a new-fashioned one (of course only if this is possible: if the new API needs more data than the old request carries, you are doomed). A small sketch of such an adapter follows.
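Here is that compatibility adapter idea sketched in Java; all types and field names are invented, and it only works because the new request needs nothing the old one cannot provide.

```java
// Compatibility proxy/adapter: accept the old request shape, rewrite it into the new one.
public class LegacyRequestAdapter {

    // Old clients still send the customer name as a single field...
    record OldCreateOrder(String customerName, String productCode) {}

    // ...while the new API wants the name split and a quantity (defaulted here).
    record NewCreateOrder(String firstName, String lastName, String productCode, int quantity) {}

    static NewCreateOrder adapt(OldCreateOrder old) {
        String[] parts = old.customerName().split(" ", 2);
        String first = parts[0];
        String last = parts.length > 1 ? parts[1] : "";
        return new NewCreateOrder(first, last, old.productCode(), 1); // sensible default quantity
    }

    public static void main(String[] args) {
        NewCreateOrder converted = adapt(new OldCreateOrder("Ada Lovelace", "SKU-7"));
        System.out.println(converted);
    }
}
```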

The teacher then spent 20 minutes speaking about one of his projects, MUON, which is a set of tools for creating microservices. I honestly dug into the website for 20 minutes without understanding what the hell it is used for, but what he said during the lesson was that, with MUON, you can code your microservice components in a communication-pattern-agnostic way: MUON will take your events, or messages, or whatever, and send them with RPC or another chosen technology (I think the default, right now, is AMQP, so RabbitMQ or something similar).

Changes can break everything! Our systems have different levels of tolerance to change: they can be fragile, meaning that every small change will break everything; robust, meaning that we have put on a kind of armour and can resist something, but the system can still break completely; resilient, meaning that, even if it can die, it knows how to resurrect or recover; or ANTIFRAGILE, meaning that it takes advantage of being stressed and changed, and becomes stronger and stronger the more you try to kill it. An example of an antifragile system in biology is the muscle: the more you stress it, the bigger and stronger it becomes. For those who are interested, there is a book and several posts like this.

We then spoke about feedback loops, starting by analysing two cases. The first is Continuous Delivery, in which, at every step (starting from compilation), you have tests and errors that block your pipeline. These are feedbacks that prove your software is not good and save you from putting something into production that won’t work properly. DevOps work is the same: it needs constant feedback to function.
Feedback loops are stressors, and they are important to know what can happen and to be prepared to react. In the microservice world we have other types of stressors, like Chaos Monkey from the Simian Army (the Simian Army is a suite of services that creates failures). Chaos Monkey is designed to kill services in a random way. This is done during office hours so that people are alerted, can test how to react and be prepared to do it in production if needed, and code with unexpected outages in mind. You can find (or implement) any kind of stressor for your architecture: for load (Chaos Gatling… I think it is this), slow hardware, latency, moving stuff/networks, taking down a CD, mutation testing, penetration testing, data fuzzing…
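Just to give the flavour of such a stressor, here is a toy Chaos-Monkey-style sketch (the instance list and the kill() call are hypothetical; a real tool would talk to your orchestrator or cloud API): every few minutes, during office hours only, it picks a random service instance and terminates it.

```java
// Toy chaos stressor: randomly kill an instance, but only while people are around to react.
import java.time.LocalTime;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TinyChaosMonkey {
    private static final List<String> INSTANCES =
            List.of("orders-1", "orders-2", "payments-1", "catalog-1");
    private static final Random RANDOM = new Random();

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(TinyChaosMonkey::maybeKillSomething, 0, 15, TimeUnit.MINUTES);
    }

    static void maybeKillSomething() {
        LocalTime now = LocalTime.now();
        boolean officeHours = !now.isBefore(LocalTime.of(9, 0)) && now.isBefore(LocalTime.of(17, 0));
        if (!officeHours) {
            return;                               // only break things during office hours
        }
        String victim = INSTANCES.get(RANDOM.nextInt(INSTANCES.size()));
        System.out.println("Killing instance " + victim);
        kill(victim);
    }

    static void kill(String instance) {
        // Hypothetical: here you would call your platform to terminate the instance
        // and then watch how the rest of the system reacts.
    }
}
```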

Last concept of the day: Consumer Driven Contracts. A service provider does not only expose an API, it also gives a contract describing the data that the API needs, like the WSDL document for SOAP calls in old-fashioned SOA architectures. This makes every client immediately aware of every change to the API, because the data basically doesn’t work anymore. There are interesting articles about this subject, like this. Moreover, there are tools to achieve this, like Pacto (inactive), Spring Cloud Contract or Pact.
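A hand-rolled sketch of the idea (tools like Pact or Spring Cloud Contract automate and formalise this): the consumer writes down the fields it relies on, and the provider runs that check against its own response, so it notices immediately when a change would break that consumer. The endpoint payload and field names are invented for the example.

```java
// Consumer-driven contract check: verify the provider's response still has what the consumer needs.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class OrderApiContractCheck {

    // The contract as stated by the consumer: "I need these fields in your response."
    private static final List<String> FIELDS_THE_CONSUMER_NEEDS =
            List.of("id", "status", "amountCents");

    public static void main(String[] args) throws Exception {
        // In a real provider build this payload would come from calling the actual API.
        String providerResponse = "{\"id\":42,\"status\":\"CREATED\",\"amountCents\":990}";

        JsonNode body = new ObjectMapper().readTree(providerResponse);
        for (String field : FIELDS_THE_CONSUMER_NEEDS) {
            if (!body.has(field)) {
                throw new AssertionError("Contract broken: missing field '" + field + "'");
            }
        }
        System.out.println("Provider response still satisfies the consumer's contract.");
    }
}
```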
This, with all the labs we did, closed the day. Stay tuned!
