Today I finished a three-day training on (reactive) microservices with David Dawson (Twitter, Website, LinkedIn). I had no opportunity to share what we learned day by day, but I will write the three posts anyway. The first day started with a question: what are microservices? The attributes we replied with were things like resilient, single responsibility (SRP), loosely coupled, bounded context, standalone and autonomous, easy to replace…
But apparently all those attributes are questionable: there is no common set that really describes the concept. What we can generally say about microservices is that some form of network is involved, and that they carry the aspiration that, with them, the world will be better.

People essentially want to change things: scale, change pipelines… and although a network can introduce problems into the architecture of our application, it also forces each service to be designed separately. The objective of the architecture is to keep components separated and to implement communication among them so that changes affect only one (or a few) components. This mindset is, in a sense, a way to contain changes.
Indeed we can say that the Microservice Architecture is a mental model for taking a problem and creating a design, and since it is a mental model, we can call it the Microservice philosophy, meaning by that the philosophy of “use a network to contain changes”.

Now we have two new problems: 1. network software is hard to implement, 2. changes affecting multiple components are hard to handle. The second is particularly important because the concept of single responsibility, for example, doesn’t automagically include the concept of a single place in which changes happen; and whenever that is not assured, it is useless to develop components with a single responsibility, because what we really want to achieve is components that share common needs for change.
Then David told us about the Fallacies of distributed computing, 8 wrong assumptions that software developers make about distributed systems and that should always be considered when developing components on a network. The fallacies are: 1. The network is reliable, 2. Latency is zero, 3. Bandwidth is infinite, 4. The network is secure, 5. Topology doesn’t change, 6. There is one administrator, 7. Transport cost is zero, 8. The network is homogeneous. Of course these should also be considered in a microservice architecture, since it is distributed over a network.

Another consequence of distributing software over a network is the CAP Theorem, or Brewer’s Theorem (David spoke about it in a slightly different way; follow the link for the complete definition): “a network partition means that data consistency can be achieved at the price of availability, or vice versa” (the formal definition says that you can have at most 2 out of 3 among consistency, availability and partition tolerance). For example, if you have a database and 2 services accessing it, the user can use both services; but when one of the two loses its connection to the db, it can follow only one of two paths: fail any request from the client (consistency) or accept it anyway, processing it later or with stale data (availability).
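A tiny sketch of that trade-off, assuming a hypothetical replica node that has lost its link to the primary copy of the data: during the partition it must either refuse to answer (consistency) or answer with possibly stale data (availability). The `Replica` class and its API are invented for illustration.

```python
class Replica:
    """A node that may be partitioned away from the primary copy of the data."""
    def __init__(self):
        self.value = "v1"        # last value replicated from the primary
        self.partitioned = False

    def read(self, prefer: str) -> str:
        if not self.partitioned:
            return self.value
        # During a partition we must pick one of the two CAP behaviours:
        if prefer == "consistency":
            raise RuntimeError("unavailable: cannot guarantee fresh data")
        return self.value  # availability: answer with possibly stale data

node = Replica()
node.partitioned = True
print(node.read(prefer="availability"))  # -> v1 (possibly stale)
try:
    node.read(prefer="consistency")
except RuntimeError as err:
    print(err)  # the request fails rather than return stale data
```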

We then spoke about three architectural concepts: Domain Driven Design (know your bits), Change modelling (create boundaries) and Reactive Systems (network and non-functional requirements). The Domain Driven Design Blue Book is the bible of this architectural technique; it starts from how to implement and works toward the why. What you want to achieve is the analysis of your problem (and it is not mandatory that this be a technology problem or something involving computers). The first step is defining a glossary with the Ubiquitous Language (all the words used in the domain) and the Events (all the things that can happen).
A domain can then have different subdomains in which similar concepts assume different meanings or representations (for example, in an application for video distribution, a client is represented by information related to the content he is watching, the data he is storing… but there could be an administrative subdomain in which it is more important to consider a client’s age or physical location, perhaps to limit what he can see).
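The video-distribution example above can be sketched as two separate models of the same person, one per subdomain. The class and field names are invented; the point is that each subdomain keeps its own representation instead of sharing one bloated “Client” object.

```python
from dataclasses import dataclass

# Content-delivery subdomain: a client is viewing state and storage.
@dataclass
class ViewingClient:
    client_id: str
    current_content: str
    stored_bytes: int

# Administrative subdomain: the same person is age and location,
# e.g. to decide what they are allowed to watch.
@dataclass
class AdministrativeClient:
    client_id: str
    age: int
    country: str

viewer = ViewingClient("c-42", "documentary-01", 1_000_000)
admin = AdministrativeClient("c-42", 15, "IT")
print(viewer.client_id == admin.client_id)  # -> True: same person, two representations
```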

The first step in DDD is then to understand the problem and to create subdomains called Bounded Contexts, which are isolated in terminology and transactions (a good definition of bounded context is this: “Bounded contexts are autonomous components, with their own domain models and their own ubiquitous language. They should not have any dependencies on each other at run time and should be capable of running in isolation. However they are a part of the same overall system and do need to exchange data with one another.”). Inside a bounded context we may find one or more Aggregate Roots, communication/transaction paths involving different components (say component A needs data from B, which needs data from C, for every request they receive: A -> B -> C is an Aggregate Root).
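The A -> B -> C transaction path can be sketched as a chain of components, each of which needs the next for every request. The component names and methods are placeholders; the sketch only shows why the whole chain has to be treated as one unit.

```python
class ComponentC:
    def data(self) -> str:
        return "raw"

class ComponentB:
    def __init__(self, c: ComponentC):
        self.c = c
    def data(self) -> str:
        return f"enriched({self.c.data()})"  # B needs C for every request

class ComponentA:
    def __init__(self, b: ComponentB):
        self.b = b
    def handle_request(self) -> str:
        return f"response({self.b.data()})"  # A needs B for every request

# The A -> B -> C path is what the course called an Aggregate Root:
# no request can complete unless the whole chain participates.
a = ComponentA(ComponentB(ComponentC()))
print(a.handle_request())  # -> response(enriched(raw))
```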

Associating terminology, components, events and transactions with a bounded context defines its core, and this is the first step in defining the design. The second is defining the connections between the external world and the core: the integration layer. There are only two types of connection: adaptors, which are unidirectional (inbound or outbound), and gateways, which are bidirectional. A component that sends monitoring data fire-and-forget needs an adaptor, for example; one that needs to request data must send out a request and wait for a reply, so it needs a gateway.
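The two connection types can be sketched side by side, with a queue standing in for the network. Everything here (`metrics_adaptor`, `pricing_gateway`, the fake price table) is hypothetical; the difference to notice is one-way versus request/reply.

```python
import queue

monitoring_bus = queue.Queue()  # stand-in for the network

def metrics_adaptor(event: dict) -> None:
    """Outbound adaptor: fire-and-forget, nothing comes back."""
    monitoring_bus.put(event)

def pricing_gateway(product_id: str) -> float:
    """Gateway: send a request and wait for the reply (bidirectional)."""
    fake_remote_prices = {"p-1": 9.99}  # pretend another bounded context answered
    return fake_remote_prices[product_id]

metrics_adaptor({"metric": "requests", "value": 1})  # no reply expected
price = pricing_gateway("p-1")                       # a reply comes back
print(price)  # -> 9.99
```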
At this point we need to define two more things: views, meaning the queries that the outside world can ask of the bounded context, and repositories, the data stores for the different components.
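A minimal sketch of those two last pieces, with invented names: a repository backing the components’ data, and a view exposing a read-only query to the outside world.

```python
class OrderRepository:
    """Repository: the data store behind the bounded context's components."""
    def __init__(self):
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def get(self, order_id: str) -> float:
        return self._orders[order_id]

def order_total_view(repo: OrderRepository, order_id: str) -> dict:
    """View: a query the outside world can ask of the bounded context."""
    return {"order_id": order_id, "total": repo.get(order_id)}

repo = OrderRepository()
repo.save("o-1", 30.0)
print(order_total_view(repo, "o-1"))  # -> {'order_id': 'o-1', 'total': 30.0}
```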

Now that we have all the pieces, the question is: what can change, and how? The principle is that “what changes together stays together”, meaning that it is useless or even harmful to have several components that always change together; it is better to merge them into one. Another good idea is to isolate components that do not change often: if a component is almost stable and needs no maintenance or evolution, it is better to keep it separate from the others, on its own, or at least not split it into several components.
What we want is to rate views, connectors and components on a scale from 0 to 10 based on their change frequency, so that we can better understand which components to aggregate. Moreover, we want to list the most probable reasons for change in our bounded context; this helps us find connections between components, in order to keep together what changes together.
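The rating exercise can be sketched as nothing more than a table of scores and a grouping step. The component names, scores, and the volatile/stable threshold are all made up; the real exercise would use your own bounded context's pieces.

```python
# Hypothetical ratings: 0 = never changes, 10 = changes all the time.
change_frequency = {
    "pricing-rules": 9,
    "discount-engine": 8,
    "invoice-pdf": 1,
    "tax-tables": 2,
}

def band(score: int) -> str:
    """Coarse banding; the threshold of 5 is an arbitrary choice for the sketch."""
    return "volatile" if score >= 5 else "stable"

# Keep together what changes together: group components by band.
groups: dict[str, list[str]] = {}
for component, score in change_frequency.items():
    groups.setdefault(band(score), []).append(component)

print(groups)
# Components in the "volatile" group are candidates to merge into one service;
# "stable" ones can be isolated and left alone.
```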

The last discussion of the day was about the reasons for change in software. There are fundamentally four: functional reasons (changes of functionality), non-functional reasons (technology, AWS, scaling…), team scaling, and regulatory changes.
Soon the review of the second day. Stay tuned!