Good, Fast, Cheap; Pick two… lies

OK, so for this clickbaity title there has to be a shower-thought-level post.

Any developer who has been in the industry for a few years has heard the saying (and maybe even used it on occasion) that there are three variables to building a solution (not only in software development): Good, Fast and Cheap… and that we can only choose two of them, because it is impossible to have all three. Product teams do not like to hear this, but you need to compromise somewhere… I say we can only choose ONE. Any client will probably want to burn me at the stake. Not only can’t they choose all three, they can now only choose one? Unfortunately yes… and here is my, I think very simple, reasoning.

Good and Cheap

Probably the most commonly selected option (unless you are VC funded). OK, so how do you go about making it cheap? You hire inexperienced developers: juniors, undergrads, etc. Maybe a mid or two. You take the risk that it might take a bit longer, but you hope that given enough time you will get a good product. Unfortunately, by definition, as in any craft, you need experience to build something “good” (I don’t want to get into a definition of what good means… that is a WHOLE other book). So how does a junior build such a solution? By learning and gaining experience while building it… and making a ton of mistakes along the way… That sounds expensive…

Good and Fast

Ah… rich people land. Anyone who has even heard of “The Mythical Man-Month” can suspect where this is going. Let’s hire all the top talent. All the “rock stars”. First of all, there is the law of diminishing returns. At some point adding yet another developer adds little value and even becomes detrimental (e.g. communication overhead). OK, so let’s say that management knows this but still wants it as fast as possible. Let’s push them to the limit. Overtime. Weekends. Bugs. Burnout. Turnover.

Fast and Cheap

Not sure why someone would want this but… this one is actually possible… I mean, if we do not care about the quality of our product we can hire a bunch of undergrads, grind them to dust, and we might get something that works 2 out of 3 times.

If that is good enough for you… it never is, even if at first the product team says it is, but that is a whole other topic.


Apart from the last, very questionable, option, it can be clearly seen that you really can’t have even two. To build a good product you need experience. Building a good product takes time. Period.


Expressions Over Statements

Photo by George Becker

I have been a fan of functional programming for a while now. The reasons are plenty but mostly come down to referential transparency. There is one feature of the FP approach that has been hard for me to explain to others. Thanks to a recent project I have finally started to get a better grasp on the subject, and it comes down to using expressions instead of statements.

When writing Haskell (or code in another FP language), the expression approach is forced on the user, but it can also be used in Java. Streams with lambdas are a great example of this.

What is the difference? It is best shown with an example:

Java Statements:

public String someMethodStatement() {
  var usernameList = getUserNames();

  var username = select(usernameList);
  var modifiedUsername = doSomething(username);
  // LOG :)
  return modifiedUsername;
}

Java Expressions (with minor modification, note: it can be written in multiple different ways):

public String someMethodExpression() {
  return getUserNames().stream()
      .findFirst() // select
      .stream() // back from an Optional to a Stream
      .map(this::doSomething)
      // LOG :)
      .findFirst()
      .orElseThrow();
}

For anyone who hasn’t been living under a rock in the Java community, the second example should be totally understandable.

So why would I argue that the second example is potentially better code than the first one? One might say that it is actually less readable. Due to language limitations, it also changes from an Optional Monad back to a Stream in the middle of the execution. Those are valid concerns, but they miss one aspect: scope.

In the third line:

var modifiedUsername = doSomething(username);

The operation has access not only to the username variable but also to usernameList. Even though it is not used, when reading this, a programmer still has to keep a mental checklist of all the variables that are in scope for the operation (just like a compiler 😉 ), even if they are no longer needed. In the second example, when calling doSomething, the code no longer has access to that list. The reader can focus only on the things that matter.

Since this approach is somewhat clunky in Java, it might still be preferable to simply use statements. I will shamelessly point out that in Kotlin we can have this in a fluent and expressive form.

fun someMethodExpression() =
    getUserNames()
        .let(::select)
        .let(::doSomething)
        .also { log(it) }

What we can see here are scope functions. It almost looks like a mathematical equation and I love it. It is less code than the original Java statement version while still giving the benefits of an expression.

Scope is not the only advantage of using expressions. In this approach the code has a very clear start and a SINGLE end of the function/operation. Having one exit from a function has long been considered good practice (unless you have performance reasons). Writing expressions forces this. No more multiple returns flying around a 200+ line method.

Last but not least, expressions guide us to decompose our code better. Instead of having one chunk of code after another, we have to split them into separate, clearly defined functions (or run the risk of deep indentation). This also helps keep each function on one level of abstraction. Jumping between levels is harder when you do not have access to all the variables.

These are my reasons for preferring to write expressions over statements. They limit the cognitive load on the reader, encourage better practices and help keep the code modular.


Building regression free (or close enough) service using Kotlin, Jacoco and Mutation tests

Photo by Startup Stock Photos

We have decided to build our current product using Kotlin and, like with any new project, do everything by the book (and try to keep it that way as long as possible :D). This meant a number of automated safeguards and checks, including good test coverage from the start. We decided to go with Jacoco, as it seemed to have the best compatibility with Kotlin, and set the bar as high as possible: 100% branch coverage. Yes, branch coverage. Not line coverage. It is too easy to have good line coverage and useless tests. This topic has been covered many times by others so I will not go into details. The check is part of the automated build and is run on each pull request. No code is merged if the coverage is not high enough. These are CI/CD basics, but I wanted to reiterate the point.

Of course it is still possible to write tests that have high branch coverage and test nothing (although that is harder than with line coverage). We tackle this using mutation tests. Mutation tests change (mutate) the code and check whether any of the tests fail. If a mutant passes all the tests, it means the tests are of poor quality. This is slow, so we run it only once a day, but that is enough to keep us in check.
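To make this concrete, here is a minimal sketch (the class, method and values are mine, not from our codebase) of a test with full branch coverage that a mutation testing tool would flag:

```java
// A hedged, self-contained illustration of branch coverage vs. mutation testing.
final class CoverageExample {

    // A method with a single branch.
    static int discount(int price, boolean vip) {
        if (vip) {
            return price - 10;
        }
        return price;
    }

    // A weak test: it executes both branches, so Jacoco reports 100% branch
    // coverage, but it asserts nothing. A mutant that flips "price - 10" to
    // "price + 10" still passes, and a mutation testing tool (e.g. PIT)
    // reports the surviving mutant.
    static void weakTest() {
        discount(100, true);
        discount(100, false);
    }
}
```

A proper test would assert on the results (e.g. that `discount(100, true)` is 90), killing that mutant.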

100% coverage sounds extreme, but the assumption was to see how long we could maintain it and lower it if the need arose. If the coverage needed to be dropped, we had to have a good reason for it. We have worked in this setup for over half a year now, and a few interesting outcomes have come from it.

Kotlin and Jacoco work well together, but not perfectly. The coverage limit had to be dropped to 98% due to some edge cases. Unfortunately, as percentages work, this means that as the codebase grows, the number of branches that fall into that 1% also grows. Some critical branches might go unnoticed. We need to actively check the reports and see which branches are actually not covered. Hopefully we can improve on this in the future.

No bugs reported so far came from introducing a regression into the code.

Having said that, here is the kicker: no bugs reported so far came from introducing a regression into the code. I had this realization quite recently, and it made me curious as to what could be the cause.

First, there is the null safety enforced by Kotlin. No other popular JVM language has this functionality. This is a huge boost to productivity and helps tremendously in keeping the code correct. If a variable/field can be null, the programmer is forced to handle it. This means that a visible branch is created. Jacoco can report that and force us to write a test that covers it.

This has another effect. Since humans (developers included) are lazy by nature, they want to write as little code as possible. If you push a nullable parameter through multiple layers and execute a number of operations on it, Jacoco will force you to write a test for each case. Some cases might not even be possible to test at all.

data class User(
    val firstName: String?
)

fun someUseCase1(user: User) {
    placeOrder1(user.firstName)
    sendEmail1(user.firstName)
}

fun placeOrder1(firstName: String?) {
    if (firstName == null) {
        throw Exception("First Name is missing!")
    }
    // Do something
}

fun sendEmail1(firstName: String?) {
    if (firstName == null) {
        throw Exception("First Name is missing!")
    }
    // Do something
}

No one wants to do that. It forced us to resolve the nullable value as early as possible.

fun someUseCase1FewerBranches(user: User) =
    (user.firstName ?: throw Exception("First Name is missing!"))
        .let { firstName ->
            placeOrder1FewerBranches(firstName)
            sendEmail1FewerBranches(firstName)
        }

fun placeOrder1FewerBranches(firstName: String) {
    // Do something
}

fun sendEmail1FewerBranches(firstName: String) {
    // Do something
}

We end up with code that is easier to test, more readable and safer. And there is less of it. It is also just good design that should have been done from the start. Now we have an automated tool that makes us do it.

An aspect of this is also a wider usage of the null object pattern. Say we have a list of strategies to select from, or an optional function parameter.

fun someUseCase2(users: List<User>?) =
    users?.forEach { sendEmail1(it.firstName) }

Instead of having separate tests to handle the cases where the selected strategy is missing or the parameter has not been passed, we can introduce sane default behaviour (e.g. an empty list instead of a nullable list).

fun someUseCase2FewerBranches(users: List<User> = emptyList()) =
    users.forEach { sendEmail1(it.firstName) }

All of those worked in tandem to guide us towards a better quality solution and allow us to keep introducing new features safely. Although there are other reasons for it as well, our productivity per developer has not dropped since the beginning of the project (and it is always highest at the beginning, when there is no code :D). Bugs still happen and always will. There are, however, fewer of them, and they are of a different nature. It has only been a few months on this project, so I am curious how this will evolve, but so far it looks very promising.


Value Types In Kotlin

With the recent release of Kotlin 1.5, value classes have exited the experimental stage. This means we can now use a type driven development approach without fear of the overhead that wrapping values in custom classes used to cause. A value class works by inlining the wrapped value during compilation. From now on we will be able to safely pass values around without the overhead of creating new instances. Example below:

@JvmInline
value class UserId(val value: String)

data class User(val userId: UserId)

Unfortunately we still need to use an annotation, and we will probably need it for the time being, as the documentation states:

In Kotlin/Native and Kotlin/JS, because of the closed-world model, value-based classes with single read-only property are inline classes. In Kotlin/JVM we require the annotation for inline classes, since we are going to support value-based classes, which are a superset of inline classes, and they are binary incompatible with inline classes. Thus, adding and removing the annotation will be a breaking change.


There are also some problems with language interoperability, for example with Groovy. The language does not see those types, so in Spock tests we have to use the raw underlying types:

def user = new User("someUserId")

This will cause a lot of headaches during refactoring (as is often the case with Groovy, but that is a different topic).

From the Java perspective we will (hopefully) get value types from Project Valhalla, but since there is no known release date as of today, the Kotlin release is very welcome. Especially for someone who firmly believes in type driven development.


Monolith with Feature Flags as alternative to Microservices?

Photo by Chokniti Khongchum

I want to go through a thought experiment in which I try to mimic microservices behaviour in a monolithic architecture. The thesis is that by using feature flags in a monolithic application we can reap the benefits that we would get from using microservices, without the inherent costs (although we will probably move the complexity elsewhere :P).

Microservices Architecture: loosely coupled, separately built artifacts, split across context boundaries and deployed independently. Multiple teams involved in development.

Monolithic Architecture: a single build artifact (and in effect one deployment unit), built from multiple modules that are split across context boundaries. Multiple teams involved in development.

Feature Flags: A technique that allows for changing system behaviour without changing any code.


Performance and scaling

I think this is the most common argument for using a microservices architecture: the ability to scale functionality independently. With microservices you can easily scale individual services. The problem is that you have to have designed your system in such a way that scaling a service actually translates to scaling a feature. This is often easier said than done. System functionality often ends up split between multiple services. Sometimes this is what you want and sometimes it is just bad design. You also have to make sure that all the other overhead involved does not overshadow the performance gain from that scaling.

Scaling your monolith using feature flags is definitely possible. Put a feature behind a flag. Deploy the whole artifact but enable just that single feature flag. This will make sure that all the resources on the node go to processing just that particular feature. Sure, this is not what feature flags were initially designed for, but this is an experiment, so hey, let’s go wild. There are limits to this technique, but since it reportedly worked well for Facebook (although admittedly I can’t find a source for this), it will probably be enough in most cases.
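As a sketch of what I mean (the class and the ENABLED_FEATURES variable are my invention, not a specific feature flag library): each node runs the full artifact but serves only the features it was started with, so scaling a feature simply means deploying more nodes with that flag enabled.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical flag holder: the deployed artifact is identical everywhere,
// only the set of enabled features differs per node.
final class FeatureFlags {

    private final Set<String> enabled;

    // In production the list would come from e.g. System.getenv("ENABLED_FEATURES").
    FeatureFlags(String commaSeparatedFeatures) {
        this.enabled = new HashSet<>(Arrays.asList(commaSeparatedFeatures.split(",")));
    }

    boolean isEnabled(String feature) {
        return enabled.contains(feature);
    }
}
```

A node started with only `"checkout"` enabled would then dedicate all its resources to that feature, while still being a copy of the same monolith.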

Clear context boundaries

A cornerstone of good modularity is clearly defined context boundaries. In a microservices architecture this is enforced by a physical network layer. That, however, does not prevent having clear boundaries in a monolith. From my experience (I realize this is somewhat subjective), if you do not have enough knowledge/discipline to keep well defined boundaries in a monolith, then with microservices you will just end up with a distributed ball of mud. I have covered how this can be done in a monolith in a previous post.

Independent deployments and ownership

I think this is the most important and powerful feature of a microservices architecture: the ability to deploy small chunks of code independently, ideally by every team, in effect decoupling the teams. They can deploy often and quickly verify their changes in the best testing environment: production. They can own their services, meaning they will have intimate knowledge of their whole environment to a degree simply impossible in a monolithic application. My experience is that the more control a team had over their production, pipelines, etc., the more stable and error free the product was.

Unfortunately, I do not see any way to mimic that in a monolithic application. Even if each module governed its own versioning, managing that would be hellish. I think this is one of the major advantages of microservices over monolithic applications.

Bulkhead pattern

This technique lets us limit the blast radius of failures. By running different features on independent machines and networks we can create bulkheads that ensure our solution keeps working, protecting us from cascading failures. I actually think this is easier to implement in a monolithic application than in a microservices architecture. For example, running a copy of your application in two different AWS regions requires far less configuration when running a single artifact. Not to mention the infrastructure cost: running multiple instances of several services versus just a couple of instances of a single one.

Different characteristics (response time vs throughput)

Features have different non-functional requirements. They usually come down to whether a feature should have a short response time or be able to process a lot of requests, i.e. high throughput. Ideally we would have both, but there will always be some trade-off. Being able to run features on separate machines means we can configure them independently. A good example of this is CQRS: we want to get query results quickly, while it is sometimes OK for the client to wait a moment for a command to be processed.

This is similar to the performance case: we can scale individual features using feature flags, and nothing stands in the way of configuring those instances in a custom way.

Different languages

Being able to use different languages within one product has been one of the major selling points of the microservices architecture. The truth is that many companies try to limit this practice as much as possible, and with good reason: the cost of maintenance rises steeply with each additional language. That being said, there are cases when this is a valid solution. For example, when building an ML feature you will probably write it in Python instead of, say, Java, which might be the dominant language in your product. Being able to use different languages also means we can update language versions in a more controlled and gradual way. Just don’t end up with four different Java versions :P.

You can do that to a certain degree in a monolithic application, like using Java and Kotlin in one code base. However, it is definitely very limited, and you will probably have a hard time maintaining such an application. Upgrading one module at a time will also pose a significant challenge. I think this ability stems from the possibility of independent deployments, which, as mentioned previously, is pretty much impossible in a monolithic application.

Shorter build times / Fewer merge conflicts

As an application grows, so do its build and test times. Splitting the application into smaller chunks makes it easier to control those variables, making for a shorter feedback loop. The same applies to the process of merging changes: by forcing developers to work within the confines of a single service, we mitigate merge conflicts. In large monolithic applications it is possible to build only the things that changed, which should put a cap on build times. If teams work within their modules, the number of merge conflicts will also be limited. It all comes down to discipline and proper tooling.


Short answer to the question in the title: to a certain extent. Feature flags can allow a monolithic application to reap similar benefits in terms of performance. However, they do not provide as much flexibility. I think that as long as developers are deploying new features to production in a timely fashion but more tuning capabilities are needed, using feature flags is a viable option. This might be especially interesting for companies whose product needs to be deployable on-premise as well as a SaaS solution.


ADT with Java: Sealed Classes, Pattern Matching, Records

Photo by Karolina Grabowska

I am a big fan of Algebraic Data Types. They allow us to declaratively specify the grammar of a data model. Many modern statically typed languages deliver this functionality out of the box, allowing for very expressive code. Since I primarily work with Java, I have tried to use ADTs in Java on a number of occasions. It has not been a pleasant experience: Java simply does not provide the proper tools. The last time I tried was with Java 11. Now that we are at Java 15, I have decided to give it another go using the new features.

One of the basic data structures we work with is the list. When learning a functional language, lists are usually the first hurdle that takes a while to get your head around. In Haskell a list is pretty much defined as:

data List a = Empty | Cons a (List a)

This means that a List is either Empty (no elements) or has an element and a pointer to the next “object”. In short, this is the definition of a linked list. Pretty neat, right? And, to not come across as some Haskell snob, the same can be done in TypeScript:

type List<T> = null | {value: T, next: List<T>}

I tried to recreate that in Java 15 and came up with this:

public sealed interface LinkedList<T> permits LinkedList.Nil, LinkedList.Cons {
    record Nil<T>() implements LinkedList<T> {}
    record Cons<T>(T value, LinkedList<T> next) implements LinkedList<T> {}
}

There are a few new things here that were not possible before.

First, we have sealed classes. Those are classes/interfaces that strictly define which classes can inherit from them. This means that when we check the type of an object we can do it exhaustively. One of the major critiques of using instanceof is the fact that we never truly know what implementations we might encounter. Until now. This allows us to safely deliver more logic through the type system, letting the compiler verify it for us.

Second are records. Those allow us to declare immutable data models with far less boilerplate. It would be great if we didn’t need those curly brackets at the end :).

So this is the definition of the LinkedList using the Java 15 type system. Let’s see it in action:

LinkedList<String> emptyList = new LinkedList.Nil<>();
LinkedList<String> oneElementList = new LinkedList.Cons<>("Test", new LinkedList.Nil<>());

Let’s try to build a bigger list. To do that we need a util method:

static <T> LinkedList<T> addElement(LinkedList<T> list, T element) {
    if (list instanceof Nil<T> nil) {
        return new Cons<>(element, new Nil<>());
    } else if (list instanceof Cons<T> cons) {
        return new Cons<>(cons.value(), addElement(cons.next(), element));
    } else {
        throw new IllegalArgumentException("Unknown type");
    }
}

Here we yet again take advantage of a new Java feature: instanceof pattern matching. This allows us to skip the type cast after the instanceof check, making for more readable code. Once more work is done in this area and we get the planned pattern matching for switch, we will end up with something akin to:

static <T> LinkedList<T> addElement(LinkedList<T> list, T element) {
    return switch (list) {
        case Nil<T> nil -> new Cons<>(element, new Nil<>());
        case Cons<T> cons -> new Cons<>(cons.value(), addElement(cons.next(), element));
    };
}

Which will finally be quite pleasant to the eye. We can use this code as simply as:

LinkedList<Integer> list = new LinkedList.Nil<>();
for (int i = 0; i < 10; i++) {
    list = addElement(list, i);
}

I have added several more functions to the solution and the complete code can be found here.
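For illustration, one such extra function, a recursive length, might look like this (a sketch in the same instanceof-pattern style; the type is restated so the snippet stands alone, and it needs a Java version where sealed types are final, e.g. 17):

```java
// Sealed linked list as defined in the post, plus an illustrative length function.
sealed interface LinkedList<T> permits LinkedList.Nil, LinkedList.Cons {
    record Nil<T>() implements LinkedList<T> {}
    record Cons<T>(T value, LinkedList<T> next) implements LinkedList<T> {}

    // Recursively count elements: a Cons adds one, a Nil ends the list.
    static <T> int length(LinkedList<T> list) {
        if (list instanceof Cons<T> cons) {
            return 1 + length(cons.next());
        }
        return 0;
    }
}
```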


So there it is: an immutable linked list written using the type system. There is still room for improvement, but I feel like Java is on the right track. Although these features are still in preview, I have high hopes that when we reach the next LTS (Java 17?) we will be able to truly take advantage of ADT techniques. Of course Java is playing a sort of catch-up with other JVM languages like Kotlin and Scala, but I hope that its implementation will be better, since Java can play with the JVM as it sees fit. Next time someone asks you to implement a linked list in an interview, you can just use this :P.


Scrum only works with independent CI pipelines

Photo by Christophe Dion on Unsplash

The title is a bit clickbaity, I know. I couldn’t figure out a better one, so there it is.

What is agile development? This is a very controversial topic. There is the Agile Manifesto, there are strict “Agile” frameworks like Scrum and methodologies like Kanban. I like the simple definition where a team of senior engineers is put into a conference room with a direct line to the client and asked to deliver a releasable program increment every X period of time. Scrum has a lot of additional elements, one of which is the requirement that the program increment should have a previously agreed set of features. One could argue that this limits development flexibility, but it allows the development team to focus on delivery and make predictions about where the product could go in the future. You can change the scope of a sprint during its run, but this is frowned upon, and with good reason. In order for the team to estimate with any certainty and agree to the sprint scope with the PO during planning, it needs a certain level of predictability in the environment in which it operates. The Continuous Integration (CI) pipeline plays a huge role in that predictability.

A CI pipeline is an automated process that takes a new code change and runs it through a series of steps to verify its correctness: running tests, static analysis tools, etc. Generally, any modern development team will have a CI pipeline, as it helps make sure a change is correct. The problem arises when the same pipeline is used by multiple teams.

What happens when the teams share the CI pipeline?

The severity of the problems outlined here depends on the pipeline implementation; however, every shared pipeline eventually hits these issues:


Pipeline overload

When many engineers try to merge their changes, there will be times when the CI becomes overloaded and engineers have to wait a long time for their change to be verified. The more engineers, the longer the wait. This can be mitigated with automated scaling, but that will become either expensive or unstable (if low quality servers or something like AWS Spot Instances are used).

Pipeline failures

Having one pipeline makes changes to it difficult and dangerous, just like you would be cautious about making changes to your product on all client instances simultaneously. One wrong change can cripple the pipeline and leave all your developers unable to work, potentially costing the company millions. Having many deployment pipelines lets you make changes independently, e.g. through canary releases.


Long build times

This will be more prevalent with large monorepos, but having one deployment pipeline makes it very tempting to basically build everything every time. As the code base grows, so will the build times, making developers wait longer and longer. This can be mitigated with appropriate build tools like Bazel (never used it personally, but I have heard good things), but it can be costly for a company to change its processes once it reaches the point where this becomes a problem.


Pipeline-breaking mistakes

Once in a while some team will push something through the CI that causes it to die. Maybe they will deploy a Docker image that blocks a vital port (seen that happen, done it myself once or twice) or run a script that kills everything on a given machine (also seen it). Such mistakes happen. They should not block other teams’ merges.

You might ask: so what? Just take that into account when estimating the time needed to deliver a feature. After all, estimates are based on previous sprints, in which those incidents also occurred. The problem is that these issues are unpredictable. One sprint they will not happen, the next they will, derailing the whole sprint. We want to limit uncertainty to increase predictability.

So what to do with it?

It would be perfect if each team maintained and took responsibility for its own CI pipeline. This gives a team a high degree of control over its environment, significantly lowering uncertainty (of course there can still be unforeseen circumstances, like a network failure, but there is little one can do about that). This approach can be unrealistic (and unnecessary in a company with a strong DevOps culture) in large corporations, for various reasons like security, costs, etc. However, if the company does not have deployment properly figured out, it should let the teams set up their own pipelines and manage them independently. Once the company matures, it can try to unify the processes while keeping the pipelines separate, just like in a microservices architecture. This approach works for any project setup, regardless of whether it is a monolith or microservices, monorepo or multi-repo (although it is easiest in a multi-repo microservices architecture).

Final Thoughts

Think about this in terms of encapsulation. You try to enforce encapsulation within your services/modules in order to make them loosely coupled, stable and easy to change. The same applies to CI pipelines. Having a separate CI pipeline will allow the team to make better estimations, as there are fewer variables.

That said, in some products having independent pipelines will be infeasible. It can be too expensive or time consuming, there might be a lack of the necessary skills within the teams, etc. There can be many reasons. This, however, means it is also infeasible to expect development teams to deliver releasable features every sprint. And that is OK, albeit painful. In such cases, maybe use something closer to Kanban than Scrum.


The Mythical Modular Monolith

What is it and how to build one?

Photo by Raphael Koh on Unsplash

We all know the microservices trend that has been around for years now. Recently, voices have started to rise saying that maybe it is not a solve-all solution and that there are other, better suited approaches. Even if, eventually, by evolution, we end up with a microservices architecture, there are intermediate steps to get there safely. The most prominent, and a bit controversial in some circles, is the modular monolith. If you follow tech trends you will have already seen a chart like this:

I will not get into the details of this diagram, since there are other great resources that cover it. The main idea is that if we are at the bottom left (big ball of mud), then we want to move up and to the right through the modular monolith instead of the distributed ball of mud. Ideally we want to start with a modular monolith and, if our product is successful enough, potentially move towards microservices.

The problem I found with those articles/talks is that they discuss the modular monolith but rarely go into details of what it actually means, as if that were self-explanatory. In this piece I will outline some patterns that can help when building one.

What is a modular monolith?

Before we can talk about how to build a modular monolith, we need to answer this question. After all, what makes it different from a regular monolith? Is it just a “correctly” written monolith? That is true but too vague. We need a better definition.

The key is in the word modular. What is a module? It is a collection of functionalities that have high cohesion within an isolated environment (loosely coupled with other functionality). There are various techniques that can be used to gather such functionalities and build a boundary around them, e.g. Domain Driven Design (DDD).

Breaking it down:

  • Clear responsibility / high cohesion: Each module has a clearly defined business responsibility and handles its implementation from top to bottom. From DDD perspective: a single domain.
  • Loosely coupled: there should be little to no coupling between modules. If I change something in one module it should affect other modules minimally (or, even better, not at all).
  • Encapsulation: the business logic and domain model should not be visible from outside of the module (linked with loose coupling).
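To make the encapsulation point concrete, here is a minimal Java sketch of a hypothetical "orders" module. All names are invented for illustration; nested classes stand in for package boundaries. The only public surface is the API interface and a factory, while the implementation stays hidden:

```java
// Hypothetical "orders" module: the only public types are the API
// interface and the factory method. The implementation stays hidden,
// which is the encapsulation rule described above.
public class OrdersModuleSketch {

    // The contract other modules are allowed to depend on.
    public interface OrdersApi {
        String placeOrder(String productId);
    }

    // Package-private implementation: invisible outside the module.
    static class OrdersService implements OrdersApi {
        @Override
        public String placeOrder(String productId) {
            // domain logic lives here, behind the API
            return "order-for-" + productId;
        }
    }

    // Single public entry point to obtain the module's API.
    public static OrdersApi ordersApi() {
        return new OrdersService();
    }

    public static void main(String[] args) {
        OrdersApi api = ordersApi();
        System.out.println(api.placeOrder("book-42")); // prints order-for-book-42
    }
}
```

Extracting such a module into a microservice later would mean reimplementing only the transport behind `OrdersApi`, which is exactly the rule-of-thumb test described below.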

A good rule of thumb for checking whether a module is well written is to ask how difficult it would be to extract it into a separate microservice. If it's easy, the module is well written. If you would need to make changes in multiple modules to do it, it needs some work. A typical ball of mud might also have modules, but they will break the aforementioned guidelines.

Having defined what a module is, defining a Modular Monolith is straightforward. A Modular Monolith is a collection of modules that adhere to those rules. It differs from a Microservice architecture in that all the modules are deployed as one deployment unit and often reside in one repository (aka a mono-repo).

Integration Patterns

There are a number of integration patterns one can employ when building a modular monolith.

Each one has its strengths and weaknesses and should be chosen depending on your needs. I have ranked them by level of maturity.

Level 1: One compilation unit/module

The codebase has a single compilation unit. The modules communicate with each other using exposed services or internal events. This approach is the fastest to implement initially; however, as the product grows it becomes progressively harder to add new functionality, as coupling tends to be high in such systems. The same applies to ease of reasoning: at first it is very easy to "understand" the system, but over time the number of cases you need to keep in mind grows, as it is very hard to determine the relations between domains. The benefit of this approach is that we can quickly deliver initial value while refining our development practices (especially with new teams). From a practical point of view, the build/test times of such a system will keep growing, slowing down development.

Recommendation: Use with a single small team (2–3 people). Ideal for Proof of Concept and MVPs.
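The "internal events" communication mentioned above can be sketched with a minimal in-process event bus. All names here are hypothetical: a hypothetical "orders" module publishes, a hypothetical "shipping" module subscribes, and neither calls the other directly:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-process event bus: the "orders" module publishes an event,
// the "shipping" module subscribes. The modules never call each other directly.
public class InternalEventsSketch {

    // internal event, plain immutable data
    record OrderPlaced(String orderId) {}

    static class EventBus {
        private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();
        void subscribe(Consumer<OrderPlaced> s) { subscribers.add(s); }
        void publish(OrderPlaced e) { subscribers.forEach(s -> s.accept(e)); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        List<String> shipped = new ArrayList<>();
        // "shipping" module reacts to the event
        bus.subscribe(e -> shipped.add(e.orderId()));
        // "orders" module publishes without knowing who listens
        bus.publish(new OrderPlaced("42"));
        System.out.println(shipped); // [42]
    }
}
```

Even at Level 1, this kind of indirection keeps the coupling visible in one place, which helps later when modules are pulled apart.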

Level 2: Multiple compilation units/modules

The codebase has multiple compilation units, e.g. multiple Maven modules, one per domain. Each module exposes a clearly defined API. This approach allows for better encapsulation, as there is a clear boundary between the modules. You can even split the team and distribute responsibility per module, allowing for independent development. Readability also benefits, since it is easy to determine the dependencies between modules. In addition, we can build and test only the module that has changed, which speeds up development significantly. It requires a little more fiddling with build tools, but nothing a regular developer couldn't handle.

Recommendation: Good for a typical product team. Team members can work fairly independently. Could work with 2–3 small teams. This approach will take you far, as the implementation overhead is small while it remains easy to maintain consistency across the code base.

Level 3: Multiple compilation module groups

Each domain is split into two or more modules. This is an expansion of the previous approach: we extract an API module that other domains depend on, which further enforces encapsulation. You can even employ static analysis tools that forbid other modules from depending on anything but the API modules. This approach could also benefit from Java's Project Jigsaw (the module system).

Recommendation: This is ideal when moving from a medium-sized product to a large product where 2+ full-size product teams are needed. Each team exposes its API module for the others to consume.
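The Project Jigsaw option mentioned above could enforce the API-only dependency rule at compile time. A hypothetical module descriptor for an "orders" domain, purely illustrative and not tied to any real codebase:

```java
// module-info.java for a hypothetical orders module:
// only the api package is exported, the internals stay invisible.
module com.example.orders {
    exports com.example.orders.api;  // other domains may depend on this
    // com.example.orders.internal is NOT exported:
    // depending on it fails at compile time
}
```

Static analysis tools such as ArchUnit can express a similar rule without adopting the module system, which some teams find less invasive.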

Level 4: Going web: Communicating through network

Same as Level 3, but the modules are totally independent (no shared API module) and communicate over the network (REST/SOAP, queues, etc.). This is an extreme step, one that should not be taken lightly. You lose compile-time checking on the APIs and gain a host of networking-related problems. In return you get very high decoupling between modules, as there is no shared code (apart from some utils, etc.). Taking this step means we are nearing a Microservice architecture.

Option A: a single deployment unit. It might seem weird to call a REST API when everything is deployed as one unit, but this approach does allow for better load distribution, especially when using a queue like Kafka for communication. I agree that in most cases this is redundant, but it is a good stepping stone towards Option B.

Option B: separate deployment units. This is pretty much the final move from a modular monolith to a microservice architecture.

Level 4.5: separate repositories (i.e. moving away from the monorepo), separate CI/CD pipelines, etc.

This is the move to a full-blown Microservice approach. I will not go into detail, as this article is not about Microservices.

Recommendation: Large/Multiple Products, many development teams, efficient DevOps culture, mature company.


You might have noticed that Cost of Maintenance is never low. That is correct.

You cannot hide complexity. You can only change its location.

Contracts and Testing

A very important aspect of modular monoliths is treating the APIs, or domain boundaries (call them what you wish), as contracts between domains/modules that must be respected. This might seem obvious, but in a monolith it is easy to fall into the trap of treating module APIs as second-class citizens. We should design and maintain them as if we were designing a REST API. Changes to them should be made carefully, and they should have a proper set of tests. This is key when multiple teams cooperate.

One of the more common issues is no clear distinction as to which team is responsible for which module. Responsibility for APIs becomes blurred and their quality drops rapidly. Each module should have one team responsible for it; ownership is key. The module's API should be a contract between that team and the teams that use it. That API MUST BE covered by tests, and only the team responsible for the module should introduce changes to it. This does increase communication overhead and extends the time needed to introduce changes, but it keeps those factors constant instead of letting them spin out of control.
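A contract test in this spirit exercises the module only through its public API, never its internals. A minimal plain-Java sketch, with a hypothetical "inventory" module and an in-memory stand-in implementation (all names invented):

```java
// Hypothetical contract check for an "inventory" module: the consuming
// team depends only on the public API, never on internal classes.
public class InventoryContractTest {

    public interface InventoryApi {
        void addStock(String sku, int quantity);
        int available(String sku);
    }

    // Stand-in implementation; in a real codebase this comes from the module.
    static class InMemoryInventory implements InventoryApi {
        private final java.util.Map<String, Integer> stock = new java.util.HashMap<>();
        public void addStock(String sku, int quantity) { stock.merge(sku, quantity, Integer::sum); }
        public int available(String sku) { return stock.getOrDefault(sku, 0); }
    }

    public static void main(String[] args) {
        InventoryApi api = new InMemoryInventory();
        api.addStock("sku-1", 5);
        api.addStock("sku-1", 3);
        // the contract: stock accumulates, unknown SKUs report zero
        if (api.available("sku-1") != 8) throw new AssertionError();
        if (api.available("sku-2") != 0) throw new AssertionError();
        System.out.println("contract holds");
    }
}
```

Because the test only touches `InventoryApi`, the owning team can freely rework the internals without breaking its consumers.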


I hope this makes the Modular Monolith a tiny bit less mythical. I think we developers like to over-complicate things, while in truth most software engineering comes down to a few basic principles. There are a few more topics worth discussing regarding the Modular Monolith (tooling, architecture as code), but I think this gives a good starting point. The most important takeaways are:

  • Encapsulated modules with high cohesion and low coupling
  • Ownership is key

Keep those in mind and you will get far.



What is the point of having tests? Why do we write them? To prove that the code is correct (in some cases, yes)? To meet test coverage requirements? Or maybe we don't write them at all? For a single-use program or a proof of concept, the last approach is usually best. Sounds heretical, but hear me out.

Where tests shine is code maintainability. Tests are indispensable when writing a piece of code that is intended to live. Code that lives is code that changes, code that has multiple people working on it. Tests give you a semblance of security as to the correctness of your changes during refactors and modifications.

I once heard a description of legacy code that I think is spot-on: code becomes legacy when it reaches a point where no one dares to change it. Having tests extends a code base's lifespan (all code eventually becomes legacy). Being less scared of introducing breaking changes, and having runnable use cases, gives people courage. You will need it when asked to modify that piece of code that hasn't been touched for the last 10 years.

Tests are also a great insight into the purpose of a particular piece of code: a kind of runnable documentation, documentation that has to be updated as functionality changes. When entering a new code base, my first reaction is usually to start reading from the tests.

However, it’s not all sunshine and roses.

Note: I don't want to get into a discussion of what is a unit test, what is an integration test, etc. I have seen people fight to the death over nomenclature whose sole purpose is to ease communication. As long as both parties understand what they are referring to, you can call it whatever your heart desires. My preferred approach is not to call tests anything specific and just state exactly what I am testing: a single method, a single class, a collection of classes, the connection with a database, a REST endpoint, etc.

The problem with tests, as with all things, is that it is possible to overdo them. Many developers (myself included) jump on the testing train (and rightly so) and write a test for every class and every method. This doesn't sound so bad; after all, you have really well-tested code, right? In theory, yes. The problem is that tests cost. They cost time: to write, to review, to fix, to run. Eventually you reach a point where adding yet another test brings very little additional value. This is the law of diminishing returns. The trick is to strike a balance. There are a number of techniques that try to remedy this, like the test pyramid. My experience with those techniques is that trying to adhere to their rules often causes us to write tests we usually would not write, but have to in order to fit the framework. I have experimented with multiple approaches and I want to share what has worked best for me.

I like the idea of thinking about tests as documentation. You know, that thing we always say we need, that no one ever writes, and that, even when it exists, immediately stops being up to date. Approaching tests as documentation means they need to cover your business use cases. In my experience, when taking the DDD approach, use cases are usually scenarios that describe how a given bounded context interacts with other contexts. This means our tests should reside on that boundary and run against its entry points. For a Microservice this means testing the REST/SOAP/whatever endpoints. For a modular monolith, it means testing against the modules' external APIs.

Does that mean there should never be tests for individual classes/methods? No. There are bound to be cases where you are delivering functionality with a huge number of variations, e.g. a credit score calculator. Testing such functionality through the module API might be impractical. In such a case a test for an individual class is highly recommended, with a caveat: the class should be a leaf. What does that mean?

This is not my idea, but I really like the analogy. When we look at our class dependency graph, there are usually classes that are roots, i.e. nothing depends on them. Those are usually our module APIs, aka boundary entry points. This is what we test. The other distinctive classes are those that depend on no other class in our module: our repositories, utilities, etc., aka leaves. Why do we want the class that needs individual testing to be a leaf? Exactly because it has no dependencies. We do not have to mock any other service and thereby assume how it behaves. If we had mocks and the behavior changed, we would need to remember to fix them all. Without mocks we can truly test the class as standalone functionality. To make a class a leaf we usually need to refactor it out of the middle of the dependency graph, since, in my experience, such functionality ends up inside domain service classes. As an added benefit, such an operation usually improves our design.
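As a sketch of such a leaf, here is a made-up credit-score rule extracted into a dependency-free class, testable without a single mock. The class name and the scoring formula are invented for illustration:

```java
// A "credit score" style calculation extracted into a leaf class:
// it depends on nothing, so it is testable in complete isolation.
public class CreditScoreCalculator {

    // A pure function of its inputs: a dependency-graph leaf.
    // The formula is invented for illustration only.
    public int score(int onTimePayments, int missedPayments) {
        int base = 500;
        int raw = base + onTimePayments * 10 - missedPayments * 50;
        return Math.max(0, Math.min(850, raw)); // clamp to a plausible range
    }

    public static void main(String[] args) {
        CreditScoreCalculator calc = new CreditScoreCalculator();
        System.out.println(calc.score(20, 1)); // 500 + 200 - 50 = 650
        System.out.println(calc.score(0, 20)); // clamped to 0
    }
}
```

Every variation of the rule can now be covered by a cheap direct test, while the module-API tests only need one or two scenarios that prove the calculator is wired in.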

In summary, we want to test our bounded context APIs, aka module entry points, aka dependency tree roots. For special functionality that needs an individual suite of tests, we extract it into a leaf class and test it in isolation. In my experience this approach strikes the best cost-benefit balance, and the rule of thumb is quite simple to follow. No need for arguments about whether we need to test those getters with five other services mocked. As a matter of fact, I have become a firm believer that the existence of a mock is a code smell and the code in question should probably be refactored.

I think any text about tests would be incomplete without mentioning test coverage and its measurement. I will not go into much detail here, other than to say that I am against measuring line coverage and much prefer measuring branch coverage; I think it allows for better test optimization. Unfortunately I have not found any good tools for measuring it, especially for large, multi-module code bases. Maybe I will eventually write my own or try to fix an existing one.

As a last note, I would like to share my approach to writing tests in Java. Again, this is my preferred way. I really like how Spock (a Groovy test framework) forces the developer to use the BDD approach. I try to write all my tests using it; however, it has a huge downside: Groovy. As mentioned before, we write tests for code that lives, changes, and will be refactored. Groovy, being a dynamically typed language, is abysmal during refactors. Because of this, I write the tests (Specifications) in Spock, but all the fixture setup is done in plain Java. This way we get the best of both worlds: nice, readable tests, and fixtures that survive refactors.

There is also the subject of context bootstrapping. By far the most common way in Java is to wire the whole application using Spring and, during tests, start the whole context. I find this approach acceptable in a Microservice world, but I still much prefer bootstrapping the dependencies manually. First, the tests run much faster; second, I can clearly see how my dependencies interact and therefore check for smells and warning signs. Remember: Dependency Injection is not equivalent to IoC containers. In a large monolithic application, this approach is pretty much the only sensible way.
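Manual bootstrapping can be as simple as a plain-Java composition root. A minimal sketch with hypothetical repository/service names, no container involved:

```java
// Manual dependency injection without an IoC container: a "composition
// root" wires concrete implementations together in plain Java.
public class ManualWiringSketch {

    interface UserRepository { String findName(int id); }

    static class InMemoryUserRepository implements UserRepository {
        public String findName(int id) { return "user-" + id; }
    }

    static class GreetingService {
        private final UserRepository users;
        // constructor injection: the dependency is explicit and final
        GreetingService(UserRepository users) { this.users = users; }
        String greet(int id) { return "Hello, " + users.findName(id); }
    }

    public static void main(String[] args) {
        // the whole "context" is two lines: fast to start,
        // and the dependency graph is visible at a glance
        UserRepository repo = new InMemoryUserRepository();
        GreetingService service = new GreetingService(repo);
        System.out.println(service.greet(7)); // Hello, user-7
    }
}
```

Swapping `InMemoryUserRepository` for a real implementation in tests requires no framework magic, just passing a different argument.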

I have not mentioned so-called end-to-end tests, i.e. testing by e.g. clicking through the front-end application deployed on a close resemblance of the production environment. That is a topic for a whole other post.


Builder Anti-Pattern

Since I am professionally a Java developer, this will primarily focus on Java code.

I feel there is a misunderstanding, and in effect an overuse, of the Builder pattern in Java codebases. The purpose of Builders is to allow the creation of complex objects that have different representations (i.e. have optional fields). First and foremost, I consider Builders a code smell (I will get to why later). In my experience, developers use Builders not as a way to create objects that can have multiple representations, but because it is convenient to construct big objects with them. I was one of those developers. On the surface this does not seem like bad practice: it lets you create an object with less chance of assigning a parameter to the wrong field, since you can see which field you are setting. The approach became notorious with the appearance of Lombok (note: this is not a Lombok hate text; I actually really like some aspects of it).

The problem is that this is not really the Builder pattern. It is a substitute for a missing language feature: named parameters (in fact, most design patterns are ways of working around language deficiencies). When faced with an object that has a huge number of fields typed String, int and boolean (we have all seen those), developers cower in fear at the thought of building it with a constructor. Anyone with a bit of experience has faced the hell of hunting a bug caused by a wrong argument order in a constructor. If Builders solve this issue, then what is the point of this post? Let's just use them… except let's not. I say use constructors (i.e. construct the whole object in one method call) or, in the most extreme cases, factories.

I mentioned earlier that I consider Builders a code smell. Why? Above all, the Single Responsibility Principle. The clue is in the description of the pattern: construction of different representations. In essence, if you need a Builder, there is a huge chance your object is doing too much: it has too many dependencies and too many properties. Split it. In fact, I have almost never seen a Builder used to construct an object that could have many representations (i.e. all fields were always set regardless of context), meaning that, by definition, it was used incorrectly. If an object has one representation but still a large number of dependencies, split it further. If there are many implementations of a single interface that you do not want to expose, use the Factory pattern. An exception could be a large DTO that we have to handle due to some higher force, e.g. integration with an external system. Even then I would be very careful, for reasons stated further down.

Many of us use Domain Driven Design, or at least say we do. It introduces the concept of Value Objects. Use them. They solve the problem of having multiple parameters of the same type: instead of three String-typed variables you will be passing three domain-specific types. Let the compiler help you. Project Valhalla will make this even easier. Digression: I will take this opportunity to shamelessly plug something I really believe in, which is Type (not Test) Driven Development. Look it up. Unfortunately, Java's type system only barely qualifies as statically typed (and has no type aliases), which makes it hard to benefit from this approach. Maybe our industry will eventually move to better type systems.
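As a sketch of Value Objects in modern Java, small records give each parameter its own type, so a swapped argument no longer compiles. The domain names here are invented for illustration:

```java
// Value Objects instead of three interchangeable Strings:
// the compiler now rejects swapped arguments.
public class ValueObjectSketch {

    record CustomerId(String value) {}
    record Email(String value) {}
    record Iban(String value) {}

    record Payment(CustomerId customer, Email contact, Iban account) {}

    public static void main(String[] args) {
        Payment p = new Payment(new CustomerId("c-1"),
                                new Email("a@b.c"),
                                new Iban("DE89"));
        // new Payment(new Email("a@b.c"), new CustomerId("c-1"), ...)
        // would be a compile error, not a runtime bug
        System.out.println(p.customer().value()); // c-1
    }
}
```

Before records, the same effect required small hand-written wrapper classes; the idea, not the syntax, is what matters.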

The next issue I take with Builders is that we forgo the best checking tool available to us during refactors: the aforementioned compiler. If you add fields or dependencies to a class constructed via a Builder, you have to manually check every place it is used. If you use a constructor, the compiler finds them all for you; the code will not compile until everything is fixed. I have seen countless bugs caused by not passing all arguments to a builder. This can also happen during initial development: it is easy to accidentally forget a field or set the same one twice (go on, compare the number of setter calls to the number of fields). You might argue that this could be caught by tests (hopefully you have those…), which is true; but the same could be said about incorrect parameter order in a constructor. Unless you use e.g. Spock (which, by the way, I love) and Groovy to construct your fixtures. God help you. Construct your objects in Java/Kotlin so you get that sweet compiler verification. In addition, compile-time checking is faster and… can't be accidentally deleted. Bottom line: Builders are easier to write initially (in theory) but harder to maintain. Note: I find modifying/removing fields to have a similar refactoring cost in both constructors and Builders.
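A hand-rolled, Lombok-style builder illustrates the pitfall: a forgotten call compiles fine and silently leaves a field null, while the equivalent constructor call would fail at compile time. Names are hypothetical:

```java
// A forgotten builder call compiles fine and leaves a field null;
// a constructor with the same fields would fail at compile time instead.
public class BuilderPitfallSketch {

    static class User {
        final String name;
        final String email;
        User(String name, String email) { this.name = name; this.email = email; }
    }

    // hand-rolled, Lombok-@Builder-style builder
    static class UserBuilder {
        private String name;
        private String email;
        UserBuilder name(String n) { this.name = n; return this; }
        UserBuilder email(String e) { this.email = e; return this; }
        User build() { return new User(name, email); } // no completeness check!
    }

    public static void main(String[] args) {
        User u = new UserBuilder().name("Ada").build(); // email never set; still compiles
        System.out.println(u.email); // null
        // new User("Ada") would not compile: the compiler finds the gap for us
    }
}
```

The bug surfaces only when `email` is eventually dereferenced, possibly far from the construction site, which is exactly the class of bug described above.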

Another problem, mostly with Lombok Builders, is that they leak implementation details: they expose all internal fields (unless there is a feature I am not aware of). This is not a huge problem for anemic objects, especially with a Functional Programming approach (though then you face other problems, see below). It is not acceptable when used with OO-style encapsulation.

Builders used with a Functional Programming approach should be immutable. Lombok Builders are not, and neither are most implementations I have seen. So now you have an object (Builders are objects too) that is mutable and, depending on use, possibly stateful. One useful thing Lombok did introduce to Java is the toBuilder() method: it creates a shallow copy of an object while letting you modify chosen fields. It reminds me of functional lenses.
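The toBuilder()-style copy can be approximated with an immutable record and a "wither" method, which is the functional-lens flavour mentioned above. This is a sketch of the idea, not Lombok's actual generated code:

```java
// A toBuilder()-style shallow copy expressed with an immutable record
// and a wither method: copy-and-modify, the original stays untouched.
public class WitherSketch {

    record Account(String owner, long balance) {
        Account withBalance(long newBalance) {
            return new Account(owner, newBalance); // new instance, same owner
        }
    }

    public static void main(String[] args) {
        Account a = new Account("Ada", 100);
        Account b = a.withBalance(250);
        System.out.println(a.balance() + " " + b.balance()); // 100 250
    }
}
```

Unlike a mutable builder, there is no intermediate half-initialized state to misuse.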

So this is it. If you actually must use Builders, then please take the Effective Java approach: mandatory fields are initialized in the Builder's constructor, while optional fields are set through setters. This ensures the necessary fields are always passed, since we have compile-time checking, while still allowing multiple representations.
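A minimal sketch of that Effective Java shape, with invented fields: the mandatory url goes through the Builder constructor (compile-time checked), the optional ones through chained setters with sensible defaults:

```java
// Effective Java style Builder: mandatory fields in the Builder's
// constructor, optional fields via setters with defaults.
public class EffectiveJavaBuilderSketch {

    static class HttpRequest {
        final String url;        // mandatory
        final String method;     // optional, defaulted
        final int timeoutMillis; // optional, defaulted

        private HttpRequest(Builder b) {
            this.url = b.url;
            this.method = b.method;
            this.timeoutMillis = b.timeoutMillis;
        }

        static class Builder {
            private final String url;        // must be supplied up front
            private String method = "GET";
            private int timeoutMillis = 1000;

            Builder(String url) { this.url = java.util.Objects.requireNonNull(url); }
            Builder method(String m) { this.method = m; return this; }
            Builder timeoutMillis(int t) { this.timeoutMillis = t; return this; }
            HttpRequest build() { return new HttpRequest(this); }
        }
    }

    public static void main(String[] args) {
        HttpRequest r = new HttpRequest.Builder("https://example.com")
                .method("POST")
                .build();
        System.out.println(r.method + " " + r.url + " " + r.timeoutMillis);
    }
}
```

Forgetting the url is now a compile error rather than a null at runtime, while the optional fields keep the multiple-representations benefit that justifies a Builder in the first place.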
