I want to go through a thought experiment where I try to mimic microservices behaviour in a monolithic architecture. The thesis is that by using feature flags in a monolithic application we can reap the benefits we would get from microservices, without the inherent costs associated with them (although we will probably move the complexity elsewhere :P).
Microservices Architecture: Loosely coupled, separately built artifacts, split across context boundaries and deployed independently. Multiple teams involved in development.
Monolithic Architecture: A single build artifact (and in effect one deployment unit), built from multiple modules that are split across context boundaries. Multiple teams involved in development.
Feature Flags: A technique that allows for changing system behaviour without changing any code.
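As a minimal sketch of that last definition (all names here are mine, not from any particular flag library), a feature flag can be as simple as a lookup that the code consults at runtime:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal feature-flag store: behaviour is switched by data, not by code changes.
// Class and method names are illustrative, not taken from any specific library.
public class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    public void set(String name, boolean enabled) {
        flags.put(name, enabled);
    }

    // Unknown flags default to "off", so new code paths stay dark until enabled.
    public boolean isEnabled(String name) {
        return flags.getOrDefault(name, false);
    }
}
```

A real system would back this with configuration or a remote service so flags can be flipped without a redeploy; the in-memory map keeps the sketch self-contained.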
I think this is the most common argument for using a microservices architecture: the ability to scale functionality independently. With microservices you can easily scale individual services. The catch is that you have to have designed your system so that scaling a service actually translates to scaling a feature, which is often easier said than done. System functionality often ends up split between multiple services; sometimes that is what you want, and sometimes it is just bad design. You also have to make sure that all the other overhead involved does not overshadow the performance gain from that scaling.
Scaling your monolith using feature flags is definitely possible. Put a feature behind a flag, deploy the whole artifact, but enable just that single flag. This ensures that all the resources on that node go to processing just that particular feature. Sure, this is not what feature flags were initially designed for, but this is an experiment, so hey, let's go wild. There are limits to this technique; however, since it reportedly worked well for Facebook (although admittedly I can't find a source for this), it will probably be enough in most cases.
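The "deploy everything, enable one feature" idea above can be sketched as each node reading its enabled features from the environment and only serving those (the `ENABLED_FEATURES` variable and feature names are my own illustration):

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: every node runs the same artifact, but a node dedicated to search
// would start with ENABLED_FEATURES=search and refuse to serve anything else,
// so all of its resources go to that one feature.
public class NodeRole {
    private final Set<String> enabled;

    public NodeRole(String enabledFeatures) {
        this.enabled = Arrays.stream(enabledFeatures.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toSet());
    }

    public static NodeRole fromEnvironment() {
        return new NodeRole(System.getenv().getOrDefault("ENABLED_FEATURES", ""));
    }

    public boolean serves(String feature) {
        return enabled.contains(feature);
    }
}
```

A request router or message consumer would then check `serves(...)` before registering handlers, leaving the rest of the artifact dormant on that node.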
Clear context boundaries
A cornerstone of good modularity is clearly defined context boundaries. In a microservices architecture they are enforced by a physical network layer. This, however, does not prevent having clear boundaries in a monolith. In my experience (I realize this is somewhat subjective), if you do not have enough knowledge/discipline to maintain well-defined boundaries in a monolith, then you will end up with a distributed ball of mud. I have covered how this can be done in a monolith in a previous post.
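One common way to keep such a boundary inside a single code base is to let other modules depend only on a narrow interface, never on a module's internals. A sketch, with hypothetical names (a real layout would keep the implementation package-private inside its own module):

```java
// The boundary: the only thing other modules are allowed to see.
public interface BillingApi {
    long priceInCents(String productId);
}

// The implementation lives behind the boundary. In a multi-module build it
// would not even be on other modules' compile classpath.
class BillingModule implements BillingApi {
    @Override
    public long priceInCents(String productId) {
        // Hard-coded for the sketch; a real module would consult its own storage.
        return "premium".equals(productId) ? 9900 : 990;
    }
}
```

Build tooling can then enforce that only `BillingApi` is exported, turning the boundary into a compile-time error rather than a convention.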
Independent deployments and ownership
I think this is the most important and powerful feature of a microservices architecture. The ability to deploy small chunks of code independently, ideally by every team on its own, in effect decouples the teams. They can deploy often and quickly verify their changes on the best testing environment: production. They can own their services, meaning they will have intimate knowledge of their whole environment to a degree simply impossible in a monolithic application. My experience is that the more control a team had over its production environment, pipelines, etc., the more stable and error-free the product was.
Unfortunately, I do not see any way to mimic that in a monolithic application. Even if each module governed its own versioning, managing that would be hellish. I think this is one of the major advantages of microservices over monolithic applications.
This technique lets us limit the blast radius of failures. By running different features on independent machines and networks we can create bulkheads that keep our solution working, protecting us from cascading failures. I actually think this is easier to implement in a monolithic application than in a microservices architecture. For example, running a copy of your application in two different AWS regions requires far less configuration when there is a single artifact to run. Not to mention the infrastructure cost: compare running multiple instances of several services with running just a couple of instances of a single artifact.
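A sketch of that bulkhead, reusing the same single artifact with different flags per node (the region names are AWS's; the `ENABLED_FEATURES` variable, feature names, and jar name are illustrative):

```shell
# Same artifact everywhere; each node only enables its own features, so a
# failure in one feature (or one region) does not cascade into the others.

# Nodes in us-east-1, dedicated to order processing:
ENABLED_FEATURES=orders java -jar monolith.jar

# Nodes in eu-west-1, dedicated to reporting:
ENABLED_FEATURES=reporting java -jar monolith.jar
```

Because every node runs the identical build, adding a region or reshuffling which features run where is a configuration change, not a new deployment pipeline.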
Different characteristics (response time vs throughput)
Features will have different non-functional requirements. They usually come down to whether a feature should have a short response time or be able to process a lot of requests, aka high throughput. Ideally we would have both, but there will always be some trade-off. Being able to run features on separate machines means we can configure them independently. A good example of this is CQRS: we want to get query results quickly, while it is sometimes OK for the client to wait a moment for a command to be processed.
This is similar to the performance case. We are able to scale individual features using feature flags, and nothing stands in the way of configuring each of those instances differently.
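As an illustration of per-node tuning, the same artifact could pick a different thread pool depending on which role its flags enable: query nodes favouring response time, command nodes favouring throughput. The role names and pool sizes below are my own, not a recommendation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: one artifact, two tunings, chosen by the node's enabled role.
public class NodeTuning {
    public static ThreadPoolExecutor executorFor(String role) {
        if ("query".equals(role)) {
            // Latency-oriented: spawn threads on demand (up to 64) instead of
            // letting requests wait in a queue.
            return new ThreadPoolExecutor(0, 64, 60L, TimeUnit.SECONDS,
                    new SynchronousQueue<>());
        }
        // Throughput-oriented: a small fixed pool; commands queue up and are
        // drained steadily, trading latency for sustained volume.
        return (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
    }
}
```

The same idea extends to heap sizes, connection pools, or cache settings: all driven by the node's flags rather than by separate builds.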
Being able to use different languages within one product has been one of the major selling points of microservices architecture. The truth is that many companies try to limit this practice as much as possible, and with good reason: the cost of maintenance rises sharply with each additional language. That being said, there will be cases when this is a valid solution. For example, when building an ML feature you will probably write it in Python instead of, say, Java, which might be the dominant language in your product. Being able to use different languages also means we can update language versions in a more controlled and gradual way. Just don't end up with four different Java versions :P.
You can do that to a certain degree in a monolithic application, for example by using Java and Kotlin in one code base. However, it is definitely very limited, and you will probably have a hard time maintaining such an application. Upgrading one module at a time will also pose a significant challenge. I think this ability stems from the possibility of independent deployments, which, as we mentioned previously, is pretty much not possible in a monolithic application.
Shorter build times / Fewer merge conflicts
As the application grows, so do the build and test times. Having the application split into smaller chunks makes it easier to control those variables, making for a shorter feedback loop. The same applies to merging changes: by forcing developers to work within the confines of a single service, we can mitigate the occurrence of merge conflicts. In large monolithic applications it is possible to build only the things that changed, which should put a cap on build times. If teams work within their own modules, the number of merge conflicts will also be limited. It all comes down to discipline and proper tooling.
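For example, most build tools already support building only what changed. With Gradle it might look like this (the `:billing` module name is hypothetical; the flags are real Gradle options):

```shell
# Build and test only the billing module; modules it depends on are rebuilt
# only if their inputs changed.
./gradlew :billing:build

# Or build everything, letting the build cache and incremental compilation
# skip modules whose outputs are already up to date.
./gradlew build --build-cache
```

Combined with per-module ownership, this keeps a large monolith's feedback loop closer to that of a small service.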
Short answer to the question in the title: to a certain extent. Feature flags can allow a monolithic application to reap similar benefits in terms of performance, but they do not provide as much flexibility. I think that as long as the developers are deploying new features to production in a timely fashion, yet more tuning capabilities are needed, using feature flags is a viable option. This might be especially interesting for companies whose product needs to be deployable on-premises as well as a SaaS solution.