
I really like modular designs, but this article is missing some key limitations of monolithic applications, even when they are really well modularized (this is written mostly from the perspective of a Java developer):

* they force alignment on one language or at least runtime

* they force alignment of dependencies and their versions (yes, you can have different versions e.g. via Java classloaders, but that gets tricky quickly, you can't share them across module boundaries, etc. - see the classloader sketch at the end of this comment)

* they can require lots of RAM if you have many modules with many classes (semi-related fun fact: I remember a situation where we hit the maximum number of class files a JAR loaded into WebLogic could have)

* they can be slow to start (again, classloading takes time)

* they may be limiting in terms of technology choice (you probably don't want to have connections to an RDBMS and Neo4j and MongoDB in one process)

* they don't provide resource isolation between components: a busy loop in one module eating up lots of CPU? Bad luck for other modules.

* they take a long time to rebuild and redeploy, unless you apply a large degree of discipline and engineering excellence to only rebuild changed modules while making sure no API contracts are broken

* they can be hard to test (how does DB set-up of that other team's component work again?)

I am not saying that most of these issues cannot be overcome; on the contrary, I would love to see monoliths built in a way where these problems don't exist. I've worked on massive monoliths which were extremely well modularized. The practical issues above were what killed productivity and developer joy in those contexts.

Let's not pretend large monoliths don't pose specific challenges, or that folks have moved to microservices over the last 15 years without good reason.
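
To make the classloader point concrete, here is a minimal sketch (jar paths and class names are hypothetical) of loading two versions of the same library side by side; it works, but the two Widget classes are distinct types, so instances can't be shared across the module boundary:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolatedVersions {
        public static void main(String[] args) throws Exception {
            // Hypothetical jars holding v1 and v2 of the same dependency.
            URL v1Jar = new File("libs/somelib-1.0.jar").toURI().toURL();
            URL v2Jar = new File("libs/somelib-2.0.jar").toURI().toURL();

            // Parent = null, so each loader resolves the library on its own.
            try (URLClassLoader v1 = new URLClassLoader(new URL[]{v1Jar}, null);
                 URLClassLoader v2 = new URLClassLoader(new URL[]{v2Jar}, null)) {
                Class<?> a = v1.loadClass("com.example.somelib.Widget");
                Class<?> b = v2.loadClass("com.example.somelib.Widget");
                // Same fully qualified name, but two distinct Class objects:
                // an instance from one loader can't be used as that type by the other.
                System.out.println(a == b); // false
            }
        }
    }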



Mostly valid, but...

On the RAM front, I am now approaching terabyte levels of services for what would be gigabyte levels of monolith. The reason is that I have to deal with mostly duplicate RAM - the same 200+ MB of framework crud replicated in every process. In fact a lot of microservice advocates insist "RAM is cheap!" until reality hits, forgetting in particular that the cost is replicated in every development/testing environment.

As for slow startup, a server reboot can be quite excruciating when all these processes are competing to grind & slog through their own copy of that 200+ MB and get situated. In my case, each new & improved microservice alone boots slower than the original legacy monolith, which is just plain dumb, but it's the tech stack I'm stuck with.


>As for slow startup, a server reboot can be quite excruciating when all these processes are competing to grind & slog through their own copy of that 200+ MB and get situated.

You are writing microservices and then running them on the same server??


There are multiple hosts, but yeah, I doubt our admins would go for 1 service per host; plus they'd just be on VMs sharing the same hardware anyhow.


> they force alignment on one language or at least runtime

How is this possibly a downside from an org perspective? You don't want to fracture knowledge and make hiring/training more difficult, even if there are some technical optimizations possible otherwise.


The capabilities of the language and the libraries available for it can sometimes be a good reason for dealing with multiple languages.

E.g. if you end up having a requirement to add some machine learning to your application, you might be better off using Tensorflow/PyTorch via Python than trying to deal with it in whatever language the core of the app is written in.


Different domains will have different requirements. Doing a monolith doesn't preclude you from building domain-specific services external to the monolith (a model-serving service, for example).

Take an RDBMS as an example: that is a service external to the monolith, often written in a very different language than the monolith itself.
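
For what it's worth, a minimal sketch of that split from the monolith's side, assuming a hypothetical Python model server exposing a /predict endpoint on localhost:8501 (the URL and payload shape are made up):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ModelClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // The monolith stays in Java; the ML-heavy part lives behind HTTP.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8501/predict"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"features\": [1.0, 2.0, 3.0]}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // e.g. a JSON prediction
        }
    }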


>You don't want to fracture knowledge and make hiring/training more difficult

These are not maxims of development; there can be reasons that make these consequences worth it. Furthermore, you can still use just a single language with microservices*; nothing is stopping you from doing that if those consequences are far too steep to risk.

*: You can also use several languages within a monolith's modules by using FFI and ABIs, probably.
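
As a rough illustration of the FFI route, here is a sketch using the JDK 22+ Foreign Function & Memory API to call plain libc strlen in-process (the only assumption is that strlen can be resolved from the default lookup on your platform):

    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    public class FfiSketch {
        public static void main(String[] args) throws Throwable {
            Linker linker = Linker.nativeLinker();
            // Bind the C standard library's strlen(const char*) -> size_t.
            MethodHandle strlen = linker.downcallHandle(
                    linker.defaultLookup().find("strlen").orElseThrow(),
                    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment cString = arena.allocateFrom("hello from the JVM");
                long len = (long) strlen.invokeExact(cString);
                System.out.println(len); // 18
            }
        }
    }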


In an organization that will be bankrupt in three years it doesn’t matter. But if you can pour that much energy into a doomed project you’re a steely eyed missile man. Or dumb as a post. Or maybe both at once.

This is the Pendulum Swing all over again. If one language and runtime is limiting, forty is not liberating. If forty languages are anarchy, switching to one is not the answer. This is, in my opinion, a Rule of Three scenario. At any moment there should be one language or framework that is encouraged for all new work. Existing systems should be migrating onto it. And because someone will always drag their feet, and you can't limit progress to the slowest team, there is also a point in the migration where one or two teams are experimenting with ideas for the next migration. But once that starts to crystallize, any teams that are still on legacy are in mortal danger of losing their mandate to another team.


>they force alignment on one language or at least runtime

A sane thing to do.

>they force alignment of dependencies and their versions

A sane thing to do. Better yet, do it in a global fashion, along with integration tests.

>they can require lots of RAM if you have many modules with many classes

You can't make the same set of features, built in a distributed manner, consume _less_ RAM than the monolithic counterpart, given you're now running dozens of copies of the same Java VM plus common dependencies.

>they can be slow to start

Correct.

>they may be limiting in terms of technology choice

Correct.

>they don't provide resource isolation between components

Correct.

>they take long to rebuild an redeploy, unless you apply a large degree of discipline and engineering excellence to only rebuild changed modules while making sure no API contracts are broken

I think the keyword is the WebLogic Server mentioned before. People don't realise that monolith architecture doesn't mean legacy technology. Monolith web services can and should be built in Spring Boot, for example. Also, most of the time, comparisons are unfair. In all the projects I've worked on, I have yet to see a MS installation paired feature-wise with its old monolith cousin. Legacy projects tend to be massive, as they're made to solve real-world problems while evolving over time. MS projects are run for a year or two and people start comparing apples to oranges.

>they can be hard to test

If another team's component breaks integration, the whole build stops. I think Fail-Fast is a good thing. Any necessary setup must be documented in whatever architectural style. It can be worse in a MS scenario, where you are tasked with fixing a dusty, forgotten service with an empty README.

If anything, monolithic architecture brings lots of awareness. It's easier to see how things are wired and how they interact.


> a sane thing to do

Imagine your application consists of two pieces - a somewhat simple CRUD part that needs to respond _fast_, and a huge batch-processing infrastructure that needs to work as efficiently as possible but doesn't care about single-element processing time. Suddenly 'the sane thing to do' is not the best thing anymore. You need different technologies, different runtime settings and sometimes different runtimes. But most importantly, neither part needs the constraints imposed by the unrelated (other) part of the system.


You're absolutely right. My comment is aimed at the view that using a single language for a given project's backend is a bad thing per se. Online vs. batch processing is the golden example of domains that should be separated into different binaries - call them microservices or services or just Different Projects with Nothing in Common. Going further than that is where the problems arise.


>>they force alignment of dependencies and their versions

>A sane thing to do. Better yet to do it in a global fashion, along with integration tests.

But brutally difficult at scale. If you have hundreds of dependencies, which is the normal case, what do you do when one part of the monolith needs to update a dependency, but that requires you to update it for all consumers of the dependency's API, and another consumer is not compatible with the new version?

On a large project, dependency updates happen daily. Trying to do every dependency update is a non-starter. No one has that bandwidth. The larger your module is, the more dependencies you have to update, and the more different ways they are used, so you are more likely to get update conflicts.

This doesn't mean you need microservices, but the larger your module is, the further into dependency hell you will likely end up.


> A sane thing to do.

This is incredibly subjective, and contingent on the size and type of engineering org you work in. For a small or firmly mid-sized shop? Yeah, I can 100% see that being a sane thing to do. Honestly, a small shop probably shouldn't be doing microservices as a standard pattern outside of specific cases anyway, though.

As soon as you have highly specialized teams/orgs to solve specific problems, this is no longer sane.


> Honestly a small shop probably shouldn't be doing microservices as a standard pattern outside of specific cases anyway though

And yet, that is exactly what gets done


> unless you apply a large degree of discipline and engineering excellence to only rebuild changed modules while making sure no API contracts are broken

Isn't that exactly what's required when you're deploying microservices independently of each other? (With the difference that the interface is not an ABI but network calls/RPC/REST.)


I used to be monolith-curious, but what sold me on micro-services is the distribution of risk. When you work for a company where uptime matters, having a regression that takes down everything is not acceptable. Simply using separate services greatly reduces the chances of a full outage and justifies all other overhead.


Why would using microservices reduce the chance of outages? If you break a microservice that is vital to the system, you are as screwed as with a monolith.


Sure, but not all micro-services are vital. If your "email report" service has a memory leak (or any of many other noisy-neighbor issues) and is in a crash loop, that won't take down the "search service" or the "auth service", etc. Many other user paths will remain active and usable. It compartmentalizes risk.


Proper design in a monolith would also protect you from failures of non-vital services (e.g. through exception capture).

So it seems like we're trying to compensate for bad design with microservices. It's orthogonal IMO.
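
A minimal sketch of what I mean (the names are hypothetical): the non-vital call gets wrapped so its failure degrades only that feature, not the whole request:

    // Hypothetical non-vital feature inside a monolithic request handler.
    public class ReportSection {
        public String render(EmailReportModule emailReports, long userId) {
            try {
                return emailReports.latestSummary(userId);
            } catch (Exception e) {
                // Log and degrade gracefully: the rest of the page still renders.
                System.err.println("email report unavailable: " + e.getMessage());
                return "Email report temporarily unavailable.";
            }
        }
    }

    interface EmailReportModule {
        String latestSummary(long userId);
    }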


How does exception capture protect from all failures? The most obvious gap I don't see it covering is resource utilization: CPU, memory, thread pools, DB connection pools, etc.

> we’re trying to compensate bad design

No, I think we're trying to compensate for developer mistakes and naivety. When you have dozens to hundreds of devs working on an application, many of them are juniors, all of them are human, and impactful mistakes happen. Just catching the right exceptions and handling them the right way does not protect against devs not catching the right exceptions and not handling them the right way, but microservices do.

Maybe you call that compensating for bad design, which is fair and in that case yes it is! And that compensation helps a large team move faster without perfecting design on every change.
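
To make the resource-utilization point concrete: in-process, every non-vital call would have to go through something like a bulkhead (a small bounded pool plus a timeout), and that is exactly the kind of discipline that slips with many hands on the code. A hypothetical sketch:

    import java.util.concurrent.*;

    // Hypothetical in-process bulkhead: a bounded pool plus a timeout,
    // so a misbehaving non-vital task can't exhaust shared threads.
    public class Bulkhead {
        private final ExecutorService pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10),          // bounded queue
                new ThreadPoolExecutor.AbortPolicy()); // reject when full

        public String callNonVital(Callable<String> task, String fallback) {
            Future<String> future;
            try {
                future = pool.submit(task);
            } catch (RejectedExecutionException e) {
                return fallback;               // bulkhead full: shed the load
            }
            try {
                return future.get(200, TimeUnit.MILLISECONDS);
            } catch (Exception e) {
                future.cancel(true);           // don't let it linger
                return fallback;               // degrade, don't drag others down
            }
        }
    }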


With microservices you have to accept a tradeoff - a monolith is inherently more testable at the integration level than a microservice-based architecture.

There's a significant overhead to building and running tests at the API level, which includes API versioning... and there's less of a need to version APIs inside a monolith.


You have fifty (or 10,000) servers running your critical microservice in multiple AZs. You start a deployment to a single host. If the shit hits the fan, you roll back that one host. If it looks fine, you leave it running for a few hours while various canaries and integration tests all hit it. If no red flags occur, you deploy to another two, etc. You deploy to different AZs on different days. You can fail over to your critical service in different AZs because you previously ensured that the AZs are scaled so that they can handle that influx of traffic (didn't you?). You've tested that.

And that is only if it makes it to production. Before that, there's your fleet of test hosts using production data and being verified against the output of production servers.


If you have a truly modularized monolith, you can have a directed graph of dependent libraries, the leaves of which are different services that can start up. You can individually deploy leaf services and only their dependent code will go out. You can then reason about which services can go down based on their dependency tree. If email is close to a root library, then yes, a regression in it could bring everything down. If email is a leaf service, its code won't even be deployed to most of the parallel services.

You then have a pretty flexible trade-off between the convenience of having email be a root library and keeping it a leaf service (the implication being that leaf services can talk to one another over the network via service stubs, REST, what have you).

This is SOA (Service Oriented Architecture), which should be considered in the midst of the microservice / monolith conversation.
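To make the "directed graph of libraries" idea concrete, a hypothetical JPMS sketch (module names are made up, each declaration in its own module-info.java): each leaf service pulls in only the subgraph it requires, so the email module never even ships with the search service:

    // core/src/main/java/module-info.java -- shared root library
    module com.example.core {
        exports com.example.core.api;
    }

    // email/src/main/java/module-info.java -- leaf: only the email service loads this
    module com.example.email {
        requires com.example.core;
        exports com.example.email.api;
    }

    // search/src/main/java/module-info.java -- leaf: no dependency on email at all
    module com.example.search {
        requires com.example.core;
        exports com.example.search.api;
    }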


> Simply using separate services greatly reduces the chances of a full outage and justifies all other overhead.

Or maybe run redundant monolith failover servers. Should work the same as microservices.


Then when you need to hotfix, you need to rebuild a giant monolith that probably has thousands of tests and easily a 20-30 minute regression suite.


I have seen exactly this. Waiting for a 60+ min CI build during an outage is not a good look.


However, there's another bonus here: you get integration tests with better coverage.

Microservices don't make builds radically faster for the majority of teams. People still split systems into larger services.


How do microservices help here? You can deploy a monolith 10 times and have the same risk distribution.


It's not about replication, it's about compartmentalizing code changes / bugs / resource utilization. If you deploy a bug that causes a service crash loop or resource exhaustion, isolating that to a small service reduces the impact on other services. And if that service isn't core to the app, then the app can still function.


> they force alignment on one language or at least runtime

You can have modules implemented in different languages and runtimes. For example, you can have calls between Python, JVM, Rust, C/C++, CUDA, etc. It might not be a good idea in most cases, but you can do it.

Lots of desktop apps do this.


And in runtimes like BEAM, JVM and .NET, multiple languages are even supported out of the box, plus FFI.


Please check your assumptions. Why do you think 2 modules cannot be in different runtimes? How do you think JNI works?

You can absolutely call JS running in a V8 VM from Scala running on the JVM. No networking needed; hell, not even IPC is needed.

And when you deploy this, you don't have to deploy all modules' HTTP servers (for external requests into the system) and queue consumers in the same container, only a single module's. So no busy loops affect other modules, unless as a result of a direct API call from module to module. If anything, it encourages looser coupling, as you are incentivized to use indirect communication through the queue over direct API calls.


> hell, not even IPC is needed.

Uh... what's the trick? I don't see how you can have V8 and the JVM communicate without something that's inter-process.


Why? They're both just libraries. Load them both into the process and see what happens. At worst you'll have a bit of fighting over process-global state like signal handlers, but at least the JVM is designed to allow those to compose.
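
One concrete library for the V8-via-JNI route is J2V8. The same "it's just a library in your process" effect, with no sockets and no IPC, can also be had with GraalVM's polyglot API (its own JS engine on the JVM rather than V8 proper), assuming the GraalVM JavaScript engine is on the classpath; roughly:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class InProcessJs {
        public static void main(String[] args) {
            // JS runs inside the same JVM process: no sockets, no IPC.
            try (Context context = Context.create("js")) {
                Value fn = context.eval("js",
                        "(function (name) { return 'hello, ' + name; })");
                System.out.println(fn.execute("JVM").asString()); // hello, JVM
            }
        }
    }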



