I think the big problem no one talks about is that "microservices" was an incredibly poor name.
Your design goal should not be "create the smallest service you can to satisfy the 'micro' label". Your design goal should be to create right-sized services aligned to your domain and organization.
The deployment side is of course a red herring. People can and do deploy monoliths with multiple deployments and different endpoints. And I've seen numerous places do "microservices" which have extensive shared libraries where the bulk of the code actually lives. Technically not a monolith - except it really is, just packaged differently.
Another key is that you should always be able to reasonably hack on just one of the "services" at a time: everything else should be excludable completely or replaceable by a minimal mock, for example an auth mock that just returns a dummy token.
If you've got "microservices" but every dev still has to run a dozen kubernetes pods to be able to develop on any part of it, then I'm pretty sure you ended up with the worst of both worlds.
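As a sketch of what that minimal auth mock might look like (the `AuthMock` name and `/login` endpoint here are hypothetical, not from any particular codebase), a few dozen lines of stdlib Python is enough:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class AuthMock(BaseHTTPRequestHandler):
    """Stand-in for the real auth service during local development:
    accepts any credentials and always returns the same dummy token."""

    def do_POST(self):
        # Drain the request body without validating anything.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps({"token": "dummy-token", "expires_in": 3600}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the dev console quiet


def start_auth_mock(port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port; server.server_port holds it.
    server = HTTPServer(("127.0.0.1", port), AuthMock)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Point the service under development at the mock's port and the real auth service never has to run locally at all.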
> Your design goal should not be "create the smallest service you can to satisfy the 'micro' label".
A place I worked at years ago did what I effectively called "nano-services".
It was as if each API endpoint needed its own service. User registration, logging in, password reset, and user preference management were each their own microservice.
When I first saw the repo layout, I thought maybe they were just using a bunch of Lambdas that would sit behind an AWS API Gateway, but I quickly learned the horror as I investigated. To make it worse, they weren't using Kubernetes or any sort of containers for that matter. Each nanoservice was running on its own EC2 instance.
I swear the entire thing was designed by someone with AWS stock or something.
I know one place that did all of their transactional payment flow through lambdas. There were about 20 lambdas in the critical auth path, and they regularly hit the AWS account-wide concurrency limits.
Another place did all their image processing via lambdas, about fifty of them. They literally used lambdas and REST calls where anyone sane would have done it in one process with library calls. It cost them tens of thousands of dollars a month to do basic image processing that should have cost about $100.
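To illustrate the contrast: in one process, that same pipeline is just function composition. This is only a shape sketch (the transforms below are placeholders, not real image code), but it shows how each "service" collapses into a plain library call with no network hop, serialization, or per-invocation billing in between:

```python
from functools import reduce
from typing import Callable, List

# Placeholder transforms standing in for real image operations
# (e.g. decode/resize/watermark via an imaging library).

def decode(data: bytes) -> bytes:
    return data  # would parse the image format here

def resize(data: bytes) -> bytes:
    return data[:1024]  # stand-in for scaling down

def watermark(data: bytes) -> bytes:
    return data + b"WM"  # stand-in for compositing a watermark

def encode(data: bytes) -> bytes:
    return data  # would re-encode to the output format here

# Fifty lambdas chained over REST become one list of functions.
PIPELINE: List[Callable[[bytes], bytes]] = [decode, resize, watermark, encode]

def process(image: bytes) -> bytes:
    # One process, one address space: each "step" is a function call.
    return reduce(lambda data, step: step(data), PIPELINE, image)
```

Swapping a step is editing a list; adding one is appending a function, with no deployment, IAM role, or API Gateway route per stage.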
I agree with this. Personally I think the two-pizza-team, single-responsibility model is not a great idea. The most successful "microservices" setup I've worked on actually had 100-ish devs on the service. Enough to really spread out on-call, upgrades, maintenance, etc.
Agreed. This is why I prefer the term “service oriented architecture” instead. A service should be whatever size its domain requires - but the purpose is to encapsulate a domain. A personal litmus test I have for “is the service improperly encapsulating the domain” is if you need to handle distributed transactions. Sometimes they are necessary - but usually it’s an architectural smell.