> AWS takes that away and makes you focus on the product. Issues arising from AWS only require talking to support.
Not my experience at all. NLBs, for example, don't support ICMP, which has broken some clients of the application I work on. When we tried to turn on preserve-client-ip so we could get past the ephemeral port limit, it started causing issues with MSS negotiation, breaking a small fraction of clients. This stuff is insanely hard to debug because you can't get onto the load balancer to do packet captures (nor can AWS support). Load balancing for long-lived connections works poorly.
Lambda runs into performance issues immediately for a web application server because it's just an entirely broken architecture for that use case (it's basically the exact opposite of using user-mode threads to scale: let's use an entire VM per request!). For some reason they encourage people to do it anyway. Lord help you if you have someone with some political capital in your org who wants to push for that.
RDS also runs into performance issues the moment you actually have some traffic. A baremetal server is orders of magnitude more capable.
IPv6 support is still randomly full of gaps (or has only very recently been fixed, except you might have to do things like recreate your production EKS cluster, oops), which leads to random problems that you have to architect around. Taken together with NAT Gateway being absurdly expensive, you end up having to invert sensible architectures or go through extra proxy layers that just complicate things.
AWS takes basic skills around how to build/maintain backend systems and makes half of your knowledge useless/impossible to apply, instead upgrading all of your simple tuning tasks into architectural design problems. The summary of my last few years has basically been working around problems that almost entirely originate from trying to move software into EKS, and dealing with random constraints that would take minutes to fix on bare metal.
I agree that building your backend on Lambda is terrible for many reasons: slow starts, request / response size restrictions, limitations in "layer" sizes, etc.
RDS, however, I have found to be rock solid. What have you run into?
The parent compares RDS to baremetal, which I think isn't a fair comparison at all. Especially since we don't know the specs of either of these.
I found RDS to be rock solid too, although performance issues are often "resolved" by developers submitting a PR that bumps the instance size 2x, because "why not". On bare metal it's often impossible to upgrade the CPU just like that, so people have to fix performance issues elsewhere, which leads to better outcomes in the end.
RDS works great, but it's far easier to scale a bare metal setup to an extent that makes RDS look like an expensive toy, because you have far more hardware options.
RDS is a good option if you want convenience and simplicity, though.
Managing database backups myself is something that gives me nightmares. I would refuse to use bare-metal DBs unless I had a dedicated team just to manage the database (or data that is okay to lose, like caching layers).
Managing database backups is fairly straightforward. Postgres + a base backup + long-term WAL archiving in a blob store is very easy to set up and monitor. It could be easier, and if you don't want to manage that, using RDS is certainly a valid choice, but it's a tradeoff. I often have customers that need help addressing performance issues with RDS that they simply wouldn't have if they'd sized a bare metal setup with enough RAM and NVMe and configured it even halfway decently instead, and the end result is often that they end up paying more for devops help to figure out performance bottlenecks than they'd spend putting the same devops consultant on retainer to ensure they have a solid backup setup.
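To make that concrete, the WAL-archiving half is roughly a one-screen script. A minimal Python sketch (script path, bucket name, and error handling are made up for illustration; Postgres only recycles a WAL segment once archive_command exits 0):

    # Hypothetical /usr/local/bin/archive_wal.py, wired in via postgresql.conf:
    #   archive_command = '/usr/local/bin/archive_wal.py %p %f'
    # Postgres passes %p (path to the completed WAL segment) and %f (its file name).
    import sys
    import boto3

    BUCKET = "my-wal-archive"  # placeholder bucket name

    def main() -> int:
        wal_path, wal_name = sys.argv[1], sys.argv[2]
        # Any exception here exits non-zero, so Postgres retries the segment.
        boto3.client("s3").upload_file(wal_path, BUCKET, "wal/" + wal_name)
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The base backup side is pg_basebackup on a schedule plus periodic restore tests, and monitoring is mostly "alert if the newest object in the bucket is older than expected".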
I dunno, it does sound like significant work and way outside my (and most devs') area of expertise. I can definitely supervise a managed RDBMS (like RDS) by myself without help on the side, even though I am no DBA.
A mismanaged VPS is downtime and churn; a mismanaged DB will insta-kill your business if you have unrecoverable data loss. I would definitely use a managed solution until I can get a dedicated person to babysit the DB, but I would consider managing a VPS myself.
There's no need for a dedicated person. A single operator can easily manage dozens of DB instances unless your needs are extremely complex. Managing these kinds of things is a service trivially available on retainer.
I don't know too much about the performance side of RDS, but the backup model is absolutely a headache. It's at the point where I'd rather pg_dump into gz and upload to S3.
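For what it's worth, that approach really is just a few lines of glue. A rough Python sketch (database and bucket names are placeholders; it assumes pg_dump and boto3 are installed and credentials come from the environment):

    # Rough sketch: pg_dump -> gzip -> S3. DB name, bucket, and paths are
    # placeholders; pg_dump auth is assumed via PGHOST/PGUSER/.pgpass.
    import gzip
    import shutil
    import subprocess
    from datetime import datetime, timezone

    import boto3

    DB = "appdb"            # placeholder database name
    BUCKET = "my-db-dumps"  # placeholder bucket name

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = "/tmp/%s-%s.sql" % (DB, stamp)
    gz_path = dump_path + ".gz"

    # Plain-format dump, then gzip it (pg_dump -Fc would compress for you,
    # but plain SQL + gzip matches the comment above).
    subprocess.run(["pg_dump", "--no-password", "-f", dump_path, DB], check=True)
    with open(dump_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

    boto3.client("s3").upload_file(gz_path, BUCKET, "dumps/%s/%s.sql.gz" % (DB, stamp))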