Hacker News | switz's comments

It doesn’t really matter if they’re using commercial VPNs or the same upstream providers as commercial VPNs. Blocking an ASN is a million times more effective than blocking single IPs (at the risk of blocking genuine customers). I’ve had customers reach out to me asking to be unbanned after I blocked a few ASNs that had hostile scrapers coming out of them. It’s a tough balance.

VPNs often use providers with excellent peering and networking - the same providers that scrapers would want to use.


Hilariously, react server components largely solve all three of these problems, but developers don't seem to want to understand how or why, or they suggest that RSCs don't solve any real problems.

It's no secret that RSC was at least partially an attempt to get close to what Relay offers, but without requiring you to adopt GraphQL.

There's an informed critique of RSC, but no one is making it.

I agree, though it's worth noting that data loader patterns in most pre-RSC react meta-frameworks (and other frameworks) also solve most of these problems without the complexity of RSC. But RSC has many benefits beyond simplifying and optimizing data fetching, so it's too bad HN commenters hate it (and anything frontend related whatsoever) so much.

I have tailscale running on my robot vacuum. It's my own little autonomous mesh vpn node that lets me connect back to my home network when I'm on the go.

Please share more details! This sounds so cool!

You can root certain models of robot vacuums and then ssh into them. Most run some variant of linux. Then just install tailscale. There are a few blogs out there of people who have done it[0][1].
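
A rough sketch of what that looks like once you have root; this assumes the vacuum runs a Linux userland with a shell, and the hostname, version, and architecture below are only illustrative (check the blogs linked below for model-specific steps):

  ssh root@vacuum.local   # get a shell on the rooted vacuum
  # on the vacuum: fetch Tailscale's static build and bring the node up
  wget https://pkgs.tailscale.com/stable/tailscale_1.82.0_arm.tgz
  tar xzf tailscale_1.82.0_arm.tgz && cd tailscale_1.82.0_arm
  ./tailscaled --state=/data/tailscaled.state --socket=/tmp/tailscaled.sock &
  ./tailscale --socket=/tmp/tailscaled.sock up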

It's taking a cloud-based product, de-clouding it, and then connecting it to your own private 'cloud'. Pretty cool all things told.

[0] https://kazlauskas.me/entries/tailscale-doesnt-quite-suck

[1] https://tailscale.com/blog/tailscale-sucks


What value do you get from installing tailscale on your robot vacuum?

They give you optionality over when and where your code runs. Plus, they let you define the server/client network boundary where you see fit and cross that boundary seamlessly.

It's totally fine to say you don't understand why they have benefits, but it really irks me when people exclaim they have no value or exist just for complexity's sake. There's no system for web development that provides the developer with more grounded flexibility than RSCs. I wrote a blog post about this[0].

To answer your question, htmx solves this by leaning on the server immensely. It doesn't provide a complete client-side framework when you need it. RSCs allow both the server and the client to co-exist, simply composing between the two while maintaining the full power of each.

[0] https://saewitz.com/server-components-give-you-optionality


But is it a good idea to make it seamless when every crossing of the boundary has significant implications for security and performance? Maybe the seam should be made as simple and clear as possible instead.

Yep! It’s really hard to reason in Next about when things happen on the server vs client. This makes it harder to make things secure.

You can create clean separation in your code to make this easier to understand but it’s not well enforced by default.


Just because something is made possible and you can do it doesn't mean you should!

The criticism is that a system which lets you do something you shouldn't isn't providing a benefit by doing so, even if it lets you do something you couldn't do before.


It also gives you a 10.0 CVE.

No, he clearly points out that anyone else would have to be taken off their existing work and context-switch into the context he already has. That's not trashing his engineering team.


I recently bought a robot vacuum, installed valetudo, installed tailscale onto the robot itself and now I can control it from anywhere through my personal mesh vpn.

It's pretty amazing. Valetudo is perhaps the most opinionated software I've ever used, which comes with the good and the bad. But overall, it works and it does what it says it will do.

I don't really need to access it remotely, though it has come in handy: when heading out of town I can turn off the scheduled cleans and just run it once on the day I'm returning home. Which is really the only functionality you would need the manufacturer-provided cloud connectivity for.

It's been fun explaining to people that it's "declouded", but I can access it from anywhere. Melts non-tech peoples' brains a little bit.


Frontend web development is effectively distributed systems built on top of markup languages and backwards compatible scripting languages.

We are running code on servers and clients, communicating between the two (crossing the network boundary), while our code often runs on millions of distributed hostile clients that we don't control.

It's inherently complex, and inherently hostile.

From my view, RSCs are the first solution to acknowledge these complexities and redesign the paradigms closer to first principles. That comes with a tougher mental model, because the problem-space is inherently complex. Every prior or parallel solution attempts to paper over that complexity with an over-simplified abstraction.

HTMX (and rails, php, etc.) leans too heavily on the server, client-only-libraries give you no accessibility to the server, and traditional JS SSR frameworks attempt to treat the server as just another client. Astro works because it drives you towards building largely static sites (leaning on build-time and server-side routing aggressively).

RSCs balance most of these incentives and give you the power to access each environment at build-time and at run-time (at the page level or even the component level!). They make each environment fully powerful (server, client, and both). And they manage to solve streaming (suspense and complex serialization) and diffing (navigating client-side while maintaining state or UI).

But people would rather lean on lazy tropes, as if RSCs only exist to sell server-cycles or to further frontend complexity. No! They're just the first solution to accept that complexity and give developers the power to wield it. Long-term, I think people will come to learn their mental model and understand why they exist. As some react core team members have said, this is kind of the way we should have always built websites: once you return to first principles, you end up with something that looks similar to RSCs[0]. I think others will solve these problems with simpler mental models in the future, but it's a damn good start and doesn't deserve the vitriol it gets.

[0] https://www.youtube.com/watch?v=ozI4V_29fj4


Except RSC doesn't solve for apps, it solves for websites, which means its server-first model leads you to slow-feeling websites or lots of glue code to compensate. That, alongside the immensely complex restrictions, leaves me wondering why it exists or has any traction, other than as a sort of technical exercise and a new thing for people to play with.

Meanwhile, sync engines seem to actually solve these problems - the distributed data syncing and the client-side needs like optimistic updates, while also letting you avoid the complexity. And, you can keep your server-first rendering.

To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX) and the only reason I think anyone really likes RSC is because there is so much money behind it, and so little relatively in sync engines. That said, I don't blame people for not even mentioning them as they are newer. I've been working with one for the last year and it's an absolute delight, and probably the first genuine leap forward in frontend dev in the last decade, since React.


> Except RSC doesn't solve for apps, it solves for websites

This isn't true, because RSCs let you slide back into classic react with a simple 'use client' (or lazy for pure client). So anywhere in the tree, you have that choice. If you want to do so at the root of a page (or component) you can, without necessarily forcing all pages to do so.

> which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate

Again, I don't think this is true: what makes you say it's slow-feeling? Personally, I feel it's the opposite. My websites (and apps) are faster than before, with less code, because server component data fetching solves the waterfall problem and co-locating data retrieval closer to your APIs or data stores means faster round-trips. And for slower fetches, you can use suspense and serialize promises over the wire to prefetch, then unwrap those promises on the client, showing loading states while the jsx and data stream in from the server.

When you do want to do client-side data fetching, you still can. RSCs are also compatible with "no server", i.e. running your "server" code at build time.

> To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX)

You say it's worse UX but that does not ring true to my experience, nor does it really make sense as RSCs are additive, not prescriptive. The DX has some downsides because it requires a more complex model to understand and adds overhead to bundling and development, but it gives you back DX gains as well. It does not lead to worse UX unless you explicitly hold it wrong (true of any web technology).

I like RSCs because they unlock UX and DX (genuinely) not possible before. I have nothing to gain from holding this opinion, I'm busy building my business and various webapps.

It's worth noting that RSCs are an entire architecture, not just server components. They are server components, client components, boundary serialization and typing, server actions, suspense, and more. And these play very nicely with the newer async client features like transitions, useOptimistic, activity, and so on.

> Meanwhile, sync engines seem to actually solve these problems

Sync engines solve a different set of problems and come with their own nits and complexities. To say they avoid complexity is shallow because syncing is inherently complex and anyone who's worked with them has experienced those pains, modern engines or not. The newer react features for async client work help to solve many of the UX problems relating to scheduling rendering and coordinating transitions.

I'm familiar with your work and I really respect what you've built. I notice you use zero (sync engine), but I could go ahead and point to this zero example as something that has some poor UX that could be solved with the new client features like transitions: https://ztunes.rocicorp.dev

These are not RSC-exclusive features, but they show how sync engines don't solve all the UX problems you claim they do without coordinating work at the framework level. Happy to connect and walk you through what a better UX for this functionality would look like.


Definitely disagree on most of your points here. I think you don't touch at all on optimistic mutations, and you don't put enough weight on the extreme downsides RSC forces on your code organization, the limits and downsides of forcing server trips, or the huge downsides of opting out (yes, you can, but now you have two ways of writing everything and two ways of dealing with data, or you can't share data/code at all). It is in effect all or nothing; otherwise you really are duplicating a ton, and then the DX is even worse.

Many of the features like transitions and all the new concepts are workarounds you just don't really need when your data is mostly local and optimistically mutated. And the ztunes app is a tiny demo, but of course you could easily server-render it, add transitions, and do all sorts of things to make it a more comparable demo against what I assume you think are its downsides vs RSC.

I think time will show that RSC was a bad idea, like Redux, which I also predicted would not stand the test of time: interesting in theory but too verbose and cumbersome in practice, while other ways of doing things have too many advantages.

The problems they solve overlap more than enough, and once you have a sync engine giving you optimistic mutations for free, local caching for free, and realtime sync for free, you look at what RSC gives you above SSR and there's really no way to justify the immense conceptual burden and the concrete downsides (now having two worlds / essentially function coloring, forced server trips, lack of routing control). I just bet it won't win. Though given the immense investment by two huge companies, it may take a while for that to become clear.


There's a long history of subtick bugs that have been identified and patched over the years. CS2 still isn't quite as stable as 128-tick CS:GO was (which perhaps benefited from a decade of patches and a simpler architecture).


I run my entire business on a single OVH box that costs roughly $45/month. It has plenty of headroom for growth. The hardest part is getting comfortable with k8s (still worth it for a single node!) but I’ve never had more uptime and resiliency than I do now. I was spending upwards of $800/mo on AWS a few years ago with way less stability and speed. I could set up two nodes for availability, but it wouldn’t really gain me much. Downtime in my industry is expected, and my downtime is rarely related to my web services (externalities). In a worst case scenario, I could have the whole platform back up in under 6 hours on a new box. Maybe even faster.


What's the benefit of using k3s on a single node?


I'd list these as the real-world advantages:

  * Very flexible, but rigid deployments (can build anywhere, deploy from anywhere, and roll out deployments safely with zero downtime)
  * Images don't randomly disappear (ran into this all the time with dokku and caprover)
  * If something goes wrong, it heals itself as best it can
  * Structured observability (i.e. logs, metrics, etc. are easy to capture, unify, and ship to places)
  * Very easy to setup replicas to reduce load on services or have safe failovers 
  * Custom resource usage (I can give some pods higher or lower CPU/memory limits depending on scale and priority)
  * Easy to self-host FOSS services (queues, dbs, observability, apps, etc.)
  * Total flexibility when customizing ingress/routing. I can keep private services private and only expose public services
  * Certbot can issue ssl certs instantly (always ran into issues with other self-hosting platforms)
  * Tailscale Operator makes accessing services a breeze (can opt-in services one by one)
  * Everything is yaml, so easy to manipulate
  * Adding new services is a cake-walk - as easy as creating a new yaml file, building an image and pushing it. I'm no longer disincentivized to spin up a new codebase for something small but worthwhile, because it's easy to ship it.

All in all, I spent many years trying "lightweight" deployment solutions (dokku, elastic beanstalk, caprover, coolify, etc.) that all came with the promise of "simple" but ended up being infinitely more of a headache to manage when things went wrong. Even something like heroku falls short because it's harder to just spin up "anything" like a stateful service or random FOSS application. Dokku was probably the best, but it always felt somewhat brittle. Caprover was okay. And coolify never got off the ground for me. Don't even get me started on elastic beanstalk.

I would say the biggest downside is that managing databases is less rigid than using something like RDS, but the flip side is that my DB is far more performant and far cheaper (I own the CPU cycles! no noisy neighbors.), and I still run daily backups to external object storage.
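
As a sketch of how small the backup side can be (the deployment name, database, and bucket here are hypothetical, assuming Postgres runs in the cluster and you have any S3-compatible bucket), it's roughly a one-line nightly cron:

  # dump the DB from the postgres deployment and stream it straight to object storage
  kubectl exec deploy/postgres -- pg_dump -U app appdb \
    | gzip \
    | aws s3 cp - s3://my-backups/appdb-$(date +%F).sql.gz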

Once you get k8s running, it kind of just works. And when I want to do something funky or experimental (like splitting AI bots to separate pods), I can go ahead and do that with ease.

I run two separate k8s "clusters" (both single node) and I kind of love it. k9s (obs. tool) is amazing. I built my own logging platform because I hated all the other ones, might release that into its own product one day (email in my profile if you're interested).


Also running a few single node clusters - perfect balance for small orgs that don't need HA. Been running small clusters since ~2016 and loving it.


Deployments are easy. You define a bunch of yamls for what things are running, who mounts what, and what secrets they have access to etc.

If you need to deploy it elsewhere, you just install k3s/k8s or whatever and apply the yamls (except for stateful things like db).
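
As a sketch of how little that is in practice (the manifest directory is hypothetical; the installer one-liner is k3s's documented default):

  # stand up a single-node k3s server (ships with kubectl built in)
  curl -sfL https://get.k3s.io | sh -
  # then re-apply the whole app from its manifest directory
  kubectl apply -f ./manifests/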

It also handles name resolution via service names, restarts, etc.

It's amazing.


Any notes or pointers on how to get comfortable with k8s? For a simple nodejs app I was looking down the pm2 route, but I wonder if learning k8s is just more future-proof.


Use k3s in cluster mode and start doing. Cluster mode uses etcd instead of kine; kine is not good.

Configure the init flags to disable all the bundled controllers and other doodads, then deploy them yourself with Helm. Helm sucks to work with, but someone has already gone through the pain for you.
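
For example, something like this (a sketch, not gospel: which components you disable is up to you, but the flags and the Traefik chart repo are real):

  # single server with embedded etcd ("cluster mode"), minus the bundled extras
  curl -sfL https://get.k3s.io | sh -s - server \
    --cluster-init \
    --disable traefik \
    --disable servicelb \
    --disable metrics-server
  # then bring your own versions in via Helm, e.g. Traefik as the ingress
  helm repo add traefik https://traefik.github.io/charts
  helm install traefik traefik/traefik --namespace kube-system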

AI is GREAT at K8s since K8s has GREAT docs, which it has been trained on.

A good mental model helps: it's an API with a bunch of control loops.


I'd say rent a hetzner vps and use hetzner-k3s https://github.com/vitobotta/hetzner-k3s

Then you're off to the races. You can add more nodes, etc., later to give it a try.


Definitely a big barrier to entry, my way was watching a friend spin up a cluster from scratch using yaml files and then copying his work. Nowadays you have claude next to you to guide you along, and you can even manage the entire cluster via claude code (risky, but not _that_ risky if you're careful). Get a VPS or dedicated box and spin up microk8s and give it a whirl! The effort you put in will pay off in the long run, in my humble opinion.
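
A sketch of the microk8s route, assuming an Ubuntu box with snap (the addons listed are just the usual first picks):

  sudo snap install microk8s --classic
  microk8s status --wait-ready
  # enable the common building blocks: cluster DNS, a local image registry, ingress
  microk8s enable dns registry ingress
  microk8s kubectl get nodes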

Use k9s (not a misspelling) and headlamp to observe your cluster if you need a gui.


Is this vanilla k8s or a particular flavor?


I use microk8s


I didn't even really realize it was a SPOF in my deploy chain. I figured at least most of it would be cached locally. Nope, can't deploy.

I don't work on mission-critical software (nor do I have anyone to answer to) so it's not the end of the world, but has me wondering what my alternate deployment routes are. Is there a mirror registry with all the same basic images? (node/alpine)

I suppose the fact that I didn't notice before says wonderful things about its reliability.


I guess the best way would be to have a self-hosted pull-through registry with a cache. This way you'd have all required images ready even when dockerhub is offline.

Unfortunately that does not help in an outage because you cannot fill the cache now.
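
A minimal sketch of that setup using the stock registry image's proxy mode (port and cache path are illustrative):

  # run a local registry that proxies and caches Docker Hub
  docker run -d --name hub-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    -v /srv/registry-cache:/var/lib/registry \
    registry:2
  # point the Docker daemon at it as a mirror (merge into any existing daemon.json), then restart
  echo '{ "registry-mirrors": ["http://localhost:5000"] }' | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker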


In the case where you still have an image locally, trying to build will fail with an error complaining about not being able to load metadata for the image because a HEAD request failed. So, the real question is, why isn't there a way to disable the HEAD request for loading metadata for images? Perhaps there's a way and I don't know it.


Yeah, this is the actual error that I'm running into. Metadata pages are returning 401 and bailing out of the build.


Sure? --pull=missing should be the default.


While I haven’t tried --pull=missing, I have tried --pull=never, which I assume is a stricter version and it was still attempting the HEAD request.


You might still have it on your dev box or build box

  docker image ls                                                      # confirm the image is still cached locally
  docker tag name/name:version your.registry/here/name/name:version   # retag it for a registry you control
  docker push your.registry/here/name/name:version                    # push it there so your deploys can pull it


Per sibling comment, public.ecr.aws/docker/library/.... works even better


This saved me. I was able to push image from one of my nodes. Thank you.


This is the way, though it can lead to fun moments: I was just setting up a new cluster and couldn't figure out why I was having problems pulling images when the other clusters were pulling just fine.

Took me a while to think of checking the docker hub status page.


> Is there a mirror registry with all the same basic images?

https://gallery.ecr.aws/
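
The Docker Official Images are mirrored there under the docker/library namespace, so (as far as I know) it's usually just a prefix swap:

  # instead of: docker pull node:22-alpine
  docker pull public.ecr.aws/docker/library/node:22-alpine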


> I don't work on mission-critical software

> wondering what my alternate deployment routes are

If the stakes are low and you don't have any specific need for a persistent registry then you could skip it entirely and push images to production from wherever they are built.

This could be as simple as `docker save`/`scp`/`docker load`, or as fancy as running an ephemeral registry to get layer caching like you have with `docker push`/`docker pull`[1].

[1]: https://stackoverflow.com/a/79758446/3625
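
A hedged sketch of the registry-less path (the image name and host are made up):

  # on the build box: export the image and copy it over
  docker save myapp:latest | gzip > myapp.tar.gz
  scp myapp.tar.gz deploy@prod.example.com:/tmp/
  # on the prod box: load it into the local daemon, then restart the service
  ssh deploy@prod.example.com 'gunzip -c /tmp/myapp.tar.gz | docker load'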


It's a bit stupid that I can't restart my container (on Coolify) because pulling the image fails, even though I'm already running it. So I do have the image; I just need to restart the Node.js process...


Never mind: I used the terminal, docker ps to find the container, and docker restart <container_id>, without going through Coolify.

