Your code is returning TOO_SLOW_OR_TOO_LOW for the case when the heading is too far right. The disassembly in the OP looks like it correctly jumps to too_far_right.
Even if you actually use the network module in Go, just so the compiler doesn't strip it away, you would still see startup latency way below 25 ms, in my experience writing CLI tools.
Whereas with Python, even in the latest version, you're already looking at at least 10x that startup latency in practice.
Note: this excludes the time taken by the actual network call, which can of course also add quite a few milliseconds, depending on how far away on planet Earth your destination is.
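For anyone who wants to check on their own machine, a rough sketch (numbers are machine-dependent; `/bin/true` stands in for a trivial compiled binary, which a Go hello-world resembles):

```shell
# Crude startup-latency comparison. /bin/true is a stand-in for a trivial
# compiled binary (a Go hello-world is in the same ballpark);
# `python3 -c ''` starts the interpreter and does nothing else.
time /bin/true      # typically a few milliseconds or less
time python3 -c ''  # typically tens of milliseconds
```

For less noisy numbers, a tool like hyperfine averages over many runs, but the gap is usually visible even with plain `time`.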
The answer is: likely yes, but the solution is to become the market leader now and pay the fines later. This business strategy has worked out very well for the Magnificent 7.
Yes, unfortunately those who jumped on the microservices hype train over the past 15 years or so are now getting the benefits of Claude Code, since their entire codebase fits into the context window of Sonnet/Opus and can be "understood" by the LLM to generate useful code.
This is not the case for most monoliths, unless they are structured into LLM-friendly components that resemble patterns the models have seen millions of times in their training data, such as React components.
I guess the benefit of monoliths in this context is that they (often) live in distinct repositories, which makes it easier for Claude to ingest them entirely, or at least not get lost looking in the wrong directory.
One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years in many companies. As a result, many companies now operate highly convoluted monolithic systems that are extremely difficult to replace.
In contrast, a poorly designed microservice can be replaced much more easily. You can identify the worst-performing and most problematic microservices and replace them selectively.
> One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years
That's exactly my experience. While a well-structured monolith is a good idea in theory, and I'm sure such examples exist in practice, that has never been the case in any of my jobs. Friends working at other companies report similar experiences.
Both of these are effectively the same damn thing but everyone loses their minds over the first one.
Also, a lot of those install scripts do check signatures of the binaries they host. And if you’re concerned that someone could have owned the webserver it’s hosted on, then they can just as easily replace the public key used for verification in the written instructions on the website.
If it points to mirror.ubuntu.com, the mirror selection happens at the host end instead of inside apt. But since apt can resolve a mirror list itself, it can fetch from multiple places at once.
> I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.
The issue is provenance. Where is the script getting the binary from? Who built that binary? How do we know that binary wasn't tampered with? I'll lay odds the install script isn't doing any kind of GPG/PGP signature check. It's probably not even doing a checksum check.
I'm prepared to trust an executable built by certain organisations and persons, provided I can trace a chain of trust from what I get back to them.
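As a minimal sketch of what the integrity half of that check looks like (filenames are made up; real provenance additionally needs a signature check, e.g. `gpg --verify`, against a key obtained somewhere other than the webserver hosting the binary):

```shell
# Publisher side: publish a digest alongside the artifact.
printf 'pretend this is a release binary' > tool.bin
sha256sum tool.bin > tool.bin.sha256

# Consumer side: verify before executing anything; a tampered file makes
# `sha256sum --check` print FAILED and exit non-zero.
sha256sum --check tool.bin.sha256
```

This only proves the download matches the published digest; if digest and binary live on the same compromised webserver, you still need the signature check against an out-of-band key to establish provenance.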
The thing that gets installed, if it is an executable, usually also has permissions to do scary things. Why is the installation process so scrutinized?
I think there's a fundamental psychological reason for this - people want to feel like some ritual has been performed that makes at least some level of superficial sense, after which they don't have to worry.
You see this in all the obvious examples of physical security.
In the case of software it's the installation that's the ritual I guess. Complete trust must be conferred in the software itself by definition, so people just feel better knowing for near certain that the software installed is indeed 'the software itself'.
It would raise the same kind of alert for me if someone used wget to download a binary executable instead of a shell script.
The issue is not the specific form in which code is executed on your machine, but rather who is allowed by you to run code on your computer.
I don't trust arbitrary websites from the Internet, especially when they are not cryptographically protected against malicious tampering.
However, I do trust, for instance, the Debian maintainers, as I believe they have thoroughly vetted and tested the executables they distribute, with a cryptographic signature, to millions of users worldwide.
I'm so thankful for nixos for making it hard for me to give in to that temptation. you always think "oh just this once". but with nixos I either have to do it right or not bother.
$ ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
Could not start dynamically linked executable: ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
NixOS cannot run dynamically linked executables intended for generic
linux environments out of the box. For more information, see:
https://nix.dev/permalink/stub-ld
You have to go out of your way to make something like that run in an fhs env. By that point, you've had enough time to think, even with ADHD.
It sort of does, actually, at least if you don't have nix-ld enabled. A lot of programs simply won't start if they're not statically linked, so a lot of the time, if you download a third-party script or try to install something via `curl somesite.blah | sh`, it actually will not work. Moreover, it likely won't be properly linked in your PATH unless you do it the right way.
Maybe they can with postinstall scripts, but they usually don't.
For the most part, installing packaged software simply extracts an archive to the filesystem, and you can uninstall using the standard method (apt remove, uv tool remove, ...).
Scripts are way less standardized. In this case it's not an argument about security, but about convenience and not messing up your system.
Equally I don't like how many instructions and scripts everywhere use shorthands.
Sometimes you see curl -sSLfO. Please, use the long form. It makes life easier for everybody. It makes it easier to verify, and to look up. Finding --silent in curl's docs is easier than reading through every occurrence of -s.
curl --silent --show-error --location --fail --remote-name https://example.com/script.sh
For a small flight of fancy, imagine if each program had a --for-docs argument, which causes it to simply spit out the canonical long-form version equivalent to whatever else it has been called with.
While I'd appreciate that facility too, it seems... even-more-fanciful, as one tool would need to somehow incorporate all the logic and quirks of all supported commands, including ones which could be very destructive if anything went wrong.
Kind of like positing a master `dry-run` command as opposed to different commands implementing `--dry-run` arguments.
I did muck around with using "sed" to process the "man" output to find a relevant long option in a one-liner, so it wouldn't be too difficult to implement.
I did something like this:
_command="sed" _option="n"
man -- "${_command}" | sed --quiet --expression "s/^ -${_option}.*, //p"
Then I realised that a bit of logic is needed (or more complicated regexp) to deal with some exceptions and moved onto something else.
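For what it's worth, here is the same idea as a reusable sketch — `longopt` is a hypothetical helper name, and the regexp still only handles the common `-s, --silent` man-page layout, so treat the output as a hint:

```shell
# Print the first long option listed next to a short one in a command's man
# page. Assumes the GNU-style "   -s, --silent" layout; many pages deviate,
# which is exactly where the extra logic would be needed.
longopt() {
    _command="$1" _option="$2"
    man -- "${_command}" 2>/dev/null \
        | col -b \
        | sed --quiet --expression "s/^ *-${_option}[, ]*\(--[[:alnum:]-]*\).*/\1/p" \
        | head --lines 1
}

longopt curl s    # on a typical curl man page this prints --silent
```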
agreed. i get it if you're great at cli usage or have your own scripts, but if you're publishing for general use, it should be long form. that includes even utility scripts for a small team.
also, by writing it out long-form you might catch some things you do out of habit, rather than because they're necessary for the job.
Another possible advantage is that I invariably have to check the man page to find the appropriate long-form option and sometimes spot an option that I didn't know about.
Trusting software would be foolish. Most software has access to the file system and the net. For practical reasons, I have no energy or time to verify whether the next update of libsecure came with a trojan or stole my env, and neither do you. I just acknowledge this fact, take the risk, and install it.
But on the other hand, at the current speed of LLM progression, a game that might have been obfuscated with the help of Opus 4.5 might in two years be decompiled within hours by Opus 6.5.
But why post a "free" product that becomes unfree after a hug from HN, so that the vast majority will see an unfree product and think you were baiting them with false claims?
EDIT:
> Due to this incredible demand, we've hit our current budget limit and need to temporarily pause the service.
In my experience, a decently managed database scales very far.
3x EX44 running Patroni + PostgreSQL would give you 64 GB of working memory and at least 512 GB of NVMe for the dataset (configurable with more for a one-time fee), in HA plus one maintenance node. Practically speaking, that would have carried the first 5-10 years of production at the company I work at with ease, for 120 euros of hardware cost per month plus a decent sysadmin.
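To put a shape on that setup, the per-node Patroni config for such a 3-node cluster is short — a sketch along these lines (host names, the choice of etcd as the consensus store, and credentials are all assumptions, not a complete production config):

```yaml
# patroni.yml sketch for one of the three nodes (values are hypothetical)
scope: pg-main          # cluster name shared by all three nodes
name: node1             # unique per node

etcd3:
  hosts: node1:2379,node2:2379,node3:2379   # consensus store for leader election

restapi:
  listen: 0.0.0.0:8008
  connect_address: node1:8008

postgresql:
  listen: 0.0.0.0:5432
  connect_address: node1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    replication:
      username: replicator
      password: change-me   # placeholder
```

Patroni then handles leader election and failover between the nodes; clients typically reach the current primary via HAProxy or the nodes' REST API health checks.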
I also know quite a few companies who toss 3-4x 20k-30k at Dell every few years to get an on-prem database cluster so that database performance ceases to be a problem (unless the application has bad queries).
Yes, there is some bureaucratic paper churn to deal with them, but it's a one-time cost. I did it once, probably more than 10 years ago. Since then, logging in to the website takes me <10s (with OTP) every couple of days, and finding what I'm looking for in the web UI or the API docs is usually just 3 or 4 clicks away (their website is a bit messy).
Compare that with AWS, where login is slow and unreliable (anyone else get an error message after every login and have to refresh to get in?), and the website is a giant mess collapsing under its own weight, slow as if it were still running WebSphere.
Over the last 10 years, I've certainly lost way more time working through AWS's paperless bureaucracy than complying with Hetzner's paper bureaucracy. And I haven't even been using AWS that long.
Can you elaborate on the bureaucracy you experienced? I've been a Hetzner customer since last month and so far I thoroughly enjoy it. I have not encountered any bureaucracy yet.
I think I was still being a bit too harsh even after throwing into my comment that other providers aren't perfect either.
But basically, after the initial paperwork I had some issues with my account getting flagged, even though it sat idle 99.999% of the time. It's not a huge deal for me because I wasn't trying them out for anything serious. I just questioned how often that might happen if I were actually using it seriously, and what kind of headaches it could cause me while re-verifying everything with them.
From what I hear from people I know, if everything is going well, their service is great. Server performance is good, pricing is good, etc.
You’re renting an entire infrastructure, I think a bit of KYC is reasonable.
I had more trouble onboarding with AWS SES, with a process that felt more like begging. At which point I said fuck it and went with self-hosting ever since (on a bare-metal server, no less).
I was asked for a passport photo when I tried to open an account, literally right after the signup form. Like WHAT? I couldn't believe my eyes. The most insane shit I've ever seen.
Quite commonly required by law in Europe, though often not implemented very seriously by hosting providers; Germany seems to be an exception.
I remember a time in France, for instance, about 15 years ago, when it was mandatory to provide your ID when buying a mere prepaid SIM card. No seller would actually check, and a coworker of mine who used to work for one of the largest French telcos at the time told me that once they ran some stats over the customer database and noticed that most names were from popular comics and TV shows. They laughed and moved on. These days, the seller would at least ask for some ID.
It's weird seeing people on HN complain about this aspect of Hetzner, because it's the complete opposite of my experience. For two years I've rented a dedicated server for around 40 euros monthly from Hetzner as a business customer, and I've had no issues whatsoever. They didn't ask for a business license or personal ID or anything, really; I provided a VAT ID along with a business name and address, but that wasn't anything extra compared to what I also provided to Migadu or Porkbun, for example.
I suppose they might have more KYC procedures for personal accounts based outside the EU; otherwise I have no clue.
Same, Hetzner has always been very flexible with me when it comes to practically anything. It's always been humans answering my queries, with of course varying quality, but overall quite good, especially for the price. I gave them some VAT number to get reduced prices at some point and that was it :shrug:
I'm based in the US and I tried twice to create an account for Hetzner (a personal account as well as a company / startup account). They rejected all my attempts. I don't quite understand their business model :)
I love their pricing and the simplicity, but they don't give the impression of being highly skilled. They have zero managed services, not even managed Kubernetes. Their S3-compatible object storage (very mature tech at this point) is utter garbage even a year after launch.
Then the bureaucracy you mention which is just a reflection how they work internally as well.
> I want a provider that leaves me alone and lets me just throw money at them to do so.
That’s been my experience with Hetzner.
A lot of people get butthurt that a business dares to verify who they're dealing with to filter out the worst of the worst (budget providers always attract those), but as long as you don't mind the reasonable requirement to verify your ID/passport, they're hands-off beyond that.
That's fair and I don't have any major issues with that.
I guess my concern on the bureaucracy is if you are unlucky enough to get flagged as a false positive it can be an annoying experience. And I can't really blame them too hard for having to operate that way in an environment of bad actors.
You're definitely right that the budget providers do attract the types of people trying to do bad things/exploit them in some way.
A love letter to the last operating system that isn’t trying to gaslight you. FreeBSD really is the anti-hype choice: no mascot-as-a-service, no quarterly identity crisis, just a system that quietly works until the heat death of the universe.
Speaking of better vendor support, why doesn’t it support Apple Silicon yet? Obviously, Asahi has led the way on this and their m1n1 boot loader can be used out of the box. But OpenBSD has supported Apple Silicon for three years now.
The original, unedited version of the grandparent was bemoaning the lack of vendor support behind FreeBSD so the parent's comment made a lot more sense in-context.
Firstly, FreeBSD already supports x86 Mac Minis. Servers? M-series Minis and Studios make very good servers. Lastly, FreeBSD has an Apple Silicon port, but it has stalled.