tuhgdetzhh's comments | Hacker News

Just for comparison, this is how the code could look in Python:

  # return codes
  SUCCESS = 0
  TOO_FAR_LEFT = 2
  TOO_SLOW_OR_TOO_LOW = 4
  TOO_FAST_OR_TOO_HIGH = 8

  # thresholds used by the check
  MIN_ALTITUDE = 100
  MAX_ALTITUDE = 300
  MIN_SPEED = 200
  MAX_SPEED = 400
  MIN_SPEED_200_RANGE = 238
  MAX_SPEED_300_RANGE = 338
  MAX_HEADING_RIGHT = 8

  def landing_skill_check(
    altitude: int,
    speed: int,
    heading: int) -> int:

    if altitude < MIN_ALTITUDE:
        return TOO_SLOW_OR_TOO_LOW
    if altitude >= MAX_ALTITUDE:
        return TOO_FAST_OR_TOO_HIGH
    if speed < MIN_SPEED:
        return TOO_SLOW_OR_TOO_LOW
    if speed >= MAX_SPEED:
        return TOO_FAST_OR_TOO_HIGH
    if speed < 300:
        if speed < MIN_SPEED_200_RANGE:
            return TOO_SLOW_OR_TOO_LOW
    else:
        if speed >= MAX_SPEED_300_RANGE:
            return TOO_FAST_OR_TOO_HIGH
    if heading < 0:
        return TOO_FAR_LEFT
    if heading >= MAX_HEADING_RIGHT:
        return TOO_SLOW_OR_TOO_LOW

    return SUCCESS

Your code returns TOO_SLOW_OR_TOO_LOW when the heading is too far right. The disassembly in the OP looks like it correctly jumps to too_far_right.

Oh my... This is how the code could look indeed. Which LLM did you use to generate this?

Even if you actually use the network module in Go, just so the compiler doesn't strip it away, startup latency stays well below 25 ms in my experience writing CLI tools.

Whereas with Python, even in the latest version, you're looking at at least 10x that startup latency in practice.

Note: this excludes the time taken by the network call itself, which can of course add quite a few milliseconds, depending on how far away on planet earth your destination is.
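A quick way to sanity-check this yourself (illustrative commands; actual numbers depend on your machine, Python version, and what the binary links in; mytool stands in for any small compiled Go binary):

    $ time python3 -c 'pass'    # interpreter startup alone, typically tens of ms
    $ time ./mytool --version   # a small Go binary, typically single-digit ms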


The answer is: likely yes, but the solution is to become the market leader now and pay the fines later. This business strategy has worked out very well for the Magnificent Seven.

Yes, unfortunately those who jumped on the microservices hype train over the past 15 years or so are now reaping the benefits of Claude Code, since their entire codebases fit into the context window of Sonnet/Opus and can be "understood" by the LLM to generate useful code.

This is not the case for most monoliths, unless they are structured into LLM-friendly components that resemble patterns the models have seen millions of times in their training data, such as React components.


Well-structured monoliths are modularized just like microservices. There's no need to give each module its own REST API to keep it clean.

Conversely, poorly-structured microservices are just monoliths where most of the code is in other repositories.

I guess the benefit of monoliths in this context is that they (often) live in distinct repositories, which makes it easier for Claude to ingest them entirely, or at least not get lost looking in the wrong directory.

One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years in many companies. As a result, many companies now operate highly convoluted monolithic systems that are extremely difficult to replace.

In contrast, a poorly designed microservice can be replaced much more easily. You can identify the worst-performing and most problematic microservices and replace them selectively.


> One problem is that the idea of being "well-structured" has gone overboard at some point over the past 20 years

That's exactly my experience. While a well-structured monolith is a good idea in theory, and I'm sure such examples exist in practice, that has never been the case in any of my jobs. Friends working at other companies report similar experiences.


I'm always a bit shocked at how casually people wget and execute shell scripts as part of their install process.

This is the equivalent of giving the author of a website remote code execution (RCE) on your computer.

I get the idea that you can download the script first and carefully read it, but I think that 99% of people won't.


I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

Between you and me are a bunch of other hops. Blindly trusting dependencies is part of why npm is burning down at the moment.

Why trust unsigned files hosted at a single source of truth? It isn't the '90s anymore.


    $ curl ${flags} https://site.io/install.sh | sh

    $ curl ${flags} https://site.io/tool > ./tool
    $ chmod u+x ./tool
    $ ./tool
Both of these are effectively the same damn thing but everyone loses their minds over the first one.

Also, a lot of those install scripts do check signatures of the binaries they host. And if you’re concerned that someone could have owned the webserver it’s hosted on, then they can just as easily replace the public key used for verification in the written instructions on the website.
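For reference, the in-script verification often looks something like this checksum variant (a sketch; example.com and EXPECTED_SHA256 are placeholders, and sha256sum expects exactly two spaces between the hash and the filename):

    curl --fail --silent --show-error --location --remote-name https://example.com/tool.tar.gz
    echo "EXPECTED_SHA256  tool.tar.gz" | sha256sum --check -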


I'm not advocating for either of those.

    pacman -Sy {tool}
    pkg_add {tool}
    apt install {tool}
Even the AUR does a lot more to keep you secure than a straight curl does, even though throwing things up there is easy.

What’s your alternative?

A mirrored package manager, where signature and executable are always grabbed from different sources.

Like apt, dnf, and others.


Pretty sure my apt sources have the signing and package pointing to the same place

If you have more than a single source, then apt will already be checking this for you.

The default is more than a single source.


All of mine point to like somethingsomething.ubuntu.com

If it points to mirror.ubuntu.com, the mirroring happens at the host end instead of inside apt. But since apt resolves that to a list of mirrors, it'll be fetching from multiple places at once.
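The key detail is that the signing key doesn't travel with the packages at all: apt reads it from a keyring file on disk, shipped by a keyring package, and the sources entry merely points at it. An illustrative Ubuntu-style entry (keyring path and suite name may differ on your system):

    deb [signed-by=/usr/share/keyrings/ubuntu-archive-keyring.gpg] http://archive.ubuntu.com/ubuntu noble main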

> I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

The issue is provenance. Where is the script getting the binary from? Who built that binary? How do we know that binary wasn't tampered with? I'll lay odds the install script isn't doing any kind of GPG/PGP signature check. It's probably not even doing a checksum check.

I'm prepared to trust an executable built by certain organisations and persons, provided I can trace a chain of trust from what I get back to them.
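Doing that check by hand is short enough (a sketch; the URLs are placeholders, and the publisher's key must come from a channel you already trust, not the same server):

    # fetch the binary and its detached signature
    curl --fail --location --remote-name https://example.com/tool
    curl --fail --location --remote-name https://example.com/tool.asc
    # verify against a key already in your keyring
    gpg --verify tool.asc tool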


The thing that gets installed, if it is an executable, usually also has permissions to do scary things. Why is the installation process so scrutinized?

I think there's a fundamental psychological reason for this - people want to feel like some ritual has been performed that makes at least some level of superficial sense, after which they don't have to worry.

You see this in all the obvious examples of physical security.

In the case of software it's the installation that's the ritual I guess. Complete trust must be conferred in the software itself by definition, so people just feel better knowing for near certain that the software installed is indeed 'the software itself'.


It would raise the same kind of alert for me if someone used wget to download a binary executable instead of a shell script.

The issue is not the specific form in which code is executed on your machine, but rather who is allowed by you to run code on your computer.

I don't trust arbitrary websites from the Internet, especially when they are not cryptographically protected against malicious tampering.

However, I do trust, for instance, the Debian maintainers, as I believe they thoroughly vet and test the executables they distribute, cryptographically signed, to millions of users worldwide.


Even assuming it’s not malicious, the script can mess up your environment configuration.

I'm so thankful for NixOS for making it hard for me to give in to that temptation. You always think "oh, just this once", but with NixOS I either have to do it right or not bother.

NixOS gives you a place to configure things in a reproducible way, but it doesn't require you to do it.

    $ ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
    Could not start dynamically linked executable: ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
    NixOS cannot run dynamically linked executables intended for generic linux environments out of the box.
    For more information, see: https://nix.dev/permalink/stub-ld

You have to go out of your way to make something like that run in an FHS env. By that point, you've had enough time to think, even with ADHD.


It sort of does, actually, at least if you don't have nix-ld enabled. A lot of programs simply won't start if they're not statically linked, so a lot of the time, if you download a third-party script or install something via `curl somesite.blah | sh`, it actually will not work. Moreover, it likely won't be properly linked into your PATH unless you do it the right way.

So can a random deb, npm package, or pip wheel. You're either OK with executing unverified code or you're not - piping wget into bash doesn't change that.

Maybe they can with postinstall scripts, but they usually don't.

For the most part, installing packaged software simply extracts an archive to the filesystem, and you can uninstall using the standard method (apt remove, uv tool remove, ...).

Scripts are way less standardized. In this case it's not an argument about security, but about convenience and not messing up your system.
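Concretely, on a Debian-style system the bookkeeping looks like this (illustrative; tool is a placeholder package name):

    $ apt install tool       # every installed file is recorded by the package manager
    $ dpkg --listfiles tool  # show exactly what was put where
    $ apt remove tool        # standard, complete uninstall

An install.sh gives you none of that; you have to read the script to know what to undo.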


Equally, I don't like that so many instructions and scripts everywhere use shorthands.

Sometimes you see curl -sSLfO. Please, use the long form. It makes life easier for everybody: it's easier to verify and to look up. Finding --silent in curl's docs is easier than reading through every occurrence of -s.

   curl --silent --show-error --location --fail --remote-name https://example.com/script.sh
Obligatory xkcd: https://xkcd.com/1168/

For a small flight of fancy, imagine if each program had a --for-docs argument, which causes it to simply spit out the canonical long-form version equivalent to whatever else it has been called with.

Or, a separate program that can convert from short to long form:

    $ for-docs "ls -lrth /mnt/data"
    ls -l --reverse -t --human-readable -- /mnt/data

(I'd add an option to sort the options alphabetically, too.)
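A toy version for a single command is easy to sketch in shell (hypothetical helper; the mapping covers only a few GNU ls flags):

    # expand a bundle of short ls flags into long forms where they exist
    for_docs_ls() {
      flags=$(printf '%s' "$1" | sed --expression 's/^-//' --expression 's/./& /g')
      for ch in $flags; do
        case "$ch" in
          a) printf '%s ' --all ;;
          h) printf '%s ' --human-readable ;;
          r) printf '%s ' --reverse ;;
          *) printf '%s ' "-$ch" ;;  # no long form mapped: pass through
        esac
      done
      echo
    }

Running for_docs_ls -lrth prints "-l --reverse -t --human-readable" (note that -l and -t genuinely have no long forms in GNU ls).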


While I'd appreciate that facility too, it seems... even-more-fanciful, as one tool would need to somehow incorporate all the logic and quirks of all supported commands, including ones which could be very destructive if anything went wrong.

Kind of like positing a master `dry-run` command as opposed to different commands implementing `--dry-run` arguments.


I did muck around with using "sed" to process the "man" output to find a relevant long option in a one-liner, so it wouldn't be too difficult to implement.

I did something like this:

  _command="sed" _option="n"
  man -- "${_command}" | sed --quiet --expression  "s/^       -${_option}.*, //p"
Then I realised that a bit of logic is needed (or more complicated regexp) to deal with some exceptions and moved onto something else.

> Finding --silent in curl's docs is easier than reading through every occurrence of -s.

Dumb trick: Search prefixed with 2 spaces.

  man curl
  /  -s
Yields exactly one hit on my machine. In the general case, you may have to try one and two spaces.

Absolutely agree.

The shorthands are for typing at a console; the long-form versions should be used in scripts.


Aren't there tools for which the short flags are standardized (e.g. POSIX) but the long flags aren't?

Agreed. I get it if you're great at CLI usage or have your own scripts, but if you're publishing for general use, it should be long form. That includes even utility scripts for a small team.

Also, by writing it out long-form you might catch some things you do out of habit rather than what's necessary for the job.


Another possible advantage is that I invariably have to check the man page to find the appropriate long-form option and sometimes spot an option that I didn't know about.

If you don't trust the software, don't install it.

Trusting software would be foolish. Most software has access to the file system and the net. For practical reasons, I have neither the energy nor the time to verify whether the next update of libsecure came with a trojan or stole my env, and neither do you. I just acknowledge this fact, take the risk, and install it.

But on the other hand, at the current speed of LLM progression, a game that might have been obfuscated with the help of Opus 4.5 might in two years be decompiled within hours by Opus 6.5.

But why post a "free" product that becomes unfree after a hug from HN, so that the vast majority see an unfree product and think you were baiting them with false claims?

EDIT:

> Due to this incredible demand, we've hit our current budget limit and need to temporarily pause the service.

There you go.


Or rent a bare-metal machine from Hetzner with 2-3x the performance per core at 90% lower cost[1].

[1] Various HN posts regarding Hetzner vs AWS in terms of costs and perf.


In my experience, a decently managed database scales a very long way.

3x EX44 running Patroni + PostgreSQL would give you 64 GB of working memory and at least 512 GB of NVMe for the dataset (configurable with more for a one-time fee), with HA plus one maintenance node. Practically speaking, that would have carried the first 5-10 years of production at the company I work at with ease, for 120 euros/month in hardware plus a decent sysadmin.

I also know quite a few companies that toss 3-4x 20k-30k at Dell every few years for an on-prem database cluster, so that database performance ceases to be a problem (unless the application has bad queries).


There are no Hetzner servers with 24 TB of RAM, though.


This might be true in terms of direct monetary costs.

I want to like Hetzner but the bureaucratic paper process of interacting with them and continuing to interact with them is just... awful.

Not that the other clouds don't also have their own insane bureaucracies so I guess it's a wash.

I'm just saying, I want a provider that leaves me alone and lets me just throw money at them to do so.

Otherwise, I think I'd rather simply deploy my own oversized server in a colo, even with hardware being insanely overpriced at the moment.

edit: And shortly after writing this comment I see: "Microsoft won't let me pay a $24 bill, blocking thousands in Azure spending" https://news.ycombinator.com/item?id=46124930


Yes, there is some bureaucratic paper churn to deal with, but it's a one-time cost. I did it once, probably more than 10 years ago. Since then, logging in to the website takes me <10 s (with OTP) every couple of days, and finding what I'm looking for in the web UI or the API docs is usually just 3 or 4 clicks away (their website is a bit messy).

Compare that with AWS, where login is slow and unreliable (anyone else get an error message after every login and have to refresh to get in?), and the website is a giant mess collapsing under its own weight, slow like it's still running WebSphere.

Over the last 10 years, I've certainly lost way more time working through AWS's paperless bureaucracy than complying with Hetzner's paper bureaucracy. And I haven't even been using AWS that long.


Can you elaborate on the bureaucracy you experienced? I've been a Hetzner customer since last month and so far I thoroughly enjoy it. I have not encountered any bureaucracy yet.


I think I was still being a bit too harsh, even after noting in my comment that other providers aren't perfect either.

But basically, after the initial paperwork I had some issues with my account getting flagged, even though I wasn't using it 99.999% of the time. It's not a huge deal for me because I wasn't trying them out for anything serious. I just wondered how often that might happen if I were actually using it seriously, and what kind of headaches it could cause while re-verifying everything with them.

From people I know, if everything is going well then their service is great. Server performance is good, pricing is good, etc.


You’re renting an entire infrastructure, I think a bit of KYC is reasonable.

I had more trouble onboarding with AWS SES, a process that felt more like begging. At which point I said fuck it and have been self-hosting ever since (on a bare-metal server, no less).


I was asked for a passport photo when I tried to open an account. They literally asked for a passport photo immediately after the signup form. Like WHAT? I couldn't believe my eyes. The most insane shit I've ever seen.


It's quite commonly required by law in Europe, though often not implemented very seriously by hosting providers; Germany seems to be an exception.

I remember a time in France, for instance, about 15 years ago, when it was mandatory to provide your ID when buying a mere prepaid SIM card. No seller would actually check, and a coworker of mine who used to work for one of the largest French telcos at the time told me that once they ran some stats over the customer database and noticed that most names were from popular comics and TV shows. They laughed and moved on. These days, the seller would at least ask for some ID.

aka circling the cattle.


If I was letting some random person rent one of my servers without oversight, I'd sure want to see some ID first.


It's weird seeing people on HN complain about this aspect of Hetzner, because it's the complete opposite of my experience. For two years I've rented a dedicated server for around 40 euros monthly from Hetzner as a business customer, and I've had no issues whatsoever. They didn't ask for a business license or personal ID or anything, really; I provided a VAT ID along with a business name and address, but that wasn't anything extra compared to what I also provided to Migadu or Porkbun, for example.

I suppose they might have more KYC procedures for personal accounts based outside the EU; otherwise I have no clue.


Same. Hetzner has always been very flexible with me on practically anything. It's always been humans answering my queries, of varying quality of course, but overall quite good, especially for the price. I gave them a VAT number to get reduced prices at some point and that was it :shrug:


I've used them for more than 10 years. There was a one-off, straightforward process of providing some details back then, and then nothing more.


I'm based in the US and I tried twice to create an account for Hetzner (a personal account as well as a company / startup account). They rejected all my attempts. I don't quite understand their business model :)


Similar experience as well. Not sure what's going on with Hetzner.


I love their pricing and the simplicity, but they don't give the impression of being highly skilled. They have zero managed services, not even managed K8s. Their S3 offering (very mature tech at this point) is utter garbage even a year after launch.

Then there's the bureaucracy you mention, which is just a reflection of how they work internally as well.


> I want a provider that leaves me alone and lets me just throw money at them to do so.

That’s been my experience with Hetzner.

A lot of people get butthurt that a business dares to verify who they're dealing with so as to filter out the worst of the worst (budget providers always attract those), but as long as you don't mind the reasonable requirement to verify your ID/passport, they're hands-off beyond that.


That's fair and I don't have any major issues with that.

I guess my concern about the bureaucracy is that if you're unlucky enough to get flagged as a false positive, it can be an annoying experience. And I can't really blame them too much for having to operate that way in an environment of bad actors.

You're definitely right that the budget providers do attract the types of people trying to do bad things/exploit them in some way.


It’s too bad that there does not seem to be a comparable provider with datacenters in North America.


A love letter to the last operating system that isn’t trying to gaslight you. FreeBSD really is the anti-hype choice: no mascot-as-a-service, no quarterly identity crisis, just a system that quietly works until the heat death of the universe.


Speaking of better vendor support, why doesn’t it support Apple Silicon yet? Obviously, Asahi has led the way on this and their m1n1 boot loader can be used out of the box. But OpenBSD has supported Apple Silicon for three years now.


The why is simple: nobody wants it enough to build it. Otherwise it would exist.


Why does it have to? Why does everything have to support everything? Why can't a project focus on servers and have that be its "thing"?

Also it’s OSS — contribute that support if you’re so passionate about it.


The original, unedited version of the grandparent was bemoaning the lack of vendor support behind FreeBSD so the parent's comment made a lot more sense in-context.


Yeah, sorry for removing that part. I changed my mind just minutes after posting, because I really like FreeBSD and my critique sounded a bit too harsh.


> everything

Firstly, FreeBSD already supports x86 Mac Minis. Servers? M-series Minis and Studios make very good servers. Lastly, FreeBSD has an Apple Silicon port, which has stalled.

https://wiki.freebsd.org/AppleSilicon

I'll ignore your last point.


FreeBSD has always been the non-portable one.


Sigh. Yes. It’s the boring choice and therefore the better choice a lot of the time. Not all of the time, but most of the time.

Impatience and lost skills are why it's not a mainstream player.


For all of you who played Wolfenstein 3D in your youth in Germany, here is an update: it has been completely legal since 2019.

