Hacker News | vlovich123's comments

It gets messy not just in that way but also because someone can have a weird LD_LIBRARY_PATH that starts to cause problems. Statically linking drastically simplifies distribution, and you'd have to have distributed zero software to end users to believe otherwise. The only platform where this isn't the case is Apple, because they natively support app bundles. I don't know if Flatpak solves the distribution problem because I've not seen a whole lot of it in the ecosystem - most people seem to generally still rely on the system package manager, and commercial entities don't seem to really target Flatpak.

You’ve got this backward. Due to spatial and temporal locality, in practice an application is usually hitting CPU registers first, cache second, memory third, disk fourth, network cache fifth, and network origin sixth. So this stuff does actually matter for performance.

Also, aside from memory bandwidth, there’s a latency cost inherent in traversing object graphs - zero-copy techniques ensure you traverse that graph minimally, touching just what actually needs to be accessed, which is huge when you scale up. There’s a difference between making one network request to fetch 1 MB and making 100 requests to fetch 10 KiB each, and the same difference appears in memory access patterns unless it’s absorbed by your cache (not guaranteed for the kind of object graph traversal a package manager would be doing).
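To make the arithmetic concrete, here’s a back-of-the-envelope sketch in C. The 50 ms round trip and 100 MiB/s figures are illustrative assumptions, not measurements, and the requests are assumed to be serial, as in a dependent graph traversal:

    #include <stdio.h>

    /* Illustrative assumptions: 50 ms round trip per request and
     * 100 MiB/s of usable bandwidth. */
    #define RTT_S     0.050
    #define BANDWIDTH (100.0 * 1024 * 1024) /* bytes/second */

    /* Serial requests: each pays a full round trip, then the
     * payload streams at full bandwidth. */
    static double transfer_time(int requests, double bytes_each) {
        return requests * RTT_S + requests * bytes_each / BANDWIDTH;
    }

    int main(void) {
        /* Same ~1 MiB total, fetched two different ways. */
        printf("1 x 1 MiB:    %.3f s\n", transfer_time(1, 1024.0 * 1024)); /* ~0.060 s */
        printf("100 x 10 KiB: %.3f s\n", transfer_time(100, 10.0 * 1024)); /* ~5.010 s */
        return 0;
    }

Same total bytes, roughly an 80x difference in wall-clock time once per-request latency dominates.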


I think it’s naive to think engineers or managers don’t realize this or don’t think in these ways.

https://www.folklore.org/Saving_Lives.html


Is it truly naive if most engineers' careers pass and they never meet even one such manager?

In 24 years of career I've met a grand total of _two_ such. Both got fired not even 6 months after I joined the company, too.

Who's naive here?


I’ve met one who asked me a question like this, and he’s still at Apple, having been promoted several times to a fairly senior position. But it was only half-hearted, because the question was “how much CO2 would we save if we made something 10% more CPU efficient” and the answer, even at Apple’s current scale of billions of iPhones, was insignificant.

So now you and I have both come across such a manager. Why would you claim most engineers don’t come across such people?


I’ve heard stories from communist villages where everyone knew everyone. Communal parks and property were not respected and were frequently vandalized or otherwise neglected, because they didn’t have an owner and were treated as a problem for someone else to solve.

It’s easier to explain it in those terms than with assumptions about how things work in a tribe.


No, because then no language would be included, including Rust. Implementation bugs are not treated the same as integral parts of the language as defined by the standard. Python is defined as memory safe.

I understand this website as focusing on unsafety in a more practical sense of writing your stack in memory safe ways, not in the sense of discussing what's theoretically possible within the language specs. After all, Fil-C is standard compliant, but "run everything under Fil-C" is not the argument it's making. The most common language runtime being memory unsafe is absolutely an applicable argument here, mitigated only by the fact that it's a mature enough runtime that memory issues are vanishingly rare.

Fil-C is super new, and while it is memory safe, a lot of work is still ongoing in terms of getting existing programs to run under it, and it currently only supports Linux - which is nowhere near “C and C++ can now be memory safe”.

By definition, C and C++ are memory safe as long as you follow the rules. The problem is that the rules cannot be automatically checked and in practice are the source of innumerable issues, from straight-up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn’t intend.

But yes, Fil-C is a huge improvement (AFAIK though it doesn’t solve the UB problem - it just guarantees you can’t have a memory safety issue as a result)
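As a minimal sketch of the kind of rule that can’t be checked automatically - this compiles without warnings, is undefined behavior per the standard, and is the sort of thing a memory-safe implementation traps rather than miscompiles:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        if (!p) return 1;
        *p = 42;
        free(p);
        /* Use-after-free: undefined behavior that no mainstream
         * compiler is required to diagnose. A conventional build may
         * print 42, print garbage, or corrupt the allocator; a
         * memory-safe implementation kills the program instead. */
        printf("%d\n", *p);
        return 0;
    }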


> By definition, C and C++ are memory safe as long as you follow the rules.

This statement doesn't make sense to me.

Memory safety is a property of language implementations, which is all about what happens when the programmer does not follow the rules.

> The problem is that the rules cannot be automatically checked and in practice are the source of unenumerable issues from straight up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn’t intend.

They can be automatically checked and Fil-C proves this. The prior art had already proved it before Fil-C existed.

> But yes, fil-c is a huge improvement (afaik though it doesn’t solve the UB problem - it just guarantees you can’t have a memory safety issue as a result)

Fil-C doesn't have UB. If you find anything that looks like UB to you, please file a GH issue.

Let's also be clear that you're referring to nasal demons specifically, not UB generally. In some contexts, like CPU ISAs, UB means a trap, rather than nasal demons. So let's use the term "nasal demons".

C and C++ only have nasal demons because:

- Policy decisions. For example, making signed integer addition have nasal demons is because someone wanted to cook a benchmark.

- Lack of memory safety in most implementations, combined with a refusal to acknowledge what happens when the wrong kind of memory access occurs. (Note that CPU ISAs like x86 and ARM are not memory safe, but have no nasal demons, because they do define what happens when any kind of memory access occurs.)

So anyway, Fil-C has no nasal demons, because:

- I turned off all of those silly policy decisions for cooking benchmarks.

- The memory safety means that I define what happens when the wrong kind of memory access occurs: the program gets killed with a panic.
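As a sketch of that signed-overflow policy point (my illustration, not from the comment above; -fwrapv is the standard GCC/Clang flag that defines wrapping):

    #include <stdio.h>
    #include <limits.h>

    /* Because signed overflow is undefined, an optimizing compiler
     * may fold this to "return 1" - the benchmark-friendly reading.
     * With -fwrapv, which defines signed overflow as two's-complement
     * wrapping, it compiles to a real comparison and returns 0 for
     * x == INT_MAX. */
    int always_bigger(int x) {
        return x + 1 > x;
    }

    int main(void) {
        printf("%d\n", always_bigger(INT_MAX));
        return 0;
    }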


First, let me say that I really respect the work you’re doing in fil-c. Nothing I say is intended as a knock and you’re doing fantastic engineering work moving the field forward and I hope you find success.

That’s good to know about nasal demons. Are you saying you somehow inhibit the optimizer from injecting a security vulnerability due to UB, à la https://www.cve.org/CVERecord?id=CVE-2009-1897 ? I’m kinda curious how you trick LLVM into not optimizing through UB since its UB model is so tuned to the C/C++ standard.

Anyway, Fil-C currently only works on Linux userspace (a lot of it, but I think not all of it yet?), while C and C++ as a standard language definition span a lot more environments. I agree the website should call out Fil-C as memory safe, but I think it’s also fair to say that Fil-C is more an independent dialect of C/C++ (e.g. you do have to patch some existing software) - IMHO it’s too confusing for communicating outward to say that C/C++ is memory safe, and I’d rather it say something like “Fil-C is memory safe” or “C/C++ code running under Fil-C is memory safe”.

> Memory safety is a property of language implementations, which is all about what happens when the programmer does not follow the rules.

By this argument no language is memory safe, because every language implementation has bugs that can result in memory safety issues. rustc certainly has soundness issues that haven’t been fixed, and I believe the same is true of Python, JavaScript, etc., but I think that’s an unhelpful bar or framing of the problem. The language itself is memory safe, and any safety issue within the language spec or implementation is a bug to be fixed. That isn’t true of C/C++, where there will always exist environments where it’s impossible to even have a memory safe implementation (e.g. microcontrollers), let alone mandate one in the spec. And Fil-C does have a performance impact, so some software may never be a good fit for it (e.g. video encoders/decoders). For example, a non-memory-safe conforming implementation of JavaScript is not possible. The same goes for safe Rust, Python, or Java. By comparison, that isn’t true for C/C++.


At a certain point, it's a trade-off. A systems language will offer facilities that can be used to break encapsulation and abstractions, and to access memory as a sequence of bytes. (Anything capable of file I/O on stock Linux can write to /proc/self/mem, for example.) The difference from (typical) C and C++ is that these facilities are less likely to be invoked by accident.
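A minimal, Linux-specific sketch of that escape hatch: plain file I/O mutating a variable, bypassing any language-level encapsulation:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* volatile so the compiler actually re-reads it after the write */
    static volatile int secret = 1;

    int main(void) {
        /* Our own address space, opened as an ordinary file. */
        FILE *mem = fopen("/proc/self/mem", "r+b");
        if (!mem) return 1;
        /* Seek to the variable's address and overwrite it. */
        if (fseeko(mem, (off_t)(uintptr_t)&secret, SEEK_SET) != 0) return 1;
        int newval = 42;
        fwrite(&newval, sizeof newval, 1, mem);
        fflush(mem);
        printf("secret = %d\n", secret); /* prints 42 */
        fclose(mem);
        return 0;
    }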

Reasonable people will disagree about what memory safety (and type safety) mean to them. Personally, I find bounds checking for arrays and strings, some solution for safe deallocation of memory, and an obviously correct way to write manual bounds checks more interesting than (for example) no access to machine addresses and no FFI.

Regarding bounds checking, GNAT offers some interesting (non-standard) options: https://gcc.gnu.org/onlinedocs/gnat_ugn/Management-of-Overfl... Basically, you can write a bounds check in the most natural way, and the compiler will evaluate the check with infinite precision (or almost, to improve performance). With the standard semantics, you might end up with an exception in some corner cases where the check should pass. I wish more languages offered something like this. Among widely used languages, only Python offers this capability, because it uses infinite-precision integers.
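A rough C analogue of the problem GNAT is solving: the "natural" check can overflow during evaluation, while a version built on __builtin_add_overflow (a real GCC/Clang builtin) behaves as if evaluated with infinite precision:

    #include <stdbool.h>
    #include <limits.h>

    /* Natural phrasing: if a + b itself overflows, this check has
     * signed-overflow UB and the optimizer may assume it can't fail. */
    bool fits_naive(int a, int b, int limit) {
        return a + b <= limit;
    }

    /* Assuming a, b >= 0 (index arithmetic): an overflow means the
     * true sum exceeds INT_MAX >= limit, so the check should fail. */
    bool fits_safe(int a, int b, int limit) {
        int sum;
        if (__builtin_add_overflow(a, b, &sum))
            return false;
        return sum <= limit;
    }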


> Are you saying you somehow inhibit the optimizer from injecting a security vulnerability due to UB ala https://www.cve.org/CVERecord?id=CVE-2009-1897 ? I’m kinda curious how you trick LLVM into not optimizing through UB since it’s UB model is so tuned to the C/C++ standard.

Yes that is inhibited. There’s no trick. LLVM (and other compilers) choose to do those stupid things by policy, and the policy can be turned off. It’s not even hard to do it.
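For readers unfamiliar with that CVE, a sketch of the pattern (modeled on the tun driver bug, not the actual kernel code):

    struct tun_struct { struct sock *sk; };

    /* The dereference on the first line lets the optimizer assume
     * tun != NULL, so with -fdelete-null-pointer-checks (typically
     * on by default when optimizing) the null check below can be
     * silently deleted. -fno-delete-null-pointer-checks, which the
     * kernel now builds with, is exactly the kind of policy switch
     * being described. */
    unsigned int tun_poll(struct tun_struct *tun) {
        struct sock *sk = tun->sk; /* UB if tun == NULL */
        if (!tun)
            return 8; /* POLLERR */
        (void)sk;
        return 0;
    }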

> Fil-C is more an independent dialect of C/C++ (eg you do have to patch some existing software)

Fil-C is not a dialect. The patches are similar to what you’d have to do if you were porting a C program to a new CPU architecture or a different compiler.

> By this argument no language is memory safe because every language has bugs that can result in memory safety issues.

You rebutted this argument for me:

> any safety issues within the language spec or implementation are a bug to be fixed

Exactly this. A memory safe language implementation treats outstanding memory safety issues as a bug to be fixed.

This is what makes almost all JS implementations, and Fil-C, memory safe.


Firstly, the existence of unsafe does not inherently mean the code isn’t memory safe.

Secondly, memory safety does not mean no security vulnerabilities. What it does mean is that 80% of the most commonly found vulnerabilities (as gathered through statistical analysis of field failures) are gone. It means that the price for finding a vulnerability is higher.

Also, sudo-rs deliberately removes a lot of the complexity that’s the source of vulnerabilities in normal sudo. There may be better approaches, but it’s specifically not targeting 100% compatibility because sudo is horribly designed right now.

TLDR: this is a lazy knee jerk critique. Please do better in the future.


> Firstly, the existence of unsafe does not inherently mean the code isn’t memory safe.

That does not contradict what I wrote.

I am confounded by your post, since an article with vulnerabilities in sudo-rs was posted.

You can also read

https://news.ycombinator.com/item?id=46388181

> TLDR: this is a lazy knee jerk critique. Please do better in the future.

TL;DR: This is a lazy knee jerk critique. Please do better in the future.


Memory safety does not make your code free of vulnerabilities.

Reading comprehension and critical thinking are again missing from your post.

The article would only “invalidate” what I wrote if the sudo-rs vulnerabilities were the result of memory unsafety. That isn’t what these vulnerabilities are.

By the way, the data on the real-world reduction in memory safety issues with Rust is so clear and readily available that I really don’t understand how you’re doubling down on your flawed position: https://security.googleblog.com/2025/11/rust-in-android-move....

This is empirical validation of the theoretically expected result, and Microsoft has presented similar findings. It’s literally scientific evidence for the blinking neon sign that Rust achieves a meaningfully higher bar of memory safety than C/C++, regardless of any concerns you’ve raised (valid or otherwise). Rust isn’t evaluated in a vacuum against a hypothetically perfect alternative.

Unsafe Rust being harder to work with also doesn’t mean that the unsafe in sudo-rs instantly runs into such issues. You can see that the vast majority of the unsafe here is invoking syscalls - that isn’t what people typically mean by “unsafe is hard”. Basically you seem to not actually understand the issues at play and are cherry-picking sound bites you think support the predetermined position you’re set on taking. That’s what I mean by being lazy - you claim the existence of unsafe in sudo-rs makes it memory unsafe when that’s not at all necessarily the case - it just means there’s a risk there. Same with the Vec example - it highlights how there can be issues, but it doesn’t mean the vast majority of unsafe runs into them.

Is Rust as memory safe as Java? No, it’s not. Is it substantially closer to Java’s safety than C/C++? Yes, and it looks like it’s at least an order of magnitude better than C/C++ while offering the same performance profile (and often better, because its aliasing rules can be more aggressive and the standard library is more modern). An order of magnitude fewer vulnerabilities for the same performance is an insane jump in the Pareto frontier.


But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical? I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits). Happy to be proven wrong of course.

> But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical?

Not GP but yes. I'm reasonably confident that we will have quantum computers that are large and stable enough to have a real quantum advantage, but that's mostly because I believe Moore's law is truly dead and we will see a plateau in 'classical' CPU advancement and memory densities.

> I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits)

I don't think people appreciate or realize that a good chunk of the innovations necessary to "get there" with quantum are traditional (albeit specialized) engineering problems, not new research (though breakthroughs can speed it up). I'm a much bigger fan of the "poking lasers at atoms" style of quantum computer than the superconducting ones for this reason: the engineering is more like building cleaner lasers and better AOMs [0] than trying to figure out how to supercool vats of silicon and copper. It's outside my area of expertise, but I would expect innovations that support better lithography to also benefit these types of systems, though less directly than the superconducting ones.

Source: I worked on hard-realtime control systems for quantum computers in the past. Left because the academic culture can be quite toxic.

[0]: https://en.wikipedia.org/wiki/Acousto-optic_modulator


I don’t know how people can claim the science is solved and “it’s just engineering” when scaling up to non-trivial quantum circuits is literally the problem no one has solved; hand-waving it away as an “engineering problem” seems really disingenuous. Foundational science still needs to be done to solve these problems.

Classical CPUs have slowed but not stopped; more importantly, useful quantum machines haven’t even been built yet, let alone proven possible to scale up arbitrarily. They haven’t even demonstrated they can factor 17 faster than a classical computer.


Why not hand off the fd to the new process spawned as a child? That’s how a lot of professional zero-downtime upgrades work: spawn a process, hand off the fd & state, exit.
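A minimal sketch of the fd side of that handoff (the LISTEN_FD protocol here is illustrative, not systemd’s; real zero-downtime upgrades also serialize in-flight state):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Old process: make sure the fd survives exec, tell the new
     * binary where to find it, and replace ourselves. */
    static void hand_off(int listen_fd, const char *new_binary) {
        int flags = fcntl(listen_fd, F_GETFD);
        fcntl(listen_fd, F_SETFD, flags & ~FD_CLOEXEC);

        char buf[16];
        snprintf(buf, sizeof buf, "%d", listen_fd);
        setenv("LISTEN_FD", buf, 1);

        execv(new_binary, (char *[]){ (char *)new_binary, NULL });
        perror("execv"); /* only reached on failure */
        exit(1);
    }

    /* New process: pick the inherited fd back up. The socket is
     * never closed, so no connection is ever refused. */
    static int resume(void) {
        const char *s = getenv("LISTEN_FD");
        return s ? atoi(s) : -1;
    }

    int main(int argc, char **argv) {
        (void)argc;
        int fd = resume();
        if (fd >= 0) {
            printf("new generation resumed with inherited fd %d\n", fd);
            return 0;
        }
        /* First generation: stdin stands in for a listening socket,
         * and we re-exec ourselves as the "new" binary (assumes
         * argv[0] is a runnable path). */
        hand_off(0, argv[0]);
        return 0;
    }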

That's exactly what it's doing. The tricky part is the “hand off state” part.

> While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets. We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

Sounds like it was, and you’re not correctly understanding the complexity of running this at scale.


Sounds like their disaster recovery plan was insufficient, intensified traffic jams in already congested areas because of "backlog", and is now being fixed to support the current scale.

The fact this backlog created issues indicates that it's perhaps Waymo that doesn't understand the complexity of running at that scale, because their systems got overwhelmed.


What about San Francisco allowing a power outage of this magnitude and not being able to restore power for multiple days?

This kind of attitude, to me, indicates a lack of experience building complex systems and responding to unexpected events. If they had done the opposite and been overly aggressive in letting Waymos manage themselves at lights that are out, would you be the first in line to criticize them when some accident happened?

All things considered, I’m much happier knowing Waymo is taking a conservative approach when the downside is just momentary extra street congestion during a major power outage; that’s much better than being cavalier with fully autonomous behavior.


DR always stands for "didn't realize" in the aftermath of an event.

That's what they're learning and fixing for in the future to give the cars more self-confidence.


They probably do, they just don't give a shit. It's still the "move fast and break things" mindset. Internalize profits but externalize failures to be carried by the public. Will there be legal consequences for Waymo (i.e. fines?) for this? Probably not...

What Waymo profits?

They're one-of-one still. Having ridden in a Waymo many times, there's very little "move fast and break things" leaking into the experience.

They can simulate power outages as much as they want (testing), but the production break had some surprises. This is a technical forum... most of us have been there: bad things happened, plans weren't sufficient. We can measure their response by how they handle production insufficiencies in the next event.

Also, culturally speaking, "they suck" isn't really a working response to an RCA.


Waymo cars have been proven safer than human drivers in California. At the same time, 40k people die each year in the US in car accidents caused by human drivers.

I'm very happy they're moving fast so hopefully fewer people die in the future


Both things can be true. They can be safer, but at the same time Waymo can still externalize stuff to the public...

Who cares? Honestly?

…the public?

"Move fast and break things" is a Facebook slogan. Applying it to Google or Waymo just doesn’t fit. If anything, Waymo is moving too slow. 100 people are going to die in seven days from drunk drivers and New Years in the US.

How's that for a real world trolley problem?


The most effective way of decreasing traffic deaths is safer driving laws, as the recent example of Helsinki has shown. That and better public transportation infrastructure. If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.

> The most effective way of decreasing traffic deaths is safer driving laws

This is almost hilariously false. "Oh yeah, those words on paper? Well, they actually physically stopped me from running the red light and plowing into 4 pedestrians!"

> If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.

I honestly wonder how leftists manage to delude themselves so heavily? I'm sure a bunch of politicians really have my best interests at heart. Lol

