Hacker News

> Obviously not

Is it obvious? I haven't heard of new projects in non-memory-safe languages lately, and I would think they would struggle to attract contributors.





New high-scale data infrastructure projects I am aware of mostly seem to be C++ (often C++20). A bit of Rust, which I’ve used, and Zig but most of the hardcore stuff is still done in C++ and will be for the foreseeable future.

It is easy to forget that the state-of-the-art implementations of a lot of systems software are not open source. They don't struggle to attract contributors because of language choices; being on the bleeding edge of computer science is selling point enough.


There's a "point of no return" where you start to struggle to hire anyone for your teams because no one knows the language and no one is willing to learn it. But C++ is very far from it.

There's always someone willing to write COBOL for the right premium.

I'm working on Rust projects, so I may have an incomplete picture, but from what I see, when devs have a choice they prefer working with Rust over C++ (if not for the language itself, then at least for the build tooling).


Writing C++ is easier than writing Rust. But writing safe multithreaded code in C++?

I don't want to write multithreaded C++ at all unless I explicitly want a new hole in my foot. Rust I barely have any experience with, but it might be less frustrating than that.


Anecdotally, my "wow, this Rust business might really go somewhere" moment was when I tried adding multithreading to a random tool I made (dispatching tasks to new threads).

Multithreading had not been planned or architected for. Adding it took 30 minutes, with the compiler informing me that I couldn't share a hashmap with those threads unsynchronised, and telling me how to fix it.


I've had a similar experience, when the compiler immediately found unsynchronized state deep inside a third-party library I was using. It was a 5-minute fix for what otherwise could have been mysterious, unreproducible data corruption.

These days even mobile phones have multicore CPUs, so it's getting hard to find excuses for single-threaded programs.


Game development, graphics and VFX industry, AI tooling infrastructure, embedded development, Maker tools like Arduino and ESP32, compiler development.


Zig at least claims some level of memory safety in their marketing. How real that is I don't know.

About as real as claiming that C/C++ is memory safe because of sanitizers, IMHO.

I mean, Zig does have non-null pointers. It prevents some UB. Just not all.

Which you can achieve in C and C++ with static-analysis rules that break compilation if pointers aren't checked for nullptr/NULL before use.

Zig would have been a nice proposition in the 20th century, alongside languages like Modula-2 and Object Pascal.


I'm unaware of any such marketing.

Zig does claim that it

> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free

which is probably true (in that it's not possible to violate memory safety through the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview: "Performance and Safety: Choose Two".


Runtime checks can only validate code paths taken, though. Also, C sanitizers are quite good as well nowadays.

That's a library feature (not intended for release builds), not a language feature.

It is intended for release builds. The ReleaseSafe mode will keep the checks. ReleaseFast and ReleaseSmall will remove them, but those aren't the recommended release modes for general software, only for when performance or size is critical.

DebugAllocator essentially becomes a no-op wrapper when you use those targets.

I have heard different arguments, such as https://zackoverflow.dev/writing/unsafe-rust-vs-zig/ .

Out of curiosity, do the LLMs all use memory safe languages?

Llama.cpp is called Llama.cpp, so there’s that…

Whenever the public has heard about the language, it has always been Python.

The language that implements Python's high-speed floating point has often been FORTRAN.

https://fortranwiki.org/fortran/show/Python


With lots of CUDA C++ libraries, among others.


