
Was there something wrong with the old coreutils that needed improvement?


I have a suspicion it's about the license, as this commenter [0] suggested a year ago.

[0]: https://news.ycombinator.com/item?id=38853429


Agreed. Since GNU Coreutils is GPLv3 but uutils is MIT, my guess is eventually Canonical will start using "works like the GNU software except you don't have to comply with GPLv3" as a selling point for Ubuntu Core (their IoT focused distro). This would let them sell to companies who want to only permit signed firmware images to run on their devices, which isn't allowed under GPLv3.


There are F500 companies shipping Ubuntu Core on devices that will only permit signed firmware, so I'm not sure your assessment is correct.

https://buildings.honeywell.com/au/en/products/by-category/b...


Depending on the product, this might be OK! If you've ever had cause to closely read the GPLv3, the anti-tivoisation clause for some reason is only really aimed at "User products" (defined as "(1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling"). This one looks like it's a potential grey area, since it's not obvious if it's intended for buildings that anyone would live in.


I worked on an embedded product that was leased to customers (not sold). The system included GPLv3 components (e.g. bash 5.x), but the company concluded that we did not need to offer source code to its customers.

The reasoning was that the users didn’t own the device. While I personally believe this is not consistent with recent interpretations of the license by the courts, I think they concluded it was worth the risk of a customer suing to get the source code, as the company could then pull the hardware and leave that customer high and dry. It is unlikely any of their users would risk that outcome.


Take a look at their customer testimonials [0] and ask yourself if they have recently made anticompetitive or user-hostile moves. Now, ask yourself: do you think they like being beholden to a license that makes it harder for them to keep their monopolies?

[0]: https://ubuntu.com/pro/

Edited to add: it would be cool if, instead of the top-most wealth-concentrators F[500:], there was an index of the top-most wealth-spreaders F[:500]. What would that look like? A list of cooperatives?


As long as nobody sues them everything is fine


If that's really the case, I wish they would just come out and say it and spare the rest of us the burden of trying to debate such a decision on its technical merits. (Of course, I am aware that they owe me nothing here.)

Assuming this theory is true then, what other GPLv3-licensed "core" software in the distro could be next on their list?


I doubt GPL version 3 is the motivation here.

https://packages.ubuntu.com/plucky/rust-coreutils

The dependencies of rust-coreutils list libgcc-s1, which is GPL version 3.


This isn't anything specific to uutils. When you build a Rust program that links with glibc, it needs to use libgcc to do the stack unwinding. If you look at other packaged Rust programs on Ubuntu, they all depend on libgcc for this reason. For example, Eza https://packages.ubuntu.com/plucky/eza and Ripgrep https://packages.ubuntu.com/plucky/ripgrep . If Ubuntu moves to some safe, permissively licensed glibc replacement in the future, this requirement will drop off all their Rust packages. I'm not saying this uutils change alone will let Ubuntu get out of GPLv3 compliance, I'm saying they likely view GPLv3 software in the base install as undesirable due to their IoT customers and will replace it with a permissively licensed alternative given the opportunity.


The dependency of glibc on the unwinder (for backtrace, pthread_exit and pthread_cancel) is a glibc packaging problem. You need to plan for replacing glibc anyway because its licensing could switch to (L)GPLv3+ (including for existing stable release branches).

However, it would be a fairly straightforward project to replace the unwinder used directly by Rust binaries with the one from libunwind. Given that this hasn't happened, I'd be surprised if Canonical is actually investing into a migration. Of course there are much bigger tasks for avoiding GPLv3 software, such as porting the distribution (including LLVM itself and its users) from libstdc++ (GCC's C++ standard library that requires GCC to build, but provides support for Clang as well) to libc++ (LLVM's C++ standard library).


In this hypothetical situation, are Canonical also replacing the GPL Linux kernel? If they’re not replacing the kernel, how does anything change for the end user?


Linux is GPLv2, there is no tivoization protection. In fact most tivoized devices run Linux.


The Software Freedom Conservancy disagrees, and they are the main enforcers of the GPL these days, especially for Linux.

https://sfconservancy.org/blog/2021/mar/25/install-gplv2/ https://sfconservancy.org/blog/2021/jul/23/tivoization-and-t... https://events19.linuxfoundation.org/wp-content/uploads/2017...


Basically every IoT/router/phone/whatever device that is advanced enough runs Linux, and almost every one of them enforces firmware signing. They'd have to fight the whole world at this point.


They'll be doing that, I expect; they are starting by making it possible for anyone to sue over GPL compliance, not just the authors.

https://sfconservancy.org/copyleft-compliance/vizio.html


If it was only for that, they could use/improve busybox, which has the same license as the kernel (GPLv2).

Perhaps it is also so they can be used in closed source systems (I have uutils installed on my Windows system which works nicely).


Busybox is frankly a horrible user experience, and will never be a good one. Its niche is to be as small as possible, as a single static executable, while providing most tools you need to get the job done in an embedded system. Bells and whistles like a shell that's nice to use, or a vi implementation with working undo/redo, or extensive built-in documentation in the form of --help output, are non-features which would make busybox worse for its primary use case.


  This would let them sell to companies who want to only permit signed firmware images to run on their devices, which isn't allowed under GPLv3.
How is this not allowed under GPLv3?


Search for "Tivoization" and the GPLv3


Isn't preventing "tivoization" the whole point of the GPLv3?



The authors have specifically said that it’s not. They just chose Rust community licensing norms, they don’t really care about licenses.


At best this just makes them a patsy, which isn't actually better. But it also becomes pretty clear, if you pay more attention and dig into this (watch some of their interviews, etc.), that they actually DO care about the license and are splitting hairs on what that means: if they don't care themselves, but they have users who do and who would be disappointed if they went in a different direction, then they not only care, they have chosen to actively align with those specific users. Regardless, and most importantly: this is about why they have a niche and why Canonical is pushing this. If you write software in such an environment and truly don't care about the license, just YOLOing it, that level of cavalier negligence can't be rewarded with immunity from culpability for the outcomes.


That might be Canonical’s motive though.


Seriously, how on earth are you coming up with this? Time and again they debunk those silly claims but people just keep bringing this up on and on. Is it some sort of conspiracy theory?


It could be a conspiracy on the part of Canonical, sure. People have hidden motives all the time. Sometimes you have to deduce their motives from their actions, while ignoring their words.

I don't think there’s any serious evidence of it being true though. All we can see right now is that there are a surprising number of MIT-licensed packages replacing GPL-licensed packages. It could be a coincidence.


Some of us in the Enterprise and Governmental sector try hard to avoid software with viral licenses.

We sigh in relief every time a piece of software we rely upon changes to, or adds, a non-viral license such as MIT, Apache, MPL, BSD, and so on.


That's fair!


"Licensing norms"? Are people really choosing software licenses without considering the implications just because it's a "norm"?

This is gonna cause a lot of disappointment down the road.


If most of an ecosystem chooses a specific license (dual licensed in Rust's case), the simplest thing to do is choose the same license as everyone else.


Regardless of what others do, the best thing to do is to choose the best license for one’s own software. One which preserves the freedom of one’s users and the openness of one’s code.


Sadly people don’t always do what’s best. We sometimes do what other people are doing on the theory that maybe someone else has thought it through and already decided that it _is_ the best thing to do. It’s not perfect, but then heuristics rarely are. But it’s cheap to implement.


This may be reasonable if you're writing a library but not for applications.


Considering how often MIT is chosen over the slightly simpler ISC version... yeah.

In the end, a lot of people are willing to write open source just for the sake of having it as it scratches their own need and isn't otherwise monetizable or they just think it should exist. I would never even consider touching a GPLv3 licensed UI library component, for example.

It's not always the most appropriate license and if a developer wants to use a permissive license, they are allowed to. This isn't an authoritarian, communist dictatorship, at least it isn't where I live and to my dying breath won't be.


Of course it's allowed. People can do whatever they want. If they think it over, consider the implications of what they are doing and decide that this is what they want, then by all means.

Choosing licenses due to peer pressure is completely stupid though. If you're not sure, you can just not pick a license at all. Copyright 2025 all rights reserved. If you must pick a license just because, then the reasonable choice is the strongest copyleft license available, simply because it maximizes leverage. The less you give away, the more conditions, the more leverage. It's that simple.

That people are actually feeling "pressure" to pick permissive licenses leads me to conclude this is a psyop. It's a wealth transfer, from well meaning developers straight into the pockets of corporations. It's being actively normalized so that people choose it "by default" without thinking. Yeah, just give it all away! Who cares, right?

I urge people to think about what they are doing.


I made some open source software myself and my desire is to see my code used as widely as possible.

So the ONLY reasonable choice for me is to release my code with a non-viral license. A copyleft license is TOTALLY UNREASONABLE for me because it limits the reach of my software.

(My license of choice is MPL-2.0)


Have you thought about it? If you have given it serious thought and decided that this is what you want, then by all means, go ahead.

The problem is that people choose permissive licenses to be "nice" when the truth is they have tons of unwritten rules and hidden assumptions. Magical thinking like "if I publish this open source software then it will come back to me in some way, maybe a job, maybe a sponsorship." No such deal exists. Then they wake up one day with corporations making billions off of their software while they're not making even one cent, and they suddenly have a very public meltdown where they bitterly regret their decisions. I've seen it happen, even with copyleft licenses.


When I publish something under MIT/ISC, it's generally, I wrote this to solve a problem/need, if anyone else finds it useful, cool. Use it for whatever you like.

If I'm writing something I intend or might intend to monetize later or otherwise don't want to have privatized, I'll probably reach for GPLv3, AGPL or a different license. The less "whole" a thing is, the more likely I'm going to use a more permissive license than not. Libraries or snippets of code are almost always going to be permissive at least from me. This includes relatively simple CLI utils.


I like how the first comment is asking "is anyone actually going to switch to this version?" and here we are with one of the major Linux distributions already using it, and it has already managed to ship a bug via it.

Brave of them to ship a Rust port of sudo as well.


It looks like we have three major open source implementations:

- GNU coreutils (GPLv3)

- uutils coreutils (MIT)

- busybox (GPLv2)


There's the BSD coreutils too.


Which is the first and canonical one. GNU added a lot of bells and whistles, and esp. nonsense like --help and --version support for everything, like true and false.



Agreed. Proprietary tools could then rely on those coreutils without any license fears.


I've had the same suspicion since I read about it the first time.


Yeah, if this is not upstreamed eventually, it will have to be rewritten again.


Is it not just yet another Rust rewrite?


If you're the maintainer of OpenBSD, then implementing coreutils in a given language is a necessary requirement for it to be considered a viable systems language: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2



"Denial of service"

In sort command

Is this the best they could come up with?


In some cases it was possible to crash (overflow) sort.c, not just DoS it. I did try to look more into the issue - it was not handled for quite some time, however I did not find any real-world impact.


Minor correction, but that bug was never in any "official" coreutils release. The bug was in a multi-byte character patch that many distributions use (and still use). There have been other CVEs in that patch [1].

But the worst you can do is crash 'sort' with that. Note that uutils also has crashes. Here is one due to unbounded recursion:

  $ ./target/release/coreutils mkdir -p `python3 -c 'print("./" + "a/" * 32768)'`
  Segmentation fault (core dumped)
Not saying that these issues don't deserve fixing. But I wouldn't really panic over either of them.

[1] https://lwn.net/Articles/535735/


Didn't that bug get fixed before it went public?


They weren't written in Rust. But I wonder why the borrow checker wouldn't catch the date bug...


> where date ignores the -r/--reference=file argument

This has nothing to do with memory ownership, so borrow checker is irrelevant. Ubuntu just shipped before that argument's handling was implemented.


This should give you the necessary background

https://discourse.ubuntu.com/t/carefully-but-purposefully-ox...


To summarize, Jon Seager from Canonical says it’s for safety and resilience.

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.


It wasn’t… “safe”


It seems like I'm probably preaching to the choir, but what really is the attack surface with coreutils? I can't imagine there have been a lot of pwns as a result of the `date` command.


Untrusted input is often stored in files. Coreutils tools are often used to operate on those files.

As an obvious example, I sometimes download files from the Internet, then run coreutils sha256sum or the like on those files to verify that they're trustworthy. That means they're untrusted at the time where I use them as input to sha256sum.

If there's an RCE in sha256sum (unlikely, but this is a thought experiment to demonstrate an attack vector), then that untrusted file can just exploit that RCE directly.

If there's a bug in sha256sum which allows a malicious file to manipulate the result, then a malicious file could potentially make itself look like a trusted file and therefore get past a security barrier.

Maybe there's no bug in sha256sum, but I need to base64 decode the file before running sha256sum on it, using the base64 tool from coreutils.

If you use your imagination, I'm sure you yourself can think up plenty more use cases where you might run a program from GNU coreutils against untrusted user input. If it helps, here's a Wikipedia article which lists all commands from GNU coreutils: https://en.wikipedia.org/wiki/GNU_Core_Utilities#Commands

EDIT: To be clear, this comment is only intended to explain what the attack surface is, not to weigh in on whether rewriting the tools in Rust improves security. One could argue that it's more likely that the freshly rewritten sha256sum from uutils has a bug than that GNU sha256sum has a bug. The statement "tools from coreutils are sometimes used to operate on untrusted input and therefore have an attack surface worth exploring" is not the same as the statement "rewriting coreutils in Rust improves security". Personally, I'm excited for the uutils stuff, but not primarily because I believe it alone will directly result in significant security improvements in Ubuntu 25.10.


But if there is a bug in the date command that prevents security updates from being installed, you've got your vulnerability right there.

Rust is not a silver bullet.


It's not really a bug in uutils. The option was not implemented yet when Ubuntu decided to switch. It's known that there's no 100% compatibility and won't be for a while.


Can you show a post from an influential figure in the Rust community that literally said "Rust is a silver bullet", please?


Please read my edit.


To play devil's advocate, who knows what kind of madness people are handing off to subprocess.run(["date"]) et al. They shouldn't, but I'd bet my last dollar it's out there.


You don't attack coreutils. You attack the scripts. In this case it was an update script that failed because of an incompatibility. It's not too hard at all to imagine one failing in an exploitable way.

Honestly, Rust-related hilarity aside, this project was a terrible, terrible idea. Unix shell environments have always been ad hoc and poorly tested, and anything that impacts compatibility is going to break historical code that may literally be decades old.

See also the recent insanity of GNU grep suddenly tossing an error when invoked as "fgrep". You just don't do that folks.


> See also the recent insanity of GNU grep suddenly tossing an error when invoked as "fgrep". You just don't do that folks.

The 'fgrep' and 'egrep' commands didn't throw errors; they just sent a warning to standard error before behaving as expected.

Those commands were never standardized, and everyone is better off using 'grep -F' and 'grep -E' respectively.


> didn't throw errors, it would just send a warning to standard error

Noted without comment. Except to say that I've had multiple scripts of my own break via "just" discovering garbage in the output streams.

> Those commands were never standardized

"Those commands" were present in v7 unix in 1979!


I think he means POSIX. I didn't check, but in some cases POSIX only covers some of the options a tool provides, not all. It's a hard lesson I learned while keeping shell scripts portable between Linux and macOS.


Yep. I was slightly incorrect in my original message, though. SUSv2 (1997) specified egrep and fgrep but marked them LEGACY. POSIX.1-2001 removed them.

The only place that doesn't support 'grep -E' and 'grep -F' nowadays is Solaris 10. But if you are still using that, you will certainly run into many other missing options.

[1] https://pubs.opengroup.org/onlinepubs/007908775/xcu/egrep.ht... [2] https://pubs.opengroup.org/onlinepubs/007908775/xcu/fgrep.ht...


"GNU grep implemented a change that breaks pre-existing scripts using a 46 year old API, but it's OK because the required workaround works everywhere but Solaris 10" seems like not a great statement of engineering design to me.


"GNU grep added a warning to inform you of the deprecation which happened 28 years ago, but only to stderr, and still works like you expect", does to me.


Meh. Look, it broke code. "Still works like you expect" is 100% false.

The deprecation argument is at least... arguable. It was indeed retired from POSIX. But needless deprecation is itself a smell in a situation where you can't audit all the code that uses it. Don't do that. It breaks stuff. It broke the updates in the linked article too. If you have an API, leave it there absent extremely strong arguments for its removal.


and yet coreutils continues to receive updates in ways that could break things.


This is not a rousing endorsement of the Unix shell environment. Maybe that should be rewritten in something else too (probably not Rust, Rust is probably not a good choice for this - but something that is designed in such a way that it is easy to test would be nice!).


There's nothing about Rust that makes things hard to test. Actually, the built-in test framework makes it easier than C. But what really matters is the public interface of those tools, and that's got an extensive test suite available. It doesn't matter which language is used internally for those tests to run.


I meant that Rust is probably not a good choice for a new shell scripting environment, not that Rust is hard to test. I was responding to the claim "Unix shell environments have always been ad hoc and poorly tested", which is a bad thing and is worth fixing in and of itself.


> not a good choice for a new shell scripting environment

Why?


> This is not a rousing endorsement of the Unix shell environment.

It's surely not. The question wasn't how to rewrite the shell environment to be more "endorseable", though.

The point is that we have a half century (!) long history of writing code to this admittedly fragile environment, with no way to audit usage or even find all the existing code (literally many of the authors are retired or dead).

So... it's just not a good place to play games with "Look Ma, I rewrote /usr/bin/date and it's safe now!" Mess with your own new environments, not the ones that run the rest of the world please.


Maybe it's more important to rewrite half a century of poorly documented and specified shell scripts that are so embedded that their existence gets in way of rewriting fundamental Unix command line utilities, than it is to rewrite those utilities themselves. Any time someone makes the claim "we shouldn't touch this code, it's fragile" that state of affairs is itself bad. Our free software source code shouldn't be some poorly understood black box that we're afraid to touch for fear of breaking something, and if it is that is something we should fix.


> Maybe it's more important to rewrite half a century of poorly documented and specified shell scripts

Sounds like a plan. Let me know when you're done, and then we can remove fgrep.


I can certainly understand it for something like sudo or for other tools where the attack surface is larger and certain security-critical interactions are happening, but in this case it really seems like a questionable tradeoff, where the benefits in this specific case are abstract (theoretically no more possibility of any memory-safety bugs) but the costs are very concrete (incompatibility issues; and possibly other, new, non-memory-safety bugs being introduced with new code).

EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.


I'd prefer it if all software was written in languages that made it as easy as possible to avoid bugs, including memory-safety bugs, regardless of whether it seems like it has a large attack surface or not.


I view `uutils` as a good opportunity to get rid of legacy baggage that might be used by just 0.03% of the community but has to sit there and it impedes certain feature adding or bug fixing.

F.ex. `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.

Less code leads to fewer bugs.


> "sudo"

Hence "doas".

OpenBSD has a lot of new stuff throughout the codebase.

No need for adding a bloated dependency (e.g. Rust) just because you want to re-implement "yes" in a "memory-safe language" when you probably have no reasons to.


A thousand badly written shell scripts might disagree.


I reported a segfault in "tac" a number of years ago.


Safer threading for performance improvements was part of it, as I understand.


   $ /usr/bin/time date
   Fri Oct 24 10:20:17 AM CDT 2025
   0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 2264maxresident)k
   0inputs+0outputs (0major+93minor)pagefaults 0swaps
Imagine how much faster it will be with threading!


Rather, think of cp, mv, install, backup. Copying files faster is still an ongoing effort, also in GNU coreutils.


When I want to do stuff in parallel at the OS level, I'd rather use processes and e.g. GNU parallel.


when your bug is fully typed


I think it's mainly that it's a fun project and Rust is a lot nicer to work with than C. You're way more likely to see modern niceties and UX improvements in these ones than the old ones.


> Rust is a lot nicer to work with than C

What? How??


Modern conveniences such as compiler support for

- Tagged unions so you can easily and correctly return "I have one of these things".

- Generics so you can reuse datastructures other people wrote easily and correctly. And a modern toolchain with a package manager that makes it easy to correctly do this.

- Compile time reference counting so you don't have to worry about freeing things/unlocking mutexes/... (sometimes also called RAII + a borrow checker).

- Type inference

- Things that are changed are generally syntactically tagged as mutable which makes it a lot easier to quickly read code

- Iterators...

And so on and so forth. Rust is in large part "take all the good ideas that came before it and put it in a low level language". In the last 50 years there's been a lot of good ideas, and C doesn't really incorporate any of them.
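To give a concrete flavor of a few items on that list, here's a small sketch (names like `Lookup` and `get_age` are invented for illustration, not from any real codebase):

```rust
use std::collections::HashMap;

// Tagged union (enum): "I have one of these things", and the
// compiler forces callers to handle every case.
enum Lookup {
    Found(u32),
    Missing,
}

fn get_age(people: &HashMap<String, u32>, name: &str) -> Lookup {
    match people.get(name) {
        Some(&age) => Lookup::Found(age),
        None => Lookup::Missing,
    }
}

fn main() {
    // `mut` syntactically tags the binding as mutable; the map's
    // type is inferred from how it's used below.
    let mut people = HashMap::new();
    people.insert("ada".to_string(), 36);

    // Iterators: compose transformations without index bookkeeping.
    let total: u32 = people.values().sum();

    match get_age(&people, "ada") {
        Lookup::Found(age) => println!("ada is {} (total {})", age, total),
        Lookup::Missing => println!("no entry"),
    }
}
```

`HashMap` itself is the "generic datastructure someone else wrote" part: it works for any key/value types without the author having written a line for yours.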


The borrow checker is better described as a compile-time rwlock, with all possible deadlocks caught as compiler errors.


It's that as well, but that part of the description doesn't capture how objects are automatically freed once the last reference to them (the owning one) is dropped.

Meanwhile my description doesn't fully capture how it guarantees unique access for writing, while yours does.


> but that part of the description doesn't catch how objects are automatically freed once the last reference to them (the owning one) is dropped.

You're confusing the borrow checker with RAII.

Dropping the last reference to an object does nothing (and even the exclusive &mut is not an "owning" reference). Dropping the object itself is what automatically frees it. See also Box::leak.
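The distinction can be shown in a few lines (a sketch; `Resource` and the atomic drop counter are invented here to make the behavior observable):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static DROPS: AtomicU32 = AtomicU32::new(0);

struct Resource;

impl Drop for Resource {
    fn drop(&mut self) {
        // RAII: runs when the *owning* value goes out of scope.
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn drops_so_far() -> u32 {
    DROPS.load(Ordering::SeqCst)
}

fn main() {
    {
        let owner = Resource;
        {
            let _borrow = &owner; // a shared borrow
        } // the borrow ends here: nothing is freed
        assert_eq!(drops_so_far(), 0);
        // The borrow checker's only job was to guarantee no reference
        // to `owner` could outlive the end of the enclosing scope.
    } // `owner` goes out of scope: Drop (RAII) frees it
    assert_eq!(drops_so_far(), 1);
    println!("ok");
}
```

So the freeing itself is RAII/Drop, keyed to the owner's scope; borrows ending on their own never trigger it.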


No, I'm rather explicitly considering the joint behavior of the borrow checker and RAII.

With only RAII you don't get the last reference part.

Yes, there are exceptions; it's a roughly correct analogy, not a precise description.


I agree with your points, except that "compile time reference counting" is not a thing. I'm not sure what that would even mean. :-)


The borrow checker tracks whether there are one, more than one, or no references to a pointer at any particular time, and Rust automatically drops the object when that last reference (the owning one) goes away. Sounds like compile-time reference counting to me :P

I didn't invent this way of referring to it, though I don't recall who I stole it from. It's not entirely accurate, but it's a close enough description to capture how rust's mostly automatic memory management works from a distance.

If you want a more literal interpretation of compile time reference counting see also: https://docs.rs/static-rc/0.7.0/static_rc/


So the problem here is that it is almost entirely wrong. There is no reference count anywhere in the borrow checker’s algorithm, and you can’t do the things with borrows that you can do with reference counting.

It’s just not a good mental model.

For example, with reference counting you can convert a shared reference to a unique reference when you can verify that the count is exactly 1. But converting a `&T` to a `&mut T` is always instantaneous UB, no exceptions. It doesn’t matter if it’s actually the only reference.

Borrows are also orthogonal to dropping/destructors. Borrows can extend the lifetime of a value for convenience reasons, but it is not a general rule that values are dropped when the last reference is gone.
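To make the count-is-exactly-1 point concrete: with actual runtime reference counting (`Rc`), that shared-to-unique upgrade exists as a checked standard-library API, which plain borrows have no equivalent for (a minimal sketch):

```rust
use std::rc::Rc;

fn main() {
    let mut shared = Rc::new(vec![1, 2, 3]);

    // Rc::get_mut succeeds only when the runtime count is exactly 1.
    if let Some(v) = Rc::get_mut(&mut shared) {
        v.push(4); // unique access, verified at runtime
    }

    let second = Rc::clone(&shared); // count is now 2
    // With two live handles, get_mut refuses to hand out &mut:
    assert!(Rc::get_mut(&mut shared).is_none());
    drop(second);

    println!("{:?}", shared); // [1, 2, 3, 4]
}
```

Nothing like this is expressible by transmuting a `&T`; the checked count is what makes the upgrade sound.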


There is a reference count in the algorithm in the sense that the algorithm must keep track of the number of live shared borrows derived from a unique borrow or owned value, so that it knows when it becomes legal to mutate it again (i.e. when that number goes to zero) or whether there are still outstanding ones.

Borrow checking is necessary for dropping and destructors in the sense that without it we could drop an owned value while we still have references to it and get a use-after-free. RAII in Rust only works safely because we have the borrow checker reference counting for us, telling us when it's again safe to mutate (including drop) owned values.

Yes, Rust doesn't support going from an &T to an &mut T, but it does support going from a <currently immutably borrowed reference to T> to a <mutable reference to T>, in the shape of going from an &mut T which is currently immutably borrowed to an &mut T which is not borrowed. It can do this because it keeps track of how many shared references are derived from the mutable reference.

You're right that it's possible to leak the owning reference so that the object isn't freed when the last reference is gone - but it's possible to leak a reference in a runtime reference counted language too.

But yes, it's not a perfect analogy, merely a good one. It's most likely that the implementation doesn't just keep a count of references, for instance, but a set of them, to enable better diagnostics and more efficient computation.
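The "mutation becomes legal again once the live shared borrows go to zero" behavior is easy to demonstrate (a sketch; whether "counting" is the right mental model for how the compiler decides this is exactly what's disputed in this subthread):

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    let a = &data; // shared borrows derived from the owned value
    let b = &data;
    assert_eq!(a.len() + b.len(), 6);
    // While `a` and `b` are still in use, `data.push(4)` would not
    // compile: "cannot borrow `data` as mutable because it is also
    // borrowed as immutable".

    // After their last use (non-lexical lifetimes), the compiler knows
    // no shared borrows remain live, so mutation is allowed again:
    data.push(4);
    assert_eq!(data, vec![1, 2, 3, 4]);
    println!("{:?}", data);
}
```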


You are reiterating the same points, and they are still wrong, I’m sorry.


I think Rust speaks to people who don't "play" with their code during development. Moving stuff around, commenting things out, etc. When I try to do this in Rust, the borrow checker instantly complains because $something violates $some_rule. I can't say "yeah I know but just for now let's try it out this way and if it works I'll do it right".

I work this way and that's why I consider Rust to be a major impediment to my productivity. Same goes for Python with its significant whitespace which prevents freely moving code around and swapping code blocks, etc.

I guess there are people who plan everything in their mind and the coding part is just typing out their ideas (instead of developing their ideas during code editing).


That might be true. In my case, it is precisely because I do play a lot with my code, doing big 2-day refactors sometimes too. With Rust, when it finally compiles, it very often tends to run without crashing, and often correctly too, saving me a lot of debugging.

But it's also because of all the things I'm forced to fix while implementing or refactoring, that I would've been convinced were correct. And I was proven wrong by the compiler, so, many, times, that I've lost all confidence in my own ability to do it correctly without this kind of help. It helped me out of my naivety that "C is simple".


You eventually don't even think about the borrow checker, writing compiling code becomes second nature, and it also has the side effect of encouraging good habits in other languages.


> I guess there are people who plan everything in their mind and the coding part is just typing out their ideas (instead of developing their ideas during code editing).

I don't think there are, I think Gall's law that all complex systems evolve from simpler systems applies.

I play with code when I program with Rust. It just looks slightly different. I deliberately trigger errors and then read the error message. I copy code into scratch files. I'm not very clever; I can't plan out a nontrivial program without feedback from experiments.


I enjoy the ability to do massive refactors and once it builds it works and does the expected. There are so few odd things happening, no unexpected runtime errors.

I've written probably tens of thousands of lines each in languages like C, C++, Python, Java and a few others. None has been as misery-free. I admit I haven't written Haskell, but it still doesn't seem very approachable to me.

I can flash a microcontroller with new firmware and it won't magically start spewing out garbage on random occasions because the compiler omitted a nullptr check or because there's an off-by-one error in some odd place. None. Of. That. Shit.


So many ways it's hard to list them. Better tooling, type system, libraries, language features, compile-time error checking.

I'm a bit surprised that you are surprised by this. I sometimes think Rust emphasizes memory safety too much - like some people hear it and just think Rust is C but with memory safety. Maybe that's why you're surprised?

Memory safety is a huge deal - not just for security but also because memory errors are the worst kind of bug to debug. If I never have to chase another memory safety bug that corrupts some data, but only in release mode... Those bugs take an enormous amount of time to deal with.

But Rust is really a great modern language that takes all the best ideas from ML and C, and adds memory safety.

(Actually multithreading bugs might be slightly worse but Rust can help there too!)


no, just the usual... people want to rewrite stuff in Rust "just because". it's getting annoying.


Other people are allowed to do whatever they want.


Yes, people are allowed to do stupid things, but other people are also allowed to call them out on those stupid things. Especially if they make other people's lives harder, like in this case.


Whether it’s “stupid” remains to be seen. I personally would not have made this choice at this point in time, but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.


> but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.

But the fact that program X was written in Rust is, on the other hand, newsworthy? And there is nothing odd in the fact that the first advertised property of the software is that it was made in Rust.

Yeah, nothing odd there.


> to consider “Rust program has a bug” to be newsworthy is… odd.

That's not why it is newsworthy though.

"A project reimplementing core OS programs for the sake of reimplementing in the favourite language breaks stable OS", is what makes it newsworthy.


Having unimplemented features makes a thing stupid?


Rewriting already perfectly working and feature-complete software and advertising your version as a superior replacement while neglecting to implement existing features that users relied on is pretty stupid, yes.


Need I remind you what coreutils or Linux (re)implement?


Replacing a program which implements these features and is a core foundation of the OS with one that doesn't, to mock people not using the latest language, is.


certainly! I didn't say otherwise.

however once software that has been rewritten only for the sake of being written in Rust starts affecting large distributions like Ubuntu, that's a different issue...

however one could argue that Ubuntu picking up the brand new Rust-based coreutils instead of the old one is a 2nd order effect of "let's rewrite everything in Rust, whether it makes sense or not"


Nobody - least of all the authors of uutils - is forcing Ubuntu to adopt this change. I personally feel like it's a pretty radical step on Ubuntu's part, but they are free to make such choices, and you are free to not use Ubuntu if you believe it is detrimental.

There's no "however" here. Rewriting anything in Rust has no effect on anybody by itself.


So rewrites in Rust can happen as long as they don't have practical usage?

This isn't something that affected Ubuntu. It's something Ubuntu wanted to test in day to day usage.


ideally stupid rewrites never happen, but alas...


Iirc it started as a simple exercise. It just aimed at high compatibility with the original coreutils.

Which part of that is stupid? The license was chosen because Rust is more friendly to static linkage. Which leaves the exercise part, or the high compatibility.

You might as well say Linux is a stupid rewrite that will never achieve anything circa 1998.


what purpose does the new coreutils serve other than being written in Rust?


> we’re releasing an s3 compatible self hosted service for free

> nice

> we’re releasing coreutils rewritten in a memory safe language for free

> how dare you!


How dare these software engineers working in their free time not do it in a way I agree with >:(


coreutils once refused to add more options, like cp/mv --progress, which would have been extremely useful.

They've added internal features though, like better hardware support in copying and moving files.
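For the record, the usual coreutils-only workaround for the missing cp/mv --progress is dd's status=progress (present since GNU coreutils 8.24); the paths here are illustrative:

```shell
# dd (also part of coreutils) can report progress during a copy,
# which many people use in place of the cp --progress that was refused:
printf 'hello' > /tmp/dd_src
dd if=/tmp/dd_src of=/tmp/dd_dst status=progress 2>/dev/null
cat /tmp/dd_dst   # the copied file is byte-identical
```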


Absolutely nothing.

But systemd projects and Rust rewrites have this one thing in common: being pure virtue signaling, they absolutely have to be noticed. And what's a better way to get noticed than going for something important and core?

To me, Rust rewrites look like "just stop oil" road blocks - the more people suffer, the better.

PS: Disclaimer: I love Rust. I hate fanboys.


> To me, Rust rewrites look like "just stop oil" road blocks - the more people suffer, the better.

Then blame Canonical? Quit it with the Rust hate.


rust coreutils had zero chance of being even noticed until canonical decided to replace working tools with this.


It wasn't rewritten in rust yet. Therefore it wasn't complete. /s


This joke is getting more worn out than ‘fizz buzz saas in rust’ hn post titles have ever been


Yeah. Except it isn't a joke. Rustafarians are dead serious.


Some actual links would be nice. I have not seen a Rust zealot in HN for like at least 3 years at this point. (On Reddit I've seen plenty, but who takes Reddit seriously?)



They hadn't implemented the `-r` flag of `date`... But worse than that, they didn't squawk about the unimplemented flag (because the interface was already accepting it...). This is an incompetent implementer (and project management?)
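To illustrate the flag in question (GNU date on Linux; the file path is illustrative): -r FILE prints the file's last modification time, so an implementation that parses -r but ignores it silently prints the current time instead.

```shell
# Create a file with a known mtime, then ask date for it:
touch -d '2000-01-01 00:00:00 UTC' /tmp/date_r_demo
TZ=UTC date -r /tmp/date_r_demo +%Y   # GNU date prints: 2000
# An implementation that accepts -r but ignores it would print the
# current year here, with exit status 0 -- no error at all.
```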


Not enough Rust.

The thought of rewriting anything as intricate, foundational, and battle-tested as GNU coreutils from scratch scares me. Maybe I'd try it with a mature automatic C-to-Rust translator, but I would still expect years of incompatibilities and reintroduced bugs.

See also the "cascade of attention-deficit teenagers" development model.


FWIW, GNU coreutils is itself a rewrite of stuff that existed before, and which has been rewritten multiple other times.


Eh. People have written replacements for glibc because they didn't like something or another about it, and that seems to me to be way more fraught with risk than coreutils.


Folks also run into compatibility issues with musl as well. The biggest I recall was DNS breaking because musl didn't fall back to TCP for large DNS responses (it only added that in 1.2.4).


TBF DNS handling of glibc is crazy.


Fair enough. My gut sense is that C functions are simpler than shell commands, with a handful of parameters rather than a dozen or more flags, and this bug supports that -- they forgot to implement a flag in "date." But I haven't tried to do either, so I could be wrong.


> The thought of rewriting anything as intricate, foundational, and battle-tested as GNU coreutils from scratch scares me. Maybe I'd try it with a mature automatic C-to-Rust translator, but I would still expect years of incompatibilities and reintroduced bugs.

It is extremely bad that it's not a relatively straightforward process for any random programmer to rewrite coreutils from scratch as a several-week project. That means that the correct behavior of coreutils is not specified well enough and it's not easy enough to understand it by reading the source code.


Not to be too harsh, but if that’s your (fundamentalist) attitude to software, remind me to argue strenuously to never have you hired where I work. Fact is you can’t rewrite everything all the time, especially the bits that power the core of a business, and has for a decade or more. See banking and pension systems, for instance.


I think you're completely missing the point. The problem being solved is not that coreutils is bad and thus they should be rewritten, the problem is that coreutils is not specified well enough to make new implementations straight-forward. Thus a new implementation written from scratch is tremendously valuable for discovering bugs and poorly documented / unspecified behavior.

For a business it's often fine to stop at a local maximum, they can keep using old versions of coreutils however long they want, and they can still make lots of money there! However we are not talking about a business but a fundamental open source building block that will be around for a very long time. In this setting continuous long term improvement is much more valuable than short term stability. Obviously you don't want to knowingly break stability either, and in this regard I do think Ubuntu's timeline for actually replacing the default coreutils implementation is too ambitious, but that's beside the point—the rewrite itself is valuable regardless of what Ubuntu is doing!



