> And a few days ago a security vulnerability was found in the Rust Linux kernel code.
was it a security vulnerability? I'm pretty sure it was "just" a crash. Though maybe someone smarter than me could have turned that into something more.
I have no dog in this race. I really like the idea of Rust drivers, but I can very much understand reticence about having Rust handle more core parts of the kernel, just because Rust's value seems to pay off way more in higher-level code where you have these invariants to maintain across large code paths (meanwhile, writing a bunch of doubly-linked lists in unsafe Rust seems a bit like busywork, modulo the niceties Rust itself can give you)
> was it a security vulnerability? I'm pretty sure it was "just" a crash.
It's a race condition resulting in memory corruption.[1][2] That corruption is shown to result in a crash. I don't think the implication is that it can result only in crashes, but this is not mentioned in the CVE.
Whether the ability of an attacker to crash a system counts as a vulnerability depends on your security model, I guess. In general it is not expected to happen, it stops other software from running, and it can be triggered by entities or software that should not have that level of control, so it's considered a vulnerability.
It is entertaining to observe how - after the bullshit and propaganda phase - Rust now slowly enters reality, and the excuses for problems that did not magically disappear are now exactly the same as what we saw before from C programmers, excuses which Rust proponents would have completely dismissed as unacceptable in the past ("this CVE is not exploitable", "all programmers make mistakes", "unwrap should never be used in production", "this really is an example of how fantastic Rust is").
There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs." Of course, not everybody made such claims, but some did.
Whether it is "significantly easier" to manage these types of problems and at what cost remains to be seen.
I do not understand your comment about "confirmation bias", as I did not make a quantitative prediction that could be subject to bias.
> There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs."
Safe Rust does do this. Dropping into unsafe Rust is the prerogative of the programmer who wants to take on the burden of preventing bugs themselves. Part of the technique of Rust programming is minimising the unsafe part so memory errors are eliminated as much as possible.
If the kernel could be written in 100% safe Rust, then any memory error would be a compiler bug.
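To make that concrete, here's a minimal sketch (an illustrative toy, not kernel code) of the kind of memory error safe Rust refuses to compile - a dangling reference that C would happily accept:

    fn main() {
        let r;
        {
            let s = String::from("driver state");
            r = &s; // error[E0597]: `s` does not live long enough
        } // `s` is dropped here, so `r` would dangle
        println!("{r}");
    }

The compiler rejects this outright, so the bug never reaches a running system.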
> it is not "Safe Rust" which is competing with C it is "Rust".
It is intended that Safe Rust be the main competitor to C. You are not meant to write your whole program in unsafe Rust using raw pointers - that would indicate a significant failure of Rust’s expressive power.
It's true that many Rust programs involve some element of unsafe Rust, but that unsafety is meant to be contained and abstracted, not pervasive throughout the program. That's a significant difference from how C's unsafety works.
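As a rough illustration of what "contained and abstracted" looks like (a toy function I made up, not anything from the kernel): the unsafe block sits inside a function whose safe signature callers can't misuse, so only this one spot needs auditing for memory safety:

    /// Returns the first element, or 0 for an empty slice.
    /// The `unsafe` is confined here; callers only ever see a safe API.
    pub fn first_or_zero(values: &[u32]) -> u32 {
        if values.is_empty() {
            0
        } else {
            // SAFETY: we just checked the slice is non-empty, so index 0 is in bounds.
            unsafe { *values.get_unchecked(0) }
        }
    }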
But there are more than 2000 uses of "unsafe" even in the tiny amount of Rust in the Linux kernel. And you would need to compare to C code where an equal amount of effort went into developing safe abstractions. So essentially this is part of the fallacy Rust marketing exploits: comparing an idealized "Safe Rust" scenario to real-world, resource-constrained usage of C by overworked maintainers.
The C code comparison exists because people have written DRM drivers in Rust that were of exceedingly high quality and safety compared to the C equivalents.
Even if you somehow manage to ignore the very obvious theoretical argument for why it works, the amount of quantitative evidence at this point is staggering: Rust, unsafe warts and all, substantially improves the ability of any competent team to deliver working software. By a huge margin.
This is the programming equivalent of vaccine denialism.
So kernel devs claiming Rust works isn't good enough? CloudFlare? Mozilla?
You're raising the bar to a place where no software will be good enough for you.
> Of course, this bug was in an `unsafe` block, which is exactly what you would expect given Rust's promises.
The fix was outside of any Rust unsafe blocks, which confused a lot of Rust developers on Reddit and elsewhere, since fans of Rust have often repeated that only unsafe blocks have to be checked - despite the Rustonomicon clearly spelling out that much more than the unsafe blocks might need to be checked in order to avoid UB.
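A minimal sketch of the Rustonomicon's point (a hypothetical `Buf` type, not the kernel code in question): the unsafe block below is only sound as long as every safe method in the module upholds the `len <= data.len()` invariant, so a bug in entirely safe code can make the unsafe block UB:

    pub struct Buf {
        data: Vec<u8>,
        len: usize, // module-wide invariant: len <= data.len()
    }

    impl Buf {
        pub fn new(data: Vec<u8>) -> Self {
            let len = data.len();
            Buf { data, len }
        }

        // No `unsafe` keyword anywhere here, yet this is where the soundness
        // bug lives: it can violate the invariant that `get` relies on.
        pub fn set_len(&mut self, new_len: usize) {
            self.len = new_len; // BUG: should be clamped to self.data.len()
        }

        pub fn get(&self, i: usize) -> Option<u8> {
            if i < self.len {
                // SAFETY: assumes the module-wide invariant len <= data.len(),
                // which all safe code in this module must uphold.
                Some(unsafe { *self.data.get_unchecked(i) })
            } else {
                None
            }
        }
    }

The fix for the resulting memory corruption would land in `set_len`, outside any unsafe block.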
Is it any more or less amusing, or perhaps tedious, watching the first Rust Linux kernel CVE be pounced on as evidence that "problems .. did not magically disappear"?
Does anyone involved in any of this work believe that a CVE in an unsafe block could not happen?
The TLDR is that this race condition happened with unsafe code, which was needed to interact with existing C code. This was not a vulnerability with Rust's model.
That said, you can absolutely use bad coding practices in Rust that can cause issues, even for a regular programmer.
Using unwrap without handling all the possible return cases is one example. Of course, there is a right way to deal with return values, but it's up to the programmer to follow it.
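A small sketch of the difference (a hypothetical config-reading helper, nothing kernel-specific):

    use std::fs;

    // Panics if the file is missing or unreadable - fine in a throwaway script,
    // not in long-running code.
    fn load_config_or_panic() -> String {
        fs::read_to_string("config.toml").unwrap()
    }

    // Handles the failure case explicitly instead of panicking.
    fn load_config() -> String {
        match fs::read_to_string("config.toml") {
            Ok(contents) => contents,
            Err(_) => String::new(), // fall back to an empty config
        }
    }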
My magical ideal is to be able to do this even without the sender's consent. Let me, as a reviewer, chop up half of someone's PR and then get that sent in, CI'd, approved.
Very interesting historical document, though I don't have that much confidence in the precision of the explanation of the terms.
Related to this: does anyone know if there's any document that delves into how Church landed on Church numerals in particular? I get how they work, etc, but at least the papers I saw from him seem to just drop the definition out of thin air.
Were Church numerals capturing some canonical representation of naturals in logic that was just known in the domain at the time? Are there any notes or the like that provide more insight?
Before Church there was Peano, and before Peano there was Grassmann
> It is rather well-known, through Peano's own acknowledgement, that Peano […] made extensive use of Grassmann's work in his development of the axioms. It is not so well-known that Grassmann had essentially the characterization of the set of all integers, now customary in texts of modern algebra, that it forms an ordered integral domain in which each set of positive elements has a least member. […] [Grassmann's book] was probably the first serious and rather successful attempt to put numbers on a more or less axiomatic basis.
While I don't know much about Church numerals or the theory of how lambda calculus works, glancing at the definitions on Wikipedia, they seem to be the math idea of how numbers work (at the meta level)
I forget the name of this, but they seem to be the equivalent of successors in math
In the low-level math theory you represent numbers as sequences of successors from 0 (or 1, I forget)
Basically you have one, then the successor of one which is two, the successor of two, and so on
So a number n is n successor operations from one
To me it seems Church numerals replace this successor operation with a function, but it's the same idea
Church ends up defining zero as the identity function, and N as "apply a function to a zero-unit N times"
While defining numbers in terms of their successors is decently doable, this logical jump (that works super well all things considered!) to making numbers take _both_ the successor _and_ the zero just feels like a great idea, and it's a shame to me that the papers I read from Church don't show the intuition for how he got there.
After the fact, with all the CS reflexes we have, it might be ... easier to reach this definition if you start off "knowing" you could implement everything using just functions and with some idea of not having access to a zero, but even then I think most people would expect these objects to be some sort of structure rather than a process.
There is, of course, the other possibility which is just that I, personally, lack imagination and am not as smart as Alonzo Church. That's why I want to know the thought process!
> Church ends up defining zero as the identity function
Zero is not the identity function. Zero takes a function and calls it zero times on a second argument. The end result of this is that it returns the identity function. In Haskell it would be `const id` instead of `id`.
zero := λf.λx.x
one := λf.λx.fx
two := λf.λx.f(fx)
id := λx.x
I suspect that this minor misconception may lead you to an answer to your original question!
Why isn't the identity function zero? Given that everything in lambda calculus is a function, and the identity function is the simplest function possible, it would make sense to at least try!
If you try, I suspect you'll quickly find that it starts to break down, particularly when you start trying to treat your numerals as functions (which is, after all, their intended purpose).
Church numerals are a minimal encoding. They are as simple as it possibly gets. This may not speak to Church's exact thought process, but I think it does highlight that there exists a clear process that anyone might follow in order to get Church's results. In other words, I suspect that his discovery was largely mechanical, rather than a moment of particularly deep insight. (And I don't think this detracts from Church's brilliance at all!)
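For concreteness, the successor and addition the thread keeps gesturing at look like this in the same notation as above (these are the standard textbook definitions, not something lifted from Church's papers):

succ := λn.λf.λx.f(nfx)
plus := λm.λn.λf.λx.mf(nfx)

succ just wraps one more application of f around whatever n produces, which is exactly the Peano successor picture discussed elsewhere in the thread.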
Their structural properties are similar to Peano's definition in terms of 0 and successor operation. ChatGPT does a pretty good job of spelling out the formal structural connection¹ but I doubt anyone knows how exactly he came up with the definition other than Church.
Yeah, I've been meaning to send a request to Princeton's libraries, which hold his notes, but I don't know what a good request looks like
The jump from "there is a successor operator" to "numbers take a successor operator" is interesting to me. I wonder if it was the first computer science-y "oh I can use this single thing for two things" moment! Obviously not the first in all of science/math/whatever but it's a very good idea
The idea of Church numerals is quite similar to induction. An induction proof extends a method of treating the zero case and the successor case, to a treatment of all naturals. Or one can see it as defining the naturals as the numbers reachable by this process. The leap to Church numerals is not too big from this.
Probably not possible unless you have academic credentials to back up your request like being a historian writing a book on the history of logic & computability.
I am _not_ a microservices guy (like... at all) but reading this the "monorepo"/"microservices" false dichotomy stands out to me.
I think way too much tooling assumes 1:1 pairings between services and repos (_especially_ CI work). In huge orgs Git/whatever VCS you're using would have problems with everything in one repo, but I do think that there's loads of value in having everything in one spot even if it's all deployed more or less independently.
But so many settings and workflows are coupled to the repo as a whole, so it's hard to even have a frontend and backend in the same place if the two teams manage them differently. So you end up having to mess around with N repos and can't send the one cross-cutting pull request very easily.
I would very much like to see improvements on this front, where one repo could still be split up on the forge side (or the CI side) in interesting ways, so review friction and local dev work friction can go down.
(shorter: github and friends should let me point to a folder and say that this is a different thing, without me having to interact with git submodules. I think this is easier than it used to be _but_)
I worked on building this at $PREV_EMPLOYER. We used a single repo for many services, so that you could run tests on all affected binaries/downstream libraries when a library changed.
We used Bazel to maintain the dependency tree, and then triggered builds based on a custom Github Actions hook that would use `bazel query` to find the transitive closure of affected targets. Then, if anything in a directory was affected, we'd trigger the set of tests defined in a config file in that directory (defaulting to :...), each as its own workflow run that would block PR submission. That worked really well, with the only real limiting factor being the ultimate upper limit of a repo in Github, but of course took a fair amount (a few SWE-months) to build all the tooling.
We’re in the middle of this right now. Go makes this easier: there’s a go CLI command that you can use to list a package’s dependencies, which can be cross-referenced with recent git changes. (duplicating the dependency graph in another build tool is a non-starter for me) But there are corner cases that we’re currently working through.
This, and if you want build + deploy that’s faster than doing it manually from your dev machine, you pay $$$ for either something like Depot, or a beefy VM to host CI.
A bit more work on those dependency corner cases, along with an auto-sleeping VM, should let us achieve nirvana. But it’s not like we have a lot of spare time on our small team.
* In addition, you can make your life a lot easier by just making the whole repo a single Go module. Having done the alternate path - trying to keep go.mod and Bazel build files in sync - I would definitely recommend only one module per repo unless you have a very high pain tolerance or actually need to be able to import pieces of the repo with standard Go tooling.
> a beefy VM to host CI
Unless you really need to self-host, Github Actions or GCP Cloud Build can be set up to reference a shared Bazel cache server, which lets builds be quite snappy since it doesn't have to rebuild any leaves that haven't changed.
I've heard horror stories about Bazel, but a lot of them involve either not getting full buy in from the developer team or not investing in building out Bazel correctly. A few months of developer time upfront does seem like a steep ask.
You're pointing out exactly what bothered me with this post in the first place: "we moved from microservices to a monolith and our problems went away"...
... except the problems had not much to do with the service architecture but all to do with operational mistakes and insufficient tooling: bad CI, bad autoscaling, bad oncall.
The thing is that some section of the right has convinced itself that Calibri is some DEI font. Meanwhile the rest of the world is just living life and having to deal with people getting this worked up about the default font of Microsoft Office since what, 2007?
> Standards are different. The purpose of the standard is that Alice wants her output device to be compatible with everyone else's input device and Bob wants his input device to be compatible with everyone else's output device.
I do think there's value and a lot of work in coming up with a standard that manufacturers agree on. It's a huge coordination problem, based on the idea of unlinking a standard's success with the success of, say, a hardware competitor. It's real work! And like.... HDMI is an invention, right? If that isn't then what is?
"we should have drivers for the hardware that relies on this tech" just feels like an obvious win to me though. The (short-term) ideal here is just the forum being like "yes it's good if HDMI 2.1 works on linux" and that being the end of the story
I don't have much love for the setups that mean, say, all the VGA info online is "we reverse engineered this!!!", so these groups aren't my friends - but I wouldn't succeed much at standards coordination myself
> I do think there's value and a lot of work in coming up with a standard that manufacturers agree on. It's a huge coordination problem, based on the idea of unlinking a standard's success with the success of, say, a hardware competitor. It's real work!
It's work they would be doing anyway because they all benefit from it, which is why it isn't a coordination problem. The known and effective coordination solution is a standards body. Everyone sends their representative in to hash out how the standard should work. They all have the incentive to do it because they all want a good standard to exist.
Moreover, the cost of developing the standard is a minor part of the total costs of being in the industry, so nobody has to worry about exactly proportioning a cost which is only a rounding error to begin with and the far larger problem is companies trying to force everyone else to license their patents by making them part of the standard, or using a standard-essential patent to impose NDAs etc.
> And like.... HDMI is an invention, right? If that isn't then what is?
It's not really a single invention, but that's not the point anyway.
Patenting something which is intrinsically necessary for interoperability is cheating, because the normal limit on what royalties or terms you can impose for using an invention is its value over the prior art or some alternative invention, whereas once it's required for interoperability you're now exceeding the value of what you actually invented by unjustly leveraging the value of interoperating with the overall system and network effect.
> It's work they would be doing anyway because they all benefit from it, which is why it isn't a coordination problem
HDMI: tech is shared between you and competitors, but you don't get to collect all the licensing fees for yourself
Some bespoke interface: you can make the bet that your tech is so good that you get to have control over it _and_ you get to license it out and collect all the fees
In the standards case, the standards body will still charge licensing fees, but there's an idea that it's all fair play.
Apple had its lightning cable for its iPhones. It collaborated with a standards body for USB-C stuff. Why did it make different decisions there? Because there _are_ tradeoffs involved!
(See also Sony spending years churning through tech that it tried to unilaterally standardize)
> HDMI: tech is shared between you and competitors, but you don't get to collect all the licensing fees for yourself
> Some bespoke interface: you can make the bet that your tech is so good that you get to have control over it _and_ you get to license it out and collect all the fees
Except that these are alternatives to each other. If it's your bespoke thing then there are no licensing fees because nobody else is using it. Moreover, then nobody else is using it and then nobody wants your thing because it doesn't work with any of their other stuff.
Meanwhile it's not about whether something is a formal standard or not. The government simply shouldn't grant or enforce patents on interoperability interfaces, in the same way and for the same reason that it shouldn't be possible to enforce a copyright over an API.
That's definitely a thing that happened, but it's minimising so much other important work that it's misrepresenting the whole thing.
Do you know how much bandwidth six channels of uncompressed audio needs? Home theaters would be a HUGE hassle without a single cable doing all that work for you.
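(Ballpark, assuming plain 48 kHz / 24-bit PCM: 6 channels × 48,000 samples/s × 24 bits ≈ 6.9 Mbit/s, and several times that at the 96 or 192 kHz rates home-theater formats like to use.)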
ADAT Lightpipe supports up to 8 audio channels at 48 kHz and 24 bits - all using standard off-the-shelf Toslink cables and transceivers. MADI can do significantly more.
Let's not pretend surround sound is a nearly-impossible problem only HDMI could possibly solve.
I... think you might be proving my point for me? The ability to have a single cable that can do video AND a bunch of audio channels at once is amazing for the average joe.
Don't get me wrong, I use optical in my setup at home & I'd love to have more studio & scientific gear just for the hell of it, but I'm in the minority.
I'm not trying to defend the HDMI Forum or the greedy arsehole giants behind them. The DRM built into HDMI and the prohibitive licensing of the formats (like Atmos) is a dick move and means everything is way more expensive than it needs to be. I was just pointing out that the parent's comment was reductive.
Correct! Now put that USB cable _inside_ a DVI cable, magically solve all the buffering problems that plagued the industry for several decades, slap on some DRM over the top, and you'll have HDMI 1.0 :-D
You just replied to someone who explained it was about the DRM, with "nuh-uh".
Pivot much?
The rest of the capabilities were all being done for over a decade before HDMI came out, and quite well by some companies.
Sure, firewire was typically used for video plus two channels of audio, but it's a single twisted pair, and HDMI uses four high-speed twisted pairs to transmit clock and data, plus another few pins for out-of-band signalling information.
Technically, HDMI is actually a huge failure. It wasn't until 2.1 that they started supporting compressed video.
Take a system, figure out where it has the highest possible bandwidth need, and then insert the communication cable at that point. Yeah, that's the ticket!
Before HDMI, some equipment did AV sync really well, and even after HDMI came out, some TVs still didn't do the A/V sync very well. The correct buffering for that has nothing to do with the cable, although it might seem like it because when the audio comes out of the TV, the circuits in there sure ought to be able to do delay matching.
The adoption of HDMI was, in fact, completely driven by HDCP.
I replied to someone who claimed HDMI's only purpose was DRM, which is wrong.
I haven't pivoted since the start of the thread. There simply was not a digital solution that could negotiate and then stream video AND 2+ channels of audio, all in one cable, that was supported by more than a small fraction of consumer and industry devices at once. Firewire (which you seem fixated on), for all its many technical superiorities, had almost zero market with Windows users, or consumers in general. Set-top boxes used it in the US, but it was uncommon outside of the US. Camcorders used it, but in 2002 when HDMI came out most people were still using analog camcorders IIRC; digital only really became commonplace well after HDMI gained footholds.
I'm not saying the cable itself controlled clocks and handshakes - I've been conflating terms over the last couple of comments. By "HDMI" I mean the cable, the protocol, and the connectors. And yes - HDCP had a huge part in how HDMI was pushed, which is both bad (introducing proprietary bullshit is never great) and good (larger adoption of standards that work well in the field).
Was HDMI perfect? FAR from it. But all these "there was this tech that did THIS facet better" replies are missing the point that I've stated a few times: it was a good solution to a number of small problems.
But to be fair, there is a standard that could have been used for digital video, SDI/HD-SDI, but the transceivers were expensive and it doesn't support any form of bi-directional handshake. There was already prosumer kit, mostly in the US, which had SD-SDI connections as an alternative to component. It didn't get popular in Europe mostly because of SCART.
I was once talking with someone who was very much involved in the process of standardising TV connectivity, a senior engineer at Gennum, and he said it wouldn't have been practical and SDI couldn't have been competitive with HDMI.
Oh, for sure. That and ADAT are great examples of tech that worked and worked well - and maybe even instrumental in HDMI's later adoption of optical tech in their cables.
I'm pretty sure in most places in the world if you are travelling from abroad you are asked to share your passport, and have been for a very very very very long time.
The difference between sending it over a chat and handing it over to a clerk (who then photocopies it or types in the data into the computer) feels almost academic. Though at least "Typing it into the computer" doesn't leave them with a picture, just most of the data.
> The difference between sending it over a chat and handing it over to a clerk (who then photocopies it [...]
The difference is that the paper copy is local and only accessible to the hotel (and any government employee that might come knocking).
The digital version is accessible to anyone who has access to the system, which as we know well on HN includes bureaucrats (or police) with a vendetta against you and any hacker that can manage to breach the feeble defenses of the computer storing the data. That computer isn't locked down because the information is not valuable to the person who holds it; they're paid to satisfy a record-keeping law, not maintain system security.
> at least "Typing it into the computer" doesn't leave them with a picture, just most of the data.
Agreed, except now uploading a scan is the easiest way to file the data.
I do agree that "not without a warrant" is a pretty load-bearing thing and it _should_ be tedious to get information. When a lot of info is just so easy to churn through that can activate new forms of abuse, even if from an information-theoretical point of view the information was always there.
And it's not even just about public officials. All those stories of people at Google reading their exes' emails or whatever (maybe it was FB? Still) stick with me.
I think people are overindexing on how much of this is "get more data on users".
I don't get why people believe there's a conspiracy here. There's perhaps a large tent, but "social media bad" is not a controversial opinion! "The gov't should do something about it" is more controversial, though I think it's less controversial in spaces with parents and teachers - places where people have to deal with kids.
Not that this is how things should be determined, but... I think reading this as a "get more data and track people" play feels like giving everyone involved too much credit. It really just feels like what it says on the tin here.