
I’m excited to see how this turns out. I work with Go every day and I think Zig's Io corrects a lot of Go's mistakes. One thing I am curious about is whether there is any plan for channels in Zig. In Go I often wish IO had been implemented via channels. It’s weird that there’s a select keyword in the language, but you can’t use it on sockets.


Wrapping every IO operation into a channel operation is fairly expensive. You can get an idea of how fast it would work now by just doing it, using a goroutine to feed a series of IO operations to some other goroutine.

It wouldn't be quite as bad as the perennial "I thought Go was fast, why is it slow when I spawn a full goroutine and multiple channel operations to add two integers together a hundred million times" question, but it would still be a fairly expensive operation. See also the fact that Go already had a way to get fairly sensible iteration semantics before the recent iterator support was added: doing a range across a channel... as long as you don't mind paying for a full channel operation and an internal context switch for every single thing being iterated, which in fact quite a lot of us do mind.
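
A rough sketch of that idiom (hypothetical names, nothing from any particular library): every value produced costs a channel send, a channel receive, and a goroutine switch.

    package main

    // Pre-iterator Go idiom: iterate by ranging over a channel.
    // Each yielded value pays for a channel send, a channel receive,
    // and a goroutine switch, which is the overhead described above.
    func ints(n int) <-chan int {
        ch := make(chan int)
        go func() {
            defer close(ch)
            for i := 0; i < n; i++ {
                ch <- i
            }
        }()
        return ch
    }

    func main() {
        sum := 0
        for v := range ints(1000) {
            sum += v
        }
        _ = sum
    }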

(To optimize pure Python, one of the tricks is to make sure you get the maximum value out of each of the relatively expensive individual operations Python does. For example, it's already handling exceptions on every opcode, so you can sometimes win by using exceptions cleverly to skip running some code selectively. Go channels are similar: they're relatively expensive, on the order of dozens of cycles, so you want to make sure you're getting sufficient value for that. You don't have to go super crazy, they're not a millisecond per operation or anything, but you do want to get value for the cost, either by moving a non-trivial amount of work through them or by taking advantage of their many-to-many coordination capability. IO often involves moving around small byte slices, perhaps even a single byte, and that's not good value for the cost. Moving kilobytes at a time through them is generally pretty decent value, but not all IO looks like that, and you don't want to write that into the IO spec directly.)
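
As a concrete (if hypothetical) sketch of the "get value for the cost" point: push buffers through the channel, not bytes, so the fixed per-operation cost is amortized.

    package main

    import (
        "bytes"
        "io"
    )

    // Hypothetical sketch: move IO through a channel in kilobyte-sized
    // chunks rather than byte by byte, so each channel operation carries
    // enough work to be worth its cost.
    func produce(r io.Reader, out chan<- []byte) {
        defer close(out)
        for {
            buf := make([]byte, 4096)
            n, err := r.Read(buf)
            if n > 0 {
                out <- buf[:n] // one channel op per ~4 KiB, not per byte
            }
            if err != nil {
                return
            }
        }
    }

    func main() {
        ch := make(chan []byte)
        go produce(bytes.NewBufferString("some input"), ch)
        total := 0
        for chunk := range ch {
            total += len(chunk)
        }
        _ = total
    }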


> One thing I am curious about is whether there is any plan for channels in Zig.

The Zig std.Io equivalent of Golang channels is std.Io.Queue[0]. You can do the equivalent of:

    type T interface{}

    fooChan := make(chan T)
    barChan := make(chan T)

    select {
    case foo := <- fooChan:
        // handle foo
    case bar := <- barChan:
        // handle bar
    }
in Zig like:

    const T = void;

    // (The queues are left undefined here for brevity; they would need
    // to be initialized with real storage before use.)
    var foo_queue: std.Io.Queue(T) = undefined;
    var bar_queue: std.Io.Queue(T) = undefined;

    // Start both receives as futures, and cancel whichever is still
    // pending when we leave this scope.
    var get_foo = io.async(Io.Queue(T).getOne, .{ &foo_queue, io });
    defer get_foo.cancel(io) catch {};

    var get_bar = io.async(Io.Queue(T).getOne, .{ &bar_queue, io });
    defer get_bar.cancel(io) catch {};

    // select blocks until one of the futures has a result ready and
    // returns which one, tagged with the field names given here.
    switch (try io.select(.{
        .foo = &get_foo,
        .bar = &get_bar,
    })) {
        .foo => |foo| {
            // handle foo
        },
        .bar => |bar| {
            // handle bar
        },
    }
Obviously not quite as ergonomic, but the trade-off is really interesting: you can use any IO runtime, and do this style of concurrency without a runtime garbage collector.

[0] https://ziglang.org/documentation/master/std/#std.Io.Queue


Have you tried Odin? It's a great language that's also a “better C” but takes more Go inspiration than Zig.


Second vote for Odin but with a small caveat.

Odin doesn't (and, according to its creator, never will) implement specific concurrency strategies. No async, coroutines, channels, fibers, etc. The creator sees concurrency strategy (as well as memory management) as something higher-level than what he wants the language to be.

Which is fine by me, but I know lots of people are looking for "killer" features.


Odin has completely replaced Go for me, after using Go since its inception.

Wonderful language!


Is there a GC/equivalent, or you do manual memory management?

There's a GC library around somewhere, but I doubt anyone uses it. Manual memory management is generally quite simple, as long as you aren't using archaic languages.

https://www.youtube.com/watch?v=xt1KNDmOYqA is worth a watch.


At least Go didn't take the dark path of async/await keywords. In C# that is a real nightmare: you're forced into sync-over-async anti-patterns unless you're willing to rewrite everything. I'm glad Zig took this "colorless" approach.


Where do you think the Io parameter comes from? If you change some function to do something async, suddenly it requires an Io instance. I don't see the difference between having to modify the call tree to be async and having to modify the call tree to pass in an Io token.


Synchronous Io also uses the Io instance now. The coloring is no longer "is it async?", it's "does it perform Io?"

This allows library authors to write their code in a manner that's agnostic to the Io runtime the user chooses: synchronous, threaded, evented with stackful coroutines, or evented with stackless coroutines.
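
A rough Go analogy of that style, with a hypothetical interface that is not Zig's actual std.Io API: the library only ever talks to an abstract handle supplied by the caller, and never hard-codes how the IO is performed.

    package lib

    // Hypothetical sketch, not Zig's std.Io: the library depends only on
    // an abstract handle supplied by the caller, so the same code runs
    // under a blocking, threaded, or event-loop implementation.
    type IO interface {
        Read(p []byte) (int, error)
        Write(p []byte) (int, error)
    }

    // Library code stays agnostic to which implementation it is given.
    func WriteGreeting(io IO) error {
        _, err := io.Write([]byte("hello"))
        return err
    }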


The interesting question was always “does it perform IO”.


Rust also allows writing async code that is agnostic to the async runtime used. Subsuming async under Io doesn't change much imo.


Except that now your library code has lost context on how it runs. If you meant it to be sync and the caller gives you a multi-threaded Io, your code can fail in unexpected ways.


How so? Aside from regular old thread safety issues that is.


This is exactly the problem: thread safety. A function being supplied with a std.Io needs to understand what implementation is being used so it can take precautions with thread safety, in case a std.Io.Threaded is passed in. What if the function was designed with synchrony in mind? How do you prevent it from paying a penalty guarding against a threaded version of IO?


The function being called has to take thread safety into account anyway, even if it doesn't do IO. This is an entirely orthogonal problem, so I can't really take it seriously as a criticism of Zig's approach. Libraries in general need to be designed to be thread-safe, or document otherwise, regardless of whether they do IO, because a calling program could easily spin up a few threads and call them multiple times.

> What if this function was designed with synchrony in mind, how do you prevent it taking a penalty guarding against a threaded version of IO?

You document it and state that it will take a performance penalty in multithreaded mode? The same as any other library written before this point.


One of the harms Go has done is to make people think its concurrency model is at all special. “Goroutines” are green threads and a “channel” is just a thread-safe queue, which Zig has in its stdlib https://ziglang.org/documentation/master/std/#std.Io.Queue


A channel is not just a thread-safe queue. It's a thread-safe queue that can be used in a select call. Select is the distinguishing feature, not the queuing. I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.

Of course, even if that exact queue is not itself selectable, you can still implement a Go channel with select capabilities in Zig. I'm sure one exists somewhere already. Go doesn't get access to any magic CPU opcodes that nobody else does. And languages (or libraries, in languages where that is possible) can implement more capable "select" variants than Go ships with, ones that can select on more types of things (although not necessarily for "free", depending on exactly what is involved). But it is more than a queue, which is also why Go channel operations are a bit on the expensive side: they implement more functionality than a simple queue.
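
As one concrete illustration that there's no magic involved, Go's own standard library exposes a dynamic variant of select, reflect.Select, which works over a list of channels assembled at runtime (a minimal sketch):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        a := make(chan int, 1)
        b := make(chan int, 1)
        a <- 42

        // Build the select cases at runtime instead of in a static
        // select statement; this works for any number of channels.
        cases := []reflect.SelectCase{
            {Dir: reflect.SelectRecv, Chan: reflect.ValueOf(a)},
            {Dir: reflect.SelectRecv, Chan: reflect.ValueOf(b)},
        }
        chosen, recv, ok := reflect.Select(cases)
        fmt.Println(chosen, recv.Interface(), ok) // 0 42 true
    }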


> I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.

Thanks for giving me a reason to peek into how Zig does things now.

Zig has a generic select function[1] that works with futures. As is common, Blub's language feature is Zig's comptime function. Then the Io implementation has a select function[2] that "Blocks until one of the futures from the list has a result ready, such that awaiting it will not block. Returns that index." and the generic select switches on that and returns the result. The details are unclear to me, though.

[1] https://ziglang.org/documentation/master/std/#std.Io.select

[2] https://ziglang.org/documentation/master/std/#std.Io.VTable


Getting a simple future from multiple queues and then waiting for the first one is not a match for Go channel semantics. If you do a select on three channels, you will receive a result from one of them, but you don't get any future claim on the other two channels. Other goroutines could pick them up. And if another goroutine does get something from those channels, that is a guaranteed one-time communication and the original goroutine now can not get access to that value; the future does not "resolve".

Channel semantics don't match futures semantics. As the name implies, channels are streams, futures are a single future value that may or may not have resolved yet.

Again, I'm sure nothing stops Zig from implementing Go channels in half-a-dozen different ways, but it's definitely not as easy as "oh just wrap a future around the .get of a threaded queue".

By a similar argument it should be observed that channels don't naively implement futures either. It's fairly easy to make a future out of a channel and a couple of simple methods; I think I see about 1 library a month going by that "implements futures" in Go. But it's something that has to be done because channels aren't futures and futures aren't channels.
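
A minimal sketch of that "future out of a channel" pattern (hypothetical names, ignoring errors and cancellation): resolve once, and every Get sees the same value.

    package main

    import "fmt"

    // Future is a one-shot value built from a channel: Resolve is called
    // at most once, and every Get observes the same result afterwards.
    type Future struct {
        done chan struct{}
        val  int
    }

    func NewFuture() *Future { return &Future{done: make(chan struct{})} }

    func (f *Future) Resolve(v int) {
        f.val = v
        close(f.done) // closing the channel broadcasts to every waiter
    }

    func (f *Future) Get() int {
        <-f.done // receiving from a closed channel never blocks
        return f.val
    }

    func main() {
        f := NewFuture()
        go f.Resolve(7)
        fmt.Println(f.Get()) // 7
    }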

(Note that I'm not making any arguments about whether one or the other is better. I think such arguments are actually quite difficult because while both are quite different in practice, they also both fairly fully cover the solution space and it isn't clear to me there's globally an advantage to one or the other. But they are certainly different.)


> channels aren't futures and futures aren't channels.

In my mind a queue.getOne ~= a <- on a Go channel. I don't know how you wrap the getOne call in a Future to hand it to Zig's select, but that seems like it would be a straightforward pattern once this is all done.

I really do appreciate you being strict about the semantics. Tbh the biggest thing I feel fuzzy on in all this is how go/zig actually go about finding the first completed future in a select, but other than that am I missing something?

https://ziglang.org/documentation/master/std/#std.Io.Queue.g...


"but other than that am I missing something?"

I think the big one is that a futures-based system, no matter how you swing it, lacks the characteristic that on an unbuffered Go channel (which is the common case), successfully sending is also a guarantee that someone else has picked the value up, so a send or receive event is also a guaranteed sync point. This requires some work in the compiler and runtime to guarantee, with barriers and such. I don't think a futures implementation of any kind can do this, because without those barriers being inserted by either the compiler or the runtime it is just not a guarantee you can ever have.
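
A small sketch of that rendezvous property, using nothing beyond an unbuffered channel: the send cannot complete until a receiver is actively taking the value.

    package main

    import "fmt"

    func main() {
        ch := make(chan string) // unbuffered: send and receive must rendezvous

        go func() {
            // This send blocks until main is receiving, so once it returns
            // the goroutine knows the value has actually been picked up.
            ch <- "job done"
        }()

        msg := <-ch // the receive is the same synchronization point
        fmt.Println(msg)
    }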

To which, naturally, the response in the futures-based world is "don't do that". Many "futures-based worlds" aren't even truly concurrently running on multiple CPUs where that could be an issue anyhow, although you can still end up with the single-threaded equivalent of a race condition if you work at it, though it is certainly more challenging to get there than with multi-threaded code.

This goes back to the fact that channels are actually fairly heavyweight as concurrency operations go; call it two or three times the cost of a mutex. They provide a lot, and when you need it, it's nice to have something like that, but there's also a lot of mutex use in Go code, because when you don't need it the cost can add up.
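
To make the mutex-versus-channel choice concrete, here's a hedged sketch (not a benchmark) of the same shared counter done both ways; the mutex is the cheaper tool when plain mutual exclusion is all you need, while the channel buys coordination you may not be using.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // Mutex: cheap, and enough when all you need is mutual exclusion.
        var mu sync.Mutex
        counter := 0
        mu.Lock()
        counter++
        mu.Unlock()

        // Channel plus an owning goroutine: costlier per operation, but it
        // buys select-ability and many-to-many handoff when you need them.
        incr := make(chan struct{})
        done := make(chan int)
        go func() {
            n := 0
            for range incr {
                n++
            }
            done <- n
        }()
        incr <- struct{}{}
        close(incr)
        fmt.Println(counter, <-done) // 1 1
    }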


Thanks for taking the time to respond. I will now think of Channels as queue + [mutex/communication guarantee] and not just queue. So in Go's unbuffered case (only?) a Channel is more than a 1-item queue. Also, in Go's select, I now get that channels themselves are hooked up to notify the select when they are ready?


Maybe I'm missing something, but how do you get a `Future` for receiving from a channel?

Even better, how would I write my own `Future` in a way that supports this `select` and is compatible with any reasonable `Io` implementation?


If we're just arguing about the true nature of Scotsmen, isn't "select a channel" merely a convenience around awaiting a condition?


This is not a "no true Scotsman" argument. It's the distinctive characteristic of Go channels. If all you have are threaded queues where you can call ".get()" from another thread, but that operation blocks and you can't try any other queues at the same time, then you can't write:

    select {
    case result := <-resultChan:
        // whatever
    case <-ctx.Done():
        // our context either timed out or was cancelled
    }
or any more elaborate structure.

Or, to put it a different way, when someone says "I implement Go channels in X Language" I don't look for whether they have a threaded queue but whether they have a select equivalent. Odds are that there's already a dozen "threaded queues" in X Language anyhow, but select is less common.

Again note the difference between the words "distinctive" and "unique". No individual feature of Go is unique, of course, because again, Go does not have special access to CPU opcodes that no one else can use. Select is the more defining characteristic compared to the more mundane and normal threaded queue.

Of course you can implement this a number of ways. It is not equivalent to a naive condition wait, but probably with enough work you could implement them more or less with a condition, possibly with some additional compiler assistance to make it easier to use, since you'd need to be combining several together in some manner.
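
For what it's worth, here is one hedged sketch of that "select built on a condition" idea (hypothetical types, and much less capable than real channels): two queues share one condition variable, and the waiter sleeps until either has an item.

    package main

    import (
        "fmt"
        "sync"
    )

    // A sketch of "select" over two queues built on a shared condition
    // variable: the waiter sleeps until either queue has an item, then
    // takes from whichever one is ready.
    type twoQueues struct {
        mu   sync.Mutex
        cond *sync.Cond
        a, b []int
    }

    func newTwoQueues() *twoQueues {
        q := &twoQueues{}
        q.cond = sync.NewCond(&q.mu)
        return q
    }

    func (q *twoQueues) pushA(v int) {
        q.mu.Lock()
        q.a = append(q.a, v)
        q.mu.Unlock()
        q.cond.Signal()
    }

    // selectAny blocks until either queue is non-empty and pops from it.
    func (q *twoQueues) selectAny() (string, int) {
        q.mu.Lock()
        defer q.mu.Unlock()
        for len(q.a) == 0 && len(q.b) == 0 {
            q.cond.Wait()
        }
        if len(q.a) > 0 {
            v := q.a[0]
            q.a = q.a[1:]
            return "a", v
        }
        v := q.b[0]
        q.b = q.b[1:]
        return "b", v
    }

    func main() {
        q := newTwoQueues()
        go q.pushA(1)
        fmt.Println(q.selectAny()) // a 1
    }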


It's more akin to awaiting *any* condition from a list.


What other mainstream languages have pre-emptive green threads without function coloring? I can only think of Erlang.


I'm told modern Java (loom?) does. But I think that might be an exhaustive list, sadly.


Maybe not mainstream, but Racket.


It was special. CSP wasn't anywhere near the common vocabulary back in 2009. Channels provide a different way of handling synchronization.

Everything is "just another thing" if you ignore the advantage of abstraction.


What's the harm exactly?



