
Yep, this is a common misunderstanding, and the blog post itself repeats it.

The only way to "pass the file contents" would be through the standard input stream, but the script might want to use stdin like normal, so this isn't an option.


The `go run` tool will not execute (or even recognize) a file that does not end in .go, so this is not good advice.

Unfortunate! Yet another little Go design decision that makes things worse for, in my opinion, no reason.

In what way does Python have more null safety than Go? Using None will cause exceptions in basically all the same places using nil will cause panics in Go, and Python similarly lacks the usual null-safe operators like traversal (?.), coalescing (??), etc.

You can abuse the falsity of None to do things like `var or ""`, but this ground gets quite shaky when real bools get involved.


The interesting thing about Go's loopvar change is that nobody was able to demonstrate any real-world code that it broke (*1), while several examples were found of real-world code (often tests) that it fixed (*2). Nevertheless, they gated it behind go.mod specifying a go version >= 1.22, which I personally think is overly conservative.

*1: A great many examples of synthetic code were contrived to argue against the change, but none of them ever corresponded to Go code anyone would actually write organically, and an extensive period of investigation turned up nothing

*2: As in, the original behavior of the code was actually incorrect, but this wasn't discovered until after the loopvar change caused e.g. some tests to fail, prompting manual review of the relevant code; as a tangent, this raises the question of how often tests just conform to the code rather than the other way around
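
To make the semantic change concrete, a minimal sketch (hypothetical code, not one of the real-world examples found during the investigation):

    package main

    import "fmt"

    func main() {
        funcs := make([]func(), 0, 3)
        for _, v := range []int{1, 2, 3} {
            // Before Go 1.22, v was a single variable shared by every
            // iteration, so all three closures printed 3. With the
            // loopvar change, each iteration gets its own v and this
            // prints 1 2 3.
            funcs = append(funcs, func() { fmt.Println(v) })
        }
        for _, f := range funcs {
            f()
        }
    }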


You certainly won't find me arguing against that change, and the conservatism is why I called it borderline. The only reason I bring it up is because of the "absolute non-negotiable" bit, which I took to probably indicate a very exacting standard lest it include most widespread languages anyways.

Yes, I think it's also a good example of how "absolute" backwards compatibility is not necessarily a good thing. Not only was the old loopvar behavior probably the biggest noob trap in Go (*), it turned out not to be what anyone writing Go code in the wild actually wanted, even people experienced with the language. Everyone seems to have: a) assumed it always worked the way it does now, b) written code that wasn't sensitive to it in the first place, or c) worked around it but never benefitted from it.

*: strongest competitor for "biggest noob trap" IMO is using defer in a loop/thinking defer is block scoped
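
For reference, a small sketch of that trap (hypothetical code):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, name := range []string{"a.txt", "b.txt", "c.txt"} {
            f, err := os.Open(name)
            if err != nil {
                continue
            }
            // Easy to assume this runs at the end of each iteration; it
            // actually runs when main returns, so every file stays open
            // until the loop (and the function) is finished.
            defer f.Close()
            fmt.Println(f.Name())
        }
    }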


Strongly agree there. IMO breaking backwards compatibility is a tradeoff like any other, and the flexibility non-hardline stances give you is handy for real-world situations.

I interpret this as asking "why can't you get the address of a value in a map?"

There are two reasons, and we could also ask "why can't you get the address of a key in a map?"

The first reason is flexibility in implementation. Maps are fairly opaque, their implementation details are some of the least exposed in the language (see also: channels), and this is done on purpose to discourage users of the language from mucking with the internals and thus making it harder for the developers of the language to change them. Denying access to internal pointers makes it a lot easier to change the implementation of a map.

The second reason is that most ways of implementing a map move the value around copiously. Supposing you could get a pointer p := &m[k] for some map m and key k, what would it even point to? Just the value position of a slot in a hash table. If you do delete(m, k) now what does it point to? If you assign m[k2] but hash(k2) == hash(k) and the map handles the collision by picking a new slot for k, now what does it point to? And eventually you may assign so many keys that the old hash table is too small and so a new one somewhere else in memory has to be allocated, leaving the pointer dangling.

While the above also apply to pointers-to-keys, there is another reason you can't get one of those: if you mutated the key, you would (with high probability) violate the core invariant of a hash table, namely that the slot for an entry is determined exactly by the hash of its key. The exact consequences of violating this would depend on the specific implementation, but they are mostly quite bad.

For comparison, Rust, with its strong control over mutability and lifetimes, can give you safe references to the entries of a HashMap in a way Go cannot.
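
For concreteness, a quick sketch of what Go lets you do instead of &m[k] (hypothetical example):

    package main

    import "fmt"

    type counter struct{ n int }

    func main() {
        m := map[string]counter{"a": {}}

        // p := &m["a"]  // compile error: cannot take the address of m["a"]
        // m["a"].n++    // compile error: cannot assign to struct field m["a"].n in map

        // Workaround 1: copy out, modify, store back.
        c := m["a"]
        c.n++
        m["a"] = c

        // Workaround 2: store pointers, so the values live in stable heap
        // allocations that the map merely references.
        mp := map[string]*counter{"a": {}}
        mp["a"].n++

        fmt.Println(m["a"].n, mp["a"].n) // 1 1
    }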


I was burnt by the mutability of keys in Go maps a few months ago. I'm not sure exactly how Go handles it internally, but it ended up with the map growing and duplicate keys in the key list when looking at it with a debugger.

The footgun was that url.QueryUnescape returned a slice of the original string if nothing needed to be unescaped, so if the original string was later modified, putting the returned string directly into the map meant the key in the map got modified too.


This sounds like a bug, whether it be in your code, the map implementation, or even the debugger. Map keys are not mutable, and neither are strings.

This shouldn't be a race condition: reads were done by taking RLock() on a mutex in a struct with the map and deferring RUnlock(); writes were similar, taking Lock() on the same mutex with a deferred Unlock(). All these functions did was get/set values in the map, operating on a struct with just a mutex and the map. Unless I have a fundamental misunderstanding of how to use mutexes to avoid race conditions, this shouldn't have been the case. This also feels a lot like an LLM response, with the Hypotheses section.
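
Roughly the shape of the code (a sketch with simplified names, not the exact code):

    package main

    import "sync"

    type store struct {
        mu sync.RWMutex
        m  map[string]string
    }

    func (s *store) get(k string) (string, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        v, ok := s.m[k]
        return v, ok
    }

    func (s *store) set(k, v string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.m[k] = v
    }

    func main() {
        s := &store{m: make(map[string]string)}
        s.set("k", "v")
        _, _ = s.get("k")
    }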

edit: this part below was originally a part of the comment I'm replying to

Hypotheses: you were modifying the map in another goroutine (do not share maps between goroutines unless they all treat it as read-only), the map implementation had some short-circuit logic for strings which was broken (file a bug report/it's probably already fixed), the debugger paused execution at an unsafe location (e.g. in the middle of non-user code), or the debugger incorrectly interpreted the contents of the map.


That just means fiber is a bad library that abuses unsafe, resulting in real bugs.

Just how are you modifying strings? Cause that's your bug to fix.

That was probably done by fiber[1]; the code specifically took the param from it in the function passed to the Get(path string, handlers ...Handler) Router function. c is the *fiber.Ctx passed by fiber to the handler. My code took the string from c.Param("name"), passed it to url.QueryUnescape, then to another function which had a mutex around setting the key/value in the map. I got the hint it was slices and something modifying the keys when I found truncated keys in the key list.

My guess is fiber used the same string for the param to avoid allocations. The fix is just to create a copy of the string with strings.Clone() to ensure it does not get mutated when it is used as a key. I understand it was an issue with my code; it just wasn't something I expected to be the case, so it took several hours and a debugger to find the root cause. It probably didn't help that a lot of the code was generated by Grok-4-Code/Sonic as a vibe coding test, when I decided to go back a few months later and try to fix some of the issues myself.

[1] https://github.com/gofiber/fiber
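
A minimal sketch of that fix (names illustrative, not fiber's exact API):

    package main

    import (
        "net/url"
        "strings"
    )

    // The point is just: clone any string whose backing bytes you don't
    // own before using it as a map key.
    func storeParam(m map[string]string, rawParam string) error {
        name, err := url.QueryUnescape(rawParam)
        if err != nil {
            return err
        }
        // If rawParam's bytes may be reused by the caller (as a framework
        // like fiber can do), QueryUnescape may return a string sharing
        // those bytes, so take a private copy before it becomes a key.
        m[strings.Clone(name)] = "seen"
        return nil
    }

    func main() {
        m := make(map[string]string)
        _ = storeParam(m, "hello%20world")
    }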


Go strings are supposed to be immutable.

I see that fiber goes behind your back and produces potentially mutable strings: https://github.com/gofiber/utils/blob/c338034/convert.go#L18
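
The trick looks roughly like this (a sketch of the general pattern, not fiber's exact code):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // bytesToString returns a string that shares b's backing array instead
    // of copying it (Go 1.20+). It saves an allocation, but the "immutable"
    // string now changes whenever b does.
    func bytesToString(b []byte) string {
        return unsafe.String(unsafe.SliceData(b), len(b))
    }

    func main() {
        buf := []byte("name")
        s := bytesToString(buf)
        m := map[string]int{s: 1}
        buf[0] = 'X' // also rewrites the bytes the map key points at
        fmt.Println(s, m)
    }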

And… I actually don't have an issue with it to be honest. I've done the same myself.

But this mutability should never escape. I'd never persist in using a library that would let it escape. But apparently… it's intentional: https://github.com/gofiber/fiber/issues/185

Oh well. You get what you ask for. Please don't complain about maps if you're using a broken library.


Encrypted tokens are opaque but they are also offline-verifiable. A simple opaque token has to be verified online (typically, against a database) whenever it's used.

Auth0, for example, supports JWE for its access tokens: https://auth0.com/docs/secure/tokens/access-tokens/json-web-...


There are undoubtedly still some optimizations lying around, but the biggest source of Go's FFI overhead is goroutines.
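
A trivial sketch of the kind of call that pays this cost:

    package main

    /*
    #include <stdio.h>
    #include <stdlib.h>
    */
    import "C"

    import "unsafe"

    func main() {
        msg := C.CString("hello from C")
        defer C.free(unsafe.Pointer(msg))

        // Each call into C leaves the goroutine world: roughly, the runtime
        // marks the goroutine as potentially blocking and switches to the
        // OS thread's C stack, which is where most of the fixed per-call
        // overhead comes from.
        C.puts(msg)
    }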

There are only two "easy" solutions I can see: switch to an N:N threading model or make the C code goroutine-aware. The former would speed up C calls at the expense of slowing down lots of ordinary Go code. Personally, I can still see some scenarios where that's beneficial, but it's pretty niche. The latter would greatly complicate the use of cgo and defeat one of its core purposes, namely having access to large, hard-to-translate C codebases without requiring extensive modifications to them.

A lot of people compare Go's FFI overhead to that of other natively compiled languages, like Zig or Rust, or to managed runtime languages like Java (JVM) or C# (.NET), but those alternatives don't use green threads (the general concept behind goroutines) as extensively. If you really want to compare apples-to-apples, you should compare against Erlang (BEAM). As far as I can tell, Erlang NIFs [1] are broadly similar to purego [2] calls, and their runtime performance [3] has more or less the same issues as CGo [4].

[1]: https://www.erlang.org/doc/system/nif.html

[2]: https://pkg.go.dev/github.com/ebitengine/purego

[3]: https://erlang.org/documentation/doc-10.1/doc/efficiency_gui...

[4]: https://www.reddit.com/r/golang/comments/12nt2le/when_dealin...


Java has green threads and C#/.NET has logical threads


Yes, I have cleaned up the wording a bit. Also, the common implementation of Rust's async is comparable to green threads, and I think Zig is adopting something like it too.

However, the "normal" execution model on all of them is using heavyweight native threads, not green threads. As far as I can tell, FFI is either unsupported entirely or has the same kind of overhead as Go and Erlang do, when used from those languages' green threads.


Genuine question: you make it seem as if this is a limitation and they're all in the same bucket, but how was Java, for example, able to scale all those enterprises while having multithreading and good FFI? Same with .NET.

My impression is that Go's FFI has a big overhead because of specific choices made not to care about FFI, because that would benefit the Go code more?

My point was that there are other GC languages/environments that have good FFI and were somehow able, all these decades, to create scalable multithreaded applications.


I would suggest gaining a better understanding of the M:N threading model versus the N:N threading model. I do not know that I can do it justice here.

Both Java and Rust flirted with green threads in their early days. Java abandoned them because the hardware wasn't ready yet, and Rust abandoned them because they require a heavyweight runtime that wasn't appropriate for many applications Rust was targeting. And yet, both languages (and others besides) ended up adding something like them in later anyway, albeit sitting beside, rather than replacing, the traditional N:N threading they primarily support.

Your question might just be misdirected; one could say it was operating systems, and not programming languages per se, that screwed it all up. Their threads, which were conservatively designed to be as compatible as possible with existing code, have too much overhead for many tasks. They were good enough for a while, especially as multicore systems started to enter the scene, but their limitations became apparent after e.g. nginx could handle 10x the requests of Apache httpd on the same hardware. This gap would eventually be narrowed, to some extent, but it required a significant amount of rework in Apache.

If you can answer the question of why ThreadPoolExecutor exists in Java, then you are about halfway to answering the question about why M:N threading exists. The other half is mostly ergonomics; ThreadPoolExecutor is great for fanning out pieces of a single, subdividable task, but it isn't great for handling a perpetual stream of unrelated tasks that ebb and flow over time. EDIT: See the Project Loom proposal for green threads in Java today, which also brings up the ForkJoinPool, another approach to M:N threading: https://cr.openjdk.org/~rpressler/loom/Loom-Proposal.html
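
For what it's worth, the ThreadPoolExecutor half of the answer is easy to express in Go as a plain worker pool: many small tasks multiplexed onto a fixed set of workers. A rough sketch (hypothetical code):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        tasks := make(chan int)
        var wg sync.WaitGroup

        // A fixed set of workers draining a shared queue of tasks: the same
        // "many tasks, few threads" idea a ThreadPoolExecutor expresses in Java.
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for t := range tasks {
                    fmt.Println("processed task", t)
                }
            }()
        }

        for i := 0; i < 100; i++ { // many more tasks than workers
            tasks <- i
        }
        close(tasks)
        wg.Wait()
    }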


PFS is just one of many desirable properties, and getting access to plaintext is just one of many kinds of threat. Getting access to ephemeral keys and other sensitive state can enable session hijacking. It's still not a great example, though, because it doesn't illustrate that threat model either.


There are two main reasons why this approach isn't sufficient at a technical level, which are brought up by comments on the original proposal: https://github.com/golang/go/issues/21865

1) You are almost certainly going to be passing that key material to some other functions, and those functions may allocate and copy your data around; while core crypto operations could probably be identified and given special protection in their own right, this still creates a hole for "helper" functions that sit in the middle

2) The compiler can always keep some data in registers, and most Go code can be interrupted at any time, with the registers of the running goroutine copied to somewhere in memory temporarily; this is beyond your control and cannot be patched up after the fact by you even once control returns to your goroutine

So, even with your approach, (2) is a pretty serious and fundamental issue, and (1) is a pretty serious but mostly ergonomic issue. The two APIs also illustrate a basic difference in posture: secret.Do wipes everything except what you intentionally preserve beyond its scope, while scramble wipes only what you think it is important to wipe.
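
A tiny sketch of point (1), using a hypothetical scramble-style helper (not the proposal's API):

    package main

    import "crypto/sha256"

    // scramble zeroes the caller's copy of the key. Hypothetical helper,
    // just to illustrate the limitation.
    func scramble(b []byte) {
        for i := range b {
            b[i] = 0
        }
    }

    func main() {
        key := []byte("super secret key material.......")

        // sha256.Sum256 (like most "helper" functions in the middle) is free
        // to copy the bytes into its own state, stack frames, and registers;
        // none of those copies are reachable from here.
        _ = sha256.Sum256(key)

        // This wipes only the one copy we still hold a reference to.
        scramble(key)
    }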


Thanks, you brought up good points.

In my case, though, I had a program in which I created an instance of such a secret, "used it", and then scrambled the variable; it never left, so it worked.

Though I didn't think of (2), which is especially problematic.

I'd probably still scramble in the places where it's viable to implement, trying to reduce the surface even if I can't fully remove it.


One of the goals here is to make it easy to identify existing code which would benefit from this protection and separate that code from the rest. That code is going to run anyway, it already does so today.

