TheDong's comments | Hacker News

You're saying "musicians" aren't "artists", and "open source contributors" aren't artists _or_ writers? "Artists" covers both of the groups you mentioned.

Yes, we're all artists. Good now?

Do you read Chinese, Hindi, and Vietnamese so you can read about thefts in those countries?

Latin-based-language countries also have closer ties to the English-speaking world (mostly through Britain historically conquering most of them), and so as an English speaker you're more likely to see news about those countries.

I'm not sure if you're trying to imply something else, but if you are, please don't. The relationships between languages, which countries are reported on in Western news, which countries Americans (i.e. the HN audience) visit, and so on are complicated and multi-faceted, and cannot be easily boiled down to language as a root cause of anything.


There are things like this.

The things I know of and can think of off the top of my head are:

1. appimage https://appimage.org/

2. nix-bundle https://github.com/nix-community/nix-bundle

3. guix via guix pack

4. A small collection of little-used projects that do this for Docker images (e.g. https://github.com/NilsIrl/dockerc )

5. A Docker image (a package that runs everywhere, assuming a Docker runtime is available)

6. https://flatpak.org/

7. https://en.wikipedia.org/wiki/Snap_(software)

AppImage is the closest to what you want, I think.
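To give a rough flavor of a couple of these (names like "hello" and "MyApp.AppDir" below are just placeholders, not from any real project):

    # guix: build a relocatable, self-contained tarball for a package
    guix pack -RR hello

    # AppImage: turn an already-prepared AppDir into a single-file executable
    appimagetool MyApp.AppDir MyApp-x86_64.AppImage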


It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods, and also very big on typical systems, since they bundle most of the libraries the system already includes. They're good as a "compile once, run everywhere" approach, but you're really accommodating edge cases here.

A "works in most cases" build should also be available for that that it would benefit. And if you can, why not provide specialized packages for the edge cases?

Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.


IMO one of the best features of AppImage is that it makes it easy to extract without needing external tools. It's usually pretty easy for me to look at an AppImage and write a PKGBUILD to make a native Arch package; the format already encodes what needs to be installed where, so it's only a question of whether the libraries it contains are the same versions as what I can pull in as dependencies (either from the main repos or the AUR). If they are, my job is basically already done; if they aren't, I can either include them in the package itself, assuming I don't have anything conflicting (which is fine for local use even if it's not usually tolerated when publishing a package), or stick with using the AppImage.
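(For anyone who hasn't tried it: the extraction is built into the AppImage runtime itself, so it really does need no external tools. Roughly:

    ./Some.AppImage --appimage-extract   # unpacks the contents into ./squashfs-root

where Some.AppImage stands in for whatever image you're inspecting.)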

I agree. I've seen quite a few AUR packages built that way and I'm using a few myself too. The end user shouldn't be expected to do this though! :D

> It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods

'Noticeably slower' at what? I've run, e.g., xemu (the original Xbox emulator) both manually built from source and via AppImage-based releases, and I never noticed any difference in performance. Same with other AppImage-based apps I've been using.

Do you mean launching the app or something like that? TBH I cannot think of any other way an AppImage would be "slower".

Also, in my experience, applications released as AppImages have been the most consistent by far at "just working" on my distro.


I wish AppImage were slightly more user-friendly and did not require the user to specifically make it executable.

We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double-click on the tar to extract it and double-click again on the actual AppImage.

Been doing it this way for years now, so it's well battle tested.
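(For anyone wanting to replicate this: it works because tar preserves file modes, so a minimal sketch, with made-up file names, is just:

    chmod +x MyApp.AppImage
    tar -czf MyApp.tar.gz MyApp.AppImage   # the executable bit survives inside the tar

and the bit is restored on extraction.)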


That kind of defeats the point of an AppImage though - you could just as well have a tar archive with a classic collection of binaries + an optional launcher script.

A single file is much easier on the eyes and easier to manage than a whole bunch of them, plus AppImages can be integrated into the desktop.

AppImage looks like what I need, thanks.

I wonder though, if I package, say, a .so file from Nvidia, is that allowed by the license?


AppImage is not what you need. It's just an executable wrapper for the archive. To make the software cross-distro, you need to compile it manually on an old distro with an old glibc, make sure all the dependencies are there, and so on.

https://docs.appimage.org/reference/best-practices.html#bina...

There are several automation tools for making AppImages, but they won't magically let you compile on the latest Fedora and have your executable work on Debian Stable. It still requires quite a lot of manual labor.
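One common way to approximate the "old distro" build nowadays is to do it inside a container; a rough sketch, with the distro version and build commands as placeholders:

    docker run --rm -v "$PWD":/src -w /src ubuntu:18.04 \
        sh -c 'apt-get update && apt-get install -y build-essential && make'

The binary then links against that distro's older glibc, so it runs on distros with that glibc version or newer.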


Yeah, a lot of AppImage developers make assumptions about what their users' systems have as well (e.g. "if I depend on something that is installed by default on Ubuntu desktop then it's fine to leave it out"). For example, a while ago I installed an AppImage GUI program on a headless server that I wanted to use via X11 forwarding. I ended up having to manually install a bunch of random packages (GTK stuff, fonts, etc.) to get it to run. I see AppImage as basically the same as distributing Linux binaries via .tar.gz archives, except everything's in a single file.

Don't forget - an AppImage won't work if you build it against glibc but run it on musl/uClibc.
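If you're unsure which libc a given binary targets, the ELF interpreter gives it away; for example (./myapp is just a placeholder):

    $ file ./myapp
    ./myapp: ELF 64-bit LSB executable, ... interpreter /lib64/ld-linux-x86-64.so.2 ...

A glibc binary shows an ld-linux interpreter like the above; a musl one shows something like /lib/ld-musl-x86_64.so.1.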

>I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

It won't work: driver libraries usually require the exact (or more or less the same) kernel module version. That's why you need to explicitly exclude graphics libraries from being packaged into the AppImage. Similarly, a build linked against glibc is non-runnable on musl.

https://github.com/Zaraka/pkg2appimage/blob/master/excludeli...
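(That excludelist is essentially a flat list of sonames the packaging tools skip; from memory, the graphics-related entries look roughly like:

    libGL.so.1
    libEGL.so.1
    libdrm.so.2

i.e. exactly the libraries that must come from the host's driver stack.)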


No, that's a copyright violation, and it won't run on AMD or Intel GPUs, or on kernels with a different Nvidia driver version.

But this ruins the entire idea of packaging software in a self-contained way, at least for a large class of programs.

It makes me wonder, does the OS still take its job of hardware abstraction seriously these days?


The OS does. Nvidia doesn't.

Does Nvidia not support OpenGL?

Not really. Nvidia's OpenGL is incompatible with all existing OS OpenGL interfaces, so you need to ship a separate libGL.so if you want to run on Nvidia. In some cases you even need separate binaries, because if you dynamically link against Nvidia's libGL.so, it won't run with any other libGL.so. Sometimes also vice versa.

Does AMD use a statically linked OpenGL?

AMD uses the dynamically linked system libGL.so, usually Mesa.

So you still need dynamic linking to load the right driver for your graphics card.

Most stuff like that uses some kind of "icd" mechanism that does 'dlopen' on the vendor-specific parts of the library. Afaik neither OpenGL nor Vulkan nor OpenCL are usable without at least dlopen, if not full dynamic linking.
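As a concrete illustration of the ICD mechanism (paths and file names vary by distro and driver; the listing below is from memory of a typical system, not any specific machine), the Vulkan loader picks its vendor driver via small JSON manifests that name the library to dlopen:

    $ ls /usr/share/vulkan/icd.d/
    intel_icd.x86_64.json  nvidia_icd.json  radeon_icd.x86_64.json
    $ cat /usr/share/vulkan/icd.d/nvidia_icd.json
    { "file_format_version": "1.0.0", "ICD": { "library_path": "libGLX_nvidia.so.0", ... } }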

It does, and one way it does that is by dynamically loading the right driver code for your hardware.

That's a licensing problem, not a packaging problem. A DLL is a DLL - the only thing that changes is whether you're allowed to redistribute it.

Typically AppImage packaging excludes the .so files that are expected to be provided by the base distro.

Any .so from Nvidia is supposed to be one of those things, because it also depends on the drivers etc. provided by Nvidia.

Also, on a side note, a lot of .so files also depend on other files in /usr/share, /etc, and so on.

I recommend using an AppImage only with the happy-path application frameworks they support (e.g. Qt, Electron). Otherwise you'd have to manually verify that all the libraries you're bundling will work on your users' distros.
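For those happy paths, tooling automates most of the bundling; e.g. with linuxdeploy and its Qt plugin, something roughly like (flags from memory, consult the docs):

    linuxdeploy --appdir AppDir --plugin qt --output appimage

walks the binary's dependencies, copies the safe ones into the AppDir, and applies the excludelist for the rest.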


Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or require explicit approval.

You generally also have to abide by license obligations for OSS, e.g. the GPL.

To be specific for the example: Nvidia has historically been quite restrictive here (redistribution only on approval). The firmware has only recently been opened up a bit, and the drivers continue to be an issue, IIRC.


Not necessarily. Reviewing an issue report already takes enough time; reviewing a patch takes even more developer time.

The problem they had before was a financial incentive for sending reports, leading to crap reports that wasted reviewers' time. Incentivizing reports + patches has the same failure mode, except now they have to waste even more time reviewing the larger quantity of input.

Anyway, in most cases I'd bet that Daniel can produce, and get reviewed, a correct patch for a given security bug quicker than the curl team can review a third-party patch for the same bug, especially if it's "correct, but AI-written".


I've read their reports before. When there's not enough information to reproduce, they do a good job of asking for more information first, and I've never seen a reasonable good-faith report elicit any hostility.

If you failed to give them proper reproduction information when asked, then yeah, you were wasting their time and they should rightfully close your issue.

I've never seen anyone on the curl team undeservedly "lambast" someone, and for a project that has a quite good reputation, I think the burden of proof is on you. Can you link to these supposedly terrifying comments?


It says in the curl file that they will ridicule time-wasters in public, and here is one person confirming that it happened to them, yet somehow that's not enough? Come on.

When people discussing some specific instance like this don't provide a citation -- one which could be provided with a couple of clicks and would radically improve their argument -- a reasonable assumption is that the citation would undermine their argument.

If you follow cURL's development, what you'll see is that the main contributors tend to be extremely patient, helpful, and thankful for contributions. Sometimes too patient. If you look at the HackerOne slop reports cURL got, you'll see Daniel accommodating people who are outright wasting his time.

So if you follow what's been happening, you know the types of reports this message is talking about. What they consider time-wasters are slop reports where the reporter didn't put in the effort to even test the "bug", keeps pasting whatever the LLM says in replies, and lies about using one.

In other words, for a legitimate report it's hard to believe that was the reaction. I would expect them to be patient with a human contributor who really put in the work. It's particularly hard to believe the maintainers would even waste their time lambasting someone on Reddit. Doesn't seem like their style.

Maybe the person in this thread is exaggerating, maybe they misinterpreted it, or maybe it did happen. But it seems so out-of-character that some proof would be warranted, especially since it’s a single report.


We don't need anecdotes; every single bug is public. Just looking now, I see respectful responses to genuine reports. This document is clearly a response to AI slop and spam.

I skimmed the "slop" collection they maintain that was posted here yesterday, and even under those HackerOne submissions, Daniel was perfectly reasonable and respectful.

It is entirely possible I merely chanced upon his highlights, but to me this announcement really signifies a final straw breaking more than anything else. His historical conduct is all public and speaks for itself. I wish I had the patience and perseverance he does, and I wish he didn't need it.


> The core is open source, though. macOS' particular choice for its graphical user land is proprietary as well

I ran into a kernel panic specific to my MacBook's hardware. How do I compile a new kernel with some extra debug printlns and boot it to figure out the panic?

On any actually open source operating system this is doable, but I'm not holding my breath for any working instructions here. As far as I know, there's no way to modify the source code of, and then boot, the macOS kernel.

Perhaps "the core is open source" doesn't mean that I can run a modified kernel to you?


> How do I compile a new kernel with some extra debug printlns and boot it to figure out the panic?

First, explain how you are doing it with the AT&T UNIX kernel. We can then help you adapt the process to Darwin.

> On any actually open source operating system, this is doable

I suspect you forgot to read the thread. While the grandparent comment considered AT&T UNIX to be "open source", that doesn't mean open source in the way we think of the term today. AT&T UNIX was very much proprietary in its own right. Today, we'd probably say "source available". Whether or not that is doable was dependent on what kind of agreement you had with the owner. They might have let you for a substantial fee, but Apple might let you for a substantial fee too. Have you asked?


AT&T did not ship with the kernel source code, but they often shipped with the compiled object files of the kernel and a command line utility that allowed to change the kernel configuration parameters, after which the kernel would get re-linked into a new one.

Not open source by any definition, but it was a viable way to obtain a new kernel image. The practice became obsolete after the adoption of loadable kernel modules across nearly all UNIX flavours, the exception being OpenBSD (if my memory serves me well).


> I ran into a kernel panic specific to my MacBook's hardware. How do I compile a new kernel with some extra debug printlns and boot it to figure out the panic?

1. You can find panic logs in Console.app. macOS writes them into NVRAM and stows them away into files on its next boot. That will give you the culprit process and kernel extension, plus a stack trace.

2. sudo nvram boot-args="debug=0x122" or something like this will increase log output from the kernel. Those debug prints are probably there already. You can even attach a debugger running from somewhere else, presumably over Thunderbolt on newer machines.
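If you'd rather grab the raw files than use Console.app, the panic reports typically land in a path like the following (location from memory, and it has moved between macOS versions):

    ls /Library/Logs/DiagnosticReports/*.panic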


The only reason these companies mark ads at all is consumer protection laws about undisclosed sponsored content.

If stronger consumer protection laws are "totalitarian dictatorship methods", then no, there is no path. If we aren't allowed to have laws and regulation, only unregulated capitalism, then by definition capital makes right, and Apple having more money than you means you have no recourse.

Any way to structure incentives (like "we will all agree to only buy from companies that don't act unfairly") is the same as creating an ad-hoc government regulation.


The big difference here is that if you buy short-dated out-of-the-money options and make it big, the SEC comes knocking on your door and reads your text messages to find out what you knew and when.

It's both easy to track down stock traders due to KYC, and easy to prosecute due to laws.

Polymarket and friends make it much harder to find the trader, and it's also less clear whether there's a legal theory that lets you prosecute someone dealing in these new markets.

Sure, Congress and the president can insider trade a bit here and there, but the everyday Joe is rightfully afraid to.


It seems like the difference is all due to historical accidents in the US and laws that should arguably be changed. Nothing to do with prediction markets themselves.

iMessage and RCS have some very different affordances, and Apple keeps it that way to keep people walled into the system.

Most notably, a single non-iMessage member in a group chat will degrade the experience for everyone significantly.

It's very much an issue in the US.


By "degrade the experience" you mean you get a text that says "TheDong liked $message." The horror! Maybe people will go back to just sending a thumbs up emoji.

By "degrade the experience" I mean:

1. Unable to remove members or change members' phone numbers without recreating the entire chat, losing continuity, and bothering everyone with noise about these changes.

2. Green bubbles, so if your teenage child talks in the group chat at school and one of their classmates sees the green bubble, they'll be bullied for the rest of their time in school.

3. Unable to send high quality photos or videos

4. Just plain failure to deliver messages with shocking frequency for a supposedly modern messaging system.

5. RCS still isn't supported by carriers in a bunch of countries, so when one member of the group chat travels, roams onto a foreign network that doesn't support RCS, and chats, the group chat can split into one for MMS and one for RCS. Then it's a total crapshoot, based on network conditions, as to which one future messages go to, with messages now having an even higher chance of vanishing into the void.

Basically, it's a subpar experience. Every other group messaging app (Signal, WhatsApp, etc.) works fine on iOS and Android; Apple really should be publishing iMessage for Android to solve this. But, due to reason 2, where green bubbles result in becoming a social outcast and being bullied, they of course won't.

Like, Signal, an organization running on donations IIRC, is able to build a messaging app for Windows/Linux/iOS/Android, and yet Apple isn't capable of that? Come on.


Outside the US, people use WhatsApp and other third-party messengers, so none of that is necessarily a big issue. As for teenagers, they mostly use Snapchat and Instagram for group chats nowadays.

> Green bubbles, so if your teenage child talks in the group chat at school and one of their classmates sees the green bubble, they'll be bullied for the rest of the time in school.

Wth, is this even a real thing?


Then use a vendor-agnostic platform like Signal or Matrix for group chats instead.

I do with anyone I can. Unfortunately some people I want to chat with (e.g. family) are too scared to install any third-party apps from the App Store, because each time they tried, they clicked on an App Store ad and got garbage instead.

It would be great if people actually did this, but in the US that is not the case. There are only so many people you can convince to move off of their main platform, and usually you have to meet people where they are.

Exactly — I know a good portion of my family simply wouldn't switch. SMS and MMS are also less secure and a poor experience (e.g. photos are often swapped via iMessage).

> inescapable. It's rumored that they'll add ads to maps

If you move to the EU you can change the default navigation app on iOS and never see Apple Maps.

A plan to display ads would explain why they region locked that setting.


I'd much prefer the EU to the current situation in the US, but it's not in the cards at the moment.

You can remove the maps app in the US too.

You can remove the default navigation app, but you can't set the default navigation app to something else. https://developer.apple.com/documentation/MapKit/preparing-y...
