Hacker News | Defletter's comments

Question, does that work with other types? Say you have two u16 values, can you concatenate them together with ~ into a u32 without any shifting?


It works between two arrays (both fixed-size and dynamically-sized); between arrays and elements; but not between two scalar types that don't overload opBinary!"~", so no, it won't work between two `ushort`s to produce a `uint`


No, it doesn't. But I'm not sure that this matters; a sufficiently "smart" compiler understands that this is the same thing.


I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit because then they can gleefully point at it and pretend that any first-party package manager for C/C++ would inevitably result in the same, never mind the other languages that do not have this issue, or have it to a far, far lesser extent. Do these cultists just not use dependencies? Are they just [probably inexpertly] reinventing every wheel? Or do they use system packages like that's any better *cough* AUR exploits *cough*. While dependency hell on nodejs (and even Rust if we're honest) is certainly a concern, it's npm's permissiveness and lack of auditing that's the real problem. That's why Debian is so praised.


What makes me a C++ "cultist"? I like the language, but I don't think it's a cult. And yes, they do reinvent the wheel all the time (usually expertly), because libraries are reserved for things that really need them: writing left-pad is really easy. They also use third-party libraries all the time, too. They just generally pay attention to the source of that library. Google and Facebook also publish a lot of C++ libraries under one umbrella (abseil and folly respectively), and people often use one of them.


STOP SAYING CULTIST! The word has a very strong meaning and does not apply to anyone working with C or C++. I take offense at being called a cultist just because I say C++ is not nearly as bad as the haters keep claiming it is - as well I should.


> Or do they use system packages like that's any better cough AUR exploits cough.

AUR stands for "Arch User Repository". It's not the official system repository.

> I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit

I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.

My problem with language package managers is that people love them precisely because they don't want to learn how to deal with dependencies. Which is actually the problem: if I pull a random Rust library, it will itself pull many transitive dependencies. I recently compared two implementations of the same standard (C++ vs Rust): in C++ it had 8 dependencies (I can audit that myself). In Rust... it had 260 of them. 260! I won't even read through all those names.

"It's too hard to add a dependency in C++" is, in my opinion, missing the point. In C++, you have to actually deal with the dependency. You know it exists, you have seen it at least once in your life. The fact that you can't easily pull 260 dependencies you have never heard about is a feature, not a bug.

I would be totally fine with great tooling like cargo, if it looked like the problem of random third-party dependencies was under control. But it is not. Not remotely.

> Do these cultists just not use dependencies?

I choose my dependencies carefully. If I need a couple functions from an open source dependency I don't know, I can often just pull those two functions and maintain them myself (instead of pulling the dependency and its 10 dependencies).

> Are they just [probably inexpertly] reinventing every wheel?

I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.

Would it make me more of an expert if I was pulling, running and distributing random code from the Internet without having the smallest clue about who wrote it?

Do I need to complain about how hard CMake is and compare a command line to a "magic incantation" to be considered an expert?


> AUR stands for "Arch User Repository". It's not the official system repository.

Okay... and? The point being made was that the issue of package managers remains: do you really think users are auditing all those "lib<slam-head-on-keyboard>" dependencies that they're forced to install? Whether they install those dependencies from the official repository, or from homebrew, or nix, or AUR, or whatever, is immaterial: the developer washed their hands of this, leaving it to the user, who in all likelihood knows significantly less than the developer about how to make an informed decision, so they YOLO it. Third-party repositories would not exist if they had no utility. But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted. Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.

> "It's too hard to add a dependency in C++"

It's not hard to add a dependency. I actually prefer the dependencies-as-git-submodules approach to package managers: it's explicit and you know what you're getting and from where. But using those dependencies is a different story altogether. Don't you just love it when one or more of your dependencies has a completely different build system to the others? So now you have to start building dependencies independently, whose artefacts end up in different places, etc, etc. This shouldn't be a problem.

> I, for one, do not love it when there is an exploit in a language package manager.

Oh please, I believe that about as much as ambulance chasers saying they don't love medical emergencies. Otherwise, why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm as if anyone is actually asking for that, instead of, say, what Zig or Go has? It's because of the cultism, and every npm exploit further entrenches it.


C++ usage has nothing to do with static/dynamic linking. One is a language and the other is a way of using libraries. Dynamic linking gives you small binaries with a lot of cross-compatibility, and static linking gives you big binaries with known, self-contained functionality. Most production C++ out there follows the same pattern as Rust and Go and uses static linking (where do you think Rust and Go got that pattern from?). Python is a weird language that has tons of dynamic linking while also having a big package manager, which is why pip is hell to use and PyTorch is infamously hard to install.

Dynamic linking shifts responsibility for the linked libraries over to the user and their OS, and if it's an Arch user using AUR they are likely very interested in assuming that risk for themselves. 99.9% of Linux users are using Debian or Ubuntu with apt for all these libs, and those maintainers do pay a lot of attention to libraries.


> But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted.

So you do understand my point about AUR. AUR is like adding a third-party repo to your Debian configuration. So it's not a good example if you want to talk about official repositories.

Debian is a good example (it's not the only distribution that has that concept), and it proves my point, not yours: curated repositories are better than unchecked ones in terms of security.

> Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.

Nobody says that ever. Either you make up your cult just to win an argument, or you don't understand what C/C++ people say. The whole goddamn point is to have a trusted system repository, and if you need to pull something that is not there, then you do it properly.

Which is better than pulling random stuff from random repositories, again.

> I actually prefer the dependencies-as-git-submodules approach

Oh right. So you do it wrong; that's good to know, and it answers your next complaint:

> Don't you just love it when one or more of your dependencies has a completely different build system to the others

I don't give a damn because I handle dependencies properly (not as git submodules). I don't have a single project where the dependencies all use the same build system. It's just not a problem at all, because I do it properly. What do I do then? Well exactly the same as what your system package manager does.

> this shouldn't be a problem.

I agree with you. Call it a footgun if you wish, you are the one pulling the trigger. It isn't a problem for me.

> why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm

Where did I do that?

> It's because of the cultism, and every npm exploit further entrenches it.

It's because npm is a good example of what happens when it goes out of control. Pip has the same problem, and Rust as well. But npm seems to be the worst, I guess because it's used by more people?


Your defensiveness is completely hindering you, and I cannot be bothered with that, so here are some much-needed clarifications:

> I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.

If you do neither of those things then did it ever occur to you that this might not be about YOU?

> I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.

Yeah, hi, no you didn't explain that. You're probably mistaking me for someone else in some other conversation you had. The only comment of yours prior to mine in the thread is you saying "I can use pkg-config just fine." And again, you're assuming that I'm calling YOU incompetent. But okay, I'm sure your code never has bugs, never has memory issues, is never poorly designed or untested, that you can whip out an OpenGL alternative or whatever in no time and have it be just as stable and battle-tested, and that to say otherwise must be calling you incompetent. That makes total sense.

> AUR stands for "Arch User Repository". It's not the official system repository.

> So it's not a good example if you want to talk about official repositories.

I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.

--

Overall, I'm getting bored of this, though the part where you harp on about handling dependencies properly, unlike me, without elaborating one bit, is very funny. Have a nice day.


> Your defensiveness

Start by not calling everybody disagreeing with you a cultist, next time.

> I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.

It's not that it is unclear, it's just that it doesn't make sense. When we compare npm to a system package manager in this context, the thing we compare is whether or not it is curated. Agreed, I was maybe not using the right words (I should have said curated vs uncurated package managers), but it did not occur to me that it was unclear, because comparing npm to a system package manager makes no sense otherwise. It's all just installing binaries somewhere on disk.

AUR is much like npm in that it is not curated. So if you find that it is a security problem: great! We agree! If you want to pull something from AUR, you should read its PKGBUILD first. And if it pulls tens of packages from AUR, you should think twice before you actually install it. Just like if someone tells you to do `curl https://some_website.com/some_script.sh | sudo sh`, no matter how convenient that is.

Most Linux distributions have a curated repository, which is the default for the "system package manager". Obviously, if users add custom, uncurated repositories, it's a security problem. AUR is a bad example because it isn't different from npm in that regard.

> though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny

Well I did elaborate at least one bit, but I doubt you are interested in more details than what I wrote: "What do I do then? Well exactly the same as what your system package manager does."

I install the dependencies somewhere (just like the system package manager does), and I let my build system find them. It could be with CMake's `find_package`, it could be with pkg-config, whatever knows how to find packages. There is no need to install the dependencies in the place where the system package manager installs stuff: it can go anywhere you want. And you just tell CMake or pkg-config or Meson or whatever you use to look there, too.

Using git submodules is just a bad idea for many reasons, including the fact that you need all of them to use the same build system (which you mentioned), that a clean build usually implies rebuilding the dependencies (for nothing), and that it doesn't work with package managers (system or not). And usually, projects that use git submodules only support that, without offering a way to use the system package(s).


> Start by not calling everybody disagreeing with you a cultist, next time.

You'd do very well as a culture war pundit. Clearly I wasn't describing a particular kind of person; no, clearly I'm just talking about everyone I disagree with /s


So, not interested at all in how to deal with dependencies without git submodules, I reckon?

We can stop here indeed.


You misunderstand, I am already well aware. My comment about your lack of elaboration was not due to any ignorance on my part, but rather to point out how you assumed that and refused to elaborate anyway. The idea that I may have my reasons for preferring dependencies-as-git-submodules or their equivalents (like Zig's package system) never crossed your mind. Can't say I'm surprised. Oh well.


> The idea that I may have my reasons for preferring dependencies-as-git-submodules

Well, git submodules are strictly inferior and you know it: you even complained about the fact that it is a pain when some dependencies use different build systems.

You choose a solution that does not work, and then you blame the tools.


Okay, I'll bite: your proposed alternative to being able to specify exact versions of dependencies, regardless of operating system or distro, that I can statically link into a single binary, with everything project-local and guaranteed, is... what? Is it just "Don't"?


I'm not sure what you mean.

What I am saying is that using a dependency is formalised for build systems. Be it npm, cargo, gradle, meson, cmake, you name it.

In cargo, you add a line to a toml file that says "please fetch this dependency, install it somewhere you understand, and then use it from this somewhere". What is convenient here is that you as a user don't need to know about those steps (how to fetch, how to install, etc). You can use Rust without Cargo and do everything manually if you need to, it's just that cargo comes with the "package manager" part included.

In C/C++, the build systems don't come with the package manager included. It does not mean that there are no package managers. On the contrary, there are tons of them, and the user can choose the one they want to use. Be it the system package manager, a third-party package manager like conan or vcpkg, or doing it manually with a shell/python script. And I do mean the user, not the developer. And because the user may choose the package manager they want, the developer must not interfere, otherwise it becomes a pain. Nesting dependencies into your project with git submodules is a way to interfere. As a user, I absolutely hate those projects that go to extra lengths to make it hard for me to handle dependencies the way I need.

How do we do that with CMake? By using find_package and/or pkg-config. In your CMakeLists.txt, you should just say `find_package(OpenSSL REQUIRED)` (or whatever it is) and let CMake find it the standard way. If `find_package` doesn't work, you can write a find module (that e.g. uses pkg-config). A valid shortcut IMO is to use pkg-config directly in CMakeLists for very small projects, but find modules are cleaner and actually reusable. CMake will search in a bunch of locations on your system. So if you want to use the system OpenSSL, you're done here, it just works.

If you want to use a library that is not on the system, you still do `find_package(YourLibrary)`, but by default it won't find it (since it's not on the system). In that case, as a user, you configure the CMake project with `CMAKE_PREFIX_PATH`, saying "before you look on the system, please look into these paths I give you". So `cmake -DCMAKE_PREFIX_PATH=/path/where/you/installed/dependencies -Bbuild -S.`. And this will not only just work, but it means that your users can choose the package manager they want (again: system, third-party like conan/vcpkg, or manual)! It also means that your users can choose to use LibreSSL or BoringSSL instead of OpenSSL, because your CMakeLists does not hardcode any of that! Your CMakeLists just says "I depend on those libraries, and I need to find them in the paths that I use for the search".

Whatever you do that makes CMake behave like a package manager (and I include CMake features like the FetchContent stuff) is IMO a mistake, because it won't work with dependencies that don't use CMake, and it will screw (some of) your users eventually. I talk about CMake, but the same applies for other build systems in the C/C++ world.

People then tend to say "yeah I am smart, but my users are stupid and won't know how to install dependencies locally and point CMAKE_PREFIX_PATH to them". To which I answer that you can offer instructions to use a third-party package manager like conan or vcpkg, or even write helper scripts that fetch, build and install the dependencies. Just do not do that inside the CMakeLists, because it will most certainly make it painful for your users who know what they are doing.

Is it simpler than what cargo or npm do? No, definitely not. Is it more flexible? Totally. But it is the way it is, and it fucking works. And whoever calls themselves a C/C++ developer and cannot understand how to use the system package manager, or conan/vcpkg, and set CMAKE_PREFIX_PATH, needs to learn it. I won't say it's incompetence, but it's like being a C++ developer and not understanding how to use a template. It's part of the tools you must learn to use.

People will spend half a day debugging a stupid mistake in their code, but somehow can't apprehend that dealing with a dependency is also part of the job. In C/C++, it's what I explained above. With npm, properly dealing with dependencies means checking the transitive dependencies and being aware of what is being pulled. The only difference is that C/C++ makes it hard to ignore it and lose control over your dependencies, whereas npm calls it a feature and people love it for that.

I don't deny that CMake is imperfect: the syntax is generally weird, and writing find modules is annoying. But that is not an excuse to make a mess at every single step of the process. And people who complain about CMake usually write horrible CMakeLists and could benefit from learning how to do it properly. I don't love CMake, I just don't feel the need to complain about it everywhere I can, because I can make it work, and it's not that painful.


While I do appreciate you taking the time to write that, I am somewhat at a loss. How does this justify the antipathy towards notions of a first-party build system and package manager? That's how we got into this argument with each other: I was calling out C/C++ cultists who cling to the ugly patchwork of hacky tooling that is C/C++'s so-called build systems and decry any notion of a first-party build system (or even a package manager to boot) as being destined to become just like npm.

C/C++ developers clearly want a build system and package manager, hence all this fragmentation, but I can't for the life of me understand why that fragmentation is preferable. For all the concern about supply-chain attacks on npm, why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)? And why is global installation preferable? Have we learnt nothing? There's a reason why Python has venv now; why Maven and Gradle have wrappers; etc. Projects being able to build themselves to a specification without requiring the host machine to reconfigure itself to suit the needs of this one project is a bonus, not a drawback. Devcontainers should not need to be a thing.

If anything, this just reads like Sunk Cost Fallacy: that "it just works" therefore we needn't be too critical, and anyone who is or who calls for change just needs to git gud. It reminds me of the never-ending war over memory safety: use third-party tools if you must but otherwise just git gud. It's this kind of mindset that has people believing that C/C++'s so-called build systems are just adhering to "there should be some artificial friction when using dependencies to discourage over-use of dependencies", instead of being a Jenga tower of random tools with nothing but gravity holding it all together.

If it were up to me, C/C++ would get a more fleshed-out version of Zig's build system and package manager, ie, something unified, simple, with no central repository, project-local, exact, and explicit. You want SQLite? Just refer to the SQLite git repository at a specific commit and the build system will sort it out for you. Granted, it doesn't have an official build.zig so you'll need to write your own, or trust a premade one... but that would also be true if you installed SQLite through conan or vcpkg.


> How does this justify the antipathy towards notions of a first-party build system and package manager?

I don't feel particularly antipathetic towards the notion of a first-party build system and package manager. I find it undeniably better to have a first-party build system instead of the fragmentation that exists in C/C++. On the other hand, I don't feel like asking a 20-year-old project to leave autotools just because I asked for it. Or to force people to install Python because I think Meson is cool.

As for the package manager, one issue is security: is it (even partly) curated or not? I could imagine npm offering a curated repo and a non-curated repo. But there is also a cultural thing there: it is considered normal to have zero control over the dependencies (by this I mean that if the developer has not heard of the dependencies they are pulling, then it's not under control). Admittedly it is not a tooling problem, it's a culture problem. Though the tooling allows this culture to be the norm.

When I add a C/C++ dependency to my project, I do my shopping: I go check the projects, I check how mature they are, I look into the codebase, I check who has control over it. Sometimes I will depend on the project, sometimes I will choose to fork it in order to have more control. And of course, if I can get it from the curated list offered by my distro, that's even better.

> C/C++ developers clearly want a build system and package manager, hence all this fragmentation

One thing is legacy: it did not exist before, many tools were created, and now they exist. The fact that the ecosystem had the flexibility to test different things (which surely influenced the modern languages) is great. In a way, having a first-party tool makes it harder to get that. And then there are examples like Swift, which slowly converged towards SwiftPM. But at the time CocoaPods and Carthage were invented, SwiftPM was not a thing.

Also, devs want a build system and package manager, but they don't necessarily all want the same one :-). I don't use third-party package managers, for instance; instead I build my dependencies manually, which I find gives me more control, also for cross-compiling. Sometimes I have specific requirements, e.g. when building a Linux distribution (think Yocto or buildroot). And I don't usually want to depend on Python just for the sake of it, and Conan is a Python tool.

> why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)?

It's not. Trusting a third-party package manager is actually exactly the same as trusting npm. It's more convenient, but less secure. However it's better when you can rely on a curated repository (like what Linux distributions generally provide). Not everything can be curated, but there is a core. Think OpenSSL for instance.

> And why is global installation preferable?

For those dependencies that can be curated, there is a question of security. If all the programs on your system link the same system OpenSSL, then it's super easy to update that OpenSSL when there is a security issue. And in situations where what you ship is a Linux system, there is no point in not doing it. So there are situations where it is preferable. If everything is statically linked and you have a critical fix for a common library, you need to rebuild everything.

> If it were up to me

Sure, if we were to rebuild everything from scratch... well, we wouldn't do it in C/C++ in the first place, I'm pretty sure. But my Linux distribution exists, has a lot of merits, and I don't find it very nice when people try to enforce their preferences. I am fine if people want to use Flatpak, cargo, pip, nix, their system package manager, something else, or a mix of all that. But I like being able to install packages on my Gentoo system the way I like, potentially modifying them with a user patch. I like being able to choose whether I link statically or dynamically (on my Linux system, I like to link at least some libraries, like OpenSSL, dynamically; if I build an Android apk, I like to statically link the dependencies).

And I feel like I am not forcing anyone into doing what I like to do. I actually think that most people should not use Gentoo. I don't prevent anyone from using Flatpak or pulling half the Internet with docker containers for everything. But if they come telling me that my way is crap, I will defend it :-).

> I am somewhat at a loss.

I guess I was not trying to say "C/C++ is great, there is nothing to change". I just think it's not all crap, and I see where it all comes from and why we can't just throw everything away. There are many things to criticise, but many times I feel like criticisms are uninformed and just relying on the fact that everybody does that. Everybody spits on CMake, so it's easy to do it as well. But more often than not, if I start talking to someone who said that they cannot imagine how someone could design something as bad as CMake, they themselves write terrible CMakeLists. Those who can actually use CMake are generally a lot more nuanced.


Even though I understand why you prefer that, I feel like you're painting too rosy of an image. To quote Tom Delalande: "There are some projects where if it was 10% harder to write the code, the project would fail." I believe this deeply and that this is also true for the build system: your build config should not be rivalling your source code in terms of length. That's hyperbole in most cases, sure, and may well indicate badly written build configs, but writing build configs should not be a skill issue. I am willing to bet that Rust has risen so much in popularity not just because of its memory safety, but also because of its build system. I don't like CMake, but I also don't envy its position.


> but writing build configs should not be a skill issue

I think it shouldn't be a skill issue because a true professional should learn how to do it :-).

My build configs are systematically shorter than the bad ones.

Also I feel like many people really try to have CMake do everything, and as soon as you add custom functions in CMake, IMO you're doing it wrong. I have seen this pattern many times where people wrap CMake behind a Makefile, presumably because they hate having to run two commands (configure/build) instead of one (make). And then instead of having to deal with a terrible CMakeLists, they have to deal with a terrible CMakeLists and a terrible Makefile.

It's okay for the build instructions to say: "first you build the dependencies (or use a package manager for that), second you run this command to generate the protobuf files, and third you build the project". IMO if a developer cannot run 3 commands instead of one, they have to reflect on their own skills instead of blaming the tools :-).


> I think it shouldn't be a skill issue because a true professional should learn how to do it :-)

Therein lies the issue, in my opinion: I do not believe that someone should have to be a "true professional" to be able to use a language or its tooling. This is just "git gud" mentality, which as we all [should] know [by now] cannot be relied upon. It's like that "So you're telling me I have to get experience before I get experience?" meme about entry-level jobs: if you need to "git gud" before you can use C/C++ and its tooling properly, all that means is that they'll be writing appalling code and build configs in the meantime. That's bad. Take something like AzerothCore: I'd wager that most of its mods were made by enthusiasts and amateurs. I think that's fine, or at least should be, but I'm keenly aware that C/C++ and its tooling do not cater to, nor even really accommodate, amateurs (jokey eg: https://www.youtube.com/watch?v=oTEiQx88B2U). That's bad. Obviously, this is heading into the realm of "what software are you trusting unwisely", but with languages like Rust, the trust issue doesn't often include incompetence, more so just malice: I do not tend to fear that some Rust program has RCE-causing memory issues because someone strlen'd something they shouldn't.


> It's like that "So you're telling me I have to get experience before I get experience?"

Not at all. I'm not saying that one should be an architect on day one. I'm saying that one should learn the basics on day one.

Learning how to install a package on a system and understanding that it means a few files were copied into a few folders is basic. Anyone who cannot understand that does not deserve to be called a "software engineer". It has nothing to do with experience.


> I'm saying that one should learn the basics on day one.

Except that C/C++ have entirely incongruous sets of basics compared to modern languages, with which people coming to C/C++ for the first time are likely to have at least a passing familiarity (unless it's their first language, of course). Yes, cmake configs can be pretty concise when only dealing with system packages, but this assumes that developers will want to do that, rather than replicate the project-local ideal, which complicates cmake configs. We're approaching this from entirely different places, which reminds me of the diametrically-opposed comments on this post (https://news.ycombinator.com/item?id=45328247) about READMEs.


Seems like a false dichotomy


This is pure Stockholm syndrome. If I were forced to choose between creating a cross-platform C++ project from scratch or taking an honest to god arrow to the knee, the arrow would be less painful.


I don't want any arrows in my knees but I agree.

The main reason I don't want to use C/C++ is the header files. You have to write everything in a header file and then in an implementation file. Every time you want to change a function you need to do this at least twice. And you don't even get fast compilation speed compared to some languages, because your headers will #include some library that is immense, and then every header that includes that header will have transitive header dependencies; to solve this you use precompiled headers, which you might have to set up manually depending on what IDE you are using.

It's all too painful.


> You have to write everything in a header file and then in an implementation file.

No you don't. You write the interface you guarantee to the public into the header file. When you start to put code in there, it stops being a header file.

> because your headers will #include some library that is immense and then every header that includes that header will have transitive header dependencies

Your approach is what leads to this problem. Your header files should be tiny and only composed of, well, headers. Also, almost all header files should have include guards, so including one more than once should be a no-op.[1] Nothing stops you from including implementation files.

[1] When you say that your compiler doesn't have that optimization: when it has the complexity to support precompiled header files, it can also implement this optimization.


It gets better with experience. You can have a minimal base layer of common but rarely changing functionality. You can reduce static inline functions in headers. You can keep data structure definitions out of headers and put only forward declarations in header files. (Don't use C++ methods, or at least don't put them in an API, because they force you to expose your implementation details needlessly.) You can separate data structures from functions in different header files. Grouping functions together with types is often a bad idea, since most useful functionality combines data from two or more "unrelated" types -- so you'd rather group function headers "by topic" than put them alongside types.

I just created a subsystem for a performance-intensive application -- a caching layer for millions or even billions of objects. The implementation encompasses over 1000 LOC, but the header only includes <stdint.h>. There are about 5 forward struct declarations and maybe a dozen function declarations in that API.

To a degree it might be Stockholm syndrome, but I feel like, having had to work around a lot of C's shortcomings, I actually learned quite a lot that helps me in architecting bigger systems now. Turns out a lot of the flexibility and ease that you get from more modern languages mostly allows you to code more sloppily, but being sloppy only works for smaller systems.


If you were forced to choose between creating a cross-platform project in one of the trendy languages - one which, of course, must also work on tiny hobbyist hardware with weird custom OSes, and on 30-year-old machines in some large organization's server farm - then you would choose the C++ project, since you would be able to make that happen, with some pain. With the other languages, you'd probably just give up, or need to re-develop all the userspace for a bunch of platforms so that they could accommodate the trendy language's build tool. And even that might not be enough.

Also: If you are on platforms which support, say, CMake - then the multi-platform C++ project is not even that painful.


> one which, of course, must also work on tiny hobbyist hardware with weird custom OSes, and on 30-year-old machines in some large organization's server farm - then you would choose the C++ project, since you would be able to make that happen, with some pain.

With the old and proprietary toolchains involved, I would bet dollars to doughnuts that there's a 50% chance of C++11 being the latest supported standard. In that context, modern C++ is the trendy language.


Why? There are lots of cross platform libraries and most aspects are not platform specific. It's really not a big deal. Use FLTK and you get most of the cross platform stuff for free in a small package.


This is one of the things that has me so hesitant towards upgrading my "server". I've been using an old Thinkpad for a while now and it has served me well, but lately I've been using it for more intensive things (like JetBrains remote development and a Jellyfin server). It's become a regular occurrence that, while I'm trying to sleep, its fans spin up and sound like it's trying to take off because someone downstairs is watching a movie from it. I don't begrudge them for it since I set it up for that exact purpose, but it can make it difficult to sleep soundly.

The most obvious solution would be to build a small PC: more powerful and bigger fans mean less noise. I've been considering something like this (https://www.youtube.com/watch?v=Jr5MjhgPz_c)... but then how am I supposed to use it? Yes, I can ssh into it, but what if it fails to start? Just last month my Thinkpad server failed to restart properly. This was a trivial fix, but it being a laptop whose lid I can just open and use immediately made it an extremely easy fix, which would not be true for a PC.

Thing is, I know that dumb terminals exist, ie, a screen, keyboard, and trackpad that take the form factor of a laptop but have no actual internals; they're just a convenient interface when plugged into a server. I've seen them. I've tried searching for them, but there doesn't seem to be an agreed-upon search category, and the ones I manage to find are more expensive than the PC itself and are usually designed as a server-rack drawer.

Genuinely, what do people do here? Do they just have their server set up somewhere like a desktop? Or are people keeping spare monitors, keyboards, and mice around that they then need to unpack, plug in, and use awkwardly before putting it all away again?


I have a tiny HDMI screen which I can power from a USB port which I can plug into a computer if for some reason it is unreachable over the network. (this one: https://www.amazon.com/dp/B0B1L935ZT ), and a tiny keyboard with built-in track pad (something like https://www.amazon.com/dp/B00B9996LA ).

They're stored together in a small box with all needed cables, so they're easy to take with me to whichever computer is having issues. In practice I only use them a few times per year.


SSH most of the time of course, and a management interface (iDRAC, iLO, etc) if you have an enterprise server; otherwise an old monitor and spare keyboard. Sometimes they'll support serial out that you can use over a cable to another computer instead of the whole monitor+keyboard combo. Or nowadays you can use a network KVM like a PiKVM, NanoKVM, or JetKVM


The thing about network KVMs is, they require... a network. So if you already can't ssh into your machine, it may be a network issue, and thus you can't use the network KVM either.


I just keep a cheap screen and a cheap keyboard near my servers. No need for a mouse. For my garage and basement servers, the KV stays in place always; at the MIL's condo, the KV goes away when not in use... and the keyboard got moved at some point, so I have to remember to bring it over when it needs adjusting.

Around me, most days I can stop at goodwill and get a monitor and keyboard for $30 or less.


> I can stop at goodwill and get a monitor and keyboard for $30 or less.

The issue isn't cost in this case, it's the storage and effort of having to lug it out and put it back afterwards. Even if someone gave me an old screen and keyboard for free, I'm still not going to build that server PC. I've been looking into PiKVM as advised by another comment and they're pretty pricey at ~£200 but that's genuinely orders of magnitude more preferable. In another conversation on another platform, I was told about nexdock, which is more for docking phones but can be used as a dumb terminal, which is pretty enticing... though their website is pretty dubious, eg: the shop doesn't even tell me what version of the nexdock I'd be buying.


> The issue isn't cost in this case, it's the storage and effort of having to lug it out and put it back afterwards.

Then connect it once and leave it. :p


Okay, thanks for the advice


> but what if it fails to start?

Since you're mentioning opening the laptop's lid, I assume you mean literally failing to start, as after power cycling. For that, wouldn't simply hitting the power button be enough? It certainly doesn't require a keyboard. If you plan to place it somewhere not easily accessible, there is Wake-on-LAN, which most modern PC motherboards support.

If some maintenance task cannot be done with ssh/tmux, you can always use remote desktop software; on a local network even RDP will do. And if something went wrong enough for you to not be able to connect to the server remotely, then there is indeed no way around bringing over and connecting a spare keyboard and monitor, but events like that should be quite rare normally.


I have a keyboard and monitor somewhere in a closet. That's also what we do at work with our "real" servers.

I'd say it's needed about once a year at most though. Servers don't just fail to start, normally.


> The most obvious solution would be make a small PC: more powerful and bigger fans means less noise

In the performance window of "old Thinkpad", why not go fanless? Those lovely little Intel N150 mini-PC boxes are mostly fanless and completely silent - I have one on my desk running Jellyfin/web server/etc, and it's inaudible under load.

> but what if it fails to start?

In ~15 years of running headless linux boxes, I've never had one crippled to the point it wouldn't boot as far as ssh.


My server sits next to my existing desktop, and I just move the keyboard cable from one to the other when I need to get at a local interface on the server. One of my monitors has two inputs, and so is always plugged into both, I can just change the input selected. Not the "cleanest" solution, but it works when I need to get at it, and the space it's in wasn't being used by anything else.


Have it boot with serial out, get a cheap usb to serial dongle, and use the laptop you have to serial in. Or do you specifically want a gui?


PiKVM

or really any cheap IP based KVM is what you’re looking for


> The term “evil” is being used partially hyperbolic to make a point.

Kind of bonkers this even needs to be said, and even then it's missed/ignored.


The title is provocative and attention grabbing. -- It's completely fair game to react to the provocation rather than the substance of the article itself. (Or, rather, it's silly to use attention grabbing rhetoric, then complain that people paid attention to the rhetoric).

I'd prefer instead a more balanced title like "Remember to Consider the Costs When Using Package Managers", or whatever.


> It's completely fair game to react to the provocation rather than the substance of the article itself.

Yeah, but it's downright stupid to do so.

The title isn't even misleading or part of a Motte-and-bailey argument.

People just hear "Package Managers are Evil" and assume that the author means you shouldn't use third party dependencies. Which is NOT what's being argued.

But I guess you'd know that, if you read past the title.


In the article, the author does say "I am not advocating to write things from scratch", while also describing third party dependencies as liabilities (e.g. security vulnerabilities), saying that people are too trusting of third party dependencies, and that people overestimate the quality of third party dependencies.

I think you're splitting hairs if you're saying that these points from the article argue against package managers but don't argue against using third party dependencies.

I similarly think you're splitting hairs if you consider "are package managers useful?" and "are third party dependencies useful?" as distinct points.


Liability: "Something for which one is liable; an obligation, responsibility, or debt."

Third party dependencies absolutely are liabilities. You are liable to vet them, inspect their licenses and keep them updated while ensuring that they continue working with your existing code.

This is not something package managers help you do. Package managers like NPM make it trivial to skip these steps entirely.

What is being argued for, is a more thoughtful approach to handling third party dependencies. Or at the very least, the need for people to realise that there are costs associated with bringing third party dependencies into your codebase.

It's not splitting hairs at all. It's more of a presumption on the part of a large number of readers that the 2 points argued conflate to "Package managers suck, because third party dependencies suck and you should write everything from scratch instead".


Sorry, but I lack any respect for authors that use clickbait. Calling them out and moving on seems the best approach.


It's not clickbait though.

You should try reading the article before passing judgement.

It's not like the article is called "5 facts that will make you hate package managers. Number 5 will shock you"


It was clickbait because the article, which I did read, did not support the contention that package managers are evil. Therefore "evil" seems to be used in a hyperbolic way to grab attention, which makes it clickbait, specifically ragebait.


I wouldn't class it as clickbait myself, but I will stand by the use of the word "evil". I am using evil in the very old fashioned sense: the privation of the good. Is the title provocative? Yes. But that's the point of the article in general. I am trying to argue that they are a net bad with virtually no good upsides to them for the programming world as a whole. They've automated something at scale which should not have been automated. And to be clear, there is no solution to the problems they are trying to solve, rather it's all about trade-offs.

I'm a little annoyed that the HackerNews post renamed it to "A critique of package managers", because that implies very different connotations. I'd read an article written like that as saying I have some criticisms that could be addressed, rather than that the entire concept is bad from the start.


> I am trying to argue that they are a net bad with virtually no good upsides to them for the programming world as a whole.

What I'm saying is that you have failed in this argument. You hardly even attempt to make it. Thus clickbait.

You said "this is why I am saying it is evil, as it will send you to hell quicker."

Okay, so then it's up to you to prove this hell actually exists. But you don't. You just assert its existence -- "Dependency hell is a real thing which anyone who has worked on a large project has experienced." By framing it this way, you can dismiss anyone who claims to not have experienced this as not having sufficient experience. But reading the comments here, a lot of people have experienced a sort of "dependency hell" (the kind that's talked about in the wiki you link to) that is solved by package managers.

So that's why it's classed as clickbait -- you (admittedly) wrote a provocative headline that you don't even remotely back up.

FYI for the future, since you're lamenting in many comments that people are misinterpreting you: this is why. Given that you don't really make an attempt to prove that this dependency hell exists and that package managers are evil, and you don't acknowledge anything good about them, it's reasonable to assume your bias is just that dependencies are evil at their core. It's actually the most charitable reading, because otherwise you seem confused.


Then again, there is a trope going back to Knuth - "Premature optimization is the root of all evil" - which is an argument that it is not clickbait, but merely applying a pattern in discussions about computer programming.


Hyperbole is just a pretty common thing for humans to do


> The title is provocative and attention grabbing. -- It's completely fair game to react to the provocation rather than the substance

No it isn't.


The title of the article comes from the direct words I said in the video, of which the article is effectively a polished transcription.

Your "more balanced title" isn't even close to what I am saying. I am saying that Package Managers are just bad and should not be used. Not "remember to consider the costs". The net cost is bad for everyone, that's why I said "evil".


I guess clickbait is evil


> Also the fact is that I don't need AI included into this software.

Yup, the fact that every new service now is AI-first is troubling... it's literally the first thing said about Amber when going on their website. My tab says "Amber — AI-enabled all-in-on...". It's like the only thing they want you to know about their service.


Checked exceptions are not the problem. In fact, I don't think there should be ANY unchecked exceptions outside of OOM and other JVM exceptions. I will die on this hill. The actual problem is that handling exceptions in Java is obnoxious and verbose. I cannot stress enough how bad it feels going from Zig back to Java, error-handling wise. If Java had Zig-style inline catching, there wouldn't nearly be as much humming and hawing over checked exceptions.
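
To make the verbosity concrete, here's a minimal sketch (the ConfigExample class and the "{}" default are made up for illustration): falling back to a default when a checked exception is thrown costs a full statement-level try/catch in today's Java, where a Zig-style inline catch would be a single expression.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class ConfigExample {
        // Falling back to a default when a checked exception occurs
        // means wrapping one expression in a full try/catch statement.
        static String readConfig(Path path) {
            String config;
            try {
                config = Files.readString(path);
            } catch (IOException e) {
                config = "{}"; // fall back to an empty config
            }
            return config;
        }
    }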


A hot take I have is that Java's checked exceptions - which everyone hates - are semantically very similar to Rust's Result<T, E> and ? error handling - which is broadly liked. The syntax makes or breaks the feature.
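
A rough sketch of the parallel (ParseError and the port numbers are hypothetical): the throws clause plays the role of the E in Result<T, E>, not catching plays the role of ?, and catching is roughly a match on Ok/Err.

    // Hypothetical checked exception, for illustration only.
    class ParseError extends Exception {
        ParseError(String msg) { super(msg); }
    }

    class ThrowsAsResult {
        // Signature-wise this is close to `fn parse_port(raw: &str) -> Result<i32, ParseError>`:
        // the success type is the return type, the error type lives in the throws clause.
        static int parsePort(String raw) throws ParseError {
            try {
                return Integer.parseInt(raw.trim());
            } catch (NumberFormatException e) {
                throw new ParseError("not a number: " + raw);
            }
        }

        // Declaring `throws ParseError` and just calling through propagates the
        // error to the caller, much like Rust's `?` operator.
        static int configuredPort(String raw) throws ParseError {
            return parsePort(raw);
        }

        // Catching is, roughly, matching on the Ok/Err cases.
        static int configuredPortOrDefault(String raw) {
            try {
                return parsePort(raw);    // ~ Ok(value)
            } catch (ParseError e) {      // ~ Err(error)
                return 8080;
            }
        }
    }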


It shares the same important properties (short-circuiting, explicit & type-checkable) with Rust's `Result<>`, yes. I don't think it's a direct equivalent though:

- Java's exception creates stack trace by default

- Type erasure which prevents you from doing anything "fancy" with try-catch + generic's type parameter

- You can only throw something that extends `Throwable` rather than any type `E`

- etc

But yeah sure, if Java somehow provided a standard interface & an associated operator like Rust's Try trait to handle checked exceptions, it would probably have a much better reputation than it has now.


A while ago, I had attempted to do it Rust-style with a Result type in Java (where it is called data-oriented programming), and the result is not great. I was fighting a losing battle in a naive attempt to replace checked exceptions while still retaining compile-time type safety.

https://www.reddit.com/r/java/s/AbjDEo6orq


Interesting anecdote.

At $DAYJOB, I'm currently migrating a non-performance sensitive system to a functional style, which requires defining basic functional types like Option<> and Result<>. So far I think it works pretty well and certainly improves edge case handling through explicit type declaration.

My current stable incarnation is something like this:

  sealed interface Result<T,E> {
      record Ok<T,E>(T value) implements Result<T,E> {}
      record Error<T,E>(E error) implements Result<T,E> {}
  }
Some additional thoughts:

- Using only 1 type parameter in `Result<T>` & `Ok<T>` hides the possible failure state. `Result<Integer>` contains less information in its type signature than `Integer throwing() throws CustomException`, leaving you to fall back on catching `Exception` and handling it manually, which kind of defeats the typechecking value of `Result<>`

- A Java developer might find it unusual to see `Ok<T,E>` and realize that the type `E` is never used in `Ok`. This is called a "phantom type" in other languages. While Java's generics prevent something like C++'s template specialization due to type erasure, in this case the unused parameter helps the type system track which type is which.

- I would suggest removing the type constraint `E extends Exception`, mirroring Rust's Result<> & Haskell's Either. This restriction also prohibits using a sum type `E` for an error.

- In case you want to rethrow type `Result<?,E>` where `E extends CustomException`, maybe use a static function with an appropriate type constraint on `E`

  sealed interface Result<T,E> {
      // added to the Result interface above; rethrows the first error found (sketch)
      static <E extends Exception> void throwIfAnyError(Result<?,E>... results) throws E { /* ... */ }
  }
- I like the fact that overloading a method with a reference-type arg and a functional-interface arg triggers an "ambiguous method call" error if you try to pass a bare null (sketched below). This behavior is pretty handy for ensuring anti-null behavior on `Option.orElse(T)` & `Option.orElse(Supplier<T>)`, leaving `Option.get()` as a way to "get the nullable value" and a "code smell indicator"
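
A minimal sketch of that overload behaviour, assuming a hand-rolled Option along the lines above (Some/None and the method names are illustrative, not java.util.Optional):

    import java.util.function.Supplier;

    sealed interface Option<T> {
        record Some<T>(T value) implements Option<T> {
            public T orElse(T other) { return value; }
            public T orElse(Supplier<? extends T> supplier) { return value; }
        }
        record None<T>() implements Option<T> {
            public T orElse(T other) { return other; }
            public T orElse(Supplier<? extends T> supplier) { return supplier.get(); }
        }

        T orElse(T other);                          // eager fallback
        T orElse(Supplier<? extends T> supplier);   // lazy fallback
    }

    class OptionDemo {
        static void demo(Option<String> name) {
            System.out.println(name.orElse("anonymous"));        // resolves to orElse(T)
            System.out.println(name.orElse(() -> "anonymous"));  // resolves to orElse(Supplier)
            // System.out.println(name.orElse(null));            // does not compile:
            //                                                   // ambiguous, null matches both
        }
    }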


Yeah, Java's generic-type erasure makes result types difficult, though as you mention in your code example, this can be mitigated some using switch guards. But you could also go into another switch:

    switch (result) {
        case Ok(var value) -> System.out.println(value);
        case Err(var ex) -> {
            switch (ex) {
                case HttpException httpEx -> {
                    // do something like a retry
                }
                default -> {
                    // if not, do something else
                }
            }
        }
    }


Why? It's a good grammatical equivalent to the full stop for the programmer. It can serve as useful context for the compiler. And it's only one character. Antagonism over semicolons is another strange symptom of conciseness at all costs. If you want APL, just use APL.


I always thought ending programming statements with a period '.' like in Prolog was more elegant.


> If you want APL, just use APL.

Or Python, Go, or Typescript.


Or Swift


or Haskell


A counter argument is It’s meaningless for the developer and high level programming is writing for people not things.


That’s why when we write, we use commas and periods. It tells the reader when a thought ends and the next begins. A semi-colon is the traditional period in programming. Not everything fits on one line. Python managed to pull it off and now everyone thinks it’s the right way… it’s just “a way” but by no means modern or right. JavaScript made them optional, but it results in ambiguous parsing sometimes, so it’s not a good idea there either.

In any case, I doubt a run on sentence is “meaningless” but it is hard to parse.


The only complaint I have about RSS is that it seems antagonistic to edits. It's not unusual that, when refreshing my podcast RSS feed, there are multiple versions of the same episode because someone made an edit somewhere in the title or description, etc. I've had five versions of the same episode before. I feel like we should have the technology to fix this by now :P


I think the problem is that there are so many different standards[0], which makes it hard to parse them in a uniform way. The second problem is that most feeds only have 15 items; even if a reader handles updates, older entries are quickly lost forever.

[0] https://ivyreader.com/articles/rss-standart-collection


We do. Atom feeds have an updated field for this. But, it's up to whoever is generating the feed to know how to handle their metadata.


RSS provides a GUID plus an update timestamp, which combined allow the client to integrate changes or replace entries.
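
For instance (a minimal sketch; the Entry record and its field names are made up, not part of any feed library), a client could key entries by guid and only replace a stored entry when the fetched one has a newer updated timestamp, instead of keeping both versions:

    import java.time.Instant;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical, minimal representation of a feed item.
    record Entry(String guid, Instant updated, String title, String description) {}

    class FeedMerge {
        // Merge a freshly fetched feed into what the client already has:
        // entries are keyed by guid, and a newer `updated` timestamp replaces
        // the stored version instead of creating a duplicate episode.
        static Map<String, Entry> merge(Map<String, Entry> stored, List<Entry> fetched) {
            Map<String, Entry> merged = new LinkedHashMap<>(stored);
            for (Entry e : fetched) {
                merged.merge(e.guid(), e,
                    (old, fresh) -> fresh.updated().isAfter(old.updated()) ? fresh : old);
            }
            return merged;
        }
    }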


That's what the guid / id field is for.


Search: