Hacker News
Toolship: A more secure workstation (yann.pt)
68 points by yapret on Sept 20, 2023 | 58 comments


The best answer to keeping your workstation clean & secure is, in my view, a thin-client paired with ephemeral remote environments:

Immutable or chain-of-trust-based host OS (e.g. Nix or iOS)

Minimal software installed (including docker, which itself is heavy and full of vulns or opens the door to them)

Do everything on ephemeral remote environments where the configuration is stored in reviewable tools (e.g. GitHub) and the state can be wiped at will. This reduces your surface area for persistent malware to supply-chain and network attacks, which require careful practices to avoid but are well understood.

Remote envs are preferred to local virtualization (e.g. Qubes) because they lend themselves better to team use and sharing, and so are more likely to be widely adopted and collectively improved. It's also easier to create different hardware configurations as needed (when you temporarily need a bigger GPU), as well as different environment types - e.g. always-on previews for QA testing. It also eliminates persistent paths in the local OS for malware storage.
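To make "configuration stored in reviewable tools" concrete, one hedged sketch is a devcontainer.json committed to the repo, so the entire environment definition is reviewed in PRs and the running state can be thrown away at any time (the image and command here are illustrative, not a recommendation):

```json
{
  "name": "ephemeral-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "npm ci"
}
```

Because the file lives in version control, any change to the environment goes through the same review process as code, which is the supply-chain discipline the comment is pointing at.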


> The best answer to keeping your workstation clean & secure ...

> Immutable or chain-of-trust based host OS (e.g. nix or iOS)

Pegasus?

Did you notice that Apple fixes vulnerabilities only after they are found by third parties?


Your workstation will never be secure. Ever. It's not possible. Give up and work to implement zero trust. Ephemeral environments like CI/CD are not inherently secure because they're ephemeral either.


If we assume a workstation compromise, why would we trust the zero trust infrastructure that was set up with that workstation?


You can't trust it. One of the first real lessons security work teaches you is that nothing can ever be perfectly secure, it's all about achieving a level of security that is appropriate given your threat model, and gives you enough confidence to be able to sleep at night.

There are steps you can take that involve incredibly secure procurement processes for air-gapped devices, but even that will not guarantee absolute security.


Interesting approach to maintaining a clean dev environment using containers. This approach reminds me of Fedora Silverblue[1] that I’ve been wanting to try. It leverages OSTree for atomic upgrades and rollbacks. Users can run containers for CLI utils using toolbox[2]. This way, the base OS remains pristine, and there's less risk of "dependency hell" or inadvertent “package upgrades gone wrong”.

[1]: https://fedoraproject.org/silverblue/

[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolb...


I'm running Silverblue but running my containers through distrobox. Both toolbox and distrobox run on podman under the hood, so it's the same technology as far as I understand. However, distrobox has some interesting features relevant to this idea of development isolation. One is that it has an assemble feature[1] built in, where you can feed it a recipe file and it will build or rebuild containers accordingly. The other is that it allows setting a custom home directory for the container, among other host/container isolation options[2].

Performance-wise, my containers take a couple of MB of RAM and no perceptible CPU usage when not in use. At least as far as I can tell.

[1] https://github.com/89luca89/distrobox/blob/main/docs/usage/d...

[2] https://github.com/89luca89/distrobox/blob/main/docs/usage/d...
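For context, an assemble recipe is a small ini-style manifest. A hedged sketch of what one looks like - the key names are from my reading of the distrobox docs, so verify them against the current assemble reference:

```ini
# Illustrative distrobox assemble manifest (distrobox.ini).
# Rebuildable at will: distrobox assemble create --file ./distrobox.ini
[dev]
image=registry.fedoraproject.org/fedora-toolbox:38
additional_packages="git make gcc"
home=/home/me/boxes/dev
pull=true
```

The point relevant to this thread is that the container becomes disposable: the recipe is the source of truth, and a compromised or broken box is recreated from it rather than repaired.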


I use Silverblue and it works like a charm. The only problem I found is that you need to install your IDE locally instead of using a Flatpak to be able to launch it from within a toolbox. Also, toolboxes cannot be upgraded across major versions (e.g. Fedora 37 -> 38) and have to be recreated instead.


I have silverblue on a laptop, but hadn't poked about sufficiently yet. Toolbox looks great. Thanks for pointing it out.


Docker seems a bit unwieldy for this use case. Given it runs in a Linux VM on macOS, wouldn't all these commands have considerable overhead?

On Linux, the pledge utility[1] seems like a better fit for this. I'm not aware of what the macOS alternative would be, but considering this functionality stems from OpenBSD, maybe it can be ported to Darwin?

[1]: https://justine.lol/pledge/


macOS doesn't have seccomp, so no. The functionality could be implemented, but it would have to use a completely different mechanism.


You might enjoy https://github.com/jessfraz/dotfiles/blob/master/.dockerfunc I think she had an article about this as well.


https://blog.jessfraz.com/post/docker-containers-on-the-desk... is the one I remember, a bit old but still useful to see how she does it.

Seems super painful and indirect for a nebulous gain to me, but find your joy however you want, I guess.


I would also recommend looking into NixOS reproducible builds, which allows declaratively specifying the entire system configuration and precisely defining which packages are installed, their versions, and dependencies. The OS remains immutable and consistent. A quite powerful tool for creating a secure and minimalistic workstation environment.

https://nixos.org/


You can also use Nix/Home Manager to manage your Mac (which is what the author is using). I used NixOS for a few months as a VM and then eventually just switched back to my Mac. Couldn't stand all of the hacks that were needed to get software to behave the way Nix expected it to (i.e., JIT binaries not dynamically linked to /nix/store).


Nix-Darwin is a nice module system for macOS as well. It's a bit older than Home Manager, and also supports configuring some macOS-specific and systemwide settings.


A more extreme version of this would be to install something like Proxmox on a machine (it doesn't have to be the actual machine you're using, but it probably could be) instead of the standard OS, and then create virtualized containers for each "use case" (and then use good security practices on each containerized OS as well, of course).

Set up correctly, if any one container were to get compromised, it shouldn't leak out to any of the others. It would be super inconvenient; I'm guessing that to have a semblance of efficiency there would still likely be a "main" container and you'd SSH into the others to do the tasks associated with each of them. Not too much different from the "clean OS" described here; the helper scripts could probably be similarly adapted to use the individual containers instead of Docker containers.

I personally would be hard pressed to consider something like that, but seems like the logical continuation of this type of machine configuration/setup.



They are similar, but Qubes is more targeted at the end user - Proxmox is more traditionally used as the hypervisor for distributed applications. You can probably achieve the same goals with either, though (barring differences between Xen and KVM).


Proxmox isn't a hypervisor (last time I checked!), it's a management plane to different hypervisors.


> Proxmox is an open-source, Type 1 hypervisor that comes as a Debian-based Linux distribution. With Proxmox, users can experience a hypervisor that can integrate Linux containers (LXC) and KVM hypervisor, networking functionality, and software-defined storage in a unified platform.

I'm not saying you are entirely wrong - it can be used as a management plane for different hypervisors, but it is also a hypervisor in its own right, as I understand it (I grabbed the quote above from ServerWatch). There is a lot of confusion about this topic, as some people argue it isn't a type 1 because it goes through KVM, but others rebut that because KVM is in the kernel and has direct hardware access (a very rough summary of arguments I barely know enough about to keep up with, and sometimes don't).


> Proxmox isn't a hypervisor

KVM is in the kernel, and I specifically called out KVM. If the point you are trying to make is that KVM is the hypervisor, then Qubes is also not a hypervisor because it uses Xen.

But this seems like a very strange distinction to make to me unless you are specifically trying to peer into the inner-workings. At that point you'd probably be saying ESXi is "not a hypervisor" because it has to defer the actual VM deployment to vmkernel.

I don't know of an OS without a kernel.


Proxmox can use a hypervisor, KVM, or manage containers. It's not a hypervisor itself. Xen is also a hypervisor. Saying Proxmox is a hypervisor is like saying virt-manager is a hypervisor.


You're just debating semantics, debatably incorrectly, and for no reason. KVM is not a type-2 hypervisor in this case - Proxmox can be hosted on bare metal and use KVM natively.

> Saying proxmox is a hypervisor is like saying virt-manager is a hypervisor.

This is... just wrong? Proxmox is much more equivalent to ESXi than to a UI application.

--

To grant you the tiniest bit of good faith, I would wager that you and I are on two sides of this specific coin.

An excerpt from Wikipedia on the matter (https://en.wikipedia.org/wiki/Hypervisor):

> The distinction between these two types is not always clear. For instance, KVM and bhyve are kernel modules[6] that effectively convert the host operating system to a type-1 hypervisor.[7] At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.[8]

You seem very concerned that KVM (and thus Proxmox) cannot be considered a Type 1 Hypervisor. I disagree.

But if your assertion is that Proxmox cannot natively deploy VMs... then I have no idea what to tell you. You're blatantly wrong. Just try it.


Just try it? Didn't want to have a pissing contest, but if you will.

I have, since the late 2000s, and I used it for production deployments for eden.sahanafoundation.org in Haiti, Chengdu and other places, using Proxmox and KVM.

I've also built public and private clouds using OpenNebula and OpenStack (using KVM/libvirt). I'm also VMware certified (or was, back in the late 2000s, when working for a prominent UK ISP).

It's a management framework; it doesn't do the virtualisation itself, it uses the libvirt framework. I can use the same KVM hypervisor by using qemu-kvm (or virt-manager, which uses the same stuff). Again, it's just a management layer.


I’m matching your energy. Your original comment came in just to say “you’re wrong” as a weird, nitpicky contrarian - so I’m going to fight you on that.

You’re trying to make the distinction that KVM is separate - I’m saying KVM is a part of Proxmox, making them functionally one and the same.

If you want to be very precise - KVM is the hypervisor. It just so happens to also be a part of the kernel! And Proxmox can also run on bare metal, meaning it can deploy and manage VMs with direct hardware access. As I already gave a reference for - this seems to be a common point of disagreement, exactly like the one we are having at this moment.

Your nitpick is a muddying of waters in an attempt to “be superior” (and attaching your LinkedIn is pretty odd).

If you’re to be consistent, you’d also be saying ESXi is not a hypervisor - you’d say only vmkernel is. On a tight technicality this might be true, but it’s such a nitpick that unless you’re actively debugging in that layer of the stack the distinction is worthless.

I don’t think you and I will see eye to eye. You are so hyper focused on nitpicking a tiny definition that I’m not willing to concede on.


Proxmox doesn't do the virtualisation, that's KVM. That's the hypervisor. I'm not sure why you find this so difficult to understand.

You told me to "try it". I have, many times; I told you about them, and yet you think I'm just attaching my LinkedIn. I'm answering the thing you told me to.

Not sure what 'energy' you're going on about, if you reread this you might realise you're the one being a bit obtuse.

You're also making up stuff I might potentially say, whilst also admitting I'm probably right, which says a lot about you imho. Maybe work from the things people say, not what you think they say in your head.

I hope you find inner peace, but for reference, proxmox is a management layer using libvirt, which interacts with the hypervisor, KVM. Jeez...


So, it's not a hypervisor, KVM is the hypervisor, it's a management plane using perl and qemu (fixed it for you, you're right, libvirt isn't used, my bad) to do the same thing libvirt does.

Glad we cleared up that it's not a hypervisor though, KVM is the hypervisor, whatever glue that sits between them (be it perl/qemu or libvirt). Promox is still not a hypervisor.


> So, it's not a hypervisor

No. I disagree for reasons you refuse to respond to.

> KVM is the hypervisor

Yes I’ve said this since the beginning?

> qemu

Thanks for acknowledging you were a liar.

> Proxmox is still not a hypervisor.

I’ve already acknowledged why I see why you think this, because you want to strictly define the line at KVM. I, and many others, disagree with you.

In fact, general Google consensus also disagrees with you. Please, change https://en.m.wikipedia.org/wiki/Proxmox_Virtual_Environment and https://en.m.wikipedia.org/wiki/Hypervisor if this is the biggest hill you must die on.

Your weird pretense that Proxmox is not capable of deploying VMs is objectively wrong. How does it do it? Via KVM - an integral component of Proxmox.

Since you’re a brick wall who can’t see nuance, understand conventional definitions, or even give the slightest amount of understanding to another point of view, then there’s no reason to keep talking with you.


> Proxmox doesn't do the virtualisation, that's KVM. That's the hypervisor.

Proxmox has KVM as part of its kernel. You're deliberately ignoring this fact. I've already expressly stated that KVM is the specific part that does the virtualization multiple times and you keep pretending I'm not.

> I'm not sure why you find this so difficult to understand.

I'm not?

> Not sure what 'energy' you're going on about, if you reread this you might realise you're the one being a bit obtuse.

You went out of your way to nitpick a comment I made about the differences in how Proxmox and Qubes OS are used, just to say "you're wrong" about a detail that was irrelevant to the conversation.

Proxmox, ESXi, etc. are conventionally considered hypervisors.

> You're also making up stuff I might potentially say, whilst also admitting I'm probably right, which says a lot about you imho.

No, you don't know how to read. Let me take you back to first grade for a second.

What I said was that there is some dispute over whether KVM being part of the kernel that constitutes an OS makes the OS itself the hypervisor. I literally gave a reference to this distinction as well, and conceded that if you want to be very technically accurate, KVM is the hypervisor.

What I'm saying is that KVM is a core part of Proxmox that enables it to function as a hypervisor, and you are going to great lengths to ensure everyone knows that my claim is 100% verifiably wrong even though it's semantics.

> Maybe work from the things people say, not what you think they say in your head.

Let's take a step back. I said:

"proxmox is more traditionally used as the hypervisor for distributed applications... (barring differences in Xen and *KVM*)"

to which you said:

"Proxmox isn't a hypervisor (last time I checked!), it's a management plane to different hypervisors."

In other words - "you're fucking wrong, it doesn't include a hypervisor at all". Which is:

a) Not what I said. I said it is used as the hypervisor. You’re not fucking installing VirtualBox on it.

b) Intentionally ignoring the fact that I call out KVM in reference to it. What the hell?

c) Inaccurate.

Let’s break it apart.

> it's a management plane to different hypervisors.

Source? I haven't seen any capability of Proxmox to integrate with Xen, vmkernel, Hyper-V, XCP-ng, etc.

It deeply integrates with KVM (which you seem to never address, as if accepting this fact is akin to Voldemort to you).

> It's a management framework, it doesn't do virtualisation itself, it uses the libvirt framework.

Not true. From a developer themselves: https://forum.proxmox.com/threads/how-hypervisors-like-proxm...

> I hope you find inner peace, but for reference, proxmox is a management layer using libvirt, which interacts with the hypervisor, KVM. Jeez...

Again, spreading lies.

I can do this all day with you. I don't think you're just wrong now, I think you're actively lying.

--

Or, you can stop being such a dick. I already gave you an out, which is that this specific topic is actively debated online - just like we are doing now. But instead, you went this route:

> whilst also admitting I'm probably right, which says a lot about you imho

So no, you're clearly a bad faith author. Imagine telling a VI Admin that ESXi is not the hypervisor.


I think the killer app for proxmox would be integrating docker or podman as a first-order feature.

Right now you can set up a VM or an LXC container. In comparison to docker/podman, LXC is more like being a sysadmin.


Yah, as I understand it (not a sysadmin, just hobby stuff), if you want to run something in Docker, you'd have to set up the system first and then Docker on top of that. I'm not sure you need the Docker layer, though - it's probably unnecessary overhead. Each VM/container could just have the necessary OS with the packages needed for that particular use case installed directly.

Guessing someone who tried to use a system like this would probably have their own custom containers / linux distributions, or at least have custom install scripts that would function much like the docker compose file does in getting everything installed on that container for different use cases.


I was nodding along until it became clear the Docker containers were being run as root...


Since the author of this post is making efforts to bind mount specific directories, is that still a legit risk? Root inside the container isn't essentially the same as root on the host. But yes, UID and GID mappings along with user namespacing would be better.


But you have to be root outside of the container to run Docker. Which means the author has to run every single little command as root. That violates the principle of least privilege, increases the potential damage caused by bugs or mistakes, and therefore is a very legit risk.

Also notice how the shell snippets in the article don't use sudo to run docker. That indicates that the author probably added their user to the docker group, which is equivalent to always logging in as root. That's terrible, terrible security practice.

I can't agree that root inside the container is different from root on the host, either. The kernel makes no such distinction unless user namespacing is enabled. When containerized processes gain access to host resources, whether intentional or not, they'll have the same level of access as root on the host.
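One partial mitigation being alluded to here is running the containerized process as your own UID/GID rather than as root. The sketch below only builds the docker command string (it does not execute anything), so the mapping is easy to inspect; it is illustrative, not the article's actual scripts, and it does not replace real user namespacing (userns-remap or rootless engines):

```shell
#!/bin/sh
# Build (but do not run) a docker invocation that avoids root inside the
# container by passing the caller's UID/GID via --user, and mounts only
# the current directory. Illustrative sketch only.
build_cmd() {
  image="$1"; shift
  printf 'docker run --rm --user %s:%s -v "%s":/work -w /work %s %s' \
    "$(id -u)" "$(id -g)" "$PWD" "$image" "$*"
}

cmd=$(build_cmd node:20 node --version)
echo "$cmd"
```

Note that `--user` only changes the UID the process runs as; without user namespaces, UID 0 in the container is still UID 0 to the kernel, which is the parent comment's point.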


Noted. Yep, valid points. Thanks for explaining!


I think if I were going to the trouble of dockerizing and isolating all my tools, I wouldn’t want to rely on someone else’s registry of dockerfiles.

This reminds me a lot of the cycle of sandboxing:

https://xkcd.com/2044/


I dream of a situation where I'd be developing software on plain Debian Stable with nothing else needed. It is already an immense platform, so it does feel a bit ridiculous that I'd truly need much more than that.


I have written dew (https://github.com/efrecon/dew) for more or less the same purpose. I hardly keep any binaries (and dependencies) in my installation; they are all inside containers that I can easily dispose of at any time. The default in dew is to run them as your user. At the command prompt, instead of running, for example, kubectl xxx, I run dew kubectl xxx. It's a bit slower but provides an increased level of security.


I have the same concern. Something that might be worth looking into is replacing Docker with Podman, because it runs as the authenticated user rather than using a daemon running as root. Also, I believe Podman Desktop allows for multiple VMs.

Also consider QubesOS, where everything runs in a VM (if you can find appropriate hardware on which to run it).

Less flexible but easier to install is ChromeOS FLEX (or a high end Chromebook). Like QubesOS, ChromeOS lets you run Linux in a VM but with the ability to open native windows.


I understand the benefits of VM/Docker based isolation. But, how to efficiently share data across boundaries and still stay protected? How can a development VM protect against malicious NPM packages that steal sensitive data (e.g., secrets/keys/confidential code needed for development and present inside the VM)? Am I missing something here?


No, I don't think you're missing anything, other than that you'd only mount the directories you want the tool / development environment to have access to. Take, for instance, the `npm` command[1]: it mounts `$PWD`, so if you install a compromised package it can go through the folder you're in, but it can't then go up directories and sniff around your home directory. It would also only have access to the environment variables that have been configured for the container, which in this case would also include AWS credentials.

1 - https://github.com/yapret/toolship/blob/main/src/node/functi...
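The pattern described above - mount only `$PWD`, forward only an explicit allow-list of environment variables - can be sketched as a small wrapper. This builds the command string rather than running it, and it is an illustration of the idea, not the actual function from the toolship repo:

```shell
#!/bin/sh
# Sketch of a containerized npm: only the current project directory is
# mounted, and only named environment variables are forwarded with -e.
# Anything outside $PWD and that allow-list is invisible to the package.
npm_cmd() {
  printf 'docker run --rm -v "%s":/app -w /app -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY node:20 npm %s' \
    "$PWD" "$*"
}

out=$(npm_cmd install)
echo "$out"
```

The trade-off the sibling reply raises still applies: whatever lands in `$PWD` (including any forwarded credentials) is fully readable by a compromised package.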


Makes sense. However, to be able to use packages installed in $PWD (compatibility), the Docker image must be an exact replica of the host (e.g., same node version, same libs such as libcrypto). Besides, bins installed under ~/.local or /usr/local during "npm install" are not available outside the Docker image.


It looks like you are focusing on business secrets and, while those are important, my main concern would be personal secrets on the personal hardware that I also do side projects on.

A business today can reduce the blast radius by quite a lot with separate laptops ("customer/project laptop"), sample data and restricted/time limited access to production data.

In an ideal world no npm dependency could affect my online banking, icloud photo library or private messengers.


I've been doing this for years, but instead of Docker I use LXD (even for GUI stuff). My base system is always clean and idempotent (chezmoi). All my clients look the same: laptop, desktop workstation, etc.

I even have a portable LXD environment which always looks the same, on a USB flash drive. wherever I go, I have the same environment.


Anyone see a reason this couldn’t work for a fully “remote” dev environment, if the Docker daemon were running somewhere else?


I have been thinking about how to work with multiple users in a convenient way. The problem is that file permissions cause problems when you want to switch between users. I have not found a really ideal way of working with different users; I think there must be some approach, I just haven't found it yet.


Does group membership not help?


It can help a bit but it is still complex.


WSL is quite good for this type of thing.


WSL probably is good but I believe it still has access to the whole Windows file system.

Linux on ChromeOS is probably better because its file system is separate. Files and directories must be explicitly shared from the main OS.

I think both have the ability to open windows on the main desktop so you can run graphical IDEs and the like in the VM which is nice.


The filesystem access is optional and can be removed; WSL2 is actually a fairly nice implementation of a Linux sandbox that can be used as if it's just running alongside Windows. I think your concern is valid for WSL1, where the separation is managed by a Microsoft-provided Windows driver, but WSL2 is just a VM with dynamic memory allocation, so you can lock it down basically as well as any other Hyper-V VM; it just has a lot of integrations enabled by default.


Firejail can also be a useful option, though no good if you're on Mac https://firejail.wordpress.com/

Uses the same Linux primitives as docker etc, but can be a bit more ergonomic for this use case


Reminder: Docker Hub image tags are not cryptographically secure. They can be replaced by Docker Hub, their colo, or a government at any time.

You need something like org/image@sha256:<hash> instead.
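What a digest-pinned reference looks like in practice, as a hedged sketch - the digest here is a zero-filled placeholder, not a real image digest:

```shell
#!/bin/sh
# Digest references are immutable, unlike tags. The digest below is a
# placeholder built from zeros purely for illustration.
zeros=$(printf '%064d' 0)
ref="node@sha256:${zeros}"

# To discover the real digest of an image you have already pulled:
#   docker inspect --format '{{index .RepoDigests 0}}' node:20
echo "$ref"
```

Pulling or FROM-ing `image@sha256:<hash>` guarantees you get exactly the bytes that hash to that digest, regardless of what the tag now points to.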


You are entirely right. The problem with the sha256 is that you kind of need a comment pointing to the proper tag (at the time of generation). It gets tedious over time.


Workstation security is definitely in the category of things I'll only vaguely pretend to care about if you're paying me.


The favicon of the site looks familiar.


Docker is not a trust boundary CMV



