Yes. When Varnish Cache launched, in 2006, I worked in a rather small OSS consultancy, which did the Linux port of Varnish Cache and provided maintenance and funding for the project.
You say, "Yes. When Varnish Cache launched, in 2006, I worked in a rather small OSS consultancy, which did the Linux port of Varnish Cache and provided maintenance and funding for the project."
But eventually phk left, and you came into conflict with him over the name, which was resolved by him choosing a different name for his version of Varnish?
We've been funding phk's work on Varnish and Vinyl Cache for 20 years. Do you think phk can write, maintain and release something on his own? Vinyl Cache cannot be a one-man show, be real.
He knows a lot of things and is amongst the best software developers I've worked with, but on a project like this you need a lot more breadth than any single developer can bring.
> If you have 100 instances you really want them to share the cache
I think that assumes decoupled compute and storage. If instead I couple compute and storage, I can shard the input, and then I won't share the cache across the instances. I don't think there is one approach that wins every time.
As for egress fees, that is an orthogonal concern.
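To make the sharding idea concrete, here is a minimal sketch in Go. The instance names, key choice, and plain modulo hashing are just illustrative (a real deployment would likely use consistent or rendezvous hashing so instances can come and go): each key is routed to a fixed instance, so that instance's local cache only ever holds its own shard of the input.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor picks which instance should handle (and cache) a given key.
// With a stable key-to-instance mapping, each instance's local cache only
// ever holds its own shard, so no shared cache tier is needed.
func shardFor(key string, instances []string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return instances[int(h.Sum32())%len(instances)]
}

func main() {
	// Hypothetical backend instances with coupled compute and storage.
	instances := []string{"cache-0", "cache-1", "cache-2"}

	for _, url := range []string{"/a.css", "/b.js", "/a.css"} {
		// Identical keys always land on the same instance, so its local
		// cache collects all the hits for that shard of the input.
		fmt.Printf("%s -> %s\n", url, shardFor(url, instances))
	}
}
```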
People eventually come to realize they're not so great once they have to apply real-world, cross-cutting concerns like access control, audit, logging, debugging, profiling, monitoring, throttling, backup, and recovery.
The emperor's new clothes might have a hole or two in them.
It seems that way because people are stuck thinking in terms of an operating system. Need access control? Put a file on the server. Need auditing? Log into the server. Need logging? A text file... on the server. None of these need be done this way, and in fact ways that make sense when you have a full operating system don't make sense with a unikernel. Hint: All of these things should be database-driven.
Access Control: There is none internally. We don't have the notion of users.
Logging: Keep using whatever you want, be it Elasticsearch, syslog, CloudWatch, etc. No opinions here.
Debugging: GDB works fine, and in many cases, since you can simply export the VM in its entirety and attach to it locally, this becomes even easier to debug than the same application running on Linux.
Profiling: We support things like ftrace, and of course you can export Prometheus metrics (see the sketch below).
Monitoring: Kinda in the same boat as logging - keep using whatever you are using today: Datadog, VictoriaMetrics, etc.
Throttling: This is traditionally an app-level concern that someone would implement, perhaps at a load-balancing layer - keep using whatever you are using (the sketch below includes a simple in-process limiter).
Backup/Recovery: Running unikernels makes it trivial to back up, since you can clone running VMs. In fact, most cloud deploys involve making a snapshot that is already stored as a 'backup', which makes things like rollback much easier to do.
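As a rough illustration of the profiling and throttling points above, here is a minimal sketch of an app that exports its own Prometheus metrics and applies an in-process rate limit. It uses the prometheus/client_golang and golang.org/x/time/rate packages; the metric name, limits, and routes are made up for the example.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"golang.org/x/time/rate"
)

// Illustrative counter, labelled by request outcome.
var requests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "app_requests_total",
		Help: "Requests seen, labelled by outcome.",
	},
	[]string{"outcome"},
)

func main() {
	prometheus.MustRegister(requests)

	// App-level throttle: 100 requests/second with a burst of 20.
	limiter := rate.NewLimiter(rate.Limit(100), 20)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			requests.WithLabelValues("throttled").Inc()
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		requests.WithLabelValues("ok").Inc()
		w.Write([]byte("hello\n"))
	})

	// Scrape target for Prometheus (or anything that speaks its format).
	http.Handle("/metrics", promhttp.Handler())

	http.ListenAndServe(":8080", nil)
}
```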
Command line. Packages. Mounts. File systems. Any standard anything. There's nothing unless you reinvent the wheel. Standardization and repeatability allow reuse of the work of many others. Unikernels throw 99.99% of it away.
Mounts also exist - in fact you can hotplug volumes on the fly on all the major clouds. People really like this because they can do things like rotate SSL certs every few hours.
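As a rough sketch of that cert-rotation pattern (the mount path and the reload-on-every-handshake approach are just for illustration), the server can re-read the key pair from the mounted volume during the TLS handshake, so a freshly hot-plugged volume with new certs takes effect without a restart:

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// Hypothetical mount point where the rotated certificate and key land.
const certFile = "/mnt/certs/server.crt"
const keyFile = "/mnt/certs/server.key"

func main() {
	cfg := &tls.Config{
		// Re-load the key pair on each handshake, so a freshly mounted
		// certificate is picked up without restarting the server.
		GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
			cert, err := tls.LoadX509KeyPair(certFile, keyFile)
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}

	srv := &http.Server{Addr: ":443", TLSConfig: cfg}
	// Empty file arguments are fine because GetCertificate supplies the cert.
	srv.ListenAndServeTLS("", "")
}
```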
The file system exists - at the very least you need to load a program that sits on the filesystem, but people are running webservers that write to temp files and load static assets, and databases too.
Back in 2008-2009, I remember the Varnish project struggling with what looked very much like a memory leak. Because of the somewhat complex way memory was used, replacing the glibc malloc with jemalloc was an immediate improvement and removed the leak-like behavior.