Hacker News: perbu's comments

The point is that invoking the OS has a cost. Using mmap, for those situations where it makes sense, lets you avoid that cost.
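To make the contrast concrete, here is a minimal sketch (in Python, with a made-up data file standing in for a CDB-style read-mostly database): a `read()` call crosses into the kernel on every access, while after a single `mmap()` call, lookups are plain memory accesses served from the page cache.

```python
import mmap
import os
import tempfile

# Hypothetical data file standing in for a CDB-style read-mostly database.
fd, path = tempfile.mkstemp()
os.write(fd, b"key1->value1\n" * 1000)
os.close(fd)

# read()-based access: every read() call is a system call.
with open(path, "rb") as f:
    via_read = f.read(12)

# mmap-based access: one mmap() call up front; afterwards, lookups are
# ordinary memory loads with no per-access syscall overhead.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    via_mmap = bytes(mm[:12])  # slicing touches memory, not the kernel
    mm.close()

os.remove(path)
assert via_read == via_mmap == b"key1->value1"
```

For a hot lookup path, the difference is the per-access kernel round trip, which is exactly the cost mmap lets you avoid.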


This is a niche scenario; the one outlined is reading CDB databases.


Yes. When Varnish Cache launched, in 2006, I worked in a rather small OSS consultancy, which did the Linux port of Varnish Cache and provided maintenance and funding for the project.


> Yes. When Varnish Cache launched, in 2006, I worked in a rather small OSS consultancy, which did the Linux port of Varnish Cache and provided maintenance and funding for the project.

But eventually phk left, and you came into conflict with him over the name, which was resolved by him choosing a different name for his version of Varnish?


Not really.

We've been funding phk's work on Varnish and Vinyl Cache for 20 years. Do you think phk can write, maintain, and release something on his own? Vinyl Cache cannot be a one-man show, be real.


(I do, in fact, think phk can write, maintain, and release something on his own.)


He knows a lot of things and is amongst the best software developers I've worked with, but a project like this needs a lot more breadth than any single developer can bring.


I see. Thank you for explaining!


It's about scalability. If you have 100 instances you really want them to share the cache, so you increase the hit rate and keep egress costs low.


> If you have 100 instances you really want them to share the cache

I think that assumes decoupled compute and storage. If instead I couple compute and storage, I can shard the input, and then I won't share the cache across the instances. I don't think there is one approach that wins every time.
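A minimal sketch of what "shard the input" means here (the fleet size and keys are made up for illustration): route each key to one instance by a stable hash, so each instance's local cache only ever sees its own slice of the keyspace and nothing is cached twice across the fleet.

```python
import hashlib

# Hypothetical fleet size; matching the 100-instance scenario above.
N_INSTANCES = 100

def shard_for(key: str) -> int:
    """Stable hash -> instance index. The same key always lands on the
    same instance, so its cached copy is never duplicated elsewhere."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % N_INSTANCES

# Routing is deterministic and stays within the fleet.
assert shard_for("/static/logo.png") == shard_for("/static/logo.png")
assert 0 <= shard_for("/static/logo.png") < N_INSTANCES
```

With coupled compute and storage, this kind of routing gives each instance an effectively private cache over its shard, which is the alternative to one shared cache.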

As for egress fees, that is an orthogonal concern.


End-to-end (glass-to-glass) latency is substantially better. Mostly because the protocol isn't request/response any more.


Unikernels are quite an intriguing concept. They'll be re-discovered every five years, like programmers keep re-discovering functional programming.


They're flying cars and VR.

People eventually come to realize they're not so great when having to apply real-world, cross-cutting concerns like access control, audit, logging, debugging, profiling, monitoring, throttling, backup, and recovery.

The emperor's new clothes might have a hole or two in them.


It seems that way because people are stuck thinking in terms of an operating system. Need access control? Put a file on the server. Need auditing? Log into the server. Need logging? A text file... on the server. None of these need be done this way, and in fact ways that make sense when you have a full operating system don't make sense with a unikernel. Hint: All of these things should be database-driven.


Access Control: There is none internally. We don't have the notion of users.

Logging: Keep using whatever you want, be it Elasticsearch, syslog, CloudWatch, etc. No opinions here.

Debugging: GDB works fine, and in many cases, since you can simply export the VM in its entirety and then attach to it locally, this becomes even easier to debug than the same application running on Linux.

Profiling: We support things like ftrace, and of course you can export things like Prometheus metrics.

Monitoring: Kinda in the same boat as logging; keep using whatever you are using today (Datadog, VictoriaMetrics, etc.).

Throttling: This is traditionally an app-level concern that someone would implement at, say, a load-balancing layer; keep using whatever you are using.

Backup/Recovery: Running unikernels makes backup trivial, since you can clone running VMs. In fact, most cloud deploys involve making a snapshot that already serves as a backup, which makes things like rollback much easier to do.


Unikernels lack infrastructure to do any of these. That's why they're self-defeating canards.


I'm not sure what your comment means. What infrastructure? I just broke each of those down into examples of how people use them today.


Command line. Packages. Mounts. File systems. Any standard anything. There's nothing unless you reinvent the wheel. Standardization and repeatability allow reuse of the work of many others. Unikernels throw 99.99% of it away.


Packages exist: https://repo.ops.city

Mounts also exist; in fact, you can hot-plug volumes on the fly on all the major clouds. People really like this because they can do things like rotate SSL certs every few hours.

The file system exists. At the very least you need to load a program that sits on the filesystem, but people are also running webservers that write to temp files and load static assets, and databases too.


They made the amd64 architecture. Let’s not forget that.


The black market isn't going to care about it. It isn't really exploitable.


It isn't. Filippo is an ex-Googler who used to work on Go crypto at Google, so assumptions are easy to make.

The project seems to be sponsored by Let's Encrypt, fwiw.


Back in 2008-2009, I remember the Varnish project struggled with what looked very much like a memory leak. Because of the somewhat complex way memory was used, replacing the glibc malloc with jemalloc was an immediate improvement and removed the leak-like behavior.

