This reminds me very much of one of my favourite series on Netflix, Midnight Diner (not Midnight Diner: Tokyo Stories, which is a Netflix remake with much of the same cast but, in my opinion, not as enjoyable as the original). Most of the action centres on a group of regulars talking at a small izakaya in Shinjuku, Tokyo, run by someone known only as "Master" and open only from midnight to 7am. You see a bit of their lives outside, but it always comes back to the izakaya, where they debate various topics. Given the setting, each episode feels a bit like a theatre play.
Reminds me of a bit from a novel I read (I won't name the title, to avoid spoilers) where one of the minor twists is that the gigastructure of galaxies we observe in the universe - the thing that's conducive to "star formation" and "life" - is an art project by intelligent species who have been alive since around the time of the Big Bang.
If you want to pull back the curtain, I highly recommend hand-writing your own small ELF binary. In particular, on Linux, the C ELF structures are available via:
#include <elf.h>
Writing C code to generate an ELF makes it apparent that an ELF is just a couple of structs and some assembled code dumped to a file. (I've used Keystone with decent success for assembly.) It's actually pretty easy to build something that works if you follow along with the man page:
man 5 elf
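To make that concrete, here is a minimal sketch of the structs-plus-bytes idea (assumes x86-64 Linux; the "program" is a hard-coded exit(42) rather than anything assembled with Keystone, and error handling is omitted):

#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* machine code: mov edi, 42 ; mov eax, 60 (SYS_exit) ; syscall */
    unsigned char code[] = {
        0xbf, 0x2a, 0x00, 0x00, 0x00,
        0xb8, 0x3c, 0x00, 0x00, 0x00,
        0x0f, 0x05
    };

    const Elf64_Addr base     = 0x400000;   /* load address */
    const Elf64_Off  code_off = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr);

    Elf64_Ehdr eh;
    memset(&eh, 0, sizeof eh);
    memcpy(eh.e_ident, ELFMAG, SELFMAG);
    eh.e_ident[EI_CLASS]   = ELFCLASS64;
    eh.e_ident[EI_DATA]    = ELFDATA2LSB;
    eh.e_ident[EI_VERSION] = EV_CURRENT;
    eh.e_type      = ET_EXEC;               /* static executable, no interpreter */
    eh.e_machine   = EM_X86_64;
    eh.e_version   = EV_CURRENT;
    eh.e_entry     = base + code_off;       /* jump straight into our bytes */
    eh.e_phoff     = sizeof(Elf64_Ehdr);
    eh.e_ehsize    = sizeof(Elf64_Ehdr);
    eh.e_phentsize = sizeof(Elf64_Phdr);
    eh.e_phnum     = 1;

    Elf64_Phdr ph;
    memset(&ph, 0, sizeof ph);
    ph.p_type   = PT_LOAD;                  /* one segment mapping the whole file */
    ph.p_flags  = PF_R | PF_X;
    ph.p_offset = 0;
    ph.p_vaddr  = base;
    ph.p_paddr  = base;
    ph.p_filesz = code_off + sizeof code;
    ph.p_memsz  = code_off + sizeof code;
    ph.p_align  = 0x1000;

    FILE *f = fopen("homemade_elf", "wb");
    fwrite(&eh,  sizeof eh, 1, f);
    fwrite(&ph,  sizeof ph, 1, f);
    fwrite(code, sizeof code, 1, f);
    fclose(f);
    return 0;
}

Compile and run that, chmod +x homemade_elf, and ./homemade_elf; echo $? should print 42; the whole executable is around 130 bytes.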
For debugging handmade ELF files, it's handy to explicitly run the system loader under strace:
strace /lib/ld-linux.so.3 ./homemade_elf
You can find the path to the interpreter that will be used via something like:
readelf -a "$(which ls)" | grep -i interpreter
For example, debugging with strace will make it apparent if any memory mappings are failing. The loader also sometimes has its own error messages that are more descriptive than a normal segfault.
Ha! This was probably the first serious problem I ever tackled with an open source contribution!
The year was 2002, the 2.4 Linux kernel had just been released and I was making money on the side building monitoring software for a few thousand (mostly Solaris) hosts owned by a large German car manufacturer. Everything was built in parallel ksh code, “deployed” to Solaris 8 on Sun E10Ks, and mostly kicked off by cron. Keeping total script runtime down to avoid process buildup and delay was critical. The biggest offender: long timeouts for host/port combinations that would sporadically not be available.
Eventually, I grabbed W. Richard Stevens’ UNIX network programming book and created tcping [0]. FreeBSD, NetBSD, and a series of Linux distros picked it up at the time, and it was a steady decline from there… good times!
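For the curious, the core trick from that book is just a non-blocking connect() raced against a timeout; a rough sketch (IPv4 only, made-up names, not the actual tcping source):

#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* returns 0 if ip:port accepted a connection within timeout_sec, -1 otherwise */
static int tcp_probe(const char *ip, int port, int timeout_sec) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    fcntl(fd, F_SETFL, O_NONBLOCK);          /* so connect() returns immediately */

    int rc = connect(fd, (struct sockaddr *)&sa, sizeof sa);
    if (rc < 0 && errno == EINPROGRESS) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { timeout_sec, 0 };

        /* wait for the connection to resolve, but only this long */
        if (select(fd + 1, NULL, &wfds, NULL, &tv) == 1) {
            int err = 0;
            socklen_t len = sizeof err;
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            rc = (err == 0) ? 0 : -1;
        } else {
            rc = -1;                         /* timed out (or select failed) */
        }
    }
    close(fd);
    return rc;
}

int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: %s ip port\n", argv[0]); return 2; }
    int ok = (tcp_probe(argv[1], atoi(argv[2]), 3) == 0);
    printf("%s:%s is %s\n", argv[1], argv[2], ok ? "open" : "closed or filtered");
    return ok ? 0 : 1;
}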
It was about 30 images, though I'm planning on adding more and training again sometime. Either that or splitting it up between when her hair is short and when it's long, as it really changes how she looks.
I'd recommend increasing the network dimension to at least 64, if your VRAM can take it. I can do 64 with my 12GB card. At least for people, I've had better luck using a token that's a celebrity. I'm not sure how to try that with my dog - perhaps just "terrier dog" or something.
I like combining this with a bash implementation of an event API (https://github.com/bashup/events). This makes it easy/idiomatic, for example, to conditionally add cleanup as you go.
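For example (a rough sketch; `event on` and `event emit` are that library's registration and dispatch calls, everything else here is made up):

source events.bash                        # from https://github.com/bashup/events

trap 'event emit "cleanup"' EXIT          # run whatever cleanup got registered

setup_workdir() {
    workdir=$(mktemp -d)
    event on "cleanup" rm -rf "$workdir"  # registered only if we actually created it
}

setup_workdir
# ... later steps can keep adding their own `event on "cleanup" ...` handlers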
jq is incredibly powerful and I'm using it more and more. Even better, there is a whole ecosystem of tools that are similar or work in conjunction with jq:
* jq (a great JSON-wrangling tool)
* jc (convert various tools’ output into JSON)
* jo (create JSON objects)
* yq (like jq, but for YAML)
* fq (like jq, but for binary)
* htmlq (like jq, but for HTML)
List shamelessly stolen from Julia Evans[1]. For live links see her page.
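A couple of made-up examples to show how they compose (assuming jc's dig parser exposes the answer records under .answer):

jo user=alice uid=1000 | jq .user                          # build JSON, then query it
dig example.com | jc --dig | jq -r '.[0].answer[].data'    # classic tool output -> JSON -> query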
Just a few days ago I needed to quickly extract all JWT token expiration dates from a network capture. This is what I came up with:
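What follows is the general shape of it rather than the exact command (it assumes the capture has already been dumped to text, e.g. via tshark or strings, and that the tokens are ordinary three-part JWTs):

# pull out anything that looks like a JWT, decode the payload, let jq handle the rest
grep -oE 'eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+' capture.txt |
cut -d. -f2 |                  # keep the payload segment
tr '_-' '/+' |                 # base64url -> base64
while read -r p; do            # restore padding and decode
    case $(( ${#p} % 4 )) in 2) p="$p==";; 3) p="$p=";; esac
    printf '%s\n' "$p" | base64 -d 2>/dev/null
    echo
done |
jq -r 'select(.exp != null) | .exp | todate'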
I use stackprinter[1] for the same. It's advisable to suppress printing of library functions (`suppressed_paths=["lib/python"]`), otherwise the deep stack traces that some libraries (hello pandas) like to produce will cause the actually problematic code to be elided. Long reprs are also truncated, so your 2GB JSON buffer isn't going to blow up your trace.
For comparison, the kind of bare traceback you'd otherwise be staring at looks like this:
Traceback (most recent call last):
  File "demo.py", line 12, in <module>
    dangerous_function(somelist + anotherlist)
  File "demo.py", line 6, in dangerous_function
    return sorted(blub, key=lambda xs: sum(xs))
  File "demo.py", line 6, in <lambda>
    return sorted(blub, key=lambda xs: sum(xs))
TypeError: unsupported operand type(s) for +: 'int' and 'str'
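To have unhandled exceptions rendered by stackprinter instead, with library frames suppressed as described above, the hook-up is roughly this (a minimal sketch; it assumes set_excepthook forwards formatting options such as suppressed_paths):

import stackprinter

# install stackprinter as the global excepthook; suppressed_paths takes
# regex fragments matched against source file paths
stackprinter.set_excepthook(suppressed_paths=[r"lib/python"])

With that in place, the same failure comes back with the surrounding source and the local variables of each frame, which is usually enough to spot the stray string in the list without firing up a debugger.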